1675699526
Metacat is a unified metadata exploration API service. You can explore Hive, RDS, Teradata, Redshift, S3 and Cassandra. Metacat provides information about what data you have, where it resides and how to process it. Metadata, in the end, is really data about the data, so the primary purpose of Metacat is to give you a place to describe your data so that more useful things can be done with it.
Metacat focuses on solving these three problems:
TODO
Metacat builds are run on Travis CI.
git clone git@github.com:Netflix/metacat.git
cd metacat
./gradlew clean build
Once the build is completed, the Metacat WAR file is generated under the metacat-war/build/libs directory. Metacat needs two basic configurations:

metacat.plugin.config.location: Path to the directory containing the catalog configuration. Please look at the catalog samples used for functional testing.
metacat.usermetadata.config.location: Path to the configuration file containing the connection properties to store user metadata. Please look at this sample.

Take the built WAR from metacat-war/build/libs and deploy it to an existing Tomcat as ROOT.war.
The REST API can be accessed @ http://localhost:8080/mds/v1/catalog
Swagger API documentation can be accessed @ http://localhost:8080/swagger-ui/index.html
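As a minimal sketch of consuming that endpoint from a script (assuming the local deployment above on port 8080; the exact response schema is not shown here, so the result is simply logged):

// List catalogs from a local Metacat deployment (port 8080, as deployed above).
const listCatalogs = async (): Promise<unknown> => {
  const response = await fetch('http://localhost:8080/mds/v1/catalog', {
    headers: { Accept: 'application/json' },
  });
  if (!response.ok) {
    throw new Error(`Metacat returned HTTP ${response.status}`);
  }
  return response.json();
};

listCatalogs().then((catalogs) => console.log(catalogs));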
Pre-requisite: Docker compose is installed
To start a self-contained Metacat environment with some sample catalogs, run the command below. This starts a docker-compose cluster containing a Metacat container, a Hive metastore container, a Cassandra container and a PostgreSQL container.

./gradlew startMetacatCluster

Running ./gradlew metacatPorts prints out which exposed ports are mapped to the internal container ports. Look for the port mapped to container port 8080 (MAPPED_PORT).

The REST API can be accessed at http://localhost:<MAPPED_PORT>/mds/v1/catalog
Swagger API documentation can be accessed at http://localhost:<MAPPED_PORT>/swagger-ui/index.html
To stop the docker compose cluster:
./gradlew stopMetacatCluster
Author: Netflix
Source Code: https://github.com/Netflix/metacat
License: Apache-2.0 license
1675576320
Wiqaytna is the official Moroccan exposure notification app.
Sample Configuration
ORG="MAR"
STORE_URL="<Play store URL>"
PRIVACY_URL="<Privacy policy URL>"
SERVICE_FOREGROUND_NOTIFICATION_ID=771579
SERVICE_FOREGROUND_CHANNEL_ID="Wiqaytna Updates"
SERVICE_FOREGROUND_CHANNEL_NAME="Wiqaytna Foreground Service"
PUSH_NOTIFICATION_ID=771578
PUSH_NOTIFICATION_CHANNEL_NAME="Wiqaytna Notifications"
ERROR_NOTIFICATION_ID=771580
#service configurations
SCAN_DURATION=8000
MIN_SCAN_INTERVAL=36000
MAX_SCAN_INTERVAL=43000
ADVERTISING_DURATION=180000
ADVERTISING_INTERVAL=5000
PURGE_INTERVAL=86400000
PURGE_TTL=1814400000
MAX_QUEUE_TIME=7000
BM_CHECK_INTERVAL=540000
HEALTH_CHECK_INTERVAL=900000
CONNECTION_TIMEOUT=6000
BLACKLIST_DURATION=100000
FIREBASE_REGION = "<Your Firebase region>"
STAGING_FIREBASE_UPLOAD_BUCKET = "wiqayetna-app-staging"
STAGING_SERVICE_UUID = "17E033D3-490E-4BC9-9FE8-2F567643F4D3"
V2_CHARACTERISTIC_ID = "117BDD58-57CE-4E7A-8E87-7CCCDDA2A804"
PRODUCTION_FIREBASE_UPLOAD_BUCKET = "wiqaytna-app"
PRODUCTION_SERVICE_UUID = "B82AB3FC-1595-4F6A-80F0-FE094CC218F9"
android.useAndroidX=true
android.enableJetifier=true
ORG: For international federation usage
To obtain the official BlueTrace Service ID and Characteristic ID, please email info@bluetrace.io
Change the package name and other configurations accordingly, such as the resValue entries in the different settings in buildTypes. For example:
buildTypes {
    debug {
        buildConfigField "String", "FIREBASE_UPLOAD_BUCKET", STAGING_FIREBASE_UPLOAD_BUCKET
        buildConfigField "String", "BLE_SSID", STAGING_SERVICE_UUID
        String ssid = STAGING_SERVICE_UUID
        versionNameSuffix "-debug-${getGitHash()}-${ssid.substring(ssid.length() - 5, ssid.length() - 1)}"
        resValue "string", "app_name", "Wiqaytna"
        applicationIdSuffix "stg"
    }
}
Values such as STAGING_FIREBASE_UPLOAD_BUCKET, STAGING_SERVICE_UUID have been defined in gradle.properties as described above.
Set up Firebase for the different environments. Download the google-services.json file for each environment and put it in the corresponding folder.
Debug: ./app/src/debug/google-services.json
Production: ./app/src/release/google-services.json
The app currently relies on Firebase Functions to work. More information can be obtained by referring to opentrace-cloud-functions.
Remote config is used for retrieving the "Share" message used in the app. The key for it is "ShareText". If it is unable to be retrieved, it falls back to R.string.share_message
The protocol version used should be 2 (or above); version 1 of the protocol has been deprecated.
SSL pinning is not included as part of the repo. It is recommended to add a check for the SSL certificate returned by the backend.
The following is a statement from Google: "At Google Play we take our responsibility to provide accurate and relevant information for our users very seriously. For that reason, we are currently only approving apps that reference COVID-19 or related terms in their store listing if the app is published, commissioned, or authorized by an official government entity or public health organization, and the app does not contain any monetization mechanisms such as ads, in-app products, or in-app donations. This includes references in places such as the app title, description, release notes, or screenshots. For more information visit https://android-developers.googleblog.com/2020/04/google-play-updates-and-information.html"
Wiqaytna uses the following third party libraries / tools.
Author: Wiqaytna-app
Source Code: https://github.com/Wiqaytna-app/wiqaytna_android
License: GPL-3.0 license
1672831020
The demand for accurate, real-time data has never been greater for today's data engineering teams, yet data downtime has always been a reality. So, how do we break the cycle and obtain reliable data?
Data teams in the early 2020s, like their software engineering counterparts 20 years ago, experienced a severe conundrum: reliability. Businesses are ingesting more operational and third-party data than ever before. Employees from across the organization, including those on non-data teams, interact with data at all stages of its lifecycle. Simultaneously, data sources, pipelines, and workflows are becoming more complex.
While software engineers have addressed application downtime with specialized fields (such as DevOps and Site Reliability Engineering), frameworks (such as Service Level Agreements, Indicators, and Objectives), and a plethora of acronyms (SRE, SLAs, SLIs, and SLOs, respectively), data teams haven't yet given data downtime the importance it deserves. Now it is up to data teams to do the same: prioritize, standardize, and evaluate data reliability. I believe that data quality or reliability engineering will become its own specialization over the next decade, in charge of this crucial business component. In the meantime, let's look at what data reliability SLAs are, why they're essential, and how to develop them.
"Slack's SLA guarantees 99.99 percent service uptime. If breached, they apply a service credit."
Service level agreements (SLAs) are best described as a method that many businesses use to define and measure the level of service that a given vendor, product, or internal team will deliver, along with potential remedies if they do not.
As an example, for customers on Plus plans and above, Slack's customer-facing SLA guarantees 99.99 percent uptime every fiscal quarter with no more than 10 hours of scheduled downtime. If they come up short, impacted customers will be given service credits for future use on their accounts.
Customers use service level agreements (SLAs) to guarantee that they receive what they paid for from a vendor: a robust, dependable product. Many software teams develop SLAs for internal projects or users instead of end-users.
As an example, consider internal software engineering SLAs. Why bother formalizing SLAs if you don't have a customer urging you to commit to certain thresholds in an agreement? Why not simply rely on everyone to do their best and aim for as close to 100 percent uptime as possible? Would that not be adding extraneous burdensome regulations?
No, not at all. The exercise of defining, complying with, and evaluating critical characteristics of what constitutes reliable software can be immensely beneficial while also setting clear expectations for internal stakeholders. SLAs help engineering, product, and business teams think about the bigger picture of their applications and prioritize incoming requests. SLAs provide confidence that different software engineering teams and their stakeholders mean the same thing, care about the same metrics, and share a pledge to thoroughly documented requirements.
Setting uptime targets below 100 percent leaves room for error, because demanding zero downtime is simply not feasible: even with the best practices and techniques in place, systems will fail from time to time. With good SLAs, however, engineers know precisely when and how to intervene if anything ever goes wrong.
Likewise, data teams and their data consumers must categorize, measure, and track the reliability of their data throughout its lifecycle. If these metrics are not strictly established, consumers may make inaccurate assumptions or rely on anecdotal evidence about the trustworthiness of your data platform. Defining data reliability SLAs helps build trust and strengthen the relationship between your data, your data team, and downstream consumers, whether those are customers or cross-functional teams within your organization. In other words, data SLAs help your organization become more "data-driven" in its approach to data.
SLAs organize and streamline communication, ensuring that your team and stakeholders share a common language and refer to the same metrics. And, because defining SLAs helps your data team quickly identify the business's priority areas, they'll be able to prioritize more rapidly and respond more rapidly when cases arise.
A DQ SLA, like a more traditional SLA, governs the roles and responsibilities of a hardware or software vendor in accordance with regulations and levels of acceptability, as well as realistic expectations for response and restoration when data errors and flaws are identified. DQ SLAs can be defined for any circumstance where a data provider transfers data to a data consumer.
More specifically, a data recipient would specify expectations regarding measurable aspects related to one or more dimensions of data quality (such as completeness, accuracy, consistency, timeliness, and so on) within any business process. The DQ SLA would then include an expected data quality level and even a list of processes to be followed if those expectations are not fulfilled, such as:
The DQ SLA is distinctive because it recognizes that data quality issues and their resolution are almost always linked to business operations, so benefiting from the processes a DQ SLA defines requires support from the systems that facilitate those operations.
These concepts are critical in establishing the DQ SLA's goal: data quality control, which is based on the definition of rules based on agreed-upon data quality dimensions.
Suppose it is determined that the information does not meet the defined expectations. In that case, the remediation process can include a variety of tasks, such as writing the non-conforming records to an outlier file, emailing a system administrator or data steward to resolve the issue, running an immediate corrective data quality action, or any combination of these.
Creating and adhering to data reliability SLAs is a cohesive and precise exercise.
First, let's go over some terminology. Following Google's framing, service level agreements (SLAs) require clear service level indicators (SLIs), quantitative measures of service quality, and agreed-upon service level objectives (SLOs), the target values or ranges of values that each indicator should meet. Many engineering teams, for example, use availability as an indicator of site reliability and set an objective of maintaining availability of at least 99 percent.
Creating reliability SLAs for data teams typically involves three key steps: defining, measuring, and tracking.
The first step is to agree on and clearly articulate what reliable data means to your company.
Setting a baseline is a good place to start. Begin by taking stock of your data, how it's being used, and by whom. Examine your data's historical performance to establish a baseline metric for reliability.
You should also solicit feedback from your data consumers on what "reliability" means to them. Even with a thorough knowledge of data lineage, data engineers are frequently isolated from their colleagues' day-to-day workflows and use cases. When developing reliability agreements with internal teams, it is crucial to know how consumers interact with data, what is most important, or which potential complications require the most stringent, critical intervention.
Furthermore, you'll want to ensure that all relevant stakeholders — all data leaders or business consumers with a stake in reliability — have assessed it and agreed on the descriptions of reliability you're constructing.
You'll be able to set clear, actionable SLAs once you understand how your data is used and what reliability means to its consumers.
Once you've established a comprehensive understanding and baseline, you can begin to home in on the key metrics that will serve as your service-level reliability indicators.
As a general rule, data SLIs should portray the mutually agreed-upon state of data you defined in step 1, as well as limitations on how data can and cannot be used and a detailed description of data downtime. This may include incomplete, duplicated, or out-of-date data.
Your particular use case will determine your SLIs, so here are a few metrics commonly used to assess data health:
Once you've identified key indicators (SLIs) for data reliability, you can set objectives, i.e., reasonable ranges of data downtime. These SLOs should be realistic for your current situation. For instance, if you include time to detection (TTD) as a metric but are not yet using automated monitoring tools, your SLO should be more lenient than that of a mature organization with extensive data reliability tooling. Aligning on these objectives makes it easy to create a consistent framework that rates incidents by severity, making it easier to communicate and respond quickly when issues arise.
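To make the SLI/SLO distinction concrete, here is a minimal TypeScript sketch with invented incident data and thresholds: the indicators are time to detection (TTD) and time to resolution (TTR), and the objective values are examples only.

// Hypothetical incident record; field names and thresholds are illustrative.
interface Incident {
  startedAt: Date;   // when the data issue actually began
  detectedAt: Date;  // when monitoring (or a consumer) flagged it
  resolvedAt: Date;  // when the data was healthy again
}

const hours = (from: Date, to: Date): number =>
  (to.getTime() - from.getTime()) / (1000 * 60 * 60);

// SLIs: measured values for each incident.
const timeToDetection = (i: Incident): number => hours(i.startedAt, i.detectedAt);
const timeToResolution = (i: Incident): number => hours(i.detectedAt, i.resolvedAt);

// SLOs: agreed target ranges for those indicators (example values only).
const slo = { maxTtdHours: 4, maxTtrHours: 24 };

// The SLA is met for a period if every incident stayed within the objectives.
const meetsSlo = (incidents: Incident[]): boolean =>
  incidents.every(
    (i) => timeToDetection(i) <= slo.maxTtdHours && timeToResolution(i) <= slo.maxTtrHours,
  );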
Once you've established these priorities and integrated them into your SLAs, you can create a dashboard to track and evaluate progress. Some data teams build ad hoc dashboards, whereas others depend on dedicated data observability options.
Delivering services to millions of customers via data centers raises resource-management challenges for service level agreements, including risk management, consumer-driven service management, autonomous resource management, service measurement, system design, and resource allocation under virtualization.
To satisfy customer requirements, several user-centric objectives are used: receiving feedback from customers, providing reliable communication between customers, increasing access efficiency to understand each customer's specific needs, and trusting the customer. If customer expectations are taken into account when developing a service, those expectations carry over to the service provider.
The Risk Management process includes:
Grid service customers' service quality conditions necessitate the formation of service level agreements between service providers and customers. Because resources are disrupted and unavailable, service providers must decide whether to continue or reject service level agreement requests.
The data processing center should keep the reservation process going smoothly by managing the existing service requisition, improving the future service requisition, and changing the price for incoming requests. The resource management paradigm maps resource interactions to a platform-independent service level agreements pool. The resource management architecture with the cooperation of computing systems via numerous virtual machines enhances the effectiveness of computational models and the utilization of resources designed for on-demand resource utilization.
Virtual machines with various resource management policies facilitate resource allocation in SLA by meeting the needs of multiple users. An optimal joint multiple resource allocation method is used in the Allocation of Resource Model of Distributed Environment. A resource allocation methodology is introduced to execute user applications for the multi-dimensional resource allocation problem.
Various service providers offer various computing services. Cloud offerings described in public documentation must be assessed for service performance in order to design around application and service needs. As part of a service level agreement, service measurement includes the current system's configuration and runtime information metrics.
Sources and consumers with varying service standards are assessed to demonstrate the efficiency of resource management plans. Because resources move around and service requests can arrive from multiple consumers at any stage, it is tedious to evaluate and monitor resource plans in a repeatable and manageable fashion.
Data SLAs help the organization stay on track. They are a public pledge to others and a bilateral agreement: you agree to continue providing data within specified criteria in exchange for people's participation and awareness. A lot can go wrong in data engineering, and much of it comes down to misunderstanding. Documenting your SLA goes a long way toward setting the record straight and achieving your primary objective of instilling greater data trust within your organization. The good news is that when defining metrics, service, and deliverable targets for big data analytics, you don't have to start from scratch, since the technique can be borrowed from the transactional side of your IT work. For many businesses, it's simply a case of examining the service level processes already in place for their transactional applications, then applying those processes to big data and making the required changes to address distinct features of the big data environment, such as parallel processing and the handling of several types and forms of data.
Original article source at: https://www.xenonstack.com/
1672303224
A concurrency framework (based on Amp) that lets you implement asynchronous messaging, transparent workflows, and control of long-lived business transactions by means of the Saga pattern. It implements a message-based architecture and includes the following patterns: Saga, Publish/Subscribe, Message Bus.
Jump into our Quick Start and build your first distributed solution in just 15 minutes.
Documentation can be found in the .documentation directory.
Contributions are welcome! Please read CONTRIBUTING for details.
You can find help and discussion in the following places:
Author: php-service-bus
Source Code: https://github.com/php-service-bus/service-bus
License: MIT license
1670261400
In this article, I am going to show you how to use Spring Boot as a RESTful web service with MarkLogic as the backend database, and how to integrate MarkLogic with a Spring Boot application.
Assuming that you have a Spring Boot application which uses MarkLogic as a backend service, this guide will show you how to integrate your application with MarkLogic. The first thing you need to do is install the MarkLogic Java Client API. You can find instructions on how to do this in the readme file of the GitHub repository. Once you have installed the client API, you need to add the following dependency to your project's pom.xml file:
<dependency>
<groupId>com.marklogic</groupId>
<artifactId>marklogic-client-api</artifactId>
<version>RELEASE</version>
</dependency>
Next, you need to configure the connection details for your MarkLogic server in your application.properties file. The following configuration is the minimum required:
ml.host=localhost # Hostname or IP address of your MarkLogic server
ml.port=8010 # Port number of your MarkLogic server’s Application Server
ml.database=Documents # Name of the database to connect to
spring.data.marklogic.username=admin # Username for connecting to MarkLogic
spring.data.marklogic.password=admin # Password for connecting to MarkLogic
With this basic configuration in place, you can now start using the Spring Data for MarkLogic library in your application code!
MarkLogic is a powerful NoSQL database that can be used as a backend service for Springboot applications. In this blog post, we will take a look at the MarkLogic architecture and how it can be used to integrate a Springboot application with MarkLogic.
The MarkLogic architecture is a shared-nothing architecture. This means that each node in a cluster is independent and can scale horizontally. MarkLogic uses sharding to distribute data across nodes and replication to ensure high availability and disaster recovery.
MarkLogic has a flexible indexing system that can index any kind of data. This makes it easy to search and retrieve data from the database. MarkLogic also supports geospatial indexing which allows you to store and query data based on location.
The MarkLogic API is RESTful and there are language bindings for Java, Node.js, and .NET. This makes it easy to develop applications that use MarkLogic as a backend service.
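For instance, here is a minimal sketch using the Node.js binding (the marklogic npm package); the host, port, credentials, and document URI mirror the example configuration above and are assumptions, not values your cluster necessarily uses.

// Connect with the MarkLogic Node.js Client API, write a JSON document, then read it back.
import marklogic from 'marklogic';

const db = marklogic.createDatabaseClient({
  host: 'localhost',
  port: 8010,
  user: 'admin',
  password: 'admin',
  authType: 'DIGEST',
});

db.documents
  .write({ uri: '/example/hello.json', content: { greeting: 'Hello from MarkLogic' } })
  .result()
  .then(() => db.documents.read('/example/hello.json').result())
  .then((docs) => console.log(docs[0].content))
  .catch((error) => console.error(error));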
In this section, we will learn how to use the Spring Boot JSON converter to easily convert your Java objects to and from JSON. We will also learn how to configure the converter to work with MarkLogic.
The Springboot Json Converter is a powerful tool that can be used to easily convert your Java objects to and from JSON. The converter is very easy to use and can be configured to work with MarkLogic.
To use the converter, you first need to add the following dependency to your project:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
Once you have added the dependency, you can use the converter in your code like this:
ObjectMapper mapper = new ObjectMapper();

// Convert a Java object to JSON
String jsonString = mapper.writeValueAsString(myObject);

// Convert a JSON string to a Java object
MyObject myObject = mapper.readValue(jsonString, MyObject.class);
The converter is configured to work with MarkLogic by adding the following property to your application.properties file:
spring.jackson.marklogic.enabled=true
A Springboot Application can be easily integrated with MarkLogic as backend service. All you need to do is add the following dependency in your pom.xml file for Marklogic integration:
<dependency>
<groupId>com.marklogic</groupId>
<artifactId>marklogic-client-api</artifactId>
<version>RELEASE</version>
</dependency>
spring.data.marklogic.username=your_username # required
spring.data.marklogic.password=your_password # required
spring.data.marklogic.connection_string=localhost:8010 # optional, defaults to localhost:8040
Now you can use all the features of MarkLogic from your Springboot Application!
In this article, we have seen how to integrate a Springboot application with MarkLogic as backend service. We have also seen how to perform various operations like CRUD, search and aggregation using the MarkLogic Java API. I hope this article has been helpful in understanding how to work with MarkLogic from a Springboot application.
Original article source at: https://blog.knoldus.com/
1669894103
What is Service-Oriented Architecture?
What is a microservice?
Comparison
Original article source at: https://www.c-sharpcorner.com/
1669884065
In comparison to others, the Nextbox wifi extender setup is quite simple. The steps below will help you install it, access its admin login page, and configure its settings. So, without further ado, let's begin the Nextbox extender setup process.
The user interface methods for connecting your Nextbox to your router wifi are as follows. All of these steps are stated below.
The following steps will guide you through the process of accessing the Nextbox extender admin page.
You can easily manage and control all of your range extender's wireless, basic, and advanced settings via the nextbox wifi extender setup page. The steps below will assist you in configuring your Nextbox wifi extender settings.
Finally, your Nextbox extender will be set up. If you face any issues, you can contact our expert team and they will guide you. You can also visit our website www.wirelessextendersetup.org
1669631700
In this ServiceNow interview questions blog, I have collected the questions most frequently asked by interviewers. If you wish to brush up on your ServiceNow basics, I would recommend you take a look at this video first. It will introduce you to ServiceNow basics and stand you in good stead as you get started with this 'ServiceNow Interview Questions' blog.
In case you have attended a ServiceNow interview in the recent past, do paste those ServiceNow interview questions in the comments section and we’ll answer them ASAP. So let us not waste any time and quickly start with this compilation of ServiceNow Interview Questions.
I have divided these questions into two sections:
So let us start then,
ServiceNow is a cloud based IT Service Management (ITSM) tool. It provides a single system of record for:
All aspects of IT services live in the ServiceNow ecosystem. It gives us a complete view of services and resources, which allows for broad control of how to best allocate resources and design the process flow of those services. Refer to this link to learn more: What Is ServiceNow?
Applications in ServiceNow represent packaged solutions for delivering services and managing business processes. In simple words, an application is a group of modules providing information related to a business process. For example, the Incident application provides information related to the incident management process.
CMDB stands for Configuration Management Database. CMDB is a repository. It acts as a data warehouse for information technology installations. It holds data related to a collection of IT assets, and descriptive relationships between such assets.
LDAP stands for Lightweight Directory Access Protocol. You can use it for user data population and user authentication. ServiceNow integrates with an LDAP directory to streamline the user login process and to automate user creation and role assignment.
Data lookup and record matching feature helps to set a field value based on some condition instead of writing scripts.
For example:
On Incident forms, for example, priority lookup rules automatically set the incident Priority based on the incident Impact and Urgency values. Data lookup rules allow administrators to specify the conditions and fields under which the data lookup should occur.
CMDB baselines help you understand and control the changes made to a configuration item (CI). These baselines act as a snapshot of a CI.
Following steps will help you do the same:
A view defines the arrangement of fields on a form or a list. For a single form we can define multiple views according to user preferences or requirements.
An ACL is an access control list that defines what data users can access and how they can access it in ServiceNow.
Impersonating a user means giving the administrator access to exactly what that user would have access to, including the same menus and modules. ServiceNow records the administrator's activities while impersonating another user. This feature helps in testing: you can impersonate a user and test as them instead of logging out of your session and logging in again with that user's credentials.
Dictionary overrides provide the ability to define a field on an extended table differently from the field on the parent table. For example, for a field on the Task [task] table, a dictionary override can change the default value on the Incident [incident] table without affecting the default value on Task [task] or Change [change].
Coalesce is a property of a field that we use in transform map field mapping. Coalescing on a field (or set of fields) lets you use the field as a unique key. If a match is found using the coalesce field, the existing record will be updated with the information being imported. If a match is not found, then a new record will be inserted into the database.
UI policies dynamically change information on a form and control custom process flows for tasks. UI policies are an alternative to client scripts. You can use UI policies to make fields mandatory, read-only, or visible on a form, and to dynamically change a field on a form.
With data policies, you can enforce data consistency by setting mandatory and read-only states for fields. Data policies are similar to UI policies, but UI policies only apply to data entered on a form through the standard browser. Data policies can apply rules to all data entered into the system, including data brought in through email, import sets or web services and data entered through the mobile UI.
Client scripts sit on the client side (the browser) and run there only. The following are the types of client script:
In order to cancel a form submission the onSubmit function should return false. Refer the below mentioned syntax:
function onSubmit() { return false; }
A business rule is a server-side script. It executes each time a record is inserted, updated, deleted, displayed or queried. The key thing to note while creating a business rule is when and on what action it has to execute: a business rule can be configured to run on insert, update, delete, display, or query.
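For illustration, here is a minimal sketch of a server-side business rule script, imagined as a before-update rule on the Incident table; the condition and priority value are invented for the example:

(function executeRule(current, previous /* null when async */) {
    // Example: escalate priority when a previously resolved incident is reopened
    if (previous.state == 6 && current.state != 6) {
        current.priority = 2;
        gs.addInfoMessage('Incident reopened; priority escalated.');
    }
})(current, previous);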
Yes, it is possible to call a business rule through a client script. You can use GlideAjax for this.
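As a sketch of that GlideAjax pattern: a client script calls a client-callable Script Include, which runs the server-side logic. The Script Include name (MyUtilAjax), its method, and the u_caller_email field below are hypothetical.

// Client script: ask the server for the caller's email through GlideAjax
var ga = new GlideAjax('MyUtilAjax');                          // hypothetical Script Include
ga.addParam('sysparm_name', 'getCallerEmail');                 // server-side method to call
ga.addParam('sysparm_user_id', g_form.getValue('caller_id'));
ga.getXMLAnswer(function (answer) {
    g_form.setValue('u_caller_email', answer);                 // hypothetical custom field
});

// Script Include (marked client callable): MyUtilAjax
var MyUtilAjax = Class.create();
MyUtilAjax.prototype = Object.extendsObject(AbstractAjaxProcessor, {
    getCallerEmail: function () {
        var user = new GlideRecord('sys_user');
        if (user.get(this.getParameter('sysparm_user_id'))) {
            return user.getValue('email');
        }
        return '';
    },
    type: 'MyUtilAjax'
});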
The Task table is the parent table of Incident, Problem, and Change. This ensures that any fields or configurations defined on the parent table automatically apply to the child tables.
A catalog item that allows users to create task-based records from the Service Catalog is called a record producer, for example, creating a change record or a problem record. Record producers provide an alternative way to create records through the Service Catalog.
GlideRecord is a Java class that is used for performing database operations instead of writing SQL queries.
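A minimal sketch of what that looks like in practice, querying active priority 1 incidents and logging each match (the filter values are just an example):

var gr = new GlideRecord('incident');
gr.addQuery('priority', 1);   // only priority 1 records
gr.addQuery('active', true);  // only active records
gr.query();
while (gr.next()) {
    gs.info(gr.getValue('number') + ' - ' + gr.getValue('short_description'));
}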
An import set is a tool that imports data from various data sources and then maps that data into ServiceNow tables using a transform map. It acts as a staging table for imported records.
A transform map transforms records imported into a ServiceNow import set table into records on the target table. It also determines the relationships between fields in the import set table and fields in the target table.
When an import makes a change to a table that is not the target table for that import, a foreign record insert occurs. This happens, for example, when updating a reference field on a table.
Zing is the text indexing and search engine that performs all text searches in ServiceNow.
It is used to enhance the system logs. It provides more information on the duration of transactions between the client and the server.
It triggers an event for a task record if the task is inactive for a certain period of time. If the task remains inactive, the monitor repeats at regular intervals.
Domain separation is a way to separate data into logically defined domains. For example, a client ABC has two businesses and uses a single ServiceNow instance. They do not want users from one business to see data belonging to the other business. Here we can configure domain separation to isolate the records of the two businesses.
You can set the property – “glide.ui.forgetme” to true to remove the ‘Remember me’ check box from login page.
The HTML sanitizer is used to automatically clean up HTML markup in HTML fields, remove unwanted code, and protect against security concerns such as cross-site scripting attacks. The HTML sanitizer is active for all instances starting with the Eureka release.
The Cascade Variables check box selects whether the variables used should cascade, which passes their values to the ordered items. If this check box is cleared, variable information entered in the order guide is not passed on to ordered items.
A gauge is visible on a ServiceNow homepage and can contain up-to-the-minute information about current status of records that exists on ServiceNow tables. A gauge can be based on a report. It can be put on a homepage or a content page.
Metrics record and measure the workflow of individual records. With metrics, customers can back their process improvements with tangible figures, for example, how long it takes before a ticket is reassigned.
Following searches will help you find information in ServiceNow:
Lists: Finds records in a list.
Global text search: Finds records in multiple tables from a single search field.
Knowledge base: Finds knowledge articles.
Navigation filter: Filters the items in the application navigator.
Search screens: Use a form-like interface to search for records in a table. Administrators can create these custom modules.
A BSM map is a Business Service Management map. It graphically displays the configuration items (CIs) that support a business service and indicates the status of those configuration items.
Each update set is stored in the Update Set [sys_update_set] table. The customizations that are associated with the update set, are stored in [sys_update_xml] table.
If the Default update set is marked Complete, the system creates another update set named Default1 and uses it as the default update set.
Homepages and content pages don’t get added to ‘update sets’ by default. You need to manually add pages to the current ‘update sets’ by unloading them.
Reference qualifiers restrict the data that can be selected for a reference field.
Performance Analytics is an additional application in ServiceNow that allows customers to take a snapshot of data at regular intervals and create time series for any Key Performance Indicator (KPI) in the organization.
The latest user interface is UI16. It was introduced in the Helsinki release.
It is a unique 32-character GUID that identifies each record created in each table in ServiceNow.
A scorecard measures the performance of an employee or a business process. It is a graphical representation of progress over time. A scorecard belongs to an indicator. The first step is to define the indicators that you want to measure. You can enhance scorecards by adding targets, breakdowns (scores per group), aggregates, and time series.
Yes, you can do it by using the autoSysFields() function in your server-side scripting. Whenever you are updating a record, set autoSysFields() to false.
Consider the following example:
var gr = new GlideRecord('incident');
gr.query();
if (gr.next()) {
    gr.autoSysFields(false);
    gr.short_description = 'Test from Examsmyntra';
    gr.update();
}
A reference qualifier is used to restrict the data that is selectable for a reference field.
Performance Analytics is an additional application in ServiceNow that allows customers to take a snapshot of data at regular intervals and create time series for any key performance indicator (KPI) in the organisation.
Navigate to User Administration > Role and click New.
You can, but there is no guarantee of sequencing; you cannot predict in what order your event handlers will run.
You can use the addActiveQuery() method to get all active records and the addInactiveQuery() method to get all inactive records.
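For example, a short sketch using addActiveQuery() on the Incident table:

var active = new GlideRecord('incident');
active.addActiveQuery();   // equivalent to addQuery('active', true)
active.query();
gs.info('Active incidents: ' + active.getRowCount());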
The next() method moves to the next record in a GlideRecord. _next() provides the same functionality as next() and is intended for cases where the table being queried has a column named next.
So this brings us to the end of the blog. I hope you enjoyed these ServiceNow Interview Questions. The topics that you learnt in this ServiceNow Interview questions blog are the most sought-after skill sets that recruiters look for in a ServiceNow Professional.
You can also check out our ServiceNow YouTube playlist:
www.youtube.com/playlist?list=PL9ooVrP1hQOGOrWF7soRFiTVepwQI6Dfw
In case if you wish to build a career in ServiceNow then check out our ServiceNow Certification Training.
Got a question for us? Please mention it in the comments section of this ServiceNow Interview Questions and we will get back to you.
Original article source at: https://www.edureka.co/
1669427700
Salesforce being a CRM is used to connect people and information. In this blog, I am going to explain one of the core services, Salesforce Service Cloud, and how it revolutionized customer support by making interactions easier between an organization and its customers. In my previous blog, you learned how to create a custom Salesforce Application. Moving forward, I will help you understand how Salesforce Service Cloud can add value to your business. First, I will explain the need for Salesforce Service Cloud, what it is and what services it provides to engage your customers. In the end, I will explain one use case on how Coca-Cola has been extremely successful in enhancing their customers' experience using Service Cloud.
So, let’s get started with why your organization should choose Salesforce Service Cloud.
If your company deeply cares about the customer service, then Salesforce Service Cloud is what you should go for. Irrespective of whether you are in B2C or B2B domain, you will have several customers raising tickets and queries on a regular basis. These tickets will be received by your service agents. Salesforce Service Cloud helps you in tracking and solving these tickets efficiently.
This is not the only way how you can transform customer experience. Let’s dig deeper and see how Salesforce Service Cloud is creating an impression.
To sum up, Salesforce Service Cloud definitely helps in improving your operational processes, leading to a better experience for your customers. Based on a study done across companies using Salesforce Service Cloud, growth in performance metrics has been drastic. If you see the below infographic, agent productivity increased by 40% and case resolution increased by 41%, which eventually led to a 31% increase in customer retention.
This growth illustrates why people prefer Salesforce Service Cloud and how it plays an important role in improving your customer support team.
Now let’s understand what Salesforce Service Cloud is and what services it has to offer.
Salesforce offers Service Cloud as Software as a Service. Service Cloud is built on the Salesforce Customer Success Platform, giving you a 360-degree view of your customers and enabling you to deliver smarter, faster and more personalized service.
With Salesforce Service Cloud, you can create a connected knowledge base, enable live agent chat, manage case interactions – all at one platform. You can have personalized customer interactions or even up-sell your products/ services based on his/her past activity data.
Now, you may be wondering how to access Service Cloud. Let me walk you through the steps to access a Service Cloud Console.
Step 1: Login to login.salesforce.com
Step 2: Create a SF Console App
Step 3: Choose its display
Step 4: Customize push notifications
Step 5: Grant users Console Access – Sc User
As I mentioned earlier, there are case tracking and knowledge base features. There are several other services that Salesforce Service Cloud offers which will enable you to provide a differentiated customer experience. You can refer to the below image to see what Salesforce Service Cloud has to offer you.
You can take your console to the next level by learning the following features in Salesforce:
Case Management – Any customer issues raised are usually captured and tracked as cases. Cases can be further classified into the following:
At the core of the Service Cloud lies the 'Case' module. Let us understand the Case module with an example. Assume that in a large organization like Coca-Cola, a few of the employees' systems crash; let's call it a 'breakdown of laptops'. Now you need to fix this as soon as possible to ensure business continuity. Service Cloud helps you track the progress and provides you with all the necessary information about every Coca-Cola agent. You can solve the problem by creating a case. You can then assign it a 'high' priority, categorize the origin of the case (such as phone, email or web), and then click 'Save'. Refer to the below screenshot to get a better understanding.
Solutions – You can categorize your solutions into query types – making your solution search easier and closing the case faster. With this, the agent does not need to create a new solution to existing queries every time. This helps in enhancing your agent productivity. Solutions do not need any additional license.
For the same Coca-Cola scenario, if you want to solve a case as an agent, you will definitely search for a solution. First, you can check whether a solution is already present. If it is not, your admin can create a solution stating that the case has been resolved and hence can be closed. You can refer to the screenshot attached below.
As you can see in the above screenshot, I have created a solution- ‘Laptop Solution’ that displays the title, status and the details of the solution created.
Knowledge – Salesforce Knowledge is a knowledge base where users can edit, create and manage content. Knowledge articles are documents of information. Customers can go to the company’s website and search for solutions. Knowledge articles can be associated with a case before it is closed unlike solutions. Salesforce Knowledge needs a separate license to be purchased.
Communities – Communities are a way to collaborate with business partners and customers, distributors, resellers and suppliers who are not part of your organization. Typically, these are the people who are not your regular SFDC users, but you want to provide them some channel to connect with your organization and provide them access to some data as well. To learn more, get Salesforce developer certification and become certified.
In Salesforce, if you go to the ‘Call Center’ dropdown, you will find Success Community. A Salesforce user can use their user id and password to login there. This community is accessible to all the developers, functional consultants or admins. In this community, user can search anything as it has a lot of things like documentation, articles, knowledge, feed, questions and many more. For example: If you want to know about record type, then you can search here. Have a look at the screenshot attached below.
As you can see in the above search, you got a lot of customer’s problems, documentation, known issues, ideas etc. You can now start exploring them, understand the major issues faced by the customers and fix them accordingly.
Console – Agent console provides unified agent experience. It reduces response time by placing all the information together. In a console, you can find everything from customer profiles, to case histories, to dashboards – all in one place.
I showed you the basics of setting up a Salesforce console at the beginning of this blog. An admin can grant console access to users; Service Cloud provides the console, and you can assign users to it. Refer to the below screenshot: you can assign a user profile for the console, and you can assign the Service Cloud user license to agents with those profiles so that they can start using your console.
Social Media – Service Cloud lets you leverage social media platforms such as Facebook, Twitter to engage visitors. With Salesforce Social Studio, customer requests are escalated directly to your social service team. Social media plays an important role in bridging the gap in virtual world, engaging them in real time.
Live Agent – Live agents deal with 1:1 customer interaction. Agents can provide answers faster with customer chat and keyboard shortcuts. They stay totally connected to the customers as their team members are alerted immediately to get the issue resolved. Also, it makes the agents smarter and more productive in the process with real-time assistance. This in turn improves customer satisfaction.
Salesforce Service Cloud is all about providing services to your customers and building a relationship with them. You can use other features such as call center, email & chat, phone, google search, contracts and entitlements, chatter and call Scripting.
Salesforce Service Cloud offers three pricing packages- Professional, Enterprise and Unlimited. You can refer to the table below and select your plan accordingly.
Professional – $75 USD/user/month: Case management; service contracts and entitlements; single Service Console app; web and email response; social customer service; lead-contact account management; order management; opportunity tracking; Chatter collaboration; customizable reports and dashboards; CTI integration; mobile access and administration; limited process automation; limited number of record types, profiles, and role permission sets; unlimited apps and tabs
Enterprise – $150 USD/user/month: Advanced case management
Unlimited – $300 USD/user/month: Live Agent web chat
“Our agents love Salesforce CRM Service. They tell us how easy it is to use and how phenomenal it is when it comes to driving a better customer experience” – Charter
This is how Salesforce Service Cloud has revolutionized the way customers interact with organizations using the services over the internet. Now, let’s have a look at how Coca-Cola implemented Salesforce Service Cloud to solve its business challenges.
Many global organizations leverage Salesforce Service Cloud for a better customer relationship management solution. Here, I will talk about how Coca-Cola Germany used Service Cloud to analyze consumer behavior and build data driven business strategies. This use case will give you an idea on how Service Cloud can be used extensively across any domain.
Salesforce Service Cloud is an integrated platform to connect employees, customers, and suppliers around the world.
Earlier, Coca-Cola was facing several issues while managing their customers. Some of them are listed below:
“In the past, big companies outcompeted smaller companies. But that’s history. Today, the fast companies outcompete the slow companies,” explained Ulrik Nehammer – CEO of Coca-Cola.
Now when they are connected to the Salesforce Service Cloud, technicians are alerted in real-time on customer issues. This helps reduce response time dramatically. Also, call center support agents receive instant access to customer history. With all of this, productivity of Coca-Cola Germany’s technical services has shot up by 30%.
With the Service Cloud, they wanted to understand their customers’ need and cater to them more effectively. Here are some key points that contributed to their excellence.
“This has been a massive step forward for us,” said Andrea Malende, business process expert and mobile solutions in Coca-Cola. “I’m amazed how quick and smooth the implementation was.”
This is how Coca-Cola implemented Salesforce Service Cloud thus making their customers happy. There are several other Salesforce Service Cloud use case stories which show how various companies have benefited and grown their business.
Salesforce Service Cloud supports integration with various application and business system as shown in the image below:
Since everyone and everything is connected on one platform, you should definitely go for Salesforce Service Cloud. Hope you enjoyed reading my blog, you can also go through the video below for a detailed explanation and demo on Salesforce Service Cloud.
Original article source at: https://www.edureka.co/
1668084360
🚀 Reflare is a lightweight and scalable reverse proxy and load balancing library built for Cloudflare Workers. It sits in front of web servers (e.g. a web application, storage platform, or RESTful API), forwards HTTP requests or WebSocket traffic from clients to upstream servers, and transforms responses with several optimizations to improve page loading time.
reflare-template
Install the wrangler CLI and authorize wrangler with a Cloudflare account.
npm install -g wrangler
wrangler login
Generate a new project from reflare-template and install the dependencies.
npm init cloudflare reflare-app https://github.com/xiaoyang-sde/reflare-template
cd reflare-app
npm install
Edit or add route definitions in src/index.ts. Please read the examples and the route definition section below for more details.

npm run dev: preview Reflare with the local development server provided by Miniflare.
npm run deploy: publish Reflare on Cloudflare Workers.

Install the reflare package:

npm install reflare
Import useReflare from reflare. useReflare accepts an object of options:

provider: The location of the list of route definitions. (optional, defaults to static)
- static: Reflare loads the route definitions from routeList.
- kv: Reflare loads the route definitions from Workers KV. (Experimental)
routeList: The initial list of route definitions. (optional, defaults to [], ignored if provider is not static)
namespace: The Workers KV namespace that stores the list of route definitions. (required if provider is kv)

useReflare returns an object with the handle method and the push method:

- The handle method takes the inbound Request to the Worker and returns the Response fetched from the upstream service.
- The push method takes a route and appends it to routeList.

import useReflare from 'reflare';
const handleRequest = async (
request: Request,
): Promise<Response> => {
const reflare = await useReflare();
reflare.push({
path: '/*',
upstream: {
domain: 'httpbin.org',
protocol: 'https',
},
});
return reflare.handle(request);
};
addEventListener('fetch', (event) => {
event.respondWith(handleRequest(event.request));
});
Edit the route definition to change the behavior of Reflare. For example, the route definition below lets Reflare add the Access-Control-Allow-Origin: *
header to each response from the upstream service.
{
path: '/*',
upstream: {
domain: 'httpbin.org',
protocol: 'https',
},
cors: {
origin: '*',
},
}
Set up a reverse proxy for MDN Web Docs:
{
path: '/*',
upstream: {
domain: 'developer.mozilla.org',
protocol: 'https',
},
}
Reflare could proxy WebSocket traffic to upstream services. Set up a reverse proxy for wss://echo.websocket.org:
{
path: '/*',
upstream: {
domain: 'echo.websocket.org',
protocol: 'https',
},
}
Reflare could set custom headers to the request and response. Set up a reverse proxy for https://example.s3.amazonaws.com:
{
path: '/*',
upstream: {
domain: 'example.s3.amazonaws.com',
protocol: 'https',
},
headers: {
response: {
'x-response-header': 'Hello from Reflare',
},
},
cors: {
origin: ['https://www.example.com'],
methods: ['GET', 'POST'],
credentials: true,
},
}
Reflare implements express-like route matching. Reflare matches the path and HTTP method of each incoming request with the list of route definitions and forwards the request to the first matched route.
path (string | string[]): The path or the list of paths that matches the route
methods (string[]): The list of HTTP methods that match the route

// Matches all requests
reflare.push({
path: '/*',
/* ... */
});
// Matches GET and POST requests with path `/api`
reflare.push({
path: '/api',
methods: ['GET', 'POST'],
});
// Matches GET requests with path ending with `.json` or `.yaml` in `/data`
reflare.push({
path: ['/data/*.json', '/data/*.yaml'],
methods: ['GET'],
});
domain (string): The domain name of the upstream server
protocol (string): The protocol scheme of the upstream server (optional, defaults to 'https')
port (number): The port of the upstream server (optional, defaults to 80 or 443 based on protocol)
timeout (number): The maximum wait time on a request to the upstream server (optional, defaults to 10000)
weight (number): The weight of the server that will be accounted for as part of the load balancing decision (optional, defaults to 1)
onRequest(request: Request, url: string): The callback function that will be called before sending the request to the upstream
onResponse(response: Response, url: string): The callback function that will be called after receiving the response from the upstream

reflare.push({
path: '/*',
upstream: {
domain: 'httpbin.org',
protocol: 'https',
port: 443,
timeout: 10000,
weight: 1,
},
/* ... */
});
The onRequest
and onResponse
callback functions could change the content of the request or response. For example, the following example replaces the URL of the request and sets the cache-control
header of the response based on its URL.
reflare.push({
path: '/*',
upstream: {
domain: 'httpbin.org',
protocol: 'https',
port: 443,
timeout: 10000,
weight: 1,
onRequest: (request: Request, url: string): Request => {
// Modifies the URL of the request
return new Request(url.replace('/original/request/path', ''), request);
},
onResponse: (response: Response, url: string): Response => {
// If the URL ends with `.html` or `/`, sets the `cache-control` header
if (url.endsWith('.html') || url.endsWith('/')) {
response.headers.set('cache-control', 'public, max-age=240, s-maxage=60');
}
return response;
}
},
/* ... */
});
To load balance HTTP traffic to a group of servers, pass an array of server configurations to upstream
. The load balancer will forward the request to an upstream server based on the loadBalancing.policy
option.
random: The load balancer will select a random upstream server from the server group. The optional weight parameter in the server configuration could influence the load balancing algorithm.
ip-hash: The client's IP address is used as a hashing key to select the upstream server from the server group. It ensures that requests from the same client will always be directed to the same server.

reflare.push({
path: '/*',
loadBalancing: {
policy: 'random',
},
upstream: [
{
domain: 's1.example.com',
protocol: 'https',
weight: 20,
},
{
domain: 's2.example.com',
protocol: 'https',
weight: 30,
},
{
domain: 's3.example.com',
protocol: 'https',
weight: 50,
},
],
/* ... */
});
Each incoming request is inspected against the firewall rules defined in the firewall
property of the options object. The request will be blocked if it matches at least one firewall rule.
field: The property of the incoming request to be inspected
- asn: The ASN number of the incoming request (number)
- ip: The IP address of the incoming request, e.g. 1.1.1.1 (string)
- hostname: The content of the host header, e.g. github.com (string | undefined)
- user-agent: The content of the user-agent header, e.g. Mozilla/5.0 (string | undefined)
- country: The two-letter country code in the request, e.g. US (string | undefined)
- continent: The continent of the incoming request, e.g. NA (string | undefined)
value (string | string[] | number | number[] | RegExp): The value of the firewall rule
operator: The operator to be used to determine if the request is blocked
- equal: Block the request if field is equal to value
- not equal: Block the request if field is not equal to value
- match: Block the request if value matches field (expects field to be string and value to be RegExp)
- not match: Block the request if value doesn't match field (expects field to be string and value to be RegExp)
- in: Block the request if field is in value (expects value to be Array)
- not in: Block the request if field is not in value (expects value to be Array)
- contain: Block the request if field contains value (expects field and value to be string)
- not contain: Block the request if field doesn't contain value (expects field and value to be string)
- greater: Block the request if field is greater than value (expects field and value to be number)
- less: Block the request if field is less than value (expects field and value to be number)

reflare.push({
path: '/*',
/* ... */
firewall: [
{
field: 'ip',
operator: 'in',
value: ['1.1.1.1', '1.0.0.1'],
},
{
field: 'user-agent',
operator: 'match',
value: /Chrome/,
}
],
});
request (Record<string, string>): Sets request headers going upstream to the backend. Accepts an object. (optional, defaults to {})
response (Record<string, string>): Sets response headers coming downstream to the client. Accepts an object. (optional, defaults to {})

reflare.push({
path: '/*',
/* ... */
headers: {
request: {
'x-example-header': 'hello server',
},
response: {
'x-example-header': 'hello client',
},
},
});
origin: Configures the Access-Control-Allow-Origin CORS header. (optional, defaults to false)
  boolean: Set to true to reflect the origin of the request, or to false to disable CORS.
  string[]: An array of acceptable origins.
  *: Allow any origin to access the resource.
methods (string[]): Configures the Access-Control-Allow-Methods CORS header. Expects an array of valid HTTP methods or *. (optional, defaults to reflecting the method specified in the request's Access-Control-Request-Method header)
allowedHeaders (string[]): Configures the Access-Control-Allow-Headers CORS header. Expects an array of HTTP headers or *. (optional, defaults to reflecting the headers specified in the request's Access-Control-Request-Headers header)
exposedHeaders (string[]): Configures the Access-Control-Expose-Headers CORS header. Expects an array of HTTP headers or *. (optional, defaults to [])
credentials (boolean): Configures the Access-Control-Allow-Credentials CORS header. Set to true to send the header; otherwise it is omitted. (optional, defaults to false)
maxAge (number): Configures the Access-Control-Max-Age CORS header. Set to an integer to send the header; otherwise it is omitted. (optional)
reflare.push({
path: '/*',
/* ... */
cors: {
origin: true,
methods: [
'GET',
'POST',
],
allowedHeaders: [
'Example-Header',
],
exposedHeaders: [
'Example-Header',
],
credentials: true,
maxAge: 86400,
},
});
Cloudflare Workers provides several optimizations by default.
Reflare can load route definitions from Workers KV. Set provider to kv and namespace to a Workers KV namespace (e.g. REFLARE) that is bound to the current Worker. Reflare fetches the route definitions from that namespace and handles each incoming request with the latest definitions.
import useReflare from 'reflare';
declare const REFLARE: KVNamespace;
const handleRequest = async (
request: Request,
): Promise<Response> => {
const reflare = await useReflare({
provider: 'kv',
namespace: REFLARE,
});
return reflare.handle(request);
};
addEventListener('fetch', (event) => {
event.respondWith(handleRequest(event.request));
});
The route definitions should be stored as a JSON array under the route-list key of the namespace. The KV namespace can be modified with wrangler or the Cloudflare API. The Reflare dashboard for route management is under development and will be released soon.
wrangler kv:key put --binding=[namespace] 'route-list' '[{"path":"/*","upstream":{"domain":"httpbin.org","protocol":"https"}}]'
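The same key can also be written from a Worker that has the namespace bound; a minimal sketch, assuming the REFLARE binding from the example above and a placeholder route list:
import type { KVNamespace } from '@cloudflare/workers-types';
declare const REFLARE: KVNamespace;

// Overwrite the route definitions stored under the `route-list` key.
// The route below is a placeholder, not part of the Reflare documentation.
const updateRoutes = async (): Promise<void> => {
  const routes = [
    { path: '/*', upstream: { domain: 'httpbin.org', protocol: 'https' } },
  ];
  await REFLARE.put('route-list', JSON.stringify(routes));
};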
Author: Xiaoyang-sde
Source Code: https://github.com/xiaoyang-sde/reflare
License: MIT license
1667460432
A simple service for looking up your IP address. This is the code that powers https://ifconfig.co.
Just the business, please:
$ curl ifconfig.co
127.0.0.1
$ http ifconfig.co
127.0.0.1
$ wget -qO- ifconfig.co
127.0.0.1
$ fetch -qo- https://ifconfig.co
127.0.0.1
$ bat -print=b ifconfig.co/ip
127.0.0.1
Country and city lookup:
$ curl ifconfig.co/country
Elbonia
$ curl ifconfig.co/country-iso
EB
$ curl ifconfig.co/city
Bornyasherk
$ curl ifconfig.co/asn
AS59795
As JSON:
$ curl -H 'Accept: application/json' ifconfig.co # or curl ifconfig.co/json
{
"city": "Bornyasherk",
"country": "Elbonia",
"country_iso": "EB",
"ip": "127.0.0.1",
"ip_decimal": 2130706433,
"asn": "AS59795",
"asn_org": "Hosting4Real"
}
Port testing:
$ curl ifconfig.co/port/80
{
"ip": "127.0.0.1",
"port": 80,
"reachable": false
}
Pass the appropriate flag (usually -4 or -6) to your client to switch between IPv4 and IPv6 lookup.
Common command-line clients such as curl, httpie, ht, wget and fetch are supported, and all endpoints (except /port) can return information about a custom IP address specified via the ?ip= query parameter.
Compiling requires the Golang compiler to be installed. This package can be installed with:
go install github.com/mpolden/echoip/...@latest
For more information on building a Go project, see the official Go documentation.
A Docker image is available on Docker Hub, which can be downloaded with:
docker pull mpolden/echoip
$ echoip -h
Usage of echoip:
-C int
Size of response cache. Set to 0 to disable
-H value
Header to trust for remote IP, if present (e.g. X-Real-IP)
-a string
Path to GeoIP ASN database
-c string
Path to GeoIP city database
-f string
Path to GeoIP country database
-l string
Listening address (default ":8080")
-p Enable port lookup
-r Perform reverse hostname lookups
-t string
Path to template directory (default "html")
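For example, one might start the server with port lookup, reverse hostname lookups and the GeoIP databases enabled; the .mmdb paths below are placeholders:
# database paths are placeholders
$ echoip -p -r -l :8080 \
    -a /path/to/GeoLite2-ASN.mmdb \
    -c /path/to/GeoLite2-City.mmdb \
    -f /path/to/GeoLite2-Country.mmdb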
Author: Mpolden
Source Code: https://github.com/mpolden/echoip
License: BSD-3-Clause license
1659346140
This plugin can be used to consume the RD Service of a biometric device via Android intents.
Run this command:
With Flutter:
$ flutter pub add rdservice
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get
):
dependencies:
  rdservice: ^0.0.1
Alternatively, your editor might support flutter pub get
. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:rdservice/rdservice.dart';
example/lib/main.dart
import 'package:flutter/material.dart';
import 'dart:async';
import 'package:flutter/services.dart';
import 'package:rdservice/rdservice.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({Key? key}) : super(key: key);
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
String _platformVersion = 'Unknown';
@override
Widget build(BuildContext context) {
print(_platformVersion);
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Plugin example app'),
),
body: SingleChildScrollView(
child: Column(
children: [
ElevatedButton(
child: const Text("Init Device"),
onPressed: initDevice,
),
ElevatedButton(
child: const Text("Capture"),
onPressed: captureFromDevice,
),
Padding(
padding: const EdgeInsets.all(8.0),
child: Text(_platformVersion),
),
],
),
),
),
);
}
Future<void> initDevice() async {
RDService? result;
try {
result = await Msf100.getDeviceInfo();
} on PlatformException catch (e) {
if (mounted) {
setState(() {
_platformVersion = e.message ?? 'Unknown exception';
});
}
return;
}
if (!mounted) return;
setState(() {
_platformVersion = result?.status ?? "Unknown";
});
}
Future<void> captureFromDevice() async {
PidData? result;
try {
result = await Msf100.capture();
} on PlatformException catch (e) {
if (mounted) {
setState(() {
_platformVersion = e.message ?? 'Unknown exception';
});
}
return;
}
if (!mounted) return;
setState(() {
_platformVersion = result?.resp.errInfo ?? 'Unknown Error';
});
}
}
Author: jeevareddy
Source Code: https://github.com/jeevareddy/rdservice
License: MIT license
1657974540
electron-crash-report-service
Aggregate crash reports for Electron applications
$ npm install # Install dependencies
$ npm start # Start service in development
var electron = require('electron')
electron.crashReporter.start({
companyName: '<company-name>',
productName: '<product-name>',
submitURL: '<reporter-url>'
})
PORT [80] # Set the port the service should listen to
STORAGE_PATH [/var/crash-reports] # Location to store crash reports
NODE_ENV [production] # production|development
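For local development, these defaults can be overridden when starting the service; the values below are purely illustrative:
$ PORT=8080 STORAGE_PATH=/tmp/crash-reports NODE_ENV=development npm start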
/crash-report POST Submit a new crash report
/404 GET 404 handler
None
Save the unit file as /etc/systemd/system/electron-crash-reporter.service
, and the application image as /images/electron-crash-report-service.aci
[Unit]
Description=electron-crash-report-service
Requires=network-online.target
After=network-online.target
[Service]
Slice=machine.slice
Delegate=true
CPUQuota=10%
MemoryLimit=1G
Environment=PORT=80
Environment=STORAGE_PATH=/var/crash-reports
Environment=NODE_ENV=production
ExecStart=/usr/bin/rkt run --inherit-env /images/electron-crash-report-service.aci
ExecStopPost=/usr/bin/rkt gc --mark-only
KillMode=mixed
Restart=always
You can then run it using systemctl
:
$ sudo systemctl start electron-crash-reporter.service
$ sudo systemctl stop electron-crash-reporter.service
$ sudo systemctl restart electron-crash-reporter.service
Author: Yoshuawuyts
Source Code: https://github.com/yoshuawuyts/electron-crash-report-service
License: MIT license
1657873620
Well, in this case, since someone has visited this link before you, the file was cached with leveldb. But if you were to try and grab a bundle that nobody else has tried to grab before, what would happen is this:
API
There are a few API endpoints:
Get the latest version of :module.
Get a version of :module
which satisfies the given :version
semver range. Defaults to latest.
The same as the prior two, except with --debug
passed to browserify.
In this case, --standalone
is passed to browserify.
Both --debug
and --standalone
are passed to browserify!
POST a body that looks something like this:
{
"options": {
"debug": true
},
"dependencies": {
"concat-stream": "0.1.x",
"hyperstream": "0.2.x"
}
}
"options" is where you get to set "debug", "standalone", and "fullPaths". Usually, in this case, you'll probably only really care about debug. If you don't define "options", it will default to { "debug": false, "standalone": false, "fullPaths": false }
.
What you get in return looks something like this:
HTTP/1.1 200 OK
X-Powered-By: Express
Location: /multi/48GOmL0XvnRZn32bkpz75A==
content-type: application/json
Date: Sat, 22 Jun 2013 22:36:32 GMT
Connection: keep-alive
Transfer-Encoding: chunked
{
"concat-stream": {
"package": /* the concat-stream package.json */,
"bundle": /* the concat-stream bundle */
},
"hyperstream": {
"package": /* the hyperstream package.json */,
"bundle": /* the hyperstream bundle */
}
}
The bundle gets permanently cached at /multi/48GOmL0XvnRZn32bkpz75A==
for future GETs.
If you saved the Location url from the POST earlier, you can just GET it instead of POSTing again.
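Putting the two steps together with curl might look like the following; the /multi path is inferred from the Location header shown above, so treat it as an assumption:
# POST the dependency list; the response carries a Location header.
$ curl -si https://wzrd.in/multi \
    -H 'Content-Type: application/json' \
    -d '{"options":{"debug":true},"dependencies":{"concat-stream":"0.1.x"}}'
# Later, fetch the cached bundle directly from the returned Location.
$ curl https://wzrd.in/multi/48GOmL0XvnRZn32bkpz75A==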
Get information on the build status of a module. Returns build information for all versions which satisfy the given semver (or latest in the event of a missing semver).
Blobs generally look something like this:
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Content-Length: 109
ETag: "-9450086"
Date: Sun, 26 Jan 2014 08:05:59 GMT
Connection: keep-alive
{
"module": "concat-stream",
"builds": {
"1.4.1": {
"ok": true
}
}
}
The "module" and "builds" fields should both exist. Keys for "builds" are the versions. Properties:
Versions which have not been built will not be keyed onto "builds".
browserify-cdn is ready to run on Heroku:
heroku create my-browserify-cdn
git push heroku master
heroku ps:scale web=1
You can build and run an image doing the following:
docker build -t "wzrd.in" /path/to/wzrd.in
docker run -p 8080:8080 wzrd.in
Keep in mind that a new deploy will wipe the cache.
Quick Start
Try visiting this link:
/standalone/concat-stream@latest
Also, wzrd.in has a nice url generating form.
Author: Browserify
Source Code: https://github.com/browserify/wzrd.in
License: MIT license
1655015280
Leaps is a service for collaboratively editing your local files over a web UI, using operational transforms to ensure zero-collision synchronization across any number of editing clients.
Simply navigate to a directory you want to share, run leaps
, open the hosted page (default http://localhost:8080
) in your browser and direct any friends on your LAN to the same page. You can now collaboratively edit any documents in that directory.
Your files will be written to in the background as you edit. If you aren't using version control, or simply want extra protection, you can run leaps in safe mode with the --safe
flag. In safe mode any changes you make will be placed in a .leaps_cot.json
file, which you can then apply to your files once you are happy by running with the --commit
flag.
When writing code it sucks to have to leave the editor for running tests, linters or builds. However, allowing the internet to run arbitrary commands on the host machine is a recipe for disaster.
Instead, leaps allows you to specify pre-written commands using the -cmd
flag, which are then available for clients to trigger asynchronously while they edit. Results are broadcast to all connected users, so you can all see the outcome as a team.
For example, leaps -cmd "golint ./..." -cmd "go build ./cmd/leaps"
gives users both a linter and a build command that they can trigger on your machine.
Leaps can also be used as a library, with implementations of accessors for various document hosting solutions and plugable authentication layers, allowing you to build your own services to suit many service architectures.
The Leaps server components are implemented in Golang, and there is a JavaScript client that can currently be used with ACE, CodeMirror and Textarea editors.
To read more about the service library components and find examples check out the godocs.
To read about the JavaScript client check out the README.
Leaps is a single binary, with no runtime dependencies. Just download a package for your OS from the latest releases page.
With Homebrew:
brew install leaps
leaps -h
From source (requires Go):
go get github.com/Jeffail/leaps/cmd/...
leaps -h
OS | Status |
---|---|
OSX x86_64 | Supported, tested |
Linux x86 | Supported |
Linux x86_64 | Supported, tested |
Linux ARMv5 | Builds |
Linux ARMv7 | Supported, tested |
Windows x86 | Builds |
Windows x86_64 | Builds |
Contributions are very welcome, just fork and submit a pull request.
Ashley Jeffs
WARNING: This project is no longer actively maintained.
Author: jeffail
Source Code: https://github.com/jeffail/leaps
License: MIT license