Royce Reinger

1675699526

Metacat: A Unified Metadata Exploration API Service

Metacat

Introduction

Metacat is a unified metadata exploration API service. You can explore Hive, RDS, Teradata, Redshift, S3, and Cassandra. Metacat tells you what data you have, where it resides, and how to process it. Metadata, in the end, is really data about the data, so the primary purpose of Metacat is to give you a place to describe your data so that you can do more useful things with it.

Metacat focuses on solving three problems:

  • Federate views of metadata systems.
  • Allow arbitrary metadata storage about data sets.
  • Enable metadata discovery.

Documentation

TODO

Releases


Builds

Metacat builds run on Travis CI.

Getting Started

git clone git@github.com:Netflix/metacat.git
cd metacat
./gradlew clean build

Once the build completes, the Metacat WAR file is generated under the metacat-war/build/libs directory. Metacat needs two basic configurations:

  • metacat.plugin.config.location: Path to the directory containing the catalog configuration. Please look at catalog samples used for functional testing.
  • metacat.usermetadata.config.location: Path to the configuration file containing the connection properties to store user metadata. Please look at this sample.

Running Locally

Take the built WAR from metacat-war/build/libs and deploy it to an existing Tomcat as ROOT.war.

The REST API can be accessed @ http://localhost:8080/mds/v1/catalog

Swagger API documentation can be accessed @ http://localhost:8080/swagger-ui/index.html
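As a quick smoke test, the catalog endpoint above can be queried from a short script. A minimal Python sketch, illustrative only: it assumes the local Tomcat deployment described above and uses only the standard library.

```python
import json
from urllib.request import urlopen

# Assumption: Metacat deployed locally as ROOT.war, as described above.
BASE = "http://localhost:8080/mds/v1"

def catalog_url(base: str = BASE) -> str:
    """Build the URL of the catalog listing endpoint."""
    return f"{base}/catalog"

def list_catalogs(base: str = BASE):
    """Fetch the list of configured catalogs as parsed JSON."""
    with urlopen(catalog_url(base)) as resp:
        return json.load(resp)

print(catalog_url())  # http://localhost:8080/mds/v1/catalog
```

Calling `list_catalogs()` against a running instance returns the catalogs defined by your metacat.plugin.config.location directory.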

Docker Compose Example

Prerequisite: Docker Compose is installed.

To start a self-contained Metacat environment with some sample catalogs, run the command below. This will start a docker-compose cluster containing a Metacat container, a Hive metastore container, a Cassandra container, and a PostgreSQL container.

./gradlew startMetacatCluster

./gradlew metacatPorts
  • metacatPorts - Prints out which exposed ports are mapped to the internal container ports. Look for the port (MAPPED_PORT) mapped to container port 8080.

REST API can be accessed @ http://localhost:<MAPPED_PORT>/mds/v1/catalog

Swagger API documentation can be accessed @ http://localhost:<MAPPED_PORT>/swagger-ui/index.html

To stop the docker compose cluster:

./gradlew stopMetacatCluster

Download Details:

Author: Netflix
Source Code: https://github.com/Netflix/metacat 
License: Apache-2.0 license

#machinelearning #api #service 

Bongani Ngema

1675576320

Wiqaytna is The Official Moroccan Exposure Notification App

Wiqaytna Android

Wiqaytna is the official Moroccan exposure notification app.


Configs in gradle.properties

Sample Configuration

ORG="MAR"
STORE_URL="<Play store URL>"
PRIVACY_URL="<Privacy policy URL>"

SERVICE_FOREGROUND_NOTIFICATION_ID=771579
SERVICE_FOREGROUND_CHANNEL_ID="Wiqaytna Updates"
SERVICE_FOREGROUND_CHANNEL_NAME="Wiqaytna Foreground Service"

PUSH_NOTIFICATION_ID=771578
PUSH_NOTIFICATION_CHANNEL_NAME="Wiqaytna Notifications"
ERROR_NOTIFICATION_ID=771580

#service configurations
SCAN_DURATION=8000
MIN_SCAN_INTERVAL=36000
MAX_SCAN_INTERVAL=43000

ADVERTISING_DURATION=180000
ADVERTISING_INTERVAL=5000

PURGE_INTERVAL=86400000
PURGE_TTL=1814400000
MAX_QUEUE_TIME=7000
BM_CHECK_INTERVAL=540000
HEALTH_CHECK_INTERVAL=900000
CONNECTION_TIMEOUT=6000
BLACKLIST_DURATION=100000

FIREBASE_REGION = "<Your Firebase region>"

STAGING_FIREBASE_UPLOAD_BUCKET = "wiqayetna-app-staging"
STAGING_SERVICE_UUID = "17E033D3-490E-4BC9-9FE8-2F567643F4D3"

V2_CHARACTERISTIC_ID = "117BDD58-57CE-4E7A-8E87-7CCCDDA2A804"

PRODUCTION_FIREBASE_UPLOAD_BUCKET = "wiqaytna-app"
PRODUCTION_SERVICE_UUID = "B82AB3FC-1595-4F6A-80F0-FE094CC218F9"

android.useAndroidX=true
android.enableJetifier=true
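As a rough illustration of how the scan timing values above fit together (an assumption based on the BlueTrace design, not code from this repo): the app scans for SCAN_DURATION milliseconds, then waits a randomized interval between MIN_SCAN_INTERVAL and MAX_SCAN_INTERVAL before the next scan. A Python sketch:

```python
import random

# Timing values from gradle.properties above (milliseconds)
SCAN_DURATION = 8000
MIN_SCAN_INTERVAL = 36000
MAX_SCAN_INTERVAL = 43000

def next_scan_delay(rng: random.Random) -> int:
    """Randomized idle gap before the next scan window; the jitter makes
    simultaneous scans across nearby devices less likely (assumption)."""
    return rng.randint(MIN_SCAN_INTERVAL, MAX_SCAN_INTERVAL)

# One scan cycle = scan window + randomized idle gap
rng = random.Random(42)
cycle_ms = SCAN_DURATION + next_scan_delay(rng)
assert SCAN_DURATION + MIN_SCAN_INTERVAL <= cycle_ms <= SCAN_DURATION + MAX_SCAN_INTERVAL
```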

ORG: For international federation usage

To obtain the official BlueTrace Service ID and Characteristic ID, please email info@bluetrace.io


Build Configurations in build.gradle

Change the package name and other configurations accordingly, such as the resValue entries in the different buildTypes settings. For example:

buildTypes {
    debug {
        buildConfigField "String", "FIREBASE_UPLOAD_BUCKET", STAGING_FIREBASE_UPLOAD_BUCKET
        buildConfigField "String", "BLE_SSID", STAGING_SERVICE_UUID

        String ssid = STAGING_SERVICE_UUID
        versionNameSuffix "-debug-${getGitHash()}-${ssid.substring(ssid.length() - 5, ssid.length() - 1)}"
        resValue "string", "app_name", "Wiqaytna"
        applicationIdSuffix "stg"
    }
}

Values such as STAGING_FIREBASE_UPLOAD_BUCKET, STAGING_SERVICE_UUID have been defined in gradle.properties as described above.


Firebase and google-services.json

Set up Firebase for the different environments. Download the google-services.json for each environment and put it in the corresponding folder.

Debug: ./app/src/debug/google-services.json

Production: ./app/src/release/google-services.json

The app currently relies on Firebase Functions to work. More information can be obtained by referring to opentrace-cloud-functions.


Remote Config

Remote Config is used for retrieving the "Share" message used in the app. The key for it is "ShareText". If the value cannot be retrieved, the app falls back to R.string.share_message.
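That fallback logic amounts to the following (a Python stand-in for the Kotlin code; the constant and message here are placeholders for R.string.share_message, not the app's actual string):

```python
# Placeholder for the bundled R.string.share_message resource (hypothetical text)
R_STRING_SHARE_MESSAGE = "Help stop the spread - download Wiqaytna."

def share_text(remote_config: dict) -> str:
    """Return the Remote Config 'ShareText' value when present and non-empty,
    otherwise fall back to the bundled string resource."""
    value = remote_config.get("ShareText")
    return value if value else R_STRING_SHARE_MESSAGE

assert share_text({"ShareText": "Join me on Wiqaytna!"}) == "Join me on Wiqaytna!"
assert share_text({}) == R_STRING_SHARE_MESSAGE
```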


Protocol Version

The protocol version used should be 2 (or above); version 1 of the protocol has been deprecated.


Security Enhancements

SSL pinning is not included as part of this repo. It is recommended to add a check for the SSL certificate returned by the backend.


Statement from Google

The following is a statement from Google: "At Google Play we take our responsibility to provide accurate and relevant information for our users very seriously. For that reason, we are currently only approving apps that reference COVID-19 or related terms in their store listing if the app is published, commissioned, or authorized by an official government entity or public health organization, and the app does not contain any monetization mechanisms such as ads, in-app products, or in-app donations. This includes references in places such as the app title, description, release notes, or screenshots. For more information visit https://android-developers.googleblog.com/2020/04/google-play-updates-and-information.html"


Acknowledgements

Wiqaytna uses the following third party libraries / tools.


Download Details:

Author: Wiqaytna-app
Source Code: https://github.com/Wiqaytna-app/wiqaytna_android 
License: GPL-3.0 license

#kotlin #android 

Sheldon Grant

1672831020

Service Level Agreement Benefits

Introduction to Service Level Agreements

The demand for accurate, real-time data has never been greater for today's data engineering teams, yet data downtime has always been a reality. So, how do we break the cycle and obtain reliable data?

Data teams in the early 2020s, like their software engineering counterparts 20 years ago, experienced a severe conundrum: reliability. Businesses are ingesting more operational and third-party data than ever before. Employees from across the organization, including those on non-data teams, interact with data at all stages of its lifecycle. Simultaneously, data sources, pipelines, and workflows are becoming more complex.

While software engineers have resolved application downtime with specialized fields (such as DevOps and Site Reliability Engineering), frameworks (such as Service Level Agreements, Indicators, and Objectives), and a plethora of acronyms (SRE, SLAs, SLIs, and SLOs, respectively), data teams haven't yet given data downtime the attention it deserves. Now it is up to data teams to do the same: prioritize, standardize, and measure data reliability. I believe that data quality or reliability engineering will become its own specialization over the next decade, in charge of this crucial business component. In the meantime, let's look at what data reliability SLAs are, why they're essential, and how to develop them.

What is a Service Level Agreement?

"Slack's SLA guarantees 99.99 percent service uptime. If breached, customers receive a service credit."

Service Level Agreements (SLAs) are best described as a method many businesses use to define and measure the level of service a given vendor, product, or internal team will deliver, along with potential remedies if they fail to do so.

As an example, for customers on Plus plans and above, Slack's customer-facing SLA guarantees 99.99 percent uptime every fiscal quarter with no more than 10 hours of scheduled downtime. If they come up short, impacted customers will be given service credits for future use on their accounts.
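The arithmetic behind such an uptime guarantee is simple. A quick Python sketch, assuming a 90-day fiscal quarter:

```python
# Minutes in a 90-day fiscal quarter (simplifying assumption)
QUARTER_MINUTES = 90 * 24 * 60  # 129,600 minutes

def allowed_downtime_minutes(uptime_pct: float, period_minutes: int = QUARTER_MINUTES) -> float:
    """Unscheduled downtime budget implied by an uptime percentage."""
    return period_minutes * (1 - uptime_pct / 100)

# 99.99% uptime leaves just under 13 minutes of unscheduled downtime per quarter
print(round(allowed_downtime_minutes(99.99), 2))  # 12.96
```

This is why each extra "nine" matters: 99.9 percent would allow roughly ten times that budget.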


Customers use service level agreements (SLAs) to guarantee that they receive what they paid for from a vendor: a robust, dependable product. Many software teams develop SLAs for internal projects or users instead of end-users.

Importance of data reliability Service Level Agreements for Data Engineers

As an example, consider internal software engineering SLAs. Why bother formalizing SLAs if you don't have a customer urging you to commit to certain thresholds in an agreement? Why not simply rely on everyone to do their best and aim for as close to 100 percent uptime as possible? Wouldn't that just add extraneous, burdensome regulation?

No, not at all. The exercise of defining, complying with, and evaluating the critical characteristics of reliable software can be immensely beneficial while also setting clear expectations for internal stakeholders. SLAs help engineering, product, and business teams think about the bigger picture of their applications and prioritize incoming requests. SLAs provide confidence that different software engineering teams and their stakeholders mean the same thing, care about the same metrics, and share a pledge to thoroughly documented requirements.

Setting uptime requirements below 100 percent leaves room for improvement; a zero-downtime target, besides being infeasible, leaves none. Even with the best practices and techniques in place, systems will fail from time to time. However, with good SLAs in place, engineers will know precisely when and how to intervene if anything goes wrong.

Likewise, data teams and their data consumers must categorize, measure, and track the reliability of their data throughout its lifecycle. If these metrics are not clearly established, consumers may make inaccurate assumptions or rely on anecdotal information about the trustworthiness of your data platform. Defining data reliability SLAs helps build trust and strengthen the relationship between your data, your data team, and downstream consumers, whether they are customers or cross-functional teams within your organization. In other words, data SLAs help your organization become more "data-driven" in its approach to data.

SLAs organize and streamline communication, ensuring that your team and stakeholders share a common language and refer to the same metrics. And, because defining SLAs helps your data team identify the business's priority areas, they'll be able to prioritize and respond more rapidly when incidents arise.

What is a DQ SLA (Data Quality Service Level Agreement)?

A DQ SLA, like a more traditional SLA, governs roles and responsibilities in accordance with agreed levels of acceptability, as well as realistic expectations for response and remediation when data errors and flaws are identified. DQ SLAs can be defined for any circumstance in which a data provider transfers data to a data consumer.

More specifically, a data recipient would specify expectations regarding measurable aspects related to one or more dimensions of data quality (such as completeness, accuracy, consistency, timeliness, and so on) within any business process. The DQ SLA would then include an expected data quality level and even a list of processes to be followed if those expectations are not fulfilled, such as:

  1. The location in the business process flow that the SLA covers.
  2. The critical data elements covered by the SLA.
  3. The data quality dimensions associated with each data element.
  4. Quality expectations for each data element along each of the identified dimensions.
  5. Specified data quality rules that formalize those expectations.
  6. The business consequences of noncompliance with the defined data quality rules.
  7. Methods for determining non-compliance with those expectations.
  8. Acceptance criteria for each measurement.
  9. How and where concerns should be classified, prioritized, and documented.
  10. The individual(s) to be notified if acceptability thresholds are not met.
  11. Expected resolution or remediation times for issues.
  12. A method for tracking the status of the resolution process.
  13. An escalation tactic and hierarchy for when resolution times are not met.
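Several of the items above (critical elements, dimensions, acceptance criteria, compliance checks) can be made concrete with a tiny sketch. Python, purely illustrative; all names are hypothetical and not from any specific DQ tool:

```python
from dataclasses import dataclass

@dataclass
class DQRule:
    element: str      # critical data element covered by the SLA
    dimension: str    # e.g. "completeness", "accuracy", "timeliness"
    threshold: float  # acceptance criterion for the measurement

def compliant(rule: DQRule, measured: float) -> bool:
    """Determine (non-)compliance of a measured value against the expectation."""
    return measured >= rule.threshold

# Hypothetical rule: customer_email completeness must be at least 98%
rule = DQRule(element="customer_email", dimension="completeness", threshold=0.98)
assert compliant(rule, 0.995)
assert not compliant(rule, 0.90)  # would trigger notification and escalation
```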


The DQ SLA is distinctive because it recognizes that data quality issues and their resolution are almost always linked to business operations. To benefit from the processes suggested by the definition of a DQ SLA (particularly items 5, 7, 9, and 12), you need systems that facilitate those operations, namely:

  1. Management of data quality rules
  2. Monitoring, measurement, and notification
  3. Categorization, prioritization, and tracking of data quality incidents

These systems are critical to achieving the DQ SLA's goal: data quality control, built on rules defined over agreed-upon data quality dimensions.

Suppose it is determined that the data does not meet the defined expectations. In that case, the remediation process can include a variety of tasks, such as writing the non-conforming records to an outlier file, emailing a system administrator or data steward to resolve the issue, running an immediate corrective data quality action, or any combination of these.

How to build Service Level Agreements for Data Platforms?

Creating and adhering to data reliability SLAs is a deliberate, precise exercise.

First, let's go over some terminology. Following Google's definitions, service level agreements (SLAs) require clear service level indicators (SLIs), quantitative measures of service quality, and agreed service level objectives (SLOs), the target values or ranges of values each indicator must meet. For example, many engineering teams use availability as an indicator of site reliability and set an objective of maintaining availability of at least 99 percent.

Creating reliability SLAs for data teams typically involves three key steps: defining, measuring, and tracking.

Using SLAs to define Data Reliability

The first step is to agree on and clearly articulate what reliable data means to your company.

Setting a baseline is a good place to start. Begin by taking stock of your data, how it's being used, and by whom. Examine your data's historical performance to establish a baseline metric for reliability.

You should also solicit feedback from your data consumers on what "reliability" means to them. Even with a thorough knowledge of data lineage, data engineers are frequently removed from their colleagues' day-to-day workflows and use cases. When developing reliability agreements with internal teams, it is crucial to know how consumers interact with data, what matters most to them, and which potential complications require the most stringent and immediate intervention.

Furthermore, you'll want to ensure that all relevant stakeholders (all data leaders or business consumers with a stake in reliability) have reviewed and agreed on the definitions of reliability you're constructing.

You'll be able to set clear, actionable SLAs once you understand:

  1. What data you're working with
  2. How it's used, and
  3. Who uses it.

SLIs for measuring Data Reliability

Once you've established a comprehensive understanding and baseline, you can begin to home in on the key metrics that will serve as your service-level reliability indicators.

As a general rule, data SLIs should capture the mutually agreed-upon state of data you defined in step 1, along with limits on how data can and cannot be used and a detailed description of what constitutes data downtime. This may include incomplete, duplicated, or out-of-date data.

Your SLIs will depend on your particular use case, but here are a few metrics commonly used to assess data health:

  1. The number of data incidents for a specific data asset (N). This may be well outside your control, given that you most likely rely on external data sources, but it is still a significant driver of data downtime and should therefore be measured.
  2. Time-to-detection (TTD): This metric quantifies how quickly your team is alerted when an issue arises. Without proper detection and escalation processes, detection can take weeks or even months; bad data can cause "silent errors" that lead to costly issues affecting both your company and your customers.
  3. Time-to-resolution (TTR): This measures how quickly your team was able to resolve an issue after being notified about it.
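The two time-based metrics can be computed directly from incident timestamps. An illustrative Python sketch with made-up timestamps:

```python
from datetime import datetime

# Hypothetical incident timeline (illustrative values only)
occurred = datetime(2023, 1, 2, 3, 0)   # bad data lands in the warehouse
detected = datetime(2023, 1, 2, 9, 30)  # alert fires (or a consumer reports it)
resolved = datetime(2023, 1, 2, 11, 0)  # pipeline fixed, data backfilled

ttd_hours = (detected - occurred).total_seconds() / 3600  # time-to-detection
ttr_hours = (resolved - detected).total_seconds() / 3600  # time-to-resolution

print(ttd_hours, ttr_hours)  # 6.5 1.5
```

Tracking these per incident over time gives you the baseline needed to set realistic objectives in the next step.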

Using SLOs to track Data Reliability

Once you've identified key indicators (SLIs) for data reliability, you can set objectives (SLOs), i.e., acceptable ranges of data downtime. These SLOs should be realistic given your current situation. For instance, if you choose to include TTD as a metric but don't yet use automated monitoring tools, your SLO should be more modest than that of a mature organization with extensive data reliability tooling. Agreeing on these ranges makes it easy to create a consistent framework that rates incidents by severity, making it easier to communicate and respond quickly when issues arise.
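Rating incidents against an objective can be sketched as a simple comparison of measured resolution time against the agreed target. The labels and thresholds below are hypothetical examples, not a standard:

```python
def slo_status(measured_ttr_hours: float, slo_ttr_hours: float) -> str:
    """Label an incident relative to the agreed TTR objective.
    'severe breach' here means more than double the objective (an
    illustrative convention, not an industry rule)."""
    if measured_ttr_hours <= slo_ttr_hours:
        return "within SLO"
    if measured_ttr_hours <= 2 * slo_ttr_hours:
        return "breach"
    return "severe breach"

assert slo_status(1.5, 4.0) == "within SLO"
assert slo_status(6.0, 4.0) == "breach"
assert slo_status(10.0, 4.0) == "severe breach"
```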

Once you've established these priorities and integrated them into your SLAs, you can create a dashboard to track and evaluate progress. Some data teams build ad hoc dashboards, whereas others rely on dedicated data observability solutions.

What are the challenges of Service Level Agreements in Data Platforms?

Delivering services to millions of customers via data centers involves resource management challenges. The main challenges of service level agreements are: consumer-driven service management, data processing for risk management, independent resource management, measuring the service, system design and reiteration assessment, and SLA resource allocation with virtualization.

Consumer-Driven Service Management

To satisfy customer requirements, several user-centric objectives are used: receiving feedback from customers, providing reliable communication with customers, increasing access efficiency to understand the specific needs of the customer, and trusting the customer. If customer expectations are taken into account when developing a service, those expectations carry over to the service provider.

Data Processing of Risk Management

The Risk Management process includes:

  1. Identifying risk factors and assessing them.
  2. Identifying risk management techniques.
  3. Reviewing the risk management plan.

Grid service customers' quality-of-service requirements necessitate service level agreements between service providers and customers. When resources are disrupted or unavailable, service providers must decide whether to accept or reject service level agreement requests.

Independent Resource Management

The data processing center should keep the reservation process running smoothly by managing existing service requests, planning for future service requests, and adjusting the price for incoming requests. The resource management paradigm maps resource interactions to a platform-independent pool of service level agreements. A resource management architecture in which computing systems cooperate via numerous virtual machines enhances the effectiveness of computational models and the utilization of resources designed for on-demand use.

SLA Resource Allocation Using Virtualization

Virtual machines with various resource management policies facilitate resource allocation in SLAs by meeting the needs of multiple users. An optimal joint multiple-resource allocation method is used in the resource allocation model for distributed environments, and a resource allocation methodology is introduced to execute user applications for the multi-dimensional resource allocation problem.

Measuring the Service

Different service providers offer different computing services. To design for application and service needs, service performance must be assessed from cloud measurements published in numerous public documents. As part of a service level agreement, service measurement includes the current system's configuration and runtime metrics.

System Design and Reiteration Assessment

Various sources and consumers with varying service standards are assessed to demonstrate the efficiency of resource management plans. Because resources move around and service requests can arrive from multiple consumers at any stage, it is tedious to evaluate and monitor resource plans in a repeatable, administrable fashion.

Conclusion

Data SLAs help the organization stay on track. They are a public pledge to others and a bilateral agreement: you agree to continue providing data within specified criteria in exchange for people's participation and awareness. A lot can go wrong in data engineering, and much of it is due to misunderstanding. Documenting your SLA will go a long way toward setting the record straight, allowing you to achieve your primary objective of instilling greater trust in data within your organization.

The good news is that when defining metrics, service, and deliverable targets for big data analytics, you don't have to start from scratch, since the technique can be borrowed from the transactional side of your IT work. For many businesses, it's simply a case of examining the service level processes already in place for their transactional applications, then applying those processes to big data and making the changes required to address the distinct features of the big data environment, such as parallel processing and the handling of many types and forms of data.

Original article source at: https://www.xenonstack.com/

#service #level 


PHP Service Bus (publish-subscribe Pattern) Implementation

Introduction

A concurrency framework (based on Amp) that lets you implement asynchronous messaging, transparent workflows, and control of long-lived business transactions by means of the Saga pattern. It implements a message-based architecture and includes the following patterns: Saga, Publish/Subscribe, Message Bus.

Main Features

  • Cooperative multitasking
  • Asynchronous messaging (Publish/Subscribe pattern implementation)
  • Event-driven architecture
  • Distribution (messages can be handled by different applications)
    • Subscribers can be implemented on any programming language
  • High performance
  • Orchestration of long-lived business transactions (for example, a checkout) with the help of Saga Pattern
  • Full history of aggregate changes (EventSourcing)
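Conceptually, the Publish/Subscribe pattern at the heart of the bus looks like this. Sketched in Python for illustration only; service-bus itself is an asynchronous PHP framework built on Amp:

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Minimal synchronous publish/subscribe sketch (not the service-bus API)."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        """Register a handler for a topic; many handlers may share a topic."""
        self._handlers[topic].append(handler)

    def publish(self, topic: str, message) -> None:
        """Deliver a message to every handler subscribed to the topic."""
        for handler in self._handlers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("order.checkout", received.append)
bus.publish("order.checkout", {"order_id": 42})
assert received == [{"order_id": 42}]
```

In the real framework, subscribers can live in separate applications (in any language) and messages travel over RabbitMQ, Redis, or Nsq rather than an in-process dictionary.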

See it in action

Jump into our Quick Start and build your first distributed solution in just 15 minutes.

Documentation

Documentation can be found in the .documentation directory

Requirements

  • PHP >=8.1
  • RabbitMQ/Redis/Nsq
  • PostgreSQL


Communication Channels

You can find help and discussion in the following places:

Contributing

Contributions are welcome! Please read CONTRIBUTING for details.

Download Details:

Author: php-service-bus
Source Code: https://github.com/php-service-bus/service-bus 
License: MIT license

#php #service #async #messaging 


Integrating A Springboot Application with MarkLogic As Backend Service

In this article, I am going to show you how to use Spring Boot as a RESTful web service with MarkLogic as the backend database, and how to integrate MarkLogic with a Spring Boot application.

Introduction

Assuming you have a Spring Boot application that uses MarkLogic as a backend service, this guide will show you how to integrate the two. The first thing you need to do is install the MarkLogic Java Client API; you can find instructions in the README file of its GitHub repository. Once you have installed the client API, add the following dependency to your project's pom.xml file:

<dependency>
    <groupId>com.marklogic</groupId>
    <artifactId>marklogic-client-api</artifactId>
    <version>RELEASE</version>
</dependency>

Next, configure the connection details for your MarkLogic server in your application.properties file. The following configuration is the minimum required:

ml.host=localhost # Hostname or IP address of your MarkLogic server
ml.port=8010 # Port number of your MarkLogic server's Application Server
ml.database=Documents # Name of the database to connect to
spring.data.marklogic.username=admin # Username for connecting to MarkLogic
spring.data.marklogic.password=admin # Password for connecting to MarkLogic

With this basic configuration in place, you can now start using the Spring Data for MarkLogic library in your application code!

Understanding the Architecture for MarkLogic Integration

MarkLogic is a powerful NoSQL database that can be used as a backend service for Springboot applications. In this blog post, we will take a look at the MarkLogic architecture and how it can be used to integrate a Springboot application with MarkLogic.

The MarkLogic architecture is a shared-nothing architecture: each node in a cluster is independent, and the cluster can scale horizontally. MarkLogic uses sharding to distribute data across nodes and replication to ensure high availability and disaster recovery.

MarkLogic has a flexible indexing system that can index any kind of data. This makes it easy to search and retrieve data from the database. MarkLogic also supports geospatial indexing which allows you to store and query data based on location.

The MarkLogic API is RESTful and there are language bindings for Java, Node.js, and .NET. This makes it easy to develop applications that use MarkLogic as a backend service.

Springboot Json Converter

In this section, we will learn how to use the Springboot Json Converter to easily convert your Java objects to and from JSON. We will also learn how to configure the converter to work with MarkLogic.

The Springboot Json Converter is a powerful tool that can be used to easily convert your Java objects to and from JSON. The converter is very easy to use and can be configured to work with MarkLogic.

To use the converter, you first need to add the following dependency to your project:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>

Once you have added the dependency, you can use the converter in your code like this:

import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper mapper = new ObjectMapper();

// Convert a Java object to JSON
String jsonString = mapper.writeValueAsString(myObject);

// Convert a JSON string back to a Java object
MyObject parsed = mapper.readValue(jsonString, MyObject.class);

The converter is configured to work with MarkLogic by adding the following property to your application.properties file:

spring.jackson.marklogic.enabled=true

Marklogic integration with a RESTful service

A Spring Boot application can be easily integrated with MarkLogic as a backend service. All you need to do is add the following dependency in your pom.xml file for MarkLogic integration:

<dependency>
    <groupId>com.marklogic</groupId>
    <artifactId>marklogic-client-api</artifactId>
    <version>RELEASE</version>
</dependency>

And configure the application properties file like this:

spring.data.marklogic.username=your_username  //required
spring.data.marklogic.password=your_password  //required
spring.data.marklogic.connection_string=localhost:8010  //optional, defaults to localhost:8040

Now you can use all the features of MarkLogic from your Springboot Application!

Conclusion

In this article, we have seen how to integrate a Spring Boot application with MarkLogic as a backend service, and how to perform operations like CRUD, search, and aggregation using the MarkLogic Java API. I hope this article has been helpful in understanding how to work with MarkLogic from a Spring Boot application.

Original article source at: https://blog.knoldus.com/

#springboot #service 

Rupert Beatty

1669894103

Learn Overview Of Microservices and Service-Oriented Architecture

What is Service-Oriented Architecture?

  • Service-Oriented Architecture (SOA) is a software architectural style that structures an application by breaking it down into multiple components called services.
  • Each service represents a functional business domain.
  • In SOA applications, each service is independent and provides its own business purposes but can communicate with others across various platforms and languages.
  • SOA components are loosely coupled and use a central Enterprise Service Bus (ESB) to communicate.

What is a microservice?

  • On the other hand, a microservice is an architectural style that focuses on maintaining several independent services that work collectively to create an application.
  • Each individual service within a microservices architecture uses internal APIs to communicate with the others.

Comparison

  • Although SOA and Microservices seem similar, they are still two different architecture types. Microservices are like a more fine-grained evolution of SOA.
  • One of their main differences is scope. Microservices are suited to smaller modern web services.
  • Each service within a microservices architecture generally has one specific purpose, whereas components in SOA have more complex business purposes and functionality and are often implemented as subsystems.
  • SOA is therefore suited to larger enterprise application environments.
  • Another significant difference is how both architectures communicate. Every service in SOA communicates through an ESB. If this ESB fails, it compromises functionality across all services.
  • On the other hand, services within a microservice are entirely independent. If one fails, the rest of the services remain functional. Overall, Microservices are more error tolerant.
  • Today SOA applications are uncommon as it's an older architecture that may not be suitable for modern cloud-based applications. 
  • However, microservices were developed for the cloud-native movement, and most developers prefer the versatility of service independence they offer.

Original article source at: https://www.c-sharpcorner.com/

#microservices #service #architecture 


Nextbox WiFi Extender Setup Via Re.nextbox.home

In comparison to other extenders, the Nextbox WiFi extender setup is quite simple. The steps below will help you install it, access its admin login page, and configure its settings. So, without further ado, let's begin the Nextbox extender setup process.

Nextbox range extender setup with the user interface

The steps for connecting your Nextbox to your router's WiFi through the user interface are as follows.

  1. After connecting your phone to your Nextbox range extender, navigate to the setup website.
  2. When you connect to its network via your WiFi settings, it will show that the Nextbox is connected but has no internet.
  3. To connect your range extender to your main device, you must first configure its settings in the browser.
  4. It will present you with two options: managing its network settings through properties, or configuring its network settings through the browser.
  5. Select the Nextbox configuration via the user interface.
  6. It walks you through its website, or you can go directly to it by typing http://re.nextbox.home.
  7. Enter your Nextbox login credentials in the admin box and log in to your account.
  8. Then, navigate to the WiFi configuration page and connect the extender to your host device's network.
  9. Find your host device's network name among the available networks and select it to connect your range extender to it.
  10. To complete the Nextbox extender WiFi setup, follow the on-screen instructions.
  11. These are the steps to connect your WiFi extender to your WiFi router's network.

Steps for the Nextbox Extender Login-

The following steps will guide you through the process of accessing the Nextbox extender admin page.

  1. Enter http://re.nextbox.home into the web browser's address bar. This takes you straight to the Nextbox range extender's home page.
  2. To access your wireless range extender's admin page, select the Nextbox login option.
  3. The Nextbox extender login box will appear on the screen of your device.
  4. Enter the Nextbox wifi extender login username and password for your device.
  5. Double-check the entered information.
  6. Wait until the Nextbox extender login confirmation message disappears from the screen.
  7. Once completed, you will be able to manage and configure your Nextbox range extender from another location.
  8. You can easily set your Nextbox extender password and username via the Nextbox wifi extender setup page. If you want to reset the Nextbox extender password, go to the Nextbox setup page and change your password through the security encryption settings. This lets you easily configure your wireless device's password and other settings.

Steps for the Nextbox WiFi Extender Configuration-

You can easily manage and control all of your range extender's wireless, basic, and advanced settings via the Nextbox wifi extender setup page. The steps below will assist you in configuring your Nextbox wifi extender settings.

  1. Enter re.nextbox.home into the browser's address bar.
  2. After that, wait a minute before tapping on the login option.
  3. The Nextbox login box asks for your login information.
  4. Enter your admin username and password in this Nextbox login box. If you have forgotten your Nextbox wifi extender password, you can reset it by selecting the Nextbox extender password reset option and following the on-screen instructions to make a new password.
  5. You can now configure the Nextbox wifi extender settings.
  6. Open the settings provided by the menu option.
  7. Configure your device's settings by following the instructions on your computer screen.
  8. Finally, save all of your settings to properly configure your extender.

Finally, your Nextbox extender will be set up. If you face any issues, you can contact our expert team and they will guide you. You can also visit our website www.wirelessextendersetup.org

Monty Boehm

Best 50 ServiceNow Interview Questions and Answers

In this ServiceNow interview questions blog, I have collected the questions most frequently asked by interviewers. If you wish to brush up on your ServiceNow basics, I would recommend you take a look at this video first. It will introduce you to ServiceNow basics and stand you in good stead to get started with this ‘ServiceNow Interview Questions’ blog.

In case you have attended a ServiceNow interview in the recent past, do paste those ServiceNow interview questions in the comments section and we’ll answer them ASAP. So let us not waste any time and quickly start with this compilation of ServiceNow Interview Questions.

I have divided these questions into three sections:

  1. Basic ServiceNow Interview Questions
  2. Intermediate ServiceNow Interview Questions
  3. Advanced ServiceNow Interview Questions

 So let us start then,

Basic ServiceNow Interview Questions

1) What is ServiceNow?

ServiceNow is a cloud based IT Service Management (ITSM) tool. It provides a single system of record for:

  • IT services
  • Operations
  • Business management

All aspects of IT services live in the ServiceNow ecosystem. It gives us a complete view of services and resources. This allows for broad control of how to best allocate resources and design the process flow of those services. Refer to this link to know more: What Is ServiceNow?

2) What is an ‘Application’ in ServiceNow?

Applications in ServiceNow represent packaged solutions for delivering services and managing business processes. In simple words, an application is a group of modules which provides information related to those modules. For example, the Incident application will provide information related to the Incident Management process.

3) What is full form of CMDB and what is it?

CMDB stands for Configuration Management Database. CMDB is a repository. It acts as a data warehouse for information technology installations. It holds data related to a collection of IT assets, and descriptive relationships between such assets.

4) What is LDAP Integration and its use?

LDAP is the Lightweight Directory Access Protocol. You can use it for user data population and user authentication. ServiceNow integrates with an LDAP directory to streamline the user login process and to automate the creation of users and the assignment of roles.

5) What do you mean by data lookup and record matching?

Data lookup and record matching feature helps to set a field value based on some condition instead of writing scripts.

For example:

On Incident forms, the priority lookup rules sample the incident Impact and Urgency values and automatically set the incident Priority. Data lookup rules allow you to specify the conditions and fields where you want the data lookup to occur.
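The lookup behaves like a small decision matrix. Here is a runnable sketch of the idea in plain JavaScript; the matrix values below are illustrative samples, not ServiceNow's shipped defaults.

```javascript
// Illustrative data-lookup sketch: derive Priority from Impact and Urgency
// via a matrix, the way a priority lookup rule does, instead of scripting
// per-field conditions. (Sample values only, not ServiceNow defaults.)
function lookupPriority(impact, urgency) {
  var matrix = {
    '1,1': 1, '1,2': 2, '1,3': 3,
    '2,1': 2, '2,2': 3, '2,3': 4,
    '3,1': 3, '3,2': 4, '3,3': 5
  };
  return matrix[impact + ',' + urgency] || 4; // fall back to a low priority
}

console.log(lookupPriority(1, 1)); // 1 (highest)
console.log(lookupPriority(3, 3)); // 5 (lowest in this sample matrix)
```

The point of the feature is that this mapping lives in data (lookup records), so admins can change it without touching a script.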

6) What is CMDB Baseline?

CMDB baselines help you understand and control the changes made to a configuration item (CI). These baselines act as a snapshot of a CI.

7) How to enable or disable an application in ServiceNow?

Following steps will help you do the same:

  • Navigate to “Application Menus” module
  • Open the respective application.
  • Set value for active as ‘true’ to enable it or set it to ‘false’ to disable it.

8) What is a view?

View defines the arrangement of fields on a form or a list. For one single form we can define multiple views according to the user preferences or requirement.

9) What is ACL?

An ACL is an access control list that defines what data users can access and how they can access it in ServiceNow.

10) What do you mean by impersonating a user? How it is useful?

Impersonating a user means giving the administrator access to what the user would have access to, including the same menus and modules. ServiceNow records the administrator's activities while impersonating another user. This feature helps in testing: you can impersonate a user and test as them instead of logging out of your session and logging in again with that user's credentials.

Intermediate ServiceNow Interview Questions

11) What are dictionary overrides?

Dictionary overrides provide the ability to define a field on an extended table differently from the field on the parent table. For example, for a field on the Task [task] table, a dictionary override can change the default value on the Incident [incident] table without affecting the default value on Task [task] or Change [change].

12) What do you mean by coalesce?

Coalesce is a property of a field that we use in transform map field mapping. Coalescing on a field (or set of fields) lets you use the field as a unique key. If a match is found using the coalesce field, the existing record will be updated with the information being imported. If a match is not found, then a new record will be inserted into the database.
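The update-or-insert behavior can be sketched in plain JavaScript. This is an illustration of the coalesce logic only, not the ServiceNow transform engine; the hypothetical 'number' field stands in for whatever coalesce field you choose.

```javascript
// Sketch of coalescing during a transform: match incoming rows to existing
// records on the coalesce field; update on a match, insert otherwise.
function transform(target, incoming, coalesceField) {
  incoming.forEach(function (row) {
    var existing = target.find(function (r) {
      return r[coalesceField] === row[coalesceField];
    });
    if (existing) {
      Object.assign(existing, row);   // match found: update in place
    } else {
      target.push(row);               // no match: insert a new record
    }
  });
  return target;
}

var table = [{ number: 'INC001', state: 'Open' }];
transform(table, [
  { number: 'INC001', state: 'Closed' },  // coalesces onto INC001
  { number: 'INC002', state: 'Open' }     // inserted as a new record
], 'number');
console.log(table.length);   // 2
console.log(table[0].state); // 'Closed'
```

Coalescing on a set of fields works the same way, with the match computed over all coalesce fields together.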

13) What are UI policies?

UI policies dynamically change information on a form and control custom process flows for tasks. UI policies are an alternative to client scripts. You can use UI policies to set fields as mandatory, read-only, or visible on a form, and to dynamically change a field on a form.

14) What is a data policy?

With data policies, you can enforce data consistency by setting mandatory and read-only states for fields. Data policies are similar to UI policies, but UI policies only apply to data entered on a form through the standard browser. Data policies can apply rules to all data entered into the system, including data brought in through email, import sets or web services and data entered through the mobile UI.

15) What is a client script?

Client scripts sit on the client side (the browser) and run only there. Following are the types of client script:

  • onLoad()
  • onSubmit()
  • onChange()
  • onCellEdit()

16) How can you cancel a form submission through client script?

In order to cancel a form submission, the onSubmit function should return false. Refer to the syntax below:

function onSubmit() { return false; }
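A slightly fuller sketch of a cancelling onSubmit is shown below. The g_form object is mocked here so the snippet runs standalone; on a real instance, ServiceNow supplies the g_form client API, and the field name used is just an example.

```javascript
// Cancel submission when a required-by-policy field is empty.
// g_form is a minimal mock for illustration; ServiceNow provides the
// real object (with getValue, addErrorMessage, etc.) in client scripts.
var g_form = {
  values: { assigned_to: '' },
  getValue: function (f) { return this.values[f]; },
  addErrorMessage: function (msg) { console.log(msg); }
};

function onSubmit() {
  if (g_form.getValue('assigned_to') === '') {
    g_form.addErrorMessage('Assign the record before submitting.');
    return false; // returning false cancels the submission
  }
  return true;
}

console.log(onSubmit()); // false while assigned_to is empty
```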

17) What is a business rule?

A business rule is a server-side script. It executes each time a record is inserted, updated, deleted, displayed or queried. The key thing to note while creating a business rule is when and on what action it has to execute. A business rule can run in the following states:

  • Display
  • Before
  • After
  • Async

18) Can you call a business rule through a client script?

Yes, it is possible to call server-side logic from a client script. You can use GlideAjax for this.
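The typical round trip looks like the sketch below. GlideAjax and the script include are stubbed here so the example runs standalone; on an instance, GlideAjax is the real platform class and 'MyUtils'/'echoValue' would be a client-callable script include and method of your own (hypothetical names).

```javascript
// Shape of a GlideAjax call from a client script to a script include.
// Stub classes stand in for the platform so this snippet is self-contained.
function GlideAjax(name) { this.name = name; this.params = {}; }
GlideAjax.prototype.addParam = function (k, v) { this.params[k] = v; };
GlideAjax.prototype.getXMLAnswer = function (cb) {
  // Stub: pretend the server-side script include echoed the input back.
  cb('echo:' + this.params.sysparm_value);
};

var ga = new GlideAjax('MyUtils');          // client-callable script include
ga.addParam('sysparm_name', 'echoValue');   // method to invoke server side
ga.addParam('sysparm_value', 'hello');
ga.getXMLAnswer(function (answer) {
  console.log(answer); // 'echo:hello'
});
```

The real call is asynchronous: the callback fires when the server responds, so the client script should not rely on the answer immediately after the call.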

19) What is the Parent table for incident, change and problem? What does it do?

The Task table is the parent table of Incident, Problem and Change. It ensures that any fields or configurations defined on the parent table automatically apply to the child tables.

20) What is a record producer?

A catalog item that allows users to create task-based records from the Service Catalog is called a record producer. For example, you can create a change record or a problem record using a record producer. Record producers provide an alternative way to create records through the Service Catalog.

21) What is a glide record?

Glide record is a java class. It is used for performing database operations instead of writing SQL queries.

22) What is import set?

An import set is a tool that imports data from various data sources and then maps that data into ServiceNow tables using a transform map. It acts as a staging table for imported records.

23) What is transform Map?

A transform map transforms the records imported into a ServiceNow import set table and moves them to the target table. It also determines the relationships between fields in an import set table and fields in a target table.

24) What do you mean by foreign record insert?

A foreign record insert occurs when an import makes a change to a table that is not the target table for that import. This happens when updating a reference field on a table.

25) Which searching technique is used to search a text or record in ServiceNow?

Zing is the text indexing and search engine that performs all text searches in ServiceNow.

Advanced ServiceNow Interview Questions

26) What does the Client Transaction Timings plugin do?

It is used to enhance the system logs. It provides more information on the duration of transactions between the client and the server.

27) What is inactivity monitor?

It triggers an event for a task record if the task is inactive for a certain period of time. If the task remains inactive, the monitor repeats at regular intervals.

28) What is domain separation?

Domain separation is a way to separate data into logically-defined domains. For example a client ABC has two businesses and they are using ServiceNow single instance. They do not want users from one business to see data of other business. Here we can configure domain separation to isolate the records from both business.

29) How can you remove ‘Remember me’ check box from login page?

You can set the property – “glide.ui.forgetme” to true to remove the ‘Remember me’ check box from login page.

30) What is HTML Sanitizer?

The HTML Sanitizer automatically cleans up HTML markup in HTML fields, removing unwanted code and protecting against security concerns such as cross-site scripting attacks. The HTML Sanitizer is active for all instances starting with the Eureka release.

31) What is the significance of cascade variable checkbox in order guide?

The check box selects whether the variables should cascade, which passes their values to the ordered items. If this check box is cleared, variable information entered in the order guide is not passed on to ordered items.

32) What are Gauges?

A gauge is visible on a ServiceNow homepage and can contain up-to-the-minute information about the current status of records that exist in ServiceNow tables. A gauge can be based on a report, and can be put on a homepage or a content page.

33) What do you mean by Metrics in ServiceNow?

Metrics record and measure the workflow of individual records. With metrics, customers can instrument their processes with tangible figures to measure, for example, how long it takes before a ticket is reassigned.

34) What types of searches are available in ServiceNow?

Following searches will help you find information in ServiceNow:

Lists: Finds records in a list.

Global text search: Finds records in multiple tables from a single search field.

Knowledge base: Finds knowledge articles.

Navigation filter: Filters the items in the application navigator.

Search screens: Use a form-like interface to search for records in a table. Administrators can create these custom modules.

35) What is a BSM Map?

A BSM map is a Business Service Management map. It graphically displays the configuration items (CIs) that support a business service and indicates the status of those configuration items.

36) Which table stores update sets and customization?

Each update set is stored in the Update Set [sys_update_set] table. The customizations associated with the update set are stored in the [sys_update_xml] table.

37) What happens when you mark a default update set as complete?

If the Default update set is marked Complete, the system creates another update set named Default1 and uses it as the default update set.

38) Can you add Homepages and Content pages to ‘update sets’ in ServiceNow?

Homepages and content pages don’t get added to ‘update sets’ by default. You need to manually add pages to the current ‘update sets’ by unloading them.

39) What is Reference qualifier?

Reference qualifiers restrict the data that can be selected for a reference field.

40) What is Performance Analytics in ServiceNow?

Performance Analytics is an additional application in ServiceNow that allows customers to take a snapshot of data at regular intervals and create time series for any Key Performance Indicator (KPI) in the organization.

41) What is the latest servicenow user interface and when was it released?

The latest user interface is UI16. It was introduced in the Helsinki release.

42) What is a sys_id?

It is a unique 32-character GUID that identifies each record created in each table in ServiceNow.

43) What is scorecard?

A scorecard measures the performance of an employee or a business process. It is a graphical representation of progress over time. A scorecard belongs to an indicator. The first step is to define the indicators that you want to measure. You can enhance scorecards by adding targets, breakdowns (scores per group), aggregates, and time series.

44) Can you update a record without updating its system fields(like sys_updated_by, sys_updated_on)?

Yes, you can do it by using the autoSysFields() function in your server-side script. Call autoSysFields(false) on the record before updating it.

Consider following Example:

var gr = new GlideRecord('incident');
gr.query();
if (gr.next()) {
  gr.autoSysFields(false);
  gr.short_description = 'Test from Examsmyntra';
  gr.update();
}

45) What is Reference qualifier?

A reference qualifier is used to restrict the data that is selectable for a reference field.

46) What is Performance Analytics in ServiceNow?

Performance Analytics is an additional application in ServiceNow that allows customers to take a snapshot of data at regular intervals and create time series for any key performance indicator (KPI) in the organisation.

47) How to create a new role?

Navigate to User Administration > Role and click New.

48) Can I have more than one function listening to the same thing?

You can, but there is no guarantee of sequencing. You cannot predict what order your event handlers will run in.

49) Which method do you use to get all the active/inactive records from a table?

You can use the addActiveQuery() method to get all the active records and addInactiveQuery() to get all the inactive records.

50) What is the difference between next() and _next() method?

The next() method moves to the next record in a GlideRecord. _next() provides the same functionality as next(); it is intended for cases where the queried table has a column named next, which would shadow the method.

So this brings us to the end of the blog. I hope you enjoyed these ServiceNow Interview Questions. The topics that you learnt in this ServiceNow Interview questions blog are the most sought-after skill sets that recruiters look for in a ServiceNow Professional.

You can also check out our ServiceNow YouTube playlist: 

www.youtube.com/playlist?list=PL9ooVrP1hQOGOrWF7soRFiTVepwQI6Dfw

If you wish to build a career in ServiceNow, check out our ServiceNow Certification Training.

Got a question for us? Please mention it in the comments section of this ServiceNow Interview Questions and we will get back to you.

Original article source at: https://www.edureka.co/


Monty Boehm

One Stop Solution for Customer Needs with Salesforce Service Cloud

Salesforce Service Cloud – One Stop Solution For Customer Needs

Salesforce, being a CRM, is used to connect people and information. In this blog, I am going to explain one of its core services – Salesforce Service Cloud – and how it revolutionized customer support by making interactions easier between an organization and its customers. In my previous blog, you learned how to create a custom Salesforce application. Moving forward, I will help you understand how Salesforce Service Cloud can add value to your business. First, I will explain the need for Salesforce Service Cloud, what it is and what services it provides to engage your customers. In the end, I will explain one use case on how Coca-Cola has been extremely successful in enhancing their customers' experience using Service Cloud.

So, let’s get started with why your organization should choose Salesforce Service Cloud.

Why Salesforce Service Cloud?

If your company deeply cares about customer service, then Salesforce Service Cloud is what you should go for. Irrespective of whether you are in the B2C or B2B domain, you will have several customers raising tickets and queries on a regular basis. These tickets are received by your service agents, and Salesforce Service Cloud helps you track and solve them efficiently.
This is not the only way you can transform the customer experience. Let's dig deeper and see how Salesforce Service Cloud is creating an impression.

  • Maximize Agent Productivity – Using Service Cloud, agents can work from anywhere. With the easy management options available (such as the web-based application, mobile devices and the knowledge base), agent productivity is enhanced, reducing agents' overhead costs. Get Salesforce CPQ certification to showcase your mastery of advanced billing processes, invoice generation, and CPQ.
  • Transforms Customer Experience – Customer relations are drastically enhanced by connecting one to one with every customer via live agents. You can increase customer loyalty, satisfaction and retention, leading to repeat business from existing customers, an increase in the lifetime value (LTV) of your customers, and positive word of mouth for your brand.
  • Security – Your data is completely safe and secure with the Service Cloud platform. It follows a multilayered approach to protect the information which is vital to your business.
  • Leverage Social Media Platforms – You can also interact with your customers on social media such as Facebook or Twitter in real-time.
  • Case Tracking – Tracking helps you in faster case resolution. This leads to better management of a person’s day to day activities and manual errors are drastically reduced.

To sum up, Salesforce Service Cloud definitely helps in improving your operational processes, leading to a better experience for your customers. Based on a study done across companies using Salesforce Service Cloud, growth in performance metrics has been drastic. If you see the below infographic, agent productivity increased by 40% and case resolution increased by 41%, which eventually led to a 31% increase in customer retention.

Growth in performance using Salesforce Service Cloud- Edureka

This growth illustrates why people prefer Salesforce Service Cloud and how it plays an important role in improving your customer support team.

Now let’s understand what Salesforce Service Cloud is and what services it has to offer.

What is Salesforce Service Cloud?

Salesforce offers Service Cloud as Software as a Service. Service Cloud is built on the Salesforce Customer Success Platform, giving you a 360-degree view of your customers and enabling you to deliver smarter, faster and more personalized service. 

With Salesforce Service Cloud, you can create a connected knowledge base, enable live agent chat and manage case interactions – all on one platform. You can have personalized customer interactions or even up-sell your products and services based on a customer's past activity data.

Now, you may be wondering how to access Service Cloud. Let me walk you through the steps to access a Service Cloud Console.
Step 1: Login to login.salesforce.com
Step 2: Create a SF Console App
Step 3: Choose its display
Step 4: Customize push notifications
Step 5: Grant users Console Access – Sc User 

What services does it offer?

As I mentioned earlier, there are case tracking and knowledge base features. There are several other services that Salesforce Service Cloud offers which will enable you to provide a differentiated customer experience. You can refer to the below image to see what Salesforce Service Cloud has to offer you.

Salesforce Service Cloud - Edureka

You can take your console to the next level by learning the following features in Salesforce:

Case Management – Any customer issues raised are usually captured and tracked as cases. Cases can be further classified into the following:

  • Email-To-Case: Email-To-Case helps you create a case automatically when an email is sent to one of your company’s email addresses, such as support@edureka.co. These generated cases will be displayed in an ‘Emails related list’. This Emails related list includes all emails sent by your customer on a particular case, as well as the email threads. 
  • Web-to-Case: Web-to-case helps you create a new case automatically in Salesforce whenever a support request comes directly from your company’s website. To enable it, you can go to Setup → Build → Self-service → Web-to-case settings.
    Check the “Enable Web-to-Case” checkbox. You can select an Auto-response template and select the default case origin as ‘Web’.
  • Escalation and Auto-Response: Case escalation rules are used to reassign and optionally notify individuals when a case is not closed within a specified time period. Also, you can configure auto-response rules to respond to cases either from the web or email. 

At the core of the Service Cloud lies the ‘Case’ module. Let us understand the Case module with an example. Assume that in a large organization like Coca-Cola, a few of the employees' systems crash – let's call it a ‘breakdown of laptops’. Now you need to fix this as soon as possible to ensure business continuity. Service Cloud helps you track the progress and provides you with all the necessary information about every Coca-Cola agent. You can solve the problem by creating a case, assigning it a ‘high’ priority, categorizing the origin of the case (such as phone, email or web) and then clicking ‘Save’. Refer to the below screenshot to get a better understanding.

New case in Salesforce Service Cloud - Edureka

Solutions – You can categorize your solutions into query types – making your solution search easier and closing the case faster. With this, the agent does not need to create a new solution to existing queries every time. This helps in enhancing your agent productivity. Solutions do not need any additional license.

For the same Coca-Cola scenario, if you want to solve a case as an agent, then you will definitely search for a solution. First, check whether a solution is already present. If it is not, your admin can create a solution stating that the case has been resolved and can hence be closed. You can refer to the screenshot attached below.

laptop solution in Salesforce Service Cloud- Edureka

As you can see in the above screenshot, I have created a solution – ‘Laptop Solution’ – that displays the title, status and details of the solution created.

Knowledge – Salesforce Knowledge is a knowledge base where users can edit, create and manage content. Knowledge articles are documents of information. Customers can go to the company's website and search for solutions. Unlike solutions, knowledge articles can be associated with a case before it is closed. Salesforce Knowledge needs a separate license to be purchased.

 

Communities – Communities are a way to collaborate with business partners and customers, distributors, resellers and suppliers who are not part of your organization. Typically, these are people who are not your regular SFDC users, but you want to provide them a channel to connect with your organization and give them access to some data as well. To learn more, get Salesforce developer certification and become certified.

In Salesforce, if you go to the ‘Call Center’ dropdown, you will find the Success Community. A Salesforce user can use their user id and password to log in there. This community is accessible to all developers, functional consultants and admins. In this community, users can search for almost anything: documentation, articles, knowledge, feeds, questions and much more. For example, if you want to know about record types, you can search for them here. Have a look at the screenshot attached below.

Salesforce Service Community - Edureka


As you can see in the above search, you get a lot of customers' problems, documentation, known issues, ideas, etc. You can now start exploring them, understand the major issues faced by customers and fix them accordingly.

Console – The agent console provides a unified agent experience. It reduces response time by placing all the information together. In a console, you can find everything from customer profiles, to case histories, to dashboards – all in one place.

I showed you the basics of setting up a Salesforce console at the beginning of this blog. Admins can grant console access to users: Service Cloud provides console access where you can assign users to it. Refer to the below screenshot; you can assign user profiles for the console. You can also assign the Service Cloud user license to agents with those profiles so that they can start using your console.

Console Access Salesforce Service Cloud - Edureka

Social Media – Service Cloud lets you leverage social media platforms such as Facebook and Twitter to engage visitors. With Salesforce Social Studio, customer requests are escalated directly to your social service team. Social media plays an important role in bridging the gap between the virtual and real worlds, engaging customers in real time.

 

Live Agent – Live agents deal with 1:1 customer interaction. Agents can provide answers faster with customer chat and keyboard shortcuts. They stay fully connected to the customers, as their team members are alerted immediately to get the issue resolved. Real-time assistance also makes agents smarter and more productive in the process, which in turn improves customer satisfaction.

Salesforce Service Cloud is all about providing services to your customers and building a relationship with them. You can use other features such as the call center, email and chat, phone, Google search, contracts and entitlements, Chatter and call scripting.


 

How Much Does Salesforce Service Cloud Cost?

Salesforce Service Cloud offers three pricing packages – Professional, Enterprise and Unlimited. You can refer to the table below and select your plan accordingly.

Professional – $75 USD/user/month:

  • Case management
  • Service contracts and entitlements
  • Single Service Console app
  • Web and email response
  • Social customer service
  • Lead-contact account management
  • Order management
  • Opportunity tracking
  • Chatter collaboration
  • Customizable reports and dashboards
  • CTI integration
  • Mobile access and administration
  • Limited process automation
  • Limited number of record types, profiles, and role permission sets
  • Unlimited apps and tabs

Enterprise – $150 USD/user/month:

  • Advanced case management
  • Multiple Service Console apps
  • Workflow and approvals
  • Integration via web service API
  • Enterprise analytics
  • Call scripting
  • Offline access
  • Salesforce Identity
  • Salesforce Private AppExchange
  • Custom app development
  • Multiple sandboxes
  • Knowledge base
  • Live Agent web chat
  • Customer Community
  • Live video chat (SOS)

Unlimited – $300 USD/user/month:

  • Live Agent web chat
  • Knowledge base
  • Additional data storage
  • Expanded sandbox environments
  • 24/7 toll-free support
  • Access to 100+ admin services
  • Unlimited online training
  • Customer Community
  • Live video chat (SOS)


“Our agents love Salesforce CRM Service. They tell us how easy it is to use and how phenomenal it is when it comes to driving a better customer experience” – Charter

This is how Salesforce Service Cloud has revolutionized the way customers interact with organizations using the services over the internet. Now, let’s have a look at how Coca-Cola implemented Salesforce Service Cloud to solve its business challenges.

Salesforce Service Cloud Use Case: Coca-Cola 


Many global organizations leverage Salesforce Service Cloud for a better customer relationship management solution. Here, I will talk about how Coca-Cola Germany used Service Cloud to analyze consumer behavior and build data-driven business strategies. This use case will give you an idea of how Service Cloud can be used extensively across any domain. Salesforce Service Cloud is an integrated platform to connect employees, customers and suppliers around the world.

Earlier, Coca-Cola faced several issues while managing its customers. Some of them are listed below:

  • The company’s in-house repair facility formerly had technicians tracking their jobs on paper, which took a lot of time and effort.
  • The call center and repair department suffered from frequent downtime.
  • The mobile experience lacked speed, functionality, scalability, and connectivity.
  • Mobile app sync-up was slow.
  • The overall user experience was unsatisfactory.

“In the past, big companies outcompeted smaller companies. But that’s history. Today, the fast companies outcompete the slow companies,” explained Ulrik Nehammer – CEO of Coca-Cola.
 

Now that they are connected to Salesforce Service Cloud, technicians are alerted in real time to customer issues, which reduces response time dramatically. Call center support agents also get instant access to customer history. With all of this, the productivity of Coca-Cola Germany’s technical services has shot up by 30%.

A Big Fix for Coca-Cola

With Service Cloud, they wanted to understand their customers’ needs and cater to them more effectively. Here are some key points that contributed to their success.

  • Customer satisfaction – One-to-one support for customers on any channel or product, with in-app services like video chat or agents instantly guiding them to solutions.
  • Mobile app – With in-app mobile support, customers can interact via live agent video chat, screen sharing, and on-screen guided assistance. These services transformed customer support and made customers happier.
  • Analytics – Using Salesforce Service Cloud, all information is gathered and evaluated through custom dashboards. Coca-Cola analyzed past transactions and took immediate action at the locations it serves. This helped it make better, more profitable decisions in less time.
  • Agent productivity is supercharged – With features such as email-to-case, skills-based routing, and milestone tracking, Service Cloud gave agents the tools to respond quickly and efficiently to customers on any channel. This is how Coca-Cola enhanced overall productivity.

“This has been a massive step forward for us,” said Andrea Malende, business process and mobile solutions expert at Coca-Cola. “I’m amazed how quick and smooth the implementation was.”

This is how Coca-Cola implemented Salesforce Service Cloud and made its customers happy. There are several other Salesforce Service Cloud use cases that show how various companies have benefited and grown their business.

Integrations available for Salesforce Service Cloud

Salesforce Service Cloud supports integration with various applications and business systems, as shown in the image below:

Integrations in Salesforce Service Cloud

Since everyone and everything is connected on one platform, you should definitely consider Salesforce Service Cloud. Hope you enjoyed reading my blog. You can also watch the video below for a detailed explanation and demo of Salesforce Service Cloud.

Original article source at: https://www.edureka.co/

#salesforce #service #cloud 

Hermann Frami

Reflare: Lightweight & Scalable Reverse Proxy & Load Balancing Library

🚀 Reflare is a lightweight and scalable reverse proxy and load balancing library built for Cloudflare Workers. It sits in front of web servers (e.g. a web application, storage platform, or RESTful API), forwards HTTP requests or WebSocket traffic from clients to upstream servers, and transforms responses with several optimizations to improve page load time.

  • ⚡ Serverless: Deploy instantly to the auto-scaling serverless platform built by Cloudflare. There's no need to manage virtual machines or containers.
  • ✈️ Load Balancing: Distribute incoming traffic among different upstream services.
  • ⚙️ Hackable: Deliver unique content based on visitor attributes, conduct A/B testing, or build custom middleware to hook into the lifecycle. (Experimental)
  • 🛳️ Dynamic (Experimental): Store and update route definitions with Workers KV to avoid redundant redeployment.

📦 Installation

Start with reflare-template

Install wrangler CLI and authorize wrangler with a Cloudflare account.

npm install -g wrangler

wrangler login

Generate a new project from reflare-template and install the dependencies.

npm init cloudflare reflare-app https://github.com/xiaoyang-sde/reflare-template
cd reflare-app
npm install

Edit or add route definitions in src/index.ts. Please read the examples and route definition section below for more details.

  • Run npm run dev to preview Reflare with local development server provided by Miniflare.
  • Run npm run deploy to publish Reflare on Cloudflare Workers.

Integrate with existing project

Install the reflare package.

npm install reflare

Import useReflare from reflare. useReflare accepts an object of options.

  • provider: The location of the list of route definitions. (optional, defaults to static)
    • static: Reflare loads the route definitions from routeList.
    • kv: Reflare loads the route definitions from Workers KV. (Experimental)
  • routeList: The initial list of route definitions. (optional, defaults to [], ignored if provider is not static)
  • namespace: The Workers KV namespace that stores the list of route definitions. (required if provider is kv)

useReflare returns an object with the handle method and push method.

  • The handle method takes the inbound Request to the Worker and returns the Response fetched from the upstream service.
  • The push method takes a route and appends it to routeList.
import useReflare from 'reflare';

const handleRequest = async (
  request: Request,
): Promise<Response> => {
  const reflare = await useReflare();

  reflare.push({
    path: '/*',
    upstream: {
      domain: 'httpbin.org',
      protocol: 'https',
    },
  });

  return reflare.handle(request);
};

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

Edit the route definition to change the behavior of Reflare. For example, the route definition below lets Reflare add the Access-Control-Allow-Origin: * header to each response from the upstream service.

{
  path: '/*',
  upstream: {
    domain: 'httpbin.org',
    protocol: 'https',
  },
  cors: {
    origin: '*',
  },
}

📔 Example

MDN Web Docs Mirror

Set up a reverse proxy for MDN Web Docs:

{
  path: '/*',
  upstream: {
    domain: 'developer.mozilla.org',
    protocol: 'https',
  },
}

WebSocket Proxy

Reflare can proxy WebSocket traffic to upstream services. Set up a reverse proxy for wss://echo.websocket.org:

{
  path: '/*',
  upstream: {
    domain: 'echo.websocket.org',
    protocol: 'https',
  },
}

S3 Bucket with custom response headers

Reflare can set custom headers on the request and response. Set up a reverse proxy for https://example.s3.amazonaws.com:

{
  path: '/*',
  upstream: {
    domain: 'example.s3.amazonaws.com',
    protocol: 'https',
  },

  headers: {
    response: {
      'x-response-header': 'Hello from Reflare',
    },
  },

  cors: {
    origin: ['https://www.example.com'],
    methods: ['GET', 'POST'],
    credentials: true,
  },
}

⚙️ Route Definition

Route Matching

Reflare implements express-like route matching. Reflare matches the path and HTTP method of each incoming request with the list of route definitions and forwards the request to the first matched route.

  • path (string | string[]): The path or the list of paths that matches the route
  • methods (string[]): The list of HTTP methods that match the route
// Matches all requests
reflare.push({
  path: '/*',
  /* ... */
});

// Matches GET and POST requests with path `/api`
reflare.push({
  path: '/api',
  methods: ['GET', 'POST'],
});

// Matches GET requests with path ending with `.json` or `.yaml` in `/data`
reflare.push({
  path: ['/data/*.json', '/data/*.yaml'],
  methods: ['GET'],
});
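As an illustration of the matching rules above, wildcard patterns like `/*` and `/data/*.json` can be modeled by converting each pattern into a regular expression. This is a hedged sketch of the semantics only, not Reflare's actual matcher; `patternToRegExp` and `matchRoute` are hypothetical helpers.

```typescript
// Illustrative sketch of express-like wildcard matching; not Reflare's
// actual implementation. `*` matches any sequence of characters.
const patternToRegExp = (pattern: string): RegExp => {
  // Escape RegExp metacharacters (except `*`), then turn `*` into `.*`.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp(`^${escaped.replace(/\*/g, '.*')}$`);
};

const matchRoute = (
  path: string,
  method: string,
  route: { path: string | string[]; methods?: string[] },
): boolean => {
  const patterns = Array.isArray(route.path) ? route.path : [route.path];
  // A route with no `methods` matches every HTTP method.
  const methodOk = route.methods === undefined || route.methods.includes(method);
  return methodOk && patterns.some((p) => patternToRegExp(p).test(path));
};
```

Under this model, a request is forwarded to the first route in the list for which `matchRoute` returns true.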

Upstream

  • domain (string): The domain name of the upstream server
  • protocol (string): The protocol scheme of the upstream server (optional, defaults to 'https')
  • port (number): The port of the upstream server (optional, defaults to 80 or 443 based on protocol)
  • timeout (number): The maximum wait time on a request to the upstream server (optional, defaults to 10000)
  • weight (number): The weight of the server that will be accounted for as part of the load balancing decision (optional, defaults to 1)
  • onRequest(request: Request, url: string): The callback function that will be called before sending the request to upstream
  • onResponse(response: Response, url: string): The callback function that will be called after receiving the response from upstream
reflare.push({
  path: '/*',
  upstream: {
    domain: 'httpbin.org',
    protocol: 'https',
    port: 443,
    timeout: 10000,
    weight: 1,
  },
  /* ... */
});

The onRequest and onResponse callback functions can change the content of the request or response. For example, the following route replaces the URL of the request and sets the cache-control header of the response based on its URL.

reflare.push({
  path: '/*',
  upstream: {
    domain: 'httpbin.org',
    protocol: 'https',
    port: 443,
    timeout: 10000,
    weight: 1,

    onRequest: (request: Request, url: string): Request => {
      // Modifies the URL of the request
      return new Request(url.replace('/original/request/path', ''), request);
    },

    onResponse: (response: Response, url: string): Response => {
      // If the URL ends with `.html` or `/`, sets the `cache-control` header
      if (url.endsWith('.html') || url.endsWith('/')) {
        response.headers.set('cache-control', 'public, max-age=240, s-maxage=60');
      }
      return response;
    }
  },
  /* ... */
});

Load Balancing

To load balance HTTP traffic to a group of servers, pass an array of server configurations to upstream. The load balancer will forward the request to an upstream server based on the loadBalancing.policy option.

  • random: The load balancer will select a random upstream server from the server group. The optional weight parameter in the server configuration could influence the load balancing algorithm.
  • ip-hash: The client's IP address is used as a hashing key to select the upstream server from the server group. It ensures that the requests from the same client will always be directed to the same server.
reflare.push({
  path: '/*',
  loadBalancing: {
    policy: 'random',
  },
  upstream: [
    {
      domain: 's1.example.com',
      protocol: 'https',
      weight: 20,
    },
    {
      domain: 's2.example.com',
      protocol: 'https',
      weight: 30,
    },
    {
      domain: 's3.example.com',
      protocol: 'https',
      weight: 50,
    },
  ],
  /* ... */
});
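For illustration, the two policies can be sketched as small selection functions. This is not Reflare's internal code; `pickRandom` and `pickIpHash` are hypothetical helpers, and the random draw is injected as a parameter so the behavior is deterministic and testable.

```typescript
// Illustrative sketches of the load balancing policies; not Reflare's
// actual implementation.
interface Upstream { domain: string; weight?: number; }

// `random` policy: pick a server with probability proportional to its
// weight. `rand` is a number in [0, 1), e.g. Math.random().
const pickRandom = (servers: Upstream[], rand: number): Upstream => {
  const total = servers.reduce((sum, s) => sum + (s.weight ?? 1), 0);
  let threshold = rand * total;
  for (const server of servers) {
    threshold -= server.weight ?? 1;
    if (threshold < 0) return server;
  }
  return servers[servers.length - 1];
};

// `ip-hash` policy: hash the client IP so the same client is always
// directed to the same server (FNV-1a is used here for illustration).
const pickIpHash = (servers: Upstream[], ip: string): Upstream => {
  let hash = 0x811c9dc5;
  for (const char of ip) {
    hash = Math.imul(hash ^ char.charCodeAt(0), 0x01000193) >>> 0;
  }
  return servers[hash % servers.length];
};
```

With the weights 20/30/50 from the example above, `pickRandom` would send roughly 20%, 30%, and 50% of requests to s1, s2, and s3 respectively.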

Firewall

Each incoming request is inspected against the firewall rules defined in the firewall property of the options object. The request will be blocked if it matches at least one firewall rule.

  • field: The property of the incoming request to be inspected
    • asn: The ASN number of the incoming request (number)
    • ip: The IP address of the incoming request, e.g. 1.1.1.1 (string)
    • hostname: The content of the host header, e.g. github.com (string | undefined)
    • user-agent: The content of the user-agent header, e.g. Mozilla/5.0 (string | undefined)
    • country: The two-letter country code in the request, e.g. US (string | undefined)
    • continent: The continent of the incoming request, e.g. NA (string | undefined)
  • value (string | string[] | number | number[] | RegExp): The value of the firewall rule
  • operator: The operator to be used to determine if the request is blocked
    • equal: Block the request if field is equal to value
    • not equal: Block the request if field is not equal to value
    • match: Block the request if value matches field (Expect field to be string and value to be RegExp)
    • not match: Block the request if value doesn't match field (Expect field to be string and value to be RegExp)
    • in: Block the request if field is in value (Expect value to be Array)
    • not in: Block the request if field is not in value (Expect value to be Array)
    • contain: Block the request if field contains value (Expect field and value to be string)
    • not contain: Block the request if field doesn't contain value (Expect field and value to be string)
    • greater: Block the request if field is greater than value (Expect field and value to be number)
    • less: Block the request if field is less than value (Expect field and value to be number)
reflare.push({
  path: '/*',
  /* ... */
  firewall: [
    {
      field: 'ip',
      operator: 'in',
      value: ['1.1.1.1', '1.0.0.1'],
    },
    {
      field: 'user-agent',
      operator: 'match',
      value: /Chrome/,
    }
  ],
});
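The operator semantics listed above can be summarized in a single predicate. This is an illustrative sketch, not Reflare's implementation; `blocks` is a hypothetical helper that returns true when a rule blocks the request.

```typescript
// Illustrative semantics of the firewall operators; not Reflare's
// actual implementation.
type RuleValue = string | number | string[] | number[] | RegExp;

const blocks = (
  field: string | number | undefined,
  operator: string,
  value: RuleValue,
): boolean => {
  // A missing field (e.g. an absent header) cannot match a rule.
  if (field === undefined) return false;
  switch (operator) {
    case 'equal': return field === value;
    case 'not equal': return field !== value;
    case 'match': return (value as RegExp).test(field as string);
    case 'not match': return !(value as RegExp).test(field as string);
    case 'in': return (value as unknown[]).includes(field);
    case 'not in': return !(value as unknown[]).includes(field);
    case 'contain': return (field as string).includes(value as string);
    case 'not contain': return !(field as string).includes(value as string);
    case 'greater': return (field as number) > (value as number);
    case 'less': return (field as number) < (value as number);
    default: return false;
  }
};
```

Under this model, the example rules above block any request whose IP is 1.1.1.1 or 1.0.0.1, or whose user-agent matches /Chrome/.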

Headers

  • request (Record<string, string>): Sets request header going upstream to the backend. Accepts an object. (optional, defaults to {})
  • response (Record<string, string>): Sets response header coming downstream to the client. Accepts an object. (optional, defaults to {})
reflare.push({
  path: '/*',
  /* ... */
  headers: {
    request: {
      'x-example-header': 'hello server',
    },
    response: {
      'x-example-header': 'hello client',
    },
  },
});

Cross-Origin Resource Sharing (CORS)

origin: Configures the Access-Control-Allow-Origin CORS header. (optional, defaults to false)

  • boolean: set to true to reflect the origin of the request, or set to false to disable CORS.
  • string[]: an array of acceptable origins.
  • *: allow any origin to access the resource.

methods (string[]): Configures the Access-Control-Allow-Methods CORS header. Expect an array of valid HTTP methods or *. (optional, defaults to reflecting the method specified in the request’s Access-Control-Request-Method header)

allowedHeaders (string[]): Configures the Access-Control-Allow-Headers CORS header. Expect an array of HTTP headers or *. (optional, defaults to reflecting the headers specified in the request’s Access-Control-Request-Headers header.)

exposedHeaders (string[]): Configures the Access-Control-Expose-Headers CORS header. Expect an array of HTTP headers or *. (optional, defaults to [])

credentials (boolean): Configures the Access-Control-Allow-Credentials CORS header. Set to true to pass the header, or it is omitted. (optional, defaults to false)

maxAge (number): Configures the Access-Control-Max-Age CORS header. Set to an integer to pass the header, or it is omitted. (optional)

reflare.push({
  path: '/*',
  /* ... */
  cors: {
    origin: true,
    methods: [
      'GET',
      'POST',
    ],
    allowedHeaders: [
      'Example-Header',
    ],
    exposedHeaders: [
      'Example-Header',
    ],
    credentials: true,
    maxAge: 86400,
  },
});
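To make the option semantics concrete, here is a hedged sketch of how these options could map to response headers. It is not Reflare's actual code; `corsHeaders` is a hypothetical helper covering a subset of the options described above.

```typescript
// Illustrative mapping from CORS options to response headers; a sketch,
// not Reflare's implementation.
interface CorsOptions {
  origin?: boolean | string[] | '*';
  methods?: string[];
  credentials?: boolean;
  maxAge?: number;
}

const corsHeaders = (
  requestOrigin: string,
  options: CorsOptions,
): Record<string, string> => {
  const headers: Record<string, string> = {};
  if (options.origin === true) {
    // Reflect the origin of the request.
    headers['Access-Control-Allow-Origin'] = requestOrigin;
  } else if (options.origin === '*') {
    headers['Access-Control-Allow-Origin'] = '*';
  } else if (Array.isArray(options.origin) && options.origin.includes(requestOrigin)) {
    // Only acceptable origins are reflected back.
    headers['Access-Control-Allow-Origin'] = requestOrigin;
  }
  if (options.methods) {
    headers['Access-Control-Allow-Methods'] = options.methods.join(', ');
  }
  // The credentials and max-age headers are passed or omitted entirely.
  if (options.credentials) headers['Access-Control-Allow-Credentials'] = 'true';
  if (options.maxAge !== undefined) headers['Access-Control-Max-Age'] = String(options.maxAge);
  return headers;
};
```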

Optimization

Cloudflare Workers provides several optimizations by default.

  • Brotli: Speed up page load times for visitors’ HTTPS traffic by applying Brotli compression.
  • HTTP/2: Improve page load time by connection multiplexing, header compression, and server push.
  • HTTP/3 with QUIC: Accelerate HTTP requests by using QUIC, which provides encryption and performance improvements compared to TCP and TLS.
  • 0-RTT Connection Resumption: Improve performance for clients who have previously connected to the website.

🛳️ Dynamic Route Definition (Experimental)

Reflare can load the route definitions from Workers KV. Set the provider to kv and namespace to a Workers KV namespace (e.g. REFLARE) that binds to the current Worker. Reflare fetches the route definitions from namespace and handles each incoming request with the latest route definitions.

import useReflare from 'reflare';

declare const REFLARE: KVNamespace;

const handleRequest = async (
  request: Request,
): Promise<Response> => {
  const reflare = await useReflare({
    provider: 'kv',
    namespace: REFLARE,
  });
  return reflare.handle(request);
};

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

The route definitions should be stored as a JSON array in the route-list key of namespace. The KV namespace can be modified with wrangler or the Cloudflare API. The Reflare dashboard for route management is under development and will be released soon.

wrangler kv:key put --binding=[namespace] 'route-list' '[{"path":"/*","upstream":{"domain":"httpbin.org","protocol":"https"}}]'

🌎 Contributing

  • Request a feature: Create an issue with the Feature request template.
  • Report bugs: Create an issue with the Bug report template.
  • Add new feature or fix bugs: Fork this repository, edit code, and send a pull request.

Download Details:

Author: Xiaoyang-sde
Source Code: https://github.com/xiaoyang-sde/reflare 
License: MIT license

#serverless #service #worker #api #load 

Elian Harber

Echoip: IP Address Lookup Service

echoip

A simple service for looking up your IP address. This is the code that powers https://ifconfig.co.

Usage

Just the business, please:

$ curl ifconfig.co
127.0.0.1

$ http ifconfig.co
127.0.0.1

$ wget -qO- ifconfig.co
127.0.0.1

$ fetch -qo- https://ifconfig.co
127.0.0.1

$ bat -print=b ifconfig.co/ip
127.0.0.1

Country and city lookup:

$ curl ifconfig.co/country
Elbonia

$ curl ifconfig.co/country-iso
EB

$ curl ifconfig.co/city
Bornyasherk

$ curl ifconfig.co/asn
AS59795

As JSON:

$ curl -H 'Accept: application/json' ifconfig.co  # or curl ifconfig.co/json
{
  "city": "Bornyasherk",
  "country": "Elbonia",
  "country_iso": "EB",
  "ip": "127.0.0.1",
  "ip_decimal": 2130706433,
  "asn": "AS59795",
  "asn_org": "Hosting4Real"
}

Port testing:

$ curl ifconfig.co/port/80
{
  "ip": "127.0.0.1",
  "port": 80,
  "reachable": false
}

Pass the appropriate flag (usually -4 and -6) to your client to switch between IPv4 and IPv6 lookup.

Features

  • Easy to remember domain name
  • Fast
  • Supports IPv6
  • Supports HTTPS
  • Supports common command-line clients (e.g. curl, httpie, ht, wget and fetch)
  • JSON output
  • ASN, country and city lookup using the MaxMind GeoIP database
  • Port testing
  • All endpoints (except /port) can return information about a custom IP address specified via the ?ip= query parameter
  • Open source under the BSD 3-Clause license

Why?

  • To scratch an itch
  • An excuse to use Go for something
  • Faster than ifconfig.me and has IPv6 support

Building

Building requires the Go compiler to be installed. This package can be installed with:

go install github.com/mpolden/echoip/...@latest

For more information on building a Go project, see the official Go documentation.

Docker image

A Docker image is available on Docker Hub, which can be downloaded with:

docker pull mpolden/echoip

Usage

$ echoip -h
Usage of echoip:
  -C int
        Size of response cache. Set to 0 to disable
  -H value
        Header to trust for remote IP, if present (e.g. X-Real-IP)
  -a string
        Path to GeoIP ASN database
  -c string
        Path to GeoIP city database
  -f string
        Path to GeoIP country database
  -l string
        Listening address (default ":8080")
  -p    Enable port lookup
  -r    Perform reverse hostname lookups
  -t string
        Path to template directory (default "html")

Download Details:

Author: Mpolden
Source Code: https://github.com/mpolden/echoip 
License: BSD-3-Clause license

#go #golang #ip #service 

Gordon Murray

RDservice: This Plugin Can Be Used to Consume The RD Service

rdservice

This plugin can be used to consume the RD Service of a biometric device via Android intents.

Installing

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add rdservice

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  rdservice: ^0.0.1

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:rdservice/rdservice.dart';

example/lib/main.dart

import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:rdservice/rdservice.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _platformVersion = 'Unknown';

  @override
  Widget build(BuildContext context) {
    print(_platformVersion);
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Plugin example app'),
        ),
        body: SingleChildScrollView(
          child: Column(
            children: [
              ElevatedButton(
                child: const Text("Init Device"),
                onPressed: initDevice,
              ),
              ElevatedButton(
                child: const Text("Capture"),
                onPressed: captureFromDevice,
              ),
              Padding(
                padding: const EdgeInsets.all(8.0),
                child: Text(_platformVersion),
              ),
            ],
          ),
        ),
      ),
    );
  }

  Future<void> initDevice() async {
    RDService? result;
    try {
      result = await Msf100.getDeviceInfo();
    } on PlatformException catch (e) {
      if (mounted) {
        setState(() {
          _platformVersion = e.message ?? 'Unknown exception';
        });
      }
      return;
    }
    if (!mounted) return;

    setState(() {
      _platformVersion = result?.status ?? "Unknown";
    });
  }

  Future<void> captureFromDevice() async {
    PidData? result;
    try {
      result = await Msf100.capture();
    } on PlatformException catch (e) {
      if (mounted) {
        setState(() {
          _platformVersion = e.message ?? 'Unknown exception';
        });
      }
      return;
    }
    if (!mounted) return;

    setState(() {
      _platformVersion = result?.resp.errInfo ?? 'Unknown Error';
    });
  }
}

Download Details: 

Author: jeevareddy
Source Code: https://github.com/jeevareddy/rdservice 
License: MIT license

#flutter #dart #service #android 

Lawrence Lesch

Aggregate Crash Reports for Electron Apps

electron-crash-report-service

Aggregate crash reports for Electron applications

Usage

Commands

$ npm install   # Install dependencies
$ npm start     # Start service in development

Client code

var electron = require('electron')

electron.crashReporter.start({
  companyName: '<company-name>',
  productName: '<product-name>',
  submitURL: '<reporter-url>'
})

Environment variables

PORT [80]                          # Set the port the service should listen to
STORAGE_PATH [/var/crash-reports]  # Location to store crash reports
NODE_ENV [production]              # production|development

Routes

/crash-report   POST   Submit a new crash report
/404            GET    404 handler

Peer Dependencies

None

Unit file

Save the unit file as /etc/systemd/system/electron-crash-reporter.service, and the application image as /images/electron-crash-report-service.aci

[Unit]
Description=electron-crash-report-service
Requires=network-online.target
After=network-online.target

[Service]
Slice=machine.slice
Delegate=true
CPUQuota=10%
MemoryLimit=1G
Environment=PORT=80
Environment=STORAGE_PATH=/var/crash-reports
Environment=NODE_ENV=production
ExecStart=/usr/bin/rkt run --inherit-env /images/electron-crash-report-service.aci
ExecStopPost=/usr/bin/rkt gc --mark-only
KillMode=mixed
Restart=always

You can then run it using systemctl:

$ sudo systemctl start electron-crash-reporter.service
$ sudo systemctl stop electron-crash-reporter.service
$ sudo systemctl restart electron-crash-reporter.service

See Also

Author: Yoshuawuyts
Source Code: https://github.com/yoshuawuyts/electron-crash-report-service 
License: MIT license

#javascript #electron #service 

Gordon Taylor

Browserify/wzrd.in: Browserify As A Service

browserify-as-a-service

What just happened?

Well, in this case, since someone has visited this link before you, the file was cached with leveldb. But if you were to try and grab a bundle that nobody else has tried to grab before, what would happen is this:

  • The module gets pulled down from npm and installed
  • The module gets browserified as a standalone bundle
  • The module gets sent to you, piping hot
  • The module gets cached so that you don't have to wait later on

API

There are a few API endpoints:

GET /bundle/:module

Get the latest version of :module.

GET /bundle/:module@:version

Get a version of :module which satisfies the given :version semver range. Defaults to latest.

GET /debug-bundle/:module

GET /debug-bundle/:module@:version

The same as the prior two, except with --debug passed to browserify.

GET /standalone/:module

GET /standalone/:module@:version

In this case, --standalone is passed to browserify.

GET /debug-standalone/:module

GET /debug-standalone/:module@:version

Both --debug and --standalone are passed to browserify!

POST /multi

POST a body that looks something like this:

{
  "options": {
    "debug": true
  },
  "dependencies": {
    "concat-stream": "0.1.x",
    "hyperstream": "0.2.x"
  }
}

"options" is where you get to set "debug", "standalone", and "fullPaths". Usually, in this case, you'll probably only really care about debug. If you don't define "options", it will default to { "debug": false, "standalone": false, "fullPaths": false }.

What you get in return looks something like this:

HTTP/1.1 200 OK
X-Powered-By: Express
Location: /multi/48GOmL0XvnRZn32bkpz75A==
content-type: application/json
Date: Sat, 22 Jun 2013 22:36:32 GMT
Connection: keep-alive
Transfer-Encoding: chunked

{
  "concat-stream": {
    "package": /* the concat-stream package.json */,
    "bundle": /* the concat-stream bundle */
  },
  "hyperstream": {
    "package": /* the hyperstream package.json */,
    "bundle": /* the hyperstream bundle */
  }
}

The bundle gets permanently cached at /multi/48GOmL0XvnRZn32bkpz75A== for future GETs.
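The token in the Location header looks like a base64-encoded digest of the POST body. As a hedged illustration only (the actual key derivation used by wzrd.in is not documented here, and `multiCacheKey` is a hypothetical helper), such a cache key could be computed like this:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical cache-key derivation: base64 MD5 of the serialized
// request body. This mirrors the shape of the `/multi/...` token but is
// an assumption, not wzrd.in's documented algorithm.
const multiCacheKey = (body: object): string =>
  createHash('md5').update(JSON.stringify(body)).digest('base64');
```

The same body always yields the same key, so a repeated POST can be served from the cache at the same `/multi/...` location.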

GET /multi/:existing-bundle

If you saved the Location url from the POST earlier, you can just GET it instead of POSTing again.

GET /status/:module

GET /status/:module@:version

Get information on the build status of a module. Returns build information for all versions which satisfy the given semver (or latest in the event of a missing semver).

Blobs generally look something like this:

HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Content-Length: 109
ETag: "-9450086"
Date: Sun, 26 Jan 2014 08:05:59 GMT
Connection: keep-alive

{
  "module": "concat-stream",
  "builds": {
    "1.4.1": {
      "ok": true
    }
  }
}

The "module" and "builds" fields should both exist. Keys for "builds" are the versions. Properties:

  • "ok": Whether the most recent build of the package succeeded
  • "error": If the package was built unsuccessfully ("ok" is false), this property will contain information about the error

Versions which have not been built will not be keyed onto "builds".

Heroku Installation

browserify-cdn is ready to run on Heroku:

heroku create my-browserify-cdn
git push heroku master
heroku ps:scale web=1

Docker Installation

You can build and run an image doing the following:

docker build -t "wzrd.in" /path/to/wzrd.in
docker run -p 8080:8080 wzrd.in

Keep in mind that a new deploy will wipe the cache.

Places

Quick Start

Try visiting this link:

/standalone/concat-stream@latest

Also, wzrd.in has a nice url generating form.

Author: Browserify
Source Code: https://github.com/browserify/wzrd.in 
License: MIT license

#javascript #service #browserify 


Leaps: A Pair Programming Service using Operational Transforms

Leaps is a service for collaboratively editing your local files over a web UI, using operational transforms to ensure zero-collision synchronization across any number of editing clients.

Screenshot

Run

Simply navigate to a directory you want to share, run leaps, open the hosted page (default http://localhost:8080) in your browser and direct any friends on your LAN to the same page. You can now collaboratively edit any documents in that directory.

Your files will be written to in the background as you edit. If you aren't using version control, or simply want extra protection, you can run leaps in safe mode with the --safe flag. In safe mode any changes you make will be placed in a .leaps_cot.json file, which you can then apply to your files once you are happy by running with the --commit flag.

Build/test commands from the UI

When writing code it sucks to have to leave the editor for running tests, linters or builds. However, allowing the internet to run arbitrary commands on the host machine is a recipe for disaster.

Instead, leaps allows you to specify pre-written commands using the -cmd flag, which are then available for clients to trigger asynchronously while they edit. Results are broadcast to all connected users, so you can all see the outcome as a team.

For example, leaps -cmd "golint ./..." -cmd "go build ./cmd/leaps" gives users both a linter and a build command that they can trigger on your machine.

API

Leaps can also be used as a library, with implementations of accessors for various document-hosting solutions and pluggable authentication layers, allowing you to build your own services to suit many service architectures.

Leaps server components are implemented in Go, and there is a JavaScript client that can currently be used with the ACE, CodeMirror, and Textarea editors.

To read more about the service library components and find examples check out the godocs.

To read about the JavaScript client check out the README.

Install

Leaps is a single binary, with no runtime dependencies. Just download a package for your OS from the latest releases page.

From homebrew

brew install leaps
leaps -h

Build with Go

go get github.com/Jeffail/leaps/cmd/...
leaps -h

System compatibility

OS                Status
OSX x86_64        Supported, tested
Linux x86         Supported
Linux x86_64      Supported, tested
Linux ARMv5       Builds
Linux ARMv7       Supported, tested
Windows x86       Builds
Windows x86_64    Builds

Contributing and customizing

Contributions are very welcome, just fork and submit a pull request.

Contact

Ashley Jeffs

WARNING: This project is no longer actively maintained.

Author: jeffail
Source Code: https://github.com/jeffail/leaps 
License: MIT license

#go #golang #service 
