Lawrence Lesch

Frisbee: Modern Fetch-based Alternative to Axios/superagent/request

Frisbee 

❤️ Love this project? Support @niftylettuce's FOSS on Patreon or PayPal 🦄:

Modern fetch-based alternative to axios/superagent/request. Great for React Native.

New in v2.0.4+: baseURI is now optional, and you can pass raw: true as a global or request-based option to get the raw fetch() response (e.g. if you want to call res.arrayBuffer() or any other method manually).

Install

Node (Koa, Express, React Native, ...)

Install the required package:

npm install --save frisbee

See usage example and API below

Browser

VanillaJS

Load the package via <script> tag (note you will need to polyfill with required features):

<script crossorigin="anonymous" src="https://polyfill.io/v3/polyfill.min.js?features=es6,Array.from,Object.getOwnPropertyDescriptors,Object.getOwnPropertySymbols,Promise,Promise.race,Promise.reject,Promise.resolve,Reflect,Symbol.for,Symbol.iterator,Symbol.prototype,Symbol.species,Symbol.toPrimitive,Symbol.toStringTag,Uint8Array"></script>
<script src="https://unpkg.com/frisbee"></script>
<script type="text/javascript">
  (function() {
    // create a new instance of Frisbee
    var api = new Frisbee({
      baseURI: 'https://api.startup.com', // optional
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      }
    });

    // this is a simple example using `.then` and `.catch`
    api.get('/hello-world').then(console.log).catch(console.error);

    //
    // see the Usage section below in Frisbee's README for more information
    // https://github.com/niftylettuce/frisbee
    //
  })();
</script>

See usage example and API below for a more complete example.

Bundler

Install the required package:

npm install frisbee

Ensure that your environment is polyfilled with required features (e.g. use @babel/polyfill globally or a service like polyfill.io)

See usage example and API below

Usage

Example

const Frisbee = require('frisbee');

// create a new instance of Frisbee
const api = new Frisbee({
  baseURI: 'https://api.startup.com', // optional
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json'
  }
});

// this is a simple example using `.then` and `.catch`
api.get('/hello-world').then(console.log).catch(console.error);

// this is a more complex example using async/await and basic auth
(async () => {
  // log in to our API with a user/pass
  try {
    // make the request
    let res = await api.post('/v1/login');

    // handle HTTP or API errors
    if (res.err) throw res.err;

    // set basic auth headers for all
    // future API requests we make
    api.auth(res.body.api_token);

    // now let's post a message to our API
    res = await api.post('/v1/messages', { body: 'Hello' });

    // handle HTTP or API errors
    if (res.err) throw res.err;

    // now let's get a list of messages filtered by page and limit
    res = await api.get('/v1/messages', {
      body: {
        limit: 10,
        page: 2
      }
    });

    // handle HTTP or API errors
    if (res.err) throw res.err;

    // now let's logout
    res = api.post('/v1/logout');

    // handle HTTP or API errors
    if (res.err) throw res.err;

    // unset auth now since we logged out
    api.auth();

    // for more information on `fetch` headers and
    // how to send and expect various types of data:
    // <https://github.com/github/fetch>
  } catch (err) {
    console.error(err);
  }
})();

API

const Frisbee = require('frisbee');

Frisbee is a constructor that optionally accepts an options argument: an object of options for configuring your API instance.

Frisbee - accepts an options object, with the following accepted options:

baseURI (String) - the default URI to use as a prefix for all HTTP requests (optional as of v2.0.4+)

If your API server is running on http://localhost:8080, then use that as the value for this option

If you use React Native, then you most likely want to set baseURI as follows (e.g. making use of __DEV__ global variable):

const api = new Frisbee({
  baseURI: __DEV__
    ? process.env.API_BASE_URI || 'http://localhost:8080'
    : 'https://api.startup.com'
});

You could also set API_BASE_URI as an environment variable, and then set the value of this option to process.env.API_BASE_URI (e.g. API_BASE_URI=http://localhost:8080 node app)

Using React Native? You might want to read this article about automatic IP configuration.

headers (Object) - an object containing default headers to send with every request

  • Tip: You'll most likely want to set the "Accept" header to "application/json" and the "Content-Type" header to "application/json"

body (Object) - an object containing default body payload to send with every request. Either the default body set in options will be used or it will be overridden with a request provided body. Body will not merge nor deep merge.

params (Object) - an object containing default querystring parameters to send with every request (API method specific params options will override or extend properties defined here, but will not deep merge)

logRequest (Function) - a function that accepts two arguments, path (String) and opts (Object), and is called before a fetch request is made with those values (e.g. fetch(path, opts) – see Logging and Debugging below for example usage) - this defaults to false, so no log request function is called out of the box

logResponse (Function) - a function that accepts three arguments: path (String) and opts (Object), the same as logRequest, plus a third argument response (Object), which is the raw response object returned from fetch (see Logging and Debugging below for example usage) - this defaults to false, so no log response function is called out of the box

auth - will call the auth() function below and set it as a default

parse - options passed to qs.parse method (see qs for all available options)

  • ignoreQueryPrefix (Boolean) - defaults to true, and parses querystrings from URLs properly

stringify - options passed to qs.stringify method (see qs for all available options)

addQueryPrefix (Boolean) - defaults to true, and prefixes the path with the required ? character if a querystring is to be passed

format (String) - defaults to RFC1738

arrayFormat (String) - defaults to 'indices'

preventBodyOnMethods (Array) - defaults to [ 'GET', 'HEAD', 'DELETE', 'CONNECT' ], and is an Array of HTTP method names for which a body option will be converted into querystringified URL parameters (e.g. api.get('/v1/users', { search: 'foo' }) will result in GET /v1/users?search=foo). According to RFC 7231, the default methods listed here have no defined semantics for a payload body, and including one may cause some implementations to reject the request (which is why we set this as a default). If you wish to disable this, you may pass preventBodyOnMethods: false or your own custom Array preventBodyOnMethods: [ ... ]
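The conversion described above can be sketched as follows. This is an illustrative helper, not Frisbee's actual source (Frisbee uses the qs package internally; URLSearchParams is used here to keep the sketch self-contained):

```javascript
// Hypothetical sketch: move the `body` option into the querystring for
// HTTP methods that should not carry a payload body.
const PREVENT_BODY_ON_METHODS = ['GET', 'HEAD', 'DELETE', 'CONNECT'];

function applyPreventBodyOnMethods(method, path, options = {}) {
  const opts = { ...options };
  if (PREVENT_BODY_ON_METHODS.includes(method.toUpperCase()) && opts.body) {
    // querystringify the body and append it to the path
    const querystring = new URLSearchParams(opts.body).toString();
    delete opts.body;
    if (querystring) path += (path.includes('?') ? '&' : '?') + querystring;
  }
  return [path, opts];
}

// A GET with a body becomes a querystringified URL:
applyPreventBodyOnMethods('GET', '/v1/users', { body: { search: 'foo' } });
// => ['/v1/users?search=foo', {}]
```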

interceptableMethods (Array) - defaults to all API methods supported below (defaults to GET, HEAD, POST, PUT, DELETE, OPTIONS, PATCH)

raw (Boolean) - return a raw fetch response (new as of v2.0.4+)

abortToken (Symbol) - some Symbol that you can use to abort one or more frisbee requests

signal (Object) - an AbortController Signal used to cancel a fetch request

mode (String) - passed to fetch, defaults to "same-origin" (see Fetch's documentation for more info)

cache (String) - passed to fetch, defaults to "default" (see Fetch's documentation for more info)

credentials (String) - passed to fetch, defaults to "same-origin" (see Fetch's documentation for more info)

redirect (String) - passed to fetch, defaults to "follow" (see Fetch's documentation for more info)

referrer (String) - passed to fetch, defaults to "client" (see Fetch's documentation for more info)

Upon being invoked, Frisbee returns an object with the following chainable methods:

api.auth(creds) - helper function that sets BasicAuth headers, and it accepts user and pass arguments

  • You can pass creds user and pass as an array, arguments, or string: ([user, pass]), (user, pass), or ("user:pass"), so you shouldn't have any problems!
  • If you don't pass both user and pass arguments, then it removes any previously set BasicAuth headers from prior auth() calls
  • If you pass only a user, then it will set pass to an empty string ('')
  • If you pass : then it will assume you are trying to set BasicAuth headers using your own user:pass string
  • If you pass more than two keys, then it will throw an error (since BasicAuth only consists of user and pass anyways)
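The creds normalization described in these bullets can be sketched like this. This is a hypothetical helper, not Frisbee's source, showing how the three accepted forms could collapse into one Basic Authorization header value:

```javascript
// Hypothetical sketch: accept ([user, pass]), (user, pass), or ("user:pass"),
// and return the Basic Authorization header value (or null, meaning any
// previously set BasicAuth header should be removed).
function basicAuthHeader(...creds) {
  if (creds.length === 1 && Array.isArray(creds[0])) creds = creds[0];
  if (creds.length === 1 && typeof creds[0] === 'string') creds = creds[0].split(':');
  if (creds.length > 2) throw new Error('auth only accepts a user and a pass');
  const [user = '', pass = ''] = creds;
  if (user === '' && pass === '') return null; // unset previously set headers
  return 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64');
}

// All three forms produce the same header:
basicAuthHeader('user', 'pass');   // => 'Basic dXNlcjpwYXNz'
basicAuthHeader(['user', 'pass']); // same
basicAuthHeader('user:pass');      // same
```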

api.setOptions(opts) - helper function to update instance options (note this does not call api.auth internally again even if opts.auth is passed)

api.jwt(token) - helper function that sets a JWT Bearer header. It accepts the jwt_token as a single string argument. If you invoke the function with null as the argument for your token, it will remove JWT headers.

api.abort(token) - aborts all current/queued requests that were created using token

api.abortAll() - aborts all current/queued requests (i.e. those await-ing in an interceptor)

All exposed HTTP methods return a Promise, and they require a path string, and accept an optional options object:

Accepted method arguments:

path required - the path for the HTTP request (e.g. /v1/login, will be prefixed with the value of baseURI if set)

options optional - an object containing options, such as header values, a request body, form data, or a querystring to send along with the request. These options by default are inherited from global options passed to new Frisbee({ options }). For the GET method (and the DELETE method as of version 1.3.0), body data will be encoded in the query string. This options object is passed to the native Fetch API method, which means you can use native Fetch API options as well (see Fetch's documentation)

To make only a certain request raw and not parsed by Frisbee:

const res = await api.get('/v1/messages', { raw: true });

Here are a few examples (you can override/merge your set default headers as well per request):

To turn off caching, pass cache: 'reload' to native fetch options:

const res = await api.get('/v1/messages', { cache: 'reload' });

To set a custom header value of X-Reply-To on a POST request:

const res = await api.post('/messages', {
  headers: {
    'X-Reply-To': '7s9inuna748y4l1azchi'
  }
});

raw optional - will override a global raw option if set, and if it is true it will return a raw fetch response (new as of v2.0.4+)

List of available HTTP methods:

  • api.get(path, options) - GET
  • api.head(path, options) - HEAD (does not currently work - see tests)
  • api.post(path, options) - POST
  • api.put(path, options) - PUT
  • api.del(path, options) - DELETE
  • api.delete(path, options) - DELETE
  • api.options(path, options) - OPTIONS (does not currently work - see tests)
  • api.patch(path, options) - PATCH

Note that you can chain the auth method and an HTTP method together:

const res = await api.auth('foo:bar').get('/');

interceptor - object that can be used to manipulate request and response interceptors. It has the following methods:

api.interceptor.register(interceptor): Accepts an interceptor object that can have one or more of the following functions

{
  request: function (path, options) {
    // Read/Modify the path or options
    // ...
    return [path, options];
  },
  requestError: function (err) {
    // Handle an error that occurred in the request method
    // ...
    return Promise.reject(err);
  },
  response: function (response) {
    // Read/Modify the response
    // ...
    return response;
  },
  responseError: function (err) {
    // Handle an error that occurred in api/response methods
    return Promise.reject(err);
  }
}

The register method returns an unregister() function so that you can unregister the added interceptor.

api.interceptor.unregister(interceptor): Accepts the interceptor reference that you want to delete.

api.interceptor.clear(): Removes all the added interceptors.

Note that when interceptors are added in the order ONE->TWO->THREE:

  • The request/requestError functions will run in the same order ONE->TWO->THREE.
  • The response/responseError functions will run in reversed order THREE->TWO->ONE.
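The ordering rules above can be demonstrated with a small sketch. This is a hypothetical pipeline, not Frisbee's internals: request hooks run in registration order, response hooks in reverse registration order:

```javascript
// Hypothetical interceptor pipeline demonstrating the documented ordering.
function runInterceptors(interceptors, path, options, response) {
  for (const i of interceptors) {
    if (i.request) [path, options] = i.request(path, options);
  }
  for (const i of [...interceptors].reverse()) {
    if (i.response) response = i.response(response);
  }
  return { path, options, response };
}

const order = [];
const named = name => ({
  request: (path, options) => { order.push(`request:${name}`); return [path, options]; },
  response: response => { order.push(`response:${name}`); return response; }
});

runInterceptors([named('ONE'), named('TWO'), named('THREE')], '/x', {}, {});
// order: request ONE -> TWO -> THREE, then response THREE -> TWO -> ONE
```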

Logging and Debugging

We highly recommend using CabinJS as your Node.js and JavaScript logging utility (see Automatic Request Logging for complete examples).

Logging Requests and Responses

You can log both requests and/or responses made to fetch internally in Frisbee. Simply pass a logRequest and/or logResponse function.

logRequest accepts two arguments path (String) and opts (Object) and these two arguments are what we call fetch with internally (e.g. fetch(path, opts)):

const Cabin = require('cabin');
const Frisbee = require('frisbee');
const pino = require('pino')({
  customLevels: {
    log: 30
  },
  hooks: {
    // <https://github.com/pinojs/pino/blob/master/docs/api.md#logmethod>
    logMethod(inputArgs, method) {
      return method.call(this, {
        // <https://github.com/pinojs/pino/issues/854>
        // message: inputArgs[0],
        msg: inputArgs[0],
        meta: inputArgs[1]
      });
    }
  }
});

const logger = new Cabin({
  // (optional: your free API key from https://cabinjs.com)
  // key: 'YOUR-CABIN-API-KEY',
  axe: { logger: pino }
});

const api = new Frisbee({
  logRequest: (path, opts) => {
    logger.info('fetch request', { path, opts });
  }
});

logResponse accepts three arguments, the first two are the same as logRequest (e.g. path and opts), but the third argument is response (Object) and is the raw response object returned from fetch (e.g. const response = await fetch(path, opts)):

const Cabin = require('cabin');
const Frisbee = require('frisbee');
const pino = require('pino')({
  customLevels: {
    log: 30
  }
});

const logger = new Cabin({
  // (optional: your free API key from https://cabinjs.com)
  // key: 'YOUR-CABIN-API-KEY',
  axe: { logger: pino }
});

const api = new Frisbee({
  logResponse: (path, opts, res) => {
    logger.info('fetch response', { path, opts, res });
  }
});

Debug Statements

You can run your application with DEBUG=frisbee node app.js to output debug logging statements with Frisbee.

Common Issues

Required Features

This list is sourced from ESLint output and polyfilled settings through eslint-plugin-compat.

  • Array.from() is not supported in IE 11
  • Object.getOwnPropertyDescriptors() is not supported in IE 11
  • Object.getOwnPropertySymbols() is not supported in IE 11
  • Promise is not supported in Opera Mini all, IE Mobile 11, IE 11
  • Promise.race() is not supported in Opera Mini all, IE Mobile 11, IE 11
  • Promise.reject() is not supported in Opera Mini all, IE Mobile 11, IE 11
  • Promise.resolve() is not supported in Opera Mini all, IE Mobile 11, IE 11
  • Reflect is not supported in IE 11
  • Symbol.for() is not supported in IE 11
  • Symbol.iterator() is not supported in IE 11
  • Symbol.prototype() is not supported in IE 11
  • Symbol.species() is not supported in IE 11
  • Symbol.toPrimitive() is not supported in IE 11
  • Symbol.toStringTag() is not supported in IE 11
  • Uint8Array is not supported in IE Mobile 11

Frequently Asked Questions

How do I unset a default header

Simply set its value to null, '', or undefined – and it will be unset and removed from the headers sent with your request.

A common use case for this is when you are attempting to use FormData and need the content boundary automatically added.
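The merge-then-drop behavior described above can be sketched as follows. This is a hypothetical helper, not Frisbee's source: per-request headers are merged over the defaults, then any header whose value is null, '', or undefined is removed:

```javascript
// Hypothetical sketch of unsetting a default header.
function mergeHeaders(defaults, overrides = {}) {
  const headers = { ...defaults, ...overrides };
  for (const name of Object.keys(headers)) {
    const value = headers[name];
    if (value === null || value === undefined || value === '') delete headers[name];
  }
  return headers;
}

// Unset Content-Type so fetch can add the FormData content boundary itself:
mergeHeaders({ 'Content-Type': 'application/json' }, { 'Content-Type': null });
// => {}
```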

Why do my form uploads randomly fail with React Native

This is due to a bug with setting the boundary. For more information and temporary workaround if you are affected please see facebook/react-native#7564 (comment).

Does this support callbacks, promises, or both

As of version 1.0.0 we have dropped support for callbacks; it now only supports Promises.

What is the fetch method

It is a WHATWG browser API specification. You can read more about it in the WHATWG Fetch specification and on MDN.

Does the Browser or Node.js support fetch yet

Yes, a lot of browsers now support it! See http://caniuse.com/#feat=fetch for more information.

If my engine does not support fetch yet, is there a polyfill

Yes you can use the fetch method (polyfill) from whatwg-fetch or node-fetch.

React Native already ships with a built-in fetch out of the box!

Can I make fetch support older browsers

Yes, but you'll need a promise polyfill for older browsers.

What is this project about

Use this package as a universal API wrapper for integrating your API in your client-side or server-side projects.

It's a better working alternative (with fewer headaches, at least for me) – for talking to your API – than superagent and the default fetch Network method provide.

Use it for projects in Node, React, Angular, React Native, ...

It supports and is tested for both client-side usage (e.g. with Bower, Browserify, or Webpack, with whatwg-fetch) and also server-side (with node-fetch).

Why not just use superagent or fetch

See Background for more information.

Want to build an API back-end with Node.js

See Lad as a great starting point, and read this article about building Node.js API's with authentication.

Need help or want to request a feature

File an issue on GitHub and we'll try our best to help you out.

Tests

This package is tested to work with whatwg-fetch and node-fetch.

This means that it is compatible for both client-side and server-side usage.

Development

  1. Fork/clone this repository
  2. Run npm install
  3. Run npm run watch to watch the src directory for changes
  4. Make changes in src directory
  5. Write unit tests in /test/ if you add more stuff
  6. Run npm test when you're done
  7. Submit a pull request

Background

The docs suggest that you use superagent with React Native, but in our experience it did not work properly, so we went with the next best solution: the GitHub fetch API polyfill included with React Native. After having several issues trying to use fetch and writing our own API wrapper for a project with it (and running into roadblocks along the way), we decided to publish this.

We know that solutions like superagent exist, but they don't seem to work well with React Native (which was our use case for this package).

In addition, the authors of WHATWG's fetch API only support throwing errors instead of catching them and bubbling them up to the callback/promise (for example, with Frisbee any HTTP or API errors are found in the res.err object).

Therefore we created frisbee to serve as our API glue, and hopefully it'll serve as yours too.

Contributors

  • Nick Baugh (http://niftylettuce.com/)
  • Alexis Tyler
  • Assem-Hafez
  • Jordan Denison
  • James
  • Sampsa Saarela
  • Julien Moutte
  • Charles Soetan
  • Kesha Antonov
  • Ben Turley
  • Richard Evans
  • Hawken Rives
  • Fernando Montoya
  • Brent Vatne
  • Hosmel Quintana
  • Kyle Kirbatski
  • Adam Jenkins

Credits

Download Details:

Author: ladjs
Source Code: https://github.com/ladjs/frisbee 
License: MIT license

Beth Cooper

Fast Read and Highly Scalable Optimized Social Activity Feed in Ruby

SimpleFeed

1. Scalable, Easy to Use Activity Feed Implementation.

Note: please feel free to read this README in the formatted-for-print PDF Version.

1.1. Build & Gem Status

1.2. Test Coverage Map


Important: Please read the (somewhat outdated) blog post Feeding Frenzy with SimpleFeed launching this library. Please leave comments or questions in the discussion thread at the bottom of that post. Thanks!

If you like to see this project grow, your donation of any amount is much appreciated.

Donate

This is a fast, pure-ruby implementation of an activity feed concept commonly used in social networking applications. The implementation is optimized for read-time performance and high concurrency (lots of users), and can be extended with custom backend providers. One data provider comes bundled: the production-ready Redis provider.

Important Notes and Acknowledgements:

SimpleFeed does not depend on Ruby on Rails and is a pure-ruby implementation

SimpleFeed requires MRI Ruby 2.3 or later

SimpleFeed is currently live in production

SimpleFeed is open source thanks to the generosity of Simbi, Inc.

2. Features

SimpleFeed is a Ruby Library that can be plugged into any application to power a fast, Redis-based activity feed implementation so common on social networking sites. SimpleFeed offers the following features:

Modelled after graph-relationships similar to those on Twitter (bi-directional independent follow relationships):

Feed maintains a reverse-chronological order for heterogeneous events for each user.

It offers a constant time lookup for user’s feed, avoiding complex SQL joins to render it.

An API to read/paginate the feed for a given user

As well as to query the total unread items in the feed since it was last read by the user (typically shown on App icons).

Scalable and well performing Redis-based activity feed —

Scales to millions of users (will need to use Twemproxy to shard across several Redis instances)

Stores a fixed number of events for each unique "user" — the default is 1000. When the feed reaches 1001 events, the oldest event is offloaded from the activity.
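The capped, reverse-chronological behavior described above can be sketched with a toy in-memory feed. This is illustrative only; the real gem stores events in a Redis sorted set rather than a Ruby array:

```ruby
# Toy in-memory sketch of a capped feed: events stay sorted newest-first,
# and the oldest event is evicted once the cap is exceeded.
class CappedFeed
  Event = Struct.new(:value, :at)

  attr_reader :events

  def initialize(max_size: 1000)
    @max_size = max_size
    @events   = []
  end

  def store(value:, at:)
    @events << Event.new(value, at)
    @events.sort_by! { |e| -e.at }              # reverse-chronological order
    @events.pop while @events.size > @max_size  # evict the oldest events
    true
  end

  def total_count
    @events.size
  end
end

feed = CappedFeed.new(max_size: 3)
4.times { |i| feed.store(value: "event-#{i}", at: i.to_f) }
feed.events.map(&:value) # => ["event-3", "event-2", "event-1"]
```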

Implementation properties:

Fully thread-safe implementation; writing events can be done in, e.g., Sidekiq.

Zero assumptions about what you are storing: the "data" is just a string. Serialize it with JSON, Marshal, YAML, or whatever.

You can create as many different types of feeds per application as you like (no Ruby Singletons used).

Customize mapping from user_id to the activity id based on your business logic (more on this later).

2.1. Publishing Events

Pushing events to the feed requires the following:

An Event consisting of:

String data that, most commonly, is a foreign key to a database table, but can really be anything you like.

Float at (typically, the timestamp, but can be any float number)

One or more user IDs, or event consumers: basically, who should see the event being published in their feed.

You publish an event by choosing a set of users whose feeds should be updated. For example, were you re-implementing Twitter, your array of user_ids when publishing an event would be all followers of the Tweet's author, while the data would probably be the Tweet ID.

Note: Publishing an event to the feeds of N users is roughly an O(N * log(N)) operation
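The fan-out behind this note can be sketched as follows. This is a hypothetical helper, not SimpleFeed's implementation: one event is inserted into the sorted feed of every consumer, so cost grows with the number of users being updated:

```ruby
# Hypothetical fan-out sketch: publish one event into N users' feeds,
# keeping each feed in reverse-chronological order.
def publish(feeds, user_ids, value:, at:)
  user_ids.each do |user_id|
    feed = (feeds[user_id] ||= [])
    feed << { value: value, at: at }
    feed.sort_by! { |event| -event[:at] } # newest first
  end
  true
end

feeds = {}
follower_ids = [1, 2, 3] # e.g. all followers of the tweet's author
publish(feeds, follower_ids, value: 'tweet-42', at: 100.0)
publish(feeds, [1],          value: 'tweet-43', at: 200.0)
feeds[1].map { |e| e[:value] } # => ["tweet-43", "tweet-42"]
```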

2.2. Consuming Events (Reading / Rendering the Feed)

You can fetch the chronologically ordered events for a particular user, using:

Methods on the activity, such as paginate and fetch.

Reading the feed for one user (or one type of user) is an O(1) operation

For each activity (user) you can fetch the total_count and the unread_count — the number of total and new items in the feed, where unread_count is computed since the user last reset their read status.

Note: total_count can never exceed the maximum size of the feed that you configured. The default is 1000 items.

The last_read timestamp can be automatically reset when the user is shown the feed via the paginate method (whether or not it is reset is controlled by a method argument).
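The unread-count and last_read semantics described above can be sketched with a small hypothetical class (not SimpleFeed's source): unread_count counts events newer than the user's last_read timestamp, and paginate can optionally reset that timestamp:

```ruby
# Hypothetical sketch of unread_count / last_read semantics.
class UserActivity
  def initialize(events)
    @events    = events.sort_by { |e| -e[:at] } # newest first
    @last_read = 0.0
  end

  # number of events newer than the last time the user read the feed
  def unread_count
    @events.count { |e| e[:at] > @last_read }
  end

  # returns one page of events; optionally marks the feed as read
  def paginate(page: 1, per_page: 50, reset_last_read: false)
    @last_read = Time.now.to_f if reset_last_read
    @events.each_slice(per_page).to_a[page - 1] || []
  end
end

activity = UserActivity.new([{ value: 'hello', at: 2.0 }, { value: 'goodbye', at: 1.0 }])
activity.unread_count                             # => 2
activity.paginate(page: 1, reset_last_read: true)
activity.unread_count                             # => 0
```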

2.3. Modifying User’s Feed

For any given user, you can:

Wipe their feed with wipe

Selectively remove items from the feed with delete_if.

For instance, if a user un-follows someone they shouldn’t see their events anymore, so you’d have to call delete_if and remove any events published by the unfollowed user.

2.4. Aggregating Events

This is a feature planned for future versions.

Help is much appreciated, even if you are not a developer but have a clear idea about how it should work.

3. Commercial & Enterprise Support

Commercial Support plans are available for SimpleFeed through the author's consulting company, ReinventONE Inc. Please reach out to kig AT reinvent.one for more information.

4. Usage

4.1. Example

Please read the additional documentation, including the examples, on the project’s Github Wiki.

Below is a screen shot of an actual activity feed powered by this library.

usage

4.2. Providers

A key concept to understanding the SimpleFeed gem is that of a provider, which is effectively a persistence implementation for the events belonging to each user.

One provider is supplied with this gem: the production-ready :redis provider, which uses the sorted set Redis data type to store and fetch the events, typically scored by time (though any score can be used).

You initialize a provider by using the SimpleFeed.provider([Symbol]) method.

4.3. Configuration

Below we configure a feed called :newsfeed, which in this example will be populated with the various events coming from the followers.

require 'simplefeed'

# Let's define a Redis-based feed, and wrap Redis in a ConnectionPool.

SimpleFeed.define(:newsfeed) do |f|
  f.provider   = SimpleFeed.provider(:redis,
                                      redis: -> { ::Redis.new },
                                      pool_size: 10)
  f.per_page   = 50     # default page size
  f.batch_size = 10     # default batch size
  f.namespace  = 'nf'   # only needed if you use the same redis for more than one feed
end

After the feed is defined, the gem creates a similarly named method under the SimpleFeed namespace to access the feed. For example, given a name such as :newsfeed the following are all valid ways of accessing the feed:

SimpleFeed.newsfeed

SimpleFeed.get(:newsfeed)

You can also get a full list of currently defined feeds with SimpleFeed.feed_names method.

4.4. Reading from and writing to the feed

For the impatient, here is a quick way to get started with the SimpleFeed.

# Let's use the feed we defined earlier and create activity for all followers of the current user
publish_activity = SimpleFeed.newsfeed.activity(@current_user.followers.map(&:id))

# Store directly the value and the optional time stamp
publish_activity.store(value: 'hello', at: Time.now)
# => true  # indicates that value 'hello' was not yet in the feed (all events must be unique)

# Or, using the event form:
publish_activity.store(event: SimpleFeed::Event.new('good bye', Time.now))
# => true

Now that we’ve added the two events for these users, we can read them back, sorted by time and paginated:

# Let's grab the first follower
user_activity = SimpleFeed.newsfeed.activity(@current_user.followers.first.id)

# Now we can paginate the events, while resetting this user's last-read timestamp:
user_activity.paginate(page: 1, reset_last_read: true)
# [
#     [0] #<SimpleFeed::Event: value=hello, at=1480475294.0579991>,
#     [1] #<SimpleFeed::Event: value=good bye, at=1480472342.8979871>,
# ]
Important: Note that we stored the activity by passing an array of users, but read the activity for just one user. This is how you’d use SimpleFeed most of the time, with the exception of the alternative mapping described below.

4.5. User IDs

In the previous section you saw the examples of publishing events to many feeds, and then reading the activity for a given user.

SimpleFeed supports user IDs that are either numeric (integer) or string-based (eg, UUID). Numeric IDs are best for simplest cases, and are the most compact. String keys offer the most flexibility.

4.5.1. Activity Keys

In the next section we’ll talk about generating keys from user_ids. We mean — Redis Hash keys that uniquely map a user (or a set of users) to the activity feed they should see.

There are up to two keys that are computed depending on the situation:

data_key is used to store the actual feed events

meta_key is used to store user’s last_read status

4.5.2. Partitioning Schema

Note: This feature is only available in SimpleFeed Version 3+.

You can take advantage of string user IDs for situations where, for instance, your feed requires composite keys. Just remember that SimpleFeed does not care about what’s in your user ID, or even what you call "a user". It’s convenient to think of activities in terms of users, because typically each user has a unique feed that only they see.

But you can just as easily use zip code as the unique activity ID, and create one feed of events per geographical location, that all folks living in that zip code share. But what about other countries?

Now you can use a partitioning scheme: make the "user_id" argument a combination of iso_country_code.postal_code; e.g. for San Francisco you’d use us.94107, while for Australia you could use, say, au.3148.
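A composite activity ID like this can be built with a trivial helper. This is a hypothetical sketch (the helper name is illustrative, not part of SimpleFeed's API):

```ruby
# Hypothetical sketch: combine an ISO country code with a postal code so
# that one shared feed exists per geographic area.
def activity_id(country_code, postal_code)
  "#{country_code.to_s.downcase}.#{postal_code}"
end

activity_id('US', 94107) # => "us.94107"
activity_id('AU', 3148)  # => "au.3148"
```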

4.5.3. Relationship between an Activity and a User

One to One

In the most common case, you will have one activity per user.

For instance, in the Twitter example, each Twitter user has a unique feed that only they see.

The events are published when someone posts a tweet, to the array of all users that follow the Tweet author.

One to Many

However, SimpleFeed supports one additional use-case, where you might have one activity shared among many users.

Imagine a service that notifies residents of important announcements based on user’s zip code of residence.

We want this feed to work as follows:

All users that share a zip-code should see the same exact feed.

However, all users should never share the individual’s last_read status: so if two people read the same activity from the same zip code, their unread_count should change independently.

In terms of the activity keys, this means:

data_key should be based on the zip-code of each user, and be one to many with users.

meta_key should be based on the user ID as we want it to be 1-1 with users.

To support this use-case, SimpleFeed supports two optional transformer lambdas that can be applied to each user object when computing their activity feed hash key:

SimpleFeed.define(:zipcode_alerts) do |f|
  f.provider   = SimpleFeed.provider(:redis, redis: -> { ::Redis.new }, pool_size: 10)
  f.namespace  = 'zc'
  f.data_key_transformer = ->(user) { user.zip_code }  # actual feed data is stored once per zip code
  f.meta_key_transformer = ->(user) { user.id }        # last_read status is stored once per user
end

When you publish events into this feed, you need to provide User objects that all respond to the .zip_code method (based on the above configuration). Since the data is keyed only by zip code, you probably don’t want to publish it via a giant array of users. Most likely, you’ll want to publish events based on the zip code, and consume them based on the user ID.

To support this use-case, we can modify the data key transformer lambda so that it can handle both the consuming case (reading by a user) and the publishing case (writing a feed by zip code). For example:

  f.data_key_transformer = ->(entity) do
    case entity
      when User
        entity.zip_code.to_i
      when String # UUIDs
        User.find(entity)&.zip_code.to_i
      when ZipCode, Numeric
        entity.to_i
      else
        raise ArgumentError, "Invalid type #{entity.class.name}"
    end
  end

Just make sure that your users always have .zip_code defined, and that ZipCode.new(94107).to_i returns exactly the same thing as @user.zip_code.to_i or your users won’t see the feeds they are supposed to see.

4.6. The Two Forms of the Feed API

The feed API is offered in two forms:

single-user form, and

a batch (multi-user) form.

The method names and signatures are the same. The only difference is in what the methods return:

In the single user case, the return of, say, #total_count is an Integer value representing the total count for this user.

In the multi-user case, the return is a SimpleFeed::Response instance, which can be thought of as a Hash that has the user IDs as keys and the per-user results as values.

The Batch API is described in detail further below.

Single-User API

In the examples below we show responses for single-user usage. As previously mentioned, multi-user usage is identical except for the response values, and is discussed further below.

Let's take a look at a Ruby session which demonstrates the return values of feed operations for a single user:

require 'simplefeed'

# Define the feed using Redis provider, which uses
# SortedSet to keep user's events sorted.
SimpleFeed.define(:followers) do |f|
  f.provider = SimpleFeed.provider(:redis)
  f.per_page = 50 # default page size
end

# Let's get the Activity instance that wraps this
activity = SimpleFeed.followers.activity(user_id)         # => [... complex object removed for brevity ]

# let's clear out this feed to ensure it's empty
activity.wipe                                             # => true

# Let's verify that the counts for this feed are at zero
activity.total_count                                      # => 0
activity.unread_count                                     # => 0

# Store some events
activity.store(value: 'hello')                            # => true
activity.store(value: 'goodbye', at: Time.now - 20)       # => true
activity.unread_count                                     # => 2

# Now we can paginate the events, while resetting this user's last-read timestamp:
activity.paginate(page: 1, reset_last_read: true)
# [
#     [0] #<SimpleFeed::Event: value=goodbye, at=1480475294.0579991>,
#     [1] #<SimpleFeed::Event: value=hello, at=1480475294.057138>
# ]
# Now the unread_count should return 0 since the user just "viewed" the feed.
activity.unread_count                                     # => 0
activity.delete(value: 'hello')                           # => true
# The next method yields each event in the user's feed to the block and deletes
# every event for which the block returns true. The return value is the
# array of all events that were deleted for this user.
activity.delete_if do |event, user_id|
  event.value =~ /good/
end
# => [
#     [0] #<SimpleFeed::Event: value=goodbye, at=1480475294.0579991>
# ]
activity.total_count                                      # => 0

You can fetch all items in the feed (optionally filtered by time) using #fetch or #paginate, and reset the last_read timestamp by passing reset_last_read: true as a parameter.


Batch (Multi-User) API

This API should be used when dealing with an array of users (or, in the future, a Proc or an ActiveRecord relation).

There are several reasons to prefer this API for operations that perform a similar action across a range of users: provider implementations can be heavily optimized for concurrency and performance.

The Redis Provider, for example, uses a notion of pipelining to send updates for different users asynchronously and concurrently.

Multi-user operations return a SimpleFeed::Response object, which can be used as a hash (keyed on user_id) to fetch the result for a given user.

# Using the Feed API with, e.g., #find_in_batches
@event_producer.followers.find_in_batches do |group|

  # Convert a group to the array of IDs and get ready to store
  activity = SimpleFeed.get(:followers).activity(group.map(&:id))
  activity.store(value: "#{@event_producer.name} liked an article")

  # => [Response] { user_id1 => [Boolean], user_id2 => [Boolean]... }
  # true if the value was stored, false if it wasn't.
end

Activity Feed DSL (Domain-Specific Language)

The library offers a convenient DSL for adding feed functionality into your current scope.

To use the module, simply include SimpleFeed::DSL where needed; it exports one primary method, #with_activity. You call this method and pass an activity object created for a set of users (or a single user), like so:

require 'simplefeed/dsl'
include SimpleFeed::DSL

feed = SimpleFeed.newsfeed
activity = feed.activity(current_user.id)
data_to_store = %w(France Germany England)

def report(value)
  puts value
end

with_activity(activity, countries: data_to_store) do
  # we can use countries as a variable because it was passed above in **opts
  countries.each do |country|
    # we can call #store without a receiver because the block is passed to
    # instance_eval
    store(value: country) { |result| report(result ? 'success' : 'failure') }
    # we can call #report inside the proc because it is evaluated in the
    # outside context of the #with_activity

    # now let's print a color ASCII dump of the entire feed for this user:
    color_dump
  end
  printf "Activity counts are: %d unread of %d total\n", unread_count, total_count
end

The DSL context has access to two additional methods:

#event(value, at) returns a fully constructed SimpleFeed::Event instance

#color_dump prints to STDOUT an ASCII dump of the current user's activities (events), together with the counts and the last_read marker shown visually on the timeline.

#color_dump

Below is an example output of the color_dump method, which is intended for debugging purposes.


Figure 1. #color_dump method output


5. Complete API

For completeness' sake, we'll show only the multi-user API responses. For the single-user use case, the response is typically a scalar, and the input is a single user_id rather than an array of IDs.

Multi-User (Batch) API

Each API call at this level expects an array of user IDs, so the return value is a SimpleFeed::Response object containing individual responses for each user, accessible via the response[user_id] method.

@multi = SimpleFeed.get(:feed_name).activity(User.active.map(&:id))

@multi.store(value:, at:)
@multi.store(event:)
# => [Response] { user_id => [Boolean], ... } true if the value was stored, false if it wasn't.

@multi.delete(value:, at:)
@multi.delete(event:)
# => [Response] { user_id => [Boolean], ... } true if the value was removed, false if it didn't exist

@multi.delete_if do |event, user_id|
  # if the block returns true, the event is deleted and returned
end
# => [Response] { user_id => [deleted_event1, deleted_event2, ...], ... }

# Wipe the feed for a given user(s)
@multi.wipe
# => [Response] { user_id => [Boolean], ... } true if user activity was found and deleted, false otherwise

# Return a paginated list of all items, optionally with the total count of items
@multi.paginate(page: 1,
                per_page: @multi.feed.per_page,
                with_total: false,
                reset_last_read: false)
# => [Response] { user_id => [Array]<Event>, ... }
# Options:
#   reset_last_read: false — reset last read to Time.now (true), or the provided timestamp
#   with_total: true — returns a hash for each user_id:
#        => [Response] { user_id => { events: Array<Event>, total_count: 3 }, ... }

# Return un-paginated list of all items, optionally filtered
@multi.fetch(since: nil, reset_last_read: false)
# => [Response] { user_id => [Array]<Event>, ... }
# Options:
#   reset_last_read: false — reset last read to Time.now (true), or the provided timestamp
#   since: <timestamp> — if provided, returns all items posted since then
#   since: :last_read — if provided, returns all unread items and resets +last_read+

@multi.reset_last_read
# => [Response] { user_id => [Time] last_read, ... }

@multi.total_count
# => [Response] { user_id => [Integer, String] total_count, ... }

@multi.unread_count
# => [Response] { user_id => [Integer, String] unread_count, ... }

@multi.last_read
# => [Response] { user_id => [Time] last_read, ... }

6. Providers

As we’ve discussed above, a provider is an underlying persistence mechanism implementation.

It is the intention of this gem that:

it should be easy to write new providers

it should be easy to swap out providers

One provider is included with this gem:

6.1. SimpleFeed::Providers::Redis::Provider

Redis Provider is a production-ready persistence adapter that uses the sorted set Redis data type.

This provider is optimized for large writes and can use either a single Redis instance for all users of your application, or any number of Redis shards by placing Twemproxy in front of them.

If you set the environment variable REDIS_DEBUG to true and run the example (see below), you will see every operation Redis performs. This can be useful when debugging an issue or submitting a bug report.

7. Running the Examples and Specs

The gem's source contains an examples folder with an example file that can be used to try out the providers and see what they do under the hood.

Both the specs and the example require a local Redis instance to be available.

To run the example, check out the source of the library, and then:

git clone https://github.com/kigster/simple-feed.git
cd simple-feed

# on OSX with HomeBrew:
brew install redis
brew services start redis

# check that your redis is up:
redis-cli info

# install bundler and other dependencies
gem install bundler --version 2.1.4
bundle install
bundle exec rspec  # make sure tests are passing

# run the example:
ruby examples/redis_provider_example.rb

The commands above download and set up all dependencies, and run the example for a single user:


Figure 2. Running Redis Example in a Terminal

If you set the REDIS_DEBUG variable prior to running the example, you will see every single Redis command executed as the example works its way through. Below is a sample output:


Figure 3. Running Redis Example with REDIS_DEBUG set

7.1. Generating Ruby API Documentation

rake doc

This should use Yard to generate the documentation, and open your browser once it’s finished.

7.2. Installation

Add this line to your application’s Gemfile:

gem 'simple-feed'

And then execute:

$ bundle

Or install it yourself as:

$ gem install simple-feed

7.3. Development

After checking out the repo, run bin/setup to install dependencies. Then, run rake spec to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.

7.4. Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/kigster/simple-feed

7.5. License

The gem is available as open source under the terms of the MIT License.


7.6. Acknowledgements

This project was conceived and is sponsored by Simbi, Inc.

Author’s personal experience at Wanelo, Inc. has served as an inspiration.


Author: kigster
Source code: https://github.com/kigster/simple-feed
License: MIT license

#ruby   #ruby-on-rails 

Fast Read and Highly Scalable Optimized Social Activity Feed in Ruby

Jsonapi Serializer: A Fast JSON:API Serializer for Ruby Objects.

JSON:API Serialization Library

:warning: :construction: v2 (the master branch) is in maintenance mode! :construction: :warning:

We'll gladly accept bugfixes and security-related fixes for v2 (the master branch), but at this stage, contributions for new features/improvements are welcome only for v3. Please feel free to leave comments in the v3 Pull Request.


A fast JSON:API serializer for Ruby Objects.

Previously this project was called fast_jsonapi; we forked the project and renamed it to jsonapi-serializer in order to keep it alive.

We would like to thank the Netflix team for the initial work and to all our contributors and users for the continuous support!

Performance Comparison

We compare serialization times with ActiveModelSerializer and alternative implementations as part of performance tests available at jsonapi-serializer/comparisons.

We want to ensure that with every change on this library, serialization time stays significantly faster than the performance provided by the alternatives. Please read the performance article in the docs folder for any questions related to methodology.


Features

  • Declaration syntax similar to Active Model Serializer
  • Support for belongs_to, has_many and has_one
  • Support for compound documents (included)
  • Optimized serialization of compound documents
  • Caching

Installation

Add this line to your application's Gemfile:

gem 'jsonapi-serializer'

Execute:

$ bundle install

Usage

Rails Generator

You can use the bundled generator if you are using the library inside of a Rails project:

rails g serializer Movie name year

This will create a new serializer in app/serializers/movie_serializer.rb

Model Definition

class Movie
  attr_accessor :id, :name, :year, :actor_ids, :owner_id, :movie_type_id
end

Serializer Definition

class MovieSerializer
  include JSONAPI::Serializer

  set_type :movie  # optional
  set_id :owner_id # optional
  attributes :name, :year
  has_many :actors
  belongs_to :owner, record_type: :user
  belongs_to :movie_type
end

Sample Object

movie = Movie.new
movie.id = 232
movie.name = 'test movie'
movie.actor_ids = [1, 2, 3]
movie.owner_id = 3
movie.movie_type_id = 1
movie

movies =
  2.times.map do |i|
    m = Movie.new
    m.id = i + 1
    m.name = "test movie #{i}"
    m.actor_ids = [1, 2, 3]
    m.owner_id = 3
    m.movie_type_id = 1
    m
  end

Object Serialization

Return a hash

hash = MovieSerializer.new(movie).serializable_hash

Return Serialized JSON

json_string = MovieSerializer.new(movie).serializable_hash.to_json

Serialized Output

{
  "data": {
    "id": "3",
    "type": "movie",
    "attributes": {
      "name": "test movie",
      "year": null
    },
    "relationships": {
      "actors": {
        "data": [
          {
            "id": "1",
            "type": "actor"
          },
          {
            "id": "2",
            "type": "actor"
          },
          {
            "id": "3",
            "type": "actor"
          }
        ]
      },
      "owner": {
        "data": {
          "id": "3",
          "type": "user"
        }
      },
      "movie_type": {
        "data": {
          "id": "1",
          "type": "movie_type"
        }
      }
    }
  }
}

The Optionality of set_type

By default fast_jsonapi will try to figure out the type based on the name of the serializer class. For example, the class MovieSerializer will automatically have a type of :movie. If your serializer class name does not follow this format, you have to set the type manually using set_type in the serializer.
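For intuition, the naming convention can be sketched in plain Ruby (this is an illustration of the rule described above, not the gem's actual implementation):

```ruby
# Derive an implied type from a serializer class name:
# strip the "Serializer" suffix and underscore the rest.
def implied_type(klass_name)
  klass_name.sub(/Serializer\z/, '')
            .gsub(/([a-z])([A-Z])/, '\1_\2')
            .downcase
            .to_sym
end

implied_type('MovieSerializer')     # => :movie
implied_type('MovieTypeSerializer') # => :movie_type
```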

Key Transforms

By default fast_jsonapi underscores the key names. It supports the same key transforms that are supported by AMS. Here is the syntax for specifying a key transform:

class MovieSerializer
  include JSONAPI::Serializer

  # Available options :camel, :camel_lower, :dash, :underscore(default)
  set_key_transform :camel
end

Here are examples of how these options transform the keys

set_key_transform :camel # "some_key" => "SomeKey"
set_key_transform :camel_lower # "some_key" => "someKey"
set_key_transform :dash # "some_key" => "some-key"
set_key_transform :underscore # "some_key" => "some_key"
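As a plain-Ruby sketch of one of these mappings (an illustration of the :camel_lower transform above, not the gem's internals):

```ruby
# Lower-camel-case a snake_case key: keep the first word as-is,
# capitalize the rest, and join.
def camel_lower(key)
  head, *rest = key.to_s.split('_')
  (head + rest.map(&:capitalize).join).to_sym
end

camel_lower(:some_key)      # => :someKey
camel_lower(:some_long_key) # => :someLongKey
```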

Attributes

Attributes are defined using the attributes method. This method is also aliased as attribute, which is useful when defining a single attribute.

By default, attributes are read directly from the model property of the same name. In this example, name is expected to be a property of the object being serialized:

class MovieSerializer
  include JSONAPI::Serializer

  attribute :name
end

Custom attributes that must be serialized but do not exist on the model can be declared using Ruby block syntax:

class MovieSerializer
  include JSONAPI::Serializer

  attributes :name, :year

  attribute :name_with_year do |object|
    "#{object.name} (#{object.year})"
  end
end

The block syntax can also be used to override the property on the object:

class MovieSerializer
  include JSONAPI::Serializer

  attribute :name do |object|
    "#{object.name} Part 2"
  end
end

Attributes can also use a different name by passing the original method or accessor with a proc shortcut:

class MovieSerializer
  include JSONAPI::Serializer

  attributes :name

  attribute :released_in_year, &:year
end

Links Per Object

Links are defined using the link method. By default, links are read directly from the model property of the same name. In this example, public_url is expected to be a property of the object being serialized.

You can configure the method to use on the object. For example, a link with the key self will be set to the value returned by a method called url on the movie object.

You can also use a block to define a URL, as shown in custom_url. You can access params in these blocks as well, as shown in personalized_url:

class MovieSerializer
  include JSONAPI::Serializer

  link :public_url

  link :self, :url

  link :custom_url do |object|
    "https://movies.com/#{object.name}-(#{object.year})"
  end

  link :personalized_url do |object, params|
    "https://movies.com/#{object.name}-#{params[:user].reference_code}"
  end
end

Links on a Relationship

You can specify relationship links by using the links: option on the serializer. Relationship links in JSON API are useful if you want to load a parent document and then load associated documents later due to size constraints (see related resource links)

class MovieSerializer
  include JSONAPI::Serializer

  has_many :actors, links: {
    self: :url,
    related: -> (object) {
      "https://movies.com/#{object.id}/actors"
    }
  }
end

Relationship links can also be configured to be defined as a method on the object.

  has_many :actors, links: :actor_relationship_links

This will create a self reference for the relationship, and a related link for loading the actors relationship later. NB: This will not automatically disable loading the data in the relationship; you'll need to do that using the lazy_load_data option:

  has_many :actors, lazy_load_data: true, links: {
    self: :url,
    related: -> (object) {
      "https://movies.com/#{object.id}/actors"
    }
  }

Meta Per Resource

For every resource in the collection, you can include a meta object containing non-standard meta-information about the resource that cannot be represented as an attribute or relationship.

class MovieSerializer
  include JSONAPI::Serializer

  meta do |movie|
    {
      years_since_release: Date.current.year - movie.year
    }
  end
end

Meta on a Relationship

You can specify relationship meta by using the meta: option on the serializer. Relationship meta in JSON API is useful if you wish to provide non-standard meta-information about the relationship.

Meta can be defined either by passing a static hash or by using Proc to the meta key. In the latter case, the record and any params passed to the serializer are available inside the Proc as the first and second parameters, respectively.

class MovieSerializer
  include JSONAPI::Serializer

  has_many :actors, meta: Proc.new do |movie_record, params|
    { count: movie_record.actors.length }
  end
end

Compound Document

Support for top-level and nested included associations through options[:include].

options = {}
options[:meta] = { total: 2 }
options[:links] = {
  self: '...',
  next: '...',
  prev: '...'
}
options[:include] = [:actors, :'actors.agency', :'actors.agency.state']
MovieSerializer.new(movies, options).serializable_hash.to_json

Collection Serialization

options[:meta] = { total: 2 }
options[:links] = {
  self: '...',
  next: '...',
  prev: '...'
}
hash = MovieSerializer.new(movies, options).serializable_hash
json_string = MovieSerializer.new(movies, options).serializable_hash.to_json

Control Over Collection Serialization

You can use the is_collection option to gain better control over collection serialization.

If this option is not provided or is nil, autodetection logic is used to determine whether the provided resource is a single object or a collection.

The autodetection logic is compatible with most DB toolkits (ActiveRecord, Sequel, etc.) but cannot guarantee that single vs. collection will always be detected properly.

The options[:is_collection] option was introduced to provide precise control over this behavior:

  • nil or not provided: will try to autodetect single vs collection (please, see notes above)
  • true will always treat input resource as collection
  • false will always treat input resource as single object
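A hypothetical sketch of such an autodetect heuristic in plain Ruby (the gem's real logic differs and also special-cases ORM relations):

```ruby
def collection?(resource, is_collection = nil)
  # An explicit true/false always wins over autodetection.
  return is_collection unless is_collection.nil?

  # Heuristic: enumerable things that are not hash-like count as collections.
  resource.respond_to?(:size) && !resource.respond_to?(:each_pair)
end

collection?([1, 2])           # => true
collection?({ a: 1 })         # => false (hash-like, treated as one object)
collection?(Object.new, true) # => true  (explicit override)
```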

Caching

To enable caching, use cache_options store: <cache_store>:

class MovieSerializer
  include JSONAPI::Serializer

  # use rails cache with a separate namespace and fixed expiry
  cache_options store: Rails.cache, namespace: 'jsonapi-serializer', expires_in: 1.hour
end

store is required and can be anything that implements a #fetch(record, **options, &block) method:

  • record is the record that is currently serialized
  • options is everything that was passed to cache_options except store, so it can be anything the cache store supports
  • &block should be executed to fetch new data if cache is empty

So for the example above it will call the cache instance like this:

Rails.cache.fetch(record, namespace: 'jsonapi-serializer', expires_in: 1.hour) { ... }

Caching and Sparse Fieldsets

If caching is enabled and fields are provided to the serializer, the fieldset will be appended to the cache key's namespace.

For example, given the following serializer definition and instance:

class ActorSerializer
  include JSONAPI::Serializer

  attributes :first_name, :last_name

  cache_options store: Rails.cache, namespace: 'jsonapi-serializer', expires_in: 1.hour
end

serializer = ActorSerializer.new(actor, { fields: { actor: [:first_name] } })

The following cache namespace will be generated: 'jsonapi-serializer-fieldset:first_name'.
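The namespace construction can be sketched in plain Ruby (an illustration of the behavior described above under stated assumptions, not the gem's actual code; the joining of multiple fields is an assumption):

```ruby
# Append a sorted fieldset to the cache namespace when fields are given.
def cache_namespace(base, fields)
  return base if fields.nil? || fields.empty?
  "#{base}-fieldset:#{fields.map(&:to_s).sort.join('_')}"
end

cache_namespace('jsonapi-serializer', [:first_name])
# => "jsonapi-serializer-fieldset:first_name"
cache_namespace('jsonapi-serializer', nil)
# => "jsonapi-serializer"
```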

Params

In some cases, attribute values might require more information than what is available on the record, for example, access privileges or other information related to a current authenticated user. The options[:params] value covers these cases by allowing you to pass in a hash of additional parameters necessary for your use case.

Leveraging params is easy: when you define a custom id, attribute, or relationship with a block, you opt in to using params by adding it as a block parameter.

class MovieSerializer
  include JSONAPI::Serializer

  set_id do |movie, params|
    # in here, params is a hash containing the `:admin` key
    params[:admin] ? movie.owner_id : "movie-#{movie.id}"
  end

  attributes :name, :year
  attribute :can_view_early do |movie, params|
    # in here, params is a hash containing the `:current_user` key
    params[:current_user].is_employee? ? true : false
  end

  belongs_to :primary_agent do |movie, params|
    # in here, params is a hash containing the `:current_user` key
    params[:current_user]
  end
end

# ...
current_user = User.find(cookies[:current_user_id])
serializer = MovieSerializer.new(movie, {params: {current_user: current_user}})
serializer.serializable_hash

Custom attributes and relationships that only receive the resource are still possible by defining the block to only receive one argument.

Conditional Attributes

Conditional attributes can be defined by passing a Proc to the if key on the attribute method. Return true if the attribute should be serialized, and false if not. The record and any params passed to the serializer are available inside the Proc as the first and second parameters, respectively.

class MovieSerializer
  include JSONAPI::Serializer

  attributes :name, :year
  attribute :release_year, if: Proc.new { |record|
    # Release year will only be serialized if it's greater than 1990
    record.release_year > 1990
  }

  attribute :director, if: Proc.new { |record, params|
    # The director will be serialized only if the :admin key of params is true
    params && params[:admin] == true
  }

  # Custom attribute `name_year` will only be serialized if both `name` and `year` fields are present
  attribute :name_year, if: Proc.new { |record|
    record.name.present? && record.year.present?
  } do |object|
    "#{object.name} - #{object.year}"
  end
end

# ...
current_user = User.find(cookies[:current_user_id])
serializer = MovieSerializer.new(movie, { params: { admin: current_user.admin? }})
serializer.serializable_hash

Conditional Relationships

Conditional relationships can be defined by passing a Proc to the if key. Return true if the relationship should be serialized, and false if not. The record and any params passed to the serializer are available inside the Proc as the first and second parameters, respectively.

class MovieSerializer
  include JSONAPI::Serializer

  # Actors will only be serialized if the record has any associated actors
  has_many :actors, if: Proc.new { |record| record.actors.any? }

  # Owner will only be serialized if the :admin key of params is true
  belongs_to :owner, if: Proc.new { |record, params| params && params[:admin] == true }
end

# ...
current_user = User.find(cookies[:current_user_id])
serializer = MovieSerializer.new(movie, { params: { admin: current_user.admin? }})
serializer.serializable_hash

Specifying a Relationship Serializer

In many cases, the relationship can automatically detect the serializer to use.

class MovieSerializer
  include JSONAPI::Serializer

  # resolves to StudioSerializer
  belongs_to :studio
  # resolves to ActorSerializer
  has_many :actors
end

At other times, such as when a property name differs from the class name, you may need to explicitly state the serializer to use. You can do so by specifying a different symbol or the serializer class itself (which is the recommended usage):

class MovieSerializer
  include JSONAPI::Serializer

  # resolves to MovieStudioSerializer
  belongs_to :studio, serializer: :movie_studio
  # resolves to PerformerSerializer
  has_many :actors, serializer: PerformerSerializer
end

For more advanced cases, such as polymorphic relationships and Single Table Inheritance, you may need even greater control to select the serializer based on the specific object or on serialization parameters. You can do so by defining the serializer as a Proc:

class MovieSerializer
  include JSONAPI::Serializer

  has_many :actors, serializer: Proc.new do |record, params|
    if record.comedian?
      ComedianSerializer
    elsif params[:use_drama_serializer]
      DramaSerializer
    else
      ActorSerializer
    end
  end
end

Ordering has_many Relationship

You can order the has_many relationship by providing a block:

class MovieSerializer
  include JSONAPI::Serializer

  has_many :actors do |movie|
    movie.actors.order(position: :asc)
  end
end

Sparse Fieldsets

Attributes and relationships can be selectively returned per record type by using the fields option.

class MovieSerializer
  include JSONAPI::Serializer

  attributes :name, :year
end

serializer = MovieSerializer.new(movie, { fields: { movie: [:name] } })
serializer.serializable_hash

Using helper methods

You can mix in code from another Ruby module into your serializer class to reuse functions across your app.

Since a serializer is evaluated in the context of a class rather than an instance of a class, you need to make sure that your methods act as class methods when mixed in.

Using ActiveSupport::Concern


module AvatarHelper
  extend ActiveSupport::Concern

  class_methods do
    def avatar_url(user)
      user.image.url
    end
  end
end

class UserSerializer
  include JSONAPI::Serializer

  include AvatarHelper # mixes in your helper method as class method

  set_type :user

  attributes :name, :email

  attribute :avatar do |user|
    avatar_url(user)
  end
end

Using Plain Old Ruby

module AvatarHelper
  def avatar_url(user)
    user.image.url
  end
end

class UserSerializer
  include JSONAPI::Serializer

  extend AvatarHelper # mixes in your helper method as class method

  set_type :user

  attributes :name, :email

  attribute :avatar do |user|
    avatar_url(user)
  end
end

Customizable Options

  • set_type: Type name of Object. Example: set_type :movie
  • key: Key of Object. Example: belongs_to :owner, key: :user
  • set_id: ID of Object. Example: set_id :owner_id or set_id { |record, params| params[:admin] ? record.id : "#{record.name.downcase}-#{record.id}" }
  • cache_options: Hash with store to enable caching and optional further cache options. Example: cache_options store: ActiveSupport::Cache::MemoryStore.new, expires_in: 5.minutes
  • id_method_name: Set custom method name to get ID of an object (if a block is provided for the relationship, id_method_name is invoked on the return value of the block instead of the resource object). Example: has_many :locations, id_method_name: :place_ids
  • object_method_name: Set custom method name to get related objects. Example: has_many :locations, object_method_name: :places
  • record_type: Set custom Object Type for a relationship. Example: belongs_to :owner, record_type: :user
  • serializer: Set custom Serializer for a relationship. Examples: has_many :actors, serializer: :custom_actor; has_many :actors, serializer: MyApp::Api::V1::ActorSerializer; or has_many :actors, serializer: ->(object, params) { (return a serializer class) }
  • polymorphic: Allows different record types for a polymorphic association. Example: has_many :targets, polymorphic: true
  • polymorphic: Sets custom record types for each object class in a polymorphic association. Example: has_many :targets, polymorphic: { Person => :person, Group => :group }

Performance Instrumentation

Performance instrumentation is available via active_support/notifications.

To enable it, include the module in your serializer class:

require 'jsonapi/serializer'
require 'jsonapi/serializer/instrumentation'

class MovieSerializer
  include JSONAPI::Serializer
  include JSONAPI::Serializer::Instrumentation

  # ...
end

Skylight integration is also available and supported by us; follow the Skylight documentation to enable it.

Running Tests

The project includes unit, functional, and performance tests. To run the tests, use the following command:

rspec

Deserialization

We currently do not support deserialization, but we recommend using one of the following gems:

JSONAPI.rb

This gem provides the following features alongside deserialization:

  • Collection meta
  • Error handling
  • Includes and sparse fields
  • Filtering and sorting
  • Pagination

Migrating from Netflix/fast_jsonapi

If you come from Netflix/fast_jsonapi, here are the instructions for switching.

Modify your Gemfile

- gem 'fast_jsonapi'
+ gem 'jsonapi-serializer'

Replace all constant references

class MovieSerializer
- include FastJsonapi::ObjectSerializer
+ include JSONAPI::Serializer
end

Replace removed methods

- json_string = MovieSerializer.new(movie).serialized_json
+ json_string = MovieSerializer.new(movie).serializable_hash.to_json

Replace require references

- require 'fast_jsonapi'
+ require 'jsonapi/serializer'

Update your cache options

See docs.

- cache_options enabled: true, cache_length: 12.hours
+ cache_options store: Rails.cache, namespace: 'jsonapi-serializer', expires_in: 1.hour

Contributing

Please follow the instructions we provide as part of the issue and pull request creation processes.

This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.


Author: jsonapi-serializer
Source code: https://github.com/jsonapi-serializer/jsonapi-serializer
License: Apache-2.0 license

#ruby  #ruby-on-rails 


James Wilhelm


How to use fetch in JavaScript: GET, POST, PUT and DELETE requests

https://youtu.be/hOXWY9Ng_KU

How to use fetch in JavaScript: GET, POST, PUT and DELETE requests!

Also covered is error handling, using catch() and by querying the response object.
#javascript #api #fetch #webdevelopment #programming 

🔔 Subscribe for more tutorials just like this: 
http://www.youtube.com/channel/UC26_rFZReLXNu8bULVARUXg?sub_confirmation=1


Code JS


JavaScript Fetch API Explained | Callbacks, Promises, Async Await

Learn about Callbacks, Promises, and Async Await as the JavaScript Fetch API is explained in this tutorial. You will also learn about thenables and how async / await replaces them in our JS code. The first 30 minutes covers the concepts. The last 30 minutes gives examples of retrieving data from different APIs with Fetch.

Quick Concepts outline:
Fetch API with Async / Await
(0:00) Intro
(0:29) What is a callback function?
(1:15) What is the problem with callbacks?
(3:00) JavaScript Promises have 3 states
(5:28) A promise may not return a value where you expect it to: You need to wait for a promise to resolve
(6:58) Using thenables with a promise
(20:15) An easy mistake to make with promises
(24:00) Creating an async function
(25:00) Applying await inside the function
(33:45) Example 1: Retrieving user data
(40:00) Example 2: Retrieving dad jokes
(47:00) Example 3: Posting data
(49:40) Example 4: Retrieving data with URL parameters
(54:55) Abstract it all into single responsibility functions

Subscribe: https://www.youtube.com/c/DaveGrayTeachesCode/featured 

#async  #await  #fetch #javascript 

JavaScript Fetch API Explained | Callbacks, Promises, Async Await
Desmond Gerber

1651965300

Got-fetch: A Fetch interface to Got

got-fetch 

A fetch-compatible wrapper around got for those times when you need to fetch stuff over HTTP 😉

Why would you use this instead of got? Sometimes you might need a fetch wrapper and this is it (e.g. Apollo uses fetch to query remote schemas).

Install

Support table:

got-fetch version | works with got version | Notes
^5.0.0 | ^12.0.0 | ESM package. You have to use import
^4.0.0 | ^11.0.0 | CJS package. You can use require

got is a peer dependency so you will need to install it alongside got-fetch:

npm install --save got got-fetch

For CommonJS support, we maintain v4 of this package.

Usage

Use the default export:

import fetch from 'got-fetch';

// in ESM we can use top-level await
const resp = await fetch('https://example.com');

console.log(resp.status); // 200
console.log(await resp.text()); // an HTML document

The module also exports a function which allows you to use your own custom got instance:

import got from 'got';
import { createFetch } from 'got-fetch';

const myGot = got.extend({
  headers: {
    'x-api-key': 'foo bar'
  }
});

const fetch = createFetch(myGot);

// this request will send the header `x-api-key: foo bar`
fetch('https://example.com');

Limitations

fetch is designed for browser environments, and this package is just a wrapper around a Node-based HTTP client, so not all fetch features are supported.

Author: Alexghr
Source Code: https://github.com/alexghr/got-fetch 
License: MIT license

#node #fetch #http 

Got-fetch: A Fetch interface to Got

How to Build a Simple Web Scraper using Node.JS, Fetch and Cheerio

In this tutorial, we are going to scrape the Formula 1 Drivers 2022 from the official Formula 1 website using Node.JS, Node-Fetch and Cheerio. The main reason why I chose Node-Fetch and Cheerio is simply that many people will be familiar with the syntax and both of them are very easy to use and understand.

Cheerio uses core jQuery which makes selecting elements extremely easy and if you have worked with Fetch on the front end of the web, then this would be very familiar to you. Also, I need to mention that Fetch will be bundled in Node.js in the near future so you won’t have to install it as a separate package. That’s always a plus.

GitHub:           https://www.github.com/RaddyTheBrand

Chapters:
0:00 Introduction: 
0:14 What is Web Scraping
1:09 Is Web Scraping Illegal?
1:47 Create Web Scraper
32:06 End

#node #nodejs #fetch #Cheerio #webscraping 

How to Build a Simple Web Scraper using Node.JS, Fetch and Cheerio

AbortController polyfill for abortable fetch()

AbortController polyfill for abortable fetch()

Minimal stubs so that the AbortController DOM API for terminating fetch() requests can be used in browsers that don't yet implement it. This "polyfill" doesn't actually close the connection when the request is aborted, but it will call .catch() with err.name == 'AbortError' instead of .then().

const controller = new AbortController();
const signal = controller.signal;
fetch('/some/url', {signal})
  .then(res => res.json())
  .then(data => {
    // do something with "data"
  }).catch(err => {
    if (err.name == 'AbortError') {
      return;
    }
  });
// controller.abort(); // can be called at any time

You can read about the AbortController API in the DOM specification.
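For a quick standalone feel of that behavior, here is a minimal sketch of the same abort-then-catch pattern, assuming a runtime with a native or polyfilled AbortController (abortablePromise is our own illustrative helper, not part of this package):

```javascript
// Reject a pending promise with an AbortError when the signal fires,
// mirroring how an aborted fetch() lands in .catch().
function abortError() {
  const err = new Error('Aborted');
  err.name = 'AbortError';
  return err;
}

function abortablePromise(signal, executor) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(abortError());
    signal.addEventListener('abort', () => reject(abortError()));
    executor(resolve, reject);
  });
}

const controller = new AbortController();
const task = abortablePromise(controller.signal, (resolve) => {
  setTimeout(resolve, 1000, 'done'); // never settles first; we abort below
});

task.catch((err) => {
  if (err.name === 'AbortError') {
    console.log('aborted'); // the same branch an aborted fetch() would hit
  }
});

controller.abort();
```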

How to use

$ npm install --save abortcontroller-polyfill

If you're using webpack or similar, you then import it early in your client entrypoint .js file using

import 'abortcontroller-polyfill/dist/polyfill-patch-fetch'
// or:
require('abortcontroller-polyfill/dist/polyfill-patch-fetch')

Using it on browsers without fetch

If you need to support browsers where fetch is not available at all (for example Internet Explorer 11), you first need to install a fetch polyfill and then import the abortcontroller-polyfill afterwards.

The unfetch npm package offers a minimal fetch() implementation (though it does not offer for example a Request class). If you need a polyfill that implements the full Fetch specification, use the whatwg-fetch npm package instead. Typically you will also need to load a polyfill that implements ES6 promises, for example promise-polyfill, and of course you need to avoid ES6 arrow functions and template literals.

Example projects showing an abortable fetch setup that works even in Internet Explorer 11, using both unfetch and GitHub fetch, are available here.

Using it along with 'create-react-app'

create-react-app enforces the no-undef eslint rule at compile time, so if your version of eslint does not list AbortController etc. as a known global for the browser environment, you might run into a compile error like:

  'AbortController' is not defined  no-undef

This can be worked around by (temporarily, details here) adding a declaration like:

  const AbortController = window.AbortController;

Using the AbortController/AbortSignal without patching fetch

If you just want to polyfill AbortController/AbortSignal without patching fetch you can use:

import 'abortcontroller-polyfill/dist/abortcontroller-polyfill-only'

Using it on Node.js

You can either import it as a ponyfill without modifying globals:

const { AbortController, abortableFetch } = require('abortcontroller-polyfill/dist/cjs-ponyfill');
const { fetch } = abortableFetch(require('node-fetch'));
// or
// import { AbortController, abortableFetch } from 'abortcontroller-polyfill/dist/cjs-ponyfill';
// import _fetch from 'node-fetch';
// const { fetch } = abortableFetch(_fetch);

or if you're lazy

global.fetch = require('node-fetch');
require('abortcontroller-polyfill/dist/polyfill-patch-fetch');

If you also need a Request class with support for aborting you can do:

const { AbortController, abortableFetch } = require('abortcontroller-polyfill/dist/cjs-ponyfill');
const _nodeFetch = require('node-fetch');
const { fetch, Request } = abortableFetch({fetch: _nodeFetch, Request: _nodeFetch.Request});

const controller = new AbortController();
const signal = controller.signal;
controller.abort();
fetch(new Request("http://api.github.com", {signal}))
  .then(r => r.json())
  .then(j => console.log(j))
  .catch(err => {
      if (err.name === 'AbortError') {
          console.log('aborted');
      }
  })

See also Node.js examples here

Using it on Internet Explorer 11 (MSIE11)

The abortcontroller-polyfill works on Internet Explorer 11. However, to use it you must first install separate polyfills for promises and for fetch(). For the promise polyfill, you can use the promise-polyfill package from npm, and for fetch() you can use either the whatwg-fetch npm package (complete fetch implementation) or the unfetch npm package (not a complete polyfill but it's only 500 bytes large and covers a lot of the basic use cases).

If you choose unfetch, the imports should be done in this order for example:

import 'promise-polyfill/src/polyfill';
import 'unfetch/polyfill';
import 'abortcontroller-polyfill';

See example code here.

Using it on Internet Explorer 8 (MSIE8)

The abortcontroller-polyfill works on Internet Explorer 8. However, since github-fetch only supports IE 10+ you need to use the fetch-ie8 npm package instead and also note that IE 8 only implements ES 3 so you need to use the es5-shim package (or similar). Finally, just like with IE 11 you also need to polyfill promises. One caveat is that CORS requests will not work out of the box on IE 8.

Here is a basic example of abortable fetch running in IE 8.

Contributors

Author: Mo
Source Code: https://github.com/mo/abortcontroller-polyfill 
License: MIT License

#javascript #fetch #browser 

AbortController polyfill for abortable fetch()
Elian Harber

1649926080

Guble: Websocket Based Messaging Server Written in Golang

Guble Messaging Server

Guble is a simple user-facing messaging and data replication server written in Go.

Overview

Guble is in an early state (release 0.4). It is already working well and is very useful, but the protocol, API and storage formats may still change (until reaching 0.7). If you intend to use guble, please get in contact with us.

The goal of guble is to be a simple and fast message bus for user interaction and replication of data between multiple devices:

  • Very easy consumption of messages with web and mobile clients
  • Fast realtime messaging, as well as playback of messages from a persistent commit log
  • Reliable and scalable over multiple nodes
  • User-aware semantics to easily support messaging scenarios between people using multiple devices
  • Batteries included: usable as front-facing server, without the need of a proxy layer
  • Self-contained: no mandatory dependencies to other services

Working Features (0.4)

  • Publishing and subscription of messages to topics and subtopics
  • Persistent message store with transparent live and offline fetching
  • WebSocket and REST APIs for message publishing
  • Commandline client and Go client library
  • Firebase Cloud Messaging (FCM) adapter: delivery of messages as FCM push notifications
  • Docker images for server and client
  • Simple Authentication and Access-Management
  • Clean shutdown
  • Improved logging using logrus and logstash formatter
  • Health-Check with Endpoint
  • Collection of Basic Metrics, with Endpoint
  • Added Postgresql as KV Backend
  • Load testing with 5000 messages per instance
  • Support for Apple Push Notification services (a new connector alongside Firebase)
  • Upgrade, cleanup, abstraction, documentation, and test coverage of the Firebase connector
  • GET list of subscribers / list of topics per subscriber (userID , deviceID)
  • Support for SMS-sending using Nexmo (a new connector alongside Firebase)

Throughput

Measured on an old notebook with an i5-2520M (dual core) and an SSD. The message payload was 'Hello World'. The load driver and server were set up on the same machine, so 50% of the CPU was allocated to the load driver.

  • End-to-end: delivery of ~35,000 persistent messages per second
  • Fetching: receipt of ~70,000 persistent messages per second

During the tests, the memory consumption of the server was around ~25 MB.

Roadmap

This is the current (and fast changing) roadmap and todo list:

Roadmap Release 0.5

  • Replication across multiple servers (in a Guble cluster)
  • Acknowledgement of message delivery for connectors
  • Storing the sequence-Id of topics in KV store, if we turn off persistence
  • Filtering of messages in guble server (e.g. sent by the REST client) according to URL parameters: UserID, DeviceID, Connector name
  • Updating README to show subscribe/unsubscribe/get/posting, health/metrics

Roadmap Release 0.6

  • Make notification messages optional by client configuration
  • Correct behaviour of receive command with maxCount on subtopics
  • Cancel of fetch in the message store and multiple concurrent fetch commands for the same topic
  • Configuration of different persistence strategies for topics
  • Delivery semantics: user must read on one device / deliver only to one device / notify if not connected, etc.
  • User-specific persistent subscriptions across all clients of the user
  • Client: (re-)setup of subscriptions after client reconnect
  • Message size limit configurable by the client with fetching by URL

Roadmap Release 0.7

  • HTTPS support in the service
  • Minimal example: chat application
  • Stable JavaScript client: https://github.com/smancke/guble-js
  • (TBD) Improved authentication and access-management
  • (TBD) Add Consul as KV Backend
  • (TBD) Index-based search of messages using GoLucene

Guble Docker Image

We are providing Docker images of the server and client for your convenience.

Start the Guble Server

There is an automated Docker build for the master at the Docker Hub. To start the server with Docker simply type:

docker run -p 8080:8080 smancke/guble

To see available configuration options:

docker run smancke/guble --help

All options can be supplied on the commandline or by a corresponding environment variable with the prefix GUBLE_. So to let guble be more verbose, you can either use:

docker run smancke/guble --log=info

or

docker run -e GUBLE_LOG=info smancke/guble

The Docker image has a volume mount point at /var/lib/guble, so if you want to bind-mount the persistent storage from your host you should use:

docker run -p 8080:8080 -v /host/storage/path:/var/lib/guble smancke/guble

Connecting with the Guble Client

The Docker image includes the guble commandline client guble-cli. You can execute it within a running guble container and connect to the server:

docker run -d --name guble smancke/guble
docker exec -it guble /usr/local/bin/guble-cli

Visit the guble-cli documentation for more details.

Build and Run

Since Go makes it very easy to build from source, you can compile guble using a single command. A prerequisite is having an installed Go environment and an empty directory:

sudo apt-get install golang
mkdir guble && cd guble
export GOPATH=`pwd`

Build and Start the Server

Build and start guble with the following commands (assuming that directory /var/lib/guble is already created with read-write rights for the current user):

go get github.com/smancke/guble
bin/guble --log=info

Configuration

Options are listed as: CLI option (env variable): values (default). Description.

  • --env (GUBLE_ENV): development | integration | preproduction | production (default: development). Name of the environment on which the application is running. Used mainly for logging.
  • --health-endpoint (GUBLE_HEALTH_ENDPOINT): resource/path/to/healthendpoint (default: /admin/healthcheck). The health endpoint to be used by the HTTP server. Can be disabled by setting the value to "".
  • --http (GUBLE_HTTP_LISTEN): format [host]:port. The address for the HTTP server to listen on.
  • --kvs (GUBLE_KVS): memory | file | postgres (default: file). The storage backend for the key-value store.
  • --log (GUBLE_LOG): panic | fatal | error | warn | info | debug (default: error). The log level at which the process logs.
  • --metrics-endpoint (GUBLE_METRICS_ENDPOINT): resource/path/to/metricsendpoint (default: /admin/metrics). The metrics endpoint to be used by the HTTP server. Can be disabled by setting the value to "".
  • --ms (GUBLE_MS): memory | file (default: file). The message storage backend.
  • --profile (GUBLE_PROFILE): cpu | mem | block. The profiler to be used.
  • --storage-path (GUBLE_STORAGE_PATH): path/to/storage (default: /var/lib/guble). The path for storing messages and key-value data like subscriptions. The path must exist!

APNS

  • --apns (GUBLE_APNS): true | false (default: false). Enable the APNS module in general, as well as the connector to the development endpoint.
  • --apns-production (GUBLE_APNS_PRODUCTION): true | false (default: false). Enable the connector to the APNS production endpoint; requires the apns option to be set.
  • --apns-cert-file (GUBLE_APNS_CERT_FILE): path/to/cert/file. The APNS certificate file name; use this as an alternative to the certificate bytes option.
  • --apns-cert-bytes (GUBLE_APNS_CERT_BYTES): cert-bytes-as-hex-string. The APNS certificate bytes; use this as an alternative to the certificate file option.
  • --apns-cert-password (GUBLE_APNS_CERT_PASSWORD): password. The APNS certificate password.
  • --apns-app-topic (GUBLE_APNS_APP_TOPIC): topic. The APNS topic (as used by the mobile application).
  • --apns-prefix (GUBLE_APNS_PREFIX): prefix (default: /apns/). The APNS prefix / endpoint.
  • --apns-workers (GUBLE_APNS_WORKERS): number of workers (default: number of CPUs). The number of workers handling traffic with APNS.

SMS

  • sms (GUBLE_SMS): true | false (default: false). Enable the SMS gateway.
  • sms_api_key (GUBLE_SMS_API_KEY): api key. The Nexmo API key for sending SMS.
  • sms_api_secret (GUBLE_SMS_API_SECRET): api secret. The Nexmo API secret for sending SMS.
  • sms_topic (GUBLE_SMS_TOPIC): topic (default: /sms). The topic for the SMS route.
  • sms_workers (GUBLE_SMS_WORKERS): number of workers (default: number of CPUs). The number of workers handling traffic with the Nexmo SMS endpoint.

FCM

  • --fcm (GUBLE_FCM): true | false (default: false). Enable the Google Firebase Cloud Messaging connector.
  • --fcm-api-key (GUBLE_FCM_API_KEY): api key. The Google API key for Firebase Cloud Messaging.
  • --fcm-workers (GUBLE_FCM_WORKERS): number of workers (default: number of CPUs). The number of workers handling traffic with Firebase Cloud Messaging.
  • --fcm-endpoint (GUBLE_FCM_ENDPOINT): url (default: https://fcm.googleapis.com/fcm/send). The Google Firebase Cloud Messaging endpoint.
  • --fcm-prefix (GUBLE_FCM_PREFIX): prefix (default: /fcm/). The FCM prefix / endpoint.

Postgres

  • --pg-host (GUBLE_PG_HOST): hostname (default: localhost). The PostgreSQL hostname.
  • --pg-port (GUBLE_PG_PORT): port (default: 5432). The PostgreSQL port.
  • --pg-user (GUBLE_PG_USER): user (default: guble). The PostgreSQL user.
  • --pg-password (GUBLE_PG_PASSWORD): password (default: guble). The PostgreSQL password.
  • --pg-dbname (GUBLE_PG_DBNAME): database (default: guble). The PostgreSQL database name.

Run All Tests

go get -t github.com/smancke/guble/...
go test github.com/smancke/guble/...

Clients

The following clients are available:

Protocol Reference

REST API

Currently there is a minimalistic REST API, just for publishing messages.

POST /api/message/<topic>

URL parameters:

  • userId: The PublisherUserId
  • messageId: The PublisherMessageId

Headers

You can set fields in the header JSON of the message by providing the corresponding HTTP headers with the prefix X-Guble-.

Curl example with the resulting message:

curl -X POST -H "x-Guble-Key: Value" --data Hello 'http://127.0.0.1:8080/api/message/foo?userId=marvin&messageId=42'

Results in:

16,/foo,marvin,VoAdxGO3DBEn8vv8,42,1451236804
{"Key":"Value"}
Hello

WebSocket Protocol

The communication with the guble server is done by ordinary WebSockets, using a binary encoding.

Message Format

All payload messages sent from the server to the client use the following format:

<path:string>,<sequenceId:int64>,<publisherUserId:string>,<publisherApplicationId:string>,<publisherMessageId:string>,<messagePublishingTime:unix-timestamp>\n
[<application headers json>]\n
<body>

example 1:
/foo/bar,42,user01,phone1,id123,1420110000
{"Content-Type": "text/plain", "Correlation-Id": "7sdks723ksgqn"}
Hello World

example 2:
/foo/bar,42,user01,54sdcj8sd7,id123,1420110000

anyByteData
  • All text formats are assumed to be UTF-8 encoded.
  • Message sequenceIds are int64 and distinct within a topic. They increase strictly monotonically with message age, but there is no guarantee that messages arrive in that order during transmission.
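The framing above is simple enough to parse by hand. As an illustration only (the field names are taken from the format line; this is not an official guble client), a small JavaScript parser:

```javascript
// Parse a guble payload frame: metadata line, optional JSON header line, body.
function parseGubleMessage(frame) {
  const [meta, headerLine, ...bodyLines] = frame.split('\n');
  const [path, sequenceId, publisherUserId,
         publisherApplicationId, publisherMessageId, time] = meta.split(',');
  return {
    path,
    sequenceId: parseInt(sequenceId, 10),
    publisherUserId,
    publisherApplicationId,
    publisherMessageId,
    messagePublishingTime: parseInt(time, 10),
    headers: headerLine ? JSON.parse(headerLine) : {}, // empty line -> no headers
    body: bodyLines.join('\n'),
  };
}

const msg = parseGubleMessage(
  '/foo/bar,42,user01,phone1,id123,1420110000\n' +
  '{"Content-Type": "text/plain", "Correlation-Id": "7sdks723ksgqn"}\n' +
  'Hello World'
);
console.log(msg.path, msg.sequenceId, msg.body); // → /foo/bar 42 Hello World
```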

Client Commands

The client can send the following commands.

Send

Publish a message to a topic:

> <path> [<publisherMessageId>]\n
[<header>\n]..
\n
<body>

example:
> /foo

Hello World

Subscribe/Receive

Receive messages from a path (e.g. a topic or subtopic). This command can be used to subscribe for incoming messages on a topic, as well as for replaying the message history.

+ <path> [<startId>[,<maxCount>]]
  • path: the topic to receive the messages from
  • startId: the message id at which to start the replay
    • If no startId is given, only future messages will be received (simple subscribe).
    • If the startId is negative, it is interpreted as a relative count of the last messages in the history.
  • maxCount: the maximum number of messages to replay

Note: Currently, the fetching of stored messages does not recognize subtopics.

Examples:

+ /foo         # Subscribe to all future messages matching /foo
+ /foo/bar     # Subscribe to all future messages matching /foo/bar

+ /foo 0       # Receive all messages from the topic and subscribe for further incoming messages.

+ /foo 42      # Receive all messages with message ids >= 42
               # from the topic and subscribe for further incoming messages.

+ /foo 0 20    # Receive the first (oldest) 20 messages within the topic and stop.
               # (If the topic has fewer messages, it will stop after receiving all existing ones.)

+ /foo -20     # Receive the last (newest) 20 messages from the topic and then
               # subscribe for further incoming messages.

+ /foo -20 20  # Receive the last (newest) 20 messages within the topic and stop.
               # (If the topic has fewer messages, it will stop after receiving all existing ones.)

Unsubscribe/Cancel

Cancel further receiving of messages from a path (e.g. a topic or subtopic).

- <path>

example:
- /foo
- /foo/bar

Server Status Messages

The server sends status messages to the client. All positive status messages start with #. Status messages reporting an error start with !. Status messages use the following format.

'#'<msgType> <Explanation text>\n
<json data>
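As an illustration of that format, a small sketch (our own helper, not part of guble) that splits a status frame into its type, explanation text, and JSON payload:

```javascript
// Split a guble status frame: "#type explanation\n{json}" (or "!error-... text").
function parseStatusMessage(frame) {
  const newline = frame.indexOf('\n');
  const firstLine = newline === -1 ? frame : frame.slice(0, newline);
  const data = newline === -1 ? null : JSON.parse(frame.slice(newline + 1));
  const space = firstLine.indexOf(' ');
  return {
    isError: firstLine.startsWith('!'),          // '!' marks error notifications
    msgType: space === -1 ? firstLine.slice(1) : firstLine.slice(1, space),
    text: space === -1 ? '' : firstLine.slice(space + 1),
    data,
  };
}

const status = parseStatusMessage(
  '#connected You are connected to the server.\n' +
  '{"ApplicationId": "phone1", "UserId": "user01", "Time": "1420110000"}'
);
console.log(status.msgType, status.data.UserId); // → connected user01
```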

Connection Message

#ok-connected You are connected to the server.\n
{"ApplicationId": "the app id", "UserId": "the user id", "Time": "the server time as unix timestamp "}

Example:

#connected You are connected to the server.
{"ApplicationId": "phone1", "UserId": "user01", "Time": "1420110000"}

Send Success Notification

This notification confirms that the messaging system has successfully received the message and is now starting to transmit it to the subscribers:

#send <publisherMessageId>
{"sequenceId": "sequence id", "path": "/foo", "publisherMessageId": "publishers message id", "messagePublishingTime": "unix-timestamp"}

Receive Success Notification

Depending on the type of + (receive) command, up to three different notification messages will be sent back. Be aware that a server may send more receive notifications than you would have expected in the first place, e.g. when:

  • Additional messages are stored, while the first fetching is in progress
  • The server decides to meanwhile stop the online subscription and change to fetching, because your client is too slow to read all incoming messages.

When the fetch operation starts:

#fetch-start <path> <count>
  • path: the topic path
  • count: the number of messages that will be returned

When the fetch operation is done:

#fetch-done <path>
  • path: the topic path

When the subscription to new messages was taken:

#subscribed-to <path>
  • path: the topic path

Unsubscribe Success Notification

An unsubscribe/cancel operation is confirmed by the following notification:

#canceled <path>

Send Error Notification

This message indicates that the message could not be delivered.

!error-send <publisherMessageId> <error text>
{"sequenceId": "sequence id", "path": "/foo", "publisherMessageId": "publishers message id", "messagePublishingTime": "unix-timestamp"}

Bad Request

This notification has the same meaning as the http 400 Bad Request.

!error-bad-request unknown command 'sdcsd'

Internal Server Error

This notification has the same meaning as the http 500 Internal Server Error.

!error-server-internal this computing node has problems

Topics

Messages are routed hierarchically by topics, so each message is addressed by a path with segments separated by /. The server takes care that a message gets delivered only once, even if it is matched by multiple subscription paths.

Subtopics

The path delimiter provides subtopic semantics: a subscription to a parent topic (e.g. /foo) also receives all messages of its subtopics (e.g. /foo/bar).
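That delivery rule boils down to a path-prefix check; as a sketch (a hypothetical helper, not guble code):

```javascript
// Does a subscription path cover a message path (the topic itself or any subtopic)?
function covers(subscription, messagePath) {
  return messagePath === subscription ||
         messagePath.startsWith(subscription + '/');
}

console.log(covers('/foo', '/foo/bar')); // true: /foo/bar is a subtopic of /foo
console.log(covers('/foo', '/foobar'));  // false: /foobar is a different topic
```

Note the trailing '/' in the prefix test: it is what keeps /foobar from matching a subscription to /foo.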

Author: Smancke
Source Code: https://github.com/smancke/guble 
License: MIT License

#go #golang #server 

Guble: Websocket Based Messaging Server Written in Golang
Veronica Roob

1649539620

Fetch: An IMAP library for PHP

Fetch

Fetch is a library for reading email and attachments, primarily using the POP and IMAP protocols.

Installing

N.B.: On Ubuntu 14.04 (and probably other Debian-based, apt-managed systems), installing php5-imap does not enable the extension for the CLI (and possibly other SAPIs), which can cause Composer to report that fetch requires ext-imap. This can be fixed with:

sudo ln -s /etc/php5/mods-available/imap.ini /etc/php5/cli/conf.d/30-imap.ini

Composer

Installing Fetch can be done through a variety of methods, although Composer is recommended.

Until Fetch reaches a stable API with version 1.0, it is recommended that you review changes before even minor updates, although bug fixes will always be backwards compatible.

"require": {
  "tedivm/fetch": "0.7.*"
}

Pear

Fetch is also available through Pear.

$ pear channel-discover pear.tedivm.com
$ pear install tedivm/Fetch

Github

Releases of Fetch are available on Github.

Sample Usage

This is a simple example showing how to access messages using Fetch. It uses Fetch's own autoloader, but that can (and should, if applicable) be replaced with the one generated by Composer.

use Fetch\Server;
use Fetch\Message;

$server = new Server('imap.example.com', 993);
$server->setAuthentication('username', 'password');

/** @var Message[] $messages */
$messages = $server->getMessages();

foreach ($messages as $message) {
    echo "Subject: {$message->getSubject()}", PHP_EOL;
    echo "Body: {$message->getMessageBody()}", PHP_EOL;
}

License

Fetch is licensed under the BSD License. See the LICENSE file for details.

Author: tedious
Source Code: https://github.com/tedious/Fetch
License: View license

#php #fetch 

Fetch: An IMAP library for PHP

Fetch Event Source for Server-Sent Events in React

The critical piece of any full-stack application is the connection between the frontend and the backend. Generally, communication is implemented by the client making a request to the server and the server returning a response with the data.

This gives users control over when to receive the data, but there may be specific cases where the traditional request-and-response approach is not enough.

Take web applications with real-time content, such as live game scores, stock prices, or notifications on Twitter, for example. In these cases, the user does not control when the information is updated and therefore does not know when to make a request. Yet the information displayed in the application is always fresh and up to date.

This functionality is achieved using server-sent events, which help developers build dynamic applications with a seamless user experience.

In this tutorial, we will explore how server-sent events work, focusing on Fetch Event Source, an open source package designed specifically for this purpose and developed by Microsoft and contributors to help developers gain more control over real-time data from the server.

What are server-sent events?

Server-sent events (SSE) are one-way events sent from the server to the client over the Hypertext Transfer Protocol (HTTP). The server sends the events as soon as they occur, which means the user has access to real-time data:

Because the user cannot directly influence server-sent events while they are being sent, all necessary parameters must be sent in the connection request and processed on the server, so that it knows which real-time data the user needs access to.
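On the wire, SSE is plain text with the text/event-stream media type: each event is one or more data: lines terminated by a blank line. A minimal formatter sketch (illustrative only, not taken from this tutorial's code):

```javascript
// Format one server-sent event: optional event name, data lines, blank-line terminator.
function formatSSE(data, eventName) {
  let frame = '';
  if (eventName) frame += `event: ${eventName}\n`;
  for (const line of String(data).split('\n')) {
    frame += `data: ${line}\n`; // multi-line payloads become multiple data: lines
  }
  return frame + '\n'; // the blank line tells the client the event is complete
}

const frame = formatSSE(JSON.stringify({ aTechStockPrice: 23.5 }));
// frame === 'data: {"aTechStockPrice":23.5}\n\n'
```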

The traditional way of working with server-sent events is through the EventSource API interface, which is included in the W3C HTML specification. It provides the basis for creating a connection to the server, receiving messages from the server, and surfacing errors.

Unfortunately, the EventSource API is a primitive interface and has many limitations. We will review them below and show alternative solutions from Fetch Event Source, which allows greater customization and control over how to make a request and get a response.

Why choose Fetch Event Source?

As the name suggests, the main advantage of Fetch Event Source is the ability to use all the extended features provided by the Fetch API. This means users can send custom request methods, headers, and even bodies with specific parameters to the server. By contrast, the EventSource API only allows sending the url and withCredentials properties.

When working with Fetch Event Source, the developer also has access to the response object coming from the server. This can be useful if the user wants to perform some validation on the event source. It also allows much more control over errors and retry strategies, whereas the EventSource API provides no reliable way to control these.

Fetch Event Source also supports the Page Visibility API, which means server-sent events will stop when the browser window is minimized and automatically resume once it returns to the viewport. This helps reduce the load on the server, which is crucial, especially if you run multiple tasks on the server.

To try out the features of Fetch Event Source, we will build a practical application that simulates stock price changes on a real-time line chart, demonstrating all the advantages described above in practice.

Setting up the workspace

Our application will consist of both a frontend and a backend, so let's create separate workspaces for the whole application to keep everything organized.

To do that, open the terminal and run the following command: mkdir sse-fetch-event-source && cd sse-fetch-event-source && mkdir frontend server. This will create a new folder sse-fetch-event-source, make it the current working directory, and create frontend and server folders inside it.

Implementing the frontend

First, let's create a simple client-side application so that we have a user interface to display the information we receive from the backend.

While still in the sse-fetch-event-source directory, change into the frontend folder by running the command cd frontend.

We will use Create React App, a utility for creating a fully functional React project in a minute or less. To do that, run the following command: npx create-react-app frontend.

This will create a folder called frontend in our main workspace, containing all the frontend code.

Next, we will set up the Fetch Event Source package and the recharts library, which will later allow us to display the data. Run the following command to install both:

npm install @microsoft/fetch-event-source recharts

Then, expand the frontend file tree and find the src directory. In it you will find the file App.js. Replace its contents with the following code:

import { useState, useEffect } from "react";
import { fetchEventSource } from "@microsoft/fetch-event-source";
import {
  LineChart,
  Line,
  XAxis,
  YAxis,
  CartesianGrid,
  Tooltip,
  Legend,
} from "recharts";

const serverBaseURL = "http://localhost:5000";

const App = () => {
  const [data, setData] = useState([]);

  useEffect(() => {
    const fetchData = async () => {
      await fetchEventSource(`${serverBaseURL}/sse`, {
        method: "POST",
        headers: {
          Accept: "text/event-stream",
        },
        onopen(res) {
          if (res.ok && res.status === 200) {
            console.log("Connection made ", res);
          } else if (
            res.status >= 400 &&
            res.status < 500 &&
            res.status !== 429
          ) {
            console.log("Client side error ", res);
          }
        },
        onmessage(event) {
          console.log(event.data);
          const parsedData = JSON.parse(event.data);
          setData((data) => [...data, parsedData]);
        },
        onclose() {
          console.log("Connection closed by the server");
        },
        onerror(err) {
          console.log("There was an error from server", err);
        },
      });
    };
    fetchData();
  }, []);

  return (
    <div style={{ display: "grid", placeItems: "center" }}>
      <h1>Stock prices of aTech and bTech (USD)</h1>
      <LineChart width={1000} height={400} data={data}>
        <CartesianGrid strokeDasharray="3 3" />
        <XAxis dataKey="time" />
        <YAxis domain={[20, 26]} />
        <Tooltip />
        <Legend />
        <Line type="monotone" dataKey="aTechStockPrice" stroke="#8884d8" />
        <Line type="monotone" dataKey="bTechStockPrice" stroke="#82ca9d" />
      </LineChart>
    </div>
  );
};

export default App;

First, we imported the built-in React hooks useState and useEffect. Then, we imported the fetchEventSource library itself and the recharts components needed to display the received data in a nice UI. We also created a variable for the server route.

Inside the App function, we created a state variable for the data and made a fetchEventSource call to the server. We included a custom request method (POST), as well as header values set to accept a particular media type.

We used the onopen, onmessage, onclose, and onerror events to control the app's behavior based on the server's response.

Finally, we created a simple recharts line chart in the return section that will be rendered on screen. Notice that we passed in data from useState, meaning the line chart will re-render each time the data value updates.

Make sure to leave in the console.log statements, since they will help us test the app later by logging data to the developer console in the browser.

Implementing the server side

Now, we will implement a simple Node.js server. To ease the server setup, we will use Express, a fast and minimalist web framework for Node.

If you followed along with setting up the frontend in the previous section, you now need to switch to the server folder. If your terminal is currently in the frontend directory, you can do so by running the command cd ../server.

First, initialize npm using the command npm init -y. This will create a simple package.json file with all of the default entry point information. Open the file and change the main value from index.js to server.js.
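After that edit, the relevant part of package.json should look roughly like this (other generated fields omitted; the name depends on the folder you ran npm init in):

```json
{
  "name": "server",
  "version": "1.0.0",
  "main": "server.js"
}
```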

Then, install the Express framework and the cors package by running the command npm install express cors. CORS (cross-origin resource sharing) will let us make requests between the different ports of our application (frontend and backend).

Next, we need to create a file for the server itself. While still in the server directory, run the following command: touch server.js.

Open the newly created file and include the following code:

javascript
const express = require("express");
const cors = require("cors");

const app = express();
app.use(cors());

const PORT = 5000;

const getStockPrice = (range, base) =>
  (Math.random() * range + base).toFixed(2);
const getTime = () => new Date().toLocaleTimeString();

app.post("/sse", function (req, res) {
  res.writeHead(200, {
    Connection: "keep-alive",
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
  const interval = setInterval(() => {
    res.write(
      `data: {"time": "${getTime()}", "aTechStockPrice": "${getStockPrice(
        2, 20)}", "bTechStockPrice": "${getStockPrice(4, 22)}"}`
    );
    res.write("\n\n");
  }, 5000);
  // Stop the timer when the client disconnects, so we don't keep
  // writing to a closed response
  req.on("close", () => clearInterval(interval));
});

app.listen(PORT, function () {
  console.log(`Server is running on port ${PORT}`);
});

First, we imported Express and cors and created a new server to run on port 5000. We used cors as middleware, which lets us make the calls from the frontend.

Then we created a couple of custom functions, getStockPrice and getTime. The first generates a random price for a stock, receiving the base value and the spread (range) as arguments. The second returns the current time in hh:mm:ss format.

We made the server listen for POST requests on the sse route. Inside, we wrote a response header defining the connection and content-type, as well as disabling the cache. Finally, we used a setInterval to generate new data every 5,000ms (5 seconds).

Running the application

Because our application has both frontend and backend servers, we must run each separately so the two can interact by sending requests and receiving server-sent events.

Your current working directory should be server. Launching the server should be as easy as running the command node server.js. If you did everything right, the terminal should now say Server is running on port 5000.

To run the frontend, open another terminal, switch to the frontend directory via cd frontend, and run the command npm start, which will launch the frontend application.

Depending on which terminal you use and whether or not you use your text editor's built-in terminals, running the backend and frontend in parallel will look something like this:

After the frontend launches, your default browser should open automatically. If it doesn't, open it manually, type http://localhost:3000 in the URL bar, and hit Enter. It should now display the fully functional full-stack application, with all of the data being received in real time via server-sent events:

Testing the React app

Now, let's test the Fetch Event Source features. While still in the browser, open the developer console from the browser settings or by pressing the F12 key on your keyboard.

First, open the Network tab and refresh the app using the F5 key. The Network tab lets users see all of the requests being sent to the server.

Click on sse to see the Fetch Event Source connection request. Notice that the request type is POST and that we were able to set custom header parameters, such as accept and content type, for the request. This is all thanks to Fetch Event Source; it would not be possible with the EventSource API.

Then, navigate to the Console tab, where you should see additional information thanks to the console.log statements we used in the frontend.

When the user tries to establish a connection, they receive a response object (also a Fetch Event Source feature), which means it could be further used to detect the possible causes of connection problems.

In our case, the connection was successful and we received events based on the 5,000ms interval we defined on the server:


Now, let's test some errors. Change the fetch URL in the frontend to something that doesn't exist, such as ${serverBaseURL}/noroute. Save the code, refresh the browser page, and check the browser console.

Fetch Event Source automatically informs us that the error came from the frontend and that the server closed the connection:

Finally, let's test the Page Visibility API. Change the frontend code back to the existing sse route. Save the code and refresh the browser window. Let the app run for a minute or so.

Now, minimize the browser. After a while, maximize the browser window again and notice that no events were received from the server while the window was minimized. The process resumed automatically once the browser was maximized.

Conclusion

In this tutorial, we learned that Fetch Event Source can be used to send specific headers and methods, get response object details, gain broader control over error handling, and save server resources while the browser window is not visible.

The next time you need to implement functionality for receiving live updates from the server, you will not only know the working principles of server-sent events, but also how to implement them using the flexible features of Fetch Event Source.

Source: https://blog.logrocket.com/using-fetch-event-source-server-sent-events-react/

#react #fetch 

Fetch Event Source for Server-Sent Events in React