Maitri Sharma


Physics Wallah NCERT Solutions For Class 7 Social Science

Physics Wallah is a website that offers a variety of reference books, including solutions for NCERT textbooks. Its NCERT Solutions for Class 7 Social Science are created by expert faculty to help students clear their core concepts. You can check the website if you want.



 #ncertsolutions #physics #notes #class7 #social #history #geography 


Rupert Beatty


Revisionable: Easily Create A Revision History for any Laravel Model

Wouldn't it be nice to have a revision history for any model in your project, without having to do any work for it? By simply adding the RevisionableTrait trait to your model, you can instantly have just that, and display a history similar to this:

  • Chris changed title from 'Something' to 'Something else'
  • Chris changed category from 'News' to 'Breaking news'
  • Matt changed category from 'Breaking news' to 'News'

So not only can you see a history of what happened, but also who did what, so there's accountability.

Revisionable is a Laravel package that allows you to keep a revision history for your models without thinking about it. For some background and info, see this article.

Working with 3rd party Auth / Eloquent extensions

Revisionable has support for Auth powered by

(Recommended) Revisionable can also now be used as a Trait, so your models can continue to extend Eloquent, or any other class that extends Eloquent (like Ardent).


Installation

Revisionable is installable via composer; the details are on packagist.

Add the following to the require section of your project's composer.json file:

"venturecraft/revisionable": "1.*",

Run composer update to download the package

php composer.phar update

Open config/app.php and register the required service provider (Laravel 5.x)

'providers' => [
    // ...
    Venturecraft\Revisionable\RevisionableServiceProvider::class,
],

Publish the configuration and migrations (Laravel 5.x)

php artisan vendor:publish --provider="Venturecraft\Revisionable\RevisionableServiceProvider"

Finally, you'll also need to run migration on the package (Laravel 5.x)

php artisan migrate

For Laravel 4.x users:

php artisan migrate --package=venturecraft/revisionable

If you're going to be migrating up and down completely a lot (using migrate:refresh), one thing you can do instead is to copy the migration file from the package to your app/database folder, and change the classname from CreateRevisionsTable to something like CreateRevisionTable (without the 's'; otherwise you'll get an error saying there's a duplicate class).

cp vendor/venturecraft/revisionable/src/migrations/2013_04_09_062329_create_revisions_table.php database/migrations/


The new, Trait based implementation (recommended)

Traits require PHP >= 5.4

For any model that you want to keep a revision history for, include the VentureCraft\Revisionable namespace and use the RevisionableTrait in your model, e.g.,

namespace App;

use \Venturecraft\Revisionable\RevisionableTrait;

class Article extends \Illuminate\Database\Eloquent\Model {
    use RevisionableTrait;
}

Being a trait, Revisionable can now be used with the standard Eloquent model, or any class that extends Eloquent, such as Ardent.

Legacy class based implementation

The new trait-based approach is backwards compatible with existing installations of Revisionable. You can still use the below installation instructions, which essentially extend a wrapper around the trait.

For any model that you want to keep a revision history for, have the model extend Revisionable, e.g.,

namespace App;

use Venturecraft\Revisionable\Revisionable;

class Article extends Revisionable { }

Note: This also works with namespaced models.

Implementation notes

If needed, you can disable the revisioning by setting $revisionEnabled to false in your Model. This can be handy if you want to temporarily disable revisioning, or if you want to create your own base Model that extends Revisionable, which all of your models extend, but you want to turn Revisionable off for certain models.

namespace App;

use \Venturecraft\Revisionable\RevisionableTrait;

class Article extends \Illuminate\Database\Eloquent\Model {
    use RevisionableTrait;

    protected $revisionEnabled = false;
}

You can also disable revisioning after X many revisions have been made by setting $historyLimit to the number of revisions you want to keep before stopping revisions.

namespace App;

use \Venturecraft\Revisionable\RevisionableTrait;

class Article extends \Illuminate\Database\Eloquent\Model {
    use RevisionableTrait;

    protected $revisionEnabled = true;
    protected $historyLimit = 500; // Stop tracking revisions after 500 changes have been made.
}

If you want to maintain a limit on history but remove old revisions once the limit is reached, rather than stop tracking, set $revisionCleanup.

namespace App;

use \Venturecraft\Revisionable\RevisionableTrait;

class Article extends \Illuminate\Database\Eloquent\Model {
    use RevisionableTrait;

    protected $revisionEnabled = true;
    protected $revisionCleanup = true; // Remove old revisions (works only when used with $historyLimit)
    protected $historyLimit = 500; // Maintain a maximum of 500 changes at any point in time, while cleaning up old revisions.
}

Storing Soft Deletes

By default, if your model supports soft deletes, Revisionable will store this and any restores as updates on the model.

You can choose to ignore deletes and restores by adding deleted_at to your $dontKeepRevisionOf array.

To better format the output for deleted_at entries, you can use the isEmpty formatter (see Format output for an example of this.)

Storing Force Delete

By default the Force Delete of a model is not stored as a revision.

If you want to store the Force Delete as a revision you can override this behavior by setting revisionForceDeleteEnabled to true by adding the following to your model:

protected $revisionForceDeleteEnabled = true;

In which case, the created_at field will be stored as a key with the oldValue() value equal to the model creation date and the newValue() value equal to null.

Attention! Enable this setting carefully: since the model saved in the revision no longer exists, you will not be able to retrieve its object or its relations.

Storing Creations

By default the creation of a new model is not stored as a revision. Only subsequent changes to a model are stored.

If you want to store the creation as a revision you can override this behavior by setting revisionCreationsEnabled to true by adding the following to your model:

protected $revisionCreationsEnabled = true;

More Control

No doubt, there'll be cases where you want to keep a revision history only for certain fields of the model; this is supported in two different ways. In your model you can either specify which fields you explicitly want to track (all other fields are ignored):

protected $keepRevisionOf = ['title'];

Or, you can specify which fields you explicitly don't want to track. All other fields will be tracked.

protected $dontKeepRevisionOf = ['category_id'];

The $keepRevisionOf setting takes precedence over $dontKeepRevisionOf

Storing additional fields in revisions

In some cases, you'll want additional metadata from the models in each revision. An example of this might be if you have to keep track of accounts as well as users. Simply create your own migration to add the fields you'd like to your revision model, then add them to your config/revisionable.php in an array like so:

'additional_fields' => ['account_id', 'permissions_id', 'other_id'], 

If the column exists in the model, it will be included in the revision.

Make sure that if you can't guarantee the column exists in every model, you make that column nullable() in your migrations.


Events

Every time a model revision is created an event is fired. You can listen for revisionable.created,
revisionable.saved or revisionable.deleted.

// app/Providers/EventServiceProvider.php

public function boot(\Illuminate\Contracts\Events\Dispatcher $events)
{
    parent::boot($events);

    $events->listen('revisionable.*', function ($model, $revisions) {
        // Do something with the revisions or the changed model.
        dd($model, $revisions);
    });
}

Format output

You can continue (and are encouraged) to use Eloquent accessors in your model to format the output of your values; see the Laravel documentation for more information on accessors. The below documentation is therefore deprecated.

In cases where you want to have control over the format of the output of the values, for example a boolean field, you can set them in the $revisionFormattedFields array in your model. e.g.,

protected $revisionFormattedFields = [
    'title'      => 'string:<strong>%s</strong>',
    'public'     => 'boolean:No|Yes',
    'modified'   => 'datetime:m/d/Y g:i A',
    'deleted_at' => 'isEmpty:Active|Deleted'
];

You can also override the field name output using the $revisionFormattedFieldNames array in your model, e.g.,

protected $revisionFormattedFieldNames = [
    'title'      => 'Title',
    'small_name' => 'Nickname',
    'deleted_at' => 'Deleted At'
];

This comes into play when you output the revision field name using $revision->fieldName()


String

To format a string, simply prefix the value with string: and be sure to include %s (this is where the actual value will appear in the formatted response), e.g.,

string:<strong>%s</strong>

Boolean

Booleans by default will display as a 0 or a 1, which is pretty bland and won't mean much to the end user, so this formatter can be used to output something a bit nicer. Prefix the value with boolean: and then add your false and true options separated by a pipe, e.g.,

boolean:No|Yes

Options

Analogous to boolean, except any text or numeric value can act as the source value (often flags are stored in the database). The format allows you to specify different outputs depending on the value. Think of it as an associative array in which the key is separated from the value by a dot, and array elements are separated by a vertical bar, e.g.,

options:search.On the search|network.In networks


DateTime

DateTime by default will display as Y-m-d H:i:s. Prefix the value with datetime: and then add your datetime format, e.g.,

datetime:m/d/Y g:i A

Is Empty

This piggybacks off boolean, but instead of testing for a true or false value, it checks if the value is either null or an empty string, e.g.,

isEmpty:Active|Deleted
This can also accept %s if you'd like to output the value; something like the following will display 'Nothing' if the value is empty, or the actual value if something exists:

isEmpty:Nothing|%s
Load revision history

To load the revision history for a given model, simply call the revisionHistory method on that model, e.g.,

$article = Article::find($id);
$history = $article->revisionHistory;

Displaying history

For the most part, the revision history will hold enough information to directly output a change history, however in the cases where a foreign key is updated we need to be able to do some mapping and display something nicer than plan_id changed from 3 to 1.

To help with this, there's a few helper methods to display more insightful information, so you can display something like Chris changed plan from bronze to gold.

The above would be the result from this:

@foreach($account->revisionHistory as $history)
    <li>{{ $history->userResponsible()->first_name }} changed {{ $history->fieldName() }} from {{ $history->oldValue() }} to {{ $history->newValue() }}</li>
@endforeach

If you have enabled revisions of creations as well you can display it like this:

@foreach($resource->revisionHistory as $history)
  @if($history->key == 'created_at' && !$history->old_value)
    <li>{{ $history->userResponsible()->first_name }} created this resource at {{ $history->newValue() }}</li>
  @else
    <li>{{ $history->userResponsible()->first_name }} changed {{ $history->fieldName() }} from {{ $history->oldValue() }} to {{ $history->newValue() }}</li>
  @endif
@endforeach


userResponsible()

Returns the User that was responsible for making the revision. A user model is returned, or null if there was no user recorded.

The user model that is loaded depends on what you have set in your config/auth.php file for the model variable.


fieldName()

Returns the name of the field that was updated. If the field that was updated was a foreign key (at this stage, it simply looks to see if the field has the suffix _id), then the text before _id is returned; e.g., if the field was plan_id, then plan would be returned.

Remember from above, that you can override the output of a field name with the $revisionFormattedFieldNames array in your model.


identifiableName()

This is used when the value (old or new) is the id of a foreign key relationship.

By default, it simply returns the ID of the model that was updated. It is up to you to override this method in your own models to return something meaningful. e.g.,

use Venturecraft\Revisionable\Revisionable;

class Article extends Revisionable
{
    public function identifiableName()
    {
        return $this->title;
    }
}

oldValue() and newValue()

Get the value of the model before or after the update. If it was a foreign key, identifiableName() is called.

Unknown or invalid foreign keys as revisions

In cases where the old or new version of a value is a foreign key that no longer exists, or indeed was null, there are two variables that you can set in your model to control the output in these situations:

protected $revisionNullString = 'nothing';
protected $revisionUnknownString = 'unknown';


Sometimes temporarily disabling a revisionable field can come in handy, if you want to be able to save an update but don't need to keep a record of the changes.

$object->disableRevisionField('title'); // Disables title


$object->disableRevisionField(['title', 'content']); // Disables title and content


Contributions are encouraged and welcome; to keep things organised, all bugs and requests should be opened in the GitHub issues tab for the main project, at venturecraft/revisionable/issues

All pull requests should be made to the develop branch, so they can be tested before being merged into the master branch.

Having troubles?

If you're having trouble using this package, odds are someone else has already had the same problem. Two places you can look for common answers to your problems are:

If you do prefer posting your questions to the public on StackOverflow, please use the 'revisionable' tag.

Author: VentureCraft 
Source Code: 
License: MIT license

#laravel #model #history 

Nat Grady


Shell-history: Get The Command History Of The User's Shell


Get the command history of the user's shell


$ npm install shell-history


import {shellHistory, shellHistoryPath} from 'shell-history';

console.log(shellHistory());
//=> ['ava', 'echo unicorn', 'node', 'npm test', …]

console.log(shellHistoryPath());
//=> '/Users/sindresorhus/.history'



shellHistory()

Get an array of commands.

On Windows, unless the HISTFILE environment variable is set, this will only return commands from the current session.


shellHistoryPath()

Get the path of the file containing the shell history.

On Windows, this will return either the HISTFILE environment variable or undefined.


Parse a shell history string into an array of commands.


Author: Sindresorhus
Source Code: 
License: MIT license

#electron #shell #history 


Level-historical-json: Keep A History Of All The Changes Of A JSON


Keep a history of all the changes of a JSON document.


npm install level-historical-json



var db = require('level-test')()()
  , LHJ = require('./level-historical-json')(db)

LHJ.put({ 'hello': 'world' }, function (err) {
  LHJ.get(function (err, res) {
    console.log(JSON.stringify(res, undefined, 2))
    LHJ.put({ 'hello': 'another world', key: res[0].key }, function (err) {
      LHJ.getHistorical(function (err, res) {
        console.log(JSON.stringify(res, undefined, 2))
      })
    })
  })
})


    "key": "53a7dc28061c9998c36dfebe000002",
    "hello": "world"
    "key": "53a7dc28061c9998c36dfebe000002",
    "changes": [
        "from": "world",
        "to": "another world",
        "property": "hello",
        "at": "2014-06-23T07:50:00.000Z"

Author: Ellell
Source Code: 
License: View license

#javascript #node #json #history 

Royce Reinger


RESTClient: Simple HTTP and REST Client for Ruby

REST Client -- simple DSL for accessing HTTP and REST resources    

A simple HTTP and REST client for Ruby, inspired by Sinatra's microframework style of specifying actions: get, put, post, delete.


Requirements

MRI Ruby 2.0 and newer are supported. Alternative interpreters compatible with 2.0+ should work as well.

Earlier Ruby versions such as 1.8.7, 1.9.2, and 1.9.3 are no longer supported. These versions no longer have any official support, and do not receive security updates.

The rest-client gem depends on these other gems for usage at runtime:

There are also several development dependencies. It's recommended to use bundler to manage these dependencies for hacking on rest-client.

Upgrading to rest-client 2.0 from 1.x

Users are encouraged to upgrade to rest-client 2.0, which cleans up a number of API warts and wrinkles, making rest-client generally more useful. Usage is largely compatible, so many applications will be able to upgrade with no changes.

Overview of significant changes:

  • requires Ruby >= 2.0
  • RestClient::Response objects are a subclass of String rather than a Frankenstein monster. And #body or #to_s return a true String object.
  • cleanup of exception classes, including new RestClient::Exceptions::Timeout
  • improvements to handling of redirects: responses and history are properly exposed
  • major changes to cookie support: cookie jars are used for browser-like behavior throughout
  • encoding: Content-Type charset response headers are used to automatically set the encoding of the response string
  • HTTP params: handling of GET/POST params is more consistent and sophisticated for deeply nested hash objects, and ParamsArray can be used to pass ordered params
  • improved proxy support with per-request proxy configuration, plus the ability to disable proxies set by environment variables
  • default request headers: rest-client sets Accept: */* and User-Agent: rest-client/...

See for a more complete description of changes.

Usage: Raw URL

Basic usage:

require 'rest-client'

RestClient.get(url, headers={})
RestClient.post(url, payload, headers={})

In the high level helpers, only POST, PATCH, and PUT take a payload argument. To pass a payload with other HTTP verbs or to pass more advanced options, use RestClient::Request.execute instead.

More detailed examples:

require 'rest-client'

RestClient.get ''

RestClient.get '', {params: {id: 50, 'foo' => 'bar'}}

RestClient.get '', {accept: :json}

RestClient.post '', {param1: 'one', nested: {param2: 'two'}}

RestClient.post "", {'x' => 1}.to_json, {content_type: :json, accept: :json}

RestClient.delete ''

>> response = RestClient.get ''
=> <RestClient::Response 200 "<!doctype h...">
>> response.code
=> 200
>> response.cookies
=> {"Foo"=>"BAR", "QUUX"=>"QUUUUX"}
>> response.headers
=> {:content_type=>"text/html; charset=utf-8", :cache_control=>"private" ... }
>> response.body
=> "<!doctype html>\n<html>\n<head>\n    <title>Example Domain</title>\n\n ..." url,
    :transfer => {
      :path => '/foo/bar',
      :owner => 'that_guy',
      :group => 'those_guys'
     :upload => {
      :file =>, 'rb')

Passing advanced options

The top level helper methods like RestClient.get accept a headers hash as their last argument and don't allow passing more complex options. But these helpers are just thin wrappers around RestClient::Request.execute.

RestClient::Request.execute(method: :get, url: '',
                            timeout: 10)

RestClient::Request.execute(method: :get, url: '',
                            ssl_ca_file: 'myca.pem',
                            ssl_ciphers: 'AESGCM:!aNULL')

You can also use this to pass a payload for HTTP verbs like DELETE, where the RestClient.delete helper doesn't accept a payload.

RestClient::Request.execute(method: :delete, url: '',
                            payload: 'foo', headers: {myheader: 'bar'})

Due to unfortunate choices in the original API, the params used to populate the query string are actually taken out of the headers hash. So if you want to pass both the params hash and more complex options, use the special key :params in the headers hash. This design may change in a future major release.

RestClient::Request.execute(method: :get, url: '',
                            timeout: 10, headers: {params: {foo: 'bar'}})



Multipart

Yeah, that's right! This does multipart sends for you!

RestClient.post '/data', :myfile => File.new("/path/to/image.jpg", 'rb')

This does two things for you:

  • Auto-detects that you have a File value and sends it as multipart
  • Auto-detects the mime of the file and sets it in the HEAD of the payload for each entry

If you are sending params that do not contain a File object but the payload needs to be multipart, then:

RestClient.post '/data', {:foo => 'bar', :multipart => true}

Usage: ActiveResource-Style

resource = RestClient::Resource.new ''

private_resource = RestClient::Resource.new '', 'user', 'pass'
private_resource.put File.read('pic.jpg'), :content_type => 'image/jpg'

See RestClient::Resource module docs for details.

Usage: Resource Nesting

site = RestClient::Resource.new('')
site['posts/1/comments'].post 'Good article.', :content_type => 'text/plain'

See RestClient::Resource docs for details.

Exceptions

  • for result codes between 200 and 207, a RestClient::Response will be returned
  • for result codes 301, 302 or 307, the redirection will be followed if the request is a GET or a HEAD
  • for result code 303, the redirection will be followed and the request transformed into a GET
  • for other cases, a RestClient::ExceptionWithResponse holding the Response will be raised; a specific exception class will be thrown for known error codes
  • call .response on the exception to get the server's response
>> RestClient.get ''
Exception: RestClient::NotFound: 404 Not Found

>> begin
     RestClient.get ''
   rescue RestClient::ExceptionWithResponse => e
     e.response
   end
=> <RestClient::Response 404 "<!doctype h...">

Other exceptions

While most exceptions have been collected under RestClient::RequestFailed aka RestClient::ExceptionWithResponse, there are a few quirky exceptions that have been kept for backwards compatibility.

RestClient will propagate up exceptions like socket errors without modification:

>> RestClient.get 'http://localhost:12345'
Exception: Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 12345

RestClient handles a few specific error cases separately in order to give better error messages. These will hopefully be cleaned up in a future major release.

RestClient::ServerBrokeConnection is translated from EOFError to give a better error message.

RestClient::SSLCertificateNotVerified is raised when HTTPS validation fails. Other OpenSSL::SSL::SSLError errors are raised as is.


Redirection

By default, rest-client will follow HTTP 30x redirection requests.

New in 2.0: RestClient::Response exposes a #history method that returns a list of each response received in a redirection chain.

>> r = RestClient.get('')
=> <RestClient::Response 200 "{\n  \"args\":...">

# see each response in the redirect chain
>> r.history
=> [<RestClient::Response 302 "<!DOCTYPE H...">, <RestClient::Response 302 "">]

# see each requested URL
>> r.request.url
=> ""
>> r.history.map {|x| x.request.url}
=> ["", ""]

Manually following redirection

To disable automatic redirection, set :max_redirects => 0.

New in 2.0: Prior versions of rest-client would raise RestClient::MaxRedirectsReached, with no easy way to access the server's response. In 2.0, rest-client raises the normal RestClient::ExceptionWithResponse as it would with any other non-HTTP-20x response.

>> RestClient::Request.execute(method: :get, url: '')
=> RestClient::Response 200 "{\n  "args":..."

>> RestClient::Request.execute(method: :get, url: '', max_redirects: 0)
RestClient::Found: 302 Found

To manually follow redirection, you can call Response#follow_redirection. Or you could of course inspect the result and choose custom behavior.

>> RestClient::Request.execute(method: :get, url: '', max_redirects: 0)
RestClient::Found: 302 Found
>> begin
     RestClient::Request.execute(method: :get, url: '', max_redirects: 0)
   rescue RestClient::ExceptionWithResponse => err
   end
>> err
=> #<RestClient::Found: 302 Found>
>> err.response
=> RestClient::Response 302 "<!DOCTYPE H..."
>> err.response.headers[:location]
=> "/get"
>> err.response.follow_redirection
=> RestClient::Response 200 "{\n  "args":..."

Result handling

The result of a RestClient::Request is a RestClient::Response object.

New in 2.0: RestClient::Response objects are now a subclass of String. Previously, they were a real String object with response functionality mixed in, which was very confusing to work with.

Response objects have several useful methods. (See the class rdoc for more details.)

  • Response#code: The HTTP response code
  • Response#body: The response body as a string. (AKA .to_s)
  • Response#headers: A hash of HTTP response headers
  • Response#raw_headers: A hash of HTTP response headers as unprocessed arrays
  • Response#cookies: A hash of HTTP cookies set by the server
  • Response#cookie_jar: New in 1.8 An HTTP::CookieJar of cookies
  • Response#request: The RestClient::Request object used to make the request
  • Response#history: New in 2.0 If redirection was followed, a list of prior Response objects
>> RestClient.get('')
➔ <RestClient::Response 200 "<!doctype h...">

>> begin
     RestClient.get('')
   rescue RestClient::ExceptionWithResponse => err
     err.response
   end
➔ <RestClient::Response 404 "<!doctype h...">

Response callbacks, error handling

A block can be passed to the RestClient method. This block will then be called with the Response; response.return! can be called to invoke the default response behavior.

# Don't raise exceptions but return the response
>> RestClient.get('') {|response, request, result| response }
=> <RestClient::Response 404 "<!doctype h...">
# Manage a specific error code
RestClient.get('') { |response, request, result, &block|
  case response.code
  when 200
    p "It worked !"
    response
  when 423
    raise SomeCustomExceptionIfYouWant
  else
    response.return!(&block)
  end
}

But note that it may be more straightforward to use exceptions to handle different HTTP error response cases:

begin
  resp = RestClient.get('')
rescue RestClient::Unauthorized, RestClient::Forbidden => err
  puts 'Access denied'
  return err.response
rescue RestClient::ImATeapot => err
  puts 'The server is a teapot! # RFC 2324'
  return err.response
else
  puts 'It worked!'
  return resp
end

For GET and HEAD requests, rest-client automatically follows redirection. For other HTTP verbs, call .follow_redirection on the response object (works both in block form and in exception form).

# Follow redirections for all request types and not only for get and head
# RFC : "If the 301, 302 or 307 status code is received in response to a request other than GET or HEAD,
#        the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user,
#        since this might change the conditions under which the request was issued."

# block style
RestClient.post('', 'body') { |response, request, result|
  case response.code
  when 301, 302, 307
    response.follow_redirection
  else
    response.return!
  end
}

# exception style by explicit classes
begin
  RestClient.post('', 'body')
rescue RestClient::MovedPermanently,
       RestClient::TemporaryRedirect => err
  err.response.follow_redirection
end

# exception style by response code
begin
  RestClient.post('', 'body')
rescue RestClient::ExceptionWithResponse => err
  case err.http_code
  when 301, 302, 307
    err.response.follow_redirection
  end
end

Non-normalized URIs

If you need to normalize URIs, e.g. to work with International Resource Identifiers (IRIs), use the Addressable gem in your code:

  require 'addressable/uri'

Lower-level access

For cases not covered by the general API, you can use the RestClient::Request class, which provides a lower-level API.

You can:

  • specify ssl parameters
  • override cookies
  • manually handle the response (e.g. to operate on it as a stream rather than reading it all into memory)

See RestClient::Request's documentation for more information.

Streaming request payload

RestClient will try to stream any file-like payload rather than reading it into memory. This happens through RestClient::Payload::Streamed, which is automatically called internally by RestClient::Payload.generate on anything with a read method.
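As a rough sketch of that duck-typing rule (illustrative only, not rest-client's actual code), anything responding to #read is treated as a streamable payload:

```ruby
require 'stringio'

# Hypothetical helper mirroring the rule described above: payloads that
# respond to #read are streamed; everything else is sent as a plain body.
def payload_kind(obj)
  obj.respond_to?(:read) ? :streamed : :plain
end

payload_kind(StringIO.new('abc'))  #=> :streamed
payload_kind('abc')                #=> :plain
```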

>> r = RestClient.put('', File.open('/tmp/foo.txt', 'r'),
                      content_type: 'text/plain')
=> <RestClient::Response 200 "{\n  \"args\":...">

In Multipart requests, RestClient will also stream file handles passed as Hash (or new in 2.1 ParamsArray).

>> r = RestClient.put('',
                      {file_a: File.open('a.txt', 'r'),
                       file_b: File.open('b.txt', 'r')})
=> <RestClient::Response 200 "{\n  \"args\":...">

# received by server as two file uploads with multipart/form-data
>> JSON.parse(r)['files'].keys
=> ['file_a', 'file_b']

Streaming responses

Normally, when you use RestClient.get or the lower-level RestClient::Request.execute with method: :get to retrieve data, the entire response is buffered in memory and returned as the response to the call.

However, if you are retrieving a large amount of data, for example a Docker image, an iso, or any other large file, you may want to stream the response directly to disk rather than loading it in memory. If you have a very large file, it may become impossible to load it into memory.

There are two main ways to do this:

raw_response, saves into Tempfile

If you pass raw_response: true to RestClient::Request.execute, it will save the response body to a temporary file (using Tempfile) and return a RestClient::RawResponse object rather than a RestClient::Response.

Note that the tempfile created will be in Dir.tmpdir (usually /tmp/), which you can override to store temporary files in a different location. The file will be unlinked when it is dereferenced.
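The Tempfile behavior described above can be demonstrated with plain Ruby (no rest-client required); the file backing a RawResponse behaves the same way:

```ruby
require 'tempfile'
require 'tmpdir'

# Tempfiles live under Dir.tmpdir by default.
t = Tempfile.new('rest-client-demo')
path = t.path
puts path.start_with?(Dir.tmpdir)  #=> true

t.write('hello')
t.flush
puts t.size                        #=> 5

# close! unlinks the file immediately instead of waiting for GC.
t.close!
puts File.exist?(path)             #=> false
```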

If logging is enabled, this will also print download progress. New in 2.1: Customize the interval with :stream_log_percent (defaults to 10 for printing a message every 10% complete).

For example:

>> raw = RestClient::Request.execute(
           method: :get,
           url: '',
           raw_response: true)
=> <RestClient::RawResponse @code=200, @file=#<Tempfile:/tmp/rest-client.20170522-5346-1pptjm1>, @request=<RestClient::Request @method="get", @url="">>
>> raw.file.size
=> 1554186240
>> raw.file.path
=> "/tmp/rest-client.20170522-5346-1pptjm1"
=> "/tmp/rest-client.20170522-5346-1pptjm1"

>> require 'digest/sha1'
>> Digest::SHA1.file(raw.file.path).hexdigest
=> "4375b73e3a1aa305a36320ffd7484682922262b3"

block_response, receives raw Net::HTTPResponse

If you want to stream the data from the response to a file as it comes, rather than entirely in memory, you can also pass RestClient::Request.execute a parameter :block_response to which you pass a block/proc. This block receives the raw unmodified Net::HTTPResponse object from Net::HTTP, which you can use to stream directly to a file as each chunk is received.

Note that this bypasses all the usual HTTP status code handling, so you will want to do your own checking for HTTP 20x response codes, redirects, etc.

The following is an example:

File.open('/some/output/file', 'w') {|f|
  block = proc { |response|
    response.read_body do |chunk|
      f.write chunk
    end
  }
  RestClient::Request.execute(method: :get,
                              url: '',
                              block_response: block)
}


Shell

The restclient shell command gives an IRB session with RestClient already loaded:

$ restclient
>> RestClient.get ''

Specify a URL argument for get/post/put/delete on that resource:

$ restclient
>> put '/resource', 'data'

Add a user and password for authenticated resources:

$ restclient user pass
>> delete '/private/resource'

Create ~/.restclient for named sessions:

some_site:              # session names here are illustrative
    url: http://localhost:4567

private_site:
    url: http://localhost:9292
    username: user
    password: pass

Then invoke:

$ restclient private_site

Use as a one-off, curl-style:

$ restclient get > output_body

$ restclient put < input_body


To enable logging globally you can:

  • set RestClient.log with a Ruby Logger
RestClient.log = STDOUT
  • or set an environment variable to avoid modifying the code (in this case you can use a file name, "stdout" or "stderr"):
$ RESTCLIENT_LOG=stdout path/to/my/program

You can also set individual loggers when instantiating a Resource or making an individual request:

resource = RestClient::Resource.new('', log: Logger.new(STDOUT))
RestClient::Request.execute(method: :get, url: '', log: Logger.new(STDOUT))

All options produce logs like this:

RestClient.get "http://some/resource"
# => 200 OK | text/html 250 bytes
RestClient.put "http://some/resource", "payload"
# => 401 Unauthorized | application/xml 340 bytes

Note that these logs are valid Ruby, so you can paste them into the restclient shell or a script to replay your sequence of rest calls.


All calls to RestClient, including Resources, will use the proxy specified by RestClient.proxy:

RestClient.proxy = ""
RestClient.get "http://some/resource"
# => response from some/resource as proxied through

Often the proxy URL is set in an environment variable, so you can do this to use whatever proxy the system is configured to use:

  RestClient.proxy = ENV['http_proxy']

New in 2.0: Specify a per-request proxy by passing the :proxy option to RestClient::Request. This will override any proxies set by environment variable or by the global RestClient.proxy value.

RestClient::Request.execute(method: :get, url: '',
                            proxy: '')
# => single request proxied through the proxy

This can be used to disable the use of a proxy for a particular request.

RestClient.proxy = ""
RestClient::Request.execute(method: :get, url: '', proxy: nil)
# => single request sent without a proxy

Query parameters

Rest-client can render a hash as HTTP query parameters for GET/HEAD/DELETE requests or as HTTP post data in x-www-form-urlencoded format for POST requests.

New in 2.0: Even though there is no standard specifying how this should work, rest-client follows a similar convention to the one used by Rack / Rails servers for handling arrays, nested hashes, and null values.

The implementation in ./lib/restclient/utils.rb closely follows Rack::Utils.build_nested_query, but treats empty arrays and hashes as nil. (Rack drops them entirely, which is confusing behavior.)

If you don't like this behavior and want more control, just serialize params yourself (e.g. with URI.encode_www_form) and add the query string to the URL directly for GET parameters or pass the payload as a string for POST requests.
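A hand-rolled serialization might look like the following sketch, which uses only Ruby's standard library (rest-client's real logic handles nils, empty collections, and edge cases differently; this is for illustration only):

```ruby
require 'uri'
require 'cgi'

# Flat params are easy with the standard library:
query = URI.encode_www_form(foo: 'bar', baz: 'qux')
# => "foo=bar&baz=qux"

# A simplified sketch of Rack/Rails-style nested serialization:
def build_nested_query(value, prefix = nil)
  case value
  when Array
    value.map { |v| build_nested_query(v, "#{prefix}[]") }.join('&')
  when Hash
    value.map { |k, v|
      key = CGI.escape(k.to_s)
      build_nested_query(v, prefix ? "#{prefix}[#{key}]" : key)
    }.join('&')
  else
    "#{prefix}=#{CGI.escape(value.to_s)}"
  end
end

build_nested_query({foo: [1, 2, 3]})
# => "foo[]=1&foo[]=2&foo[]=3"
build_nested_query({outer: {foo: 123, bar: 456}})
# => "outer[foo]=123&outer[bar]=456"
```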

Basic GET params:

RestClient.get('', params: {foo: 'bar', baz: 'qux'})
# GET "?foo=bar&baz=qux"

Basic x-www-form-urlencoded POST params:

>> r = RestClient.post('', {foo: 'bar', baz: 'qux'})
# POST "", data: "foo=bar&baz=qux"
=> <RestClient::Response 200 "{\n  \"args\":...">
>> JSON.parse(r.body)
=> {"args"=>{},
    "form"=>{"baz"=>"qux", "foo"=>"bar"},
    ...}

JSON payload: rest-client does not speak JSON natively, so serialize your payload to a string before passing it to rest-client.

>> payload = {'name' => 'newrepo', 'description' => 'A new repo'}
>> RestClient.post('', payload.to_json, content_type: :json)
=> <RestClient::Response 201 "{\"id\":75149...">

Advanced GET params (arrays):

>> r = RestClient.get('', params: {foo: [1,2,3]})
# GET "?foo[]=1&foo[]=2&foo[]=3"
=> <RestClient::Response 200 "Method: GET...">
>> puts r.body
query_string: "foo[]=1&foo[]=2&foo[]=3"
decoded:      "foo[]=1&foo[]=2&foo[]=3"

  {"foo"=>["1", "2", "3"]}

Advanced GET params (nested hashes):

>> r = RestClient.get('', params: {outer: {foo: 123, bar: 456}})
# GET "?outer[foo]=123&outer[bar]=456"
=> <RestClient::Response 200 "Method: GET...">
>> puts r.body
query_string: "outer[foo]=123&outer[bar]=456"
decoded:      "outer[foo]=123&outer[bar]=456"

  {"outer"=>{"foo"=>"123", "bar"=>"456"}}

New in 2.0: The new RestClient::ParamsArray class allows callers to provide ordering even to structured parameters. This is useful for unusual cases where the server treats the order of parameters as significant or you want to pass a particular key multiple times.

Multiple fields with the same name using ParamsArray:

>> RestClient.get('', params:
        RestClient::ParamsArray.new([[:foo, 1], [:foo, 2]]))
# GET "?foo=1&foo=2"

Nested ParamsArray:

>> RestClient.get('', params:
                  {foo: RestClient::ParamsArray.new([[:a, 1], [:a, 2]])})
# GET "?foo[a]=1&foo[a]=2"


Request headers can be set by passing a Ruby hash of header names and values:

# GET request with modified headers
RestClient.get '', {:Authorization => 'Bearer cT0febFoD5lxAlNAXHo6g'}

# POST request with modified headers
RestClient.post '', {:foo => 'bar', :baz => 'qux'}, {:Authorization => 'Bearer cT0febFoD5lxAlNAXHo6g'}

# DELETE request with modified headers
RestClient.delete '', {:Authorization => 'Bearer cT0febFoD5lxAlNAXHo6g'}


By default the timeout for a request is 60 seconds. Timeouts can be adjusted by setting timeout: to the number of seconds the request should wait. Setting timeout: will override both read_timeout: and open_timeout:.

RestClient::Request.execute(method: :get, url: '',
                            timeout: 120)

Additionally, you can set read_timeout: and open_timeout: separately.

RestClient::Request.execute(method: :get, url: '',
                            read_timeout: 120, open_timeout: 240)


Request and Response objects know about HTTP cookies, and will automatically extract and set headers for them as needed:

response = RestClient.get ''
response.cookies
# => {"_application_session_id" => "1234"}

response2 = RestClient.post(
  '',
  {:param1 => "foo"},
  {:cookies => {:session_id => "1234"}})
# ...response body

Full cookie jar support (new in 1.8)

The original cookie implementation was very naive and ignored most of the cookie RFC standards.

New in 1.8: Response objects now carry a cookie_jar method that exposes an HTTP::CookieJar of cookies, which supports fully standards-compliant behavior.

SSL/TLS support

Various options are supported for configuring rest-client's TLS settings. By default, rest-client will verify certificates using the system's CA store on all platforms. (This is intended to be similar to how browsers behave.) You can specify an :ssl_ca_file, :ssl_ca_path, or :ssl_cert_store to customize the certificate authorities accepted.

SSL Client Certificates

RestClient::Resource.new(
  '',
  :ssl_client_cert  =>  OpenSSL::X509::Certificate.new(File.read("cert.pem")),
  :ssl_client_key   =>  OpenSSL::PKey::RSA.new(File.read("key.pem"), "passphrase, if any"),
  :ssl_ca_file      =>  "ca_certificate.pem",
  :verify_ssl       =>  OpenSSL::SSL::VERIFY_PEER
).get

Self-signed certificates can be generated with the openssl command-line tool.


RestClient.add_before_execution_proc adds a Proc to be called before each execution. It's handy if you need direct access to the HTTP request.


# Add oauth support using the oauth gem
require 'oauth'
access_token = ...

RestClient.add_before_execution_proc do |req, params|
  access_token.sign! req
end

RestClient.get ''


Need caching, more advanced logging or any ability provided by Rack middleware?

Have a look at rest-client-components:


REST Client Team: Andy Brody
Creator: Adam Wiggins
Maintainers Emeriti: Lawrence Leonard Gilbert, Matthew Manning, Julien Kirch
Major contributions: Blake Mizerany, Julien Kirch

A great many generous folks have contributed features and patches. See AUTHORS for the full list.

New mailing list

We have a new email list for announcements, hosted by

Subscribe on the web:

Subscribe by sending an email:

Open discussion subgroup:

The old Librelist mailing list is defunct, as Librelist appears to be broken and not accepting new mail. The old archives are still up, but have been imported into the new list archives as well.

Author: Rest-client
Source Code: 
License: MIT license

#ruby #client #http #rest 

RESTClient: Simple HTTP and REST Client for Ruby

D3-history: Proper URL Bar History for D3.js


simple URL support for D3.js user interfaces


D3.js provides d3.dispatch, an event listening utility which can be used to cleanly decouple project components from the user interaction events used by the d3.on method. d3.history is largely a drop-in replacement for d3.dispatch, so the API methods intentionally match, with one important exception: with d3.history, the call() method requires a third argument containing the new URL string, in addition to the event name and the context object as required by the native d3.dispatch.

var dispatcher,
    history_dispatcher,
    index,
    datum;

// which data item will be passed to the dispatcher as an argument?
index = 12;
datum = data[index];

// perform the action without giving a URL to the new state
dispatcher = d3.dispatch('action');
selection.on('click', function() {
  dispatcher.call('action', this, datum);
});

// perform the action and give the new state a URL -- much better!
history_dispatcher = d3.history('action');
selection.on('click', function() {
  history_dispatcher.call('action', this, 'displaying-item-' + index, datum);
});

Just as with d3.dispatch, you can optionally provide additional arguments to a d3.history object which will be passed to the event methods. These arguments are also combined into an array which is then stored in the state object provided by the HTML5 History API.

var dispatcher;
// create a d3.history dispatcher object with an "action" method
dispatcher = d3.history('action');
// fire action method listener on click
selection.on('click', function() {
  // pass arguments to the event handler function
  dispatcher.call('action', this, url, datum, additional_information);
});
// arguments are available in the event handler function
dispatcher.on('action', function(datum, additional_information) {
  console.log(datum, additional_information);
});

d3.history handles the URL bar, but it doesn't try to manage your application state. If two items are clicked in quick succession, should the URL bar mention them both, or should the second replace the first? You'll need to handle that decision yourself when compiling your new URL, before using d3.history. URLs are important, so d3.history will never try to decide them for you.

In the vast majority of cases, it should be sufficient to track key-value pairs using a hashmap, and then flatten that hashmap to a URL fragment string immediately before updating the user interface with d3.history.

var dispatcher,
    state,
    url_fragment;
// create a d3.history dispatcher object with an "action" method
dispatcher = d3.history('action');
// keep track of project state
state = {
  country: 'Spain',
  zoom: false
};
// perform the action on click
selection.on('click', function() {
  // compile project state to URL
  url_fragment = '';
  Object.keys(state).forEach(function(key) {
    var value;
    value = state[key];
    url_fragment += key + '=' + value + '&';
  });
  // convert to query parameters and remove trailing ampersand
  url_fragment = '?' + url_fragment.slice(0, -1);
  // fire action event handler and update url bar accordingly
  dispatcher.call('action', this, url_fragment);
});

d3.history will automatically handle storing state and history of data and URLs, but it can't make the project respond to the URL bar entirely on its own, because it doesn't know how to render everything else. To fully enable deep linking, you'll need to make sure your project includes an initialization function which can read the URL bar and set the project state accordingly on load. (Using d3.history or d3.dispatch inside that initialization function can make this a lot easier.)

To support the browser's "forward" and "back" buttons, run the initialization function in response to the popstate event.

  window.addEventListener('popstate', function() {
    // re-run your project's initialization function here
  });

d3.history uses d3.dispatch internally, creating a closure around it which also contains the logic for handling the HTML5 History API.

Custom URL Handling

By default, URLs are simply updated with pushState. However, you can override this to insert your own custom URL handling function if you'd like to do something unusual. Your custom function must accept three arguments, which should match those used for pushState:

  1. the data item, if any, which is to be stored as the page state object
  2. the page title (although currently this is unused in all major browsers)
  3. the new URL fragment
var history_dispatcher,
    url_handler;
// create a d3.history object
history_dispatcher = d3.history();
// do whatever you want with the URL and state data
url_handler = function(data, title, url) {
  console.log("Let's do something unusual with the URL.");
};
// attach custom URL handling function


d3.history is a plugin for D3.js which adds simple support for deep-linking and URLs based on the user interface state. It automatically updates the URL bar through the HTML5 History API as you use the d3.dispatch event listening utility.

live demonstration


If you use NPM, npm install d3-history. Otherwise, download the latest release.

Author: Vijithassar
Source Code: 
License: BSD-3-Clause license

#javascript #3d #history 

D3-history: Proper URL Bar History for D3.js
Royce  Reinger

Royce Reinger


Generate Changelogs & Release Notes From A Project's Commit Messages

Conventional Changelog

Having problems? Want to contribute? Join our community Slack.

Generate a CHANGELOG from git metadata

About this Repo

The conventional-changelog repo is managed as a monorepo; it's composed of many npm packages.

The original conventional-changelog/conventional-changelog API repo can be found in packages/conventional-changelog.

Getting started

It's recommended you use the high-level standard-version library, which is a drop-in replacement for npm's version command, handling automated version bumping, tagging, and CHANGELOG generation.

Alternatively, if you'd like to move towards completely automating your release process as an output from CI/CD, consider using semantic-release.

You can also use one of the plugins if you are already using the tool:

Plugins Supporting Conventional Changelog

Modules Important to Conventional Changelog Ecosystem

Node Support Policy

We only support Long-Term Support versions of Node.

We specifically limit our support to LTS versions of Node, not because this package won't work on other versions, but because we have a limited amount of time, and supporting LTS offers the greatest return on that investment.

It's possible this package will work correctly on newer versions of Node. It may even be possible to use this package on older versions of Node, though that's more unlikely as we'll make every effort to take advantage of features available in the oldest LTS version we support.

As each Node LTS version reaches its end-of-life we will remove that version from the node engines property of our package's package.json file. Removing a Node version is considered a breaking change and will entail the publishing of a new major version of this package. We will not accept any requests to support an end-of-life version of Node. Any merge requests or issues supporting an end-of-life version of Node will be closed.

We will accept code that allows this package to run on newer, non-LTS versions of Node. Furthermore, we will attempt to ensure our own changes work on the latest version of Node. To help in that commitment, our continuous integration setup runs against all LTS versions of Node in addition to the most recent Node release, called Current.

JavaScript package managers should allow you to install this package with any version of Node, with, at most, a warning if your version of Node does not fall within the range specified by our node engines property. If you encounter issues installing this package, please report the issue to your package manager.

Author: Conventional-changelog
Source Code: 
License: ISC License

#git #metadata #history 

Generate Changelogs & Release Notes From A Project's Commit Messages

The Mobile App Boom – Roman Taranov Explores Its History And Future

Who would have thought back in the early 1980s that there would be such a thing as the mobile app boom? Yet here we are in the 21st century, and there’s scarcely an individual on the planet who hasn’t used a mobile application. Roman Taranov - entrepreneur and owner of Ruby Labs, a mobile app development company - is perhaps in the very best position to take a look back at the history of their development and to give his insights into where the industry could be going in the future.

The Past Of The App

Are Apps Dead?

The Profitability Of The App Development Sector

The Rise Of The Chatbot

#mobile-app-development #mobile-apps #mobile #android-app-development #ios-app-development #chatbots #history #technology-trends

The Mobile App Boom – Roman Taranov Explores Its History And Future
Franz  Becker

Franz Becker


The Great Comeback Of HTML Widgets

Do you remember those web widgets all websites used in the '90s? Guess what, they're coming back.

I’m not talking about the good old live feed you used to see 30 years ago, or the vividly colored flashing popups that probably seared your screen a few times.

No, I’m not talking about that. Widgets have improved and changed their design, but they’re definitely back in this decade.

Live chat widgets, social share buttons, cookie notices, exit intent popups, … Did you think they were dead? Nope, they are back!

There is a good reason behind that. In fact, the same reasons that drove webmasters to adopt website widgets in the 1990s are still valid today, just in a different way.

Widgets are here to fulfill the need for complex features. Of course, web technologies have simplified programming over the past years, but their rapid growth is also raising the bar for what new websites need. You can’t imagine building a professional website without a news feed, an image gallery, a cookie consent banner, a scheduling plugin, …

How can you solve that if your IT workforce is limited? Use widgets!

Widgets are changing the way we build websites in 2021, and it’s probably a trend to watch for the upcoming years, so, stay tuned!

#website #website-development #programming #web #history

The Great Comeback Of HTML Widgets
Lina  Biyinzika

Lina Biyinzika


History of AI: Timeline, Advancement & Development

Artificial intelligence is a young domain of about 60 years, comprising a set of techniques, theories, sciences, and algorithms that emulate human intelligence. Artificial intelligence plays a very significant role in our lives, and its adoption across industries has driven many developments in business. In this blog, we will give an overview of the history of artificial intelligence.

What is Artificial Intelligence?

Artificial intelligence is defined as the ability of a machine to perform tasks and activities that are usually performed by humans. Artificial intelligence gathers and organizes vast amounts of data to produce useful insights. It is also known as machine intelligence, and it is a domain of computer science.

#artificial intelligence #ai #history #history of ai

History of AI: Timeline, Advancement & Development
Karlee  Will

Karlee Will


SQL's 50 Year Reign: Here's Why SQL Is Still Relevant Today

In March 1971, Intel introduced the world’s first general microprocessor, the Intel 4004. It had ~2,300 transistors and cost $60.

Fast forward almost 50 years, and the newest iPhone has nearly 12 billion transistors (but unfortunately costs a little more than $60).

Many of the programming languages we use today were not introduced until the 90s (Java was introduced in 1996). However, there is one programming language that is still as popular today as it was when it was introduced nearly 50 years ago: SQL.

This article will discuss the events that led to the introduction of relational databases, why SQL grew in popularity, and what we can learn from its success.

#sql #history

SQL's 50 Year Reign: Here's Why SQL Is Still Relevant Today
Kacey  Hudson

Kacey Hudson


An easy guide to the history of Artificial Intelligence

If we go back in history as far as ancient Greece, we discover that intelligent machines and artificial beings first appeared in the myths of antiquity.
Aristotle’s development of the syllogism and its use of deductive reasoning was a crucial moment in mankind’s quest to understand its own intelligence.
But when it comes to AI and machine learning, we don’t need to look back nearly so far, because the history of artificial intelligence as we think of it today spans less than a century.
I want to share a quick look at some of the most critical events in AI since its beginning, along with some interesting links.

#machine-learning #deep-learning #history #artificial-intelligence #guides-and-tutorials

An easy guide to the history of Artificial Intelligence
Aisu  Joesph

Aisu Joesph


How To Make A Go Board With CSS

I was inspired to write about Go after watching The Queen’s Gambit recently. There is something alluring about learning a five-hundred-year-old opening move. A modern computer can simulate millions of game patterns in a fraction of a second, but it will never know the joy of learning the name and history behind a move.

This article will recreate, in HTML and CSS, the famous opening sequence from a match between Go Seigen and Honinbo Shusai Meijin.

#history #web-development #css #design #go

How To Make A Go Board With CSS
Ssekidde  Nat

Ssekidde Nat


The Power Of HTML And CSS Evolution

The evolution of HTML from HTML 2 to HTML 5 has brought an enormous shift that has empowered web developers in tremendous ways. Committed web engineers who have been in this space long enough will tell you that these changes have made web development much easier. Each HTML release brings a better and easier way of doing things, and for those who have not written HTML for some time, catching up without taking a course is next to impossible. With HTML 2, launched in 1995, all the styling and page layout was the responsibility of HTML itself.

#web-development #html #html5 #css #history #coding

The Power Of HTML And CSS Evolution

A Brief History of Artificial Intelligence

Artificial Intelligence (AI) has become a common phrase today, and it no longer surprises us what some of the AI-based systems in our daily lives can achieve. However, this is the result of decades of progress. In this article, I present a brief history of AI. Most of it is a summary of a chapter in the book “Artificial Intelligence: A Modern Approach” by Stuart J. Russell and Peter Norvig.

#artificial-intelligence #algorithms #history #neural-networks #ai

A Brief History of Artificial Intelligence