Instrumental: Ruby Agent for Instrumental Application Monitoring

Instrumental Ruby Agent

Instrumental is an application monitoring platform built for developers who want a better understanding of their production software. Powerful tools like the Instrumental Query Language, combined with an exploration-focused interface, allow you to get real answers to complex questions in real time.

This agent supports custom metric monitoring for Ruby applications. It provides high data reliability at high scale, without ever blocking your process or raising an exception.

Setup & Usage

Add the gem to your Gemfile.

gem 'instrumental_agent'

Visit instrumentalapp.com and create an account, then initialize the agent with your project API token.

I = Instrumental::Agent.new('PROJECT_API_TOKEN', :enabled => Rails.env.production?)

You'll probably want something like the above, enabling the agent only in production so you don't have development and production data writing to the same metrics. Alternatively, you can set up two projects, so that you can verify stats in one and release them to production in another.
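
In a Rails app, for example, you might keep this in an initializer so the token and the enablement logic live in one place. A minimal sketch, assuming the token comes from an environment variable (the file path and INSTRUMENTAL_TOKEN name are just illustrative):

# config/initializers/instrumental.rb (hypothetical location)
require 'instrumental_agent'

I = Instrumental::Agent.new(
  ENV['INSTRUMENTAL_TOKEN'],          # assumed env var; use your own config source
  :enabled => Rails.env.production?
)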

Now you can begin to use Instrumental to track your application.

I.gauge('load', 1.23)                # value at a point in time
I.increment('signups')               # increasing value, think "events"
I.time('query_time') do              # time a block of code
  post = Post.find(1)
end
I.time_ms('query_time_in_ms') do     # prefer milliseconds?
  post = Post.find(1)
end
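
As a hypothetical example of combining these calls in application code (User, register, and the metric names are stand-ins for your own):

def register(params)
  I.increment('signups')              # count the event
  I.time('signups.create_time') do    # time the slow part
    User.create!(params)
  end
end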

Note: For your app's safety, the agent is meant to isolate your app from any problems our service might suffer. If it is unable to connect to the service, it will discard data after reaching a low memory threshold.

Want to track an event (like an application deploy or downtime)? You can capture events that are instantaneous, or events that happen over a period of time.

I.notice('Jeffy deployed rev ef3d6a') # instantaneous event
I.notice('Testing socket buffer increase', 3.days.ago, 20.minutes) # an event with a duration
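
The 3.days.ago and 20.minutes helpers come from ActiveSupport; outside Rails the second and third arguments are simply a Time and a duration in seconds. A plain-Ruby sketch of the same call:

I.notice('Testing socket buffer increase', Time.now - (3 * 24 * 60 * 60), 20 * 60)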

Backfilling

Streaming data is better with a little historical context. Instrumental lets you backfill data, allowing you to see deep into your project's past.

When backfilling, you may send tens of thousands of metrics per second, and the command buffer may start discarding data it isn't able to send fast enough. We provide a synchronous mode that will ensure every stat makes it to Instrumental before continuing on to the next.

Warning: You should only enable synchronous mode when backfilling data, as any issue with the Instrumental service will cause this code to halt until it can reconnect.

I.synchronous = true # every command sends immediately
User.find_each do |user|
  I.increment('signups', 1, user.created_at)
end
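
If the backfill runs inside a longer-lived process, switch synchronous mode back off when you're done. A defensive sketch of the same loop:

I.synchronous = true
begin
  User.find_each do |user|
    I.increment('signups', 1, user.created_at)
  end
ensure
  I.synchronous = false               # restore the normal non-blocking behavior
end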

Aggregation

Aggregation collects more data on your system before sending it to Instrumental. This reduces the total amount of data being sent, at the cost of a small amount of additional latency. You can control this feature with the frequency parameter:

I = Instrumental::Agent.new('PROJECT_API_TOKEN', :frequency => 15) # send data every 15 seconds
I.frequency = 6 # send batches of data every 6 seconds

The agent may send data more frequently if you are sending a large number of different metrics. Values between 3 and 15 are generally reasonable. If you want to disable this behavior and send every metric as fast as possible, set frequency to zero or nil. Note that a frequency of zero will still use a separate thread for performance - it is NOT the same as synchronous mode.
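
For example, to turn aggregation off while keeping the agent's background reporting thread:

I.frequency = 0                       # or nil; each metric is sent as soon as possible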

Server Metrics

Want server stats like load, memory, etc.? Check out InstrumentalD.

Agent Control

Need to quickly disable the agent? Set :enabled to false on initialization and you won't need to change any application code.
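
For example:

I = Instrumental::Agent.new('PROJECT_API_TOKEN', :enabled => false) # metric calls still run, but nothing is sent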

Capistrano Integration

Add require "instrumental/capistrano" to your Capistrano configuration and your deploys will be tracked by Instrumental. Add the API token for the project you want to track by setting the following Capistrano variable:

set :instrumental_key, "MY_API_KEY"

The following configuration will be added:

before "deploy", "instrumental:util:deploy_start"
after  "deploy", "instrumental:util:deploy_end"
before "deploy:migrations", "instrumental:util:deploy_start"
after  "deploy:migrations", "instrumental:util:deploy_end"
after  "instrumental:util:deploy_end", "instrumental:record_deploy_notice"

The default message sent is "USER deployed COMMIT_HASH". If you need to customize it, set a Capistrano variable named deploy_message to the value you'd prefer.
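
For example (the interpolated values are illustrative; use whatever variables your deploy already defines):

set :deploy_message, "#{ENV['USER']} deployed #{fetch(:branch)}"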

Tracking metrics in Resque jobs (and Resque-like scenarios)

If you plan on tracking metrics in Resque jobs, you will need to explicitly clean up after the agent when the jobs are finished. You can accomplish this by adding after_perform and on_failure hooks to your Resque jobs. See the Resque hooks documentation for more information.

You're required to do this because Resque calls exit! when a worker has finished processing, which bypasses Ruby's at_exit hooks. The Instrumental Agent installs an at_exit hook to flush any pending metrics to the servers, but this hook is bypassed by the exit! call. Any other code you rely on that uses exit! should call I.cleanup to ensure pending metrics are sent to the server before the process exits.
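
A minimal sketch of what that cleanup might look like (ReportJob and the metric name are hypothetical; the method names follow Resque's hook-naming convention, where any class method starting with after_perform or on_failure is treated as a hook):

class ReportJob
  @queue = :reports

  def self.perform(report_id)
    I.time('reports.generate_time') do
      # ... build the report ...
    end
  end

  def self.after_perform_flush_instrumental(*args)
    I.cleanup                         # flush pending metrics before Resque calls exit!
  end

  def self.on_failure_flush_instrumental(exception, *args)
    I.cleanup
  end
end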

Automated Metric Collection

v2.x+ of the Instrumental Agent introduced automated metric collection for your application by way of the Metrician gem. You can read more about the metrics it collects in the Instrumental documentation.

Upgrading from 1.x

If you are upgrading from the pre-2.x version of instrumental and do not want automated metric collection, you can disable it by setting the following in your agent setup:

I = Instrumental::Agent.new('PROJECT_API_TOKEN',
  :enabled => Rails.env.production?,
  :metrician => false
)

Upgrading from 2.x

Agent version 3.x drops support for some older rubies, but should otherwise be a drop-in replacement. If you wish to enable Aggregation, enable the agent with the frequency option set to the number of seconds you would like to wait between flushes. For example:

I = Instrumental::Agent.new('PROJECT_API_TOKEN',
  :enabled => Rails.env.production?,
  :frequency => 15
)

Troubleshooting & Help

We are here to help. Email us at support@instrumentalapp.com.

Release Process

  1. Pull latest master
  2. Merge feature branch(es) into master
  3. Run script/test
  4. Increment the version in lib/instrumental/version.rb
  5. Update CHANGELOG.md
  6. Commit "Release vX.Y.Z"
  7. Push to GitHub
  8. Release packages: rake release
  9. Update documentation on instrumentalapp.com

Version Policy

This library follows Semantic Versioning 2.0.0.


Author:  Instrumental
Source code: https://github.com/Instrumental/instrumental_agent-ruby
License: MIT license

#ruby #rails 
