Eye: Process Monitoring Tool, Inspired by Bluepill and God, in Ruby

Eye 

Process monitoring tool, inspired by Bluepill and God. Requires Ruby (MRI) >= 1.9.3-p194. Uses Celluloid and Celluloid::IO.

A short demo shows the general commands and how chaining works.

Installation:

$ gem install eye
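
If the project uses Bundler, the gem can also go into the Gemfile instead of a global install (a minimal sketch):

# Gemfile
source 'https://rubygems.org'

gem 'eye'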

Why?

We used god and bluepill in production and repeatedly ran into bugs (segfaults, crashes, lost processes, killed unrelated processes, load problems, deploy problems, ...).

We wanted something more robust and production-stable.

We wanted the features of bluepill and god, with a few extras like chains, nested configuration, mask matching, and easy debug configs.

We hope we've succeeded; we're using Eye in production and are quite happy.

Config example

examples/test.eye (more examples)

# load submodules, here just for example (a sketch of such a file follows this config)
Eye.load('./eye/*.rb')

# Eye self-configuration section
Eye.config do
  logger '/tmp/eye.log'
end

# Adding application
Eye.application 'test' do
  # All options inherit down to the config leaves,
  # except `env`, which merges down.

  # uid "user_name" # run app as a user_name (optional) - available only on ruby >= 2.0
  # gid "group_name" # run app as a group_name (optional) - available only on ruby >= 2.0

  working_dir File.expand_path(File.join(File.dirname(__FILE__), %w[ processes ]))
  stdall 'trash.log' # stdout/stderr logs for processes by default
  env 'APP_ENV' => 'production' # global env for all processes
  trigger :flapping, times: 10, within: 1.minute, retry_in: 10.minutes
  check :cpu, every: 10.seconds, below: 100, times: 3 # global check for all processes

  group 'samples' do
    chain grace: 5.seconds # chained start-restart with 5s interval, one by one.

    # eye daemonized process
    process :sample1 do
      pid_file '1.pid' # pid_path will be expanded with the working_dir
      start_command 'ruby ./sample.rb'

      # when no stop_command or stop_signals, default stop is [:TERM, 0.5, :KILL]
      # default `restart` command is `stop; start`

      daemonize true
      stdall 'sample1.log'

      # ensure the CPU is below 30% at least 3 out of the last 5 times checked
      check :cpu, below: 30, times: [3, 5]
    end

    # self daemonized process
    process :sample2 do
      pid_file '2.pid'
      start_command 'ruby ./sample.rb -d --pid 2.pid --log sample2.log'
      stop_command 'kill -9 {PID}'

      # ensure the memory is below 300Mb the last 3 times checked
      check :memory, every: 20.seconds, below: 300.megabytes, times: 3
    end
  end

  # daemon with 3 children
  process :forking do
    pid_file 'forking.pid'
    start_command 'ruby ./forking.rb start'
    stop_command 'ruby forking.rb stop'
    stdall 'forking.log'

    start_timeout 10.seconds
    stop_timeout 5.seconds

    monitor_children do
      restart_command 'kill -2 {PID}' # for this child process
      check :memory, below: 300.megabytes, times: 3
    end
  end

  # eventmachine process, daemonized with eye
  process :event_machine do
    pid_file 'em.pid'
    start_command 'ruby em.rb'
    stdout 'em.log'
    daemonize true
    stop_signals [:QUIT, 2.seconds, :KILL]

    check :socket, addr: 'tcp://127.0.0.1:33221', every: 10.seconds, times: 2,
                   timeout: 1.second, send_data: 'ping', expect_data: /pong/
  end

  # thin process, self daemonized
  process :thin do
    pid_file 'thin.pid'
    start_command 'bundle exec thin start -R thin.ru -p 33233 -d -l thin.log -P thin.pid'
    stop_signals [:QUIT, 2.seconds, :TERM, 1.seconds, :KILL]

    check :http, url: 'http://127.0.0.1:33233/hello', pattern: /World/,
                 every: 5.seconds, times: [2, 3], timeout: 1.second
  end
end
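
The Eye.load('./eye/*.rb') line at the top pulls in additional config files; a loaded file is just more of the same DSL. A minimal sketch of such a submodule (the file name, application, and commands are hypothetical):

# eye/redis.rb -- a hypothetical file matched by Eye.load('./eye/*.rb')
Eye.application 'redis' do
  working_dir '/var/lib/redis'

  process :redis do
    pid_file '/var/run/redis.pid'
    start_command 'redis-server /etc/redis/redis.conf'
    stop_signals [:TERM, 5.seconds, :KILL]
    check :memory, every: 30.seconds, below: 1024.megabytes, times: 3
  end
end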

Start the eye daemon and/or load a config:

$ eye l(oad) examples/test.eye

Load a folder of configs:

$ eye l examples/
$ eye l examples/*.rb

Foreground load:

$ eye l CONF -f

If the eye daemon is already running and you call the load command, the config inside the daemon is updated. New objects (applications, groups, processes) are added and monitored. Processes removed from the config are removed (and stopped, if the process has stop_on_delete true). Other objects have their configs updated.
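
For example, a process that should also be stopped when it disappears from the config can opt in with stop_on_delete (a sketch; the application, process name, and command are hypothetical):

Eye.application 'test' do
  process :worker do # hypothetical worker process
    pid_file 'worker.pid'
    start_command 'ruby ./worker.rb'
    daemonize true
    stop_on_delete true # stop this process when it is removed from the config on reload
  end
end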

Two global configs are loaded by default, if they exist (on the first eye load); a sketch of one is shown after the list:

/etc/eye.conf
~/.eyeconfig
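
Both files use the same Ruby DSL as any other config; daemon-wide settings such as logging typically live there. For example, ~/.eyeconfig might contain something like this (purely illustrative):

# ~/.eyeconfig -- loaded automatically on the first `eye load`
Eye.config do
  logger '/var/log/eye.log'
end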

Process statuses:

$ eye i(nfo)
test
  samples
    sample1 ....................... up  (21:52, 0%, 13Mb, <4107>)
    sample2 ....................... up  (21:52, 0%, 12Mb, <4142>)
  event_machine ................... up  (21:52, 3%, 26Mb, <4112>)
  forking ......................... up  (21:52, 0%, 41Mb, <4203>)
    child-4206 .................... up  (21:52, 0%, 41Mb, <4206>)
    child-4211 .................... up  (21:52, 0%, 41Mb, <4211>)
    child-4214 .................... up  (21:52, 0%, 41Mb, <4214>)
  thin ............................ up  (21:53, 2%, 54Mb, <4228>)
$ eye i -j # show info in JSON format
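
Because eye i -j emits JSON, the status can be consumed from scripts. A minimal Ruby sketch that shells out to the CLI and pretty-prints whatever structure comes back (it makes no assumptions about the exact keys):

require 'json'
require 'pp'

raw = `eye i -j`   # run the CLI and capture its JSON output
abort 'eye returned no data (is the daemon running?)' if raw.strip.empty?

pp JSON.parse(raw) # pretty-print the parsed status structure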

Commands:

start, stop, restart, delete, monitor, unmonitor

Command parameters (using restart as an example):

$ eye r(estart) all
$ eye r test
$ eye r samples
$ eye r sample1
$ eye r sample*
$ eye r test:samples
$ eye r test:samples:sample1
$ eye r test:samples:sample*
$ eye r test:*sample*

Check config syntax:

$ eye c(heck) examples/test.eye

Explain a config (for debugging):

$ eye e(xplain) examples/test.eye

Log tracing (tail and grep):

$ eye t(race)
$ eye t test
$ eye t sample

Quit monitoring:

$ eye q(uit)
$ eye q -s # stop all processes and quit

Interactive info:

$ eye w(atch)

Process status history:

$ eye hi(story)

Eye daemon info:

$ eye x(info)
$ eye x -c # show the current config

Local Eye version, LEye (like foreman):

LEye

Process states and events:

Eye

How to write Eye extensions, plugins, and gems:

Eye-http, Eye-rotate, Eye-hipchat, plugin example

Eye-related projects

Articles

Wiki

Thanks to Bluepill for the nice config ideas.


Author: kostya
Source code: https://github.com/kostya/eye
License: MIT license
