public_activity provides easy activity tracking for your ActiveRecord, Mongoid 3 and MongoMapper models in Rails 3 and 4.
Simply put: it can record what happens in your application and gives you the ability to present those recorded activities to users - in a similar way to how GitHub does it.
You probably don't want to read the docs for this unreleased version 2.0. For the stable 1.5.x readme, see: https://github.com/chaps-io/public_activity/blob/1-5-stable/README.md
Here is a simple example showing what this gem is about:
Ryan Bates made a great screencast describing how to integrate Public Activity.
A great step-by-step guide on implementing activity feeds using public_activity by Ilya Bodrov.
You can see an actual application using this gem here: http://public-activity-example.herokuapp.com/feed
The source code of the demo is hosted here: https://github.com/pokonski/activity_blog
You can install public_activity as you would any other gem:
gem install public_activity
or in your Gemfile:
gem 'public_activity'
By default public_activity uses Active Record. If you want to use Mongoid or MongoMapper as your backend, create an initializer file in your Rails application with the corresponding code inside:
For Mongoid:
# config/initializers/public_activity.rb
PublicActivity.configure do |config|
  config.orm = :mongoid
end
For MongoMapper:
# config/initializers/public_activity.rb
PublicActivity.configure do |config|
  config.orm = :mongo_mapper
end
(ActiveRecord only) Create migration for activities and migrate the database (in your Rails project):
rails g public_activity:migration
rake db:migrate
Include PublicActivity::Model and add tracked to the model you want to keep track of:
For ActiveRecord:
class Article < ActiveRecord::Base
  include PublicActivity::Model
  tracked
end
For Mongoid:
class Article
  include Mongoid::Document
  include PublicActivity::Model
  tracked
end
For MongoMapper:
class Article
  include MongoMapper::Document
  include PublicActivity::Model
  tracked
end
And now, by default, create/update/destroy activities are recorded in the activities table. This is all you need to start recording activities for basic CRUD actions.
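For example, assuming the tracked Article model from above, a create immediately yields an activity record (a sketch; the attribute name is illustrative):

```ruby
article = Article.create(name: 'public_activity rocks!')
activity = article.activities.last  # tracked models gain an `activities` association
activity.key       # => "article.create"
activity.trackable # => the article itself
```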
Optional: if you don't need #tracked but still want the comfort of #create_activity, you can include only the lightweight Common module instead of Model.
You can trigger custom activities by setting all your required parameters and triggering create_activity on the tracked model, like this:
@article.create_activity key: 'article.commented_on', owner: current_user
See this entry http://rubydoc.info/gems/public_activity/PublicActivity/Common:create_activity for more details.
To display them you simply query the PublicActivity::Activity model:
# notifications_controller.rb
def index
  @activities = PublicActivity::Activity.all
end
And in your views:
<%= render_activities(@activities) %>
Note: render_activities is an alias for render_activity and does the same.
You can also pass options to both the activity#render and #render_activity methods; these are passed on to the internally used render_partial method. A useful example would be rendering activities wrapped in a layout, which shares common elements of an activity, like a timestamp, the owner's avatar etc.:
<%= render_activities(@activities, layout: :activity) %>
The activity will be wrapped with the app/views/layouts/_activity.html.erb
layout, in the above example.
Important: please note that layouts for activities are also partials, hence the _ prefix.
Sometimes, it's desirable to pass additional local variables to partials. It can be done this way:
<%= render_activity(@activity, locals: {friends: current_user.friends}) %>
Note: before 1.4.0, one could pass variables directly to the options hash of #render_activity and access them from activity parameters. This functionality is retained in 1.4.0 and later, but the :locals option is preferred, since it prevents bugs caused by shadowing variables from activity parameters in the database.
public_activity looks for views in app/views/public_activity.
For example, if you have an activity with :key set to "activity.user.changed_avatar", the gem will look for a partial in app/views/public_activity/user/_changed_avatar.html.(|erb|haml|slim|something_else).
Hint: the "activity." prefix in :key is completely optional and kept for backwards compatibility; you can skip it in new projects.
If you would like to fall back to a partial, you can use the fallback parameter to specify the path of a partial to use when one is missing:
<%= render_activity(@activity, fallback: 'default') %>
When used in this manner, if a partial with the specified :key cannot be located, the partial defined by fallback is used instead. In the example above this would resolve to public_activity/_default.html.(|erb|haml|slim|something_else).
If a view file does not exist, ActionView::MissingTemplate will be raised. If you wish to fall back to the old behaviour and use an i18n-based translation in this situation, you can pass a :fallback parameter of :text to use this mechanism, like so:
<%= render_activity(@activity, fallback: :text) %>
Translations are used by the #text method, to which you can pass additional options in the form of a hash. The #render method uses translations when view templates have not been provided. You can render pure i18n strings by passing {display: :i18n} to #render_activity or #render.
Translations should be put in your locale .yml files. Example structure:
activity:
  article:
    create: 'Article has been created'
    update: 'Someone has edited the article'
    destroy: 'Some user removed an article!'
This structure is valid for activities with keys "activity.article.create" or "article.create". As mentioned before, the "activity." part of the key is optional.
For RSpec you can first disable public_activity and require the helper methods in rails_helper.rb:
#rails_helper.rb
require 'public_activity/testing'
PublicActivity.enabled = false
In your specs you can then decide block-wise whether to turn public_activity on or off:
# file_spec.rb
PublicActivity.with_tracking do
  # your test code goes here
end

PublicActivity.without_tracking do
  # your test code goes here
end
For more documentation go here
You can set up a default value for :owner by doing this:

1. Include PublicActivity::StoreController in your ApplicationController like this:

class ApplicationController < ActionController::Base
  include PublicActivity::StoreController
end

2. Use a Proc for the :owner attribute of the tracked class method in your desired model. For example:

class Article < ActiveRecord::Base
  tracked owner: Proc.new { |controller, model| controller.current_user }
end
Note: current_user applies to Devise; if you are using a different authentication gem or your own code, change current_user to the method you use.
If you need to disable tracking temporarily, for example in tests or db/seeds.rb, you can use the PublicActivity.enabled= setter like below:
# Disable p_a globally
PublicActivity.enabled = false
# Perform some operations that would normally be tracked by p_a:
Article.create(title: 'New article')
# Switch it back on
PublicActivity.enabled = true
You can also disable public_activity for a specific class:
# Disable p_a for Article class
Article.public_activity_off
# p_a will not do anything here:
@article = Article.create(title: 'New article')
# But will be enabled for other classes:
# (creation of the comment will be recorded if you are tracking the Comment class)
@article.comments.create(body: 'some comment!')
# Enable it again for Article:
Article.public_activity_on
Besides the standard, automatic activities created on CRUD actions on your model (which can be disabled), you can post your own activities without modifying the tracked model. There are a few ways to do this, as PublicActivity gives you three tiers of options to set.
Because every activity needs a key (otherwise NoKeyProvided is raised), the shortest, minimal way to post an activity is:
@user.create_activity :mood_changed
# the key of the action will be user.mood_changed
@user.create_activity action: :mood_changed # this is exactly the same as above
Besides assigning your key (which is obvious from the code), it will take global options from the User class (given in the #tracked method during class definition) and overwrite them with instance options (set on @user by the #activity method). You can read more about options and how PublicActivity inherits them for you here.
Note that the action parameter builds the key like this: "#{model_name}.#{action}". You can read further on options for #create_activity here.
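The key derivation itself is plain string interpolation:

```ruby
model_name = "user"        # the downcased model name
action     = :mood_changed
key = "#{model_name}.#{action}"
# => "user.mood_changed"
```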
To provide more options, you can do:
@user.create_activity action: 'poke', parameters: {reason: 'bored'}, recipient: @friend, owner: current_user
In this example, we have provided all the things we could for a standard Activity.
Besides the few fields that every Activity has (key, owner, recipient, trackable, parameters), you can also set custom fields. This can be very beneficial, as parameters are a serialized hash, which cannot be queried easily from the database. That being said, use custom fields when you know that you will set them very often and search by them (don't forget database indexes :) ).
Setting the owner and recipient based on associations:

class Comment < ActiveRecord::Base
  include PublicActivity::Model
  tracked owner: :commenter, recipient: :commentee

  belongs_to :commenter, :class_name => "User"
  belongs_to :commentee, :class_name => "User"
end
class Post < ActiveRecord::Base
  include PublicActivity::Model
  tracked only: [:update], parameters: :tracked_values

  def tracked_values
    {}.tap do |hash|
      hash[:tags] = tags if tags_changed?
    end
  end
end
Skip this step if you are using ActiveRecord in Rails 4 or Mongoid.
The first step is similar in every ORM available (except Mongoid). To be able to assign to that field, we need to move it to the mass assignment sanitizer's whitelist:

PublicActivity::Activity.class_eval do
  attr_accessible :custom_field
end

Place this code under config/initializers/public_activity.rb; you have to create the file first.
If you're using ActiveRecord, you will also need to provide a migration to add the actual field to the Activity model. Taken from our tests:
class AddCustomFieldToActivities < ActiveRecord::Migration
  def change
    change_table :activities do |t|
      t.string :custom_field
    end
  end
end
Assigning is done by the same methods that you use for normal parameters: #tracked and #create_activity. You can just pass the name of your custom variable and assign its value. Even better, you can pass it to #tracked to tell us how to harvest your data for custom fields so we can do that for you.
class Article < ActiveRecord::Base
  include PublicActivity::Model
  tracked custom_field: proc { |controller, model| controller.some_helper }
end
If you need help with using public_activity please visit our discussion group and ask a question there:
https://groups.google.com/forum/?fromgroups#!forum/public-activity
Please do not ask general questions in the Github Issues.
Author: public-activity
Source code: https://github.com/public-activity/public_activity
License: MIT license
Ransack will help you easily add searching to your Rails application, without any additional dependencies.
There are advanced searching solutions around, like ElasticSearch or Algolia. Ransack will do the job for many Rails websites, without the need to run additional infrastructure or work in a different language. With Ransack you do it all with standard Ruby and ERB.
Ready to move beyond the basics? Use advanced features like i18n and extensive configuration options.
Ransack is supported for Rails 7.0, 6.x on Ruby 2.6.6 and later.
To install ransack, add it to your Gemfile:
gem 'ransack'
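As a quick illustration of what Ransack gives you once installed (a sketch using Ransack's documented ransack/result API; the model and the search criteria are examples):

```ruby
# In a controller: build a search object from user-supplied criteria.
# `name_cont` means "name contains" -- one of Ransack's predicate suffixes.
@q = User.ransack(params[:q])        # e.g. params[:q] == { name_cont: "bob" }
@users = @q.result(distinct: true)   # an ActiveRecord::Relation you can keep chaining
```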
If you would like to use the latest updates not yet published to RubyGems, use the main branch:
gem 'ransack', :github => 'activerecord-hackery/ransack', :branch => 'main'
There is extensive documentation on Ransack, which is a Docusaurus project run as a GitHub Pages site.
To support the project:
Ransack was created by Ernie Miller and is developed and maintained by:
Alumni Maintainers
Author: Activerecord-hackery
Source Code: https://github.com/activerecord-hackery/ransack
License: MIT license
Kaminari
A Scope & Engine based, clean, powerful, customizable and sophisticated paginator for modern web app frameworks and ORMs
Does not globally pollute Array, Hash, Object or AR::Base.
Just bundle the gem, then your models are ready to be paginated. No configuration required. You don't have to define anything in your models or helpers.
Everything is method-chainable with less "Hasheritis". You know, that's the modern Rails way. There is no special collection class or anything for the paginated values; instead a general AR::Relation instance is used. So, of course, you can chain any other conditions before or after the paginator scope.
As the whole pagination helper is basically just a collection of links and non-links, Kaminari renders each of them through its own partial template inside the Engine. So, you can easily modify their behaviour, style or whatever by overriding partial templates.
Kaminari supports multiple ORMs (ActiveRecord, DataMapper, Mongoid, MongoMapper), multiple web frameworks (Rails, Sinatra, Grape), and multiple template engines (ERB, Haml, Slim).
The pagination helper outputs the HTML5 <nav> tag by default. Plus, the helper supports Rails unobtrusive Ajax.
Ruby 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 3.0, 3.1, 3.2
Rails 4.1, 4.2, 5.0, 5.1, 5.2, 6.0, 6.1, 7.0, 7.1
Sinatra 1.4, 2.0
Haml 3+
Mongoid 3+
MongoMapper 0.9+
DataMapper 1.1.0+
To install kaminari on the default Rails stack, just put this line in your Gemfile:
gem 'kaminari'
Then bundle:
% bundle
If you're building a non-Rails or non-ActiveRecord app and want the pagination feature in it, please take a look at the Other Framework/Library Support section.
The page Scope
To fetch the 7th page of users (default per_page is 25):
User.page(7)
Note: pagination starts at page 1, not at page 0 (page(0) will return the same results as page(1)).
Kaminari does not add an order
to queries. To avoid surprises, you should generally include an order in paginated queries. For example:
User.order(:name).page(7)
You can get page numbers or page conditions by using the methods below:
User.count #=> 1000
User.page(1).limit_value #=> 20
User.page(1).total_pages #=> 50
User.page(1).current_page #=> 1
User.page(1).next_page #=> 2
User.page(2).prev_page #=> 1
User.page(1).first_page? #=> true
User.page(50).last_page? #=> true
User.page(100).out_of_range? #=> true
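The numbers those helpers return come down to simple integer arithmetic; a plain-Ruby sketch of the math (illustrative, not Kaminari's implementation):

```ruby
total_count = 1000
per_page    = 20

total_pages  = (total_count.to_f / per_page).ceil  # => 50, matching total_pages above
next_page    = 1 < total_pages ? 1 + 1 : nil       # page(1).next_page => 2
prev_page    = 2 > 1 ? 2 - 1 : nil                 # page(2).prev_page => 1
out_of_range = 100 > total_pages                   # page(100).out_of_range? => true
```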
The per Scope
To show a lot more users per page (change the per value):
User.order(:name).page(7).per(50)
Note that the per scope is not directly defined on the models; it is just a method defined on the page scope. This is absolutely reasonable because you will never actually use per without specifying the page number.
Keep in mind that per internally utilizes limit, so it will override any limit that was set previously. And if you want to get the size of the whole set of requested records, you can use the total_count method:
User.count #=> 1000
a = User.limit(5); a.count #=> 5
a.page(1).per(20).size #=> 20
a.page(1).per(20).total_count #=> 1000
The padding Scope
Occasionally you need to pad a number of records that is not a multiple of the page size:
User.order(:name).page(7).per(50).padding(3)
Note that the padding scope also is not directly defined on the models.
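Under the hood, page/per/padding translate to a LIMIT/OFFSET pair; a plain-Ruby sketch of the arithmetic (illustrative, not Kaminari internals):

```ruby
# Compute the SQL window for a given page
def pagination_window(page, per, padding: 0)
  { limit: per, offset: (page - 1) * per + padding }
end

pagination_window(7, 50)              # => {:limit=>50, :offset=>300}
pagination_window(7, 50, padding: 3)  # => {:limit=>50, :offset=>303}
```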
If for some reason you need to unscope the page and per methods, you can call except(:limit, :offset):
users = User.order(:name).page(7).per(50)
unpaged_users = users.except(:limit, :offset) # unpaged_users will not use the kaminari scopes
You can configure the following default values by overriding them using the Kaminari.configure method:
default_per_page # 25 by default
max_per_page # nil by default
max_pages # nil by default
window # 4 by default
outer_window # 0 by default
left # 0 by default
right # 0 by default
page_method_name # :page by default
param_name # :page by default
params_on_first_page # false by default
There's a handy generator that generates the default configuration file into config/initializers directory. Run the following generator command, then edit the generated file.
% rails g kaminari:config
page_method_name
You can change the method name from page to bonzo or plant or whatever you like, in order to play nice with an existing page method, association, scope, or any other plugin that defines a page method on your models.
paginates_per
You can specify a default per_page value for each model using the following declarative DSL:
class User < ActiveRecord::Base
  paginates_per 50
end
max_paginates_per
You can specify a max per_page value for each model using the following declarative DSL. If the value given via the per scope exceeds it, max_paginates_per is used instead. The default value is nil, which means you are not imposing any max per_page value.
class User < ActiveRecord::Base
  max_paginates_per 100
end
max_pages
You can specify a max_pages value for each model using the following declarative DSL. This value restricts the total number of pages that can be returned, which is useful for setting limits on large collections.
class User < ActiveRecord::Base
  max_pages 100
end
If you are using the ransack_memory gem and experience problems navigating back to the previous or first page, set the params_on_first_page setting to true.
Fetching the page number from params[:page]
Typically, your controller code will look like this:
@users = User.order(:name).page params[:page]
Just call the paginate helper:
<%= paginate @users %>
This will render several ?page=N pagination links surrounded by an HTML5 <nav> tag.
The paginate Helper Method
<%= paginate @users %>
This would output several pagination links such as « First ‹ Prev ... 2 3 4 5 6 7 8 9 10 ... Next › Last »
<%= paginate @users, window: 2 %>
This would output something like ... 5 6 7 8 9 ... when 7 is the current page.
<%= paginate @users, outer_window: 3 %>
This would output something like 1 2 3 ...(snip)... 18 19 20 while having 20 pages in total.
<%= paginate @users, left: 1, right: 3 %>
This would output something like 1 ...(snip)... 18 19 20 while having 20 pages in total.
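The window/outer_window/left/right options decide which page links survive truncation; a plain-Ruby sketch of that selection (illustrative, not Kaminari's code — note the current page always stays visible):

```ruby
def visible_pages(current, total, window:, left: 0, right: 0)
  (1..total).select do |p|
    p <= left || p > total - right || (p - current).abs <= window
  end
end

visible_pages(7, 20, window: 2)
# => [5, 6, 7, 8, 9]
visible_pages(10, 20, window: 0, left: 1, right: 3)
# => [1, 10, 18, 19, 20]
```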
Changing the Parameter Name (:param_name) for the Links
<%= paginate @users, param_name: :pagina %>
This would modify the query parameter name on each link.
Extra Parameters (:params) for the Links
<%= paginate @users, params: {controller: 'foo', action: 'bar', format: :turbo_stream} %>
This would modify each link's url_option. :controller and :action might be the keys in common.
<%= paginate @users, remote: true %>
This would add data-remote="true" to all the links inside.
<%= paginate @users, views_prefix: 'templates' %>
This would search for partials in app/views/templates/kaminari. This option makes it easier to do things like A/B testing pagination templates/themes, using new/old templates at the same time, as well as better integration with other gems such as cells.
The link_to_next_page and link_to_previous_page (aliased to link_to_prev_page) Helper Methods
<%= link_to_next_page @items, 'Next Page' %>
This simply renders a link to the next page. This would be helpful for creating a Twitter-like pagination feature.
The helper methods support a params option to further specify the link. If format needs to be set, include it in the params hash:
<%= link_to_next_page @items, 'Next Page', params: {controller: 'foo', action: 'bar', format: :turbo_stream} %>
The page_entries_info Helper Method
<%= page_entries_info @posts %>
This renders a helpful message with numbers of displayed vs. total entries.
By default, the message will use the humanized class name of objects in the collection: for instance, "project types" for ProjectType models. The namespace will be cut out and only the last name will be used. Override this with the :entry_name parameter:
<%= page_entries_info @posts, entry_name: 'item' %>
#=> Displaying items 6 - 10 of 26 in total
The rel_next_prev_link_tags Helper Method
<%= rel_next_prev_link_tags @users %>
This renders the rel next and prev link tags for the head.
The path_to_next_page Helper Method
<%= path_to_next_page @users %>
This returns the server relative path to the next page.
The path_to_prev_page Helper Method
<%= path_to_prev_page @users %>
This returns the server relative path to the previous page.
The default labels for 'first', 'last', 'previous', '...' and 'next' are stored in the I18n yaml inside the engine, and rendered through the I18n API. You can switch the label value per I18n.locale for your internationalized application. Keys and the default values are the following. You can override them by adding to a YAML file in your Rails.root/config/locales directory.
en:
  views:
    pagination:
      first: "« First"
      last: "Last »"
      previous: "‹ Prev"
      next: "Next ›"
      truncate: "…"
  helpers:
    page_entries_info:
      one_page:
        display_entries:
          zero: "No %{entry_name} found"
          one: "Displaying <b>1</b> %{entry_name}"
          other: "Displaying <b>all %{count}</b> %{entry_name}"
      more_pages:
        display_entries: "Displaying %{entry_name} <b>%{first} - %{last}</b> of <b>%{total}</b> in total"
If you use a non-English localization, see the i18n rules for changing the one_page:display_entries block.
Kaminari includes a handy template generator.
Run the generator first,
% rails g kaminari:views default
then edit the partials in your app's app/views/kaminari/
directory.
You can use the html2haml gem or the html2slim gem to convert erb templates. The kaminari gem will automatically pick up haml/slim templates if you place them in app/views/kaminari/.
In case you need different templates for your paginator (for example public and admin), you can pass --views-prefix directory
like this:
% rails g kaminari:views default --views-prefix admin
This will generate partials in the app/views/admin/kaminari/ directory.
The generator has the ability to fetch several sample template themes from the external repository (https://github.com/amatsuda/kaminari_themes) in addition to the bundled "default" one, which will help you create a nice looking paginator.
% rails g kaminari:views THEME
To see the full list of available themes, take a look at the themes repository, or just run the generator without specifying a THEME argument:
% rails g kaminari:views
To utilize multiple themes from within a single application, create a directory within app/views/kaminari/ and move your custom template files into that directory.
% rails g kaminari:views default (skip if you have existing kaminari views)
% cd app/views/kaminari
% mkdir my_custom_theme
% cp _*.html.* my_custom_theme/
Next, reference that directory when calling the paginate method:
<%= paginate @users, theme: 'my_custom_theme' %>
Customize away!
Note: if the theme isn't present or none is specified, kaminari will default back to the views included within the gem.
Generally the paginator needs to know the total number of records to display the links, but sometimes we don't need the total number of records and just need the "previous page" and "next page" links. For such use case, Kaminari provides without_count
mode that creates a paginatable collection without counting the number of all records. This may be helpful when you're dealing with a very large dataset because counting on a big table tends to become slow on RDBMS.
Just add .without_count
to your paginated object:
User.page(3).without_count
In your view file, you can only use simple helpers like the following, instead of the full-featured paginate helper:
<%= link_to_prev_page @users, 'Previous Page' %>
<%= link_to_next_page @users, 'Next Page' %>
Kaminari provides an Array wrapper class that adapts a generic Array object to the paginate view helper. However, the paginate helper doesn't automatically handle your Array object (this is intentional and by design). The Kaminari.paginate_array method converts your Array object into a paginatable Array that accepts the page method.
@paginatable_array = Kaminari.paginate_array(my_array_object).page(params[:page]).per(10)
You can specify the total_count value through the options Hash. This would be helpful when handling an Array-ish object that has a different count value from the actual count, such as an RSolr search result, or when you need to generate a custom pagination. For example:
@paginatable_array = Kaminari.paginate_array([], total_count: 145).page(params[:page]).per(10)
or, in the case of using an external API to source the page of data:
page_size = 10
one_page = get_page_of_data params[:page], page_size
@paginatable_array = Kaminari.paginate_array(one_page.data, total_count: one_page.total_count).page(params[:page]).per(page_size)
Because of the page parameter and Rails routing, you can easily generate SEO- and user-friendly URLs. For any resource you'd like to paginate, just add the following to your routes.rb:
resources :my_resources do
  get 'page/:page', action: :index, on: :collection
end
If you are using Rails 4 or later, you can simplify route definitions by using concern:
concern :paginatable do
  get '(page/:page)', action: :index, on: :collection, as: ''
end

resources :my_resources, concerns: :paginatable
This will create URLs like /my_resources/page/33 instead of /my_resources?page=33. This is now a friendly URL, but it also has other added benefits...
Because the page parameter is now a URL segment, we can leverage Rails page caching!
NOTE: In this example, I've pointed the route to my :index action. You may have defined a custom pagination action in your controller; point to action: :your_custom_action instead.
Technically, the kaminari gem consists of 3 individual components:
kaminari-core: the core pagination logic
kaminari-activerecord: Active Record adapter
kaminari-actionview: Action View adapter
So, bundling gem 'kaminari' is equivalent to the following 2 lines (kaminari-core is referenced from the adapters):
gem 'kaminari-activerecord'
gem 'kaminari-actionview'
If you want to use other supported ORMs instead of ActiveRecord, for example Mongoid, bundle its adapter instead of kaminari-activerecord.
gem 'kaminari-mongoid'
gem 'kaminari-actionview'
Kaminari currently provides adapters for the following ORMs:
If you want to use other web frameworks instead of Rails + Action View, for example Sinatra, bundle its adapter instead of kaminari-actionview.
gem 'kaminari-activerecord'
gem 'kaminari-sinatra'
Kaminari currently provides adapters for the following web frameworks:
Check out Kaminari recipes on the GitHub Wiki for more advanced tips and techniques. https://github.com/kaminari/kaminari/wiki/Kaminari-recipes
Feel free to message me on Github (amatsuda) or Twitter (@a_matsuda) ☇☇☇ :)
Fork, fix, then send a pull request.
To run the test suite locally against all supported frameworks:
% bundle install
% rake test:all
To target the test suite against one framework:
% rake test:active_record_50
You can find a list of supported test tasks by running rake -T. You may also find it useful to run a specific test for a specific framework. To do so, you'll have to first make sure you have bundled everything for that configuration, then you can run the specific test:
% BUNDLE_GEMFILE='gemfiles/active_record_50.gemfile' bundle install
% BUNDLE_GEMFILE='gemfiles/active_record_50.gemfile' TEST=kaminari-core/test/requests/navigation_test.rb bundle exec rake test
Author: Kaminari
Source Code: https://github.com/kaminari/kaminari
License: MIT license
Arbre - HTML Views in Ruby
Arbre makes it easy to generate HTML directly in Ruby. This gem was extracted from Active Admin.
The purpose of Arbre is to keep the view as Ruby objects for as long as possible. This allows OO design to be used to implement the view layer.
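A tiny taste of what that looks like (a sketch using Arbre's documented Arbre::Context API; the markup itself is just an example):

```ruby
require 'arbre'

html = Arbre::Context.new do
  div class: 'welcome' do
    h2 'Hello from Arbre'
    para 'The view stays a Ruby object until you call to_s.'
  end
end

puts html.to_s  # renders the nested <div><h2>...</h2><p>...</p></div>
```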
Please use StackOverflow for help requests and how-to questions.
Please open GitHub issues for bugs and enhancements only, not general help requests. Please search previous issues (and Google and StackOverflow) before creating a new issue.
Subscribe to Tidelift to support Arbre and get licensing assurances and timely security notifications.
Please use the Tidelift security contact to report a security vulnerability. Tidelift will coordinate the fix and disclosure.
Author: Activeadmin
Source Code: https://github.com/activeadmin/arbre
License: MIT license
Active Admin
Active Admin is a Ruby on Rails framework for creating elegant backends for website administration.
Active Admin for enterprise is available via the Tidelift subscription. Learn More.
Please use StackOverflow for help requests and how-to questions.
Please open GitHub issues for bugs and enhancements only, not general help requests. Please search previous issues (and Google and StackOverflow) before creating a new issue.
Google Groups, IRC #activeadmin and Gitter are not actively monitored.
If you want to contribute through code or documentation, the Contributing guide is the best place to start. If you have questions, feel free to ask.
If you want to support us financially, you can help fund the project through a Tidelift subscription. By buying a Tidelift subscription you make sure your whole dependency stack is properly maintained, while also getting a comprehensive view of outdated dependencies, new releases, security alerts, and licensing compatibility issues.
You can also support us with a weekly tip via Liberapay.
Finally, we have an Open Collective where you can become a backer or sponsor for the project, and also submit expenses to it.
We try not to reinvent the wheel, so Active Admin is built with other open source projects:
Tool | Description |
---|---|
Arbre | Ruby -> HTML, just like that. |
Devise | Powerful, extensible user authentication |
Formtastic | A Rails form builder plugin with semantically rich and accessible markup |
Inherited Resources | Simplifies controllers with pre-built RESTful controller actions |
Kaminari | Elegant pagination for any sort of collection |
Ransack | Provides a simple search API to query your data |
Please use the Tidelift security contact to report a security vulnerability. Tidelift will coordinate the fix and disclosure.
Thanks to Greg Bell for creating and sharing this project with the open source community.
Thanks to all the people that ever contributed through code or other means such as bug reports, issue triaging, feature suggestions, code snippet tips, Slack discussions and so on.
Thanks to Tidelift and all our Tidelift subscribers.
Thanks to Open Collective and all our Open Collective contributors.
Author: Activeadmin
Source Code: https://github.com/activeadmin/activeadmin
License: MIT license
Suspenders is the base Rails application used at thoughtbot.
First install the suspenders gem:
gem install suspenders
Then run:
suspenders projectname
This will create a Rails app in projectname using the latest version of Rails.
To see the latest and greatest gems, look at Suspenders' Gemfile, which will be appended to the default generated projectname/Gemfile.
It includes application gems like:
And development gems like:
And testing gems like:
Suspenders also comes with:
- A ./bin/setup convention for new developer setup
- A ./bin/deploy convention for deploying to Heroku
- Rack::Deflater to compress responses with Gzip
- A SECRET_KEY_BASE environment variable in all environments

Read the documentation on deploying to Heroku.
You can optionally create Heroku staging and production apps:
suspenders app --heroku true
This:
- Creates staging and production Git remotes
- Sets the HONEYBADGER_ENV environment variable to staging
- Creates staging and production Heroku apps
You can optionally specify alternate Heroku flags:
suspenders app \
  --heroku true \
  --heroku-flags "--region eu --addons sendgrid,ssl"
See all possible Heroku flags:
heroku help create
This will initialize a new git repository for your Rails app. You can bypass this with the --skip-git option:
suspenders app --skip-git true
You can optionally create a GitHub repository for the suspended Rails app. It requires that you have Hub on your system:
brew install hub # macOS, for other systems see https://github.com/github/hub#installation
suspenders app --github organization/project
This has the same effect as running:
hub create organization/project
Suspenders requires the latest version of Ruby.
Some gems included in Suspenders have native extensions. You should have GCC installed on your machine before generating an app with Suspenders.
Use OS X GCC Installer for Snow Leopard (OS X 10.6).
Use Command Line Tools for Xcode for Lion (OS X 10.7) or Mountain Lion (OS X 10.8).
We use Google Chromedriver for full-stack JavaScript integration testing. It requires Google Chrome or Chromium.
PostgreSQL needs to be installed and running for the db:create rake task.
Redis needs to be installed and running for Sidekiq.
If you have problems, please create a GitHub Issue.
See CONTRIBUTING.md.
Thank you, contributors!
Suspenders is Copyright © 2008-2017 thoughtbot. It is free software, and may be redistributed under the terms specified in the LICENSE file.
Suspenders is maintained and funded by thoughtbot, inc. The names and logos for thoughtbot are trademarks of thoughtbot, inc.
We love open source software! See our other projects. We are available for hire.
Author: thoughtbot
Download Link: Download The Source Code
Official Website: https://github.com/thoughtbot/suspenders
License: MIT license
#rails #heroku
Instrumental is an application monitoring platform built for developers who want a better understanding of their production software. Powerful tools, like the Instrumental Query Language, combined with an exploration-focused interface allow you to get real answers to complex questions, in real-time.
This agent supports custom metric monitoring for Ruby applications. It provides high-data reliability at high scale, without ever blocking your process or causing an exception.
Add the gem to your Gemfile.
gem 'instrumental_agent'
Visit instrumentalapp.com and create an account, then initialize the agent with your project API token.
I = Instrumental::Agent.new('PROJECT_API_TOKEN', :enabled => Rails.env.production?)
You'll probably want something like the above, only enabling the agent in production mode so you don't have development and production data writing to the same value. Or you can set up two projects, so that you can verify stats in one, and release them to production in another.
Now you can begin to use Instrumental to track your application.
I.gauge('load', 1.23) # value at a point in time
I.increment('signups') # increasing value, think "events"
I.time('query_time') do # time a block of code
post = Post.find(1)
end
I.time_ms('query_time_in_ms') do # prefer milliseconds?
post = Post.find(1)
end
Note: For your app's safety, the agent is meant to isolate your app from any problems our service might suffer. If it is unable to connect to the service, it will discard data after reaching a low memory threshold.
Want to track an event (like an application deploy, or downtime)? You can capture events that are instantaneous, or events that happen over a period of time.
I.notice('Jeffy deployed rev ef3d6a') # instantaneous event
I.notice('Testing socket buffer increase', 3.days.ago, 20.minutes) # an event with a duration
Streaming data is better with a little historical context. Instrumental lets you backfill data, allowing you to see deep into your project's past.
When backfilling, you may send tens of thousands of metrics per second, and the command buffer may start discarding data it isn't able to send fast enough. We provide a synchronous mode that will ensure every stat makes it to Instrumental before continuing on to the next.
Warning: You should only enable synchronous mode for backfilling data, as any issues with the Instrumental service will cause this code to halt until it can reconnect.
I.synchronous = true # every command sends immediately
User.find_each do |user|
I.increment('signups', 1, user.created_at)
end
Aggregation collects more data on your system before sending it to Instrumental. This reduces the total amount of data being sent, at the cost of a small amount of additional latency. You can control this feature with the frequency parameter:
I = Instrumental::Agent.new('PROJECT_API_TOKEN', :frequency => 15) # send data every 15 seconds
I.frequency = 6 # send batches of data every 6 seconds
The agent may send data more frequently if you are sending a large number of different metrics. Values between 3 and 15 are generally reasonable. If you want to disable this behavior and send every metric as fast as possible, set frequency to zero or nil. Note that a frequency of zero will still use a separate thread for performance - it is NOT the same as synchronous mode.
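The batching behavior described above can be sketched in plain Ruby. This is a hypothetical in-memory model, not the gem's implementation: the real agent flushes on a background thread over a socket, and AggregatingAgent is an illustrative name only.

```ruby
# Hypothetical sketch of client-side aggregation: increments are
# buffered in memory and sent as one batch every `frequency` seconds.
class AggregatingAgent
  def initialize(frequency: 15)
    @frequency = frequency
    @buffer = Hash.new(0)   # metric name => accumulated value
    @last_flush = Time.now
  end

  def increment(metric, value = 1)
    @buffer[metric] += value
    flush if Time.now - @last_flush >= @frequency
  end

  # Returns the batch that would go out in a single network write.
  def flush
    batch, @buffer = @buffer, Hash.new(0)
    @last_flush = Time.now
    batch
  end
end

agent = AggregatingAgent.new(frequency: 15)
3.times { agent.increment('signups') }
agent.increment('logins', 2)
agent.flush # => {"signups"=>3, "logins"=>2}
```

Three increments of the same metric become one value in the batch, which is the latency-for-volume trade-off the frequency parameter controls.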
Want server stats like load, memory, etc.? Check out InstrumentalD.
Need to quickly disable the agent? Set :enabled to false on initialization and you won't need to change any application code.
Add require "instrumental/capistrano"
to your capistrano configuration and your deploys will be tracked by Instrumental. Add the API token for the project you want to track by setting the following Capistrano var:
set :instrumental_key, "MY_API_KEY"
The following configuration will be added:
before "deploy", "instrumental:util:deploy_start"
after "deploy", "instrumental:util:deploy_end"
before "deploy:migrations", "instrumental:util:deploy_start"
after "deploy:migrations", "instrumental:util:deploy_end"
after "instrumental:util:deploy_end", "instrumental:record_deploy_notice"
The default message sent is "USER deployed COMMIT_HASH". If you need to customize it, set a capistrano variable named deploy_message
to the value you'd prefer.
If you plan on tracking metrics in Resque jobs, you will need to explicitly cleanup after the agent when the jobs are finished. You can accomplish this by adding after_perform
and on_failure
hooks to your Resque jobs. See the Resque hooks documentation for more information.
You're required to do this because Resque calls exit!
when a worker has finished processing, which bypasses Ruby's at_exit
hooks. The Instrumental Agent installs an at_exit
hook to flush any pending metrics to the servers, but this hook is bypassed by the exit!
call; any other code you rely on that uses exit!
should call I.cleanup
to ensure any pending metrics are correctly sent to the server before exiting the process.
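A sketch of what those hooks can look like. HardJob and the FakeAgent stub are hypothetical stand-ins (in a real app, I would be your configured Instrumental::Agent); the point is that both hooks call I.cleanup so pending metrics are flushed before Resque's exit!.

```ruby
# FakeAgent stands in for a configured Instrumental::Agent so this
# sketch is self-contained; only #cleanup matters here.
class FakeAgent
  attr_reader :flushed

  def initialize
    @flushed = false
  end

  def increment(_metric, _value = 1); end

  def cleanup
    @flushed = true # the real agent flushes pending metrics here
  end
end

I = FakeAgent.new

# A plain Resque-style job: Resque runs after_perform when the job
# succeeds and on_failure when it raises, before the worker exits.
class HardJob
  @queue = :hard_jobs

  def self.perform(*)
    I.increment('jobs.hard.performed')
  end

  def self.after_perform(*)
    I.cleanup
  end

  def self.on_failure(_exception, *)
    I.cleanup
  end
end

HardJob.perform
HardJob.after_perform
I.flushed # => true
```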
v2.x+ of the Instrumental Agent introduced automated metric collection for your application by way of the Metrician gem. You can read more about the metrics it collects in the Instrumental documentation.
If you are upgrading from the pre-2.x version of instrumental and do not want automated metric collection, you can disable it by setting the following in your agent setup:
I = Instrumental::Agent.new('PROJECT_API_TOKEN',
:enabled => Rails.env.production?,
:metrician => false
)
Agent version 3.x drops support for some older rubies, but should otherwise be a drop-in replacement. If you wish to enable Aggregation, enable the agent with the frequency option set to the number of seconds you would like to wait between flushes. For example:
I = Instrumental::Agent.new('PROJECT_API_TOKEN',
:enabled => Rails.env.production?,
:frequency => 15
)
We are here to help. Email us at support@instrumentalapp.com.
To release a new version of the agent:
- Run the test suite: script/test
- Update the version in lib/instrumental/version.rb
- Run rake release
This library follows Semantic Versioning 2.0.0.
Author: Instrumental
Source code: https://github.com/Instrumental/instrumental_agent-ruby
License: MIT license
TinyTDS - Simple and fast FreeTDS bindings for Ruby using DB-Library.
The TinyTDS gem is meant to serve the extremely common use-case of connecting, querying and iterating over results to Microsoft SQL Server or Sybase databases from Ruby using FreeTDS's DB-Library API.
TinyTDS offers automatic casting to Ruby primitives along with proper encoding support. It converts all SQL Server datatypes to native Ruby primitives while supporting :utc or :local time zones for time-like types. To date it is the only Ruby client library that allows client encoding options, defaulting to UTF-8, while connecting to SQL Server. It also properly encodes all string and binary data. The motivation for TinyTDS is to become the de-facto low level connection mode for the SQL Server Adapter for ActiveRecord.
The API is simple and consists of these classes:
Installing with rubygems should just work. TinyTDS is currently tested on Ruby version 2.0.0 and upward.
$ gem install tiny_tds
If you use Windows, we pre-compile TinyTDS with static versions of FreeTDS and supporting libraries. If you're using RubyInstaller the binary gem will require that devkit is installed and in your path to operate properly.
On all other platforms, we will find these dependencies. It is recommended that you install the latest FreeTDS via your method of choice. For example, here is how to install FreeTDS on Ubuntu. You might also need the build-essential
and possibly the libc6-dev
packages.
$ apt-get install wget
$ apt-get install build-essential
$ apt-get install libc6-dev
$ wget http://www.freetds.org/files/stable/freetds-1.1.24.tar.gz
$ tar -xzf freetds-1.1.24.tar.gz
$ cd freetds-1.1.24
$ ./configure --prefix=/usr/local --with-tdsver=7.3
$ make
$ make install
Please read the MiniPortile and/or Windows sections at the end of this file for advanced configuration options past the following:
--with-freetds-dir=DIR
Use the freetds library placed under DIR.
Optionally, Microsoft has done a great job writing some articles on how to get started with SQL Server and Ruby using TinyTDS. Please check out one of the following posts that match your platform.
TinyTDS is developed against FreeTDS 0.95, 0.99, and 1.0 current. Our default and recommended is 1.0. We also test with SQL Server 2008, 2014, and Azure. However, usage of TinyTDS with SQL Server 2000 or 2005 should be just fine. Below are a few QA style notes about installing FreeTDS.
NOTE: Windows users of our pre-compiled native gems need not worry about installing FreeTDS and its dependencies.
Do I need to install FreeTDS? Yes! Somehow, someway, you are going to need FreeTDS for TinyTDS to compile against.
OK, I am installing FreeTDS, how do I configure it? Contrary to what most people think, you do not need to specially configure FreeTDS in any way for client libraries like TinyTDS to use it. About the only requirement is that you compile it with libiconv for proper encoding support. FreeTDS must also be compiled with OpenSSL (or the like) to use it with Azure. See the "Using TinyTDS with Azure" section below for more info.
Do I need to configure --with-tdsver
equal to anything? Most likely! Technically you should not have to. This is only a default for clients/configs that do not specify what TDS version they want to use. We are currently having issues with passing down a TDS version with the login bit. Till we get that fixed, if you are not using a freetds.conf or a TDSVER environment variable, then make sure to use 7.1.
But I want to use TDS version 7.2 for SQL Server 2005 and up! TinyTDS uses TDS version 7.1 (previously named 8.0) and fully supports all the data types supported by FreeTDS, this includes varchar(max)
and nvarchar(max)
. Technically compiling and using TDS version 7.2 with FreeTDS is not supported. But this does not mean those data types will not work. I know, it's confusing. If you want to learn more, read this thread: http://lists.ibiblio.org/pipermail/freetds/2011q3/027306.html
I want to configure FreeTDS using --enable-msdblib
and/or --enable-sybase-compat
so it works for my database. Cool? It's a waste of time and totally moot! Client libraries like TinyTDS define their own C structure names where they diverge from Sybase to SQL Server. Technically we use the MSDBLIB structures which does not mean we only work with that database vs Sybase. These configs are just a low level default for C libraries that do not define what they want. So I repeat, you do not NEED to use any of these, nor will they hurt anything since we control what C structure names we use internally!
Our goal is to support every SQL Server data type and convert it to a logical Ruby object. When dates or times are returned, they are instantiated to either :utc
or :local
time depending on the query options. Only [datetimeoffset] types are excluded. All strings are associated to the connection's encoding and all binary data types are associated to Ruby's ASCII-8BIT/BINARY
encoding.
Below is a list of the data types we support when using the 7.3 TDS protocol version. Using a lower protocol version will result in these types being returned as strings.
Connect to a database.
client = TinyTds::Client.new username: 'sa', password: 'secret', host: 'mydb.host.net'
Creating a new client takes a hash of options. For valid iconv encoding options, see the output of iconv -l
. Only a few have been tested, and it is highly recommended to leave this blank so the UTF-8 default is used.
- :timeout — the query timeout, applied per TinyTds::Client object. If you are using 1.0rc5 or later, all clients will have an independent timeout setting as you'd expect. Timeouts caused by network failure will raise a timeout error 1 second after the configured timeout limit is hit (see #481 for details).
- :message_handler — a call-able object such as a Proc or a method to receive info messages from the database. It should have a single parameter, which will be a TinyTds::Error object representing the message. For example:
opts = ... # host, username, password, etc
opts[:message_handler] = Proc.new { |m| puts m.message }
client = TinyTds::Client.new opts
# => Changed database context to 'master'.
# => Changed language setting to us_english.
client.execute("print 'hello world!'").do
# => hello world!
Use the #active?
method to determine if a connection is good. The implementation of this method may change but it should always guarantee that a connection is good. Currently it checks for either a closed or dead connection.
client.dead? # => false
client.closed? # => false
client.active? # => true
client.execute("SQL TO A DEAD SERVER")
client.dead? # => true
client.closed? # => false
client.active? # => false
client.close
client.closed? # => true
client.active? # => false
Escape strings.
client.escape("How's It Going'") # => "How''s It Going''"
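Conceptually, #escape just doubles single quotes so the value is safe inside a quoted SQL literal. A plain-Ruby sketch of the same transformation (use the client's own #escape in real code, since it runs in C and is aware of the connection's encoding; escape_sql is a hypothetical helper name):

```ruby
# Sketch of SQL single-quote escaping; TinyTds::Client#escape does
# this natively with awareness of the connection's encoding.
def escape_sql(str)
  str.gsub("'", "''")
end

escape_sql("How's It Going'") # => "How''s It Going''"
```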
Send a SQL string to the database and return a TinyTds::Result object.
result = client.execute("SELECT * FROM [datatypes]")
A result object is returned by the client's execute command. It is important that you either return the data from the query, most likely with the #each method, or that you cancel the results before asking the client to execute another SQL batch. Failing to do so will yield an error.
Calling #each on the result will lazily load each row from the database.
result.each do |row|
# By default each row is a hash.
# The keys are the fields, as you'd expect.
# The values are pre-built Ruby primitives mapped from their corresponding types.
end
A result object has a #fields
accessor. It can be called before the result rows are iterated over. Even if no rows are returned, #fields will still return the column names you expected. Any SQL that does not return columned data will always return an empty array for #fields
. It is important to remember that if you access the #fields
before iterating over the results, the columns will always follow the default query option's :symbolize_keys
setting at the client's level and will ignore the query options passed to each.
result = client.execute("USE [tinytdstest]")
result.fields # => []
result.do
result = client.execute("SELECT [id] FROM [datatypes]")
result.fields # => ["id"]
result.cancel
result = client.execute("SELECT [id] FROM [datatypes]")
result.each(:symbolize_keys => true)
result.fields # => [:id]
You can cancel a result object's data from being loaded by the server.
result = client.execute("SELECT * FROM [super_big_table]")
result.cancel
You can use results cancelation in conjunction with results lazy loading, no problem.
result = client.execute("SELECT * FROM [super_big_table]")
result.each_with_index do |row, i|
break if i > 10
end
result.cancel
If the SQL executed by the client returns affected rows, you can easily find out how many.
result.each
result.affected_rows # => 24
This pattern is so common for UPDATE and DELETE statements that the #do method cancels any need for loading the result data and returns the #affected_rows
.
result = client.execute("DELETE FROM [datatypes]")
result.do # => 72
Likewise for INSERT
statements, the #insert method cancels any need for loading the result data and executes a SCOPE_IDENTITY()
for the primary key.
result = client.execute("INSERT INTO [datatypes] ([xml]) VALUES ('<html><br/></html>')")
result.insert # => 420
The result object can handle multiple result sets from batched SQL or stored procedures. It is critical to remember that calling #each with a block for the first time will yield each "row" of each result set; calling #each a second time with a block will yield each "set".
sql = ["SELECT TOP (1) [id] FROM [datatypes]",
"SELECT TOP (2) [bigint] FROM [datatypes] WHERE [bigint] IS NOT NULL"].join(' ')
set1, set2 = client.execute(sql).each
set1 # => [{"id"=>11}]
set2 # => [{"bigint"=>-9223372036854775807}, {"bigint"=>9223372036854775806}]
result = client.execute(sql)
result.each do |rowset|
# First time data loading, yields each row from each set.
# 1st: {"id"=>11}
# 2nd: {"bigint"=>-9223372036854775807}
# 3rd: {"bigint"=>9223372036854775806}
end
result.each do |rowset|
# Second time over (if columns cached), yields each set.
# 1st: [{"id"=>11}]
# 2nd: [{"bigint"=>-9223372036854775807}, {"bigint"=>9223372036854775806}]
end
Use the #sqlsent?
and #canceled?
query methods on the client to determine if an active SQL batch still needs to be processed and/or if data results were canceled from the last result object. These values reset to true and false respectively for the client at the start of each #execute
and new result object. Or if all rows are processed normally, #sqlsent?
will return false. To demonstrate, let's assume we have 100 rows in the result object.
client.sqlsent? # = false
client.canceled? # = false
result = client.execute("SELECT * FROM [super_big_table]")
client.sqlsent? # = true
client.canceled? # = false
result.each do |row|
# Assume we break after 20 rows with 80 still pending.
break if row["id"] > 20
end
client.sqlsent? # = true
client.canceled? # = false
result.cancel
client.sqlsent? # = false
client.canceled? # = true
It is possible to get the return code after executing a stored procedure from either the result or client object.
client.return_code # => nil
result = client.execute("EXEC tinytds_TestReturnCodes")
result.do
result.return_code # => 420
client.return_code # => 420
Every TinyTds::Result
object can pass query options to the #each method. The defaults are defined and configurable by setting options in the TinyTds::Client.default_query_options
hash. The default values are:
Each result gets a copy of the default options you specify at the client level and can be overridden by passing an options hash to the #each method. For example:
result.each(:as => :array, :cache_rows => false) do |row|
# Each row is now an array of values ordered by #fields.
# Rows are yielded and forgotten about, freeing memory.
end
Besides the standard query options, the result object can take one additional option. Using :first => true
will only load the first row of data and cancel all remaining results.
result = client.execute("SELECT * FROM [super_big_table]")
result.each(:first => true) # => [{'id' => 24}]
By default row caching is turned on because the SQL Server adapter for ActiveRecord would not work without it. I hope to find some time to create some performance patches for ActiveRecord that would allow it to take advantage of lazily yielded rows from result objects. Currently only TinyTDS and the Mysql2 gem allow such a performance gain.
TinyTDS takes an opinionated stance on how we handle encoding errors. First, we treat errors differently on reads vs. writes. Our opinion is that if you are reading bad data due to your client's encoding option, you would rather just find ?
marks in your strings vs being blocked with exceptions. This is how things would work via ODBC or SMS. On the other hand, writes will raise an exception. In this case we raise the SYBEICONVO/2402 error message which has a description of Error converting characters into server's character set. Some character(s) could not be converted.
. Even though the severity of this message is only a 4
and TinyTDS will automatically strip/ignore unknown characters, we feel you should know that you are inserting bad encodings. In this way, a transaction can be rolled back, etc. Remember, any database write that has bad characters due to the client encoding will still be written to the database, but it is up to you rollback said write if needed. Most ORMs like ActiveRecord handle this scenario just fine.
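The read-side behavior can be illustrated with Ruby's own String#encode, which offers the same replace-with-? policy. This is an analogy only; TinyTDS performs the conversion itself at the C level rather than through String#encode.

```ruby
# Characters that cannot be represented in the target encoding are
# replaced with '?' instead of raising, mirroring TinyTDS reads.
utf8  = "caf\u00E9 \u2014 r\u00E9sum\u00E9"   # "café — résumé"
ascii = utf8.encode("ASCII", invalid: :replace, undef: :replace, replace: "?")
ascii # => "caf? ? r?sum?"
```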
TinyTDS will raise a TinyTDS::Error
when a timeout is reached based on the options supplied to the client. Depending on the reason for the timeout, the connection could be dead or alive. When db processing is the cause for the timeout, the connection should still be usable after the error is raised. When network failure is the cause of the timeout, the connection will be dead. If you attempt to execute another command batch on a dead connection you will see a DBPROCESS is dead or not enabled
error. Therefore, it is recommended to check for a dead?
connection before trying to execute another command batch.
The TinyTDS gem uses binstub wrappers which mirror compiled FreeTDS Utilities binaries. These native executables are usually installed at the system level when installing FreeTDS. However, when using MiniPortile to install TinyTDS as we do with Windows binaries, these binstubs will find and prefer local gem exe
directory executables. These are the following binstubs we wrap.
TinyTDS is the default connection mode for the SQL Server adapter in versions 3.1 or higher. The SQL Server adapter can be found using the links below.
TinyTDS is fully tested with the Azure platform. You must set the azure: true
connection option when connecting. This is needed to specify the default database name in the login packet since Azure has no notion of USE [database]
. FreeTDS must be compiled with OpenSSL too.
IMPORTANT: Do not use username@server.database.windows.net
for the username connection option! You must use the shorter username@server
instead!
Also, please read the Azure SQL Database General Guidelines and Limitations MSDN article to understand the differences. Specifically, the connection constraints section!
A DBLIB connection does not have the same default SET options as a standard SMS SQL Server connection. Hence, we recommend the following options after establishing your connection.
SET ANSI_DEFAULTS ON
SET QUOTED_IDENTIFIER ON
SET CURSOR_CLOSE_ON_COMMIT OFF
SET IMPLICIT_TRANSACTIONS OFF
SET TEXTSIZE 2147483647
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_NULL_DFLT_ON ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET QUOTED_IDENTIFIER ON
SET CURSOR_CLOSE_ON_COMMIT OFF
SET IMPLICIT_TRANSACTIONS OFF
SET TEXTSIZE 2147483647
SET CONCAT_NULL_YIELDS_NULL ON
TinyTDS must be used with a connection pool for thread safety. If you use ActiveRecord or the Sequel gem this is done for you. However, if you are using TinyTDS on your own, we recommend using the ConnectionPool gem when using threads:
Please read our thread_test.rb file for details on how we test its usage.
This is possible using FreeTDS version 0.95 or higher. You must use the use_utf16
login option or add the following config to your freetds.conf
in either the global section or a specific dataserver. If you are on Windows, the default location for your conf file will be in C:\Sites
.
[global]
use utf-16 = true
The default is true, and FreeTDS v1.0 and later behave this way as well.
For the convenience of Windows users, TinyTDS ships pre-compiled gems for supported versions of Ruby on Windows. In order to generate these gems, rake-compiler-dock is used. This project provides several Docker images with rvm, cross-compilers and a number of different target versions of Ruby.
Run the following rake task to compile the gems for Windows. This will check the availability of Docker (and boot2docker on Windows or OS-X) and will give some advice for download and installation. When docker is running, it will download the docker image (once-only) and start the build:
$ rake gem:windows
The compiled gems will be placed in the ./pkg directory.
First, clone the repo using the command line or your Git GUI of choice.
$ git clone git@github.com:rails-sqlserver/tiny_tds.git
After that, the quickest way to get setup for development is to use Docker. Assuming you have downloaded docker for your platform, you can use docker-compose to run the necessary containers for testing.
$ docker-compose up -d
This will download our SQL Server for Linux Docker image based on microsoft/mssql-server-linux/. Our image already has the [tinytdstest]
DB and tinytds
users created. This will also download a toxiproxy Docker image which we can use to simulate network failures for tests. Basically, it does the following.
$ docker network create main-network
$ docker pull metaskills/mssql-server-linux-tinytds
$ docker run -p 1433:1433 -d --name sqlserver --network main-network metaskills/mssql-server-linux-tinytds
$ docker pull shopify/toxiproxy
$ docker run -p 8474:8474 -p 1234:1234 -d --name toxiproxy --network main-network shopify/toxiproxy
If you are using your own database, make sure to run these SQL commands as SA to get the test database and user installed.
CREATE DATABASE [tinytdstest];
CREATE LOGIN [tinytds] WITH PASSWORD = '', CHECK_POLICY = OFF, DEFAULT_DATABASE = [tinytdstest];
USE [tinytdstest];
CREATE USER [tinytds] FOR LOGIN [tinytds];
EXEC sp_addrolemember N'db_owner', N'tinytds';
From here you can build and run tests against an installed version of FreeTDS.
$ bundle install
$ bundle exec rake
Examples of using environment variables to customize the test task:
$ rake TINYTDS_UNIT_DATASERVER=mydbserver
$ rake TINYTDS_UNIT_DATASERVER=mydbserver TINYTDS_SCHEMA=sqlserver_2008
$ rake TINYTDS_UNIT_HOST=mydb.host.net TINYTDS_SCHEMA=sqlserver_azure
$ rake TINYTDS_UNIT_HOST=mydb.host.net TINYTDS_UNIT_PORT=5000 TINYTDS_SCHEMA=sybase_ase
If you use a multi-stage Docker build to assemble your gems in one phase and then copy your app and gems into another, lighter container without build tools, you will need to make sure you tell the OS how to find dependencies for TinyTDS.
After you have built and installed FreeTDS it will normally place library files in /usr/local/lib
. When TinyTDS builds native extensions, it already knows to look here but if you copy your app to a new container that link will be broken.
Set the LD_LIBRARY_PATH environment variable export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
and run ldconfig
. If you run ldd tiny_tds.so
you should not see any broken links. Make sure you also copied in the library dependencies from your build container with a command like COPY --from=builder /usr/local/lib /usr/local/lib
.
My name is Ken Collins and I currently maintain the SQL Server adapter for ActiveRecord and wrote this library as my first cut into learning Ruby C extensions. Hopefully it will help promote the power of Ruby and the Rails framework to those that have not yet discovered it. My blog is metaskills.net and I can be found on twitter as @metaskills. Enjoy!
TinyTDS is Copyright (c) 2010-2015 Ken Collins, ken@metaskills.net and Will Bond (Veracross LLC) wbond@breuer.com. It is distributed under the MIT license. Windows binaries contain pre-compiled versions of FreeTDS http://www.freetds.org/ which is licensed under the GNU LGPL license at http://www.gnu.org/licenses/lgpl-2.0.html
Author: rails-sqlserver
Source code: https://github.com/rails-sqlserver/tiny_tds
License:
WE ARE LOOKING FOR MAINTAINERS. CONTACT @johnmcaliley IF YOU ARE INTERESTED IN HELPING
impressionist
A lightweight plugin that logs impressions per action or manually per model
Logs an impression... and I use that term loosely. It can log page impressions (technically action impressions), but it is not limited to that. You can log impressions multiple times per request. And you can also attach it to a model. The goal of this project is to provide customizable stats that are immediately accessible in your application as opposed to using Google Analytics and pulling data using their API. You can attach custom messages to impressions. No reporting yet.. this thingy just creates the data.
They are ignored. 1200 known bots have been added to the ignore list as of February 1, 2011. Impressionist uses this list: http://www.user-agents.org/allagents.xml
Add it to your Gemfile
#rails 6
gem 'impressionist'
#rails 5 or lower
gem 'impressionist', '~>1.6.1'
Install with Bundler
bundle install
Generate the impressions table migration
rails g impressionist
Run the migration
rake db:migrate
The following fields are provided in the migration:
t.string "impressionable_type" # model type: Widget
t.integer "impressionable_id" # model instance ID: @widget.id
t.integer "user_id" # automatically logs @current_user.id
t.string "controller_name" # logs the controller name
t.string "action_name" # logs the action_name
t.string "view_name" # TODO: log individual views (as well as partials and nested partials)
t.string "request_hash" # unique ID per request, in case you want to log multiple impressions and group them
t.string "session_hash" # logs the rails session
t.string "ip_address" # request.remote_ip
t.text "params" # request.params, except action name, controller name and resource id
t.string "referrer" # request.referer
t.string "message" # custom message you can add
t.datetime "created_at" # I am not sure what this is.... Any clue?
t.datetime "updated_at" # never seen this one before either.... Your guess is as good as mine?? ;-)
Log all actions in a controller
WidgetsController < ApplicationController
impressionist
end
Specify actions you want logged in a controller
WidgetsController < ApplicationController
impressionist :actions=>[:show,:index]
end
Make your models impressionable. This allows you to attach impressions to an AR model instance. Impressionist will automatically log the Model name (based on action_name) and the id (based on params[:id]), but in order to get the count of impressions (example: @widget.impression_count), you will need to make your model impressionable
class Widget < ActiveRecord::Base
is_impressionable
end
Log an impression per model instance in your controller. Note that it is not necessary to specify "impressionist" (usage #1) at the top of your controller if you are using this method. If you add "impressionist" to the top of your controller and also use this method in your action, it will result in 2 impressions being logged (but associated with one request_hash). If you're using friendly_id, be sure to log impressionist this way, as params[:id] will return a string (url slug) while impressionable_id is an Integer column in the database. Also note that you have to take step #3 for the Widget model for this to work.
def show
@widget = Widget.find(params[:id])
impressionist(@widget, "message...") # 2nd argument is optional
end
Get unique impression count from a model. This groups impressions by request_hash, so if you logged multiple impressions per request, it will only count them one time. This unique impression count will not filter out unique users, only unique requests
@widget.impressionist_count
@widget.impressionist_count(:start_date=>"2011-01-01",:end_date=>"2011-01-05")
@widget.impressionist_count(:start_date=>"2011-01-01") #specify start date only, end date = now
Get the unique impression count from a model filtered by IP address. This in turn will give you impressions with unique request_hash, since rows with the same request_hash will have the same IP address.
@widget.impressionist_count(:filter=>:ip_address)
Get the unique impression count from a model filtered by params. This in turn will give you impressions with unique params.
@widget.impressionist_count(:filter => :params)
Get the unique impression count from a model filtered by session hash. Same as #6 regarding request hash. This may be more desirable than filtering by IP address depending on your situation, since filtering by IP may ignore visitors that use the same IP. The downside to this filtering is that a user could clear session data in their browser and skew the results.
@widget.impressionist_count(:filter=>:session_hash)
Get total impression count. This may return more than 1 impression per http request, depending on how you are logging impressions
@widget.impressionist_count(:filter=>:all)
Get impression count by message. This only counts impressions of the given message.
@widget.impressionist_count(:message=>"pageview", :filter=>:all)
Logging impressions for authenticated users happens automatically. If you have a current_user helper or use @current_user in your before_filter (or before_action in Rails >= 5.0) to set your authenticated user, current_user.id will be written to the user_id field in the impressions table.
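As an illustrative sketch (assuming the impressions association that is_impressionable sets up on the model), you could then count the impressions a given user generated:

```ruby
# Assumes the polymorphic `impressions` association added by is_impressionable
@widget.impressions.where(user_id: current_user.id).count
```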
Impressionist makes it easy to add a counter_cache
column to your model. The most basic configuration looks like:
is_impressionable :counter_cache => true
This will automatically increment the impressions_count
column in the included model. Note: You'll need to add that column to your model. If you'd like to specify a different column name, you can:
is_impressionable :counter_cache => true, :column_name => :my_column_name
If you'd like to include only unique impressions in your count:
# default will be filtered by ip_address
is_impressionable :counter_cache => true, :column_name => :my_column_name, :unique => true
Would you like to specify what sort of unique impression to save? Fear not: any option you pass to :unique will be used by impressionist_count as its filter when calling update_counters.
# options are any column in the impressions' table.
is_impressionable :counter_cache => true, :column_name => :my_column_name, :unique => :request_hash
is_impressionable :counter_cache => true, :column_name => :my_column_name, :unique => :all
Adding the column is as simple as this:
t.integer :my_column_name, :default => 0
If you want to use the typical Rails 4 migration generator, you can:
rails g migration AddImpressionsCountToBook impressions_count:integer
Maybe you only care about unique impressions and would like to avoid unnecessary database records. You can specify conditions for recording impressions in your controller:
# only record impression if the request has a unique combination of type, id, and session
impressionist :unique => [:impressionable_type, :impressionable_id, :session_hash]
# only record impression if the request has a unique combination of controller, action, and session
impressionist :unique => [:controller_name, :action_name, :session_hash]
# only record impression if session is unique
impressionist :unique => [:session_hash]
# only record impression if param is unique
impressionist :unique => [:params]
Or you can use the impressionist
method directly:
impressionist(impressionable, "some message", :unique => [:session_hash])
Execute this command on your terminal/console:
rails g impressionist --orm mongoid
This command creates a file impression.rb
in the config/initializers
folder. Add config.orm = :mongoid
to this file:
# Use this hook to configure impressionist parameters
Impressionist.setup do |config|
# Define ORM. Could be :active_record (default), :mongo_mapper or :mongoid
# config.orm = :active_record
config.orm = :mongoid
end
WE ARE CURRENTLY LOOKING FOR SOMEONE TO HELP MAINTAIN THIS REPOSITORY. IF YOU ARE INTERESTED, MESSAGE @johnmcaliley.
Copyright (c) 2011 John McAliley. See LICENSE.txt for further details.
Author: charlotte-ruby
Source code: https://github.com/charlotte-ruby/impressionist
License: MIT license
1658850060
Thinking Sphinx is a library for connecting ActiveRecord to the Sphinx full-text search tool, and integrates closely with Rails (but also works with other Ruby web frameworks). The current release is v5.4.0.
Please refer to the changelog and release notes for any changes you need to make when upgrading. The release notes in particular are quite good at covering breaking changes and more details for new features.
The documentation also has more details on what’s involved for upgrading from v4 to v5, v3 to v4, and v1/v2 to v3.
It’s a gem, so install it like you would any other gem. You will also need to specify the mysql2 gem if you’re using MRI, or jdbc-mysql if you’re using JRuby:
gem 'mysql2', '~> 0.4', :platform => :ruby
gem 'jdbc-mysql', '~> 5.1.35', :platform => :jruby
gem 'thinking-sphinx', '~> 5.4'
The MySQL gems mentioned are required for connecting to Sphinx, so please include the appropriate one even when you're using PostgreSQL for your database.
You’ll also need to install Sphinx – this is covered in the extended documentation.
Begin by reading the quick-start guide, and beyond that, the documentation should serve you pretty well.
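As a taste of what the quick-start guide covers, a minimal SQL-backed index definition looks roughly like this (the model name and columns here are assumptions for illustration):

```ruby
# app/indices/article_index.rb (hypothetical Article model with title/content columns)
ThinkingSphinx::Index.define :article, :with => :active_record do
  # Fields are full-text searchable
  indexes title, content

  # Attributes can be used for filtering and sorting
  has author_id, created_at
end
```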
The current release of Thinking Sphinx works with the following versions of its dependencies:
Library | Minimum | Tested Against |
---|---|---|
Ruby | v2.4 | v2.4, v2.5, v2.6, v2.7, v3.0 |
Sphinx | v2.2.11 | v2.2.11, v3.3.1 |
Manticore | v2.8 | v3.5, v4.0 |
ActiveRecord | v4.2 | v4.2..v7.0 |
It might work with older versions of Ruby, but it’s highly recommended to update to a supported release.
It should also work with JRuby, but the test environment for that in CI has been unreliable, hence that’s not actively tested against at the moment.
If you’re using Sphinx, v2.2.11 is recommended even though it’s quite old, as it works well with PostgreSQL databases (but if you’re using MySQL – or real-time indices – then v3.3.1 should also be fine).
If you’re opting for Manticore instead, v2.8 or newer works, but v3 or newer is recommended as that’s what is actively tested against.
Currently Thinking Sphinx is built to support Rails/ActiveRecord 4.2 or newer. If you’re using Sinatra and ActiveRecord instead of Rails, that’s fine – just make sure you add the :require => 'thinking_sphinx/sinatra'
option when listing thinking-sphinx
in your Gemfile.
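For example, a Gemfile for the Sinatra setup described above might look like this (versions as per the install section earlier; this is a sketch, not a prescribed setup):

```ruby
# Gemfile (Sinatra + ActiveRecord on MRI)
gem 'mysql2', '~> 0.4', :platform => :ruby
gem 'thinking-sphinx', '~> 5.4', :require => 'thinking_sphinx/sinatra'
```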
If you want ActiveRecord 3.2-4.1 support, then refer to the 4.x releases of Thinking Sphinx. Or, for ActiveRecord 3.1 support, then refer to the 3.0.x releases. Anything older than that, then you’re stuck with Thinking Sphinx v2.x (for Rails/ActiveRecord 3.0) or v1.x (Rails 2.3). Please note that these older versions are no longer actively supported.
You’ll need either the standard Ruby (v2.4 or newer) or JRuby (9.1 or newer).
MySQL 5.x and Postgres 8.4 or better are supported.
Please note that this project has a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
To contribute, clone this repository and have a good look through the specs – you’ll notice the distinction between acceptance tests that actually use Sphinx and go through the full stack, and unit tests (everything else) which use liberal test doubles to ensure they’re only testing the behaviour of the class in question. I’ve found this leads to far better code design.
All development is done on the develop
branch; please base any pull requests off of that branch. Please write the tests and then the code to get them passing, and send through a pull request.
In order to run the tests, you’ll need to create a database named thinking_sphinx
:
# Either fire up a MySQL console:
mysql -u root
# OR a PostgreSQL console:
psql
# In that console, create the database:
CREATE DATABASE thinking_sphinx;
You can then run the unit tests with rake spec:unit
, the acceptance tests with rake spec:acceptance
, or all of the tests with just rake
. To run these with PostgreSQL, you’ll need to set the DATABASE
environment variable accordingly:
DATABASE=postgresql rake
Author: Pat
Source Code: https://github.com/pat/thinking-sphinx
License: MIT license
1658778480
The dotiw
library adds distance_of_time_in_words
to any Ruby project, or overrides the default implementation in Rails with more accurate output.
Do you crave accuracy down to the second? So do I. That's why I made this gem.
Add to your Gemfile
.
gem 'dotiw'
Run bundle install
.
require 'dotiw'
include DOTIW::Methods
require 'dotiw'
include ActionView::Helpers::DateHelper
include ActionView::Helpers::TextHelper
include ActionView::Helpers::NumberHelper
Take this for a totally kick-ass example:
>> distance_of_time_in_words(Time.now, Time.now + 1.year + 2.months + 3.weeks + 4.days + 5.hours + 6.minutes + 7.seconds, true)
=> "1 year, 2 months, 3 weeks, 4 days, 5 hours, 6 minutes, and 7 seconds"
Also, if one of the measurements is zero it will not be output:
>> distance_of_time_in_words(Time.now, Time.now + 1.year + 2.months + 4.days + 6.minutes + 7.seconds, true)
=> "1 year, 2 months, 4 days, 6 minutes, and 7 seconds"
Better than "about 1 year", am I right? Of course I am.
"But Ryan!", you say, "What happens if the time is only in seconds but because of the default the seconds aren't shown? Won't it be blank?" "No!" I triumphantly reply:
>> distance_of_time_in_words(Time.now, Time.now + 1.second, false)
=> "1 second"
It also supports numeric arguments like the original Rails version:
>> distance_of_time_in_words(0, 150)
=> "2 minutes and 30 seconds"
as an alternative to:
>> distance_of_time_in_words(Time.now, Time.now + 2.5.minutes)
=> "2 minutes and 30 seconds"
This is useful if you're just interested in "stringifying" the length of time. Alternatively, you can use the #distance_of_time
helper as described below.
The third argument for this method is whether or not to include seconds. By default this is false
(because it is in Rails' distance_of_time_in_words
); you can turn it on, though, by passing true
as the third argument:
>> distance_of_time_in_words(Time.now, Time.now + 1.year + 1.second, true)
=> "1 year, and 1 second"
Yes this could just be merged into the options hash but I'm leaving it here to ensure "backwards-compatibility", because that's just an insanely radical thing to do. \m/
Alternatively this can be included in the options hash as include_seconds: true
, removing this argument altogether.
The last argument is an optional options hash that can be used to manipulate behavior and formatting of the output (which uses to_sentence
).
Don't like having to pass in Time.now
all the time? Then use time_ago_in_words
or distance_of_time_in_words_to_now
which also will rock your world:
>> time_ago_in_words(Time.now + 3.days + 1.second)
=> "3 days, and 1 second"
>> distance_of_time_in_words_to_now(Time.now + 3.days + 1.second)
=> "3 days, and 1 second"
Oh, and did I mention it supports I18n? Oh yeah. Rock on!
You can pass in a locale and it'll output it in whatever language you want (provided you have translations; otherwise it'll default to your app's default locale, i.e. the config.i18n.default_locale
you have set in /config/application.rb
):
>> distance_of_time_in_words(Time.now, Time.now + 1.minute, false, locale: :es)
=> "1 minuto"
This will also be passed to to_sentence
.
Specify this if you want it to use the old distance_of_time_in_words
. The value can be anything except nil
or false
.
As described above this option is the equivalent to the third argument whether to include seconds.
Specifies the maximum output unit, which will accumulate all the surplus. Say you set it to seconds and your time difference is 2 minutes; the output would then be 120 seconds.
>> distance_of_time_in_words(Time.now, Time.now + 2.hours + 70.seconds, true, accumulate_on: :minutes)
=> "121 minutes and 10 seconds"
Only want a specific measurement of time? No problem!
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute, false, only: :minutes)
=> "1 minute"
You only want some? No problem too!
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.day + 1.minute, false, only: [:minutes, :hours])
=> "1 hour and 1 minute"
Don't want a measurement of time? No problem!
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute, false, except: :minutes)
=> "1 hour"
Culling a whole group of measurements of time:
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.day + 1.minute, false, except: [:minutes, :hours])
=> "1 day"
For times when Rails distance_of_time_in_words
is not precise enough and DOTIW
is too precise. For instance, if you only want to know the highest time part (measure) that elapsed between two dates.
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute + 1.second, true, highest_measure_only: true)
=> "1 hour"
Notice how minutes and seconds were removed from the output. Another example:
>> distance_of_time_in_words(Time.now, Time.now + 1.minute + 1.second, true, highest_measure_only: true)
=> "1 minute"
Minutes are the highest measure, so seconds were discarded from the output.
When you want variable precision from DOTIW
:
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute + 1.second, true, highest_measures: 2)
=> "1 hour and 1 minute"
This is an option for to_sentence
, defaults to ', '.
Using something other than a comma:
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute + 1.second, true, words_connector: ' - ')
=> "1 hour - 1 minute, and 1 second"
This is an option for to_sentence
, defaults to ' and '.
Using something other than 'and':
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute, true, two_words_connector: ' plus ')
=> "1 hour plus 1 minute"
This is an option for to_sentence
, defaults to ', and '.
Using something other than ', and':
>> distance_of_time_in_words(Time.now, Time.now + 1.hour + 1.minute + 1.second, true, last_word_connector: ', finally ')
=> "1 hour, 1 minute, finally 1 second"
If you have simply a number of seconds you can get the "stringified" version of this by using distance_of_time
:
>> distance_of_time(300)
=> "5 minutes"
Don't like any format you're given? That's cool too! Here, have an indifferent hash version:
>> distance_of_time_in_words_hash(Time.now, Time.now + 1.year + 2.months + 3.weeks + 4.days + 5.hours + 6.minutes + 7.seconds)
=> { days: 4, weeks: 3, seconds: 7, minutes: 6, years: 1, hours: 5, months: 2 }
Indifferent means that you can access all keys by their String
or Symbol
version.
This method is only available with Rails ActionView.
If you want to calculate a distance of time in percent, use distance_of_time_in_percent
. The first argument is the beginning time, the second argument the "current" time and the third argument is the end time.
>> distance_of_time_in_percent("04-12-2009".to_time, "29-01-2010".to_time, "04-12-2010".to_time)
=> '15%'
This method takes the same options as number_with_precision
.
>> distance_of_time_in_percent("04-12-2009".to_time, "29-01-2010".to_time, "04-12-2010".to_time, precision: 1)
=> '15.3%'
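This output is plain elapsed-over-total arithmetic, so the same figure can be reproduced without the helper (using the dates from the example above):

```ruby
# Dates from the example above (UTC used for determinism)
start_t   = Time.utc(2009, 12, 4)
current_t = Time.utc(2010, 1, 29)
end_t     = Time.utc(2010, 12, 4)

# Fraction of the start..end span that has elapsed at current_t
percent = (current_t - start_t) / (end_t - start_t) * 100
percent.round     # => 15
percent.round(1)  # => 15.3
```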
Pressed for space? Try compact: true
.
>> distance_of_time_in_words(Time.now, Time.now + 2.year + 1.day + 1.second, compact: true)
=> "2y1d"
Pairs well with words_connector
, last_word_connector
, and two_words_connector
if you can spare just a little more room:
>> distance_of_time_in_words(Time.now, Time.now + 5.years + 1.day + 23.seconds, words_connector: " ", last_word_connector: " ", two_words_connector: " ", compact: true)
=> "5y 1d 23s"
Author: Radar
Source Code: https://github.com/radar/distance_of_time_in_words
License: MIT license
1658582760
Stealth is a Ruby framework for creating text and voice chatbots. Its design is inspired by Ruby on Rails's philosophy of convention over configuration. It has an MVC architecture with the slight caveat that views
are aptly named replies
.
Getting started with Stealth is simple:
> gem install stealth
> stealth new <bot>
Stealth is extensible. All service integrations are split out into separate Ruby Gems. Things like analytics and natural language processing (NLP) can be added in as gems as well.
Currently, there are gems for:
You can find our full docs here. If something is not clear in the docs, please file an issue! We consider all shortcomings in the docs as bugs.
Stealth is versioned using Semantic Versioning, but it's more like the Linux Kernel: major version releases are just as arbitrary as minor version releases. We strive to never break anything with any version change. Patches are still issued as the "third dot" in the version string.
Author: Hellostealth
Source Code: https://github.com/hellostealth/stealth
License: MIT license
1658417340
This is a base project for creating FB Chatbots. It has a state machine and User Management and allows you to add functionality with modules.
Put all your logic into lib/bot. We've already prepared everything for you to kickstart your project.
@msg_meta holds the metadata of the incoming message, and @request_type holds the type of the request.
@current_user holds info about your current user (last seen, state machine, user id, ...)
This function will send a reply message back to the user who sent one. You can use Spintax and Emojis.
def reply_message(msg, options={})
def example()
reply_message "make {:pizza:|:sushi:|:lemon:} great again!"
end
This function will send an image back to the user who sent a message to your bot.
def reply_image(img_url)
This function will render HTML and send an image back to the user who sent a message to your bot.
def reply_html(html)
This function will render a bubble and send it to the user.
def reply_bubble
This function will return a string along with a set of button options specified in an array.
def reply_quick_buttons(msg, options=%W(Yes No))
This function will return a string containing the message a user sent to your bot.
def get_message
Most of the time you will not need the Emoji Module because it is already integrated into the reply module.
This function will return the UTF-8 representation of the given Emoji Name
def get_emoji(name)
This function will send a reply message with the UTF-8 representation of the given Emoji Name
def reply_emoji(name)
This function will always be used together with the reply_message function of the reply module. It searches for emoji names surrounded by colons and replaces them with the UTF-8 representation of the given Emoji Name.
def compute_emojis(content)
This function is the opposite of the function above.
def parse_emojis(content)
With the web search module you can transport websites to messengers. Just add two methods to your bot logic. One for handling search requests and one for handling user input on the search results.
search_request_on_website(
url: "http://www.example.com/",
form_name: 'search',
result_css_selector: '.result > a',
image_css_selector: 'img',
button_text: 'more infos'
)
handle_search_result(
url: "http://www.example.com",
result_css_selector: ".result"
)
This Module will help you with guiding users through different states of your bot.
Example usage of the State Machine Module:
class BotLogic < BaseBotLogic
def self.bot_logic
state_action 0, :greeting
state_action 1, :tutorial
state_action 2, :bye
end
def self.greeting
reply_message "greeting"
state_go
end
def self.tutorial
reply_message "tutorial"
state_go
end
def self.bye
reply_message "bye"
state_reset
end
end
To get started:
1. Clone the repo
2. Copy config/settings.yml to settings.local.yml and enter your API keys
3. Use ngrok or another tunneling tool to tunnel your connection
4. Run the following commands
bundle install
rails s
set the webhook to https://tunnel_url/bot and use your token (default: github)
git checkout -b my-new-feature
git commit -am 'Useful information about your new features'
git push origin my-new-feature
Development branch :D
Author: Davidmann4
Source Code: https://github.com/davidmann4/botstack
License:
1658220240
Shoryuken sho-ryu-ken is a super-efficient Amazon SQS thread-based message processor.
Ruby 2.4 or greater.
Add this line to your application's Gemfile:
gem 'shoryuken'
If you are using AWS SDK version 3, please also add this line:
gem 'aws-sdk-sqs'
The extra gem aws-sdk-sqs
is required in order to keep Shoryuken compatible with AWS SDK versions 2 and 3.
And then execute:
$ bundle
Check the Getting Started page.
For more information check the wiki page.
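As a minimal sketch of the standard Shoryuken worker API (the queue name and message body here are assumptions for illustration):

```ruby
# app/workers/hello_worker.rb (hypothetical queue name 'default')
class HelloWorker
  include Shoryuken::Worker

  shoryuken_options queue: 'default', auto_delete: true

  # sqs_msg is the raw SQS message; body is the message payload
  def perform(sqs_msg, body)
    puts "Received: #{body}"
  end
end

# Enqueue from anywhere in your app:
# HelloWorker.perform_async('hello world')
```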
Mike Perham, creator of Sidekiq, and everybody who contributed to it. Shoryuken wouldn't exist as it is without those contributions.
git checkout -b my-new-feature
git commit -am 'Add some feature'
git push origin my-new-feature
To run all unit specs against the latest dependency versions, execute
bundle exec rake spec
To run all Rails-related specs against all supported versions of Rails, execute
bundle exec appraisal rake spec:rails
To run integration specs, start a mock SQS server on localhost:5000
. One such option is cjlarose/moto-sqs-server. Then execute
bundle exec rake spec:integration
shoryuken help sqs
I'm looking for Shoryuken maintainers. Are you interested in helping to maintain Shoryuken? Join our Slack.
Author: ruby-shoryuken
Source Code: https://github.com/ruby-shoryuken/shoryuken
License: View license
1657951980
factory_bot_rails
factory_bot is a fixtures replacement with a straightforward definition syntax, support for multiple build strategies (saved instances, unsaved instances, attribute hashes, and stubbed objects), and support for multiple factories for the same class (user
, admin_user
, and so on), including factory inheritance.
factory_bot_rails provides Rails integration for factory_bot.
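For instance, the multiple-factories-per-class support mentioned above looks like this in a definition file (the User model and its attributes are assumptions for illustration):

```ruby
# test/factories/users.rb (hypothetical User model with name/admin attributes)
FactoryBot.define do
  factory :user do
    name  { "Jane Doe" }
    admin { false }

    # Inherits all attributes from :user, overriding admin
    factory :admin_user do
      admin { true }
    end
  end
end
```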
Supported Rails versions are listed in Appraisals
. Supported Ruby versions are listed in .travis.yml
.
Github: http://github.com/thoughtbot/factory_bot_rails
Gem:
$ gem install factory_bot_rails
Add factory_bot_rails
to your Gemfile in both the test and development groups:
group :development, :test do
gem 'factory_bot_rails'
end
You may want to configure your test suite to include factory_bot methods; see configuration.
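With RSpec, for example, the usual configuration (per factory_bot's documentation) mixes in the syntax methods so you can call create/build directly in specs:

```ruby
# spec/support/factory_bot.rb
RSpec.configure do |config|
  config.include FactoryBot::Syntax::Methods
end
```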
By default, factory_bot_rails will automatically load factories defined in the following locations, relative to the root of the Rails project:
factories.rb
test/factories.rb
spec/factories.rb
factories/*.rb
test/factories/*.rb
spec/factories/*.rb
You can configure custom definition file paths by adding the following to config/application.rb
or the appropriate environment configuration in config/environments
:
config.factory_bot.definition_file_paths = ["custom/factories"]
This will cause factory_bot_rails to automatically load factories in custom/factories.rb
and custom/factories/*.rb
.
It is possible to use this setting to share factories from a gem:
begin
require 'factory_bot_rails'
rescue LoadError
end
class MyEngine < ::Rails::Engine
config.factory_bot.definition_file_paths +=
[File.expand_path('../factories', __FILE__)] if defined?(FactoryBotRails)
end
You can also disable automatic factory definition loading entirely by using an empty array:
config.factory_bot.definition_file_paths = []
Including factory_bot_rails in the development group of your Gemfile will cause Rails to generate factories instead of fixtures. If you want to disable this feature, you can either move factory_bot_rails out of the development group of your Gemfile, or add the following configuration:
config.generators do |g|
g.factory_bot false
end
If fixture replacement is enabled and you already have a test/factories.rb
file (or spec/factories.rb
if using rspec_rails), generated factories will be inserted at the top of the existing file. Otherwise, factories will be generated in the test/factories
directory (spec/factories
if using rspec_rails), in a file matching the name of the table (e.g. test/factories/users.rb
).
To generate factories in a different directory, you can use the following configuration:
config.generators do |g|
g.factory_bot dir: 'custom/dir/for/factories'
end
Note that factory_bot_rails will not automatically load files in custom locations unless you add them to config.factory_bot.definition_file_paths
as well.
The suffix option allows you to customize the name of the generated file with a suffix:
config.generators do |g|
g.factory_bot suffix: "factory"
end
This will generate test/factories/users_factory.rb
instead of test/factories/users.rb
.
For even more customization, use the filename_proc
option:
config.generators do |g|
g.factory_bot filename_proc: ->(table_name) { "prefix_#{table_name}_suffix" }
end
To override the default factory template, define your own template in lib/templates/factory_bot/model/factories.erb
. This template will have access to any methods available in FactoryBot::Generators::ModelGenerator
. Note that factory_bot_rails will only use this custom template if you are generating each factory in a separate file; it will have no effect if you are generating all of your factories in test/factories.rb
or spec/factories.rb
.
Check out the guide.
Please see CONTRIBUTING.md.
factory_bot_rails was originally written by Joe Ferris and is maintained by thoughtbot. Many improvements and bugfixes were contributed by the open source community.
factory_bot_rails is maintained and funded by thoughtbot, inc. The names and logos for thoughtbot are trademarks of thoughtbot, inc.
We are passionate about open source software. See our other projects. We are available for hire.
Author: thoughtbot
Source Code: https://github.com/thoughtbot/factory_bot_rails
License: MIT license