Royce Reinger

Minitest: A Complete Suite Of Testing Facilities Supporting TDD, BDD



minitest provides a complete suite of testing facilities supporting TDD, BDD, mocking, and benchmarking.

"I had a class with Jim Weirich on testing last week and we were
 allowed to choose our testing frameworks. Kirk Haines and I were
 paired up and we cracked open the code for a few test
 frameworks...

 I MUST say that minitest is *very* readable / understandable
 compared to the 'other two' options we looked at. Nicely done and
 thank you for helping us keep our mental sanity."

-- Wayne E. Seguin

minitest/test is a small and incredibly fast unit testing framework. It provides a rich set of assertions to make your tests clean and readable.

minitest/spec is a functionally complete spec engine. It hooks onto minitest/test and seamlessly bridges test assertions over to spec expectations.
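A minimal sketch of that bridge: each assert_*/refute_* assertion has a must_*/wont_* expectation counterpart, reachable through the `_()` wrapper.

```ruby
require "minitest/autorun"

describe "the expectation bridge" do
  it "mirrors assert_equal / refute_nil" do
    _(2 + 2).must_equal 4      # same check as: assert_equal 4, 2 + 2
    _("x").wont_be_nil         # same check as: refute_nil "x"
  end
end
```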

minitest/benchmark is an awesome way to assert the performance of your algorithms in a repeatable manner. Now you can assert that your newb co-worker doesn't replace your linear algorithm with an exponential one!

minitest/mock, by Steven Baker, is a beautifully tiny mock (and stub) object framework.

minitest/pride shows pride in testing and adds coloring to your test output. I guess it is an example of how to write IO pipes too. :P

minitest/test is meant to have a clean implementation for language implementors that need a minimal set of methods to bootstrap a working test suite. For example, there is no magic involved for test-case discovery.

"Again, I can't praise enough the idea of a testing/specing
 framework that I can actually read in full in one sitting!"

-- Piotr Szotkowski

Comparing to rspec:

rspec is a testing DSL. minitest is ruby.

-- Adam Hawkins, "Bow Before MiniTest"

minitest doesn't reinvent anything that ruby already provides, like: classes, modules, inheritance, methods. This means you only have to learn ruby to use minitest and all of your regular OO practices like extract-method refactorings still apply.


minitest/autorun - the easy and explicit way to run all your tests.

minitest/test - a very fast, simple, and clean test system.

minitest/spec - a very fast, simple, and clean spec system.

minitest/mock - a simple and clean mock/stub system.

minitest/benchmark - an awesome way to assert your algorithm's performance.

minitest/pride - show your pride in testing!

minitest/test_task - a full-featured and clean rake task generator.

Incredibly small and fast runner, but no bells and whistles.

Written by squishy human beings. Software can never be perfect. We will all eventually die.


See design_rationale.rb to see how specs and tests work in minitest.


Given that you'd like to test the following class:

class Meme
  def i_can_has_cheezburger?
    "OHAI!"
  end

  def will_it_blend?
    "YES!"
  end
end

Unit tests

Define your tests as methods beginning with test_.

require "minitest/autorun"

class TestMeme < Minitest::Test
  def setup
    @meme = Meme.new
  end

  def test_that_kitty_can_eat
    assert_equal "OHAI!", @meme.i_can_has_cheezburger?
  end

  def test_that_it_will_not_blend
    refute_match /^no/i, @meme.will_it_blend?
  end

  def test_that_will_be_skipped
    skip "test this later"
  end
end

Specs

require "minitest/autorun"

describe Meme do
  before do
    @meme = Meme.new
  end

  describe "when asked about cheeseburgers" do
    it "must respond positively" do
      _(@meme.i_can_has_cheezburger?).must_equal "OHAI!"
    end
  end

  describe "when asked about blending possibilities" do
    it "won't say no" do
      _(@meme.will_it_blend?).wont_match /^no/i
    end
  end
end

For matchers support check out:


Benchmarks

Add benchmarks to your tests.

# optionally run benchmarks, good for CI-only work!
require "minitest/benchmark" if ENV["BENCH"]

class TestMeme < Minitest::Benchmark
  # Override self.bench_range or default range is [1, 10, 100, 1_000, 10_000]
  def bench_my_algorithm
    assert_performance_linear 0.9999 do |n| # n is a range value
      @obj.my_algorithm(n)
    end
  end
end

Or add them to your specs. If you make benchmarks optional, you'll need to wrap your benchmarks in a conditional since the methods won't be defined. In minitest 5, the describe name needs to match /Bench(mark)?$/.

describe "Meme Benchmark" do
  if ENV["BENCH"] then
    bench_performance_linear "my_algorithm", 0.9999 do |n|
      100.times do
        @obj.my_algorithm(n)
      end
    end
  end
end

outputs something like:

# Running benchmarks:

TestBlah	100	1000	10000
bench_my_algorithm	 0.006167	 0.079279	 0.786993
bench_other_algorithm	 0.061679	 0.792797	 7.869932

Output is tab-delimited to make it easy to paste into a spreadsheet.


Mocks

Mocks and stubs are defined using terminology by Fowler & Meszaros at

“Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.”

class MemeAsker
  def initialize(meme)
    @meme = meme
  end

  def ask(question)
    method = question.tr(" ", "_") + "?"
    @meme.__send__(method)
  end
end

require "minitest/autorun"

describe MemeAsker, :ask do
  describe "when passed an unpunctuated question" do
    it "should invoke the appropriate predicate method on the meme" do
      @meme = Minitest::Mock.new
      @meme_asker = MemeAsker.new @meme
      @meme.expect :will_it_blend?, :return_value

      @meme_asker.ask "will it blend"

      @meme.verify
    end
  end
end


Multi-threading and Mocks

Minitest mocks do not support multi-threading. If it works, fine, if it doesn't you can use regular ruby patterns and facilities like local variables. Here's an example of asserting that code inside a thread is run:

def test_called_inside_thread
  called = false
  pr = proc { called = true }
  thread = Thread.new(&pr)
  thread.join
  assert called, "proc not called"
end


Stubs

Mocks and stubs are defined using terminology by Fowler & Meszaros at

“Stubs provide canned answers to calls made during the test”.

Minitest's stub method overrides a single method for the duration of the block.

def test_stale_eh
  obj_under_test = Something.new

  refute obj_under_test.stale?

  Time.stub :now, Time.at(0) do   # stub goes away once the block is done
    assert obj_under_test.stale?
  end
end

A note on stubbing: In order to stub a method, the method must actually exist prior to stubbing. Use a singleton method to create a new non-existing method:

def obj_under_test.fake_method
  :fake_result
end

Running Your Tests

Ideally, you'll use a rake task to run your tests (see below), either piecemeal or all at once. BUT! You don't have to:

% ruby -Ilib:test test/minitest/test_minitest_test.rb
Run options: --seed 37685

# Running:

...................................................................... (etc)

Finished in 0.107130s, 1446.8403 runs/s, 2959.0217 assertions/s.

155 runs, 317 assertions, 0 failures, 0 errors, 0 skips

There are runtime options available, both from minitest itself, and also provided via plugins. To see them, simply run with --help:

% ruby -Ilib:test test/minitest/test_minitest_test.rb --help
minitest options:
    -h, --help                       Display this help.
    -s, --seed SEED                  Sets random seed. Also via env. Eg: SEED=n rake
    -v, --verbose                    Verbose. Show progress processing files.
    -n, --name PATTERN               Filter run on /regexp/ or string.
    -e, --exclude PATTERN            Exclude /regexp/ or string from run.

Known extensions: pride, autotest
    -p, --pride                      Pride. Show your testing pride!
    -a, --autotest                   Connect to autotest server.

Rake Tasks

You can set up a rake task to run all your tests by adding this to your Rakefile:

require "minitest/test_task"

Minitest::TestTask.create # named test, sensible defaults

# or more explicitly:

Minitest::TestTask.create(:test) do |t|
  t.libs << "test"
  t.libs << "lib"
  t.warning = false
  t.test_globs = ["test/**/*_test.rb"]
end

task :default => :test

Each of these will generate 4 tasks:

rake test          :: Run the test suite.
rake test:cmd      :: Print out the test command.
rake test:isolated :: Show which test files fail when run separately.
rake test:slow     :: Show bottom 25 tests sorted by time.

Rake Task Variables

There are a bunch of variables you can supply to rake to modify the run.

MT_LIB_EXTRAS :: Extra libs to dynamically override/inject for custom runs.
N             :: -n: Tests to run (string or /regexp/).
X             :: -x: Tests to exclude (string or /regexp/).
A             :: Any extra arguments. Honors shell quoting.
MT_CPU        :: How many threads to use for parallel test runs
SEED          :: -s --seed Sets random seed.
TESTOPTS      :: Deprecated, same as A
FILTER        :: Deprecated, same as A

Writing Extensions

To define a plugin, add a file named minitest/XXX_plugin.rb to your project/gem. That file must be discoverable via ruby's LOAD_PATH (via rubygems or otherwise). Minitest will find and require that file using Gem.find_files. It will then try to call plugin_XXX_init during startup. The option processor will also try to call plugin_XXX_options passing the OptionParser instance and the current options hash. This lets you register your own command-line options. Here's a totally bogus example:

# minitest/bogus_plugin.rb:

module Minitest
  def self.plugin_bogus_options(opts, options)
    opts.on "--myci", "Report results to my CI" do
      options[:myci] = true
      options[:myci_addr] = get_myci_addr
      options[:myci_port] = get_myci_port
    end
  end

  def self.plugin_bogus_init(options)
    self.reporter << if options[:myci]
  end
end

Adding custom reporters

Minitest uses a composite reporter to output test results via multiple reporter instances. You can add new reporters to the composite during the init_plugins phase. As we saw in plugin_bogus_init above, you simply add your reporter instance to the composite via <<.

AbstractReporter defines the API for reporters. You may subclass it and override any method you want to achieve your desired behavior.


start :: Called when the run has started.

record :: Called for each result, passed or otherwise.

report :: Called at the end of the run.

passed? :: Called to see if you detected any problems.

Using our example above, here is how we might implement MyCI:

# minitest/bogus_plugin.rb

module Minitest
  class MyCI < AbstractReporter
    attr_accessor :results, :addr, :port

    def initialize options
      self.results = []
      self.addr = options[:myci_addr]
      self.port = options[:myci_port]
    end

    def record result
      self.results << result
    end

    def report
      CI.connect(addr, port).send_results self.results
    end
  end

  # code from above...
end


What versions are compatible with what? Or what versions are supported?

Minitest is a dependency of rails, which until fairly recently had an overzealous backwards compatibility policy. As such, I'm stuck supporting versions of ruby that are long past EOL. Once rails 5.2 is dropped (hopefully April 2021), I get to drop a bunch of versions of ruby that I have to currently test against.

(As of 2021-01-31)

Current versions of rails:

| rails | min ruby | rec ruby | minitest | status   |
|   7.0 | >= 2.7   |      3.0 | >= 5.1   | Future   |
|   6.1 | >= 2.5   |      3.0 | >= 5.1   | Current  |
|   6.0 | >= 2.5   |      2.6 | >= 5.1   | Security |
|   5.2 | >= 2.2.2 |      2.5 | ~> 5.1   | Security | EOL @railsconf 2021?

Current versions of ruby:

| ruby | Status  |   EOL Date |
|  3.0 | Current | 2024-03-31 |
|  2.7 | Maint   | 2023-03-31 |
|  2.6 | Maint*  | 2022-03-31 |
|  2.5 | EOL     | 2021-03-31 |
|  2.4 | EOL     | 2020-03-31 |
|  2.3 | EOL     | 2019-03-31 |
|  2.2 | EOL     | 2018-03-31 |

See also:

How to test SimpleDelegates?

The following implementation and test:

class Worker < SimpleDelegator
  def work
  end
end

describe Worker do
  before do
    @worker = Worker.new(Object.new)
  end

  it "must respond to work" do
    _(@worker).must_respond_to :work
  end
end

outputs a failure:

  1) Failure:
Worker#test_0001_must respond to work [bug11.rb:16]:
Expected #<Object:0x007f9e7184f0a0> (Object) to respond to #work.

Worker is a SimpleDelegator, which in 1.9+ is a subclass of BasicObject. Expectations are put on Object (one level down), so the Worker (SimpleDelegator) hits method_missing and delegates down to the wrapped instance. That object doesn't respond to work, so the test fails.

You can bypass SimpleDelegator#method_missing by extending the worker with Minitest::Expectations. You can either do that in your setup at the instance level, like:

before do
  @worker = Worker.new(Object.new)
  @worker.extend Minitest::Expectations
end

or you can extend the Worker class (within the test file!), like:

class Worker
  include ::Minitest::Expectations
end

How to share code across test classes?

Use a module. That's exactly what they're for:

module UsefulStuff
  def useful_method
    # ...
  end
end

describe Blah do
  include UsefulStuff

  def test_whatever
    # useful_method available here
  end
end

Remember, describe simply creates test classes. It's just ruby at the end of the day and all your normal Good Ruby Rules (tm) apply. If you want to extend your test using setup/teardown via a module, just make sure you ALWAYS call super. before/after automatically call super for you, so make sure you don't do it twice.
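A minimal sketch of the setup-via-module idiom above; the `super` call keeps Minitest's own setup chain intact. Module and method names are illustrative:

```ruby
require "minitest/autorun"

module UsefulSetup
  def setup
    super                # ALWAYS call super so other setups still run
    @useful = [1, 2, 3]
  end
end

class TestWithUsefulSetup < Minitest::Test
  include UsefulSetup

  def test_has_useful_state
    assert_equal 3, @useful.size
  end
end
```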

How to run code before a group of tests?

Use a constant with begin…end like this:

describe Blah do
  SETUP = begin
    # ... this runs once when describe Blah starts
  end

  # ...
end

This can be useful for expensive initializations or sharing state. Remember, this is just ruby code, so you need to make sure this technique and sharing state doesn't interfere with your tests.
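A fuller, runnable sketch of that idiom; EXPENSIVE stands in for any costly one-time initialization:

```ruby
require "minitest/autorun"

describe "Blah" do
  EXPENSIVE = begin
    [1, 2, 3]       # runs once, when the describe body is evaluated
  end

  it "reuses the shared value" do
    _(EXPENSIVE.sum).must_equal 6
  end
end
```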

Why am I seeing uninitialized constant MiniTest::Test (NameError)?

Are you running the test with Bundler (e.g. via bundle exec)? If so, in order to require minitest, you must first add gem "minitest" to your Gemfile and run bundle. Once it's installed, you should be able to require minitest and run your tests.

Prominent Projects using Minitest:





rails (active_support et al)



…and of course, everything from seattle.rb…

Developing Minitest:

Minitest requires Hoe.

Minitest's own tests require UTF-8 external encoding.

This is a common problem in Windows, where the default external Encoding is often CP850, but can affect any platform. Minitest can run test suites using any Encoding, but to run Minitest's own tests you must have a default external Encoding of UTF-8.

If your encoding is wrong, you'll see errors like:

--- expected
+++ actual
@@ -1,2 +1,3 @@
 # encoding: UTF-8
-"Expected /\\w+/ to not match \"blah blah blah\"."
+"Expected /\\w+/ to not match # encoding: UTF-8
+\"blah blah blah\"."

To check your current encoding, run:

ruby -e 'puts Encoding.default_external'

If your output is something other than UTF-8, you can set the RUBYOPT env variable to a value of '-Eutf-8'. Something like:

RUBYOPT='-Eutf-8' ruby -e 'puts Encoding.default_external'

Check your OS/shell documentation for the precise syntax (the above will not work on a basic Windows CMD prompt; look for the SET command). Once you've got it successfully outputting UTF-8, use the same setting when running rake in Minitest.

Minitest's own tests require GNU (or similar) diff.

This is also a problem primarily affecting Windows developers. PowerShell has a command called diff, but it is not suitable for use with Minitest.

If you see failures like either of these, you are probably missing a diff tool:

  4) Failure:
TestMinitestUnitTestCase#test_assert_equal_different_long [D:/ruby/seattlerb/minitest/test/minitest/test_minitest_test.rb:936]:
Expected: "--- expected\n+++ actual\n@@ -1 +1 @@\n-\"hahahahahahahahahahahahahahahahahahahaha\"\n+\"blahblahblahblahblahblahblahblahblahblah\"\n"
  Actual: "Expected: \"hahahahahahahahahahahahahahahahahahahaha\"\n  Actual: \"blahblahblahblahblahblahblahblahblahblah\""

  5) Failure:
TestMinitestUnitTestCase#test_assert_equal_different_collection_hash_hex_invisible [D:/ruby/seattlerb/minitest/test/minitest/test_minitest_test.rb:845]:
Expected: "No visible difference in the Hash#inspect output.\nYou should look at the implementation of #== on Hash or its members.\n{1=>#<Object:0x00000003ba0470>}"
  Actual: "Expected: {1=>#<Object:0x00000003ba0470>}\n  Actual: {1=>#<Object:0x00000003ba0448>}"

If you use Cygwin or MSYS2 or similar there are packages that include a GNU diff for Windows. If you don't, you can download GNU diffutils from (make sure to add it to your PATH).

You can make sure it's installed and on your PATH with:

diff.exe -v

There will be multiple lines of output; the first should be something like:

diff (GNU diffutils) 2.8.1

If you are using PowerShell make sure you run diff.exe, not just diff, which will invoke the PowerShell built-in function.

Known Extensions:


Bridge between Capybara RSpec matchers and Minitest::Spec expectations (e.g. page.must_have_content("Title")).


Test names print Ruby Object types in color with your Minitest Spec style tests.


Metadata for describe/it blocks & CLI tag filter. E.g. it "requires JS driver", js: true do & ruby test.rb --tag js runs tests tagged :js.


Minimal support to use Spec style in Rails 5+.


for swagger based automated API testing.


Around block for minitest. An alternative to setup/teardown dance.


Adds Minitest assertions to test for errors raised or not raised by Minitest itself.


autotest is a continuous testing facility meant to be used during development.


minitest-bacon extends minitest with bacon-like functionality.


Adds support for RSpec-style let! to immediately invoke let statements before each test.


Helps you isolate and debug random test failures.


Display test results with a Blink1.


Assertions and expectations for testing Capistrano recipes.


Capybara matchers support for minitest unit and spec.


Run Minitest suites as Chef report handlers


CI reporter plugin for Minitest.


Defines contexts for code reuse in Minitest specs that share common expectations.


Wraps assert so failed assertions drop into the ruby debugger.


Patches Minitest to allow for an easily configurable output.


Minimal documentation format inspired by rspec's.


Detailed output inspired by rspec's documentation format.


Print out emoji for your test passes, fails, and skips.


Semantically symmetric aliases for assertions and expectations.


Clean API for excluding certain tests you don't want to run under certain conditions.


Reimplements RSpec's “fail fast” feature


Support unit tests with expectation results in files. Differing results will be stored again in files.


Adds assertion and expectation to help testing filesystem contents.


Makes your Minitest mocks more resilient.


Focus on one test at a time.


A minitest plugin that adds a report of the top tests by number of objects allocated.


Support minitest expectation methods for all objects


Generally useful additions to minitest's assertions and expectations.


Test notifier for minitest via growl.




Adds Minitest assertions to test for the existence of HTML tags, including contents, within a provided string.


Reporting that builds a heat map of failure locations


Around and before_all/after_all/around_all hooks


Pretty, single-page HTML reports for your Minitest runs


Implicit declaration of the test subject.


Instrument ActiveSupport::Notifications when test method is executed.


Store information about speed of test execution provided by minitest-instrument in database.


JUnit-style XML reporter for minitest.


Use Minitest assertions with keyword arguments.


Test notifier for minitest via libnotify.


Run test at line number.


Define assert_log and enable minitest to test log messages. Supports Logger and Log4r::Logger.


Provides extensions to minitest for macruby UI testing.


Adds support for RSpec-style matchers to minitest.


Adds assertions that adhere to the matcher spec, but without any expectation infections.


Annotate tests with metadata (key-value).


Provides method call assertions for minitest.


Mongoid assertion matchers for Minitest.


Provides must_not as an alias for wont in Minitest.


Automatically retry failed test to help with flakiness.


Reporter for the Mac OS X notification center.


Fork-based parallelization


Run tests in parallel with a single database.


PowerAssert for Minitest.


Adds support for .predicate? methods.


List the 10 slowest tests in your suite.


Minitest integration for Rails 3.x.


Capybara integration for Minitest::Rails.


Create customizable Minitest output formats.


Colored red/green output for Minitest.


Use RSpec Mocks with Minitest.


minitest-server provides a client/server setup with your minitest process, allowing your test run to send its results directly to a handler.


Minitest assertions to speed-up development and testing of Ruby Sequel database setups.


Support for shared specs and shared spec subclasses


RSpec-style x.should == y assertions for Minitest.


Adding all manner of shoulds to Minitest (bad idea)


Print a list of tests that take too long


Provides rspec-ish context method to Minitest::Spec.


Expect syntax for Minitest::Spec (e.g. expect(sequences).to_include :celery_man).


Minitest::Spec extensions for Rails and beyond.


Drop in Minitest::Spec superclass for ActiveSupport::TestCase.


Runs (Get it? It's fast!) your tests and makes it easier to rerun individual failures.


Find leaking state between tests


Stub any instance of a method on the given class for the duration of a block.


Stub constants for the duration of a block.


Add tags for minitest.


Adds a new assertion to minitest for checking the contents of a collection, ignoring element order.


Automatic cassette managment with Minitest::Spec and VCR.


Adds structured logging, data explication, and verdicts.


Get tests results as a TestResult object.


Shoulda style syntax for minitest test::unit.


Bridges between test/unit and minitest.


Minitest matchers for Mongoid.


Minitest integration for mutant.


A pry plugin w/ minitest support. See pry-rescue/minitest.rb.


Declutter your test files from large hardcoded data and update them automatically when your code changes.


Easily translate any RSpec matchers to Minitest assertions and expectations.


Multiple stubbing 'berries', sweet and useful stub helpers and assertions. ( stub_must, assert_method_called, stubbing ORM objects by id )

Unknown Extensions:

Authors… Please send me a pull request with a description of your minitest extension.


Minitest related goods

minitest/pride fabric:


Requirements:

Ruby 2.3+. No magic is involved. I hope.


Installation:

sudo gem install minitest

On 1.9, you already have it. To get newer candy you can still install the gem, and then requiring “minitest/autorun” should automatically pull it in. If not, you'll need to do it yourself:

gem "minitest"     # ensures you're using the gem, and not the built-in MT
require "minitest/autorun"

# ... usual testing stuffs ...

DO NOTE: There is a serious problem with the way that ruby 1.9/2.0 packages their own gems. They install a gem specification file, but don't install the gem contents in the gem path. This messes up Gem.find_files and many other things (gem which, gem contents, etc).

Just install minitest as a gem for real and you'll be happier.





Download Details:

Author: Minitest 
Source Code: 
License: MIT License

#ruby #testing #test 

Minitest: A Complete Suite Of Testing Facilities Supporting TDD, BDD
Monty Boehm


Twitter.jl: Julia Package to Access Twitter API


A Julia package for interacting with the Twitter API.

Twitter.jl is a Julia package to work with the Twitter API v1.1. Currently, only the REST API methods are supported; streaming API endpoints aren't implemented at this time.

All functions have required arguments for those parameters required by Twitter and an options keyword argument to provide a Dict{String, String} of optional parameters, as described in the Twitter API documentation. Most function calls will return either a Dict or an Array <: TwitterType. Bad requests will return the response code from the API (403, 404, etc).

DataFrame methods are defined for functions returning composite types: Tweets, Places, Lists, and Users.


Before you can make use of this package, you must create an application on Twitter's Developer Platform.

Once your application is approved, you can access your dashboard/portal to grab your authentication credentials from the "Details" tab of the application.

Note that you will also want to ensure that your App has Read / Write OAuth access in order to post tweets. You can find out more about this on Stack Overflow.


To install this package, enter ] on the REPL to bring up Julia's package manager. Then add the package:

julia> ]
(v1.7) pkg> add Twitter

Tip: Press Ctrl+C to return to the julia> prompt.


To run Twitter.jl, enter the following command in your Julia REPL

julia> using Twitter

Then a global set of credentials has to be declared with the twitterauth function. This function takes the consumer_key (API Key), consumer_secret (API Key Secret), oauth_token (Access Token), and oauth_secret (Access Token Secret), respectively.

twitterauth("6nOtpXmf...", # API Key
            "sES5Zlj096S...", # API Key Secret
            "98689850-Hj...", # Access Token
            "UroqCVpWKIt...") # Access Token Secret
  • Ensure you put your credentials in an env file to avoid pushing your secrets to the public 🙀.

Note: This package does not currently support OAuth authentication.

Code examples

See runtests.jl for example function calls.

using Twitter, Test
using JSON, OAuth

# set debugging


mentions_timeline_default = get_mentions_timeline()
tw = mentions_timeline_default[1]
tw_df = DataFrame(mentions_timeline_default)
@test 0 <= length(mentions_timeline_default) <= 20
@test typeof(mentions_timeline_default) == Vector{Tweets}
@test typeof(tw) == Tweets
@test size(tw_df)[2] == 30

user_timeline_default = get_user_timeline(screen_name = "randyzwitch")
@test typeof(user_timeline_default) == Vector{Tweets}

home_timeline_default = get_home_timeline()
@test typeof(home_timeline_default) == Vector{Tweets}

get_tweet_by_id = get_single_tweet_id(id = "434685122671939584")
@test typeof(get_tweet_by_id) == Tweets

duke_tweets = get_search_tweets(q = "#Duke", count = 200)
@test typeof(duke_tweets) <: Dict

#test sending/deleting direct messages
#commenting out because Twitter API changed. Come back to fix
# send_dm = post_direct_messages_send(text = "Testing from Julia, this might disappear later $(time())", screen_name = "randyzwitch")
# get_single_dm = get_direct_messages_show(id =
# destroy = post_direct_messages_destroy(id =
# @test typeof(send_dm) == Tweets
# @test typeof(get_single_dm) == Tweets
# @test typeof(destroy) == Tweets

#creating/destroying friendships
add_friend = post_friendships_create(screen_name = "kyrieirving")

unfollow = post_friendships_destroy(screen_name = "kyrieirving")
unfollow_df = DataFrame(unfollow)
@test typeof(add_friend) == Users
@test typeof(unfollow) == Users
@test size(unfollow_df)[2] == 40

# create a cursor for follower ids
follow_cursor_test = get_followers_ids(screen_name = "twitter", count = 10_000)
@test length(follow_cursor_test["ids"]) == 10_000

# create a cursor for friend ids - use barackobama because he follows a lot of accounts!
friend_cursor_test = get_friends_ids(screen_name = "BarackObama", count = 10_000)
@test length(friend_cursor_test["ids"]) == 10_000

# create a test for home timelines
home_t = get_home_timeline(count = 2)
@test length(home_t) > 1

# TEST of cursoring functionality on user timelines
user_t = get_user_timeline(screen_name = "stefanjwojcik", count = 400)
@test length(user_t) == 400
# get the minimum ID of the tweets returned (the earliest)
minid = minimum( for x in user_t);

# now iterate until you hit that tweet: should return 399
# WARNING: current versions of julia cannot use keywords in macros? read here:
# eventually replace since_id = minid
tweets_since = get_user_timeline(screen_name = "stefanjwojcik", count = 400, since_id = 1001808621053898752, include_rts=1)

@test length(tweets_since)>=399

# testing get_mentions_timeline
mentions = get_mentions_timeline(screen_name = "stefanjwojcik", count = 300) 
@test length(mentions) >= 50 #sometimes API doesn't return number requested (twitter API specifies count is the max returned, may be much lower)
@test Tweets<:typeof(mentions[1])

# testing retweets_of_me
my_rts = get_retweets_of_me(count = 300)
@test Tweets<:typeof(my_rts[1])

Want to contribute?

Contributions are welcome! Kindly refer to the contribution guidelines.


Author: Randyzwitch
Source Code: 
License: View license

#julia #api #twitter 

Twitter.jl: Julia Package to Access Twitter API
Lawson Wehner


A Package to Make Unit Test Code More Readable and Well Documented


A Flutter package for creating more readable tests. If you are not familiar with Flutter's Unit tests

Given we feel that our tests are the best documentation of the behaviors in our code.
When we read our tests.
Then we want them to be easy to understand.
And test code be elegant.
But be written without any pain.


Improve test code readability

// Without `given_when_then`
group('calculator', () {
  // ...
  group('add 1', () {
    test('result should be 1', () {
      // ...
    });

    group('[and] subtract 1', () {
      test('res should be 0', () {
        // ...
      });
    });
  });
});

// 🔥 With `given_when_then` as a common English sentence
given('calculator', () {
  // ...
  when('add 1', () => calc.add(1), then: () {
    then('result should be 1', () {
      // ...
    });

    when('[and] subtract 1', () => calc.subtract(1), body: () {
      then('res should be 0', () {
        // ...
      });
    });
  });
});

With shouldly it makes super readable test code 😍

given('calculator', () {
  late Calculator calc;

  before(() {
    calc = Calculator();
  });

  when('add 1', () {
    before(() => calc.add(1));

    then('result should be 1', () {
      calc.res.should.be(1);
    });

    when('[and] subtract 1', () {
      before(() => calc.subtract(1));

      then('res should be 0', () {
        calc.res.should.be(0);
      });
    });
  });
});

Auto-compose test messages in BDD style

Without given_when_then

✓ calculator When add 1 result should be 1
✓ calculator When add 1 [and] subtract 1 res should be 0

With given_when_then with minimal effort

✓ Given empty calculator When add 1 Then result should be 1
✓ Given empty calculator When add 1 and subtract 1 Then res should be 0


Simple example

Without given_when_then

group('empty calculator', () {
  late Calculator calc;

  setUp(() {
    calc = Calculator();
  });

  group('add 1', () {
    setUp(() => calc.add(1));

    test('result should be 1', () {
      calc.res.should.be(1);
    });

    group('[and] subtract 1', () {
      setUp(() => calc.subtract(1));

      test('res should be 0', () {
        calc.res.should.be(0);
      });
    });
  });
});

With given_when_then

given('empty calculator', () {
  late Calculator calc;

  before(() {
    calc = Calculator();
  });

  when('add 1', () => calc.add(1), then: () {
    then('result should be 1', () {
      calc.res.should.be(1);
    });

    and('subtract 1', () => calc.subtract(1), body: () {
      then('res should be 0', () {
        calc.res.should.be(0);
      });
    });
  });
});

Advanced example with mocking

given('Post Controller', body: () {
  late PostController postController;
  late IPostRepository mockPostRepository;
  late IToastr mockToastr;

  before(() {
    mockPostRepository = MockPostRepository();
    mockToastr = MockToastr();
    postController = PostController(
      repo: mockPostRepository,
      toastr: mockToastr,
    );
  });

  whenn('save new valid post', () {
    bool? saveResult;

    before(() async {
      when(() => mockPostRepository.addNew('new post'))
          .thenAnswer((_) => Future.value(true));

      saveResult = await postController.addNew('new post');
    });

    then('should return true', () async {
      saveResult.should.beTrue();
    });

    then('toastr shows success', () async {
      verify(() => mockToastr.success('ok')).called(1);
    });
  });

  whenn('save new invalid post', () {
    bool? saveResult;

    before(() async {
      when(() => mockPostRepository.addNew('new invalid post'))
          .thenAnswer((_) => Future.value(false));

      saveResult = await postController.addNew('new invalid post');
    });

    then('should return false', () async {
      saveResult.should.beFalse();
    });

    then('toastr shows error', () async {
      verify(() => mockToastr.error('invalid post')).called(1);
    });
  });
});

Test cases

There are two ways to use test cases:

version 1

testCases([
  const TestCase([1, 1, 2]),
  const TestCase([5, 3, 8]),
], (testCase) {
  final x = testCase.args[0] as int;
  final y = testCase.args[1] as int;

  given('two numbers $x and $y', () {
    when('summarizing them', () {
      then('the result should be equal to ${testCase.args.last}', () {
        expect(x + y, testCase.args[2] as int);
      });
    });
  });
});

Given two numbers 1 and 1 
When summarizing them 
Then the result should be equal to 2
Given two numbers 5 and 3 
When summarizing them 
Then the result should be equal to 8

version 2 - with generic

testCases2<String, String>([
  const TestCase2('Flutter', 'F'),
  const TestCase2('Awesome', 'A'),
], (args) {
  test('Word ${args.arg1} start with ${args.arg2}', () {
    expect(args.arg1.startsWith(args.arg2), isTrue);
  });
});

✓ Word Flutter start with F
✓ Word Awesome start with A

Formatting the test report 📜

You can format the test report, either on a single line or with every step printed on its own line, by setting the GivenWhenThenOptions.pads variable to an integer value, e.g.

GivenWhenThenOptions.pads = 4;

and result will be:

Given the account balance is $100 
    And the card is valid 
    And the machine contains enough money 
When the Account Holder requests $20 
Then the Cashpoint should dispense
    And the account balance should be $80
    And the card should be returned

Known Issues

  • Collision with the mocktail or mockito packages, which also provide a when method. You can hide when and use this package's whenn instead, as shown below.

But it is preferable to hide and rename the imports, like so:

import 'package:mocktail/mocktail.dart' hide when;
import 'package:mocktail/mocktail.dart' as mktl show when;
import 'package:given_when_then_unit_test/given_when_then_unit_test.dart' hide when;

void main() {
  given('Post Controller', () {
    // .. omit
    whenn('save new invalid post', () {
      // ... omit
      then('should return false', () async {
        // ...
      });

      then('toastr shows error', () async {
        verify(() => mockToastr.error('invalid post')).called(1);
      });
    });
  });
}

And & But

The And and But steps are interchangeable.

However, in English, But is generally used in a negative context. Using But makes the intent of the test explicit and removes any possible ambiguity.


Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add given_when_then_unit_test

With Flutter:

 $ flutter pub add given_when_then_unit_test

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  given_when_then_unit_test: ^0.1.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:given_when_then_unit_test/given_when_then_unit_test.dart';


// ignore_for_file: public_member_api_docs

import 'package:given_when_then_unit_test/given_when_then_unit_test.dart';
import 'package:shouldly/shouldly.dart';
import 'package:test/test.dart';

class Calculator {
  num _val = 0;
  num get res => _val;

  void add(num value) {
    _val = _val + value;
  }

  void subtract(int value) {
    _val = _val - value;
  }
}

void main() {
  group('calculator', () {
    late Calculator calc;
    setUp(() => calc = Calculator());

    group('add 1', () {
      setUp(() => calc.add(1));
      test('result should be 1', () {
        expect(calc.res, 1);
      });

      group('[and] subtract 1', () {
        setUp(() => calc.subtract(1));
        test('res should be 0', () {
          expect(calc.res, isZero);
        });
      });
    });
  });

  // You can rewrite the tests above with Given When Then + Shouldly as a common English sentence
  given('calculator', () {
    late Calculator calc;

    before(() {
      calc = Calculator();
    });

    when('add 1', () {
      before(() => calc.add(1));
      then('result should be 1', () {
        expect(calc.res, 1);
      });

      and('subtract 1', () {
        before(() => calc.subtract(1));

        then('res should be 0', () {
          expect(calc.res, isZero);
        });
      });
    });
  });
}


We accept the following contributions:

  • Ideas on how to improve readability or performance
  • Reporting issues
  • Fixing bugs
  • Improving documentation and comments

Author: AndrewPiterov
Source Code: 
License: MIT license

#flutter #dart #unit #test 

A Package to Make Unit Test Code More Readable and Well Documented
Monty Boehm


LARS.jl: Least angle Regression and The Lasso Covariance Test

Least angle regression 


Least angle regression is a variable selection/shrinkage procedure for high-dimensional data. It is also an algorithm for efficiently finding all knots in the solution path for this regression procedure, as well as for lasso (L1-regularized) linear regression. Fitting the entire solution path is useful for selecting the optimal value of the shrinkage parameter λ for a given dataset, and for the lasso covariance test, which provides the significance of each variable addition along the lasso path.


LARS solution paths are provided by the lars function:

lars(X, y; method=:lasso, intercept=true, standardize=true, lambda2=0.0,
     use_gram=true, maxiter=typemax(Int), lambda_min=0.0, verbose=false)

X is the design matrix and y is the dependent variable. The optional parameters are:

method - either :lasso or :lars.

intercept - whether to fit an intercept in the model. The intercept is always unpenalized.

standardize - whether to standardize the predictor matrix. In contrast to linear regression, this affects the algorithm's results. The returned coefficients are always unstandardized.

lambda2 - the elastic net ridge penalty. Zero for pure lasso. Note that the returned coefficients are the "naive" elastic net coefficients. They can be adjusted as recommended by Zou and Hastie (2005) by scaling by 1 + lambda2.

use_gram - whether to use a precomputed Gram matrix in computation.

maxiter - maximum number of iterations of the algorithm. If this is exceeded, an incomplete path is returned.

lambda_min - the value of λ at which the algorithm should stop.

verbose - if true, prints information at each step.

The covtest function computes the lasso covariance test based on a LARS path:

covtest(path, X, y; errorvar)

path is the output of the LARS function above, and X and y are the independent and dependent variables used in fitting the path. If specified, errorvar is the variance of the error. If not specified, the error variance is computed based on the least squares fit of the full model.


The output of covtest has minor discrepancies with that of the covTest package. This is because the covTest package does not take into account the intercept in the least squares model fit when computing the error variance, which I believe is incorrect. I have emailed the authors but have yet to receive a response.


scikit-learn Performance Comparison

LARS.jl is substantially faster than scikit-learn for cases where the number of samples exceeds the number of features, particularly when using a Gram matrix. For cases where the number of features greatly exceeds the number of samples, scikit-learn is still occasionally faster. I am still tracking down the cause.

See also

GLMNet fits the lasso solution path using coordinate descent and supports fitting L1-regularized generalized linear models.


This package is written and maintained by Simon Kornblith.

The lars function is derived from code from scikit-learn.

Author: Simonster
Source Code: 
License: View license

#julia #test 

Royce Reinger


A Capybara Driver for Headless WebKit to Test JavaScript Web Apps


Development has been suspended on this project because QtWebKit was deprecated in favor of QtWebEngine, which is not a suitable replacement for our purposes.

We instead recommend using the Selenium or Apparition drivers.

Qt Dependency and Installation Issues

capybara-webkit depends on a WebKit implementation from Qt, a cross-platform development toolkit. You'll need to download the Qt libraries to build and install the gem. You can find instructions for downloading and installing Qt on the capybara-webkit wiki. capybara-webkit requires Qt version 4.8 or greater.


If you're like us, you'll be using capybara-webkit on CI.

On Linux platforms, capybara-webkit requires an X server to run, although it doesn't create any visible windows. Xvfb works fine for this. You can set up Xvfb yourself and set a DISPLAY variable, try out the headless gem, or use the xvfb-run utility as follows:

xvfb-run -a bundle exec rspec

This automatically sets up a virtual X server on a free server number.


Add the capybara-webkit gem to your Gemfile:

gem "capybara-webkit"

Set your Capybara Javascript driver to webkit:

Capybara.javascript_driver = :webkit

In cucumber, tag scenarios with @javascript to run them using a headless WebKit browser.

In RSpec, use the :js => true flag. See the capybara documentation for more information about using capybara with RSpec.

Take note of the transactional fixtures section of the capybara README.

If you're using capybara-webkit with Sinatra, don't forget to set =


You can configure global options using Capybara::Webkit.configure:

Capybara::Webkit.configure do |config|
  # Enable debug mode. Prints a log of everything the driver is doing.
  config.debug = true

  # By default, requests to outside domains (anything besides localhost) will
  # result in a warning. Several methods allow you to change this behavior.

  # Silently return an empty 200 response for any requests to unknown URLs.
  config.block_unknown_urls

  # Allow pages to make requests to any URL without issuing a warning.
  config.allow_unknown_urls

  # Allow a specific domain without issuing a warning.
  config.allow_url("example.com")

  # Allow a specific URL and path without issuing a warning.
  config.allow_url("example.com/some/path")

  # Wildcards are allowed in URL expressions.
  config.allow_url("*.example.com")

  # Silently return an empty 200 response for any requests to the given URL.
  config.block_url("example.com")

  # Timeout if requests take longer than 5 seconds
  config.timeout = 5

  # Don't raise errors when SSL certificates can't be validated
  config.ignore_ssl_errors

  # Don't load images
  config.skip_image_loading

  # Use a proxy
  config.use_proxy(
    host: "",
    port: 1234,
    user: "proxy",
    pass: "secret"
  )

  # Raise JavaScript errors as exceptions
  config.raise_javascript_errors = true
end

These options will take effect for all future sessions and only need to be set once. It's recommended that you configure these in your spec_helper.rb or test_helper.rb rather than a before or setup block.

Offline Application Cache

The offline application cache needs a directory to write cached files to. capybara-webkit checks whether the working directory contains a tmp directory; if it exists, the offline application cache is enabled.

Non-Standard Driver Methods

capybara-webkit supports a few methods that are not part of the standard capybara API. You can access these by calling driver on the capybara session. When using the DSL, that will look like page.driver.method_name.

console_messages: returns an array of messages printed using console.log

// In JavaScript:
console.log("hello")

# In Ruby:
page.driver.console_messages
=> [{:source=>"", :line_number=>1, :message=>"hello"}]

error_messages: returns an array of Javascript errors that occurred

page.driver.error_messages
=> [{:source=>"", :line_number=>1, :message=>"SyntaxError: Parse error"}]

cookies: allows read-only access of cookies for the current session

=> "abc"

header: set the given HTTP header for subsequent requests

page.driver.header 'Referer', ''

Author: Thoughtbot
Source Code: 
License: MIT license

#ruby #test #javascript 

Rupert Beatty


Laravel-api-tester: Test Your Routes without Hassle

Laravel Api Tester



Require this package with composer:

composer require asvae/laravel-api-tester

After updating composer, add the ServiceProvider to the providers array in config/app.php:

Asvae\ApiTester\ServiceProvider::class,

That's it. Go to [your site]/api-tester and start testing routes. It works for Laravel 5.1+.


By default, the package is bound to the APP_DEBUG value in your .env file. But you can easily override it. Just publish the config:

php artisan vendor:publish --provider="Asvae\ApiTester\ServiceProvider"

And edit config/api-tester.php as you please.

Live demo

Try it out:


Those are short and easy to read. Take a look.


  • Display routes for your application.
  • Prepare and save requests.
  • Collaborate with your team using Firebase.
  • Live search for everything.
  • Filter out routes in config.
  • CSRF token is handled for you.
  • Fill request in JSON editor.
  • Preview response depending on type (html or json).
  • Clean and intuitive interface.
  • Lightweight, with no dependencies (except Laravel).

Powered By


Don't hesitate to raise an issue if something doesn't work or you have a feature request. You're welcome to.


Check badges on the top for details.

Author: Asvae
Source Code: 
License: MIT license

#laravel #api #test 

Rupert Beatty


Laravel-mail-preview: A Mail Driver to Quickly Preview Mail

A mail driver to quickly preview mail   

This package can display a small overlay whenever a mail is sent. The overlay contains a link to the mail that was just sent.


This can be handy when testing out emails in a local environment.

Support us

We invest a lot of resources into creating best in class open source packages. You can support us by buying one of our paid products.

We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.


You can install the package via composer:

composer require spatie/laravel-mail-preview

Configuring the mail transport

This package contains a mail transport called preview. We recommend using this transport only in non-production environments. To use the preview transport, change mailers.smtp.transport to preview in your config/mail.php file:

// in config/mail.php

'mailers' => [
    'smtp' => [
        'transport' => 'preview',
        // ...
    ],
    // ...
],

Registering the preview middleware route

The package can display a link to sent mails whenever they are sent. To use this feature, you must add the Spatie\MailPreview\Http\Middleware\AddMailPreviewOverlayToResponse middleware to the web middleware group in your kernel.

// in app/Http/Kernel.php

protected $middlewareGroups = [
    'web' => [
        // other middleware
        \Spatie\MailPreview\Http\Middleware\AddMailPreviewOverlayToResponse::class,
    ],
    // ...
];

You must also add the mailPreview route macro to your routes file. Typically, the routes file is located at routes/web.php.

// in routes/web.php

Route::mailPreview();

This will register a route to display sent mails at /spatie-mail-preview. To customize the URL, pass the URL you want to the macro.


Publishing the config file

Optionally, you can publish the config file with:

php artisan vendor:publish --provider="Spatie\MailPreview\MailPreviewServiceProvider" --tag="mail-preview-config"

This is the content of the config file that will be published at config/mail-preview.php:

return [
    /*
     * By default, the overlay will only be shown and mail will only be stored
     * when the application is in debug mode.
     */
    'enabled' => env('APP_DEBUG', false),

    /*
     * All mails will be stored in the given directory.
     */
    'storage_path' => storage_path('email-previews'),

    /*
     * This option determines how long generated preview files will be kept.
     */
    'maximum_lifetime_in_seconds' => 60,

    /*
     * When enabled, a link to the mail will be added to the response
     * every time a mail is sent.
     */
    'show_link_to_preview' => true,

    /*
     * Determines how long the preview pop up should remain visible.
     * Set this to `false` if the popup should stay visible.
     */
    'popup_timeout_in_seconds' => 8,
];

Publishing the views

Optionally, you can publish the views that render the preview overlay and the mail itself.

php artisan vendor:publish --provider="Spatie\MailPreview\MailPreviewServiceProvider" --tag="mail-preview-views"

You can modify the views that will be published at resources/views/vendor/mail-preview to your liking.


Every time an email is sent, an .html and an .eml file are saved in the directory specified by the storage_path key of the mail-preview config file. The name includes the first recipient and the subject:


You can open the .html file in a web browser. Open the .eml file in your default email client to get a realistic view of the final output.

Preview in a web browser

When you open the .html file in a web browser you'll be able to see how your email will look.

At the beginning of the generated file you'll find an HTML comment with all the message info:

From:{"":"Acme HQ"},
to:{"":"Jack Black"},
cc:[{"":"Acme Finance"}, {"":"Acme Management"}],
subject:Invoice #000234


Whenever a mail is stored on disk, the Spatie\MailPreview\Events\MailStoredEvent will be fired. It has three public properties:

  • message: an instance of Swift_Mime_SimpleMessage
  • pathToHtmlVersion: the path to the html version of the sent mail
  • pathToEmlVersion: the path to the eml version of the sent mail

Making assertions against sent mails

Currently, with Laravel's Mail::fake you cannot make any assertions against the content of a mail, as the fake will not render the mail.

The SentMails facade provided by this package does allow you to make assertions against the content.

This allows you to make assertions on the content of a mail, without having the mailable in scope.

// in a test


Spatie\MailPreview\Facades\SentMails::assertLastContains('something in your mail');

Let's explain the other available assertion methods using this mailable as an example.

namespace App\Mail;

use Illuminate\Mail\Mailable;

class MyNewSongMailable extends Mailable
{
    public function build()
    {
        return $this
            ->subject('Here comes the sun')
            ->html("It's been a long cold lonely winter");
    }
}

In your code you can send that mailable with:

Mail::send(new MyNewSongMailable());

In your tests you can assert that the mail was sent using the assertSent function. You should pass a callable to assertSent, which will receive an instance of SentMail. Each sent mail will be passed to the callable. If the callable returns true, the assertion passes.

use Spatie\MailPreview\Facades\SentMails;
use \Spatie\MailPreview\SentMails\SentMail;

SentMails::assertSent(fn (SentMail $mail) => $mail->bodyContains('winter')); // will pass
SentMails::assertSent(fn (SentMail $mail) => $mail->bodyContains('spring')); // will not pass

You can use as many assertion methods on the SentMail as you like.

SentMails::assertSent(function (SentMail $mail) {
    return
        $mail->subjectContains('sun') &&
        $mail->bodyContains('winter');
});

The Spatie\MailPreview\Facades\SentMails facade has the following assertion methods:

  • assertCount(int $expectedCount): assert how many mails were sent
  • assertLastContains(string $expectedSubstring): assert that the body of the last sent mail contains a given substring
  • assertSent($findMailCallable, int $expectedCount = 1): explained above
  • assertTimesSent(int $expectedCount, Closure $findMail)
  • assertNotSent(Closure $findMail)

Additionally, the Spatie\MailPreview\Facades\SentMails facade has these methods:

  • all: returns an array of sent mails. Each item will be an instance of SentMail
  • count(): returns the number of mails sent
  • last: returns an instance of SentMail for the last sent mail. If no mail was sent, null will be returned.
  • lastContains: returns true if the body of the last sent mail contains the given substring
  • timesSent($findMailCallable): returns the number of sent mails that pass the given callable

The SentMail class provides these assertions:

  • assertSubjectContains($expectedSubstring)
  • assertFrom($expectedAddress)
  • assertTo($expectedAddress)
  • assertCc($expectedAddress)
  • assertBcc($expectedAddress)
  • assertContains($substring): will pass if the body of the mail contains the substring

Additionally, SentMail contains these methods:

  • subject(): returns the subject of a mail
  • to(): returns all to recipients as an array
  • cc(): returns all cc recipients as an array
  • bcc(): returns all bcc recipients as an array
  • body(): returns the body of a mail
  • subjectContains($expectedSubstring): returns a boolean
  • hasFrom($expectedAddress): returns a boolean
  • hasTo($expectedAddress): returns a boolean
  • hasCc($expectedAddress): returns a boolean
  • hasBcc($expectedAddress): returns a boolean


Please see CHANGELOG for more information on what has changed recently.


Please see UPGRADING for what to do to switch over from themsaid/laravel-mail-preview, and how to upgrade to newer major versions.


Please see CONTRIBUTING for details.

Security Vulnerabilities

Please review our security policy on how to report security vulnerabilities.


The initial version of this package was created by Mohamed Said, who graciously entrusted this package to us at Spatie.

Author: Spatie
Source Code: 
License: MIT license

#laravel #test 

Reid Rohan


Ltest: Test Function for Setting Up Leveldb Tests


A test function that:

  • Sets up a fresh temporary levelup instance.
  • Calls back with t, db and createReadStream.
  • Closes db and removes files when a test ends via t.end().
  • Adds tests to make sure db is opened, closed and removed properly.
  • Supports multiple levelup backends via level-test, which also has built in support for MemDOWN. Default is leveldown.
  • Supports any test framework that has a test function and t.end and t.ok methods.

Extracted from the test code in level-ttl and made more generic.


$ npm install ltest --save


var tape = require('tape')
var test = require('ltest')(tape)

test('put and stream', function (t, db, createReadStream) {
  db.put('foo', 'bar', function (err) {
    t.ok(!err, 'no put error')
    var count = 0
    createReadStream()
      .on('data', function (data) {
        t.equal(data.key, 'foo')
        t.equal(data.value, 'bar')
        count++
      })
      .on('end', function () {
        t.equal(count, 1)
        t.end() // <-- will close the db and delete files
      })
  })
})

TAP version 13
# put and stream
ok 1 no error on open()
ok 2 valid db object
ok 3 no put error
ok 4 should be equal
ok 5 should be equal
ok 6 should be equal
ok 7 no error on close()
ok 8 db removed

# tests 8
# pass  8

# ok


ltest([options, ]testFn)

Returns a test function of the form function (desc[, opts], cb), where desc is the test description, opts is an optional options object passed to the underlying db, and cb is a callback of the form function (t, db, createReadStream).

The options object is optional and is passed on to levelup and to level-test. Use it to define things like 'keyEncoding' or other levelup settings.

Set options.mem to true if you want an in-memory db.

testFn is the test function that should be used. Use any framework you like, as long as it's a function and supports the t.end and t.ok methods.

var ltest = require('ltest')(require('tape'))


var ltest = require('ltest')(require('tap').test)

Author: ralphtheninja
Source Code: 
License: MIT

#javascript #leveldb #test #node 

Reid Rohan


Level-test: inject Temporary & Isolated Level Stores Into Your Tests


Inject temporary and isolated level stores (leveldown, level-js, memdown or custom) into your tests.

If you are upgrading: please see


Create a fresh db, without referring to any file system or DOM specifics, so that the same test can be used on the server or in the browser! Use whatever test framework you like.

const level = require('level-test')()
const db = level({ valueEncoding: 'json' })

In node it defaults to leveldown for storage, using a unique temporary directory. In the browser it defaults to level-js.

No database name is needed since level-test generates unique random names. For disk-based systems it uses tempy and in the browser it uses uuid/v4.

const level = require('level-test')()
const db = level()

In either environment use of memdown can be forced with options.mem:

const level = require('level-test')({ mem: true })
const db = level({ valueEncoding: 'json' })

Or use any abstract-leveldown compliant store! In this case level-test assumes the storage is on disk and will thus create a unique temporary directory, unless you pass mem: true.

const rocksdb = require('rocksdb')
const level = require('level-test')(rocksdb)
const db = level({ valueEncoding: 'json' })


ctor = levelTest([store][, options])

Returns a function ctor that creates preconfigured levelup instances with temporary storage. The store if provided must be a function and abstract-leveldown compliant. Options:

  • mem: use memdown as store (or assume that store is memdown), default false.
  • Any other option will be merged into ctor options, the latter taking precedence.

These are equal:

const db1 = require('level-test')({ valueEncoding: 'json' })()
const db2 = require('level-test')()({ valueEncoding: 'json' })

db = ctor([options][, callback])

Returns a levelup instance via level-packager which wraps the underlying store with encoding-down. In short: the db is functionally equivalent to level. You get deferred open, encodings, Promise support, readable streams and more!

Options are passed to levelup (which in turn passes them on to the store when opened) as well as encoding-down.

Please refer to the levelup documentation for usage of the optional callback.


Level/level-test is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Author: Level
Source Code: 
License: MIT license

#javascript #node #test 

Thierry Perret


What Are Data Nesting and Unnesting?

The data lifecycle, the path data takes from creation to destruction and everything in between, has expanded because of our reliance on data for all business activities.

A piece of information that used to be used once and then destroyed is now kept for years, since everything is connected and every piece of data is a piece of a puzzle that builds an immense picture.

Data lifecycle management is becoming more and more important because it helps us handle data properly and cope with the growing volume of data. This kind of data processing can be carried out using on-premises servers or a cloud data warehouse.

Data can arrive, or be constructed to meet functional needs, in any form. Nested data is one example. When information is arranged in layers, or when items contain other objects of a similar nature, we speak of nesting. It almost hints at recursive or self-similar structures.

What are data nesting and unnesting?

Nested data refers to data structures that, at times, make it more logical to record, collect, access, and accumulate data. It is not a very difficult topic, and layered data often simplifies the management and representation of data. In this session, we will go through some of the most typical nested-data situations.

You can find nested-data scenarios in your program in very simple data forms, such as short lists, or you can encounter more complex or larger instances in online and computing applications such as APIs, software, databases, games, and so on.

Unnesting is the exact opposite of nesting. Nesting creates a list-column of data frames; unnesting flattens it back into regular columns.

Example of nested data in Python

Thanks to its simplicity, Python makes it incredibly easy to work with the many built-in data structures it provides, such as lists, sets, tuples, and dictionaries. Let's see how to build complex data-handling types by layering these data structures.
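As a small sketch of such layering (all names here are made up for illustration), Python's built-in structures can be combined freely:

```python
# Layering Python's built-in structures (dict, tuple, list, set)
# into a single nested object. All names are illustrative.
inventory = {
    "store": "downtown",                # plain value
    "location": (40.7, -74.0),          # tuple nested in a dict
    "products": [                       # list of dicts
        {"name": "camera", "stock": 2, "tags": {"electronics", "sale"}},
        {"name": "phone", "stock": 1, "tags": {"electronics"}},
    ],
}

# Each layer is reached with that layer's own access syntax.
print(inventory["products"][0]["name"])  # -> camera
print(inventory["location"][1])          # -> -74.0
```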

Accessing nested data

Nested data can include several data-structure configurations, such as dictionaries inside dictionaries or lists inside lists. To access nested data items, you have to assess the situation and proceed according to the data types and structures of those components. When working with large amounts of nested data, trial and error is frequently used as an approach.

Let's start with a few simple examples. We will try to retrieve one of the inner lists from an outer list.

nested_lst1 = [[1, 2], ["Venus", "Mars"], [True, False]]

print(nested_lst1[2])

[True, False]

Dictionary in a list

Since it can be very convenient to store data in this layout, we frequently come across dictionaries inside lists, or lists inside dictionaries.

Using the list index, we can easily retrieve a dictionary.

nested_lst2 = [{"camera": 2, "phone": 1}, {"car": 1, "van": 0}]

print(nested_lst2[0])

{'camera': 2, 'phone': 1}
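Deeper nestings simply chain these access patterns, one layer at a time; `dict.get` is handy when a key might be missing. A quick sketch with made-up data:

```python
# A list of dictionaries whose values are themselves dictionaries.
nested = [{"camera": {"stock": 2, "price": 199}}, {"van": {"stock": 0}}]

# Chain a list index with dictionary keys to reach an inner value.
print(nested[0]["camera"]["stock"])  # -> 2

# dict.get avoids a KeyError when a key might be absent.
print(nested[1]["van"].get("price", "n/a"))  # -> n/a
```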

Example of nesting in SQL

As long as the operation can be performed in parallel on the back end, working with SQL on nested data can perform quite well. However, if your data comes in as flat tables, such as CSV, it first has to be transformed.

In a flat-table environment, you typically have to JOIN two tables when needed:

Of course, we would never keep a table like this, since some fields are simply redundant, take up storage space, and cause unnecessary expense.

However, if we used author tables and arranged the books into an array, the author fields would not be repeated. ARRAY_AGG(), an aggregate function you can use with GROUP BY to put data into an array, makes this easy in BigQuery:
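The BigQuery query itself isn't reproduced here, but the idea behind ARRAY_AGG() with GROUP BY (collecting one column into an array per group) can be sketched in plain Python; the author/book data is made up:

```python
from collections import defaultdict

# A flat, repetitive table: the author field repeats for every book.
flat = [
    ("Austen", "Emma"),
    ("Austen", "Persuasion"),
    ("Orwell", "1984"),
]

# Rough equivalent of: SELECT author, ARRAY_AGG(book) ... GROUP BY author
nested = defaultdict(list)
for author, book in flat:
    nested[author].append(book)

print(dict(nested))
# -> {'Austen': ['Emma', 'Persuasion'], 'Orwell': ['1984']}
```

Each author now appears once, with the books gathered into an array-like list.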

Saving this result table would help you save money by storing the data efficiently, and since we would no longer need to join it, we could drop some of the identifiers.

Since all the information sits in a single table, queries also become easier. Tables A and B can be selected and grouped without first having to imagine what they would look like once joined. You can select and group right away.

Example of unnesting in SQL

The data becomes flat again when it is unnested into standard columns.
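As a plain-Python sketch of what unnesting does (the data is made up), each nested array is expanded back into one ordinary row per (author, book) pair:

```python
# Nested rows: each author carries an array of books.
nested = {"Austen": ["Emma", "Persuasion"], "Orwell": ["1984"]}

# Unnesting (what SQL's UNNEST does) flattens the arrays back
# into one regular row per (author, book) pair.
flat = [(author, book) for author, books in nested.items() for book in books]

print(flat)
# -> [('Austen', 'Emma'), ('Austen', 'Persuasion'), ('Orwell', '1984')]
```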


To put it simply, nesting and unnesting data are the inverse of flattening and unflattening data. You need a deeper understanding of the nesting in order to nest data coming from a "flattened" form, such as a relational model. Whenever you unnest, information is lost.

In this article, we talked about the data lifecycle and data management within an organization. After that, we discussed nesting and unnesting in detail, looking at how the two operations are used to improve our ability to access data as needed, in the simplest possible way. We then went through a few examples of the concepts discussed.

Link:

#test #datanesting

Python Library


Synapse: Matrix Homeserver Written in Python 3/Twisted


Matrix is an ambitious new ecosystem for open federated Instant Messaging and VoIP. The basics you need to know to get up and running are:

  • Everything in Matrix happens in a room. Rooms are distributed and do not exist on any single server. Rooms can be located using convenience aliases such as #test:localhost:8448.
  • Matrix user IDs take the form @localpart:domain (although in the future you will normally refer to yourself and others using a third party identifier (3PID): email address, phone number, etc. rather than manipulating Matrix user IDs)

The overall architecture is:

client <----> homeserver <=====================> homeserver <----> client

The official support room for Matrix can be accessed by any Matrix client, or via its IRC bridge.

Synapse is currently in rapid development, but as of version 0.5 we believe it is sufficiently stable to be run as an internet-facing service for real usage!

About Matrix

Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard, which handle:

  • Creating and managing fully distributed chat rooms with no single points of control or failure
  • Eventually-consistent cryptographically secure synchronisation of room state across a global open network of federated servers and services
  • Sending and receiving extensible messages in a room with (optional) end-to-end encryption
  • Inviting, joining, leaving, kicking, banning room members
  • Managing user accounts (registration, login, logout)
  • Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers, Facebook accounts to authenticate, identify and discover users on Matrix.
  • Placing 1:1 VoIP and Video calls

These APIs are intended to be implemented on a wide range of servers, services and clients, letting developers build messaging and VoIP functionality on top of the entirely open Matrix ecosystem rather than using closed or proprietary solutions. The hope is for Matrix to act as the building blocks for a new generation of fully open and interoperable messaging and VoIP apps for the internet.

Synapse is a Matrix "homeserver" implementation developed by the core team, written in Python 3/Twisted.

In Matrix, every user runs one or more Matrix clients, which connect through to a Matrix homeserver. The homeserver stores all their personal chat history and user account information - much as a mail client connects through to an IMAP/SMTP server. Just like email, you can either run your own Matrix homeserver and control and own your own communications and history, or use one hosted by someone else - there is no single point of control or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.

We'd like to invite you to join the Matrix community, run a homeserver, take a look at the Matrix spec, and experiment with the APIs and Client SDKs.

Thanks for using Matrix!


For support installing or managing Synapse, please join the support room (registering an account first if necessary) and ask questions there. We do not use GitHub issues for support requests, only for bug reports and feature requests.

Synapse's documentation is nicely rendered on GitHub Pages, with its source available in docs.

Synapse Installation

Connecting to Synapse from a client

The easiest way to try out your new Synapse installation is by connecting to it from a web client.

Unless you are running a test instance of Synapse on your local machine, in general, you will need to enable TLS support before you can successfully connect from a client: see TLS certificates.

An easy way to get started is to log in or register via Element. You will need to change the server you are logging into and specify a Homeserver URL of https://<server_name>:8448 (or just https://<server_name> if you are using a reverse proxy). If you prefer to use another client, refer to our client breakdown.

If all goes well you should at least be able to log in, create a room, and start sending messages.

Registering a new user from a client

By default, registration of new users via Matrix clients is disabled. To enable it, specify enable_registration: true in homeserver.yaml. (It is then recommended to also set up CAPTCHA - see docs/.)

Once enable_registration is set to true, it is possible to register a user via a Matrix client.

Your new user name will be formed partly from the server_name, and partly from a localpart you specify when you create the account. Your name will take the form of:

@localpart:my.domain.name

(pronounced "at localpart on my dot domain dot name").

As when logging in, you will need to specify a "Custom server". Specify your desired localpart in the 'User name' box.
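Since user IDs follow this fixed shape, splitting one apart is straightforward; a minimal Python sketch (not Synapse code) of separating a Matrix user ID into its localpart and server name:

```python
def split_mxid(mxid: str) -> tuple[str, str]:
    """Split '@localpart:server.name' into (localpart, server_name)."""
    if not mxid.startswith("@") or ":" not in mxid:
        raise ValueError(f"not a Matrix user ID: {mxid!r}")
    # Split at the first colon only: the server name may itself
    # contain a colon when an explicit port is given.
    localpart, _, server_name = mxid[1:].partition(":")
    return localpart, server_name

print(split_mxid("@alice:my.domain.name"))  # ('alice', 'my.domain.name')
```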

Security note

Matrix serves raw, user-supplied data in some APIs -- specifically the content repository endpoints.

Whilst we make a reasonable effort to mitigate against XSS attacks (for instance, by using CSP), a Matrix homeserver should not be hosted on a domain hosting other web applications. This especially applies to sharing the domain with Matrix web clients and other sensitive applications like webmail.

Ideally, the homeserver should not simply be on a different subdomain, but on a completely different registered domain (also known as top-level site or eTLD+1). This is because some attacks are still possible as long as the two applications share the same registered domain.

To illustrate this with an example: if your Element Web or other sensitive web application is hosted on one domain, you should ideally host Synapse on a completely different registered domain. Some amount of protection is offered by hosting it on a different subdomain of the same registered domain instead, so this is also acceptable in some scenarios. However, you should not host your Synapse on exactly the same domain as the sensitive application.

Note that all of the above refers exclusively to the domain used in Synapse's public_baseurl setting. In particular, it has no bearing on the domain mentioned in MXIDs hosted on that server.

Following this advice ensures that even if an XSS is found in Synapse, the impact to other applications will be minimal.

Upgrading an existing Synapse

The instructions for upgrading synapse are in the upgrade notes. Please check these instructions as upgrading may require extra steps for some versions of synapse.

Using a reverse proxy with Synapse

It is recommended to put a reverse proxy such as nginx, Apache, Caddy, HAProxy or relayd in front of Synapse. One advantage of doing so is that it means that you can expose the default https port (443) to Matrix clients without needing to run Synapse with root privileges.

For information on configuring one, see docs/

Identity Servers

Identity servers have the job of mapping email addresses and other 3rd Party IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs before creating that mapping.

They are not where accounts or credentials are stored - these live on home servers. Identity Servers are just for mapping 3rd party IDs to matrix IDs.

This process is very security-sensitive, as there is obvious risk of spam if it is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer term, we hope to create a decentralised system to manage it (matrix-doc #712), but in the meantime, the role of managing trusted identity in the Matrix ecosystem is farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix Identity Servers' such as Sydent, whose role is purely to authenticate and track 3PID logins and publish end-user public keys.

You can host your own copy of Sydent, but this will prevent you reaching other users in the Matrix ecosystem via their email address, and prevent them finding you. We therefore recommend that you use one of the centralised identity servers for now.

To reiterate: the Identity server will only be used if you choose to associate an email address with your account, or send an invite to another user via their email address.

Password reset

Users can reset their password through their client. Alternatively, a server admin can reset a user's password using the admin API or by directly editing the database as shown below.

First calculate the hash of the new password:

$ ~/synapse/env/bin/hash_password
Confirm password:

Then update the users table in the database:

UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    WHERE name='';

Synapse Development

The best place to get started is our guide for contributors. This is part of our larger documentation, which includes information for synapse developers as well as synapse administrators.

Developers might be particularly interested in:

Alongside all that, join our developer community on Matrix, featuring real humans!

Quick start

Before setting up a development environment for synapse, make sure you have the system dependencies (such as the python header files) installed - see Platform-specific prerequisites.

To check out a synapse for development, clone the git repo into a working directory of your choice:

git clone
cd synapse

Synapse has a number of external dependencies. We maintain a fixed development environment using Poetry. First, install poetry. We recommend:

pip install --user pipx
pipx install poetry

as described here. (See poetry's installation docs for other installation methods.) Then ask poetry to create a virtual environment from the project and install Synapse's dependencies:

poetry install --extras "all test"

This will run a process of downloading and installing all the needed dependencies into a virtual env.

We recommend using the demo which starts 3 federated instances running on ports 8080 - 8082:

poetry run ./demo/

(to stop, you can use poetry run ./demo/

See the demo documentation for more information.

If you just want to start a single instance of the app and run it directly:

# Create the homeserver.yaml config once
poetry run synapse_homeserver \
  --server-name <server_name> \
  --config-path homeserver.yaml \
  --generate-config

# Start the app
poetry run synapse_homeserver --config-path homeserver.yaml

Running the unit tests

After getting up and running, you may wish to run Synapse's unit tests to check that everything is installed correctly:

poetry run trial tests

This should end with a 'PASSED' result (note that exact numbers will differ):

Ran 1337 tests in 716.064s

PASSED (skips=15, successes=1322)

For more tips on running the unit tests, like running a specific test or to see the logging output, see the CONTRIBUTING doc.

Running the Integration Tests

Synapse is accompanied by SyTest, a Matrix homeserver integration testing suite, which uses HTTP requests to access the API as a Matrix client would. It is able to run Synapse directly from the source tree, so installation of the server is not required.

Testing with SyTest is recommended for verifying that changes related to the Client-Server API are functioning correctly. See the SyTest installation instructions for details.

Platform dependencies

Synapse uses a number of platform dependencies such as Python and PostgreSQL, and aims to follow supported upstream versions. See the docs/ document for more details.


Need help? Join our community support room on Matrix.

Running out of File Handles

If synapse runs out of file handles, it typically fails badly - live-locking at 100% CPU, and/or failing to accept new TCP connections (blocking the connecting client). Matrix currently can legitimately use a lot of file handles, thanks to busy rooms containing hundreds of participating servers. The first time a server talks in a room it will try to connect simultaneously to all participating servers, which could exhaust the available file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow to respond. (We need to improve the routing algorithm used to be better than full mesh, but as of March 2019 this hasn't happened yet).

If you hit this failure mode, we recommend increasing the maximum number of open file handles to be at least 4096 (assuming a default of 1024 or 256). This is typically done by editing /etc/security/limits.conf.
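Before raising the limit, it can help to see what the running process actually received; a small Python sketch using the standard library's resource module (Unix-only; the 4096 threshold mirrors the advice above):

```python
import resource

# Query the current soft/hard limits on open file descriptors
# for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft} hard={hard}")

# A sanity check in the spirit of the recommendation above:
# warn if the soft limit looks too low for a busy homeserver.
if soft != resource.RLIM_INFINITY and soft < 4096:
    print("warning: soft limit below 4096; consider raising it in limits.conf")
```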

Separately, Synapse may leak file handles if inbound HTTP requests get stuck during processing - e.g. blocked behind a lock or talking to a remote server etc. This is best diagnosed by matching up the 'Received request' and 'Processed request' log lines and looking for any 'Processed request' lines which take more than a few seconds to execute. If you see this failure mode, please let us know so we can help debug it.

Help!! Synapse is slow and eats all my RAM/CPU!

First, ensure you are running the latest version of Synapse, using Python 3 with a PostgreSQL database.

Synapse's architecture is quite RAM hungry currently - we deliberately cache a lot of recent room data and metadata in RAM in order to speed up common requests. We'll improve this in the future, but for now the easiest way to reduce the RAM usage (at the risk of slowing things down) is to set the almost-undocumented SYNAPSE_CACHE_FACTOR environment variable. The default is 0.5, which can be decreased to reduce RAM usage in memory-constrained environments, or increased if performance starts to degrade.

However, degraded performance due to a low cache factor, common on machines with slow disks, often leads to explosions in memory use due to backlogged requests. In this case, reducing the cache factor will make things worse. Instead, try increasing it drastically. 2.0 is a good starting value.
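How such an environment variable is typically consumed can be sketched in Python (illustrative only, not Synapse's actual implementation):

```python
import os

DEFAULT_CACHE_FACTOR = 0.5  # the documented default

def cache_factor() -> float:
    # Read SYNAPSE_CACHE_FACTOR from the environment, falling back
    # to the default when it is unset.
    return float(os.environ.get("SYNAPSE_CACHE_FACTOR", DEFAULT_CACHE_FACTOR))

def scaled_cache_size(base_size: int) -> int:
    # Scale a per-cache base size by the global factor,
    # never dropping below one entry.
    return max(1, int(base_size * cache_factor()))

print(scaled_cache_size(10000))  # 5000 when the factor is the default 0.5
```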

Using libjemalloc can also yield a significant improvement in overall memory use, and especially in terms of giving back RAM to the OS. To use it, the library must simply be put in the LD_PRELOAD environment variable when launching Synapse. On Debian, this can be done by installing the libjemalloc1 package and adding this line to /etc/default/matrix-synapse:


This can make a significant difference on Python 2.7 - it's unclear how much of an improvement it provides on Python 3.x.

If you're encountering high CPU use by the Synapse process itself, you may be affected by a bug with presence tracking that leads to a massive excess of outgoing federation requests (see discussion). If metrics indicate that your server is also issuing far more outgoing federation requests than can be accounted for by your users' activity, this is a likely cause. The misbehavior can be worked around by setting the following in the Synapse config file:

presence:
    enabled: false

People can't accept room invitations from me

The typical failure mode here is that you send an invitation to someone to join a room or direct chat, but when they go to accept it, they get an error (typically along the lines of "Invalid signature"). They might see something like the following in their logs:

2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>

This is normally caused by a misconfiguration in your reverse-proxy. See docs/ and double-check that your settings are correct.

Download Details:
Author: matrix-org
Source Code:
License: Apache-2.0 license


Hermann Frami


Serverless Plugin Test Helper

Serverless plugin test helper

Feedback is appreciated! If you have an idea for how this plugin/library can be improved (or even just a complaint/criticism) then please open an issue.


Running tests on deployed services (vs locally mocked ones) is an important final step in a robust serverless deployment pipeline because it isn't possible to recreate all aspects of a final solution locally - concerns such as fine-grained resource access through IAM and scalability/performance characteristics of the system can only be assessed while the application is running on AWS. Running these tests on stage/branch-specific versions of the application (see serverless testing best practices below) is difficult to do given the dynamic nature of AWS resource naming. This library makes it easier to write post-deployment tests for applications and services written and deployed using the Serverless Framework by locally persisting dynamic AWS resource information such as endpoint URLs and exposing them to your tests via easily-imported helper functions.

Because unit tests with mocked AWS services are still an important part of a well-tested service (especially for fast developer feedback), this library also includes helper functions to simplify the creation of mock events for the various AWS Lambda integrations.

Installation and setup

Install and save the library to package.json as a dev dependency:

npm i --save-dev serverless-plugin-test-helper

yarn add serverless-plugin-test-helper -D

E2E testing support setup & configuration

Two parts of this library work together to support E2E testing of your deployed serverless apps:

  1. A Serverless Framework plugin which extends sls deploy to save a copy of the generated CloudFormation Stack Output locally - this will persist the dynamically-generated API Gateway endpoint, for example.
  2. A standard Node.js library which can be imported to access local stack output values in tests (or any other code you want to run post-deployment) - this will allow you to access the dynamically-generated API Gateway endpoint that the plugin saved.

To setup the plugin add the library to the serverless.yml plugins section:

plugins:
  - serverless-plugin-test-helper

By default the plugin will generate a file containing stack outputs at .serverless/stack-output/outputs.yml, which is where the library pulls values from. You can optionally specify an additional path for storing outputs by using the optional serverless.yml custom section with the testHelper key:

custom:
  testHelper: # The 'testHelper' key is used by the plugin to pull in the optional path value
    path: optional/path/for/another/outputs[ .yml | .yaml | .json ]

Using the library to retrieve stack outputs

Import the helper functions into your test files to retrieve values from deployed stack output:

import { getApiGatewayUrl, getDeploymentBucket, getOutput } from 'serverless-plugin-test-helper';

const URL = getApiGatewayUrl();
const BUCKET_NAME = getDeploymentBucket();
const DOCUMENT_STORAGE_BUCKET_NAME = getOutput('DocumentStorageBucket');
  • getApiGatewayUrl() returns the url of the deployed API Gateway service (if using http or httpApi as an event type in serverless.yml)
  • getDeploymentBucket() returns the name of the bucket Serverless Framework generates for uploading CloudFormation templates and zipped source code files as part of the sls deploy process
  • getOutput('output-key-from-stack-outputs') returns the value of the Cloudformation stack output with the specified key
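The mechanism is simple enough to sketch; a language-neutral Python illustration (the file path and output values are hypothetical, and this is not the library's code) of persisting stack outputs at deploy time and reading one back by key:

```python
import json
import tempfile
from pathlib import Path

# Pretend the deploy step persisted stack outputs to a JSON file
# (the library also supports .json output paths).
outputs = {
    "ServiceEndpoint": "https://abc123.execute-api.us-east-1.amazonaws.com/dev",
    "DocumentStorageBucket": "my-stack-document-storage",
}
path = Path(tempfile.mkdtemp()) / "outputs.json"
path.write_text(json.dumps(outputs))

def get_output(key: str) -> str:
    # Read the persisted outputs file and return the value for `key`,
    # mirroring what getOutput('...') does in a post-deployment test.
    data = json.loads(path.read_text())
    if key not in data:
        raise KeyError(f"no stack output named {key!r}")
    return data[key]

print(get_output("DocumentStorageBucket"))
```

Tests can then target the real, dynamically named resources without hard-coding them.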

To see what output values are available for reference you can check the generated .serverless/stack-output/outputs.yml file after a deployment. To make additional values available you can specify up to 60 CloudFormation Stack Outputs in serverless.yml using the resources > Outputs section:

resources:
  Outputs:
    # Generic example
    Output1: # This is the key that will be used in the generated outputs file
      Description: This is an optional description that will show up in the CloudFormation dashboard
      Value: { Ref: CloudFormationParameterOrResourceYouWishToExport }

    # Example referencing a custom S3 bucket used for file storage (defined under Resources section below)
    DocumentStorageBucket: # This is the key that will be used in the generated outputs file
      Description: Name of the S3 bucket used for document storage by this stack
      Value: { Ref: DocumentStorageBucket }

  Resources:
    DocumentStorageBucket:
      Type: AWS::S3::Bucket

See the AWS CloudFormation documentation on outputs for more information on stack outputs.

AWS event mocks

Import the helper functions and static objects into your test files to generate AWS event and method signature mocks with optional value overrides. Note that this portion of the library can be used without using the E2E testing module.

import {
  ApiGatewayEvent,
  ApiGatewayTokenAuthorizerEvent,
  DynamoDBStreamEvent,
  HttpApiEvent,
  SnsEvent
} from 'serverless-plugin-test-helper';
import { handler } from './lambda-being-tested';

// Setup events with optional value overrides
const event = new ApiGatewayEvent({ body: 'overridden body value' });
const event2 = new ApiGatewayTokenAuthorizerEvent();
const event3 = new DynamoDBStreamEvent();
const event4 = new HttpApiEvent();
const event5 = new SnsEvent();


// Invoke the handler functions with events
const result = await handler(event, context);
const result2 = await handler(event2, context);

// TODO write your tests on the results


There is one working example of how this library can be used in a simple 'hello world' serverless application:

  1. Plugin and event mocks in a TypeScript project with E2E and unit tests

Serverless testing best practices

Due to tight coupling with managed services and the difficulty in mocking those same services locally, end-to-end testing is incredibly important for deploying and running serverless applications with confidence. I believe that a good serverless deployment pipeline setup should include the following steps, in order:

For checkins/merges to default branch*

  1. Install project and dependencies
  2. Run unit tests
  3. Deploy to a static, non-production environment like staging (using --stage staging option in Serverless Framework)†
  4. Run e2e tests in the static, non-production environment†
  5. Optional: include a manual approval step if you want to gate production deploys
  6. Deploy to production environment (with --stage production)
  7. Run e2e tests in production

† Repeat steps 3 and 4 for however many static, non-production environments you have (development, staging, demo, etc.)

For checkins/merges to a feature branch*

  1. Install project and dependencies
  2. Run unit tests
  3. Deploy to a dynamic, non-production environment (with --stage <branch or username> option in Serverless Framework)
  4. Run e2e tests in the dynamic, non-production environment
  5. Automate the cleanup of stale ephemeral environments with a solution like Odin

* Note that these kinds of pipelines work best using trunk based development

Author: manwaring
Source Code: 
License: MIT license

#serverless #plugin #test 

Hunter Krajcik


Flame_test: A Package with Classes to Help Testing Apps using Flame

Flame test helpers

This package contains classes that help with testing applications using Flame, and it also helps with testing parts of Flame itself.


Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add flame_test

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  flame_test: ^1.5.0

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:flame_test/flame_test.dart';


import 'package:flame/extensions.dart';

void main() {}

class MyVectorChanger {
  Vector2 addOne(Vector2 vector) {
    return vector + Vector2.all(1.0);
  }
}

class MyDoubleChanger {
  double addOne(double number) {
    return number + 1.0;
  }
}

Author: Flame-engine
Source Code: 
License: MIT license

#flutter #dart #flame #test 

Garry Taylor


Playwright: A Framework for Web Testing and Automation

🎭 Playwright

Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API. Playwright is built to enable cross-browser web automation that is ever-green, capable, reliable and fast.

Headless execution is supported for all the browsers on all platforms. Check out system requirements for details.

Looking for Playwright for Python, .NET, or Java?


Playwright has its own test runner for end-to-end tests; we call it Playwright Test.

Using init command

The easiest way to get started with Playwright Test is to run the init command.

# Run from your project's root directory
npm init playwright@latest
# Or create a new project
npm init playwright@latest new-project

This will create a configuration file, optionally add examples, a GitHub Action workflow and a first test example.spec.ts. You can now jump directly to writing assertions section.


Add dependency and install browsers.

npm i -D @playwright/test
# install supported browsers
npx playwright install

You can optionally install only selected browsers, see install browsers for more details. Or you can install no browsers at all and use existing browser channels.


Resilient • No flaky tests

Auto-wait. Playwright waits for elements to be actionable prior to performing actions. It also has a rich set of introspection events. The combination of the two eliminates the need for artificial timeouts - the primary cause of flaky tests.

Web-first assertions. Playwright assertions are created specifically for the dynamic web. Checks are automatically retried until the necessary conditions are met.

Tracing. Configure test retry strategy, capture execution trace, videos, screenshots to eliminate flakes.

No trade-offs • No limits

Browsers run web content belonging to different origins in different processes. Playwright is aligned with the modern browser architecture and runs tests out-of-process. This makes Playwright free of the typical in-process test runner limitations.

Multiple everything. Test scenarios that span multiple tabs, multiple origins and multiple users. Create scenarios with different contexts for different users and run them against your server, all in one test.

Trusted events. Hover elements, interact with dynamic controls, produce trusted events. Playwright uses real browser input pipeline indistinguishable from the real user.

Test frames, pierce Shadow DOM. Playwright selectors pierce shadow DOM and allow entering frames seamlessly.

Full isolation • Fast execution

Browser contexts. Playwright creates a browser context for each test. Browser context is equivalent to a brand new browser profile. This delivers full test isolation with zero overhead. Creating a new browser context only takes a handful of milliseconds.

Log in once. Save the authentication state of the context and reuse it in all the tests. This bypasses repetitive log-in operations in each test, yet delivers full isolation of independent tests.

Powerful Tooling

Codegen. Generate tests by recording your actions. Save them into any language.

Playwright inspector. Inspect page, generate selectors, step through the test execution, see click points, explore execution logs.

Trace Viewer. Capture all the information to investigate the test failure. Playwright trace contains test execution screencast, live DOM snapshots, action explorer, test source and many more.

Looking for Playwright for TypeScript, JavaScript, Python, .NET, or Java?


To learn how to run these Playwright Test examples, check out our getting started docs.

Page screenshot

This code snippet navigates to a page and saves a screenshot.

import { test } from '@playwright/test';

test('Page Screenshot', async ({ page }) => {
  await page.goto('');
  await page.screenshot({ path: `example.png` });
});

Mobile and geolocation

This snippet emulates Mobile Safari on a device at a given geolocation, navigates to a page, performs an action, and takes a screenshot.

import { test, devices } from '@playwright/test';

test.use({
  ...devices['iPhone 13 Pro'],
  locale: 'en-US',
  geolocation: { longitude: 12.492507, latitude: 41.889938 },
  permissions: ['geolocation'],
});

test('Mobile and geolocation', async ({ page }) => {
  await page.goto('');
  await page.locator('text="Your location"').click();
  await page.waitForRequest(/.*preview\/pwa/);
  await page.screenshot({ path: 'colosseum-iphone.png' });
});

Evaluate in browser context

This code snippet navigates to a page and executes a script in the context of the page.

import { test } from '@playwright/test';

test('Evaluate in browser context', async ({ page }) => {
  await page.goto('');
  const dimensions = await page.evaluate(() => {
    return {
      width: document.documentElement.clientWidth,
      height: document.documentElement.clientHeight,
      deviceScaleFactor: window.devicePixelRatio
    };
  });
});

Intercept network requests

This code snippet sets up request routing for a page to log all network requests.

import { test } from '@playwright/test';

test('Intercept network requests', async ({ page }) => {
  // Log and continue all network requests
  await page.route('**', route => {
    console.log(route.request().url());
    route.continue();
  });
  await page.goto('');
});


Documentation | API reference

Download Details: 
Author: Microsoft
Source Code: 
License: Apache-2.0 license
#playwright #test #testing 

Mike Kozey


Test_cov_console: Flutter Console Coverage Test

Flutter Console Coverage Test

This small Dart tool is used to generate a Flutter coverage test report to the console.
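The % Lines column in the report is derived from lcov coverage data; a minimal Python sketch (illustrative only, not this tool's Dart implementation) of computing line coverage from a made-up lcov.info fragment:

```python
# DA:<line>,<hits> records from a made-up lcov.info fragment.
lcov_fragment = """\
SF:lib/src/print_cov.dart
DA:1,1
DA:2,0
DA:3,5
DA:4,1
end_of_record
"""

hit = total = 0
for line in lcov_fragment.splitlines():
    if line.startswith("DA:"):
        # Each DA record is one instrumented line; count it as
        # covered when its hit count is non-zero.
        _, hits = line[3:].split(",")
        total += 1
        hit += int(hits) > 0

percent_lines = 100.0 * hit / total
print(f"% Lines: {percent_lines:.2f}")  # 75.00 for this fragment
```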

How to install

Add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dev_dependencies:
  test_cov_console: ^0.2.2

How to run

Run the following command to make sure all Flutter libraries are up-to-date:

flutter pub get
Running "flutter pub get" in coverage...                            0.5s

Run the following command to generate the coverage data in the coverage directory:

flutter test --coverage
00:02 +1: All tests passed!

Run the tool to generate the report from the coverage data:

flutter pub run test_cov_console
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
 print_cov_constants.dart                    |    0.00 |    0.00 |    0.00 |    no unit testing|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |

Optional parameter

If not given a FILE, "coverage/" will be used.
-f, --file=<FILE>                      The target file to be reported
-e, --exclude=<STRING1,STRING2,...>    A list of contains string for files without unit testing
                                       to be excluded from report
-l, --line                             It will print Lines & Uncovered Lines only
                                       Branch & Functions coverage percentage will not be printed
-i, --ignore                           It will not print any file without unit testing
-m, --multi                            Report from multiple files
-c, --csv                              Output to CSV file
-o, --output=<CSV-FILE>                Full path of output CSV file
                                       If not given, "coverage/test_cov_console.csv" will be used
-t, --total                            Print only the total coverage
                                       Note: it will ignore all other option (if any), except -m
-p, --pass=<MINIMUM>                   Print only whether the total coverage passed the MINIMUM value or not
                                       If the value >= MINIMUM, it will print PASSED, otherwise FAILED
                                       Note: it will ignore all other option (if any), except -m
-h, --help                             Show this help

example run the tool with parameters

flutter pub run test_cov_console --file=coverage/ --exclude=_constants,_mock
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |

Report for multiple files (-m, --multi)

It supports running over multiple coverage files with the following directory structures:
1. No root module
2. With root module
Run test_cov_console in the <root> directory; the report is grouped by module. Here is
sample output for the "with root module" structure:
flutter pub run test_cov_console --file=coverage/ --exclude=_constants,_mock --multi
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
File - module_a -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
File - module_b -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
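
A minimal sketch of the "with root module" layout that --multi expects. The module names and the lcov.info file name are assumptions here (lcov.info is what flutter test --coverage produces in each module's coverage/ directory):

```shell
# Illustrative "with root module" layout; module names are made up.
# Each module carries its own coverage data under coverage/.
mkdir -p root/module_a/coverage root/module_b/coverage
: > root/module_a/coverage/lcov.info   # normally produced by each module's test run
: > root/module_b/coverage/lcov.info
find root -name 'lcov.info' | sort
# The tool would then be run from root/, e.g.:
#   cd root && flutter pub run test_cov_console --multi
```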

Output to CSV file (-c, --csv, -o, --output)

flutter pub run test_cov_console -c --output=coverage/test_coverage.csv

Sample CSV output file:
File,% Branch,% Funcs,% Lines,Uncovered Line #s
test_cov_console.dart,0.00,0.00,0.00,no unit testing
print_cov_constants.dart,0.00,0.00,0.00,no unit testing
All files with unit testing,100.00,100.00,86.07,""
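
If you consume the CSV in CI, the total row can be parsed to gate a build. A sketch assuming the CSV format shown above; the 80.0 threshold and the file contents are illustrative:

```shell
# Gate a build on total line coverage taken from the CSV report.
# The file written here mirrors the sample output above; the 80.0
# threshold is an arbitrary example value.
cat > test_cov_console.csv <<'EOF'
File,% Branch,% Funcs,% Lines,Uncovered Line #s
test_cov_console.dart,0.00,0.00,0.00,no unit testing
print_cov_constants.dart,0.00,0.00,0.00,no unit testing
All files with unit testing,100.00,100.00,86.07,""
EOF
# Field 4 of the totals row is "% Lines"; compare it to the threshold.
awk -F, '/^All files with unit testing/ {
  print ($4 >= 80.0 ? "PASSED" : "FAILED")
}' test_cov_console.csv
```

With the sample data this prints PASSED, since the total line coverage (86.07) exceeds the threshold.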


Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate test_cov_console

Use it

The package has the following executables:

$ test_cov_console

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add test_cov_console

With Flutter:

 $ flutter pub add test_cov_console

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  test_cov_console: ^0.2.2

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:test_cov_console/test_cov_console.dart';


import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
        // This makes the visual density adapt to the platform that you run
        // the app on. For desktop platforms, the controls will be smaller and
        // closer together (more dense) than on mobile platforms.
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key? key, required this.title}) : super(key: key);

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.

  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headline4,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

Author: DigitalKatalis
Source Code: 
License: BSD-3-Clause license

#flutter #dart #test 
