Recoding IO

Programmatically Push and Pop Views in SwiftUI using Navigation Stack View

SwiftUI's built-in NavigationView offers little opportunity to programmatically control view behavior and animation. In UIKit anyone can customize the navigation animation and behavior to their needs, but in SwiftUI there is no native solution. The SwiftUI Navigation Stack module solves this problem by letting developers customize view behavior and animations, and push and pop views programmatically.

In this video we cover four topics: first we learn some basics of Navigation Stack View and add push and pop functions, then we create a double-column navigation view and use a Navigation Stack inside it, then we learn how to programmatically push and pop views, and finally we learn to push and pop views by ID.

Follow us on
📝 @Medium - https://medium.com/recoding
🐦 @Twitter - https://twitter.com/recoding_io
🦄 @Dev.to - https://dev.to/recoding
📌 @Pinterest.com - https://www.pinterest.com/recoding_io
🔗 @LinkedIn.com - https://www.linkedin.com/company/recoding-io/

#ios #swift #xcode #software-development

Chloe Butler

Pdf2gerb: Perl Script Converts PDF Files to Gerber format

pdf2gerb

Perl script converts PDF files to Gerber format

Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.


pdf2gerb_cfg.pm

#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)


##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
#traces:
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .012,
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .025,
    .030,  #heavy-current traces; be careful with these ones!
    .040,
    .050,
    .060,
    .080,
    .100,
    .120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004", 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINX => 0,
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    CIRCLE_ADJUST_MAXY => 0,
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
    
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, r, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#############################################################################################
#junk/experiment:

#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html

#my $caller = "pdf2gerb::";

#sub cfg
#{
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;
#}

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code: https://github.com/swannman/pdf2gerb

License: GPL-3.0 license

#perl 

Concurrent Ruby: Modern Concurrency tools for Ruby.

Concurrent Ruby

Modern concurrency tools for Ruby. Inspired by Erlang, Clojure, Scala, Haskell, F#, C#, Java, and classic concurrency patterns.

The design goals of this gem are:

  • Be an 'unopinionated' toolbox that provides useful utilities without debating which is better or why
  • Remain free of external gem dependencies
  • Stay true to the spirit of the languages providing inspiration
  • But implement in a way that makes sense for Ruby
  • Keep the semantics as idiomatic Ruby as possible
  • Support features that make sense in Ruby
  • Exclude features that don't make sense in Ruby
  • Be small, lean, and loosely coupled
  • Thread-safety
  • Backward compatibility

Contributing

This gem depends on contributions and we appreciate your help. Would you like to contribute? Great! Have a look at issues with the looking-for-contributor label, and if you pick something up, let us know on the issue.

You can also get started by triaging issues, which may include reproducing bug reports or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to subscribe to concurrent-ruby on CodeTriage.

Thread Safety

Concurrent Ruby makes one of the strongest thread safety guarantees of any Ruby concurrency library, providing consistent behavior and guarantees on all four of the main Ruby interpreters (MRI/CRuby, JRuby, Rubinius, TruffleRuby).

Every abstraction in this library is thread safe. Specific thread safety guarantees are documented with each abstraction.

It is critical to remember, however, that Ruby is a language of mutable references. No concurrency library for Ruby can ever prevent the user from making thread safety mistakes (such as sharing a mutable object between threads and modifying it on both threads) or from creating deadlocks through incorrect use of locks. All the library can do is provide safe abstractions which encourage safe practices. Concurrent Ruby provides more safe concurrency abstractions than any other Ruby library, many of which support the mantra of "Do not communicate by sharing memory; instead, share memory by communicating". Concurrent Ruby is also the only Ruby library which provides a full suite of thread safe and immutable variable types and data structures.

We've also initiated a discussion to document the memory model of Ruby, which would provide consistent behaviour and guarantees on all four of the main Ruby interpreters (MRI/CRuby, JRuby, Rubinius, TruffleRuby).

Features & Documentation

The primary site for documentation is the automatically generated API documentation, which is up to date with the latest release. This README matches the master branch, so it may contain new features not yet released.

We also have an IRC channel (Gitter).

Versioning

  • concurrent-ruby uses Semantic Versioning
  • concurrent-ruby-ext has always same version as concurrent-ruby
  • concurrent-ruby-edge will always be 0.y.z, therefore point 4 of the spec applies: "Major version zero (0.y.z) is for initial development. Anything may change at any time. The public API should not be considered stable." However, we additionally use the following rules:
    • Minor version increment means incompatible changes were made
    • Patch version increment means only compatible changes were made

General-purpose Concurrency Abstractions

  • Async: A mixin module that provides simple asynchronous behavior to a class. Loosely based on Erlang's gen_server.
  • ScheduledTask: Like a Future scheduled for a specific future time.
  • TimerTask: A Thread that periodically wakes up to perform work at regular intervals.
  • Promises: Unified implementation of futures and promises which combines features of previous Future, Promise, IVar, Event, dataflow, Delay, and (partially) TimerTask into a single framework. It extensively uses the new synchronization layer to make all the features non-blocking and lock-free, with the exception of obviously blocking operations like #wait, #value. It also offers better performance.

Thread-safe Value Objects, Structures, and Collections

Collection classes that were originally part of the (deprecated) thread_safe gem:

  • Array A thread-safe subclass of Ruby's standard Array.
  • Hash A thread-safe subclass of Ruby's standard Hash.
  • Set A thread-safe subclass of Ruby's standard Set.
  • Map A hash-like object that should have much better performance characteristics, especially under high concurrency, than Concurrent::Hash.
  • Tuple A fixed size array with volatile (synchronized, thread safe) getters/setters.

Value objects inspired by other languages:

Structure classes derived from Ruby's Struct:

  • ImmutableStruct Immutable struct where values are set at construction and cannot be changed later.
  • MutableStruct Synchronized, mutable struct where values can be safely changed at any time.
  • SettableStruct Synchronized, write-once struct where values can be set at most once, either at construction or any time thereafter.

Thread-safe variables:

  • Agent: A way to manage shared, mutable, asynchronous, independent state. Based on Clojure's Agent.
  • Atom: A way to manage shared, mutable, synchronous, independent state. Based on Clojure's Atom.
  • AtomicBoolean A boolean value that can be updated atomically.
  • AtomicFixnum A numeric value that can be updated atomically.
  • AtomicReference An object reference that may be updated atomically.
  • Exchanger A synchronization point at which threads can pair and swap elements within pairs. Based on Java's Exchanger.
  • MVar A synchronized single element container. Based on Haskell's MVar and Scala's MVar.
  • ThreadLocalVar A variable where the value is different for each thread.
  • TVar A transactional variable implementing software transactional memory (STM). Based on Clojure's Ref.

Java-inspired ThreadPools and Other Executors

  • See the thread pool overview, which also contains a list of other Executors available.

Thread Synchronization Classes and Algorithms

Deprecated

Deprecated features are still available and bugs are being fixed, but new features will not be added.

  • Future: An asynchronous operation that produces a value. Replaced by Promises.
    • .dataflow: Built on Futures, Dataflow allows you to create a task that will be scheduled when all of its data dependencies are available. Replaced by Promises.
  • Promise: Similar to Futures, with more features. Replaced by Promises.
  • Delay Lazy evaluation of a block yielding an immutable result. Based on Clojure's delay. Replaced by Promises.
  • IVar Similar to a "future" but can be manually assigned once, after which it becomes immutable. Replaced by Promises.

Edge Features

These are available in the concurrent-ruby-edge companion gem.

These features are under active development and may change frequently. They are not expected to keep backward compatibility (and may also lack tests and documentation). Semantic versions will be obeyed, though. Features developed in concurrent-ruby-edge are expected to move to concurrent-ruby when final.

Actor: Implements the Actor Model, where concurrent actors exchange messages. Status: Partial documentation and tests; depends on new future/promise framework; stability is good.

Channel: Communicating Sequential Processes (CSP). Functionally equivalent to Go channels with additional inspiration from Clojure core.async. Status: Partial documentation and tests.

LazyRegister

LockFreeLinkedSet Status: will be moved to core soon.

LockFreeStack Status: missing documentation and tests.

Promises::Channel A first-in, first-out channel that accepts messages with the push family of methods and returns messages with the pop family of methods. Pop and push operations can be represented as futures, see #pop_op and #push_op. The capacity of the channel can be limited to support back pressure; use the capacity option in #initialize. The #pop method blocks and #pop_op returns a pending future if there is no message in the channel. If the capacity is limited, the #push method blocks and #push_op returns a pending future.

Cancellation The Cancellation abstraction provides cooperative cancellation.

The standard methods Thread#raise and Thread#kill available in Ruby are very dangerous (see the linked blog posts below). Therefore concurrent-ruby provides an alternative.

Throttle A tool managing concurrency level of tasks.

ErlangActor An actor implementation which precisely matches Erlang actor behaviour. Requires at least Ruby 2.1, otherwise it is not loaded.

WrappingExecutor A delegating executor which modifies each task before the task is given to the target executor it delegates to.

Supported Ruby versions

  • MRI 2.2 and above
  • Latest JRuby 9000
  • Latest TruffleRuby

The legacy support for Rubinius is kept for the moment but it is no longer maintained and is liable to be removed. If you would like to help please respond to #739.

Usage

Everything within this gem can be loaded simply by requiring it:

require 'concurrent'

Requiring only specific abstractions from Concurrent Ruby is not yet supported.

To use the tools in the Edge gem it must be required separately:

require 'concurrent-edge'

If the library does not behave as expected, Concurrent.use_stdlib_logger(Logger::DEBUG) could help to reveal the problem.

Installation

gem install concurrent-ruby

or add the following line to Gemfile:

gem 'concurrent-ruby', require: 'concurrent'

and run bundle install from your shell.

Edge Gem Installation

The Edge gem must be installed separately from the core gem:

gem install concurrent-ruby-edge

or add the following line to Gemfile:

gem 'concurrent-ruby-edge', require: 'concurrent-edge'

and run bundle install from your shell.

C Extensions for MRI

Potential performance improvements may be achieved under MRI by installing optional C extensions. To minimise installation errors the C extensions are available in the concurrent-ruby-ext extension gem. concurrent-ruby and concurrent-ruby-ext are always released together with the same version. Simply install the extension gem too:

gem install concurrent-ruby-ext

or add the following line to Gemfile:

gem 'concurrent-ruby-ext'

and run bundle install from your shell.

In code it is only necessary to

require 'concurrent'

The concurrent-ruby gem will automatically detect the presence of the concurrent-ruby-ext gem and load the appropriate C extensions.

Note For gem developers

No gems should depend on concurrent-ruby-ext. Doing so will force C extensions on your users. The best practice is to depend on concurrent-ruby and let users decide if they want C extensions.

Building the gem

Requirements

  • Recent CRuby
  • JRuby, rbenv install jruby-9.2.17.0
  • Set env variable CONCURRENT_JRUBY_HOME to point to it, e.g. /usr/local/opt/rbenv/versions/jruby-9.2.17.0
  • Install Docker, required for Windows builds

Publishing the Gem

  • Update version.rb
  • Update the CHANGELOG
  • Update the Yard documentation
    • Add the new version to docs-source/signpost.md. Needs to be done only if there are visible changes in the documentation.
    • Run bundle exec rake yard to update the master documentation and signpost.
    • Run bundle exec rake yard:<new-version> to add or update the documentation of the new version.
  • Commit (and push) the changes.
  • Use be rake release to release the gem. It consists of ['release:checks', 'release:build', 'release:test', 'release:publish'] steps. It will ask at the end before publishing anything. Steps can also be executed individually.

Maintainers

Special thanks to the past maintainers, and to the Ruby Association for sponsoring the project "Enhancing Ruby’s concurrency tooling" in 2018.

License and Copyright

Concurrent Ruby is free software released under the MIT License.

The Concurrent Ruby logo was designed by David Jones. It is Copyright © 2014 Jerry D'Antonio. All Rights Reserved.


Author: ruby-concurrency
Source code: https://github.com/ruby-concurrency/concurrent-ruby
License: MIT

#ruby #ruby-on-rails 

Delbert Ferry

What Is A Stack in Python and How Is It Implemented

In this Python article, let's learn what a stack in Python is and how to implement one. A stack is a linear data structure that enables efficient data storage and access. As the literal meaning of the word indicates, this data structure is based on storing elements one on top of another. There are plenty of real-world examples of stacks in our daily lives, such as a stack of plates, a stack of notes, or a stack of clothes. Like any other efficient programming language, Python allows a smooth implementation of the stack and various other data structures. In this article, we will learn about the Python stack and how to implement it.

What is a Stack in Python?

A stack is a linear data structure that works on the principle of 'Last In, First Out' (LIFO). This means that the element that goes into the stack first comes out last. The term we use for adding an element to a stack is 'push', whereas the term for removing an element from a stack is 'pop'. Since a stack has only one open end, both pushing and popping take place at that same end, the top of the stack. A pictorial representation of the PUSH and POP operations on a stack is shown below:

Pictorial representation of stack, push, and pop

The inbuilt data type of Python that we use to implement a stack is the Python list. Further, for exercising PUSH and POP operations on a stack, we use the append() and pop() functions of the Python list.
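
As a quick illustration (a minimal sketch, not part of the original article), a plain Python list already behaves like a stack when we only touch its end:

# A plain list used as a stack: append() pushes, pop() pops.
stack = []
stack.append(10)      # push 10
stack.append(20)      # push 20
print(stack.pop())    # prints 20 -- the last item pushed comes out first
print(stack)          # [10]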

Get your hands on the Python Stack course and learn more about it.

Methods of Stack

The most basic methods associated with a stack in Python are as follows (a minimal class-based sketch follows the list):

  • push(n) – A user-defined stack method for inserting an element into the stack. The element to be pushed is passed as its argument.
  • pop() – Removes the topmost element from the stack.
  • isempty() – Checks whether the stack is empty.
  • size() – Returns the size of the stack.
  • top() – Returns the topmost (most recently pushed) element of the stack without removing it.
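
Here is a minimal sketch (for illustration only, not code from the original article) of how these user-defined methods could be wrapped around a Python list:

class Stack:
    """A minimal list-backed stack exposing the methods described above."""
    def __init__(self):
        self._items = []

    def push(self, n):
        self._items.append(n)          # insert an element on top

    def pop(self):
        if self.isempty():
            raise IndexError("pop from an empty stack")
        return self._items.pop()       # remove and return the topmost element

    def isempty(self):
        return len(self._items) == 0   # True when the stack holds no elements

    def size(self):
        return len(self._items)        # number of elements in the stack

    def top(self):
        if self.isempty():
            raise IndexError("top of an empty stack")
        return self._items[-1]         # topmost (most recently pushed) element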

Functions associated with Python Stack

There are a bunch of useful functions in Python that help us deal with a stack efficiently. Let’s take a brief look at these functions –  

  • len() – Returns the size of the stack; this function can also be used in the definition of the isempty() method.
  • append(n) – Inserts an element into the stack; the element to be pushed is passed as its argument.
  • pop() – A method of Python lists that removes and returns the topmost element of the stack.

Implementation of Stack

There are four ways in which we can implement a stack in Python:

  • list
  • collections.deque
  • queue.LifoQueue
  • Singly-linked list  

Out of these four, the easiest and most popular way to implement a stack in Python is the list. Let's see the implementation of a stack in Python using lists.

Implementation Using List

# Stack Creation
def create_stack():
    stack = list()            #declaring an empty list
    return stack


# Checking for empty stack
def Isempty(stack):
    return len(stack) == 0


# Inserting items into the stack
def push(stack, n):
    stack.append(n)
    print("pushed item: " + n)


# Removal of an element from the stack
def pop(stack):
    if (Isempty(stack)):
        return "stack is empty"
    else:
        return stack.pop()

# Displaying the stack elements
def show(stack):
    print("The stack elements are:")
    for i in stack:
        print(i)
        
stack = create_stack()
push(stack, str(10))
push(stack, str(20))
push(stack, str(30))
push(stack, str(40))
print("popped item: " + pop(stack))
show(stack)

 

However, speed becomes a major limitation when dealing with a growing stack. The items in a list are stored next to one another in memory, so if the stack grows bigger than the block of memory currently allocated to the list, Python needs to allocate a new block and copy the items over, which makes some append() calls take much longer than the rest.
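
A small sketch of this over-allocation behaviour, using the standard library's sys.getsizeof (exact byte counts vary by Python version and platform):

import sys

stack = []
last_size = sys.getsizeof(stack)
for i in range(32):
    stack.append(i)
    size = sys.getsizeof(stack)
    if size != last_size:
        # The allocated block just grew; the append that triggers a
        # reallocation has to copy the existing items into the new block.
        print(f"len={len(stack):2d}  allocated bytes: {last_size} -> {size}")
        last_size = size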

Implementation using collections.deque

We can also use the deque class of the Python collections module to implement a stack. Since a deque, or double-ended queue, allows us to insert and delete elements from both the front and the rear, it can be more suitable when we require consistently fast append() and pop() operations.

from collections import deque  

def create_stack():  
    stack = deque()    #Creating empty deque
    return stack 
  
# PUSH operation using append()
def push(stack, item):
    stack.append(item)

  
#POP operation
def pop(stack):
    if(stack):
        print('Element popped from stack:')
        print(stack.pop())
    else:
        print('Stack is empty')
    

#Displaying Stack
def show(stack):
    print('Stack elements are:')
    print(stack)
    
new_stack=create_stack()
push(new_stack,25)
push(new_stack,56)
push(new_stack,32)
show(new_stack)

pop(new_stack)
show(new_stack)

Implementation using queue.LifoQueue

The queue module of Python includes a LIFO queue, and a LIFO queue is nothing but a stack. Hence, we can easily and effectively implement a stack in Python using the queue module. For a LifoQueue, we have certain methods that are useful in stack implementation, such as qsize(), full(), empty(), put(n), and get(), as seen in the following piece of code. The maxsize parameter of LifoQueue defines the maximum number of items that the stack can hold.

from queue import LifoQueue
  
# Initializing a stack
def new():
    stack = LifoQueue(maxsize=3)   #Fixing the stack size
    return stack

#PUSH using put(n) 
def push(stack, item):
    if(stack.full()):                      #Checking if the stack is full
        print("The stack is already full")
    else:
        stack.put(item)
        print("Size: ", stack.qsize())     #Determining the stack size

#POP using get()
def pop(stack):
    if(stack.empty()):              #Checking if the stack is empty
        print("Stack is empty")
    else:
        print('Element popped from the stack is ', stack.get())         #Removing the last element from stack
        print("Size: ", stack.qsize())

stack=new()
pop(stack)
push(stack,32)
push(stack,56)
push(stack,27)
pop(stack)

 

Implementation using a singly linked list

Singly linked lists are an efficient and effective way of implementing dynamic stacks. We use the class and object approach of Python OOP to create linked lists in Python. The stack class below defines methods such as getSize(), isEmpty(), push(n), and pop(). Let's take a look at how each of these helps in implementing a stack.

#Node creation
class Node:
	def __init__(self, value):
		self.value = value
		self.next = None

#Stack creation
class Stack:
    #Stack with dummy node
	def __init__(self):
		self.head = Node("head")
		self.size = 0

	#  For string representation of the stack
	def __str__(self):
		val = self.head.next
		show = ""
		while val:
			show += str(val.value) + " , "
			val = val.next
		return show[:-3]

	# Retrieve the size of the stack
	def getSize(self):
		return self.size

	# Check if the stack is empty
	def isEmpty(self):
		return self.size == 0

	# Retrieve the top item of the stack
	def peek(self):
		# Check for empty stack.
		if self.isEmpty():
			raise Exception("This is an empty stack")
		return self.head.next.value

	# Push operation
	def push(self, value):
		node = Node(value)
		node.next = self.head.next
		self.head.next = node
		self.size += 1

	# Pop Operation
	def pop(self):
		if self.isEmpty():
			raise Exception("Stack is empty")
		remove = self.head.next
		self.head.next = self.head.next.next
		self.size -= 1
		return remove.value


#Driver Code
if __name__ == "__main__":
	stack = Stack()
	n=20
	for i in range(1, 11):
		stack.push(n)
		n+=5
	print(f"Stack:{stack}")

	for i  in range(1, 6):
		remove = stack.pop()
		print(f"Pop: {remove}")
	print(f"Stack: {stack}")

Deque Vs. List

  • Imports: using a deque requires importing the collections module, whereas a list is a built-in data structure that needs no import.
  • Time complexity: a deque's append() and pop() are O(1) at either end; a list's append() and pop() at the end are amortized O(1), but operations at the front (insert(0, x), pop(0)) are O(n), and an append that forces a reallocation also costs O(n).
  • Ends: a deque is double-ended, so elements can be inserted into and removed from either end; a list used as a stack is effectively single-ended, with append() adding at the end and pop() removing the last element.
  • Growth: stacks of larger sizes are easily and efficiently implemented with deques; a list works well for fixed-length use, but stack implementation via a list degrades as the list grows because of memory reallocation.
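
A quick way to see the difference at the front end is a timeit comparison (a sketch; exact timings depend on the machine):

from timeit import timeit

# Front insertions: O(n) on a list, O(1) on a deque.
list_time = timeit("s.insert(0, None)", setup="s = []", number=50_000)
deque_time = timeit("s.appendleft(None)",
                    setup="from collections import deque; s = deque()",
                    number=50_000)
print(f"list.insert(0, x):   {list_time:.3f}s")
print(f"deque.appendleft(x): {deque_time:.3f}s")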

Python Stacks and Threading

Python is a multi-threaded language, i.e. it allows programs that run multiple parts of a process in parallel. We use threading in Python to run multiple threads, such as function calls and tasks, concurrently. Python lists and deques behave differently in a threaded program: you would not want to use a list for a data structure that is accessed by multiple threads, since lists are not thread-safe.

A threaded program is safe with deques as long as you strictly use append() and pop() only. Even if you succeed at writing a correct thread-safe deque program, it can still be misused later and give rise to race conditions. So neither a list nor a deque is ideal for a threaded program. The best way to build a stack in a thread-safe environment is queue.LifoQueue; its methods are safe to call from multiple threads. The trade-off is that stack operations on queue.LifoQueue may take a little longer, owing to the locking required for thread-safe calls.
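
A minimal sketch of several threads pushing onto one LifoQueue (an illustration, not from the original article):

from queue import LifoQueue
import threading

stack = LifoQueue()

def worker(start):
    # put() is safe to call from several threads at once.
    for i in range(start, start + 10):
        stack.put(i)

threads = [threading.Thread(target=worker, args=(n * 10,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("items pushed:", stack.qsize())   # 40
print("top of stack:", stack.get())     # one of the most recently pushed items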

Note: Threading in Python does not mean that different threads are executed on different processors. If 100% of the CPU time is already being consumed, Python threads will no longer be helpful in making your program faster. You can switch to parallel programming in such cases.

Which Implementation of Stack should one consider?

When dealing with a non-threaded program, you should go for a deque. When your program requires a thread-safe environment, opt for LifoQueue, unless performance is highly sensitive to the speed of the stack operations.

A list is a bit riskier since it can run into memory reallocation issues, and Python lists are not safe for multithreaded environments. The list and deque interfaces are otherwise very similar, so a Python deque can be seen as the best general-purpose choice for stack implementation.

Conclusion 

Now that you have come to the end of this article, you should have a good grasp of stacks in Python. The essential part is recognizing the situations where you need a stack. You have learned several ways of implementing a stack in Python, so you know that understanding the requirements of your program is what lets you choose the best implementation option.

Be clear about whether you are writing a multi-threaded program. Python lists are not thread-safe, so you would prefer a deque or queue.LifoQueue in a multi-threaded environment. The drawback of slightly slower stack operations can be overlooked as long as your program's overall performance does not suffer because of it.

Frequently Asked Questions

What is a Python stack?

A stack is a form of linear data structure in Python that allows the storage and retrieval of elements in the LIFO (Last In First Out) manner.

Can you create a stack in Python?

Yes, we can easily create a stack in Python using lists, LifoQueues, or deques. For a dynamic stack, you can also build one on a singly linked list.

When would you use a stack in Python?

A stack of books, a stack of documents, a stack of plates, etc. are all real-world examples of a stack. You would use a stack in Python whenever you need to store and access elements in a LIFO manner. Suppose a developer working on a new word editor has to build an undo feature, where backtracking up to the very first action is required. For such a scenario, a Python stack is ideal for storing the actions of the users working on the editor.
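
A tiny sketch of that undo idea, with hypothetical editor actions and a plain list as the stack:

# Hypothetical undo stack for an editor: every edit is pushed,
# and undo pops the most recent action first (LIFO).
undo_stack = []

def do_action(action):
    undo_stack.append(action)

def undo():
    if undo_stack:
        return "undid: " + undo_stack.pop()
    return "nothing to undo"

do_action("type 'hello'")
do_action("make word bold")
print(undo())   # undid: make word bold
print(undo())   # undid: type 'hello'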

What is a stack in Python example?

Example: A record of students entering a hall for a seminar where they must leave the hall in a LIFO manner.

Is Python full-stack?

Yes, Python can very well be used for full-stack development, though full-stack development and the stack data structure are two completely different things. To learn more about the stack in Python, refer back to the article above.

How do I know if a Python stack is full?

When implementing a stack with a list or a linked list, you can compare the result of the size() method against your chosen maximum limit. With LifoQueue, you have the full() method to check whether the stack is full.
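
For example, with a bounded LifoQueue (a quick sketch):

from queue import LifoQueue

stack = LifoQueue(maxsize=2)
stack.put("a")
stack.put("b")
print(stack.full())   # True -- the stack has reached its maximum size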


Original article source at: https://www.mygreatlearning.com

#python 

Hertha Mayer

Authentication In MEAN Stack - A Quick Guide

I consider myself an active Stack Overflow user, although my activity tends to vary depending on my daily workload. I enjoy answering questions with the angular tag, and I always try to create a working example to prove the correctness of my answers.

To create an Angular demo I usually use Plunker, StackBlitz, or even JSFiddle. I like all of them, but when I run into errors I want a somewhat more usable tool to understand what's going on.

Many people who ask questions on Stack Overflow don't want to isolate the problem and prepare a minimal reproduction, so they usually post all of their code in the question. They also tend to be inaccurate and make a lot of mistakes in template syntax. To avoid wasting time investigating where an error comes from, I tried to create a tool that helps me quickly find what causes the problem.

Angular demo runner
Online Angular editor for building demos.
ng-run.com

Let me show what I mean…

Template parser errors

There are template parser errors that can be easily caught by StackBlitz.

It gives me some information, but I want the error to be highlighted.

#mean stack #angular 6 passport authentication #authentication in mean stack #full stack authentication #mean stack example application #mean stack login and registration angular 8 #mean stack login and registration angular 9 #mean stack tutorial #mean stack tutorial 2019 #passport.js