Chrome Extension for Saving ChatGPT Threads using Gpt.best

gpt.best Chrome Extension

Chrome Extension for gpt.best that adds a share button to ChatGPT. Clicking it creates a shared link (e.g., gpt.best/kkYN0o6p) and opens it in a new tab.

gpt-best-demo.mp4

Chrome Store Installation

https://chrome.google.com/webstore/detail/gptbest/lfjehajehlondfmmdnacgomigbkfahln

Manual Installation

  1. Download this repo as a zip file
  2. Open the Chrome extensions page, or type chrome://extensions into the address bar
  3. Ensure "Developer Mode" is enabled (top right corner)
  4. Click "Load Unpacked" and select the unzipped download from step #1

Download Details:

Author: rileytomasek
Source Code: https://github.com/rileytomasek/gpt-best-chrome

#chatgpt 


Pyringe: Debugger Capable Of Attaching to & Injecting Code Into Python

DISCLAIMER: This is not an official google project, this is just something I wrote while at Google.

Pyringe

What this is

Pyringe is a python debugger capable of attaching to running processes, inspecting their state, and even injecting python code into them while they're running. With pyringe, you can list threads, get tracebacks, and inspect the locals/globals/builtins of running functions, all without having to prepare your program for it.

What this is not

A "Google project". It's my internship project that got open-sourced. Sorry for the confusion.

What do I need?

Pyringe internally uses gdb to do a lot of its heavy lifting, so you will need a fairly recent build of gdb (version 7.4 onwards, and only if gdb was configured with --with-python). You will also need the symbols for whatever build of python you're running.
On Fedora, the package you're looking for is python-debuginfo, on Debian it's called python2.7-dbg (adjust according to version). Arch Linux users: see issue #5, Ubuntu users can only debug the python-dbg binary (see issue #19).
Having Colorama will get you output in boldface, but it's optional.

How do I get it?

Get it from the Github repo, PyPI, or via pip (pip install pyringe).

Is this Python3-friendly?

Short answer: No, sorry. Long answer:
There are three potentially different versions of python in play here:

  1. The version running pyringe
  2. The version being debugged
  3. The version of libpythonXX.so your build of gdb was linked against

Version 2 is currently the dealbreaker here. CPython has changed a bit in the meantime[1], and making all features work while debugging python3 will have to take a back seat until the more glaring issues have been taken care of.
As for versions 1 and 3, the 2to3 tool may be able to handle them automatically. But then, as long as version 2 hasn't been taken care of, this isn't really a use case in the first place.

[1] - For example, pendingbusy (which is used for injection) has been renamed to busy and been given a function-local scope, making it harder to interact with via gdb.

Will this work with PyPy?

Unfortunately, no. Since this makes use of some CPython internals and implementation details, only CPython is supported. If you don't know what PyPy or CPython are, you'll probably be fine.

Why not PDB?

PDB is great. Use it where applicable! But sometimes it isn't applicable: when python itself crashes, when it gets stuck in some C extension, or when you want to inspect data without stopping the program. In such cases, PDB (and all other debuggers that run within the interpreter itself) is next to useless, and without pyringe you'd be left debugging with print statements. Pyringe is quite convenient in these cases.

I injected a change to a local var into a function and it's not showing up!

This is a known limitation. Things like inject('var = 2') won't work, but inject('var[1] = 1337') should. This is because most of the time, python internally uses a fast path for looking up local variables that doesn't actually perform the dictionary lookup in locals(). In general, code you inject into processes with pyringe is very different from a normal python function call.
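
For instance, in a hypothetical session where `counts` stands in for a list that is local to some running function in the attached process:

==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> inject('counts = []')        # rebinds a fast-path local; the change won't show up
>>> inject('counts[0] = 1337')   # mutates the existing object; the change is visible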

How do I use it?

You can start the debugger by executing python -m pyringe. Alternatively:

import pyringe
pyringe.interact()

If that reminds you of the code module, good; this is intentional.
After starting the debugger, you'll be greeted by what behaves almost like a regular python REPL.
Try the following:

==> pid:[None] #threads:[0] current thread:[None]
>>> help()
Available commands:
 attach: Attach to the process with the given pid.
 bt: Get a backtrace of the current position.
 [...]
==> pid:[None] #threads:[0] current thread:[None]
>>> attach(12679)
==> pid:[12679] #threads:[11] current thread:[140108099462912]
>>> threads()
[140108099462912, 140108107855616, 140108116248323, 140108124641024, 140108133033728, 140108224739072, 140108233131776, 140108141426432, 140108241524480, 140108249917184, 140108269324032]

The IDs you see here correspond to what threading.current_thread().ident would tell you.
All debugger functions are just regular python functions that have been exposed to the REPL, so you can do things like the following.

==> pid:[12679] #threads:[11] current thread:[140108099462912]
>>> for tid in threads():
...   if not tid % 10:
...     thread(tid)
...     bt()
... 
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 524, in __bootstrap
    self.__bootstrap_inner()
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "./test.py", line 46, in Idle
    Thread_2_Func(1)
  File "./test.py", line 40, in Wait
    time.sleep(n)
==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> 

You can access the inferior's locals and inspect them like so:

==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> inflocals()
{'a': <proxy of A object at remote 0x1d9b290>, 'LOL': 'success!', 'b': <proxy of B object at remote 0x1d988c0>, 'n': 1}
==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> p('a')
<proxy of A object at remote 0x1d9b290>
==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> p('a').attr
'Some_magic_string'
==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> 

And sure enough, the definition of a's class reads:

class Example(object):
  cl_attr = False
  def __init__(self):
    self.attr = 'Some_magic_string'

There are limits to how far this proxying of objects goes, and everything that isn't trivial data will show up as strings (like '<function at remote 0x1d957d0>').
You can inject python code into running programs. Of course, there are caveats but... see for yourself:

==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> inject('import threading')
==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> inject('print threading.current_thread().ident')
==> pid:[12679] #threads:[11] current thread:[140108241524480]
>>> 

The output of my program in this case reads:

140108241524480

If you need additional pointers, just try using python's help (pyhelp() in the debugger) on debugger commands.

Author: google
Source Code: https://github.com/google/pyringe
License: Apache-2.0 License

#python 


Pdf2gerb: Perl Script Converts PDF Files to Gerber format

pdf2gerb

Perl script converts PDF files to Gerber format

Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.


pdf2gerb_cfg.pm

#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)


##############################################################################################
#configurable settings:
#change values here instead of in the main pdf2gerb.pl file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors don't work on Windows; must be set before the first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
#traces:
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .012,
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .025,
    .030,  #heavy-current traces; be careful with these ones!
    .040,
    .050,
    .060,
    .080,
    .100,
    .120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINX => 0,
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    CIRCLE_ADJUST_MAXY => 0,
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
    
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be true to indicate successful load


#############################################################################################
#junk/experiment:

#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html

#my $caller = "pdf2gerb::";

#sub cfg
#{
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;
#}

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code: https://github.com/swannman/pdf2gerb

License: GPL-3.0 license

#perl 


Procedure To Become An Air Hostess/Cabin Crew

Minimum education required – 10+2 passed in any stream from a recognized board.

The age limit is 18 to 25 years. It may differ from one airline to another!

 

Physical and Medical standards –

  • Females must be at least 157 cm tall and males at least 170 cm tall. This parameter may vary from one airline to the next.
  • The candidate's body weight should be proportional to his or her height.
  • Candidates with blemish-free skin will have an advantage.
  • Physical fitness is required of the candidate.
  • Eyesight requirements: a minimum of 6/9 vision is required. Many airlines accept applicants whose vision is corrected to 20/20.
  • There should be no history of mental illness in the candidate's past.
  • The candidate should not have a significant cardiovascular condition.

You can become an air hostess if you meet certain criteria, such as a minimum educational level, an age limit, language ability, and physical characteristics.

As can be seen from the preceding information, a 10+2 pass is the minimal educational need for becoming an air hostess in India. So, if you have a 10+2 certificate from a recognized board, you are qualified to apply for an interview for air hostess positions!

You can still apply for this job if you have a higher qualification (such as a Bachelor's or Master's Degree).

I recommend joining a personality development course, such as the aviation industry courses offered by AEROFLY INTERNATIONAL AVIATION ACADEMY in CHANDIGARH. They provide extra sessions within the course, cover all topics in 6 months at an affordable price, and pay particular attention to each and every aspirant, preparing them according to airline criteria. So be a part of it and give your aspirations wings.

Read More: Safety and Emergency Procedures of Aviation || Operations of Travel and Hospitality Management || Intellectual Language and Interview Training || Premiere Coaching For Retail and Mass Communication || Introductory Cosmetology and Tress Styling || Aircraft Ground Personnel Competent Course

For more information:

Visit us at: https://aerofly.co.in

Phone: wa.me//+919988887551

Address: Aerofly International Aviation Academy, SCO 68, 4th Floor, Sector 17-D, Chandigarh, Pin 160017

Email:     info@aerofly.co.in

 


Popular Python Libraries for Debugging Code

In this Python article, let's learn about debugging tools: popular Python libraries for debugging code.

Table of contents:

  • pdb-like Debugger
    • ipdb - IPython-enabled pdb.
    • pdb++ - Another drop-in replacement for pdb.
    • pudb - A full-screen, console-based Python debugger.
    • wdb - An improbable web debugger through WebSockets.
  • Tracing
    • lptrace - strace for Python programs.
    • manhole - Debugging service that accepts UNIX socket connections and presents the stacktraces for all threads and an interactive prompt.
    • pyringe - Debugger capable of attaching to and injecting code into Python processes.
    • python-hunter - A flexible code tracing toolkit.
  • Profiler
    • line_profiler - Line-by-line profiling.
    • memory_profiler - Monitor Memory usage of Python code.
    • py-spy - A sampling profiler for Python programs. Written in Rust.
    • pyflame - A ptracing profiler For Python.
    • vprof - Visual Python profiler.
  • Others
    • django-debug-toolbar - Display various debug information for Django.
    • django-devserver - A drop-in replacement for Django's runserver.
    • flask-debugtoolbar - A port of the django-debug-toolbar to flask.
    • icecream - Inspect variables, expressions, and program execution with a single, simple function call.
    • pyelftools - Parsing and analyzing ELF files and DWARF debugging information.

 

What is a debugging tool?

A debugger is a software tool that aids the software development process by identifying coding errors at various stages of operating system or application development. Some debuggers will analyze a test run to see what lines of code were not executed.

A graphical Python debugger, for example, uses bdb (part of the stdlib) but adds a GUI and powerful features such as an object browser; windows for variables, classes, functions, exceptions, and the stack; conditional breakpoints; and more.


Popular Python Libraries for Debugging Code

  1. IPython pdb

ipdb exports functions to access the IPython debugger, which features tab completion, syntax highlighting, better tracebacks, and better introspection, with the same interface as the pdb module.

Example usage:

import ipdb
ipdb.set_trace()
ipdb.set_trace(context=5)  # will show five lines of code
                           # instead of the default three lines
                           # or you can set it via IPDB_CONTEXT_SIZE env variable
                           # or setup.cfg file
ipdb.pm()
ipdb.run('x[0] = 3')
result = ipdb.runcall(function, arg0, arg1, kwarg='foo')
result = ipdb.runeval('f(1,2) - 3')

Arguments for set_trace

The set_trace function accepts context, which will show as many lines of code as defined, and cond, which accepts boolean values (such as abc == 17) and will start ipdb's interface whenever cond evaluates to True.
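
A minimal sketch of a conditional breakpoint (the function and variable names here are hypothetical):

import ipdb

def process(items):
    for abc in items:
        # breaks into the debugger only on the iteration where abc == 17
        ipdb.set_trace(cond=abc == 17)

process([16, 17, 18])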

Using configuration file

It's possible to set up context using a .ipdb file in your home folder, or setup.cfg or pyproject.toml in your project folder. You can also set your file location via the env var $IPDB_CONFIG. Your environment variable has priority over the home configuration file, which in turn has priority over the setup config file. Currently, only the context setting is available.

A valid setup.cfg is as follows

[ipdb]
context=5

A valid .ipdb is as follows

context=5

A valid pyproject.toml is as follows

[tool.ipdb]
context=5

The post-mortem function, ipdb.pm(), is equivalent to the magic function %debug.

View on GitHub


2.  pdb++

pdb++, a drop-in replacement for pdb (the Python debugger)

What is it?

This module is an extension of the pdb module of the standard library. It is meant to be fully compatible with its predecessor, yet it introduces a number of new features to make your debugging experience as nice as possible.

https://user-images.githubusercontent.com/412005/64484794-2f373380-d20f-11e9-9f04-e1dabf113c6f.png

pdb++ features include:

  • colorful TAB completion of Python expressions (through fancycompleter)
  • optional syntax highlighting of code listings (through Pygments)
  • sticky mode
  • several new commands to be used from the interactive (Pdb++) prompt
  • smart command parsing (hint: have you ever typed r or c at the prompt to print the value of some variable?)
  • additional convenience functions in the pdb module, to be used from your program

pdb++ is meant to be a drop-in replacement for pdb. If you find some unexpected behavior, please report it as a bug.

Installation

Since pdb++ is not a valid package name, the package is named pdbpp:

$ pip install pdbpp

pdb++ is also available via conda:

$ conda install -c conda-forge pdbpp

Alternatively, you can just put pdb.py somewhere inside your PYTHONPATH.
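
Because it is a drop-in replacement, existing breakpoints pick it up automatically; a quick check (a minimal sketch):

import pdb; pdb.set_trace()  # with pdbpp installed, this opens the (Pdb++) prompt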

View on GitHub


3.  PuDB

Its goal is to provide all the niceties of modern GUI-based debuggers in a more lightweight and keyboard-friendly package. PuDB allows you to debug code right where you write and test it--in a terminal.
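
A minimal way to drop into PuDB (my_script.py is a hypothetical script of your own):

import pudb; pudb.set_trace()  # opens the full-screen debugger at this line

Or run a whole script under it from the start:

$ python -m pudb my_script.py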

Here are some screenshots:

Light theme

  • doc/images/pudb-screenshot-light.png

Dark theme

  • doc/images/pudb-screenshot-dark.png

View on GitHub


4.  wdb

An improbable web debugger through WebSockets

wdb is a full featured web debugger based on a client-server architecture.

The wdb server, which is responsible for managing debugging instances and browser connections (through WebSockets), is based on Tornado. The wdb clients allow step-by-step debugging, in-program python code execution, code editing (based on CodeMirror), setting breakpoints, and more.

Due to this architecture, all of this is fully compatible with multithread and multiprocess programs.

wdb works with python 2 (2.6, 2.7), python 3 (3.2, 3.3, 3.4, 3.5) and pypy. Even better, it is possible to debug a python 2 program with a wdb server running on python 3 and vice-versa, or to debug a program running on one computer with a debugging server running on another computer, inside a web page on a third computer!

Even betterer, it is now possible to pause a currently running python process/thread using code injection from the web interface. (This requires gdb and ptrace enabled)

In other words it's a very enhanced version of pdb directly in your browser with nice features.

Installation:

Global installation:

    $ pip install wdb.server

In virtualenv or with a different python installation:

    $ pip install wdb

(You must have the server installed and running)
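
Once the server is running, a minimal sketch of setting a breakpoint from your own code (based on wdb's trace API):

import wdb; wdb.set_trace()  # pauses here and opens a debugging session in the browser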

View on GitHub


5.  lptrace

lptrace is strace for Python programs. It lets you see in real-time what functions a Python program is running. It's particularly useful to debug weird issues on production.

For example, let's debug a non-trivial program, the Python SimpleHTTPServer. First, let's run the server:

vagrant@precise32:/vagrant$ python -m SimpleHTTPServer 8080 &
[1] 1818
vagrant@precise32:/vagrant$ Serving HTTP on 0.0.0.0 port 8080 ...

Now let's connect lptrace to it:

vagrant@precise32:/vagrant$ sudo python lptrace -p 1818
...
fileno (/usr/lib/python2.7/SocketServer.py:438)
meth (/usr/lib/python2.7/socket.py:223)

fileno (/usr/lib/python2.7/SocketServer.py:438)
meth (/usr/lib/python2.7/socket.py:223)

_handle_request_noblock (/usr/lib/python2.7/SocketServer.py:271)
get_request (/usr/lib/python2.7/SocketServer.py:446)
accept (/usr/lib/python2.7/socket.py:201)
__init__ (/usr/lib/python2.7/socket.py:185)
verify_request (/usr/lib/python2.7/SocketServer.py:296)
process_request (/usr/lib/python2.7/SocketServer.py:304)
finish_request (/usr/lib/python2.7/SocketServer.py:321)
__init__ (/usr/lib/python2.7/SocketServer.py:632)
setup (/usr/lib/python2.7/SocketServer.py:681)
makefile (/usr/lib/python2.7/socket.py:212)
__init__ (/usr/lib/python2.7/socket.py:246)
makefile (/usr/lib/python2.7/socket.py:212)
__init__ (/usr/lib/python2.7/socket.py:246)
handle (/usr/lib/python2.7/BaseHTTPServer.py:336)
handle_one_request (/usr/lib/python2.7/BaseHTTPServer.py:301)
^CReceived Ctrl-C, quitting
vagrant@precise32:/vagrant$

You can see that the server is handling the request in real time! After pressing Ctrl-C, the trace is removed and the program execution resumes normally.

View on GitHub


6.  python-manhole

Debugging manhole for python applications.

Manhole is an in-process service that accepts Unix domain socket connections and presents the stacktraces for all threads plus an interactive prompt. It can work either as a python daemon thread waiting for connections at all times, or as a signal handler (stopping your application and waiting for a connection).

Access to the socket is restricted to the application's effective user id or root.

This is just like Twisted's manhole. It's simpler (no dependencies), it only runs on Unix domain sockets (in contrast to Twisted's manhole which can run on telnet or ssh) and it integrates well with various types of applications.

Usage

Install it:

pip install manhole

You can put this in your Django settings, WSGI app file, or some module that's always imported early:

import manhole
manhole.install() # this will start the daemon thread

# and now you start your app, eg: server.serve_forever()

Now in a shell you can do either of these:

netcat -U /tmp/manhole-1234
socat - unix-connect:/tmp/manhole-1234
socat readline unix-connect:/tmp/manhole-1234

Socat with readline is best (history, editing etc). If your socat doesn't have readline try this.

Sample output:

$ nc -U /tmp/manhole-1234

Python 2.7.3 (default, Apr 10 2013, 06:20:15)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> dir()
['__builtins__', 'dump_stacktraces', 'os', 'socket', 'sys', 'traceback']
>>> print 'foobar'
foobar

View on GitHub


7.  Pyringe

Pyringe is a python debugger capable of attaching to running processes, inspecting their state and even of injecting python code into them while they're running. With pyringe, you can list threads, get tracebacks, inspect locals/globals/builtins of running functions, all without having to prepare your program for it.

Pyringe's usage is covered in detail in the dedicated Pyringe section earlier in this article; see "How do I use it?" above for a full REPL walkthrough.

View on GitHub


8.  python-hunter

Hunter is a flexible code tracing toolkit, not for measuring coverage, but for debugging, logging, inspection and other nefarious purposes. It has a simple Python API, a convenient terminal API and a CLI tool to attach to processes.

Installation

pip install hunter

Documentation

https://python-hunter.readthedocs.io/

Getting started

Basic use involves passing various filters to the trace option. An example:

import hunter
hunter.trace(module='posixpath', action=hunter.CallPrinter)

import os
os.path.join('a', 'b')

That would result in:

>>> os.path.join('a', 'b')
         /usr/lib/python3.6/posixpath.py:75    call      => join(a='a')
         /usr/lib/python3.6/posixpath.py:80    line         a = os.fspath(a)
         /usr/lib/python3.6/posixpath.py:81    line         sep = _get_sep(a)
         /usr/lib/python3.6/posixpath.py:41    call         => _get_sep(path='a')
         /usr/lib/python3.6/posixpath.py:42    line            if isinstance(path, bytes):
         /usr/lib/python3.6/posixpath.py:45    line            return '/'
         /usr/lib/python3.6/posixpath.py:45    return       <= _get_sep: '/'
         /usr/lib/python3.6/posixpath.py:82    line         path = a
         /usr/lib/python3.6/posixpath.py:83    line         try:
         /usr/lib/python3.6/posixpath.py:84    line         if not p:
         /usr/lib/python3.6/posixpath.py:86    line         for b in map(os.fspath, p):
         /usr/lib/python3.6/posixpath.py:87    line         if b.startswith(sep):
         /usr/lib/python3.6/posixpath.py:89    line         elif not path or path.endswith(sep):
         /usr/lib/python3.6/posixpath.py:92    line         path += sep + b
         /usr/lib/python3.6/posixpath.py:86    line         for b in map(os.fspath, p):
         /usr/lib/python3.6/posixpath.py:96    line         return path
         /usr/lib/python3.6/posixpath.py:96    return    <= join: 'a/b'
'a/b'

In a terminal it would look like:

https://raw.githubusercontent.com/ionelmc/python-hunter/master/docs/code-trace.png

Another useful scenario is to ignore all standard modules and force colors to make them stay even if the output is redirected to a file.

import hunter
hunter.trace(stdlib=False, action=hunter.CallPrinter(force_colors=True))

View on GitHub


9.  line_profiler

line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

Installation

Note: As of version 2.1.2, pip install line_profiler does not work. Please install as follows until it is fixed in the next release:

git clone https://github.com/rkern/line_profiler.git
find line_profiler -name '*.pyx' -exec cython {} \;
cd line_profiler
pip install . --user

Releases of line_profiler can be installed using pip:

$ pip install line_profiler

Source releases and any binaries can be downloaded from the PyPI link.

http://pypi.python.org/pypi/line_profiler

To check out the development sources, you can use Git:

$ git clone https://github.com/rkern/line_profiler.git

You may also download source tarballs of any snapshot from that URL.

Source releases will require a C compiler in order to build line_profiler. In addition, git checkouts will also require Cython >= 0.10. Source releases on PyPI should contain the pregenerated C sources, so Cython should not be required in that case.

kernprof is a single-file pure Python script and does not require a compiler. If you wish to use it to run cProfile and not line-by-line profiling, you may copy it to a directory on your PATH manually and avoid trying to build any C extensions.
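
Typical line-by-line usage (my_script.py and slow_function are hypothetical): decorate the function with @profile (kernprof injects this decorator into builtins, so no import is needed) and run the script under kernprof:

@profile
def slow_function(n):
    total = 0
    for i in range(n):      # the per-line timings will show this loop dominating
        total += i * i
    return total

slow_function(10**6)

$ kernprof -l -v my_script.py   # -l enables line-by-line mode, -v prints the results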

View on GitHub


10.  Memory Profiler

This is a python module for monitoring memory consumption of a process as well as line-by-line analysis of memory consumption for python programs. It is a pure python module which depends on the psutil module.

Installation

To install through easy_install or pip:

$ easy_install -U memory_profiler # pip install -U memory_profiler

To install from source, download the package, extract and type:

$ python setup.py install

Usage

line-by-line memory usage

The line-by-line memory usage mode is used in much the same way as line_profiler: first decorate the function you would like to profile with @profile, then run the script with specific arguments to the Python interpreter.

In the following example, we create a simple function my_func that allocates lists a, b and then deletes b:

@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

if __name__ == '__main__':
    my_func()

Execute the code passing the option -m memory_profiler to the python interpreter to load the memory_profiler module and print to stdout the line-by-line analysis. If the file name was example.py, this would result in:

$ python -m memory_profiler example.py

Output will follow:

Line #    Mem usage  Increment   Line Contents
==============================================
     3                           @profile
     4      5.97 MB    0.00 MB   def my_func():
     5     13.61 MB    7.64 MB       a = [1] * (10 ** 6)
     6    166.20 MB  152.59 MB       b = [2] * (2 * 10 ** 7)
     7     13.61 MB -152.59 MB       del b
     8     13.61 MB    0.00 MB       return a

The first column represents the line number of the code that has been profiled, the second column (Mem usage) the memory usage of the Python interpreter after that line has been executed. The third column (Increment) represents the difference in memory of the current line with respect to the last one. The last column (Line Contents) prints the code that has been profiled.

View on GitHub


11.  py-spy

py-spy is a sampling profiler for Python programs. It lets you visualize what your Python program is spending time on without restarting the program or modifying the code in any way. py-spy is extremely low overhead: it is written in Rust for speed and doesn't run in the same process as the profiled Python program. This means py-spy is safe to use against production Python code.

py-spy works on Linux, OSX, Windows and FreeBSD, and supports profiling all recent versions of the CPython interpreter (versions 2.3-2.7 and 3.3-3.10).

Installation

Prebuilt binary wheels can be installed from PyPI with:

pip install py-spy

You can also download prebuilt binaries from the GitHub Releases Page.

If you're a Rust user, py-spy can also be installed with: cargo install py-spy.

On macOS, py-spy is in Homebrew and can be installed with brew install py-spy.

On Arch Linux, py-spy is in AUR and can be installed with yay -S py-spy.

On Alpine Linux, py-spy is in testing repository and can be installed with apk add py-spy --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted.

Usage

py-spy works from the command line and takes either the PID of the program you want to sample from or the command line of the python program you want to run. py-spy has three subcommands: record, top and dump:

record

py-spy supports recording profiles to a file using the record command. For example, you can generate a flame graph of your python process by going:

py-spy record -o profile.svg --pid 12345
# OR
py-spy record -o profile.svg -- python myprogram.py
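
The other two subcommands follow the same pattern (PID 12345 is hypothetical):

py-spy top --pid 12345    # live, top-like view of where the program spends its time
py-spy dump --pid 12345   # print the current call stack of every thread once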

View on GitHub


12.  Pyflame

Pyflame is a high performance profiling tool that generates flame graphs for Python. Pyflame is implemented in C++, and uses the Linux ptrace(2) system call to collect profiling information. It can take snapshots of the Python call stack without explicit instrumentation, meaning you can profile a program without modifying its source code. Pyflame is capable of profiling embedded Python interpreters like uWSGI. It fully supports profiling multi-threaded Python programs.

Pyflame usually introduces significantly less overhead than the builtin profile (or cProfile) modules, and emits richer profiling data. The profiling overhead is low enough that you can use it to profile live processes in production.

Quickstart

Building And Installing

For Debian/Ubuntu, install the following:

# Install build dependencies on Debian or Ubuntu.
sudo apt-get install autoconf automake autotools-dev g++ pkg-config python-dev python3-dev libtool make

Once you have the build dependencies installed:

./autogen.sh
./configure
make

The make command will produce an executable at src/pyflame that you can run and use.

Optionally, if you have virtualenv installed, you can test the executable you produced using make check.

Using Pyflame

The full documentation for using Pyflame is here. But here's a quick guide:

# Attach to PID 12345 and profile it for 1 second
pyflame -p 12345

# Attach to PID 768 and profile it for 5 seconds, sampling every 0.01 seconds
pyflame -s 5 -r 0.01 -p 768

# Run py.test against tests/, emitting sample data to prof.txt
pyflame -o prof.txt -t py.test tests/

In all of these cases you will get flame graph data on stdout (or to a file if you used -o). This data is in the format expected by flamegraph.pl, which you can find here.

View on GitHub


13.  vprof

vprof is a Python package providing rich and interactive visualizations for various Python program characteristics such as running time and memory usage. It supports Python 3.4+ and is distributed under the BSD license.

The project is in active development and some of its features might not work as expected.

Installation

vprof can be installed from PyPI

pip install vprof

To build vprof from sources, clone this repository and execute

python3 setup.py deps_install && python3 setup.py build_ui && python3 setup.py install

To install just vprof dependencies, run

python3 setup.py deps_install

Usage

vprof -c <config> <src>

<config> is a combination of supported modes:

  • c - CPU flame graph ⚠️ Not available for Windows (#62)

Shows CPU flame graph for <src>.

  • p - profiler

Runs built-in Python profiler on <src> and displays results.

  • m - memory graph

Shows objects that are tracked by CPython GC and left in memory after code execution. Also shows process memory usage after execution of each line of <src>.

  • h - code heatmap

Displays all executed code of <src> with line run times and execution counts.
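
Modes can be combined in a single run; for example (my_script.py is a hypothetical script):

vprof -c cmh my_script.py   # CPU flame graph + memory graph + code heatmap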

View on GitHub


14.  Django Debug Toolbar

The Django Debug Toolbar is a configurable set of panels that display various debug information about the current request/response and when clicked, display more details about the panel's content.


Here's a screenshot of the toolbar in action:

Django Debug Toolbar screenshot

In addition to the built-in panels, a number of third-party panels are contributed by the community.

The current stable version of the Debug Toolbar is 3.6.0. It works on Django ≥ 3.2.4.
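
A minimal configuration sketch (assuming a default Django project with DEBUG = True; consult the toolbar's docs for the full setup):

# settings.py
INSTALLED_APPS = [
    # ...
    "debug_toolbar",
]

MIDDLEWARE = [
    # ...
    "debug_toolbar.middleware.DebugToolbarMiddleware",
]

INTERNAL_IPS = ["127.0.0.1"]  # the toolbar only renders for these client IPs

# urls.py
from django.urls import include, path

urlpatterns = [
    # ...
    path("__debug__/", include("debug_toolbar.urls")),
]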

View on GitHub


15.  django-devserver

A drop-in replacement for Django's built-in runserver command. Features include:

  • An extendable interface for handling things such as real-time logging.
  • Integration with the werkzeug interactive debugger.
  • Threaded (default) and multi-process development servers.
  • Ability to specify a WSGI application as your target environment.

Note

django-devserver works on Django 1.3 and newer

Installation

To install the latest stable version:

pip install git+git://github.com/dcramer/django-devserver#egg=django-devserver

django-devserver has some optional dependencies, which we highly recommend installing.

  • pip install sqlparse -- pretty SQL formatting
  • pip install werkzeug -- interactive debugger
  • pip install guppy -- tracks memory usage (required for MemoryUseModule)
  • pip install line_profiler -- does line-by-line profiling (required for LineProfilerModule)

You will need to include devserver in your INSTALLED_APPS:

INSTALLED_APPS = (
    ...
    'devserver',
)

If you're using django.contrib.staticfiles or any other apps with a runserver management command, make sure to put devserver above any of them (or below, for Django<1.7). Otherwise devserver will log an error and fail to work properly.

View on GitHub


16.  Flask Debug-toolbar

This is a port of the excellent django-debug-toolbar for Flask applications.

Installation

Installing is simple with pip:

$ pip install flask-debugtoolbar

Usage

Setting up the debug toolbar is simple:

from flask import Flask
from flask_debugtoolbar import DebugToolbarExtension

app = Flask(__name__)

# the toolbar is only enabled in debug mode:
app.debug = True

# set a 'SECRET_KEY' to enable the Flask session cookies
app.config['SECRET_KEY'] = '<replace with a secret key>'

toolbar = DebugToolbarExtension(app)

The toolbar will automatically be injected into Jinja templates when debug mode is on. In production, setting app.debug = False will disable the toolbar.

View on GitHub


17.  IceCream

Do you ever use print() or log() to debug your code? Of course you do. IceCream, or ic for short, makes print debugging a little sweeter.

ic() is like print(), but better:

  1. It prints both expressions/variable names and their values.
  2. It's 40% faster to type.
  3. Data structures are pretty printed.
  4. Output is syntax highlighted.
  5. It optionally includes program context: filename, line number, and parent function.

IceCream is well tested, permissively licensed, and supports Python 2, Python 3, PyPy2, and PyPy3. (Python 3.11 support is forthcoming.)

Inspect Variables

Have you ever printed variables or expressions to debug your program? If you've ever typed something like

print(foo('123'))

or the more thorough

print("foo('123')", foo('123'))

then ic() will put a smile on your face. With arguments, ic() inspects itself and prints both its own arguments and the values of those arguments.

from icecream import ic

def foo(i):
    return i + 333

ic(foo(123))

Prints

ic| foo(123): 456

Similarly,

d = {'key': {1: 'one'}}
ic(d['key'][1])

class klass():
    attr = 'yep'
ic(klass.attr)

Prints

ic| d['key'][1]: 'one'
ic| klass.attr: 'yep'

Just give ic() a variable or expression and you're done. Easy.

View on GitHub


18.  pyelftools

pyelftools is a pure-Python library for parsing and analyzing ELF files and DWARF debugging information. See the User's guide for more details.

Pre-requisites

To run pyelftools, you only need Python 3. For hacking on pyelftools the requirements are a bit stricter; please see the hacking guide.

Installing

pyelftools can be installed from PyPI (Python package index):

> pip install pyelftools

Alternatively, you can download the source distribution for the most recent and historic versions from the Downloads tab on the pyelftools project page (by going to Tags). Then, you can install from source, as usual:

> python setup.py install

Since pyelftools is a work in progress, it's recommended to have the most recent version of the code. This can be done by downloading the master zip file or just cloning the Git repository.

Since pyelftools has no external dependencies, it's also easy to use it without installing, by locally adjusting PYTHONPATH.
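
A minimal sketch of reading an ELF file (using /bin/ls as an arbitrary example binary):

from elftools.elf.elffile import ELFFile

with open('/bin/ls', 'rb') as f:
    elf = ELFFile(f)
    # list the names of all sections in the binary
    for section in elf.iter_sections():
        print(section.name)
    # check whether DWARF debugging information is present
    print('has DWARF info:', elf.has_dwarf_info())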

View on GitHub


FAQ about Python debugging tools

  • How many types of debugging are in Python?

Debugging in any programming language typically involves two types of errors: syntax and logical. Syntax errors are those where the programming language's commands are not interpreted by the compiler or interpreter because of a problem with how the program is written.
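
For example:

# Syntax error: rejected by the interpreter before the program ever runs
#   print("hello"        <- missing closing parenthesis

# Logical error: runs without complaint but computes the wrong result
def average(values):
    return sum(values) / (len(values) + 1)   # off-by-one bug in the divisor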

  • Best Debugging Tools include:

Chrome DevTools, Progress Telerik Fiddler, GDB (GNU Debugger), Data Display Debugger, SonarLint, Froglogic Squish, and TotalView HPC Debugging Software.

  • Why is it called debugging?

The terms "bug" and "debugging" are popularly attributed to Admiral Grace Hopper in the 1940s. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay and thereby impeding operation, whereupon she remarked that they were "debugging" the system.

  • Why do we need debugging?

Debugging is important because it allows software engineers and developers to fix errors in a program before releasing it to the public. It's a complementary process to testing, which involves learning how an error affects a program overall.



#python 


Reform: Form Objects Decoupled From Models In Ruby

Reform

Form objects decoupled from your models.

Reform gives you a form object with validations and nested setup of models. It is completely framework-agnostic and doesn't care about your database.

Although reform can be used in any Ruby framework, it comes with Rails support, works with simple_form and other form gems, allows nesting forms to implement has_one and has_many relationships, can compose a form from multiple objects and gives you coercion.

Full Documentation

Reform is part of the Trailblazer framework. Full documentation is available on the project site.

Reform 2.2

Temporary note: Reform 2.2 does not automatically load Rails files anymore (e.g. ActiveModel::Validations). You need the reform-rails gem, see Installation.

Defining Forms

Forms are defined in separate classes. Often, these classes partially map to a model.

class AlbumForm < Reform::Form
  property :title
  validates :title, presence: true
end

Fields are declared using ::property. Validations work exactly as you know them from Rails or other frameworks. Note that validations no longer go into the model.

The API

Forms have a ridiculously simple API with only a handful of public methods.

  1. #initialize always requires a model that the form represents.
  2. #validate(params) updates the form's fields with the input data (only the form, not the model) and then runs all validations. The return value is the boolean result of the validations.
  3. #errors returns validation messages in a classic ActiveModel style.
  4. #sync writes form data back to the model. This will only use setter methods on the model(s).
  5. #save (optional) will call #save on the model and nested models. Note that this implies a #sync call.
  6. #prepopulate! (optional) will run pre-population hooks to "fill out" your form before rendering.

In addition to the main API, forms expose accessors to the defined properties. This is used for rendering or manual operations.

Setup

In your controller or operation you create a form instance and pass in the models you want to work on.

class AlbumsController
  def new
    @form = AlbumForm.new(Album.new)
  end
end

This will also work as an editing form with an existing album.

def edit
  @form = AlbumForm.new(Album.find(1))
end

Reform will read property values from the model in setup. In our example, the AlbumForm will call album.title to populate the title field.

Rendering Forms

Your @form is now ready to be rendered, either do it yourself or use something like Rails' #form_for, simple_form or formtastic.

= form_for @form do |f|
  = f.input :title

Nested forms and collections can be easily rendered with fields_for, etc. Note that you no longer pass the model to the form builder, but the Reform instance.

Optionally, you might want to use the #prepopulate! method to pre-populate fields and prepare the form for rendering.

Validation

After form submission, you need to validate the input.

class SongsController
  def create
    @form = SongForm.new(Song.new)

    #=> params: {song: {title: "Rio", length: "366"}}

    if @form.validate(params[:song])

The #validate method first updates the values of the form - the underlying model is still treated as immutable and remains unchanged. It then runs all validations you provided in the form.

It's the only entry point for updating the form. This is per design, as separating writing and validation doesn't make sense for a form.

This allows rendering the form after validate with the data that has been submitted. However, don't get confused, the model's values are still the old, original values and are only changed after a #save or #sync operation.

Syncing Back

After validation, you have two choices: either call #save and let Reform sort out the rest. Or call #sync, which will write all the properties back to the model. In a nested form, this works recursively, of course.

It's then up to you what to do with the updated models - they're still unsaved.

Saving Forms

The easiest way to save the data is to call #save on the form.

if @form.validate(params[:song])
  @form.save  #=> populates album with incoming data
              #   by calling @form.album.title=.
else
  # handle validation errors.
end

This will sync the data to the model and then call album.save.

Sometimes, you need to do saving manually.

Default values

Reform allows default values to be provided for properties.

class AlbumForm < Reform::Form
  property :price_in_cents, default: 9_95
end

Saving Forms Manually

Calling #save with a block will provide a nested hash of the form's properties and values. This does not call #save on the models and allows you to implement the saving yourself.

The block parameter is a nested hash of the form input.

  @form.save do |hash|
    hash      #=> {title: "Greatest Hits"}
    Album.create(hash)
  end

You can always access the form's model. This is helpful when you were using populators to set up objects when validating.

  @form.save do |hash|
    album = @form.model

    album.update_attributes(hash[:album])
  end

Nesting

Reform provides support for nested objects. Let's say the Album model keeps some associations.

class Album < ActiveRecord::Base
  has_one  :artist
  has_many :songs
end

The implementation details do not really matter here; as long as your album exposes readers and writers like Album#artist and Album#songs, you can define nested forms.

class AlbumForm < Reform::Form
  property :title
  validates :title, presence: true

  property :artist do
    property :full_name
    validates :full_name, presence: true
  end

  collection :songs do
    property :name
  end
end

You can also reuse an existing form from elsewhere using :form.

property :artist, form: ArtistForm

Nested Setup

Reform will wrap defined nested objects in their own forms. This happens automatically when instantiating the form.

album.songs #=> [<Song name:"Run To The Hills">]

form = AlbumForm.new(album)
form.songs[0] #=> <SongForm model: <Song name:"Run To The Hills">>
form.songs[0].name #=> "Run To The Hills"

Nested Rendering

When rendering a nested form you can use the form's readers to access the nested forms.

= text_field :title,         @form.title
= text_field "artist[name]", @form.artist.name

Or use something like #fields_for in a Rails environment.

= form_for @form do |f|
  = f.text_field :title

  = f.fields_for :artist do |a|
    = a.text_field :name

Nested Processing

validate will assign values to the nested forms. sync and save work analogously to the non-nested form, just in a recursive way.

The block form of #save would give you the following data.

@form.save do |nested|
  nested #=> {title:  "Greatest Hits",
         #    artist: {name: "Duran Duran"},
         #    songs: [{title: "Hungry Like The Wolf"},
         #            {title: "Last Chance On The Stairways"}]
         #   }
  end

The manual saving with block is not encouraged. You should rather check the Disposable docs to find out how to implement your manual tweak with the official API.

Populating Forms

Very often, you need to give Reform some information about how to create or find nested objects when validating. This directive is called a populator and is documented here.

Installation

Add this line to your Gemfile:

gem "reform"

Reform works fine with Rails 3.1-5.0. However, inheritance of validations with ActiveModel::Validations is broken in Rails 3.2 and 4.0.

Since Reform 2.2, you have to add the reform-rails gem to your Gemfile to automatically load ActiveModel/Rails files.

gem "reform-rails"

Since Reform 2.0 you need to specify which validation backend you want to use (unless you're in a Rails environment where ActiveModel will be used).

To use ActiveModel (not recommended, as it is very outdated).

require "reform/form/active_model/validations"
Reform::Form.class_eval do
  include Reform::Form::ActiveModel::Validations
end

To use dry-validation (recommended).

require "reform/form/dry"
Reform::Form.class_eval do
  feature Reform::Form::Dry
end

Put this in an initializer or on top of your script.

Compositions

Reform allows you to map multiple models to one form. The complete documentation is here; in short, this is how it works.

class AlbumForm < Reform::Form
  include Composition

  property :id,    on: :album
  property :title, on: :album
  property :songs, on: :cd
  property :cd_id, on: :cd, from: :id
end

When initializing a composition, you have to pass a hash that contains the composees.

AlbumForm.new(album: album, cd: CD.find(1))

More

Reform comes with many more optional features, like hash fields, coercion, virtual fields, and so on. Check the full documentation here.

Reform is part of the Trailblazer project. Please buy my book to support the development and learn everything about Reform - there's two chapters dedicated to Reform!

Security And Strong_parameters

By explicitly defining the form layout using ::property there is no more need for protecting against unwanted input. strong_parameters or attr_accessible become obsolete. Reform will simply ignore undefined incoming parameters.

This is not Reform 1.x!

Temporary note: This is the README and API for Reform 2. On the public API, only a few tiny things have changed. Here are the Reform 1.2 docs.

Anyway, please upgrade and report problems and do not simply assume that we will magically find out what needs to get fixed. When in trouble, join us on Gitter.

Full documentation for Reform is available online, or support us and grab the Trailblazer book. There is an Upgrading Guide to help you migrate through versions.

Attributions!!!

Great thanks to Blake Education for giving us the freedom and time to develop this project in 2013 while working on their project.


Author: trailblazer
Source code: https://github.com/trailblazer/reform
License:  MIT license

#ruby  #ruby-on-rails