Jacky Hoeger

Intersection of Two Sets of Coordinates, Sort Colors Using Python OOP

Find the Intersection of Two Sets of Coordinates and Sort By Colors Using Python OOP
Solving two programming problems: one with a sorting algorithm and one with set intersection.
This article is about some programming exercises. If you are learning data structures and OOP in Python, it may be helpful for you.
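
As a taste of the two exercises, here is a minimal Python OOP sketch, assuming the coordinates are (x, y) tuples and the colors are plain strings (the class names and layout are illustrative, not the article's actual code):

class CoordinateSet:
    """Wraps a set of (x, y) tuples and supports intersection."""

    def __init__(self, points):
        self.points = set(points)

    def intersection(self, other):
        # Python's built-in set intersection does the heavy lifting.
        return CoordinateSet(self.points & other.points)


class ColorSorter:
    """Sorts a list of color names with a simple selection sort."""

    def __init__(self, colors):
        self.colors = list(colors)

    def sort(self):
        # Selection sort: repeatedly move the smallest remaining item forward.
        for i in range(len(self.colors)):
            smallest = min(range(i, len(self.colors)), key=self.colors.__getitem__)
            self.colors[i], self.colors[smallest] = self.colors[smallest], self.colors[i]
        return self.colors


a = CoordinateSet([(0, 0), (1, 2), (3, 4)])
b = CoordinateSet([(1, 2), (5, 6), (3, 4)])
print(a.intersection(b).points)                      # {(1, 2), (3, 4)} (order may vary)
print(ColorSorter(["red", "green", "blue"]).sort())  # ['blue', 'green', 'red']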

#algorithms #python-programming #sorting-algorithms #python #programming

Hermann Frami

A Simple Wrapper Around Amplify AppSync Simulator

This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.

Install

npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator

Usage

This plugin relies on your serverless yml file and on the serverless-offline plugin.

plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline

Note: Order is important: serverless-appsync-simulator must go before serverless-offline

To start the simulator, run the following command:

sls offline start

You should see in the logs something like:

...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...

Configuration

Put options under custom.appsync-simulator in your serverless.yml file

| option | default | description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as the authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing an invoke url for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql \| postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |

Example:

custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'

Hot-reloading

By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.

Hot-reloading relies on watchman. Make sure it is installed on your system.

You can change the files being watched with the watch option, which is then passed to watchman as the match expression.

e.g.

custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql"                                 # => string like this is equivalent to `["match", "*.graphql"]`

Or you can opt out by leaving an empty array or setting the option to false

Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.

Resource CloudFormation functions resolution

This plugin supports resolving some resources from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.

Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to the documentation

Basic usage

You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.

provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
      ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
    ...

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`

Override (or mock) values

Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.

In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

  • refMap takes a mapping of resource name to value pairs
  • getAttMap takes a mapping of resource name to attribute/values pairs
  • importValueMap takes a mapping of import name to values pairs

Example:

custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'https://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint

Key-value mock notation

In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.

This notation can be used with all mocks: refMap, getAttMap and importValueMap.

provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'

Limitations

This plugin only tries to resolve the following parts of the yml tree:

  • provider.environment
  • functions[*].environment
  • custom.appSync

If you need others resolved, feel free to open an issue and explain your use case.

For now, the supported resources to be automatically resolved by Ref: are:

  • DynamoDB tables
  • S3 Buckets

Feel free to open a PR or an issue to extend them as well.

External functions

When a function is not defined within the current serverless file, you can still call it by providing an invoke url which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".

custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post
      addPost:
        url: https://jsonplaceholder.typicode.com/posts
        method: post

Supported Resolver types

This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.

From AWS Amplify:

  • NONE
  • AWS_LAMBDA
  • AMAZON_DYNAMODB
  • PIPELINE

Implemented by this plugin:

  • AMAZON_ELASTIC_SEARCH
  • HTTP
  • RELATIONAL_DATABASE

Relational Database

Sample VTL for a create mutation

#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #set( $discard = $cols.add("$toSnake") )
  #if( $util.isBoolean($ctx.args.input[$entry]) )
      #if( $ctx.args.input[$entry] )
        #set( $discard = $vals.add("1") )
      #else
        #set( $discard = $vals.add("0") )
      #end
  #else
      #set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
  #end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
  #set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
  #set( $colStr = "($colStr)" )
#end
{
  "version": "2018-05-29",
  "statements":   ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM    <name-of-table> ORDER BY id DESC LIMIT 1"]
}

Sample VTL for an update mutation

#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $cur = $ctx.args.input[$entry] )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #if( $util.isBoolean($cur) )
      #if( $cur )
        #set ( $cur = "1" )
      #else
        #set ( $cur = "0" )
      #end
  #end
  #if ( $util.isNullOrEmpty($update) )
      #set($update = "$toSnake$equals'$cur'" )
  #else
      #set($update = "$update,$toSnake$equals'$cur'" )
  #end
#end
{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}

Sample resolver for delete mutation

{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}

Sample mutation response VTL with support for handling AWSDateTime

#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
    #set ($index = $index + 1)
    #if ( $column["typeName"] == "timestamptz" )
        #set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
        #set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
        #set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
        $util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
    #end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
    #set ( $s = $mapKey.split("_") )
    #set ( $camelCase="" )
    #set ( $isFirst=true )
    #foreach($entry in $s)
        #if ( $isFirst )
          #set ( $first = $entry.substring(0,1) )
        #else
          #set ( $first = $entry.substring(0,1).toUpperCase() )
        #end
        #set ( $isFirst=false )
        #set ( $stringLength = $entry.length() )
        #set ( $remaining = $entry.substring(1, $stringLength) )
        #set ( $camelCase = "$camelCase$first$remaining" )
    #end
    $util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)

Using Variable Map

Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed.

null, true, and false values will be escaped properly.

{
  "version": "2018-05-29",
  "statements":   [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  ],
  "variableMap": {
    ":ID": $ctx.args.id,
##    ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
  }
}

Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator 
License: MIT License

#serverless #sync #graphql 

Chloe Butler

Pdf2gerb: Perl Script Converts PDF Files to Gerber format

pdf2gerb

Perl script converts PDF files to Gerber format

Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.


pdf2gerb_cfg.pm

#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)


##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
#traces:
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .012,
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .025,
    .030,  #heavy-current traces; be careful with these ones!
    .040,
    .050,
    .060,
    .080,
    .100,
    .120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINX => 0,
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    CIRCLE_ADJUST_MAXY => 0,
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
    
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, r, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 points per inch / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#############################################################################################
#junk/experiment:

#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html

#my $caller = "pdf2gerb::";

#sub cfg
#{
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;
#}

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code: https://github.com/swannman/pdf2gerb

License: GPL-3.0 license

#perl 

Flutter Dev

How to Add Splash Screen in Android and iOS with Flutter

When your app is opened, there is a brief time while the native app loads Flutter. By default, during this time, the native app displays a white splash screen. This package automatically generates iOS, Android, and Web-native code for customizing this native splash screen background color and splash image. Supports dark mode, full screen, and platform-specific options.

What's New

[BETA] Support for flavors is in beta. Currently only Android and iOS are supported. See instructions below.

You can now keep the splash screen up while your app initializes! No need for a secondary splash screen anymore. Just use the preserve and remove methods together to remove the splash screen after your initialization is complete. See details below.

Usage

Would you prefer a video tutorial instead? Check out Johannes Milke's tutorial.

First, add flutter_native_splash as a dependency in your pubspec.yaml file.

dependencies:
  flutter_native_splash: ^2.2.19

Don't forget to flutter pub get.

1. Setting the splash screen

 

Customize the following settings and add to your project's pubspec.yaml file or place in a new file in your root project folder named flutter_native_splash.yaml.

flutter_native_splash:
  # This package generates native code to customize Flutter's default white native splash screen
  # with background color and splash image.
  # Customize the parameters below, and run the following command in the terminal:
  # flutter pub run flutter_native_splash:create
  # To restore Flutter's default white splash screen, run the following command in the terminal:
  # flutter pub run flutter_native_splash:remove

  # color or background_image is the only required parameter.  Use color to set the background
  # of your splash screen to a solid color.  Use background_image to set the background of your
  # splash screen to a png image.  This is useful for gradients. The image will be stretched to the
  # size of the app. Only one parameter can be used, color and background_image cannot both be set.
  color: "#42a5f5"
  #background_image: "assets/background.png"

  # Optional parameters are listed below.  To enable a parameter, uncomment the line by removing
  # the leading # character.

  # The image parameter allows you to specify an image used in the splash screen.  It must be a
  # png file and should be sized for 4x pixel density.
  #image: assets/splash.png

  # The branding property allows you to specify an image used as branding in the splash screen.
  # It must be a png file. It is supported for Android, iOS and the Web.  For Android 12,
  # see the Android 12 section below.
  #branding: assets/dart.png

  # To position the branding image at the bottom of the screen you can use bottom, bottomRight,
  # and bottomLeft. The default value is bottom if not specified or if something else is specified.
  #branding_mode: bottom

  # The color_dark, background_image_dark, image_dark, branding_dark are parameters that set the background
  # and image when the device is in dark mode. If they are not specified, the app will use the
  # parameters from above. If the image_dark parameter is specified, color_dark or
  # background_image_dark must be specified.  color_dark and background_image_dark cannot both be
  # set.
  #color_dark: "#042a49"
  #background_image_dark: "assets/dark-background.png"
  #image_dark: assets/splash-invert.png
  #branding_dark: assets/dart_dark.png

  # Android 12 handles the splash screen differently than previous versions.  Please visit
  # https://developer.android.com/guide/topics/ui/splash-screen
  # The following are Android 12 specific parameters.
  android_12:
    # The image parameter sets the splash screen icon image.  If this parameter is not specified,
    # the app's launcher icon will be used instead.
    # Please note that the splash screen will be clipped to a circle on the center of the screen.
    # App icon with an icon background: This should be 960×960 pixels, and fit within a circle
    # 640 pixels in diameter.
    # App icon without an icon background: This should be 1152×1152 pixels, and fit within a circle
    # 768 pixels in diameter.
    #image: assets/android12splash.png

    # Splash screen background color.
    #color: "#42a5f5"

    # App icon background color.
    #icon_background_color: "#111111"

    # The branding property allows you to specify an image used as branding in the splash screen.
    #branding: assets/dart.png

    # The image_dark, color_dark, icon_background_color_dark, and branding_dark set values that
    # apply when the device is in dark mode. If they are not specified, the app will use the
    # parameters from above.
    #image_dark: assets/android12splash-invert.png
    #color_dark: "#042a49"
    #icon_background_color_dark: "#eeeeee"

  # The android, ios and web parameters can be used to disable generating a splash screen on a given
  # platform.
  #android: false
  #ios: false
  #web: false

  # Platform specific images can be specified with the following parameters, which will override
  # the respective parameter.  You may specify all, selected, or none of these parameters:
  #color_android: "#42a5f5"
  #color_dark_android: "#042a49"
  #color_ios: "#42a5f5"
  #color_dark_ios: "#042a49"
  #color_web: "#42a5f5"
  #color_dark_web: "#042a49"
  #image_android: assets/splash-android.png
  #image_dark_android: assets/splash-invert-android.png
  #image_ios: assets/splash-ios.png
  #image_dark_ios: assets/splash-invert-ios.png
  #image_web: assets/splash-web.png
  #image_dark_web: assets/splash-invert-web.png
  #background_image_android: "assets/background-android.png"
  #background_image_dark_android: "assets/dark-background-android.png"
  #background_image_ios: "assets/background-ios.png"
  #background_image_dark_ios: "assets/dark-background-ios.png"
  #background_image_web: "assets/background-web.png"
  #background_image_dark_web: "assets/dark-background-web.png"
  #branding_android: assets/brand-android.png
  #branding_dark_android: assets/dart_dark-android.png
  #branding_ios: assets/brand-ios.png
  #branding_dark_ios: assets/dart_dark-ios.png

  # The position of the splash image can be set with android_gravity, ios_content_mode, and
  # web_image_mode parameters.  All default to center.
  #
  # android_gravity can be one of the following Android Gravity (see
  # https://developer.android.com/reference/android/view/Gravity): bottom, center,
  # center_horizontal, center_vertical, clip_horizontal, clip_vertical, end, fill, fill_horizontal,
  # fill_vertical, left, right, start, or top.
  #android_gravity: center
  #
  # ios_content_mode can be one of the following iOS UIView.ContentMode (see
  # https://developer.apple.com/documentation/uikit/uiview/contentmode): scaleToFill,
  # scaleAspectFit, scaleAspectFill, center, top, bottom, left, right, topLeft, topRight,
  # bottomLeft, or bottomRight.
  #ios_content_mode: center
  #
  # web_image_mode can be one of the following modes: center, contain, stretch, and cover.
  #web_image_mode: center

  # The screen orientation can be set in Android with the android_screen_orientation parameter.
  # Valid parameters can be found here:
  # https://developer.android.com/guide/topics/manifest/activity-element#screen
  #android_screen_orientation: sensorLandscape

  # To hide the notification bar, use the fullscreen parameter.  Has no effect in web since web
  # has no notification bar.  Defaults to false.
  # NOTE: Unlike Android, iOS will not automatically show the notification bar when the app loads.
  #       To show the notification bar, add the following code to your Flutter app:
  #       WidgetsFlutterBinding.ensureInitialized();
  #       SystemChrome.setEnabledSystemUIOverlays([SystemUiOverlay.bottom, SystemUiOverlay.top]);
  #fullscreen: true

  # If you have changed the name(s) of your info.plist file(s), you can specify the filename(s)
  # with the info_plist_files parameter.  Remove only the # characters in the three lines below,
  # do not remove any spaces:
  #info_plist_files:
  #  - 'ios/Runner/Info-Debug.plist'
  #  - 'ios/Runner/Info-Release.plist'

2. Run the package

After adding your settings, run the following command in the terminal:

flutter pub run flutter_native_splash:create

When the package finishes running, your splash screen is ready.

To specify the YAML file location just add --path with the command in the terminal:

flutter pub run flutter_native_splash:create --path=path/to/my/file.yaml

3. Set up app initialization (optional)

By default, the splash screen will be removed when Flutter has drawn the first frame. If you would like the splash screen to remain while your app initializes, you can use the preserve() and remove() methods together. Pass the preserve() method the value returned from WidgetsFlutterBinding.ensureInitialized() to keep the splash on screen. Later, when your app has initialized, make a call to remove() to remove the splash screen.

import 'package:flutter_native_splash/flutter_native_splash.dart';

void main() {
  WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
  runApp(const MyApp());
}

// whenever your initialization is completed, remove the splash screen:
    FlutterNativeSplash.remove();

NOTE: If you do not need to use the preserve() and remove() methods, you can place the flutter_native_splash dependency in the dev_dependencies section of pubspec.yaml.

4. Support the package (optional)

If you find this package useful, you can support it for free by giving it a thumbs up at the top of this page. Here's another option to support the package:

Android 12+ Support

Android 12 has a new method of adding splash screens, which consists of a window background, icon, and the icon background. Note that a background image is not supported.

Be aware of the following considerations regarding these elements:

1: image parameter. By default, the launcher icon is used:

  • App icon without an icon background, as shown on the left: This should be 1152×1152 pixels, and fit within a circle 768 pixels in diameter.
  • App icon with an icon background, as shown on the right: This should be 960×960 pixels, and fit within a circle 640 pixels in diameter.

2: icon_background_color is optional, and is useful if you need more contrast between the icon and the window background.

3: One-third of the foreground is masked.

4: color: the window background consists of a single opaque color.

PLEASE NOTE: The splash screen may not appear when you launch the app from Android Studio on API 31. However, it should appear when you launch by clicking on the launch icon in Android. This seems to be resolved in API 32+.

PLEASE NOTE: There are a number of reports that non-Google launchers do not display the launch image correctly. If the launch image does not display correctly, please try the Google launcher to confirm that this package is working.

PLEASE NOTE: The splash screen does not appear when you launch the app from a notification. Apparently this is the intended behavior on Android 12: core-splashscreen Icon not shown when cold launched from notification.

Flavor Support

If you have a project setup that contains multiple flavors or environments and you created more than one flavor, this feature is for you.

Instead of maintaining multiple files and copy/pasting images, you can now, using this tool, create different splash screens for different environments.

Pre-requirements

In order to use the new feature and generate the desired splash images for your app, a couple of changes are required.

If you want to generate just one flavor and one file, you would use either option as described in Step 1. But in order to set up the flavors, you will be required to move all your setup values to separate flutter_native_splash yaml files, each with a flavor suffix.

Let's assume for the rest of the setup that you have 3 different flavors: Production, Acceptance, and Development.

The first thing you will need to do is to create a different setup file for all 3 flavors, with a suffix, like so:

flutter_native_splash-production.yaml
flutter_native_splash-acceptance.yaml
flutter_native_splash-development.yaml

You would set up those 3 files the same way as you would a single one, but with different assets depending on which environment you are generating. For example (Note: these are just examples; you can use whatever setup you need for your project that is already supported by the package):

# flutter_native_splash-development.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-development.png
  branding: assets/branding-development.png
  color_dark: "#121212"
  image_dark: assets/logo-development.png
  branding_dark: assets/branding-development.png

  android_12:
    image: assets/logo-development.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-development.png
    icon_background_color_dark: "#121212"

  web: false

# flutter_native_splash-acceptance.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-acceptance.png
  branding: assets/branding-acceptance.png
  color_dark: "#121212"
  image_dark: assets/logo-acceptance.png
  branding_dark: assets/branding-acceptance.png

  android_12:
    image: assets/logo-acceptance.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-acceptance.png
    icon_background_color_dark: "#121212"

  web: false

# flutter_native_splash-production.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-production.png
  branding: assets/branding-production.png
  color_dark: "#121212"
  image_dark: assets/logo-production.png
  branding_dark: assets/branding-production.png

  android_12:
    image: assets/logo-production.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-production.png
    icon_background_color_dark: "#121212"

  web: false

Great, now comes the fun part: running the new command!

The new command is:

# If you have a flavor called production you would do this:
flutter pub run flutter_native_splash:create --flavor production

# For a flavor with the name staging you would provide its name like so:
flutter pub run flutter_native_splash:create --flavor staging

# And if you have a local version for devs you could do that:
flutter pub run flutter_native_splash:create --flavor development

Android setup

You're done! No, really, Android doesn't need any additional setup.

Note: If it didn't work, please make sure that your flavors are named the same as your config files, otherwise the setup will not work.

iOS setup

iOS is a bit tricky, so hang tight; it might look scary, but most of the steps are just a single click, explained as much as possible to lower the possibility of mistakes.

When you run the new command, you will need to open Xcode and follow the steps below:

Assumption

  • In order for this setup to work, you should already have 3 different schemes set up: production, acceptance and development.

Preparation

  • Open the iOS Flutter project in Xcode (open the Runner.xcworkspace)
  • Find the newly created Storyboard files at the same location where the original is: {project root}/ios/Runner/Base.lproj
  • Select all of them and drag and drop them into Xcode, directly to the left-hand side where the current LaunchScreen.storyboard is already located
  • After you drop your files there, Xcode will ask you to link them; make sure you select 'Copy if needed'
  • This part is done; you have linked the newly created storyboards in your project.

Xcode

Xcode still doesn't know how to use them, so we need to specify, for all the current flavors (schemes), which file to use, and then use that value inside the Info.plist file.

  • Open the iOS Flutter project in Xcode (open the Runner.xcworkspace)
  • Click the Runner project in the top left corner (usually the first item in the list)
  • In the middle part of the screen, on the left side, select the Runner target
  • On the top part of the screen select Build Settings
  • Make sure that 'All' and 'Combined' are selected
  • Next to 'Combined' you have a '+' button, press it and select 'Add User-Defined Setting'
  • Once you do that, Xcode will create a new variable for you to name. The suggestion is to name it LAUNCH_SCREEN_STORYBOARD
  • Once you do that, you will have the option to define a specific name for each flavor (scheme) that you have defined in the project. Make sure that you input the exact name of the LaunchScreen.storyboard that was created by this tool
    • Example: If you have a flavor Development, there is a Storyboard created named LaunchScreenDevelopment.storyboard; please add that name (without the .storyboard part) to the variable value next to the flavor value
  • After you finish with that, you need to update the Info.plist file to link the newly created variable so that it's used correctly
  • Open the Info.plist file
  • Find the entry called 'Launch screen interface file base name'
  • The default value is 'LaunchScreen'; change that to the variable name that you created previously. If you followed these steps exactly, it would be LAUNCH_SCREEN_STORYBOARD, so input $(LAUNCH_SCREEN_STORYBOARD)
  • And you're done!

Congrats, you finished your setup for multiple flavors!

FAQs

I got the error "A splash screen was provided to Flutter, but this is deprecated."

This message is not related to this package but is related to a change in how Flutter handles splash screens in Flutter 2.5. It is caused by having the following code in your android/app/src/main/AndroidManifest.xml, which was included by default in previous versions of Flutter:

<meta-data
 android:name="io.flutter.embedding.android.SplashScreenDrawable"
 android:resource="@drawable/launch_background"
 />

The solution is to remove the above code. Note that this will also remove the fade effect between the native splash screen and your app.

Are animations/lottie/GIF images supported?

Not at this time. PRs are always welcome!

I got the error AAPT: error: style attribute 'android:attr/windowSplashScreenBackground' not found

This attribute is only found in Android 12, so if you are getting this error, it means your project is not fully set up for Android 12. Did you update your app's build configuration?

I see a flash of the wrong splash screen on iOS

This is caused by an iOS splash caching bug, which can be solved by uninstalling your app, powering off your device, powering it back on, and then reinstalling.

I see a white screen between splash screen and app

  1. It may be caused by an iOS splash caching bug, which can be solved by uninstalling your app, powering off your device, powering it back on, and then reinstalling.
  2. It may be caused by the delay due to initialization in your app. To solve this, put any initialization code in the removeAfter method.

Can I base light/dark mode on app settings?

No. This package creates a splash screen that is displayed before Flutter is loaded. Because of this, when the splash screen loads, internal app settings are not available to the splash screen. Unfortunately, this means that it is impossible to control light/dark settings of the splash from app settings.

Notes

If the splash screen was not updated correctly on iOS or if you experience a white screen before the splash screen, run flutter clean and recompile your app. If that does not solve the problem, delete your app, power down the device, power up the device, install and launch the app as per this StackOverflow thread.

This package modifies launch_background.xml and styles.xml files on Android, LaunchScreen.storyboard and Info.plist on iOS, and index.html on Web. If you have modified these files manually, this plugin may not work properly. Please open an issue if you find any bugs.

How it works

Android

  • Your splash image will be resized to mdpi, hdpi, xhdpi, xxhdpi and xxxhdpi drawables.
  • An <item> tag containing a <bitmap> for your splash image drawable will be added in launch_background.xml
  • Background color will be added in colors.xml and referenced in launch_background.xml.
  • Code for full screen mode toggle will be added in styles.xml.
  • Dark mode variants are placed in drawable-night, values-night, etc. resource folders.

iOS

  • Your splash image will be resized to @3x and @2x images.
  • Color and image properties will be inserted in LaunchScreen.storyboard.
  • The background color is implemented by using a single-pixel png file and stretching it to fit the screen.
  • Code for hidden status bar toggle will be added in Info.plist.

Web

  • A web/splash folder will be created for splash screen images and CSS files.
  • Your splash image will be resized to 1x, 2x, 3x, and 4x sizes and placed in web/splash/img.
  • The splash style sheet will be added to the app's web/index.html, as well as the HTML for the splash pictures.

Acknowledgments

This package was originally created by Henrique Arthur and it is currently maintained by Jon Hanson.

Bugs or Requests

If you encounter any problems feel free to open an issue. If you feel the library is missing a feature, please raise a ticket. Pull requests are also welcome.


Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add flutter_native_splash

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  flutter_native_splash: ^2.2.19

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:flutter_native_splash/flutter_native_splash.dart';

example/lib/main.dart

import 'package:flutter/material.dart';
import 'package:flutter_native_splash/flutter_native_splash.dart';

void main() {
  WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
      ),
      home: const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.

  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  void initState() {
    super.initState();
    initialization();
  }

  void initialization() async {
    // This is where you can initialize the resources needed by your app while
    // the splash screen is displayed.  Remove the following example because
    // delaying the user experience is a bad design practice!
    // ignore_for_file: avoid_print
    print('ready in 3...');
    await Future.delayed(const Duration(seconds: 1));
    print('ready in 2...');
    await Future.delayed(const Duration(seconds: 1));
    print('ready in 1...');
    await Future.delayed(const Duration(seconds: 1));
    print('go!');
    FlutterNativeSplash.remove();
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    //
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          //
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          //
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineMedium,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: const Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

Download Details:
 

Author: jonbhanson
Official Website: https://github.com/jonbhanson/flutter_native_splash 
License: MIT license

#flutter #ios #android 

Anissa Barrows

What Is Face Recognition? Facial Recognition with Python and OpenCV

In this article, we will know what is face recognition and how is different from face detection. We will go briefly over the theory of face recognition and then jump on to the coding section. At the end of this article, you will be able to make a face recognition program for recognizing faces in images as well as on a live webcam feed.

What is Face Detection?

In computer vision, one essential problem we are trying to figure out is how to automatically detect objects in an image without human intervention. Face detection can be thought of as such a problem, where we detect human faces in an image. There may be slight differences between human faces, but overall, it is safe to say that there are certain features associated with all human faces. There are various face detection algorithms, but the Viola-Jones Algorithm is one of the oldest methods that is still used today, and we will use it later in the article. You can go through the Viola-Jones Algorithm after completing this article; I'll link it at the end.
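
To make this concrete, here is a minimal detection sketch using OpenCV (introduced later in this article) and its bundled Haar cascade, an implementation of the Viola-Jones approach; the image path is a placeholder:

import cv2

# Load the frontal-face Haar cascade that ships with OpenCV
# (a Viola-Jones style detector).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_faces.jpg", img)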

Face detection is usually the first step towards many face-related technologies, such as face recognition or verification. However, face detection can have very useful applications. The most successful application of face detection would probably be photo taking. When you take a photo of your friends, the face detection algorithm built into your digital camera detects where the faces are and adjusts the focus accordingly.

For a tutorial on Real-Time Face detection

What is Face Recognition?

Now that we are successful in making algorithms that can detect faces, can we also recognise whose faces they are?

Face recognition is a method of identifying or verifying the identity of an individual using their face. There are various algorithms that can do face recognition but their accuracy might vary. Here I am going to describe how we do face recognition using deep learning.

So now let us understand how we recognise faces using deep learning. We make use of face embedding in which each face is converted into a vector and this technique is called deep metric learning. Let me further divide this process into three simple steps for easy understanding:

Face Detection: The very first task we perform is detecting faces in the image or video stream. Now that we know the exact location/coordinates of the face, we extract it for further processing.
 

Feature Extraction: Now that we have cropped the face out of the image, we extract features from it. Here we are going to use face embeddings to extract the features out of the face. A neural network takes an image of the person’s face as input and outputs a vector which represents the most important features of a face. In machine learning, this vector is called embedding and thus we call this vector as face embedding. Now how does this help in recognizing faces of different persons? 
 

While training the neural network, the network learns to output similar vectors for faces that look similar. For example, if I have multiple images of my face taken over different timespans, of course some features of my face might change, but not to a great extent. So in this case, the vectors associated with the faces are similar, or in short, they are very close in the vector space. Take a look at the below diagram for a rough idea:

Now after training the network, the network learns to output vectors that are closer to each other(similar) for faces of the same person(looking similar). The above vectors now transform into:

We are not going to train such a network here, as it takes a significant amount of data and computation power. Instead, we will use a pre-trained network trained by Davis King on a dataset of roughly 3 million images. The network outputs a vector of 128 numbers that represents the most important features of a face.

Now that we know how this network works, let us see how we use it on our own data. We pass all the images in our dataset to this pre-trained network to get the respective embeddings, and we save those embeddings in a file for the next step.

Comparing faces: Now that we have a face embedding for every face in our data saved in a file, the next step is to recognise a new image that is not in our data. The first step is to compute the face embedding for the new image using the same network we used above, and then compare this embedding with the rest of the embeddings we have. We recognise the face if the generated embedding is close or similar to any other embedding, as in the example below.

Say we pass two images, one of Vladimir Putin and the other of George W. Bush, and suppose we did not save any embeddings for Putin but did save embeddings for Bush. When we compare the two new embeddings with the existing ones, the vector for Bush is close to the other face embeddings of Bush, whereas the face embedding of Putin is not close to any other embedding, so the program cannot recognise him.
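To make "close in the vector space" concrete, here is a minimal sketch of how two embeddings could be compared with a Euclidean distance and a cutoff; NumPy and the 0.6 threshold (which also happens to be face_recognition's default tolerance) are assumptions of this illustration, not part of the original article:

import numpy as np

def is_same_person(embedding_a, embedding_b, threshold=0.6):
    # Euclidean distance between two 128-d face embeddings;
    # a smaller distance means the faces look more alike
    return np.linalg.norm(embedding_a - embedding_b) <= threshold

# two hypothetical 128-d embeddings produced by the network
known_face = np.random.rand(128)
new_face = np.random.rand(128)
print(is_same_person(known_face, new_face))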

What is OpenCV

In the field of Artificial Intelligence, computer vision is one of the most interesting and challenging areas. Computer vision acts as a bridge between computer software and the visual world around us: it allows software to understand and learn from the visuals in its surroundings. For example: identifying a fruit based on its color, shape, and size. This task can be very easy for the human brain; in a computer vision pipeline, however, we first gather the data, then perform data processing, and then train the model to distinguish between fruits based on their size, shape, and color.

Currently, various packages exist for machine learning, deep learning, and computer vision tasks, and OpenCV is by far the most widely used library for such computer vision work. OpenCV is an open-source library. It has bindings for various programming languages, such as R and Python, and it runs on most platforms, including Windows, Linux, and macOS.

To learn more about how face recognition works in OpenCV, check out the free course on face recognition in OpenCV.

Advantages of OpenCV:

  • OpenCV is an open-source library and is free of cost.
  • Compared to other libraries, it is fast, since it is written in C/C++.
  • It works well on systems with less RAM.
  • It supports most operating systems, such as Windows, Linux, and macOS.

Installation: 

Here we will focus on installing OpenCV for Python only. We can install OpenCV using pip or conda (for an Anaconda environment).

  1. Using pip: 

With pip, OpenCV can be installed by running the following command in the command prompt:

pip install opencv-python

  2. Using conda:

If you are using an Anaconda environment, you can either execute the above command in the Anaconda prompt or run the following command instead:

conda install -c conda-forge opencv

Face Recognition using Python

In this section, we shall implement face recognition using OpenCV and Python. First, let us see which libraries we will need and how to install them:

  • OpenCV
  • dlib
  • face_recognition

OpenCV is an image and video processing library used for image and video analysis: facial detection, license plate reading, photo editing, advanced robotic vision, optical character recognition, and a whole lot more.
 

The dlib library, maintained by Davis King, contains an implementation of "deep metric learning", which is used to construct the face embeddings used for the actual recognition process.
 

The face_recognition library, created by Adam Geitgey, wraps around dlib's facial recognition functionality. It is super easy to work with, and we will be using it in our code. Remember to install dlib before you install face_recognition.
 

To install OpenCV, type in the command prompt:
 

pip install opencv-python

I have tried various ways to install dlib on Windows but the easiest of all of them is via Anaconda. First, install Anaconda (here is a guide to install it) and then use this command in your command prompt:
 

conda install -c conda-forge dlib

Next, to install face_recognition, type in the command prompt:

pip install face_recognition

Now that we have all the dependencies installed, let us start coding. We will create three files. The first will take our dataset and extract a face embedding for each face using dlib, then save these embeddings to a file.
 

In the second file we will compare new faces against the saved embeddings to recognise faces in images, and in the third we will do the same on a live webcam feed.
 

Extracting Features from Faces

First, you need to get a dataset, or even create one of your own. Just make sure to arrange all the images in folders, with each folder containing images of just one person.
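For example, a layout like the following would work; the folder and file names here are just placeholders:

Images/
    george_w_bush/
        img_01.jpg
        img_02.jpg
    vladimir_putin/
        img_01.jpg
        img_02.jpg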

Next, save the dataset in the same folder where you are going to create the script. Here is the code:


from imutils import paths
import face_recognition
import pickle
import cv2
import os

# get paths of each file in the folder named Images
# Images here contains my data (folders of various persons)
imagePaths = list(paths.list_images('Images'))
knownEncodings = []
knownNames = []

# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    name = imagePath.split(os.path.sep)[-2]
    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # use face_recognition to locate faces
    boxes = face_recognition.face_locations(rgb, model='hog')
    # compute the facial embedding for each face
    encodings = face_recognition.face_encodings(rgb, boxes)
    # loop over the encodings
    for encoding in encodings:
        knownEncodings.append(encoding)
        knownNames.append(name)

# save the encodings along with their names in a dictionary
data = {"encodings": knownEncodings, "names": knownNames}
# use pickle to save the data to a file for later use
f = open("face_enc", "wb")
f.write(pickle.dumps(data))
f.close()

Now that we have stored the embeddings in a file named "face_enc", we can use them to recognise faces in images or in a live video stream.
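As a quick sanity check, you can reload the file and count what was saved; a small sketch, assuming the file was written as above:

import pickle

# load the encodings saved by the previous script
data = pickle.loads(open("face_enc", "rb").read())
print(len(data["encodings"]), "embeddings for",
      len(set(data["names"])), "distinct people")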

Face Recognition in a Live Webcam Feed

Here is the script to recognise faces in a live webcam feed:


import face_recognition
import pickle
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade into the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved by the previous script
data = pickle.loads(open('face_enc', "rb").read())

print("Streaming started")
video_capture = cv2.VideoCapture(0)
# loop over frames from the video stream
while True:
    # grab the frame from the video stream
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)
    # convert the input frame from BGR to RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # compute the facial embeddings for each face in the input
    encodings = face_recognition.face_encodings(rgb)
    names = []
    # loop over the facial embeddings, in case
    # there are multiple faces in the frame
    for encoding in encodings:
        # compare each encoding with the known encodings in data["encodings"];
        # matches is a list of booleans: True for the embeddings it matches
        # closely and False for the rest
        matches = face_recognition.compare_faces(data["encodings"],
                                                 encoding)
        # default to "Unknown" if no encoding matches
        name = "Unknown"
        # check to see if we have found a match
        if True in matches:
            # find the positions at which we get True and store them
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}
            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                # look up the name at each index stored in matchedIdxs
                name = data["names"][i]
                # increase the count for the name we got
                counts[name] = counts.get(name, 0) + 1
            # pick the name with the highest count
            name = max(counts, key=counts.get)
        # update the list of names
        names.append(name)
    # loop over the recognized faces
    for ((x, y, w, h), name) in zip(faces, names):
        # draw the predicted face name on the image
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.75, (0, 255, 0), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()

https://www.youtube.com/watch?v=fLnGdkZxRkg

Although in the example above we used a haar cascade to detect faces, you can also use face_recognition.face_locations to detect faces, as we did in the first script.
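For instance, the detection step inside the loop could be swapped out with something like the sketch below. One assumption to keep in mind: face_recognition.face_locations returns boxes in (top, right, bottom, left) order, unlike the (x, y, w, h) boxes from the haar cascade, so the drawing code changes accordingly:

# inside the while loop, instead of the haar cascade:
boxes = face_recognition.face_locations(rgb, model='hog')
encodings = face_recognition.face_encodings(rgb, boxes)
# ...match names as before, then draw using the (top, right, bottom, left) boxes
for ((top, right, bottom, left), name) in zip(boxes, names):
    cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.putText(frame, name, (left, top), cv2.FONT_HERSHEY_SIMPLEX,
                0.75, (0, 255, 0), 2)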

Face Recognition in Images

The script for detecting and recognising faces in images is almost identical to what you saw above. Try it yourself, and if you get stuck, take a look at the code below:


import face_recognition
import pickle
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade into the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved by the previous script
data = pickle.loads(open('face_enc', "rb").read())

# find the path of the image you want to recognise faces in and pass it here
image = cv2.imread("Path-to-img")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert the image to grayscale for the haarcascade
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,
                                     scaleFactor=1.1,
                                     minNeighbors=5,
                                     minSize=(60, 60),
                                     flags=cv2.CASCADE_SCALE_IMAGE)
# compute the facial embeddings for each face in the input
encodings = face_recognition.face_encodings(rgb)
names = []
# loop over the facial embeddings, in case
# there are multiple faces in the image
for encoding in encodings:
    # compare each encoding with the known encodings in data["encodings"];
    # matches is a list of booleans: True for the embeddings it matches
    # closely and False for the rest
    matches = face_recognition.compare_faces(data["encodings"],
                                             encoding)
    # default to "Unknown" if no encoding matches
    name = "Unknown"
    # check to see if we have found a match
    if True in matches:
        # find the positions at which we get True and store them
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}
        # loop over the matched indexes and maintain a count for
        # each recognized face
        for i in matchedIdxs:
            # look up the name at each index stored in matchedIdxs
            name = data["names"][i]
            # increase the count for the name we got
            counts[name] = counts.get(name, 0) + 1
        # pick the name with the highest count
        name = max(counts, key=counts.get)
    # update the list of names
    names.append(name)

# loop over the recognized faces and draw the predicted names on the image
for ((x, y, w, h), name) in zip(faces, names):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                0.75, (0, 255, 0), 2)
cv2.imshow("Frame", image)
cv2.waitKey(0)

Output: the input image with the recognized faces boxed and labeled (the input/output images appear in the original article).

This brings us to the end of this article, where we learned about face recognition.

You can also upskill with Great Learning’s PGP Artificial Intelligence and Machine Learning Course. The course offers mentorship from industry leaders, and you will also have the opportunity to work on real-time industry-relevant projects.


Original article source at: https://www.mygreatlearning.com

#python #opencv 

Duong Tran

Duong Tran

1646796864

Sorting a List in Python with Python's sort()

In this article, you will learn how to use Python's sort() list method.

You will also learn a different way to perform sorting in Python, the sorted() function, so you can see how it differs from sort().

By the end, you will know the basics of sorting a list in Python and how to customize the sorting to fit your needs.

The sort() Method: A Syntax Overview

The sort() method is one of the ways you can sort a list in Python.

When using sort(), you sort a list in-place. This means that the original list is directly modified; specifically, the original order of its elements is changed.

The general syntax of the sort() method looks like this:

list_name.sort(reverse=..., key=...)

Let's break it down:

  • list_name is the name of the list you are working with.
  • sort() is one of Python's list methods for sorting and changing a list. It sorts list elements in either ascending or descending order.
  • sort() accepts two optional parameters.
  • reverse is the first optional parameter. It specifies whether the list will be sorted in ascending or descending order. It takes a Boolean value, meaning the value is either True or False. The default value is False, meaning the list is sorted in ascending order. Setting it to True sorts the list the other way, in descending order.
  • key is the second optional parameter. It takes a function or method used to specify any detailed sorting criteria you may have.

The sort() method returns None, meaning there is no return value, since it just modifies the original list. It does not return a new list.

How to Sort List Items in Ascending Order Using the sort() Method

As mentioned earlier, by default sort() sorts list items in ascending order.

Ascending (or increasing) order means that items are arranged from lowest to highest value.

The lowest value is on the left and the highest value is on the right.

The general syntax for this looks like:

list_name.sort()

Take a look at the following example, which shows how to sort a list of whole numbers:

# a list of numbers
my_numbers = [10, 8, 3, 22, 33, 7, 11, 100, 54]

#sort list in-place in ascending order
my_numbers.sort()

#print modified list
print(my_numbers)

#output

#[3, 7, 8, 10, 11, 22, 33, 54, 100]

In the example above, the numbers are sorted from smallest to largest.

You can achieve the same when working with a list of strings:

# a list of strings
programming_languages = ["Python", "Swift","Java", "C++", "Go", "Rust"]

#sort list in-place in alphabetical order
programming_languages.sort()

#print modified list
print(programming_languages)

#output

#['C++', 'Go', 'Java', 'Python', 'Rust', 'Swift']

In this case, each string contained in the list was sorted in alphabetical order.

As you saw in both examples, the original lists were directly modified.

How to Sort List Items in Descending Order Using the sort() Method

Descending (or decreasing) order is the opposite of ascending order: elements are arranged from highest to lowest value.

To sort list items in descending order, you need to use the optional reverse parameter with the sort() method and set its value to True.

The general syntax for this looks like:

list_name.sort(reverse=True)

Let's reuse the example from the previous section, but this time make it so the numbers are sorted in reverse order:

# a list of numbers
my_numbers = [10, 8, 3, 22, 33, 7, 11, 100, 54]

#sort list in-place in descending order
my_numbers.sort(reverse=True)

#print modified list
print(my_numbers)

#output

#[100, 54, 33, 22, 11, 10, 8, 7, 3]

Now all the numbers are arranged in reverse, with the largest value on the left-hand side and the smallest on the right.

You can achieve the same when working with a list of strings.

# a list of strings
programming_languages = ["Python", "Swift","Java", "C++", "Go", "Rust"]

#sort list in-place in  reverse alphabetical order
programming_languages.sort(reverse=True)

#print modified list
print(programming_languages)

#output

#['Swift', 'Rust', 'Python', 'Java', 'Go', 'C++']

The list items are now arranged in reverse alphabetical order.

How to Sort List Items Using the key Parameter with the sort() Method

You can use the key parameter to perform more customized sorting operations.

The value assigned to the key parameter needs to be something callable.

A callable is something that can be called, meaning it can be invoked and referenced.

Some examples of callable objects are methods and functions.

The method or function assigned to key will be applied to all the elements in the list before any sorting occurs, and it specifies the logic for the sorting criteria.

Say you want to sort a list of strings based on their length.

For that, you assign the built-in len() function to the key parameter.

The len() function will count the length of each element stored in the list by counting the characters it contains.

programming_languages = ["Python", "Swift","Java", "C++", "Go", "Rust"]

programming_languages.sort(key=len)

print(programming_languages)

#output

#['Go', 'C++', 'Java', 'Rust', 'Swift', 'Python']

In the example above, the strings are sorted in the default ascending order, but this time the sorting happens based on their length.

The shortest string is on the left and the longest is on the right.

The key and reverse parameters can also be combined.

For example, you could sort the list items based on their length, but in descending order.

programming_languages = ["Python", "Swift","Java", "C++", "Go", "Rust"]

programming_languages.sort(key=len, reverse=True)

print(programming_languages)

#output

#['Python', 'Swift', 'Java', 'Rust', 'C++', 'Go']

In the example above, the strings go from longest to shortest.

Another thing to note is that you can create a custom sorting function of your own, to build more explicit sorting criteria.

For example, you can create a specific function and then sort the list according to that function's return value.

Say you have a list of dictionaries with programming languages and the year each programming language was created.

programming_languages = [{'language':'Python','year':1991},
{'language':'Swift','year':2014},
{'language':'Java', 'year':1995},
{'language':'C++','year':1985},
{'language':'Go','year':2007},
{'language':'Rust','year':2010},
]

You can define a custom function that retrieves the value of a specific key from each dictionary.

💡 Keep in mind that the dictionary key and the key parameter that sort() accepts are two different things!

Specifically, the function will retrieve and return the value of the year key in the list of dictionaries, which specifies the year each language was created.

The return value will then be applied as the sorting criterion for the list.

programming_languages = [{'language':'Python','year':1991},
{'language':'Swift','year':2014},
{'language':'Java', 'year':1995},
{'language':'C++','year':1985},
{'language':'Go','year':2007},
{'language':'Rust','year':2010},
]

def get_year(element):
    return element['year']

You can then sort by the return value of the function you created earlier by assigning it to the key parameter, using the default ascending chronological order:

programming_languages = [{'language':'Python','year':1991},
{'language':'Swift','year':2014},
{'language':'Java', 'year':1995},
{'language':'C++','year':1985},
{'language':'Go','year':2007},
{'language':'Rust','year':2010},
]

def get_year(element):
    return element['year']

programming_languages.sort(key=get_year)

print(programming_languages)

Output:

[{'language': 'C++', 'year': 1985}, {'language': 'Python', 'year': 1991}, {'language': 'Java', 'year': 1995}, {'language': 'Go', 'year': 2007}, {'language': 'Rust', 'year': 2010}, {'language': 'Swift', 'year': 2014}]

If you instead want to sort from the most recently created language to the oldest, that is, in descending order, you use the reverse=True parameter:

programming_languages = [{'language':'Python','year':1991},
{'language':'Swift','year':2014},
{'language':'Java', 'year':1995},
{'language':'C++','year':1985},
{'language':'Go','year':2007},
{'language':'Rust','year':2010},
]

def get_year(element):
    return element['year']

programming_languages.sort(key=get_year, reverse=True)

print(programming_languages)

Output:

[{'language': 'Swift', 'year': 2014}, {'language': 'Rust', 'year': 2010}, {'language': 'Go', 'year': 2007}, {'language': 'Java', 'year': 1995}, {'language': 'Python', 'year': 1991}, {'language': 'C++', 'year': 1985}]

To achieve the exact same result, you can create a lambda function.

Instead of using a regular custom function defined with the def keyword, you can:

  • create a concise one-line expression,
  • and not define a function name as you did with def. Lambda functions are also called anonymous functions.
programming_languages = [{'language':'Python','year':1991},
{'language':'Swift','year':2014},
{'language':'Java', 'year':1995},
{'language':'C++','year':1985},
{'language':'Go','year':2007},
{'language':'Rust','year':2010},
]

programming_languages.sort(key=lambda element: element['year'])

print(programming_languages)

The lambda function specified with the line key=lambda element: element['year'] sorts these programming languages from oldest to most recent.

The Difference Between sort() and sorted()

The sort() method works in a similar way to the sorted() function.

The general syntax of the sorted() function looks like this:

sorted(list_name, reverse=..., key=...)

Let's break it down:

  • sorted() is a built-in function that accepts an iterable. It then sorts it in ascending or descending order.
  • sorted() accepts three parameters. One parameter is required and the other two are optional.
  • list_name is the required parameter. In this case the parameter is a list, but sorted() accepts any other iterable object.
  • sorted() also accepts the optional parameters reverse and key, which are the same optional parameters the sort() method accepts.

The key difference between sort() and sorted() is that the sorted() function takes a list and returns a new, sorted copy of it.

The new copy contains the elements of the original list in sorted order.

The elements in the original list are not affected and remain unchanged.

So, to summarize the differences:

  • The sort() method has no return value and directly modifies the original list, changing the order of the elements it contains.
  • The sorted() function, on the other hand, has a return value: a sorted copy of the original list. That copy contains the items of the original list in sorted order. And the original list remains intact.

Take a look at the following example to see how this works:

#original list of numbers
my_numbers = [10, 8, 3, 22, 33, 7, 11, 100, 54]

#sort original list in default ascending order
my_numbers_sorted = sorted(my_numbers)

#print original list
print(my_numbers)

#print the copy of the original list that was created
print(my_numbers_sorted)

#output

#[10, 8, 3, 22, 33, 7, 11, 100, 54]
#[3, 7, 8, 10, 11, 22, 33, 54, 100]

Since no additional arguments were provided to sorted(), it sorted the copy of the original list in the default ascending order, from smallest to largest value.

And when printing the original list, you can see that it was kept as-is and its items are in their original order.

As you saw in the example above, the copy of the list was assigned to a new variable, my_numbers_sorted.

Something like that cannot be done with sort().

Take a look at the following example to see what happens if you try it with the sort() method.

my_numbers = [10, 8, 3, 22, 33, 7, 11, 100, 54]

my_numbers_sorted = my_numbers.sort()

print(my_numbers)
print(my_numbers_sorted)

#output

#[3, 7, 8, 10, 11, 22, 33, 54, 100]
#None

You can see that the return value of sort() is None.

Finally, another thing to note is that the reverse and key parameters that the sorted() function accepts work exactly the same way as they do with the sort() method you saw in the previous sections.

When to Use sort() and sorted()

Here are some things you may want to consider when deciding whether to use sort() vs. sorted().

First, consider the type of data you are working with:

  • If you are working strictly with a list from the start, then you will need to use the sort() method, since sort() can only be called on lists.
  • On the other hand, if you want more flexibility and are not working with a list yet, then you can use sorted(). The sorted() function accepts and sorts any iterable (such as dictionaries, tuples, and sets), not just lists — see the sketch right after this list.
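For example, a quick sketch of sorted() working on iterables that have no sort() method of their own (the values here are arbitrary):

# sorted() accepts any iterable and always returns a new list
numbers_tuple = (10, 8, 3, 22)
letters_set = {"d", "a", "c", "b"}

print(sorted(numbers_tuple))  # [3, 8, 10, 22]
print(sorted(letters_set))    # ['a', 'b', 'c', 'd']

# a tuple has no .sort() method, so this line would raise an AttributeError:
# numbers_tuple.sort()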

Next, another thing to consider is whether you want to retain the original order of the list you are working with:

  • When calling sort(), the original list is changed and its original order is lost. You will not be able to retrieve the original positions of the list elements. Use sort() when you are certain you want to change the list you are working with and are sure you do not want to keep its previous order.
  • On the other hand, sorted() is useful when you want to create a new list while still keeping the list you are working with. The sorted() function will create a new sorted list with the list elements arranged in the desired order.

Lastly, when working with larger datasets, you may also want to consider time and memory efficiency:

  • The sort() method takes up and consumes less memory, since it just sorts the list in-place and does not create an unnecessary new list that you don't need. For the same reason, it is also slightly faster, since it does not create a copy. This can be helpful when you are working with larger lists containing many elements.

Conclusion

And there you have it! You now know how to sort a list in Python using the sort() method.

You also looked at the key differences between sorting a list using sort() and sorted().

I hope you found this article helpful.

To learn more about the Python programming language, check out freeCodeCamp's Scientific Computing with Python Certification.

You'll start from the basics and learn in an interactive, beginner-friendly way. You'll also build five projects at the end to put what you've learned into practice and help solidify it.

Source: https://www.freecodecamp.org/news/python-sort-how-to-sort-a-list-in-python/

#python