Find Similar Images in Python with DeepImageSearch, a Deep Learning Based Library | Applied ML Tutorial

In this Python tutorial, we'll look at DeepImageSearch, a new library (which calls itself an AI-based image search engine) that provides a deep-learning-based solution for finding similar images.

https://github.com/TechyNilesh/DeepImageSearch
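
As a rough sketch of how the library is typically used (the class and method names below follow the project's README at the time of writing and may have changed in newer releases, so treat them as assumptions):

from DeepImageSearch import Index, LoadData, SearchImage

# Gather image paths from a local folder (folder and file names are placeholders)
image_list = LoadData().from_folder(['images/'])

# Extract deep features for every image and build a searchable index
Index(image_list).Start()

# Query the index with a new image and print the most similar matches
similar_images = SearchImage().get_similar_images(image_path='images/query.jpg', number_of_images=5)
print(similar_images)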

#python #machine-learning #deep-learning 


EasyMDE: Simple, Beautiful and Embeddable JavaScript Markdown Editor

EasyMDE - Markdown Editor 

This repository is a fork of SimpleMDE, made by Sparksuite. Go to the dedicated section for more information.

A drop-in JavaScript text area replacement for writing beautiful and understandable Markdown. EasyMDE allows users who may be less experienced with Markdown to use familiar toolbar buttons and shortcuts.

In addition, the syntax is rendered while editing to clearly show the expected result. Headings are larger, emphasized words are italicized, links are underlined, etc.

EasyMDE also features both built-in auto saving and spell checking. The editor is entirely customizable, from theming to toolbar buttons and javascript hooks.

Try the demo


Install EasyMDE

Via npm:

npm install easymde

Via the UNPKG CDN:

<link rel="stylesheet" href="https://unpkg.com/easymde/dist/easymde.min.css">
<script src="https://unpkg.com/easymde/dist/easymde.min.js"></script>

Or jsDelivr:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.css">
<script src="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.js"></script>

How to use

Loading the editor

After installing and/or importing the module, you can load EasyMDE onto the first textarea element on the web page:

<textarea></textarea>
<script>
const easyMDE = new EasyMDE();
</script>

Alternatively you can select a specific textarea, via JavaScript:

<textarea id="my-text-area"></textarea>
<script>
const easyMDE = new EasyMDE({element: document.getElementById('my-text-area')});
</script>

Editor functions

Use easyMDE.value() to get the content of the editor:

<script>
easyMDE.value();
</script>

Use easyMDE.value(val) to set the content of the editor:

<script>
easyMDE.value('New input for **EasyMDE**');
</script>

Configuration

Options list

  • autoDownloadFontAwesome: If set to true, force downloads Font Awesome (used for icons). If set to false, prevents downloading. Defaults to undefined, which will intelligently check whether Font Awesome has already been included, then download accordingly.
  • autofocus: If set to true, focuses the editor automatically. Defaults to false.
  • autosave: Saves the text that's being written and will load it back in the future. It will forget the text when the form it's contained in is submitted.
    • enabled: If set to true, saves the text automatically. Defaults to false.
    • delay: Delay between saves, in milliseconds. Defaults to 10000 (10 seconds).
    • submit_delay: Delay before assuming that submit of the form failed and saving the text, in milliseconds. Defaults to autosave.delay or 10000 (10 seconds).
    • uniqueId: You must set a unique string identifier so that EasyMDE can autosave. Something that separates this from other instances of EasyMDE elsewhere on your website.
    • timeFormat: Set the DateTimeFormat. For more information, see DateTimeFormat instances. Default locale: en-US; format: hour:minute.
    • text: Set text for autosave.
  • autoRefresh: Useful, when initializing the editor in a hidden DOM node. If set to { delay: 300 }, it will check every 300 ms if the editor is visible and if positive, call CodeMirror's refresh().
  • blockStyles: Customize how certain buttons that style blocks of text behave.
    • bold: Can be set to ** or __. Defaults to **.
    • code: Can be set to ``` or ~~~. Defaults to ```.
    • italic: Can be set to * or _. Defaults to *.
  • unorderedListStyle: can be *, - or +. Defaults to *.
  • scrollbarStyle: Chooses a scrollbar implementation. The default is "native", showing native scrollbars. The core library also provides the "null" style, which completely hides the scrollbars. Addons can implement additional scrollbar models.
  • element: The DOM element for the textarea element to use. Defaults to the first textarea element on the page.
  • forceSync: If set to true, force text changes made in EasyMDE to be immediately stored in original text area. Defaults to false.
  • hideIcons: An array of icon names to hide. Can be used to hide specific icons shown by default without completely customizing the toolbar.
  • indentWithTabs: If set to false, indent using spaces instead of tabs. Defaults to true.
  • initialValue: If set, will customize the initial value of the editor.
  • previewImagesInEditor: If set to true, EasyMDE will show a preview of images. Defaults to false. The preview only appears for images on separate lines.
  • imagesPreviewHandler: A custom function for handling the preview of images. Takes the parsed string between the parentheses of the image markdown ![]( ) as argument and returns a string that serves as the src attribute of the <img> tag in the preview. Enables dynamic previewing of images in the frontend without having to upload them to a server, and allows copy-pasting of images into the editor with preview.
  • insertTexts: Customize how certain buttons that insert text behave. Takes an array with two elements. The first element will be the text inserted before the cursor or highlight, and the second element will be inserted after. For example, this is the default link value: ["[", "](http://)"].
    • horizontalRule
    • image
    • link
    • table
  • lineNumbers: If set to true, enables line numbers in the editor.
  • lineWrapping: If set to false, disable line wrapping. Defaults to true.
  • minHeight: Sets the minimum height for the composition area, before it starts auto-growing. Should be a string containing a valid CSS value like "500px". Defaults to "300px".
  • maxHeight: Sets fixed height for the composition area. minHeight option will be ignored. Should be a string containing a valid CSS value like "500px". Defaults to undefined.
  • onToggleFullScreen: A function that gets called when the editor's full screen mode is toggled. The function will be passed a boolean as parameter, true when the editor is currently going into full screen mode, or false.
  • parsingConfig: Adjust settings for parsing the Markdown during editing (not previewing).
    • allowAtxHeaderWithoutSpace: If set to true, will render headers without a space after the #. Defaults to false.
    • strikethrough: If set to false, will not process GFM strikethrough syntax. Defaults to true.
    • underscoresBreakWords: If set to true, let underscores be a delimiter for separating words. Defaults to false.
  • overlayMode: Pass a custom codemirror overlay mode to parse and style the Markdown during editing.
    • mode: A codemirror mode object.
    • combine: If set to false, will replace CSS classes returned by the default Markdown mode. Otherwise the classes returned by the custom mode will be combined with the classes returned by the default mode. Defaults to true.
  • placeholder: If set, displays a custom placeholder message.
  • previewClass: A string or array of strings that will be applied to the preview screen when activated. Defaults to "editor-preview".
  • previewRender: Custom function for parsing the plaintext Markdown and returning HTML. Used when user previews.
  • promptURLs: If set to true, a JS alert window appears asking for the link or image URL. Defaults to false.
  • promptTexts: Customize the text used to prompt for URLs.
    • image: The text to use when prompting for an image's URL. Defaults to URL of the image:.
    • link: The text to use when prompting for a link's URL. Defaults to URL for the link:.
  • uploadImage: If set to true, enables the image upload functionality, which can be triggered by drag and drop, copy-paste and through the browse-file window (opened when the user click on the upload-image icon). Defaults to false.
  • imageMaxSize: Maximum image size in bytes, checked before upload (note: never trust client, always check the image size at server-side). Defaults to 1024 * 1024 * 2 (2 MB).
  • imageAccept: A comma-separated list of mime-types used to check image type before upload (note: never trust client, always check file types at server-side). Defaults to image/png, image/jpeg.
  • imageUploadFunction: A custom function for handling the image upload. Using this function will render the options imageMaxSize, imageAccept, imageUploadEndpoint and imageCSRFToken ineffective.
    • The function gets a file and onSuccess and onError callback functions as parameters. onSuccess(imageUrl: string) and onError(errorMessage: string)
  • imageUploadEndpoint: The endpoint where the image data will be sent, via an asynchronous POST request. The server is expected to save this image and return a JSON response.
    • if the request was successfully processed (HTTP 200 OK): {"data": {"filePath": "<filePath>"}} where filePath is the path of the image (absolute if imagePathAbsolute is set to true, relative if otherwise);
    • otherwise: {"error": "<errorCode>"}, where errorCode can be noFileGiven (HTTP 400 Bad Request), typeNotAllowed (HTTP 415 Unsupported Media Type), fileTooLarge (HTTP 413 Payload Too Large) or importError (see errorMessages below). If errorCode is not one of the errorMessages, it is alerted unchanged to the user. This allows for server-side error messages. No default value.
  • imagePathAbsolute: If set to true, will treat imageUrl from imageUploadFunction and filePath returned from imageUploadEndpoint as an absolute rather than relative path, i.e. not prepend window.location.origin to it.
  • imageCSRFToken: CSRF token to include with AJAX call to upload image. For various instances like Django, Spring and Laravel.
  • imageCSRFName: CSRF token field name to include with the AJAX call to upload an image. Applied when imageCSRFToken has a value. Defaults to csrfmiddlewaretoken.
  • imageCSRFHeader: If set to true, the CSRF token is passed via a header. Defaults to false, which passes the CSRF token through the request body.
  • imageTexts: Texts displayed to the user (mainly on the status bar) for the image import feature, where #image_name#, #image_size# and #image_max_size# will be replaced by their respective values. Can be used for customization or internationalization:
    • sbInit: Status message displayed initially if uploadImage is set to true. Defaults to Attach files by drag and dropping or pasting from clipboard..
    • sbOnDragEnter: Status message displayed when the user drags a file to the text area. Defaults to Drop image to upload it..
    • sbOnDrop: Status message displayed when the user drops a file in the text area. Defaults to Uploading images #images_names#.
    • sbProgress: Status message displayed to show uploading progress. Defaults to Uploading #file_name#: #progress#%.
    • sbOnUploaded: Status message displayed when the image has been uploaded. Defaults to Uploaded #image_name#.
    • sizeUnits: A comma-separated list of units used to display messages with human-readable file sizes. Defaults to B, KB, MB (example: 218 KB). You can use B,KB,MB instead if you prefer without whitespaces (218KB).
  • errorMessages: Errors displayed to the user, using the errorCallback option, where #image_name#, #image_size# and #image_max_size# will be replaced by their respective values. Can be used for customization or internationalization:
    • noFileGiven: The server did not receive any file from the user. Defaults to You must select a file..
    • typeNotAllowed: The user sent a file type which doesn't match the imageAccept list, or the server returned this error code. Defaults to This image type is not allowed..
    • fileTooLarge: The size of the image being imported is bigger than the imageMaxSize, or if the server returned this error code. Defaults to Image #image_name# is too big (#image_size#).\nMaximum file size is #image_max_size#..
    • importError: An unexpected error occurred when uploading the image. Defaults to Something went wrong when uploading the image #image_name#..
  • errorCallback: A callback function used to define how to display an error message. Defaults to (errorMessage) => alert(errorMessage).
  • renderingConfig: Adjust settings for parsing the Markdown during previewing (not editing).
    • codeSyntaxHighlighting: If set to true, will highlight using highlight.js. Defaults to false. To use this feature you must include highlight.js on your page or pass in using the hljs option. For example, include the script and the CSS files like:
      <script src="https://cdn.jsdelivr.net/highlight.js/latest/highlight.min.js"></script>
      <link rel="stylesheet" href="https://cdn.jsdelivr.net/highlight.js/latest/styles/github.min.css">
    • hljs: An injectible instance of highlight.js. If you don't want to rely on the global namespace (window.hljs), you can provide an instance here. Defaults to undefined.
    • markedOptions: Set the internal Markdown renderer's options. Other renderingConfig options will take precedence.
    • singleLineBreaks: If set to false, disable parsing GitHub Flavored Markdown (GFM) single line breaks. Defaults to true.
    • sanitizerFunction: Custom function for sanitizing the HTML output of Markdown renderer.
  • shortcuts: Keyboard shortcuts associated with this instance. Defaults to the array of shortcuts.
  • showIcons: An array of icon names to show. Can be used to show specific icons hidden by default without completely customizing the toolbar.
  • spellChecker: If set to false, disable the spell checker. Defaults to true. Optionally pass a CodeMirrorSpellChecker-compliant function.
  • inputStyle: textarea or contenteditable. Defaults to textarea for desktop and contenteditable for mobile. contenteditable option is necessary to enable nativeSpellcheck.
  • nativeSpellcheck: If set to false, disable native spell checker. Defaults to true.
  • sideBySideFullscreen: If set to false, allows side-by-side editing without going into fullscreen. Defaults to true.
  • status: If set to false, hide the status bar. Defaults to the array of built-in status bar items.
    • Optionally, you can set an array of status bar items to include, and in what order. You can even define your own custom status bar items.
  • styleSelectedText: If set to false, remove the CodeMirror-selectedtext class from selected lines. Defaults to true.
  • syncSideBySidePreviewScroll: If set to false, disable syncing scroll in side by side mode. Defaults to true.
  • tabSize: If set, customize the tab size. Defaults to 2.
  • theme: Override the theme. Defaults to easymde.
  • toolbar: If set to false, hide the toolbar. Defaults to the array of icons.
  • toolbarTips: If set to false, disable toolbar button tips. Defaults to true.
  • direction: rtl or ltr. Changes text direction to support right-to-left languages. Defaults to ltr.

Options example

Most options demonstrate the non-default behavior:

const editor = new EasyMDE({
    autofocus: true,
    autosave: {
        enabled: true,
        uniqueId: "MyUniqueID",
        delay: 1000,
        submit_delay: 5000,
        timeFormat: {
            locale: 'en-US',
            format: {
                year: 'numeric',
                month: 'long',
                day: '2-digit',
                hour: '2-digit',
                minute: '2-digit',
            },
        },
        text: "Autosaved: "
    },
    blockStyles: {
        bold: "__",
        italic: "_",
    },
    unorderedListStyle: "-",
    element: document.getElementById("MyID"),
    forceSync: true,
    hideIcons: ["guide", "heading"],
    indentWithTabs: false,
    initialValue: "Hello world!",
    insertTexts: {
        horizontalRule: ["", "\n\n-----\n\n"],
        image: ["![](http://", ")"],
        link: ["[", "](https://)"],
        table: ["", "\n\n| Column 1 | Column 2 | Column 3 |\n| -------- | -------- | -------- |\n| Text     | Text      | Text     |\n\n"],
    },
    lineWrapping: false,
    minHeight: "500px",
    parsingConfig: {
        allowAtxHeaderWithoutSpace: true,
        strikethrough: false,
        underscoresBreakWords: true,
    },
    placeholder: "Type here...",

    previewClass: "my-custom-styling",
    previewClass: ["my-custom-styling", "more-custom-styling"],

    previewRender: (plainText) => customMarkdownParser(plainText), // Returns HTML from a custom parser
    previewRender: (plainText, preview) => { // Async method
        setTimeout(() => {
            preview.innerHTML = customMarkdownParser(plainText);
        }, 250);

        return "Loading...";
    },
    promptURLs: true,
    promptTexts: {
        image: "Custom prompt for URL:",
        link: "Custom prompt for URL:",
    },
    renderingConfig: {
        singleLineBreaks: false,
        codeSyntaxHighlighting: true,
        sanitizerFunction: (renderedHTML) => {
            // Using DOMPurify and only allowing <b> tags
            return DOMPurify.sanitize(renderedHTML, {ALLOWED_TAGS: ['b']})
        },
    },
    shortcuts: {
        drawTable: "Cmd-Alt-T"
    },
    showIcons: ["code", "table"],
    spellChecker: false,
    status: false,
    status: ["autosave", "lines", "words", "cursor"], // Optional usage
    status: ["autosave", "lines", "words", "cursor", {
        className: "keystrokes",
        defaultValue: (el) => {
            el.setAttribute('data-keystrokes', 0);
        },
        onUpdate: (el) => {
            const keystrokes = Number(el.getAttribute('data-keystrokes')) + 1;
            el.innerHTML = `${keystrokes} Keystrokes`;
            el.setAttribute('data-keystrokes', keystrokes);
        },
    }], // Another optional usage, with a custom status bar item that counts keystrokes
    styleSelectedText: false,
    sideBySideFullscreen: false,
    syncSideBySidePreviewScroll: false,
    tabSize: 4,
    toolbar: false,
    toolbarTips: false,
});
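
The example above does not cover the image upload options. A minimal sketch of such a configuration, based on the option descriptions above, might look like this (the endpoint URL, element id, and upload helper are placeholders, not part of EasyMDE itself):

const uploader = new EasyMDE({
    element: document.getElementById('my-text-area'),
    uploadImage: true,
    imageMaxSize: 1024 * 1024 * 4, // assumption: allow images up to 4 MB
    imageAccept: 'image/png, image/jpeg',
    imageUploadFunction: (file, onSuccess, onError) => {
        // Hypothetical upload helper: POST the file to your own endpoint,
        // then report the resulting URL back to EasyMDE via onSuccess/onError.
        const body = new FormData();
        body.append('image', file);
        fetch('/api/upload', { method: 'POST', body: body })
            .then((response) => response.json())
            .then((json) => onSuccess(json.data.filePath))
            .catch((error) => onError(error.message));
    },
});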

Toolbar icons

Below are the built-in toolbar icons (only some of which are enabled by default), which can be reorganized however you like. "Name" is the name of the icon, referenced in the JavaScript. "Action" is either a function or a URL to open. "Class" is the class given to the icon. "Tooltip" is the small tooltip that appears via the title="" attribute. Note that shortcut hints are added automatically and reflect the specified action if it has a key bind assigned to it (i.e. with the value of action set to bold and that of tooltip set to Bold, the final text the user will see would be "Bold (Ctrl-B)").

Additionally, you can add a separator between any icons by adding "|" to the toolbar array.

| Name | Action | Tooltip | Class |
| --- | --- | --- | --- |
| bold | toggleBold | Bold | fa fa-bold |
| italic | toggleItalic | Italic | fa fa-italic |
| strikethrough | toggleStrikethrough | Strikethrough | fa fa-strikethrough |
| heading | toggleHeadingSmaller | Heading | fa fa-header |
| heading-smaller | toggleHeadingSmaller | Smaller Heading | fa fa-header |
| heading-bigger | toggleHeadingBigger | Bigger Heading | fa fa-lg fa-header |
| heading-1 | toggleHeading1 | Big Heading | fa fa-header header-1 |
| heading-2 | toggleHeading2 | Medium Heading | fa fa-header header-2 |
| heading-3 | toggleHeading3 | Small Heading | fa fa-header header-3 |
| code | toggleCodeBlock | Code | fa fa-code |
| quote | toggleBlockquote | Quote | fa fa-quote-left |
| unordered-list | toggleUnorderedList | Generic List | fa fa-list-ul |
| ordered-list | toggleOrderedList | Numbered List | fa fa-list-ol |
| clean-block | cleanBlock | Clean block | fa fa-eraser |
| link | drawLink | Create Link | fa fa-link |
| image | drawImage | Insert Image | fa fa-picture-o |
| table | drawTable | Insert Table | fa fa-table |
| horizontal-rule | drawHorizontalRule | Insert Horizontal Line | fa fa-minus |
| preview | togglePreview | Toggle Preview | fa fa-eye no-disable |
| side-by-side | toggleSideBySide | Toggle Side by Side | fa fa-columns no-disable no-mobile |
| fullscreen | toggleFullScreen | Toggle Fullscreen | fa fa-arrows-alt no-disable no-mobile |
| guide | This link | Markdown Guide | fa fa-question-circle |
| undo | undo | Undo | fa fa-undo |
| redo | redo | Redo | fa fa-redo |

Toolbar customization

Customize the toolbar using the toolbar option.

Only the order of existing buttons:

const easyMDE = new EasyMDE({
    toolbar: ["bold", "italic", "heading", "|", "quote"]
});

Customize all information and/or add your own icons:

const easyMDE = new EasyMDE({
    toolbar: [
        {
            name: "bold",
            action: EasyMDE.toggleBold,
            className: "fa fa-bold",
            title: "Bold",
        },
        "italics", // shortcut to pre-made button
        {
            name: "custom",
            action: (editor) => {
                // Add your own code
            },
            className: "fa fa-star",
            title: "Custom Button",
            attributes: { // for custom attributes
                id: "custom-id",
                "data-value": "custom value" // HTML5 data-* attributes need to be enclosed in quotation marks ("") because of the dash (-) in its name.
            }
        },
        "|" // Separator
        // [, ...]
    ]
});

Put some buttons on dropdown menu

const easyMDE = new EasyMDE({
    toolbar: [{
                name: "heading",
                action: EasyMDE.toggleHeadingSmaller,
                className: "fa fa-header",
                title: "Headers",
            },
            "|",
            {
                name: "others",
                className: "fa fa-blind",
                title: "others buttons",
                children: [
                    {
                        name: "image",
                        action: EasyMDE.drawImage,
                        className: "fa fa-picture-o",
                        title: "Image",
                    },
                    {
                        name: "quote",
                        action: EasyMDE.toggleBlockquote,
                        className: "fa fa-percent",
                        title: "Quote",
                    },
                    {
                        name: "link",
                        action: EasyMDE.drawLink,
                        className: "fa fa-link",
                        title: "Link",
                    }
                ]
            },
        // [, ...]
    ]
});

Keyboard shortcuts

EasyMDE comes with an array of predefined keyboard shortcuts, but they can be altered with a configuration option. The list of default ones is as follows:

| Shortcut (Windows / Linux) | Shortcut (macOS) | Action |
| --- | --- | --- |
| Ctrl-' | Cmd-' | "toggleBlockquote" |
| Ctrl-B | Cmd-B | "toggleBold" |
| Ctrl-E | Cmd-E | "cleanBlock" |
| Ctrl-H | Cmd-H | "toggleHeadingSmaller" |
| Ctrl-I | Cmd-I | "toggleItalic" |
| Ctrl-K | Cmd-K | "drawLink" |
| Ctrl-L | Cmd-L | "toggleUnorderedList" |
| Ctrl-P | Cmd-P | "togglePreview" |
| Ctrl-Alt-C | Cmd-Alt-C | "toggleCodeBlock" |
| Ctrl-Alt-I | Cmd-Alt-I | "drawImage" |
| Ctrl-Alt-L | Cmd-Alt-L | "toggleOrderedList" |
| Shift-Ctrl-H | Shift-Cmd-H | "toggleHeadingBigger" |
| F9 | F9 | "toggleSideBySide" |
| F11 | F11 | "toggleFullScreen" |

Here is how you can change a few, while leaving others untouched:

const editor = new EasyMDE({
    shortcuts: {
        "toggleOrderedList": "Ctrl-Alt-K", // alter the shortcut for toggleOrderedList
        "toggleCodeBlock": null, // unbind Ctrl-Alt-C
        "drawTable": "Cmd-Alt-T", // bind Cmd-Alt-T to drawTable action, which doesn't come with a default shortcut
    }
});

Shortcuts are automatically converted between platforms. If you define a shortcut as "Cmd-B", on PC that shortcut will be changed to "Ctrl-B". Conversely, a shortcut defined as "Ctrl-B" will become "Cmd-B" for Mac users.

The list of actions that can be bound is the same as the list of built-in actions available for toolbar buttons.

Advanced use

Event handling

You can catch the following list of events: https://codemirror.net/doc/manual.html#events

const easyMDE = new EasyMDE();
easyMDE.codemirror.on("change", () => {
    console.log(easyMDE.value());
});

Removing EasyMDE from text area

You can revert to the initial text area by calling the toTextArea method. Note that this clears up the autosave (if enabled) associated with it. The text area will retain any text from the destroyed EasyMDE instance.

let easyMDE = new EasyMDE();
// ...
easyMDE.toTextArea();
easyMDE = null;

If you need to remove registered event listeners (when the editor is not needed anymore), call easyMDE.cleanup().

Useful methods

The following self-explanatory methods may be of use while developing with EasyMDE.

const easyMDE = new EasyMDE();
easyMDE.isPreviewActive(); // returns boolean
easyMDE.isSideBySideActive(); // returns boolean
easyMDE.isFullscreenActive(); // returns boolean
easyMDE.clearAutosavedValue(); // no returned value

How it works

EasyMDE is a continuation of SimpleMDE.

SimpleMDE began as an improvement of lepture's Editor project, but has now taken on an identity of its own. It is bundled with CodeMirror and depends on Font Awesome.

CodeMirror is the backbone of the project and parses much of the Markdown syntax as it's being written. This allows us to add styles to the Markdown that's being written. Additionally, a toolbar and status bar have been added to the top and bottom, respectively. Previews are rendered by Marked using GitHub Flavored Markdown (GFM).

SimpleMDE fork

I originally made this fork to implement FontAwesome 5 compatibility into SimpleMDE. When that was done I submitted a pull request, which has not been accepted yet. This, and the project being inactive since May 2017, triggered me to make more changes and try to put new life into the project.

Changes include:

  • FontAwesome 5 compatibility
  • Guide button works when editor is in preview mode
  • Links are now https:// by default
  • Small styling changes
  • Support for Node 8 and beyond
  • Lots of refactored code
  • Links in preview will open in a new tab by default
  • TypeScript support

My intention is to continue development on this project, improving it and keeping it alive.

Hacking EasyMDE

You may want to edit this library to adapt its behavior to your needs. This can be done in some quick steps:

  1. Follow the prerequisites and installation instructions in the contribution guide;
  2. Do your changes;
  3. Run the gulp command, which will generate the files dist/easymde.min.css and dist/easymde.min.js;
  4. Copy-paste those files to your code base, and you are done.

Contributing

Want to contribute to EasyMDE? Thank you! We have a contribution guide just for you!


Author: Ionaru
Source Code: https://github.com/Ionaru/easy-markdown-editor
License: MIT license

#react-native #react 


How to Add Splash Screen in Android and iOS with Flutter

When your app is opened, there is a brief time while the native app loads Flutter. By default, during this time, the native app displays a white splash screen. This package automatically generates iOS, Android, and Web-native code for customizing this native splash screen background color and splash image. Supports dark mode, full screen, and platform-specific options.

What's New

[BETA] Support for flavors is in beta. Currently only Android and iOS are supported. See instructions below.

You can now keep the splash screen up while your app initializes! No need for a secondary splash screen anymore. Just use the preserve and remove methods together to remove the splash screen after your initialization is complete. See details below.

Usage

Would you prefer a video tutorial instead? Check out Johannes Milke's tutorial.

First, add flutter_native_splash as a dependency in your pubspec.yaml file.

dependencies:
  flutter_native_splash: ^2.2.19

Don't forget to flutter pub get.

1. Setting the splash screen

 

Customize the following settings and add to your project's pubspec.yaml file or place in a new file in your root project folder named flutter_native_splash.yaml.

flutter_native_splash:
  # This package generates native code to customize Flutter's default white native splash screen
  # with background color and splash image.
  # Customize the parameters below, and run the following command in the terminal:
  # flutter pub run flutter_native_splash:create
  # To restore Flutter's default white splash screen, run the following command in the terminal:
  # flutter pub run flutter_native_splash:remove

  # color or background_image is the only required parameter.  Use color to set the background
  # of your splash screen to a solid color.  Use background_image to set the background of your
  # splash screen to a png image.  This is useful for gradients. The image will be stretched to the
  # size of the app. Only one parameter can be used, color and background_image cannot both be set.
  color: "#42a5f5"
  #background_image: "assets/background.png"

  # Optional parameters are listed below.  To enable a parameter, uncomment the line by removing
  # the leading # character.

  # The image parameter allows you to specify an image used in the splash screen.  It must be a
  # png file and should be sized for 4x pixel density.
  #image: assets/splash.png

  # The branding property allows you to specify an image used as branding in the splash screen.
  # It must be a png file. It is supported for Android, iOS and the Web.  For Android 12,
  # see the Android 12 section below.
  #branding: assets/dart.png

  # To position the branding image at the bottom of the screen you can use bottom, bottomRight,
  # and bottomLeft. The default value is bottom if not specified or if an invalid value is given.
  #branding_mode: bottom

  # The color_dark, background_image_dark, image_dark, branding_dark are parameters that set the background
  # and image when the device is in dark mode. If they are not specified, the app will use the
  # parameters from above. If the image_dark parameter is specified, color_dark or
  # background_image_dark must be specified.  color_dark and background_image_dark cannot both be
  # set.
  #color_dark: "#042a49"
  #background_image_dark: "assets/dark-background.png"
  #image_dark: assets/splash-invert.png
  #branding_dark: assets/dart_dark.png

  # Android 12 handles the splash screen differently than previous versions.  Please visit
  # https://developer.android.com/guide/topics/ui/splash-screen
  # The following are Android 12 specific parameters.
  android_12:
    # The image parameter sets the splash screen icon image.  If this parameter is not specified,
    # the app's launcher icon will be used instead.
    # Please note that the splash screen will be clipped to a circle on the center of the screen.
    # App icon with an icon background: This should be 960×960 pixels, and fit within a circle
    # 640 pixels in diameter.
    # App icon without an icon background: This should be 1152×1152 pixels, and fit within a circle
    # 768 pixels in diameter.
    #image: assets/android12splash.png

    # Splash screen background color.
    #color: "#42a5f5"

    # App icon background color.
    #icon_background_color: "#111111"

    # The branding property allows you to specify an image used as branding in the splash screen.
    #branding: assets/dart.png

    # The image_dark, color_dark, icon_background_color_dark, and branding_dark set values that
    # apply when the device is in dark mode. If they are not specified, the app will use the
    # parameters from above.
    #image_dark: assets/android12splash-invert.png
    #color_dark: "#042a49"
    #icon_background_color_dark: "#eeeeee"

  # The android, ios and web parameters can be used to disable generating a splash screen on a given
  # platform.
  #android: false
  #ios: false
  #web: false

  # Platform specific images can be specified with the following parameters, which will override
  # the respective parameter.  You may specify all, selected, or none of these parameters:
  #color_android: "#42a5f5"
  #color_dark_android: "#042a49"
  #color_ios: "#42a5f5"
  #color_dark_ios: "#042a49"
  #color_web: "#42a5f5"
  #color_dark_web: "#042a49"
  #image_android: assets/splash-android.png
  #image_dark_android: assets/splash-invert-android.png
  #image_ios: assets/splash-ios.png
  #image_dark_ios: assets/splash-invert-ios.png
  #image_web: assets/splash-web.png
  #image_dark_web: assets/splash-invert-web.png
  #background_image_android: "assets/background-android.png"
  #background_image_dark_android: "assets/dark-background-android.png"
  #background_image_ios: "assets/background-ios.png"
  #background_image_dark_ios: "assets/dark-background-ios.png"
  #background_image_web: "assets/background-web.png"
  #background_image_dark_web: "assets/dark-background-web.png"
  #branding_android: assets/brand-android.png
  #branding_dark_android: assets/dart_dark-android.png
  #branding_ios: assets/brand-ios.png
  #branding_dark_ios: assets/dart_dark-ios.png

  # The position of the splash image can be set with android_gravity, ios_content_mode, and
  # web_image_mode parameters.  All default to center.
  #
  # android_gravity can be one of the following Android Gravity (see
  # https://developer.android.com/reference/android/view/Gravity): bottom, center,
  # center_horizontal, center_vertical, clip_horizontal, clip_vertical, end, fill, fill_horizontal,
  # fill_vertical, left, right, start, or top.
  #android_gravity: center
  #
  # ios_content_mode can be one of the following iOS UIView.ContentMode (see
  # https://developer.apple.com/documentation/uikit/uiview/contentmode): scaleToFill,
  # scaleAspectFit, scaleAspectFill, center, top, bottom, left, right, topLeft, topRight,
  # bottomLeft, or bottomRight.
  #ios_content_mode: center
  #
  # web_image_mode can be one of the following modes: center, contain, stretch, and cover.
  #web_image_mode: center

  # The screen orientation can be set in Android with the android_screen_orientation parameter.
  # Valid parameters can be found here:
  # https://developer.android.com/guide/topics/manifest/activity-element#screen
  #android_screen_orientation: sensorLandscape

  # To hide the notification bar, use the fullscreen parameter.  Has no effect in web since web
  # has no notification bar.  Defaults to false.
  # NOTE: Unlike Android, iOS will not automatically show the notification bar when the app loads.
  #       To show the notification bar, add the following code to your Flutter app:
  #       WidgetsFlutterBinding.ensureInitialized();
  #       SystemChrome.setEnabledSystemUIOverlays([SystemUiOverlay.bottom, SystemUiOverlay.top]);
  #fullscreen: true

  # If you have changed the name(s) of your info.plist file(s), you can specify the filename(s)
  # with the info_plist_files parameter.  Remove only the # characters in the three lines below,
  # do not remove any spaces:
  #info_plist_files:
  #  - 'ios/Runner/Info-Debug.plist'
  #  - 'ios/Runner/Info-Release.plist'

2. Run the package

After adding your settings, run the following command in the terminal:

flutter pub run flutter_native_splash:create

When the package finishes running, your splash screen is ready.

To specify the YAML file location just add --path with the command in the terminal:

flutter pub run flutter_native_splash:create --path=path/to/my/file.yaml

3. Set up app initialization (optional)

By default, the splash screen will be removed when Flutter has drawn the first frame. If you would like the splash screen to remain while your app initializes, you can use the preserve() and remove() methods together. Pass the preserve() method the value returned from WidgetsFlutterBinding.ensureInitialized() to keep the splash on screen. Later, when your app has initialized, make a call to remove() to remove the splash screen.

import 'package:flutter_native_splash/flutter_native_splash.dart';

void main() {
  WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
  runApp(const MyApp());
}

// whenever your initialization is completed, remove the splash screen:
    FlutterNativeSplash.remove();

NOTE: If you do not need to use the preserve() and remove() methods, you can place the flutter_native_splash dependency in the dev_dependencies section of pubspec.yaml.
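
For example, if you only ever run the generator and never call the runtime API, the dependency could sit under dev_dependencies instead (version taken from the snippet above; adjust to the latest release):

dev_dependencies:
  # Only needed at build time to generate the native splash assets
  flutter_native_splash: ^2.2.19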

4. Support the package (optional)

If you find this package useful, you can support it for free by giving it a thumbs up at the top of this page.

Android 12+ Support

Android 12 has a new method of adding splash screens, which consists of a window background, icon, and the icon background. Note that a background image is not supported.

Be aware of the following considerations regarding these elements:

1: image parameter. By default, the launcher icon is used:

  • App icon without an icon background, as shown on the left: This should be 1152×1152 pixels, and fit within a circle 768 pixels in diameter.
  • App icon with an icon background, as shown on the right: This should be 960×960 pixels, and fit within a circle 640 pixels in diameter.

2: icon_background_color is optional, and is useful if you need more contrast between the icon and the window background.

3: One-third of the foreground is masked.

4: color: the window background consists of a single opaque color.

PLEASE NOTE: The splash screen may not appear when you launch the app from Android Studio on API 31. However, it should appear when you launch by clicking on the launch icon in Android. This seems to be resolved in API 32+.

PLEASE NOTE: There are a number of reports that non-Google launchers do not display the launch image correctly. If the launch image does not display correctly, please try the Google launcher to confirm that this package is working.

PLEASE NOTE: The splash screen does not appear when you launch the app from a notification. Apparently this is the intended behavior on Android 12: core-splashscreen Icon not shown when cold launched from notification.

Flavor Support

If your project setup contains multiple flavors or environments and you have created more than one flavor, this feature is for you.

Instead of maintaining multiple files and copy-pasting images, you can now use this tool to create different splash screens for different environments.

Pre-requirements

In order to use the new feature and generate the desired splash images for your app, a couple of changes are required.

If you want to generate just one flavor and one file, you would use either option as described in Step 1. But in order to set up flavors, you will need to move all your setup values into flavor-specific flutter_native_splash YAML files, distinguished by a suffix.

Let's assume for the rest of the setup that you have 3 different flavors, Production, Acceptance, Development.

The first thing you will need to do is create a separate setup file for each of the 3 flavors, with a suffix, like so:

flutter_native_splash-production.yaml
flutter_native_splash-acceptance.yaml
flutter_native_splash-development.yaml

You would set up those 3 files the same way as you would a single one, but with different assets depending on which environment you are generating. For example (note: these are just examples; you can use whatever setup your project needs that is already supported by the package):

# flutter_native_splash-development.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-development.png
  branding: assets/branding-development.png
  color_dark: "#121212"
  image_dark: assets/logo-development.png
  branding_dark: assets/branding-development.png

  android_12:
    image: assets/logo-development.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-development.png
    icon_background_color_dark: "#121212"

  web: false

# flutter_native_splash-acceptance.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-acceptance.png
  branding: assets/branding-acceptance.png
  color_dark: "#121212"
  image_dark: assets/logo-acceptance.png
  branding_dark: assets/branding-acceptance.png

  android_12:
    image: assets/logo-acceptance.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-acceptance.png
    icon_background_color_dark: "#121212"

  web: false

# flutter_native_splash-production.yaml
flutter_native_splash:
  color: "#ffffff"
  image: assets/logo-production.png
  branding: assets/branding-production.png
  color_dark: "#121212"
  image_dark: assets/logo-production.png
  branding_dark: assets/branding-production.png

  android_12:
    image: assets/logo-production.png
    icon_background_color: "#ffffff"
    image_dark: assets/logo-production.png
    icon_background_color_dark: "#121212"

  web: false

Great, now comes the fun part: running the new command!

The new command is:

# If you have a flavor called production you would do this:
flutter pub run flutter_native_splash:create --flavor production

# For a flavor named staging you would provide its name like so:
flutter pub run flutter_native_splash:create --flavor staging

# And if you have a local version for devs you could do this:
flutter pub run flutter_native_splash:create --flavor development

Android setup

You're done! No, really, Android doesn't need any additional setup.

Note: Make sure that your flavors are named the same as your config files; otherwise the setup will not work.

iOS setup

iOS is a bit tricky, so hang tight. It might look scary, but most of the steps are just a single click, explained in as much detail as possible to lower the possibility of mistakes.

After you run the new command, you will need to open Xcode and follow the steps below:

Assumption

  • In order for this setup to work, you should already have 3 different schemes set up: production, acceptance and development.

Preparation

  • Open the iOS Flutter project in Xcode (open the Runner.xcworkspace)
  • Find the newly created Storyboard files at the same location as the original: {project root}/ios/Runner/Base.lproj
  • Select all of them and drag and drop them into Xcode, directly into the left-hand sidebar where the current LaunchScreen.storyboard is already located
  • After you drop your files there, Xcode will ask you to link them; make sure you select 'Copy if needed'
  • This part is done; you have linked the newly created storyboards into your project.

Xcode

Xcode still doesn't know how to use them, so we need to specify, for each of the current flavors (schemes), which file to use, and then use that value inside the Info.plist file.

  • Open the iOS Flutter project in Xcode (open the Runner.xcworkspace)
  • Click the Runner project in the top left corner (usually the first item in the list)
  • In the middle part of the screen, on the left side, select the Runner target
  • On the top part of the screen select Build Settings
  • Make sure that 'All' and 'Combined' are selected
  • Next to 'Combined' there is a '+' button; press it and select 'Add User-Defined Setting'
  • Once you do that, Xcode will create a new variable for you to name. The suggestion is to name it LAUNCH_SCREEN_STORYBOARD
  • Once you do that, you will have the option to define a specific name for each flavor (scheme) that you have defined in the project. Make sure that you input the exact name of the LaunchScreen.storyboard that was created by this tool
    • Example: If you have a flavor Development, there is a Storyboard created named LaunchScreenDevelopment.storyboard; please add that name (without the .storyboard part) to the variable value next to the flavor value
  • After you finish with that, you need to update the Info.plist file to link the newly created variable so that it's used correctly
  • Open the Info.plist file
  • Find the entry called 'Launch screen interface file base name'
  • The default value is 'LaunchScreen'; change that to the variable name that you created previously. If you followed these steps exactly, it would be LAUNCH_SCREEN_STORYBOARD, so input $(LAUNCH_SCREEN_STORYBOARD) (see the snippet below)
  • And you're done!
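
For reference, the resulting Info.plist entry would look roughly like this (assuming you named the user-defined setting LAUNCH_SCREEN_STORYBOARD as suggested; UILaunchStoryboardName is the raw key behind 'Launch screen interface file base name'):

<!-- ios/Runner/Info.plist: point the launch storyboard at the user-defined build setting -->
<key>UILaunchStoryboardName</key>
<string>$(LAUNCH_SCREEN_STORYBOARD)</string>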

Congrats, you have finished your setup for multiple flavors!

FAQs

I got the error "A splash screen was provided to Flutter, but this is deprecated."

This message is not related to this package but is related to a change in how Flutter handles splash screens in Flutter 2.5. It is caused by having the following code in your android/app/src/main/AndroidManifest.xml, which was included by default in previous versions of Flutter:

<meta-data
 android:name="io.flutter.embedding.android.SplashScreenDrawable"
 android:resource="@drawable/launch_background"
 />

The solution is to remove the above code. Note that this will also remove the fade effect between the native splash screen and your app.

Are animations/lottie/GIF images supported?

Not at this time. PRs are always welcome!

I got the error AAPT: error: style attribute 'android:attr/windowSplashScreenBackground' not found

This attribute is only found in Android 12, so if you are getting this error, it means your project is not fully set up for Android 12. Did you update your app's build configuration?

I see a flash of the wrong splash screen on iOS

This is caused by an iOS splash caching bug, which can be solved by uninstalling your app, powering off your device, powering it back on, and then reinstalling the app.

I see a white screen between splash screen and app

  1. It may be caused by an iOS splash caching bug, which can be solved by uninstalling your app, powering off your device, powering it back on, and then reinstalling the app.
  2. It may be caused by the delay due to initialization in your app. To solve this, put any initialization code in the removeAfter method.

Can I base light/dark mode on app settings?

No. This package creates a splash screen that is displayed before Flutter is loaded. Because of this, when the splash screen loads, internal app settings are not available to the splash screen. Unfortunately, this means that it is impossible to control light/dark settings of the splash from app settings.

Notes

If the splash screen was not updated correctly on iOS or if you experience a white screen before the splash screen, run flutter clean and recompile your app. If that does not solve the problem, delete your app, power down the device, power up the device, install and launch the app as per this StackOverflow thread.

This package modifies launch_background.xml and styles.xml files on Android, LaunchScreen.storyboard and Info.plist on iOS, and index.html on Web. If you have modified these files manually, this plugin may not work properly. Please open an issue if you find any bugs.

How it works

Android

  • Your splash image will be resized to mdpi, hdpi, xhdpi, xxhdpi and xxxhdpi drawables.
  • An <item> tag containing a <bitmap> for your splash image drawable will be added in launch_background.xml
  • Background color will be added in colors.xml and referenced in launch_background.xml.
  • Code for full screen mode toggle will be added in styles.xml.
  • Dark mode variants are placed in drawable-night, values-night, etc. resource folders.

iOS

  • Your splash image will be resized to @3x and @2x images.
  • Color and image properties will be inserted in LaunchScreen.storyboard.
  • The background color is implemented by using a single-pixel png file and stretching it to fit the screen.
  • Code for hidden status bar toggle will be added in Info.plist.

Web

  • A web/splash folder will be created for splash screen images and CSS files.
  • Your splash image will be resized to 1x, 2x, 3x, and 4x sizes and placed in web/splash/img.
  • The splash style sheet will be added to the app's web/index.html, as well as the HTML for the splash pictures.

Acknowledgments

This package was originally created by Henrique Arthur and it is currently maintained by Jon Hanson.

Bugs or Requests

If you encounter any problems feel free to open an issue. If you feel the library is missing a feature, please raise a ticket. Pull requests are also welcome.


Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add flutter_native_splash

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  flutter_native_splash: ^2.2.19

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:flutter_native_splash/flutter_native_splash.dart';

example/lib/main.dart

import 'package:flutter/material.dart';
import 'package:flutter_native_splash/flutter_native_splash.dart';

void main() {
  WidgetsBinding widgetsBinding = WidgetsFlutterBinding.ensureInitialized();
  FlutterNativeSplash.preserve(widgetsBinding: widgetsBinding);
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
      ),
      home: const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.

  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  void initState() {
    super.initState();
    initialization();
  }

  void initialization() async {
    // This is where you can initialize the resources needed by your app while
    // the splash screen is displayed.  Remove the following example because
    // delaying the user experience is a bad design practice!
    // ignore_for_file: avoid_print
    print('ready in 3...');
    await Future.delayed(const Duration(seconds: 1));
    print('ready in 2...');
    await Future.delayed(const Duration(seconds: 1));
    print('ready in 1...');
    await Future.delayed(const Duration(seconds: 1));
    print('go!');
    FlutterNativeSplash.remove();
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    //
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          //
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          //
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineMedium,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: const Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

Download Details:
 

Author: jonbhanson
Official Website: https://github.com/jonbhanson/flutter_native_splash 
License: MIT license

#flutter #ios #android 


What Is Face Recognition? Facial Recognition with Python and OpenCV

In this article, we will know what is face recognition and how is different from face detection. We will go briefly over the theory of face recognition and then jump on to the coding section. At the end of this article, you will be able to make a face recognition program for recognizing faces in images as well as on a live webcam feed.

What is Face Detection?

In computer vision, one essential problem we are trying to solve is automatically detecting objects in an image without human intervention. Face detection can be thought of as such a problem, where we detect human faces in an image. There may be slight differences between human faces, but overall it is safe to say that there are certain features associated with all of them. There are various face detection algorithms, but the Viola-Jones algorithm is one of the oldest methods still in use today, and we will use it later in the article. You can go through the Viola-Jones algorithm after completing this article; I'll link it at the end.

Face detection is usually the first step towards many face-related technologies, such as face recognition or verification. However, face detection can have very useful applications. The most successful application of face detection would probably be photo taking. When you take a photo of your friends, the face detection algorithm built into your digital camera detects where the faces are and adjusts the focus accordingly.

For a tutorial on Real-Time Face detection

What is Face Recognition?


Now that we are successful in making algorithms that can detect faces, can we also recognise whose faces they are?

Face recognition is a method of identifying or verifying the identity of an individual using their face. There are various algorithms that can do face recognition but their accuracy might vary. Here I am going to describe how we do face recognition using deep learning.

So now let us understand how we recognise faces using deep learning. We make use of face embedding in which each face is converted into a vector and this technique is called deep metric learning. Let me further divide this process into three simple steps for easy understanding:

Face Detection: The very first task we perform is detecting faces in the image or video stream. Once we know the exact location/coordinates of the face, we extract it for further processing.
 

Feature Extraction: Now that we have cropped the face out of the image, we extract features from it. Here we are going to use face embeddings to extract the features of the face. A neural network takes an image of the person’s face as input and outputs a vector which represents the most important features of that face. In machine learning, this vector is called an embedding, and thus we call this vector a face embedding. Now how does this help in recognizing the faces of different persons?
 

While training the neural network, the network learns to output similar vectors for faces that look similar. For example, if I have multiple images of my face taken over different periods of time, some of my facial features will of course change, but not to a great extent. So in this case the vectors associated with those faces are similar, or in short, they are very close in the vector space. Take a look at the diagram below for a rough idea:

Now after training, the network learns to output vectors that are closer to each other (similar) for faces of the same person (looking similar). The above vectors now transform into:
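
For a rough numerical intuition of this “closeness” in the vector space, here is a tiny, purely illustrative sketch: the numbers below are made up and only 3-dimensional, whereas real face embeddings have 128 dimensions.

import numpy as np

# toy 3-d "embeddings", invented purely for illustration
me_today = np.array([0.11, 0.52, 0.33])
me_last_year = np.array([0.12, 0.50, 0.35])
someone_else = np.array([0.80, 0.10, 0.64])

# faces of the same person map to nearby vectors (small distance)
print(np.linalg.norm(me_today - me_last_year))
# faces of different people map to distant vectors (much larger distance)
print(np.linalg.norm(me_today - someone_else))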

We are not going to train such a network here as it takes a significant amount of data and computation power to train such networks. We will use a pre-trained network trained by Davis King on a dataset of ~3 million images. The network outputs a vector of 128 numbers which represent the most important features of a face.

Now that we know how this network works, let us see how we use this network on our own data. We pass all the images in our data to this pre-trained network to get the respective embeddings and save these embeddings in a file for the next step.

Comparing faces: Now that we have face embeddings for every face in our data saved in a file, the next step is to recognise a new image that is not in our data. So the first step is to compute the face embedding for the new image using the same network we used above, and then compare this embedding with the rest of the embeddings we have. We recognise the face if the generated embedding is close, i.e. similar, to any other embedding, as shown below:

So we passed two images, one of Vladimir Putin and the other of George W. Bush. In our example above, we did not save the embeddings for Putin but we saved the embeddings of Bush. Thus, when we compared the two new embeddings with the existing ones, the vector for Bush is close to the other face embeddings of Bush, whereas the face embedding of Putin is not close to any other embedding, and thus the program cannot recognise him.
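
As a rough sketch of this comparison step using the face_recognition library (installed later in this article), compare_faces and face_distance do exactly this. The filenames below are placeholders, and each image is assumed to contain one detectable face:

import face_recognition

# placeholder filenames; each image is assumed to contain one detectable face
known_image = face_recognition.load_image_file("bush.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# True if the distance between the 128-d embeddings is below the default tolerance (0.6)
print(face_recognition.compare_faces([known_encoding], unknown_encoding))
# the raw Euclidean distance itself
print(face_recognition.face_distance([known_encoding], unknown_encoding))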

What is OpenCV

In the field of Artificial Intelligence, Computer Vision is one of the most interesting and challenging tasks. Computer Vision acts like a bridge between computer software and the visual world around us: it allows software to understand and learn from its visual surroundings. For example, determining a fruit based on its color, shape and size. This task can be very easy for the human brain; however, in the Computer Vision pipeline we first gather the data, then perform data processing, and then train the model to distinguish between fruits based on their size, shape and color.

Currently, various packages are available for machine learning, deep learning and computer vision tasks, and OpenCV is one of the most widely used libraries for such work. OpenCV is an open-source library. It has interfaces for several programming languages, including Python, C++ and Java, and it runs on most platforms such as Windows, Linux and macOS.

To know more about how face recognition works in OpenCV, check out the free course on face recognition in OpenCV.

Advantages of OpenCV:

  • OpenCV is an open-source library and is free of cost.
  • Compared to many other libraries, it is fast since it is written in C/C++.
  • It works well even on systems with less RAM.
  • It supports most operating systems, such as Windows, Linux and macOS.

Installation: 

Here we will be focusing on installing OpenCV for Python only. We can install OpenCV using pip or conda (for an Anaconda environment).

  1. Using pip:

Using pip, OpenCV can be installed by running the following command in the command prompt.

pip install opencv-python

  2. Using Anaconda:

If you are using an Anaconda environment, you can either run the pip command above in the Anaconda prompt or run the following conda command instead.

conda install -c conda-forge opencv

Face Recognition using Python

In this section, we shall implement face recognition using OpenCV and Python. First, let us see the libraries we will need and how to install them:

  • OpenCV
  • dlib
  • Face_recognition

OpenCV is an image and video processing library and is used for image and video analysis, like facial detection, license plate reading, photo editing, advanced robotic vision, optical character recognition, and a whole lot more.
 

The dlib library, maintained by Davis King, contains the implementation of “deep metric learning” that is used to construct the face embeddings for the actual recognition process.
 

The face_recognition library, created by Adam Geitgey, wraps around dlib’s facial recognition functionality. It is super easy to work with, and we will be using it in our code. Remember to install the dlib library before you install face_recognition.
 

To install OpenCV, type the following in the command prompt: 
 

pip install opencv-python

I have tried various ways to install dlib on Windows but the easiest of all of them is via Anaconda. First, install Anaconda (here is a guide to install it) and then use this command in your command prompt:
 

conda install -c conda-forge dlib

Next, to install face_recognition, type the following in the command prompt:

pip install face_recognition

Now that we have all the dependencies installed, let us start coding. We will create three files: the first will take our dataset, extract a face embedding for each face using dlib, and save these embeddings to a file.
 

In the second file we will compare new faces against the stored embeddings to recognise faces in images, and in the third we will do the same but recognise faces in a live webcam feed.
 

Extracting features from Face

First, you need to get a dataset, or even create one of your own. Just make sure to arrange all images in folders, with each folder containing images of just one person.

Next, save the dataset in the same folder where you are going to create the script. Now here is the code:


from imutils import paths
import face_recognition
import pickle
import cv2
import os

# get the paths of each file in the folder named Images
# Images here contains my data (folders of various persons)
imagePaths = list(paths.list_images('Images'))
knownEncodings = []
knownNames = []
# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    name = imagePath.split(os.path.sep)[-2]
    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # use face_recognition to locate faces
    boxes = face_recognition.face_locations(rgb, model='hog')
    # compute the facial embedding for each detected face
    encodings = face_recognition.face_encodings(rgb, boxes)
    # loop over the encodings
    for encoding in encodings:
        knownEncodings.append(encoding)
        knownNames.append(name)
# save the encodings along with their names in a dictionary
data = {"encodings": knownEncodings, "names": knownNames}
# use pickle to save the data into a file for later use
f = open("face_enc", "wb")
f.write(pickle.dumps(data))
f.close()

Now that we have stored the embedding in a file named “face_enc”, we can use them to recognise faces in images or live video stream.
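
If you want to sanity-check what was saved, a quick optional sketch (assuming “face_enc” was produced by the script above) is to load the pickle back and inspect it:

import pickle

# load the dictionary saved by the previous script
data = pickle.loads(open("face_enc", "rb").read())
print(len(data["encodings"]), "embeddings stored")
print(set(data["names"]))            # the distinct person names
print(data["encodings"][0].shape)    # each embedding is a 128-d vector, i.e. (128,)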

Face Recognition in Live webcam Feed

Here is the script to recognise faces on a live webcam feed:


import face_recognition
import imutils
import pickle
import time
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade into the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved in the last step
data = pickle.loads(open('face_enc', "rb").read())

print("Streaming started")
video_capture = cv2.VideoCapture(0)
# loop over frames from the video stream
while True:
    # grab the frame from the video stream
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)
    # convert the input frame from BGR to RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # compute the facial embeddings for each face in the input
    encodings = face_recognition.face_encodings(rgb)
    names = []
    # loop over the facial embeddings in case
    # we have multiple embeddings for multiple faces
    for encoding in encodings:
        # compare the encoding with the encodings in data["encodings"];
        # matches contains boolean values, True for the embeddings it matches closely
        # and False for the rest
        matches = face_recognition.compare_faces(data["encodings"],
                                                 encoding)
        # set name = "Unknown" if no encoding matches
        name = "Unknown"
        # check to see if we have found a match
        if True in matches:
            # find the positions at which we get True and store them
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}
            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                # check the name at the respective index stored in matchedIdxs
                name = data["names"][i]
                # increase the count for the name we got
                counts[name] = counts.get(name, 0) + 1
            # set the name which has the highest count
            name = max(counts, key=counts.get)
        # update the list of names
        names.append(name)
    # loop over the detected faces and draw the predicted name on the frame
    for ((x, y, w, h), name) in zip(faces, names):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.75, (0, 255, 0), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()

https://www.youtube.com/watch?v=fLnGdkZxRkg

Although in the example above we used a haar cascade to detect faces, you can also use face_recognition.face_locations to detect faces, as we did in the first script.
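
As a sketch of that alternative on a single image (the filename below is a placeholder): face_recognition.face_locations returns boxes as (top, right, bottom, left) tuples, so the drawing code has to use that ordering instead of the (x, y, w, h) boxes returned by the haar cascade.

import cv2
import face_recognition

# placeholder filename; any test image containing faces will do
image = cv2.imread("test.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# detect faces with the HOG-based detector instead of the haar cascade
boxes = face_recognition.face_locations(rgb, model="hog")

# each box is (top, right, bottom, left), unlike the cascade's (x, y, w, h)
for (top, right, bottom, left) in boxes:
    cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)

cv2.imshow("Detections", image)
cv2.waitKey(0)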

Face Recognition in Images

The script for detecting and recognising faces in images is very similar to what you saw above. Try it yourself first, and if you can’t, take a look at the code below:


import face_recognition
import imutils
import pickle
import time
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade into the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved in the last step
data = pickle.loads(open('face_enc', "rb").read())

# find the path of the image you want to recognise faces in and pass it here
image = cv2.imread("Path-to-img")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert the image to greyscale for the haarcascade
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,
                                     scaleFactor=1.1,
                                     minNeighbors=5,
                                     minSize=(60, 60),
                                     flags=cv2.CASCADE_SCALE_IMAGE)
# compute the facial embeddings for each face in the input
encodings = face_recognition.face_encodings(rgb)
names = []
# loop over the facial embeddings in case
# we have multiple embeddings for multiple faces
for encoding in encodings:
    # compare the encoding with the encodings in data["encodings"];
    # matches contains boolean values, True for the embeddings it matches closely
    # and False for the rest
    matches = face_recognition.compare_faces(data["encodings"],
                                             encoding)
    # set name = "Unknown" if no encoding matches
    name = "Unknown"
    # check to see if we have found a match
    if True in matches:
        # find the positions at which we get True and store them
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}
        # loop over the matched indexes and maintain a count for
        # each recognized face
        for i in matchedIdxs:
            # check the name at the respective index stored in matchedIdxs
            name = data["names"][i]
            # increase the count for the name we got
            counts[name] = counts.get(name, 0) + 1
        # set the name which has the highest count
        name = max(counts, key=counts.get)
    # update the list of names
    names.append(name)
# loop over the detected faces and draw the predicted name on the image
for ((x, y, w, h), name) in zip(faces, names):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                0.75, (0, 255, 0), 2)
cv2.imshow("Frame", image)
cv2.waitKey(0)

Output:

[Images: input photo and the output with the recognised face labelled]

This brings us to the end of this article where we learned about face recognition.

You can also upskill with Great Learning’s PGP Artificial Intelligence and Machine Learning Course. The course offers mentorship from industry leaders, and you will also have the opportunity to work on real-time industry-relevant projects.


Original article source at: https://www.mygreatlearning.com

#python #opencv 

Shardul Bhatt

Shardul Bhatt

1626775355

Why use Python for Software Development

Few programming languages are as versatile as Python. It enables developers to build cutting-edge applications with relative ease, and they are still exploring the full potential of end-to-end Python development services across many areas.

By areas, we mean FinTech, HealthTech, InsureTech, Cybersecurity, and more. These are New Economy sectors, and Python can serve every one of them. Most of them require serious computational capability, and Python's code is dynamic and powerful - capable of handling heavy traffic and substantial algorithmic workloads.

Software development is multidimensional today. Enterprise software calls for intelligent applications with AI and ML capabilities, while consumer-facing applications need data analysis to deliver a better user experience. Netflix, Trello, and Amazon are real examples of such applications, and Python helps build them with ease.

5 Reasons to Utilize Python for Programming Web Apps 

Python can do so many things that developers never run out of reasons to admire it. Python application development isn't restricted to web and enterprise applications; it is highly adaptable and well suited to a wide range of uses.

Robust frameworks 

Python is known for its tools and frameworks - there's a framework for everything. Django is useful for building web applications, enterprise applications, scientific applications, and numerical computing. Flask is another, more lightweight option for web development.

Web2Py, CherryPy, and Falcon offer strong capabilities for customizing Python development services. Most of them are open-source frameworks that allow rapid development.

Simple to read and compose 

Python has a simplified syntax - one that reads much like English. New Python developers can easily understand where they stand in the development process, and the ease of writing allows quick application building.

The motivation behind building Python, as stated by its creator Guido van Rossum, was to make the language understandable even to beginner developers. The simple coding style also lets developers make quick changes without getting lost in unnecessary detail.

Utilized by the best 

Alright - Python isn't just one more programming language. It must have something, which is why the industry giants use it, and for very different purposes. Developers at Google use Python for system administration tools, parallel data pushers, code review, testing and QA, and much more. Netflix uses Python web development services for its recommendation algorithm and media player.

Massive community support 

Python has a steadily growing community that offers enormous support - from beginners to experts, everyone is there. There are plenty of tutorials, documentation, and guides available for Python web development solutions.

Today, many universities start with Python, adding to the number of people in the community. Python developers frequently collaborate on projects and help each other with algorithmic, functional, and application-level problem solving.

Progressive applications 

Python is the biggest contributor to data science, Machine Learning, and Artificial Intelligence at any enterprise software development company. Its use cases in cutting-edge applications are the most compelling reason for its success. Python is the second most popular tool after R for data analytics.

The ease of organizing, managing, and visualizing data through dedicated libraries makes it ideal for data-based applications. TensorFlow for neural networks and OpenCV for computer vision are two of Python's best-known use cases in machine learning applications.

Summary

Considering the advances in software and technology, Python is a YES for a diverse range of applications. Game development, web application development services, GUI development, ML and AI development, enterprise and consumer applications - all of them use Python to its full potential.

The disadvantages of Python web development solutions are often overlooked by developers and organizations because of the advantages it provides. They focus on quality over speed and performance over errors. That is why it makes sense to use Python for building the applications of the future.

#python development services #python development company #python app development #python development #python in web development #python software development

August  Larson

August Larson

1622163871

4 Great Python Libraries You Wish You Came Across Earlier

Obscure Python Libraries

I picked 4 Python libraries that you probably haven’t heard of that I find useful. I’ll show you how to use them and then we’ll make a fun little program with them.

Great Scott Marty! Introducing Delorean

You mean Delorean is more than a time-traveling car? Yes, random Internet Stranger, it is. Datetime objects in Python can be a little difficult to work with. Delorean is a library that makes dealing with Datetime objects easier.

According to the documentation, Delorean is a library for clearing up the inconvenient truths that arise when dealing with datetimes in Python. Understanding that timing is a delicate enough problem, Delorean hopes to provide a cleaner, less troublesome solution for shifting, manipulating, and generating datetimes.
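
A minimal sketch of what that looks like in practice, assuming Delorean is installed (pip install delorean) and that its documented API (Delorean, shift, parse) hasn't changed:

from delorean import Delorean, parse

# the current moment, normalized to UTC by default
d = Delorean()
print(d)

# the same moment shifted into another timezone
print(d.shift("US/Eastern"))

# parse a human-readable date string into a Delorean object
print(parse("2021-05-27 10:30"))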

Fire Up The Flux Capacitor

#python #programming #tutorial #python-tutorials #libraries #python-libraries #coding #programming-languages