Agnes Sauer

How To Detect Faces in an Image in Java

Leverage innovative face detection technology to instantly locate the positions of faces in an image to assist with metadata entry, accessibility features, etc.

In this article, we will provide a step-by-step tutorial on how to use our Face Detection API to instantly locate and identify the position of all faces within an image. This can be particularly helpful for websites to assist with metadata entry, captioning, and accessibility features.

Face detection technology has come a long way over the past decade; it began with simple computer vision techniques and has progressed to sophisticated AI and other state-of-the-art technologies. It has become such an integral innovation because it serves as the initial step in numerous facial recognition processes.

Face detection is often used interchangeably with facial recognition, but the two terms actually have two different definitions. Face detection is a specific function that finds and identifies human faces in images or videos, while facial recognition is a method of identifying or verifying the identity of an individual from their face.
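
The article's own Face Detection API is not shown in this excerpt, so the following is a minimal stand-in sketch of the same idea using OpenCV's Java bindings; the cascade file path and image name are placeholders.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class FaceDetectExample {
    public static void main(String[] args) {
        // Load the native OpenCV library before using any of its classes.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Haar cascade shipped with OpenCV; the path here is a placeholder.
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");
        Mat image = Imgcodecs.imread("input.jpg");
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(image, faces);
        // Report the bounding box of every detected face.
        for (Rect face : faces.toArray()) {
            System.out.printf("Face at x=%d, y=%d, width=%d, height=%d%n",
                    face.x, face.y, face.width, face.height);
        }
    }
}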

#java #api #artificial-intelligence #machine-learning #developer

Queenie Davis

EasyMDE: Simple, Beautiful and Embeddable JavaScript Markdown Editor

EasyMDE - Markdown Editor 

This repository is a fork of SimpleMDE, made by Sparksuite. Go to the dedicated section for more information.

A drop-in JavaScript text area replacement for writing beautiful and understandable Markdown. EasyMDE allows users who may be less experienced with Markdown to use familiar toolbar buttons and shortcuts.

In addition, the syntax is rendered while editing to clearly show the expected result. Headings are larger, emphasized words are italicized, links are underlined, etc.

EasyMDE also features both built-in auto saving and spell checking. The editor is entirely customizable, from theming to toolbar buttons and javascript hooks.

Try the demo


Install EasyMDE

Via npm:

npm install easymde

Via the UNPKG CDN:

<link rel="stylesheet" href="https://unpkg.com/easymde/dist/easymde.min.css">
<script src="https://unpkg.com/easymde/dist/easymde.min.js"></script>

Or jsDelivr:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.css">
<script src="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.js"></script>

How to use

Loading the editor

After installing and/or importing the module, you can load EasyMDE onto the first textarea element on the web page:

<textarea></textarea>
<script>
const easyMDE = new EasyMDE();
</script>

Alternatively, you can select a specific textarea via JavaScript:

<textarea id="my-text-area"></textarea>
<script>
const easyMDE = new EasyMDE({element: document.getElementById('my-text-area')});
</script>

Editor functions

Use easyMDE.value() to get the content of the editor:

<script>
easyMDE.value();
</script>

Use easyMDE.value(val) to set the content of the editor:

<script>
easyMDE.value('New input for **EasyMDE**');
</script>

Configuration

Options list

  • autoDownloadFontAwesome: If set to true, force downloads Font Awesome (used for icons). If set to false, prevents downloading. Defaults to undefined, which will intelligently check whether Font Awesome has already been included, then download accordingly.
  • autofocus: If set to true, focuses the editor automatically. Defaults to false.
  • autosave: Saves the text that's being written and will load it back in the future. It will forget the text when the form it's contained in is submitted.
    • enabled: If set to true, saves the text automatically. Defaults to false.
    • delay: Delay between saves, in milliseconds. Defaults to 10000 (10 seconds).
    • submit_delay: Delay before assuming that submit of the form failed and saving the text, in milliseconds. Defaults to autosave.delay or 10000 (10 seconds).
    • uniqueId: You must set a unique string identifier so that EasyMDE can autosave. Something that separates this from other instances of EasyMDE elsewhere on your website.
    • timeFormat: Set the date/time format. For more information, see DateTimeFormat instances. Default locale: en-US, format: hour:minute.
    • text: Set text for autosave.
  • autoRefresh: Useful when initializing the editor in a hidden DOM node. If set to { delay: 300 }, it will check every 300 ms whether the editor is visible and, if so, call CodeMirror's refresh().
  • blockStyles: Customize how certain buttons that style blocks of text behave.
    • bold: Can be set to ** or __. Defaults to **.
    • code: Can be set to ``` or ~~~. Defaults to ```.
    • italic: Can be set to * or _. Defaults to *.
  • unorderedListStyle: can be *, - or +. Defaults to *.
  • scrollbarStyle: Chooses a scrollbar implementation. The default is "native", showing native scrollbars. The core library also provides the "null" style, which completely hides the scrollbars. Addons can implement additional scrollbar models.
  • element: The DOM element for the textarea element to use. Defaults to the first textarea element on the page.
  • forceSync: If set to true, force text changes made in EasyMDE to be immediately stored in original text area. Defaults to false.
  • hideIcons: An array of icon names to hide. Can be used to hide specific icons shown by default without completely customizing the toolbar.
  • indentWithTabs: If set to false, indent using spaces instead of tabs. Defaults to true.
  • initialValue: If set, will customize the initial value of the editor.
  • previewImagesInEditor: If set to true, EasyMDE will show a preview of images. Defaults to false. The preview will appear only for images on separate lines.
  • imagesPreviewHandler: A custom function for handling the preview of images. It takes the parsed string between the parentheses of the image markdown ![]( ) as argument and returns a string that serves as the src attribute of the <img> tag in the preview. This enables dynamic previewing of images in the frontend without having to upload them to a server, and allows copy-pasting of images into the editor with preview.
  • insertTexts: Customize how certain buttons that insert text behave. Takes an array with two elements. The first element will be the text inserted before the cursor or highlight, and the second element will be inserted after. For example, this is the default link value: ["[", "](http://)"].
    • horizontalRule
    • image
    • link
    • table
  • lineNumbers: If set to true, enables line numbers in the editor.
  • lineWrapping: If set to false, disable line wrapping. Defaults to true.
  • minHeight: Sets the minimum height for the composition area, before it starts auto-growing. Should be a string containing a valid CSS value like "500px". Defaults to "300px".
  • maxHeight: Sets fixed height for the composition area. minHeight option will be ignored. Should be a string containing a valid CSS value like "500px". Defaults to undefined.
  • onToggleFullScreen: A function that gets called when the editor's full screen mode is toggled. The function will be passed a boolean as parameter, true when the editor is currently going into full screen mode, or false.
  • parsingConfig: Adjust settings for parsing the Markdown during editing (not previewing).
    • allowAtxHeaderWithoutSpace: If set to true, will render headers without a space after the #. Defaults to false.
    • strikethrough: If set to false, will not process GFM strikethrough syntax. Defaults to true.
    • underscoresBreakWords: If set to true, let underscores be a delimiter for separating words. Defaults to false.
  • overlayMode: Pass a custom codemirror overlay mode to parse and style the Markdown during editing.
    • mode: A codemirror mode object.
    • combine: If set to false, will replace CSS classes returned by the default Markdown mode. Otherwise the classes returned by the custom mode will be combined with the classes returned by the default mode. Defaults to true.
  • placeholder: If set, displays a custom placeholder message.
  • previewClass: A string or array of strings that will be applied to the preview screen when activated. Defaults to "editor-preview".
  • previewRender: Custom function for parsing the plaintext Markdown and returning HTML. Used when user previews.
  • promptURLs: If set to true, a JS alert window appears asking for the link or image URL. Defaults to false.
  • promptTexts: Customize the text used to prompt for URLs.
    • image: The text to use when prompting for an image's URL. Defaults to URL of the image:.
    • link: The text to use when prompting for a link's URL. Defaults to URL for the link:.
  • uploadImage: If set to true, enables the image upload functionality, which can be triggered by drag and drop, copy-paste and through the browse-file window (opened when the user click on the upload-image icon). Defaults to false.
  • imageMaxSize: Maximum image size in bytes, checked before upload (note: never trust client, always check the image size at server-side). Defaults to 1024 * 1024 * 2 (2 MB).
  • imageAccept: A comma-separated list of mime-types used to check image type before upload (note: never trust client, always check file types at server-side). Defaults to image/png, image/jpeg.
  • imageUploadFunction: A custom function for handling the image upload (see the sketch after this list). Using this function will render the options imageMaxSize, imageAccept, imageUploadEndpoint and imageCSRFToken ineffective.
    • The function gets the file plus onSuccess and onError callback functions as parameters, with the signatures onSuccess(imageUrl: string) and onError(errorMessage: string).
  • imageUploadEndpoint: The endpoint where the images data will be sent, via an asynchronous POST request. The server is supposed to save this image, and return a JSON response.
    • if the request was successfully processed (HTTP 200 OK): {"data": {"filePath": "<filePath>"}} where filePath is the path of the image (absolute if imagePathAbsolute is set to true, relative if otherwise);
    • otherwise: {"error": "<errorCode>"}, where errorCode can be noFileGiven (HTTP 400 Bad Request), typeNotAllowed (HTTP 415 Unsupported Media Type), fileTooLarge (HTTP 413 Payload Too Large) or importError (see errorMessages below). If errorCode is not one of the errorMessages, it is alerted unchanged to the user. This allows for server-side error messages. No default value.
  • imagePathAbsolute: If set to true, will treat imageUrl from imageUploadFunction and filePath returned from imageUploadEndpoint as an absolute rather than relative path, i.e. not prepend window.location.origin to it.
  • imageCSRFToken: CSRF token to include with AJAX call to upload image. For various instances like Django, Spring and Laravel.
  • imageCSRFName: CSRF token field name to include with the AJAX call to upload an image; applied when imageCSRFToken has a value. Defaults to csrfmiddlewaretoken.
  • imageCSRFHeader: If set to true, the CSRF token is passed via header. Defaults to false, which passes the CSRF token through the request body.
  • imageTexts: Texts displayed to the user (mainly on the status bar) for the import image feature, where #image_name#, #image_size# and #image_max_size# will be replaced by their respective values; these can be used for customization or internationalization:
    • sbInit: Status message displayed initially if uploadImage is set to true. Defaults to Attach files by drag and dropping or pasting from clipboard..
    • sbOnDragEnter: Status message displayed when the user drags a file to the text area. Defaults to Drop image to upload it..
    • sbOnDrop: Status message displayed when the user drops a file in the text area. Defaults to Uploading images #images_names#.
    • sbProgress: Status message displayed to show uploading progress. Defaults to Uploading #file_name#: #progress#%.
    • sbOnUploaded: Status message displayed when the image has been uploaded. Defaults to Uploaded #image_name#.
    • sizeUnits: A comma-separated list of units used to display messages with human-readable file sizes. Defaults to B, KB, MB (example: 218 KB). You can use B,KB,MB instead if you prefer no whitespace (218KB).
  • errorMessages: Errors displayed to the user, using the errorCallback option, where #image_name#, #image_size# and #image_max_size# will be replaced by their respective values; these can be used for customization or internationalization:
    • noFileGiven: The server did not receive any file from the user. Defaults to You must select a file..
    • typeNotAllowed: The user sent a file type which doesn't match the imageAccept list, or the server returned this error code. Defaults to This image type is not allowed..
    • fileTooLarge: The size of the image being imported is bigger than the imageMaxSize, or if the server returned this error code. Defaults to Image #image_name# is too big (#image_size#).\nMaximum file size is #image_max_size#..
    • importError: An unexpected error occurred when uploading the image. Defaults to Something went wrong when uploading the image #image_name#..
  • errorCallback: A callback function used to define how to display an error message. Defaults to (errorMessage) => alert(errorMessage).
  • renderingConfig: Adjust settings for parsing the Markdown during previewing (not editing).
    • codeSyntaxHighlighting: If set to true, will highlight using highlight.js. Defaults to false. To use this feature you must include highlight.js on your page or pass in using the hljs option. For example, include the script and the CSS files like:
      <script src="https://cdn.jsdelivr.net/highlight.js/latest/highlight.min.js"></script>
      <link rel="stylesheet" href="https://cdn.jsdelivr.net/highlight.js/latest/styles/github.min.css">
    • hljs: An injectible instance of highlight.js. If you don't want to rely on the global namespace (window.hljs), you can provide an instance here. Defaults to undefined.
    • markedOptions: Set the internal Markdown renderer's options. Other renderingConfig options will take precedence.
    • singleLineBreaks: If set to false, disable parsing GitHub Flavored Markdown (GFM) single line breaks. Defaults to true.
    • sanitizerFunction: Custom function for sanitizing the HTML output of Markdown renderer.
  • shortcuts: Keyboard shortcuts associated with this instance. Defaults to the array of shortcuts.
  • showIcons: An array of icon names to show. Can be used to show specific icons hidden by default without completely customizing the toolbar.
  • spellChecker: If set to false, disable the spell checker. Defaults to true. Optionally pass a CodeMirrorSpellChecker-compliant function.
  • inputStyle: textarea or contenteditable. Defaults to textarea for desktop and contenteditable for mobile. contenteditable option is necessary to enable nativeSpellcheck.
  • nativeSpellcheck: If set to false, disable native spell checker. Defaults to true.
  • sideBySideFullscreen: If set to false, allows side-by-side editing without going into fullscreen. Defaults to true.
  • status: If set to false, hide the status bar. Defaults to the array of built-in status bar items.
    • Optionally, you can set an array of status bar items to include, and in what order. You can even define your own custom status bar items.
  • styleSelectedText: If set to false, remove the CodeMirror-selectedtext class from selected lines. Defaults to true.
  • syncSideBySidePreviewScroll: If set to false, disable syncing scroll in side by side mode. Defaults to true.
  • tabSize: If set, customize the tab size. Defaults to 2.
  • theme: Override the theme. Defaults to easymde.
  • toolbar: If set to false, hide the toolbar. Defaults to the array of icons.
  • toolbarTips: If set to false, disable toolbar button tips. Defaults to true.
  • direction: rtl or ltr. Changes text direction to support right-to-left languages. Defaults to ltr.
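
As referenced in the imageUploadFunction entry above, here is a minimal sketch of a custom upload handler. The /upload endpoint and the server response shape are assumptions; adapt both to your own backend.

const easyMDE = new EasyMDE({
    uploadImage: true,
    imageUploadFunction: (file, onSuccess, onError) => {
        // Hypothetical endpoint; replace with your own upload route.
        const formData = new FormData();
        formData.append('image', file);
        fetch('/upload', { method: 'POST', body: formData })
            .then((response) => response.json())
            // Assumes the server answers in the same shape imageUploadEndpoint
            // expects: {"data": {"filePath": "<filePath>"}}
            .then((json) => onSuccess(json.data.filePath))
            .catch((error) => onError(error.message));
    },
});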

Options example

Most options demonstrate the non-default behavior:

const editor = new EasyMDE({
    autofocus: true,
    autosave: {
        enabled: true,
        uniqueId: "MyUniqueID",
        delay: 1000,
        submit_delay: 5000,
        timeFormat: {
            locale: 'en-US',
            format: {
                year: 'numeric',
                month: 'long',
                day: '2-digit',
                hour: '2-digit',
                minute: '2-digit',
            },
        },
        text: "Autosaved: "
    },
    blockStyles: {
        bold: "__",
        italic: "_",
    },
    unorderedListStyle: "-",
    element: document.getElementById("MyID"),
    forceSync: true,
    hideIcons: ["guide", "heading"],
    indentWithTabs: false,
    initialValue: "Hello world!",
    insertTexts: {
        horizontalRule: ["", "\n\n-----\n\n"],
        image: ["![](http://", ")"],
        link: ["[", "](https://)"],
        table: ["", "\n\n| Column 1 | Column 2 | Column 3 |\n| -------- | -------- | -------- |\n| Text     | Text      | Text     |\n\n"],
    },
    lineWrapping: false,
    minHeight: "500px",
    parsingConfig: {
        allowAtxHeaderWithoutSpace: true,
        strikethrough: false,
        underscoresBreakWords: true,
    },
    placeholder: "Type here...",

    previewClass: "my-custom-styling",
    previewClass: ["my-custom-styling", "more-custom-styling"],

    previewRender: (plainText) => customMarkdownParser(plainText), // Returns HTML from a custom parser
    previewRender: (plainText, preview) => { // Async method
        setTimeout(() => {
            preview.innerHTML = customMarkdownParser(plainText);
        }, 250);

        return "Loading...";
    },
    promptURLs: true,
    promptTexts: {
        image: "Custom prompt for URL:",
        link: "Custom prompt for URL:",
    },
    renderingConfig: {
        singleLineBreaks: false,
        codeSyntaxHighlighting: true,
        sanitizerFunction: (renderedHTML) => {
            // Using DOMPurify and only allowing <b> tags
            return DOMPurify.sanitize(renderedHTML, {ALLOWED_TAGS: ['b']})
        },
    },
    shortcuts: {
        drawTable: "Cmd-Alt-T"
    },
    showIcons: ["code", "table"],
    spellChecker: false,
    status: false,
    status: ["autosave", "lines", "words", "cursor"], // Optional usage
    status: ["autosave", "lines", "words", "cursor", {
        className: "keystrokes",
        defaultValue: (el) => {
            el.setAttribute('data-keystrokes', 0);
        },
        onUpdate: (el) => {
            const keystrokes = Number(el.getAttribute('data-keystrokes')) + 1;
            el.innerHTML = `${keystrokes} Keystrokes`;
            el.setAttribute('data-keystrokes', keystrokes);
        },
    }], // Another optional usage, with a custom status bar item that counts keystrokes
    styleSelectedText: false,
    sideBySideFullscreen: false,
    syncSideBySidePreviewScroll: false,
    tabSize: 4,
    toolbar: false,
    toolbarTips: false,
});

Toolbar icons

Below are the built-in toolbar icons (only some of which are enabled by default), which can be reorganized however you like. "Name" is the name of the icon, referenced in the JavaScript. "Action" is either a function or a URL to open. "Class" is the class given to the icon. "Tooltip" is the small tooltip that appears via the title="" attribute. Note that shortcut hints are added automatically and reflect the specified action if it has a key bind assigned to it (i.e. with the value of action set to bold and that of tooltip set to Bold, the final text the user will see would be "Bold (Ctrl-B)").

Additionally, you can add a separator between any icons by adding "|" to the toolbar array.

Name | Action | Tooltip | Class
---- | ------ | ------- | -----
bold | toggleBold | Bold | fa fa-bold
italic | toggleItalic | Italic | fa fa-italic
strikethrough | toggleStrikethrough | Strikethrough | fa fa-strikethrough
heading | toggleHeadingSmaller | Heading | fa fa-header
heading-smaller | toggleHeadingSmaller | Smaller Heading | fa fa-header
heading-bigger | toggleHeadingBigger | Bigger Heading | fa fa-lg fa-header
heading-1 | toggleHeading1 | Big Heading | fa fa-header header-1
heading-2 | toggleHeading2 | Medium Heading | fa fa-header header-2
heading-3 | toggleHeading3 | Small Heading | fa fa-header header-3
code | toggleCodeBlock | Code | fa fa-code
quote | toggleBlockquote | Quote | fa fa-quote-left
unordered-list | toggleUnorderedList | Generic List | fa fa-list-ul
ordered-list | toggleOrderedList | Numbered List | fa fa-list-ol
clean-block | cleanBlock | Clean block | fa fa-eraser
link | drawLink | Create Link | fa fa-link
image | drawImage | Insert Image | fa fa-picture-o
table | drawTable | Insert Table | fa fa-table
horizontal-rule | drawHorizontalRule | Insert Horizontal Line | fa fa-minus
preview | togglePreview | Toggle Preview | fa fa-eye no-disable
side-by-side | toggleSideBySide | Toggle Side by Side | fa fa-columns no-disable no-mobile
fullscreen | toggleFullScreen | Toggle Fullscreen | fa fa-arrows-alt no-disable no-mobile
guide | This link | Markdown Guide | fa fa-question-circle
undo | undo | Undo | fa fa-undo
redo | redo | Redo | fa fa-redo

Toolbar customization

Customize the toolbar using the toolbar option.

To change only the order of existing buttons:

const easyMDE = new EasyMDE({
    toolbar: ["bold", "italic", "heading", "|", "quote"]
});

To customize button information and/or add your own icons:

const easyMDE = new EasyMDE({
    toolbar: [
        {
            name: "bold",
            action: EasyMDE.toggleBold,
            className: "fa fa-bold",
            title: "Bold",
        },
        "italics", // shortcut to pre-made button
        {
            name: "custom",
            action: (editor) => {
                // Add your own code
            },
            className: "fa fa-star",
            title: "Custom Button",
            attributes: { // for custom attributes
                id: "custom-id",
                "data-value": "custom value" // HTML5 data-* attributes need to be enclosed in quotation marks ("") because of the dash (-) in its name.
            }
        },
        "|" // Separator
        // [, ...]
    ]
});

Put some buttons on a dropdown menu:

const easyMDE = new EasyMDE({
    toolbar: [{
                name: "heading",
                action: EasyMDE.toggleHeadingSmaller,
                className: "fa fa-header",
                title: "Headers",
            },
            "|",
            {
                name: "others",
                className: "fa fa-blind",
                title: "others buttons",
                children: [
                    {
                        name: "image",
                        action: EasyMDE.drawImage,
                        className: "fa fa-picture-o",
                        title: "Image",
                    },
                    {
                        name: "quote",
                        action: EasyMDE.toggleBlockquote,
                        className: "fa fa-percent",
                        title: "Quote",
                    },
                    {
                        name: "link",
                        action: EasyMDE.drawLink,
                        className: "fa fa-link",
                        title: "Link",
                    }
                ]
            },
        // [, ...]
    ]
});

Keyboard shortcuts

EasyMDE comes with an array of predefined keyboard shortcuts, but they can be altered with a configuration option. The list of default ones is as follows:

Shortcut (Windows / Linux) | Shortcut (macOS) | Action
-------------------------- | ---------------- | ------
Ctrl-' | Cmd-' | toggleBlockquote
Ctrl-B | Cmd-B | toggleBold
Ctrl-E | Cmd-E | cleanBlock
Ctrl-H | Cmd-H | toggleHeadingSmaller
Ctrl-I | Cmd-I | toggleItalic
Ctrl-K | Cmd-K | drawLink
Ctrl-L | Cmd-L | toggleUnorderedList
Ctrl-P | Cmd-P | togglePreview
Ctrl-Alt-C | Cmd-Alt-C | toggleCodeBlock
Ctrl-Alt-I | Cmd-Alt-I | drawImage
Ctrl-Alt-L | Cmd-Alt-L | toggleOrderedList
Shift-Ctrl-H | Shift-Cmd-H | toggleHeadingBigger
F9 | F9 | toggleSideBySide
F11 | F11 | toggleFullScreen

Here is how you can change a few, while leaving others untouched:

const editor = new EasyMDE({
    shortcuts: {
        "toggleOrderedList": "Ctrl-Alt-K", // alter the shortcut for toggleOrderedList
        "toggleCodeBlock": null, // unbind Ctrl-Alt-C
        "drawTable": "Cmd-Alt-T", // bind Cmd-Alt-T to drawTable action, which doesn't come with a default shortcut
    }
});

Shortcuts are automatically converted between platforms. If you define a shortcut as "Cmd-B", on PC that shortcut will be changed to "Ctrl-B". Conversely, a shortcut defined as "Ctrl-B" will become "Cmd-B" for Mac users.

The list of actions that can be bound is the same as the list of built-in actions available for toolbar buttons.

Advanced use

Event handling

You can catch the following list of events: https://codemirror.net/doc/manual.html#events

const easyMDE = new EasyMDE();
easyMDE.codemirror.on("change", () => {
    console.log(easyMDE.value());
});

Removing EasyMDE from text area

You can revert to the initial text area by calling the toTextArea method. Note that this clears up the autosave (if enabled) associated with it. The text area will retain any text from the destroyed EasyMDE instance.

let easyMDE = new EasyMDE();
// ...
easyMDE.toTextArea();
easyMDE = null;

If you need to remove registered event listeners (when the editor is not needed anymore), call easyMDE.cleanup().

Useful methods

The following self-explanatory methods may be of use while developing with EasyMDE.

const easyMDE = new EasyMDE();
easyMDE.isPreviewActive(); // returns boolean
easyMDE.isSideBySideActive(); // returns boolean
easyMDE.isFullscreenActive(); // returns boolean
easyMDE.clearAutosavedValue(); // no returned value

How it works

EasyMDE is a continuation of SimpleMDE.

SimpleMDE began as an improvement of lepture's Editor project, but has now taken on an identity of its own. It is bundled with CodeMirror and depends on Font Awesome.

CodeMirror is the backbone of the project and parses much of the Markdown syntax as it's being written. This allows us to add styles to the Markdown that's being written. Additionally, a toolbar and status bar have been added to the top and bottom, respectively. Previews are rendered by Marked using GitHub Flavored Markdown (GFM).

SimpleMDE fork

I originally made this fork to add FontAwesome 5 compatibility to SimpleMDE. When that was done, I submitted a pull request, which has not been accepted yet. This, along with the project being inactive since May 2017, prompted me to make more changes and try to put new life into the project.

Changes include:

  • FontAwesome 5 compatibility
  • Guide button works when editor is in preview mode
  • Links are now https:// by default
  • Small styling changes
  • Support for Node 8 and beyond
  • Lots of refactored code
  • Links in preview will open in a new tab by default
  • TypeScript support

My intention is to continue development on this project, improving it and keeping it alive.

Hacking EasyMDE

You may want to edit this library to adapt its behavior to your needs. This can be done in a few quick steps:

  1. Follow the prerequisites and installation instructions in the contribution guide;
  2. Make your changes;
  3. Run the gulp command, which will generate the files dist/easymde.min.css and dist/easymde.min.js;
  4. Copy those files into your code base, and you are done.

Contributing

Want to contribute to EasyMDE? Thank you! We have a contribution guide just for you!


Author: Ionaru
Source Code: https://github.com/Ionaru/easy-markdown-editor
License: MIT license

#react-native #react 

Tyrique Littel

How to Install OpenJDK 11 on CentOS 8

What is OpenJDK?

OpenJDK, or Open Java Development Kit, is a free, open-source implementation of the Java Platform, Standard Edition (Java SE). It contains the virtual machine, the Java Class Library, and the Java compiler. The difference between OpenJDK and Oracle JDK is that OpenJDK is the open-source source-code reference implementation, while Oracle JDK is a continuation or advanced build of OpenJDK that is not open source and requires a license to use.

In this article, we will be installing OpenJDK 11 on CentOS 8.
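
As a quick preview, the installation itself boils down to a single dnf command (java-11-openjdk-devel is the package name in the CentOS 8 AppStream repository), followed by a version check:

sudo dnf install java-11-openjdk-devel
java -version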

#tutorials #alternatives #centos #centos 8 #configuration #dnf #frameworks #java #java development kit #java ee #java environment variables #java framework #java jdk #java jre #java platform #java sdk #java se #jdk #jre #open java development kit #open source #openjdk #openjdk 11 #openjdk 8 #openjdk runtime environment


Dominic Feeney

Deepface: A Face Recognition and Facial Attribute Analysis for Python

deepface

Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib.

Experiments show that human beings have 97.53% accuracy on facial recognition tasks whereas those models already reached and passed that accuracy level.

Installation

The easiest way to install deepface is to download it from PyPI. It's going to install the library itself and its prerequisites as well. The library is mainly powered by TensorFlow and Keras.

pip install deepface

Then you will be able to import the library and use its functionalities.

from deepface import DeepFace

Facial Recognition - Demo

A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. While Deepface handles all these common stages in the background, you don’t need to acquire in-depth knowledge about all the processes behind it. You can just call its verification, find or analysis function with a single line of code.

Face Verification - Demo

This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs. Passing numpy or base64 encoded images is also welcome. Then, it is going to return a dictionary, and you should check just its verified key.

result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")

Face recognition - Demo

Face recognition requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It's going to look for the identity of the input image in the database path and return a pandas data frame as output.

df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")

Face recognition models - Demo

Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. The default configuration uses the VGG-Face model.

models = ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib"]

#face verification
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", model_name = models[1])

#face recognition
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_name = models[1])

FaceNet, VGG-Face, ArcFace and Dlib are the strongest performers based on experiments. You can find the scores of those models below on both the Labeled Faces in the Wild and YouTube Faces in the Wild data sets, as declared by their creators.

Model | LFW Score | YTF Score
----- | --------- | ---------
Facenet512 | 99.65% | -
ArcFace | 99.41% | -
Dlib | 99.38% | -
Facenet | 99.20% | -
VGG-Face | 98.78% | 97.40%
Human beings | 97.53% | -
OpenFace | 93.80% | -
DeepID | - | 97.05%

Similarity

Face recognition models are regular convolutional neural networks, and they are responsible for representing faces as vectors. We expect a face pair of the same person to be more similar than a face pair of different persons.

Similarity could be calculated by different metrics such as Cosine Similarity, Euclidean Distance and L2 form. The default configuration uses cosine similarity.

metrics = ["cosine", "euclidean", "euclidean_l2"]

#face verification
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", distance_metric = metrics[1])

#face recognition
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", distance_metric = metrics[1])

Euclidean L2 form seems to be more stable than cosine and regular Euclidean distance based on experiments.

Facial Attribute Analysis - Demo

Deepface also comes with a strong facial attribute analysis module including age, gender, facial expression (including angry, fear, neutral, sad, disgust, happy and surprise) and race (including asian, white, middle eastern, indian, latino and black) predictions.

obj = DeepFace.analyze(img_path = "img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])

The age model has a mean absolute error of ±4.65 years; the gender model reaches 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its tutorial.

Streaming and Real Time Analysis - Demo

You can run deepface on real-time video as well. The stream function will access your webcam and apply both face recognition and facial attribute analysis. The function starts to analyze a frame once it detects a face in 5 sequential frames, then shows the results for 5 seconds.

DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")

Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.

user
├── database
│   ├── Alice
│   │   ├── Alice1.jpg
│   │   ├── Alice2.jpg
│   ├── Bob
│   │   ├── Bob.jpg

Face Detectors - Demo

Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. OpenCV, SSD, Dlib, MTCNN, RetinaFace and MediaPipe detectors are wrapped in deepface.

All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.

backends = ['opencv', 'ssd', 'dlib', 'mtcnn', 'retinaface', 'mediapipe']

#face verification
obj = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", detector_backend = backends[4])

#face recognition
df = DeepFace.find(img_path = "img.jpg", db_path = "my_db", detector_backend = backends[4])

#facial analysis
demography = DeepFace.analyze(img_path = "img4.jpg", detector_backend = backends[4])

#face detection and alignment
face = DeepFace.detectFace(img_path = "img.jpg", target_size = (224, 224), detector_backend = backends[4])

Face recognition models are actually CNN models, and they expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.

RetinaFace and MTCNN seem to outperform the others in the detection and alignment stages, but they are much slower. If the speed of your pipeline matters more, use opencv or ssd. If accuracy matters more, use retinaface or mtcnn.

The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance. Highlighted red points show some facial landmarks such as eyes, nose and mouth. That's why the alignment score of RetinaFace is high as well.

You can find out more about RetinaFace on this repo.

API - Demo

Deepface serves an API as well. You can clone /api/api.py and pass it to the python command as an argument. This will bring a REST service up. In this way, you can call deepface from an external system such as a mobile app or the web.

python api.py

Face recognition, facial attribute analysis and vector representation functions are covered in the API. You are expected to call these functions via HTTP POST. The service endpoints are http://127.0.0.1:5000/verify for face recognition, http://127.0.0.1:5000/analyze for facial attribute analysis, and http://127.0.0.1:5000/represent for vector representation. You should pass input images as base64 encoded strings in this case. Here, you can find a postman project.
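
For example, here is a minimal sketch of calling the /verify endpoint with Python's requests library. The JSON field names below are assumptions for illustration; consult the postman project for the exact request schema.

# Minimal sketch of calling the deepface REST API's /verify endpoint.
import base64
import requests

def to_base64(path):
    # Read an image file and encode it as a base64 data URI string.
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

payload = {
    "img1_path": to_base64("img1.jpg"),  # assumed field name
    "img2_path": to_base64("img2.jpg"),  # assumed field name
}
response = requests.post("http://127.0.0.1:5000/verify", json=payload)
print(response.json())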

Tech Stack - Vlog, Tutorial

Face recognition models represent facial images as vector embeddings. The idea behind facial recognition is that vectors should be more similar for the same person than for different persons. The question is where and how to store facial embeddings in a large-scale system. Herein, deepface offers a representation function to find vector embeddings from facial images.

embedding = DeepFace.represent(img_path = "img.jpg", model_name = 'Facenet')

There is a vast tech stack available for storing vector embeddings. To determine the right tool, you should consider your task (face verification or face recognition), your priority (speed or confidence), and also your data size.

Contribution

Pull requests are welcome. You should run the unit tests locally by running test/unit_tests.py. Please share the unit test result logs in the PR. Deepface is currently compatible with both TF 1 and TF 2; change requests should satisfy both.

Support

There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏

You can also support this work on Patreon

 

Citation

Please cite deepface in your publications if it helps your research. Here are BibTeX entries:

@inproceedings{serengil2020lightface,
  title        = {LightFace: A Hybrid Deep Face Recognition Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages        = {23-27},
  year         = {2020},
  doi          = {10.1109/ASYU50717.2020.9259802},
  url          = {https://doi.org/10.1109/ASYU50717.2020.9259802},
  organization = {IEEE}
}
@inproceedings{serengil2021lightface,
  title        = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
  pages        = {1-4},
  year         = {2021},
  doi          = {10.1109/ICEET53442.2021.9659697},
  url          = {https://doi.org/10.1109/ICEET53442.2021.9659697},
  organization = {IEEE}
}

Also, if you use deepface in your GitHub projects, please add deepface in the requirements.txt.

Download Details:
Author: serengil
Source Code: https://github.com/serengil/deepface
License: MIT License

#tensorflow  #python #machinelearning 

Samanta Moore

Going Beyond Java 8: Local Variable Type Inference (var) - DZone Java

According to some surveys, such as JetBrains’s great survey, Java 8 is currently the most used version of Java, despite being a 2014 release.

What you are reading is one in a series of articles titled ‘Going beyond Java 8,’ inspired by the contents of my book, Java for Aliens. These articles will guide you step-by-step through the most important features introduced to the language, starting from version 9. The aim is to make you aware of how important it is to move forward from Java 8, explaining the enormous advantages that the latest versions of the language offer.

In this article, we will talk about the most important new feature introduced with Java 10. Officially called local variable type inference, this feature is better known as the introduction of the word var. Despite the complicated name, it is actually quite a simple feature to use. However, some observations need to be made before we can see the impact that the introduction of the word var has on other pre-existing characteristics.
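
As a quick taste (a minimal sketch; the class and variable names are illustrative), var lets the compiler infer a local variable's type from its initializer:

import java.util.ArrayList;

public class VarExample {
    public static void main(String[] args) {
        var message = "Hello";                  // inferred as String
        var numbers = new ArrayList<Integer>(); // inferred as ArrayList<Integer>
        numbers.add(42);
        for (var n : numbers) {                 // inferred as Integer
            System.out.println(message + " " + n);
        }
    }
}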

#java #java 11 #java 10 #java 12 #var #java 14 #java 13 #java 15 #verbosity