Real-time YOLOv4 Object Detection on Webcam in Google Colab | Images and Video

Learn how to implement YOLOv4 Object Detection on your Webcam from within Google Colab! This tutorial uses Scaled-YOLOv4, one of the fastest and most accurate object detection systems currently available. Perform object detection in real time on webcam images and video with high accuracy and speed. ALL WITH A FREE GPU!

THE GOOGLE COLAB NOTEBOOK: https://bit.ly/388IGD6

Github Code Repository (yolov4-webcam notebook): https://github.com/theAIGuysCode/YOLOv4-Cloud-Tutorial

#yolov4 #machine-learning

Sasha Lee

Dl4clj: Clojure Wrapper for Deeplearning4j.

dl4clj

Port of deeplearning4j to clojure

Contact info

If you have any questions,

  • my email is will@yetanalytics.com
  • I'm will_hoyt in the clojurians slack
  • twitter is @FeLungz (don't check very often)

TODO

  • update examples dir
  • finish README
    • add in examples using Transfer Learning
  • finish tests
    • eval is missing regression tests, roc tests
    • nn-test is missing regression tests
    • spark tests need to be redone
    • need dl4clj.core tests
  • revisit spark for updates
  • write specs for user facing functions
    • this is very important, match isn't strict for maps
    • provides 100% certainty of the input -> output flow
    • check the args as they come in, dispatch once I know it's safe, test the pure output
  • collapse overlapping api namespaces
  • add to core use case flows

Features

Stable Features with tests

  • Neural Networks DSL
  • Early Stopping Training
  • Transfer Learning
  • Evaluation
  • Data import

Features being worked on for 0.1.0

  • Clustering (testing in progress)
  • Spark (currently being refactored)
  • Front End (maybe current release, maybe future release. Not sure yet)
  • Version of dl4j is 0.0.8 in this project. Current dl4j version is 0.0.9
  • Parallelism
  • Kafka support
  • Other items mentioned in TODO

Features being worked on for future releases

  • NLP
  • Computational Graphs
  • Reinforcement Learning
  • Arbiter

Artifacts

NOT YET RELEASED TO CLOJARS

  • fork or clone to try it out

If using Maven, add the following repository definition to your pom.xml:

<repository>
  <id>clojars.org</id>
  <url>http://clojars.org/repo</url>
</repository>

Latest release

With Leiningen:

n/a

With Maven:

n/a

<dependency>
  <groupId>_</groupId>
  <artifactId>_</artifactId>
  <version>_</version>
</dependency>

Usage

Things you need to know

All functions for creating dl4j objects return code by default

  • All of these functions have an option to return the dl4j object
    • :as-code? = false
  • This is because all builders require the code representation of dl4j objects
    • this requirement is not going to change
  • INDArray creation fns default to objects, this is for convenience
    • :as-code? is still respected

API functions return code when all args are provided as code

API functions return the value of calling the wrapped method when args are provided as a mixture of objects and code or just objects

The tests are there to help clarify behavior; if you are unsure of how to use a fn, search the tests.

  • for questions about spark, refer to the spark section below
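
For instance, here is a minimal sketch of that rule, using the record-reader API that appears in the data-import example below (csv-rr is defined there); treat it as illustrative rather than as part of the original walkthrough:

;; all args are code (the default), so the call itself returns code
(next-record! :rr csv-rr)

;; ask for the value instead and the wrapped dl4j method is actually called
(next-record! :rr csv-rr :as-code? false)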

Example of obj/code duality

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]))

;; as code (the default)

(l/dense-layer-builder
 :activation-fn :relu
 :learning-rate 0.006
 :weight-init :xavier
 :layer-name "example layer"
 :n-in 10
 :n-out 1)

;; =>

(doto
 (org.deeplearning4j.nn.conf.layers.DenseLayer$Builder.)
 (.nOut 1)
 (.activation (dl4clj.constants/value-of {:activation-fn :relu}))
 (.weightInit (dl4clj.constants/value-of {:weight-init :xavier}))
 (.nIn 10)
 (.name "example layer")
 (.learningRate 0.006))

;; as an object

(l/dense-layer-builder
 :activation-fn :relu
 :learning-rate 0.006
 :weight-init :xavier
 :layer-name "example layer"
 :n-in 10
 :n-out 1
 :as-code? false)

;; =>

#object[org.deeplearning4j.nn.conf.layers.DenseLayer 0x69d7d160 "DenseLayer(super=FeedForwardLayer(super=Layer(layerName=example layer, activationFn=relu, weightInit=XAVIER, biasInit=NaN, dist=null, learningRate=0.006, biasLearningRate=NaN, learningRateSchedule=null, momentum=NaN, momentumSchedule=null, l1=NaN, l2=NaN, l1Bias=NaN, l2Bias=NaN, dropOut=NaN, updater=null, rho=NaN, epsilon=NaN, rmsDecay=NaN, adamMeanDecay=NaN, adamVarDecay=NaN, gradientNormalization=null, gradientNormalizationThreshold=NaN), nIn=10, nOut=1))"]

General usage examples

Importing data

Loading data from a file (here it's a csv)


(ns my.ns
 (:require [dl4clj.datasets.input-splits :as s]
           [dl4clj.datasets.record-readers :as rr]
           [dl4clj.datasets.api.record-readers :refer :all]
           [dl4clj.datasets.iterators :as ds-iter]
           [dl4clj.datasets.api.iterators :refer :all]
           [dl4clj.helpers :refer [data-from-iter]]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; file splits (convert the data to records)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def poker-path "resources/poker-hand-training.csv")
;; this is not a complete dataset, it is just here to serve as an example

(def file-split (s/new-filesplit :path poker-path))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers, (read the records created by the file split)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def csv-rr (initialize-rr! :rr (rr/new-csv-record-reader :skip-n-lines 0 :delimiter ",")
                                 :input-split file-split))

;; lets look at some data
(println (next-record! :rr csv-rr :as-code? false))
;; => #object[java.util.ArrayList 0x2473e02d [1, 10, 1, 11, 1, 13, 1, 12, 1, 1, 9]]
;; this is our first line from the csv


;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers dataset iterators (turn our writables into a dataset)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
                 :record-reader csv-rr
                 :batch-size 1
                 :label-idx 10
                 :n-possible-labels 10))

;; we use our record reader created above
;; we want to see one example per dataset obj returned (:batch-size = 1)
;; we know our label is at the last index, so :label-idx = 10
;; there are 10 possible types of poker hands so :n-possible-labels = 10
;; you can also set :label-idx to -1 to use the last index no matter the size of the seq

(def other-rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
                       :record-reader csv-rr
                       :batch-size 1
                       :label-idx -1
                       :n-possible-labels 10))

(str (next-example! :iter rr-ds-iter :as-code? false))
;; =>
;;===========INPUT===================
;;[1.00, 10.00, 1.00, 11.00, 1.00, 13.00, 1.00, 12.00, 1.00, 1.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00]


;; and to show that :label-idx = -1 gives us the same output

(= (next-example! :iter rr-ds-iter :as-code? false)
   (next-example! :iter other-rr-ds-iter :as-code? false)) ;; => true

INDArrays and Datasets from clojure data structures


(ns my.ns
  (:require [nd4clj.linalg.factory.nd4j :refer [vec->indarray matrix->indarray
                                                indarray-of-zeros indarray-of-ones
                                                indarray-of-rand vec-or-matrix->indarray]]
            [dl4clj.datasets.new-datasets :refer [new-ds]]
            [dl4clj.datasets.api.datasets :refer [as-list]]
            [dl4clj.datasets.iterators :refer [new-existing-dataset-iterator]]
            [dl4clj.datasets.api.iterators :refer :all]
            [dl4clj.datasets.pre-processors :as ds-pp]
            [dl4clj.datasets.api.pre-processors :refer :all]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; INDArray creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;;TODO: consider defaulting to code

;; can create from a vector

(vec->indarray [1 2 3 4])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x269df212 [1.00, 2.00, 3.00, 4.00]]

;; or from a matrix

(matrix->indarray [[1 2 3 4] [2 4 6 8]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x20aa7fe1
;; [[1.00, 2.00, 3.00, 4.00], [2.00, 4.00, 6.00, 8.00]]]


;; will fill in sparseness with zeros

(matrix->indarray [[1 2 3 4] [2 4 6 8] [10 12]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x8b7796c
;;[[1.00, 2.00, 3.00, 4.00],
;; [2.00, 4.00, 6.00, 8.00],
;; [10.00, 12.00, 0.00, 0.00]]]

;; can create an indarray of all zeros with specified shape
;; defaults to :rows = 1 :columns = 1

(indarray-of-zeros :rows 3 :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x6f586a7e
;;[[0.00, 0.00],
;; [0.00, 0.00],
;; [0.00, 0.00]]]

(indarray-of-zeros) ;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xe59ffec 0.00]

;; and if only one dimension is supplied, you get a vector of the specified length

(indarray-of-zeros :rows 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2899d974 [0.00, 0.00]]

(indarray-of-zeros :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xa5b9782 [0.00, 0.00]]

;; same considerations/defaults for indarray-of-ones and indarray-of-rand

(indarray-of-ones :rows 2 :columns 3)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x54f08662 [[1.00, 1.00, 1.00], [1.00, 1.00, 1.00]]]

(indarray-of-rand :rows 2 :columns 3)
;; all values are greater than 0 but less than 1
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2f20293b [[0.85, 0.86, 0.13], [0.94, 0.04, 0.36]]]



;; vec-or-matrix->indarray is built into all functions which require INDArrays
;; so that you can use clojure data structures
;; but you still have the option of passing existing INDArrays

(def example-array (vec-or-matrix->indarray [1 2 3 4]))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x5c44c71f [1.00, 2.00, 3.00, 4.00]]

(vec-or-matrix->indarray example-array)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x607b03b0 [1.00, 2.00, 3.00, 4.00]]

(vec-or-matrix->indarray (indarray-of-rand :rows 2))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x49143b08 [0.76, 0.92]]

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def ds-with-single-example (new-ds :input [1 2 3 4]
                                    :output [0.0 1.0 0.0]))

(as-list :ds ds-with-single-example :as-code? false)
;; =>
;; #object[java.util.ArrayList 0x5d703d12
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00]]]

(def ds-with-multiple-examples (new-ds
                                :input [[1 2 3 4] [2 4 6 8]]
                                :output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))

(as-list :ds ds-with-multiple-examples :as-code? false)
;; =>
;;#object[java.util.ArrayList 0x29c7a9e2
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00],
;;===========INPUT===================
;;[2.00, 4.00, 6.00, 8.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 1.00]]]

;; we can create a dataset iterator from the code which creates datasets
;; and set the labels for our outputs (optional)

(def ds-with-multiple-examples
  (new-ds
   :input [[1 2 3 4] [2 4 6 8]]
   :output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))

;; iterator
(def training-rr-ds-iter
  (new-existing-dataset-iterator
   :dataset ds-with-multiple-examples
   :labels ["foo" "baz" "foobaz"]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set normalization
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; this gathers statistics on the dataset and normalizes the data
;; and applies the transformation to all dataset objects in the iterator
(def train-iter-normalized
  (c/normalize-iter! :iter training-rr-ds-iter
                     :normalizer (ds-pp/new-standardize-normalization-ds-preprocessor)
                     :as-code? false))

;; the above returns the normalized iterator
;; to get the fitted normalizer:

(def the-normalizer
  (get-pre-processor train-iter-normalized))

Model configuration

Creating a neural network configuration with single and multiple layers

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.nn.conf.distributions :as dist]
            [dl4clj.nn.conf.input-pre-processor :as pp]
            [dl4clj.nn.conf.step-fns :as s-fn]))

;; nn/builder has 3 types of args
;; 1) args which set network configuration params
;; 2) args which set default values for layers
;; 3) args which set multi layer network configuration params

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; single layer nn configuration
;; here we are setting network configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(nn/builder :optimization-algo :stochastic-gradient-descent
            :seed 123
            :iterations 1
            :minimize? true
            :use-drop-connect? false
            :lr-score-based-decay-rate 0.002
            :regularization? false
            :step-fn :default-step-fn
            :layers {:dense-layer {:activation-fn :relu
                                   :updater :adam
                                   :adam-mean-decay 0.2
                                   :adam-var-decay 0.1
                                   :learning-rate 0.006
                                   :weight-init :xavier
                                   :layer-name "single layer model example"
                                   :n-in 10
                                   :n-out 20}})

;; there are several options within a nn-conf map which can be configuration maps
;; or calls to fns
;; It doesn't matter which option you choose and you don't have to stay consistent
;; the list of params which can be passed as config maps or fn calls will
;; be enumerated at a later date

(nn/builder :optimization-algo :stochastic-gradient-descent
            :seed 123
            :iterations 1
            :minimize? true
            :use-drop-connect? false
            :lr-score-based-decay-rate 0.002
            :regularization? false
            :step-fn (s-fn/new-default-step-fn)
            :build? true
            ;; don't need to specify layer order, there's only one
            :layers (l/dense-layer-builder
                    :activation-fn :relu
                    :updater :adam
                    :adam-mean-decay 0.2
                    :adam-var-decay 0.1
                    :dist (dist/new-normal-distribution :mean 0 :std 1)
                    :learning-rate 0.006
                    :weight-init :xavier
                    :layer-name "single layer model example"
                    :n-in 10
                    :n-out 20))

;; these configurations are the same

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; multi-layer configuration
;; here we are also setting layer defaults
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; defaults will apply to layers which do not specify those values in their config

(nn/builder
 :optimization-algo :stochastic-gradient-descent
 :seed 123
 :iterations 1
 :minimize? true
 :use-drop-connect? false
 :lr-score-based-decay-rate 0.002
 :regularization? false
 :default-activation-fn :sigmoid
 :default-weight-init :uniform

 ;; we need to specify the layer order
 :layers {0 (l/activation-layer-builder
             :activation-fn :relu
             :updater :adam
             :adam-mean-decay 0.2
             :adam-var-decay 0.1
             :learning-rate 0.006
             :weight-init :xavier
             :layer-name "example first layer"
             :n-in 10
             :n-out 20)
          1 {:output-layer {:n-in 20
                            :n-out 2
                            :loss-fn :mse
                            :layer-name "example output layer"}}})

;; specifying multi-layer config params

(nn/builder
 ;; network args
 :optimization-algo :stochastic-gradient-descent
 :seed 123
 :iterations 1
 :minimize? true
 :use-drop-connect? false
 :lr-score-based-decay-rate 0.002
 :regularization? false

 ;; layer defaults
 :default-activation-fn :sigmoid
 :default-weight-init :uniform

 ;; the layers
 :layers {0 (l/activation-layer-builder
             :activation-fn :relu
             :updater :adam
             :adam-mean-decay 0.2
             :adam-var-decay 0.1
             :learning-rate 0.006
             :weight-init :xavier
             :layer-name "example first layer"
             :n-in 10
             :n-out 20)
          1 {:output-layer {:n-in 20
                            :n-out 2
                            :loss-fn :mse
                            :layer-name "example output layer"}}}
 ;; multi layer network args
 :backprop? true
 :backprop-type :standard
 :pretrain? false
 :input-pre-processors {0 (pp/new-zero-mean-pre-pre-processor)
                        1 {:unit-variance-processor {}}})

Configuration to Trained models

Multi Layer models

(ns my.ns
  (:require [dl4clj.datasets.iterators :as iter]
            [dl4clj.datasets.input-splits :as split]
            [dl4clj.datasets.record-readers :as rr]
            [dl4clj.optimize.listeners :as listener]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.nn.multilayer.multi-layer-network :as mln]
            [dl4clj.nn.api.model :refer [init! set-listeners!]]
            [dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
            [dl4clj.datasets.api.record-readers :refer [initialize-rr!]]
            [dl4clj.eval.api.eval :refer [get-stats get-accuracy]]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; nn-conf -> multi-layer-network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def nn-conf
  (nn/builder
   ;; network args
   :optimization-algo :stochastic-gradient-descent
   :seed 123 :iterations 1 :regularization? true

   ;; setting layer defaults
   :default-activation-fn :relu :default-l2 7.5e-6
   :default-weight-init :xavier :default-learning-rate 0.0015
   :default-updater :nesterovs :default-momentum 0.98

   ;; setting layer configuration
   :layers {0 {:dense-layer
               {:layer-name "example first layer"
                :n-in 784 :n-out 500}}
            1 {:dense-layer
               {:layer-name "example second layer"
                :n-in 500 :n-out 100}}
            2 {:output-layer
               {:n-in 100 :n-out 10
                ;; layer specific params
                :loss-fn :negativeloglikelihood
                :activation-fn :softmax
                :layer-name "example output layer"}}}

   ;; multi layer args
   :backprop? true
   :pretrain? false))

(def multi-layer-network (c/model-from-conf nn-conf))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; local cpu training with dl4j pre-built iterators
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; lets use the pre-built Mnist data set iterator

(def train-mnist-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? true
   :seed 123))

(def test-mnist-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? false
   :seed 123))

;; and lets set a listener so we can know how training is going

(def score-listener (listener/new-score-iteration-listener :print-every-n 5))

;; and attach it to our model

;; TODO: listeners are broken, look into log4j warning
(def mln-with-listener (set-listeners! :model multi-layer-network
                                       :listeners [score-listener]))

(def trained-mln (mln/train-mln-with-ds-iter! :mln mln-with-listener
                                              :iter train-mnist-iter
                                              :n-epochs 15
                                              :as-code? false))

;; training happens because :as-code? = false
;; if it was true, we would still just have a data structure
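
;; a minimal sketch (not from the original README) of the point above:
;; leaving :as-code? at its default would just return the code representation
;; as a data structure, and no training would actually run
#_(mln/train-mln-with-ds-iter! :mln mln-with-listener
                               :iter train-mnist-iter
                               :n-epochs 15)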
;; we now have a trained model that has seen the training dataset 15 times
;; time to evaluate our model

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;Create an evaluation object
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def eval-obj (evaluate-classification :mln trained-mln
                                       :iter test-mnist-iter))

;; always remember that these objects are stateful; don't use the same eval-obj
;; to eval two different networks
;; we trained the model on a training dataset.  We evaluate on a test set

(println (get-stats :evaler eval-obj))
;; this will print the stats to standard out for each feature/label pair

;;Examples labeled as 0 classified by model as 0: 968 times
;;Examples labeled as 0 classified by model as 1: 1 times
;;Examples labeled as 0 classified by model as 2: 1 times
;;Examples labeled as 0 classified by model as 3: 1 times
;;Examples labeled as 0 classified by model as 5: 1 times
;;Examples labeled as 0 classified by model as 6: 3 times
;;Examples labeled as 0 classified by model as 7: 1 times
;;Examples labeled as 0 classified by model as 8: 2 times
;;Examples labeled as 0 classified by model as 9: 2 times
;;Examples labeled as 1 classified by model as 1: 1126 times
;;Examples labeled as 1 classified by model as 2: 2 times
;;Examples labeled as 1 classified by model as 3: 1 times
;;Examples labeled as 1 classified by model as 5: 1 times
;;Examples labeled as 1 classified by model as 6: 2 times
;;Examples labeled as 1 classified by model as 7: 1 times
;;Examples labeled as 1 classified by model as 8: 2 times
;;Examples labeled as 2 classified by model as 0: 3 times
;;Examples labeled as 2 classified by model as 1: 2 times
;;Examples labeled as 2 classified by model as 2: 1006 times
;;Examples labeled as 2 classified by model as 3: 2 times
;;Examples labeled as 2 classified by model as 4: 3 times
;;Examples labeled as 2 classified by model as 6: 3 times
;;Examples labeled as 2 classified by model as 7: 7 times
;;Examples labeled as 2 classified by model as 8: 6 times
;;Examples labeled as 3 classified by model as 2: 4 times
;;Examples labeled as 3 classified by model as 3: 990 times
;;Examples labeled as 3 classified by model as 5: 3 times
;;Examples labeled as 3 classified by model as 7: 3 times
;;Examples labeled as 3 classified by model as 8: 3 times
;;Examples labeled as 3 classified by model as 9: 7 times
;;Examples labeled as 4 classified by model as 2: 2 times
;;Examples labeled as 4 classified by model as 3: 1 times
;;Examples labeled as 4 classified by model as 4: 967 times
;;Examples labeled as 4 classified by model as 6: 4 times
;;Examples labeled as 4 classified by model as 7: 1 times
;;Examples labeled as 4 classified by model as 9: 7 times
;;Examples labeled as 5 classified by model as 0: 2 times
;;Examples labeled as 5 classified by model as 3: 6 times
;;Examples labeled as 5 classified by model as 4: 1 times
;;Examples labeled as 5 classified by model as 5: 874 times
;;Examples labeled as 5 classified by model as 6: 3 times
;;Examples labeled as 5 classified by model as 7: 1 times
;;Examples labeled as 5 classified by model as 8: 3 times
;;Examples labeled as 5 classified by model as 9: 2 times
;;Examples labeled as 6 classified by model as 0: 4 times
;;Examples labeled as 6 classified by model as 1: 3 times
;;Examples labeled as 6 classified by model as 3: 2 times
;;Examples labeled as 6 classified by model as 4: 4 times
;;Examples labeled as 6 classified by model as 5: 4 times
;;Examples labeled as 6 classified by model as 6: 939 times
;;Examples labeled as 6 classified by model as 7: 1 times
;;Examples labeled as 6 classified by model as 8: 1 times
;;Examples labeled as 7 classified by model as 1: 7 times
;;Examples labeled as 7 classified by model as 2: 4 times
;;Examples labeled as 7 classified by model as 3: 3 times
;;Examples labeled as 7 classified by model as 7: 1005 times
;;Examples labeled as 7 classified by model as 8: 2 times
;;Examples labeled as 7 classified by model as 9: 7 times
;;Examples labeled as 8 classified by model as 0: 3 times
;;Examples labeled as 8 classified by model as 2: 3 times
;;Examples labeled as 8 classified by model as 3: 2 times
;;Examples labeled as 8 classified by model as 4: 4 times
;;Examples labeled as 8 classified by model as 5: 3 times
;;Examples labeled as 8 classified by model as 6: 2 times
;;Examples labeled as 8 classified by model as 7: 4 times
;;Examples labeled as 8 classified by model as 8: 947 times
;;Examples labeled as 8 classified by model as 9: 6 times
;;Examples labeled as 9 classified by model as 0: 2 times
;;Examples labeled as 9 classified by model as 1: 2 times
;;Examples labeled as 9 classified by model as 3: 4 times
;;Examples labeled as 9 classified by model as 4: 8 times
;;Examples labeled as 9 classified by model as 6: 1 times
;;Examples labeled as 9 classified by model as 7: 4 times
;;Examples labeled as 9 classified by model as 8: 2 times
;;Examples labeled as 9 classified by model as 9: 986 times

;;==========================Scores========================================
;; Accuracy:        0.9808
;; Precision:       0.9808
;; Recall:          0.9807
;; F1 Score:        0.9807
;;========================================================================

;; the stats printed above can also be retrieved individually via fns in the
;; evaluation namespace once the evaluation has been run

(get-accuracy :evaler eval-obj) ;; => 0.9808

Model Tuning

Early Stopping (controlling training)

it is recommended you start here when designing models

  • using dl4clj.core


(ns my.ns
  (:require [dl4clj.earlystopping.termination-conditions :refer :all]
            [dl4clj.earlystopping.model-saver :refer [new-in-memory-saver]]
            [dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
            [dl4clj.eval.api.eval :refer [get-stats]]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.datasets.iterators :as iter]
            [dl4clj.core :as c]))

(def nn-conf
  (nn/builder
   ;; network args
   :optimization-algo :stochastic-gradient-descent
   :seed 123
   :iterations 1
   :regularization? true

   ;; setting layer defaults
   :default-activation-fn :relu
   :default-l2 7.5e-6
   :default-weight-init :xavier
   :default-learning-rate 0.0015
   :default-updater :nesterovs
   :default-momentum 0.98

   ;; setting layer configuration
   :layers {0 {:dense-layer
               {:layer-name "example first layer"
                :n-in 784 :n-out 500}}
            1 {:dense-layer
               {:layer-name "example second layer"
                :n-in 500 :n-out 100}}
            2 {:output-layer
               {:n-in 100 :n-out 10
                ;; layer specific params
                :loss-fn :negativeloglikelihood
                :activation-fn :softmax
                :layer-name "example output layer"}}}

   ;; multi layer args
   :backprop? true
   :pretrain? false))

(def train-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? true
   :seed 123))

(def test-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? false
   :seed 123))

(def invalid-score-condition (new-invalid-score-iteration-termination-condition))

(def max-score-condition (new-max-score-iteration-termination-condition
                          :max-score 20.0))

(def max-time-condition (new-max-time-iteration-termination-condition
                         :max-time-val 10
                         :max-time-unit :minutes))

(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
                                     :max-n-epoch-no-improve 5))

(def target-score-condition (new-best-score-epoch-termination-condition
                             :best-expected-score 0.009))

(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))

(def in-mem-saver (new-in-memory-saver))

(def trained-mln
;; defaults to returning the model
  (c/train-with-early-stopping
   :nn-conf nn-conf
   :training-iter train-iter
   :testing-iter test-iter
   :eval-every-n-epochs 1
   :iteration-termination-conditions [invalid-score-condition
                                      max-score-condition
                                      max-time-condition]
   :epoch-termination-conditions [score-doesnt-improve-condition
                                  target-score-condition
                                  max-number-epochs-condition]
   :save-last-model? true
   :model-saver in-mem-saver
   :as-code? false))

(def model-evaler
  (evaluate-classification :mln trained-mln :iter test-mnist-iter))

(println (get-stats :evaler model-evaler))

  • explicit, step by step way of doing this

(ns my.ns
  (:require [dl4clj.earlystopping.early-stopping-config :refer [new-early-stopping-config]]
            [dl4clj.earlystopping.termination-conditions :refer :all]
            [dl4clj.earlystopping.model-saver :refer [new-in-memory-saver new-local-file-model-saver]]
            [dl4clj.earlystopping.score-calc :refer [new-ds-loss-calculator]]
            [dl4clj.earlystopping.early-stopping-trainer :refer [new-early-stopping-trainer]]
            [dl4clj.earlystopping.api.early-stopping-trainer :refer [fit-trainer!]]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.nn.multilayer.multi-layer-network :as mln]
            [dl4clj.utils :refer [load-model!]]
            [dl4clj.datasets.iterators :as iter]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; start with our network config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def nn-conf
  (nn/builder
   ;; network args
   :optimization-algo :stochastic-gradient-descent
   :seed 123 :iterations 1 :regularization? true
   ;; setting layer defaults
   :default-activation-fn :relu :default-l2 7.5e-6
   :default-weight-init :xavier :default-learning-rate 0.0015
   :default-updater :nesterovs :default-momentum 0.98
   ;; setting layer configuration
   :layers {0 {:dense-layer
               {:layer-name "example first layer"
                :n-in 784 :n-out 500}}
            1 {:dense-layer
               {:layer-name "example second layer"
                :n-in 500 :n-out 100}}
            2 {:output-layer
               {:n-in 100 :n-out 10
                ;; layer specific params
                :loss-fn :negativeloglikelihood
                :activation-fn :softmax
                :layer-name "example output layer"}}}
   ;; multi layer args
   :backprop? true
   :pretrain? false))

(def mln (c/model-from-conf nn-conf))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; the training/testing data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def train-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? true
   :seed 123))

(def test-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? false
   :seed 123))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we are going to need termination conditions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; these allow us to control when we exit training

;; this can be based off of iterations or epochs

;; iteration termination conditions

(def invalid-score-condition (new-invalid-score-iteration-termination-condition))

(def max-score-condition (new-max-score-iteration-termination-condition
                          :max-score 20.0))

(def max-time-condition (new-max-time-iteration-termination-condition
                         :max-time-val 10
                         :max-time-unit :minutes))

;; epoch termination conditions

(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
                                     :max-n-epoch-no-improve 5))

(def target-score-condition (new-best-score-epoch-termination-condition :best-expected-score 0.009))

(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we also need a way to save our model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; can be in memory or to a local directory

(def in-mem-saver (new-in-memory-saver))

(def local-file-saver (new-local-file-model-saver :directory "resources/tmp/readme/"))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; set up your score calculator
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def score-calcer (new-ds-loss-calculator :iter test-iter
                                          :average? true))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; termination conditions
;; a way to save our model
;; a way to calculate the score of our model on the dataset

(def early-stopping-conf
  (new-early-stopping-config
   :epoch-termination-conditions [score-doesnt-improve-condition
                                  target-score-condition
                                  max-number-epochs-condition]
   :iteration-termination-conditions [invalid-score-condition
                                      max-score-condition
                                      max-time-condition]
   :eval-every-n-epochs 5
   :model-saver local-file-saver
   :save-last-model? true
   :score-calculator score-calcer))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping trainer from our data, model and early stopping conf
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def es-trainer (new-early-stopping-trainer :early-stopping-conf early-stopping-conf
                                            :mln mln
                                            :iter train-iter))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; fit and use our early stopping trainer
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def es-trainer-fitted (fit-trainer! es-trainer :as-code? false))

;; when the trainer terminates, you will see something like this
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO  Completed training epoch 14
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO  New best model: score = 0.005225599372851298,
;;                                                   epoch = 14 (previous: score = 0.018243224899038346, epoch = 7)
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO Hit epoch termination condition at epoch 14.
;;                                           Details: BestScoreEpochTerminationCondition(0.009)

;; and if we look at the es-trainer-fitted object we see

;;#object[org.deeplearning4j.earlystopping.EarlyStoppingResult 0x5ab74f27 EarlyStoppingResult
;;(terminationReason=EpochTerminationCondition,details=BestScoreEpochTerminationCondition(0.009),
;; bestModelEpoch=14,bestModelScore=0.005225599372851298,totalEpochs=15)]

;; and our model has been saved to /resources/tmp/readme/bestModel.bin
;; there we have our model config, model params and our updater state

;; we can then load this model to use it or continue refining it

(def loaded-model (load-model! :path "resources/tmp/readme/bestModel.bin"
                               :load-updater? true))

Transfer Learning (freezing layers)


;; TODO: need to write up examples

Spark Training

dl4j Spark usage

How it is done in dl4clj

  • Uses dl4clj.core
    • This example uses a fn which takes care of most steps for you
      • allows you to pass args as code because the fn accounts for the multiple spark contexts issue encountered when everything is just a data structure

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
            [dl4clj.eval.api.eval :refer [get-stats]]
            [dl4clj.spark.masters.param-avg :as master]
            [dl4clj.spark.data.java-rdd :refer [new-java-spark-context
                                                java-rdd-from-iter]]
            [dl4clj.spark.api.dl4j-multi-layer :refer [eval-classification-spark-mln
                                                       get-spark-context]]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def mln-conf
  (nn/builder
   :optimization-algo :stochastic-gradient-descent
   :default-learning-rate 0.006
   :layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
            1 {:output-layer
               {:loss-fn :negativeloglikelihood
                :n-in 2 :n-out 3
                :activation-fn :soft-max
                :weight-init :xavier}}}
   :backprop? true
   :backprop-type :standard))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def training-master
  (master/new-parameter-averaging-training-master
   :build? true
   :rdd-n-examples 10
   :n-workers 4
   :averaging-freq 10
   :batch-size-per-worker 2
   :export-dir "resources/spark/master/"
   :rdd-training-approach :direct
   :repartition-data :always
   :repartition-strategy :balanced
   :seed 1234
   :save-updater? true
   :storage-level :none))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, spark context
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def your-spark-context
  (new-java-spark-context :app-name "example app"))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, training data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def iris-iter
  (new-iris-data-set-iterator
   :batch-size 1
   :n-examples 5))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, spark mln
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def fitted-spark-mln
  (c/train-with-spark :spark-context your-spark-context
                      :mln-conf mln-conf
                      :training-master training-master
                      :iter iris-iter
                      :n-epochs 1
                      :as-code? false))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 6, use the spark context from the spark-mln to create an rdd
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; TODO: eliminate this step

(def our-rdd
  (let [sc (get-spark-context fitted-spark-mln :as-code? false)]
    (java-rdd-from-iter :spark-context sc
                        :iter iris-iter)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 7, evaluate the model and print stats (poor performance of model expected)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def eval-obj
  (eval-classification-spark-mln
   :spark-mln fitted-spark-mln
   :rdd our-rdd))

(println (get-stats :evaler eval-obj))

  • this example demonstrates the dl4j workflow
    • NOTE: unlike the previous example, this one requires dl4j objects to be used
      • this is because spark only wants you to have one spark context at a time
(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
            [dl4clj.eval.api.eval :refer [get-stats]]
            [dl4clj.spark.masters.param-avg :as master]
            [dl4clj.spark.data.java-rdd :refer [new-java-spark-context java-rdd-from-iter]]
            [dl4clj.spark.dl4j-multi-layer :as spark-mln]
            [dl4clj.spark.api.dl4j-multi-layer :refer [fit-spark-mln!
                                                       eval-classification-spark-mln]]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def mln-conf
  (nn/builder
   :optimization-algo :stochastic-gradient-descent
   :default-learning-rate 0.006
   :layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
            1 {:output-layer
               {:loss-fn :negativeloglikelihood
                :n-in 2 :n-out 3
                :activation-fn :soft-max
                :weight-init :xavier}}}
   :backprop? true
   :as-code? false
   :backprop-type :standard))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, create a training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; not all options specified, but most are

(def training-master
  (master/new-parameter-averaging-training-master
   :build? true
   :rdd-n-examples 10
   :n-workers 4
   :averaging-freq 10
   :batch-size-per-worker 2
   :export-dir "resources/spark/master/"
   :rdd-training-approach :direct
   :repartition-data :always
   :repartition-strategy :balanced
   :seed 1234
   :as-code? false
   :save-updater? true
   :storage-level :none))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, create a Spark Multi Layer Network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def your-spark-context
  (new-java-spark-context :app-name "example app" :as-code? false))

;; new-java-spark-context will turn an existing spark-configuration into a java spark context
;; or create a new java spark context with master set to "local[*]" and the app name
;; set to :app-name


(def spark-mln
  (spark-mln/new-spark-multi-layer-network
   :spark-context your-spark-context
   :mln mln-conf
   :training-master training-master
   :as-code? false))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, load your data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; one way is via a dataset-iterator
;; can make one directly from a dataset (iterator data-set)
;; see: nd4clj.linalg.dataset.api.data-set and nd4clj.linalg.dataset.data-set
;; we are going to use a pre-built one

(def iris-iter
  (new-iris-data-set-iterator
   :batch-size 1
   :n-examples 5
   :as-code? false))

;; now lets convert the data into a javaRDD

(def our-rdd
  (java-rdd-from-iter :spark-context your-spark-context
                      :iter iris-iter))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, fit and evaluate the model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def fitted-spark-mln
  (fit-spark-mln!
   :spark-mln spark-mln
   :rdd our-rdd
   :n-epochs 1))
;; this fn also has the option to supply :path-to-data instead of :rdd
;; that path should point to a directory containing a number of dataset objects
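
;; a minimal sketch (not from the original README) of that :path-to-data variant;
;; the directory used here is hypothetical
#_(def fitted-spark-mln-from-dir
    (fit-spark-mln!
     :spark-mln spark-mln
     :path-to-data "resources/spark/datasets/"
     :n-epochs 1))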

(def eval-obj
  (eval-classification-spark-mln
   :spark-mln fitted-spark-mln
   :rdd our-rdd))
;; we would want to have different testing and training rdd's but here we are using
;; the data we trained on

;; lets get the stats for how our model performed

(println (get-stats :evaler eval-obj))

Terminology

Coming soon

Packages to come back to:

Implement ComputationGraphs and the classes which use them

NLP

Parallelism

TSNE

UI


Author: yetanalytics
Source Code: https://github.com/yetanalytics/dl4clj
License: BSD-2-Clause License

#machine-learning #deep-learning 

Arvel Parker

How to Find Ulimit For user on Linux

How can I find the correct ulimit values for a user account or process on Linux systems?

For proper operation, we must ensure that the correct ulimit values are set after installing various software. The Linux system provides a means of restricting the number of resources that can be used. Limits are set for each Linux user account; however, system limits are also applied separately to each process running for that user. For example, if certain thresholds are too low, the system might not be able to serve web pages using Nginx/Apache or a PHP/Python app. System resource limits can be viewed or set with the ulimit command. Let us see how to use ulimit, which provides control over the resources available to the shell and to processes.


MEAN Stack Tutorial MongoDB ExpressJS AngularJS NodeJS

We are going to build a full-stack Todo app using the MEAN stack (MongoDB, ExpressJS, AngularJS and NodeJS). This is the last part of a three-post tutorial series.

MEAN Stack tutorial series:

AngularJS tutorial for beginners (Part I)
Creating RESTful APIs with NodeJS and MongoDB Tutorial (Part II)
MEAN Stack Tutorial: MongoDB, ExpressJS, AngularJS and NodeJS (Part III) 👈 you are here
Before completing the app, let's cover some background about this stack. If you'd rather jump to the hands-on part, click here to get started.


Queenie Davis

EasyMDE: Simple, Beautiful and Embeddable JavaScript Markdown Editor

EasyMDE - Markdown Editor 

This repository is a fork of SimpleMDE, made by Sparksuite. Go to the dedicated section for more information.

A drop-in JavaScript text area replacement for writing beautiful and understandable Markdown. EasyMDE allows users who may be less experienced with Markdown to use familiar toolbar buttons and shortcuts.

In addition, the syntax is rendered while editing to clearly show the expected result. Headings are larger, emphasized words are italicized, links are underlined, etc.

EasyMDE also features both built-in auto saving and spell checking. The editor is entirely customizable, from theming to toolbar buttons and javascript hooks.


Install EasyMDE

Via npm:

npm install easymde

Via the UNPKG CDN:

<link rel="stylesheet" href="https://unpkg.com/easymde/dist/easymde.min.css">
<script src="https://unpkg.com/easymde/dist/easymde.min.js"></script>

Or jsDelivr:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.css">
<script src="https://cdn.jsdelivr.net/npm/easymde/dist/easymde.min.js"></script>

How to use

Loading the editor

After installing and/or importing the module, you can load EasyMDE onto the first textarea element on the web page:

<textarea></textarea>
<script>
const easyMDE = new EasyMDE();
</script>

Alternatively you can select a specific textarea, via JavaScript:

<textarea id="my-text-area"></textarea>
<script>
const easyMDE = new EasyMDE({element: document.getElementById('my-text-area')});
</script>

Editor functions

Use easyMDE.value() to get the content of the editor:

<script>
easyMDE.value();
</script>

Use easyMDE.value(val) to set the content of the editor:

<script>
easyMDE.value('New input for **EasyMDE**');
</script>

Configuration

Options list

  • autoDownloadFontAwesome: If set to true, force downloads Font Awesome (used for icons). If set to false, prevents downloading. Defaults to undefined, which will intelligently check whether Font Awesome has already been included, then download accordingly.
  • autofocus: If set to true, focuses the editor automatically. Defaults to false.
  • autosave: Saves the text that's being written and will load it back in the future. It will forget the text when the form it's contained in is submitted.
    • enabled: If set to true, saves the text automatically. Defaults to false.
    • delay: Delay between saves, in milliseconds. Defaults to 10000 (10 seconds).
    • submit_delay: Delay before assuming that submit of the form failed and saving the text, in milliseconds. Defaults to autosave.delay or 10000 (10 seconds).
    • uniqueId: You must set a unique string identifier so that EasyMDE can autosave. Something that separates this from other instances of EasyMDE elsewhere on your website.
    • timeFormat: Set the DateTimeFormat. For more information, see DateTimeFormat instances. Default locale: en-US, format: hour:minute.
    • text: Set text for autosave.
  • autoRefresh: Useful, when initializing the editor in a hidden DOM node. If set to { delay: 300 }, it will check every 300 ms if the editor is visible and if positive, call CodeMirror's refresh().
  • blockStyles: Customize how certain buttons that style blocks of text behave.
    • bold: Can be set to ** or __. Defaults to **.
    • code: Can be set to ``` or ~~~. Defaults to ```.
    • italic: Can be set to * or _. Defaults to *.
  • unorderedListStyle: can be *, - or +. Defaults to *.
  • scrollbarStyle: Chooses a scrollbar implementation. The default is "native", showing native scrollbars. The core library also provides the "null" style, which completely hides the scrollbars. Addons can implement additional scrollbar models.
  • element: The DOM element for the textarea element to use. Defaults to the first textarea element on the page.
  • forceSync: If set to true, force text changes made in EasyMDE to be immediately stored in original text area. Defaults to false.
  • hideIcons: An array of icon names to hide. Can be used to hide specific icons shown by default without completely customizing the toolbar.
  • indentWithTabs: If set to false, indent using spaces instead of tabs. Defaults to true.
  • initialValue: If set, will customize the initial value of the editor.
  • previewImagesInEditor: If set to true, EasyMDE will show a preview of images. Defaults to false. The preview only appears for images on separate lines.
  • imagesPreviewHandler: A custom function for handling the preview of images. Takes the parsed string between the parentheses of the image markdown ![]( ) as argument and returns a string that serves as the src attribute of the <img> tag in the preview. Enables dynamic previewing of images in the frontend without having to upload them to a server, and allows copy-pasting of images to the editor with preview.
  • insertTexts: Customize how certain buttons that insert text behave. Takes an array with two elements. The first element will be the text inserted before the cursor or highlight, and the second element will be inserted after. For example, this is the default link value: ["[", "](http://)"].
    • horizontalRule
    • image
    • link
    • table
  • lineNumbers: If set to true, enables line numbers in the editor.
  • lineWrapping: If set to false, disable line wrapping. Defaults to true.
  • minHeight: Sets the minimum height for the composition area, before it starts auto-growing. Should be a string containing a valid CSS value like "500px". Defaults to "300px".
  • maxHeight: Sets fixed height for the composition area. minHeight option will be ignored. Should be a string containing a valid CSS value like "500px". Defaults to undefined.
  • onToggleFullScreen: A function that gets called when the editor's full screen mode is toggled. The function will be passed a boolean as parameter, true when the editor is currently going into full screen mode, or false.
  • parsingConfig: Adjust settings for parsing the Markdown during editing (not previewing).
    • allowAtxHeaderWithoutSpace: If set to true, will render headers without a space after the #. Defaults to false.
    • strikethrough: If set to false, will not process GFM strikethrough syntax. Defaults to true.
    • underscoresBreakWords: If set to true, let underscores be a delimiter for separating words. Defaults to false.
  • overlayMode: Pass a custom codemirror overlay mode to parse and style the Markdown during editing.
    • mode: A codemirror mode object.
    • combine: If set to false, will replace CSS classes returned by the default Markdown mode. Otherwise the classes returned by the custom mode will be combined with the classes returned by the default mode. Defaults to true.
  • placeholder: If set, displays a custom placeholder message.
  • previewClass: A string or array of strings that will be applied to the preview screen when activated. Defaults to "editor-preview".
  • previewRender: Custom function for parsing the plaintext Markdown and returning HTML. Used when user previews.
  • promptURLs: If set to true, a JS alert window appears asking for the link or image URL. Defaults to false.
  • promptTexts: Customize the text used to prompt for URLs.
    • image: The text to use when prompting for an image's URL. Defaults to URL of the image:.
    • link: The text to use when prompting for a link's URL. Defaults to URL for the link:.
  • uploadImage: If set to true, enables the image upload functionality, which can be triggered by drag and drop, copy-paste and through the browse-file window (opened when the user click on the upload-image icon). Defaults to false.
  • imageMaxSize: Maximum image size in bytes, checked before upload (note: never trust client, always check the image size at server-side). Defaults to 1024 * 1024 * 2 (2 MB).
  • imageAccept: A comma-separated list of mime-types used to check image type before upload (note: never trust client, always check file types at server-side). Defaults to image/png, image/jpeg.
  • imageUploadFunction: A custom function for handling the image upload. Using this function will render the options imageMaxSize, imageAccept, imageUploadEndpoint and imageCSRFToken ineffective.
    • The function gets a file and onSuccess and onError callback functions as parameters. onSuccess(imageUrl: string) and onError(errorMessage: string)
  • imageUploadEndpoint: The endpoint where the images data will be sent, via an asynchronous POST request. The server is supposed to save this image, and return a JSON response.
    • if the request was successfully processed (HTTP 200 OK): {"data": {"filePath": "<filePath>"}} where filePath is the path of the image (absolute if imagePathAbsolute is set to true, relative if otherwise);
    • otherwise: {"error": "<errorCode>"}, where errorCode can be noFileGiven (HTTP 400 Bad Request), typeNotAllowed (HTTP 415 Unsupported Media Type), fileTooLarge (HTTP 413 Payload Too Large) or importError (see errorMessages below). If errorCode is not one of the errorMessages, it is alerted unchanged to the user. This allows for server-side error messages. No default value.
  • imagePathAbsolute: If set to true, will treat imageUrl from imageUploadFunction and filePath returned from imageUploadEndpoint as an absolute rather than relative path, i.e. not prepend window.location.origin to it.
  • imageCSRFToken: CSRF token to include with AJAX call to upload image. For various instances like Django, Spring and Laravel.
  • imageCSRFName: CSRF token field name to include with the AJAX call to upload an image. Applied when imageCSRFToken has a value. Defaults to csrfmiddlewaretoken.
  • imageCSRFHeader: If set to true, passes the CSRF token via a header. Defaults to false, which passes the CSRF token through the request body.
  • imageTexts: Texts displayed to the user (mainly on the status bar) for the import image feature, where #image_name#, #image_size# and #image_max_size# will be replaced by their respective values, that can be used for customization or internationalization:
    • sbInit: Status message displayed initially if uploadImage is set to true. Defaults to Attach files by drag and dropping or pasting from clipboard..
    • sbOnDragEnter: Status message displayed when the user drags a file to the text area. Defaults to Drop image to upload it..
    • sbOnDrop: Status message displayed when the user drops a file in the text area. Defaults to Uploading images #images_names#.
    • sbProgress: Status message displayed to show uploading progress. Defaults to Uploading #file_name#: #progress#%.
    • sbOnUploaded: Status message displayed when the image has been uploaded. Defaults to Uploaded #image_name#.
    • sizeUnits: A comma-separated list of units used to display messages with human-readable file sizes. Defaults to B, KB, MB (example: 218 KB). You can use B,KB,MB instead if you prefer no whitespace (218KB).
  • errorMessages: Errors displayed to the user via the errorCallback option, where #image_name#, #image_size# and #image_max_size# will be replaced by their respective values; can be used for customization or internationalization:
    • noFileGiven: The server did not receive any file from the user. Defaults to You must select a file..
    • typeNotAllowed: The user sent a file whose type doesn't match the imageAccept list, or the server returned this error code. Defaults to This image type is not allowed..
    • fileTooLarge: The size of the image being imported is bigger than imageMaxSize, or the server returned this error code. Defaults to Image #image_name# is too big (#image_size#).\nMaximum file size is #image_max_size#..
    • importError: An unexpected error occurred when uploading the image. Defaults to Something went wrong when uploading the image #image_name#..
  • errorCallback: A callback function used to define how to display an error message. Defaults to (errorMessage) => alert(errorMessage).
  • renderingConfig: Adjust settings for parsing the Markdown during previewing (not editing).
    • codeSyntaxHighlighting: If set to true, will highlight using highlight.js. Defaults to false. To use this feature you must include highlight.js on your page or pass it in using the hljs option. For example, include the script and the CSS files like:
      <script src="https://cdn.jsdelivr.net/highlight.js/latest/highlight.min.js"></script>
      <link rel="stylesheet" href="https://cdn.jsdelivr.net/highlight.js/latest/styles/github.min.css">
    • hljs: An injectable instance of highlight.js. If you don't want to rely on the global namespace (window.hljs), you can provide an instance here. Defaults to undefined.
    • markedOptions: Set the internal Markdown renderer's options. Other renderingConfig options will take precedence.
    • singleLineBreaks: If set to false, disable parsing GitHub Flavored Markdown (GFM) single line breaks. Defaults to true.
    • sanitizerFunction: Custom function for sanitizing the HTML output of Markdown renderer.
  • shortcuts: Keyboard shortcuts associated with this instance. Defaults to the array of shortcuts.
  • showIcons: An array of icon names to show. Can be used to show specific icons hidden by default without completely customizing the toolbar.
  • spellChecker: If set to false, disable the spell checker. Defaults to true. Optionally pass a CodeMirrorSpellChecker-compliant function.
  • inputStyle: textarea or contenteditable. Defaults to textarea for desktop and contenteditable for mobile. contenteditable option is necessary to enable nativeSpellcheck.
  • nativeSpellcheck: If set to false, disable native spell checker. Defaults to true.
  • sideBySideFullscreen: If set to false, allows side-by-side editing without going into fullscreen. Defaults to true.
  • status: If set to false, hide the status bar. Defaults to the array of built-in status bar items.
    • Optionally, you can set an array of status bar items to include, and in what order. You can even define your own custom status bar items.
  • styleSelectedText: If set to false, remove the CodeMirror-selectedtext class from selected lines. Defaults to true.
  • syncSideBySidePreviewScroll: If set to false, disable syncing scroll in side by side mode. Defaults to true.
  • tabSize: If set, customize the tab size. Defaults to 2.
  • theme: Override the theme. Defaults to easymde.
  • toolbar: If set to false, hide the toolbar. Defaults to the array of icons.
  • toolbarTips: If set to false, disable toolbar button tips. Defaults to true.
  • direction: rtl or ltr. Changes text direction to support right-to-left languages. Defaults to ltr.
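
For example, here is a minimal sketch of wiring up the image upload feature with a custom upload handler and error callback. The /api/upload endpoint, the "image" form field name and the response handling are assumptions for illustration only, not part of EasyMDE:

const editor = new EasyMDE({
    element: document.getElementById("my-text-area"),
    uploadImage: true,
    // Custom upload handler: EasyMDE passes the file plus onSuccess/onError callbacks.
    imageUploadFunction: (file, onSuccess, onError) => {
        const body = new FormData();
        body.append("image", file); // the "image" field name is an assumption for this sketch
        fetch("/api/upload", { method: "POST", body }) // hypothetical endpoint
            .then((response) => response.json())
            .then((json) => onSuccess(json.data.filePath)) // hand the uploaded image's path back to EasyMDE
            .catch(() => onError("Something went wrong when uploading the image."));
    },
    // Replace the default alert() with your own error display.
    errorCallback: (errorMessage) => console.error(errorMessage),
});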

Options example

Most options demonstrate the non-default behavior:

const editor = new EasyMDE({
    autofocus: true,
    autosave: {
        enabled: true,
        uniqueId: "MyUniqueID",
        delay: 1000,
        submit_delay: 5000,
        timeFormat: {
            locale: 'en-US',
            format: {
                year: 'numeric',
                month: 'long',
                day: '2-digit',
                hour: '2-digit',
                minute: '2-digit',
            },
        },
        text: "Autosaved: "
    },
    blockStyles: {
        bold: "__",
        italic: "_",
    },
    unorderedListStyle: "-",
    element: document.getElementById("MyID"),
    forceSync: true,
    hideIcons: ["guide", "heading"],
    indentWithTabs: false,
    initialValue: "Hello world!",
    insertTexts: {
        horizontalRule: ["", "\n\n-----\n\n"],
        image: ["![](http://", ")"],
        link: ["[", "](https://)"],
        table: ["", "\n\n| Column 1 | Column 2 | Column 3 |\n| -------- | -------- | -------- |\n| Text     | Text      | Text     |\n\n"],
    },
    lineWrapping: false,
    minHeight: "500px",
    parsingConfig: {
        allowAtxHeaderWithoutSpace: true,
        strikethrough: false,
        underscoresBreakWords: true,
    },
    placeholder: "Type here...",

    previewClass: "my-custom-styling",
    previewClass: ["my-custom-styling", "more-custom-styling"],

    previewRender: (plainText) => customMarkdownParser(plainText), // Returns HTML from a custom parser
    previewRender: (plainText, preview) => { // Async method
        setTimeout(() => {
            preview.innerHTML = customMarkdownParser(plainText);
        }, 250);

        return "Loading...";
    },
    promptURLs: true,
    promptTexts: {
        image: "Custom prompt for URL:",
        link: "Custom prompt for URL:",
    },
    renderingConfig: {
        singleLineBreaks: false,
        codeSyntaxHighlighting: true,
        sanitizerFunction: (renderedHTML) => {
            // Using DOMPurify and only allowing <b> tags
            return DOMPurify.sanitize(renderedHTML, {ALLOWED_TAGS: ['b']})
        },
    },
    shortcuts: {
        drawTable: "Cmd-Alt-T"
    },
    showIcons: ["code", "table"],
    spellChecker: false,
    status: false,
    status: ["autosave", "lines", "words", "cursor"], // Optional usage
    status: ["autosave", "lines", "words", "cursor", {
        className: "keystrokes",
        defaultValue: (el) => {
            el.setAttribute('data-keystrokes', 0);
        },
        onUpdate: (el) => {
            const keystrokes = Number(el.getAttribute('data-keystrokes')) + 1;
            el.innerHTML = `${keystrokes} Keystrokes`;
            el.setAttribute('data-keystrokes', keystrokes);
        },
    }], // Another optional usage, with a custom status bar item that counts keystrokes
    styleSelectedText: false,
    sideBySideFullscreen: false,
    syncSideBySidePreviewScroll: false,
    tabSize: 4,
    toolbar: false,
    toolbarTips: false,
});

Toolbar icons

Below are the built-in toolbar icons (only some of which are enabled by default), which can be reorganized however you like. "Name" is the name of the icon, referenced in the JavaScript. "Action" is either a function or a URL to open. "Class" is the class given to the icon. "Tooltip" is the small tooltip that appears via the title="" attribute. Note that shortcut hints are added automatically and reflect the specified action if it has a key bind assigned to it (i.e. with the value of action set to bold and that of tooltip set to Bold, the final text the user will see would be "Bold (Ctrl-B)").

Additionally, you can add a separator between any icons by adding "|" to the toolbar array.

| Name | Action | Tooltip | Class |
| ---- | ------ | ------- | ----- |
| bold | toggleBold | Bold | fa fa-bold |
| italic | toggleItalic | Italic | fa fa-italic |
| strikethrough | toggleStrikethrough | Strikethrough | fa fa-strikethrough |
| heading | toggleHeadingSmaller | Heading | fa fa-header |
| heading-smaller | toggleHeadingSmaller | Smaller Heading | fa fa-header |
| heading-bigger | toggleHeadingBigger | Bigger Heading | fa fa-lg fa-header |
| heading-1 | toggleHeading1 | Big Heading | fa fa-header header-1 |
| heading-2 | toggleHeading2 | Medium Heading | fa fa-header header-2 |
| heading-3 | toggleHeading3 | Small Heading | fa fa-header header-3 |
| code | toggleCodeBlock | Code | fa fa-code |
| quote | toggleBlockquote | Quote | fa fa-quote-left |
| unordered-list | toggleUnorderedList | Generic List | fa fa-list-ul |
| ordered-list | toggleOrderedList | Numbered List | fa fa-list-ol |
| clean-block | cleanBlock | Clean block | fa fa-eraser |
| link | drawLink | Create Link | fa fa-link |
| image | drawImage | Insert Image | fa fa-picture-o |
| table | drawTable | Insert Table | fa fa-table |
| horizontal-rule | drawHorizontalRule | Insert Horizontal Line | fa fa-minus |
| preview | togglePreview | Toggle Preview | fa fa-eye no-disable |
| side-by-side | toggleSideBySide | Toggle Side by Side | fa fa-columns no-disable no-mobile |
| fullscreen | toggleFullScreen | Toggle Fullscreen | fa fa-arrows-alt no-disable no-mobile |
| guide | This link | Markdown Guide | fa fa-question-circle |
| undo | undo | Undo | fa fa-undo |
| redo | redo | Redo | fa fa-redo |

Toolbar customization

Customize the toolbar using the toolbar option.

Change only the order of existing buttons:

const easyMDE = new EasyMDE({
    toolbar: ["bold", "italic", "heading", "|", "quote"]
});

Set all information and/or add your own icons:

const easyMDE = new EasyMDE({
    toolbar: [
        {
            name: "bold",
            action: EasyMDE.toggleBold,
            className: "fa fa-bold",
            title: "Bold",
        },
        "italics", // shortcut to pre-made button
        {
            name: "custom",
            action: (editor) => {
                // Add your own code
            },
            className: "fa fa-star",
            title: "Custom Button",
            attributes: { // for custom attributes
                id: "custom-id",
                "data-value": "custom value" // HTML5 data-* attributes need to be enclosed in quotation marks ("") because of the dash (-) in its name.
            }
        },
        "|" // Separator
        // [, ...]
    ]
});

Put some buttons in a dropdown menu

const easyMDE = new EasyMDE({
    toolbar: [{
                name: "heading",
                action: EasyMDE.toggleHeadingSmaller,
                className: "fa fa-header",
                title: "Headers",
            },
            "|",
            {
                name: "others",
                className: "fa fa-blind",
                title: "others buttons",
                children: [
                    {
                        name: "image",
                        action: EasyMDE.drawImage,
                        className: "fa fa-picture-o",
                        title: "Image",
                    },
                    {
                        name: "quote",
                        action: EasyMDE.toggleBlockquote,
                        className: "fa fa-percent",
                        title: "Quote",
                    },
                    {
                        name: "link",
                        action: EasyMDE.drawLink,
                        className: "fa fa-link",
                        title: "Link",
                    }
                ]
            },
        // [, ...]
    ]
});

Keyboard shortcuts

EasyMDE comes with an array of predefined keyboard shortcuts, but they can be altered with a configuration option. The list of default ones is as follows:

| Shortcut (Windows / Linux) | Shortcut (macOS) | Action |
| -------------------------- | ---------------- | ------ |
| Ctrl-' | Cmd-' | "toggleBlockquote" |
| Ctrl-B | Cmd-B | "toggleBold" |
| Ctrl-E | Cmd-E | "cleanBlock" |
| Ctrl-H | Cmd-H | "toggleHeadingSmaller" |
| Ctrl-I | Cmd-I | "toggleItalic" |
| Ctrl-K | Cmd-K | "drawLink" |
| Ctrl-L | Cmd-L | "toggleUnorderedList" |
| Ctrl-P | Cmd-P | "togglePreview" |
| Ctrl-Alt-C | Cmd-Alt-C | "toggleCodeBlock" |
| Ctrl-Alt-I | Cmd-Alt-I | "drawImage" |
| Ctrl-Alt-L | Cmd-Alt-L | "toggleOrderedList" |
| Shift-Ctrl-H | Shift-Cmd-H | "toggleHeadingBigger" |
| F9 | F9 | "toggleSideBySide" |
| F11 | F11 | "toggleFullScreen" |

Here is how you can change a few, while leaving others untouched:

const editor = new EasyMDE({
    shortcuts: {
        "toggleOrderedList": "Ctrl-Alt-K", // alter the shortcut for toggleOrderedList
        "toggleCodeBlock": null, // unbind Ctrl-Alt-C
        "drawTable": "Cmd-Alt-T", // bind Cmd-Alt-T to drawTable action, which doesn't come with a default shortcut
    }
});

Shortcuts are automatically converted between platforms. If you define a shortcut as "Cmd-B", on PC that shortcut will be changed to "Ctrl-B". Conversely, a shortcut defined as "Ctrl-B" will become "Cmd-B" for Mac users.

The list of actions that can be bound is the same as the list of built-in actions available for toolbar buttons.
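
For example, a shortcut defined once with Cmd works on both platforms (the specific binding below is chosen purely for illustration):

const editor = new EasyMDE({
    shortcuts: {
        "togglePreview": "Cmd-Alt-P", // automatically becomes Ctrl-Alt-P on Windows/Linux
    },
});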

Advanced use

Event handling

You can catch the following list of events: https://codemirror.net/doc/manual.html#events

const easyMDE = new EasyMDE();
easyMDE.codemirror.on("change", () => {
    console.log(easyMDE.value());
});

Removing EasyMDE from text area

You can revert to the initial text area by calling the toTextArea method. Note that this clears the autosaved value (if autosave is enabled) associated with the instance. The text area will retain any text from the destroyed EasyMDE instance.

let easyMDE = new EasyMDE();
// ...
easyMDE.toTextArea();
easyMDE = null;

If you need to remove registered event listeners (when the editor is not needed anymore), call easyMDE.cleanup().
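
For instance, a minimal sketch (when exactly to call it is up to your application):

// Detach the event listeners EasyMDE registered once the editor is no longer needed.
easyMDE.cleanup();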

Useful methods

The following self-explanatory methods may be of use while developing with EasyMDE.

const easyMDE = new EasyMDE();
easyMDE.isPreviewActive(); // returns boolean
easyMDE.isSideBySideActive(); // returns boolean
easyMDE.isFullscreenActive(); // returns boolean
easyMDE.clearAutosavedValue(); // no returned value

How it works

EasyMDE is a continuation of SimpleMDE.

SimpleMDE began as an improvement of lepture's Editor project, but has now taken on an identity of its own. It is bundled with CodeMirror and depends on Font Awesome.

CodeMirror is the backbone of the project and parses much of the Markdown syntax as it's being written. This allows us to add styles to the Markdown that's being written. Additionally, a toolbar and status bar have been added to the top and bottom, respectively. Previews are rendered by Marked using GitHub Flavored Markdown (GFM).

SimpleMDE fork

I originally made this fork to add FontAwesome 5 compatibility to SimpleMDE. When that was done I submitted a pull request, which has not been accepted yet. This, and the project being inactive since May 2017, prompted me to make more changes and try to put new life into the project.

Changes include:

  • FontAwesome 5 compatibility
  • Guide button works when editor is in preview mode
  • Links are now https:// by default
  • Small styling changes
  • Support for Node 8 and beyond
  • Lots of refactored code
  • Links in preview will open in a new tab by default
  • TypeScript support

My intention is to continue development on this project, improving it and keeping it alive.

Hacking EasyMDE

You may want to edit this library to adapt its behavior to your needs. This can be done in some quick steps:

  1. Follow the prerequisites and installation instructions in the contribution guide;
  2. Do your changes;
  3. Run gulp command, which will generate files: dist/easymde.min.css and dist/easymde.min.js;
  4. Copy-paste those files to your code base, and you are done.

Contributing

Want to contribute to EasyMDE? Thank you! We have a contribution guide just for you!


Author: Ionaru
Source Code: https://github.com/Ionaru/easy-markdown-editor
License: MIT license

#react-native #react 

systemctl List All Failed Units/Services on Linux

Is there a command to list all failed units or services when using systemd on Linux? Can you tell me the systemctl command to list all failed services on Linux?

This quick tutorial explains how to find/list all failed systemd services/units on Linux operating systems using the systemctl command.
