Easiest Series For Learning Javascript - Javascript Objects - Video 11

In this video we look at how objects work in JavaScript.

Dl4clj: Clojure Wrapper for Deeplearning4j.

dl4clj

Port of deeplearning4j to Clojure

Contact info

If you have any questions,

  • my email is will@yetanalytics.com
  • I'm will_hoyt in the clojurians slack
  • twitter is @FeLungz (don't check very often)

TODO

  • update examples dir
  • finish README
    • add in examples using Transfer Learning
  • finish tests
    • eval is missing regression tests, roc tests
    • nn-test is missing regression tests
    • spark tests need to be redone
    • need dl4clj.core tests
  • revisit spark for updates
  • write specs for user facing functions
    • this is very important, match isn't strict for maps
    • provides 100% certainty of the input -> output flow
    • check the args as they come in, dispatch once I know it's safe, test the pure output
  • collapse overlapping api namespaces
  • add to core use case flows

Features

Stable Features with tests

  • Neural Networks DSL
  • Early Stopping Training
  • Transfer Learning
  • Evaluation
  • Data import

Features being worked on for 0.1.0

  • Clustering (testing in progress)
  • Spark (currently being refactored)
  • Front End (maybe current release, maybe future release. Not sure yet)
  • Version of dl4j is 0.0.8 in this project. Current dl4j version is 0.0.9
  • Parallelism
  • Kafka support
  • Other items mentioned in TODO

Features being worked on for future releases

  • NLP
  • Computational Graphs
  • Reinforcement Learning
  • Arbiter

Artifacts

NOT YET RELEASED TO CLOJARS

  • fork or clone to try it out

If using Maven add the following repository definition to your pom.xml:

<repository>
  <id>clojars.org</id>
  <url>http://clojars.org/repo</url>
</repository>

Latest release

With Leiningen:

n/a

With Maven:

n/a

<dependency>
  <groupId>_</groupId>
  <artifactId>_</artifactId>
  <version>_</version>
</dependency>

Usage

Things you need to know

All functions for creating dl4j objects return code by default

  • All of these functions have an option to return the dl4j object
    • :as-code? = false
  • This is because all builders require the code representation of dl4j objects
    • this requirement is not going to change
  • INDArray creation fns default to objects; this is for convenience
    • :as-code? is still respected

API functions return code when all args are provided as code

API functions return the value of calling the wrapped method when args are provided as a mixture of objects and code or just objects

The tests are there to help clarify behavior; if you are unsure of how to use a fn, search the tests

  • for questions about spark, refer to the spark section below
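The same duality applies to API functions; a minimal sketch (assuming csv-rr is a record reader like the one created in the data-import example further down):

(next-record! :rr csv-rr)
;; => code: a clojure data structure describing the wrapped method call

(next-record! :rr csv-rr :as-code? false)
;; => the record object returned by actually calling the wrapped method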

Example of obj/code duality

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]))

;; as code (the default)

(l/dense-layer-builder
 :activation-fn :relu
 :learning-rate 0.006
 :weight-init :xavier
 :layer-name "example layer"
 :n-in 10
 :n-out 1)

;; =>

(doto
 (org.deeplearning4j.nn.conf.layers.DenseLayer$Builder.)
 (.nOut 1)
 (.activation (dl4clj.constants/value-of {:activation-fn :relu}))
 (.weightInit (dl4clj.constants/value-of {:weight-init :xavier}))
 (.nIn 10)
 (.name "example layer")
 (.learningRate 0.006))

;; as an object

(l/dense-layer-builder
 :activation-fn :relu
 :learning-rate 0.006
 :weight-init :xavier
 :layer-name "example layer"
 :n-in 10
 :n-out 1
 :as-code? false)

;; =>

#object[org.deeplearning4j.nn.conf.layers.DenseLayer 0x69d7d160 "DenseLayer(super=FeedForwardLayer(super=Layer(layerName=example layer, activationFn=relu, weightInit=XAVIER, biasInit=NaN, dist=null, learningRate=0.006, biasLearningRate=NaN, learningRateSchedule=null, momentum=NaN, momentumSchedule=null, l1=NaN, l2=NaN, l1Bias=NaN, l2Bias=NaN, dropOut=NaN, updater=null, rho=NaN, epsilon=NaN, rmsDecay=NaN, adamMeanDecay=NaN, adamVarDecay=NaN, gradientNormalization=null, gradientNormalizationThreshold=NaN), nIn=10, nOut=1))"]
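Because the code form is plain Clojure data, it can also be eval'd by hand when you want the object; a minimal sketch (judging from the doto form above, this yields the configured builder, an assumption rather than a documented entry point):

(eval
 (l/dense-layer-builder
  :activation-fn :relu
  :n-in 10
  :n-out 1))
;; => the configured org.deeplearning4j.nn.conf.layers.DenseLayer$Builder object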

General usage examples

Importing data

Loading data from a file (here it's a CSV)


(ns my.ns
 (:require [dl4clj.datasets.input-splits :as s]
           [dl4clj.datasets.record-readers :as rr]
           [dl4clj.datasets.api.record-readers :refer :all]
           [dl4clj.datasets.iterators :as ds-iter]
           [dl4clj.datasets.api.iterators :refer :all]
           [dl4clj.helpers :refer [data-from-iter]]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; file splits (convert the data to records)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def poker-path "resources/poker-hand-training.csv")
;; this is not a complete dataset, it is just here to serve as an example

(def file-split (s/new-filesplit :path poker-path))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers, (read the records created by the file split)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def csv-rr (initialize-rr! :rr (rr/new-csv-record-reader :skip-n-lines 0 :delimiter ",")
                                 :input-split file-split))

;; let's look at some data
(println (next-record! :rr csv-rr :as-code? false))
;; => #object[java.util.ArrayList 0x2473e02d [1, 10, 1, 11, 1, 13, 1, 12, 1, 1, 9]]
;; this is our first line from the csv


;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers dataset iterators (turn our writables into a dataset)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
                 :record-reader csv-rr
                 :batch-size 1
                 :label-idx 10
                 :n-possible-labels 10))

;; we use our record reader created above
;; we want to see one example per dataset obj returned (:batch-size = 1)
;; we know our label is at the last index, so :label-idx = 10
;; there are 10 possible types of poker hands so :n-possible-labels = 10
;; you can also set :label-idx to -1 to use the last index no matter the size of the seq

(def other-rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
                       :record-reader csv-rr
                       :batch-size 1
                       :label-idx -1
                       :n-possible-labels 10))

(str (next-example! :iter rr-ds-iter :as-code? false))
;; =>
;;===========INPUT===================
;;[1.00, 10.00, 1.00, 11.00, 1.00, 13.00, 1.00, 12.00, 1.00, 1.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00]


;; and to show that :label-idx = -1 gives us the same output

(= (next-example! :iter rr-ds-iter :as-code? false)
   (next-example! :iter other-rr-ds-iter :as-code? false)) ;; => true
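The data-from-iter helper required above gives another way to look at the data; a hedged sketch (the exact call shape is an assumption based on the :require at the top of this example):

(data-from-iter rr-ds-iter)
;; => assumed to return the remaining dataset objects from the iterator as a seq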

INDArrays and Datasets from clojure data structures


(ns my.ns
  (:require [nd4clj.linalg.factory.nd4j :refer [vec->indarray matrix->indarray
                                                indarray-of-zeros indarray-of-ones
                                                indarray-of-rand vec-or-matrix->indarray]]
            [dl4clj.datasets.new-datasets :refer [new-ds]]
            [dl4clj.datasets.api.datasets :refer [as-list]]
            [dl4clj.datasets.iterators :refer [new-existing-dataset-iterator]]
            [dl4clj.datasets.api.iterators :refer :all]
            [dl4clj.datasets.pre-processors :as ds-pp]
            [dl4clj.datasets.api.pre-processors :refer :all]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; INDArray creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;;TODO: consider defaulting to code

;; can create from a vector

(vec->indarray [1 2 3 4])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x269df212 [1.00, 2.00, 3.00, 4.00]]

;; or from a matrix

(matrix->indarray [[1 2 3 4] [2 4 6 8]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x20aa7fe1
;; [[1.00, 2.00, 3.00, 4.00], [2.00, 4.00, 6.00, 8.00]]]


;; will fill in sparseness with zeros

(matrix->indarray [[1 2 3 4] [2 4 6 8] [10 12]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x8b7796c
;;[[1.00, 2.00, 3.00, 4.00],
;; [2.00, 4.00, 6.00, 8.00],
;; [10.00, 12.00, 0.00, 0.00]]]

;; can create an indarray of all zeros with specified shape
;; defaults to :rows = 1 :columns = 1

(indarray-of-zeros :rows 3 :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x6f586a7e
;;[[0.00, 0.00],
;; [0.00, 0.00],
;; [0.00, 0.00]]]

(indarray-of-zeros) ;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xe59ffec 0.00]

;; and if only one dimension is supplied, you will get a vector of the specified length

(indarray-of-zeros :rows 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2899d974 [0.00, 0.00]]

(indarray-of-zeros :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xa5b9782 [0.00, 0.00]]

;; same considerations/defaults for indarray-of-ones and indarray-of-rand

(indarray-of-ones :rows 2 :columns 3)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x54f08662 [[1.00, 1.00, 1.00], [1.00, 1.00, 1.00]]]

(indarray-of-rand :rows 2 :columns 3)
;; all values are greater than 0 but less than 1
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2f20293b [[0.85, 0.86, 0.13], [0.94, 0.04, 0.36]]]



;; vec-or-matrix->indarray is built into all functions which require INDArrays
;; so that you can use clojure data structures
;; but you still have the option of passing existing INDArrays

(def example-array (vec-or-matrix->indarray [1 2 3 4]))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x5c44c71f [1.00, 2.00, 3.00, 4.00]]

(vec-or-matrix->indarray example-array)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x607b03b0 [1.00, 2.00, 3.00, 4.00]]

(vec-or-matrix->indarray (indarray-of-rand :rows 2))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x49143b08 [0.76, 0.92]]

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def ds-with-single-example (new-ds :input [1 2 3 4]
                                    :output [0.0 1.0 0.0]))

(as-list :ds ds-with-single-example :as-code? false)
;; =>
;; #object[java.util.ArrayList 0x5d703d12
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00]]]

(def ds-with-multiple-examples (new-ds
                                :input [[1 2 3 4] [2 4 6 8]]
                                :output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))

(as-list :ds ds-with-multiple-examples :as-code? false)
;; =>
;;#object[java.util.ArrayList 0x29c7a9e2
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00],
;;===========INPUT===================
;;[2.00, 4.00, 6.00, 8.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 1.00]]]

;; we can create a dataset iterator from the code which creates datasets
;; and set the labels for our outputs (optional)

(def ds-with-multiple-examples
  (new-ds
   :input [[1 2 3 4] [2 4 6 8]]
   :output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))

;; iterator
(def training-rr-ds-iter
  (new-existing-dataset-iterator
   :dataset ds-with-multiple-examples
   :labels ["foo" "baz" "foobaz"]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set normalization
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; this gathers statistics on the dataset and normalizes the data
;; and applies the transformation to all dataset objects in the iterator
(def train-iter-normalized
  (c/normalize-iter! :iter training-rr-ds-iter
                     :normalizer (ds-pp/new-standardize-normalization-ds-preprocessor)
                     :as-code? false))

;; above returns the normalized iterator
;; to get the fitted normalizer

(def the-normalizer
  (get-pre-processor train-iter-normalized))

Model configuration

Creating a neural network configuration with single and multiple layers

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.nn.conf.distributions :as dist]
            [dl4clj.nn.conf.input-pre-processor :as pp]
            [dl4clj.nn.conf.step-fns :as s-fn]))

;; nn/builder has 3 types of args
;; 1) args which set network configuration params
;; 2) args which set default values for layers
;; 3) args which set multi layer network configuration params

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; single layer nn configuration
;; here we are setting network configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(nn/builder :optimization-algo :stochastic-gradient-descent
            :seed 123
            :iterations 1
            :minimize? true
            :use-drop-connect? false
            :lr-score-based-decay-rate 0.002
            :regularization? false
            :step-fn :default-step-fn
            :layers {:dense-layer {:activation-fn :relu
                                   :updater :adam
                                   :adam-mean-decay 0.2
                                   :adam-var-decay 0.1
                                   :learning-rate 0.006
                                   :weight-init :xavier
                                   :layer-name "single layer model example"
                                   :n-in 10
                                   :n-out 20}})

;; several options within a nn-conf map can be either configuration maps
;; or calls to fns
;; it doesn't matter which option you choose, and you don't have to stay consistent
;; the list of params which can be passed as config maps or fn calls will
;; be enumerated at a later date

(nn/builder :optimization-algo :stochastic-gradient-descent
            :seed 123
            :iterations 1
            :minimize? true
            :use-drop-connect? false
            :lr-score-based-decay-rate 0.002
            :regularization? false
            :step-fn (s-fn/new-default-step-fn)
            :build? true
            ;; don't need to specify layer order, there's only one
            :layers (l/dense-layer-builder
                    :activation-fn :relu
                    :updater :adam
                    :adam-mean-decay 0.2
                    :adam-var-decay 0.1
                    :dist (dist/new-normal-distribution :mean 0 :std 1)
                    :learning-rate 0.006
                    :weight-init :xavier
                    :layer-name "single layer model example"
                    :n-in 10
                    :n-out 20))

;; these configurations are the same
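Since nn/builder itself creates a dl4j configuration object, :as-code? is respected here too; a minimal sketch (the spark examples below pass :as-code? false to nn/builder in the same way):

(nn/builder :optimization-algo :stochastic-gradient-descent
            :seed 123
            :layers {:dense-layer {:n-in 10 :n-out 20}}
            :as-code? false)
;; => the built dl4j configuration object rather than its code representation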

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; multi-layer configuration
;; here we are also setting layer defaults
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; defaults will apply to layers which do not specify those values in their config

(nn/builder
 :optimization-algo :stochastic-gradient-descent
 :seed 123
 :iterations 1
 :minimize? true
 :use-drop-connect? false
 :lr-score-based-decay-rate 0.002
 :regularization? false
 :default-activation-fn :sigmoid
 :default-weight-init :uniform

 ;; we need to specify the layer order
 :layers {0 (l/activation-layer-builder
             :activation-fn :relu
             :updater :adam
             :adam-mean-decay 0.2
             :adam-var-decay 0.1
             :learning-rate 0.006
             :weight-init :xavier
             :layer-name "example first layer"
             :n-in 10
             :n-out 20)
          1 {:output-layer {:n-in 20
                            :n-out 2
                            :loss-fn :mse
                            :layer-name "example output layer"}}})

;; specifying multi-layer config params

(nn/builder
 ;; network args
 :optimization-algo :stochastic-gradient-descent
 :seed 123
 :iterations 1
 :minimize? true
 :use-drop-connect? false
 :lr-score-based-decay-rate 0.002
 :regularization? false

 ;; layer defaults
 :default-activation-fn :sigmoid
 :default-weight-init :uniform

 ;; the layers
 :layers {0 (l/activation-layer-builder
             :activation-fn :relu
             :updater :adam
             :adam-mean-decay 0.2
             :adam-var-decay 0.1
             :learning-rate 0.006
             :weight-init :xavier
             :layer-name "example first layer"
             :n-in 10
             :n-out 20)
          1 {:output-layer {:n-in 20
                            :n-out 2
                            :loss-fn :mse
                            :layer-name "example output layer"}}}
 ;; multi layer network args
 :backprop? true
 :backprop-type :standard
 :pretrain? false
 :input-pre-processors {0 (pp/new-zero-mean-pre-pre-processor)
                        1 {:unit-variance-processor {}}})

Configuration to Trained models

Multi Layer models

(ns my.ns
  (:require [dl4clj.datasets.iterators :as iter]
            [dl4clj.datasets.input-splits :as split]
            [dl4clj.datasets.record-readers :as rr]
            [dl4clj.optimize.listeners :as listener]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.nn.multilayer.multi-layer-network :as mln]
            [dl4clj.nn.api.model :refer [init! set-listeners!]]
            [dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
            [dl4clj.datasets.api.record-readers :refer [initialize-rr!]]
            [dl4clj.eval.api.eval :refer [get-stats get-accuracy]]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; nn-conf -> multi-layer-network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def nn-conf
  (nn/builder
   ;; network args
   :optimization-algo :stochastic-gradient-descent
   :seed 123 :iterations 1 :regularization? true

   ;; setting layer defaults
   :default-activation-fn :relu :default-l2 7.5e-6
   :default-weight-init :xavier :default-learning-rate 0.0015
   :default-updater :nesterovs :default-momentum 0.98

   ;; setting layer configuration
   :layers {0 {:dense-layer
               {:layer-name "example first layer"
                :n-in 784 :n-out 500}}
            1 {:dense-layer
               {:layer-name "example second layer"
                :n-in 500 :n-out 100}}
            2 {:output-layer
               {:n-in 100 :n-out 10
                ;; layer specific params
                :loss-fn :negativeloglikelihood
                :activation-fn :softmax
                :layer-name "example output layer"}}}

   ;; multi layer args
   :backprop? true
   :pretrain? false))

(def multi-layer-network (c/model-from-conf nn-conf))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; local cpu training with dl4j pre-built iterators
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; let's use the pre-built MNIST data set iterator

(def train-mnist-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? true
   :seed 123))

(def test-mnist-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? false
   :seed 123))

;; and let's set a listener so we can see how training is going

(def score-listener (listener/new-score-iteration-listener :print-every-n 5))

;; and attach it to our model

;; TODO: listeners are broken, look into log4j warning
(def mln-with-listener (set-listeners! :model multi-layer-network
                                       :listeners [score-listener]))

(def trained-mln (mln/train-mln-with-ds-iter! :mln mln-with-listener
                                              :iter train-mnist-iter
                                              :n-epochs 15
                                              :as-code? false))

;; training happens because :as-code? = false
;; if it was true, we would still just have a data structure
;; we now have a trained model that has seen the training dataset 15 times
;; time to evaluate our model
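
;; for contrast, a sketch: with the default :as-code? (true), the same call would
;; just return a clojure data structure describing the training call instead of
;; training (an assumption based on the code-by-default behavior described above)

(def training-code
  (mln/train-mln-with-ds-iter! :mln mln-with-listener
                               :iter train-mnist-iter
                               :n-epochs 15))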

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;Create an evaluation object
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def eval-obj (evaluate-classification :mln trained-mln
                                       :iter test-mnist-iter))

;; always remember that these objects are stateful; don't use the same eval-obj
;; to eval two different networks
;; we trained the model on a training dataset.  We evaluate on a test set

(println (get-stats :evaler eval-obj))
;; this will print the stats to standard out for each feature/label pair

;;Examples labeled as 0 classified by model as 0: 968 times
;;Examples labeled as 0 classified by model as 1: 1 times
;;Examples labeled as 0 classified by model as 2: 1 times
;;Examples labeled as 0 classified by model as 3: 1 times
;;Examples labeled as 0 classified by model as 5: 1 times
;;Examples labeled as 0 classified by model as 6: 3 times
;;Examples labeled as 0 classified by model as 7: 1 times
;;Examples labeled as 0 classified by model as 8: 2 times
;;Examples labeled as 0 classified by model as 9: 2 times
;;Examples labeled as 1 classified by model as 1: 1126 times
;;Examples labeled as 1 classified by model as 2: 2 times
;;Examples labeled as 1 classified by model as 3: 1 times
;;Examples labeled as 1 classified by model as 5: 1 times
;;Examples labeled as 1 classified by model as 6: 2 times
;;Examples labeled as 1 classified by model as 7: 1 times
;;Examples labeled as 1 classified by model as 8: 2 times
;;Examples labeled as 2 classified by model as 0: 3 times
;;Examples labeled as 2 classified by model as 1: 2 times
;;Examples labeled as 2 classified by model as 2: 1006 times
;;Examples labeled as 2 classified by model as 3: 2 times
;;Examples labeled as 2 classified by model as 4: 3 times
;;Examples labeled as 2 classified by model as 6: 3 times
;;Examples labeled as 2 classified by model as 7: 7 times
;;Examples labeled as 2 classified by model as 8: 6 times
;;Examples labeled as 3 classified by model as 2: 4 times
;;Examples labeled as 3 classified by model as 3: 990 times
;;Examples labeled as 3 classified by model as 5: 3 times
;;Examples labeled as 3 classified by model as 7: 3 times
;;Examples labeled as 3 classified by model as 8: 3 times
;;Examples labeled as 3 classified by model as 9: 7 times
;;Examples labeled as 4 classified by model as 2: 2 times
;;Examples labeled as 4 classified by model as 3: 1 times
;;Examples labeled as 4 classified by model as 4: 967 times
;;Examples labeled as 4 classified by model as 6: 4 times
;;Examples labeled as 4 classified by model as 7: 1 times
;;Examples labeled as 4 classified by model as 9: 7 times
;;Examples labeled as 5 classified by model as 0: 2 times
;;Examples labeled as 5 classified by model as 3: 6 times
;;Examples labeled as 5 classified by model as 4: 1 times
;;Examples labeled as 5 classified by model as 5: 874 times
;;Examples labeled as 5 classified by model as 6: 3 times
;;Examples labeled as 5 classified by model as 7: 1 times
;;Examples labeled as 5 classified by model as 8: 3 times
;;Examples labeled as 5 classified by model as 9: 2 times
;;Examples labeled as 6 classified by model as 0: 4 times
;;Examples labeled as 6 classified by model as 1: 3 times
;;Examples labeled as 6 classified by model as 3: 2 times
;;Examples labeled as 6 classified by model as 4: 4 times
;;Examples labeled as 6 classified by model as 5: 4 times
;;Examples labeled as 6 classified by model as 6: 939 times
;;Examples labeled as 6 classified by model as 7: 1 times
;;Examples labeled as 6 classified by model as 8: 1 times
;;Examples labeled as 7 classified by model as 1: 7 times
;;Examples labeled as 7 classified by model as 2: 4 times
;;Examples labeled as 7 classified by model as 3: 3 times
;;Examples labeled as 7 classified by model as 7: 1005 times
;;Examples labeled as 7 classified by model as 8: 2 times
;;Examples labeled as 7 classified by model as 9: 7 times
;;Examples labeled as 8 classified by model as 0: 3 times
;;Examples labeled as 8 classified by model as 2: 3 times
;;Examples labeled as 8 classified by model as 3: 2 times
;;Examples labeled as 8 classified by model as 4: 4 times
;;Examples labeled as 8 classified by model as 5: 3 times
;;Examples labeled as 8 classified by model as 6: 2 times
;;Examples labeled as 8 classified by model as 7: 4 times
;;Examples labeled as 8 classified by model as 8: 947 times
;;Examples labeled as 8 classified by model as 9: 6 times
;;Examples labeled as 9 classified by model as 0: 2 times
;;Examples labeled as 9 classified by model as 1: 2 times
;;Examples labeled as 9 classified by model as 3: 4 times
;;Examples labeled as 9 classified by model as 4: 8 times
;;Examples labeled as 9 classified by model as 6: 1 times
;;Examples labeled as 9 classified by model as 7: 4 times
;;Examples labeled as 9 classified by model as 8: 2 times
;;Examples labeled as 9 classified by model as 9: 986 times

;;==========================Scores========================================
;; Accuracy:        0.9808
;; Precision:       0.9808
;; Recall:          0.9807
;; F1 Score:        0.9807
;;========================================================================

;; the stats that are printed can also be retrieved via fns in the
;; evaluation namespace once the evaler has been run

(get-accuracy :evaler eval-obj) ;; => 0.9808

Model Tuning

Early Stopping (controlling training)

it is recommended that you start here when designing models

using dl4clj.core


(ns my.ns
  (:require [dl4clj.earlystopping.termination-conditions :refer :all]
            [dl4clj.earlystopping.model-saver :refer [new-in-memory-saver]]
            [dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
            [dl4clj.eval.api.eval :refer [get-stats]]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.datasets.iterators :as iter]
            [dl4clj.core :as c]))

(def nn-conf
  (nn/builder
   ;; network args
   :optimization-algo :stochastic-gradient-descent
   :seed 123
   :iterations 1
   :regularization? true

   ;; setting layer defaults
   :default-activation-fn :relu
   :default-l2 7.5e-6
   :default-weight-init :xavier
   :default-learning-rate 0.0015
   :default-updater :nesterovs
   :default-momentum 0.98

   ;; setting layer configuration
   :layers {0 {:dense-layer
               {:layer-name "example first layer"
                :n-in 784 :n-out 500}}
            1 {:dense-layer
               {:layer-name "example second layer"
                :n-in 500 :n-out 100}}
            2 {:output-layer
               {:n-in 100 :n-out 10
                ;; layer specific params
                :loss-fn :negativeloglikelihood
                :activation-fn :softmax
                :layer-name "example output layer"}}}

   ;; multi layer args
   :backprop? true
   :pretrain? false))

(def train-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? true
   :seed 123))

(def test-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? false
   :seed 123))

(def invalid-score-condition (new-invalid-score-iteration-termination-condition))

(def max-score-condition (new-max-score-iteration-termination-condition
                          :max-score 20.0))

(def max-time-condition (new-max-time-iteration-termination-condition
                         :max-time-val 10
                         :max-time-unit :minutes))

(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
                                     :max-n-epoch-no-improve 5))

(def target-score-condition (new-best-score-epoch-termination-condition
                             :best-expected-score 0.009))

(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))

(def in-mem-saver (new-in-memory-saver))

(def trained-mln
;; defaults to returning the model
  (c/train-with-early-stopping
   :nn-conf nn-conf
   :training-iter train-iter
   :testing-iter test-iter
   :eval-every-n-epochs 1
   :iteration-termination-conditions [invalid-score-condition
                                      max-score-condition
                                      max-time-condition]
   :epoch-termination-conditions [score-doesnt-improve-condition
                                  target-score-condition
                                  max-number-epochs-condition]
   :save-last-model? true
   :model-saver in-mem-saver
   :as-code? false))

(def model-evaler
  (evaluate-classification :mln trained-mln :iter test-mnist-iter))

(println (get-stats :evaler model-evaler))

  • explicit, step-by-step way of doing this

(ns my.ns
  (:require [dl4clj.earlystopping.early-stopping-config :refer [new-early-stopping-config]]
            [dl4clj.earlystopping.termination-conditions :refer :all]
            [dl4clj.earlystopping.model-saver :refer [new-in-memory-saver new-local-file-model-saver]]
            [dl4clj.earlystopping.score-calc :refer [new-ds-loss-calculator]]
            [dl4clj.earlystopping.early-stopping-trainer :refer [new-early-stopping-trainer]]
            [dl4clj.earlystopping.api.early-stopping-trainer :refer [fit-trainer!]]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.nn.multilayer.multi-layer-network :as mln]
            [dl4clj.utils :refer [load-model!]]
            [dl4clj.datasets.iterators :as iter]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; start with our network config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def nn-conf
  (nn/builder
   ;; network args
   :optimization-algo :stochastic-gradient-descent
   :seed 123 :iterations 1 :regularization? true
   ;; setting layer defaults
   :default-activation-fn :relu :default-l2 7.5e-6
   :default-weight-init :xavier :default-learning-rate 0.0015
   :default-updater :nesterovs :default-momentum 0.98
   ;; setting layer configuration
   :layers {0 {:dense-layer
               {:layer-name "example first layer"
                :n-in 784 :n-out 500}}
            1 {:dense-layer
               {:layer-name "example second layer"
                :n-in 500 :n-out 100}}
            2 {:output-layer
               {:n-in 100 :n-out 10
                ;; layer specific params
                :loss-fn :negativeloglikelihood
                :activation-fn :softmax
                :layer-name "example output layer"}}}
   ;; multi layer args
   :backprop? true
   :pretrain? false))

(def mln (c/model-from-conf nn-conf))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; the training/testing data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def train-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? true
   :seed 123))

(def test-iter
  (iter/new-mnist-data-set-iterator
   :batch-size 64
   :train? false
   :seed 123))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we are going to need termination conditions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; these allow us to control when we exit training

;; this can be based off of iterations or epochs

;; iteration termination conditions

(def invalid-score-condition (new-invalid-score-iteration-termination-condition))

(def max-score-condition (new-max-score-iteration-termination-condition
                          :max-score 20.0))

(def max-time-condition (new-max-time-iteration-termination-condition
                         :max-time-val 10
                         :max-time-unit :minutes))

;; epoch termination conditions

(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
                                     :max-n-epoch-no-improve 5))

(def target-score-condition (new-best-score-epoch-termination-condition :best-expected-score 0.009))

(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we also need a way to save our model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; can be in memory or to a local directory

(def in-mem-saver (new-in-memory-saver))

(def local-file-saver (new-local-file-model-saver :directory "resources/tmp/readme/"))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; set up your score calculator
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def score-calcer (new-ds-loss-calculator :iter test-iter
                                          :average? true))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; termination conditions
;; a way to save our model
;; a way to calculate the score of our model on the dataset

(def early-stopping-conf
  (new-early-stopping-config
   :epoch-termination-conditions [score-doesnt-improve-condition
                                  target-score-condition
                                  max-number-epochs-condition]
   :iteration-termination-conditions [invalid-score-condition
                                      max-score-condition
                                      max-time-condition]
   :eval-every-n-epochs 5
   :model-saver local-file-saver
   :save-last-model? true
   :score-calculator score-calcer))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping trainer from our data, model and early stopping conf
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def es-trainer (new-early-stopping-trainer :early-stopping-conf early-stopping-conf
                                            :mln mln
                                            :iter train-iter))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; fit and use our early stopping trainer
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def es-trainer-fitted (fit-trainer! es-trainer :as-code? false))

;; when the trainer terminates, you will see something like this
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO  Completed training epoch 14
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO  New best model: score = 0.005225599372851298,
;;                                                   epoch = 14 (previous: score = 0.018243224899038346, epoch = 7)
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO Hit epoch termination condition at epoch 14.
;;                                           Details: BestScoreEpochTerminationCondition(0.009)

;; and if we look at the es-trainer-fitted object we see

;;#object[org.deeplearning4j.earlystopping.EarlyStoppingResult 0x5ab74f27 EarlyStoppingResult
;;(terminationReason=EpochTerminationCondition,details=BestScoreEpochTerminationCondition(0.009),
;; bestModelEpoch=14,bestModelScore=0.005225599372851298,totalEpochs=15)]

;; and our model has been saved to /resources/tmp/readme/bestModel.bin
;; there we have our model config, model params and our updater state

;; we can then load this model to use it or continue refining it

(def loaded-model (load-model! :path "resources/tmp/readme/bestModel.bin"
                               :load-updater? true))
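
From here the loaded model can be refined further; a hedged sketch reusing the training fn shown earlier (combining them like this is an assumption, not an example from the library):

(mln/train-mln-with-ds-iter! :mln loaded-model
                             :iter train-iter
                             :n-epochs 5
                             :as-code? false)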

Transfer Learning (freezing layers)


;; TODO: need to write up examples

Spark Training

dl4j Spark usage

How it is done in dl4clj

  • Uses dl4clj.core
    • This example uses a fn which takes care of most steps for you
      • allows you to pass args as code because the fn accounts for the multiple spark contexts issue encountered when everything is just a data structure

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
            [dl4clj.eval.api.eval :refer [get-stats]]
            [dl4clj.spark.masters.param-avg :as master]
            [dl4clj.spark.data.java-rdd :refer [new-java-spark-context
                                                java-rdd-from-iter]]
            [dl4clj.spark.api.dl4j-multi-layer :refer [eval-classification-spark-mln
                                                       get-spark-context]]
            [dl4clj.core :as c]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def mln-conf
  (nn/builder
   :optimization-algo :stochastic-gradient-descent
   :default-learning-rate 0.006
   :layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
            1 {:output-layer
               {:loss-fn :negativeloglikelihood
                :n-in 2 :n-out 3
                :activation-fn :soft-max
                :weight-init :xavier}}}
   :backprop? true
   :backprop-type :standard))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def training-master
  (master/new-parameter-averaging-training-master
   :build? true
   :rdd-n-examples 10
   :n-workers 4
   :averaging-freq 10
   :batch-size-per-worker 2
   :export-dir "resources/spark/master/"
   :rdd-training-approach :direct
   :repartition-data :always
   :repartition-strategy :balanced
   :seed 1234
   :save-updater? true
   :storage-level :none))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, spark context
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def your-spark-context
  (new-java-spark-context :app-name "example app"))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, training data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def iris-iter
  (new-iris-data-set-iterator
   :batch-size 1
   :n-examples 5))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, spark mln
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def fitted-spark-mln
  (c/train-with-spark :spark-context your-spark-context
                      :mln-conf mln-conf
                      :training-master training-master
                      :iter iris-iter
                      :n-epochs 1
                      :as-code? false))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 6, use spark context from spark-mln to create rdd
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; TODO: eliminate this step

(def our-rdd
  (let [sc (get-spark-context fitted-spark-mln :as-code? false)]
    (java-rdd-from-iter :spark-context sc
                        :iter iris-iter)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 7, evaluate the model and print stats (poor performance of model expected)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def eval-obj
  (eval-classification-spark-mln
   :spark-mln fitted-spark-mln
   :rdd our-rdd))

(println (get-stats :evaler eval-obj))

  • this example demonstrates the dl4j workflow
    • NOTE: unlike the previous example, this one requires dl4j objects to be used
      • this is because spark only wants you to have one spark context at a time

(ns my.ns
  (:require [dl4clj.nn.conf.builders.layers :as l]
            [dl4clj.nn.conf.builders.nn :as nn]
            [dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
            [dl4clj.eval.api.eval :refer [get-stats]]
            [dl4clj.spark.masters.param-avg :as master]
            [dl4clj.spark.data.java-rdd :refer [new-java-spark-context java-rdd-from-iter]]
            [dl4clj.spark.dl4j-multi-layer :as spark-mln]
            [dl4clj.spark.api.dl4j-multi-layer :refer [fit-spark-mln!
                                                       eval-classification-spark-mln]]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def mln-conf
  (nn/builder
   :optimization-algo :stochastic-gradient-descent
   :default-learning-rate 0.006
   :layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
            1 {:output-layer
               {:loss-fn :negativeloglikelihood
                :n-in 2 :n-out 3
                :activation-fn :soft-max
                :weight-init :xavier}}}
   :backprop? true
   :as-code? false
   :backprop-type :standard))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, create a training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; not all options specified, but most are

(def training-master
  (master/new-parameter-averaging-training-master
   :build? true
   :rdd-n-examples 10
   :n-workers 4
   :averaging-freq 10
   :batch-size-per-worker 2
   :export-dir "resources/spark/master/"
   :rdd-training-approach :direct
   :repartition-data :always
   :repartition-strategy :balanced
   :seed 1234
   :as-code? false
   :save-updater? true
   :storage-level :none))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, create a Spark Multi Layer Network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def your-spark-context
  (new-java-spark-context :app-name "example app" :as-code? false))

;; new-java-spark-context will turn an existing spark-configuration into a java spark context
;; or create a new java spark context with master set to "local[*]" and the app name
;; set to :app-name


(def spark-mln
  (spark-mln/new-spark-multi-layer-network
   :spark-context your-spark-context
   :mln mln-conf
   :training-master training-master
   :as-code? false))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, load your data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; one way is via a dataset-iterator
;; can make one directly from a dataset (iterator data-set)
;; see: nd4clj.linalg.dataset.api.data-set and nd4clj.linalg.dataset.data-set
;; we are going to use a pre-built one

(def iris-iter
  (new-iris-data-set-iterator
   :batch-size 1
   :n-examples 5
   :as-code? false))

;; now let's convert the data into a JavaRDD

(def our-rdd
  (java-rdd-from-iter :spark-context your-spark-context
                      :iter iris-iter))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, fit and evaluate the model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(def fitted-spark-mln
  (fit-spark-mln!
   :spark-mln spark-mln
   :rdd our-rdd
   :n-epochs 1))
;; this fn also has the option to supply :path-to-data instead of :rdd
;; that path should point to a directory containing a number of dataset objects

(def eval-obj
  (eval-classification-spark-mln
   :spark-mln fitted-spark-mln
   :rdd our-rdd))
;; we would want different testing and training RDDs, but here we are using
;; the data we trained on

;; let's get the stats for how our model performed

(println (get-stats :evaler eval-obj))

Terminology

Coming soon

Packages to come back to:

  • Implement ComputationGraphs and the classes which use them
  • NLP
  • Parallelism
  • TSNE
  • UI


Author: yetanalytics
Source Code: https://github.com/yetanalytics/dl4clj
License: BSD-2-Clause License

How to Find Ulimit For user on Linux

How can I find the correct ulimit values for a user account or process on Linux systems?

For proper operation, we must ensure that the correct ulimit values are set after installing various software. The Linux system provides a means of restricting the number of resources that can be used. Limits are set for each Linux user account; however, system limits are also applied separately to each process running for that user. For example, if certain thresholds are too low, the system might not be able to serve web pages using Nginx/Apache or a PHP/Python app. System resource limits can be viewed or set with the ulimit command. Let us see how to use ulimit, which provides control over the resources available to the shell and to processes.

SDK for Connecting to AWS IoT From A Device using Embedded C

AWS IoT Device SDK for Embedded C

Overview

The AWS IoT Device SDK for Embedded C (C-SDK) is a collection of C source files under the MIT open source license that can be used in embedded applications to securely connect IoT devices to AWS IoT Core. It contains MQTT client, HTTP client, JSON parser, AWS IoT Device Shadow, AWS IoT Jobs, and AWS IoT Device Defender libraries. This SDK is distributed in source form, and can be built into customer firmware along with application code, other libraries and an operating system (OS) of your choice. These libraries are only dependent on standard C libraries, so they can be ported to various operating systems, from embedded Real Time Operating Systems (RTOS) to Linux/Mac/Windows. You can find sample usage of C-SDK libraries on POSIX systems using OpenSSL (e.g. Linux demos in this repository), and on FreeRTOS using mbedTLS (e.g. FreeRTOS demos in FreeRTOS repository).

For the latest release of C-SDK, please see the section for Releases and Documentation.

C-SDK includes libraries that are part of the FreeRTOS 202012.01 LTS release. Learn more about the FreeRTOS 202012.01 LTS libraries by clicking here.

License

The C-SDK libraries are licensed under the MIT open source license.

Features

C-SDK simplifies access to various AWS IoT services. C-SDK has been tested to work with AWS IoT Core and an open source MQTT broker to ensure interoperability. The AWS IoT Device Shadow, AWS IoT Jobs, and AWS IoT Device Defender libraries are flexible to work with any MQTT client and JSON parser. The MQTT client and JSON parser libraries are offered as choices without being tightly coupled with the rest of the SDK. C-SDK contains the following libraries:

coreMQTT

The coreMQTT library provides the ability to establish an MQTT connection with a broker over a customer-implemented transport layer, which can either be a secure channel like a TLS session (mutually authenticated or server-only authentication) or a non-secure channel like a plaintext TCP connection. This MQTT connection can be used for performing publish operations to MQTT topics and subscribing to MQTT topics. The library provides a mechanism to register customer-defined callbacks for receiving incoming PUBLISH, acknowledgement and keep-alive response events from the broker. The library has been refactored for memory optimization and is compliant with the MQTT 3.1.1 standard. It has no dependencies on any additional libraries other than the standard C library, a customer-implemented network transport interface, and optionally a customer-implemented platform time function. The refactored design embraces different use-cases, ranging from resource-constrained platforms using only QoS 0 MQTT PUBLISH messages to resource-rich platforms using QoS 2 MQTT PUBLISH over TLS connections.

See memory requirements for the latest release here.

coreHTTP

The coreHTTP library provides the ability to establish an HTTP connection with a server over a customer-implemented transport layer, which can either be a secure channel like a TLS session (mutually authenticated or server-only authentication) or a non-secure channel like a plaintext TCP connection. The HTTP connection can be used to make "GET" (including range requests), "PUT", "POST" and "HEAD" requests. The library provides a mechanism to register a customer-defined callback for receiving parsed header fields in an HTTP response. The library has been refactored for memory optimization, and is a client implementation of a subset of the HTTP/1.1 standard.

See memory requirements for the latest release here.

coreJSON

The coreJSON library is a JSON parser that strictly enforces the ECMA-404 JSON standard. It provides a function to validate a JSON document, and a function to search for a key and return its value. A search can descend into nested structures using a compound query key. A JSON document validation also checks for illegal UTF8 encodings and illegal Unicode escape sequences.

See memory requirements for the latest release here.

corePKCS11

The corePKCS11 library is an implementation of the PKCS #11 interface (API) that makes it easier to develop applications that rely on cryptographic operations. Only a subset of the PKCS #11 v2.4 standard has been implemented, with a focus on operations involving asymmetric keys, random number generation, and hashing.

The Cryptoki or PKCS #11 standard defines a platform-independent API to manage and use cryptographic tokens. The name, "PKCS #11", is used interchangeably to refer to the API itself and the standard which defines it.

The PKCS #11 API is useful for writing software without taking a dependency on any particular implementation or hardware. By writing against the PKCS #11 standard interface, code can be used interchangeably with multiple algorithms, implementations and hardware.

Generally vendors for secure cryptoprocessors such as Trusted Platform Module (TPM), Hardware Security Module (HSM), Secure Element, or any other type of secure hardware enclave, distribute a PKCS #11 implementation with the hardware. The purpose of corePKCS11 mock is therefore to provide a PKCS #11 implementation that allows for rapid prototyping and development before switching to a cryptoprocessor specific PKCS #11 implementation in production devices.

Since the PKCS #11 interface is defined as part of the PKCS #11 specification, replacing corePKCS11 with another implementation should require little porting effort, as the interface will not change. The system tests distributed in the corePKCS11 repository can be leveraged to verify that the behavior of a different implementation is similar to corePKCS11.

See memory requirements for the latest release here.

AWS IoT Device Shadow

The AWS IoT Device Shadow library enables you to store and retrieve the current state of one or more shadows of every registered device. A device’s shadow is a persistent, virtual representation of your device that you can interact with from AWS IoT Core even if the device is offline. The device state captured in its "shadow" is represented as a JSON document. The device can send commands over MQTT to get, update and delete its latest state as well as receive notifications over MQTT about changes in its state. The device’s shadow(s) are uniquely identified by the name of the corresponding "thing", a representation of a specific device or logical entity on the AWS Cloud. See Managing Devices with AWS IoT for more information on IoT "thing". This library supports named shadows, a feature of the AWS IoT Device Shadow service that allows you to create multiple shadows for a single IoT device. More details about AWS IoT Device Shadow can be found in AWS IoT documentation.

The AWS IoT Device Shadow library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with coreMQTT and coreJSON).
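
For illustration, a minimal shadow state document might look like the following (the property name here is hypothetical; the "state"/"desired"/"reported" structure is the documented shape of a shadow document):

{
  "state": {
    "desired": { "powerOn": true },
    "reported": { "powerOn": false }
  }
}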

See memory requirements for the latest release here.

AWS IoT Jobs

The AWS IoT Jobs library enables you to interact with the AWS IoT Jobs service which notifies one or more connected devices of a pending “Job”. A Job can be used to manage your fleet of devices, update firmware and security certificates on your devices, or perform administrative tasks such as restarting devices and performing diagnostics. For documentation of the service, please see the AWS IoT Developer Guide. Interactions with the Jobs service use the MQTT protocol. This library provides an API to compose and recognize the MQTT topic strings used by the Jobs service.

The AWS IoT Jobs library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with libmosquitto and coreJSON).

See memory requirements for the latest release here.

AWS IoT Device Defender

The AWS IoT Device Defender library enables you to interact with the AWS IoT Device Defender service to continuously monitor security metrics from devices for deviations from what you have defined as appropriate behavior for each device. If something doesn’t look right, AWS IoT Device Defender sends out an alert so you can take action to remediate the issue. More details about Device Defender can be found in AWS IoT Device Defender documentation. This library supports custom metrics, a feature that helps you monitor operational health metrics that are unique to your fleet or use case. For example, you can define a new metric to monitor the memory usage or CPU usage on your devices.

The AWS IoT Device Defender library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with coreMQTT and coreJSON).

See memory requirements for the latest release here.

AWS IoT Over-the-air Update

The AWS IoT Over-the-air Update (OTA) library enables you to manage the notification of a newly available update, download the update, and perform cryptographic verification of the firmware update. Using the OTA library, you can logically separate firmware updates from the application running on your devices. You can also use the library to send other files (e.g. images, certificates) to one or more devices registered with AWS IoT. More details about OTA library can be found in AWS IoT Over-the-air Update documentation.

The AWS IoT Over-the-air Update library has a dependency on coreJSON for parsing of JSON job document and tinyCBOR for decoding encoded data streams, other than the standard C library. It can be used with any MQTT library, HTTP library, and operating system (e.g. Linux, FreeRTOS) (see demos with coreMQTT and coreHTTP over Linux).

See memory requirements for the latest release here.

AWS IoT Fleet Provisioning

The AWS IoT Fleet Provisioning library enables you to interact with the AWS IoT Fleet Provisioning MQTT APIs in order to provision IoT devices without preexisting device certificates. With AWS IoT Fleet Provisioning, devices can securely receive unique device certificates from AWS IoT when they connect for the first time. For an overview of all provisioning options offered by AWS IoT, see device provisioning documentation. For details about Fleet Provisioning, refer to the AWS IoT Fleet Provisioning documentation.

See memory requirements for the latest release here.

AWS SigV4

The AWS SigV4 library enables you to sign HTTP requests with Signature Version 4 Signing Process. Signature Version 4 (SigV4) is the process to add authentication information to HTTP requests to AWS services. For security, most requests to AWS must be signed with an access key. The access key consists of an access key ID and secret access key.

See memory requirements for the latest release here.

backoffAlgorithm

The backoffAlgorithm library is a utility library to calculate the backoff period using an exponential backoff with jitter algorithm for retrying network operations (like a failed network connection with the server). This library uses the "Full Jitter" strategy for the exponential backoff with jitter algorithm. More information about the algorithm can be seen in the Exponential Backoff and Jitter AWS blog.

Exponential backoff with jitter is typically used when retrying a failed connection or network request to a server. It helps mitigate failed network operations caused by network congestion or high server load by spreading retry requests from multiple devices out over time. In addition, in an environment with poor connectivity, a client can get disconnected at any time; a backoff strategy also helps the client conserve battery by not repeatedly attempting reconnections when they are unlikely to succeed.

The backoffAlgorithm library has no dependencies on libraries other than the standard C library.
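As an illustration, here is a minimal retry loop built on the library's public API from backoff_algorithm.h. The retry parameters are arbitrary, rand() stands in for a proper entropy source, and attemptConnection/sleepMs are assumed application-provided helpers:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include "backoff_algorithm.h"

/* Retry parameters; the values here are illustrative. */
#define RETRY_BACKOFF_BASE_MS        500U
#define RETRY_MAX_BACKOFF_DELAY_MS   5000U
#define RETRY_MAX_ATTEMPTS           5U

extern bool attemptConnection( void ); /* your transport connect call (assumption) */
extern void sleepMs( uint16_t ms );    /* platform delay, e.g. a usleep wrapper (assumption) */

bool connectWithRetries( void )
{
    BackoffAlgorithmContext_t retryContext;
    BackoffAlgorithmStatus_t retryStatus = BackoffAlgorithmSuccess;
    uint16_t nextBackoffMs = 0U;
    bool connected = false;

    BackoffAlgorithm_InitializeParams( &retryContext,
                                       RETRY_BACKOFF_BASE_MS,
                                       RETRY_MAX_BACKOFF_DELAY_MS,
                                       RETRY_MAX_ATTEMPTS );

    do
    {
        connected = attemptConnection();

        if( !connected )
        {
            /* rand() is a stand-in; seed it or use a stronger entropy source in production. */
            retryStatus = BackoffAlgorithm_GetNextBackoff( &retryContext,
                                                           ( uint32_t ) rand(),
                                                           &nextBackoffMs );

            if( retryStatus == BackoffAlgorithmSuccess )
            {
                sleepMs( nextBackoffMs );
            }
        }
    } while( !connected && ( retryStatus == BackoffAlgorithmSuccess ) );

    return connected;
}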

See memory requirements for the latest release here.

Sending metrics to AWS IoT

When establishing a connection with AWS IoT, users can optionally report the Operating System, Hardware Platform and MQTT client version information of their device to AWS. This information can help AWS IoT provide faster issue resolution and technical support. If users want to report this information, they can send a specially formatted string (see below) in the username field of the MQTT CONNECT packet.

Format

The format of the username string with metrics is:

<Actual_Username>?SDK=<OS_Name>&Version=<OS_Version>&Platform=<Hardware_Platform>&MQTTLib=<MQTT_Library_name>@<MQTT_Library_version>

Where

  • <Actual_Username> is the actual username used for authentication, if username and password are used for authentication. When username and password based authentication is not used, this is an empty value.
  • <OS_Name> is the Operating System the application is running on (e.g. Ubuntu)
  • <OS_Version> is the version number of the Operating System (e.g. 20.10)
  • <Hardware_Platform> is the Hardware Platform the application is running on (e.g. RaspberryPi)
  • <MQTT_Library_name> is the MQTT Client library being used (e.g. coreMQTT)
  • <MQTT_Library_version> is the version of the MQTT Client library being used (e.g. 1.1.0)

Example

  • Actual_Username = “iotuser”, OS_Name = Ubuntu, OS_Version = 20.10, Hardware_Platform_Name = RaspberryPi, MQTT_Library_Name = coremqtt, MQTT_Library_version = 1.1.0. If username is not used, then “iotuser” can be removed.
/* Username string:
 * iotuser?SDK=Ubuntu&Version=20.10&Platform=RaspberryPi&MQTTLib=coremqtt@1.1.0
 */

#define OS_NAME                   "Ubuntu"
#define OS_VERSION                "20.10"
#define HARDWARE_PLATFORM_NAME    "RaspberryPi"
#define MQTT_LIB                  "coremqtt@1.1.0"

#define USERNAME_STRING           "iotuser?SDK=" OS_NAME "&Version=" OS_VERSION "&Platform=" HARDWARE_PLATFORM_NAME "&MQTTLib=" MQTT_LIB
#define USERNAME_STRING_LENGTH    ( ( uint16_t ) ( sizeof( USERNAME_STRING ) - 1 ) )

MQTTConnectInfo_t connectInfo;
connectInfo.pUserName = USERNAME_STRING;
connectInfo.userNameLength = USERNAME_STRING_LENGTH;
mqttStatus = MQTT_Connect( pMqttContext, &connectInfo, NULL, CONNACK_RECV_TIMEOUT_MS, pSessionPresent );

Versioning

C-SDK releases now follow a date-based versioning scheme with the format YYYYMM.NN, where:

  • Y represents the year.
  • M represents the month.
  • N represents the release order within the designated month (00 being the first release).

For example, a second release in June 2021 would be 202106.01. Although the SDK releases have moved to date-based versioning, each library within the SDK will still retain semantic versioning. In semantic versioning, the version number itself (X.Y.Z) indicates whether the release is a major, minor, or point release. You can use the semantic version of a library to assess the scope and impact of a new release on your application.

Releases and Documentation

All released versions of the C-SDK libraries are available as git tags. For example, the last release of the v3 SDK is available at tag 3.1.2.

202108.00

API documentation of 202108.00 release

This release introduces the refactored AWS IoT Fleet Provisioning library and the new AWS SigV4 library.

Additionally, this release brings minor version updates in the AWS IoT Over-the-Air Update and corePKCS11 libraries.

202103.00

API documentation of 202103.00 release

This release includes a major update to the APIs of the AWS IoT Over-the-air Update library.

Additionally, the AWS IoT Device Shadow library introduces a minor update by adding support for named shadows, a feature of the AWS IoT Device Shadow service that allows you to create multiple shadows for a single IoT device. The AWS IoT Jobs library introduces a minor update by adding macros for the $next job ID and compile-time generation of topic strings. The AWS IoT Device Defender library introduces a minor update that adds macros to the API for the custom metrics feature of the AWS IoT Device Defender service.

corePKCS11 also introduces a patch update by removing the pkcs11configPAL_DESTROY_SUPPORTED config and mbedTLS platform abstraction layer of DestroyObject. Lastly, no code changes are introduced for backoffAlgorithm, coreHTTP, coreMQTT, and coreJSON; however, patch updates are made to improve documentation and CI.

202012.01

API documentation of 202012.01 release

This release includes the AWS IoT Over-the-air Update (Release Candidate), backoffAlgorithm, and PKCS #11 libraries. Additionally, there is a major update to the coreJSON and coreHTTP APIs. All libraries continue to undergo code quality checks (e.g. MISRA-C compliance) and Coverity static analysis. In addition, all libraries except AWS IoT Over-the-air Update and backoffAlgorithm undergo validation of memory safety with the C Bounded Model Checker (CBMC) automated reasoning tool.

202011.00

API documentation of 202011.00 release

This release includes refactored HTTP client, AWS IoT Device Defender, and AWS IoT Jobs libraries. Additionally, there is a major update to the coreJSON API. All libraries continue to undergo code quality checks (e.g. MISRA-C compliance), Coverity static analysis, and validation of memory safety with the C Bounded Model Checker (CBMC) automated reasoning tool.

202009.00

API documentation of 202009.00 release

This release includes refactored MQTT, JSON Parser, and AWS IoT Device Shadow libraries for optimized memory usage and modularity. These libraries are included in the SDK via Git submodules. These libraries have gone through code quality checks, including verification that no function has a GNU Complexity score over 8, and checks against deviations from mandatory rules in the MISRA coding standard. Deviations from the MISRA C:2012 guidelines are documented under MISRA Deviations. These libraries have also undergone static code analysis with Coverity, and validation of memory safety and data structure invariance through the CBMC automated reasoning tool.

If you are upgrading from v3.x API of the C-SDK to the 202009.00 release, please refer to Migration guide from v3.1.2 to 202009.00 and newer releases. If you are using the C-SDK v4_beta_deprecated branch, note that we will continue to maintain this branch for critical bug fixes and security patches but will not add new features to it. See the C-SDK v4_beta_deprecated branch README for additional details.

v3.1.2

Details available here.

Porting Guide for 202009.00 and newer releases

All libraries depend on the ISO C90 standard library and additionally on the stdint.h header for fixed-width integer types, including uint8_t, int8_t, uint16_t, uint32_t, and int32_t, and for constant macros like UINT16_MAX. If your platform does not provide stdint.h, definitions of the mentioned fixed-width integer types will be required for porting any C-SDK library to your platform.
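If your toolchain lacks stdint.h, the port can supply the definitions directly. A sketch assuming an 8-bit char, 16-bit short, and 32-bit int — verify these widths against your compiler before use (coreMQTT's stdint.readme serves the same purpose):

/* Minimal stand-in for stdint.h; the widths below assume an 8-bit char,
 * 16-bit short, and 32-bit int. Verify against your compiler's ABI. */
typedef signed char        int8_t;
typedef unsigned char      uint8_t;
typedef signed short       int16_t;
typedef unsigned short     uint16_t;
typedef signed int         int32_t;
typedef unsigned int       uint32_t;

#define UINT8_MAX     ( 255U )
#define UINT16_MAX    ( 65535U )
#define UINT32_MAX    ( 4294967295UL )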

Porting coreMQTT

Guide for porting coreMQTT library to your platform is available here.

Porting coreHTTP

Guide for porting coreHTTP library is available here.

Porting AWS IoT Device Shadow

Guide for porting AWS IoT Device Shadow library is available here.

Porting AWS IoT Device Defender

Guide for porting AWS IoT Device Defender library is available here.

Porting AWS IoT Over-the-air Update

Guide for porting OTA library to your platform is available here.

Migration guide from v3.1.2 to 202009.00 and newer releases

MQTT Migration

Migration guide for MQTT library is available here.

Shadow Migration

Migration guide for Shadow library is available here.

Jobs Migration

Migration guide for Jobs library is available here.

Branches

main branch

The main branch hosts the continuous development of the AWS IoT Embedded C SDK (C-SDK) libraries. Please be aware that development at the tip of the main branch is continuously in progress and may have bugs. Consider using the tagged releases of the C-SDK for production-ready software.

v4_beta_deprecated branch (formerly named v4_beta)

The v4_beta_deprecated branch contains a beta version of the C-SDK libraries, which is now deprecated. This branch was previously named v4_beta and was renamed to v4_beta_deprecated. The libraries in this branch will not be released. However, critical bugs will be fixed and tested. No new features will be added to this branch.

Getting Started

Cloning

This repository uses Git Submodules to bring in the C-SDK libraries (e.g., MQTT) and third-party dependencies (e.g., mbedtls for the POSIX platform transport layer). Note: If you download the ZIP file provided by the GitHub UI, you will not get the contents of the submodules (the ZIP file is also not a valid git repository). If you download from the 202012.00 Release Page, you will get the entire repository (including the submodules) in the ZIP file, aws-iot-device-sdk-embedded-c-202012.00.zip. To clone the latest commit to the main branch using HTTPS:

git clone --recurse-submodules https://github.com/aws/aws-iot-device-sdk-embedded-C.git

Using SSH:

git clone --recurse-submodules git@github.com:aws/aws-iot-device-sdk-embedded-C.git

If you have downloaded the repo without using the --recurse-submodules argument, you need to run:

git submodule update --init --recursive

When building with CMake, submodules are also recursively cloned automatically. However, -DBUILD_CLONE_SUBMODULES=0 can be passed as a CMake flag to disable this functionality. This is useful when you'd like to build with CMake while using a different commit of a submodule.

Configuring Demos

The libraries in this SDK are not dependent on any operating system. However, the demos for the libraries in this SDK are built and tested on a Linux platform. The demos build with CMake, a cross-platform build tool.

Prerequisites

  • CMake 3.2.0 or any newer version for utilizing the build system of the repository.
  • C90 compiler such as gcc
    • Due to the use of mbedtls in corePKCS11, a C99 compiler is required if building the PKCS11 demos or the CMake install target.
  • Although not a part of the ISO C90 standard, stdint.h is required for fixed-width integer types that include uint8_t, int8_t, uint16_t, uint32_t and int32_t, and constant macros like UINT16_MAX, while stdbool.h is required for boolean parameters in coreMQTT. For compilers that do not provide these header files, coreMQTT provides the files stdint.readme and stdbool.readme, which can be renamed to stdint.h and stdbool.h, respectively, to provide the required type definitions.
  • A supported operating system. The ports provided with this repo are expected to work with all recent versions of the following operating systems, although we cannot guarantee the behavior on all systems.
    • Linux system with POSIX sockets, threads, RT, and timer APIs. (We have tested on Ubuntu 18.04).

Build Dependencies

The following table shows libraries that need to be installed on your system to run certain demos. If a dependency is not installed and cannot be built from source, demos that require that dependency will be excluded from the default all target.

Dependency          Version           Usage
OpenSSL             1.1.0 or later    All TLS demos and tests with the exception of PKCS11
Mosquitto Client    1.4.10 or later   AWS IoT Jobs Mosquitto demo

AWS IoT Account Setup

You need to set up an AWS account and access the AWS IoT console to run the AWS IoT Device Shadow library, AWS IoT Device Defender library, AWS IoT Jobs library, AWS IoT OTA library, and coreHTTP S3 download demos. The AWS account can also be used to run the MQTT mutual auth demo against the AWS IoT broker. Note that running the AWS IoT Device Defender, AWS IoT Jobs, and AWS IoT Device Shadow library demos requires the setup of a Thing resource for the device running the demo. Follow the links to:

The MQTT Mutual Authentication and AWS IoT Shadow demos include example AWS IoT policy documents to run each respective demo with AWS IoT. You may use the MQTT Mutual auth and Shadow example policies by replacing [AWS_REGION] and [AWS_ACCOUNT_ID] with the strings of your region and account identifier. While the IoT Thing name and MQTT client identifier do not need to match for the demos to run, the example policies keep the Thing name and client identifier identical, as per AWS IoT best practices.

It can be very helpful to also have the AWS Command Line Interface tooling installed.

Configuring mutual authentication demos of MQTT and HTTP

You can pass the following configuration settings as command line options in order to run the mutual auth demos. Make sure to run the following command in the root directory of the C-SDK:

## optionally find your-aws-iot-endpoint from the command line
aws iot describe-endpoint --endpoint-type iot:Data-ATS
cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DCLIENT_CERT_PATH="<your-client-certificate-path>" -DCLIENT_PRIVATE_KEY_PATH="<your-client-private-key-path>"

In order to set these configurations manually, edit demo_config.h in demos/mqtt/mqtt_demo_mutual_auth/ and demos/http/http_demo_mutual_auth/ to #define the following:

  • Set AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.<aws-region>.amazonaws.com where <aws-region> can be an AWS region like us-east-2.
    • Optionally, it can also be found with the AWS CLI command aws iot describe-endpoint --endpoint-type iot:Data-ATS.
  • Set CLIENT_CERT_PATH to the path of the client certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
  • Set CLIENT_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the device certificate in AWS IoT Account Setup.

It is possible to configure ROOT_CA_CERT_PATH to any PEM-encoded Root CA Certificate. However, this is optional because CMake will download and set it to AmazonRootCA1.pem when unspecified.

Configuring AWS IoT Device Defender and AWS IoT Device Shadow demos

To build the AWS IoT Device Defender and AWS IoT Device Shadow demos, you can pass the following configuration settings as command line options. Make sure to run the following command in the root directory of the C-SDK:

cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DROOT_CA_CERT_PATH="<your-path-to-amazon-root-ca>" -DCLIENT_CERT_PATH="<your-client-certificate-path>" -DCLIENT_PRIVATE_KEY_PATH="<your-client-private-key-path>" -DTHING_NAME="<your-registered-thing-name>"

An Amazon Root CA certificate can be downloaded from here.

In order to set these configurations manually, edit demo_config.h in the demo folder to #define the following (a consolidated sketch follows this list):

  • Set AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.us-east-2.amazonaws.com.
  • Set ROOT_CA_CERT_PATH to the path of the root CA certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
  • Set CLIENT_CERT_PATH to the path of the client certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
  • Set CLIENT_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the device certificate in AWS IoT Account Setup.
  • Set THING_NAME to the name of the Thing created in AWS IoT Account Setup.
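Taken together, the manually edited demo_config.h would contain defines along these lines — every value below is a placeholder, not a working endpoint or credential:

/* demo_config.h — illustrative values only; substitute your own endpoint,
 * credential paths, and Thing name. */
#define AWS_IOT_ENDPOINT           "ABCDEFG1234567.iot.us-east-2.amazonaws.com"
#define ROOT_CA_CERT_PATH          "certificates/AmazonRootCA1.pem"
#define CLIENT_CERT_PATH           "certificates/client.crt"
#define CLIENT_PRIVATE_KEY_PATH    "certificates/client.key"
#define THING_NAME                 "my-example-thing"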

Configuring the AWS IoT Fleet Provisioning demo

To build the AWS IoT Fleet Provisioning Demo, you can pass the following configuration settings as command line options. Make sure to run the following command in the root directory of the C-SDK:

cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DROOT_CA_CERT_PATH="<your-path-to-amazon-root-ca>" -DCLAIM_CERT_PATH="<your-claim-certificate-path>" -DCLAIM_PRIVATE_KEY_PATH="<your-claim-private-key-path>" -DPROVISIONING_TEMPLATE_NAME="<your-template-name>" -DDEVICE_SERIAL_NUMBER="<your-serial-number>"

An Amazon Root CA certificate can be downloaded from here.

To create a provisioning template and claim credentials, sign into your AWS account and visit here. Make sure to enable the "Use the AWS IoT registry to manage your device fleet" option. Once you have created the template and credentials, modify the claim certificate's policy to match the sample policy.

In order to set these configurations manually, edit demo_config.h in the demo folder to #define the following:

  • Set AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.us-east-2.amazonaws.com.
  • Set ROOT_CA_CERT_PATH to the path of the root CA certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
  • Set CLAIM_CERT_PATH to the path of the claim certificate downloaded when setting up the template and claim credentials.
  • Set CLAIM_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the template and claim credentials.
  • Set PROVISIONING_TEMPLATE_NAME to the name of the provisioning template created.
  • Set DEVICE_SERIAL_NUMBER to an arbitrary string representing a device identifier.

Configuring the S3 demos

You can pass the following configuration settings as command line options in order to run the S3 demos. Make sure to run the following command in the root directory of the C-SDK:

cmake -S . -Bbuild -DS3_PRESIGNED_GET_URL="s3-get-url" -DS3_PRESIGNED_PUT_URL="s3-put-url"

S3_PRESIGNED_PUT_URL is only needed for the S3 upload demo.

In order to set these configurations manually, edit demo_config.h in demos/http/http_demo_s3_download_multithreaded, and demos/http/http_demo_s3_upload to #define the following:

  • Set S3_PRESIGNED_GET_URL to an S3 presigned URL with GET access.
  • Set S3_PRESIGNED_PUT_URL to an S3 presigned URL with PUT access.

You can generate the presigned URLs using demos/http/common/src/presigned_urls_gen.py. More info can be found here.

Configure S3 Download HTTP Demo using SigV4 Library:

Refer to demos/http/http_demo_s3_download/README.md for the steps needed to configure and run the S3 Download HTTP demo using the SigV4 library, which generates the authorization HTTP header needed to authenticate the HTTP requests sent to S3.

Setup for AWS IoT Jobs demo

  1. The demo requires the Linux platform to contain curl and libmosquitto. On a Debian platform, these dependencies can be installed with:
    apt install curl libmosquitto-dev

If the platform does not contain the libmosquitto library, the demo will build the library from source.

libmosquitto 1.4.10 or any later version of the first major release is required to run this demo.

  2. A job that specifies the URL to download for the demo needs to be created on the AWS account for the Thing resource that will be used by the demo.
    The job can be created directly from the AWS IoT console or using the aws cli tool.

The following creates a job that specifies a Linux Kernel link for downloading.

 aws iot create-job \
        --job-id 'job_1' \
        --targets arn:aws:iot:us-west-2:<account-id>:thing/<thing-name> \
        --document '{"url":"https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.8.5.tar.xz"}'

Prerequisites for the AWS Over-The-Air Update (OTA) demos

  1. To perform a successful OTA update, you need to complete the prerequisites mentioned here.
  2. A code signing certificate is required to authenticate the update. A code signing certificate based on the SHA-256 ECDSA algorithm will work with the current demos. An example of how to generate this kind of certificate can be found here.

Scheduling an OTA Update Job

After you build and run the initial executable, you will have to build a second executable and schedule an OTA update job with that image.

  1. Increase the version of the application by setting the macro APP_VERSION_BUILD in demos/ota/ota_demo_core_[mqtt/http]/demo_config.h to a different version than what is currently running (see the sketch after this list).
  2. Rebuild the application using the build steps below into a different directory, say build-dir-2.
  3. Rename the demo executable to reflect the change, e.g. mv ota_demo_core_mqtt ota_demo_core_mqtt2
  4. Create an OTA job:
    1. Go to the AWS IoT Core console.
    2. Manage → Jobs → Create → Create a FreeRTOS OTA update job → Select the corresponding name for your device from the thing list.
    3. Sign a new firmware → Create a new profile → Select any SHA-ECDSA signing platform → Upload the code signing certificate (from prerequisites) and provide its path on the device.
    4. Select the image → Select the bucket you created during the prerequisite steps → Upload the binary build-dir-2/bin/ota_demo2.
    5. The path on device should be the absolute path to place the executable and the binary name: e.g. /home/ubuntu/aws-iot-device-sdk-embedded-C-staging/build-dir/bin/ota_demo_core_mqtt2.
    6. Select the IAM role created during the prerequisite steps.
    7. Create the Job.
  5. Run the initial executable again with the following command: sudo ./ota_demo_core_mqtt or sudo ./ota_demo_core_http.
  6. After the initial executable has finished running, go to the directory where the downloaded firmware image resides, which is the path name used when creating the OTA job.
  7. Change the permissions of the downloaded firmware to make it executable, as it may be downloaded with read (user default) permissions only: chmod 775 ota_demo_core_mqtt2
  8. Run the downloaded firmware image with the following command: sudo ./ota_demo_core_mqtt2
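For step 1 above, the version bump in demo_config.h looks like the following; APP_VERSION_BUILD is the macro named in that step, and the value shown is only an example:

/* demo_config.h — increase the build version for the update image. */
#define APP_VERSION_BUILD    2    /* e.g. previously 1 in the running image */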

Building and Running Demos

Before building the demos, ensure you have installed the prerequisite software. On Ubuntu 18.04 and 20.04, gcc, cmake, and OpenSSL can be installed with:

sudo apt install build-essential cmake libssl-dev

Build a single demo

  • Go to the root directory of the C-SDK.
  • Run cmake to generate the Makefiles: cmake -S . -Bbuild && cd build
  • Choose a demo from the list below or alternatively, run make help | grep demo:
defender_demo
http_demo_basic_tls
http_demo_mutual_auth
http_demo_plaintext
http_demo_s3_download
http_demo_s3_download_multithreaded
http_demo_s3_upload
jobs_demo_mosquitto
mqtt_demo_basic_tls
mqtt_demo_mutual_auth
mqtt_demo_plaintext
mqtt_demo_serializer
mqtt_demo_subscription_manager
ota_demo_core_http
ota_demo_core_mqtt
pkcs11_demo_management_and_rng
pkcs11_demo_mechanisms_and_digests
pkcs11_demo_objects
pkcs11_demo_sign_and_verify
shadow_demo_main
  • Replace demo_name with your desired demo then build it: make demo_name
  • Go to the build/bin directory and run any demo executables from there.

Build all configured demos

  • Go to the root directory of the C-SDK.
  • Run cmake to generate the Makefiles: cmake -S . -Bbuild && cd build
  • Run this command to build all configured demos: make
  • Go to the build/bin directory and run any demo executables from there.

Running corePKCS11 demos

The corePKCS11 demos do not require any AWS IoT resources setup, and are standalone. The demos build upon each other to introduce concepts in PKCS #11 sequentially. Below is the recommended order.

  1. pkcs11_demo_management_and_rng
  2. pkcs11_demo_mechanisms_and_digests
  3. pkcs11_demo_objects
  4. pkcs11_demo_sign_and_verify
    1. Please note that this demo requires the private and public key generated from pkcs11_demo_objects to be in the directory the demo is executed from.

Alternative option: Docker containers for running demos locally

Install Docker:

curl -fsSL https://get.docker.com -o get-docker.sh

sh get-docker.sh

Installing Mosquitto to run MQTT demos locally

The following instructions have been tested on an Ubuntu 18.04 environment with Docker and OpenSSL installed.

Download the official Docker image for Mosquitto 1.6.14. This version is deliberately chosen so that the Docker container can load certificates from the host system. Any version after 1.6.14 will drop privileges as soon as the configuration file has been read (before TLS certificates are loaded).

docker pull eclipse-mosquitto:1.6.14

If a Mosquitto broker with TLS communication needs to be run, ignore this step and proceed to the next step. A Mosquitto broker with plain text communication can be run by executing the command below.

docker run -it -p 1883:1883 --name mosquitto-plain-text eclipse-mosquitto:1.6.14

Set BROKER_ENDPOINT defined in demos/mqtt/mqtt_demo_plaintext/demo_config.h to localhost.

Ignore the remaining steps unless a Mosquitto broker with TLS communication also needs to be run.

For TLS communication with the Mosquitto broker, server and CA credentials need to be created. Use the following OpenSSL commands to generate the credentials for the Mosquitto server.

# Generate CA key and certificate. Provide the Subject field information as appropriate for the CA certificate.
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout ca.key -out ca.crt
# Generate server key and certificate. Provide the Subject field information as appropriate for the server certificate.
# Make sure the Common Name (CN) field is different from the root CA certificate.
openssl req -nodes -sha256 -new -keyout server.key -out server.csr
# Sign with the CA certificate.
openssl x509 -req -sha256 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365

Note: Make sure to use different Common Name (CN) details in the CA and server certificates; otherwise, the SSL handshake fails when both certificates have exactly the same Common Name (CN).

Create a mosquitto.conf file that uses port 8883 (for TLS communication) and provides the paths to the generated credentials:

port 8883

cafile /mosquitto/config/ca.crt
certfile /mosquitto/config/server.crt
keyfile /mosquitto/config/server.key

# Use this option for TLS mutual authentication (where the client will provide a CA-signed certificate)
#require_certificate true
tls_version tlsv1.2
#use_identity_as_username true

Run the Docker container from the local directory containing the generated credentials and the mosquitto.conf file.

docker run -it -p 8883:8883 -v $(pwd):/mosquitto/config/ --name mosquitto-basic-tls eclipse-mosquitto:1.6.14

Update demos/mqtt/mqtt_demo_basic_tls/demo_config.h as follows:

  • Set BROKER_ENDPOINT to localhost.
  • Set ROOT_CA_CERT_PATH to the absolute path of the CA certificate created above for the local Mosquitto server.

Installing httpbin to run HTTP demos locally

Run httpbin through port 80:

docker pull kennethreitz/httpbin
docker run -p 80:80 kennethreitz/httpbin

SERVER_HOST defined in demos/http/http_demo_plaintext/demo_config.h can now be set to localhost.

To run http_demo_basic_tls, download ngrok in order to create an HTTPS tunnel to the httpbin server currently hosted on port 80:

./ngrok http 80 # May have to use ./ngrok.exe depending on OS or filename of the executable

ngrok will provide an https link that can be substituted in demos/http/http_demo_basic_tls/demo_config.h and has a format of https://ABCDEFG12345.ngrok.io.

Set SERVER_HOST in demos/http/http_demo_basic_tls/demo_config.h to the https link provided by ngrok, without https:// preceding it.

You must also download the Root CA certificate provided by the ngrok https link and set ROOT_CA_CERT_PATH in demos/http/http_demo_basic_tls/demo_config.h to the file path of the downloaded certificate.

Installation

The C-SDK libraries and platform abstractions can be installed to a file system through CMake. To do so, run the following command in the root directory of the C-SDK. Note that installation is not required to run any of the demos.

cmake -S . -Bbuild -DBUILD_DEMOS=0 -DBUILD_TESTS=0
cd build
sudo make install

Note that because make install will automatically build the all target, it may be useful to disable building demos and tests with -DBUILD_DEMOS=0 -DBUILD_TESTS=0 unless they have already been configured. Super-user permissions may be needed if installing to a system include or system library path.

To install only a subset of all libraries, pass -DINSTALL_LIBS to install only the libraries you need. By default, all libraries will be installed, but you may exclude any library that you don't need from this list:

-DINSTALL_LIBS="DEFENDER;SHADOW;JOBS;OTA;OTA_HTTP;OTA_MQTT;BACKOFF_ALGORITHM;HTTP;JSON;MQTT;PKCS"

By default, the install path will be in the project directory of the SDK. You can also set -DINSTALL_TO_SYSTEM=1 to install to the system path for headers and libraries in your OS (e.g. /usr/local/include & /usr/local/lib for Linux).

When you run make install, the location of each installed library is printed first, followed by the location of all installed headers:

-- Installing: /usr/local/lib/libaws_iot_defender.so
-- Installing: /usr/local/lib/libaws_iot_shadow.so
...
-- Installing: /usr/local/include/aws/defender.h
-- Installing: /usr/local/include/aws/defender_config_defaults.h
-- Installing: /usr/local/include/aws/shadow.h
-- Installing: /usr/local/include/aws/shadow_config_defaults.h

You may also set an installation path of your choice by passing the following flags through CMake. Make sure to run the following command in the root directory of the C-SDK:

cmake -S . -Bbuild -DBUILD_DEMOS=0 -DBUILD_TESTS=0 \
-DCSDK_HEADER_INSTALL_PATH="/header/path" -DCSDK_LIB_INSTALL_PATH="/lib/path"
cd build
sudo make install

POSIX platform abstractions are used together with the C-SDK libraries in the demos. By default, these abstractions are also installed but can be excluded by passing the flag: -DINSTALL_PLATFORM_ABSTRACTIONS=0.

Lastly, a custom config path for any specific library can also be specified through the following CMake flags, allowing libraries to be compiled with a config of your choice:

-DDEFENDER_CUSTOM_CONFIG_DIR="defender-config-directory"
-DSHADOW_CUSTOM_CONFIG_DIR="shadow-config-directory"
-DJOBS_CUSTOM_CONFIG_DIR="jobs-config-directory"
-DOTA_CUSTOM_CONFIG_DIR="ota-config-directory"
-DHTTP_CUSTOM_CONFIG_DIR="http-config-directory"
-DJSON_CUSTOM_CONFIG_DIR="json-config-directory"
-DMQTT_CUSTOM_CONFIG_DIR="mqtt-config-directory"
-DPKCS_CUSTOM_CONFIG_DIR="pkcs-config-directory"

Note that the file name of the header should not be included in the directory.

Generating Documentation

Note: For pre-generated documentation, please visit the Releases and Documentation section.

The Doxygen references were created using Doxygen version 1.9.2. To generate the Doxygen pages, use the provided Python script at tools/doxygen/generate_docs.py. Please ensure that each of the library submodules under libraries/standard/ and libraries/aws/ are cloned before using this script.

cd <CSDK_ROOT>
git submodule update --init --recursive --checkout
python3 tools/doxygen/generate_docs.py

The generated documentation landing page is located at docs/doxygen/output/html/index.html.


Author: aws
Source code: https://github.com/aws/aws-iot-device-sdk-embedded-C
License: MIT license

#aws 

MEAN Stack Tutorial MongoDB ExpressJS AngularJS NodeJS

We are going to build a full-stack Todo App using the MEAN stack (MongoDB, ExpressJS, AngularJS, and NodeJS). This is the last part of a three-post tutorial series.

MEAN Stack tutorial series:

AngularJS tutorial for beginners (Part I)
Creating RESTful APIs with NodeJS and MongoDB Tutorial (Part II)
MEAN Stack Tutorial: MongoDB, ExpressJS, AngularJS and NodeJS (Part III) 👈 you are here
Before completing the app, let’s cover some background about this stack. If you’d rather jump to the hands-on part, click here to get started.


Tyrique Littel

1597723200

FreeBSD s3cmd failed [SSL CERTIFICATE_VERIFY_FAILED]

When I install the s3cmd package on my FreeBSD system and try to use the s3cmd command, I get the following error:

ERROR: Test failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (ssl.c:1091)

How do I fix this problem on FreeBSD Unix system?

Amazon Simple Storage Service (S3) is object storage accessed through a web service interface or API. You can store all sorts of files. FreeBSD is a free and open-source operating system. s3cmd is a command-line utility for Unix-like systems to upload and download files to the AWS S3 service from the command line.

ERROR: Test failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed error and solution

This error indicates that you don’t have the required packages correctly installed, especially SSL certificates. Let us see how to fix this problem and install s3cmd correctly on FreeBSD to get rid of the error.

How to install s3cmd on FreeBSD

Search for s3cmd package:

$ pkg search s3cmd

Execute the following command, making sure to install the Python 3.x package, as Python 2 will be removed after 2020:

$ sudo pkg install py37-s3cmd-2.1.0

Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The following 8 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
	libffi: 3.2.1_3
	py37-dateutil: 2.8.1
	py37-magic: 5.38
	py37-s3cmd: 2.1.0
	py37-setuptools: 44.0.0
	py37-six: 1.14.0
	python37: 3.7.8
	readline: 8.0.4

Number of packages to be installed: 8

The process will require 118 MiB more space.

Proceed with this action? [y/N]: y
[rsnapshot] [1/8] Installing readline-8.0.4...
[rsnapshot] [1/8] Extracting readline-8.0.4: 100%
[rsnapshot] [2/8] Installing libffi-3.2.1_3...
....
..
[rsnapshot] [8/8] Extracting py37-s3cmd-2.1.0: 100%
=====
Message from python37-3.7.8:

--
Note that some standard Python modules are provided as separate ports
as they require additional dependencies. They are available as:

py37-gdbm       databases/py-gdbm@py37
py37-sqlite3    databases/py-sqlite3@py37
py37-tkinter    x11-toolkits/py-tkinter@py37
