Here, in our example, we have an array with several products that we will filter according to a user preference, in this case, price.
We need to iterate over this set to find all the products that match our condition. There are four products within this array, and each one is an object.
We need to work with the objects' properties too; if you don't know how to do that, just check this post and this one too.
So we are going to filter for any item with a price over $200 and include it in a new array. (Note that the prices are stored as strings; the comparison below still works because JavaScript coerces a string to a number when comparing it against a number.)
const products = [
  { productName: "Television", price: "1000", shipping: "35" },
  { productName: "Pendrive", price: "200", shipping: "7" },
  { productName: "Camera", price: "450", shipping: "15" },
  { productName: "Mouse", price: "120", shipping: "5" }
];
const filtered = products.filter((item) => {
  return item.price > 200;
});
console.log(filtered);
/*
[
  { productName: "Television", price: "1000", shipping: "35" },
  { productName: "Camera", price: "450", shipping: "15" }
]
*/
Just like we did above, to filter an array of objects based on multiple attributes you can still use the JavaScript .filter() method; the only difference is in the callback function, which needs a small adjustment.
const products = [
  { productName: "Television", price: "1000", shipping: "35" },
  { productName: "Pendrive", price: "200", shipping: "7" },
  { productName: "Camera", price: "450", shipping: "15" },
  { productName: "Mouse", price: "120", shipping: "5" }
];
const filtered = products.filter((item) => {
  return (item.price > 200 && item.shipping < 20);
});
console.log(filtered);
/*
[
  { productName: "Camera", price: "450", shipping: "15" }
]
*/
Port of deeplearning4j to Clojure
Contact info
If you have any questions,
NOT YET RELEASED TO CLOJARS
If using Maven add the following repository definition to your pom.xml:
<repository>
<id>clojars.org</id>
<url>http://clojars.org/repo</url>
</repository>
With Leiningen:
n/a
With Maven:
n/a
<dependency>
<groupId>_</groupId>
<artifactId>_</artifactId>
<version>_</version>
</dependency>
All functions for creating dl4j objects return code by default
API functions return code when all args are provided as code
API functions return the value of calling the wrapped method when args are provided as a mixture of objects and code, or as just objects
The tests are there to help clarify behavior; if you are unsure of how to use a fn, search the tests
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]))
;; as code (the default)
(l/dense-layer-builder
:activation-fn :relu
:learning-rate 0.006
:weight-init :xavier
:layer-name "example layer"
:n-in 10
:n-out 1)
;; =>
(doto
(org.deeplearning4j.nn.conf.layers.DenseLayer$Builder.)
(.nOut 1)
(.activation (dl4clj.constants/value-of {:activation-fn :relu}))
(.weightInit (dl4clj.constants/value-of {:weight-init :xavier}))
(.nIn 10)
(.name "example layer")
(.learningRate 0.006))
;; as an object
(l/dense-layer-builder
:activation-fn :relu
:learning-rate 0.006
:weight-init :xavier
:layer-name "example layer"
:n-in 10
:n-out 1
:as-code? false)
;; =>
#object[org.deeplearning4j.nn.conf.layers.DenseLayer 0x69d7d160 "DenseLayer(super=FeedForwardLayer(super=Layer(layerName=example layer, activationFn=relu, weightInit=XAVIER, biasInit=NaN, dist=null, learningRate=0.006, biasLearningRate=NaN, learningRateSchedule=null, momentum=NaN, momentumSchedule=null, l1=NaN, l2=NaN, l1Bias=NaN, l2Bias=NaN, dropOut=NaN, updater=null, rho=NaN, epsilon=NaN, rmsDecay=NaN, adamMeanDecay=NaN, adamVarDecay=NaN, gradientNormalization=null, gradientNormalizationThreshold=NaN), nIn=10, nOut=1))"]
Loading data from a file (here it's a CSV)
(ns my.ns
(:require [dl4clj.datasets.input-splits :as s]
[dl4clj.datasets.record-readers :as rr]
[dl4clj.datasets.api.record-readers :refer :all]
[dl4clj.datasets.iterators :as ds-iter]
[dl4clj.datasets.api.iterators :refer :all]
[dl4clj.helpers :refer [data-from-iter]]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; file splits (convert the data to records)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def poker-path "resources/poker-hand-training.csv")
;; this is not a complete dataset; it is just here to serve as an example
(def file-split (s/new-filesplit :path poker-path))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers, (read the records created by the file split)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def csv-rr (initialize-rr! :rr (rr/new-csv-record-reader :skip-n-lines 0 :delimiter ",")
:input-split file-split))
;; let's look at some data
(println (next-record! :rr csv-rr :as-code? false))
;; => #object[java.util.ArrayList 0x2473e02d [1, 10, 1, 11, 1, 13, 1, 12, 1, 1, 9]]
;; this is our first line from the csv
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers dataset iterators (turn our writables into a dataset)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
:record-reader csv-rr
:batch-size 1
:label-idx 10
:n-possible-labels 10))
;; we use our record reader created above
;; we want to see one example per dataset obj returned (:batch-size = 1)
;; we know our label is at the last index, so :label-idx = 10
;; there are 10 possible types of poker hands so :n-possible-labels = 10
;; you can also set :label-idx to -1 to use the last index no matter the size of the seq
(def other-rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
:record-reader csv-rr
:batch-size 1
:label-idx -1
:n-possible-labels 10))
(str (next-example! :iter rr-ds-iter :as-code? false))
;; =>
;;===========INPUT===================
;;[1.00, 10.00, 1.00, 11.00, 1.00, 13.00, 1.00, 12.00, 1.00, 1.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00]
;; and to show that :label-idx = -1 gives us the same output
(= (next-example! :iter rr-ds-iter :as-code? false)
(next-example! :iter other-rr-ds-iter :as-code? false)) ;; => true
(ns my.ns
(:require [nd4clj.linalg.factory.nd4j :refer [vec->indarray matrix->indarray
indarray-of-zeros indarray-of-ones
indarray-of-rand vec-or-matrix->indarray]]
[dl4clj.datasets.new-datasets :refer [new-ds]]
[dl4clj.datasets.api.datasets :refer [as-list]]
[dl4clj.datasets.iterators :refer [new-existing-dataset-iterator]]
[dl4clj.datasets.api.iterators :refer :all]
[dl4clj.datasets.pre-processors :as ds-pp]
[dl4clj.datasets.api.pre-processors :refer :all]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; INDArray creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;TODO: consider defaulting to code
;; can create from a vector
(vec->indarray [1 2 3 4])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x269df212 [1.00, 2.00, 3.00, 4.00]]
;; or from a matrix
(matrix->indarray [[1 2 3 4] [2 4 6 8]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x20aa7fe1
;; [[1.00, 2.00, 3.00, 4.00], [2.00, 4.00, 6.00, 8.00]]]
;; will fill in sparseness with zeros
(matrix->indarray [[1 2 3 4] [2 4 6 8] [10 12]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x8b7796c
;;[[1.00, 2.00, 3.00, 4.00],
;; [2.00, 4.00, 6.00, 8.00],
;; [10.00, 12.00, 0.00, 0.00]]]
;; can create an indarray of all zeros with specified shape
;; defaults to :rows = 1 :columns = 1
(indarray-of-zeros :rows 3 :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x6f586a7e
;;[[0.00, 0.00],
;; [0.00, 0.00],
;; [0.00, 0.00]]]
(indarray-of-zeros) ;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xe59ffec 0.00]
;; and if only one is supplied, you get a vector of the specified length
(indarray-of-zeros :rows 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2899d974 [0.00, 0.00]]
(indarray-of-zeros :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xa5b9782 [0.00, 0.00]]
;; same considerations/defaults for indarray-of-ones and indarray-of-rand
(indarray-of-ones :rows 2 :columns 3)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x54f08662 [[1.00, 1.00, 1.00], [1.00, 1.00, 1.00]]]
(indarray-of-rand :rows 2 :columns 3)
;; all values are greater than 0 but less than 1
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2f20293b [[0.85, 0.86, 0.13], [0.94, 0.04, 0.36]]]
;; vec-or-matrix->indarray is built into all functions which require INDArrays
;; so that you can use clojure data structures
;; but you still have the option of passing existing INDArrays
(def example-array (vec-or-matrix->indarray [1 2 3 4]))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x5c44c71f [1.00, 2.00, 3.00, 4.00]]
(vec-or-matrix->indarray example-array)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x607b03b0 [1.00, 2.00, 3.00, 4.00]]
(vec-or-matrix->indarray (indarray-of-rand :rows 2))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x49143b08 [0.76, 0.92]]
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def ds-with-single-example (new-ds :input [1 2 3 4]
:output [0.0 1.0 0.0]))
(as-list :ds ds-with-single-example :as-code? false)
;; =>
;; #object[java.util.ArrayList 0x5d703d12
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00]]]
(def ds-with-multiple-examples (new-ds
:input [[1 2 3 4] [2 4 6 8]]
:output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))
(as-list :ds ds-with-multiple-examples :as-code? false)
;; =>
;;#object[java.util.ArrayList 0x29c7a9e2
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00],
;;===========INPUT===================
;;[2.00, 4.00, 6.00, 8.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 1.00]]]
;; we can create a dataset iterator from the code which creates datasets
;; and set the labels for our outputs (optional)
(def ds-with-multiple-examples
(new-ds
:input [[1 2 3 4] [2 4 6 8]]
:output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))
;; iterator
(def training-rr-ds-iter
(new-existing-dataset-iterator
:dataset ds-with-multiple-examples
:labels ["foo" "baz" "foobaz"]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set normalization
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; this gathers statistics on the dataset and normalizes the data
;; and applies the transformation to all dataset objects in the iterator
(def train-iter-normalized
(c/normalize-iter! :iter training-rr-ds-iter
:normalizer (ds-pp/new-standardize-normalization-ds-preprocessor)
:as-code? false))
;; above returns the normalized iterator
;; to get the fitted normalizer
(def the-normalizer
(get-pre-processor train-iter-normalized))
Creating a neural network configuration with single and multiple layers
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.nn.conf.distributions :as dist]
[dl4clj.nn.conf.input-pre-processor :as pp]
[dl4clj.nn.conf.step-fns :as s-fn]))
;; nn/builder has 3 types of args
;; 1) args which set network configuration params
;; 2) args which set default values for layers
;; 3) args which set multi layer network configuration params
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; single layer nn configuration
;; here we are setting network configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(nn/builder :optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
:step-fn :default-step-fn
:layers {:dense-layer {:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:learning-rate 0.006
:weight-init :xavier
:layer-name "single layer model example"
:n-in 10
:n-out 20}})
;; there are several options within a nn-conf map which can be configuration maps
;; or calls to fns
;; It doesn't matter which option you choose and you don't have to stay consistent
;; the list of params which can be passed as config maps or fn calls will
;; be enumerated at a later date
(nn/builder :optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
:step-fn (s-fn/new-default-step-fn)
:build? true
;; don't need to specify layer order, there's only one
:layers (l/dense-layer-builder
:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:dist (dist/new-normal-distribution :mean 0 :std 1)
:learning-rate 0.006
:weight-init :xavier
:layer-name "single layer model example"
:n-in 10
:n-out 20))
;; these configurations are the same
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; multi-layer configuration
;; here we are also setting layer defaults
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; defaults will apply to layers which do not specify those values in their config
(nn/builder
:optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
:default-activation-fn :sigmoid
:default-weight-init :uniform
;; we need to specify the layer order
:layers {0 (l/activation-layer-builder
:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:learning-rate 0.006
:weight-init :xavier
:layer-name "example first layer"
:n-in 10
:n-out 20)
1 {:output-layer {:n-in 20
:n-out 2
:loss-fn :mse
:layer-name "example output layer"}}})
;; specifying multi-layer config params
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
;; layer defaults
:default-activation-fn :sigmoid
:default-weight-init :uniform
;; the layers
:layers {0 (l/activation-layer-builder
:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:learning-rate 0.006
:weight-init :xavier
:layer-name "example first layer"
:n-in 10
:n-out 20)
1 {:output-layer {:n-in 20
:n-out 2
:loss-fn :mse
:layer-name "example output layer"}}}
;; multi layer network args
:backprop? true
:backprop-type :standard
:pretrain? false
:input-pre-processors {0 (pp/new-zero-mean-pre-pre-processor)
1 {:unit-variance-processor {}}})
Multi Layer models
(ns my.ns
(:require [dl4clj.datasets.iterators :as iter]
[dl4clj.datasets.input-splits :as split]
[dl4clj.datasets.record-readers :as rr]
[dl4clj.optimize.listeners :as listener]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.nn.multilayer.multi-layer-network :as mln]
[dl4clj.nn.api.model :refer [init! set-listeners!]]
[dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
[dl4clj.datasets.api.record-readers :refer [initialize-rr!]]
[dl4clj.eval.api.eval :refer [get-stats get-accuracy]]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; nn-conf -> multi-layer-network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def nn-conf
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123 :iterations 1 :regularization? true
;; setting layer defaults
:default-activation-fn :relu :default-l2 7.5e-6
:default-weight-init :xavier :default-learning-rate 0.0015
:default-updater :nesterovs :default-momentum 0.98
;; setting layer configuration
:layers {0 {:dense-layer
{:layer-name "example first layer"
:n-in 784 :n-out 500}}
1 {:dense-layer
{:layer-name "example second layer"
:n-in 500 :n-out 100}}
2 {:output-layer
{:n-in 100 :n-out 10
;; layer specific params
:loss-fn :negativeloglikelihood
:activation-fn :softmax
:layer-name "example output layer"}}}
;; multi layer args
:backprop? true
:pretrain? false))
(def multi-layer-network (c/model-from-conf nn-conf))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; local cpu training with dl4j pre-built iterators
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; let's use the pre-built MNIST data set iterator
(def train-mnist-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? true
:seed 123))
(def test-mnist-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? false
:seed 123))
;; and let's set a listener so we can know how training is going
(def score-listener (listener/new-score-iteration-listener :print-every-n 5))
;; and attach it to our model
;; TODO: listeners are broken, look into log4j warning
(def mln-with-listener (set-listeners! :model multi-layer-network
:listeners [score-listener]))
(def trained-mln (mln/train-mln-with-ds-iter! :mln mln-with-listener
:iter train-mnist-iter
:n-epochs 15
:as-code? false))
;; training happens because :as-code? = false
;; if it was true, we would still just have a data structure
;; we now have a trained model that has seen the training dataset 15 times
;; time to evaluate our model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;Create an evaluation object
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def eval-obj (evaluate-classification :mln trained-mln
:iter test-mnist-iter))
;; always remember that these objects are stateful; don't use the same eval-obj
;; to eval two different networks
;; we trained the model on a training dataset. We evaluate on a test set
(println (get-stats :evaler eval-obj))
;; this will print the stats to standard out for each feature/label pair
;;Examples labeled as 0 classified by model as 0: 968 times
;;Examples labeled as 0 classified by model as 1: 1 times
;;Examples labeled as 0 classified by model as 2: 1 times
;;Examples labeled as 0 classified by model as 3: 1 times
;;Examples labeled as 0 classified by model as 5: 1 times
;;Examples labeled as 0 classified by model as 6: 3 times
;;Examples labeled as 0 classified by model as 7: 1 times
;;Examples labeled as 0 classified by model as 8: 2 times
;;Examples labeled as 0 classified by model as 9: 2 times
;;Examples labeled as 1 classified by model as 1: 1126 times
;;Examples labeled as 1 classified by model as 2: 2 times
;;Examples labeled as 1 classified by model as 3: 1 times
;;Examples labeled as 1 classified by model as 5: 1 times
;;Examples labeled as 1 classified by model as 6: 2 times
;;Examples labeled as 1 classified by model as 7: 1 times
;;Examples labeled as 1 classified by model as 8: 2 times
;;Examples labeled as 2 classified by model as 0: 3 times
;;Examples labeled as 2 classified by model as 1: 2 times
;;Examples labeled as 2 classified by model as 2: 1006 times
;;Examples labeled as 2 classified by model as 3: 2 times
;;Examples labeled as 2 classified by model as 4: 3 times
;;Examples labeled as 2 classified by model as 6: 3 times
;;Examples labeled as 2 classified by model as 7: 7 times
;;Examples labeled as 2 classified by model as 8: 6 times
;;Examples labeled as 3 classified by model as 2: 4 times
;;Examples labeled as 3 classified by model as 3: 990 times
;;Examples labeled as 3 classified by model as 5: 3 times
;;Examples labeled as 3 classified by model as 7: 3 times
;;Examples labeled as 3 classified by model as 8: 3 times
;;Examples labeled as 3 classified by model as 9: 7 times
;;Examples labeled as 4 classified by model as 2: 2 times
;;Examples labeled as 4 classified by model as 3: 1 times
;;Examples labeled as 4 classified by model as 4: 967 times
;;Examples labeled as 4 classified by model as 6: 4 times
;;Examples labeled as 4 classified by model as 7: 1 times
;;Examples labeled as 4 classified by model as 9: 7 times
;;Examples labeled as 5 classified by model as 0: 2 times
;;Examples labeled as 5 classified by model as 3: 6 times
;;Examples labeled as 5 classified by model as 4: 1 times
;;Examples labeled as 5 classified by model as 5: 874 times
;;Examples labeled as 5 classified by model as 6: 3 times
;;Examples labeled as 5 classified by model as 7: 1 times
;;Examples labeled as 5 classified by model as 8: 3 times
;;Examples labeled as 5 classified by model as 9: 2 times
;;Examples labeled as 6 classified by model as 0: 4 times
;;Examples labeled as 6 classified by model as 1: 3 times
;;Examples labeled as 6 classified by model as 3: 2 times
;;Examples labeled as 6 classified by model as 4: 4 times
;;Examples labeled as 6 classified by model as 5: 4 times
;;Examples labeled as 6 classified by model as 6: 939 times
;;Examples labeled as 6 classified by model as 7: 1 times
;;Examples labeled as 6 classified by model as 8: 1 times
;;Examples labeled as 7 classified by model as 1: 7 times
;;Examples labeled as 7 classified by model as 2: 4 times
;;Examples labeled as 7 classified by model as 3: 3 times
;;Examples labeled as 7 classified by model as 7: 1005 times
;;Examples labeled as 7 classified by model as 8: 2 times
;;Examples labeled as 7 classified by model as 9: 7 times
;;Examples labeled as 8 classified by model as 0: 3 times
;;Examples labeled as 8 classified by model as 2: 3 times
;;Examples labeled as 8 classified by model as 3: 2 times
;;Examples labeled as 8 classified by model as 4: 4 times
;;Examples labeled as 8 classified by model as 5: 3 times
;;Examples labeled as 8 classified by model as 6: 2 times
;;Examples labeled as 8 classified by model as 7: 4 times
;;Examples labeled as 8 classified by model as 8: 947 times
;;Examples labeled as 8 classified by model as 9: 6 times
;;Examples labeled as 9 classified by model as 0: 2 times
;;Examples labeled as 9 classified by model as 1: 2 times
;;Examples labeled as 9 classified by model as 3: 4 times
;;Examples labeled as 9 classified by model as 4: 8 times
;;Examples labeled as 9 classified by model as 6: 1 times
;;Examples labeled as 9 classified by model as 7: 4 times
;;Examples labeled as 9 classified by model as 8: 2 times
;;Examples labeled as 9 classified by model as 9: 986 times
;;==========================Scores========================================
;; Accuracy: 0.9808
;; Precision: 0.9808
;; Recall: 0.9807
;; F1 Score: 0.9807
;;========================================================================
;; can get the stats that are printed via fns in the evaluation namespace
;; after running eval-model-whole-ds
(get-accuracy :evaler eval-obj) ;; => 0.9808
Early Stopping (controlling training)
it is recommended you start here when designing models
using dl4clj.core
(ns my.ns
(:require [dl4clj.earlystopping.termination-conditions :refer :all]
[dl4clj.earlystopping.model-saver :refer [new-in-memory-saver]]
[dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
[dl4clj.eval.api.eval :refer [get-stats]]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.datasets.iterators :as iter]
[dl4clj.core :as c]))
(def nn-conf
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:regularization? true
;; setting layer defaults
:default-activation-fn :relu
:default-l2 7.5e-6
:default-weight-init :xavier
:default-learning-rate 0.0015
:default-updater :nesterovs
:default-momentum 0.98
;; setting layer configuration
:layers {0 {:dense-layer
{:layer-name "example first layer"
:n-in 784 :n-out 500}}
1 {:dense-layer
{:layer-name "example second layer"
:n-in 500 :n-out 100}}
2 {:output-layer
{:n-in 100 :n-out 10
;; layer specific params
:loss-fn :negativeloglikelihood
:activation-fn :softmax
:layer-name "example output layer"}}}
;; multi layer args
:backprop? true
:pretrain? false))
(def train-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? true
:seed 123))
(def test-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? false
:seed 123))
(def invalid-score-condition (new-invalid-score-iteration-termination-condition))
(def max-score-condition (new-max-score-iteration-termination-condition
:max-score 20.0))
(def max-time-condition (new-max-time-iteration-termination-condition
:max-time-val 10
:max-time-unit :minutes))
(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
:max-n-epoch-no-improve 5))
(def target-score-condition (new-best-score-epoch-termination-condition
:best-expected-score 0.009))
(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))
(def in-mem-saver (new-in-memory-saver))
(def trained-mln
;; defaults to returning the model
(c/train-with-early-stopping
:nn-conf nn-conf
:training-iter train-iter
:testing-iter test-iter
:eval-every-n-epochs 1
:iteration-termination-conditions [invalid-score-condition
max-score-condition
max-time-condition]
:epoch-termination-conditions [score-doesnt-improve-condition
target-score-condition
max-number-epochs-condition]
:save-last-model? true
:model-saver in-mem-saver
:as-code? false))
(def model-evaler
(evaluate-classification :mln trained-mln :iter test-mnist-iter))
(println (get-stats :evaler model-evaler))
(ns my.ns
(:require [dl4clj.earlystopping.early-stopping-config :refer [new-early-stopping-config]]
[dl4clj.earlystopping.termination-conditions :refer :all]
[dl4clj.earlystopping.model-saver :refer [new-in-memory-saver new-local-file-model-saver]]
[dl4clj.earlystopping.score-calc :refer [new-ds-loss-calculator]]
[dl4clj.earlystopping.early-stopping-trainer :refer [new-early-stopping-trainer]]
[dl4clj.earlystopping.api.early-stopping-trainer :refer [fit-trainer!]]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.nn.multilayer.multi-layer-network :as mln]
[dl4clj.utils :refer [load-model!]]
[dl4clj.datasets.iterators :as iter]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; start with our network config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def nn-conf
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123 :iterations 1 :regularization? true
;; setting layer defaults
:default-activation-fn :relu :default-l2 7.5e-6
:default-weight-init :xavier :default-learning-rate 0.0015
:default-updater :nesterovs :default-momentum 0.98
;; setting layer configuration
:layers {0 {:dense-layer
{:layer-name "example first layer"
:n-in 784 :n-out 500}}
1 {:dense-layer
{:layer-name "example second layer"
:n-in 500 :n-out 100}}
2 {:output-layer
{:n-in 100 :n-out 10
;; layer specific params
:loss-fn :negativeloglikelihood
:activation-fn :softmax
:layer-name "example output layer"}}}
;; multi layer args
:backprop? true
:pretrain? false))
(def mln (c/model-from-conf nn-conf))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; the training/testing data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def train-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? true
:seed 123))
(def test-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? false
:seed 123))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we are going to need termination conditions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; these allow us to control when we exit training
;; this can be based off of iterations or epochs
;; iteration termination conditions
(def invalid-score-condition (new-invalid-score-iteration-termination-condition))
(def max-score-condition (new-max-score-iteration-termination-condition
:max-score 20.0))
(def max-time-condition (new-max-time-iteration-termination-condition
:max-time-val 10
:max-time-unit :minutes))
;; epoch termination conditions
(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
:max-n-epoch-no-improve 5))
(def target-score-condition (new-best-score-epoch-termination-condition :best-expected-score 0.009))
(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we also need a way to save our model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; can be in memory or to a local directory
(def in-mem-saver (new-in-memory-saver))
(def local-file-saver (new-local-file-model-saver :directory "resources/tmp/readme/"))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; set up your score calculator
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def score-calcer (new-ds-loss-calculator :iter test-iter
:average? true))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; termination conditions
;; a way to save our model
;; a way to calculate the score of our model on the dataset
(def early-stopping-conf
(new-early-stopping-config
:epoch-termination-conditions [score-doesnt-improve-condition
target-score-condition
max-number-epochs-condition]
:iteration-termination-conditions [invalid-score-condition
max-score-condition
max-time-condition]
:eval-every-n-epochs 5
:model-saver local-file-saver
:save-last-model? true
:score-calculator score-calcer))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping trainer from our data, model and early stopping conf
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def es-trainer (new-early-stopping-trainer :early-stopping-conf early-stopping-conf
:mln mln
:iter train-iter))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; fit and use our early stopping trainer
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def es-trainer-fitted (fit-trainer! es-trainer :as-code? false))
;; when the trainer terminates, you will see something like this
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO Completed training epoch 14
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO New best model: score = 0.005225599372851298,
;; epoch = 14 (previous: score = 0.018243224899038346, epoch = 7)
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO Hit epoch termination condition at epoch 14.
;; Details: BestScoreEpochTerminationCondition(0.009)
;; and if we look at the es-trainer-fitted object we see
;;#object[org.deeplearning4j.earlystopping.EarlyStoppingResult 0x5ab74f27 EarlyStoppingResult
;;(terminationReason=EpochTerminationCondition,details=BestScoreEpochTerminationCondition(0.009),
;; bestModelEpoch=14,bestModelScore=0.005225599372851298,totalEpochs=15)]
;; and our model has been saved to /resources/tmp/readme/bestModel.bin
;; there we have our model config, model params and our updater state
;; we can then load this model to use it or continue refining it
(def loaded-model (load-model! :path "resources/tmp/readme/bestModel.bin"
:load-updater? true))
Transfer Learning (freezing layers)
;; TODO: need to write up examples
dl4j Spark usage
How it is done in dl4clj
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
[dl4clj.eval.api.eval :refer [get-stats]]
[dl4clj.spark.masters.param-avg :as master]
[dl4clj.spark.data.java-rdd :refer [new-java-spark-context
java-rdd-from-iter]]
[dl4clj.spark.api.dl4j-multi-layer :refer [eval-classification-spark-mln
get-spark-context]]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def mln-conf
(nn/builder
:optimization-algo :stochastic-gradient-descent
:default-learning-rate 0.006
:layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
1 {:output-layer
{:loss-fn :negativeloglikelihood
:n-in 2 :n-out 3
:activation-fn :soft-max
:weight-init :xavier}}}
:backprop? true
:backprop-type :standard))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def training-master
(master/new-parameter-averaging-training-master
:build? true
:rdd-n-examples 10
:n-workers 4
:averaging-freq 10
:batch-size-per-worker 2
:export-dir "resources/spark/master/"
:rdd-training-approach :direct
:repartition-data :always
:repartition-strategy :balanced
:seed 1234
:save-updater? true
:storage-level :none))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, spark context
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def your-spark-context
(new-java-spark-context :app-name "example app"))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, training data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def iris-iter
(new-iris-data-set-iterator
:batch-size 1
:n-examples 5))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, spark mln
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def fitted-spark-mln
(c/train-with-spark :spark-context your-spark-context
:mln-conf mln-conf
:training-master training-master
:iter iris-iter
:n-epochs 1
:as-code? false))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 6, use the spark context from the spark-mln to create an rdd
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; TODO: eliminate this step
(def our-rdd
(let [sc (get-spark-context fitted-spark-mln :as-code? false)]
(java-rdd-from-iter :spark-context sc
:iter iris-iter)))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 7, evaluate the model and print stats (poor performance of model expected)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def eval-obj
(eval-classification-spark-mln
:spark-mln fitted-spark-mln
:rdd our-rdd))
(println (get-stats :evaler eval-obj))
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
[dl4clj.eval.api.eval :refer [get-stats]]
[dl4clj.spark.masters.param-avg :as master]
[dl4clj.spark.data.java-rdd :refer [new-java-spark-context java-rdd-from-iter]]
[dl4clj.spark.dl4j-multi-layer :as spark-mln]
[dl4clj.spark.api.dl4j-multi-layer :refer [fit-spark-mln!
eval-classification-spark-mln]]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def mln-conf
(nn/builder
:optimization-algo :stochastic-gradient-descent
:default-learning-rate 0.006
:layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
1 {:output-layer
{:loss-fn :negativeloglikelihood
:n-in 2 :n-out 3
:activation-fn :soft-max
:weight-init :xavier}}}
:backprop? true
:as-code? false
:backprop-type :standard))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, create a training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; not all options specified, but most are
(def training-master
(master/new-parameter-averaging-training-master
:build? true
:rdd-n-examples 10
:n-workers 4
:averaging-freq 10
:batch-size-per-worker 2
:export-dir "resources/spark/master/"
:rdd-training-approach :direct
:repartition-data :always
:repartition-strategy :balanced
:seed 1234
:as-code? false
:save-updater? true
:storage-level :none))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, create a Spark Multi Layer Network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def your-spark-context
(new-java-spark-context :app-name "example app" :as-code? false))
;; new-java-spark-context will turn an existing spark-configuration into a java spark context
;; or create a new java spark context with master set to "local[*]" and the app name
;; set to :app-name
(def spark-mln
(spark-mln/new-spark-multi-layer-network
:spark-context your-spark-context
:mln mln-conf
:training-master training-master
:as-code? false))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, load your data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; one way is via a dataset-iterator
;; can make one directly from a dataset (iterator data-set)
;; see: nd4clj.linalg.dataset.api.data-set and nd4clj.linalg.dataset.data-set
;; we are going to use a pre-built one
(def iris-iter
(new-iris-data-set-iterator
:batch-size 1
:n-examples 5
:as-code? false))
;; now let's convert the data into a JavaRDD
(def our-rdd
(java-rdd-from-iter :spark-context your-spark-context
:iter iris-iter))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, fit and evaluate the model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def fitted-spark-mln
(fit-spark-mln!
:spark-mln spark-mln
:rdd our-rdd
:n-epochs 1))
;; this fn also has the option to supply :path-to-data instead of :rdd
;; that path should point to a directory containing a number of dataset objects
(def eval-obj
(eval-classification-spark-mln
:spark-mln fitted-spark-mln
:rdd our-rdd))
;; we would want different testing and training RDDs, but here we are using
;; the data we trained on
;; let's get the stats for how our model performed
(println (get-stats :evaler eval-obj))
Coming soon
Implement ComputationGraphs and the classes which use them
NLP
Parallelism
TSNE
UI
Author: yetanalytics
Source Code: https://github.com/yetanalytics/dl4clj
License: BSD-2-Clause License
How can I find the correct ulimit values for a user account or process on Linux systems?
For proper operation, we must ensure that the correct ulimit values are set after installing various software. The Linux system provides a means of restricting the amount of resources that can be used. Limits are set for each Linux user account; however, system limits are also applied separately to each process running for that user. For example, if certain thresholds are too low, the system might not be able to serve web pages using Nginx/Apache or a PHP/Python app. System resource limits can be viewed or set with the ulimit command. Let us see how to use ulimit, which provides control over the resources available to the shell and to processes.
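As an aside (not part of the original text), the same limits can be inspected and adjusted from within a program; here is a minimal Python sketch using the standard resource module on a Unix-like system:
import resource

# Soft and hard limits for a few resources that are commonly tuned after installs
limits = {
    "open files (nofile)": resource.RLIMIT_NOFILE,
    "max user processes (nproc)": resource.RLIMIT_NPROC,
    "stack size (stack)": resource.RLIMIT_STACK,
}
for label, const in limits.items():
    soft, hard = resource.getrlimit(const)
    print(f"{label}: soft={soft} hard={hard}")

# A process may raise its own soft limit, but only up to the hard limit
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard == resource.RLIM_INFINITY or hard >= 4096:
    resource.setrlimit(resource.RLIMIT_NOFILE, (4096, hard))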
Whenever we work with data of any sort, we need a clear picture of the kind of data that we are dealing with. For most of the data out there, which may contain thousands or even millions of entries with a wide variety of information, it is practically impossible to make sense of that data without a tool to present it in a short and readable format.
Most of the time we need to go through the data, manipulate it, and visualize it to get insights. Well, there is a great library which goes by the name pandas which provides us with that capability. The most frequent data manipulation operation is data filtering. It is very similar to the WHERE clause in SQL, or to the filters you may have used in MS Excel to select specific rows based on some conditions.
pandas is a powerful, flexible and open-source data analysis/manipulation tool which is essentially a Python package that provides speed, flexibility and expressive data structures crafted to work with “relational” or “labelled” data in an intuitive and easy manner. It is one of the most popular libraries for performing real-world data analysis in Python.
pandas is built on top of the NumPy library and aims to integrate well with the scientific computing environment and numerous other third-party libraries. It has two primary data structures, namely Series (1D) and DataFrames (2D), which in most real-world use cases cover the type of data dealt with in many sectors of finance, scientific computing, engineering and statistics.
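As a quick illustration of those two structures (a minimal sketch; the names and values below are invented):
import pandas as pd

# A Series is a labelled one-dimensional array
s = pd.Series([10, 20, 30], index=["a", "b", "c"])
print(s)

# A DataFrame is a labelled two-dimensional table; each column is a Series
table = pd.DataFrame({"product": ["TV", "Mouse"], "price": [1000, 120]})
print(table)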
Installing pandas
!pip install pandas
Importing the pandas library, reading our sample data file and assigning it to the “df” DataFrame
import pandas as pd
df = pd.read_csv(r"C:\Users\rajam\Desktop\sample_data.csv")
Let’s check out our dataframe:
print(df.head())
Sample_data
Now that we have our DataFrame, we will be applying various methods to filter it.
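If you don't have that CSV on hand, a small synthetic stand-in with the same columns used throughout this article (Total_Sales, Units, Date and Region; the rows below are invented) is enough to run every example:
import pandas as pd

# Hypothetical stand-in for sample_data.csv; the values are made up
df = pd.DataFrame({
    "Total_Sales": [250, 320, 480, 150, 610],
    "Units": [15, 25, 22, 10, 30],
    "Date": ["01/05/21", "03/15/21", "04/02/21", "02/20/21", "09/01/21"],
    "Region": ["East", "West", "North", "South", "East"],
})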
We have a column named “Total_Sales” in our DataFrame, and we want to keep all the rows whose sales value is greater than 300.
#Filter a DataFrame for a single column value with a given condition
greater_than = df[df['Total_Sales'] > 300]
print(greater_than.head())
Sales with Greater than 300
Here we are filtering all the rows whose “Total_Sales” value is greater than 300 and whose “Units” value is greater than 20. We have to use the Python operator “&”, which performs a bitwise AND (applied element-wise to the two boolean masks), in order to display the corresponding result.
#Filter a DataFrame with multiple conditions
filter_sales_units = df[(df['Total_Sales'] > 300) & (df["Units"] > 20)]
print(filter_sales_units.head())
Filter on Sales and Units
If we want to filter our DataFrame based on a certain date value, we can compare the “Date” column directly; here we are getting all the results after a particular date, in our case the results after the date ’03/10/21′.
#Filter a DataFrame based on specific date
date_filter = df[df['Date'] > '03/10/21']
print(date_filter.head())
Filter on Date
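A caveat worth noting (an aside, not from the original article): comparing dates as plain strings is only reliable while the string format happens to sort in chronological order. Converting the column with pd.to_datetime makes the comparison genuinely date-based; a minimal sketch, assuming a month/day/year layout:
import pandas as pd

# Parse the strings once (adjust the format to match your data), then compare real timestamps
dates = pd.to_datetime(df['Date'], format='%m/%d/%y')
date_filter_dt = df[dates > pd.Timestamp(2021, 3, 10)]
print(date_filter_dt.head())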
Here we are getting all the results of our Date operation by evaluating multiple dates at once.
#Filter a DataFrame with multiple conditions our Date value
date_filter2 = df[(df['Date'] >= '3/25/2021') & (df['Date'] < '8/17/2021')]
print(date_filter2.head())
Filter on a date with multiple conditions
Here we are selecting a column called ‘Region’ and getting all the rows that are from the region ‘East’, thus filtering based on a specific string value.
#Filter a DataFrame to a specific string
east = df[df['Region'] == 'East']
print(east.head())
Filter based on a specific string
Here we are selecting a column called ‘Region’ and getting all the rows which have the letter ‘E’ as the first character, i.e. at index 0 of the string, in the specified column.
#Filter a DataFrame to show rows starting with a specific letter
starting_with_e = df[df['Region'].str[0]== 'E']
print(starting_with_e.head())
Filter based on a specific letter
Here we are filtering the rows in the column ‘Region’ that contain either the value ‘West’ or the value ‘East’, and displaying the combined result. Two methods can be used to perform this filtering: using the pipe | operator with the desired set of values, as in the syntax below, or using the .isin() function, which filters for the values in a given column (‘Region’ in our case) and takes the desired set of values as a list.
#Filter a DataFrame rows based on list of values
#Method 1:
east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')]
print(east_west)
#Method 2:
east_west_1 = df[df['Region'].isin(['West', 'East'])]
print(east_west_1.head())
Output of Method 2
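As an extra aside (not in the original), the ~ operator negates a boolean mask, which gives you the complement of an .isin() filter:
# Everything NOT in the list: rows whose Region is neither West nor East
not_east_west = df[~df['Region'].isin(['West', 'East'])]
print(not_east_west.head())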
Here we want all the values in the column ‘Region’ that end with ‘th’ in their string value, and we display them. In other words, we want our results to show the values of ‘North‘ and ‘South‘ and ignore ‘East’ and ‘West’. The method .str.contains() with the specified values, along with the $ RegEx anchor (which matches the end of a string), can be used to get the desired results.
For more information please check the Regex Documentation
#Filtering the DataFrame rows using regular expressions(REGEX)
regex_df = df[df['Region'].str.contains('th$')]
print(regex_df.head())
Filter based on REGEX
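Since .str.contains() treats its pattern as a regular expression by default, it also accepts a case flag; a small variation on the filter above (an aside, not in the original):
# Case-insensitive version of the same ends-with-'th' filter
regex_df_ci = df[df['Region'].str.contains('th$', case=False)]
print(regex_df_ci.head())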
Here, we’ll check for null and not-null values in all the columns with the help of the isnull() function.
#Filtering to check for null and not null values in all columns
df_null = df[df.isnull().any(axis=1)]
print(df_null.head())
Filter based on NULL or NOT null values
#Filtering to check for null values if any in the 'Units' column
units_df = df[df['Units'].isnull()]
print(units_df.head())
Finding null values on specific columns
#Filtering to check for not null values in the 'Units' column
df_not_null = df[df['Units'].notnull()]
print(df_not_null.head())
Finding not-null values on specific columns
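pandas also offers a shorthand for the same not-null filter (an aside, not in the original): dropna() with a subset keeps only the rows that have no missing value in the given columns:
# Equivalent to the notnull() filter above
df_no_missing_units = df.dropna(subset=['Units'])
print(df_no_missing_units.head())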
Filtering values using query() with a condition
#Using query function in pandas
df_query = df.query('Total_Sales > 300')
print(df_query.head())
Filtering values with the Query function
Filtering using query() with multiple conditions
#Using query function with multiple conditions in pandas
df_query_1 = df.query('Total_Sales > 300 and Units <18')
print(df_query_1.head())
Filtering multiple columns with the Query function
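query() can also reference Python variables from the enclosing scope with the @ prefix, which keeps longer conditions readable (an aside, reusing the df from above):
threshold = 300
max_units = 18
# @name pulls a variable from the surrounding Python scope into the query string
df_query_2 = df.query('Total_Sales > @threshold and Units < @max_units')
print(df_query_2.head())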
Filtering using the loc and iloc functions
#Creating a sample DataFrame for illustrations
import numpy as np
data = pd.DataFrame({"col1": np.arange(1, 20, 2)}, index=[19, 18, 8, 6, 0, 1, 2, 3, 4, 5])
print(data)
sample_data
Explanation: iloc selects rows based on the position of the given index, so it takes only integers as values.
For more information please check out the Pandas Documentation
#Filter with iloc
data.iloc[0 : 5]
Filter using iloc
Explanation: loc selects rows based on index labels
#Filter with loc
data.loc[0 : 5]
Filter using loc
You might be wondering why the loc function returns 6 rows instead of 5. This is because loc does not produce output based on index position; it considers only the labels of the index, which can be letters as well as numbers, and a label slice includes both the starting point and the endpoint. In our frame, the label 0 sits at position 4 and the label 5 at position 9, so loc[0 : 5] returns all six rows between (and including) them.
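To see the label-based, endpoint-inclusive behaviour with non-integer labels, here is a small illustration (not from the original):
import pandas as pd

letters = pd.DataFrame({"col1": [1, 2, 3, 4, 5]}, index=["a", "b", "c", "d", "e"])
# loc slices by label and includes both ends: rows "a" through "c" (3 rows)
print(letters.loc["a":"c"])
# iloc slices by position and excludes the stop: positions 0 and 1 (2 rows)
print(letters.iloc[0:2])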
So, these were some of the most common filtering methods used in pandas. Many other methods exist, but the ones covered here are among the most common.
Link: https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe
#pandas #python #dataframe