Filter an Object by Key in JavaScript

Introduction

JavaScript objects aren't iterable like arrays or strings, so we can't use the filter() method directly on an object. filter() iterates over an array and returns a new array containing only the items that meet certain criteria.

If you'd like to read more about the filter() method - read our Guide to JavaScript's filter() Method!

In this article, we'll explore how to filter an object by its keys in JavaScript.

An object is, essentially, a map of keys to their values. We can naturally extract the keys and values individually:

Keys are extracted using Object.keys(), while values are extracted using Object.values(). To retrieve both keys and values together, you can use Object.entries(). In this article, we're concerned solely with the keys, since those are what we'll be filtering against.
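For a quick illustration - given a minimal object, each of these methods returns the following:

const person = { name: "John", age: 37 };

console.log(Object.keys(person));    // ["name", "age"]
console.log(Object.values(person));  // ["John", 37]
console.log(Object.entries(person)); // [["name", "John"], ["age", 37]]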

Using Object.keys() to filter an Object

The Object.keys() method is used to generate an array whose elements are strings containing the names (keys) of an object's properties. The object is passed as an argument to Object.keys():

Object.keys(objectName);

For example, suppose we have an object of user scores in various subjects:

const userScores = {
    chemistry: 60,
    mathematics: 70,
    physics: 80,
    english: 98
};

We can extract the keys, which for this example are the subjects:

const names = Object.keys(userScores);
console.log(names); // ["chemistry","mathematics","physics","english"]

After you've generated the keys, you can use filter() to iterate over them and return just those that meet the specified criteria. Finally, you can use reduce() to collect the filtered keys and their values into a new object.

Note: filter() is great at chaining with other functional methods!

Assume we have an object, and we want to return only the key-value pairs whose keys contain the word "Name":

const user = {
    firstName: "John",
    lastName: "Doe",
    userName: "johndoe12",
    email: "johndoe@stackabuse.com",
    age: 37,
    hobby: "Singing"
};

We can filter by making use of the object's keys:

const names = Object.keys(user)
    .filter((key) => key.includes("Name"))
    .reduce((obj, key) => {
        return Object.assign(obj, {
            [key]: user[key]
        });
    }, {});

console.log(names);

We made use of Object.keys(user) to generate an array of all the keys:

["firstName","lastName","userName","email","age","hobby"]

We then used the string method includes() as the criterion within filter(), checking whether each key contains the word "Name":

["firstName","lastName","userName"]

Then, we made use of reduce() to reduce the array down into an object.

Note: The reduce() method accepts two arguments: a reducer callback and an initial value - an empty object, {}, in our case. The callback itself receives the accumulator (the object being built) as its first parameter and the current key as its second.

We are using Object.assign() to merge each filtered key-value pair into the object being built. Object.assign() takes the object under construction as its target and copies the current key-value pair onto it.

And at the end of this - we have a new object, filtered by the keys:

{ firstName: 'John', lastName: 'Doe', userName: 'johndoe12' }
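
Note: On modern engines (ES2019+), you can achieve the same result more concisely by pairing Object.entries() with Object.fromEntries(), which turns a filtered list of [key, value] pairs straight back into an object - a sketch of this alternative:

const namesAlternative = Object.fromEntries(
    Object.entries(user).filter(([key]) => key.includes("Name"))
);

console.log(namesAlternative); // { firstName: 'John', lastName: 'Doe', userName: 'johndoe12' }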

Filter a Collection of Objects by Key

Oftentimes, the objects we're processing are grouped together in a parent object, keyed by a name or an ID. Filtering the collection is as easy as filtering a single object - we iterate through the parent's keys and apply the same steps:

const users = {
    John: { username: 'johncam112', age: 19 },
    Daniel: { username: 'Dandandel1', age: 21 },
    Ruth: { username: 'rutie01', age: 24 },
    Joe: { username: 'Joemathuel', age: 28 }
};

const selectedUsers = ['Ruth', 'Daniel'];

const filteredUsers = Object.keys(users)
    .filter(key => selectedUsers.includes(key))
    .reduce((obj, key) => {
        obj[key] = users[key];
        return obj;
    }, {});

console.log(filteredUsers);

In the above example, we filtered the users object to return only the entries belonging to the selectedUsers, filtering them by key:

{
    Daniel: {
        username: "Dandandel1",
        age: 21
    },
    Ruth: {
        username: "rutie01",
        age: 24
    }
}
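
If you filter by key often, the pattern is easy to extract into a small reusable helper. filterByKey below is a hypothetical utility written for illustration, not a built-in:

// Hypothetical helper: keeps only the entries whose key passes the predicate
const filterByKey = (obj, predicate) =>
    Object.keys(obj)
        .filter(predicate)
        .reduce((acc, key) => {
            acc[key] = obj[key];
            return acc;
        }, {});

console.log(filterByKey(users, (key) => selectedUsers.includes(key)));
// { Daniel: { username: 'Dandandel1', age: 21 }, Ruth: { username: 'rutie01', age: 24 } }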

Conclusion

In this short article - we've taken a look at filtering objects by key, using the Object.keys() method in combination with filter() and reduce().

Original article source at:  https://stackabuse.com/

#javascript #key #object 

August  Larson

August Larson

1660147320

Top 14 Ways to Filter Pandas Dataframes Easily

Whenever we work with data of any sort, we need a clear picture of the kind of data that we are dealing with. For most of the data out there, which may contain thousands or even millions of entries with a wide variety of information, it’s really impossible to make sense of that data without any tool to present the data in a short and readable format.

Most of the time we need to go through the data, manipulate it, and visualize it for getting insights. Well, there is a great library which goes by the name pandas which provides us with that capability. The most frequent Data manipulation operation is Data Filtering. It is very similar to the WHERE clause in SQL or you must have used a filter in MS Excel for selecting specific rows based on some conditions.

pandas is a powerful, flexible and open source data analysis/manipulation tool which is essentially a python package that provides speed, flexibility and expressive data structures crafted to work with “relational” or “labelled” data in an intuitive and easy manner. It is one of the most popular libraries to perform real-world data analysis in Python.

pandas is built on top of the NumPy library which aims to integrate well with the scientific computing environment and numerous other 3rd party libraries. It has two primary data structures namely Series (1D) and Dataframes(2D), which in most real-world use cases is the type of data that is being dealt with in many sectors of finance, scientific computing, engineering and statistics.

Let’s Start Filtering Data With the Help of Pandas Dataframe

Installing pandas

!pip install pandas

Importing the Pandas library, reading our sample data file and assigning it to “df” DataFrame

import pandas as pd
df = pd.read_csv(r"C:\Users\rajam\Desktop\sample_data.csv")

Let’s check out our dataframe:

print(df.head())

Sample_data

Sample_data

Now that we have our DataFrame, we will be applying various methods to filter it.

Method – 1: Filtering DataFrame by column value

We have a column named “Total_Sales” in our DataFrame and we want to filter out all the sales value which is greater than 300.

#Filter a DataFrame for a single column value with a given condition
 
greater_than = df[df['Total_Sales'] > 300]
print(greater_than.head())

Sample_data with sales > 300

Sales with Greater than 300

Method – 2: Filtering DataFrame based on multiple conditions

Here we are filtering all the values whose “Total_Sales” value is greater than 300 and also where the “Units” is greater than 20. We will have to use the python operator “&” which performs a bitwise AND operation in order to display the corresponding result.

#Filter a DataFrame with multiple conditions
 
filter_sales_units = df[(df['Total_Sales'] > 300) & (df["Units"] > 20)]
print(Filter_sales_units.head())

Image 3

Filter on Sales and Units

Method – 3: Filtering DataFrame based on Date value

If we want to filter our data frame based on a certain date value, for example here we are trying to get all the results based on a particular date, in our case the results after the date ’03/10/21′.

#Filter a DataFrame based on specific date
 
date_filter = df[df['Date'] > '03/10/21']
print(date_filter.head())

Image 1

Filter on Date

Method – 4: Filtering DataFrame based on Date value with multiple conditions

Here we are getting all the results for our Date operation evaluating multiple dates.

#Filter a DataFrame with multiple conditions our Date value
 
date_filter2 = df[(df['Date'] >= '3/25/2021') & (df['Date'] <'8/17/2021')]
print(date_filter2.head())

Image 2

Filter on a date with multiple conditions

Method – 5: Filtering DataFrame based on a specific string

Here we are selecting a column called ‘Region’ and getting all the rows that are from the region ‘East’, thus filtering based on a specific string value.

#Filter a DataFrame to a specific string
 
east = df[df['Region'] == 'East']
print(east.head())

Image 6

Filter based on a specific string

Method – 6: Filtering DataFrame based on a specific index value in a string

Here we are selecting a column called ‘Region’ and getting all the rows which has the letter ‘E’ as the first character i.e at index 0 in the specified column results.

#Filter a DataFrame to show rows starting with a specfic letter
 
starting_with_e = df[df['Region'].str[0]== 'E']
print(starting_with_e.head())

Image 7

Filter based on a specific letter

Method – 7: Filtering DataFrame based on a list of values

Here we are filtering rows in the column ‘Region’ which contains the values ‘West’ as well as ‘East’ and display the combined result. Two methods can be used to perform this filtering namely using a pipe | operator with the corresponding desired set of values with the below syntax OR we can use the .isin() function to filter for the values in a given column, which in our case is the ‘Region’, and provide the list of the desired set of values inside it as a list.

#Filter a DataFrame rows based on list of values
 
#Method 1:
east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')]
print(east_west)
 
#Method 2:
east_west_1 = df[df['Region'].isin(['West', 'East'])]
print(east_west_1.head())

Image 9

Output of Method -2

Method – 8: Filtering DataFrame rows based on specific values using RegEx

Here we want all the values in the column ‘Region’, which ends with ‘th’ in their string value and display them. In other words, we want our results to show the values of ‘North‘ and ‘South‘ and ignore ‘East’ and ‘West’. The method .str.contains() with the specified values along with the $ RegEx pattern can be used to get the desired results.

For more information please check the Regex Documentation

#Filtering the DataFrame rows using regular expressions(REGEX)
 
regex_df = df[df['Region'].str.contains('th$')]
print(regex_df.head())

Image 10

Filter based on REGEX

Method – 9: Filtering DataFrame to check for null

Here, we’ll check for null and not null values in all the columns with the help of isnull() function.

#Filtering to check for null and not null values in all columns
 
df_null = df[df.isnull().any(axis=1)]
print(df_null.head())

Image 12

Filter based on NULL or NOT null values

Method – 10: Filtering DataFrame to check for null values in a specific column.

#Filtering to check for null values if any in the 'Units' column
 
units_df = df[df['Units'].isnull()]
print(units_df.head())

Image 13

Finding null values on specific columns

Method – 11: Filtering DataFrame to check for not null values in specific columns

#Filtering to check for not null values in the 'Units' column
 
df_not_null = df[df['Units'].notnull()]
print(df_not_null.head())

Image 14

Finding not-null values on specific columns

Method – 12: Filtering DataFrame using query() with a condition

#Using query function in pandas
 
df_query = df.query('Total_Sales > 300')
print(df_query.head())

Image 17

Filtering values with Query Function

Method – 13: Filtering DataFrame using query() with multiple conditions

#Using query function with multiple conditions in pandas
 
df_query_1 = df.query('Total_Sales > 300 and Units < 18')
print(df_query_1.head())

[Image: Filtering multiple columns with the query() function]
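
The expression string understands or and not as well, so compound conditions stay readable without chained & masks; an illustrative variant:

#Combining conditions with 'or' inside query()
df_query_2 = df.query('Total_Sales > 300 or Units < 18')
print(df_query_2.head())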

Method – 14: Filtering our DataFrame using the loc and iloc functions

#Creating a sample DataFrame for illustrations
 
import numpy as np
data = pd.DataFrame({"col1": np.arange(1, 20, 2)}, index=[19, 18, 8, 6, 0, 1, 2, 3, 4, 5])
print(data)

[Image: sample_data]

Explanation: iloc selects rows by integer position in the index, so it accepts only integers as values.

For more information, please check out the Pandas documentation.

#Filter with iloc
 
data.iloc[0:5]

[Image: Filter using iloc]

Explanation: loc selects rows based on index labels.

#Filter with loc
 
data.loc[0:5]

[Image: Filter using loc]

You might be wondering why loc returns 6 rows instead of 5. This is because loc does not slice by index position; it slices by index label, which can just as well be a string, and a label-based slice includes both the start and the end label. So data.loc[0:5] returns every row from the label 0 through the label 5, which is six rows with this index.
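
Both accessors go further than plain slices. loc also accepts boolean masks, so every filter shown earlier can be routed through it, and iloc accepts explicit lists of positions; a short sketch on the same sample DataFrame:

#loc with a boolean mask: rows of 'data' where col1 is greater than 10
print(data.loc[data['col1'] > 10])
 
#iloc with a list of positions: the first, third and fifth rows
print(data.iloc[[0, 2, 4]])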

Conclusion

So, these were some of the most common filtering methods used in pandas. Plenty of other approaches exist, but the ones above cover the situations you will hit most often.

Link: https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe

#pandas #python #dataframe
