This project has reached the end of its development as a simple neural network library. Feel free to browse the code, but please use other JavaScript neural network libraries in development like brain.js and convnetjs.
brain
brain is a JavaScript neural network library. Here's an example of using it to approximate the XOR function:
var net = new brain.NeuralNetwork();
net.train([{input: [0, 0], output: [0]},
{input: [0, 1], output: [1]},
{input: [1, 0], output: [1]},
{input: [1, 1], output: [0]}]);
var output = net.run([1, 0]); // [0.987]
There's no reason to use a neural network to figure out XOR, however (-: so here's a more involved, realistic example: Demo: training a neural network to recognize color contrast.
If you have node you can install with npm:
npm install brain
Download the latest brain.js. Training is computationally expensive, so you should try to train the network offline (or on a Worker) and use the toFunction() or toJSON() options to plug the pre-trained network into your website.
Use train() to train the network with an array of training data. The network has to be trained with all the data in bulk in one call to train(). The more training patterns, the longer it will probably take to train, but the better the network will be at classifying new patterns. Each training pattern should have an input and an output, both of which can be either an array of numbers from 0 to 1 or a hash of numbers from 0 to 1. For the color contrast demo it looks something like this:
var net = new brain.NeuralNetwork();
net.train([{input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1 }},
{input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1 }},
{input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 }}]);
var output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.99, black: 0.002 }
train() takes a hash of options as its second argument:
net.train(data, {
errorThresh: 0.005, // error threshold to reach
iterations: 20000, // maximum training iterations
log: true, // console.log() progress periodically
logPeriod: 10, // number of iterations between logging
learningRate: 0.3 // learning rate
})
The network will train until the training error has gone below the threshold (default 0.005) or the max number of iterations (default 20000) has been reached, whichever comes first.
By default training won't let you know how it's doing until the end, but set log to true to get periodic updates on the current training error of the network. The training error should decrease every time. The updates will be printed to console. If you set log to a function, this function will be called with the updates instead of printing to the console.
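For instance, a minimal sketch of using a custom log function (assuming data is a training array like the one above; the exact shape of each update isn't shown here, so it is simply forwarded):
net.train(data, {
  log: function (update) {
    // forward each periodic training update to wherever you collect logs
    console.log('[training]', update);
  },
  logPeriod: 100 // invoke the log function every 100 iterations
});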
The learning rate is a parameter that influences how quickly the network trains. It's a number from 0 to 1. If the learning rate is close to 0, it will take longer to train. If the learning rate is closer to 1, it will train faster, but it's in danger of training to a local minimum and performing badly on new data. The default learning rate is 0.3.
The output of train() is a hash of information about how the training went:
{
error: 0.0039139985510105032, // training error
iterations: 406 // training iterations
}
If the network failed to train, the error will be above the error threshold. This could happen because the training data is too noisy (most likely), the network doesn't have enough hidden layers or nodes to handle the complexity of the data, or it hasn't trained for enough iterations.
If the training error is still something huge like 0.4 after 20000 iterations, it's a good sign that the network can't make sense of the data you're giving it.
Serialize or load in the state of a trained network with JSON:
var json = net.toJSON();
net.fromJSON(json);
You can also get a custom standalone function from a trained network that acts just like run():
var run = net.toFunction();
var output = run({ r: 1, g: 0.4, b: 0 });
console.log(run.toString()); // copy and paste! no need to import brain.js
NeuralNetwork() takes a hash of options:
var net = new brain.NeuralNetwork({
hiddenLayers: [4],
learningRate: 0.6 // global learning rate, useful when training using streams
});
Specify the number of hidden layers in the network and the size of each layer. For example, if you want two hidden layers - the first with 3 nodes and the second with 4 nodes, you'd give:
hiddenLayers: [3, 4]
By default brain uses one hidden layer with size proportionate to the size of the input array.
The network now has a WriteStream. You can train the network by using pipe() to send the training data to the network. Refer to stream-example.js for an example of how to train the network with a stream. To train the network using a stream you must first create the stream by calling net.createTrainStream(), which takes the following options:
floodCallback() - the callback function to re-populate the stream. This gets called on every training iteration.
doneTrainingCallback(info) - the callback function to execute when the network is done training. The info param will contain a hash of information about how the training went:
{
error: 0.0039139985510105032, // training error
iterations: 406 // training iterations
}
Use a Transform to coerce the data into the correct format. You might also use a Transform stream to normalize your data on the fly.
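As a rough sketch of how these pieces might fit together (the raw record source rawRecords and the normalization rules are hypothetical, and the flooding logic is only stubbed; see stream-example.js for the canonical version):
var brain = require('brain');
var Transform = require('stream').Transform;

var net = new brain.NeuralNetwork();
var trainStream = net.createTrainStream({
  floodCallback: function () {
    // re-populate the stream with another pass of training data on each iteration
  },
  doneTrainingCallback: function (info) {
    console.log('error: ' + info.error + ', iterations: ' + info.iterations);
  }
});

// Hypothetical normalizer: turns raw 0-255 RGB records into the 0-1 hashes the network expects
var normalize = new Transform({
  objectMode: true,
  transform: function (record, encoding, callback) {
    callback(null, {
      input: { r: record.r / 255, g: record.g / 255, b: record.b / 255 },
      output: record.label === 'white' ? { white: 1 } : { black: 1 }
    });
  }
});

// rawRecords is assumed to be an object-mode Readable of { r, g, b, label } records
rawRecords.pipe(normalize).pipe(trainStream);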
Author: Harthur
Source Code: https://github.com/harthur/brain
License: MIT License
Not babashka. Node.js babashka!?
Ad-hoc CLJS scripting on Node.js.
Experimental. Please report issues here.
Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.
Additional goals and features are:
Nbb requires Node.js v12 or newer.
CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).
Install nbb from NPM:
$ npm install nbb -g
Omit -g for a local install.
Try out an expression:
$ nbb -e '(+ 1 2 3)'
6
And then install some other NPM libraries to use in the script. E.g.:
$ npm install csv-parse shelljs zx
Create a script which uses the NPM libraries:
(ns script
(:require ["csv-parse/lib/sync$default" :as csv-parse]
["fs" :as fs]
["path" :as path]
["shelljs$default" :as sh]
["term-size$default" :as term-size]
["zx$default" :as zx]
["zx$fs" :as zxfs]
[nbb.core :refer [*file*]]))
(prn (path/resolve "."))
(prn (term-size))
(println (count (str (fs/readFileSync *file*))))
(prn (sh/ls "."))
(prn (csv-parse "foo,bar"))
(prn (zxfs/existsSync *file*))
(zx/$ #js ["ls"])
Call the script:
$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs
Nbb has first-class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:
(defmacro plet
[bindings & body]
(let [binding-pairs (reverse (partition 2 bindings))
body (cons 'do body)]
(reduce (fn [body [sym expr]]
(let [expr (list '.resolve 'js/Promise expr)]
(list '.then expr (list 'clojure.core/fn (vector sym)
body))))
body
binding-pairs)))
Using this macro we can make async code look more like sync code. Consider this puppeteer example:
(-> (.launch puppeteer)
(.then (fn [browser]
(-> (.newPage browser)
(.then (fn [page]
(-> (.goto page "https://clojure.org")
(.then #(.screenshot page #js{:path "screenshot.png"}))
(.catch #(js/console.log %))
(.then #(.close browser)))))))))
Using plet this becomes:
(plet [browser (.launch puppeteer)
page (.newPage browser)
_ (.goto page "https://clojure.org")
_ (-> (.screenshot page #js{:path "screenshot.png"})
(.catch #(js/console.log %)))]
(.close browser))
See the puppeteer example for the full code.
Since v0.0.36, nbb includes promesa, which is a library to deal with promises. The above plet macro is similar to promesa.core/let.
$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)' 0.17s user 0.02s system 109% cpu 0.168 total
The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.
Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.
To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.
To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:
$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"
and then feed it to the --classpath argument:
$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]
Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.
The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:
(ns foo
(:require [nbb.core :refer [*file*]]))
(prn *file*) ;; "/private/tmp/foo.cljs"
(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"
Nbb includes reagent.core, which will be lazily loaded when required. You can use this together with ink to create a TUI application:
$ npm install ink
ink-demo.cljs:
(ns ink-demo
(:require ["ink" :refer [render Text]]
[reagent.core :as r]))
(defonce state (r/atom 0))
(doseq [n (range 1 11)]
(js/setTimeout #(swap! state inc) (* n 500)))
(defn hello []
[:> Text {:color "green"} "Hello, world! " @state])
(render (r/as-element [hello]))
Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included, with the let and do! macros. An example:
(ns prom
(:require [promesa.core :as p]))
(defn sleep [ms]
(js/Promise.
(fn [resolve _]
(js/setTimeout resolve ms))))
(defn do-stuff
[]
(p/do!
(println "Doing stuff which takes a while")
(sleep 1000)
1))
(p/let [a (do-stuff)
b (inc a)
c (do-stuff)
d (+ b c)]
(prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3
Also see API docs.
Since nbb v0.0.75 applied-science/js-interop is available:
(ns example
(:require [applied-science.js-interop :as j]))
(def o (j/lit {:a 1 :b 2 :c {:d 1}}))
(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
Most of this library is supported in nbb, except the following:
:syms
.-x notation. In nbb, you must use keywords.
See the example of what is currently supported.
See the examples directory for small examples.
Also check out these projects built with nbb:
See API documentation.
See this gist on how to convert an nbb script or project to shadow-cljs.
Prerequisites:
To build:
bb release
Run bb tasks
for more project-related tasks.
Download Details:
Author: borkdude
Official Website: https://github.com/borkdude/nbb
License: EPL-1.0
#node #javascript
Neural networks have been around for a long time, being developed in the 1960s as a way to simulate neural activity for the development of artificial intelligence systems. Since then, however, they have developed into a useful analytical tool, often used in place of, or in conjunction with, standard statistical models such as regression or classification, as they can be used to predict or model a specific output. The main difference, and advantage, in this regard is that neural networks make no initial assumptions as to the form of the relationship or distribution that underlies the data, meaning they can be more flexible and capture non-standard and non-linear relationships between input and output variables, making them incredibly valuable in today's data-rich environment.
In this sense, their use has taken off over the past decade or so, with the fall in costs and increase in capability of general computing power, the rise of large datasets allowing these models to be trained, and the development of frameworks such as TensorFlow and Keras. These have allowed people with sufficient hardware (in some cases this is no longer even a requirement thanks to cloud computing), the correct data, and an understanding of a given coding language to implement them. This article therefore seeks to provide a no-code introduction to their architecture and how they work so that their implementation and benefits can be better understood.
Firstly, the way these models work is that there is an input layer, one or more hidden layers, and an output layer, each of which is connected by layers of synaptic weights¹. The input layer (X) is used to take in scaled values of the input, usually within a standardised range of 0–1. The hidden layers (Z) are then used to define the relationship between the input and output using weights and activation functions. The output layer (Y) then transforms the results from the hidden layers into the predicted values, often also scaled to be within 0–1. The synaptic weights (W) connecting these layers are used in model training to determine the weights assigned to each input and prediction in order to get the best model fit. Visually, this is represented as an input layer feeding the hidden layers, which in turn feed the output layer.
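In equation form (a standard single-hidden-layer formulation written in the X, Z, Y, W notation above; the activation function f and the bias terms b are assumptions, since the article does not specify them), the forward pass is:

Z = f\left(W^{(1)} X + b^{(1)}\right), \qquad \hat{Y} = f\left(W^{(2)} Z + b^{(2)}\right)

where f is an activation function such as the sigmoid f(a) = \frac{1}{1 + e^{-a}}, and training adjusts the weights W (and biases b) so that the predicted output \hat{Y} matches the observed output Y as closely as possible.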
#machine-learning #python #neural-networks #tensorflow #neural-network-algorithm #no code introduction to neural networks
PixelCrayons: Our JavaScript web development service offers you a feature-packed & dynamic web application that effectively caters to your business challenges and provides you the best RoI. Our JavaScript web development company works on all major frameworks & libraries like Angular, React, Node.js, and Vue.js, to name a few.
With 15+ years of domain expertise, we have successfully delivered 13800+ projects and have successfully garnered 6800+ happy customers with 97%+ client retention rate.
Looking for professional JavaScript web app development services? We provide custom JavaScript development services applying the latest frameworks and libraries to propel businesses to the next level. Our well-defined and manageable JS development processes balance cost, time, and quality along with clear communication.
Our JavaScript development company offers you a strict NDA, a 100% money-back guarantee, and an agile/DevOps approach.
#javascript development company #javascript development services #javascript web development #javascript development #javascript web development services #javascript web development company
These functions are very important for every JavaScript Developer and are used in almost every JavaScript Library or Framework. Check out the code snippet below.
Taken from the very popular library Lodash
/**
* Creates a function that invokes `func` with arguments reversed.
*
* @since 4.0.0
* @category Function
* @param {Function} func The function to flip arguments for.
* @returns {Function} Returns the new flipped function.
* @see reverse
* @example
*
* const flipped = flip((...args) => args)
*
* flipped('a', 'b', 'c', 'd')
* // => ['d', 'c', 'b', 'a']
*/
function flip(func) {
if (typeof func !== 'function') {
throw new TypeError('Expected a function')
}
return function(...args) {
return func.apply(this, args.reverse())
}
}
export default flip
Look at the return statement: return func.apply(this, args.reverse())
In this article, we will have a look at the call(), apply() and bind() methods of JavaScript. Basically these 3 methods are used to control the invocation of the function. The call() and apply() were introduced in ECMAScript 3 while bind() was added as a part of ECMAScript 5.
Let us start with an example to understand these.
Suppose you are a student of X university and your professor has asked you to create a math library, for an assignment, which calculates the area of a circle.
const calcArea = {
pi: 3.14,
area: function(r) {
return this.pi * r * r;
}
}
calcArea.area(4); // prints 50.24
You test this and verify its result; it is working fine and you upload the library to the portal well before the deadline ends. Then you ask your classmates to test and verify as well, and you come to know that your result and their results differ in decimal precision. You check the assignment guidelines. Oh no! The professor asked you to use a constant pi with 5 decimals of precision. But you used 3.14 and not 3.14159 as the value of pi. Now you cannot re-upload the library, as the deadline has already passed. In this situation, the call() function will save you.
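As a minimal sketch of the rescue (the object name precisePi below is hypothetical), call() lets you invoke calcArea.area with a different this, so this.pi resolves to the 5-decimal constant without touching the uploaded library:
const precisePi = { pi: 3.14159 };
// Borrow calcArea.area, but run it with precisePi bound as `this`
const area = calcArea.area.call(precisePi, 4);
console.log(area); // 50.26544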
#js #javascript-development #javascript #javascript-interview #javascript-tips
Talking about inspiration in the networking industry, nothing inspires more than the Autonomous Driving Network (ADN). You may have heard about this and wondered what it is about, and whether it has anything to do with autonomous driving vehicles. Your guess is right; the ADN concept is derived from, or inspired by, the rapid development of the autonomous driving car in recent years.
Driverless Car of the Future, the advertisement for “America’s Electric Light and Power Companies,” Saturday Evening Post, the 1950s.
The vision of autonomous driving has been around for more than 70 years. But engineers' continuous attempts to achieve the idea met without much success, and the concept stayed as fiction for a long time. In 2004, the US Defense Advanced Research Projects Agency (DARPA) organized the Grand Challenge for autonomous vehicles, in which teams competed for the grand prize of $1 million. I remember watching TV and seeing those competing vehicles, behaving as if driven by a drunk man, having a really tough time driving by themselves. I thought that the autonomous driving vision would still have a long way to go. To my surprise, the next year, 2005, Stanford University's vehicle autonomously drove 131 miles in California's Mojave desert without a scratch and took the $1 million Grand Challenge prize. How was that possible? Later I learned that the secret ingredient that made this possible was the latest ML (Machine Learning) enabled AI (Artificial Intelligence) technology.
Since then, AI technologies have advanced rapidly and been implemented in all verticals. Around the 2016 time frame, the concept of the Autonomous Driving Network started to emerge, combining AI and networking to achieve network operational autonomy. The automation concept is nothing new in the networking industry; network operations are continually being automated here and there. But this time, ADN goes beyond automating mundane tasks; it reaches a whole new level. With the help of AI technologies and the advancement of other critical ingredients like SDN (Software Defined Network), autonomous networking has a great chance of moving from a vision to a future reality.
In this article, we will examine some critical components of the ADN, current landscape, and factors that are important for ADN to be a success.
At the current stage, there are different terminologies to describe ADN vision by various organizations.
Even though the terminologies differ slightly, the industry is moving towards some common terms and a consensus called autonomous networks (e.g. TMF, ETSI, ITU-T, GSMA). The core vision includes business and network aspects. The autonomous network delivers the "hyper-loop" from business requirements all the way to the network and device layers.
On the network layer, it contains the below critical aspects:
On top of those, these capabilities need to be across multiple services, multiple domains, and the entire lifecycle(TMF, 2019).
No doubt, this is the most ambitious goal that the networking industry has ever aimed at. It has been described as the "end-state" and "ultimate goal" of networking evolution. This is not just a vision on a slide deck; the networking industry is already on the move toward the goal.
David Wang, Huawei’s Executive Director of the Board and President of Products & Solutions, said in his 2018 Ultra-Broadband Forum(UBBF) keynote speech. (David W. 2018):
“In a fully connected and intelligent era, autonomous driving is becoming a reality. Industries like automotive, aerospace, and manufacturing are modernizing and renewing themselves by introducing autonomous technologies. However, the telecom sector is facing a major structural problem: Networks are growing year by year, but OPEX is growing faster than revenue. What’s more, it takes 100 times more effort for telecom operators to maintain their networks than OTT players. Therefore, it’s imperative that telecom operators build autonomous driving networks.”
Juniper CEO Rami Rahim said in his keynote at the company’s virtual AI event: (CRN, 2020)
“The goal now is a self-driving network. The call to action is to embrace the change. We can all benefit from putting more time into higher-layer activities, like keeping distributors out of the business. The future, I truly believe, is about getting the network out of the way. It is time for the infrastructure to take a back seat to the self-driving network.”
If you had asked me this question 15 years ago, my answer would have been "no chance", as I could not imagine an autonomous driving vehicle being possible then. But now, the vision is not far-fetched anymore, not only because of the rapid advancement of ML/AI technology, but also because other key building blocks have made significant progress. To name a few key building blocks:
#network-automation #autonomous-network #ai-in-network #self-driving-network #neural-networks