Tomorrowland is an implementation of Promises for Swift and Objective-C. A Promise is a wrapper around an asynchronous task that provides a standard way of subscribing to task resolution as well as chaining promises together.
UIApplication.shared.isNetworkActivityIndicatorVisible = true
MyAPI.requestFeed(for: user).then { (feedItems) in
    self.refreshUI(with: feedItems)
}.catch { (error) in
    self.showError(error)
}.always { _ in
    UIApplication.shared.isNetworkActivityIndicatorVisible = false
}
It is loosely based on both PromiseKit and Hydra, with a few key distinctions:

- It does not create a separate DispatchQueue for each promise. This means it's faster and uses fewer resources.
- Promises are parameterized over their error type rather than always using Error as the error type. This may result in more typing to construct a promise but it allows for much more powerful error handling. Tomorrowland also has some affordances for working with promises that use Error as the error type.

Installation

You can add Tomorrowland to your workspace manually like any other project and add the resulting Tomorrowland.framework to your application's frameworks.
github "lilyball/Tomorrowland" ~> 1.0
The project file is configured to use Swift 5. The code can be compiled against Swift 4.2 instead, but I'm not aware of any way to instruct Carthage to override the swift version during compilation.
CocoaPods

pod 'Tomorrowland', '~> 1.0'

The podspec declares support for both Swift 4.2 and Swift 5.0, but selecting the Swift version requires using CocoaPods 1.7.0 or later. When using CocoaPods 1.6 or earlier the Swift version will default to 5.0.
Swift Package Manager

Tomorrowland currently relies on a private Obj-C module for its atomics. This arrangement means it is not compatible with Swift Package Manager (as adding compatibility would necessitate publicly exposing the private Obj-C module).
Promises can be created using code like the following:
let promise = Promise<String,Error>(on: .utility, { (resolver) in
    let value = try expensiveCalculation()
    resolver.fulfill(with: value)
})
The body of this promise runs on the specified PromiseContext, which in this case is .utility (which means DispatchQueue.global(qos: .utility)). Unlike callbacks, all created promises must specify a context, so as to avoid accidentally running expensive computations on the main thread. The available contexts include .main, every Dispatch QoS, a specific DispatchQueue, a specific OperationQueue, or the value .immediate which means to run the block synchronously. There's also the special context .auto, which evaluates to .main on the main thread and .default otherwise.

Note: The .immediate context can be dangerous to use for callback handlers and should be avoided in most cases. It's primarily intended for creating promises, and whenever it's used with a callback handler the handler must be prepared to execute on any thread. For callbacks it's usually only useful for short thread-agnostic callbacks, such as an .onRequestCancel that does nothing more than cancelling a URLSessionTask.

The body of a Promise receives a "resolver", which it must use to fulfill, reject, or cancel the promise. If the resolver goes out of scope without being used, the promise is automatically cancelled. If the promise's error type is Error, the promise body may also throw an error (as seen above), which is then used to reject the promise. This resolver can also be used to observe cancellation requests using resolver.onRequestCancel, as seen here:
let promise = Promise<Data,Error>(on: .immediate, { (resolver) in
    let task = urlSession.dataTask(with: url, completionHandler: { (data, response, error) in
        if let data = data {
            resolver.fulfill(with: data)
        } else if case URLError.cancelled? = error {
            resolver.cancel()
        } else {
            resolver.reject(with: error!)
        }
    })
    resolver.onRequestCancel(on: .immediate, { _ in
        task.cancel()
    })
    task.resume()
})
Resolvers also have a convenience method handleCallback() that is intended to make it easy to wrap framework callbacks in promises. This method returns a closure that can be used as a callback directly. It also takes an optional isCancelError parameter that can be used to indicate when an error represents cancellation. For example:
geocoder.reverseGeocodeLocation(location, completionHandler: resolver.handleCallback(isCancelError: { CLError.geocodeCanceled ~= $0 }))
Once you have a promise, you can register callbacks to be executed when the promise is resolved. Most callback methods require a context, but for some of them (then, catch, always, and tryThen) you can omit the context and it will default to .auto, which means the main thread if the callback is registered from the main thread, otherwise the dispatch queue with QoS .default.

When you register a callback, the method also returns a Promise. All callback registration methods return a new Promise even if the callback doesn't affect the value of the promise. The reason for this is so chained callbacks always guarantee that the previous callback finished executing before the new one starts, even when using concurrent contexts (e.g. .utility), and so cancelling the returned promise doesn't cancel the original one if any other callbacks were registered on it.

Most callback registration methods also have versions that allow you to return a Promise from your callback. In this event, the resulting Promise waits for the promise you returned to resolve before adopting its value. This allows for easy composition of promises.
showLoadingIndicator()
fetchUserCredentials().flatMap(on: .default) { (credentials) in
    // This returns a new promise
    return MyAPI.login(name: credentials.name, password: credentials.password)
}.then { [weak self] (apiKey) in
    // this is invoked when the promise returned by MyAPI.login fulfills.
    MyAPI.apiKey = apiKey
    self?.transitionToLoggedInState()
}.always { [weak self] _ in
    // This is always invoked regardless of whether the previous chain was
    // fulfilled, rejected, or cancelled.
    self?.hideLoadingIndicator()
}.catch { [weak self] (error) in
    // this handles any error returned from the previous chain, meaning any error
    // from `fetchUserCredentials()` or from `MyAPI.login(name:password:)`.
    self?.displayError(error)
}
When composing callbacks that return promises, you may run into issues with incompatible error types. There are convenience methods for working with promises whose errors are compatible with Error, but they don't cover all cases. If you find yourself hitting one of these cases, any Promise whose error type conforms to Error has a property .upcast that will convert that error into an Error to allow for easier composition of promises.
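As a minimal sketch (the APIError type and the fetchData() helper here are hypothetical):

// fetchData() is assumed to return Promise<Data, APIError>, where APIError conforms to Swift.Error.
let typed: Promise<Data, APIError> = fetchData()
// .upcast converts the error type to the general Error so it can be chained with other Error-typed promises.
let general: Promise<Data, Error> = typed.upcast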
Tomorrowland also offers a typealias StdPromise<Value> as shorthand for Promise<Value,Error>. This is frequently useful to avoid having to repeat the types, such as with StdPromise(fulfilled: someValue) instead of Promise<SomeValue,Error>(fulfilled: someValue).
All promises expose a method .requestCancel(). It is named such because this doesn't actually guarantee that the promise will be cancelled. If the promise supports cancellation, this method will trigger a callback that the promise can use to cancel its work. But promises that don't support cancellation will ignore this and will eventually fulfill or reject as normal. Naturally, requesting cancellation of a promise that has already been resolved does nothing, even if the callbacks have not yet been invoked.
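As a sketch (reusing the hypothetical MyAPI from the earlier examples):

let feedPromise = MyAPI.requestFeed(for: user)
// … later, if the user navigates away before the request finishes:
feedPromise.requestCancel()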
In order to handle the issue of a promise being resolved after you no longer care about it, there is a separate mechanism called a PromiseInvalidationToken that can be used to suppress callbacks. All callback methods have an optional token parameter that accepts a PromiseInvalidationToken. If provided, calling invalidate() on the token prior to the callback being executed guarantees the callback will not fire. If the callback returns a value that is required in order to resolve the Promise returned from the callback registration method, the resulting Promise is cancelled instead. PromiseInvalidationTokens can be used with multiple callbacks at once, and a single token can be re-used as much as desired. It is recommended that you take advantage of both invalidation tokens and cancellation. This may look like:
class URLImageView: UIImageView {
    private var promise: StdPromise<Void>?
    private let invalidationToken = PromiseInvalidationToken()

    enum LoadError: Error {
        case dataIsNotImage
    }

    /// Loads an image from the URL and displays it in the image view.
    func loadImage(from url: URL) {
        promise?.cancel()
        invalidationToken.invalidate()
        // Note: dataTaskAsPromise does not actually exist
        promise = URLSession.shared.dataTaskAsPromise(with: url)
        // Use `_ =` to avoid having to handle errors with `.catch`.
        _ = promise?.tryMap(on: .utility, { (data) -> UIImage in
            if let image = UIImage(data: data) {
                return image
            } else {
                throw LoadError.dataIsNotImage
            }
        }).then(token: invalidationToken, { [weak self] (image) in
            self?.image = image
        })
    }
}
PromiseInvalidationToken also has a method .requestCancelOnInvalidate(_:) that can register any number of Promises to be automatically requested to cancel (using .requestCancel()) the next time the token is invalidated. Promise also has the same method (except it takes a token as the argument) as a convenience for calling .requestCancelOnInvalidate(_:) on the token. This can be used to terminate a promise chain without ever assigning the promise to a local variable. PromiseInvalidationToken also has a method .cancelWithoutInvalidating() which cancels any associated promises without invalidating the token.

By default PromiseInvalidationTokens will invalidate themselves automatically when deinitialized. This is primarily useful in conjunction with requestCancelOnInvalidate(_:) as it allows you to automatically cancel your promises when the object that owns the token deinits. This behavior can be disabled with an optional parameter to init.

Promise also has a convenience method requestCancelOnDeinit(_:) which can be used to request the Promise to be cancelled when a given object deinits. This is equivalent to adding a PromiseInvalidationToken property to the object (configured to invalidate on deinit) and requesting cancellation when the token invalidates, but can be used if the token would otherwise not be explicitly invalidated.
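A minimal sketch of that convenience (ProfileViewController, MyAPI.requestProfile(for:), and display(_:) are hypothetical):

class ProfileViewController: UIViewController {
    func loadProfile(for userID: String) {
        MyAPI.requestProfile(for: userID).then { [weak self] (profile) in
            self?.display(profile)
        }.requestCancelOnDeinit(self)
        // When this view controller deinits, the request is asked to cancel automatically.
    }
}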
Using these methods, the above loadImage(from:)
can be rewritten as the following including cancellation:
class URLImageView: UIImageView {
    private let promiseToken = PromiseInvalidationToken()

    enum LoadError: Error {
        case dataIsNotImage
    }

    /// Loads an image from the URL and displays it in the image view.
    func loadImage(from url: URL) {
        promiseToken.invalidate()
        // Note: dataTaskAsPromise does not actually exist
        URLSession.shared.dataTaskAsPromise(with: url)
            .tryMap(on: .utility, { (data) -> UIImage in
                if let image = UIImage(data: data) {
                    return image
                } else {
                    throw LoadError.dataIsNotImage
                }
            }).then(token: promiseToken, { [weak self] (image) in
                self?.image = image
            }).requestCancelOnInvalidate(promiseToken)
    }
}
PromiseInvalidationTokens can be arranged in a tree such that invalidating one token will cascade this invalidation down to other tokens. This is accomplished by calling childToken.chainInvalidation(from: parentToken). Practically speaking this is no different than just manually invalidating each child token yourself after invalidating the parent token, but it's provided as a convenience to make it easy to have fine-grained invalidation control while also having a simple way to bulk-invalidate tokens. For example, you might have separate tokens for different view controllers that all chain invalidation from a single token that gets invalidated when the user logs out, thus automatically invalidating all your user-dependent network requests at once while still allowing each view controller the ability to invalidate just its own requests independently.
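A sketch of that arrangement (the token names are hypothetical):

let logoutToken = PromiseInvalidationToken()   // invalidated when the user logs out
let profileToken = PromiseInvalidationToken()  // invalidated when the profile screen reloads
profileToken.chainInvalidation(from: logoutToken)

// Invalidating logoutToken now also invalidates profileToken,
// while profileToken.invalidate() only affects the profile screen's own requests.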
TokenPromise
In order to avoid the repetition of passing a PromiseInvalidationToken to multiple Promise methods as well as cancelling the resulting promise, a type TokenPromise exists that handles this for you. You can create a TokenPromise with the Promise.withToken(_:) method. This allows you to take code like the following:
func loadModel() {
    promiseToken.invalidate()
    MyModel.fetchFromNetworkAsPromise()
        .then(token: promiseToken, { [weak self] (model) in
            self?.updateUI(with: model)
        }).catch(token: promiseToken, { [weak self] (error) in
            self?.handleError(error)
        }).requestCancelOnInvalidate(promiseToken)
}
And rewrite it to be less repetitive:
func loadModel() {
    promiseToken.invalidate()
    MyModel.fetchFromNetworkAsPromise()
        .withToken(promiseToken)
        .then({ [weak self] (model) in
            self?.updateUI(with: model)
        }).catch({ [weak self] (error) in
            self?.handleError(error)
        })
}
Nearly all callback registration methods will automatically propagate cancellation requests from the child to the parent if the parent has no other observers. If all observers for a promise request cancellation, the cancellation request will propagate upwards at this time. This means that a promise will not automatically cancel as long as there's at least one interested observer. Do note that promises that have no observers do not get automatically cancelled; cancellation only happens if there's at least one observer (which then requests cancellation). Automatic cancellation propagation also requires that the promise itself no longer be in scope. For this reason you should avoid holding onto promises long-term and instead use the .cancellable property or PromiseInvalidationToken's requestCancelOnInvalidate(_:) if you want to be able to cancel the promise later.
Automatic cancellation propagation also works with the utility functions when(fulfilled:) and when(first:) as well as the convenience methods timeout(on:delay:) and delay(on:_:).
Promises have a couple of methods that do not participate in automatic cancellation propagation. You can use tap(on:token:_:) as an alternative to always in order to register an observer that won't interfere with the existing automatic cancellation propagation (this is suitable for inserting into the middle of a promise chain). You can also use tap() as a more generic version of this.
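For example, a sketch of observing a chain mid-way without affecting its cancellation behavior (fetchItems(), log(_:), and show(_:) are hypothetical):

fetchItems()
    .tap(on: .utility, { (result) in
        // Purely observational; doesn't keep the chain alive or block cancellation propagation.
        log("fetch resolved: \(result)")
    })
    .then { [weak self] (items) in
        self?.show(items)
    }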
Note that ignoringCancel() disables automatic cancellation propagation on the receiver. Once you invoke this on a promise, it will never automatically cancel.
propagatingCancellation(on:cancelRequested:)
In some cases you may need to hold onto a promise without blocking cancellation propagation from its children. The primary use-case here is deduplicating access to an asynchronous resource (such as a network load). In this scenario you may wish to hold onto a promise and return a new child for every client requesting the same resource, without preventing cancellation of the resource load if all clients cancel their requests. This can be accomplished by holding onto the result of calling .propagatingCancellation(on:cancelRequested:). The promise returned from this method will propagate cancellation to its parent as soon as all children have requested cancellation even if the promise is still in scope. When cancellation is requested, the cancelRequested handler will be invoked immediately prior to propagating cancellation upwards; this enables you to release your reference to the promise (so a new request by a client will create a brand new resource load). Returning a new child to each client can be done using makeChild(). An example of this might look like:
func loadResource(at url: URL) -> StdPromise<Model> {
    let promise: StdPromise<Model>
    if let existingPromise = resourceLoads[url] {
        promise = existingPromise
    } else {
        promise = makeResourceRequest(for: url).propagatingCancellation(on: .main, cancelRequested: { (promise) in
            if self.resourceLoads[url] == promise {
                self.resourceLoads[url] = nil
            }
        })
        resourceLoads[url] = promise
    }
    // Return a new child for each request so all clients have to cancel, not just one.
    return promise.makeChild()
}
.nowOr(_:) context

There is a special context PromiseContext.nowOr(_:) that behaves a bit differently than other contexts. This context is special in that its callback executes differently depending on whether the promise it's being registered on has already resolved by the time the callback is registered. If the promise has already resolved then .nowOr(context) behaves like .immediate, otherwise it behaves like the wrapped context. This context is intended to be used to replace code that would otherwise check if the promise.result is non-nil prior to registering a callback.

If this context is used in Promise.init(on:_:) it always behaves like .immediate, and if it's used in DelayedPromise.init(on:_:) it always behaves like the wrapped context.
There is a property PromiseContext.isExecutingNow that can be accessed from within a callback registered with .nowOr(_:) to determine if the callback is executing synchronously or asynchronously. When accessed from any other context it returns false. When registering a callback with .immediate from within a callback where PromiseContext.isExecutingNow is true, the nested callback will inherit the PromiseContext.isExecutingNow flag if and only if the nested callback is also executing synchronously. This is a bit subtle but is intended to allow Promise(on: .immediate, { … }) to inherit the flag from its surrounding scope.
An example of how this context might be used is when populating an image view from a network request:
createNetworkRequestAsPromise()
    .then(on: .nowOr(.main), { [weak imageView] (image) in
        guard let imageView = imageView else { return }
        let duration: TimeInterval = PromiseContext.isExecutingNow
            ? 0 // no transition if we're synchronous
            : 0.25
        UIView.transition(with: imageView, duration: duration, options: .transitionCrossDissolve, animations: {
            imageView.image = image
        })
    })
There are a few helper functions that can be used to deal with multiple promises.
when(fulfilled:)
when(fulfilled:) is a global function that takes either an array of promises or 2–6 promises as separate arguments, and returns a single promise that is eventually fulfilled with the values of all input promises. With the array version all input promises must have the same type and the result is fulfilled with an array. With the separate argument version the promises may have unique value types (but the same error type) and the result is fulfilled with a tuple.

If any of the input promises is rejected or cancelled, the resulting promise is immediately rejected or cancelled as well. If multiple input promises are rejected or cancelled, the first such one affects the result.

This function has an optional parameter cancelOnFailure: that, if provided as true, will cancel all input promises if any of them are rejected.
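A sketch of the tuple form (fetchProfile(), fetchSettings(), configureUI(with:settings:), and showError(_:) are hypothetical):

when(fulfilled: fetchProfile(), fetchSettings()).then { (profile, settings) in
    // Runs once both promises have fulfilled.
    self.configureUI(with: profile, settings: settings)
}.catch { (error) in
    self.showError(error)
}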
when(first:)
when(first:) is a global function that takes an array of promises of the same type, and returns a single promise that eventually adopts the same value or error as the first input promise that gets fulfilled or rejected. Cancelled input promises are ignored, unless all input promises are cancelled, at which point the resulting promise will be cancelled as well.

This function has an optional parameter cancelRemaining: that, if provided as true, will cancel the remaining input promises as soon as one of them is fulfilled or rejected.
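A sketch racing several equivalent sources (mirrorURLs, download(from:), and process(_:) are hypothetical):

when(first: mirrorURLs.map({ download(from: $0) }), cancelRemaining: true).then { (data) in
    // The slower downloads were asked to cancel as soon as the first one finished.
    self.process(data)
}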
Promise.timeout(on:delay:)
Promise.timeout(on:delay:) is a method that returns a new promise that adopts the same value as the receiver, or is rejected with an error if the receiver isn't resolved within the given interval.
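For example, a sketch that gives a request ten seconds to resolve (fetchStatus(), show(_:), and showTimeoutMessage(_:) are hypothetical):

fetchStatus().timeout(on: .auto, delay: 10).then { (status) in
    self.show(status)
}.catch { (error) in
    // If the ten seconds elapsed first, the error here is the timeout error.
    self.showTimeoutMessage(error)
}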
Promise.delay(on:_:)
Promise.delay(on:_:) is a method that returns a new promise that adopts the same result as the receiver after the specified delay. It is intended primarily for testing purposes.
PromiseOperation
PromiseOperation is an Operation subclass that wraps a Promise and allows for delayed execution of the promise handler. It's created just like Promise, with init(on:_:), but it doesn't run the handler until the operation is started (either by calling start() or by adding it to an OperationQueue). The operation has a .promise property that returns a Promise that will resolve to the results of the computation, but can be accessed before the handler is invoked. If the operation is put on a queue and is initialized with the .immediate context, the provided handler will run on the queue.

Requesting cancellation of the PromiseOperation.promise is identical to calling PromiseOperation.cancel(). If the operation has already started, cancellation support is at the discretion of the provided handler, just like with a normal Promise. If the operation has not yet started, cancelling it will prevent the handler from ever executing, though the returned promise itself won't cancel until the operation has moved to the isFinished state (e.g. by being started).

The use of PromiseOperation instead of a Promise allows for delaying execution of the promise, setting up dependencies, controlling concurrency with the operation queue's maxConcurrentOperationCount, and generally integrating with existing operation queues.
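A sketch of throttling work through a queue, assuming PromiseOperation is parameterized like Promise (thumbnailURLs, makeThumbnail(for:), and display(_:) are hypothetical):

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2

let operations = thumbnailURLs.map { (url) in
    // The handler doesn't run until the queue starts the operation; with the
    // .immediate context it runs on the queue itself.
    PromiseOperation<UIImage, Error>(on: .immediate, { (resolver) in
        resolver.fulfill(with: makeThumbnail(for: url))
    })
}
// .promise can be accessed before the handlers have run.
let promises = operations.map { $0.promise }
queue.addOperations(operations, waitUntilFinished: false)

_ = when(fulfilled: promises).then { (thumbnails) in
    self.display(thumbnails)
}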
Tomorrowland has Obj-C compatibility in the form of TWLPromise<ValueType,ErrorType>. This is a parallel promise implementation that can be bridged to/from Promise and supports all of the same functionality. Note that some of the method names are different (due to lack of overloading), and while TWLPromise is generic over its types, the return values of callback registration methods that return new promises are not parameterized (due to inability to have generic methods).
Callbacks registered on promises will be retained until the promise is resolved. If a callback is invoked (or would be invoked if the relevant invalidation token hadn't been invalidated), Tomorrowland guarantees that it will release the callback on the context it was invoked on. If the callback is not invoked (e.g. it's a then(on:_:) callback but the promise was rejected) then no guarantees are made as to the context the callback is released on. If you need to ensure it's released on the appropriate context (e.g. if it captures an object that must deallocate on the main thread) then you can use .always or one of the .mapResult variants.
Requires a minimum of iOS 9, macOS 10.10, watchOS 2.0, or tvOS 9.0.
Licensed under either of the Apache License, Version 2.0, or the MIT license, at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be dual licensed as above, without any additional terms or conditions.
Version History

Add a PromiseOperation class (TWLPromiseOperation in Obj-C) that integrates promises with OperationQueues. It can also be used similarly to DelayedPromise if you simply want more control over when the promise handler actually executes. PromiseOperation is useful if you want to be able to set up dependencies between promises or control concurrent execution counts (#58).

Fix the cancellation propagation behavior of Promise.Resolver.resolve(with:) and the flatMap family of methods. Previously, requesting cancellation of the promise associated with the resolver (for resolve(with:), or the returned promise for the flatMap family) would immediately request cancellation of the upstream promise even if the upstream promise had other children. The new behavior fixes this such that it participates in automatic cancellation propagation just like any other child promise (#54).
Slightly optimize stack usage when chaining one promise to another.

Avoid using stack space for chained promises that don't involve a callback. For example, when the promise returned from a flatMap(on:token:_:) resolves it will resolve the outer promise without using additional stack frames. You can think of it like tail calling functions. This affects not just flatMap but also operations such as tap(), ignoringCancel(), and more. This also applies to Obj-C (with TWLPromise).

Note: This does not affect the variants that implicitly upcast from some E: Swift.Error to Swift.Error such as tryFlatMap(on:token:_:).

Change cancellation propagation behavior of onCancel. Like tap, it doesn't prevent automatic cancellation propagation if the parent has other children and all other children request cancellation. Unlike tap, requesting cancellation of onCancel when there are no other children will propagate cancellation to the parent. The motivation here is attaching an onCancel observer shouldn't prevent cancellation that would otherwise occur, but when it's the only child it should behave like the other standard observers (#57).
Add method Promise.makeChild(). This returns a new child of the receiver that adopts the receiver's value and propagates cancellation like any other observer. The purpose here is to be used when handing back multiple children of one parent to callers, as handing back the parent means any one caller can cancel it without the other callers' participation. This is particularly useful in conjunction with propagatingCancellation(on:cancelRequested:) (#56).

Add a property PromiseContext.isExecutingNow (TWLPromiseContext.isExecutingNow in Obj-C) that returns true if accessed from within a callback registered with .nowOr(_:) and executing synchronously, or false otherwise. If accessed from within a callback (or Promise.init(on:_:)) registered with .immediate and running synchronously, it inherits the surrounding scope's PromiseContext.isExecutingNow flag. This is intended to allow Promise(on: .immediate, { … }) to query the surrounding scope's flag (#53).

Change Promise.timeout's default context to .nowOr(.auto) for the Error overload as well.

Change the behavior of Promise.timeout(on:delay:) when the delay is less than or equal to zero, the context is .immediate or .nowOr(_:), and the upstream promise hasn't resolved yet. Previously the timeout would occur asynchronously and the upstream promise would get a chance to race the timeout. With the new behavior the timeout occurs synchronously (#49).

Add PromiseContext.nowOr(context) (+[TWLContext nowOrContext:] in Obj-C) that runs the callback synchronously when registered if the promise has already resolved, otherwise registers the callback to run on context. This can be used to replace code that previously would have required checking promise.result prior to registering the callback (#34).
For example:
networkImagePromise.then(on: .nowOr(.main), { [weak button] (image) in
    button?.setImage(image, for: .normal)
})
Add Promise.Resolver.hasRequestedCancel (TWLResolver.cancelRequested in Obj-C) that returns true if the promise has been requested to cancel or is already cancelled, or false if it hasn't been requested to cancel or is fulfilled or rejected. This can be used when a promise initializer takes significant time in a manner not easily interrupted by an onRequestCancel handler (#47).

Change Promise.timeout's default context from .auto to .nowOr(.auto). This behaves the same as .auto in most cases, except if the receiver has already been resolved this will cause the returned promise to likewise already be resolved (#50).

Ensure when(first:cancelRemaining:) returns an already-cancelled promise if all input promises were previously cancelled, instead of cancelling the returned promise asynchronously (#51).

Ensure when(fulfilled:qos:cancelOnFailure:) returns an already-resolved promise if either all input promises were previously fulfilled or any input promise was previously rejected or cancelled (#52).
Fix an issue in PromiseInvalidationToken.requestCancelOnInvalidate(_:) and PromiseInvalidationToken.chainInvalidation(from:includingCancelWithoutInvalidating:) when cleaning up nil nodes prior to pushing on the new node (#48).

Add .propagatingCancellation(on:cancelRequested:) that can be used to create a long-lived promise that propagates cancellation from its children to its parent while it's still alive. Normally promises don't propagate cancellation until they themselves are released, in case more children are going to be added. This new method is intended to be used when deduplicating requests for an asynchronous resource (such as a network load) such that the resource request can be cancelled in the event that no children care about it anymore (#46).

Fix a bug where PromiseInvalidationTokens would not deinit as long as any promise whose callback was tied to the token was still unresolved. This meant that the default invalidateOnDeinit behavior would not trigger and the callback would still fire even though there were no more external references to the token, and this meant any promises configured to be cancelled when the token invalidated would not cancel. Tokens used purely for requestCancelOnInvalidate(_:) would still deallocate, and tokens would still deallocate after any associated promises had resolved.

Fix the memory fences used by PromiseInvalidationTokens. After a careful re-reading I don't believe I was issuing the correct fences previously, making it possible for tokens whose associated promise callbacks were executing concurrently with a call to requestCancelOnInvalidate(_:) to read the wrong generation value, and for tokens that had requestCancelOnInvalidate(_:) invoked concurrently on multiple threads to corrupt the generation.

Add PromiseInvalidationToken.chainInvalidation(from:) to invalidate a token whenever another token invalidates. This allows for building a tree of tokens in order to have both fine-grained and bulk invalidation at the same time. Tokens chained together this way stay chained forever (#43).

Your Podfile can now declare which version of Swift it's compatible with. For anyone using CocoaPods 1.6 or earlier it will default to Swift 5.0.

Make DelayedPromise conform to Equatable (#37).

Add support for Swift.Result (#39).

Allow writing promise.then({ foo?($0) }) without it incorrectly resolving to the deprecated form of map(_:) (#35).

Rename Promise.init(result:) and Promise.init(on:result:after:) to Promise.init(with:) and Promise.init(on:with:after:) (#40).

When chaining multiple .main context blocks in the same runloop pass, ensure we release each block before executing the next one.
Ensure that if a user-supplied callback is invoked, it is also released on the context where it was invoked (#38).

This guarantee is only made for callbacks that are invoked (ignoring tokens). What this means is when using e.g. .then(on:_:) if the promise is fulfilled, the onSuccess block will be released on the provided context, but if the promise is rejected no such guarantee is made. If you rely on the context it's released on (e.g. it captures an object that must deallocate on the main thread) then you can use .always or one of the mapResult variants.

Rename a lot of methods on Promise and TokenPromise (#5). This gets rid of most overrides, leaving the only overridden methods to be ones that handle either Swift.Error or E: Swift.Error, and even these overrides are removed in the Swift 5 compiler.

then is now map or flatMap, recover's override is now flatMapError, always's override is now flatMapResult, and similar renames were made for the try variants.
Add a new then method whose block returns Void. The returned promise resolves to the same result as the original promise.

Add new mapError and tryMapError methods.

Add new mapResult and tryMapResult methods.

Extend tryFlatMapError to be available on all Promises instead of just those whose error type is Swift.Error.

Remove the default .auto value for the on context: parameter to most calls. It's now only provided for the "terminal" callbacks, the ones that don't return a value from the handler. This avoids the common problem of running trivial maps on the main thread unnecessarily (#33).
Add Promise.Resolver.resolve(with: somePromise) that resolves the receiver using another promise (#30).

Mark PromiseCancellable.requestCancel() as public (#29).

Change the behavior of .delay(on:_:) and .timeout(on:delay:) when using PromiseContext.operationQueue. The relevant operation is now added to the queue immediately and only becomes ready once the delay/timeout has elapsed.

Add -[TWLPromise initCancelled] to construct a pre-cancelled promise.

Add Promise.init(on:fulfilled:after:), Promise.init(on:rejected:after:), and Promise.init(on:result:after:). These initializers produce something akin to Promise(fulfilled: value).delay(after) except they respond to cancellation immediately. This makes them more suitable for use as cancellable timers, as opposed to .delay(_:) which is more intended for debugging (#27).

Prune the callback list in PromiseInvalidationToken.requestCancelOnInvalidate(_:). Any deallocated promises at the head of the callback list will be removed. This will help keep the callback list from growing uncontrollably when a token is used merely to cancel all promises when the owner deallocates as opposed to being periodically invalidated during its lifetime (#25).

Skip the .delay(_:) timer if .requestCancel() is invoked and the upstream promise cancelled. This way requested cancels will skip the delay, but unexpected cancels will still delay the result (#26).

Add PromiseInvalidationToken.cancelWithoutInvalidating(). This method cancels any associated promises without invalidating the token, thus allowing for any onCancel and always handlers on the promises to fire (#23).

Add Promise↔ObjCPromise bridging methods for the case of Value: AnyObject, Error == Swift.Error (#24).

Add Promise.init(result:) for creating a Promise from a PromiseResult.

when(resolved: …, cancelOnFailure: true) and when(first: …, cancelRemaining: true) (#20).

Enable APPLICATION_EXTENSION_API_ONLY.

Add Hashable / Equatable conformance to PromiseInvalidationToken.

Add TokenPromise, which wraps a Promise and automatically applies a PromiseInvalidationToken. This API is Swift-only.

Add Decodable conformance to NoError.

Add Promise.fork(_:).

Add Promise.requestCancelOnInvalidate(_:) as a convenience for token.requestCancelOnInvalidate(_:).

Add Promise.requestCancelOnDeinit(_:) as a convenience for adding a token property to an object that invalidates on deinit.

Change how delay/timeout behave with an OperationQueue context. Instead of using the OperationQueue's underlying queue, we instead use a .userInitiated queue for the timer and hop onto the OperationQueue to resolve the promise.

Remove the .linkCancel option.

Remove the cancelOnTimeout: parameter to timeout(on:delay:) in favor of automatic cancellation propagation.

Automatically invalidate PromiseInvalidationTokens on deinit. This behavior can be disabled via a parameter to init.

Initial alpha release.
Author: lilyball
Source code: https://github.com/lilyball/Tomorrowland
License: Apache-2.0 or MIT
#swift #objective-c
Port of deeplearning4j to clojure
Contact info
If you have any questions,
NOT YET RELEASED TO CLOJARS
If using Maven add the following repository definition to your pom.xml:
<repository>
<id>clojars.org</id>
<url>http://clojars.org/repo</url>
</repository>
With Leiningen:
n/a
With Maven:
n/a
<dependency>
<groupId>_</groupId>
<artifactId>_</artifactId>
<version>_</version>
</dependency>
All functions for creating dl4j objects return code by default
API functions return code when all args are provided as code
API functions return the value of calling the wrapped method when args are provided as a mixture of objects and code or just objects
The tests are there to help clarify behavior; if you are unsure of how to use a fn, search the tests
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]))
;; as code (the default)
(l/dense-layer-builder
:activation-fn :relu
:learning-rate 0.006
:weight-init :xavier
:layer-name "example layer"
:n-in 10
:n-out 1)
;; =>
(doto
(org.deeplearning4j.nn.conf.layers.DenseLayer$Builder.)
(.nOut 1)
(.activation (dl4clj.constants/value-of {:activation-fn :relu}))
(.weightInit (dl4clj.constants/value-of {:weight-init :xavier}))
(.nIn 10)
(.name "example layer")
(.learningRate 0.006))
;; as an object
(l/dense-layer-builder
:activation-fn :relu
:learning-rate 0.006
:weight-init :xavier
:layer-name "example layer"
:n-in 10
:n-out 1
:as-code? false)
;; =>
#object[org.deeplearning4j.nn.conf.layers.DenseLayer 0x69d7d160 "DenseLayer(super=FeedForwardLayer(super=Layer(layerName=example layer, activationFn=relu, weightInit=XAVIER, biasInit=NaN, dist=null, learningRate=0.006, biasLearningRate=NaN, learningRateSchedule=null, momentum=NaN, momentumSchedule=null, l1=NaN, l2=NaN, l1Bias=NaN, l2Bias=NaN, dropOut=NaN, updater=null, rho=NaN, epsilon=NaN, rmsDecay=NaN, adamMeanDecay=NaN, adamVarDecay=NaN, gradientNormalization=null, gradientNormalizationThreshold=NaN), nIn=10, nOut=1))"]
Loading data from a file (here it's a csv)
(ns my.ns
(:require [dl4clj.datasets.input-splits :as s]
[dl4clj.datasets.record-readers :as rr]
[dl4clj.datasets.api.record-readers :refer :all]
[dl4clj.datasets.iterators :as ds-iter]
[dl4clj.datasets.api.iterators :refer :all]
[dl4clj.helpers :refer [data-from-iter]]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; file splits (convert the data to records)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def poker-path "resources/poker-hand-training.csv")
;; this is not a complete dataset, it is just here to serve as an example
(def file-split (s/new-filesplit :path poker-path))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers, (read the records created by the file split)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def csv-rr (initialize-rr! :rr (rr/new-csv-record-reader :skip-n-lines 0 :delimiter ",")
:input-split file-split))
;; lets look at some data
(println (next-record! :rr csv-rr :as-code? false))
;; => #object[java.util.ArrayList 0x2473e02d [1, 10, 1, 11, 1, 13, 1, 12, 1, 1, 9]]
;; this is our first line from the csv
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; record readers dataset iterators (turn our writables into a dataset)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
:record-reader csv-rr
:batch-size 1
:label-idx 10
:n-possible-labels 10))
;; we use our record reader created above
;; we want to see one example per dataset obj returned (:batch-size = 1)
;; we know our label is at the last index, so :label-idx = 10
;; there are 10 possible types of poker hands so :n-possible-labels = 10
;; you can also set :label-idx to -1 to use the last index no matter the size of the seq
(def other-rr-ds-iter (ds-iter/new-record-reader-dataset-iterator
:record-reader csv-rr
:batch-size 1
:label-idx -1
:n-possible-labels 10))
(str (next-example! :iter rr-ds-iter :as-code? false))
;; =>
;;===========INPUT===================
;;[1.00, 10.00, 1.00, 11.00, 1.00, 13.00, 1.00, 12.00, 1.00, 1.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00]
;; and to show that :label-idx = -1 gives us the same output
(= (next-example! :iter rr-ds-iter :as-code? false)
(next-example! :iter other-rr-ds-iter :as-code? false)) ;; => true
(ns my.ns
(:require [nd4clj.linalg.factory.nd4j :refer [vec->indarray matrix->indarray
indarray-of-zeros indarray-of-ones
indarray-of-rand vec-or-matrix->indarray]]
[dl4clj.datasets.new-datasets :refer [new-ds]]
[dl4clj.datasets.api.datasets :refer [as-list]]
[dl4clj.datasets.iterators :refer [new-existing-dataset-iterator]]
[dl4clj.datasets.api.iterators :refer :all]
[dl4clj.datasets.pre-processors :as ds-pp]
[dl4clj.datasets.api.pre-processors :refer :all]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; INDArray creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;TODO: consider defaulting to code
;; can create from a vector
(vec->indarray [1 2 3 4])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x269df212 [1.00, 2.00, 3.00, 4.00]]
;; or from a matrix
(matrix->indarray [[1 2 3 4] [2 4 6 8]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x20aa7fe1
;; [[1.00, 2.00, 3.00, 4.00], [2.00, 4.00, 6.00, 8.00]]]
;; will fill in spareness with zeros
(matrix->indarray [[1 2 3 4] [2 4 6 8] [10 12]])
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x8b7796c
;;[[1.00, 2.00, 3.00, 4.00],
;; [2.00, 4.00, 6.00, 8.00],
;; [10.00, 12.00, 0.00, 0.00]]]
;; can create an indarray of all zeros with specified shape
;; defaults to :rows = 1 :columns = 1
(indarray-of-zeros :rows 3 :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x6f586a7e
;;[[0.00, 0.00],
;; [0.00, 0.00],
;; [0.00, 0.00]]]
(indarray-of-zeros) ;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xe59ffec 0.00]
;; and if only one is supplied, will get a vector of specified length
(indarray-of-zeros :rows 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2899d974 [0.00, 0.00]]
(indarray-of-zeros :columns 2)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0xa5b9782 [0.00, 0.00]]
;; same considerations/defaults for indarray-of-ones and indarray-of-rand
(indarray-of-ones :rows 2 :columns 3)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x54f08662 [[1.00, 1.00, 1.00], [1.00, 1.00, 1.00]]]
(indarray-of-rand :rows 2 :columns 3)
;; all values are greater than 0 but less than 1
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x2f20293b [[0.85, 0.86, 0.13], [0.94, 0.04, 0.36]]]
;; vec-or-matrix->indarray is built into all functions which require INDArrays
;; so that you can use clojure data structures
;; but you still have the option of passing existing INDArrays
(def example-array (vec-or-matrix->indarray [1 2 3 4]))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x5c44c71f [1.00, 2.00, 3.00, 4.00]]
(vec-or-matrix->indarray example-array)
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x607b03b0 [1.00, 2.00, 3.00, 4.00]]
(vec-or-matrix->indarray (indarray-of-rand :rows 2))
;; => #object[org.nd4j.linalg.cpu.nativecpu.NDArray 0x49143b08 [0.76, 0.92]]
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set creation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def ds-with-single-example (new-ds :input [1 2 3 4]
:output [0.0 1.0 0.0]))
(as-list :ds ds-with-single-example :as-code? false)
;; =>
;; #object[java.util.ArrayList 0x5d703d12
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00]]]
(def ds-with-multiple-examples (new-ds
:input [[1 2 3 4] [2 4 6 8]]
:output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))
(as-list :ds ds-with-multiple-examples :as-code? false)
;; =>
;;#object[java.util.ArrayList 0x29c7a9e2
;;[===========INPUT===================
;;[1.00, 2.00, 3.00, 4.00]
;;=================OUTPUT==================
;;[0.00, 1.00, 0.00],
;;===========INPUT===================
;;[2.00, 4.00, 6.00, 8.00]
;;=================OUTPUT==================
;;[0.00, 0.00, 1.00]]]
;; we can create a dataset iterator from the code which creates datasets
;; and set the labels for our outputs (optional)
(def ds-with-multiple-examples
(new-ds
:input [[1 2 3 4] [2 4 6 8]]
:output [[0.0 1.0 0.0] [0.0 0.0 1.0]]))
;; iterator
(def training-rr-ds-iter
(new-existing-dataset-iterator
:dataset ds-with-multiple-examples
:labels ["foo" "baz" "foobaz"]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; data-set normalization
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; this gathers statistics on the dataset and normalizes the data
;; and applies the transformation to all dataset objects in the iterator
(def train-iter-normalized
(c/normalize-iter! :iter training-rr-ds-iter
:normalizer (ds-pp/new-standardize-normalization-ds-preprocessor)
:as-code? false))
;; above returns the normalized iterator
;; to get fit normalizer
(def the-normalizer
(get-pre-processor train-iter-normalized))
Creating a neural network configuration with single and multiple layers
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.nn.conf.distributions :as dist]
[dl4clj.nn.conf.input-pre-processor :as pp]
[dl4clj.nn.conf.step-fns :as s-fn]))
;; nn/builder has 3 types of args
;; 1) args which set network configuration params
;; 2) args which set default values for layers
;; 3) args which set multi layer network configuration params
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; single layer nn configuration
;; here we are setting network configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(nn/builder :optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
:step-fn :default-step-fn
:layers {:dense-layer {:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:learning-rate 0.006
:weight-init :xavier
:layer-name "single layer model example"
:n-in 10
:n-out 20}})
;; there are several options within a nn-conf map which can be configuration maps
;; or calls to fns
;; It doesn't matter which option you choose and you don't have to stay consistent
;; the list of params which can be passed as config maps or fn calls will
;; be enumerated at a later date
(nn/builder :optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
:step-fn (s-fn/new-default-step-fn)
:build? true
;; don't need to specify layer order, there's only one
:layers (l/dense-layer-builder
:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:dist (dist/new-normal-distribution :mean 0 :std 1)
:learning-rate 0.006
:weight-init :xavier
:layer-name "single layer model example"
:n-in 10
:n-out 20))
;; these configurations are the same
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; multi-layer configuration
;; here we are also setting layer defaults
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; defaults will apply to layers which do not specify those value in their config
(nn/builder
:optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
:default-activation-fn :sigmoid
:default-weight-init :uniform
;; we need to specify the layer order
:layers {0 (l/activation-layer-builder
:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:learning-rate 0.006
:weight-init :xavier
:layer-name "example first layer"
:n-in 10
:n-out 20)
1 {:output-layer {:n-in 20
:n-out 2
:loss-fn :mse
:layer-name "example output layer"}}})
;; specifying multi-layer config params
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:minimize? true
:use-drop-connect? false
:lr-score-based-decay-rate 0.002
:regularization? false
;; layer defaults
:default-activation-fn :sigmoid
:default-weight-init :uniform
;; the layers
:layers {0 (l/activation-layer-builder
:activation-fn :relu
:updater :adam
:adam-mean-decay 0.2
:adam-var-decay 0.1
:learning-rate 0.006
:weight-init :xavier
:layer-name "example first layer"
:n-in 10
:n-out 20)
1 {:output-layer {:n-in 20
:n-out 2
:loss-fn :mse
:layer-name "example output layer"}}}
;; multi layer network args
:backprop? true
:backprop-type :standard
:pretrain? false
:input-pre-processors {0 (pp/new-zero-mean-pre-pre-processor)
1 {:unit-variance-processor {}}})
Multi Layer models
(ns my.ns
(:require [dl4clj.datasets.iterators :as iter]
[dl4clj.datasets.input-splits :as split]
[dl4clj.datasets.record-readers :as rr]
[dl4clj.optimize.listeners :as listener]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.nn.multilayer.multi-layer-network :as mln]
[dl4clj.nn.api.model :refer [init! set-listeners!]]
[dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
[dl4clj.datasets.api.record-readers :refer [initialize-rr!]]
[dl4clj.eval.api.eval :refer [get-stats get-accuracy]]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; nn-conf -> multi-layer-network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def nn-conf
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123 :iterations 1 :regularization? true
;; setting layer defaults
:default-activation-fn :relu :default-l2 7.5e-6
:default-weight-init :xavier :default-learning-rate 0.0015
:default-updater :nesterovs :default-momentum 0.98
;; setting layer configuration
:layers {0 {:dense-layer
{:layer-name "example first layer"
:n-in 784 :n-out 500}}
1 {:dense-layer
{:layer-name "example second layer"
:n-in 500 :n-out 100}}
2 {:output-layer
{:n-in 100 :n-out 10
;; layer specific params
:loss-fn :negativeloglikelihood
:activation-fn :softmax
:layer-name "example output layer"}}}
;; multi layer args
:backprop? true
:pretrain? false))
(def multi-layer-network (c/model-from-conf nn-conf))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; local cpu training with dl4j pre-built iterators
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; lets use the pre-built Mnist data set iterator
(def train-mnist-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? true
:seed 123))
(def test-mnist-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? false
:seed 123))
;; and lets set a listener so we can know how training is going
(def score-listener (listener/new-score-iteration-listener :print-every-n 5))
;; and attach it to our model
;; TODO: listeners are broken, look into log4j warning
(def mln-with-listener (set-listeners! :model multi-layer-network
:listeners [score-listener]))
(def trained-mln (mln/train-mln-with-ds-iter! :mln mln-with-listener
:iter train-mnist-iter
:n-epochs 15
:as-code? false))
;; training happens because :as-code? = false
;; if it was true, we would still just have a data structure
;; we now have a trained model that has seen the training dataset 15 times
;; time to evaluate our model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;Create an evaluation object
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def eval-obj (evaluate-classification :mln trained-mln
:iter test-mnist-iter))
;; always remember that these objects are stateful, dont use the same eval-obj
;; to eval two different networks
;; we trained the model on a training dataset. We evaluate on a test set
(println (get-stats :evaler eval-obj))
;; this will print the stats to standard out for each feature/label pair
;;Examples labeled as 0 classified by model as 0: 968 times
;;Examples labeled as 0 classified by model as 1: 1 times
;;Examples labeled as 0 classified by model as 2: 1 times
;;Examples labeled as 0 classified by model as 3: 1 times
;;Examples labeled as 0 classified by model as 5: 1 times
;;Examples labeled as 0 classified by model as 6: 3 times
;;Examples labeled as 0 classified by model as 7: 1 times
;;Examples labeled as 0 classified by model as 8: 2 times
;;Examples labeled as 0 classified by model as 9: 2 times
;;Examples labeled as 1 classified by model as 1: 1126 times
;;Examples labeled as 1 classified by model as 2: 2 times
;;Examples labeled as 1 classified by model as 3: 1 times
;;Examples labeled as 1 classified by model as 5: 1 times
;;Examples labeled as 1 classified by model as 6: 2 times
;;Examples labeled as 1 classified by model as 7: 1 times
;;Examples labeled as 1 classified by model as 8: 2 times
;;Examples labeled as 2 classified by model as 0: 3 times
;;Examples labeled as 2 classified by model as 1: 2 times
;;Examples labeled as 2 classified by model as 2: 1006 times
;;Examples labeled as 2 classified by model as 3: 2 times
;;Examples labeled as 2 classified by model as 4: 3 times
;;Examples labeled as 2 classified by model as 6: 3 times
;;Examples labeled as 2 classified by model as 7: 7 times
;;Examples labeled as 2 classified by model as 8: 6 times
;;Examples labeled as 3 classified by model as 2: 4 times
;;Examples labeled as 3 classified by model as 3: 990 times
;;Examples labeled as 3 classified by model as 5: 3 times
;;Examples labeled as 3 classified by model as 7: 3 times
;;Examples labeled as 3 classified by model as 8: 3 times
;;Examples labeled as 3 classified by model as 9: 7 times
;;Examples labeled as 4 classified by model as 2: 2 times
;;Examples labeled as 4 classified by model as 3: 1 times
;;Examples labeled as 4 classified by model as 4: 967 times
;;Examples labeled as 4 classified by model as 6: 4 times
;;Examples labeled as 4 classified by model as 7: 1 times
;;Examples labeled as 4 classified by model as 9: 7 times
;;Examples labeled as 5 classified by model as 0: 2 times
;;Examples labeled as 5 classified by model as 3: 6 times
;;Examples labeled as 5 classified by model as 4: 1 times
;;Examples labeled as 5 classified by model as 5: 874 times
;;Examples labeled as 5 classified by model as 6: 3 times
;;Examples labeled as 5 classified by model as 7: 1 times
;;Examples labeled as 5 classified by model as 8: 3 times
;;Examples labeled as 5 classified by model as 9: 2 times
;;Examples labeled as 6 classified by model as 0: 4 times
;;Examples labeled as 6 classified by model as 1: 3 times
;;Examples labeled as 6 classified by model as 3: 2 times
;;Examples labeled as 6 classified by model as 4: 4 times
;;Examples labeled as 6 classified by model as 5: 4 times
;;Examples labeled as 6 classified by model as 6: 939 times
;;Examples labeled as 6 classified by model as 7: 1 times
;;Examples labeled as 6 classified by model as 8: 1 times
;;Examples labeled as 7 classified by model as 1: 7 times
;;Examples labeled as 7 classified by model as 2: 4 times
;;Examples labeled as 7 classified by model as 3: 3 times
;;Examples labeled as 7 classified by model as 7: 1005 times
;;Examples labeled as 7 classified by model as 8: 2 times
;;Examples labeled as 7 classified by model as 9: 7 times
;;Examples labeled as 8 classified by model as 0: 3 times
;;Examples labeled as 8 classified by model as 2: 3 times
;;Examples labeled as 8 classified by model as 3: 2 times
;;Examples labeled as 8 classified by model as 4: 4 times
;;Examples labeled as 8 classified by model as 5: 3 times
;;Examples labeled as 8 classified by model as 6: 2 times
;;Examples labeled as 8 classified by model as 7: 4 times
;;Examples labeled as 8 classified by model as 8: 947 times
;;Examples labeled as 8 classified by model as 9: 6 times
;;Examples labeled as 9 classified by model as 0: 2 times
;;Examples labeled as 9 classified by model as 1: 2 times
;;Examples labeled as 9 classified by model as 3: 4 times
;;Examples labeled as 9 classified by model as 4: 8 times
;;Examples labeled as 9 classified by model as 6: 1 times
;;Examples labeled as 9 classified by model as 7: 4 times
;;Examples labeled as 9 classified by model as 8: 2 times
;;Examples labeled as 9 classified by model as 9: 986 times
;;==========================Scores========================================
;; Accuracy: 0.9808
;; Precision: 0.9808
;; Recall: 0.9807
;; F1 Score: 0.9807
;;========================================================================
;; can get the stats that are printed via fns in the evaluation namespace
;; after running eval-model-whole-ds
(get-accuracy :evaler eval-obj) ;; => 0.9808
Early Stopping (controlling training)

It is recommended you start here when designing models using dl4clj.core.
(ns my.ns
(:require [dl4clj.earlystopping.termination-conditions :refer :all]
[dl4clj.earlystopping.model-saver :refer [new-in-memory-saver]]
[dl4clj.nn.api.multi-layer-network :refer [evaluate-classification]]
[dl4clj.eval.api.eval :refer [get-stats]]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.datasets.iterators :as iter]
[dl4clj.core :as c]))
(def nn-conf
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123
:iterations 1
:regularization? true
;; setting layer defaults
:default-activation-fn :relu
:default-l2 7.5e-6
:default-weight-init :xavier
:default-learning-rate 0.0015
:default-updater :nesterovs
:default-momentum 0.98
;; setting layer configuration
:layers {0 {:dense-layer
{:layer-name "example first layer"
:n-in 784 :n-out 500}}
1 {:dense-layer
{:layer-name "example second layer"
:n-in 500 :n-out 100}}
2 {:output-layer
{:n-in 100 :n-out 10
;; layer specific params
:loss-fn :negativeloglikelihood
:activation-fn :softmax
:layer-name "example output layer"}}}
;; multi layer args
:backprop? true
:pretrain? false))
(def train-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? true
:seed 123))
(def test-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? false
:seed 123))
(def invalid-score-condition (new-invalid-score-iteration-termination-condition))
(def max-score-condition (new-max-score-iteration-termination-condition
:max-score 20.0))
(def max-time-condition (new-max-time-iteration-termination-condition
:max-time-val 10
:max-time-unit :minutes))
(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
:max-n-epoch-no-improve 5))
(def target-score-condition (new-best-score-epoch-termination-condition
:best-expected-score 0.009))
(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))
(def in-mem-saver (new-in-memory-saver))
(def trained-mln
;; defaults to returning the model
(c/train-with-early-stopping
:nn-conf nn-conf
:training-iter train-iter
:testing-iter test-iter
:eval-every-n-epochs 1
:iteration-termination-conditions [invalid-score-condition
max-score-condition
max-time-condition]
:epoch-termination-conditions [score-doesnt-improve-condition
target-score-condition
max-number-epochs-condition]
:save-last-model? true
:model-saver in-mem-saver
:as-code? false))
(def model-evaler
(evaluate-classification :mln trained-mln :iter test-iter))
(println (get-stats :evaler model-evaler))
(ns my.ns
(:require [dl4clj.earlystopping.early-stopping-config :refer [new-early-stopping-config]]
[dl4clj.earlystopping.termination-conditions :refer :all]
[dl4clj.earlystopping.model-saver :refer [new-in-memory-saver new-local-file-model-saver]]
[dl4clj.earlystopping.score-calc :refer [new-ds-loss-calculator]]
[dl4clj.earlystopping.early-stopping-trainer :refer [new-early-stopping-trainer]]
[dl4clj.earlystopping.api.early-stopping-trainer :refer [fit-trainer!]]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.nn.multilayer.multi-layer-network :as mln]
[dl4clj.utils :refer [load-model!]]
[dl4clj.datasets.iterators :as iter]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; start with our network config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def nn-conf
(nn/builder
;; network args
:optimization-algo :stochastic-gradient-descent
:seed 123 :iterations 1 :regularization? true
;; setting layer defaults
:default-activation-fn :relu :default-l2 7.5e-6
:default-weight-init :xavier :default-learning-rate 0.0015
:default-updater :nesterovs :default-momentum 0.98
;; setting layer configuration
:layers {0 {:dense-layer
{:layer-name "example first layer"
:n-in 784 :n-out 500}}
1 {:dense-layer
{:layer-name "example second layer"
:n-in 500 :n-out 100}}
2 {:output-layer
{:n-in 100 :n-out 10
;; layer specific params
:loss-fn :negativeloglikelihood
:activation-fn :softmax
:layer-name "example output layer"}}}
;; multi layer args
:backprop? true
:pretrain? false))
(def mln (c/model-from-conf nn-conf))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; the training/testing data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def train-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? true
:seed 123))
(def test-iter
(iter/new-mnist-data-set-iterator
:batch-size 64
:train? false
:seed 123))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we are going to need termination conditions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; these allow us to control when we exit training
;; this can be based on iterations or epochs
;; iteration termination conditions
(def invalid-score-condition (new-invalid-score-iteration-termination-condition))
(def max-score-condition (new-max-score-iteration-termination-condition
:max-score 20.0))
(def max-time-condition (new-max-time-iteration-termination-condition
:max-time-val 10
:max-time-unit :minutes))
;; epoch termination conditions
(def score-doesnt-improve-condition (new-score-improvement-epoch-termination-condition
:max-n-epoch-no-improve 5))
(def target-score-condition (new-best-score-epoch-termination-condition :best-expected-score 0.009))
(def max-number-epochs-condition (new-max-epochs-termination-condition :max-n 20))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; we also need a way to save our model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; can be in memory or to a local directory
(def in-mem-saver (new-in-memory-saver))
(def local-file-saver (new-local-file-model-saver :directory "resources/tmp/readme/"))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; set up your score calculator
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def score-calcer (new-ds-loss-calculator :iter test-iter
:average? true))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; termination conditions
;; a way to save our model
;; a way to calculate the score of our model on the dataset
(def early-stopping-conf
(new-early-stopping-config
:epoch-termination-conditions [score-doesnt-improve-condition
target-score-condition
max-number-epochs-condition]
:iteration-termination-conditions [invalid-score-condition
max-score-condition
max-time-condition]
:eval-every-n-epochs 5
:model-saver local-file-saver
:save-last-model? true
:score-calculator score-calcer))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; create an early stopping trainer from our data, model and early stopping conf
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def es-trainer (new-early-stopping-trainer :early-stopping-conf early-stopping-conf
:mln mln
:iter train-iter))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; fit and use our early stopping trainer
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def es-trainer-fitted (fit-trainer! es-trainer :as-code? false))
;; when the trainer terminates, you will see something like this
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO Completed training epoch 14
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO New best model: score = 0.005225599372851298,
;; epoch = 14 (previous: score = 0.018243224899038346, epoch = 7)
;;[nREPL-worker-24] BaseEarlyStoppingTrainer INFO Hit epoch termination condition at epoch 14.
;; Details: BestScoreEpochTerminationCondition(0.009)
;; and if we look at the es-trainer-fitted object we see
;;#object[org.deeplearning4j.earlystopping.EarlyStoppingResult 0x5ab74f27 EarlyStoppingResult
;;(terminationReason=EpochTerminationCondition,details=BestScoreEpochTerminationCondition(0.009),
;; bestModelEpoch=14,bestModelScore=0.005225599372851298,totalEpochs=15)]
;; and our model has been saved to /resources/tmp/readme/bestModel.bin
;; there we have our model config, model params and our updater state
;; we can then load this model to use it or continue refining it
(def loaded-model (load-model! :path "resources/tmp/readme/bestModel.bin"
:load-updater? true))
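As a sketch of how the reloaded model might be used, we could run it back through the same evaluation shown in the dl4clj.core example above (this assumes evaluate-classification from dl4clj.nn.api.multi-layer-network is also required, as it was in that example):
;; evaluate the reloaded model against the test iterator defined earlier
(println (get-stats :evaler (evaluate-classification :mln loaded-model :iter test-iter)))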
Transfer Learning (freezing layers)
;; TODO: need to write up examples
dl4j Spark usage
How it is done in dl4clj
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
[dl4clj.eval.api.eval :refer [get-stats]]
[dl4clj.spark.masters.param-avg :as master]
[dl4clj.spark.data.java-rdd :refer [new-java-spark-context
java-rdd-from-iter]]
[dl4clj.spark.api.dl4j-multi-layer :refer [eval-classification-spark-mln
get-spark-context]]
[dl4clj.core :as c]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model config
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def mln-conf
(nn/builder
:optimization-algo :stochastic-gradient-descent
:default-learning-rate 0.006
:layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
1 {:output-layer
{:loss-fn :negativeloglikelihood
:n-in 2 :n-out 3
:activation-fn :soft-max
:weight-init :xavier}}}
:backprop? true
:backprop-type :standard))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def training-master
(master/new-parameter-averaging-training-master
:build? true
:rdd-n-examples 10
:n-workers 4
:averaging-freq 10
:batch-size-per-worker 2
:export-dir "resources/spark/master/"
:rdd-training-approach :direct
:repartition-data :always
:repartition-strategy :balanced
:seed 1234
:save-updater? true
:storage-level :none))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, spark context
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def your-spark-context
(new-java-spark-context :app-name "example app"))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, training data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def iris-iter
(new-iris-data-set-iterator
:batch-size 1
:n-examples 5))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, spark mln
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def fitted-spark-mln
(c/train-with-spark :spark-context your-spark-context
:mln-conf mln-conf
:training-master training-master
:iter iris-iter
:n-epochs 1
:as-code? false))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 6, use the spark context from the spark-mln to create an rdd
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; TODO: eliminate this step
(def our-rdd
(let [sc (get-spark-context fitted-spark-mln :as-code? false)]
(java-rdd-from-iter :spark-context sc
:iter iris-iter)))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 7, evaluate the model and print stats (poor performance of the model is expected)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def eval-obj
(eval-classification-spark-mln
:spark-mln fitted-spark-mln
:rdd our-rdd))
(println (get-stats :evaler eval-obj))
(ns my.ns
(:require [dl4clj.nn.conf.builders.layers :as l]
[dl4clj.nn.conf.builders.nn :as nn]
[dl4clj.datasets.iterators :refer [new-iris-data-set-iterator]]
[dl4clj.eval.api.eval :refer [get-stats]]
[dl4clj.spark.masters.param-avg :as master]
[dl4clj.spark.data.java-rdd :refer [new-java-spark-context java-rdd-from-iter]]
[dl4clj.spark.dl4j-multi-layer :as spark-mln]
[dl4clj.spark.api.dl4j-multi-layer :refer [fit-spark-mln!
eval-classification-spark-mln]]))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 1, create your model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def mln-conf
(nn/builder
:optimization-algo :stochastic-gradient-descent
:default-learning-rate 0.006
:layers {0 (l/dense-layer-builder :n-in 4 :n-out 2 :activation-fn :relu)
1 {:output-layer
{:loss-fn :negativeloglikelihood
:n-in 2 :n-out 3
:activation-fn :soft-max
:weight-init :xavier}}}
:backprop? true
:as-code? false
:backprop-type :standard))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 2, create a training master
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; not all options specified, but most are
(def training-master
(master/new-parameter-averaging-training-master
:build? true
:rdd-n-examples 10
:n-workers 4
:averaging-freq 10
:batch-size-per-worker 2
:export-dir "resources/spark/master/"
:rdd-training-approach :direct
:repartition-data :always
:repartition-strategy :balanced
:seed 1234
:as-code? false
:save-updater? true
:storage-level :none))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 3, create a Spark Multi Layer Network
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def your-spark-context
(new-java-spark-context :app-name "example app" :as-code? false))
;; new-java-spark-context will turn an existing spark-configuration into a java spark context
;; or create a new java spark context with master set to "local[*]" and the app name
;; set to :app-name
(def spark-mln
(spark-mln/new-spark-multi-layer-network
:spark-context your-spark-context
:mln mln-conf
:training-master training-master
:as-code? false))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 4, load your data
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; one way is via a dataset-iterator
;; can make one directly from a dataset (iterator data-set)
;; see: nd4clj.linalg.dataset.api.data-set and nd4clj.linalg.dataset.data-set
;; we are going to use a pre-built one
(def iris-iter
(new-iris-data-set-iterator
:batch-size 1
:n-examples 5
:as-code? false))
;; now let's convert the data into a JavaRDD
(def our-rdd
(java-rdd-from-iter :spark-context your-spark-context
:iter iris-iter))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Step 5, fit and evaluate the model
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(def fitted-spark-mln
(fit-spark-mln!
:spark-mln spark-mln
:rdd our-rdd
:n-epochs 1))
;; this fn also has the option to supply :path-to-data instead of :rdd
;; that path should point to a directory containing a number of dataset objects
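;; a minimal sketch of that variant (the directory path here is hypothetical, for illustration only):
(comment
  (def fitted-from-disk
    (fit-spark-mln!
     :spark-mln spark-mln
     :path-to-data "resources/spark/saved-datasets/"
     :n-epochs 1)))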
(def eval-obj
(eval-classification-spark-mln
:spark-mln fitted-spark-mln
:rdd our-rdd))
;; ideally we would use separate training and testing RDDs, but here we evaluate on
;; the data we trained on
;; let's get the stats for how our model performed
(println (get-stats :evaler eval-obj))
Coming soon
Implement ComputationGraphs and the classes which use them
NLP
Parallelism
TSNE
UI
Author: yetanalytics
Source Code: https://github.com/yetanalytics/dl4clj
License: BSD-2-Clause License
1591611780
How can I find the correct ulimit values for a user account or process on Linux systems?
For proper operation, we must ensure that the correct ulimit values are set after installing various software. The Linux system provides a means of restricting the number of resources that can be used. Limits are set for each Linux user account; however, system limits are also applied separately to each process running for that user. For example, if certain thresholds are too low, the system might not be able to serve web pages using Nginx/Apache or a PHP/Python app. System resource limits can be viewed or set with the ulimit command. Let us see how to use ulimit, which provides control over the resources available to the shell and its processes.
1591993440
We are going to build a full stack Todo App using the MEAN stack (MongoDB, ExpressJS, AngularJS and NodeJS). This is the last part of a three-post tutorial series.
MEAN Stack tutorial series:
AngularJS tutorial for beginners (Part I)
Creating RESTful APIs with NodeJS and MongoDB Tutorial (Part II)
MEAN Stack Tutorial: MongoDB, ExpressJS, AngularJS and NodeJS (Part III) 👈 you are here
Before completing the app, let’s cover some background about this stack. If you’d rather jump to the hands-on part, click here to get started.
1592610180
CentOS Linux 8.2 (2004) has been released. It is a Linux distribution derived from RHEL (Red Hat Enterprise Linux) 8.2 source code. CentOS was created when Red Hat stopped providing RHEL for free. CentOS 8.2 gives you complete control of its open-source software packages and is fully customizable for research needs or for running a high-performance website without the need for license fees. Let us see what’s new in CentOS 8.2 (2004) and how to upgrade an existing CentOS 8.1.1199 server to 8.2.2004 using the command line.
1592595480
Is there a command to print a list of all failed units or services when using systemd on Linux? Can you tell me the systemctl command to list all failed services on Linux?
This quick tutorial explains how to find/list all failed systemd services/units on Linux operating systems using the systemctl command.