In this Keras article, we will learn how to save and load a Keras deep learning model. Keras is a simple and powerful Python library for deep learning.
Since deep learning models can take hours, days, and even weeks to train, it is important to know how to save and load them from a disk.
In this post, you will discover how to save your Keras models to files and load them up again to make predictions.
After reading this tutorial, you will know how to save your model architecture and weights to separate files, and how to save an entire model to a single file for later use.
If you are new to Keras or deep learning, see this step-by-step Keras tutorial.
Keras separates the concerns of saving your model architecture and saving your model weights.
Model weights are saved in the HDF5 format, a grid format that is ideal for storing multi-dimensional arrays of numbers.
The model structure can be described and saved using two different formats: JSON and YAML.
In this post, you will look at three examples of saving and loading your model to a file: saving the architecture to JSON, saving the architecture to YAML, and saving the entire model to a single HDF5 file.
The first two examples save the model architecture and weights separately. The model weights are saved into an HDF5 format file in all cases.
The examples will use the same simple network trained on the Pima Indians onset of diabetes binary classification dataset. This is a small dataset that contains all numerical data and is easy to work with. You can download this dataset and place it in your working directory with the filename “pima-indians-diabetes.csv”.
Confirm that you have TensorFlow v2.x installed (e.g., v2.9 as of June 2022).
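You can confirm the installed version from the command line; this one-liner assumes the python command on your path points at the environment where TensorFlow is installed:
python -c "import tensorflow; print(tensorflow.__version__)"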
Note: Saving models requires that you have the h5py library installed. It is usually installed as a dependency with TensorFlow. You can also install it easily as follows:
sudo pip install h5py
JSON is a simple file format for describing data hierarchically.
Keras provides the ability to describe any model using JSON format with a to_json() function. This can be saved to a file and later loaded via the model_from_json() function that will create a new model from the JSON specification.
The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function.
The example below trains and evaluates a simple model on the Pima Indians dataset. The model is then converted to JSON format and written to model.json in the local directory. The network weights are written to model.h5 in the local directory.
The model and weight data is loaded from the saved files, and a new model is created. It is important to compile the loaded model before it is used. This is so that predictions made using the model can use the appropriate efficient computation from the Keras backend.
The model is evaluated in the same way, printing the same evaluation score.
# MLP for Pima Indians Dataset Serialize to JSON and HDF5
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Dense
import numpy
import os
# fix random seed for reproducibility
numpy.random.seed(7)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
# later...
# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
Running this example provides the output below.
acc: 78.78%
Saved model to disk
Loaded model from disk
acc: 78.78%
The JSON format of the model looks like the following:
{
"class_name":"Sequential",
"config":{
"name":"sequential_1",
"layers":[
{
"class_name":"Dense",
"config":{
"name":"dense_1",
"trainable":true,
"batch_input_shape":[
null,
8
],
"dtype":"float32",
"units":12,
"activation":"relu",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
},
{
"class_name":"Dense",
"config":{
"name":"dense_2",
"trainable":true,
"dtype":"float32",
"units":8,
"activation":"relu",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
},
{
"class_name":"Dense",
"config":{
"name":"dense_3",
"trainable":true,
"dtype":"float32",
"units":1,
"activation":"sigmoid",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
}
]
},
"keras_version":"2.2.5",
"backend":"tensorflow"
}
Note: This method only applies to TensorFlow 2.5 or earlier. If you run it in later versions of TensorFlow, you will see a RuntimeError with the message “Method model.to_yaml() has been removed due to security risk of arbitrary code execution. Please use model.to_json() instead.”
This example is much the same as the above JSON example, except the YAML format is used for the model specification.
Note, this example assumes that you have PyYAML 5 installed:
sudo pip install PyYAML
In this example, the model is described using YAML, saved to file model.yaml, and later loaded into a new model via the model_from_yaml() function.
Weights are handled the same way as above in the HDF5 format as model.h5.
# MLP for Pima Indians Dataset serialize to YAML and HDF5
from tensorflow.keras.models import Sequential, model_from_yaml
from tensorflow.keras.layers import Dense
import numpy
import os
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# serialize model to YAML
model_yaml = model.to_yaml()
with open("model.yaml", "w") as yaml_file:
yaml_file.write(model_yaml)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
# later...
# load YAML and create model
yaml_file = open('model.yaml', 'r')
loaded_model_yaml = yaml_file.read()
yaml_file.close()
loaded_model = model_from_yaml(loaded_model_yaml)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
Running the example displays the following output.
acc: 78.78%
Saved model to disk
Loaded model from disk
acc: 78.78%
The model described in YAML format looks like the following:
backend: tensorflow
class_name: Sequential
config:
  layers:
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      batch_input_shape: !!python/tuple
      - null
      - 8
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config:
          distribution: uniform
          mode: fan_avg
          scale: 1.0
          seed: null
      kernel_regularizer: null
      name: dense_1
      trainable: true
      units: 12
      use_bias: true
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config:
          distribution: uniform
          mode: fan_avg
          scale: 1.0
          seed: null
      kernel_regularizer: null
      name: dense_2
      trainable: true
      units: 8
      use_bias: true
  - class_name: Dense
    config:
      activation: sigmoid
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config:
          distribution: uniform
          mode: fan_avg
          scale: 1.0
          seed: null
      kernel_regularizer: null
      name: dense_3
      trainable: true
      units: 1
      use_bias: true
  name: sequential_1
keras_version: 2.2.5
Keras also supports a simpler interface to save both the model weights and model architecture together into a single H5 file.
Saving the model in this way includes everything you need to know about the model, including the model weights, the model architecture, the model compilation details (loss and metrics), and the state of the optimizer.
This means that you can load and use the model directly without having to re-compile it as you had to in the examples above.
Note: This is the preferred way for saving and loading your Keras model.
You can save your model by calling the save() function on the model and specifying the filename.
The example below demonstrates this by first fitting a model, evaluating it, and saving it to the file model.h5.
# MLP for Pima Indians Dataset saved to single file
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# load pima indians dataset
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# define model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# save model and architecture to single file
model.save("model.h5")
print("Saved model to disk")
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
Running the example fits the model, summarizes the model’s performance on the training dataset, and saves the model to file.
acc: 77.73%
Saved model to disk
You can later load this model from the file and use it.
Note that in the Keras library, there is another function doing the same, as follows:
...
# equivalent to: model.save("model.h5")
from tensorflow.keras.models import save_model
save_model(model, "model.h5")
Your saved model can then be loaded later by calling the load_model() function and passing the filename. The function returns the model with the same architecture and weights.
In this case, you load the model, summarize the architecture, and evaluate it on the same dataset to confirm the weights and architecture are the same.
# load and evaluate a saved model
from numpy import loadtxt
from tensorflow.keras.models import load_model
# load model
model = load_model('model.h5')
# summarize model.
model.summary()
# load dataset
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# evaluate the model
score = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
Running the example first loads the model, prints a summary of the model architecture, and then evaluates the loaded model on the same dataset.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
The model achieves the same accuracy score, which in this case is 77%.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 12) 108
_________________________________________________________________
dense_2 (Dense) (None, 8) 104
_________________________________________________________________
dense_3 (Dense) (None, 1) 9
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________
acc: 77.73%
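Beyond re-evaluating it, a reloaded model can also be used to make predictions on new data. The sketch below is not part of the original tutorial; it assumes model.h5 and pima-indians-diabetes.csv are in the working directory and rounds the sigmoid output probabilities into 0/1 class labels.
# make class predictions with the loaded model (illustrative sketch)
from numpy import loadtxt
from tensorflow.keras.models import load_model
# load the previously saved model
model = load_model('model.h5')
# load the input data
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
X = dataset[:, 0:8]
# probabilities from the sigmoid output layer
probabilities = model.predict(X, verbose=0)
# round the probabilities to 0/1 class labels
predictions = (probabilities > 0.5).astype("int32")
print(predictions[:5].flatten())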
While saving and loading a Keras model using the HDF5 format is the recommended approach, TensorFlow also supports the SavedModel format, which stores the model as protocol buffer files. Saving and loading this format is considered faster, but it produces multiple files. The syntax is the same, except that you do not need to provide the .h5 extension in the filename:
# save model and architecture to a SavedModel directory
model.save("model")
# ... later
# load model
model = load_model('model')
# print summary
model.summary()
These will create a directory “model” with the following files:
model/
|-- assets/
|-- keras_metadata.pb
|-- saved_model.pb
`-- variables/
|-- variables.data-00000-of-00001
`-- variables.index
This is also the format used to save a model in TensorFlow v1.x. You may encounter this when you download a pre-trained model from TensorFlow Hub.
Original article sourced at: https://machinelearningmastery.com
Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.
Name | Framework(s)
---|---
dotnet-script (global tool) | net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as Nuget) | net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core | netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel | netstandard2.0
Dotnet.Script.DependencyModel.Nuget | netstandard2.0
The only thing we need to install is the .NET Core 3.1 SDK or the .NET 5.0 SDK.
.NET Core 2.1 introduced the concept of global tools, meaning that you can install dotnet-script using nothing but the .NET CLI.
dotnet tool install -g dotnet-script
You can invoke the tool using the following command: dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.
The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.
dotnet tool list -g
Package Id Version Commands
---------------------------------------------
dotnet-script 0.22.0 dotnet-script
dotnet tool uninstall dotnet-script -g
Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.
On Windows, the tool can also be installed with Chocolatey:
choco install dotnet.script
We also provide a PowerShell script for installation.
(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex
On Linux and macOS, a shell install script can be piped to bash:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash
If permission is denied, we can try with sudo:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash
A Dockerfile for running dotnet-script in a Linux container is available. Build:
cd build
docker build -t dotnet-script -f Dockerfile ..
And run:
docker run -it dotnet-script --version
You can manually download all the releases in zip format from the GitHub releases page.
Our typical helloworld.csx might look like this:
Console.WriteLine("Hello world!");
That is all it takes and we can execute the script. Args are accessible via the global Args array.
dotnet script helloworld.csx
Simply create a folder somewhere on your system and issue the following command.
dotnet script init
This will create main.csx along with the launch configuration needed to debug the script in VS Code.
.
├── .vscode
│ └── launch.json
├── main.csx
└── omnisharp.json
We can also initialize a folder using a custom filename.
dotnet script init custom.csx
Instead of main.csx, which is the default, we now have a file named custom.csx.
.
├── .vscode
│ └── launch.json
├── custom.csx
└── omnisharp.json
Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.
Scripts can be executed directly from the shell as if they were executables.
foo.csx arg1 arg2 arg3
OSX/Linux
Just like all scripts, on OSX/Linux you need to have a #! directive and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx, it will automatically have the #! directive and be marked as executable.
The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script
#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");
You can execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution more.
foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3
All arguments after -- are passed to the script in the following way:
dotnet script foo.csx -- arg1 arg2 arg3
Then you can access the arguments in the script context using the global Args collection:
foreach (var arg in Args)
{
Console.WriteLine(arg);
}
All arguments before -- are processed by dotnet script. For example, the following command line
dotnet script -d foo.csx -- -d
will pass the -d before -- to dotnet script and enable debug mode, whereas the -d after -- is passed to the script for its own interpretation of the argument.
dotnet script has built-in support for referencing NuGet packages directly from within the script.
#r "nuget: AutoMapper, 6.1.0"
Note: Omnisharp needs to be restarted after adding a new package reference
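As a small self-contained illustration (not part of the original README), a script that references a package inline might look like the following. It assumes the Newtonsoft.Json package; any package available on your configured feeds works the same way.
#r "nuget: Newtonsoft.Json, 13.0.1"
using Newtonsoft.Json;

// the package referenced above is available just like a regular project reference
var person = new { Name = "Ada", Age = 36 };
Console.WriteLine(JsonConvert.SerializeObject(person));
You can run it with dotnet script json.csx (the file name here is hypothetical).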
We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp, which provides language services for packages resolved from these package sources.
As an alternative to maintaining a local NuGet.Config file, we can define these package sources globally, either at the user level or at the computer level, as described in Configuring NuGet Behaviour.
It is also possible to specify package sources when executing the script.
dotnet script foo.csx -s https://SomePackageSource
Multiple package sources can be specified like this:
dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource
Dotnet-Script can create a standalone executable or DLL for your script.
Switch | Long switch | Description
---|---|---
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory.
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script.
 | --dll | Publish to a .dll instead of an executable.
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Default is "Debug".
-d | --debug | Enables debug output.
-r | --runtime | The runtime used when publishing the self contained executable. Defaults to your current runtime.
The executable can be run directly, independent of any installed .NET runtime, while the DLL can be run using the dotnet CLI like this:
dotnet script exec {path_to_dll} -- arg1 arg2
We provide two types of caching, the dependency cache and the execution cache, which are explained in detail below. In order for any of these caches to be enabled, all NuGet package references must be specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file, from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.
In order to execute a script, it needs to be compiled first, and since that is a CPU- and time-consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed, the hash is computed and compared with the hash from the previous compilation. If they match, there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.
You can override this automatic caching by passing the --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time the script is executed.
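For example, something like the following forces a fresh restore and recompilation for a single run (the flag placement here mirrors the -d example earlier):
dotnet script --no-cache foo.csx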
The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):
- DOTNET_SCRIPT_CACHE_LOCATION, if defined and value is not empty.
- $XDG_CACHE_HOME if defined, otherwise $HOME/.cache (Linux).
- ~/Library/Caches (macOS).
- Path.GetTempPath for the platform.
The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (start debugging).
Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.
A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.
The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.
└── contentFiles
└── csx
└── netstandard2.0
└── main.csx
This example contains just the main.csx file in the root folder, but packages may have multiple script files, either in the root folder or in subfolders below the root folder.
When loading a script package, we will look for an entry point script, identified as a main.csx file in the root folder. If the entry point script cannot be determined, we will simply load all the script files in the package.
The advantage of using an entry point script is that we can control loading other scripts from the package.
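As a sketch of that idea (the file names below are hypothetical), the entry point decides which other scripts from the package get pulled in:
// main.csx - hypothetical entry point of a script package
#load "helpers.csx"
#load "targets.csx"

Console.WriteLine("script package loaded");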
To consume a script package, all we need to do is specify the NuGet package in the #load directive.
The following example loads the simple-targets package that contains script files to be included in our script.
#load "nuget:simple-targets-csx, 6.0.0"
using static SimpleTargets;
var targets = new TargetDictionary();
targets.Add("default", () => Console.WriteLine("Hello, world!"));
Run(Args, targets);
Note: Debugging also works for script packages, so we can easily step into the scripts that are brought in using the #load directive.
Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.
This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.
This Gist contains a script that prints out "Hello World"
We can execute the script like this
dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx
That is a pretty long URL, so why not make it a TinyURL, like this:
dotnet script https://tinyurl.com/y8cda9zt
A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);
Tip: Put these methods as top-level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.
This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.
The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.
Once dotnet-script starts, you will see a prompt for input. You can start typing C# code there.
~$ dotnet script
> var x = 1;
> x+x
2
If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:
~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>
The REPL also supports inline NuGet packages, meaning that NuGet packages can be installed into the REPL from within the REPL. This is done via our #r and #load from NuGet support and uses identical syntax.
~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]
Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.
~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();
Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:
Command | Description
---|---
#load | Load a script into the REPL (same as #load usage in CSX)
#r | Load an assembly into the REPL (same as #r usage in CSX)
#reset | Reset the REPL back to initial state (without restarting it)
#cls | Clear the console screen without resetting the REPL state
#exit | Exits the REPL
You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i flag.
For example, given the following CSX script:
var msg = "Hello World";
Console.WriteLine(msg);
When you run this with the -i flag, Hello World is printed, the REPL starts, and the msg variable is available in the REPL context.
~$ dotnet script foo.csx -i
Hello World
>
You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:
~$ dotnet script
> #load "foo.csx"
Hello World
>
The following example shows how we can pipe data in and out of a script.
The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper());
}
We can now simply pipe the output from one command into our script like this.
echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT
The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickProcess}"
}
To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.
public static void WaitForDebugger()
{
Console.WriteLine("Attach Debugger (VS Code)");
while(!Debugger.IsAttached)
{
}
}
To debug the script when executing it from the command line we can do something like
WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}
Now when we run the script from the command line we will get
$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)
This now gives us a chance to attach the debugger before stepping into the script: from VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.
Once that is done we should see our breakpoint being hit.
By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attach a debugger to long-running scripts.
There are, however, situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.
We can specify this when executing the script.
dotnet script foo.csx -c release
Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into a compiler error. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.
Nullable reference types are turned off by default, and the way we enable them is by using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.
#!/usr/bin/env dotnet-script
#nullable enable
string name = null;
Trying to execute the script will result in the following error
main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.
We will also see this when working with scripts in VS Code under the problems panel.
Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License
Form objects decoupled from your models.
Reform gives you a form object with validations and nested setup of models. It is completely framework-agnostic and doesn't care about your database.
Although reform can be used in any Ruby framework, it comes with Rails support, works with simple_form and other form gems, allows nesting forms to implement has_one and has_many relationships, can compose a form from multiple objects and gives you coercion.
Reform is part of the Trailblazer framework. Full documentation is available on the project site.
Temporary note: Reform 2.2 does not automatically load Rails files anymore (e.g. ActiveModel::Validations). You need the reform-rails gem, see Installation.
Forms are defined in separate classes. Often, these classes partially map to a model.
class AlbumForm < Reform::Form
property :title
validates :title, presence: true
end
Fields are declared using ::property. Validations work exactly as you know them from Rails or other frameworks. Note that validations no longer go into the model.
Forms have a ridiculously simple API with only a handful of public methods.
- #initialize always requires a model that the form represents.
- #validate(params) updates the form's fields with the input data (only the form, not the model) and then runs all validations. The return value is the boolean result of the validations.
- #errors returns validation messages in a classic ActiveModel style.
- #sync writes form data back to the model. This will only use setter methods on the model(s).
- #save (optional) will call #save on the model and nested models. Note that this implies a #sync call.
- #prepopulate! (optional) will run pre-population hooks to "fill out" your form before rendering.
In addition to the main API, forms expose accessors to the defined properties. This is used for rendering or manual operations.
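For example (a minimal sketch based on the AlbumForm defined above), the :title property gives the form a #title reader and a #title= writer; assigning through the writer changes only the form, not the model, until #sync or #save is called:
form = AlbumForm.new(Album.new)
form.title          # reader for the :title property
form.title = "Rio"  # writer; the underlying Album stays untouched until #sync/#save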
In your controller or operation you create a form instance and pass in the models you want to work on.
class AlbumsController
def new
@form = AlbumForm.new(Album.new)
end
This will also work as an editing form with an existing album.
def edit
@form = AlbumForm.new(Album.find(1))
end
Reform will read property values from the model in setup. In our example, the AlbumForm will call album.title to populate the title field.
Your @form is now ready to be rendered; either do it yourself or use something like Rails' #form_for, simple_form or formtastic.
= form_for @form do |f|
= f.input :title
Nested forms and collections can be easily rendered with fields_for, etc. Note that you no longer pass the model to the form builder, but the Reform instance.
Optionally, you might want to use the #prepopulate! method to pre-populate fields and prepare the form for rendering.
After form submission, you need to validate the input.
class SongsController
def create
@form = SongForm.new(Song.new)
#=> params: {song: {title: "Rio", length: "366"}}
if @form.validate(params[:song])
The #validate method first updates the values of the form - the underlying model is still treated as immutable and remains unchanged. It then runs all validations you provided in the form.
It's the only entry point for updating the form. This is per design, as separating writing and validation doesn't make sense for a form.
This allows rendering the form after validate with the data that has been submitted. However, don't get confused: the model's values are still the old, original values and are only changed after a #save or #sync operation.
After validation, you have two choices: either call #save and let Reform sort out the rest, or call #sync, which will write all the properties back to the model. In a nested form, this works recursively, of course.
It's then up to you what to do with the updated models - they're still unsaved.
The easiest way to save the data is to call #save on the form.
if @form.validate(params[:song])
@form.save #=> populates album with incoming data
# by calling @form.album.title=.
else
# handle validation errors.
end
This will sync the data to the model and then call album.save.
Sometimes, you need to do saving manually.
Reform allows default values to be provided for properties.
class AlbumForm < Reform::Form
property :price_in_cents, default: 9_95
end
Calling #save with a block will provide a nested hash of the form's properties and values. This does not call #save on the models and allows you to implement the saving yourself.
The block parameter is a nested hash of the form input.
@form.save do |hash|
hash #=> {title: "Greatest Hits"}
Album.create(hash)
end
You can always access the form's model. This is helpful when you were using populators to set up objects when validating.
@form.save do |hash|
album = @form.model
album.update_attributes(hash[:album])
end
Reform provides support for nested objects. Let's say the Album model keeps some associations.
class Album < ActiveRecord::Base
has_one :artist
has_many :songs
end
The implementation details do not really matter here. As long as your album exposes readers and writers like Album#artist and Album#songs, you can define nested forms.
class AlbumForm < Reform::Form
property :title
validates :title, presence: true
property :artist do
property :full_name
validates :full_name, presence: true
end
collection :songs do
property :name
end
end
You can also reuse an existing form from elsewhere using :form.
property :artist, form: ArtistForm
Reform will wrap defined nested objects in their own forms. This happens automatically when instantiating the form.
album.songs #=> [<Song name:"Run To The Hills">]
form = AlbumForm.new(album)
form.songs[0] #=> <SongForm model: <Song name:"Run To The Hills">>
form.songs[0].name #=> "Run To The Hills"
When rendering a nested form you can use the form's readers to access the nested forms.
= text_field :title, @form.title
= text_field "artist[name]", @form.artist.name
Or use something like #fields_for in a Rails environment.
= form_for @form do |f|
= f.text_field :title
= f.fields_for :artist do |a|
= a.text_field :name
validate will assign values to the nested forms. sync and save work analogously to the non-nested form, just in a recursive way.
The block form of #save would give you the following data.
@form.save do |nested|
nested #=> {title: "Greatest Hits",
# artist: {name: "Duran Duran"},
# songs: [{title: "Hungry Like The Wolf"},
# {title: "Last Chance On The Stairways"}]
# }
end
The manual saving with block is not encouraged. You should rather check the Disposable docs to find out how to implement your manual tweak with the official API.
Very often, you need to give Reform some information on how to create or find nested objects when validating. This directive is called a populator and is documented here.
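As a rough sketch of the idea (see the populator documentation for the exact options), the :populate_if_empty option tells a nested collection which class to instantiate when an incoming fragment has no matching nested object yet:
class AlbumForm < Reform::Form
  collection :songs, populate_if_empty: Song do
    property :name
  end
end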
Add this line to your Gemfile:
gem "reform"
Reform works fine with Rails 3.1-5.0. However, inheritance of validations with ActiveModel::Validations is broken in Rails 3.2 and 4.0.
Since Reform 2.2, you have to add the reform-rails gem to your Gemfile to automatically load ActiveModel/Rails files.
gem "reform-rails"
Since Reform 2.0 you need to specify which validation backend you want to use (unless you're in a Rails environment where ActiveModel will be used).
To use ActiveModel (not recommended because it is very outdated):
require "reform/form/active_model/validations"
Reform::Form.class_eval do
include Reform::Form::ActiveModel::Validations
end
To use dry-validation (recommended).
require "reform/form/dry"
Reform::Form.class_eval do
feature Reform::Form::Dry
end
Put this in an initializer or on top of your script.
Reform allows you to map multiple models to one form. The complete documentation is here; however, this is how it works.
class AlbumForm < Reform::Form
include Composition
property :id, on: :album
property :title, on: :album
property :songs, on: :cd
property :cd_id, on: :cd, from: :id
end
When initializing a composition, you have to pass a hash that contains the composees.
AlbumForm.new(album: album, cd: CD.find(1))
Reform comes with many more optional features, like hash fields, coercion, virtual fields, and so on. Check the full documentation here.
Reform is part of the Trailblazer project. Please buy my book to support the development and learn everything about Reform - there are two chapters dedicated to Reform!
By explicitly defining the form layout using ::property, there is no more need for protecting from unwanted input. strong_parameter or attr_accessible become obsolete. Reform will simply ignore undefined incoming parameters.
Temporary note: This is the README and API for Reform 2. On the public API, only a few tiny things have changed. Here are the Reform 1.2 docs.
Anyway, please upgrade and report problems and do not simply assume that we will magically find out what needs to get fixed. When in trouble, join us on Gitter.
Full documentation for Reform is available online, or support us and grab the Trailblazer book. There is an Upgrading Guide to help you migrate through versions.
Great thanks to Blake Education for giving us the freedom and time to develop this project in 2013 while working on their project.
Author: trailblazer
Source code: https://github.com/trailblazer/reform
License: MIT license
Welcome to the DataFlair Keras Tutorial. This tutorial will introduce you to everything you need to know to get started with Keras. You will discover the characteristics, features, and various other properties of Keras. This article also explains the different neural network layers and the pre-trained models available in Keras. You will see how Keras makes it easier to try and experiment with new neural network architectures, and how it lets you implement new ideas quickly and efficiently.
Keras is an open-source deep learning framework developed in Python. Developers favor Keras because it is user-friendly, modular, and extensible. Keras allows developers to experiment quickly with neural networks.
Keras is a high-level API and uses Tensorflow, Theano, or CNTK as its backend. It provides a very clean and easy way to create deep learning models.
Keras has a number of distinguishing characteristics and offers several benefits over other deep learning frameworks.
Before installing Keras, you should have one of its backends installed; we prefer TensorFlow. Install TensorFlow and Keras using the pip Python package installer.
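The commands themselves are one-liners; installing the tensorflow package also provides the bundled tf.keras, and the standalone keras package can be installed alongside it:
pip install tensorflow
pip install keras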
The basic data structure of Keras is the model, which defines how to organize layers. The simplest type of model is the Sequential model, a linear stack of layers. For more flexible architectures, Keras provides the Functional API, which lets you take multiple inputs, produce multiple outputs, and define more complex models.
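To make the contrast concrete, here is a minimal sketch (layer sizes are arbitrary) of the same small network written first with the Sequential model and then with the Functional API:
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input

# Sequential model: layers are stacked one after another
seq_model = Sequential([
    Dense(16, activation='relu', input_shape=(8,)),
    Dense(1, activation='sigmoid'),
])

# Functional API: layers are called on tensors, which allows multiple inputs and outputs
inputs = Input(shape=(8,))
x = Dense(16, activation='relu')(inputs)
outputs = Dense(1, activation='sigmoid')(x)
func_model = Model(inputs=inputs, outputs=outputs)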
A model is the basic data structure of Keras. Keras models define how to organize layers. In this article, we will discuss Keras models and their two types with examples. We will also learn about model subclassing, through which we can create our own fully customizable models.
Models in Keras are available in two types: the Sequential model and the Model class used with the Functional API.
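For the model subclassing mentioned above, a hedged sketch looks like this: you subclass tf.keras.Model, create the layers in __init__, and define the forward pass in call():
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(16, activation='relu')
        self.out = tf.keras.layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        # the forward pass is written explicitly, which is what makes it fully customizable
        return self.out(self.hidden(inputs))

model = MyModel()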
In this video on Keras, you will understand what Keras is and why we need it, how to compose different models in Keras, such as the Sequential model and the functional model, how to define the inputs, how to connect layers, and finally see a hands-on demo.
Why Keras is important
Keras is an open-source neural network library written in Python that runs on top of Theano or TensorFlow. It is designed to be modular, fast, and easy to use. Keras makes it very quick to build a network model. If you want to make a simple network model with a few lines of code, Keras can help you with that.
Link: https://www.youtube.com/watch?v=nS1J-2uoKto