Optimum: Hugging Face Optimum

Hugging Face Optimum

🤗 Optimum is an extension of 🤗 Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.

The AI ecosystem evolves quickly, and more and more specialized hardware, each with its own optimizations, emerges every day. Optimum enables users to efficiently use any of these platforms with the same ease that is inherent to Transformers.

Integration with Hardware Partners

Optimum aims to broaden the range of hardware that users can target to train and fine-tune their models.

To achieve this, we are collaborating with the following hardware manufacturers in order to provide the best transformers integration:

  • Graphcore IPUs - IPUs are a completely new kind of massively parallel processor to accelerate machine intelligence. More information here.
  • Habana Gaudi Processor (HPU) - HPUs are designed to maximize training throughput and efficiency. More information here.
  • Intel - Enabling the usage of Intel tools to accelerate inference on Intel architectures. More information about Neural Compressor and OpenVINO.
  • More to come soon! ⭐

Installation

🤗 Optimum can be installed using pip as follows:

python -m pip install optimum

If you'd like to use the accelerator-specific features of 🤗 Optimum, you can install the required dependencies according to the table below:

Accelerator                  | Installation
ONNX Runtime                 | python -m pip install optimum[onnxruntime]
Intel Neural Compressor      | python -m pip install optimum[neural-compressor]
OpenVINO                     | python -m pip install optimum[openvino,nncf]
Graphcore IPU                | python -m pip install optimum[graphcore]
Habana Gaudi Processor (HPU) | python -m pip install optimum[habana]

If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you can install the base library from source as follows:

python -m pip install git+https://github.com/huggingface/optimum.git

For the accelerator-specific features, you can install them by appending #egg=optimum[accelerator_type] to the pip command, e.g.

python -m pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]

Optimizing models for inference

Along with supporting dedicated AI hardware for training, Optimum also provides inference optimizations for various frameworks and platforms.

Optimum enables the usage of popular compression techniques such as quantization and pruning by supporting ONNX Runtime along with Intel Neural Compressor and OpenVINO NNCF.

Feature                            | ONNX Runtime  | Neural Compressor | OpenVINO
Post-training Dynamic Quantization | ✔️            | ✔️                | N/A
Post-training Static Quantization  | ✔️            | ✔️                | ✔️
Quantization Aware Training (QAT)  | Stay tuned! ⭐ | ✔️                | ✔️
Pruning                            | N/A           | ✔️                | ✔️

Quick tour

Check out the examples below to see how 🤗 Optimum can be used to train and run inference on various hardware accelerators.

Accelerated inference

ONNX Runtime

To accelerate inference with ONNX Runtime, 🤗 Optimum uses configuration objects to define parameters for graph optimization and quantization. These objects are then used to instantiate dedicated optimizers and quantizers.

Before applying quantization or optimization, we first need to load our model. To load a model and run inference with ONNX Runtime, you can simply replace the canonical Transformers AutoModelForXxx class with the corresponding ORTModelForXxx class. If you want to load from a PyTorch checkpoint, set from_transformers=True to export your model to the ONNX format.

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
save_directory = "tmp/onnx/"
# Load a model from transformers and export it to ONNX
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, from_transformers=True)
# Save the ONNX model and tokenizer
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)

Let's see now how we can apply dynamic quantization with ONNX Runtime:

from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer

# Define the quantization methodology
qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(ort_model)
# Apply dynamic quantization on the model
quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)

In this example, we've quantized a model from the Hugging Face Hub; in the same manner, we can quantize a model hosted locally by providing the path to the directory containing the model weights. Applying the quantize() method produces a model_quantized.onnx file that can be used to run inference. Here's an example of how to load an ONNX Runtime model and generate predictions with it:

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer

model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(save_directory)
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
results = classifier("I love burritos!")
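
For reference, the pipeline call returns a list of dictionaries, each with a label and a score. A quick check of the prediction could look like this (the score shown is only illustrative and depends on the quantized model):

print(results)
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]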

You can find more examples in the documentation and in the examples.

Intel

To load a model and run inference with OpenVINO Runtime, you can just replace your AutoModelForXxx class with the corresponding OVModelForXxx class. If you want to load a PyTorch checkpoint, set from_transformers=True to convert your model to the OpenVINO IR (Intermediate Representation).

- from transformers import AutoModelForSequenceClassification
+ from optimum.intel.openvino import OVModelForSequenceClassification
  from transformers import AutoTokenizer, pipeline

  # Download a tokenizer and model from the Hub and convert to OpenVINO format
  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model = OVModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)

  # Run inference!
  classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
  results = classifier("He's a dreadful magician.")

You can find more examples in the documentation and in the examples.

Accelerated training

Habana

To train transformers on Habana's Gaudi processors, 🤗 Optimum provides a GaudiTrainer that is very similar to the 🤗 Transformers Trainer. Here is a simple example:

- from transformers import Trainer, TrainingArguments
+ from optimum.habana import GaudiTrainer, GaudiTrainingArguments

  # Download a pretrained model from the Hub
  model = AutoModelForXxx.from_pretrained("bert-base-uncased")

  # Define the training arguments
- training_args = TrainingArguments(
+ training_args = GaudiTrainingArguments(
      output_dir="path/to/save/folder/",
+     use_habana=True,
+     use_lazy_mode=True,
+     gaudi_config_name="Habana/bert-base-uncased",
      ...
  )

  # Initialize the trainer
- trainer = Trainer(
+ trainer = GaudiTrainer(
      model=model,
      args=training_args,
      train_dataset=train_dataset,
      ...
  )

  # Use Habana Gaudi processor for training!
  trainer.train()

You can find more examples in the documentation and in the examples.

Graphcore

To train transformers on Graphcore's IPUs, 🤗 Optimum provides an IPUTrainer that is very similar to the 🤗 Transformers Trainer. Here is a simple example:

- from transformers import Trainer, TrainingArguments
+ from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

  # Download a pretrained model from the Hub
  model = AutoModelForXxx.from_pretrained("bert-base-uncased")

  # Define the training arguments
- training_args = TrainingArguments(
+ training_args = IPUTrainingArguments(
      output_dir="path/to/save/folder/",
+     ipu_config_name="Graphcore/bert-base-ipu", # Any IPUConfig on the Hub or stored locally
      ...
  )

  # Define the configuration to compile and put the model on the IPU
+ ipu_config = IPUConfig.from_pretrained(training_args.ipu_config_name)

  # Initialize the trainer
- trainer = Trainer(
+ trainer = IPUTrainer(
      model=model,
+     ipu_config=ipu_config,
      args=training_args,
      train_dataset=train_dataset,
      ...
  )

  # Use Graphcore IPU for training!
  trainer.train()

You can find more examples in the documentation and in the examples.

ONNX Runtime

To train transformers with ONNX Runtime's acceleration features, 🤗 Optimum provides an ORTTrainer that is very similar to the 🤗 Transformers Trainer. Here is a simple example:

- from transformers import Trainer, TrainingArguments
+ from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

  # Download a pretrained model from the Hub
  model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

  # Define the training arguments
- training_args = TrainingArguments(
+ training_args = ORTTrainingArguments(
      output_dir="path/to/save/folder/",
      optim="adamw_ort_fused",
      ...
  )

  # Create a ONNX Runtime Trainer
- trainer = Trainer(
+ trainer = ORTTrainer(
      model=model,
      args=training_args,
      train_dataset=train_dataset,
+     feature="sequence-classification", # The model type to export to ONNX
      ...
  )

  # Use ONNX Runtime for training!
  trainer.train()

You can find more examples in the documentation and in the examples.


Download Details:

Author: Huggingface
Source Code: https://github.com/huggingface/optimum 
License: Apache-2.0 license

#python #training #optimization #transformers #pytorch #quantization 


QPsolvers: Quadratic Programming Solvers in Python with A Unified API

QP Solvers for Python

Unified interface to convex Quadratic Programming (QP) solvers available in Python.

Installation

Using PyPI

pip install qpsolvers

Using conda

conda install qpsolvers -c conda-forge

Check out the documentation for Windows instructions.

Usage

The library provides a one-stop shop solve_qp function with a solver keyword argument to select the backend solver. It solves convex quadratic programs in standard form:
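
Written out, the standard form (using the argument names P, q, G, h, A, b from the example below; lb and ub are the optional box bounds that solve_qp also accepts) is:

$$
\begin{aligned}
\underset{x}{\text{minimize}} \quad & \tfrac{1}{2} x^T P x + q^T x \\
\text{subject to} \quad & G x \leq h \\
& A x = b \\
& lb \leq x \leq ub
\end{aligned}
$$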

Vector inequalities apply coordinate by coordinate. The function returns the solution $x^*$ found by the solver, or None in case of failure or an infeasible problem. All solvers require the problem to be convex, meaning the matrix $P$ should be positive semi-definite. Some solvers further require the problem to be strictly convex, meaning $P$ should be positive definite.

Dual multipliers: alternatively, the solve_problem function returns a more complete solution object containing both the primal solution and its corresponding dual multipliers.

Example

To solve a quadratic program, build the matrices that define it and call the solve_qp function:

import numpy as np
from qpsolvers import solve_qp

M = np.array([[1.0, 2.0, 0.0], [-8.0, 3.0, 2.0], [0.0, 1.0, 1.0]])
P = M.T @ M  # this is a positive definite matrix
q = np.array([3.0, 2.0, 3.0]) @ M
G = np.array([[1.0, 2.0, 1.0], [2.0, 0.0, 1.0], [-1.0, 2.0, -1.0]])
h = np.array([3.0, 2.0, -2.0])
A = np.array([1.0, 1.0, 1.0])
b = np.array([1.0])

x = solve_qp(P, q, G, h, A, b, solver="proxqp")
print(f"QP solution: x = {x}")

This example outputs the solution [0.30769231, -0.69230769, 1.38461538]. It is also possible to get dual multipliers at the solution, as shown in this example.
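
As a rough sketch of the dual-multiplier workflow mentioned above (assuming a qpsolvers release that ships the Problem and solve_problem API; the Solution attribute names below follow the library's documentation but may differ across versions):

from qpsolvers import Problem, solve_problem

problem = Problem(P, q, G, h, A, b)      # reuse the matrices defined above
solution = solve_problem(problem, solver="proxqp")
print("primal:", solution.x)             # same primal solution as solve_qp
print("ineq. duals:", solution.z)        # multipliers for G x <= h
print("eq. duals:", solution.y)          # multipliers for A x = b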

Solvers

Solver   | Keyword  | Algorithm            | API            | License      | Warm-start
Clarabel | clarabel | Interior point       | Sparse         | Apache-2.0   | ✖️
CVXOPT   | cvxopt   | Interior point       | Dense          | GPL-3.0      | ✔️
ECOS     | ecos     | Interior point       | Sparse         | GPL-3.0      | ✖️
Gurobi   | gurobi   | Interior point       | Sparse         | Commercial   | ✖️
HiGHS    | highs    | Active set           | Sparse         | MIT          | ✖️
MOSEK    | mosek    | Interior point       | Sparse         | Commercial   | ✔️
NPPro    | nppro    | Active set           | Dense          | Commercial   | ✔️
OSQP     | osqp     | Augmented Lagrangian | Sparse         | Apache-2.0   | ✔️
ProxQP   | proxqp   | Augmented Lagrangian | Dense & Sparse | BSD-2-Clause | ✔️
qpOASES  | qpoases  | Active set           | Dense          | LGPL-2.1     |
qpSWIFT  | qpswift  | Interior point       | Sparse         | GPL-3.0      | ✖️
quadprog | quadprog | Active set           | Dense          | GPL-2.0      | ✖️
SCS      | scs      | Augmented Lagrangian | Sparse         | MIT          | ✔️

Matrix arguments are NumPy arrays for dense solvers and SciPy Compressed Sparse Column (CSC) matrices for sparse ones.
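
For instance, the dense arrays from the example above can be handed to a sparse backend in CSC form (a small sketch; the equality constraint is dropped here to keep it short):

import scipy.sparse as spa

x_sparse = solve_qp(spa.csc_matrix(P), q, spa.csc_matrix(G), h, solver="osqp")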

Frequently Asked Questions

  • Can I print the list of solvers available on my machine?
    • Absolutely: print(qpsolvers.available_solvers)
  • Is it possible to solve a least squares problem rather than a quadratic program? (See the sketch after this list.)
  • I have a squared norm in my cost function, how can I apply a QP solver to my problem?
  • I have a non-convex quadratic program. Is there a solver I can use?
  • I get the following build error on Windows when running pip install qpsolvers.
  • Can I help?
    • Absolutely! The first step is to install the library and use it. Report any bug in the issue tracker.
    • If you're a developer looking to hack on open source, check out the contribution guidelines for suggestions.
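
Regarding the least-squares question above, qpsolvers also provides a solve_ls function; here is a minimal sketch (assuming a recent release; the R and s data below are hypothetical and only define the residual ||R x - s|| to be minimized under G x <= h):

import numpy as np
from qpsolvers import solve_ls

R = np.array([[1.0, 2.0, 0.0], [-8.0, 3.0, 2.0], [0.0, 1.0, 1.0]])
s = np.array([3.0, 2.0, 3.0])
G = np.array([[1.0, 2.0, 1.0], [2.0, 0.0, 1.0], [-1.0, 2.0, -1.0]])
h = np.array([3.0, 2.0, -2.0])

x_ls = solve_ls(R, s, G, h, solver="proxqp")
print(f"LS solution: x = {x_ls}")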

Benchmark

On a dense problem, the performance of all solvers (as measured by IPython's %timeit on an Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz) is:

Solver   | Type   | Time (ms)
qpswift  | Dense  | 0.008
quadprog | Dense  | 0.01
qpoases  | Dense  | 0.02
osqp     | Sparse | 0.03
scs      | Sparse | 0.03
ecos     | Sparse | 0.27
cvxopt   | Dense  | 0.44
gurobi   | Sparse | 1.74
mosek    | Sparse | 7.17

On a sparse problem with n = 500 optimization variables, these performances become:

Solver   | Type   | Time (ms)
osqp     | Sparse | 1
qpswift  | Dense  | 2
scs      | Sparse | 4
mosek    | Sparse | 17
ecos     | Sparse | 33
cvxopt   | Dense  | 51
gurobi   | Sparse | 221
quadprog | Dense  | 427
qpoases  | Dense  | 1560

On a model predictive control problem for robot locomotion, we get:

Solver   | Type   | Time (ms)
quadprog | Dense  | 0.03
qpswift  | Dense  | 0.08
qpoases  | Dense  | 0.36
osqp     | Sparse | 0.48
ecos     | Sparse | 0.69
scs      | Sparse | 0.76
cvxopt   | Dense  | 2.75

Finally, here is a small benchmark of random dense problems (each data point corresponds to an average over 10 runs):

Note that the performance of QP solvers largely depends on the problem being solved. For instance, MOSEK performs an automatic conversion to Second-Order Cone Programming (SOCP), which the documentation advises bypassing for better performance. Similarly, ECOS reformulates from QP to SOCP and works best on small problems.

Contributing

We welcome contributions, see Contributing for details.


Download Details:

Author: qpsolvers
Source Code: https://github.com/qpsolvers/qpsolvers 
License: LGPL-3.0 license

#machinelearning #python #optimization #numeral


PYMoo: Multi-objective Optimization in Python

Pymoo: Multi-objective Optimization in Python

Our open-source framework pymoo offers state-of-the-art single- and multi-objective algorithms and many more features related to multi-objective optimization, such as visualization and decision making.

Installation

First, make sure you have a Python 3 environment installed. We recommend miniconda3 or anaconda3.

The official release is always available at PyPi:

pip install -U pymoo

For the current developer version:

git clone https://github.com/anyoptimization/pymoo
cd pymoo
pip install .

For speed, some of the modules are also available as compiled extensions, and you can double-check whether the compilation worked. When executing the command below, make sure you are not inside the local pymoo directory; otherwise, the locally cloned copy rather than the version installed in site-packages will be used.

python -c "from pymoo.util.function_loader import is_compiled;print('Compiled Extensions: ', is_compiled())"

Usage

We refer to our documentation for all the details. As a quick example, here is how to execute NSGA2:

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)

res = minimize(problem,
               algorithm,
               ('n_gen', 200),
               seed=1,
               verbose=True)

plot = Scatter()
plot.add(problem.pareto_front(), plot_type="line", color="black", alpha=0.7)
plot.add(res.F, color="red")
plot.show()
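
If you prefer to inspect the result numerically rather than (or in addition to) plotting it, the Result object returned by minimize exposes the non-dominated set directly (a brief sketch; res.X and res.F are standard pymoo result attributes):

# res.X: decision variables of the non-dominated solutions found
# res.F: corresponding objective values (the approximated Pareto front)
print("design points:", res.X.shape)
print("objective values:", res.F.shape)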

A representative run of NSGA2 looks as follows:

[Figure: objective-space scatter plot from a representative NSGA2 run on ZDT1]

Citation

If you have used our framework for research purposes, you can cite our publication by:

J. Blank and K. Deb, pymoo: Multi-Objective Optimization in Python, in IEEE Access, vol. 8, pp. 89497-89509, 2020, doi: 10.1109/ACCESS.2020.2990567

BibTex:

@ARTICLE{pymoo,
    author={J. {Blank} and K. {Deb}},
    journal={IEEE Access},
    title={pymoo: Multi-Objective Optimization in Python},
    year={2020},
    volume={8},
    number={},
    pages={89497-89509},
}

Contact

Feel free to contact me if you have any questions:

Julian Blank (blankjul [at] msu.edu)

Michigan State University

Computational Optimization and Innovation Laboratory (COIN)

East Lansing, MI 48824, USA


Download Details:

Author: Anyoptimization
Source Code: https://github.com/anyoptimization/pymoo 
License: Apache-2.0 license

#machinelearning #python #optimization #algorithm #multi #objective


A C++ Platform to Perform Parallel Computations Of Optimisation Tasks

Pagmo

pagmo is a C++ scientific library for massively parallel optimization. It is built around the idea of providing a unified interface to optimization algorithms and problems, and of making their deployment in massively parallel environments easy.

If you are using pagmo as part of your research, teaching, or other activities, we would be grateful if you could star the repository and/or cite our work. For citation purposes, you can use the following BibTex entry, which refers to the pagmo paper in the Journal of Open Source Software:

@article{Biscani2020,
  doi = {10.21105/joss.02338},
  url = {https://doi.org/10.21105/joss.02338},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {53},
  pages = {2338},
  author = {Francesco Biscani and Dario Izzo},
  title = {A parallel global multiobjective framework for optimization: pagmo},
  journal = {Journal of Open Source Software}
}

The DOI of the latest version of the software is available at this link.

The full documentation can be found here.

Upgrading from pagmo 1.x.x

If you were using the old pagmo, have a look here for some technical background on what was changed and why a completely new API and codebase were developed: https://github.com/esa/pagmo2/wiki/From-1.x-to-2.x

You will find many tutorials in the documentation; we suggest skimming through them to appreciate the differences. The new pagmo (version 2) should be considered (and is) an entirely different codebase.


IMPORTANT NOTICE: pygmo, the Python bindings for pagmo, have been split off into a separate project, hosted here. Please update your bookmarks!


Download Details:

Author: Esa
Source Code: https://github.com/esa/pagmo2 
License: GPL-3.0, LGPL-3.0 licenses found

#machinelearning #python #optimization #algorithm #parallel #computing 


OSQP: The Operator Splitting QP Solver

The Operator Splitting QP Solver

The OSQP (Operator Splitting Quadratic Program) solver is a numerical optimization package for solving problems in the form

minimize        0.5 x' P x + q' x

subject to      l <= A x <= u

where x in R^n is the optimization variable. The objective function is defined by a positive semidefinite matrix P in S^n_+ and vector q in R^n. The linear constraints are defined by matrix A in R^{m x n} and vectors l and u so that l_i in R U {-inf} and u_i in R U {+inf} for all i in 1,...,m.
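
For a concrete feel of this problem form, here is a minimal sketch using OSQP's Python interface (pip install osqp); the toy data is only illustrative:

import numpy as np
import scipy.sparse as sparse
import osqp

# Toy data for  minimize 0.5 x' P x + q' x  subject to  l <= A x <= u
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)   # P and A are passed as CSC matrices
res = prob.solve()
print(res.x)                # primal solution if the problem was solved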

The latest version is 0.6.2.

Citing OSQP

If you are using OSQP for your work, we encourage you to cite it.

We are looking forward to hearing your success stories with OSQP! Please share them with us.

Bug reports and support

Please report any issues via the Github issue tracker. All types of issues are welcome including bug reports, documentation typos, feature requests and so on.

Numerical benchmarks

Numerical benchmarks against other solvers are available here.


Join our forum on Discourse for any questions related to the solver!

The documentation is available at osqp.org


Download Details:

Author: osqp
Source Code: https://github.com/osqp/osqp 
License: Apache-2.0 license

#machinelearning #optimization #cpluplus #portfolio


OptimLib: A Lightweight C++ Library Of Numerical Optimization Methods

OptimLib

OptimLib is a lightweight C++ library of numerical optimization methods for nonlinear functions.

Features:

  • A C++11/14/17 library of local and global optimization algorithms, as well as root finding techniques.
  • Derivative-free optimization using advanced, parallelized metaheuristic methods.
  • Constrained optimization routines to handle simple box constraints, as well as systems of nonlinear constraints.
  • For fast and efficient matrix-based computation, OptimLib supports the following templated linear algebra libraries: Armadillo and Eigen (see Requirements below).
  • Automatic differentiation functionality is available through use of the Autodiff library.
  • OpenMP-accelerated algorithms for parallel computation.
  • Straightforward linking with parallelized BLAS libraries, such as OpenBLAS.
  • Available as a single precision (float) or double precision (double) library.
  • Available as a header-only library, or as a compiled shared library.
  • Released under a permissive, non-GPL license.

Algorithms

A list of currently available algorithms includes:

  • Broyden's Method (for root finding)
  • Newton's method, BFGS, and L-BFGS
  • Gradient descent: basic, momentum, Adam, AdaMax, Nadam, NadaMax, and more
  • Nonlinear Conjugate Gradient
  • Nelder-Mead
  • Differential Evolution (DE)
  • Particle Swarm Optimization (PSO)

Documentation

Full documentation is available online.

A PDF version of the documentation is available here.

API

The OptimLib API follows a relatively simple convention, with most algorithms called in the following manner:

algorithm_id(<initial/final values>, <objective function>, <objective function data>);

The inputs, in order, are:

  • A writable vector of initial values to define the starting point of the algorithm. In the event of successful completion, the initial values will be overwritten by the solution vector.
  • The 'objective function' is the user-defined function to be minimized (or zeroed-out in the case of root finding methods).
  • The final input is optional: it is any object that contains additional parameters necessary to evaluate the objective function.

For example, the BFGS algorithm is called using

bfgs(ColVec_t& init_out_vals, std::function<double (const ColVec_t& vals_inp, ColVec_t* grad_out, void* opt_data)> opt_objfn, void* opt_data);

where ColVec_t is used to represent, e.g., arma::vec or Eigen::VectorXd types.

Installation

OptimLib is available as a compiled shared library, or as header-only library, for Unix-alike systems only (e.g., popular Linux-based distros, as well as macOS). Use of this library with Windows-based systems, with or without MSVC, is not supported.

Requirements

OptimLib requires either the Armadillo or Eigen C++ linear algebra libraries. (Note that Eigen version 3.4.0 requires a C++14-compatible compiler.)

Before including the header files, define one of the following:

#define OPTIM_ENABLE_ARMA_WRAPPERS
#define OPTIM_ENABLE_EIGEN_WRAPPERS

Example:

#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

Installation Method 1: Shared Library

The library can be installed on Unix-alike systems via the standard ./configure && make method.

First clone the library and any necessary submodules:

# clone optim into the current directory
git clone https://github.com/kthohr/optim ./optim

# change directory
cd ./optim

# clone necessary submodules
git submodule update --init

Set (one) of the following environment variables before running configure:

export ARMA_INCLUDE_PATH=/path/to/armadillo
export EIGEN_INCLUDE_PATH=/path/to/eigen

Finally:

# build and install with Eigen
./configure -i "/usr/local" -l eigen -p
make
make install

The final command will install OptimLib into /usr/local.

Configuration options (see ./configure -h):

      Primary

  • -h print help
  • -i installation path; default: the build directory
  • -f floating-point precision mode; default: double
  • -l specify the choice of linear algebra library; choose arma or eigen
  • -m specify the BLAS and Lapack libraries to link with; for example, -m "-lopenblas" or -m "-framework Accelerate"
  • -o compiler optimization options; defaults to -O3 -march=native -ffp-contract=fast -flto -DARMA_NO_DEBUG
  • -p enable OpenMP parallelization features (recommended)

      Secondary

  • -c a coverage build (used with Codecov)
  • -d a 'development' build
  • -g a debugging build (optimization flags set to -O0 -g)

      Special

  • --header-only-version generate a header-only version of OptimLib (see below)

Installation Method 2: Header-only Library

OptimLib is also available as a header-only library (i.e., without the need to compile a shared library). Simply run configure with the --header-only-version option:

./configure --header-only-version

This will create a new directory, header_only_version, containing a copy of OptimLib, modified to work on an inline basis. With this header-only version, simply include the header files (#include "optim.hpp") and set the include path to the header_only_version directory (e.g., -I/path/to/optimlib/header_only_version).

R Compatibility

To use OptimLib with an R package, first generate a header-only version of the library (see above). Then simply add a compiler definition before including the OptimLib files.

  • For RcppArmadillo:
#define OPTIM_USE_RCPP_ARMADILLO
#include "optim.hpp"
  • For RcppEigen:
#define OPTIM_USE_RCPP_EIGEN
#include "optim.hpp"

Examples

To illustrate OptimLib at work, consider searching for the global minimum of the Ackley function:

[Figure: the Ackley function]

This is a well-known test function with many local minima. Newton-type methods (such as BFGS) are sensitive to the choice of initial values, and will perform rather poorly here. As such, we will employ a global search method--in this case: Differential Evolution.

Code:

#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"
        
#define OPTIM_PI 3.14159265358979

double 
ackley_fn(const Eigen::VectorXd& vals_inp, Eigen::VectorXd* grad_out, void* opt_data)
{
    const double x = vals_inp(0);
    const double y = vals_inp(1);

    const double obj_val = 20 + std::exp(1) - 20*std::exp( -0.2*std::sqrt(0.5*(x*x + y*y)) ) - std::exp( 0.5*(std::cos(2 * OPTIM_PI * x) + std::cos(2 * OPTIM_PI * y)) );
            
    return obj_val;
}
        
int main()
{
    Eigen::VectorXd x = 2.0 * Eigen::VectorXd::Ones(2); // initial values: (2,2)
        
    bool success = optim::de(x, ackley_fn, nullptr);
        
    if (success) {
        std::cout << "de: Ackley test completed successfully." << std::endl;
    } else {
        std::cout << "de: Ackley test completed unsuccessfully." << std::endl;
    }
        
    std::cout << "de: solution to Ackley test:\n" << x << std::endl;
        
    return 0;
}

On x86-based computers, this example can be compiled using:

g++ -Wall -std=c++14 -O3 -march=native -ffp-contract=fast -I/path/to/eigen -I/path/to/optim/include optim_de_ex.cpp -o optim_de_ex.out -L/path/to/optim/lib -loptim

Output:

de: Ackley test completed successfully.
elapsed time: 0.028167s

de: solution to Ackley test:
  -1.2702e-17
  -3.8432e-16

On a standard laptop, OptimLib will compute a solution to within machine precision in a fraction of a second.

The Armadillo-based version of this example:

#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"
        
#define OPTIM_PI 3.14159265358979

double 
ackley_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    const double x = vals_inp(0);
    const double y = vals_inp(1);

    const double obj_val = 20 + std::exp(1) - 20*std::exp( -0.2*std::sqrt(0.5*(x*x + y*y)) ) - std::exp( 0.5*(std::cos(2 * OPTIM_PI * x) + std::cos(2 * OPTIM_PI * y)) );
            
    return obj_val;
}
        
int main()
{
    arma::vec x = arma::ones(2,1) + 1.0; // initial values: (2,2)
        
    bool success = optim::de(x, ackley_fn, nullptr);
        
    if (success) {
        std::cout << "de: Ackley test completed successfully." << std::endl;
    } else {
        std::cout << "de: Ackley test completed unsuccessfully." << std::endl;
    }
        
    arma::cout << "de: solution to Ackley test:\n" << x << arma::endl;
        
    return 0;
}

Compile and run:

g++ -Wall -std=c++11 -O3 -march=native -ffp-contract=fast -I/path/to/armadillo -I/path/to/optim/include optim_de_ex.cpp -o optim_de_ex.out -L/path/to/optim/lib -loptim
./optim_de_ex.out

Check the /tests directory for additional examples, and https://optimlib.readthedocs.io/en/latest/ for a detailed description of each algorithm.

Logistic regression

For a data-based example, consider maximum likelihood estimation of a logit model, common in statistics and machine learning. In this case we have closed-form expressions for the gradient and hessian. We will employ a popular gradient descent method, Adam (Adaptive Moment Estimation), and compare to a pure Newton-based algorithm.

#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"

// sigmoid function

inline
arma::mat sigm(const arma::mat& X)
{
    return 1.0 / (1.0 + arma::exp(-X));
}

// log-likelihood function data

struct ll_data_t
{
    arma::vec Y;
    arma::mat X;
};

// log-likelihood function with hessian

double ll_fn_whess(const arma::vec& vals_inp, arma::vec* grad_out, arma::mat* hess_out, void* opt_data)
{
    ll_data_t* objfn_data = reinterpret_cast<ll_data_t*>(opt_data);

    arma::vec Y = objfn_data->Y;
    arma::mat X = objfn_data->X;

    arma::vec mu = sigm(X*vals_inp);

    const double norm_term = static_cast<double>(Y.n_elem);

    const double obj_val = - arma::accu( Y%arma::log(mu) + (1.0-Y)%arma::log(1.0-mu) ) / norm_term;

    //

    if (grad_out)
    {
        *grad_out = X.t() * (mu - Y) / norm_term;
    }

    //

    if (hess_out)
    {
        arma::mat S = arma::diagmat( mu%(1.0-mu) );
        *hess_out = X.t() * S * X / norm_term;
    }

    //

    return obj_val;
}

// log-likelihood function for Adam

double ll_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    return ll_fn_whess(vals_inp,grad_out,nullptr,opt_data);
}

//

int main()
{
    int n_dim = 5;     // dimension of parameter vector
    int n_samp = 4000; // sample length

    arma::mat X = arma::randn(n_samp,n_dim);
    arma::vec theta_0 = 1.0 + 3.0*arma::randu(n_dim,1);

    arma::vec mu = sigm(X*theta_0);

    arma::vec Y(n_samp);

    for (int i=0; i < n_samp; i++)
    {
        Y(i) = ( arma::as_scalar(arma::randu(1)) < mu(i) ) ? 1.0 : 0.0;
    }

    // fn data and initial values

    ll_data_t opt_data;
    opt_data.Y = std::move(Y);
    opt_data.X = std::move(X);

    arma::vec x = arma::ones(n_dim,1) + 1.0; // initial values

    // run Adam-based optim

    optim::algo_settings_t settings;

    settings.gd_method = 6;
    settings.gd_settings.step_size = 0.1;

    std::chrono::time_point<std::chrono::system_clock> start = std::chrono::system_clock::now();

    bool success = optim::gd(x,ll_fn,&opt_data,settings);

    std::chrono::time_point<std::chrono::system_clock> end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed_seconds = end-start;

    //

    if (success) {
        std::cout << "Adam: logit_reg test completed successfully.\n"
                  << "elapsed time: " << elapsed_seconds.count() << "s\n";
    } else {
        std::cout << "Adam: logit_reg test completed unsuccessfully." << std::endl;
    }

    arma::cout << "\nAdam: true values vs estimates:\n" << arma::join_rows(theta_0,x) << arma::endl;

    //
    // run Newton-based optim

    x = arma::ones(n_dim,1) + 1.0; // initial values

    start = std::chrono::system_clock::now();

    success = optim::newton(x,ll_fn_whess,&opt_data);

    end = std::chrono::system_clock::now();
    elapsed_seconds = end-start;

    //

    if (success) {
        std::cout << "newton: logit_reg test completed successfully.\n"
                  << "elapsed time: " << elapsed_seconds.count() << "s\n";
    } else {
        std::cout << "newton: logit_reg test completed unsuccessfully." << std::endl;
    }

    arma::cout << "\nnewton: true values vs estimates:\n" << arma::join_rows(theta_0,x) << arma::endl;

    return 0;
}

Output:

Adam: logit_reg test completed successfully.
elapsed time: 0.025128s

Adam: true values vs estimates:
   2.7850   2.6993
   3.6561   3.6798
   2.3379   2.3860
   2.3167   2.4313
   2.2465   2.3064

newton: logit_reg test completed successfully.
elapsed time: 0.255909s

newton: true values vs estimates:
   2.7850   2.6993
   3.6561   3.6798
   2.3379   2.3860
   2.3167   2.4313
   2.2465   2.3064

Automatic Differentiation

By combining Eigen with the Autodiff library, OptimLib provides experimental support for automatic differentiation.

Example using forward-mode automatic differentiation with BFGS for the Sphere function:

#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

#include <autodiff/forward/real.hpp>
#include <autodiff/forward/real/eigen.hpp>

//

autodiff::real
opt_fnd(const autodiff::ArrayXreal& x)
{
    return x.cwiseProduct(x).sum();
}

double
opt_fn(const Eigen::VectorXd& x, Eigen::VectorXd* grad_out, void* opt_data)
{
    autodiff::real u;
    autodiff::ArrayXreal xd = x.eval();

    if (grad_out) {
        Eigen::VectorXd grad_tmp = autodiff::gradient(opt_fnd, autodiff::wrt(xd), autodiff::at(xd), u);

        *grad_out = grad_tmp;
    } else {
        u = opt_fnd(xd);
    }

    return u.val();
}

int main()
{
    Eigen::VectorXd x(5);
    x << 1, 2, 3, 4, 5;

    bool success = optim::bfgs(x, opt_fn, nullptr);

    if (success) {
        std::cout << "bfgs: forward-mode autodiff test completed successfully.\n" << std::endl;
    } else {
        std::cout << "bfgs: forward-mode autodiff test completed unsuccessfully.\n" << std::endl;
    }

    std::cout << "solution: x = \n" << x << std::endl;

    return 0;
}

Compile with:

g++ -Wall -std=c++17 -O3 -march=native -ffp-contract=fast -I/path/to/eigen -I/path/to/autodiff -I/path/to/optim/include optim_autodiff_ex.cpp -o optim_autodiff_ex.out -L/path/to/optim/lib -loptim

See the documentation for more details on this topic.


Download Details:

Author: kthohr
Source Code: https://github.com/kthohr/optim 
License: Apache-2.0 license

#machinelearning #cpluplus #newton #cpp #optimization


IFOPT: An Eigen-based, Light-weight C++ interface

IFOPT

A modern, light-weight, Eigen-based C++ interface to Nonlinear Programming solvers, such as Ipopt and Snopt.

An example nonlinear optimization problem is shown in the problem overview under Features below.

Features

Combines the advantages of Ipopt / Snopt and Eigen:

Ipopt / Snopt                                                                                   | Eigen
✔️ high-quality solvers for nonlinear optimization                                              | ✔️ modern, intuitive formulations of vectors and matrices
❌ C++ API inconvenient and error-prone (raw pointers, index management, jacobian construction) | ✔️ highly efficient implementations
❌ linking and exporting difficult                                                              |
  • Solver independent formulation of variables and constraints with Eigen (highly efficient)
  • Automatic index management by formulation of variable- and constraint-sets
  • Integration: pure cmake find_package(ifopt) or catkin/ROS (optional)
  • light-weight (~2k lines of code) makes it easy to use and extend

An optimization problem consists of multiple independent variable- and constraint-sets. Each set represents a common concept, e.g. one set of variables might represent spline coefficients, another footstep positions. Similarly, a constraint-set groups similar constraints together. ifopt allows users to define each of these sets independently in separate classes and then builds the overall problem from these sets (no more worrying about adapting indices when adding or removing sets).
 

find x0, x1                              (variable-sets 0 & 1)
s.t
  x0_lower  <= x0 <= x0_upper            (bounds on variable-set x0 \in R^2)

  {x0,x1} = arg min c0(x0,x1)+c1(x0,x1)  (cost-terms 0 and 1)

  g0_lower < g0(x0,x1) < g0_upper        (constraint-set 0 \in R^2)
  g1_lower < g1(x0,x1) < g1_upper        (constraint-set 1 \in R^1)

Supplying derivative information greatly increases solution speed. ifopt allows you to define the derivative of each cost-term/constraint-set with respect to each variable-set independently. This ensures that when the order of variable-sets changes in the overall vector, this derivative information is still valid. These "Jacobian blocks" must be supplied through ConstraintSet::FillJacobianBlock() and are then used to build the complete Jacobian for the cost and constraints.


A graphical overview as UML can be seen here.

Install

The easiest way to install is through the ROS binaries; run the following and you're all set:

sudo apt-get install ros-<distro>-ifopt

Install dependencies

In case you don't use ROS or the binaries don't exist for your distro, you can easily build these packages from source. For this, install the required dependencies CMake, Eigen and Ipopt using:

sudo apt-get install cmake libeigen3-dev coinor-libipopt-dev

If you want to link to a local installation of Ipopt or to Snopt, see here.

Build with cmake

Install

git clone https://github.com/ethz-adrl/ifopt.git && cd ifopt
mkdir build && cd build
cmake ..
make
sudo make install # copies files in this folder to /usr/local/*
# sudo xargs rm < install_manifest.txt # in case you want to uninstall the above

Use: To use in your cmake project, see this minimal CMakeLists.txt:

find_package(ifopt)
# Formulate (ifopt::ifopt_core) and solve (ifopt::ifopt_ipopt) the problem
add_executable(main main.cpp)
# Pull in include directories, libraries, ... 
target_link_libraries(main PUBLIC ifopt::ifopt_ipopt) 

Build with catkin

Install: Download catkin or catkin command line tools, then:

cd catkin_ws/src
git clone https://github.com/ethz-adrl/ifopt.git
cd ..
catkin_make_isolated # `catkin build` if you are using catkin command-line tools 
source ./devel/setup.bash

Use: Include in your catkin project by adding to your CMakeLists.txt:

add_compile_options(-std=c++11)
find_package(catkin COMPONENTS ifopt) 
include_directories(${catkin_INCLUDE_DIRS})
target_link_libraries(foo ${catkin_LIBRARIES})

Add the following to your package.xml:

<package>
  <depend>ifopt</depend>
</package>

Examples

Unit tests & toy problem

Navigate to your build folder in which the Makefile resides, which depends on how you built the code:

cd ifopt/build  # plain cmake 
cd catkin_ws/build_isolated/ifopt/devel # catkin_make_isolated
cd catkin_ws/build/ifopt # catkin build

Make sure everything installed correctly by running the test target

make test

You should see ifopt_ipopt-example....Passed (or snopt if installed) as well as ifopt_core-test if gtest is installed.

If you have IPOPT installed and linked correctly, you can also run the binary example directly (again, first navigate to the build folder with the Makefile)

make test ARGS='-R ifopt_ipopt-example -V'

Output:

1.0 0.0

towr

A more involved problem, taken from towr, uses multiple sets of variables and constraints to generate motions for legged robots.

Contribute

We love pull requests, whether it's interfaces to additional solvers, bug fixes, unit tests or updates to the documentation. Please have a look at CONTRIBUTING.md for more information. See here the list of contributors who participated in this project.

Publications

If you use this work, please consider citing as follows:

@misc{ifopt,
  author       = {Alexander W Winkler},
  title        = {{Ifopt - A modern, light-weight, Eigen-based C++ interface to 
                   Nonlinear Programming solvers Ipopt and Snopt.}},
  year         = 2018,
  doi          = {10.5281/zenodo.1135046},
  url          = {https://doi.org/10.5281/zenodo.1135046}
}

The research project within which this code was developed:

Additional Information

Linking to custom Ipopt or Snopt

If you are building from source and want to use a locally installed version of Ipopt add the path to your Ipopt build folder to your ~/.bashrc, e.g.

export IPOPT_DIR=/home/your_name/Code/Ipopt-3.12.8/build

In case your OS doesn't provide the precompiled binaries or the required version, you can also easily install Ipopt from source as described here. This summary might work for you:

wget https://www.coin-or.org/download/source/Ipopt/Ipopt-3.11.10.zip
unzip Ipopt-3.11.10.zip
cd Ipopt-3.11.10/ThirdParty/Mumps
./get.Mumps  # HSL routines are faster (http://www.hsl.rl.ac.uk/ipopt/)
cd ../../
mkdir build && cd build
../configure --prefix=/usr/local
make
make test
make install
export IPOPT_DIR=`pwd`

If you need an interface to Snopt, point cmake to that build folder in your ~/.bashrc through e.g.

export SNOPT_DIR=/home/your_name/Code/Snopt

and run cmake as

cmake -DBUILD_SNOPT=ON ..

Download Details:

Author: ethz-adrl
Source Code: https://github.com/ethz-adrl/ifopt 
License: BSD-3-Clause license

#machinelearning #cpluplus #nlp #cmake #cpp #robotics #optimization 


Exotica: Extensible Optimization Framework

EXOTica 🏝️ 

The EXOTica library is a general Optimisation Toolset for Robotics platforms, written in C++ with bindings for Python. Its motivation is to provide a more streamlined process for developing algorithms for tasks such as Inverse Kinematics, Trajectory Optimisation, and Optimal Control. Its design advocates:

  • Modularity: The library is developed in a modular manner making use of C++’s object-oriented features (such as polymorphism). This allows users to define their own components and ’plug them into’ the existing framework. Thus, an engineer does not need to implement a whole system whenever a component needs to change, but can instead re-implement the specific functionality and, as long as certain guidelines are followed, retain the use of the other modules.
  • Extensibility: The library is also heavily extensible, mainly thanks to the modular design. In addition, the library makes very minimal prior assumptions about the form of the problem so that it can be as general as possible.
  • Integration with ROS: The library is designed to be fully integrated with ROS, allowing set-up and configuration, consuming data from ROS topics, and publishing debug displays using ROS tools.

The library itself consists of two major specifications, both of which are abstract classes. The first is the Motion Solver which defines the way optimisation should proceed: current implementations include AICO, Jacobian pseudo-inverse IK, and a range of sampling-based solvers from the OMPL library -- in total, more than 60 different motion solvers. The other is the Task Definition which describes the task itself by providing two necessary functions to compute the forward map from Configuration space (say joint angles in IK) to Task space (say end-effector positions in IK). The tasks themselves can describe a complete trajectory. Using the library then involves passing in an initial state and requesting a solution to the problem, which may consist of a single configuration or complete trajectory. Additionally, users can select different underlying dynamics models by specifying a DynamicsSolver. Similarly, different collision checking methods and libraries can be selected using the CollisionScene plug-ins.

Prerequisites

  • Ubuntu 18.04 (ROS Melodic) or Ubuntu 20.04 (ROS Noetic).
  • catkin_tools (catkin_make is no longer supported)
  • rosdep
  • ROS

Installation

From binary

Exotica is available as binaries for ROS Kinetic (EOL with v6.1.1), Melodic (current), and Noetic (current) and can be installed via

sudo apt install ros-$ROS_DISTRO-exotica

From source

  1. Create a catkin workspace or use an existing workspace. catkin_tools is the preferred build system.
  2. Clone this repository into the src/ subdirectory of the workspace (any subdirectory below src/ will do): git clone https://github.com/ipab-slmc/exotica.git.
  3. cd into the cloned directory.
  4. Install dependencies: rosdep update ; rosdep install --from-paths ./ -iry
  5. Compile the code: catkin build -s.
  6. Source the config file (ideally inside ~/.bashrc): source path_to_workspace/devel/setup.bash. You may have to source the config file from your install-space if your workspace is configured for installation.

Demos

Have a look at exotica_examples. If you have sourced the workspace correctly, you should be able to run any of the demos:

roslaunch exotica_examples cpp_ik_minimal.launch
roslaunch exotica_examples cpp_core.launch
roslaunch exotica_examples cpp_aico.launch
roslaunch exotica_examples python_ompl.launch
roslaunch exotica_examples python_attach.launch
roslaunch exotica_examples python_collision_distance.launch
roslaunch exotica_examples python_sphere_collision.launch

Publications

We have published a Springer book chapter outlining the concept and ideas behind EXOTica and recommend it to new users for getting started in addition to the tutorials and documentation.

Ivan V., Yang Y., Merkt W., Camilleri M.P., Vijayakumar S. (2019) EXOTica: An Extensible Optimization Toolset for Prototyping and Benchmarking Motion Planning and Control. In: Koubaa A. (eds) Robot Operating System (ROS). Studies in Computational Intelligence, vol 778. Springer, Cham

If you use EXOTica for academic work, please cite the relevant book chapter, a preprint of which is available here:

@Inbook{exotica,
  author="Ivan, Vladimir and Yang, Yiming and Merkt, Wolfgang and Camilleri, Michael P. and Vijayakumar, Sethu",
  editor="Koubaa, Anis",
  title="EXOTica: An Extensible Optimization Toolset for Prototyping and Benchmarking Motion Planning and Control",
  bookTitle="Robot Operating System (ROS): The Complete Reference (Volume 3)",
  year="2019",
  publisher="Springer International Publishing",
  address="Cham",
  pages="211--240",
  isbn="978-3-319-91590-6",
  doi="10.1007/978-3-319-91590-6_7",
  url="https://doi.org/10.1007/978-3-319-91590-6_7"
}

Documentation - C++ Doxygen - Python Documentation - Docker


Download Details:

Author: ipab-slmc
Source Code: https://github.com/ipab-slmc/exotica 
License: BSD-3-Clause license

#machinelearning #cpluplus #python #optimization #framework 


PhySO: Physical Symbolic Optimization

Φ-SO : Physical Symbolic Optimization

physo, the physical symbolic regression ( Φ-SO ) package, is a symbolic regression library that fully leverages physical units constraints. For more details, see [Tenachi et al. 2023].

Installation

Virtual environment

The package has been tested on Unix and OSX. To install the package, it is recommended to first create a conda virtual environment:

conda create -n PhySO python=3.8

And activate it:

conda activate PhySO

Dependencies

From the repository root:

Installing essential dependencies :

conda install --file requirements.txt

Installing optional dependencies (for advanced debugging in tree representation) :

conda install --file requirements_display1.txt
pip install -r requirements_display2.txt

Side note regarding CUDA acceleration:

$\Phi$-SO supports CUDA, but it should be noted that since the bottleneck of the code is free constant optimization, using CUDA (even on a very high-end GPU) does not improve performance over a CPU and can actually hinder it.

Installing $\Phi$-SO

Installing physo (from the repository root):

pip install -e .

Testing install

Import test:

python3
>>> import physo

This should result in physo being successfully imported.

Unit tests:

From the repository root:

python -m unittest discover -p "*UnitTest.py"

This should result in all tests being successfully passed (except for program_display_UnitTest tests if optional dependencies were not installed).

Getting started

Symbolic regression with default hyperparameters

[Coming soon] In the meantime you can have a look at our demo folder ! :)

Symbolic regression

[Coming soon]

Custom symbolic optimization task

[Coming soon]

Using custom functions

[Coming soon]

Open training loop

[Coming soon]

About performances

The main performance bottleneck of physo is free constant optimization; performance is therefore almost linearly dependent on the number of free constant optimization steps and on the number of trial expressions per epoch (i.e. the batch size).

In addition, it should be noted that generating monitoring plots takes ~3 s, so we suggest making monitoring plots only every 10+ epochs when the time per epoch is low.

Summary of expected performances with physo:

Time / epoch | Batch size | # free const | free const opti steps | Example                             | Device
~20s         | 10k        | 2            | 15                    | eg: demo_damped_harmonic_oscillator | CPU: Mac M1, RAM: 16 Go
~30s         | 10k        | 2            | 15                    | eg: demo_damped_harmonic_oscillator | CPU: Intel W-2155 10c/20t, RAM: 128 Go
~250s        | 10k        | 2            | 15                    | eg: demo_damped_harmonic_oscillator | GPU: Nvidia GV100, VRAM: 32 Go
~3s          | 1k         | 2            | 15                    | eg: demo_mechanical_energy          | CPU: Mac M1, RAM: 16 Go
~3s          | 1k         | 2            | 15                    | eg: demo_mechanical_energy          | CPU: Intel W-2155 10c/20t, RAM: 128 Go
~4s          | 1k         | 2            | 15                    | eg: demo_mechanical_energy          | GPU: Nvidia GV100, VRAM: 32 Go

Please note that using a CPU typically results in higher performance than using a GPU.

Uninstalling

Uninstalling the package.

conda deactivate
conda env remove -n PhySO

Citing this work

@ARTICLE{2023arXiv230303192T,
       author = {{Tenachi}, Wassim and {Ibata}, Rodrigo and {Diakogiannis}, Foivos I.},
        title = "{Deep symbolic regression for physics guided by units constraints: toward the automated discovery of physical laws}",
      journal = {arXiv e-prints},
     keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Machine Learning, Physics - Computational Physics},
         year = 2023,
        month = mar,
          eid = {arXiv:2303.03192},
        pages = {arXiv:2303.03192},
          doi = {10.48550/arXiv.2303.03192},
archivePrefix = {arXiv},
       eprint = {2303.03192},
 primaryClass = {astro-ph.IM},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv230303192T},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Download Details:

Author: WassimTenachi
Source Code: https://github.com/WassimTenachi/PhySO 
License: MIT license

#python #symbol #optimization 


Siconos: Simulation Framework for Nonsmooth Dynamical Systems

Siconos

A software package for the modeling and simulation of nonsmooth dynamical systems in C++ and in Python.

Siconos is an open-source scientific software primarily targeted at modeling and simulating nonsmooth dynamical systems:

  • Mechanical systems (rigid or solid) with unilateral contact and Coulomb friction and impact (Nonsmooth mechanics, contact dynamics, multibody systems dynamics or granular materials).
  • Switched Electrical Circuit such as electrical circuits with ideal and piecewise linear components: power converter, rectifier, Phase-Locked Loop (PLL) or Analog-to-Digital converter.
  • Sliding mode control systems.
  • Biology Gene regulatory networks.

Other applications are found in Systems and Control (hybrid systems, differential inclusions, optimal control with state constraints), Optimization (Complementarity systems and Variational inequalities), Fluid Mechanics, Computer Graphics, ...

Read more about Siconos at the Siconos homepage

Installation

From source

Assuming you have cloned the project into <path-to-siconos-sources>, to build and install the libraries and the Python interface:

  • Create a user options file. Some templates are provided in /ci_gitlab/siconos_confs.
  • Run
mkdir build ;cd build
cmake -DUSER_OPTIONS_FILE=<your options file> <path-to-siconos-sources> 
make -j 4 # or the number of cores available on your computer.
make test # optional
make install

More details in Siconos download and install guide.

Docker images

Docker images with siconos ready to use:

  • latest version (development)
docker run -ti gricad-registry.univ-grenoble-alpes.fr/nonsmooth/siconos-tutorials/siconos-master:latest
  • A specific (release) version X.Y:
docker run -ti gricad-registry.univ-grenoble-alpes.fr/nonsmooth/siconos-tutorials/siconos-release-X.Y:latest

Jupyter Lab environment with siconos ready to use and a set of end-user examples:

  • latest version (development)
docker run -p 8888:8888 -ti gricad-registry.univ-grenoble-alpes.fr/nonsmooth/siconos-tutorials/siconoslab-master
  • A specific (release) version X.Y:
docker run -p 8888:8888 -ti gricad-registry.univ-grenoble-alpes.fr/nonsmooth/siconos-tutorials/siconoslab-release-X.Y

Then, access in your browser at http://localhost:8888

Main components

Each component can be used either from a low-level language like C/C++ or from Python.

siconos/numerics (C)

Collection of low-level algorithms for solving optimization problems arising in the simulation of nonsmooth dynamical systems:

  • Complementarity problems (LCP, MLCP, NCP)
  • Friction-contact problems (2D or 3D)
  • Second-order cone programming (SOCP)
  • Primal or Dual Relay problems
  • Finite dimensional Variational Inequality (AVI and VI)

siconos/kernel (C++)

Library for the modeling and simulation of nonsmooth dynamical systems.

  • Dynamical systems formalism: first order systems, Lagrangian and Newton-Euler formulations
  • Numerical integration techniques: Event-detecting (event-driven) and Event-Capturing (time-stepping) schemes
  • Nonsmooth laws: complementarity, Relay, normal cone inclusion, Friction Contact, Newton impact, multiple impact law.

siconos/mechanics (C++)

Component for the simulation of mechanical systems in interaction with their environment:

siconos/control (C++)

Library to add a controller to a simulation. For now almost all the implemented control schemes are based on sliding modes with an implicit discretization.

siconos/io (C++)

This component can be used to

  • serialize almost any simulation using boost::serialization
  • generate mechanical examples from HDF5 and write HDF5 files for visualization through vtk

The archetypal example: "The bouncing ball"

from siconos.kernel import LagrangianLinearTIDS, NewtonImpactNSL,\
LagrangianLinearTIR, Interaction, NonSmoothDynamicalSystem, MoreauJeanOSI,\
TimeDiscretisation, LCP, TimeStepping
from numpy import eye, empty

t0 = 0       # start time
T = 10       # end time
h = 0.005    # time step
r = 0.1      # ball radius
g = 9.81     # gravity
m = 1        # ball mass
e = 0.9      # restitution coefficient
theta = 0.5  # theta scheme

# the dynamical system
x = [1, 0, 0]    # initial position
v = [0, 0, 0]    # initial velocity
mass = eye(3)  # mass matrix
mass[2, 2] = 2. / 5 * r * r
ball = LagrangianLinearTIDS(x, v, mass)
weight = [-m * g, 0, 0] 
ball.setFExtPtr(weight) #set external forces
# Interaction ball-floor
H = [[1, 0, 0]]
nslaw = NewtonImpactNSL(e)
relation = LagrangianLinearTIR(H)
inter = Interaction(nslaw, relation)
# Model
bouncingBall = NonSmoothDynamicalSystem(t0, T)
# add the dynamical system to the non smooth dynamical system
bouncingBall.insertDynamicalSystem(ball)
# link the interaction and the dynamical system
bouncingBall.link(inter, ball)
# Simulation
# (1) OneStepIntegrators
OSI = MoreauJeanOSI(theta)
# (2) Time discretisation 
t = TimeDiscretisation(t0, h)
# (3) one step non smooth problem
osnspb = LCP()
# (4) Simulation setup with (1) (2) (3)
s = TimeStepping(bouncingBall, t, OSI, osnspb)
# end of model definition

# computation
N = (T - t0) / h # the number of time steps
# time loop
while s.hasNextEvent():
    s.computeOneStep()
    s.nextStep()
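
To actually look at the bounce, the time loop can record the ball height at every step. A small sketch (s.nextTime() and ball.q() are the accessors used in the siconos examples, but double-check them against your installed version; use this loop in place of the one above, since the events are consumed as the simulation advances):

trajectory = []   # list of (time, height) pairs
while s.hasNextEvent():
    s.computeOneStep()
    trajectory.append((s.nextTime(), ball.q()[0]))
    s.nextStep()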

Download Details:

Author: Siconos
Source Code: https://github.com/siconos/siconos 
License: Apache-2.0 license

#machinelearning #python #optimization #numeral 


DietPi: Lightweight Justice for Your Single-board Computer!

DietPi

Lightweight justice for your single-board computer! 

optimised • simplified • for everyone 


Ready to run optimised software choices with dietpi-software 
Feature-rich configuration tool for your device with dietpi-config.


Introduction

DietPi is an extremely lightweight Debian-based OS. It is highly optimised for minimal CPU and RAM resource usage, ensuring your SBC always runs at its maximum potential.

The dietpi programs use lightweight whiptail menus. You'll spend more time enjoying DietPi and the applications you need, and less time staring at the command line.

Use dietpi-software to quickly and easily install ready-to-run, optimised applications for your system. DietPi will do all the necessary configuration, including starting the services. A few highlights: Desktop Environments, Remote Desktop Access, Media Systems & Players, BitTorrent & Downloading, Cloud & Backup, Gaming & Emulation, Social & Search, Camera & Surveillance, Networking, System Stats & Management, Home Automation, Hardware & Voice Projects, Webserver Stacks, DNS Servers / Pi-hole, File Servers, Printing and much more.

Use dietpi-services to control which installed software has higher or lower priority levels (nice, affinity, policy scheduler).

dietpi-update automatically checks for updates and informs you when they are available. Update instantly, without having to write a new image. DietPi automation allows you to completely automate a DietPi installation with no user input, simply by configuring dietpi.txt before powering on.

The DietPi Project Team

The full list of code contributors can be viewed here.

Contributors

Micha

Joined Q3 2017

Project lead (20/02/2019 and onwards), source code contributor, bug fixes, software improvements, DietPi forum administrator.

Daniel Knight

Project founder and previous project lead (until 19/02/2019), source code contributor and tester.

JohnVick

Joined 2016-06-08

DietPi forum co-administrator, management, support, testing and valuable feedback.

sal666

Joined 2017-07-26

Creator and maintainer of the first Clonezilla based installer images for x86_64 UEFI systems.

Joulinar

Joined Q4 2019

DietPi forum moderator, support, testing, bug reports + investigation and valuable feedback.

StephanStS

Joined Q4 2019

NanoPi image creator, tester and bug reports.

Petru

Joined 2020-05-31

DietPi documentation author, product manager, SEO and DietPi visibility recommendations.

ravenclaw900

Joined 2020-10-11

Source code contributor, creator of the DietPi-Dashboard and many software implementations.

yumiris

Joined 2018-04-16

Creator and maintainer of the first DietPi Hyper-V images.


Collaborations

DietPi + Amiberry

Since 2016-09-02

Joint venture to bring you the ultimate Amiga experience on your SBC, running lightweight and optimised DietPi at its core: https://github.com/MichaIng/DietPi/issues/474


Hall of Fame

K-Plan

Joined 2016-01-01

Contributions to the DietPi project in general, in-depth testing, bug finding and valuable feedback, forum moderator.

ZombieVirus

Joined 2016-03-20

DietPi forum moderator and version history maintainer on forums.

Rhkean

Joined 2018-03-01

Contributions to the DietPi project in general, including source code, testing, new devices, forum moderator.

Pilovali

Joined 2015-10-10

Provided dietpi.com web hosting for 1 year until April 17th 2016. Additionally: forum moderator, testing, bug reporting.

Xenfomation

Joined 2016-04-01

Contributions to the DietPi project in general, including source code and VirtualBox image creation/conversion.

AWL29

Joined 2016-10-01

Created the first DietPi image for NanoPi M3/T3.


Contributing

Git coders, please use the active development branch: dev

Are you able to:

  • Provide feedback and/or test areas of DietPi, to improve the user experience?
  • Report bugs?
  • Improve/add more features to the DietPi website or documentation?
  • Compile software for our supported SBCs?
  • Contribute to DietPi with programming on GitHub?
  • Suggest new software that we can add to the dietpi-software install system?

If so, let us know! We are always looking for talented people who believe in the DietPi project and wish to contribute in any way they can.

Also read our contribute page for an overview of ways to support DietPi.

License

DietPi Copyright (C) 2022 Contributors

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/

Links

DietPi Source

DietPi Files

  • All files located in (recursively):
    • /var/lib/dietpi/
    • /var/tmp/dietpi/
    • /boot/dietpi/
  • /boot/dietpi.txt
  • /boot/config.txt (RPi)
  • /boot/boot.ini (Odroid)
  • All files prefixed with: dietpi-

The above GPLv2 license also applies to all mentioned files!

3rd Party Sources/Credits

Links to hardware and software manufacturers, sources and build instructions used in DietPi:


Website • Downloads • Documentation • Forum • Blog


Download Details:

Author: MichaIng
Source Code: https://github.com/MichaIng/DietPi 
License: GPL-2.0 license

#shell #bash #lightweight #debian #optimization #raspberrypi 


GPflowOpt: Bayesian Optimization using GPflow

GPflowOpt

GPflowOpt is a Python package for Bayesian optimization using GPflow, built on top of TensorFlow. It was initiated and is currently maintained by Joachim van der Herten and Ivo Couckuyt. The full list of contributors (in alphabetical order) is Ivo Couckuyt, Tom Dhaene, James Hensman, Nicolas Knudde, Alexander G. de G. Matthews and Joachim van der Herten. Special thanks also to all GPflow contributors, as this package would not exist without their effort.

Install

The easiest way to install GPflowOpt involves cloning this repository and running

pip install . --process-dependency-links

in the source directory. This also installs all required dependencies (including TensorFlow, if needed). For more detailed installation instructions, see the documentation.

Contributing

If you are interested in contributing to this open source project, contact us through an issue on this repository. For more information, see the notes for contributors.

Citing GPflowOpt

To cite GPflowOpt, please reference the preliminary arXiv paper. Sample BibTeX is given below:

@ARTICLE{GPflowOpt2017,
   author = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},
    title = "{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}",
  journal = {arXiv preprint -- arXiv:1711.03845},
  year    = {2017},
  url     = {https://arxiv.org/abs/1711.03845}
}

Note: This package is for use with GPflow 1.

For Bayesian optimization using GPflow 2, please see Trieste, a joint effort with Secondmind.


Download Details:

Author: GPflow
Source Code: https://github.com/GPflow/GPflowOpt 
License: Apache-2.0 license

#machinelearning #python #hyperparameter #optimization 


A Free & Open Source Python Library for Multiobjective Optimization

Platypus

A Free & Open Source Python Library for Multiobjective Optimization.


What is Platypus?

Platypus is a framework for evolutionary computing in Python with a focus on multiobjective evolutionary algorithms (MOEAs). It differs from existing optimization libraries, including PyGMO, Inspyred, DEAP, and Scipy, by providing optimization algorithms and analysis tools for multiobjective optimization. It currently supports NSGA-II, NSGA-III, MOEA/D, IBEA, Epsilon-MOEA, SPEA2, GDE3, OMOPSO, SMPSO, and Epsilon-NSGA-II. For more information, see our IPython Notebook or our online documentation.

Example

For example, optimizing a simple biobjective problem with a single real-valued decision variable is accomplished in Platypus with:


    from platypus import NSGAII, Problem, Real

    def schaffer(x):
        return [x[0]**2, (x[0]-2)**2]

    problem = Problem(1, 2)
    problem.types[:] = Real(-10, 10)
    problem.function = schaffer

    algorithm = NSGAII(problem)
    algorithm.run(10000)
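
Once the run finishes, the nondominated solutions found by NSGA-II are stored on the algorithm object. The snippet below, which follows the result-inspection idiom from the Platypus documentation, prints their objective values:

    # inspect the Pareto-approximate set found by the run
    for solution in algorithm.result:
        print(solution.objectives)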

Installation

To install the latest Platypus release, run the following command:

    pip install platypus-opt

To install the latest development version of Platypus, run the following commands:

    pip install -U build setuptools
    git clone https://github.com/Project-Platypus/Platypus.git
    cd Platypus
    python -m build

Anaconda

Platypus is also available via conda-forge.

    conda config --add channels conda-forge
    conda install platypus-opt

For more information see the feedstock located here.


Download Details:

Author: Project-Platypus
Source Code: https://github.com/Project-Platypus/Platypus 
License: GPL-3.0 license

#machinelearning #python #algorithm #optimization 


PySwarms: A Research toolkit for Particle Swarm Optimization in Python

PySwarms

PySwarms is an extensible research toolkit for particle swarm optimization (PSO) in Python.

It is intended for swarm intelligence researchers, practitioners, and students who prefer a high-level declarative interface for implementing PSO in their problems. PySwarms enables basic optimization with PSO and interaction with swarm optimizations. Check out more features below!

Features

  • High-level module for Particle Swarm Optimization. For a list of all optimizers, check this link.
  • Built-in objective functions to test optimization algorithms.
  • Plotting environment for cost histories and particle movement.
  • Hyperparameter search tools to optimize swarm behaviour.
  • (For Devs and Researchers): Highly-extensible API for implementing your own techniques.

Installation

To install PySwarms, run this command in your terminal:

$ pip install pyswarms

This is the preferred method to install PySwarms, as it will always install the most recent stable release.

In case you want to install the bleeding-edge version, clone this repo:

$ git clone -b development https://github.com/ljvmiranda921/pyswarms.git

and then run

$ cd pyswarms
$ python setup.py install

To install PySwarms on Fedora, use:

$ dnf install python3-pyswarms

Running in a Vagrant Box

To run PySwarms in a Vagrant Box, install Vagrant by going to https://www.vagrantup.com/downloads.html and downloading the proper package from the HashiCorp website.

Afterward, run the following commands in the project directory:

$ vagrant provision
$ vagrant up
$ vagrant ssh

Now you're ready to develop your contributions in a premade virtual environment.

Basic Usage

PySwarms provides a high-level implementation of various particle swarm optimization algorithms. Thus, it aims to be user-friendly and customizable. In addition, supporting modules can be used to help you in your optimization problem.

Optimizing a sphere function

You can import PySwarms as any other Python module,

import pyswarms as ps

Suppose we want to find the minimum of f(x) = x^2 using global-best PSO. Simply import the built-in sphere function, pyswarms.utils.functions.sphere(), and the necessary optimizer:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
best_cost, best_pos = optimizer.optimize(fx.sphere, iters=100)

[Figure: Sphere Optimization animation]

This will run the optimizer for 100 iterations and then return the best cost and best position found by the swarm. In addition, you can access various histories through properties of the class:

# Obtain the cost history
optimizer.cost_history
# Obtain the position history
optimizer.pos_history
# Obtain the velocity history
optimizer.velocity_history

At the same time, you can also obtain the mean personal best and mean neighbor history for local best PSO implementations. Simply call optimizer.mean_pbest_history and optimizer.mean_neighbor_history respectively.
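
As a quick illustration, the minimal sketch below sets up LocalBestPSO on the same sphere function; note the extra 'k' (number of neighbours) and 'p' (the Minkowski p-norm, 1 or 2) entries that local-best topologies require:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# local-best PSO needs neighbourhood parameters in addition to c1, c2 and w
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 3, 'p': 2}
optimizer = ps.single.LocalBestPSO(n_particles=10, dimensions=2, options=options)
best_cost, best_pos = optimizer.optimize(fx.sphere, iters=100)

# the neighbourhood-specific histories mentioned above
print(optimizer.mean_pbest_history[-1])
print(optimizer.mean_neighbor_history[-1])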

Hyperparameter search tools

PySwarms implements a grid search and random search technique to find the best parameters for your optimizer. Setting them up is easy. In this example, let's try using pyswarms.utils.search.RandomSearch to find the optimal parameters for LocalBestPSO optimizer.

Here, we input a range, enclosed in tuples, to define the space in which the parameters will be found. Thus, (1,5) pertains to a range from 1 to 5.

import numpy as np
import pyswarms as ps
from pyswarms.utils.search import RandomSearch
from pyswarms.utils.functions import single_obj as fx

# Set-up choices for the parameters
options = {
    'c1': (1,5),
    'c2': (6,10),
    'w': (2,5),
    'k': (11, 15),
    'p': 1
}

# Create a RandomSearch object
# n_selection_iters is the number of iterations to run the searcher
# iters is the number of iterations to run the optimizer
g = RandomSearch(ps.single.LocalBestPSO, n_particles=40,
            dimensions=20, options=options, objective_func=fx.sphere,
            iters=10, n_selection_iters=100)

best_score, best_options = g.search()

This then returns the best score found during optimization, and the hyperparameter options that enable it.

>>> best_score
1.41978545901
>>> best_options['c1']
1.543556887693
>>> best_options['c2']
9.504769054771

Swarm visualization

It is also possible to plot optimizer performance and the swarm's movement without worrying about formatting details. The plotters module is built on top of matplotlib, making it highly customizable.

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
from pyswarms.utils.plotters import plot_cost_history, plot_contour, plot_surface
import matplotlib.pyplot as plt
# Set-up optimizer
options = {'c1':0.5, 'c2':0.3, 'w':0.9}
optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options)
optimizer.optimize(fx.sphere, iters=100)
# Plot the cost
plot_cost_history(optimizer.cost_history)
plt.show()

[Figure: Cost history plot]

We can also plot the animation...

from pyswarms.utils.plotters.formatters import Mesher, Designer
# Plot the sphere function's mesh for better plots
m = Mesher(func=fx.sphere,
           limits=[(-1,1), (-1,1)])
# Adjust figure limits
d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)],
             label=['x-axis', 'y-axis', 'z-axis'])

In 2D,

plot_contour(pos_history=optimizer.pos_history, mesher=m, designer=d, mark=(0,0))

[Figure: 2D contour animation]

Or in 3D!

pos_history_3d = m.compute_history_3d(optimizer.pos_history) # preprocessing
animation3d = plot_surface(pos_history=pos_history_3d,
                           mesher=m, designer=d,
                           mark=(0,0,0))    

[Figure: 3D surface animation]

Contributing

PySwarms is currently maintained by a small yet dedicated team.

We would appreciate it if you could lend a hand with the following:

  • Find bugs and fix them
  • Update documentation in docstrings
  • Implement new optimizers to our collection
  • Make utility functions more robust.

We would also like to acknowledge all our contributors, past and present, for making this project successful!

If you wish to contribute, check out our contributing guide. Moreover, you can also see the list of features that need some help in our Issues page.

Most importantly, first-time contributors are welcome to join! I try my best to help you get started and enable you to make your first Pull Request! Let's learn from each other!

Credits

This project was inspired by the pyswarm module that performs PSO with constraint support. The package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

Cite us

Are you using PySwarms in your project or research? Please cite us!

@article{pyswarmsJOSS2018,
    author  = {Lester James V. Miranda},
    title   = "{P}y{S}warms, a research-toolkit for {P}article {S}warm {O}ptimization in {P}ython",
    journal = {Journal of Open Source Software},
    year    = {2018},
    volume  = {3},
    issue   = {21},
    doi     = {10.21105/joss.00433},
    url     = {https://doi.org/10.21105/joss.00433}
}

Projects citing PySwarms

Not on the list? Ping us in the Issue Tracker!

  • Gousios, Georgios. Lecture notes for the TU Delft TI3110TU course Algorithms and Data Structures. Accessed May 22, 2018. http://gousios.org/courses/algo-ds/book/string-distance.html#sop-example-using-pyswarms.
  • Nandy, Abhishek, and Manisha Biswas., "Applying Python to Reinforcement Learning." Reinforcement Learning. Apress, Berkeley, CA, 2018. 89-128.
  • Benedetti, Marcello, et al., "A generative modeling approach for benchmarking and training shallow quantum circuits." arXiv preprint arXiv:1801.07686 (2018).
  • Vrbančič et al., "NiaPy: Python microframework for building nature-inspired algorithms." Journal of Open Source Software, 3(23), 613, https://doi.org/10.21105/joss.00613
  • Häse, Florian, et al. "Phoenics: A Bayesian optimizer for chemistry." ACS Central Science. 4.9 (2018): 1134-1145.
  • Szynkiewicz, Pawel. "A Comparative Study of PSO and CMA-ES Algorithms on Black-box Optimization Benchmarks." Journal of Telecommunications and Information Technology 4 (2018): 5.
  • Mistry, Miten, et al. "Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded." Imperial College London (2018).
  • Vishwakarma, Gaurav. Machine Learning Model Selection for Predicting Properties of High Refractive Index Polymers Dissertation. State University of New York at Buffalo, 2018.
  • Uluturk Ismail, et al. "Efficient 3D Placement of Access Points in an Aerial Wireless Network." 2019 16th IEEE Annual Consumer Communications and Networking Conference (CCNC) IEEE (2019): 1-7.
  • Downey A., Theisen C., et al. "Cam-based passive variable friction device for structural control." Engineering Structures Elsevier (2019): 430-439.
  • Thaler S., Paehler L., Adams, N.A. "Sparse identification of truncation errors." Journal of Computational Physics Elsevier (2019): vol. 397
  • Lin, Y.H., He, D., Wang, Y. Lee, L.J. "Last-mile Delivery: Optimal Locker Location under Multinomial Logit Choice Model" https://arxiv.org/abs/2002.10153
  • Park J., Kim S., Lee, J. "Supplemental Material for Ultimate Light trapping in free-form plasmonic waveguide" KAIST, University of Cambridge, and Cornell University http://www.jlab.or.kr/documents/publications/2019PRApplied_SI.pdf
  • Pasha A., Latha P.H., "Bio-inspired dimensionality reduction for Parkinson's Disease Classification," Health Information Science and Systems, Springer (2020).
  • Carmichael Z., Syed, H., et al. "Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time-Series Forecasting," Proceedings of the 7th Annual Neuro-inspired Computational Elements ACM (2019), nb. 7: 1-10 https://doi.org/10.1145/3320288.3320303
  • Klonowski, J. "Optimizing Message to Virtual Link Assignment in Avionics Full-Duplex Switched Ethernet Networks" Proquest
  • Haidar, A., Jan, ZM. "Evolving One-Dimensional Deep Convolutional Neural Network: A Swarm-based Approach," IEEE Congress on Evolutionary Computation (2019) https://doi.org/10.1109/CEC.2019.8790036
  • Shang, Z. "Performance Evaluation of the Control Plane in OpenFlow Networks," Freie Universitat Berlin (2020)
  • Linker, F. "Industrial Benchmark for Fuzzy Particle Swarm Reinforcement Learning," Leipzig University (2020)
  • Vetter, A. Yan, C. et al. "Computational rule-based approach for corner correction of non-Manhattan geometries in mask aligner photolithography," Optics (2019). vol. 27, issue 22: 32523-32535 https://doi.org/10.1364/OE.27.032523
  • Wang, Q., Megherbi, N., Breckon T.P., "A Reference Architecture for Plausible Threat Image Projection (TIP) Within 3D X-ray Computed Tomography Volumes" https://arxiv.org/abs/2001.05459
  • Menke, Tim, Hase, Florian, et al. "Automated discovery of superconducting circuits and its application to 4-local coupler design," arxiv preprint: https://arxiv.org/abs/1912.03322

Others

Like it? Love it? Leave us a star on Github to show your appreciation!



Download Details:

Author: ljvmiranda921
Source Code: https://github.com/ljvmiranda921/pyswarms 
License: MIT license

#machinelearning #algorithm #optimization #python 


Solid: A Python Framework for Gradient-free Optimization

Solid is a Python framework for gradient-free optimization.


Current Features:


Usage:

  • pip install solidpy
  • Import the relevant algorithm
  • Create a class that inherits from that algorithm, and that implements the necessary abstract methods
  • Call its .run() method, which always returns the best solution and its objective function value

Example:

from random import choice, randint, random
from string import ascii_lowercase as lowercase  # Python 3: string.lowercase no longer exists
from Solid.EvolutionaryAlgorithm import EvolutionaryAlgorithm


class Algorithm(EvolutionaryAlgorithm):
    """
    Tries to get a randomly-generated string to match string "clout"
    """
    def _initial_population(self):
        return list(''.join([choice(lowercase) for _ in range(5)]) for _ in range(50))

    def _fitness(self, member):
        return float(sum(member[i] == "clout"[i] for i in range(5)))

    def _crossover(self, parent1, parent2):
        partition = randint(0, len(self.population[0]) - 1)
        return parent1[0:partition] + parent2[partition:]

    def _mutate(self, member):
        if self.mutation_rate >= random():
            member = list(member)
            member[randint(0,4)] = choice(lowercase)
            member = ''.join(member)
        return member


def test_algorithm():
    algorithm = Algorithm(.5, .7, 500, max_fitness=None)
    best_solution, best_objective_value = algorithm.run()
    print(best_solution, best_objective_value)

Testing

To run tests, look in the tests folder.

Use pytest; it should automatically find the test files.


Contributing

Feel free to send a pull request if you want to add any features or if you find a bug.

Check the issues tab for some potential things to do.


Solid contains basic versions of many of the most common optimization algorithms that do not require the calculation of gradients, and allows for very rapid development using them.

It's a very versatile library that's great for learning, modifying, and of course, using out-of-the-box.

See the detailed documentation here.


Download Details:

Author: 100
Source Code: https://github.com/100/Solid 
License: MIT license

#machinelearning #python #library #algorithm #optimization 
