
*Hidden Markov Models for Julia.*

HMMBase provides a lightweight and efficient abstraction for hidden Markov models in Julia. Most HMM libraries only support discrete (e.g. categorical) or normal distributions. In contrast, HMMBase builds on Distributions.jl to support arbitrary univariate and multivariate distributions.

See *HMMBase.jl: A lightweight and efficient Hidden Markov Model abstraction* for more details on the motivation behind this package.

Benchmark of HMMBase against hmmlearn and pyhsmm.

**Features:**

- Supports any observation distributions conforming to the Distribution interface.
- Fast and stable implementations of the forward/backward, EM (Baum-Welch) and Viterbi algorithms.

**Non-features:**

- Multi-sequences HMMs, see MS_HMMBase
- Bayesian models, probabilistic programming, see Turing
- Nonparametric models (HDP-H(S)MM, ...)
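
The Viterbi algorithm listed above is the standard dynamic-programming decoder for HMMs. As a language-agnostic illustration of what it does (this is a generic sketch, not HMMBase's API), a minimal Python version for a discrete-observation HMM might look like:

```python
import numpy as np

def viterbi(init, trans, emis, obs):
    """Most likely state sequence for a discrete-observation HMM.

    init:  (K,)    initial state probabilities
    trans: (K, K)  transition matrix, trans[i, j] = P(state j | state i)
    emis:  (K, M)  emission matrix, emis[k, o] = P(symbol o | state k)
    obs:   length-T list of observed symbol indices
    """
    init, trans, emis = map(np.asarray, (init, trans, emis))
    K, T = len(init), len(obs)
    logp = np.empty((T, K))             # best log-prob of a path ending in state k at time t
    back = np.zeros((T, K), dtype=int)  # argmax predecessor, for backtracking
    logp[0] = np.log(init) + np.log(emis[:, obs[0]])
    for t in range(1, T):
        # scores[i, j] = best log-prob of reaching j at time t via i at time t-1
        scores = logp[t - 1][:, None] + np.log(trans) + np.log(emis[:, obs[t]])
        back[t] = scores.argmax(axis=0)
        logp[t] = scores.max(axis=0)
    path = [int(logp[-1].argmax())]     # best final state, then walk the backpointers
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

For instance, with sticky transitions `[[0.9, 0.1], [0.1, 0.9]]` and emissions `[[0.8, 0.2], [0.2, 0.8]]`, the observations `[0, 0, 1, 1]` decode to the state sequence `[0, 0, 1, 1]`.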

The package can be installed with the Julia package manager. From the Julia REPL, type `]` to enter the Pkg REPL mode and run:

```
pkg> add HMMBase
```

- **STABLE**: documentation of the most recently tagged version.
- **DEVEL**: documentation of the in-development version.

The package is tested against Julia 1.0 and the latest Julia 1.x.

Starting with v1.0, we follow semantic versioning:

Given a version number MAJOR.MINOR.PATCH, increment the:

- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards compatible manner, and
- PATCH version when you make backwards compatible bug fixes.

Contributions are very welcome, as are feature requests and suggestions. Please read the CONTRIBUTING.md file for information on how to contribute. Please open an issue if you encounter any problems.

*Logo: lego by jon trillana from the Noun Project.*

**👋 HMMBase is looking for a new maintainer, please open an issue or send me an email if you are interested!**

*v1.0 (stable):* HMMBase v1.0 comes with many new features and performance improvements (see the release notes), thanks to @nantonel PR#6. It also introduces breaking API changes (methods and fields have been renamed); see Migration to v1.0 for details on migrating your code to the new version.

*v0.0.14*: latest pre-release version.

Are you using HMMBase in a particular domain (Biology, NLP, ...)? Feel free to open an issue to discuss your workflow and needs, and see how we can improve HMMBase.

Author: Maxmouchet

Source Code: https://github.com/maxmouchet/HMMBase.jl

License: MIT license


A **stochastic process** is a collection of random variables indexed by some mathematical set; that is, each random variable of the process is uniquely associated with an element of the set. The set used to index the random variables is called the **index set**, and the set of values the random variables can take forms the **state space**. A stochastic process can be classified in many ways, for example by its state space or its index set.

When the index set of a **stochastic process** is interpreted as time and has a countable number of elements, such as the integers or the natural numbers, the process is a **discrete-time** process.
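
As a concrete illustration (a hypothetical example, not from the text), a simple random walk is a discrete-time stochastic process: the index set is the natural numbers and the state space is the integers. A short Python sketch:

```python
import random

def random_walk(steps, seed=0):
    """A discrete-time stochastic process: the index set is
    {0, 1, ..., steps} and the state space is the integers."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(steps):
        position += rng.choice([-1, 1])  # each step is itself a random variable
        path.append(position)
    return path

walk = random_walk(5)  # one position per index 0..5
```

Each index (time step) is associated with exactly one random variable, the walker's position at that time.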

A Markov chain is a **discrete-time process**, indexed at times 1, 2, 3, …, that takes values called **states**, which are observed.

For example, if the set of states is S = {hot, cold}, then a state series over time is z ∈ S^T. The weather for 4 days can be the sequence {z1 = hot, z2 = cold, z3 = cold, z4 = hot}.

Markov and hidden Markov models are engineered to handle data that can be represented as a sequence of observations over time. Hidden Markov models are probabilistic frameworks in which the observed data are modeled as a series of outputs generated by one of several (hidden) internal states.

Markov models rest mainly on two assumptions.

**Limited Horizon assumption**: the probability of being in a state at time t depends only on the state at time t-1:

P(z_t | z_{t-1}, z_{t-2}, …, z_1) = P(z_t | z_{t-1})

Eq. 1. Limited Horizon Assumption

That means the state at time **t** is a *sufficient summary* of the past for predicting the future. This is the assumption of an order-1 Markov process. An order-k Markov process assumes that the state z_t is conditionally independent of the states more than k time steps before it.
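
The order-1 assumption is easy to see in code: to sample the next state, only the current state is consulted, never the earlier history. A small Python sketch using the hot/cold states from above (the transition probabilities are made up for illustration):

```python
import random

# P(next state | current state): everything before the current state is
# irrelevant -- this is the limited horizon (order-1) assumption.
TRANSITIONS = {
    "hot":  {"hot": 0.7, "cold": 0.3},
    "cold": {"hot": 0.4, "cold": 0.6},
}

def sample_chain(start, length, seed=0):
    rng = random.Random(seed)
    state, states = start, [start]
    for _ in range(length - 1):
        probs = TRANSITIONS[state]  # depends only on the current state
        state = rng.choices(list(probs), weights=list(probs.values()))[0]
        states.append(state)
    return states

weather = sample_chain("hot", 4)  # a 4-day weather sequence like the one above
```

An order-k version would key `TRANSITIONS` on the tuple of the last k states instead of a single state.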

#hidden-markov-models #sequence-model #machine-learning #computer-science #time-series-model #deep-learning


With recent advances in machine learning, reinforcement learning is discussed more and more. Reinforcement learning differs from supervised learning, which most of us are familiar with, in that it does not need examples or labels to be presented. The focus of reinforcement learning is finding the right balance between exploration (of a new environment) and exploitation (of existing knowledge).

Conceptual diagram of reinforcement learning

The environment in reinforcement learning is generally described as a Markov decision process (MDP). Therefore, it is a good idea for us to understand the related Markov concepts: the Markov chain, the Markov process, and the hidden Markov model (HMM).

#markov-chains #hidden-markov-models #python #machine-learning #data-analytics


This package is meant to assemble methods for handling 2D and 3D statistical shape models, which are often used in medical computer vision.

Currently, PCA-based shape models are implemented, as introduced by Cootes et al. [1].

Given a set of *shapes* of the form `ndim x nlandmarks x nshapes`, a PCA shape model is constructed using:

```
using ShapeModels
landmarks = ShapeModels.examplelandmarks(:hands2d)
model = PCAShapeModel(landmarks)
shapes = modeshapes(model, 1) # examples for first eigenmode
[plotshape(shapes[:,:,i], "b.") for i = 1:10]
plotshape(meanshape(model), "r.")
```

Example computed with outlines of metacarpal bones:

Function | Description
---|---
`model = PCAShapeModel(shapes)` | compute a shape model
`nmodes(model)` | get the number of modes of the model, including rotation, scaling and translation
`modesstd(model)` | get the standard deviations of the modes
`shape(model, coeffs)` | compute a shape given a vector `coeffs` of length `nmodes(model)`
`meanshape(model)` | get the shape which represents the mean of all shapes
`modeshapes(model, mode)` | get 10 shapes from -3 std to 3 std of mode number `mode`

Helper functions for plotting require the `PyPlot` package to be installed:

Function | Description
---|---
`axisij()` | set the origin to top-left
`plotshape(shape)` | plot a single shape
`plotshapes(shapes)` | plot several shapes in individual subfigures
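
The PCA construction behind such models can be sketched independently of the package. Below is a minimal NumPy version, assuming landmarks are stacked as `ndim x nlandmarks x nshapes` as above (the function names here are illustrative, not ShapeModels' API):

```python
import numpy as np

def fit_pca_shape_model(landmarks):
    """Fit a PCA shape model. landmarks: array (ndim, nlandmarks, nshapes)."""
    ndim, nlm, nshapes = landmarks.shape
    X = landmarks.reshape(ndim * nlm, nshapes)   # one flattened shape per column
    mean = X.mean(axis=1, keepdims=True)         # mean shape
    U, S, _ = np.linalg.svd(X - mean, full_matrices=False)
    std = S / np.sqrt(max(nshapes - 1, 1))       # per-mode standard deviations
    return mean, U, std                          # U's columns are the eigenmodes

def shape_from_coeffs(mean, U, coeffs):
    """Reconstruct a flattened shape from a vector of mode coefficients."""
    coeffs = np.asarray(coeffs)
    return mean[:, 0] + U[:, :len(coeffs)] @ coeffs

# Tiny synthetic example: 2-D shapes with 3 landmarks, 5 shapes.
rng = np.random.default_rng(0)
lm = rng.normal(size=(2, 3, 5))
mean, U, std = fit_pca_shape_model(lm)
```

Sampling an eigenmode at multiples of its standard deviation, as `modeshapes` does, amounts to calling `shape_from_coeffs` with a coefficient vector that is zero except in that mode.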

[1] T.F. Cootes, D. Cooper, C.J. Taylor and J. Graham, "Active Shape Models - Their Training and Application." Computer Vision and Image Understanding. Vol. 61, No. 1, Jan. 1995, pp. 38-59.

Author: Rened

Source Code: https://github.com/rened/ShapeModels.jl

License: View license


If you use AmplNLReader.jl in your work, please cite using the format given in CITATION.bib.

At the Julia prompt, install the package:

```
pkg> add AmplNLReader
```

and run its tests with:

```
pkg> test AmplNLReader
```

For an introduction to the AMPL modeling language, see

- R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Mathematical Programming Language, Management Science 36, pp. 519-554, 1990.
- R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Modeling Language for Mathematical Programming, Duxbury Press / Brooks/Cole Publishing Company, 2003.
- D. Orban, The Lightning AMPL Tutorial. A Guide for Nonlinear Optimization Users, GERAD Technical Report G-2009-66, 2009.

Suppose you have an AMPL model represented by the model and data files `mymodel.mod` and `mymodel.dat`. Decode this model as a so-called `nl` file using:

```
ampl -ogmymodel mymodel.mod mymodel.dat
```

For example:

```
julia> using AmplNLReader
julia> hs33 = AmplModel("hs033.nl")
Minimization problem hs033.nl
nvar = 3, ncon = 2 (0 linear)
julia> print(hs33)
Minimization problem hs033.nl
nvar = 3, ncon = 2 (0 linear)
lvar = 1x3 Array{Float64,2}:
0.0 0.0 0.0
uvar = 1x3 Array{Float64,2}:
Inf Inf 5.0
lcon = 1x2 Array{Float64,2}:
-Inf 4.0
ucon = 1x2 Array{Float64,2}:
0.0 Inf
x0 = 1x3 Array{Float64,2}:
0.0 0.0 3.0
y0 = 1x2 Array{Float64,2}:
-0.0 -0.0
```

There is support for holding multiple models in memory simultaneously. This should be transparent to the user.

`AmplNLReader.jl` currently focuses on continuous problems conforming to `NLPModels.jl`.

`AmplModel` objects support all methods associated with `NLPModel` objects. Please see the `NLPModels.jl` documentation for more information. The following table lists extra methods associated with an `AmplModel`. See Hooking your Solver to AMPL for background.

Method | Notes
---|---
`write_sol(nlp, msg, x, y)` | Write primal and dual solutions to file

Missing methods:

- methods for LPs (sparse cost, sparse constraint matrix)
- methods to check optimality conditions.

Author: JuliaSmoothOptimizers

Source Code: https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl

License: View license