HMMBase.jl: Hidden Markov Models for Julia

HMMBase.jl

Hidden Markov Models for Julia.

Introduction

HMMBase provides a lightweight and efficient abstraction for hidden Markov models in Julia. Most HMM libraries only support discrete (e.g. categorical) or Normal distributions. In contrast, HMMBase builds upon Distributions.jl to support arbitrary univariate and multivariate distributions.
See HMMBase.jl - A lightweight and efficient Hidden Markov Model abstraction for more details on the motivation behind this package.


Benchmark of HMMBase against hmmlearn and pyhsmm.

Features:

  • Supports any observation distribution conforming to the Distribution interface.
  • Fast and stable implementations of the forward/backward, EM (Baum-Welch) and Viterbi algorithms.

Non-features:

  • Multi-sequence HMMs, see MS_HMMBase
  • Bayesian models, probabilistic programming, see Turing
  • Nonparametric models (HDP-H(S)MM, ...)

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add HMMBase
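As a quick sketch of typical usage (the constructor and function names below follow the HMMBase v1.x documentation; treat this as an illustration rather than a definitive reference):

```julia
# Sketch only: assumes HMMBase and Distributions are installed, and uses
# the API names documented for HMMBase v1.x.
using Distributions, HMMBase

# Two hidden states with Gaussian emissions.
hmm = HMM([0.9 0.1; 0.2 0.8], [Normal(0.0, 1.0), Normal(10.0, 1.0)])

z, y = rand(hmm, 500, seq = true)   # sample hidden states and observations
zv = viterbi(hmm, y)                # most likely hidden state sequence
hmm2, hist = fit_mle(hmm, y)        # re-estimate parameters via Baum-Welch
```

Because emissions are plain Distributions.jl objects, swapping `Normal` for any other univariate or multivariate distribution follows the same pattern.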

Documentation

  • STABLE: documentation of the most recently tagged version.
  • DEVEL: documentation of the in-development version.

Project Status

The package is tested against Julia 1.0 and the latest Julia 1.x.

Starting with v1.0, we follow semantic versioning:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards compatible manner, and
  3. PATCH version when you make backwards compatible bug fixes.

Questions and Contributions

Contributions are very welcome, as are feature requests and suggestions. Please read the CONTRIBUTING.md file for information on how to contribute, and open an issue if you encounter any problems.

Logo: lego by jon trillana from the Noun Project.

News

  • 👋 HMMBase is looking for a new maintainer, please open an issue or send me an email if you are interested!
  • v1.0 (stable): HMMBase v1.0 comes with many new features and performance improvements (see the release notes), thanks to @nantonel PR#6. It also introduces breaking API changes (methods and fields have been renamed); see Migration to v1.0 for details on migrating your code to the new version.
  • v0.0.14: latest pre-release version.

Are you using HMMBase in a particular domain (Biology, NLP, ...) ? Feel free to open an issue to discuss your workflow/needs and see how we can improve HMMBase.

Download Details:

Author: Maxmouchet
Source Code: https://github.com/maxmouchet/HMMBase.jl 
License: MIT license

#julia #statistics #hidden 


Markov and Hidden Markov Model

A stochastic process is a collection of random variables indexed by some mathematical set: each random variable of the process is uniquely associated with an element of the set. The set used to index the random variables is called the **index set**, and the set of values the random variables can take forms the **state space**. A stochastic process can be classified in many ways, for example by its state space or its index set.

When the index set is interpreted as time and has countably many elements, such as the integers or the natural numbers, the process is a discrete-time process.

Stochastic Model

It is a discrete-time process, indexed at times 1, 2, 3, …, that takes values called states, which are observed.

For example, the states could be S = {hot, cold}.

A state series over time is z ∈ S^T.

The weather for 4 days can be a sequence: {z1 = hot, z2 = cold, z3 = cold, z4 = hot}.


Markov and hidden Markov models are engineered to handle data which can be represented as a ‘sequence’ of observations over time. Hidden Markov models are probabilistic frameworks where the observed data are modeled as a series of outputs generated by one of several (hidden) internal states.

Markov Assumptions

Markov models are based mainly on two assumptions.

  1. Limited Horizon assumption: the probability of being in a state at time t depends only on the state at time (t-1).

Eq. 1 (Limited Horizon Assumption): P(z_t | z_{t-1}, z_{t-2}, …, z_1) = P(z_t | z_{t-1})

That means the state at time t is a sufficient summary of the past to reasonably predict the future. This assumption defines an order-1 Markov process. An order-k Markov process assumes that the state z_t is conditionally independent of the states k + 1 or more time steps before it, given the previous k states.
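The order-1 assumption can be made concrete by estimating transition probabilities from an observed state sequence. A minimal sketch in Julia, using made-up hot/cold weather data:

```julia
# Illustration only: estimate order-1 transition probabilities
# P(z_t = b | z_{t-1} = a) by counting consecutive pairs in a sequence.
seq = [:hot, :cold, :cold, :hot, :hot, :cold, :hot, :hot]
states = [:hot, :cold]

# Count each observed transition a -> b.
counts = Dict((a, b) => 0 for a in states, b in states)
for t in 1:length(seq)-1
    counts[(seq[t], seq[t+1])] += 1
end

# Normalize each row: count(a -> b) / count(a -> anything).
probs = Dict(k => v / sum(counts[(k[1], b)] for b in states)
             for (k, v) in counts)
```

With this toy sequence, `probs[(:hot, :cold)]` is 0.5 and `probs[(:cold, :hot)]` is 2/3: under the order-1 assumption these row-normalized counts are the maximum-likelihood transition estimates.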

#hidden-markov-models #sequence-model #machine-learning #computer-science #time-series-model #deep learning


Tyshawn Braun

Introduction to the Markov Chain, Process, and Hidden Markov Model

Introduction

With recent advancements in the machine learning field, we have started to discuss reinforcement learning more and more. Reinforcement learning differs from supervised learning, which we should be very familiar with, in that it does not need examples or labels to be presented. The focus of reinforcement learning is finding the right balance between exploration (of a new environment) and exploitation (of existing knowledge).

[Figure: conceptual diagram of reinforcement learning]

The environment of reinforcement learning is generally described in the form of a Markov decision process (MDP). Therefore, it is a good idea for us to understand the various Markov concepts: the Markov chain, the Markov process, and the hidden Markov model (HMM).

#markov-chains #hidden-markov-models #python #machine-learning #data-analytics

ShapeModels.jl: Statistical Shape Models / Point Distribution Models

ShapeModels

This package is meant to assemble methods for handling 2D and 3D statistical shape models, which are often used in medical computer vision.

Currently, PCA-based shape models are implemented, as introduced by Cootes et al. [1].

Given a set of shapes of the form ndim x nlandmarks x nshapes, a PCA shape model is constructed using:

using ShapeModels
landmarks = ShapeModels.examplelandmarks(:hands2d)

model = PCAShapeModel(landmarks)

shapes = modeshapes(model, 1)  # examples for first eigenmode
[plotshape(shapes[:,:,i], "b.") for i = 1:10]
plotshape(meanshape(model), "r.")

Example computed with outlines of metacarpal bones:

Functions

  • model = PCAShapeModel(shapes) compute a shape model
  • nmodes(model) get number of modes of the model, including rotation, scaling and translation
  • modesstd(model) get standard deviations of modes
  • shape(model, coeffs) compute a shape given a vector coeffs of length(nmodes(a))
  • meanshape(model) get the shape which represents the mean of all shapes
  • modeshapes(model, mode) get 10 shapes from -3std to 3std of mode number mode
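Putting the functions above together, a hedged sketch (it assumes the function names behave as listed; the `coeffs` vector and the choice of first mode here are purely illustrative):

```julia
# Sketch only: requires the ShapeModels package; uses the example
# landmarks and the functions documented in the list above.
using ShapeModels

landmarks = ShapeModels.examplelandmarks(:hands2d)
model = PCAShapeModel(landmarks)

n = nmodes(model)           # number of modes, incl. rotation/scale/translation
sigma = modesstd(model)     # per-mode standard deviations

coeffs = zeros(n)
coeffs[1] = sigma[1]        # move one standard deviation along the first mode
s = shape(model, coeffs)    # shape corresponding to these coefficients
m = meanshape(model)        # mean of all training shapes
```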

Helper functions for plotting. They require the PyPlot package to be installed.

  • axisij() set the origin to top-left
  • plotshape(shape) plot a single shape
  • plotshapes(shapes) plot several shapes in individual subfigures

[1] T.F. Cootes, D. Cooper, C.J. Taylor and J. Graham, "Active Shape Models - Their Training and Application." Computer Vision and Image Understanding. Vol. 61, No. 1, Jan. 1995, pp. 38-59.

Download Details:

Author: Rened
Source Code: https://github.com/rened/ShapeModels.jl 
License: View license

#julia #models 

AmplNLReader.jl: Julia AMPL Models Conforming to NLPModels.jl

AmplNLReader.jl: A Julia interface to AMPL

How to Cite

If you use AmplNLReader.jl in your work, please cite using the format given in CITATION.bib.

How to Install

At the Julia prompt,

pkg> add AmplNLReader

Testing

pkg> test AmplNLReader

Creating a Model

For an introduction to the AMPL modeling language, see

Suppose you have an AMPL model represented by the model and data files mymodel.mod and mymodel.dat. Decode this model as a so-called nl file using

ampl -ogmymodel mymodel.mod mymodel.dat

For example:

julia> using AmplNLReader

julia> hs33 = AmplModel("hs033.nl")
Minimization problem hs033.nl
nvar = 3, ncon = 2 (0 linear)

julia> print(hs33)
Minimization problem hs033.nl
nvar = 3, ncon = 2 (0 linear)
lvar = 1x3 Array{Float64,2}:
 0.0  0.0  0.0
uvar = 1x3 Array{Float64,2}:
 Inf  Inf  5.0
lcon = 1x2 Array{Float64,2}:
 -Inf  4.0
ucon = 1x2 Array{Float64,2}:
 0.0  Inf
x0 = 1x3 Array{Float64,2}:
 0.0  0.0  3.0
y0 = 1x2 Array{Float64,2}:
 -0.0  -0.0

There is support for holding multiple models in memory simultaneously. This should be transparent to the user.

Optimization Problems

AmplNLReader.jl currently focuses on continuous problems conforming to NLPModels.jl.

AmplModel objects support all methods associated to NLPModel objects. Please see the NLPModels.jl documentation for more information. The following table lists extra methods associated to an AmplModel. See Hooking your Solver to AMPL for background.

Method                       Notes
write_sol(nlp, msg, x, y)    Write primal and dual solutions to file
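Since AmplModel conforms to NLPModels.jl, the standard NLPModels evaluation functions should apply directly. A sketch (it assumes the hs033.nl file from the example above is present; `obj`, `grad`, and `cons` are standard NLPModels API calls):

```julia
# Sketch only: evaluate a decoded AMPL model through the NLPModels API.
using AmplNLReader, NLPModels

nlp = AmplModel("hs033.nl")

x = nlp.meta.x0               # starting point stored in the model
fx = obj(nlp, x)              # objective value at x
gx = grad(nlp, x)             # objective gradient at x
cx = cons(nlp, x)             # constraint values at x

# Write a solution back in AMPL's .sol format (see the table above):
write_sol(nlp, "solved", x, zeros(nlp.meta.ncon))
```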

Missing Methods

  • methods for LPs (sparse cost, sparse constraint matrix)
  • methods to check optimality conditions.

Download Details:

Author: JuliaSmoothOptimizers
Source Code: https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl 
License: View license

#julia #optimization #nlp #models