NGram

This implementation uses linear interpolation to build the model. For example, a simple trigram model estimates probabilities as:

```
p("book" | "the", "green") = count("the green book") / count("the green")
```

But there are some limitations:

- We need a bigger corpus to train a trigram model effectively than we do for a bigram or unigram model.
- count(trigram) is often equal to zero.
- With bigram or unigram models we don't capture as much context.

The idea is then to combine the results of the `trigram`, `bigram`, and `unigram` models. We can generalize: to compute an n-gram probability, we also use the results of the `(n-1)gram`, ..., `bigram`, and `unigram` models. Here is an example in the case of a trigram model.

```
p("book" | "the", "green") = a * count("the green book") / count("the green")
                           + b * count("green book") / count("green")
                           + c * count("book") / count()
where
  a + b + c = 1
  a >= 0, b >= 0, c >= 0
# For example: a = b = c = 1 / 3
```
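The interpolation can be sketched in plain Julia. This is an illustration only, not NGram.jl's internal code; the helper names (`counts`, `interp_prob`) are hypothetical, and the lower-order terms follow the standard Jelinek-Mercer formulation, backing off to `p(book | green)` and `p(book)`:

```julia
# Hypothetical sketch of linear interpolation over n-gram counts
# (illustrative; not NGram.jl internals).
function counts(texts)
    uni = Dict{String,Int}(); bi = Dict{String,Int}(); tri = Dict{String,Int}()
    total = 0
    for t in texts
        w = split(t)
        for i in eachindex(w)
            uni[w[i]] = get(uni, w[i], 0) + 1; total += 1
            i >= 2 && (k = join(w[i-1:i], " "); bi[k] = get(bi, k, 0) + 1)
            i >= 3 && (k = join(w[i-2:i], " "); tri[k] = get(tri, k, 0) + 1)
        end
    end
    uni, bi, tri, total
end

# p(w3 | w1, w2) = a * p_trigram + b * p_bigram + c * p_unigram, a + b + c = 1
function interp_prob(w1, w2, w3, uni, bi, tri, total; a = 1/3, b = 1/3, c = 1/3)
    ratio(n, d) = d == 0 ? 0.0 : n / d   # avoid division by zero counts
    a * ratio(get(tri, "$w1 $w2 $w3", 0), get(bi, "$w1 $w2", 0)) +
    b * ratio(get(bi, "$w2 $w3", 0), get(uni, w2, 0)) +
    c * ratio(get(uni, w3, 0), total)
end

uni, bi, tri, total = counts(["the green book", "my blue book", "his green house", "book"])
p = interp_prob("the", "green", "book", uni, bi, tri, total)  # p ≈ 0.6
```

Note that even when `count("the green book")` is zero, the bigram and unigram terms keep the probability nonzero, which is exactly the motivation given above.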

```
using NGram
texts = String["the green book", "my blue book", "his green house", "book"]
# Train a trigram model on the documents
model = NGramModel(texts, 3)
# Query on the model
# p(book | the, green)
model["the green book"]
```

Author: Remusao

Source Code: https://github.com/remusao/NGram.jl

License: View license

This package is meant to assemble methods for handling 2D and 3D statistical shape models, which are often used in medical computer vision.

Currently, PCA-based shape models are implemented, as introduced by Cootes et al.¹

Given a set of *shapes* of the form `ndim x nlandmarks x nshapes`, a PCA shape model is constructed using:

```
using ShapeModels
landmarks = ShapeModels.examplelandmarks(:hands2d)
model = PCAShapeModel(landmarks)
shapes = modeshapes(model, 1) # examples for first eigenmode
[plotshape(shapes[:,:,i], "b.") for i = 1:10]
plotshape(meanshape(model), "r.")
```
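Under the hood, such a model amounts to PCA on the flattened landmark coordinates. Here is a minimal, self-contained sketch of that idea using only standard-library linear algebra; it is not ShapeModels.jl's code, and `build_pca` is a hypothetical name:

```julia
using LinearAlgebra, Statistics

# Illustrative PCA on landmarks (hypothetical sketch, not ShapeModels.jl internals).
# `landmarks` is an ndim x nlandmarks x nshapes array.
function build_pca(landmarks)
    ndim, nlm, ns = size(landmarks)
    X = reshape(landmarks, ndim * nlm, ns)   # each column is one flattened shape
    mu = vec(mean(X, dims = 2))              # mean shape (flattened)
    C = cov(X, dims = 2)                     # covariance across shapes
    F = eigen(Symmetric(C))                  # eigenmodes of shape variation
    order = sortperm(F.values, rev = true)   # strongest mode first
    mu, F.values[order], F.vectors[:, order]
end

# Toy data: 2D, 3 landmarks, 5 shapes
landmarks = randn(2, 3, 5)
mu, vals, modes = build_pca(landmarks)
```

New shapes are then generated as the mean plus a weighted sum of the leading eigenvectors, which is what coefficient vectors like `coeffs` below parameterize.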

Example computed with outlines of metacarpal bones.

- `model = PCAShapeModel(shapes)` -- compute a shape model
- `nmodes(model)` -- get the number of modes of the model, including rotation, scaling, and translation
- `modesstd(model)` -- get the standard deviations of the modes
- `shape(model, coeffs)` -- compute a shape given a vector `coeffs` of length `nmodes(model)`
- `meanshape(model)` -- get the shape which represents the mean of all shapes
- `modeshapes(model, mode)` -- get 10 shapes from -3 std to 3 std of mode number `mode`

Helper functions for plotting, which require the `PyPlot` package to be installed:

- `axisij()` -- set the origin to top-left
- `plotshape(shape)` -- plot a single shape
- `plotshapes(shapes)` -- plot several shapes in individual subfigures

1 T.F. Cootes, D. Cooper, C.J. Taylor and J. Graham, "Active Shape Models - Their Training and Application." Computer Vision and Image Understanding. Vol. 61, No. 1, Jan. 1995, pp. 38-59.

Author: Rened

Source Code: https://github.com/rened/ShapeModels.jl

License: View license

If you use AmplNLReader.jl in your work, please cite using the format given in CITATION.bib.

At the Julia prompt,

```
pkg> add AmplNLReader
```

```
pkg> test AmplNLReader
```

For an introduction to the AMPL modeling language, see

- R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Mathematical Programming Language, Management Science 36, pp. 519-554, 1990.
- R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Modeling Language for Mathematical Programming, Duxbury Press / Brooks/Cole Publishing Company, 2003.
- D. Orban, The Lightning AMPL Tutorial. A Guide for Nonlinear Optimization Users, GERAD Technical Report G-2009-66, 2009.

Suppose you have an AMPL model represented by the model and data files `mymodel.mod` and `mymodel.dat`. Decode this model as a so-called `nl` file using

```
ampl -ogmymodel mymodel.mod mymodel.dat
```

For example:

```
julia> using AmplNLReader

julia> hs33 = AmplModel("hs033.nl")
Minimization problem hs033.nl
nvar = 3, ncon = 2 (0 linear)

julia> print(hs33)
Minimization problem hs033.nl
nvar = 3, ncon = 2 (0 linear)
lvar = 1x3 Array{Float64,2}:
 0.0  0.0  0.0
uvar = 1x3 Array{Float64,2}:
 Inf  Inf  5.0
lcon = 1x2 Array{Float64,2}:
 -Inf  4.0
ucon = 1x2 Array{Float64,2}:
 0.0  Inf
x0 = 1x3 Array{Float64,2}:
 0.0  0.0  3.0
y0 = 1x2 Array{Float64,2}:
 -0.0  -0.0
```

There is support for holding multiple models in memory simultaneously. This should be transparent to the user.

`AmplNLReader.jl` currently focuses on continuous problems conforming to `NLPModels.jl`.

`AmplModel` objects support all methods associated with `NLPModel` objects. Please see the `NLPModels.jl` documentation for more information. The following table lists extra methods associated with an `AmplModel`. See Hooking your Solver to AMPL for background.

Method | Notes
---|---
`write_sol(nlp, msg, x, y)` | Write primal and dual solutions to file

- methods for LPs (sparse cost, sparse constraint matrix)
- methods to check optimality conditions.

Author: JuliaSmoothOptimizers

Source Code: https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl

License: View license

A Julia package for equation-based modeling and simulations.

NOTE: This is a work in progress to convert the package to use ModelingToolkit.

Some of the components and/or examples do not work yet. This especially includes models requiring events and discrete systems.

FunctionalModels builds on top of ModelingToolkit. The following are exported:

- `t`: the independent variable
- `D` and `der`: aliases for `Differential(t)`
- `system`: flattens a set of hierarchical equations and returns a simplified `ODESystem`
- `Unknown`: helper function to create variables
- `default_value`: return the default (starting) value of a variable
- `compatible_values`: return the base value from a variable to use when creating other variables
- `RefBranch` and `Branch`: mark nodes and flow variables

Equations are standard ModelingToolkit equations. The main difference in FunctionalModels is that variables should be created with `Unknown(val; name)` or one of the helpers like `Voltage()`. Variables created this way include metadata to ensure that variable names don't clash; multiple subcomponents can all have a `v(t)` variable, for example. Once the model is flattened, the variable names will be normalized.

FunctionalModels uses a functional style, as opposed to the more object-oriented approach of ModelingToolkit, Modia, and Modelica. Because `system` returns an `ODESystem`, models can be built up of FunctionalModels components and standard ModelingToolkit components.

This package is for non-causal modeling in Julia. The idea behind non-causal modeling is that the user develops models based on components which are described by a set of equations. A tool can then transform the equations and solve the differential algebraic equations. Non-causal models tend to match their physical counterparts in terms of their specification and implementation.

Causal modeling is where all signals have an input and an output, and the flow of information is clear. Simulink is the highest-profile example. The problem with causal modeling is that it is difficult to build up models from components.

The highest-profile noncausal modeling tools are in the Modelica family. The MathWorks company also has Simscape, which uses Matlab notation. Modelica is an object-oriented, open language with multiple implementations. It is a large, complex, powerful language with an extensive standard library of components.

This implementation follows the work of David Broman (thesis and code) and George Giorgidze (Hydra code and thesis) and Henrik Nilsson and their functional hybrid modeling. FunctionalModels is most similar to Modelyze by David Broman (report).

FunctionalModels is an installable package. To install FunctionalModels, use the following:

```
Pkg.add("FunctionalModels")
```

FunctionalModels.jl has one main module named `FunctionalModels` and the following submodules:

- `FunctionalModels.Lib` -- the standard library
- `FunctionalModels.Examples` -- example models, including:
  - `FunctionalModels.Examples.Basics`
  - `FunctionalModels.Examples.Lib`
  - `FunctionalModels.Examples.Neural`

FunctionalModels uses ModelingToolkit to build up models. All equations use the ModelingToolkit variables and syntax. In a simulation, the unknowns are to be solved based on a set of equations. Equations are built from device models.

A device model is a function that returns a vector of equations or other devices that also return lists of equations.

This example shows definitions of several electrical components. Each is again a function that returns a list of equations.

Arguments to each function are model parameters. These normally include nodes specifying connectivity followed by parameters specifying model characteristics.

Models can contain models or other functions that return equations. The function `Branch` is a special function that returns an equation specifying relationships between nodes and flows. It also acts as an indicator to mark nodes. In the flattening/elaboration process, equations are created to sum flows (in this case electrical currents) to zero at all nodes. `RefBranch` is another special function for marking nodes and flow variables.

Nodes passed as parameters are unknown variables. For these electrical examples, a node is simply an unknown voltage.
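The flow-summing step of flattening can be sketched with plain data structures. This is an illustration of the idea only, not FunctionalModels' actual elaboration code; `kcl_equations` is a hypothetical name, and equations are built as strings for readability:

```julia
# Illustrative sketch of flow summation during flattening
# (hypothetical names; not FunctionalModels.jl internals).
# Each branch records (from_node, to_node, flow_name); flattening then
# generates one "sum of flows ~ 0" equation per node (Kirchhoff's current law).
function kcl_equations(branches)
    terms = Dict{Symbol,Vector{String}}()
    for (n1, n2, flow) in branches
        push!(get!(terms, n1, String[]), "+$flow")   # flow leaves n1
        push!(get!(terms, n2, String[]), "-$flow")   # flow enters n2
    end
    Dict(node => join(t, " ") * " ~ 0" for (node, t) in terms)
end

# Branches mirroring the circuit below: r1 from n1 to n2, r2 and c1 from n2 to ground
branches = [(:n1, :n2, "i_r1"), (:n2, :g, "i_r2"), (:n2, :g, "i_c1")]
eqs = kcl_equations(branches)
# eqs[:n2] combines the three currents touching node n2
```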

```
function Resistor(n1, n2; R::Real)
    i = Current()
    v = Voltage()
    [
        Branch(n1, n2, v, i)
        R * i ~ v
    ]
end

function Capacitor(n1, n2; C::Real)
    i = Current()
    v = Voltage()
    [
        Branch(n1, n2, v, i)
        D(v) ~ i / C
    ]
end
```

What follows is a top-level circuit definition. In this case, there are no input parameters. The ground reference `g` is assigned zero volts.

All of the equations returned in the list of equations are other models with various parameters.

In this example, the model components are named (`:vs`, `:r1`, ...). Unnamed components can also be used, but then the variables used in components have anonymized names (`c1₊i(t)` vs. `var"##i#1057"(t)`).

```
function Circuit()
    @named n1 = Voltage()
    @named n2 = Voltage()
    g = 0.0 # A ground has zero volts; it's not an unknown.
    [
        :vs => SineVoltage(n1, g, V = 10.0, f = 60.0)
        :r1 => Resistor(n1, n2, R = 10.0)
        :r2 => Resistor(n2, g, R = 5.0)
        :c1 => Capacitor(n2, g, C = 5.0e-3)
    ]
end

ckt = Circuit()
```

For more information, see the documentation.

Author: tshort

Source Code: https://github.com/tshort/FunctionalModels.jl

License: View license

ParticleFilters

This package provides some simple generic particle filters, and may serve as a template for making custom particle filters and other belief updaters. It is compatible with POMDPs.jl, but does not have to be used with that package.

Installation

In Julia:

```
Pkg.add("ParticleFilters")
```

Usage

Basic setup might look like this:

```
using ParticleFilters, Distributions
dynamics(x, u, rng) = x + u + randn(rng)
y_likelihood(x_previous, u, x, y) = pdf(Normal(), y - x)
model = ParticleFilterModel{Float64}(dynamics, y_likelihood)
pf = BootstrapFilter(model, 10)
```

Then the `update` function can be used to perform a particle filter update.

```
b = ParticleCollection([1.0, 2.0, 3.0, 4.0])
u = 1.0
y = 3.0
b_new = update(pf, b, u, y)
```

This is a very simple example and the framework can accommodate a variety of more complex use cases. More details can be found in the documentation linked to below.
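For intuition, here is a minimal, self-contained sketch of what a bootstrap-filter update does internally. It is illustrative only; `bootstrap_update` is a hypothetical name, and the real package handles resampling and degenerate weights more carefully:

```julia
using Random

# Illustrative bootstrap-filter update (hypothetical sketch, not
# ParticleFilters.jl internals): predict, weight, resample.
normpdf(x) = exp(-x^2 / 2) / sqrt(2pi)

function bootstrap_update(rng, particles, u, y; dynamics, likelihood)
    # 1. Predict: propagate each particle through the dynamics.
    predicted = [dynamics(x, u, rng) for x in particles]
    # 2. Weight: score each predicted particle against the measurement y.
    w = [likelihood(x, y) for x in predicted]
    w ./= sum(w)
    # 3. Resample: draw new particles with probability proportional to weight.
    cdf = cumsum(w)
    [predicted[min(searchsortedfirst(cdf, rand(rng)), length(cdf))] for _ in particles]
end

rng = MersenneTwister(1)
dynamics(x, u, rng) = x + u + randn(rng)
likelihood(x, y) = normpdf(y - x)
b = [1.0, 2.0, 3.0, 4.0]
b_new = bootstrap_update(rng, b, 1.0, 3.0; dynamics, likelihood)
```

After the update, particles whose predictions sit close to the measurement `y` are duplicated, while poorly matching ones tend to disappear.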

There are tutorials for three ways to use the particle filters:

- as an estimator for feedback control,
- to filter time-series measurements, and
- as an updater for POMDPs.jl.

Documentation

https://JuliaPOMDP.github.io/ParticleFilters.jl/latest

Author: JuliaPOMDP

Source Code: https://github.com/JuliaPOMDP/ParticleFilters.jl

License: View license