
*Extensions for Julia's docsystem.*

The package can be added using the Julia package manager. From the Julia REPL, type `]` to enter the Pkg REPL mode and run

```
pkg> add DocStringExtensions
```

**STABLE**: most recently tagged version of the documentation. **LATEST**: in-development version of the documentation.

The package is tested and developed against Julia `0.7` and `1.0` on Linux, OS X, and Windows, but there are versions of the package that work on older versions of Julia.

Contributions are very welcome, as are feature requests and suggestions. Please open an issue if you encounter any problems. If you have a question then feel free to ask for help in the Gitter chat room.

Author: JuliaDocs

Source Code: https://github.com/JuliaDocs/DocStringExtensions.jl

License: View license


**JUlia Development Extensions**

Jude statically parses your Julia codebase in JavaScript to provide IDE-like capabilities for Atom:

- non-fuzzy autocomplete
- jump to definition
- forward/back cursor history
- highlights incorrect name errors and some parsing errors

In the future, this may include tools to find usages of a function, limited static type checking, and some refactoring/renaming.

`julia` should be on your PATH. You can customize the exact path to julia in the Jude package settings.

**Autocomplete** is triggered by typing. Only names available in the scope (e.g. try block, for loop, function scope) are shown. To get function signatures, press `ctrl-space` after the `...(`. You can `tab` through the arg list.

**Jump to definition** is also triggered by `ctrl-space` when the cursor is on any word. You can jump to function definitions, type definitions, or variable declarations for files in your workspace. You can even jump to some function declarations in files not in your workspace, such as in the base library. Your jump history is tracked, and you can go back/forward using `ctrl-alt-left`/`ctrl-alt-right`.

**Syntax errors** are highlighted as lint warnings. Jude shows these when it cannot parse your code or resolve a name. You can customize the linter to hide the error message panel that pops up at the bottom of the screen by going to the Linter package options and unselecting `Show Error Panel`.

Jude performs a full syntax parse of all the Julia files in your workspace in JavaScript, and does scoping analysis statically to resolve names. Names are resolved specific to the scope where they are used, not using fuzzy matching over the entire project. Your project files are not loaded into Julia or executed to run the analysis.

Jude reparses some or all of your code as you type. This is done in <50 ms for small to medium sized codebases, especially if it is broken into modules. If you are editing a file that has no "module" declarations (maybe it is just included in another file that does), the reparse can be <5 ms.

This is a parser written in JavaScript (actually TypeScript) separate from the Julia compiler, so there are some gaps in its coverage. Currently, most Julia syntax should be parsed correctly, but this is a work in progress! Please help make it better by reporting any parsing problems or even submitting pull requests. The goal is not necessarily to be a full syntax checker for Julia, but just to be able to resolve names properly. Many errors are shown in the Chromium Dev Tools console, which can be opened with the command `Window: Toggle Dev Tools`.

For imported modules that are not in the workspace, Jude starts up a short lived child Julia process and queries it for the module contents. It will import type definitions, function signatures, macros, and variables at the module level. During the first run with Jude newly installed, it may take a minute to retrieve the Base library and any modules you have imported into your files. Afterwards, the results are cached. The path to Julia is configurable in the Jude options.

Julia is a very flexible language, but for Jude to provide these capabilities, some restrictions are in place.

- Jude can only follow `include("...")` calls where the string is a constant literal. `include("...")` can only be present at the module level, not inside a function.
- Binary operators cannot be overridden to not be binary, e.g. `+ = 5`.
- Anonymous functions have no signature information. `foo = (a, b) -> a + b` has no signature information because `foo` is treated as a variable; `foo(a, b) = a + b` is recognized.

- Jude can jump to function definitions not in your workspace, but Julia reflection doesn't provide the locations for type definitions or macro definitions.
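To make the anonymous-function restriction concrete, here is a minimal sketch in plain Julia (the names `add_anon` and `add_named` are illustrative, not part of Jude):

```julia
# Assigned an anonymous function: a tool like Jude treats `add_anon`
# as a variable, so no signature information is available.
add_anon = (a, b) -> a + b

# Standard short-form function definition: the (a, b) signature
# can be recognized statically.
add_named(a, b) = a + b

add_anon(1, 2)   # → 3
add_named(1, 2)  # → 3
```

Both behave identically at runtime; the difference only matters to static analysis.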

Parsing should be very fast, but if it starts to cause noticeable slowdown in the GUI, you can make Jude reparse less often by increasing the lint delay in the Linter package (`Lint As You Type Interval`, which defaults to 300 ms).

Julia is a dynamic language, so autocomplete currently only works for functions/types/variables, not for fields on types. This is because the type of the object often cannot be determined statically, so the fields are unknown. There is no fuzzy matching currently for fields. In the future, there may be some flow analysis that allows types to be inferred such as from function arg list signatures or type assertions. If Julia eventually allows return type declarations, these can be leveraged too.
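A small sketch of why field completion is hard statically (plain Julia; `Point`, `getx`, and `getx_typed` are hypothetical example names):

```julia
struct Point
    x::Float64
    y::Float64
end

# `p` has no declared type here, so a static analyzer cannot know
# which fields `p.` should complete to.
getx(p) = p.x

# With a type annotation on the argument, the field set is
# statically knowable, as the text above suggests.
getx_typed(p::Point) = p.x
```

Both functions run fine; only the statically available information differs.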

- The bodies of macro definitions are not parsed.
- Code quotes are not parsed, e.g. `:(a + b)`, `quote ... end`.
- Jump to definition for overloaded functions currently leverages Autocomplete+ for the GUI to select the overload. When you jump, Autocomplete+ will insert the function signature you selected, and Jude will then undo the change. This clears any redos on your undo stack. Eventually, a separate GUI will be created, along with a separate shortcut from `ctrl-space`.

- Testing.
- Indicators to show when Julia is running in the background or a long parse is in progress.
- Fix gaps in syntax coverage.
- Use own GUI for jump to function definitions.
- Refactor capability for variable and type names.
- Perhaps factor into service that can be plugged into other editors.
- Simple flow analysis to allow autocomplete of some fields.

Author: jamesdanged

Source Code: https://github.com/jamesdanged/Jude


This package is a Julia extension package to Wavelets.jl (WaveletsExt is short for Wavelets Extension). It contains additional functionalities that complement Wavelets.jl, namely

- Multi-dimensional wavelet transforms
- Redundant wavelet transforms
- Best basis algorithms
- Denoising methods
- Wavelet transform based feature extraction techniques

The package is part of the official Julia Registry. It can be installed via the Julia REPL:

```
(@1.7) pkg> add WaveletsExt
```

or

```
julia> using Pkg; Pkg.add("WaveletsExt")
```

Load the WaveletsExt module along with Wavelets.jl.

```
using Wavelets, WaveletsExt
```
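As a quick smoke test once both packages are loaded, the core transforms work as usual (this sketch uses only Wavelets.jl functions; WaveletsExt's additions, such as the redundant transforms and best-basis algorithms listed above, have their own functions documented in the package manual):

```julia
using Wavelets

x = randn(128)          # dyadic-length test signal
wt = wavelet(WT.db4)    # Daubechies-4 wavelet
y = dwt(x, wt)          # forward discrete wavelet transform
x2 = idwt(y, wt)        # inverse transform; recovers x up to round-off
```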



Author: UCD4IDS

Source Code: https://github.com/UCD4IDS/WaveletsExt.jl

License: BSD-3-Clause license


This repository is a set of extension functionality for estimating the parameters of differential equations using Bayesian methods. It allows the choice of using CmdStan.jl, Turing.jl, DynamicHMC.jl and ApproxBayes.jl to perform a Bayesian estimation of a differential equation problem specified via the DifferentialEquations.jl interface.

To begin you first need to add this repository using the following command.

```
Pkg.add("DiffEqBayes")
using DiffEqBayes
```

For information on using the package, see the stable documentation. The in-development documentation contains the unreleased features.

```
using ParameterizedFunctions, OrdinaryDiffEq, RecursiveArrayTools, Distributions
f1 = @ode_def LotkaVolterra begin
    dx = a*x - x*y
    dy = -3*y + x*y
end a
p = [1.5]
u0 = [1.0,1.0]
tspan = (0.0,10.0)
prob1 = ODEProblem(f1,u0,tspan,p)
σ = 0.01 # noise, fixed for now
t = collect(1.:10.) # observation times
sol = solve(prob1,Tsit5())
priors = [Normal(1.5, 1)]
randomized = VectorOfArray([(sol(t[i]) + σ * randn(2)) for i in 1:length(t)])
data = convert(Array,randomized)
using CmdStan #required for using the Stan backend
bayesian_result_stan = stan_inference(prob1,t,data,priors)
bayesian_result_turing = turing_inference(prob1,Tsit5(),t,data,priors)
using DynamicHMC #required for DynamicHMC backend
bayesian_result_hmc = dynamichmc_inference(prob1, Tsit5(), t, data, priors)
bayesian_result_abc = abc_inference(prob1, Tsit5(), t, data, priors)
```

You don't always have data for all of the variables of the model. In case of certain latent variables, you can utilise the `save_idxs` kwarg to declare the observed variables and run the inference using any of the backends as shown below.

```
sol = solve(prob1,Tsit5(),save_idxs=[1])
randomized = VectorOfArray([(sol(t[i]) + σ * randn(1)) for i in 1:length(t)])
data = convert(Array,randomized)
using CmdStan #required for using the Stan backend
bayesian_result_stan = stan_inference(prob1,t,data,priors,save_idxs=[1])
bayesian_result_turing = turing_inference(prob1,Tsit5(),t,data,priors,save_idxs=[1])
using DynamicHMC #required for DynamicHMC backend
bayesian_result_hmc = dynamichmc_inference(prob1,Tsit5(),t,data,priors,save_idxs = [1])
bayesian_result_abc = abc_inference(prob1,Tsit5(),t,data,priors,save_idxs=[1])
```

Author: SciML

Source Code: https://github.com/SciML/DiffEqBayes.jl

License: View license


Torch.jl

Sensible extensions for exposing torch in Julia.

This package is aimed at providing the `Tensor` type, which offloads all computations to ATen, the foundational tensor library for PyTorch, written in C++.

**Note:**

- Needs a machine with a CUDA GPU (CUDA 10.1 or above).
- The lazy artifacts would need to function without a GPU for CPU-only use.

To add the package, from the Julia REPL, enter the Pkg prompt by typing `]` and execute the following:

```
pkg> add Torch
```

Or via Julia's package manager Pkg.

```
julia> using Pkg; Pkg.add("Torch");
```

```
using Metalhead, Metalhead.Flux, Torch
using Torch: torch
resnet = ResNet()
```

We can move our object over to Torch via a simple call to `torch`:

```
tresnet = resnet.layers |> torch
```

Or, if we need more control over the device to be used:

```
ip = rand(Float32, 224, 224, 3, 1) # An RGB Image
tip = tensor(ip, dev = 0) # 0 => GPU:0 in Torch
cpu_tensor = tensor(ip, dev = -1) # -1 => CPU:0
```

Calling into the model is done via the usual Flux mechanism.

```
tresnet(tip);
```

We can take gradients using Zygote as well:

```
gs = gradient(x -> sum(tresnet(x)), tip);
# Or
ps = Flux.params(tresnet);
gs = gradient(ps) do
sum(tresnet(tip))
end
```

Please feel free to report any issues you encounter in the issue tracker. Contributions through PRs toward corrections, increased coverage, docs, etc. are also appreciated. Testing currently runs on Linux, but that can be expanded as the need arises.

This package takes a lot of inspiration from existing projects, such as ocaml-torch for generating the wrappers.

Author: FluxML

Source Code: https://github.com/FluxML/Torch.jl

License: View license