NLopt.jl: The NLopt Module for Julia


This module provides a Julia-language interface to the free/open-source NLopt library for nonlinear optimization. NLopt provides a common interface for many different optimization algorithms, including:

  • Both global and local optimization
  • Algorithms using function values only (derivative-free) and also algorithms exploiting user-supplied gradients.
  • Algorithms for unconstrained optimization, bound-constrained optimization, and general nonlinear inequality/equality constraints.

See the NLopt introduction for a further overview of the types of problems it addresses.

NLopt can be used either by accessing its specialized API or by using the generic MathOptInterface or MathProgBase interfaces for nonlinear optimization. Both methods are documented below.

Installation

Within Julia, you can install the NLopt.jl package with the package manager: Pkg.add("NLopt")

On Windows and OS X platforms, NLopt binaries will be automatically installed. On other platforms, Julia will attempt to build NLopt from source; be sure to have a compiler installed.

Using with MathOptInterface

NLopt implements the MathOptInterface interface for nonlinear optimization, which means that it can be used interchangeably with other optimization packages from modeling packages like JuMP or when providing hand-written derivatives. Note that NLopt does not exploit sparsity of Jacobians.

The NLopt solver is named NLopt.Optimizer and takes parameters:

  • algorithm
  • stopval
  • ftol_rel
  • ftol_abs
  • xtol_rel
  • xtol_abs
  • constrtol_abs
  • maxeval
  • maxtime
  • initial_step
  • population
  • seed
  • vector_storage

The algorithm parameter is required, and all others are optional. The meaning and acceptable values of all parameters, except constrtol_abs, match the descriptions below from the specialized NLopt API. The constrtol_abs parameter is an absolute feasibility tolerance applied to all constraints.
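For example, a minimal sketch of setting a few of these options through JuMP (attribute names as listed above; the tolerance and limit values are arbitrary):

using JuMP
using NLopt

model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :LD_MMA) # required
set_optimizer_attribute(model, "xtol_rel", 1e-6)     # optional stopping tolerance
set_optimizer_attribute(model, "maxeval", 10_000)    # optional evaluation limit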

Tutorial

The following example code solves the nonlinearly constrained minimization problem from the NLopt Tutorial:

using NLopt

function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 0
        grad[2] = 0.5/sqrt(x[2])
    end
    return sqrt(x[2])
end

function myconstraint(x::Vector, grad::Vector, a, b)
    if length(grad) > 0
        grad[1] = 3a * (a*x[1] + b)^2
        grad[2] = -1
    end
    (a*x[1] + b)^3 - x[2]
end

opt = Opt(:LD_MMA, 2)
opt.lower_bounds = [-Inf, 0.]
opt.xtol_rel = 1e-4

opt.min_objective = myfunc
inequality_constraint!(opt, (x,g) -> myconstraint(x,g,2,0), 1e-8)
inequality_constraint!(opt, (x,g) -> myconstraint(x,g,-1,1), 1e-8)

(minf,minx,ret) = optimize(opt, [1.234, 5.678])
numevals = opt.numevals # the number of function evaluations
println("got $minf at $minx after $numevals iterations (returned $ret)")

The output should be:

got 0.5443310476200902 at [0.3333333346933468,0.29629628940318486] after 11 iterations (returned XTOL_REACHED)

Much like the NLopt interfaces in other languages, you create an Opt object (analogous to nlopt_opt in C) which encapsulates the dimensionality of your problem (here, 2) and the algorithm to be used (here, LD_MMA) and use various functions to specify the constraints and stopping criteria (along with any other aspects of the problem).

The same problem can be solved by using the JuMP interface to NLopt:

using JuMP
using NLopt

model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :LD_MMA)

a1 = 2
b1 = 0
a2 = -1
b2 = 1

@variable(model, x1)
@variable(model, x2 >= 0)

@NLobjective(model, Min, sqrt(x2))
@NLconstraint(model, x2 >= (a1*x1+b1)^3)
@NLconstraint(model, x2 >= (a2*x1+b2)^3)

set_start_value(x1, 1.234)
set_start_value(x2, 5.678)

JuMP.optimize!(model)

println("got ", objective_value(model), " at ", [value(x1), value(x2)])

The output should be:

got 0.5443310477213124 at [0.3333333342139688,0.29629628951338166]

Note that the MathOptInterface interface sets slightly different convergence tolerances by default (these default values are given by the NLopt.DEFAULT_OPTIONS dictionary), so the outputs from the two problems are not identical.

Some algorithms need a local optimizer. These are set with local_optimizer, e.g.,

model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :AUGLAG)
set_optimizer_attribute(model, "local_optimizer", :LD_LBFGS)

To parametrize the local optimizer, use the NLopt.Opt interface, e.g.,

model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :AUGLAG)
local_optimizer = NLopt.Opt(:LD_LBFGS, num_variables)
local_optimizer.xtol_rel = 1e-4
set_optimizer_attribute(model, "local_optimizer", local_optimizer)

where num_variables is the number of variables of the optimization problem.

Reference

The main purpose of this section is to document the syntax and unique features of the Julia interface; for more detail on the underlying features, please refer to the C documentation in the NLopt Reference.

Using the Julia API

To use NLopt in Julia, your Julia program should include the line:

using NLopt

which imports the NLopt module and its symbols. (Alternatively, you can use import NLopt if you want to keep all the NLopt symbols in their own namespace. You would then prefix all functions below with NLopt., e.g. NLopt.Opt and so on.)

The Opt type

The NLopt API revolves around an object of type Opt. Via functions acting on this object, all of the parameters of the optimization are specified (dimensions, algorithm, stopping criteria, constraints, objective function, etcetera), and then one finally calls the optimize function in order to perform the optimization. The object should normally be created via the constructor:

opt = Opt(algorithm, n)

given an algorithm (see NLopt Algorithms for possible values) and the dimensionality of the problem (n, the number of optimization parameters). Whereas in C the algorithms are specified by nlopt_algorithm constants of the form NLOPT_LD_MMA, NLOPT_LN_COBYLA, etcetera, the Julia algorithm values are symbols of the form :LD_MMA, :LN_COBYLA, etcetera (with the NLOPT_ prefix replaced by : to create a Julia symbol).

There is also a copy(opt::Opt) function to make a copy of a given object (equivalent to nlopt_copy in the C API).

If there is an error in these functions, an exception is thrown.

The algorithm and dimension parameters of the object are immutable (cannot be changed without constructing a new object), but you can query them for a given object by:

ndims(opt)
opt.algorithm

You can get a string description of the algorithm via:

algorithm_name(opt::Opt)

Objective function

The objective function is specified by setting one of the properties:

opt.min_objective = f
opt.max_objective = f

depending on whether one wishes to minimize or maximize the objective function f, respectively. The function f should be of the form:

function f(x::Vector, grad::Vector)
    if length(grad) > 0
        ...set grad to gradient, in-place...
    end
    return ...value of f(x)...
end

The return value should be the value of the function at the point x, where x is a (Float64) array of length n of the optimization parameters (the same as the dimension passed to the Opt constructor).

In addition, if the argument grad is not empty [i.e. length(grad)>0], then grad is a (Float64) array of length n which should (upon return) be set to the gradient of the function with respect to the optimization parameters at x. That is, grad[i] should upon return contain the partial derivative ∂f/∂x_i, for 1 ≤ i ≤ n, if grad is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the grad argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, grad may still be empty for some calls.)

Note that grad must be modified in-place by your function f. Generally, this means using indexing operations grad[...] = ... to overwrite the contents of grad. For example grad = 2x will not work, because it points grad to a new array 2x rather than overwriting the old contents; instead, use an explicit loop or use grad[:] = 2x.
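For example, a minimal sketch for the objective f(x) = Σᵢ xᵢ², whose gradient is 2x (the function name is illustrative):

function sumsq(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[:] = 2x  # in-place: overwrites the contents of grad
        # `grad = 2x` would only rebind the local variable and be ignored
    end
    return sum(abs2, x)
end

opt.min_objective = sumsq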

Bound constraints

The bound constraints can be specified by setting one of the properties:

opt.lower_bounds = lb::Union{AbstractVector,Real}
opt.upper_bounds = ub::Union{AbstractVector,Real}

where lb and ub are real arrays of length n (the same as the dimension passed to the Opt constructor). For convenience, you can instead use a single scalar for lb or ub in order to set the lower/upper bounds for all optimization parameters to a single constant.

To retrieve the values of the lower/upper bounds, you can use the properties

opt.lower_bounds
opt.upper_bounds

both of which return Vector{Float64} arrays.

To specify an unbounded dimension, you can use ±Inf.
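For example, for a two-dimensional problem (values illustrative):

opt.lower_bounds = [-Inf, 0.0] # x[1] unbounded below, x[2] >= 0
opt.upper_bounds = 2.5         # scalar: the same upper bound for all parameters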

Nonlinear constraints

Just as for nonlinear constraints in C, you can specify nonlinear inequality and equality constraints by the functions:

inequality_constraint!(opt::Opt, fc, tol=0)
equality_constraint!(opt::Opt, h, tol=0)

where the arguments fc and h have the same form as the objective function above. The optional tol arguments specify a tolerance (which defaults to zero) in judging feasibility for the purposes of stopping the optimization, as in C. For the default tol=0, you can also use opt.inequality_constraint = fc or opt.equality_constraint = h.

Each call to these functions adds a new constraint to the set of constraints, rather than replacing the constraints. To remove all of the inequality and equality constraints from a given problem, you can call the following function:

remove_constraints!(opt::Opt)

Vector-valued constraints

Just as for nonlinear constraints in C, you can specify vector-valued nonlinear inequality and equality constraints by the functions

inequality_constraint!(opt::Opt, c, tol::AbstractVector)
equality_constraint!(opt::Opt, c, tol::AbstractVector)

Here, tol is an array of the tolerances in each constraint dimension; the dimensionality m of the constraint is determined by length(tol). The constraint function c must be of the form:

function c(result::Vector, x::Vector, grad::Matrix)
    if length(grad) > 0
        ...set grad to gradient, in-place...
    end
    result[1] = ...value of c1(x)...
    result[2] = ...value of c2(x)...
    ...
end

result is a (Float64) array whose length equals the dimensionality m of the constraint (same as the length of tol above), which upon return should be set in-place (see also above) to the constraint results at the point x (a Float64 array whose length n is the same as the dimension passed to the Opt constructor). Any return value of the function is ignored.

In addition, if the argument grad is not empty [i.e. length(grad)>0], then grad is a 2d array of size n×m which should (upon return) be set in-place (see above) to the gradient of the function with respect to the optimization parameters at x. That is, grad[j,i] should upon return contain the partial derivative ∂c_i/∂x_j if grad is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the grad argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, grad may still be empty for some calls.)

An inequality constraint corresponds to c_i ≤ 0 for 1 ≤ i ≤ m, and an equality constraint corresponds to c_i = 0, in both cases with tolerance tol[i] for purposes of termination criteria.

(You can add multiple vector-valued constraints and/or scalar constraints in the same problem.)
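As a concrete sketch, here is a hypothetical vector-valued inequality constraint with m = 2 components, c1(x) = x[1] + x[2] - 1 ≤ 0 and c2(x) = x[1] - x[2] ≤ 0, for a problem with n = 2 (names and tolerances illustrative):

function myvconstraint(result::Vector, x::Vector, grad::Matrix)
    if length(grad) > 0 # grad is n×m, with grad[j,i] = ∂c_i/∂x_j
        grad[1,1] = 1;  grad[2,1] = 1  # gradient of c1
        grad[1,2] = 1;  grad[2,2] = -1 # gradient of c2
    end
    result[1] = x[1] + x[2] - 1
    result[2] = x[1] - x[2]
    return
end

inequality_constraint!(opt, myvconstraint, [1e-8, 1e-8]) # m = 2 = length(tol)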

Stopping criteria

As explained in the C API Reference and the Introduction, you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.)

For each stopping criterion, there is a property of the opt::Opt object that you can use to get/set its value. The meanings of each criterion are exactly the same as in the C API:

opt.stopval

Stop when an objective value of at least stopval is found. (Defaults to -Inf.)

opt.ftol_rel
opt.ftol_abs

Relative or absolute tolerance on function value. (Defaults to 0.)

opt.xtol_rel
opt.xtol_abs

Relative or absolute tolerances on the optimization parameters. (Both default to 0.) In the case of xtol_abs, you can either set it to a scalar (to use the same tolerance for all inputs) or a vector of length n (the dimension specified in the Opt constructor) to use a different tolerance for each parameter.

opt.maxeval

Stop when the number of function evaluations exceeds maxeval. (0 or negative for no limit, which is the default.)

opt.maxtime

Stop when the optimization time (in seconds) exceeds maxtime. (0 or negative for no limit, which is the default.)
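For example, a sketch combining several criteria (values arbitrary; whichever criterion is satisfied first stops the run):

opt.ftol_rel = 1e-8  # relative tolerance on the objective value
opt.xtol_abs = 1e-10 # scalar: the same absolute x tolerance for every parameter
opt.maxeval = 1000   # at most 1000 objective evaluations
opt.maxtime = 60.0   # at most 60 seconds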

Forced termination

In certain cases, the caller may wish to force the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. You can do this by throwing any exception inside your objective/constraint functions: the optimization will be halted gracefully, and the same exception will be thrown to the caller. See below regarding exceptions. The Julia equivalent of nlopt_forced_stop from the C API is to throw a ForcedStop exception.
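For example, a sketch of halting from inside an objective function (the function name and the check are illustrative):

function guarded(x::Vector, grad::Vector)
    if !all(isfinite, x)
        throw(ForcedStop()) # halts the optimization gracefully
    end
    return sqrt(x[2])
end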

Performing the optimization

Once all of the desired optimization parameters have been specified in a given object opt::Opt, you can perform the optimization by calling:

(optf,optx,ret) = optimize(opt::Opt, x::AbstractVector)

On input, x is an array of length n (the dimension of the problem from the Opt constructor) giving an initial guess for the optimization parameters. The return value optx is an array containing the optimized values of the optimization parameters. optf contains the optimized value of the objective function, and ret contains a symbol indicating the NLopt return code (below).

Alternatively,

(optf,optx,ret) = optimize!(opt::Opt, x::Vector{Float64})

is the same but modifies x in-place (as well as returning optx=x).

On failure (negative return codes), optimize() throws an exception (see Exceptions, below).

Return values

The possible return values are the same as the return values in the C API, except that the NLOPT_ prefix is replaced with :. That is, the return values are :SUCCESS, :XTOL_REACHED, etcetera (instead of NLOPT_SUCCESS etcetera).

Exceptions

The error codes in the C API are replaced in the Julia API by thrown exceptions. If your objective/constraint functions throw any exception during the execution of optimize, it will be caught by NLopt, the optimization will be halted gracefully, and optimize will re-throw the same exception to its caller.

Local/subsidiary optimization algorithm

Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances by setting:

opt.local_optimizer = local_opt::Opt

Here, local_opt is another Opt object whose parameters are used to determine the local search algorithm, its stopping criteria, and other algorithm parameters. (However, the objective function, bounds, and nonlinear-constraint parameters of local_opt are ignored.) The dimension n of local_opt must match that of opt.

This makes a copy of the local_opt object, so you can freely change your original local_opt afterwards without affecting opt.
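For example, a sketch pairing AUGLAG with an LBFGS subsidiary optimizer (the tolerance is arbitrary):

opt = Opt(:AUGLAG, 2)
local_opt = Opt(:LD_LBFGS, 2)   # must have the same dimension as opt
local_opt.xtol_rel = 1e-6
opt.local_optimizer = local_opt # stores a copy; local_opt can be modified freely afterwards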

Initial step size

Just as in the C API, you can set the initial step sizes for derivative-free optimization algorithms via the opt.initial_step property:

opt.initial_step = dx

Here, dx is an array of the (nonzero) initial steps for each dimension, or a single number if you wish to use the same initial steps for all dimensions. initial_step(opt::Opt, x::AbstractVector) returns the initial step that will be used for a starting guess of x in optimize(opt,x).
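For example (values illustrative):

opt.initial_step = [0.1, 0.5]          # per-dimension initial steps
dx = initial_step(opt, [1.234, 5.678]) # steps that would be used from this starting guess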

Stochastic population

Just as in the C API, you can get and set the initial population for stochastic optimization algorithms by the property

opt.population

(A population of zero, the default, implies that the heuristic default will be used as decided upon by individual algorithms.)

Pseudorandom numbers

For stochastic optimization algorithms, NLopt uses pseudorandom numbers generated by the Mersenne Twister algorithm, based on code from Makoto Matsumoto. By default, the seed for the random numbers is generated from the system time, so that you will get a different sequence of pseudorandom numbers each time you run your program. If you want to use a "deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run, you can set the seed by calling:

NLopt.srand(seed::Integer)

To reset the seed based on the system time, you can call NLopt.srand_time().

(Normally, you don't need to call srand_time, as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling NLopt.srand to set a deterministic seed.)

Vector storage for limited-memory quasi-Newton algorithms

Just as in the C API, you can get and set the number M of stored vectors for limited-memory quasi-Newton algorithms via the integer-valued property

opt.vector_storage

(The default is 0, in which case NLopt uses a heuristic nonzero value as determined by individual algorithms.)
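For example (the value is arbitrary):

opt.vector_storage = 10 # remember 10 vectors in a limited-memory method such as :LD_LBFGS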

Version number

The version number of NLopt is given by the global variable:

NLOPT_VERSION::VersionNumber

where VersionNumber is a built-in Julia type from the Julia standard library.

Download Details:

Author: JuliaOpt
Source Code: https://github.com/JuliaOpt/NLopt.jl 
License: View license

#julia #math #optimization 


A Julia Module Providing The Definition Of The Circle Constant Tau

Tau.jl

This Julia package defines the Tau constant and related functions.

tau ≈ 2*pi

Usage

After installing this package with Pkg.add("Tau"), it can be used as follows:

julia> using Tau

julia> tau === τ ≈ 2*pi
true

julia> typeof(tau)
Irrational{:twoπ}

Note: to input the τ character, type \tau then press Tab.

The tau variants of sinpi, cospi, and mod2pi are also defined:

julia> sintau(1//4)
1.0

julia> costau(1//2)
-1.0

julia> modtau(9*pi/4)
0.7853981633974481

Alternatively, one can use the Unicode aliases sinτ, cosτ, and modτ.

The tau != 2pi inequality

When this package was first created, the equality tau == 2pi did hold true, in accordance with the mathematical definition of the constant. However, that is not valid anymore -- the two values are only approximately equal: tau ≈ 2*pi.

For a detailed explanation of the reasons for this, see this document.

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/Tau.jl 
License: View license

#julia #math #number 


Tensor Algebra Abstract Type interoperability Setup

AbstractTensors.jl

Tensor algebra abstract type interoperability with vector bundle parameter

The AbstractTensors package is intended for universal interoperability of the abstract TensorAlgebra type system. All TensorAlgebra{V} subtypes have type parameter V, used to store a TensorBundle value obtained from DirectSum.jl.

For example, this is mainly used in Grassmann.jl to define various SubAlgebra, TensorGraded and TensorMixed types, each with subtypes. Externalizing the abstract type helps extend the dispatch to other packages. By itself, this package does not impose any specifications or structure on the TensorAlgebra{V} subtypes and elements, aside from requiring V to be a Manifold. This means that different packages can create tensor types having a common underlying TensorBundle structure.

Additionally, TupleVector is provided as a lightweight alternative to StaticArrays.jl.

If the environment variable STATICJL is set, the StaticArrays package is depended upon.

Interoperability

Since TensorBundle choices are fundamental to TensorAlgebra operations, the universal interoperability between TensorAlgebra{V} elements with different associated TensorBundle choices is naturally realized by applying the union morphism to operations.

function op(::TensorAlgebra{V},::TensorAlgebra{V}) where V
    # well defined operations if V is shared
end # but what if V ≠ W in the input types?

function op(a::TensorAlgebra{V},b::TensorAlgebra{W}) where {V,W}
    VW = V ∪ W        # VectorSpace type union
    op(VW(a),VW(b))   # makes call well-defined
end # this option is automatic with interop(a,b)

# alternatively for evaluation of forms, VW(a)(VW(b))

Some operations like +,-,*,⊗,⊛,⊙,⊠,⨼,⨽,⋆ and postfix operators ⁻¹,ǂ,₊,₋,ˣ for TensorAlgebra elements are shared across different packages; some of the interoperability is taken care of in this package. Additionally, a universal unit volume element can be specified in terms of LinearAlgebra.UniformScaling, which is independent of V and has its interpretation only instantiated by the context of the TensorAlgebra{V} element being operated on.

Utility methods such as scalar, involute, norm, norm2, unit, even, odd are also defined.

Example with a new subtype

Suppose we are dealing with a new subtype in another project, such as

using AbstractTensors, DirectSum
struct SpecialTensor{V} <: TensorAlgebra{V} end
a = SpecialTensor{ℝ}()
b = SpecialTensor{ℝ'}()

To define additional specialized interoperability for further methods, it is necessary to define dispatch that catches well-defined operations for equal TensorBundle choices and a fallback method for interoperability, along with a Manifold morphism:

(W::Signature)(s::SpecialTensor{V}) where V = SpecialTensor{W}() # conversions
op(a::SpecialTensor{V},b::SpecialTensor{V}) where V = a # do some kind of operation
op(a::TensorAlgebra{V},b::TensorAlgebra{W}) where {V,W} = interop(op,a,b) # compat

which should satisfy (using the operation as defined in DirectSum)

julia> op(a,b) |> Manifold == Manifold(a) ∪ Manifold(b)
true

Thus, interoperability is simply a matter of defining one additional fallback method for the operation and also a new form of TensorBundle compatibility morphism.

UniformScaling pseudoscalar

The universal interoperability of LinearAlgebra.UniformScaling as a pseudoscalar element which takes on the TensorBundle form of any other TensorAlgebra element is handled globally by defining the dispatch:

(W::Signature)(s::UniformScaling) = ones(ndims(W)) # interpret a unit pseudoscalar
op(a::TensorAlgebra{V},b::UniformScaling) where V = op(a,V(b)) # right pseudoscalar
op(a::UniformScaling,b::TensorAlgebra{V}) where V = op(V(a),b) # left pseudoscalar

This enables the usage of I from LinearAlgebra as a universal pseudoscalar element.

Tensor evaluation

To support a generalized interface for TensorAlgebra element evaluation, a similar compatibility interface is constructible.

(a::SpecialTensor{V})(b::SpecialTensor{V}) where V = a # conversion of some form
(a::SpecialTensor{W})(b::SpecialTensor{V}) where {V,W} = interform(a,b) # compat

which should satisfy (using the operation as defined in DirectSum)

julia> b(a) |> Manifold == Manifold(a) ∪ Manifold(b)
true

The purpose of the interop and interform methods is to help unify the interoperability of TensorAlgebra elements.

Deployed applications

The key to making the whole interoperability work is that each TensorAlgebra subtype shares a TensorBundle parameter (with all isbitstype parameters), which contains all the info needed at compile time to make decisions about conversions. So other packages need only use the vector space information to decide on how to convert based on the implementation of a type. If external methods are needed, they can be loaded by Requires when making a separate package with TensorAlgebra interoperability.

TupleVector

Statically sized tuple vectors for Julia

TupleVector provides a framework for implementing statically sized tuple vectors in Julia, using the abstract type TupleVector{N,T} <: AbstractVector{T}. Subtypes of TupleVector will provide fast implementations of common array and linear algebra operations. Note that here "statically sized" means that the size can be determined from the type, and "static" does not necessarily imply immutable.

The package also provides some concrete static vector types: Values which may be used as-is (or else embedded in your own type). Mutable versions Variables are also exported, as well as FixedVector for annotating standard Vectors with static size information.

Quick start

Add AbstractTensors from the Pkg REPL, i.e., pkg> add AbstractTensors. Then:

using AbstractTensors

# Create Values using various forms, using constructors, functions or macros
v1 = Values(1, 2, 3)
v1.v === (1, 2, 3) # Values uses a tuple for internal storage
v2 = Values{3,Float64}(1, 2, 3) # length 3, eltype Float64
v5 = zeros(Values{3}) # defaults to Float64
v7 = Values{3}([1, 2, 3]) # Array conversions must specify size

# Can get size() from instance or type
size(v1) == (3,)
size(typeof(v1)) == (3,)

# Supports all the common operations of AbstractVector
v7 = v1 + v2
v8 = sin.(v2)

# Indexing can also be done using static vectors of integers
v1[1] === 1
v1[:] === v1
typeof(v1[[1,2,3]]) <: Vector # Can't determine size from the type of [1,2,3]

Approach

The package provides a range of different useful built-in TupleVector types, which include mutable and immutable vectors based upon tuples, vectors based upon structs, and wrappers of Vector. There is a relatively simple interface for creating your own, custom TupleVector types, too.

This package also provides methods for a wide range of AbstractVector functions, specialized for (potentially immutable) TupleVectors. Many of Julia's built-in method definitions inherently assume mutability, and further performance optimizations may be made when the size of the vector is known to the compiler. One example of this is by loop unrolling, which has a substantial effect on small arrays and tends to automatically trigger LLVM's SIMD optimizations. In combination with intelligent fallbacks to the methods in Base, we seek to provide a comprehensive support for statically sized vectors, large or small, that hopefully "just works".

TupleVector is directly inspired from StaticArrays.jl.

Download Details:

Author: Chakravala
Source Code: https://github.com/chakravala/AbstractTensors.jl 
License: MIT license

#julia #math #algebra 


Dendriform.jl: Dendriform Di-algebra Algorithms to Compute

Dendriform.jl

Dendriform dialgebra algorithms to compute using Loday's arithmetic on groves of planar binary trees

Setup

Installation of latest release version using Julia:

julia> Pkg.add("Dendriform")

Provides the types PBTree for planar binary trees, Grove for tree collections of constant degree, and GroveBin to compress grove data. This package defines various essential operations on planar binary trees and groves: ∪ for union; ∨ for graft; left and right branching; <, > and related comparisons for Tamari's partial ordering; a between operation; / and \ (i.e. over and under); and the dashv (⊣) and vdash (⊢) operations, together with + and *, for the dendriform algebra.

View the documentation stable / latest for more features and examples.

Background

We represent the name of a tree as a vector, where the sequence is made up of n integers. Collections of planar binary trees are encoded into an equivalence class of matrices, where A ~ B if there exists a permutation f in S_k satisfying the equivalence condition. The binary tree grafting operation, and the left and right addition, are computed on recursive principles; the defining formulas appear as figures in the repository README.

Together these non-commutative binary operations satisfy the properties of an associative dendriform dialgebra. The structures induced by Loday's definition of the sum have the partial ordering of the associahedron known as Tamari lattice.

  • Figure (tamari-grove-commutativity.png): Tamari associahedron, colored to visualize noncommutative sums of [1,2] and [2,1], code: gist

However, in this computational package, a stricter total ordering is constructed using a function that transforms the set-vector isomorphism obtained from the descending greatest integer index position search method. The structure obtained from this total ordering is used to construct a reliable binary groveindex representation that encodes the essential data of any grove (the formula appears as a figure in the repository README).

These algorithms are used in order to facilitate computations that provide insight into the Loday arithmetic.

Usage

Basic usage examples:

julia> using Dendriform

julia> Grove(3,7) ⊣ [1,2]∪[2,1]
[1,2,5,1,2]
[1,2,5,2,1]
[2,1,5,1,2]
[2,1,5,2,1]
[1,5,3,1,2]
[1,5,2,1,3]
[1,5,1,2,3]
[1,5,3,2,1]
[1,5,1,3,1]
Y5 #9/42

julia> Grove(2,3) * ([1,2,3]∪[3,2,1]) |> GroveBin
2981131286847743360614880957207748817969 Y6 #30/132 [54.75%]

julia> [2,1,7,4,1,3,1] < [2,1,7,4,3,2,1]
true


Download Details:

Author: Chakravala
Source Code: https://github.com/chakravala/Dendriform.jl 
License: GPL-3.0 license

#julia #math #graph #mathematics 


FastPow.jl: Optimal Addition-chain Exponentiation for Julia

FastPow

This package provides a macro @fastpow that can speed up the computation of integer powers in any Julia expression by transforming them into optimal sequences of multiplications, with a slight sacrifice in accuracy compared to Julia's built-in x^n function. It also optimizes powers of the form 1^p, (-1)^p, 2^p, and 10^p.

In particular, it uses optimal addition-chain exponentiation for (literal) integer powers up to 255, and for larger powers uses repeated squaring to first reduce the power to ≤ 255 and then use addition chains.

For example, @fastpow z^25 requires 6 multiplications, and for z = 0.73 it gives the correct answer to a relative error of ≈ 1.877e-15 (about 8 ulps), vs. the default z^25 which gives the correct answer to a relative error of ≈ 6.03e-16 (about 3 ulps) but is about 10× slower.
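For intuition, here is a minimal sketch (not the code that @fastpow actually emits) of how the addition chain 1, 2, 4, 8, 16, 24, 25 reaches z^25 in six multiplications:

z2 = z * z     # multiplication 1
z4 = z2 * z2   # 2
z8 = z4 * z4   # 3
z16 = z8 * z8  # 4
z24 = z16 * z8 # 5 (16 + 8 = 24)
z25 = z24 * z  # 6 (24 + 1 = 25)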

Note that you can apply the @fastpow macro to a whole block of Julia code at once. For example,

@fastpow function foo(x,y)
    z = sin(x)^3 + sqrt(y)
    return z^7 - 4x^5.3 + 3y^12
end

applies the @fastpow transformation to every literal integer exponent (^3, ^7, and ^12) in the function foo.

An alternative to @fastpow is to use Julia's built-in @fastmath macro, which enables various LLVM optimizations including, in some cases, faster integer powers using repeated multiplication. The advantages of @fastpow are that it guarantees optimal addition-chain exponentiation and that it works for exponentiating any Julia type (e.g. complex numbers, matrices, …), whereas LLVM will only optimize a small set of hardware numeric types.

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/FastPow.jl 
License: MIT license

#julia #math 


MDCT.jl: MDCT Module for Julia

MDCT module for Julia

This module computes the modified discrete cosine transform (MDCT) in the Julia language and the inverse transform (IMDCT), using the fast type-IV discrete cosine transform (DCT-IV) functions in the FFTW.jl package.

Definitions of the MDCT and IMDCT can be found, for example in the Wikipedia MDCT article. The MDCT is a linear transformation that takes 2N inputs and produces N outputs, which is designed to be applied to a sequence of 50%-overlapping blocks of a longer sequence (e.g. audio samples). Because this is non-square (fewer outputs than inputs), the IMDCT is not an "inverse" transformation in the usual sense; it only recovers the original data when IMDCTs of overlapping blocks are added (by "time-domain aliasing cancellation").

Installation

Within Julia, just use the package manager to run Pkg.add("MDCT") to install the files.

Usage

To use the MDCT functions, simply do

using MDCT
Y = mdct(X)
Z = imdct(Y)

where X is any numeric AbstractVector (1d array). Currently, the length of X must be a multiple of 4.

For example, suppose we make a random vector X of length 1000 and consider 50%-overlapping blocks of length 100 (X[1:100], X[51:150], X[101:200], and so on). If we perform the MDCT of two such blocks, then the IMDCT, and then add the overlapping halves of the IMDCT outputs, we recover that portion of the original data:

using MDCT, LinearAlgebra # LinearAlgebra provides norm

X = rand(1000)
Y1 = mdct(X[1:100])
Y2 = mdct(X[51:150])
Z1 = imdct(Y1)
Z2 = imdct(Y2)
norm(Z1[51:100] + Z2[1:50] - X[51:100])

where the last line computes the difference between the overlapped IMDCT sum and the original data, and should be around 10⁻¹⁵ (floating-point roundoff error).

Planning

To create pre-planned transforms for a given size of input vector, do:

using MDCT
Mp = plan_mdct(X)
Ip = plan_imdct(Y)

where Ip and Mp are pre-planned transforms optimized to allow much quicker subsequent transformations. To use them simply:

Xt = Mp*X
Yt = Ip*Y

Download Details:

Author: stevengj
Source Code: https://github.com/stevengj/MDCT.jl 
License: MIT license

#julia #math #fourier 


Roots.jl: Root Finding Functions for Julia

Root finding functions for Julia

This package contains simple routines for finding roots of continuous scalar functions of a single real variable. The find_zero function provides the primary interface. It supports various algorithms through the specification of a method. These include:

Bisection-like algorithms. For functions where a bracketing interval is known (one where f(a) and f(b) have alternate signs), the Bisection method can be specified. For most floating point number types, bisection occurs in a manner exploiting floating point storage conventions. For others, an algorithm of Alefeld, Potra, and Shi is used. These methods are guaranteed to converge.

Several derivative-free methods are implemented. These are specified through the methods Order0, Order1 (the secant method), Order2 (the Steffensen method), Order5, Order8, and Order16. The number indicates, roughly, the order of convergence. The Order0 method is the default, and the most robust, but may take many more function calls to converge. The higher order methods promise higher order (faster) convergence, though don't always yield results with fewer function calls than Order1 or Order2. The methods Roots.Order1B and Roots.Order2B are superlinear and quadratically converging methods independent of the multiplicity of the zero.

There are historic methods that require a derivative or two: Roots.Newton and Roots.Halley. Roots.Schroder provides a quadratic method, like Newton's method, which is independent of the multiplicity of the zero.

There are several non-exported methods, such as, Roots.Brent(), FalsePosition, Roots.A42, Roots.AlefeldPotraShi, Roots.LithBoonkkampIJzermanBracket, and Roots.LithBoonkkampIJzerman.

Each method's documentation has additional detail.

Some examples:

julia> using Roots

julia> f(x) = exp(x) - x^4;

julia> α₀,α₁,α₂ = -0.8155534188089607, 1.4296118247255556, 8.6131694564414;

julia> find_zero(f, (8,9), Bisection()) ≈ α₂ # a bisection method has the bracket specified
true

julia> find_zero(f, (-10, 0)) ≈ α₀ # Bisection is default if x in `find_zero(f,x)` is not a number
true


julia> find_zero(f, (-10, 0), Roots.A42()) ≈ α₀ # fewer function evaluations
true

For non-bracketing methods, the initial position is passed in as a scalar:

julia> find_zero(f, 3) ≈ α₁  # find_zero(f, x0::Number) will use Order0()
true

julia> find_zero(f, 3, Order1()) ≈ α₁ # same answer, different method (secant)
true

julia> find_zero(sin, BigFloat(3.0), Order16()) ≈ π
true

The find_zero function can be used with callable objects:

julia> using Polynomials;

julia> x = variable();

julia> find_zero(x^5 - x - 1, 1.0) ≈ 1.1673039782614187
true

The function should respect the units of the Unitful package:

julia> using Unitful

julia> s, m  = u"s", u"m";

julia> g, v₀, y₀ = 9.8*m/s^2, 10m/s, 16m;


julia> y(t) = -g*t^2 + v₀*t + y₀
y (generic function with 1 method)

julia> find_zero(y, 1s)  ≈ 1.886053370668014s
true

Newton's method can be used without taking derivatives by hand. The following uses the ForwardDiff package:

julia> using ForwardDiff

julia> D(f) = x -> ForwardDiff.derivative(f,float(x))
D (generic function with 1 method)

Now we have:

julia> f(x) = x^3 - 2x - 5
f (generic function with 1 method)

julia> x0 = 2
2

julia> find_zero((f,D(f)), x0, Roots.Newton()) ≈ 2.0945514815423265
true

Automatic derivatives allow for easy solutions to finding critical points of a function.

julia> using Statistics: mean, median

julia> as = rand(5);

julia> M(x) = sum([(x-a)^2 for a in as])
M (generic function with 1 method)

julia> find_zero(D(M), .5) ≈ mean(as)
true

julia> med(x) = sum([abs(x-a) for a in as])
med (generic function with 1 method)

julia> find_zero(D(med), (0, 1)) ≈ median(as)
true

The CommonSolve interface

The DifferentialEquations interface of setting up a problem; initializing the problem; then solving the problem is also implemented using the methods ZeroProblem, init, solve! and solve.

For example, we can solve a problem with many different methods, as follows:

julia> f(x) = exp(-x) - x^3
f (generic function with 1 method)

julia> x0 = 2.0
2.0

julia> fx = ZeroProblem(f,x0)
ZeroProblem{typeof(f), Float64}(f, 2.0)

julia> solve(fx) ≈ 0.7728829591492101
true

With no default, and a single initial point specified, the default Order1 method is used. The solve method allows other root-solving methods to be passed, along with other options. For example, to use the Order2 method with a convergence criterion (see below) that |xₙ - xₙ₋₁| ≤ δ, we could make this call:

julia> solve(fx, Order2(), atol=0.0, rtol=0.0) ≈ 0.7728829591492101
true

Unlike find_zero, which errors on non-convergence, solve returns NaN on non-convergence.

This next example has a zero at 0.0, but for most initial values will escape towards ±∞, sometimes causing a relative tolerance to return a misleading value. Here we can see the differences:

julia> f(x) = cbrt(x)*exp(-x^2)
f (generic function with 1 method)

julia> x0 = 0.1147
0.1147

julia> find_zero(f, 1.0, Roots.Order1()) # stopped as |f(xₙ)| ≤ |xₙ|ϵ
5.53043654482315

julia> find_zero(f, 1.0, Roots.Order1(), atol=0.0, rtol=0.0) # error as no check on `|f(xn)|`
ERROR: Roots.ConvergenceFailed("Algorithm failed to converge")
[...]

julia> fx = ZeroProblem(f, x0);

julia> solve(fx, Roots.Order1(), atol=0.0, rtol=0.0) # NaN, not an error
NaN

julia> fx = ZeroProblem((f, D(f)), x0); # higher order methods can identify zero of this function

julia> solve(fx, Roots.LithBoonkkampIJzerman(2,1), atol=0.0, rtol=0.0)
0.0

Functions may be parameterized, as illustrated:

julia> f(x, p=2) = cos(x) - x/p
f (generic function with 2 methods)

julia> Z = ZeroProblem(f, pi/4)
ZeroProblem{typeof(f), Float64}(f, 0.7853981633974483)

julia> solve(Z, Order1()) ≈ 1.0298665293222586     # use p=2 default
true

julia> solve(Z, Order1(), p=3) ≈ 1.170120950002626 # use p=3
true

Multiple zeros

The find_zeros function can be used to search for all zeros in a specified interval. The basic algorithm essentially splits the interval into many subintervals. For each, if there is a bracket, a bracketing algorithm is used to identify a zero, otherwise a derivative free method is used to search for zeros. This algorithm can miss zeros for various reasons, so the results should be confirmed by other means.

julia> f(x) = exp(x) - x^4
f (generic function with 2 methods)

julia> find_zeros(f, -10,10) ≈ [α₀,α₁,α₂] # from above
true

The interval can also be specified using a structure with extrema defined, where extrema returns two different values:

julia> using IntervalSets

julia> find_zeros(f, -10..10) ≈ [α₀,α₁,α₂]
true

(For tougher problems, the IntervalRootFinding package gives guaranteed results, rather than the heuristically identified values returned by find_zeros.)

Convergence

For most algorithms, convergence is decided when

  • the value |f(x_n)| <= tol with tol = max(atol, abs(x_n)*rtol), or
  • the values x_n ≈ x_{n-1} with tolerances xatol and xrtol and f(x_n) ≈ 0 with a relaxed tolerance based on atol and rtol.

The algorithm stops if

  • it encounters an NaN or an Inf, or
  • the number of iterations exceeds maxevals.

If the algorithm stops and the relaxed convergence criteria are met, the suspected zero is returned. Otherwise an error is thrown indicating no convergence. To adjust the tolerances, find_zero accepts keyword arguments atol, rtol, xatol, and xrtol, as seen in some examples above.

The Bisection and Roots.A42 methods are guaranteed to converge even if the tolerances are set to zero, so these are the defaults. Non-zero values for xatol and xrtol can be specified to reduce the number of function calls when lower precision is required.

julia> fx = ZeroProblem(sin, (3,4));

julia> solve(fx, Bisection(); xatol=1/16)
3.125

An alternate interface

An alternative interface, familiar to MATLAB users, is provided by the fzero function:

  • fzero(f, x0::Real; order=0) calls a derivative-free method, with order specifying one of Order0, Order1, etc.
  • fzero(f, a::Real, b::Real) calls the find_zero algorithm with the Bisection method.
  • fzeros(f, a::Real, b::Real) will call find_zeros.

Usage examples

julia> f(x) = exp(x) - x^4
f (generic function with 2 methods)

julia> fzero(f, 8, 9) ≈ α₂   # bracketing
true

julia> fzero(f, -10, 0) ≈ α₀
true

julia> fzeros(f, -10, 10) ≈ [α₀, α₁, α₂]
true

julia> fzero(f, 3) ≈ α₁      # default is Order0()
true

julia> fzero(sin, big(3), order=16)  ≈ π # uses higher order method
true

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/Roots.jl 
License: MIT license

#julia #math #roots 


Leibniz.jl: Tensor Algebra Utility Library

Leibniz.jl

Bit entanglements for tensor algebra derivations and hypergraphs

Although intended for compatibility use with the Grassmann.jl package for multivariable differential operators and tensor field operations, Leibniz can be used independently.

Extended dual index printing with the full set of 62 alphanumeric characters

To help provide a commonly shared and readable indexing to the user, some print methods are provided:

julia> Leibniz.printindices(stdout,Leibniz.indices(UInt(2^62-1)),false,"v")
v₁₂₃₄₅₆₇₈₉₀abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

julia> Leibniz.printindices(stdout,Leibniz.indices(UInt(2^62-1)),false,"w")
w¹²³⁴⁵⁶⁷⁸⁹⁰ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz

An application of this is in Grassmann and DirectSum, where dual indexing is used.

Derivation

Generates the tensor algebra of multivariable symmetric Leibniz differentials and interfaces with Reduce and Grassmann to provide the ∇, Δ vector field operators, enabling mixed-symmetry tensors with arbitrary multivariate Grassmann manifolds.

julia> using Leibniz, Grassmann

julia> V = tangent(ℝ^3,4,3)
⟨+++⟩

julia> V(∇)
∂₁v₁ + ∂₂v₂ + ∂₃v₃

julia> V(∇^2)
0 + 1∂₁∂₁ + 1∂₂∂₂ + 1∂₃∂₃

julia> V(∇^3)
0 + 1∂₁∂₁∂₁v₁ + 1∂₂∂₂∂₂v₂ + 1∂₃∂₃∂₃v₃ + 1∂₂∂₁₂v₁ + 1∂₃∂₁₃v₁ + 1∂₁∂₁₂v₂ + 1∂₃∂₂₃v₂ + 1∂₁∂₁₃v₃ + 1∂₂∂₂₃v₃

julia> V(∇^4)
0.0 + 1∂₁∂₁∂₁∂₁ + 1∂₂∂₂∂₂∂₂ + 1∂₃∂₃∂₃∂₃ + 2∂₁₂∂₁₂ + 2∂₁₃∂₁₃ + 2∂₂₃∂₂₃

julia> ∇^2 == Δ
true

julia> ∇, Δ
(∂ₖvₖ, ∂ₖ²v)

This is an initial undocumented pre-release registration for testing with other packages.

Download Details:

Author: Chakravala
Source Code: https://github.com/chakravala/Leibniz.jl 
License: AGPL-3.0 license

#julia #math #derive 


SymEngine.jl: Julia Wrappers Of SymEngine

SymEngine.jl

Julia Wrappers for SymEngine, a fast symbolic manipulation library, written in C++.

Installation

You can install SymEngine.jl by giving the following command.

julia> Pkg.add("SymEngine")

Quick Start

Working with scalar variables

Defining variables

One can define variables in a few ways. The following three examples are equivalent.

Defining two symbolic variables with the names a and b, and assigning them to Julia variables with the same name.

julia> a=symbols(:a); b=symbols(:b)
b

julia> a,b = symbols("a b")
(a, b)

julia> @vars a b
(a, b)

Simple expressions

We are going to define an expression using the variables from earlier:

julia> ex1 = a + 2(b+2)^2 + 2a + 3(a+1)
3*a + 3*(1 + a) + 2*(2 + b)^2

One can see that values are grouped, but no expansion is done.

Working with vector and matrix variables

Defining vectors of variables

A vector of variables can be defined using list comprehension and string interpolation.

julia> [symbols("α_$i") for i in 1:3]
3-element Array{SymEngine.Basic,1}:
 α_1
 α_2
 α_3

Defining matrices of variables

Sometimes one might want to define a matrix of variables. One can use a matrix comprehension and string interpolation to create a matrix of variables.

julia> W = [symbols("W_$i$j") for i in 1:3, j in 1:4]
3×4 Array{Basic,2}:
 W_11  W_12  W_13  W_14
 W_21  W_22  W_23  W_24
 W_31  W_32  W_33  W_34

Matrix-vector multiplication

Now using the matrix we can perform matrix operations:

julia> W*[1.0, 2.0, 3.0, 4.0]
3-element Array{Basic,1}:
 1.0*W_11 + 2.0*W_12 + 3.0*W_13 + 4.0*W_14
 1.0*W_21 + 2.0*W_22 + 3.0*W_23 + 4.0*W_24
 1.0*W_31 + 2.0*W_32 + 3.0*W_33 + 4.0*W_34

Operations

expand

julia> expand(a + 2(b+2)^2 + 2a + 3(a+1))
11 + 6*a + 8*b + 2*b^2

subs

Performs substitution.

julia> subs(a^2+(b-2)^2, b=>a)
a^2 + (-2 + a)^2

julia> subs(a^2+(b-2)^2, b=>2)
a^2

julia> subs(a^2+(b-2)^2, a=>2)
4 + (-2 + b)^2

julia> subs(a^2+(b-2)^2, a^2=>2)
2 + (-2 + b)^2

julia> subs(a^2+(b-2)^2, a=>2, b=>3)
5

diff

Performs differentiation.

julia> diff(a + 2(b+2)^2 + 2a + 3(a+1), b)
4*(2 + b)

Download Details:

Author: Symengine
Source Code: https://github.com/symengine/SymEngine.jl 
License: MIT license

#julia #math #computer #algebra 


Polynomials.jl: Polynomial manipulations in Julia

Polynomials.jl

Basic arithmetic, integration, differentiation, evaluation, and root finding over dense univariate polynomials.

Installation

(v1.6) pkg> add Polynomials

This package supports Julia v1.6 and later.

Available Types of Polynomials

  • Polynomial –⁠ standard basis polynomials, $a(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n$ for $n ≥ 0$.
  • ImmutablePolynomial –⁠ standard basis polynomials backed by a Tuple type for faster evaluation of values
  • SparsePolynomial –⁠ standard basis polynomial backed by a dictionary to hold sparse high-degree polynomials
  • LaurentPolynomial –⁠ Laurent polynomials, $a(x) = a_m x^m + … + a_n x^n$ for $m ≤ n$ and $m,n ∈ ℤ$. This is backed by an offset array; for example, if $m<0$ and $n>0$, we obtain $a(x) = a_m x^m + … + a_{-1} x^{-1} + a_0 + a_1 x + … + a_n x^n$
  • FactoredPolynomial –⁠ standard basis polynomials, storing the roots, with multiplicity, and leading coefficient of a polynomial
  • ChebyshevT –⁠ Chebyshev polynomials of the first kind
  • RationalFunction - a type for ratios of polynomials.

Usage

julia> using Polynomials

Construction and Evaluation

Construct a polynomial from an array (a vector) of its coefficients, lowest order first.

julia> Polynomial([1,0,3,4])
Polynomial(1 + 3*x^2 + 4*x^3)

Optionally, the variable of the polynomial can be specified.

julia> Polynomial([1,2,3], :s)
Polynomial(1 + 2*s + 3*s^2)

Construct a polynomial from its roots.

julia> fromroots([1,2,3]) # (x-1)*(x-2)*(x-3)
Polynomial(-6 + 11*x - 6*x^2 + x^3)

Evaluate the polynomial p at x.

julia> p = Polynomial([1, 0, -1]);
julia> p(0.1)
0.99

Arithmetic

Methods are added to the usual arithmetic operators so that they work on polynomials, and combinations of polynomials and scalars.

julia> p = Polynomial([1,2])
Polynomial(1 + 2*x)

julia> q = Polynomial([1, 0, -1])
Polynomial(1 - x^2)

julia> 2p
Polynomial(2 + 4*x)

julia> 2+p
Polynomial(3 + 2*x)

julia> p - q
Polynomial(2*x + x^2)

julia> p * q
Polynomial(1 + 2*x - x^2 - 2*x^3)

julia> q / 2
Polynomial(0.5 - 0.5*x^2)

julia> q ÷ p # `div`, also `rem` and `divrem`
Polynomial(0.25 - 0.5*x)

Most operations involving polynomials with different variables will error.

julia> p = Polynomial([1, 2, 3], :x);
julia> q = Polynomial([1, 2, 3], :s);
julia> p + q
ERROR: ArgumentError: Polynomials have different indeterminates

Mutable Arithmetics

While polynomials of type Polynomial are mutable objects, operations such as +, -, * always create new polynomials without modifying their arguments. The time needed for these allocations and copies of the polynomial coefficients may be noticeable in some use cases. This is amplified when the coefficients are for instance BigInt or BigFloat, which are mutable themselves. This can be avoided by modifying existing polynomials to contain the result of the operation using the MutableArithmetics (MA) API.

Consider for instance the following arrays of polynomials

using Polynomials
d, m, n = 30, 20, 20
p(d) = Polynomial(big.(1:d))
A = [p(d) for i in 1:m, j in 1:n]
b = [p(d) for i in 1:n]

In this case, the arrays are mutable objects for which the elements are mutable polynomials which have mutable coefficients (BigInts). These three nested levels of mutable objects communicate with the MA API in order to reduce allocation. Calling A * b requires approximately 40 MiB due to 2 M allocations as it does not exploit any mutability.

Using

using PolynomialsMutableArithmetics

to register Polynomials with MutableArithmetics, then multiplying with:

using MutableArithmetics
const MA = MutableArithmetics
MA.operate(*, A, b)

exploits the mutability and hence only allocates approximately 70 KiB due to 4 k allocations.

If the resulting vector is already allocated, e.g.,

z(d) = Polynomial([zero(BigInt) for i in 1:d])
c = [z(2d - 1) for i in 1:m]

then we can exploit its mutability with

MA.operate!(MA.add_mul, c, A, b)

to reduce the allocation down to 48 bytes due to 3 allocations.

These remaining allocations are due to the BigInt buffer used to store the result of intermediate multiplications. This buffer can be preallocated with:

buffer = MA.buffer_for(MA.add_mul, typeof(c), typeof(A), typeof(b))
MA.buffered_operate!(buffer, MA.add_mul, c, A, b)

then the second line is allocation-free.

The MA.@rewrite macro rewrites an expression into equivalent code that exploits the mutability of the intermediate results. For instance

MA.@rewrite(A1 * b1 + A2 * b2)

is rewritten into

c = MA.operate!(MA.add_mul, MA.Zero(), A1, b1)
MA.operate!(MA.add_mul, c, A2, b2)

which is equivalent to

c = MA.operate(*, A1, b1)
MA.mutable_operate!(MA.add_mul, c, A2, b2)

Note that currently, only the Polynomial type implements the API and it only implements part of it.

Integrals and Derivatives

Integrate the polynomial p term by term, optionally adding a constant term k. The degree of the resulting polynomial is one higher than the degree of p (for a nonzero polynomial).

julia> integrate(Polynomial([1, 0, -1]))
Polynomial(1.0*x - 0.3333333333333333*x^3)

julia> integrate(Polynomial([1, 0, -1]), 2)
Polynomial(2.0 + 1.0*x - 0.3333333333333333*x^3)

Differentiate the polynomial p term by term. For non-zero polynomials the degree of the resulting polynomial is one lower than the degree of p.

julia> derivative(Polynomial([1, 3, -1]))
Polynomial(3 - 2*x)

Root-finding

Return the roots (zeros) of p, with multiplicity. The number of roots returned is equal to the degree of p. By design, this is not type-stable; the returned roots may be real or complex.

julia> roots(Polynomial([1, 0, -1]))
2-element Vector{Float64}:
 -1.0
  1.0

julia> roots(Polynomial([1, 0, 1]))
2-element Vector{ComplexF64}:
 0.0 - 1.0im
 0.0 + 1.0im

julia> roots(Polynomial([0, 0, 1]))
2-element Vector{Float64}:
 0.0
 0.0

Fitting arbitrary data

Fit a polynomial (of degree deg or less) to x and y using a least-squares approximation.

julia> xs = 0:4; ys = @. exp(-xs) + sin(xs);

julia> fit(xs, ys) |> p -> round.(coeffs(p), digits=4) |> Polynomial
Polynomial(1.0 + 0.0593*x + 0.3959*x^2 - 0.2846*x^3 + 0.0387*x^4)

julia> fit(ChebyshevT, xs, ys, 2) |> p -> round.(coeffs(p), digits=4) |> ChebyshevT
ChebyshevT(0.5413⋅T_0(x) - 0.8991⋅T_1(x) - 0.4238⋅T_2(x))


Other methods

Polynomial objects also have other methods:

For standard basis polynomials, 0-based indexing is used to extract the coefficients of [a0, a1, a2, ...]; for mutable polynomials, coefficients may be changed using indexing notation.
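For instance, a small sketch of coefficient indexing, following the description above:

p = Polynomial([1, 2, 3]) # 1 + 2x + 3x^2
p[0] == 1                 # 0-based: constant coefficient
p[2] == 3                 # leading coefficient
p[1] = 5                  # mutable: p is now 1 + 5x + 3x^2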

coeffs: returns the coefficients

degree: returns the polynomial degree; length is the number of stored coefficients

variable: returns the polynomial symbol as a polynomial in the underlying type

LinearAlgebra.norm: find the p-norm of a polynomial

conj: finds the conjugate of a polynomial over a complex field

truncate: sets to 0 all small terms in a polynomial

chop: chops off any small leading values that may arise due to floating point operations.

gcd: greatest common divisor of two polynomials.

Pade: Return the Pade approximant of order m/n for a polynomial as a Pade object.

Related Packages

StaticUnivariatePolynomials.jl Fixed-size univariate polynomials backed by a Tuple

MultiPoly.jl for sparse multivariate polynomials

DynamicPolynomials.jl Multivariate polynomials implementation of commutative and non-commutative variables

MultivariatePolynomials.jl for multivariate polynomials and moments of commutative or non-commutative variables

PolynomialRings.jl A library for arithmetic and algebra with multi-variable polynomials.

AbstractAlgebra.jl, Nemo.jl for generic polynomial rings, matrix spaces, fraction fields, residue rings, power series, Hecke.jl for algebraic number theory.

LaurentPolynomials.jl A package for Laurent polynomials.

CommutativeAlgebra.jl the start of a computer algebra system specialized to discrete calculations with support for polynomials.

PolynomialRoots.jl for a fast complex polynomial root finder. For larger degree problems, also FastPolynomialRoots and AMRVW. For real roots only RealPolynomialRoots.

SpecialPolynomials.jl A package providing various polynomial types beyond the standard basis polynomials in Polynomials.jl. Includes interpolating polynomials, Bernstein polynomials, and classical orthogonal polynomials.

ClassicalOrthogonalPolynomials.jl A Julia package for classical orthogonal polynomials and expansions. Includes chebyshevt, chebyshevu, legendrep, jacobip, ultrasphericalc, hermiteh, and laguerrel. The same repository includes FastGaussQuadrature.jl, FastTransforms.jl, and the ApproxFun packages.

Legacy code

As of v0.7, the internals of this package were greatly generalized and new types and method names were introduced. For compatibility purposes, legacy code can be run after issuing using Polynomials.PolyCompat.

Contributing

If you are interested in contributing, feel free to open an issue or pull request to get started.

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/Polynomials.jl 
License: View license

#julia #math 


Surge: A Swift Library That Uses The Accelerate Framework

Surge

Surge is a Swift library that uses the Accelerate framework to provide high-performance functions for matrix math, digital signal processing, and image manipulation.

Accelerate exposes SIMD instructions available in modern CPUs to significantly improve performance of certain calculations. Because of its relative obscurity and inconvenient APIs, Accelerate is not commonly used by developers, which is a shame, since many applications could benefit from these performance optimizations.

Surge aims to bring Accelerate to the mainstream, making it as easy (and nearly as fast, in most cases) to perform computation over a set of numbers as for a single member.

Though, keep in mind: Accelerate is not a silver bullet. Under certain conditions, such as performing simple calculations over a small data set, Accelerate can be out-performed by conventional algorithms. Always benchmark to determine the performance characteristics of each potential approach.


Curious about the name Surge? (And Jounce?) Back in the mid 90's, Apple, IBM, and Motorola teamed up to create AltiVec (a.k.a the Velocity Engine), which provided a SIMD instruction set for the PowerPC architecture. When Apple made the switch to Intel CPUs, AltiVec was ported to the x86 architecture and rechristened Accelerate. The derivative of Accelerate (and second derivative of Velocity) is known as either jerk, jolt, surge, or lurch; if you take the derivative of surge, you get the jounce --- hence the name of this library and its parent organization.


Installation

The infrastructure and best practices for distributing Swift libraries are currently in flux during this beta period of Swift & Xcode. In the meantime, you can add Surge as a git submodule, drag the Surge.xcodeproj file into your Xcode project, and add Surge.framework as a dependency for your target.

Surge uses Swift 5. This means that your code has to be written in Swift 5 due to current binary compatibility limitations.

Swift Package Manager

To use the Swift Package Manager, add Surge to your Package.swift file:

let package = Package(
    name: "myproject",
    dependencies: [
        .package(url: "https://github.com/Jounce/Surge.git", .upToNextMajor(from: "2.3.2")),
    ],
    targets: [
        .target(
            name: "myproject",
            dependencies: ["Surge"]),
    ]
)

Then run swift build.

CocoaPods

To use CocoaPods, add Surge to your Podfile:

source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '10.0'
use_frameworks!

target '<Your Target Name>' do
    pod 'Surge', '~> 2.3.2'
end

Then run pod install.

Carthage

To use Carthage, add Surge to your Cartfile:

github "Jounce/Surge" ~> 2.3.2

Then run carthage update and use the framework in Carthage/Build/<platform>.


Usage

Computing Sum of [Double]

import Surge

let n = [1.0, 2.0, 3.0, 4.0, 5.0]
let sum = Surge.sum(n) // 15.0

Computing Product of Two [Double]s

import Surge

let a = [1.0, 3.0, 5.0, 7.0]
let b = [2.0, 4.0, 6.0, 8.0]

let product = Surge.elmul(a, b) // [2.0, 12.0, 30.0, 56.0]

General Arithmetic Operations

Addition

Addition functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Array, Array)   | add      | .+ (infix) | .+= (infix)
(Array, Scalar)  | add      | + (infix)  | += (infix)
(Matrix, Matrix) | add      | + (infix)  | += (infix)
(Matrix, Scalar) | n/a      | n/a        | n/a
(Vector, Vector) | add      | + (infix)  | += (infix)
(Vector, Scalar) | add      | + (infix)  | += (infix)
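For instance, the (Array, Array) row above can be exercised either through the named function or the infix operator. A small sketch based on the table (results assume Surge 2.x):

import Surge

let a = [1.0, 2.0, 3.0]
let b = [10.0, 20.0, 30.0]

let viaFunction = Surge.add(a, b)  // [11.0, 22.0, 33.0]
let viaOperator = a .+ b           // same result, operator form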

Subtraction

Subtraction functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Array, Array)   | sub      | .- (infix) | .-= (infix)
(Array, Scalar)  | sub      | - (infix)  | -= (infix)
(Matrix, Matrix) | sub      | - (infix)  | -= (infix)
(Matrix, Scalar) | n/a      | n/a        | n/a
(Vector, Vector) | sub      | - (infix)  | -= (infix)
(Vector, Scalar) | sub      | - (infix)  | -= (infix)

Multiplication

Multiplication functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Array, Array)   | mul      | .* (infix) | .*= (infix)
(Array, Scalar)  | mul      | * (infix)  | *= (infix)
(Matrix, Matrix) | mul      | * (infix)  | n/a
(Matrix, Vector) | mul      | * (infix)  | n/a
(Matrix, Scalar) | mul      | * (infix)  | n/a
(Vector, Matrix) | mul      | * (infix)  | n/a
(Vector, Scalar) | mul      | * (infix)  | *= (infix)
(Scalar, Array)  | mul      | * (infix)  | n/a
(Scalar, Matrix) | mul      | * (infix)  | n/a
(Scalar, Vector) | mul      | * (infix)  | n/a

Element-wise multiplication

Element-wise multiplication functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Matrix, Matrix) | elmul    | n/a        | n/a
(Vector, Vector) | elmul    | .* (infix) | .*= (infix)

Division

Division functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Array, Array)   | div      | ./ (infix) | ./= (infix)
(Array, Scalar)  | div      | / (infix)  | /= (infix)
(Matrix, Matrix) | div      | / (infix)  | n/a
(Matrix, Scalar) | n/a      | / (infix)  | n/a
(Vector, Scalar) | div      | / (infix)  | /= (infix)

Element-wise Division

Element-wise division functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Vector, Vector) | eldiv    | ./ (infix) | ./= (infix)

Modulo

Modulo functions & operators

Arguments        | Function | Operator   | In-Place Operator
(Array, Array)   | mod      | .% (infix) | n/a
(Array, Scalar)  | mod      | % (infix)  | n/a

Remainder

Remainder functions & operators

Arguments        | Function  | Operator | In-Place Operator
(Array, Array)   | remainder | n/a      | n/a
(Array, Scalar)  | remainder | n/a      | n/a

Square Root

Square root functions & operators

Arguments | Function | Operator | In-Place Operator
(Array)   | sqrt     | n/a      | n/a

Summation

Sum functions & operators

Arguments | Function | Operator | In-Place Operator
(Array)   | sum      | n/a      | n/a
(Matrix)  | sum      | n/a      | n/a

Dot Product

Dot product functions & operators

Arguments        | Function | Operator  | In-Place Operator
(Array, Array)   | dot      | • (infix) | n/a
(Vector, Vector) | dot      | • (infix) | n/a

Distance

Distance functions & operators

Arguments        | Function | Operator | In-Place Operator
(Array, Array)   | dist     | n/a      | n/a
(Vector, Vector) | dist     | n/a      | n/a

Squared Distance

Squared distance functions & operators

Arguments        | Function | Operator | In-Place Operator
(Array, Array)   | distSq   | n/a      | n/a
(Vector, Vector) | distSq   | n/a      | n/a

Power

Power functions & operators

Arguments        | Function | Operator    | In-Place Operator
(Array, Array)   | pow      | .** (infix) | .**= (infix)
(Array, Scalar)  | pow      | ** (infix)  | **= (infix)
(Matrix, Scalar) | pow      | n/a         | n/a
(Vector, Vector) | pow      | n/a         | n/a

(** is right-associative, i.e. serial exponentiation: a ** b ** c == a ** (b ** c).)

Exponential

Exponential functions & operators

Arguments | Function | Operator | In-Place Operator
(Array)   | exp      | n/a      | n/a
(Matrix)  | exp      | n/a      | n/a
(Vector)  | exp      | n/a      | n/a

Trigonometric Operations

Trigonometric functions & operators

Sine/Cosine/Tangent

Arguments | Function | Operation
(Array)   | sin      | Sine
(Array)   | cos      | Cosine
(Array)   | tan      | Tangent
(Array)   | sincos   | Sine & Cosine

Arc Sine/Cosine/Tangent

Arguments | Function | Operation
(Array)   | asin     | Arc Sine
(Array)   | acos     | Arc Cosine
(Array)   | atan     | Arc Tangent

Hyperbolic Sine/Cosine/Tangent

Arguments | Function | Operation
(Array)   | sinh     | Hyperbolic Sine
(Array)   | cosh     | Hyperbolic Cosine
(Array)   | tanh     | Hyperbolic Tangent

Inverse Hyperbolic Sine/Cosine/Tangent

Arguments | Function | Operation
(Array)   | asinh    | Inverse Hyperbolic Sine
(Array)   | acosh    | Inverse Hyperbolic Cosine
(Array)   | atanh    | Inverse Hyperbolic Tangent

Radians ↔︎ Degrees

Arguments | Function | Operation
(Array)   | rad2deg  | Radians to Degrees
(Array)   | deg2rad  | Degrees to Radians

Exponential Function

Exponential functions & operators

Arguments | Function | Operation
(Array)   | exp      | Base-e Exponential Function
(Array)   | exp2     | Base-2 Exponential Function

Logarithm

Logarithmic functions & operators

Arguments | Function | Operation
(Array)   | log      | Base-e Logarithm
(Array)   | log2     | Base-2 Logarithm
(Array)   | log10    | Base-10 Logarithm
(Array)   | logb     | Base-b Logarithm

Statistical Operations

Statistical functions & operators

Summation

Arguments | Function | Operation
(Array)   | sum      | Summation
(Array)   | asum     | Absolute Summation

Minimum/Maximum

Arguments | Function | Operation
(Array)   | min      | Minimum
(Array)   | max      | Maximum

Mean/Variance

Arguments | Function | Operation
(Array)   | mean     | Mean
(Array)   | meamg    | Mean of Magnitudes
(Array)   | measq    | Mean of Squares
(Array)   | variance | Variance
(Array)   | std      | Standard Deviation
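A quick sketch of the statistical functions listed above (sample values are arbitrary):

import Surge

let samples = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

let m = Surge.mean(samples)      // 5.0
let v = Surge.variance(samples)  // variance of the samples
let s = Surge.std(samples)       // standard deviation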

Auxiliary Functions

Auxiliary functions & operators

Rounding Functions

Arguments | Function | Operation
(Array)   | ceil     | Ceiling
(Array)   | floor    | Flooring
(Array)   | round    | Rounding
(Array)   | trunc    | Integer truncation

Absolute value

Arguments | Function | In-Place Function | Operator | In-Place Operator
(Array)   | abs      | n/a               | n/a      | n/a

Signum function

Arguments | Function | In-Place Function | Operator | In-Place Operator
(Array)   | copysign | n/a               | n/a      | n/a

Multiplicative inverse

Arguments | Function | In-Place Function | Operator | In-Place Operator
(Array)   | rec      | n/a               | n/a      | n/a

Matrix-specific Operations

Matrix-specific functions & operators

Matrix Inversion

Arguments | Function | In-Place Function | Operator | In-Place Operator
(Matrix)  | inv      | n/a               | n/a      | n/a

Matrix Transposition

Arguments | Function  | In-Place Function | Operator    | In-Place Operator
(Matrix)  | transpose | n/a               | ′ (postfix) | n/a

Matrix Determinant

Arguments | Function | In-Place Function | Operator | In-Place Operator
(Matrix)  | det      | n/a               | n/a      | n/a

Eigen Decomposition

Arguments | Function       | In-Place Function | Operator | In-Place Operator
(Matrix)  | eigenDecompose | n/a               | n/a      | n/a
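A brief sketch of the matrix operations above; the [[Scalar]]-literal Matrix initializer is assumed from Surge's API:

import Surge

let m = Matrix<Double>([[2.0, 0.0],
                        [0.0, 4.0]])

let mInv = Surge.inv(m)      // matrix inverse
let mT = Surge.transpose(m)  // matrix transpose
// det(m) and eigenDecompose(m) are also available, per the tables above.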

DSP-specific Operations

Fast Fourier transform functions & operators

Fast Fourier Transform

Arguments | Function | In-Place Function | Operator | In-Place Operator
(Array)   | fft      | n/a               | n/a      | n/a

Convolution

Arguments      | Function | In-Place Function | Operator | In-Place Operator
(Array, Array) | conv     | n/a               | n/a      | n/a

Cross-Correlation

Arguments      | Function | In-Place Function | Operator | In-Place Operator
(Array, Array) | xcorr    | n/a               | n/a      | n/a
(Array)        | xcorr    | n/a               | n/a      | n/a
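A minimal sketch of the DSP helpers (input values are arbitrary):

import Surge

let signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
let kernel = [0.25, 0.5, 0.25]

let spectrum = Surge.fft(signal)           // FFT of the signal
let smoothed = Surge.conv(signal, kernel)  // linear convolution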

Download Details:

Author: Jounce
Source Code: https://github.com/Jounce/Surge 
License: MIT license

#swift #math #matrix #arithmetic 


FFTW.jl: Julia Bindings to The FFTW Library for Fast Fourier Transform

FFTW.jl

This package provides Julia bindings to the FFTW library for fast Fourier transforms (FFTs), as well as functionality useful for signal processing. These functions were formerly a part of Base Julia.

Usage and documentation

]add FFTW
using FFTW
fft([0; 1; 2; 1])

returns

4-element Array{Complex{Float64},1}:
  4.0 + 0.0im
 -2.0 + 0.0im
  0.0 + 0.0im
 -2.0 + 0.0im

The documentation of generic FFT functionality can be found in the AbstractFFTs.jl package. Additional functionalities supported by the FFTW library are documented in the present package.

MKL

Alternatively, the FFTs in Intel's Math Kernel Library (MKL) can be used by running FFTW.set_provider!("mkl"). MKL will be provided through MKL_jll. The change of provider is persistent and needs to be made only once; the package will keep using MKL across builds and updates. Note, however, that MKL provides only a subset of the functionality provided by FFTW. See Intel's documentation for more information about potential differences or gaps in functionality. If MKL no longer fits your needs, FFTW.set_provider!("fftw") reverts to the default provider.
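For example, a sketch of the provider switch just described (run once, then restart Julia so the new provider is loaded):

using FFTW

FFTW.set_provider!("mkl")    # switch the backend to Intel MKL (persistent)

# ...and later, to revert to the default provider:
FFTW.set_provider!("fftw")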

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/FFTW.jl 
License: MIT license

#julia #math #binding 


AbstractFFTs.jl: A Julia framework for implementing FFTs

AbstractFFTs.jl

A general framework for fast Fourier transforms (FFTs) in Julia.

This package is generally not intended to be used directly. Instead, developers of packages that implement FFTs (such as FFTW.jl or FastTransforms.jl) extend the types/functions defined in AbstractFFTs. This allows multiple FFT packages to co-exist with the same underlying fft(x) and plan_fft(x) interface.

Developer information

To define a new FFT implementation in your own module, you should

Define a new subtype (e.g. MyPlan) of AbstractFFTs.Plan{T} for FFTs and related transforms on arrays of T. This must have a pinv::Plan field, initially undefined when a MyPlan is created, that is used for caching the inverse plan.

Define a new method AbstractFFTs.plan_fft(x, region; kws...) that returns a MyPlan for at least some types of x and some set of dimensions region. The region (or a copy thereof) should be accessible via fftdims(p::MyPlan) (which defaults to p.region).

Define a method of LinearAlgebra.mul!(y, p::MyPlan, x) (or A_mul_B!(y, p::MyPlan, x) on Julia prior to 0.7.0-DEV.3204) that computes the transform p of x and stores the result in y.

Define a method of *(p::MyPlan, x), which can simply call your mul! (or A_mul_B!) method. This is not defined generically in this package due to subtleties that arise for in-place and real-input FFTs.

If the inverse transform is implemented, you should also define plan_inv(p::MyPlan), which should construct the inverse plan to p, and plan_bfft(x, region; kws...) for an unnormalized inverse ("backwards") transform of x.

You can also define similar methods of plan_rfft and plan_brfft for real-input FFTs.

The normalization convention for your FFT should be that it computes yₖ = ∑ⱼ xⱼ exp(-2πi jk/n) for a transform of length n, and the "backwards" (unnormalized inverse) transform computes the same thing but with exp(+2πi jk/n).
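Putting the steps together, here is a minimal sketch of a naive O(n^2) DFT backend; the name NaivePlan and all method bodies are illustrative, not part of AbstractFFTs:

using AbstractFFTs, LinearAlgebra

# Step 1: a plan type with an (initially undefined) pinv field.
mutable struct NaivePlan{T} <: AbstractFFTs.Plan{T}
    region                    # dimensions to transform; fftdims(p) defaults to p.region
    pinv::AbstractFFTs.Plan   # cached inverse plan, set lazily
    NaivePlan{T}(region) where {T} = new(region)
end

# Step 2: produce a plan (only 1-d complex input in this sketch).
AbstractFFTs.plan_fft(x::AbstractVector{T}, region; kws...) where {T<:Complex} =
    NaivePlan{T}(region)

# Step 3: apply the plan, computing yₖ = ∑ⱼ xⱼ exp(-2πi jk/n).
function LinearAlgebra.mul!(y::AbstractVector, p::NaivePlan, x::AbstractVector)
    n = length(x)
    for k in 0:n-1
        y[k+1] = sum(x[j+1] * cis(-2π * j * k / n) for j in 0:n-1)
    end
    return y
end

# Step 4: out-of-place application in terms of mul!.
Base.:*(p::NaivePlan, x::AbstractVector) = LinearAlgebra.mul!(similar(x), p, x)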


Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/AbstractFFTs.jl 
License: MIT license

#julia #math #framework 


NumericFuns.jl: Math Functions and Functors for Numerical Computations

NumericFuns

Numerical functions and functors.

Note: This package was originally part of the NumericExtensions package. I later realized that the functors and the type-inference machinery could be useful in other packages, so I split this part out into a standalone package.

This package provides:

  • Additional numerical functions, such as sqr, rsqrt, xlogx, sigmoid, logit, etc.
  • Vectorized methods of the additional numerical functions.
  • Typed functors.

New Numeric Functions

This package provides several commonly used numerical functions that are not in the Julia Base.

Function        | Equivalent expression
sqr(x)          | x * x
rcp(x)          | 1 / x
rsqrt(x)        | 1 / sqrt(x)
rcbrt(x)        | 1 / cbrt(x)
xlogx(x)        | ifelse(x > 0, x * log(x), 0)
xlogy(x, y)     | ifelse(x > 0, x * log(y), 0)
sigmoid(x)      | 1 / (1 + exp(-x))
logit(x)        | log(x / (1 - x))
softplus(x)     | log(1 + exp(x))
invsoftplus(x)  | log(exp(x) - 1)
logsumexp(x, y) | log(exp(x) + exp(y))

Note that the equivalent expressions above serve only to convey the semantics. The actual implementations may differ, taking more optimized routes that guard against overflow and preserve type stability and computational efficiency.

Functors

Functors are typed instances used to indicate a particular function. Since Julia is not (yet) able to specialize on functions, functors provide an effective way to let multiple dispatch and functional programming work together.

The package defines an abstract type Functor as

abstract Functor{N}

where N is an integer indicating the number of arguments. All functor types are subtypes of Functor.

Each functor type comes with an evaluate method, which evaluates the corresponding function given arguments.

Define a functor

Here is an example that illustrates how one can define a functor

type Add <: Functor{2} end
evaluate(::Add, x::Number, y::Number) = x + y

Two macros @functor1 and @functor2 are provided for simplifying the definition of unary and binary functors:

@functor1(Cbrt, cbrt, Real)
@functor2(Add, +, Number)

These macros accept three arguments: the functor type name, the corresponding function, and the super type of all acceptable argument types.

Note: The package also defines a large collection of functors for various mathematical operations (so you don't have to define them yourself).
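For example, a predefined functor can be evaluated directly (using the evaluate method and the Add/Multiply functors from the tables below):

evaluate(Add(), 1, 2)          # 3
evaluate(Multiply(), 3, 4.0)   # 12.0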

Functors for operators

Here is a table of functor types for operators:

functor type   operator   domain
Negate         -          Number
Add            +          Number
Subtract       -          Number
Multiply       *          Number
Divide         /          Number
RDivide        \          Number
Pow            ^          Number
And            &          Bool
Or             |          Bool
Not            !          Bool
BitwiseAnd     &          Integer
BitwiseOr      |          Integer
BitwiseNot     ~          Integer
BitwiseXor     $          Integer
LT             <          Real
GT             >          Real
LE             <=         Real
GE             >=         Real
EQ             ==         Number
NE             !=         Number

Functors for math functions

The package also defines functors for named functions. The naming of functor types follows the $(capitalize(funname))Fun rule. For example, the functor type for sqrt is SqrtFun, and that for lgamma is LgammaFun, etc.

In particular, the package defines functors for the following functions:

arithmetic functions

abs, abs2, real, imag, sqr, rcp,
sign, signbit, div, fld, rem, mod

rounding functions

floor, ceil, trunc, round,
ifloor, iceil, itrunc, iround

number classification functions

isnan, isinf, isfinite

algebraic functions

sqrt, rsqrt, cbrt, rcbrt, hypot

exponential & logarithm

exp, exp2, exp10, expm1,
log, log2, log10, log1p,

sigmoid, logit, xlogx, xlogy,
softplus, invsoftplus, logsumexp

trigonometric functions

sin, cos, tan, cot, sec, csc,
asin, acos, atan, acot, asec, acsc, atan2,
sinc, cosc, sinpi, cospi,
sind, cosd, tand, cotd, secd, cscd,
asind, acosd, atand, acotd, asecd, acscd

hyperbolic functions

sinh, cosh, tanh, coth, sech, csch,
asinh, acosh, atanh, acoth, asech, acsch

special functions

erf, erfc, erfinv, erfcinv, erfi, erfcx,
gamma, lgamma, digamma,
eta, zeta, beta, lbeta,
airy, airyprime, airyai, airyaiprime, airybi, airybiprime,
besselj0, besselj1, bessely0, bessely1,
besseli, besselj, besselk, bessely,
hankelh1, hankelh2

Result Type Inference

Each functor defined in this package comes with result_type methods that return the type of the result, given the argument types. These methods are thoroughly tested to ensure correctness. For example,

result_type(Add(), Int, Float64)  # --> returns Float64
result_type(SqrtFun(), Int)   # --> returns Float64

The package also provides other convenient methods for type inference, which include fptype and arithtype. Particularly, we have

fptype{T<:Real}(::Type{T}) == typeof(convert(AbstractFloat, one(T)))
fptype{T<:Real}(::Type{Complex{T}}) == Complex{fptype(T)}

arithtype{T1<:Number, T2<:Number}(::Type{T1}, ::Type{T2}) == typeof(one(T1) + one(T2))

The internal implementations of these functions are very efficient, usually without actually evaluating the expressions.

Download Details:

Author: lindahua
Source Code: https://github.com/lindahua/NumericFuns.jl 
License: View license

#julia #math #functions 


The Foundations of Arithmetic in C++

The C++ integral arithmetic operations present a challenge in formal interface design. Their preconditions are nontrivial, their postconditions are exacting, and they are deeply interconnected by mathematical theorems. I will address this challenge, presenting interfaces, theorems, and proofs in a lightly extended C++.
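To make the point concrete, consider one such precondition: signed overflow is undefined behavior in C++, so the precondition of a + b for int is that the mathematical sum is representable. A small illustrative sketch (not taken from the talk):

#include <limits>

// Precondition of a + b for int: the mathematical sum fits in int.
bool add_precondition(int a, int b) {
    if (b > 0) return a <= std::numeric_limits<int>::max() - b;
    return a >= std::numeric_limits<int>::min() - b;
}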

This talk takes its title from Bertrand Russell’s and Alfred North Whitehead’s logicist tour de force, Principia Mathematica. It echoes that work in developing arithmetic from first principles, but starts from procedural first principles: stability of objects, substitutability of values, and repeatability of operations.

In sum, this talk is one part formal interface design, one part tour of C++ integral arithmetic, one part foundations of arithmetic, and one part writing mathematical proofs procedurally.

#cplusplus #cpp #programming #mathematics #math 
