1622572560

Motivated by the industry practice of pairs trading and long/short equity strategies, we study an approach that combines statistical learning and optimization to construct portfolios with mean-reverting price dynamics.

Our main objectives are:

- Design a portfolio with mean-reverting price dynamics, with parameters estimated by maximum likelihood;
- Select portfolios with desirable characteristics, such as high mean reversion;
- Build a parsimonious portfolio, i.e. find a small subset from a larger collection of assets for long/short positions.

In this article, we present the full problem formulation and discuss a specialized algorithm that exploits the problem structure. Using historical price data, we illustrate the method in a series of numerical examples.

We are given historical data for *m* assets observed over *T* time-steps. Our main goal is to find the vector *w*, the linear combination of assets comprising our portfolio, such that the corresponding portfolio price process best follows an Ornstein-Uhlenbeck (OU) process. The likelihood of an OU process observed over *T* time-steps is given by
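As a reference sketch, a standard form of the OU log-likelihood for portfolio prices s₀, …, s_T observed at sampling interval Δt, with parameters (θ, μ, σ) for the long-run mean, mean-reversion speed, and volatility (the notation here is a reconstruction and may differ from the authors'):

```latex
\ell(\theta, \mu, \sigma \mid s_0, \dots, s_T)
  = -\frac{T}{2}\ln\bigl(2\pi\tilde{\sigma}^2\bigr)
    - \frac{1}{2\tilde{\sigma}^2}\sum_{t=1}^{T}
      \left[ s_t - s_{t-1}e^{-\mu\Delta t} - \theta\bigl(1 - e^{-\mu\Delta t}\bigr) \right]^2,
\qquad
\tilde{\sigma}^2 = \sigma^2\,\frac{1 - e^{-2\mu\Delta t}}{2\mu}
```

Here the portfolio price is s_t = wᵀx_t, and the AR(1) coefficient c = e^(−μΔt) is the quantity referenced later: a lower *c* corresponds to faster mean reversion (higher μ).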

A major feature of our joint optimization approach is that we *simultaneously* solve for the optimal portfolio and the corresponding maximum-likelihood parameters.

Minimizing the negative log-likelihood results in the optimization problem
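Schematically (a reconstruction under the same notation as above, assuming the 1-norm normalization stated later in the article), the problem takes the form:

```latex
\min_{w,\;\theta,\;\mu > 0,\;\sigma > 0}
  \; -\ell\bigl(\theta, \mu, \sigma \mid w^\top x_0, \dots, w^\top x_T\bigr)
\quad \text{s.t.} \quad \|w\|_1 = 1
```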

Given a set of candidate assets, we want to select a small parsimonious subset to build a portfolio. This feature is useful in practice since it reduces transaction costs, execution risks, and the burden of monitoring many stock prices.

To add this feature to the model, we want to impose a sparsity penalty on the portfolio vector *w*. While the 1-norm is frequently used, in our case we have already imposed the 1-norm equality constraint ||w||₁ = 1. To obtain sparse solutions (i.e. limiting non-zero weights to a small number), we use the 0-norm and apply a cardinality constraint ||w||₀ ≤ η to the optimization problem. This constraint limits the maximum number of assets in the portfolio, and is nonconvex.

In addition to sparsifying the solution, we may also want to promote other features of the portfolio. The penalized likelihood framework is flexible enough to allow these enhancements. An important feature is encapsulated by the mean-reversion coefficient μ; a higher μ may be desirable. We can seek a higher μ by promoting a lower *c*, e.g. with a linear penalty.

#optimization #algorithms #portfolio #trading

1595334123

I consider myself an active StackOverflow user, although my activity varies with my daily workload. I enjoy answering questions with the angular tag, and I always try to create a working example to prove the correctness of my answers.

To create an Angular demo I usually use Plunker, StackBlitz, or even JSFiddle. I like all of them, but when I run into errors I want a slightly more usable tool to understand what’s going on.

Many people who ask questions on StackOverflow don’t want to isolate the problem and prepare a minimal reproduction, so they usually post all of their code in the question. They also tend to be imprecise and make a lot of mistakes in template syntax. To avoid wasting time investigating where an error comes from, I tried to create a tool that helps me quickly find what causes the problem.

Angular demo runner: an online Angular editor for building demos (ng-run.com).

Let me show what I mean…

There are template parser errors that can be easily caught by StackBlitz.

It gives me some information, but I want the error to be highlighted.


1624867080

Algorithmic trading backtest and optimization examples.

#algorithms #optimization #trading #backtest

1624496700

While writing code and algorithms, you should consider tail call optimization (TCO).

Tail call optimization is the practice of optimizing recursive functions to avoid building up a tall **call stack**. You should also know that some programming languages perform tail call optimization natively.

For example, Python and Java deliberately do not implement TCO, while JavaScript’s ES2015 (ES6) specification includes it (though engine support remains limited in practice).

Whether or not your favorite language natively supports TCO, I would definitely recommend assuming that your compiler/interpreter will not do the work for you.

There are two well-known methods for achieving tail call optimization and avoiding tall call stacks.

As you know, recursion builds up the call stack, so avoiding such recursion in our algorithms allows us to save on **memory usage**. This strategy is called **bottom-up**: we start from the beginning, while a recursive algorithm starts from the end, builds up a stack, and works backwards.

Let’s take an example with the following top-down (recursive) code:

```
function product1ToN(n) {
  return (n > 1) ? (n * product1ToN(n - 1)) : 1;
}
```

As you can see this code has a problem: it builds up a **call stack** of size O(n), which makes our total memory cost O(n). This code makes us vulnerable to a **stack overflow error**, where the call stack gets too big and runs out of space.

In order to optimize our example, we need to go bottom-up and remove the recursion:

```
function product1ToN(n) {
  let result = 1;
  for (let num = 1; num <= n; num++) {
    result *= num;
  }
  return result;
}
```

This time we are not stacking up calls in the **call stack**, and we use O(1) space complexity (with O(n) time complexity).
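The second well-known method keeps the recursion but rewrites it in *tail form* with an accumulator argument, so that an engine which implements TCO can reuse the current stack frame instead of pushing a new one. A sketch (the accumulator parameter is an addition to the earlier example, not part of the original):

```javascript
function product1ToNTail(n, acc = 1) {
  // Base case: the accumulated product is the answer.
  if (n <= 1) return acc;
  // The recursive call is the very last operation, so a TCO-capable
  // engine can replace the current frame rather than grow the stack.
  return product1ToNTail(n - 1, n * acc);
}
```

Keep in mind that in engines without TCO this still builds an O(n) stack, which is why the bottom-up loop remains the safer default.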

#memoization #programming #algorithms #optimization #optimize

1598775060

In complex machine learning models, performance usually depends on multiple input parameters. In order to get the optimal model, the parameters must be properly tuned. However, when there are multiple parameter variables, each ranging across a wide spectrum of values, there are too many possible configurations to test them all. In these cases, optimization methods should be used to find good input parameters without spending vast amounts of time searching for them.

The diagram above shows the model’s performance as a function of only two parameters. As the example illustrates, it is not always easy to find the maximum or minimum of the curve. This is why optimization methods and algorithms are crucial in the field of machine learning.

One of the most commonly used optimization strategies is the genetic algorithm. Genetic algorithms are based on Darwin’s theory of natural selection. They are relatively easy to implement, and there is a lot of flexibility in their setup, so they can be applied to a wide range of problems.

To start off, there must be a fitness function that measures how well a set of input parameters performs. Solutions with higher fitness are better than ones with lower fitness.

For example, if a solution has a cost of x + y + z, then the fitness function should try to minimize that cost. This can be done with the following fitness function:
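A minimal sketch of such a fitness function (the solution representation and the inverse-cost mapping are assumptions for illustration, not from the original):

```javascript
// Hypothetical solution: an object with numeric genes x, y, z.
// Cost to minimize is x + y + z; fitness rises as cost falls, so the
// genetic algorithm's selection step can simply prefer higher fitness.
function fitness(solution) {
  const cost = solution.x + solution.y + solution.z;
  return 1 / (1 + cost); // maps cost in [0, Infinity) to fitness in (0, 1]
}
```

With this mapping, selection, crossover, and mutation can all operate on raw fitness values without special-casing minimization.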

#genetic-algorithm #optimization #genetics #optimization-algorithms #machine-learning