A method is a block of code that performs a task and is associated with a class or an object. It is related to the non-object-oriented concepts of functions and procedures.


An implementation of data fusion methods in Julia, specifically **Macau** and **BPMF** (Bayesian Probabilistic Matrix Factorization). Supported features:

- Factorization of matrices (without or with side information)
- Factorization of tensors (without or with side information)
- Co-factorization of multiple matrices and tensors (any side information is possible)
- Side information inside relation
- Parallelization (multi-core and multi-node)

BayesianDataFusion uses Gibbs sampling to learn the latent vectors and link matrices. In addition to predictions, Gibbs sampling also provides estimates of the **standard deviation** and other metrics that can be computed from the samples.

`BayesianDataFusion.jl` provides parallel and highly optimized implementations of

- Bayesian Probabilistic Matrix Factorization (BPMF)
- Bayesian Probabilistic Tensor Factorization (BPTF)
- Macau - Bayesian Multi-relational Factorization with Side Information

These methods allow us to predict **unobserved values** in the matrices (or tensors). Since they are all Bayesian methods, we can also measure the **uncertainty** of the predictions. BPMF and BPTF are special cases of Macau. Macau adds:

- use of **entity side information** to improve factorization (e.g., user and/or movie features for factorizing movie ratings)
- use of **relation side information** to improve factorization (e.g., data about when a user went to see a particular movie)
- factorization of **several** matrices (and tensors) for an entity simultaneously
- support for high-dimensional side information, e.g., 1,000,000-dimensional user features

```
Pkg.clone("https://github.com/jaak-s/BayesianDataFusion.jl.git")
```

Examples

Next we give simple examples of using **Macau** for movie ratings prediction from MovieLens data, which is included in the BayesianDataFusion package.

We will use the `macau` function to factorize an (incompletely observed) matrix of movie ratings with **side information** for both users and movies. The side information contains basic features about users, like age group and gender, and genre information for movies. To run the example, first install the Julia library for reading MATLAB files:

`Pkg.add("MAT")`

For examples, please see documentation.

Author: jaak-s

Source Code: https://github.com/jaak-s/BayesianDataFusion.jl

License: View license


Learn the definition of a method in Java, how to use methods, and when to use methods in this handy tutorial.

A method in Java (called a "function" in many other programming languages) is a portion of code that's been grouped together and labeled for reuse. Methods are useful because they allow you to perform the same action or series of actions without rewriting the same code, which not only means less work for you, it means less code to maintain and debug when something goes wrong.

A method exists within a class, so the standard Java boilerplate code applies:

```
package com.opensource.example;

public class Example {
    // code here
}
```

A package definition isn't strictly necessary in a simple one-file application like this, but it's a good habit to get into, and most IDEs enforce it.

By default, Java looks for a `main` method to run in a class. Methods can be made public or private, and static or non-static, but the main method must be public and static for the Java compiler to recognize and utilize it. When a method is public, it can be executed from outside the class. To call the `Example` class upon start of the program, its `main` method must be accessible, so set it to `public`.

Here's a simple demonstration of two methods: one `main` method that gets executed by default when the `Example` class is invoked, and one `report` method that accepts input from `main` and performs a simple action.

To mimic arbitrary data input, I use an if-then statement that chooses between two strings, based on when you happen to start the application. In other words, the `main` method first sets up some data (in real life, this data could come from user input, or from some other method elsewhere in the application), and then "calls" the `report` method, providing the processed data as input:

```
package com.opensource.example;

public class Example {
    public static void main(String[] args) {
        // generate some data
        long myTime = System.currentTimeMillis();

        String weather;

        if ( myTime%2 == 0 ) {
            weather = "party";
        } else {
            weather = "apocalypse";
        }

        // call the other method
        report(weather);
    }

    private static void report(String day) {
        System.out.printf("Welcome to the zombie %s\n", day);
    }
}
```

Run the code:

```
$ java ./Example.java
Welcome to the zombie apocalypse
$ java ./Example.java
Welcome to the zombie party
```

Notice that there are two different results from the same `report` method. In this simple demonstration, of course, there's no need for a second method. The same result could have been generated from the if-then statement that mimics the data generation. But when a method performs a complex task, like resizing an image into a thumbnail and then generating a widget on screen using that resized image, then the "expense" of an additional component makes a lot of sense.

It can be difficult to know when to use a method and when to just send data into a Java Stream or loop. If you're faced with that decision, the answer is usually to use a method. Here's why:

- Methods are cheap. They don't add processing overhead to your code.
- Methods reduce the line count of your code.
- Methods are specific. It's usually easier to find a method called `resizeImage` than it is to find code that's hidden in a loop somewhere in the function that loads images from the drive.
- Methods are reusable. When you first write a method, you may *think* it's only useful for one task within your application. As your application grows, however, you may find yourself using a method you thought you were "done" with.

Functional programming utilizes methods as the primary construct for performing tasks. You create a method that accepts one kind of data, processes that data, and outputs new data. String lots of methods together, and you have a dynamic and capable application. Programming languages like C and Lua are examples of this style of coding.

The other way to think of accomplishing tasks with code is the object-oriented model, which Java uses. In object-oriented programming, methods are components of a template. Instead of sending data from method to method, you create objects with the option to alter them through the use of their methods.

Here's the same simple zombie apocalypse demo program from an object-oriented perspective. In the functional approach, I used one method to generate data and another to perform an action with that data. The object-oriented equivalent is to have a class that represents a work unit. This example application presents a message-of-the-day to the user, announcing that the day brings either a zombie party or a zombie apocalypse. It makes sense to program a "day" object, and then to query that day to learn about its characteristics. As an excuse to demonstrate different aspects of object-oriented construction, the new sample application will also count how many zombies have shown up to the party (or apocalypse).

Java uses one file for each class, so the first file to create is `Day.java`, which serves as the Day object:

```
package com.opensource.example;

import java.util.Random;

// Class
public class Day {
    public static String weather;
    public int count;

    // Constructor
    public Day() {
        long myTime = System.currentTimeMillis();

        if ( myTime%2 == 0 ) {
            weather = "party";
        } else {
            weather = "apocalypse";
        }
    }

    // Methods
    public String report() {
        return weather;
    }

    public int counter() {
        Random rand = new Random();
        count = count + rand.nextInt(100);
        return(count);
    }
}
```

In the `Class` section, two fields are created: `weather` and `count`. Weather is static. Over the course of a day (in this imaginary situation), weather doesn't change. It's either a party or an apocalypse, and it lasts all day. The number of zombies, however, increases over the course of a day.

In the `Constructor` section, the day's weather is determined. It's done in a constructor because it's meant to happen only once, when the class is initially invoked.

In the `Methods` section, the `report` method only returns the weather report as determined and set by the constructor. The `counter` method, however, generates a random number and adds it to the current zombie count.

This class, in other words, does three very different things:

- Represents a "day" as defined by the application.
- Sets an unchanging weather report for the day.
- Sets an ever-increasing zombie count for the day.

To put all of this to use, create a second file:

```
package com.opensource.example;

public class Example {
    public static void main(String[] args) {
        Day myDay = new Day();
        String foo = myDay.report();
        String bar = myDay.report();

        System.out.printf("Welcome to a zombie %s\n", foo);
        System.out.printf("Welcome to a zombie %s\n", bar);
        System.out.printf("There are %d zombies out today.\n", myDay.counter());
        System.out.printf("UPDATE: %d zombies. ", myDay.counter());
        System.out.printf("UPDATE: %d zombies. ", myDay.counter());
    }
}
```

Because there are now two files, it's easiest to use a Java IDE to run the code, but if you don't want to use an IDE, you can create your own JAR file. Run the code to see the results:

```
Welcome to a zombie apocalypse
Welcome to a zombie apocalypse
There are 35 zombies out today.
UPDATE: 67 zombies. UPDATE: 149 zombies.
```

The "weather" stays the same regardless of how many times the `report` method is called, but the number of zombies on the loose increases the more you call the `counter` method.

Methods (or functions) are important constructs in programming. In Java, you can use them either as part of a single class for functional-style coding, or you can use them across classes for object-oriented code. Both styles of coding are different perspectives on solving the same problem, so there's no right or wrong decision. Through trial and error, and after a little experience, you learn which one suits a particular problem best.

Original article source at: https://opensource.com/


Markov Chain Monte Carlo (MCMC) methods let us compute samples from a distribution even though we can’t do this relying on traditional methods. In this article, Toptal Data Scientist Divyanshu Kalra will introduce you to Bayesian methods and Metropolis-Hastings, demonstrating their potential in the field of probabilistic programming.

Let’s get the basic definition out of the way: Markov Chain Monte Carlo (MCMC) methods let us compute samples from a distribution even though we can’t compute the distribution itself.

What does this mean? Let’s back up and talk about Monte Carlo Sampling.

What are Monte Carlo methods?

*“Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.”* (Wikipedia)

Let’s break that down.

Imagine that you have an irregular shape, like the shape presented below:

And you are tasked with determining the area enclosed by this shape. One of the methods you can use is to make small squares in the shape, count the squares, and that will give you a pretty accurate approximation of the area. However, that is difficult and time-consuming.

Monte Carlo sampling to the rescue!

First, we draw a big square of a known area around the shape, for example 50 cm². Now we “hang” this square and start throwing darts randomly at the shape.

Next, we count the total number of darts in the square and the number of darts in the shape we are interested in. Let’s assume that the total number of “darts” used was 100 and that 22 of them ended up within the shape. Now the area can be calculated by the simple formula:

*area of shape = area of square × (number of darts in the shape) / (number of darts in the square)*

So, in our case, this comes down to:

*area of shape = 50 × 22/100 = 11 cm²*

If we multiply the number of “darts” by a factor of 10, this approximation becomes very close to the real answer:

*area of shape = 50 × 280/1000 = 14 cm²*

This is how we break down complicated tasks, like the one given above, by using Monte Carlo sampling.
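The dart-throwing procedure is easy to sketch in code. Since the article's irregular shape isn't reproduced here, the sketch below uses a circle of radius 2 cm inside a 10 × 10 cm square as a stand-in shape (the shape, the square size, and the function names are illustrative assumptions, not part of the article):

```
import random

def monte_carlo_area(is_inside, square_side, n_darts, seed=0):
    """Estimate a shape's area from the fraction of random darts
    thrown at a bounding square that land inside the shape."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_darts):
        x = rng.uniform(0, square_side)
        y = rng.uniform(0, square_side)
        if is_inside(x, y):
            hits += 1
    return square_side ** 2 * hits / n_darts

# Stand-in shape: a circle of radius 2 centered in the square.
# Its true area is pi * 2**2, roughly 12.57 cm^2.
def in_circle(x, y):
    return (x - 5) ** 2 + (y - 5) ** 2 <= 4

estimate = monte_carlo_area(in_circle, 10, 100_000)
print(estimate)
```

As with the darts in the text, the estimate tightens as `n_darts` grows, which is exactly the law of large numbers at work.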

The area approximation was closer the more darts we threw, and this is because of the Law of Large Numbers:

*“The law of large numbers is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.”*

This brings us to our next example, the famous Monty Hall problem.

The Monty Hall problem is a very famous brain teaser:

*“There are three doors, one has a car behind it, the others have a goat behind them. You choose a door, the host opens a different door, and shows you there’s a goat behind it. He then asks you if you want to change your decision. Do you? Why? Why not?”*

The first thing that comes to your mind is that the chances of winning are equal whether you switch or don’t, but that’s not true. Let’s make a simple flowchart to demonstrate the same.

Assuming that the car is behind door 3:

Hence, if you switch, you win ⅔ times, and if you don’t switch, you win only ⅓ times.

Let’s solve this by using sampling.

```
wins = []
for i in range(int(10e6)):
    car_door = assign_car_door()
    choice = random.randint(0, 2)
    opened_door = assign_door_to_open(car_door)
    did_he_win = win_or_lose(choice, car_door, opened_door, switch = False)
    wins.append(did_he_win)

print(sum(wins)/len(wins))
```

The `assign_car_door()` function is just a random number generator that selects door 0, 1, or 2, behind which there is a car. `assign_door_to_open` selects a door that has a goat behind it and is not the one you selected; the host opens it. `win_or_lose` returns *true* or *false*, denoting whether you won the car; it takes a bool `switch`, which says whether you switched doors.
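The simulation above leaves its helper functions undefined. Below is one possible sketch of them (an assumption, not the author's original code; note that this version passes the player's `choice` to `assign_door_to_open` as a second argument, since the host must avoid opening the chosen door):

```
import random

def assign_car_door():
    # Place the car behind door 0, 1, or 2 at random.
    return random.randint(0, 2)

def assign_door_to_open(car_door, choice):
    # The host opens a door hiding a goat that the player didn't pick.
    options = [d for d in (0, 1, 2) if d != car_door and d != choice]
    return random.choice(options)

def win_or_lose(choice, car_door, opened_door, switch=False):
    if switch:
        # Switch to the only remaining unopened door.
        choice = next(d for d in (0, 1, 2) if d != choice and d != opened_door)
    return choice == car_door

n = 100_000
stay_wins = 0
switch_wins = 0
for _ in range(n):
    car_door = assign_car_door()
    choice = random.randint(0, 2)
    opened_door = assign_door_to_open(car_door, choice)
    stay_wins += win_or_lose(choice, car_door, opened_door, switch=False)
    switch_wins += win_or_lose(choice, car_door, opened_door, switch=True)

print(stay_wins / n, switch_wins / n)  # roughly 1/3 vs 2/3
```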

Let’s run this simulation 10 million times:

- Probability of winning if you don’t switch: 0.334134
- Probability of winning if you do switch: 0.667255

This is very close to the answers the flowchart gave us.

In fact, the more we run this simulation, the closer the answer comes to the true value, hence validating the law of large numbers:

The x-axis is the number of simulations run, and y is the probability of winning if you don’t switch.

The same can be seen from this table:

| Simulations run | Probability of winning if you switch | Probability of winning if you don't switch |
| --- | --- | --- |
| 10 | 0.6 | 0.2 |
| 10^2 | 0.63 | 0.33 |
| 10^3 | 0.661 | 0.333 |
| 10^4 | 0.6683 | 0.3236 |
| 10^5 | 0.66762 | 0.33171 |
| 10^6 | 0.667255 | 0.334134 |
| 10^7 | 0.6668756 | 0.3332821 |

*“Frequentists, known as the more classical version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title).”*

*“Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event.”* - from Probabilistic Programming and Bayesian Methods for Hackers

What does this mean?

In the frequentist way of thinking, we look at probabilities in the long run. When a frequentist says that there is a 0.001% chance of a car crash happening, it means, if we consider infinite car trips, 0.001% of them will end in a crash.

A Bayesian mindset is different, as we start with a prior, a belief. If we talk about a belief of 0, it means that your belief is that the event will never happen; conversely, a belief of 1 means that you are sure it will happen.

Then, once we start observing data, we update this belief to take into consideration the data. How do we do this? By using the Bayes theorem.

*P(A | B) = (P(B | A) × P(A)) / P(B)* ....(1)

Let’s break it down.

- `P(A | B)` gives us the probability of event A given event B. This is the posterior; B is the data we observed, so we are essentially asking what the probability of the event is, considering the data we observed.
- `P(A)` is the prior, our belief that event A will happen.
- `P(B | A)` is the likelihood, the probability that we will observe the data given that A is true.

Let’s look at an example, the cancer screening test.

Let’s say a patient goes to get a mammogram done, and the mammogram comes back positive. What is the probability that the patient actually has cancer?

Let’s define the probabilities:

- 1% of women have breast cancer.
- Mammograms detect cancer 80% of the time when it is actually present.
- 9.6% of mammograms falsely report that you have cancer when you actually do not have it.

So, if you were to say that a positive mammogram means there is an 80% chance you have cancer, that would be wrong. You would not be taking into consideration that having cancer is a rare event, i.e., that only 1% of women have breast cancer. We need to take this as a prior, and this is where the Bayes theorem comes into play:

*P(C+ | T+) = (P(T+ | C+) × P(C+)) / P(T+)*

- `P(C+ | T+)`: the probability that cancer is present given that the test was positive; this is what we are interested in.
- `P(T+ | C+)`: the probability that the test is positive given that there is cancer; as discussed above, this equals 80% = 0.8.
- `P(C+)`: the prior probability, the probability of an individual having cancer, which is equal to 1% = 0.01.
- `P(T+)`: the probability that the test is positive, no matter what, so it has two components: *P(T+) = P(T+ | C-)P(C-) + P(T+ | C+)P(C+)*
- `P(T+ | C-)`: the probability that the test came back positive but there is no cancer; this is given by 9.6% = 0.096.
- `P(C-)`: the probability of not having cancer; since the probability of having cancer is 1%, this is equal to 99% = 0.99.
- `P(T+ | C+)`: the probability that the test came back positive given that you have cancer, equal to 80% = 0.8.
- `P(C+)`: the probability of having cancer, equal to 1% = 0.01.

Plugging all of this into the original formula:

*P(C+ | T+) = (0.8 × 0.01) / (0.096 × 0.99 + 0.8 × 0.01) = 0.008 / 0.10304 ≈ 0.0776*

So, given that the mammogram came back positive, there is a 7.76% chance of the patient having cancer. It might seem strange at first but it makes sense. The test gives a false positive 9.6% of the time (quite high), so there will be many false positives in a given population. For a rare disease, most of the positive test results will be wrong.
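The arithmetic above can be double-checked in a few lines (the variable names here are illustrative):

```
p_cancer = 0.01              # P(C+): prior prevalence
p_pos_given_cancer = 0.80    # P(T+ | C+): true-positive rate
p_pos_given_healthy = 0.096  # P(T+ | C-): false-positive rate

# Total probability of a positive test, P(T+).
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Bayes' theorem: P(C+ | T+).
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(p_cancer_given_pos)  # about 0.0776, i.e., 7.76%
```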

Let us now revisit the Monty Hall problem and solve it using the Bayes theorem.

The priors can be defined as:

- Assume you choose door A as your choice in the beginning.
- P(H) = ⅓, ⅓, ⅓ for all three doors, which means that before the door was opened and the goat revealed, there was an equal chance of the car being behind any of them.

The likelihood can be defined as:

- `P(D|H)`, where event D is that Monty chooses door B and there is no car behind it.
- `P(D|H)` = ½ if the car is behind door A, since there is a 50% chance that he will choose door B and a 50% chance that he chooses door C.
- `P(D|H)` = 0 if the car is behind door B, since there is a 0% chance that he will choose door B if the car is behind it.
- `P(D|H)` = 1 if the car is behind door C and you choose A; there is a 100% probability that he will choose door B.

We are interested in `P(H|D)`, which is the probability that the car is behind a given door now that Monty has shown us a goat behind one of the other doors.

It can be seen from the posterior, `P(H|D)`, that switching from door A to door C will increase the probability of winning.
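The posterior update can be sketched numerically (a sketch assuming, as in the text, that you pick door A and Monty opens door B):

```
# Hypotheses: the car is behind door A, B, or C.
priors = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# P(D | H): probability that Monty opens door B under each hypothesis.
likelihoods = {"A": 0.5, "B": 0.0, "C": 1.0}

unnormalized = {door: priors[door] * likelihoods[door] for door in priors}
evidence = sum(unnormalized.values())  # P(D)
posterior = {door: p / evidence for door, p in unnormalized.items()}
print(posterior)  # door A stays at 1/3; door C rises to 2/3
```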

Now, let’s combine everything we covered so far and try to understand how Metropolis-Hastings works.

Metropolis-Hastings uses the Bayes theorem to get the posterior distribution of a complex distribution, from which sampling directly is difficult.

How? Essentially, it randomly selects different samples from a space and checks whether the new sample is more likely to have come from the posterior than the last sample. Since we are looking at a ratio of probabilities, P(B) in equation (1) cancels out:

*P(acceptance) = (P(newSample) × likelihood of new sample) / (P(oldSample) × likelihood of old sample)*

The “likelihood” of each new sample is decided by the function *f*. That’s why *f* must be proportional to the posterior we want to sample from.

To decide if θ′ is to be accepted or rejected, the following ratio must be computed for each newly proposed θ′:

*r = (P(D | θ′) × P(θ′)) / (P(D | θ) × P(θ))*

where θ is the old sample and `P(D | θ)` is the likelihood of sample θ.

Let’s use an example to understand this better. Let’s say you have data but you want to find out the normal distribution that fits it, so:

*X ~ N(mean, std)*

Now we need to define priors for both the mean and the std. For simplicity, we will assume that both follow a normal distribution with mean 1 and std 2:

*mean ~ N(1, 2)*

*std ~ N(1, 2)*

Now, let's generate some data, taking the true mean and std to be 1.5 and 1.2, respectively.

```
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st
meanX = 1.5
stdX = 1.2
X = np.random.normal(meanX, stdX, size = 1000)
_ = plt.hist(X, bins = 50)
```

Let’s first use a library called *PyMC3* to model the data and find the posterior distribution for the mean and std.

```
import pymc3 as pm

with pm.Model() as model:
    mean = pm.Normal("mean", mu = 1, sigma = 2)
    std = pm.Normal("std", mu = 1, sigma = 2)
    x = pm.Normal("X", mu = mean, sigma = std, observed = X)
    step = pm.Metropolis()
    trace = pm.sample(5000, step = step)
```

Let’s go over the code.

First, we define the prior for mean and std, i.e., a normal with mean 1 and std 2.

`x = pm.Normal("X", mu = mean, sigma = std, observed = X)`

In this line, we define the variable that we are interested in; it takes the mean and std we defined earlier, and it also takes the observed values we generated earlier.

Let’s look at the results:

```
_ = plt.hist(trace['std'], bins = 50, label = "std")
_ = plt.hist(trace['mean'], bins = 50, label = "mean")
_ = plt.legend()
```

The std is centered around 1.2, and the mean is at 1.55 - pretty close!

Let’s now do it from scratch to understand Metropolis-Hastings.

First, let’s understand what Metropolis-Hastings does. Earlier in this article, we said, *“Metropolis-Hastings randomly selects different samples from a space,”* but how does it know which sample to select?

It does so using a proposal distribution: a normal distribution centered at the currently accepted sample with a std of 0.5. Here 0.5 is a hyperparameter we can tweak; the larger the proposal's std, the farther it will search from the currently accepted sample. Let's code this.

Since we have to find the mean and std of the distribution, this function will take the currently accepted mean and the currently accepted std, and return proposals for both.

```
def get_proposal(mean_current, std_current, proposal_width = 0.5):
    return np.random.normal(mean_current, proposal_width), \
        np.random.normal(std_current, proposal_width)
```

Now, let’s code the logic that accepts or rejects the proposal. It has two parts: the *prior* and the *likelihood*.

First, let’s calculate the probability of the proposal coming from the prior:

```
def prior(mean, std, prior_mean, prior_std):
    return st.norm(prior_mean[0], prior_mean[1]).pdf(mean)* \
        st.norm(prior_std[0], prior_std[1]).pdf(std)
```

It just takes the *mean* and *std* and calculates how likely it is that this *mean* and *std* came from the *prior distribution* that we defined.

In calculating the likelihood, we calculate how likely it is that the data that we saw came from the proposed distribution.

```
def likelihood(mean, std, data):
    return np.prod(st.norm(mean, std).pdf(data))
```

So for each data point, we multiply the probability of that data point coming from the proposed distribution.

Now, we need to call these functions for the current mean and std and for the proposed *mean* and *std*.

```
prior_current = prior(mean_current, std_current, prior_mean, prior_std)
likelihood_current = likelihood(mean_current, std_current, data)
prior_proposal = prior(mean_proposal, std_proposal, prior_mean, prior_std)
likelihood_proposal = likelihood(mean_proposal, std_proposal, data)
```

The whole code:

```
def accept_proposal(mean_proposal, std_proposal, mean_current, \
                    std_current, prior_mean, prior_std, data):
    def prior(mean, std, prior_mean, prior_std):
        return st.norm(prior_mean[0], prior_mean[1]).pdf(mean)* \
            st.norm(prior_std[0], prior_std[1]).pdf(std)

    def likelihood(mean, std, data):
        return np.prod(st.norm(mean, std).pdf(data))

    prior_current = prior(mean_current, std_current, prior_mean, prior_std)
    likelihood_current = likelihood(mean_current, std_current, data)
    prior_proposal = prior(mean_proposal, std_proposal, prior_mean, prior_std)
    likelihood_proposal = likelihood(mean_proposal, std_proposal, data)

    return (prior_proposal * likelihood_proposal) / (prior_current * likelihood_current)
```

Now, let's create the final function that ties everything together: it calls the functions we have written above and generates the posterior samples we need.

```
from tqdm import tqdm

def get_trace(data, samples = 5000):
    mean_prior = 1
    std_prior = 2
    mean_current = mean_prior
    std_current = std_prior

    trace = {
        "mean": [mean_current],
        "std": [std_current]
    }

    for i in tqdm(range(samples)):
        mean_proposal, std_proposal = get_proposal(mean_current, std_current)
        acceptance_prob = accept_proposal(mean_proposal, std_proposal, mean_current, \
                                          std_current, [mean_prior, std_prior], \
                                          [mean_prior, std_prior], data)
        if np.random.rand() < acceptance_prob:
            mean_current = mean_proposal
            std_current = std_proposal

        trace['mean'].append(mean_current)
        trace['std'].append(std_current)

    return trace
```

Logs are your friend! Recall that `a * b = antilog(log(a) + log(b))` and `a / b = antilog(log(a) - log(b))`. This is beneficial to us because we will be dealing with very small probabilities, and multiplying them underflows to zero. Instead, we add their log probabilities: problem solved!
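To see the underflow concretely, here is a small sketch (assuming NumPy and SciPy, as used throughout this section): the product of 1,000 densities collapses to exactly zero, while the sum of log-densities stays finite:

```
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.2, size=1000)

# Each density is well below 1, so their product underflows to 0.0.
naive_likelihood = np.prod(st.norm(1.5, 1.2).pdf(data))

# The sum of log-densities stays finite and usable.
log_likelihood = np.sum(st.norm(1.5, 1.2).logpdf(data))
print(naive_likelihood, log_likelihood)
```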

So, our previous code becomes:

```
def accept_proposal(mean_proposal, std_proposal, mean_current, \
                    std_current, prior_mean, prior_std, data):
    def prior(mean, std, prior_mean, prior_std):
        return st.norm(prior_mean[0], prior_mean[1]).logpdf(mean)+ \
            st.norm(prior_std[0], prior_std[1]).logpdf(std)

    def likelihood(mean, std, data):
        return np.sum(st.norm(mean, std).logpdf(data))

    prior_current = prior(mean_current, std_current, prior_mean, prior_std)
    likelihood_current = likelihood(mean_current, std_current, data)
    prior_proposal = prior(mean_proposal, std_proposal, prior_mean, prior_std)
    likelihood_proposal = likelihood(mean_proposal, std_proposal, data)

    return (prior_proposal + likelihood_proposal) - (prior_current + likelihood_current)
```

Since we are now returning the log of the acceptance probability:

`if np.random.rand() < acceptance_prob:`

becomes (remember to `import math`):

`if math.log(np.random.rand()) < acceptance_prob:`

Let’s run the new code and plot the results.

```
trace = get_trace(X)

_ = plt.hist(trace['std'], bins = 50, label = "std")
_ = plt.hist(trace['mean'], bins = 50, label = "mean")
_ = plt.legend()
```

As you can see, the *std* is centered at 1.2, and the *mean* at 1.5. We did it!

By now, you hopefully understand how Metropolis-Hastings works and you might be wondering where you can use it.

Well, *Bayesian Methods for Hackers* is an excellent book that explains probabilistic programming, and if you want to learn more about the Bayes theorem and its applications, *Think Bayes* is a great book by Allen B. Downey.

Thank you for reading, and I hope this article encourages you to discover the amazing world of Bayesian stats.

Original article source at: https://www.toptal.com/


Multithreading is one of the most important concepts in Scala programming. It allows us to create multiple threads in our program to perform various tasks. The Scala Thread class provides many methods that make these tasks easier. In this blog, we are going to discuss a few of them.

The start() method begins the execution of a thread. It changes the thread's state from New to Runnable; after this, whenever the thread gets a chance to execute, it will call the run method. Let's take this code as an example:

```
class Method1 extends Thread {
  override def run(): Unit = {
    println("Hello, This is Thread by extending Thread class ")
  }
}

object main extends App {
  val th1 = new Method1()
  th1.start() // .start() method
}

Output:
Hello, This is Thread by extending Thread class
```

The run() method contains the task the thread performs; it is executed when the thread starts. Let's take this code as an example:

```
class Method1 extends Thread {
  override def run(): Unit = { // run() method
    println("Hello, This is Thread by extending Thread class ")
  }
}
```

The setName(String) method gives a name to a thread, whereas the getName() method returns the thread's name. Let's take this code as an example:

```
class Name extends Thread {
  override def run(): Unit = {
    println("Thread name is " + this.getName) // getName
  }
}

object Name extends App {
  var thread1 = new Name()
  thread1.setName("Thread-101") // setName()
  thread1.start()
}

OUTPUT:
Thread name is Thread-101
```

The currentThread() method returns a reference to the currently executing thread. Let's take this code as an example:

```
class CurrentThread extends Thread {
  override def run(): Unit = {
    Thread.currentThread().setName("Current-Thread")
    println("Thread name is " + Thread.currentThread().getName)
  }
}

object CurrentThread extends App {
  var thread = new CurrentThread()
  thread.start()
}

Output:
Thread name is Current-Thread
```

Using the getPriority() and setPriority() methods, we can get and set the priority of threads. Let's take this code as an example:

```
class PriorityMethod extends Thread {
  override def run(): Unit = {
    println("Priority of this thread is " + this.getPriority) // getPriority
  }
}

object PriorityMethod extends App {
  var thread1 = new PriorityMethod()
  thread1.setPriority(5) // setPriority()
  thread1.start()
}

Output:
Priority of this thread is 5
```

The sleep() method puts a thread to sleep for a number of milliseconds; it takes the time in milliseconds as an argument. Let's take this code as an example:

```
class Sleep extends Thread {
  override def run(): Unit = {
    for (i <- 1 to 5) {
      println("After " + i + " sec")
      Thread.sleep(1000) // sleep()
    }
  }
}

object Sleep extends App {
  var thread1 = new Sleep()
  thread1.start()
}

Output:
After 1 sec
After 2 sec
After 3 sec
After 4 sec
After 5 sec
```
```

The join() method holds the execution of the currently running thread until another thread finishes; it waits for that thread to die before more threads can start. Let's take this code as an example:

```
class Join extends Thread {
  override def run(): Unit = {
    for (i <- 1 to 2) {
      println(i)
      Thread.sleep(1000)
    }
  }
}

object Join extends App {
  var thread1 = new Join()
  var thread2 = new Join()
  thread1.start()
  thread1.join() // join() method
  thread2.start()
}

Output:
1
2
1
2
```

The isAlive() method checks whether a thread is alive. A thread is said to be alive when it has been started but has not yet reached a dead state. Let's take this code as an example:

```
class Alive extends Thread {
  override def run(): Unit = {
    println("Is " + Thread.currentThread().getName + " alive -> " + Thread.currentThread().isAlive) // isAlive method
  }
}

object main extends App {
  var thread1 = new Thread(new Alive())
  thread1.setName("Thread1")
  thread1.start()
}

Output:
Is Thread1 alive -> true
```

The getId() method returns the id of a thread. Let's take this code as an example:

```
class ID extends Thread {
  override def run(): Unit = {
    println("Id of this thread is " + Thread.currentThread().getId) // getId method
  }
}

object ID extends App {
  val thread1 = new Thread(new ID())
  thread1.start()
}
```

In this blog, we learned about some Scala thread methods and how to use them. If you want to know more about multithreading, visit:

https://www.geeksforgeeks.org/scala-multithreading/

Original article source at: https://blog.knoldus.com/

1670844185

**In this article, we’ll explore some of the missing math methods in JavaScript and how we can write functions for them.**

The JavaScript Math object contains some really useful and powerful mathematical operations that can be used in web development, but it lacks many important operations that most other languages provide (such as Haskell, which has a huge number of them).

You may remember from school that “sum” is a synonym for “add”. For example, if we sum the numbers 1, 2, and 3, it really means `1 + 2 + 3`.

Our `sum` function will involve summing all the values in an array.

There are two ways of writing this function: we could use a `for` loop, or we could use the `reduce` function. If you'd like to re-familiarize yourself with the `reduce` function, you can read about using map() and reduce() in JavaScript.

Using a `for` loop:

```
function sum(array){
  let total = 0
  for(let count = 0; count < array.length; count++){
    total = total + array[count]
  }
  return total
}
```

Using the `reduce` function:

```
function sum(array){
  return array.reduce((sum, number) => sum + number, 0)
}
```

Both functions work in exactly the same way (the `reduce` function is just an inbuilt `for` loop), and will return the same number (given the same array). But the `reduce` function is much neater.

So, for example:

```
sum([1,2,3,4]) === 10 // 1 + 2 + 3 + 4
sum([2,4,6,8]) === 20 // 2 + 4 + 6 + 8
```

Being able to sum a list of numbers is perhaps the most useful and most needed “missing” math operation from the JavaScript `Math` object. A `sum` function also works as a great checking tool. For example, in a Sudoku we can sanity-check a column or row for repeats by checking that it adds up to 45 (1 + 2 + 3 + … + 9). The function would also work really well in an online shopping app, if we wanted to work out the total bill, assuming all the prices are stored in an array.

Following the shopping app idea, here's how we could use it in our code:

```
const prices = [2.80, 6.10, 1.50, 1.00, 8.99, 2.99]

function totalCost(prices){
  return prices.reduce((sum, item) => sum + item, 0)
}
```

Our `product` function will work in a similar way to the `sum` function, except that, instead of *adding* all the numbers in a list, we'll *multiply* them.

Once again, we could use a `for` loop almost identically to the first `sum` function:

```
function product(array){
  let total = 1
  for(let count = 0; count < array.length; count++){
    total = total * array[count]
  }
  return total
}
```

Note that we initialize the `total` variable with `1` instead of `0`, as otherwise we would always end up with a `total` of 0.

But the `reduce` function still works in this case and is still a much neater way of writing the function:

```
function product(array){
  return array.reduce((total, num) => total * num, 1)
}
```

Here are some examples:

```
product([2,5,8,6]) === 480 // 2 x 5 x 8 x 6
product([3,7,10,2]) === 420 // 3 x 7 x 10 x 2
```

The uses of this function may not seem obvious, but I've found it very useful when enacting multiple conversions within one calculation. For example, if you wanted to find the price in dollars of ten kilogram packs of apples (each pack priced at 1.50 in the local currency), rather than writing out a long chain of multiplications, it would be cleaner to store all the values in an array and use the `product` function we've just written.

An example of the array would be of this format:

```
const pricePerKg = 1.50
const numberOfKg = 10
const conversionRate = 1.16
const USprice = product([pricePerKg, numberOfKg, conversionRate])
```

These functions will accept a number, which could be in the form of an array length, and return `true` or `false` depending on whether the number is odd or even.

For a number to be even, it must be divisible by two; for a number to be odd, it's the opposite: not divisible by two. This is the key to both functions.

Haskell, for example, has these functions inbuilt, which makes things much easier, especially as you can just write this:

```
even 29
<< false
odd 29
<< true
```

Ruby, on the other hand, provides these functions as methods. This is still much easier to write:

```
29.even?
<< false
29.odd?
<< true
```

The simplest way to write these functions in JavaScript is to use the remainder operator, `%`. This returns the remainder when a number is divided by another number. For example:

```
11 % 3 === 2 // 11 divide 3 === 3 remainder 2
```

Here’s an example of what our `even`

function could look like:

```
function even(number){
  return number % 2 === 0
}
```

As we can see, we have an `even` function that takes a number as its parameter and returns a Boolean value based on the condition:

```
number % 2 === 0
```

When the number is divided by two, if the remainder is equal to zero, we know it's divisible by two and `true` will be returned. For example:

```
even(6) === true
even(9) === false
```

Here’s an example of what our `odd`

function could look like:

```
function odd(number){
  return number % 2 !== 0
}
```

The two functions are very similar: a number is taken as a parameter and a Boolean value is returned based on the condition:

```
number % 2 !== 0
```

If the remainder of the number divided by two isn't equal to zero, the number is odd and `true` will be returned. For example:

```
odd(7) === true
odd(114) === false
```

Being able to check whether a number is odd or even is vital, and it's remarkably simple. It may not seem so important at first, but it can work as a great input validation technique (for example, with array lengths), or simply for checking the winner of a two-player game. You can keep track of how many rounds have been played, and if the number is odd, player 1 wins; if it's even, player 2 wins (presuming the rounds are counted from 1).

These functions are interchangeable, and we'll most likely only need to use one. However, having both can make it much easier to keep track of `true` or `false` logic, especially in big chunks of code.

Here’s how we can code the example above:

```
function checkWinner(gamesPlayed){
  let winner
  if(odd(gamesPlayed)){
    winner = "player1"
  }
  else{
    winner = "player2"
  }
  return winner
}
```

Triangle numbers sound a lot fancier than they actually are. A triangle number is simply the sum of all the integers up to and including a certain number.

For example, this is the fifth triangle number: 5 + 4 + 3 + 2 + 1 = 15.

This links back to our previous example of the Sudoku. We want to check that all the digits are unique, and we can do this by checking that they match the result of 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9. This, of course, is the ninth triangle number!

We could, of course, write the function using a `for` loop, like this:

```
function triangleNumber(number){
  let sum = 0
  for(let i = 1; i < number + 1; i++){
    sum = sum + i
  }
  return sum
}
```

However, this would be a very inefficient approach, because there's a very simple formula for calculating triangle numbers: `0.5 x (number) x (number + 1)`.

So, the most efficient version of our function should look like this:

```
function triangleNumber(number){
  return 0.5 * number * (number + 1)
}
```

Here are some examples of how we'd use it:

```
triangleNumber(7) === 28 // 0.5 x 7 x 8
triangleNumber(123) === 7626 // 0.5 x 123 x 124
```

The factorial of a natural number (any whole number strictly greater than 0) is the product of all numbers less than or equal to that number. For example, 3 factorial (denoted by `3!`) is `3 x 2 x 1 = 6`.

Similar to the `sum` and `product` functions, there are two ways of creating our `factorial` function: by using a `for` loop, and by using recursion. If you haven't met recursive algorithms before, they're essentially functions that call themselves repeatedly until they reach a “base case”. You can read more about them in “Recursion in Functional JavaScript”.

Here’s how we can create our `factorial`

function using a `for`

loop:

```
function factorial(number){
  let total = 1
  for (let i = 1; i < number + 1; i++){
    total = total * i
  }
  return total
}
```

This function loops through all the numbers from 1 to the number (incrementing with each pass) and multiplies the total by each number, before returning the final total (the number factorial).

Here’s how we can create our `factorial`

function using recursion:

```
function factorial(number){
  if (number <= 0){
    return 1
  }
  else{
    return number * factorial(number - 1)
  }
}
```

In this function, our base case is zero, since `0!` is surprisingly one (the proof of this is actually very interesting). This means that, as the number passes through the function, so long as it's not zero, it will multiply itself by `factorial(number - 1)`.

To help understand exactly what this function is doing at each pass, it might help to trace the algorithm. Here’s the algorithm traced with 3:

```
factorial(3) === 3*factorial(2) === 3*2*factorial(1) === 3*2*1*factorial(0) === 3*2*1*1 === 6
```

Either way, both functions will return the same value. For example:

```
factorial(5) === 120 // 5 x 4 x 3 x 2 x 1
```

Factors come in pairs, and each pair multiplies together to form the original number. For example:

- The factors of 10 are: 1 and 10; 2 and 5.
- The factors of 18 are: 1 and 18; 2 and 9; 3 and 6.

We want our `factors` function to accept a number, and return an array of all its factors. There are many ways to write this function, but the simplest way is to use an imperative approach, such as this:

```
function factors(number){
  let factorsList = []
  for(let count = 1; count < number + 1; count++){
    if(number % count === 0){
      factorsList.push(count)
    }
  }
  return factorsList
}
```

Firstly, we create our array, leaving it empty to start with. We then use a `for` loop to pass through every integer from 1 to the number itself, and at each pass we check whether the number is divisible by the integer (`count` in this case).

As you can see, to check the divisibility we use the remainder operator (`%`) again. If the number is divisible by the integer, it's a factor and can be pushed into our array.

The array is then returned, and every time we run the function, an array of factors will be returned in ascending order. For example:

```
factors(50) // returns [1, 2, 5, 10, 25, 50]
```

Finding the factors of a number can be incredibly useful, particularly when you need to formulate groups, such as in online gaming when you need an equal number of users on each team. For example, if you had 20 users and each team needed 10 players, you'd be able to use a `factors` function to split the 20 into two teams. Similarly, if each team needed four players, you could use the `factors` function to split the users into five teams.

In practice, it may look like this:

```
function createTeams(numberOfPlayers, numberOfTeams){
  let playersInEachTeam
  if(factors(numberOfPlayers).includes(numberOfTeams)){
    playersInEachTeam = numberOfPlayers / numberOfTeams
  }
  else{
    playersInEachTeam = "wait for more players"
  }
  return playersInEachTeam
}
```

This is one of the earliest concepts you learn in school, and yet it's not often used in day-to-day life. In a nutshell, a number is **prime** if it has exactly two distinct factors, which are always one and itself. The prime numbers begin 2, 3, 5, 7, 11, 13, 17, 19 … and so on to infinity.

It might initially seem like a complex function, and it might indeed be if we hadn't just written a very useful `factors` function. As mentioned, a number is prime if it has exactly two distinct factors, so our function is as simple as this:

```
function isPrime(number){
  return factors(number).length === 2
}
```

This will return a Boolean value based on whether or not the length of the list of its factors is two — in other words, whether it has two factors.

In practice, it will look like this:

```
isPrime(3) === true
isPrime(76) === false
isPrime(57) === false // 57 = 3 x 19
```

Continuing the “grouping users” example from above, if the number of users is prime, we can't group them equally (unless we only had one group, but this would defeat the purpose of the example), which means we'll have to wait for another user to join. So, we could use it in a function such as this:

```
function addUsers(users){
  let wait
  if(isPrime(users)){
    wait = true
  }
  else{
    wait = false
  }
  return wait
}
```

Sometimes known as the “highest common factor”, the **greatest common divisor** operation finds the largest factor that two numbers share.

For example:

- The GCD of 12 and 15 is 3.
- The GCD of 8 and 4 is 4.

An easy way of working this out is to list all the factors of each number (using our incredible function above) and compare those lists. However, comparing the lists requires some pretty nifty but also inefficient array manipulation.

But here’s an example anyway:

```
function gcd(number1, number2){
  let inCommon = []
  for(let i of factors(number1)){
    if(factors(number2).includes(i)){
      inCommon.push(i)
    }
  }
  return inCommon.sort((a, b) => b - a)[0]
}
```

Here, we assign an empty array to the variable `inCommon` and loop through the array of factors of `number1` (using our function from before). If the array of factors of `number2` contains the item in the current pass, we push it into our `inCommon` array.

Once we have an array of all the factors the two numbers have in common, we return the first value of the array sorted in descending order. In other words, we return the greatest common divisor.

As you can imagine, if we hadn't already created the `factors` function, the code for this would be huge.

A more succinct but harder way of doing this is by using recursion. This is a pretty famous algorithm, called the Euclidean Algorithm:

```
function gcd(number1, number2){
  if(number2 === 0){
    return number1
  }
  else{
    return gcd(number2, number1 % number2)
  }
}
```

Our base case here is `number2` being equal to 0, at which point `number1` is the greatest common divisor. Otherwise, the GCD is the GCD of `number2` and the remainder of `number1` divided by `number2`.
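To see how the recursion unwinds, it can help to trace the Euclidean version by hand, just as we traced `factorial` earlier. Here's a sketch (the `gcd` function is repeated so the snippet is self-contained):

```javascript
function gcd(number1, number2){
  if(number2 === 0){
    return number1
  }
  else{
    return gcd(number2, number1 % number2)
  }
}

// Tracing gcd(48, 18):
// gcd(48, 18) calls gcd(18, 48 % 18), i.e. gcd(18, 12)
// gcd(18, 12) calls gcd(12, 18 % 12), i.e. gcd(12, 6)
// gcd(12, 6)  calls gcd(6, 12 % 6),   i.e. gcd(6, 0)
// gcd(6, 0)   hits the base case and returns 6
gcd(48, 18) // 6
```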

Again, both functions will return the same thing. For example:

```
gcd(24, 16) === 8
gcd(75, 1) === 1
```

The lowest common multiple works along similar lines to the greatest common divisor, but instead finds the smallest positive integer that both numbers divide into.

For example:

- The LCM of 2 and 6 is 6.
- The LCM of 4 and 15 is 60.

Unfortunately, for this function we can’t just create an array of all the multiples of each number, as this would be an infinite list.

However, there’s a very useful formula that we can use to calculate the lowest common multiple:

```
(number1 x number2) / the Greatest Common Divisor of the two numbers
```

To check the formula, you can try it with the example above. LCM of 2 and 6:

```
(2 x 6)/gcd(2,6) = 12/2 = 6
```

Luckily for us, we've just created a `gcd` function, so creating this one is remarkably easy:

```
function lcm(number1, number2){
  return (number1 * number2) / gcd(number1, number2)
}
```

That’s it! All we need to do is return the formula above and it should work:

```
lcm(12, 9) === 36 // (12 x 9)/3
```

This function may not have any obvious uses, but I've often found it great for situations when two events occur at different intervals, since we can use the LCM to find out when the two events coincide.

For example, if an image is programmed to appear every six seconds and a paragraph of text is programmed to appear every eight seconds, the image and paragraph will both appear together for the first time on the 24th second.
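The timing example above can be sketched directly with the functions we've just written (repeated here so the snippet runs on its own):

```javascript
function gcd(number1, number2){
  return number2 === 0 ? number1 : gcd(number2, number1 % number2)
}

function lcm(number1, number2){
  return (number1 * number2) / gcd(number1, number2)
}

// The image appears every 6 seconds and the paragraph every 8 seconds,
// so they first appear together at the lowest common multiple of 6 and 8.
lcm(6, 8) // 24: both appear together on the 24th second
```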

All the functions above can be found on the following CodePen demo, where you can interact with the functions and see them working in practice.

However, if you want to save yourself copying in these functions every time you need them, I’ve compiled them (plus a few others) into a mini-library, called JOG-Maths.

Hopefully this has given you some ideas about which math operations you can use beyond the inbuilt JavaScript `Math` object, and shown the power of math in code!

**Related reading:**

- How to Generate Random Numbers in JavaScript with Math.random()
- A Guide to Rounding Numbers in JavaScript

Original article source at: https://www.sitepoint.com

1667897880

A macOS input source switcher with user-defined shortcuts.

```
brew update
brew install --cask kawa
```

The prebuilt binaries can be found in Releases.

Unzip `Kawa.zip` and move `Kawa.app` to `Applications`.

There is a known bug in macOS's Carbon library: switching keyboard layouts using `TISSelectInputSource` doesn't work well with complex input sources like CJKV.

We use Carthage as a dependency manager. You can find the latest releases of Carthage here, or just install it with Homebrew.

```
$ brew update
$ brew install carthage
```

To clone the Git repository of Kawa and install dependencies:

```
$ git clone git@github.com:utatti/kawa.git
$ carthage bootstrap
```

After dependency installation, open the project with Xcode.

Author: Hatashiro

Source Code: https://github.com/hatashiro/kawa

License: MIT license

1667564100

CodeTracking can be thought of as an extension of Julia's InteractiveUtils library. It provides an interface for obtaining:

- the strings and expressions of method definitions
- the method signatures at a specific file & line number
- location information for "dynamic" code that might have moved since it was first loaded
- a list of files that comprise a particular package.

CodeTracking is a minimal package designed to work with Revise.jl (versions v1.1.0 and higher), and is a very lightweight dependency.

`@code_string` and `@code_expr`
```
julia> using CodeTracking, Revise

julia> print(@code_string sum(1:5))
function sum(r::AbstractRange{<:Real})
    l = length(r)
    # note that a little care is required to avoid overflow in l*(l-1)/2
    return l * first(r) + (iseven(l) ? (step(r) * (l-1)) * (l>>1)
                                     : (step(r) * l) * ((l-1)>>1))
end

julia> @code_expr sum(1:5)
[ Info: tracking Base
quote
    #= toplevel:977 =#
    function sum(r::AbstractRange{<:Real})
        #= /home/tim/src/julia-1/base/range.jl:978 =#
        l = length(r)
        #= /home/tim/src/julia-1/base/range.jl:980 =#
        return l * first(r) + if iseven(l)
                (step(r) * (l - 1)) * l >> 1
            else
                (step(r) * l) * (l - 1) >> 1
            end
    end
end
```

`@code_string`

succeeds in that case even if you are not using Revise, but `@code_expr`

always requires Revise. (If you must live without Revise, you can use `Meta.parse(@code_string(...))`

as a fallback.)

"Difficult" methods are handled more accurately with `@code_expr` and Revise. Here's one that's defined via an `@eval` statement inside a loop:

```
julia> @code_expr Float16(1) + Float16(2)
:(a::Float16 + b::Float16 = begin
          #= /home/tim/src/julia-1/base/float.jl:398 =#
          Float16(Float32(a) + Float32(b))
      end)
```

whereas `@code_string` cannot return a useful result:

```
julia> @code_string Float16(1) + Float16(2)
"# This file is a part of Julia. License is MIT: https://julialang.org/license\n\nconst IEEEFloat = Union{Float16, Float32, Float64}"
```

Consequently it's recommended to use `@code_expr` in preference to `@code_string` wherever possible.

`@code_expr` and `@code_string` have companion functional variants, `code_expr` and `code_string`, which accept the function and a `Tuple{T1, T2, ...}` of types.

`@code_expr` and `@code_string` are based on the lower-level function `definition`; you can read about it with `?definition`.

```
julia> using CodeTracking, Revise
julia> m = @which sum([1,2,3])
sum(a::AbstractArray) in Base at reducedim.jl:648
julia> Revise.track(Base) # also edit reducedim.jl
julia> file, line = whereis(m)
("/home/tim/src/julia-1/usr/share/julia/base/reducedim.jl", 642)
julia> m.line
648
```

In this (fictitious) example, `sum` moved because I deleted a few lines higher in the file; these didn't affect the functionality of `sum` (so we didn't need to redefine and recompile it), but they do change the line at which this method starts. `whereis` reports the current line number, and `m.line` the old line number. (For technical reasons, it is important that `m.line` remain at the value it had when the code was lowered.)

Other methods of `whereis` allow you to obtain the current position corresponding to a single statement inside a method; see `?whereis` for details.

CodeTracking can also be used to find out what files define a particular package:

```
julia> using CodeTracking, Revise, ColorTypes
julia> pkgfiles(ColorTypes)
PkgFiles(ColorTypes [3da002f7-5984-5a60-b8a6-cbb66c0b333f]):
basedir: /home/tim/.julia/packages/ColorTypes/BsAWO
files: ["src/ColorTypes.jl", "src/types.jl", "src/traits.jl", "src/conversions.jl", "src/show.jl", "src/operations.jl"]
```

You can also find the method-signatures at a particular location:

```
julia> signatures_at(ColorTypes, "src/traits.jl", 14)
1-element Array{Any,1}:
Tuple{typeof(red),AbstractRGB}
julia> signatures_at("/home/tim/.julia/packages/ColorTypes/BsAWO/src/traits.jl", 14)
1-element Array{Any,1}:
Tuple{typeof(red),AbstractRGB}
```

CodeTracking also helps correcting for Julia issue #26314:

```
julia> @which uuid1()
uuid1() in UUIDs at C:\cygwin\home\Administrator\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.1\UUIDs\src\UUIDs.jl:50
julia> CodeTracking.whereis(@which uuid1())
("C:\\Users\\SomeOne\\AppData\\Local\\Julia-1.1.0\\share\\julia\\stdlib\\v1.1\\UUIDs\\src\\UUIDs.jl", 50)
```

CodeTracking has limited functionality unless the user is also running Revise, because Revise populates CodeTracking's internal variables. (Using `whereis` as an example, CodeTracking will just return the file/line info in the method itself if Revise isn't running.)

CodeTracking is perhaps best thought of as the "query" part of Revise.jl, providing a lightweight and stable API for gaining access to information it maintains internally.

Author: Timholy

Source Code: https://github.com/timholy/CodeTracking.jl

License: MIT license

1667540460

This is a collection of spike sorting methods in Julia that were made to be used with my online acquisition system Intan.jl https://github.com/paulmthompson/Intan.jl. Each component of the spike sorting process (detection, aligning, feature detection, dimensionality reduction, and clustering) is intended to work independently, so that you can mix and match different parts of different algorithms.

How I actually use this

Right now, I only use this with online electrophysiology as a component of Intan.jl. I do NOT actually use this stand alone for offline sorting (although I'd love to if it was put together enough for that).

Partially implemented components

Manual Offline Sorting I moved pretty much all of the GUI elements for visualization of spikes from Intan.jl to this package; consequently, it shouldn't be *too* much additional effort to put together a GUI for offline sorting.

Automatic Offline Sorting Any of the sorting pipelines could be applied to voltage traces, but I expect it wouldn't work very well compared to modern methods. I only do acute single-channel recordings, so the methods I have developed are biased toward those types of recordings with one channel and high SNR (e.g. template matching). I have NOT implemented any cross-channel (e.g. silicon probe or tetrode) sorting methods. Because I use this online, I have also not used any methods that look at all of the data from an experimental session to develop features.

Algorithm Profiling I coded in the benchmarking methods from https://www.ncbi.nlm.nih.gov/pubmed/21677152, but have never actually used them.

Installation

```
using Pkg
Pkg.add(url="https://github.com/paulmthompson/SpikeSorting.jl.git")
```

Documentation

http://spikesortingjl.readthedocs.org/en/latest/

Author: Paulmthompson

Source Code: https://github.com/paulmthompson/SpikeSorting.jl

License: BSD-2-Clause license

1666912320

The R package *forecast* provides methods and tools for displaying and analysing univariate time series forecasts including exponential smoothing via state space models and automatic ARIMA modelling.

This package is now retired in favour of the fable package. The forecast package will remain in its current state, and maintained with bug fixes only. For the latest features and development, we recommend forecasting with the fable package.

You can install the **stable** version from CRAN.

```
install.packages('forecast', dependencies = TRUE)
```

You can install the **development** version from GitHub:

```
# install.packages("remotes")
remotes::install_github("robjhyndman/forecast")
```

```
library(forecast)
library(ggplot2)

# ETS forecasts
USAccDeaths %>%
  ets() %>%
  forecast() %>%
  autoplot()

# Automatic ARIMA forecasts
WWWusage %>%
  auto.arima() %>%
  forecast(h=20) %>%
  autoplot()

# ARFIMA forecasts
library(fracdiff)
x <- fracdiff.sim( 100, ma=-.4, d=.3)$series
arfima(x) %>%
  forecast(h=30) %>%
  autoplot()

# Forecasting with STL
USAccDeaths %>%
  stlm(modelfunction=ar) %>%
  forecast(h=36) %>%
  autoplot()

AirPassengers %>%
  stlf(lambda=0) %>%
  autoplot()

USAccDeaths %>%
  stl(s.window='periodic') %>%
  forecast() %>%
  autoplot()

# TBATS forecasts
USAccDeaths %>%
  tbats() %>%
  forecast() %>%
  autoplot()

taylor %>%
  tbats() %>%
  forecast() %>%
  autoplot()
```

- Get started in forecasting with the online textbook at http://OTexts.org/fpp2/
- Read the Hyndsight blog at https://robjhyndman.com/hyndsight/
- Ask forecasting questions on http://stats.stackexchange.com/tags/forecasting
- Ask R questions on http://stackoverflow.com/tags/forecasting+r
- Join the International Institute of Forecasters: http://forecasters.org/

Author: Robjhyndman

Source Code: https://github.com/robjhyndman/forecast

1666411020

Data types and methods for permutations

This module implements several representations of permutations: list, disjoint cycles, matrix, and sparse (not `SparseMatrix`). It wraps generic routines in the package `PermPlain`.

The name is `PermutationsA` to distinguish it from the `Permutations` package. `PermutationsA` differs from `Permutations` mainly in that it is broader in scope and avoids copying and validating.

The Julia manual says the `AbstractArray` type includes everything vaguely array-like. Permutations are at least vaguely array-like. This package defines

```
AbstractPerm{T} <: AbstractMatrix{T}
```

and these concrete subtypes:

- `PermList`: an `Array{T,1}` corresponding to the one-line form
- `PermCyc`: an `Array{Array{T,1},1}` corresponding to disjoint-cycle form
- `PermMat`: acts like a matrix; stores data in one-line form as an `Array{T,1}`
- `PermSparse`: a `Dict`, which can be thought of as one-line form with fixed points omitted

These types differ both in how they store the data representing the permutation, and in their interaction with other types.

This package is not meant to create a type hierarchy faithfully reproducing mathematical structures. It is also meant to be more than an exercise in learning Julia. Some choices follow those made by other languages that have had implementations of permutations for many years. One goal of the package is efficiency. The objects are lightweight; there is little copying and validation, although `copy` and `isperm` methods are provided. For instance, this

```
M = rand(10,10)
v = randperm(10)
kron(PermMat(v),M)
```

performs a Kronecker product taking advantage of the structure of a permutation matrix. `PermMat(v)` only captures a pointer to `v`. No copying or construction of a matrix or list, apart from the output (dense) matrix, is done.

The Kronecker product of two permutations is again a permutation. Computed with dense matrices, the following would exhaust your computer's memory; with `PermMat`, it is fast:

```
julia> q = randpermmat(1000);
julia> @time size(kron(q,q))
elapsed time: 0.008285471 seconds (8000208 bytes allocated)
(1000000,1000000)
```

Every finite permutation can be represented (mathematically) as a matrix. The `AbstractPerm` type implements some of the properties of permutation matrices that are independent of the way the data is stored in the concrete types. The following methods are defined in `AbstractPerm` and behave as they would on the `Matrix` type: `det`, `logdet`, `rank`, `trace`, `ishermitian`, `issym`, `istriu`, `istril`, `isposdef`, `null`, `getindex(i,j)`, `transpose`, `ctranspose`, `inv`, `one`, `size`, `eltype`. The methods `full` and `sparse`, which return `Matrix` and `SparseMatrix` objects, are also implemented here. To support all these methods, some stubs are present in place of low-level methods that must be implemented by the concrete types.

This type represents the permutation internally as a vector in one-line form, but is meant to behave as much as possible like a `Matrix`. One-line form means that the data vector, operating on `[1:n]`, maps `i` to `data[i]`. The majority of the methods for `PermMat` are identical to those of `PermList`.

`PermList` represents the permutation internally in exactly the same way as `PermMat`. It also behaves like a matrix when there is no conflict with its role as a permutation. Note this difference between `PermList` and `PermMat`:
```
julia> p = randpermlist(3)
( 3 1 2 )
julia> p * 3 # p maps 3 to 2 # Agrees with Gap
2
julia> 2 / p # preimage of 2 is 3 # Agrees with Gap
3
julia> m = matrix(p) # capture the pointer to the data in p, nothing else.
3x3 PermMat{Int64}: # But the display is different.
0 0 1
1 0 0
0 1 0
julia> m * 3 # PermMat behaves like a matrix whenever possible.
3x3 Array{Int64,2}:
0 0 3
3 0 0
0 3 0
julia> M = rand(3,3); kron(m,M) == kron(p,M) # kron can't be interpreted another way for a permutation
true
```

`PermCyc` stores data internally as an array of arrays representing disjoint cycles in the standard canonical form (described below).

```
julia> c = permcycs( (1,10^5), (2,3) ) # construct with tuples of disjoint cycles
((1 100000)(2 3))
julia> p = list(c); length(p.data) # convert to PermList. The data is a big array.
100000
julia> @time c^100000; # Use efficient algorithm copied from Pari
elapsed time: 2.3248e-5 seconds (320 bytes allocated)
julia> @time p^100000; # Operate on big list. Julia is really fast, anyway
elapsed time: 0.01122444 seconds (1600192 bytes allocated)
```

`PermSparse` stores data internally as a `Dict` mapping each non-fixed element of the permutation to its image.

```
julia> s = psparse(c) # convert to PermSparse. Displays as disjoint cycles
((1 100000)(2 3))
julia> s.data
Dict{Int64,Int64} with 4 entries: # stored efficiently
2 => 3
3 => 2
100000 => 1
1 => 100000
julia> sparse(s) # convert to sparse matrix; not the same thing
100000x100000 sparse matrix with 100000 Int64 entries:
[100000, 1] = 1
[3 , 2] = 1
[2 , 3] = 1
...
```

Construction

```
PermList([10,1,3,6,9,8,4,5,7,2]) # construct from one-line form
PermMat([10,1,3,6,9,8,4,5,7,2])
PermSparse([10,1,3,6,9,8,4,5,7,2])
PermCycs(((1, 6, 2, 7, 9),(3, 8, 4,10, 5))) # construct from tuples representing cycles
```

The identity permutation,

```
PermList() == PermMat() == PermCycs() == PermSparse() ==
one(PermList) == one(PermMat) == one(PermCycs) == one(PermSparse)
true
isid(one(PermList))
true
```

`plength` gives the largest number not mapped to itself. Zero means that all numbers are mapped to themselves.

```
0 == plength(PermMat()) == plength(PermList()) ==
plength(PermCycs()) == plength(PermSparse())
true
```

For `PermCycs` and `PermSparse` there is only one representation of the identity. For `PermList` and `PermMat` there are many representations of the identity.

```
julia> one(PermList,10)
( 1 2 3 4 5 6 7 8 9 10 )
one(PermList) == one(PermList,10) == one(PermMat,20) == one(PermCycs)
true
```

The domain of each permutation is all positive integers.

```
julia> p = randpermlist(10)
( 10 7 4 2 1 8 3 5 6 9 )
julia> p * 3
4
julia> p * 100
100
julia> psparse(p) * 100 # sparse(p) means SparseMatrix
100
julia> 4 == p * 3 == psparse(p) * 3 == cycles(p) * 3 # These are all the same
true
```

`PermMat` is different:

```
julia> size(PermMat(p) * 100) # Not same, not application of the permutation
(10,10)
julia> pmap(PermMat(p),100) # Same for all permutation types
100
julia> p[1] # also application of permutation
10
julia> show(matrix(p)[1:10]) # first column of the matrix
[0,0,0,0,1,0,0,0,0,0]
```

Use `list`, `cycles`, `psparse`, and `matrix` to convert from one type to another. Use `pivtopermlist` to convert a 'pivot' vector to a `PermList`, and `topiv` to convert an `AbstractPerm` to a pivot vector.

Use `full(p)` and `sparse(p)` to get dense and sparse matrices from any permutation type.

`aprint`, `cprint`, `lprint`, `mprint`, `astring`, `cstring`, `lstring`, `mstring`: print, or make strings of, any type in various forms.

`*`, `^`: composition and power, using various algorithms. `ppow(c::PermCycs, n)` gives the power of `c` as a `PermList`, using an algorithm from Pari that may be more efficient than `list(c^n)`.

`randpermlist`, `randpermcycs`, `randpermmat`, `randpermsparse`, `randperm(type, n)`: generate random permutations.

`numcycles` finds the number of cycles (excluding 1-cycles). `iscyclic` returns true if the permutation is cyclic. We allow fixed points in a cyclic permutation.
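For illustration, counting the nontrivial cycles of a permutation given in one-line form can be sketched as follows (plain Python; `num_cycles` is a hypothetical helper, not the package's `numcycles`):

```python
# Count cycles of length > 1 in a permutation given in one-line form,
# using the convention p(i) = perm[i-1] (as in `p * 3` above).
def num_cycles(perm):
    seen = [False] * len(perm)
    count = 0
    for start in range(len(perm)):
        if seen[start]:
            continue
        length = 0
        j = start
        while not seen[j]:
            seen[j] = True
            j = perm[j] - 1  # follow the mapping i -> perm[i-1]
            length += 1
        if length > 1:  # exclude fixed points (1-cycles)
            count += 1
    return count

# [10,1,3,6,9,8,4,5,7,2] has cycles (1 10 2) and (4 6 8 5 9 7); 3 is fixed.
print(num_cycles([10, 1, 3, 6, 9, 8, 4, 5, 7, 2]))  # -> 2
```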

Permutation types support element type parameters:

```
julia> typeof(PermList(Int32[3,2,1]))
PermList{Int32} (constructor with 1 method)
```

For other features, see the test and source files.

Author: jlapeyre

Source Code: https://github.com/jlapeyre/PermutationsA.jl

License: View license

1665740940

If you use Krylov.jl in your work, please cite it using the format given in `CITATION.bib`.

This package provides implementations of some of the most useful Krylov methods for a variety of problems:

Square or rectangular full-rank systems

**Ax = b**

should be solved when **b** lies in the range space of **A**. This occurs when **A** is square and nonsingular, or when **A** is tall, has full column rank, and **b** lies in the range of **A**.

Linear least-squares problems

minimize ‖**b** - **Ax**‖

should be solved when **b** is not in the range of **A** (inconsistent systems), regardless of the shape of **A**. This mainly occurs when **A** is square and singular, or **A** is tall and thin.

Underdetermined systems are less common but also occur.

If there are infinitely many such **x** (because **A** is column rank-deficient), the one with minimum norm is identified:

minimize ‖**x**‖ subject to **x** ∈ argmin ‖**b** - **Ax**‖

Linear least-norm problems

minimize ‖**x**‖ subject to **Ax** = **b**

should be solved when **A** is column rank-deficient but **b** is in the range of **A** (consistent systems), regardless of the shape of **A**. This mainly occurs when **A** is square and singular, or **A** is short and wide.

Overdetermined systems are less common but also occur.

Adjoint systems

**Ax** = **b** and **Aᴴy** = **c**,

where **A** can have any shape.

Saddle-point and symmetric quasi-definite (SQD) systems

[**M** **A**; **Aᴴ** -**N**] [**x**; **y**] = [**b**; **c**]

where **A** can have any shape.

Generalized saddle-point and unsymmetric partitioned systems

[**M** **A**; **B** **N**] [**x**; **y**] = [**b**; **c**]

where **A** can have any shape and **B** has the shape of **Aᴴ**.

Krylov solvers are particularly appropriate in situations where such problems must be solved but a factorization is not possible, either because:

- **A** is not available explicitly,
- **A** would be dense or would consume an excessive amount of memory if it were materialized,
- factors would consume an excessive amount of memory.

Iterative methods are recommended in any of the following situations:

- the problem is sufficiently large that a factorization is not feasible or would be slow,
- an effective preconditioner is known in cases where the problem has unfavorable spectral structure,
- the operator can be represented efficiently as a sparse matrix,
- the operator is *fast*, i.e., can be applied with better complexity than if it were materialized as a matrix; certain fast operators would materialize as *dense* matrices.
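The "fast operator" point can be illustrated with a minimal matrix-free conjugate-gradient sketch (plain Python for illustration only; this is not Krylov.jl's API). CG only ever needs the product **v → Av**, so **A** can be applied as a stencil without being stored:

```python
# Minimal conjugate gradient for symmetric positive-definite systems.
# `apply_A` is any callable computing v -> A*v; A is never materialized.
def cg(apply_A, b, tol=1e-10, maxiter=1000):
    x = [0.0] * len(b)
    r = list(b)                   # residual r = b - A*x with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# "Fast" operator: 1-D Laplacian stencil, applied without storing the matrix.
def laplacian(v):
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i > 0 else 0.0)
                     - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

x = cg(laplacian, [1.0] * 8)      # solve A*x = ones without forming A
```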

All solvers in Krylov.jl have an in-place version, are compatible with **GPUs**, and work in any floating-point data type.

Krylov can be installed and tested through the Julia package manager:

```
julia> ]
pkg> add Krylov
pkg> test Krylov
```

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Author: JuliaSmoothOptimizers

Source Code: https://github.com/JuliaSmoothOptimizers/Krylov.jl

License: View license

1665570555

A collection of quantifications related to Shannon's information theory and methods to discretise data.

```
using Shannon
xy = hcat([sin(x) + randn() * .1 for x=0:0.01:2pi], [cos(x) + randn() * .1 for x=0:0.01:2pi])
bxy = bin_matrix(xy, -1.0, 1.0, 10)
c=combine_binned_matrix(bxy)
c=relabel(c)
H = entropy(c)
I = MI(bxy)
```

A faster way is to call

```
unary_of_matrix(xy, -1.0, 1.0, 10)
```

which is a shortcut for the following lines:

```
bxy = bin_matrix(xy, -1.0, 1.0, 10)
c=combine_binned_matrix(bxy)
c=relabel(c)
```

The estimators are implemented from the following list of publications:

[1] A. Chao and T.-J. Shen. Nonparametric estimation of shannon’s index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429–443, 2003.

and the function call is

```
entropy(data, base=2, mode="ML")
```

where

| Argument | Description |
| --- | --- |
| **data** | the discrete data (*Vector{Int64}*) |
| **mode** | determines which estimator should be used (see below); it is *not* case-sensitive |
| **base** | determines the base of the logarithm |

### Maximum Likelihood Estimator

This is the default estimator.

```
entropy(data)
entropy(data, mode="ML")
entropy(data, mode="Maximum Likelihood")
```
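The maximum likelihood (plug-in) estimator is simply the empirical entropy −Σᵢ pᵢ log_base pᵢ over observed frequencies; a plain-Python sketch for illustration only (not the package's Julia code):

```python
import math
from collections import Counter

def ml_entropy(data, base=2):
    """Plug-in (maximum likelihood) entropy of a discrete sample."""
    n = len(data)
    counts = Counter(data)
    # Empirical frequency of each symbol, plugged into -sum(p * log_base(p)).
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

print(ml_entropy([1, 2, 3, 4]))  # uniform over 4 symbols -> 2.0 bits
```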

### Maximum Likelihood Estimator with Bias Correction (implemented from [1])

```
entropy(data, mode="MLBC")
entropy(data, mode="Maximum Likelihood with Bias Compensation")
```

### Horovitz-Thompson Estimator (implemented from [1])

```
entropy(data, mode="HT")
entropy(data, mode="Horovitz-Thompson")
```

### Chao-Shen Estimator (implemented from [1])

```
entropy(data, mode="CS")
entropy(data, mode="Chao-Shen")
entropy(data, mode="ChaoShen")
```

```
entropy(data, base=2) [ this is the default ]
entropy(data, mode="HT", base=10)
```

Currently, only the *maximum likelihood estimator* is implemented. It can be used with different bases:

```
MI(xy, base=2) [ this is the default ]
MI(xy, base=10)
```

**xy** is a two-dimensional matrix with **n** rows and two columns.
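Conceptually, the maximum likelihood MI is computed from the joint and marginal frequencies of the two columns; a plain-Python sketch for illustration only (not the package's Julia code):

```python
import math
from collections import Counter

def mi(xy, base=2):
    """I(X;Y) = sum over (x,y) of p(x,y) * log_base(p(x,y) / (p(x)*p(y)))."""
    n = len(xy)
    pxy = Counter((x, y) for x, y in xy)   # joint counts
    px = Counter(x for x, _ in xy)         # marginal counts of column 1
    py = Counter(y for _, y in xy)         # marginal counts of column 2
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)), base)
               for (x, y), c in pxy.items())

# Perfectly dependent columns: MI equals the entropy of either column.
print(mi([(0, 0), (1, 1), (0, 0), (1, 1)]))  # -> 1.0 bit
```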

This is an implementation of the one-step predictive information, which is given by the mutual information of consecutive data points. If x is the data vector, then:

```
PI(x) = MI(hcat(x[1:end-1], x[2:end]))
PI(x,[base],[mode]) = MI(x[1:end-1], x[2:end], base, mode)
```

This function calculates the KL-Divergence on two probability distributions, and is essentially given by:

```
KL(p,q)= sum([(p[i] != 0 && q[i] != 0)? p[i] * log(base, p[i]/q[i]) : 0 for i=1:length(p)])
```

**p**, **q** must be valid probability distributions, i.e.

```
x >= 0 for x in p
y >= 0 for y in q
sum(p) == sum(q) == 1.0
```
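A self-contained version of the same computation, in plain Python for illustration (terms with a zero in either distribution are skipped, mirroring the formula above):

```python
import math

def kl(p, q, base=2):
    """KL divergence D(p || q); zero-probability terms are skipped."""
    return sum(pi * math.log(pi / qi, base)
               for pi, qi in zip(p, q) if pi != 0 and qi != 0)

print(kl([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
```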

Implementation of measures from

Quantifying Morphological Computation, Zahedi & Ay, Entropy, 2013: [pdf]

and

Quantifying Morphological Computation based on an Information Decomposition of the Sensorimotor Loop, Ghazi-Zahedi & Rauh, ECAL 2015: [pdf]

Author: Kzahedi

Source Code: https://github.com/kzahedi/Shannon.jl

License: View license

1660917360

Generate boilerplate code for R6 classes. Given an R6 class, create getters and/or setters for selected class fields, or use RStudio addins to insert methods straight into the class definition.

You can install the package from CRAN:

```
install.packages("r6methods")
```

or install development version from Github using:

```
remotes::install_github("jakubsob/r6methods")
```

Core functionality comes with the `make_methods` function.

```
library(r6methods)
library(R6)
Person <- R6Class(
"Person",
public = list(
name = NULL,
age = NA,
initialize = function(name, age = NA) {
self$name <- name
self$age <- age
},
print = function(...) {
cat("Person: \n")
cat(" Name: ", self$name, "\n", sep = "")
cat(" Age: ", self$age, "\n", sep = "")
invisible(self)
}
),
private = list(
secret1 = NULL,
secret2 = NULL
)
)
```

Create getters and setters for private fields:

```
make_methods(Person, "private", "both")
#> #' @description Setter for secret1
#> set_secret1 = function(secret1) {
#> private$secret1 <- secret1
#> },
#> #' @description Getter for secret1
#> get_secret1 = function() {
#> private$secret1
#> },
#> #' @description Setter for secret2
#> set_secret2 = function(secret2) {
#> private$secret2 <- secret2
#> },
#> #' @description Getter for secret2
#> get_secret2 = function() {
#> private$secret2
#> }
```

Or only getters:

```
make_methods(Person, "private", "get", add_roxygen = FALSE)
#> get_secret1 = function() {
#> private$secret1
#> },
#> get_secret2 = function() {
#> private$secret2
#> }
```

You can also create methods for specific fields, not just all private or public ones:

```
make_methods(Person, c("age", "secret1"), "get", add_roxygen = FALSE)
#> get_age = function() {
#> self$age
#> },
#> get_secret1 = function() {
#> private$secret1
#> }
```

Four addins are supplied with the package. They are grouped into 2 families:

- Generate: makes method strings and prints them to console to be copied to class definition.
- Insert: makes method strings and inserts them into class definition.

Addins with the `gadget` suffix open a gadget in RStudio, which gives the user more control over the generated methods.

**Insert R6 methods**

**Insert R6 methods gadget**

Author: jakubsob

Source Code: https://github.com/jakubsob/r6methods

License: Unknown, MIT licenses found

1660354980

- Efficient for large data sets, using algorithms from the Eigen linear algebra package via the RcppEigen interface layer.
- Allows arbitrarily many nested and crossed random effects.
- Fits generalized linear mixed models (GLMMs) and nonlinear mixed models (NLMMs) via Laplace approximation or adaptive Gauss-Hermite quadrature; GLMMs allow user-defined families and link functions.
- Incorporates likelihood profiling and parametric bootstrapping.

- From CRAN (stable release 1.0.+)
- Development version from Github:

```
library("devtools"); install_github("lme4/lme4",dependencies=TRUE)
```

(This requires `devtools` >= 1.6.1, and installs the "master" (development) branch.) This approach builds the package from source, i.e. `make` and compilers must be installed on your system -- see the R FAQ for your operating system; you may also need to install dependencies manually. Specify `build_vignettes=FALSE` if you have trouble because your system is missing some of the `LaTeX/texi2dvi` tools.

- Development binaries from the `lme4` r-forge repository:

```
install.packages("lme4",
repos=c("http://lme4.r-forge.r-project.org/repos",
getOption("repos")[["CRAN"]]))
```

(these source and binary versions are updated manually, so may be out of date; if you believe they are, please contact the maintainers).

It is possible to install (but not easily to check) `lme4` at least as recently as 1.1-7.

- make sure you have *exactly* these package versions: `Rcpp` 0.10.5, `RcppEigen` 3.2.0.2
- for installation, use `--no-inst`; this is necessary in order to prevent R from getting hung up by the `knitr`-based vignettes
- running `R CMD check` is difficult, but possible if you hand-copy the contents of the `inst` directory into the installed package directory ...

**lme4.0**

- `lme4.0` is a maintained version of lme4 back-compatible with CRAN versions of lme4 0.99xy, mainly for the purpose of *reproducible research and data analysis* which was done with 0.99xy versions of lme4.
- there have been some reports of problems with `lme4.0` on R version 3.1; if someone has a specific reproducible example they'd like to donate, please contact the maintainers.
- Notably, `lme4.0` features `getME(<mod>, "..")`, which is compatible (as much as sensibly possible) with the current `lme4`'s version of `getME()`.
- You can use the `convert_old_lme4()` function to take a fitted object created with `lme4` < 1.0 and convert it for use with `lme4.0`.
- It currently resides on R-forge, and you should be able to install it with

```
install.packages("lme4.0",
repos=c("http://lme4.r-forge.r-project.org/repos",
getOption("repos")[["CRAN"]]))
```

(if the binary versions are out of date or unavailable for your system, please contact the maintainers).

- See the NEWS file

- r-sig-mixed-models@r-project.org for questions about `lme4` usage and more general mixed model questions; please read the info page, and subscribe, before posting ... (note that the mailing list does not support images or large/non-text attachments)
- https://github.com/lme4/lme4/issues for bug, infelicity, and wishlist reporting
- The lme4 tag on StackOverflow for programming-related questions, or the lme4-nlme tag on CrossValidated for statistics-related questions
- maintainer e-mail only for urgent/private communications

If you choose to support `lme4` development financially, you can contribute to a fund at McMaster University (home institution of one of the developers) here. The form will say that you are donating to the "Global Coding Fund"; this fund is available for use by the developers, under McMaster's research spending rules. We plan to use the funds, as available, to pay students to do maintenance and development work. There is no way to earmark funds or set up a bounty to direct funding toward particular features, but you can e-mail the maintainers and suggest priorities for your donation.

Author: lme4

Source Code: https://github.com/lme4/lme4

License: View license

1660078920

Performs ice-seawater interface calculations using level set methods. The level set scheme is similar to that described by Chen, et al. (1997). Model state is passed around using a subtype of the `ModelState` abstract type. Physical constants and parameters are contained in a `PhysicalParameters` type. See `initialize1d()` for an example.

**Iceberg is new and under development. It requires a recent build of Julia (i.e. 0.3).**

Models in one, two, and three dimensions will be supported. Dimension is indicated by the concrete type of the `ModelState` passed to functions. Currently, one-dimensional models are implemented.

Methods to solve the heat equation are contained in `heat.jl`.

Interface fluxes can be calculated using the `front_velocity()` function. Level set reinitialization uses a global method described by Chen et al. (1997) and Osher and Fedkiw (2003). In the future, there will be functions to handle level set function evolution with a hyperbolic equation solver.
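For context, in a one-dimensional Stefan problem the front velocity follows from the jump in conductive heat flux across the interface, v = (k_s ∂T_s/∂x − k_l ∂T_l/∂x) / (ρL). A schematic plain-Python sketch of that relation (all names and values are illustrative; this is not Iceberg's `front_velocity()`):

```python
# Schematic 1-D Stefan condition: interface velocity equals the jump in
# conductive heat flux across the front, divided by rho * latent heat.
# All symbols and sample values below are illustrative only.
def stefan_velocity(grad_T_solid, grad_T_liquid, k_solid, k_liquid,
                    rho, latent_heat):
    flux_jump = k_solid * grad_T_solid - k_liquid * grad_T_liquid
    return flux_jump / (rho * latent_heat)

# Steeper temperature gradient in the solid drives the front forward.
v = stefan_velocity(grad_T_solid=50.0, grad_T_liquid=10.0,
                    k_solid=2.2, k_liquid=0.6,
                    rho=1000.0, latent_heat=334e3)
```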

The function `initialize1d_hill()` recreates a scenario easily compared to the analytical solution of Hill (1987), Section 1.3. An instance of this scenario with `n=200` is a part of the test suite.

- extend to higher dimensions
- add simple Navier-Stokes field solver for fluid phase

Chen, S., B. Merriman, S. Osher and P. Smereka. *A simple level set method for solving Stefan problems*, 1997. Journal of Computational Physics, 135, 8-29.

Hill, J. M. *One-dimensional Stefan Problems: an introduction*, 1987. Pitman Monographs and Surveys in Pure and Applied Mathematics, 31. John Wiley and Sons, New York.

Osher, S. and R. Fedkiw. *Level Set Methods and Dynamic Implicit Surfaces*, 2003. Applied Mathematical Sciences, 153. Springer-Verlag, New York.

Author: njwilson23

Source Code: https://github.com/njwilson23/Iceberg.jl