1598568480

Scala supports using type parameters to implement classes and functions that can work with multiple types. Type parameters are very useful when creating generics. However, advanced use cases may require you to specify constraints on the types used. That’s where variance helps to model the relationships between generic types. This post will cover the topic of type variance in Scala.

Variance is the correlation between the subtyping relationships of complex types and the subtyping relationships of their component types.

In other words, variance allows developers to model the relationships between types and their component types. As a consequence, variance makes it possible to create clean and very usable generic classes, which are widely used in various Scala libraries.

Scala supports three types of variance: **covariance**, **invariance**, and **contravariance**. With this in mind, let’s look at each of these in greater detail.

Let’s assume that we have the following class structure:

```
abstract class Vehicle {
  def name: String
}
case class Car(name: String) extends Vehicle
case class Bus(name: String) extends Vehicle

class VehicleList[T](val vehicle: T)
```

The `Car` and `Bus` classes both inherit from the abstract class `Vehicle`. Considering that class `Car` has its own collection `VehicleList[Car]`: is `VehicleList[Car]` a subtype of `VehicleList[Vehicle]`? The answer is no. Despite the fact that the `Car` class extends the `Vehicle` class, the same can’t be said about `VehicleList[Car]` and `VehicleList[Vehicle]`.

```
val vehicleList: VehicleList[Vehicle] = new VehicleList[Car](new Car("bmw")) // incorrect: type mismatch
```
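As a preview of covariance, here is a minimal sketch showing how the type mismatch above goes away once the type parameter is marked covariant with `+T` (the class name `CovariantVehicleList` is hypothetical, introduced here for illustration):

```scala
abstract class Vehicle { def name: String }
case class Car(name: String) extends Vehicle

// +T declares T covariant: CovariantVehicleList[Car] is now a
// subtype of CovariantVehicleList[Vehicle]
class CovariantVehicleList[+T](val vehicle: T)

val vehicleList: CovariantVehicleList[Vehicle] =
  new CovariantVehicleList[Car](Car("bmw")) // compiles
println(vehicleList.vehicle.name) // bmw
```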

#jvm #coding #programming #scala #function

1594453980

**Dotty**, the comprehensive name for all the things being added to Scala 3.0, has been a topic of discussion in the Scala community for the last four years. With all the promises and progress, the time for the release is very near; we can expect a release by the end of the year. In this post, we will get a teaser of the changes to expect from Dotty as well as some of the new features in the major release. At the end of the post, we will see how to quickly get started with the Dotty compiler and example code.

If you were to ask, “Is Scala 3.0 a new language?”, Martin Odersky says “yes or no” in “A Tour of Dotty.” He has his reasons for it. If you ask me, I would say yes!! It has a lot of new features.

- If you are a beginner just starting with Scala 3, you will find the new features to be new concepts that were not part of the old Scala.
- The snippet below is a poll depicting beginners’ favorite changes in Scala 3.0, shared by Martin Odersky at the last edition of Scala Days.
- However, it does not end here. There are more changes, which you will see in the next section.
- If you are at an intermediate level in Scala and already familiar with the constructs of the language, you will see that some features have been removed and are no longer supported. We will see those in detail in the upcoming sections.
- If you are a Scala expert and **FP enthusiast**, there is a list of changes for you as well, such as **type-level derivation, match types, implicit function types, and metaprogramming**, to name a few. These changes are shared in “A Tour of Dotty” by Martin Odersky at the last edition of Scala Days.

What do we mean by breaking changes? These are changes that are no longer supported: compiling such code in the newer version will require either rewriting the source in a new way or removing it. Some features that are considered pain points performance-wise, or that create unnecessary complexity, are going to be removed from Scala 3.0.

At a preview level here is the list of breaking changes:

**Macros:** One of the biggest breaking changes is macros. The newer version of Scala includes a new macro system, and experimental macros from Scala 2.13 will not work in 3.0 and will have to be rewritten.

**22-Arity:** The limit of 22 on the maximal number of parameters of function types and the maximal number of fields in tuple types has been dropped.

**And many more features.** Below is the list of all the breaking changes you should go through if you think you will be porting your code at some point in time.

An exhaustive list of Dropped Features is given below:

- DelayedInit
- Macros
- Existential Types
- Type Projection
- Do-While
- Procedure Syntax
- Package Objects
- Early Initializers
- Class Shadowing
- Limit 22
- XML literals
- Symbol Literals
- Auto-Application
- Weak Conformance
- Nonlocal Returns
- [this] Qualifier

#dotty #scala #scala days #beginners #functional programming in scala #scala 3

1594461240

In this blog, we are going to see the use of Either in Scala.

We use Option in Scala, but why would we want to go for Either?

Either is a better approach in the respect that if something fails, we can track down the reason, which is not possible with Option’s None case.

We simply pass None, but what is the reason we got None instead of Some? We will see how to tackle this scenario using Either.

```
Either[Left, Right]
```

*None is similar to Left which signifies Failure and Some is similar to Right which signifies Success.*

Let’s see with help of an example:

```
def returnEither(value: String): Either[NumberFormatException, Int] =
  try {
    Right(value.toInt)
  } catch {
    case ex: NumberFormatException => Left(ex)
  }

returnEither("abc").map(x => println(x))

It will not print anything. Either is right-biased, and returnEither(“abc”) gives Left(java.lang.NumberFormatException: For input string: “abc”).

Now let’s call it on Right value.

```
returnEither("1").map(x => println(x))
```

It will print 1. Yes, Either is right-biased, as map works on the Right value. What if I want to call map on the Left?

```
returnEither("abc").left.map(x => println(x))
```

It will print **java.lang.NumberFormatException: For input string: “abc”**.

**Using Match case with Either**

```
returnEither("1") match {
  case Right(value) => println(s"Right value: $value")
  case Left(ex: NumberFormatException) => println(ex)
}

It will print Right value: 1

**Extract value from Either**

Let’s say we want to extract left value from Either.

```
println(returnEither("abc").left)
```

will print **LeftProjection(Left(java.lang.NumberFormatException: For input string: “abc”))**

```
println(returnEither("1").left)
```

will print **LeftProjection(Right(1))**.

```
println(returnEither("abc").left.get)
```

will give **java.lang.NumberFormatException: For input string: “abc”**.

```
println(returnEither("1").left.get)
```

will give: **Exception in thread “main” java.util.NoSuchElementException: Either.left.get on Right**

Oops. It had a Right value.

We can use getOrElse or fold to provide a default value.
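A minimal sketch of both approaches, reusing the post’s returnEither helper: getOrElse supplies a default when the Either is a Left, while fold handles both sides and collapses them into a single value.

```scala
def returnEither(value: String): Either[NumberFormatException, Int] =
  try {
    Right(value.toInt)
  } catch {
    case ex: NumberFormatException => Left(ex)
  }

// getOrElse: default value when parsing fails
val fallback = returnEither("abc").getOrElse(0)
println(fallback) // 0

// fold: one handler for Left, one for Right
val message = returnEither("1").fold(
  ex => s"failed: ${ex.getMessage}",
  n  => s"parsed: $n"
)
println(message) // parsed: 1
```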

#scala #use of either #use of either in scala #either

1594468500

In this blog post, we will learn about higher-order functions in Scala – what they mean, why they are used and how they are used.

A higher-order function is a function that takes in another function as an argument and itself returns some value or function. Some examples of higher-order functions include map, filter, reduce and foreach.

As a part of the functional programming paradigm, whatever logic we need to write is to be implemented in terms of pure and immutable functions. Here, functions take other functions as input and return values/functions which are used by other functions for further processing. Pure means that the function does not produce any side effects, like printing to the console, and immutable means that the function takes in and produces immutable data (val) only.

Higher-order functions comply with the above idea. As compared to for loops, we can iterate a data structure using higher-order functions with much less code.

Let’s move forward and try using some higher-order functions. We will be seeing the 3 most commonly used higher-order functions: map, filter and reduce.

A map higher-order function applies the function passed in it to every element of the data structure. It then returns the same type of data structure but with mapped values. It has the following form :

```
def map[B] (f: A => B): Traversable[B]
```

If you have sbt installed, just type “sbt console” in any directory to get to the Scala REPL and you can try all the examples given here. If not, you should install sbt first.

**EXAMPLE**: Given a vector of integers from 1 to 10, we will try to find the corresponding vector that contains the squares of those integers. We perform this using the map function as below:

Here, the map function is used over a vector of integers in Scala. The function passed into the map function i => i*i takes each integer from the numbers vector and converts it to the square of that integer. This happens for all the integers and the resulting vector containing squares of numbers from 1 to 10 is returned. Doing the same in Java would take many more lines.
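The snippet described above (originally shown as an image) can be sketched as:

```scala
// Build the vector 1..10 and map each element to its square
val numbers = (1 to 10).toVector
val squares = numbers.map(i => i * i)
println(squares)
// Vector(1, 4, 9, 16, 25, 36, 49, 64, 81, 100)
```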

A filter function takes in a predicate and selects the elements from the data structure which satisfy the given predicate. A predicate is a unary function that returns a boolean value. Here again we get the same type of data structure as the input. A filter function looks like:

```
def filter(f: A => Boolean): Traversable[A]
```

**EXAMPLE**: Given a vector of integers from 1 to 10, we will try to pull out all the even integers into a new vector. Have a look at the code below:

Here, the filter function is taking in a predicate x => x%2==0 which is a mapping from an integer to a boolean value based on the condition that the integer is even. If the integer is even, the predicate returns true and the integer is added to the resulting data structure. If the integer is odd, the predicate returns false and it is discarded. The result is a vector of all even integers.
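The filtering step described above can be sketched as:

```scala
// Keep only the elements satisfying the predicate x % 2 == 0
val numbers = (1 to 10).toVector
val evens = numbers.filter(x => x % 2 == 0)
println(evens)
// Vector(2, 4, 6, 8, 10)
```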

A reduce function reduces the elements of the data structure using the specified associative binary operator and returns the reduced value. Reduction means to apply the same binary operator consecutively on each element to produce a single value. It is of the form :

```
def reduce[B >: A](op: (B, B) => B): B
```

**EXAMPLE**: Let’s try to sum the numbers present in a vector using the reduce function.
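The summation can be sketched as:

```scala
// Apply the associative binary operator + consecutively
// to collapse the vector into a single value
val numbers = (1 to 10).toVector
val sum = numbers.reduce((a, b) => a + b)
println(sum)
// 55
```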

#scala #functional programming in scala #higher order function #scala

1594330800

*This article assumes that you understand and know how to build regression or classification models.*

The error of any statistical model is composed of three parts — bias, variance and noise. In layman’s terms, bias is the inverse of the accuracy of predictions. And variance refers to the degree to which the predictions are spread out. Noise, on the other hand, is random fluctuation that cannot be expressed systematically.

However, the above definitions are vague and we need to inspect them from a mathematical perspective. In this article, I will stress on variance — the more perplexing demon that haunts our models.

When we allow our models the flexibility to uselessly learn complex relationships over the training data, they lose the ability to generalize. Most of the time, this flexibility is provided through features, i.e. when the data has a large number of features (sometimes more than the number of observations). It could also be due to a complex neural network architecture or an excessively small training dataset.

What results from this is a model which also learns the noise in the training data; consequently, when we try to make predictions on unseen data, the model misfires.

*Variance is also responsible for the differences in predictions on the same observation in different “realizations” of the model.*

We will use this point later to find an exact value for the variance.

Let *Xᵢ* be the population of predictions made by model *M* on observation *i*. If we take a sample of *n* values, the variance will be:
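The formula originally shown here was an image; assuming it was the usual unbiased sample variance over the sampled values *xᵢ* with sample mean *x̄*, it reads:

```
s^2 = \frac{1}{n - 1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2
```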

Now that was something we already knew. However, we need to calculate the variance of the whole population (which is equivalent to the variance of the statistical model that generated it) and we are not quite there yet. There is one concept we need to understand before that — bootstrap resampling.

Note that this formula of variance assumes that the outcome of the model is a continuous variable — this happens in regression. In the case of classification, the outcome is 0/1 and thus, we would have to measure the variance differently. You can find the explanation to that in this paper.

Often we don’t have access to an entire population to be able to calculate a statistic such as variance or mean. In such cases, we make use of bootstrap sub-samples.

The principle of bootstrapping suggests that if we take a large number of sub-samples of size *n*, drawn with replacement from a sample of size *n*, then it is an approximation of taking those samples from the original population.

We find the sample statistic on each of these sub-samples and take their mean to estimate the statistic with respect to the population. The number of sub-samples we take is limited only by time and space constraints; however, the more you take, the more accurate your result will be.
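The resampling procedure above can be sketched as follows (a toy illustration with a made-up sample and a fixed seed; here the statistic being estimated is the mean):

```scala
import scala.util.Random

// Toy sample and a seeded RNG so the run is reproducible
val rng = new Random(42)
val sample = Vector(2.0, 4.0, 6.0, 8.0, 10.0)
val sampleMean = sample.sum / sample.size

// Draw 1000 size-n sub-samples WITH replacement and
// compute the statistic (the mean) on each
val resampleMeans = Vector.fill(1000) {
  val resample = Vector.fill(sample.size)(sample(rng.nextInt(sample.size)))
  resample.sum / resample.size
}

// The average of the resampled statistics estimates the population statistic
val bootstrapMean = resampleMeans.sum / resampleMeans.size
println(math.abs(bootstrapMean - sampleMean) < 0.5) // true
```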

Let *M* be our statistical model. A realization of *M* is a mapping from input to output. When we train *M* on a particular input, we obtain a specific realization of the model. We can obtain more realizations by training the model on sub-samples from the input data.

#machine-learning #bias-variance #variance-analysis #variance #sensitivity-analysis #data analysis
