Scala

Scala is a general-purpose programming language that supports functional programming and has a strong static type system. Designed to be concise, Scala addresses many common criticisms of Java.


Nat Grady · March 24, 2023

How to Use Lazy Evaluation in Scala

Lazy evaluation is a technique that delays the computation of an expression until it is needed. This can be useful for improving performance and reducing memory usage in certain situations.

Lazy Evaluation in Scala

In Scala, lazy evaluation is achieved through the use of lazy vals. A lazy val is a value that is computed lazily. Its value is not evaluated until it is accessed for the first time. Here is an example of a lazy val in Scala:

lazy val a = {
  println("computing a")
  26
}

In this example, the value of a is not evaluated until it is accessed for the first time. On that first access, the code block { println("computing a"); 26 } is executed and the value of a is set to 26; every later access returns the cached result without re-running the block.
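The caching behavior matters in practice: the block runs exactly once, no matter how many times the value is read. A minimal, self-contained sketch (the LazyDemo object and its counter are illustrative, not from the original article):

```scala
object LazyDemo {
  var evaluations = 0 // counts how many times the block below runs

  lazy val a: Int = {
    evaluations += 1
    println("computing a")
    26
  }
}

@main def lazyDemo(): Unit = {
  assert(LazyDemo.evaluations == 0) // nothing has been evaluated yet
  println(LazyDemo.a)               // first access runs the block
  println(LazyDemo.a)               // second access reuses the cached 26
  assert(LazyDemo.evaluations == 1) // the block ran exactly once
}
```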

Lazy Evaluation in a Higher-Order Function

In Scala, lazy evaluation can be particularly useful when dealing with higher-order functions that take one or more functions as parameters, as it allows the functions to be evaluated only when they are actually used by the higher-order function.

For example, consider the following code:

def computation(x: Int): Int = {
  println("Computing...")
  Thread.sleep(1000) // Simulating an expensive computation
  x
}

def higherOrderFunc(f: Int => Int): Int = {
  println("Doing something...")
  f(10)
}

val result = higherOrderFunc(computation)

In this code, computation() is an expensive function that takes a long time to run. The function higherOrderFunc(f: Int => Int) is a higher-order function that takes another function f as a parameter.

If the argument for f were built eagerly from an expensive expression, it would be evaluated as soon as higherOrderFunc() was called, even though the result is not actually used until later in the function. This can result in unnecessary resource usage and performance overhead.

To avoid this issue, we can use lazy evaluation to defer the evaluation of computation() until it is actually needed by higherOrderFunc():

def computation(x: Int): Int = {
  println("Computing...")
  Thread.sleep(1000) // Simulating an expensive computation
  x
}

def higherOrderFunc(f: => Int => Int): Int = {
  println("Doing something...")
  f(10)
}

val result = higherOrderFunc(computation)

In this code, we use the => syntax to make the f parameter a call-by-name parameter. This means that the argument expression supplied for f is not evaluated until it is actually used inside higherOrderFunc().

As a result, the text "Computing..." is only printed to the console when computation() is actually evaluated inside higherOrderFunc(), rather than eagerly at the call site.
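The by-name mechanism is easiest to see on a plain Int parameter. Here is a small sketch (ByNameDemo and its names are illustrative) contrasting a by-value parameter, whose argument is evaluated eagerly at the call site, with a by-name parameter, whose argument is evaluated only where the body refers to it:

```scala
object ByNameDemo {
  var timesComputed = 0

  def expensive(): Int = {
    timesComputed += 1
    26
  }

  // By-value: the argument is evaluated once, before the body runs.
  def byValue(x: Int): Int = 10

  // By-name (=>): the argument expression is evaluated only where `x`
  // is referenced in the body -- here, never.
  def byName(x: => Int): Int = 10
}

@main def byNameDemo(): Unit = {
  ByNameDemo.byValue(ByNameDemo.expensive())
  assert(ByNameDemo.timesComputed == 1) // evaluated eagerly at the call site

  ByNameDemo.byName(ByNameDemo.expensive())
  assert(ByNameDemo.timesComputed == 1) // never evaluated: the body ignores x
}
```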

By using lazy evaluation with higher-order functions in Scala, we can further improve performance and resource utilization in our code. This can be particularly useful in situations where expensive computations or operations are involved, and the results are not needed immediately.

Benefits of using lazy evaluation

Improved performance

By deferring the evaluation of expressions until they are actually needed, a program can avoid unnecessary computation and use resources more efficiently. This can lead to improved performance and reduced resource consumption.

More flexible code

Lazy evaluation can make code more flexible by allowing the evaluation of expressions to be deferred until they are actually needed. This can make it easier to write code that is more modular and can be reused in different contexts.

Improved error handling

Lazy evaluation can improve error handling by allowing a program to catch and handle errors that occur during the evaluation of expressions. This can make it easier to write code that is robust and can handle unexpected situations.
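For instance, because a lazy val defers its right-hand side, a failure inside it surfaces at the first access rather than at definition time, so the access can be wrapped and handled. A minimal sketch (LazyErrorDemo is illustrative):

```scala
import scala.util.{Failure, Success, Try}

object LazyErrorDemo {
  // Defining this lazy val does not run the failing block...
  lazy val risky: Int = throw new RuntimeException("boom")
}

@main def lazyErrorDemo(): Unit =
  // ...the error surfaces only when the value is first forced,
  // at a point where we can wrap the access and recover.
  Try(LazyErrorDemo.risky) match {
    case Success(value) => println(s"value: $value")
    case Failure(error) => println(s"recovered from: ${error.getMessage}")
  }
```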

Conclusion

Lazy evaluation is a powerful technique that can be used to improve performance and reduce memory usage in certain situations. In Scala, lazy evaluation is achieved through the use of lazy vals. By delaying the computation of a value until it is needed, we can avoid unnecessary computations and improve the overall efficiency of our programs.

Original article source at: https://blog.knoldus.com/

#scala #evaluate 


Nat Grady · March 16, 2023

Develop Event Driven Application using ZIO Actors

What is ZIO:

ZIO is a cutting-edge framework for creating cloud-native JVM applications. ZIO enables developers to construct best-practice applications that are extremely scalable, tested, robust, resilient, resource-safe, efficient, and observable thanks to its user-friendly yet strong functional core.

Difference between Akka and ZIO:

Akka and ZIO are both Scala libraries for building concurrent, scalable, and fault-tolerant applications.

Akka is a toolkit and runtime for building highly concurrent, distributed, and fault-tolerant systems. It provides actors, which are lightweight units of computation that communicate with each other by exchanging messages. Akka also includes tools for clustering, routing, and persistence, making it well-suited for building reactive applications.

On the other hand, ZIO is a purely functional library that provides a type-safe and composable way to write concurrent and asynchronous code. It provides a set of abstractions such as fibers, which are lightweight threads that can be composed to create complex applications, and effects, which are immutable, composable descriptions of side-effecting computations. ZIO also includes a powerful concurrency model and support for asynchronous IO operations.
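As a minimal sketch of those two abstractions, assuming a ZIO 2.x dependency on the classpath (the FiberDemo name is illustrative, not from the article):

```scala
import zio._

object FiberDemo extends ZIOAppDefault {
  // An effect: an immutable description of a computation, not yet running.
  val compute: UIO[Int] = ZIO.succeed(21).map(_ * 2)

  def run =
    for {
      fiber  <- compute.fork // start the effect on a lightweight fiber
      result <- fiber.join   // await the fiber's result
      _      <- Console.printLine(s"result: $result")
    } yield ()
}
```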

Why ZIO over Akka:

ZIO and Akka are both powerful frameworks for building concurrent and distributed applications in Scala. However, there are some reasons why we might choose ZIO over Akka:

  1. Type safety: ZIO is a purely functional framework that uses the type system to enforce safe concurrency and parallelism. This makes it easier to reason about our code and catch errors at compile time rather than runtime.
  2. Lightweight: ZIO is a lightweight library that is easy to learn and use. It has a smaller API surface area than Akka and doesn’t require as much boilerplate code.
  3. Performance: ZIO is highly optimized for performance, with a low overhead for thread management and minimal context switching. It also provides fine-grained control over scheduling and thread pools.
  4. Compatibility: ZIO is fully compatible with existing Scala and Java libraries, making it easy to integrate into existing projects.

ZIO Actors:

In ZIO, actors are implemented as a type of fiber, which is a lightweight thread that can be run concurrently with other fibers. An actor is essentially a fiber that can receive messages and react to them.

To create an actor in ZIO, one way is to use the actor.make method, which takes a function that defines the behavior of the actor. The function takes two parameters: the first is the current state of the actor, and the second is the message that the actor receives.
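As a hedged sketch of that shape, using the zio-actors Stateful API that the booking example below also relies on (the Greet protocol, the counter state, and all names here are illustrative, not part of the article's code):

```scala
import zio._
import zio.actors._
import zio.actors.Actor.Stateful

// A message protocol: the type parameter is the reply type.
sealed trait Message[+A]
case class Greet(name: String) extends Message[String]

object GreeterActor {
  // Behavior = (current state, message) => (new state, reply).
  val greeter: Stateful[Any, Int, Message] = new Stateful[Any, Int, Message] {
    override def receive[A](state: Int, msg: Message[A], context: Context): Task[(Int, A)] =
      msg match {
        case Greet(name) => ZIO.succeed((state + 1, s"hello, $name"))
      }
  }
}

object GreeterMain extends ZIOAppDefault {
  def run =
    for {
      system <- ActorSystem("demo")
      actor  <- system.make("greeter", zio.actors.Supervisor.none, 0, GreeterActor.greeter)
      reply  <- actor ? Greet("ZIO") // ask (?): send and await the typed reply
      _      <- ZIO.succeed(println(reply))
    } yield ()
}
```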

Developing a Ticket Booking System using ZIO Actors –

To create an actor system in ZIO, we need to add some ZIO-related dependencies to build.sbt. Below are the dependencies we used:

libraryDependencies ++= Seq(
  "dev.zio" %% "zio" % zioVersion,
  "dev.zio" %% "zio-streams" % zioVersion,
  "dev.zio" %% "zio-kafka" % "2.0.7",
  "dev.zio" %% "zio-json" % "0.4.2",
  "dev.zio" %% "zio-dynamodb" % "0.2.6",
  "dev.zio" %% "zio-test" % zioVersion,
  "dev.zio" %% "zio-actors" % "0.1.0",
  "dev.zio" %% "zio-http" % "0.0.4",
  "dev.zio" %% "zio-http-testkit" % "0.0.3",
  "io.d11" %% "zhttp" % "2.0.0-RC11"
)

Architecture and Flow Diagram (Ticket Booking System):

Moving on to the codebase, we have introduced ZIO actors in such a way that a Kafka consumer and a Kafka producer work in parallel with all the actors.
Let's define a main actor for the TicketBookingSystem.

object TicketBookingSystem extends ZIOAppDefault {

  val actorSystem = ActorSystem("ticketBookingSystem")

  def run = {
    println("starting actor system")
    for {
      ticketInfoConsumerProducer <- KafkaConsumer.consumerRun.fork
      _ <- ticketInfoConsumerProducer.join
    } yield ()
  }
}

Here, we initialize the Kafka consumer.

def consumerRun: ZIO[Any, Throwable, Unit] = {
  println("starting KafkaConsumer")
  val finalInfoStream =
    Consumer
      // create a Kafka consumer here, subscribed to a particular topic
        for {
          theatreActor <- actorSystem.flatMap(x =>
            x.make("ticketBookingflowActor", zio.actors.Supervisor.none, (), theatreActor))
          theatreActorData <- theatreActor ! ticketBooking
        } yield theatreActorData
      }
      .map(_.offset)
      .aggregateAsync(Consumer.offsetBatches)
      .mapZIO(_.commit)
      .drain
  finalInfoStream.runDrain.provide(KafkaProdConsLayer.consumerLayer ++
    KafkaProdConsLayer.producer)
}

In the Kafka consumer, we pass the booking information to the theatre actor using the tell method (!). This actor processes the data, fetches the payment details, confirms the ticket, and passes it on to the next actor.

TheatreActor implementation for TicketBookingSystem –

This is one way we can create the actor system and the various other actors, and link them using the ask (?) or tell (!) method. In our code, we added our first actor in the Kafka consumer itself, where it is triggered; from there, we gain access to the subsequent actors.

object ThreatreActor {

  val theatreActor: Stateful[Any, Unit, ZioMessage] = new Stateful[Any, Unit, ZioMessage] {
    override def receive[A](state: Unit, msg: ZioMessage[A], context: Context): Task[(Unit, A)] =
      msg match {
        case BookingMessage(value) =>
          println("ThreatreActor ................" + value)
          val ticketConfirm = Booking(value.uuid, value.bookingDate, value.theatreName,
            value.theatreLocation, value.seatNumbers, value.cardNumber, value.pin,
            value.cvv, value.otp, Some("Success"), Some("Confirmed"))
          for {
            paymentActor <- actorSystem.flatMap(x =>
              x.make("paymentGatewayflowActor", zio.actors.Supervisor.none, (), paymentGatewayflowActor))
            paymentDetails <- paymentActor ? BookingMessage(value)
            bookingSyncActor <- actorSystem.flatMap(x =>
              x.make("bookingSyncActor", zio.actors.Supervisor.none, (), bookingSyncActor))
            _ <- bookingSyncActor ! BookingMessage(ticketConfirm)
          } yield {
            println("Completed Theatre Actor")
            ((), ())
          }
        case _ => throw new Exception("Wrong value Input")
      }
  }
}

This code takes us to the paymentActor, which is written below.

PaymentActor implementation for TicketBookingSystem –

object PaymentGatewayActor {

  val paymentGatewayflowActor: Stateful[Any, Unit, ZioMessage] = new Stateful[Any, Unit, ZioMessage] {
    override def receive[A](state: Unit, msg: ZioMessage[A], context: Context): Task[(Unit, A)] =
      msg match {
        case BookingMessage(value) =>
          println("paymentInfo ................" + value)
          val booking = Booking(value.uuid, value.bookingDate, value.theatreName,
            value.theatreLocation, value.seatNumbers, value.cardNumber, value.pin,
            value.cvv, value.otp, Some("Success"), Some(""))
          for {
            bookingSyncActor <- actorSystem.flatMap(x =>
              x.make("bookingSyncActor", zio.actors.Supervisor.none, (), bookingSyncActor))
            // ZIO.succeed(booking)
          } yield {
            println("paymentInfo return................" + booking)
            (BookingMessage(booking), ())
          }
        case _ => throw new Exception("Wrong value Input")
      }
  }
}

The same theatreActor will take us to bookingSyncActor which is written as below –

bookingSyncActor implementation for TicketBookingSystem –

val bookingSyncActor: Stateful[Any, Unit, ZioMessage] = new Stateful[Any, Unit, ZioMessage] {
  override def receive[A](state: Unit, msg: ZioMessage[A], context: Context): Task[(Unit, A)] =
    msg match {
      case BookingMessage(value) =>
        println("bookingSyncActor ................" + value)
        for {
          _ <- KafkaProducer.producerRun(value)
          _ <- f1(value).provide(
            netty.NettyHttpClient.default,
            config.AwsConfig.default,
            dynamodb.DynamoDb.live,
            DynamoDBExecutor.live
          )
        } yield ((), ())
    }
} // plus some other computations

Each of these actors collects some information from the base case class and returns the information relevant to that actor.

Sending a response back to client –

To send the reply message, the producer publishes the data on a different (reply) topic:

for {
  _ <- KafkaProducer.producerRun(value)
  // logic for db
} yield ((), ())

Ticket Booking System Test Case:

Let's check how to test our actors with a unit test.

For the theatreActor code above, we created ThreatreActorSpec, which can be written in this simple format. We can check whether the data entered reaches the actor correctly.

object ThreatreActorSpec extends ZIOAppDefault {

  val data: Booking = booking.handler.actor.JsonSampleData.booking

  override def run: ZIO[Any with ZIOAppArgs with Scope, Any, Any] =
    for {
      system <- ActorSystem("ticketBookingSystem")
      actor  <- system.make("ticketBookingflowActor", Supervisor.none, (), theatreActor)
      result <- actor ! BookingMessage(data)
    } yield result
}

To run this service:

  • Set up a Kafka broker
  • Run the actor service
  • Run the producer
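For instance, a local run of these steps might look like the following (the Kafka paths and main-class names are illustrative assumptions, not taken from the project):

```shell
# 1. Set up a local Kafka broker (paths depend on your Kafka installation)
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &

# 2. Run the actor service
sbt "runMain TicketBookingSystem"

# 3. Run a producer to publish a booking message (hypothetical main class)
sbt "runMain KafkaProducerApp"
```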

Conclusion:

In conclusion, ZIO Actors is a powerful and efficient library for building concurrent and distributed systems in Scala. It provides a lightweight and type-safe approach to concurrency, allowing developers to easily model their domain-specific concurrency needs without the complexities of traditional actor systems. With its advanced features such as location transparency, message interception, and supervision, ZIO Actors simplifies the development and deployment of highly scalable and fault-tolerant distributed applications.

Here, we created various actors such as the theatre actor, payment actor, and booking-sync actor, each working according to its own logic. We also integrated ZIO Actors with ZIO Kafka, and this serves as a good example of that simple integration.

Additionally, its integration with the ZIO ecosystem allows for seamless composition with other functional libraries, providing developers with a powerful and cohesive toolkit for building robust and maintainable software.

Original article source at:  https://blog.knoldus.com/

#scala #event #application #actors 

Develop Event Driven Application using ZIO Actors


Metaprogramming in Scala

Metaprogramming is a popular technique from the 1970s and 1980s, which used languages like LISP to enable applications to process code for artificial-intelligence-based applications. When a programming language is its own metalanguage, this is called reflection. Reflection is an important feature of any programming language for facilitating metaprogramming. Metaprogramming moves computations from run time to compile time, thereby enabling self-modifying code. Hence, a program is designed in such a way that it can read, analyse, or transform other programs, or itself, while it is running. This style of programming falls under the generic programming paradigm, where the programming language itself is a first-class datatype.

This metaprogramming is exercised in various programming languages for various purposes. In Scala it appears as macro systems, multi-stage programming (runtime staging), and more.

Metaprogramming in Scala

Metaprogramming in Scala introduces fundamental features like:

  1. Macros: built on two fundamental operations: quotation (written '{…}) and splicing (written ${…}). Along with inline, these two abstractions allow program code to be constructed programmatically.
  2. inline: a new modifier which guarantees that a definition will be inlined at the point of use. It reduces the overhead of function calls and value accesses.
  3. Compile-time ops: helper functions that provide support for compile-time operations, such as constValue and constValueOpt.
  4. Runtime staging: to make code generation depend on runtime data, staging lets code construct new code at runtime.
  5. Reflection
  6. TASTy inspection: the Typed Abstract Syntax Tree allows loading files and analysing their content as a tree structure.

These new metaprogramming capabilities bring enormous benefits for eliminating boilerplate code and improving the overall performance of applications. With metaprogramming, Scala developers can improve their applications' performance and remove redundant, boilerplate code using these features.

Trade-Offs: Macros over Functions

Execution time: with macros we can make execution comparatively faster. During processing, a macro is expanded and replaced by its definition each time it is used; a function definition, on the other hand, occurs only once, irrespective of the number of times it is called. Macros may increase the number of lines of code, but they avoid the overhead associated with function calls.

Clean Code with Macros

Repeated code: Scala's concise syntax avoids much of the boilerplate that occurs in other JVM programming languages, but there are still scenarios where developers end up writing repetitive code that cannot be refactored further for reuse. With Scala macros we can keep such code clean and maintainable.

Metaprogramming Feature Snippets

Inlined Method Example

// `count` is an inline parameter so the inline match can reduce at compile time
inline def repeat(s: String, inline count: Int): String =
  inline count match
    case 0 => ""
    case _ => s + repeat(s, count - 1)
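A self-contained usage sketch of the inlined method (note that for the inline match to reduce, the count parameter itself must be declared inline):

```scala
// Self-contained Scala 3 sketch of the inlined method above.
inline def repeat(s: String, inline count: Int): String =
  inline count match
    case 0 => ""
    case _ => s + repeat(s, count - 1)

@main def repeatDemo(): Unit =
  // repeat("ab", 3) expands at compile time to "ab" + ("ab" + ("ab" + ""))
  println(repeat("ab", 3)) // prints "ababab"
```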

Macros Example

import scala.quoted.*

// Note the quote '{...} and the argument types
private def failImpl[T](
    predicate: Expr[Boolean], message: Expr[String],
    block: Expr[T], beforeAfter: Expr[String])(
    using Quotes): Expr[String] =
  '{ throw InvariantFailure(
    s"""FAILURE! predicate "${${showExpr(predicate)}}" """
    + s"""failed ${$beforeAfter} evaluation of block:"""
    + s""" "${${showExpr(block)}}". Message = "${$message}". """)
  }

private def showExpr[T](expr: Expr[T])(using Quotes): Expr[String] =
  val code: String = expr.show
  Expr(code)


TASTy Inspection Example

<Sample .tasty file>

import scala.quoted.*
import scala.tasty.inspector.*

class MyInspector extends Inspector:
  def inspect(using Quotes)(tastys: List[Tasty[quotes.type]]): Unit =
    import quotes.reflect.*
    for tasty <- tastys do
      val tree = tasty.ast
      // Your code here

Consumer of above .tasty file

object Test:
  def main(args: Array[String]): Unit =
    val tastyFiles = List("sample.tasty")
    TastyInspector.inspectTastyFiles(tastyFiles)(new MyInspector)


Compile-time ops Example

/* constValue - a function to produce the constant value represented by a
   type, or a compile-time error if the type is not a constant type.
   constValueOpt is the same as constValue, but returns Option[T] to handle
   the case where a value is not present. */

import scala.compiletime.constValue
import scala.compiletime.ops.int.S

transparent inline def toIntConst[N]: Int =
  inline constValue[N] match
    case 0        => 0
    case _: S[n1] => 1 + toIntConst[n1]

inline val constTwo = toIntConst[2]

Metaprogramming Applications

Program transformation systems can be helpful to build:

  • Test coverage and profiling tools
  •  Code generation & completion tools
  •  Automated Refactoring tools
  •  Language migration tools
  •  Tools to re-architecture/re-shape applications
  •  Build Domain Specific Languages via. Metaprogramming
  •  Project Templates
  •  GUI code generation
  •  Compilers & Interpreters implementation
  •  Frameworks
  •  ORM in dynamic language

Conclusion

In statically typed languages like Java and Scala, metaprogramming is more constrained and much less common, but it is still useful for solving many advanced real-time design problems, with more effort needed to separate compile-time from runtime manipulation. It also gives more flexibility and configuration at runtime.

Original article source at:  https://blog.knoldus.com/

#scala #programming 

Metaprogramming in Scala

Migrate Scala 2.13 Project to Scala 3

Migrate Scala 2.13 Project to Scala 3

Are you a Scala developer looking to migrate your existing Scala 2.13 projects to the latest version of the language? If so, you’ll be happy to know that Scala 3 is now available and comes with a range of new features and improvements. With its streamlined syntax, improved performance, and better compatibility with Java 8 and above, Scala 3 offers a host of benefits for developers working with the language.

However, migrating to a new major version of any programming language can be a daunting task, and Scala 3 is no exception. But don’t worry – we’ve got you covered. In this blog post, we’ll provide you with a step-by-step guide to help you migrate your projects from Scala 2.13 to Scala 3 using the Scala 3 Migrate Plugin. Whether you’re interested in the new features of Scala 3 or just looking to stay up-to-date with the latest version of the language, this guide is for you.

So, let’s get started and take your Scala development to the next level with Scala 3.

Scala 3 Migrate Plugin

The Scala 3 Migrate Plugin is a valuable tool that can help you migrate your codebase to Scala 3. It provides a set of automated tools and manual suggestions to make the migration as smooth and painless as possible.

The migration process consists of four independent steps that are packaged into an sbt plugin:

  1. migrate-libs: This step helps you update the list of library dependencies in your build file to use the corresponding Scala 3 versions of your dependencies. It ensures that your project’s dependencies are compatible with Scala 3 and can be resolved correctly during the build process.
  2. migrate-scalacOptions: This step helps you update the list of compiler options (scalacOptions) in your build file to use the corresponding Scala 3 options. It ensures that the compiler is using the correct set of options for Scala 3, which can help improve the quality and performance of your code.
  3. migrate-syntax: This step fixes a number of syntax incompatibilities in your Scala 2.13 code so that it can be compiled in Scala 3. It handles common syntax changes between the two versions of Scala and can help you quickly fix issues that would otherwise require significant manual changes.
  4. migrate: This step tries to make your code compile with Scala 3 by adding the minimum required inferred types and implicit arguments. It automates the process of making your code compatible with Scala 3 and can help you quickly identify issues that would otherwise require significant manual changes.

Each of these steps is an sbt command that we will understand in detail in the following sections. So make sure to run them in an sbt shell.
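In an sbt shell, the four steps might be run in order like this (assuming the module is named ticketService, as in the build shown later; the prompt is illustrative):

```shell
sbt:ticket-service> migrate-libs ticketService
sbt:ticket-service> migrate-scalacOptions ticketService
sbt:ticket-service> migrate-syntax ticketService
sbt:ticket-service> migrate ticketService
```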

Prerequisites

Before using the scala3-migrate plugin, you’ll need to make sure that your development environment meets the following prerequisites:

  1. SBT 1.5 or later: You’ll need to be using SBT as your build tool, and have a version of 1.5 or later installed on your system.
  2. Java 8 or later: The scala3-migrate plugin requires Java 8 or later to run. Make sure it is installed on your system.
  3. Scala 2.13: The scala3-migrate plugin requires Scala 2.13 (2.13.5 preferred) to work correctly. If you're using an earlier version of Scala, you'll need to upgrade first.

By ensuring that your development environment meets these prerequisites, you’ll be able to use the scala3-migrate plugin with confidence and make a smooth transition to Scala 3.

Installation

You can install the scala3-migrate plugin by adding it to your plugins.sbt file:

addSbtPlugin("ch.epfl.scala" % "sbt-scala3-migrate" % "0.5.1")

Choosing a Module to Migrate

The scala3-migrate plugin operates on one module at a time, so for projects with multiple modules, the first step is to choose which one to migrate first.

Choosing the right module to migrate is an important first step in the process of migrating to Scala 3. Here are a few considerations to help you decide which module to migrate first:

  • Start with a small module: Migrating a large codebase all at once can be overwhelming, so it’s best to start with a small, self-contained module that is easy to test and debug. This will allow you to gain confidence in the migration process before tackling larger and more complex modules.
  • Choose a module with clear dependencies: Look for a module that has clear dependencies and is less likely to have complex interactions with other parts of your codebase. This will make it easier to identify any issues that arise during the migration process and ensure that you’re not introducing new bugs or breaking existing functionality.
  • Select a module that uses fewer language features: Some Scala 2 language features have been removed or changed in Scala 3, so it’s best to start with a module that uses fewer of these features. This will make it easier to identify and fix any issues related to the changes in the language.
  • Select a module that is actively developed: It’s a good idea to select a module that is currently under active development, as this will give you the opportunity to address any issues that arise during the migration process as part of your regular development workflow.

Consider these factors to choose a suitable module for migration and gain confidence before tackling more complex code.

Note:

Make sure the module you choose is not an aggregate project, otherwise only its own sources will be migrated, not the sources of its subprojects.

Migrate library dependencies

command: migrate-libs projectId

Migrating library dependencies is an important step in upgrading a Scala 2.13 project to Scala 3. Library dependencies can include external packages, plugins, and other code that your project relies on. Fortunately, the scala3-migrate plugin provides the migrate-libs projectId command (where projectId is the name of the module chosen for migration), which can help you update your library dependencies to be compatible with Scala 3.

Let’s consider the following sbt build that is supposed to be migrated:

//build.sbt
val akkaHttpVersion = "10.2.4"
val akkaVersion = "2.6.5"
val jdbcAndLiftJsonVersion = "3.4.1"
val flywayCore = "3.2.1"
val keycloakVersion = "4.0.0.Final"

scapegoatVersion in ThisBuild := "1.4.8"

lazy val ticketService = project
  .in(file("."))
  .settings(
    name := "ticket-service",
    scalaVersion := "2.13.6",
    semanticdbEnabled := true,
    scalacOptions ++= Seq("-explaintypes", "-Wunused"),
    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-http" % akkaHttpVersion,
      "com.typesafe.akka" %% "akka-stream" % akkaVersion,
      "net.liftweb" %% "lift-json" % jdbcAndLiftJsonVersion,
      "org.postgresql" % "postgresql" % "42.2.11",
      "org.scalikejdbc" %% "scalikejdbc" % jdbcAndLiftJsonVersion,
      "ch.qos.logback" % "logback-classic" % "1.2.3",
      "com.typesafe.scala-logging" %% "scala-logging" % "3.9.3",
      "ch.megard" %% "akka-http-cors" % "0.4.3",
      "org.apache.commons" % "commons-io" % "1.3.2",
      "org.fusesource.jansi" % "jansi" % "1.12",
      "com.google.api-client" % "google-api-client" % "1.30.9",
      "com.google.apis" % "google-api-services-sheets" % "v4-rev1-1.21.0",
      "com.google.apis" % "google-api-services-admin-directory" % "directory_v1-rev20191003-1.30.8",
      "com.google.oauth-client" % "google-oauth-client-jetty" % "1.30.5",
      "com.google.auth" % "google-auth-library-oauth2-http" % "1.3.0",
      // test lib
      "com.typesafe.akka" %% "akka-stream-testkit" % akkaVersion % Test,
      "com.typesafe.akka" %% "akka-http-testkit" % akkaHttpVersion % Test,
      "com.typesafe.akka" %% "akka-http-spray-json" % akkaHttpVersion,
      "org.scalatest" %% "scalatest" % "3.1.0" % Test,
      "org.mockito" %% "mockito-scala" % "1.11.4" % Test,
      "com.typesafe.akka" %% "akka-testkit" % akkaVersion % Test,
      "com.h2database" % "h2" % "1.4.196",
      //flyway
      "org.flywaydb" % "flyway-core" % flywayCore,
      //swagger-akka-http
      "com.github.swagger-akka-http" %% "swagger-akka-http" % "2.4.2",
      "com.github.swagger-akka-http" %% "swagger-scala-module" % "2.3.1",
      //javax
      "javax.ws.rs" % "javax.ws.rs-api" % "2.0.1",
      "org.keycloak" % "keycloak-core" % keycloakVersion,
      "org.keycloak" % "keycloak-adapter-core" % keycloakVersion,
      "com.github.jwt-scala" %% "jwt-circe" % "9.0.1",
      "org.jboss.logging" % "jboss-logging" % "3.3.0.Final" % Runtime,
      "org.keycloak" % "keycloak-admin-client" % "12.0.2",
      "com.rabbitmq" % "amqp-client" % "5.12.0",
      "org.apache.commons" % "commons-text" % "1.9",
      "org.typelevel" %% "cats-core" % "2.3.0"
    )
  )

Next, we’ll run the command and see the output:

Output

The output lists project dependencies with their current version and required Scala 3-compatible version.

The Valid status indicates that the current version of the dependency is compatible with Scala 3. In contrast, the X status indicates that the dependency is not compatible with the Scala 3 version. The To be updated status displays the latest Scala 3 compatible version of the dependency.

In the given result, it appears that several dependencies are already valid and don’t require any updates. However, some dependencies require a specific Scala 3-compatible version, while others cannot be updated to Scala 3 at all.

For example, com.sksamuel.scapegoat:scalac-scapegoat-plugin:1.4.8:provided is marked with an X status, indicating that it is not compatible with Scala 3; you need to remove it and find an alternative. Moreover, the output suggests that the dependency ch.megard:akka-http-cors:0.4.3 should be updated to "ch.megard" %% "akka-http-cors" % "1.1.3", as the latter version is compatible with Scala 3.

In addition, some dependencies have a cross label next to them, indicating that they need to be used with a specific cross-versioning scheme, as they are not fully compatible with Scala 3. For example, the net.liftweb:lift-json:3.4.1 dependency needs to be used with the cross-versioning scheme CrossVersion.for3Use2_13, as it is only safe to use the 2.13 version if it’s inside an application.
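Applied to the build, the two suggestions above might look like this (a sketch; the coordinates and versions are taken from the report discussed here):

```scala
libraryDependencies ++= Seq(
  // updated to the Scala 3 compatible version
  "ch.megard" %% "akka-http-cors" % "1.1.3",
  // Scala 2.13 artifact used from Scala 3 via the cross-versioning scheme
  ("net.liftweb" %% "lift-json" % "3.4.1").cross(CrossVersion.for3Use2_13)
)
```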

Overall, this output can help identify which dependencies to update or remove when migrating to Scala 3. By following this migration guide, you can ensure that all the dependencies in your project are compatible with Scala 3.

Once you have applied all the changes mentioned in the above output, run the migrate-libs command again. All project dependencies with Valid status indicate successful migration of library dependencies to Scala 3.

Migrate scalacOptions

command: migrate-scalacOptions projectId

The next step for migration is to update the project’s Scala compiler options (scalacOptions) to work with Scala 3.

The Scala compiler options are flags that control the compiler’s behavior when passed to the Scala compiler. These flags can affect the code generation, optimization, and error reporting of the compiler.

In Scala 3, some of the compiler options have been renamed or removed, while others have been added. Therefore, it is important to review and update the scalacOptions when migrating from Scala 2.13 to Scala 3.

To perform this step, we’ll run the migrate-scalacOptions command which displays the following output:

The output shows a list of scalacOptions that were found in the project and indicates whether each option is still valid, has been renamed, or is no longer available in Scala 3.

For instance, the line -Wunused -> X indicates that the -Wunused option is not available in Scala 3 and needs to be removed. On the other hand, -explaintypes -> -explain-types shows that the -explaintypes option has been renamed to -explain-types and can still be used in Scala 3. So you just need to rename this scalacOption.

Some scalacOptions are set not by you in the build file but by sbt plugins. For example, the scala3-migrate tool enables SemanticDB in Scala 2, which adds the -Yrangepos option; sbt will adapt the SemanticDB options for Scala 3 on its own. Therefore, all the plugin-specific information displayed by migrate-scalacOptions can be ignored if the previous step has been followed successfully.

Overall, the output is intended to help you identify which scalacOptions need to be updated or removed in order to migrate the project to Scala 3.

After applying the suggested changes, the updated scalacOptions setting in the build looks like this:

scalacOptions ++=
  (if (scalaVersion.value.startsWith("3"))
    Seq("-explain-types")
  else
    Seq("-explaintypes", "-Wunused"))

Migrate the syntax

command: migrate-syntax projectId

This step is to fix the syntax incompatibilities that may arise when migrating code from Scala 2.13 to Scala 3. An incompatibility is a piece of code that compiles in Scala 2.13 but does not compile in Scala 3. Migrating a code base involves finding and fixing all the incompatibilities of the source code.

The command migrate-syntax is used to perform this step and fixes a number of syntax incompatibilities by applying the following Scalafix rules:

  • ProcedureSyntax
  • fix.scala213.ConstructorProcedureSyntax
  • fix.scala213.ExplicitNullaryEtaExpansion
  • fix.scala213.ParensAroundLambda
  • fix.scala213.ExplicitNonNullaryApply
  • fix.scala213.Any2StringAdd

This command is very useful in making the syntax migration process more efficient and less error-prone. By automatically identifying and fixing syntax incompatibilities, time and effort are saved from manual code changes.
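For example, the ProcedureSyntax rule targets Scala 2’s procedure syntax, which was dropped in Scala 3. A sketch of the before/after (the log method is invented for illustration):

```scala
// Before: Scala 2 procedure syntax, which no longer compiles in Scala 3
// def log(msg: String) { println(msg) }

// After migrate-syntax applies the ProcedureSyntax rule
def log(msg: String): Unit = { println(msg) }
```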

Note that the migrate-syntax command is not guaranteed to fix all syntax incompatibilities. It is still necessary to manually review and update any remaining issues that the tool may have missed.

Let’s run the command and check the output:

The output displays a list of files that previously had syntax incompatibilities and are now fixed after running this command.

Migrate the code: the final step

command: migrate projectId

The final step in the migration process is to use the migrate command to make your code compile with Scala 3.

The new type inference algorithm in Scala 3 allows its compiler to infer a different type than Scala 2.13’s compiler. This command attempts to compile your code in Scala 3 by adding the minimum required inferred types and implicit arguments.
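As a hypothetical illustration of the kind of rewrite this produces (the names below are invented, not from this project): a definition whose result type was left to inference gets an explicit annotation, so the Scala 2.13 and Scala 3 compilers agree on the type:

```scala
// Before: the result type is left to inference
def widen(xs: List[Int]) = xs.map(_.toLong)

// After: the migrate command pins the inferred type explicitly
def widenExplicit(xs: List[Int]): List[Long] = xs.map(_.toLong)
```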

When you run the migrate command, it will generate a report that lists any errors or warnings encountered during the compilation process. This report identifies areas of your code needing modification for compatibility with Scala 3.

Overall, the migrate command is an essential tool for the final step in the migration process to Scala 3. It automatically identifies migration issues and ensures full compatibility with Scala 3.

Let’s run the command and see the output:

The output indicates that the project has been successfully migrated to Scala 3.1.1.

If your project has multiple modules, repeat the same migration steps for each of them. Once you’ve finished migrating each module, remove the scala3-migrate plugin from your project and update the Scala version to 3.1.1 (or add this version to crossScalaVersions).

Conclusion

In conclusion, the process of migrating a Scala 2.13 project to Scala 3 can be made much simpler with the use of the scala3-migrate plugin. The plugin automates many migration changes, such as syntax incompatibilities and updating deprecated code. It also provides helpful diagnostics and suggestions for manual changes that are needed. However, it is still important to manually review and test changes to ensure the project runs correctly after migration. Careful planning and attention to detail ensure a successful migration to Scala 3, providing access to new features and benefits.

That’s it for this blog post. I hope that the information provided has been helpful and informative.

Additionally, if you found this post valuable, please share it with your friends, and colleagues, or on social media. Sharing information is a great way to help others and build a community of like-minded individuals.

To access more fascinating articles on Scala or any other cutting-edge technologies, visit Knoldus Blogs.

Finally, remember to keep learning and growing. With the vast amount of information available today, there’s always something new to discover and explore. So keep an open mind, stay curious, and never stop seeking knowledge.

Original article source at: https://blog.knoldus.com/

#scala #migrate 

Migrate Scala 2.13 Project to Scala 3

Desmond Gerber

How to Deep Dive into The Working Of The “fold” Operation in Scala

Introduction to “fold”

“fold” is a common operation in programming languages, including Scala, where we essentially use it to “reduce” a collection (note that “reduce” is also an operation in programming languages and has a special meaning in Scala as well). In this blog, we will learn how to use the fold function, understand the different types of fold operations (including foldLeft and foldRight), and try to understand how it all works. Although the fold operation can be applied to Option, Future, Try, etc., here we will understand it through List.

Definition of “fold”

This is what the Scala docs say about the fold operation:

Folds the elements of this collection using the specified associative binary operator. The default implementation in IterableOnce is equivalent to foldLeft but may be overridden for more efficient traversal orders.
The order in which operations are performed on elements is unspecified and may be nondeterministic.

Here is how it is defined in the Scala Collections library:

// fold
def fold[A1 >: A](z: A1)(op: (A1, A1) => A1): A1 = foldLeft(z)(op)

// foldLeft
def foldLeft[B](z: B)(op: (B, A) => B): B = this match {
  case seq: IndexedSeq[A @unchecked] => foldl(seq, 0, z, op)
  case _ =>
    var result = z
    val it = iterator
    while (it.hasNext) {
      result = op(result, it.next())
    }
    result
}

private[this] def foldl[B](seq: IndexedSeq[A], start: Int, z: B, op: (B, A) => B): B = {
  @tailrec def loop(at: Int, end: Int, acc: B): B =
    if (at == end) acc
    else loop(at + 1, end, op(acc, seq(at)))
  loop(start, seq.length, z)
}

// foldRight
def foldRight[B](z: B)(op: (A, B) => B): B = reversed.foldLeft(z)((b, a) => op(a, b))

// For internal use
protected def reversed: Iterable[A] = {
  var xs: immutable.List[A] = immutable.Nil
  val it = iterator
  while (it.hasNext) xs = it.next() :: xs
  xs
}

Looking at the definition itself can teach us a lot about the working of the “fold” operation.

Some of the key takeaways from the above code are:

  1. fold and foldLeft are in fact synonymous: fold internally uses the foldLeft operation.
  2. The foldLeft operation distinguishes between an IndexedSeq and other Seq subclasses such as LinearSeq.
  3. IndexedSeq provides a faster length operation. In the implementation of the “foldl” method, we can see that the length operation is leveraged to iterate over the Seq.
  4. The “case _” branch highlights the fact that mutability is something we cannot avoid completely: it not only uses a mutable variable but also loops over the Seq.
  5. Looking at the foldRight operation, we can see that it performs a foldLeft operation on the reversed Seq.
  6. Looking at the “foldl” method, one might intuitively expect a matching “foldr” method for foldRight, but that is not the case. A “foldr” method does exist, but the Scala standard library uses it in the reduceRight operation.
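Takeaway 5 can be checked by hand: a small sketch showing that folding right over a list gives the same result as reversing it and folding left with the operand order flipped:

```scala
val xs = List(1, 2, 3)

// foldRight combines as op(1, op(2, op(3, "z")))
val viaFoldRight = xs.foldRight("z")((a, acc) => a.toString + acc)

// equivalent: reverse, then foldLeft with the arguments swapped
val viaReversed = xs.reverse.foldLeft("z")((acc, a) => a.toString + acc)
// both are "123z"
```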

 

Illustration of fold (foldLeft) & foldRight

@ val list = List(1, 2, 3, 4) 

list: List[Int] = List(1, 2, 3, 4)


@ list.foldLeft("a")(_ + _.toString) 

res12: String = "a1234"


@ list.foldRight("a")(_ + _.toString) 

res13: String = "1234a"

Conclusion

The fold operation is used a lot in production code. One common use case: for an e-commerce website, we could use the fold operation to calculate the sum of the prices of all the items ordered by a customer.
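A minimal sketch of that use case, with a hypothetical Item type and made-up prices:

```scala
// Hypothetical order items; names and prices are illustrative only
case class Item(name: String, price: BigDecimal)

val order = List(Item("book", BigDecimal("12.50")), Item("pen", BigDecimal("1.50")))

// Start from zero and accumulate each item's price
val total = order.foldLeft(BigDecimal(0))((acc, item) => acc + item.price)
// total is 14
```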

Note that the fold operations are not restricted to sequences and are applicable to other monads as well, as mentioned in the introduction.

I hope the readers gain a better understanding of how “fold” works internally in Scala, and I would urge them to keep exploring it, as this will help them a lot in their development journey.

Original article source at: https://blog.knoldus.com/

#scala #deep #working 


Basics of the Twitter Finagle Ecosystem

In this blog, we will learn about the very basics of the Twitter Finagle ecosystem and see how it is used at Twitter.

To create high-concurrency servers and clients in Scala, developers leverage the open-source, asynchronous, protocol-neutral RPC framework known as Twitter Finagle. Twitter created it to manage the enormous traffic and scale it needed to serve. The entire ecosystem of Finagle’s tools and libraries comes together to form a powerful framework for creating distributed systems. In this post, we’ll examine the Finagle ecosystem’s many elements in more detail.

Architecture

Finagle is built on the foundation of Netty, a high-performance network application framework. It uses Netty’s NIO (non-blocking IO) features to provide a scalable and efficient networking stack. Finagle’s architecture is based on service-oriented architecture (SOA) principles: services are defined as functions that take a request and return a response, and these services are then composed into complex systems.
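To make the “services are functions” idea concrete, here is a dependency-free sketch in plain Scala. The Service alias only mirrors the shape of Finagle’s Service[Req, Rep]; none of the names below are the actual Finagle API:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// A service is essentially an asynchronous function from request to response
type Service[Req, Rep] = Req => Future[Rep]

// A toy "echo" service
val echo: Service[String, String] = req => Future.successful(s"echo: $req")

// Services compose like ordinary functions
val shouting: Service[String, String] = req => echo(req).map(_.toUpperCase)

// Await is acceptable in a demo, not in production code
val reply = Await.result(shouting("hello"), 1.second)
// reply == "ECHO: HELLO"
```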

The architecture of Twitter Finagle can be broken down into three main components:

  1. Finagle Core: This is the core library that provides the basic abstractions for building distributed systems. It includes modules for managing connections, load balancing, service composition, and retry logic.
  2. Protocol Support: Finagle supports a variety of protocols, including HTTP, Thrift, Memcached, and Finagle’s own custom protocol. Each protocol has its own set of codecs and serializers that are used to encode and decode data.
  3. Integrations: Finagle integrates with a variety of third-party tools and frameworks, such as Zipkin, a distributed tracing system, and Ostrich, a service monitoring system. These integrations provide additional functionality for building and monitoring distributed systems.

Overall, the architecture of Twitter Finagle is designed to be modular and extensible, allowing developers to build and deploy complex distributed systems with ease.

Components of Twitter Finagle

Finagle is composed of several components that work together to provide a complete ecosystem for building distributed systems.

  1. Finagle Core: The core of Finagle provides the basic building blocks for building RPC systems. It includes support for HTTP, Thrift, and other protocols, and provides a robust set of tools for handling errors and managing network connections.
  2. Finagle Clients: Finagle clients provide a simple API for making remote procedure calls. Clients are protocol-agnostic, which means that they can be used with any protocol that Finagle supports. Clients also include support for load balancing and failover.
  3. Finagle Servers: Finagle servers provide a simple API for building high-concurrency servers. Like clients, servers are protocol-agnostic and include support for load balancing and failover.
  4. Finagle Load Balancers: Finagle includes a set of load balancers that can be used to distribute traffic across multiple servers. Load balancers can be configured to use different load balancing algorithms, such as round-robin or weighted round-robin.
  5. Finagle Filters: Finagle filters provide a way to add functionality to Finagle services. Filters can be used to add metrics, logging, or authentication to a service, and can be composed to create complex functionality.
  6. Finatra: Finatra is a web framework built on top of Finagle. It provides a set of tools and libraries for building RESTful APIs, including support for features like request routing, JSON serialization, and dependency injection.
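Conceptually, a filter sits in front of a service: it receives the request together with the downstream service and can act before and after the call. A dependency-free sketch (the types and names here are illustrative, not the real Finagle Filter API, which composes via andThen):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Simplified shapes: a service, and a filter that wraps a service
type Service[Req, Rep] = Req => Future[Rep]
type Filter[Req, Rep] = (Req, Service[Req, Rep]) => Future[Rep]

// A toy service that reverses its input
val service: Service[String, String] = req => Future.successful(req.reverse)

// A logging-style filter that tags the response after the downstream call
val logging: Filter[String, String] =
  (req, next) => next(req).map(rep => s"[logged] $rep")

// Composing the filter with the service yields another service
val filtered: Service[String, String] = req => logging(req, service)

val out = Await.result(filtered("abc"), 1.second)
// out == "[logged] cba"
```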

Benefits of Twitter Finagle

Finagle provides several benefits for building distributed systems:

  1. Scalability: Finagle’s architecture is designed to provide scalability. Additionally, its NIO-based networking stack allows it to handle large numbers of connections efficiently. Its support for load balancing and failover also makes it easy to scale up and down.
  2. Robustness: Finagle provides a robust set of tools for handling errors and managing network connections. Also, its support for retries and timeouts helps to ensure that requests are processed successfully. Its filters also provide a way to add additional error handling and monitoring.
  3. Flexibility: Finagle’s protocol-agnostic design allows it to be used with any protocol that it supports. This makes it easy to switch between protocols or add new protocols to a system.

Conclusion

Twitter Finagle is a powerful ecosystem for building distributed systems in Scala. Its architecture is designed to provide scalability, robustness, and flexibility. Its components work together to provide a complete set of tools for building high-concurrency servers and clients. Finagle has been battle-tested at Twitter and is used by many other companies to build their distributed systems. If you are building a distributed system in Scala, Finagle is definitely worth considering.

For more information, you can visit Twitter Finagle Official Documentation.

Original article source at: https://blog.knoldus.com/

#twitter #scala 

Basics of the Twitter Finagle Ecosystem

Implicit classes in Scala do not have to extend AnyVal

What are implicit classes?

Implicit classes are a feature that allows you to extend the functionality of existing types by adding new methods. They are defined using the implicit keyword and have a single constructor parameter. We can use the methods of such a class as if they belonged to the original type, without having to perform explicit type conversions.

Implicit classes are particularly useful for adding utility methods to existing types. They allow you to do this without creating a new type or modifying the original type. You can also use implicit classes to add implicit conversions. It can be helpful in making your code more concise and readable.

Should implicit classes always extend AnyVal in Scala?

Implicit classes in Scala do not have to extend AnyVal. They can extend any type that is a subtype of Any. However, if the implicit class is meant to be used as a value type and is simple enough, it may make sense to extend AnyVal to allow for optimized storage and improved performance.

It’s worth noting that an implicit class that extends AnyVal can only have a single constructor parameter and is subject to certain restrictions in terms of its functionality, as it is meant to represent a value type. On the other hand, implicit classes that do not extend AnyVal are treated as normal classes and can have multiple constructor parameters, additional fields, and more complex logic.

So whether or not an implicit class should extend AnyVal depends on the specific use case and the intended behavior of the class.

Examples:

Here’s an example to illustrate the difference between implicit classes that extend AnyVal and those that do not.

Let’s say we want to add a method to the Int type that squares its value. We can define an implicit class that takes an Int value and adds this method:

implicit class IntOps(val x: Int) extends AnyVal {

  def square: Int = x * x

}

In this case, the implicit class extends AnyVal, so it is optimized for use as a value type. We can use this implicit class like this:

scala> 5.square

res0: Int = 25

Now, let’s say we want to add a similar method to the String type that repeats its value a specified number of times. To do this, we can define an implicit class that takes a String value and adds this method:

implicit class StringOps(val s: String) {

  def repeat(n: Int): String = s * n

}

In this case, the implicit class does not extend AnyVal, because it is not meant to be used as a value type. We can treat it as a normal class. We can use this implicit class like this:

scala> "Hello ".repeat(3)

res1: String = Hello Hello Hello 

So, in this example, the implicit class that extends AnyVal is more optimized for performance as a value type, while the implicit class that does not extend AnyVal is treated as a normal class and can handle more complex logic.

Here’s one more example. Let’s say we want to add a method to the Int type that calculates the factorial of a number. We can define an implicit class that takes an Int value and adds this method:

implicit class IntFactorial(val n: Int) {

  def factorial: Int = {

    def fact(x: Int, acc: Int): Int =

      if (x <= 1) acc else fact(x - 1, acc * x)

    fact(n, 1)

  }

}

In this case, the implicit class does not extend AnyVal, because it contains more complex, recursive logic, for which the restrictions of a value class bring no benefit. We can use this implicit class like this:

scala> 5.factorial

res2: Int = 120

So, in this example, we see that extending AnyVal is not always the right choice, as the more complex logic required by the factorial method makes it more appropriate to use a normal class that does not extend AnyVal.

Conclusion:

In conclusion, whether or not an implicit class in Scala should extend AnyVal depends on the intended use case and behavior of the class. If the implicit class is meant to be used as a simple value type, then extending AnyVal can result in improved performance and optimized storage. On the other hand, if the implicit class requires more complex logic or additional fields, it may make more sense to treat it as a normal class and not extend AnyVal. In either case, implicit classes can be a convenient way to add new methods to existing types in Scala, and the choice of whether to extend AnyVal or not should be based on the specific requirements of each case.

Original article source at: https://blog.knoldus.com/

#scala #classes 


How to Replace Type Checks with Pattern Matching in Scala

There are several benefits of using pattern matching instead of type checking in programming:

  1. Conciseness: Pattern matching allows you to write more concise code by eliminating the need for multiple type checks and conditional statements. This can make your code easier to read and maintain.
  2. Clarity: Pattern matching provides a more intuitive and readable way of checking the structure of values, as it directly matches values against patterns. This can make it easier to understand what your code is doing.
  3. Safety: Pattern matching helps prevent type-related bugs by making it explicit when a value does not match a particular pattern. This can help catch errors earlier in the development process, reducing the amount of debugging time required.
  4. Flexibility: Pattern matching allows you to easily handle multiple cases, such as different types of data structures, without relying on type information. This can make your code more flexible and reusable.
  5. Performance: In some cases, pattern matching can be faster than type checking, as it provides a direct way to extract information from values without having to perform multiple checks and conditionals.

Overall, pattern matching can provide a cleaner, more efficient, and more flexible way of handling values in your code, making it a powerful tool for many programming tasks.

Example:

Here’s a simple example in Scala that demonstrates the benefits of pattern matching:

Suppose you want to write a function that takes a list of integers and returns the sum of the even numbers in the list. Using type checking, you might write the following code:

def sumEven(lst: List[Int]): Int = {
  var sum = 0
  for (x <- lst) {
    if (x.isInstanceOf[Int]) {
      val i = x.asInstanceOf[Int]
      if (i % 2 == 0) {
        sum += i
      }
    }
  }
  sum
}

This code uses type checking (isInstanceOf and asInstanceOf) to determine whether an element in the list is an integer, and then performs a check to see if it is even.

With pattern matching, you can write a much more concise and readable version of the same function:

def sumEven(lst: List[Int]): Int = lst.filter {
  case i: Int if i % 2 == 0 => true
  case _ => false
}.sum

This code uses pattern matching to filter the list for even integers and then sum the result, all in one expression. The pattern match makes it clear that the function only cares about even integers, and eliminates the need for separate type and value checks.

As you can see, pattern matching provides a more concise and readable way of processing values in Scala, making it a useful tool for many programming tasks.
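As a side note, because lst is already declared as List[Int], even the pattern match is more than this particular problem needs; the fully idiomatic version is just a predicate:

```scala
// No type checks at all: the element type is already known statically
def sumEven(lst: List[Int]): Int = lst.filter(_ % 2 == 0).sum
```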

Here’s another example in Scala that demonstrates the use of pattern matching:

Suppose you want to write a function that takes a value and returns a message based on its type. Using type checking, you might write the following code:

def message(value: Any): String = {
  if (value.isInstanceOf[Int]) {
    "Received an integer: " + value.toString
  } else if (value.isInstanceOf[String]) {
    "Received a string: " + value.toString
  } else {
    "Received an unknown type"
  }
}

This code uses type checking (isInstanceOf) to determine the type of the value argument and returns a message based on its type.

With pattern matching, you can write a much more concise and readable version of the same function:

def message(value: Any): String = value match {
  case i: Int => "Received an integer: " + i.toString
  case s: String => "Received a string: " + s
  case _ => "Received an unknown type"
}

This code uses pattern matching to match the value argument against different patterns and returns a message based on the match. The pattern match makes it clear what types of values the function handles and eliminates the need for multiple type checks.

As you can see, pattern matching provides a more intuitive and readable way of handling values in Scala, making it a valuable tool for many programming tasks.

Let’s check one more real-world example, converting a type check into a pattern match:

if (con.isInstanceOf[HttpURLConnection]) {
  val httpCon = con.asInstanceOf[HttpURLConnection]
  if (getRequestMethod == URLHandler.REQUEST_METHOD_HEAD)
    httpCon.setRequestMethod("HEAD")
  if (checkStatusCode(url, httpCon)) {
    val bodyCharset = BasicURLHandler.getCharSetFromContentType(con.getContentType)
    return new SbtUrlInfo(true, httpCon.getContentLength, con.getLastModified, bodyCharset)
  }
} else {
  val contentLength = con.getContentLength
  if (contentLength <= 0) return UNAVAILABLE
  else { // TODO: not HTTP... maybe we *don't* want to default to ISO-8559-1 here?
    val bodyCharset = BasicURLHandler.getCharSetFromContentType(con.getContentType)
    return new SbtUrlInfo(true, contentLength, con.getLastModified, bodyCharset)
  }
}

We can modify the code as follows:

con match {
  case httpCon: HttpURLConnection =>
    if (getRequestMethod == URLHandler.REQUEST_METHOD_HEAD)
      httpCon.setRequestMethod("HEAD")
    if (checkStatusCode(url, httpCon)) {
      val bodyCharset = BasicURLHandler.getCharSetFromContentType(con.getContentType)
      return new SbtUrlInfo(true, httpCon.getContentLength, con.getLastModified, bodyCharset)
    }
  case _ =>
    val contentLength = con.getContentLength
    if (contentLength <= 0) return UNAVAILABLE
    else { // TODO: not HTTP... maybe we *don't* want to default to ISO-8559-1 here?
      val bodyCharset = BasicURLHandler.getCharSetFromContentType(con.getContentType)
      return new SbtUrlInfo(true, contentLength, con.getLastModified, bodyCharset)
    }
}

Conclusion:

In conclusion, pattern matching is a powerful feature in programming languages that provides a concise and intuitive way of handling values. It eliminates the need for multiple type checks and conditional statements, making code easier to read and maintain. Pattern matching provides a direct way of matching values against patterns, making it easier to understand what the code is doing and helping to prevent type-related bugs. It allows for greater flexibility in handling different cases, making code more flexible and reusable. In addition, pattern matching can be faster than type checking in some cases, providing a more efficient way to extract information from values. Pattern matching is a valuable tool for many programming tasks, and its use can lead to more readable, maintainable, and efficient code.

Original article source at: https://blog.knoldus.com/

#scala #type #check 
