Dock Koelpin


Why OpenTelemetry Is the Future of Instrumentation

Today, companies are rapidly embracing open source in all aspects of software development, and the monitoring space is no different. Engineers have access to numerous tools for measuring the performance of their stacks, and open source is at the heart of this tool explosion.

Apart from tools, standards are emerging and changing the way teams implement instrumentation. OpenTracing and OpenCensus were developed to standardize how teams capture traces and metrics. While both standards achieved their goals of making observability easy for many, a fundamental problem remained: they were two separate standards. For distributed tracing, it is particularly important to have one standard, because if even a single point in the causal chain is broken, the trace is no longer complete.

OpenTelemetry is an open source project and a unified standard for service instrumentation. Sponsored by the Cloud Native Computing Foundation (CNCF), it replaces OpenTracing and OpenCensus. The OpenTelemetry project consists of specifications, APIs and SDKs for various languages (like Java, Go and Python). It also defines a centralized collector and exporters, to send data to various backends. This standard is being backed by a number of leading monitoring companies, who all believe it’s the future of instrumentation.

#cloud native #kubernetes #monitoring #contributed #sponsored

aaron silva


Deploy white label futures trading software to boost your position

Infinite Block Tech’s white label futures trading software is embedded with solutions and services like derivatives trading, futures trading, margin trading, and spot trading. The safety measures include HTTP authentication, jail login, anti-DDoS protection, cross-site request forgery protection, and server-side request forgery protection.

#white label futures trading #white label futures trading software #futures trading #futures trading software

Gerhard Brink


The Path to a Better Future is paved with Data

The Cambridge Analytica scandal, along with other data breaches, has given the data extraction industry a negative reputation. That’s a hard reality to face, because (a) I lead a company that provides ethically sourced proxies for public data extraction, and (b) I believe that web scraping can be a force for good.

I realise that some people will need to be convinced that this is true because positive stories don’t get nearly as many clicks as negative ones. But they do exist, and I hope to change some minds with this article.

There’s no going back: big data is here to stay
Web scraping helps pave the path to a better internet
Online marketplaces
“Watchdog” monitoring groups & journalists

#big data #latest news #the path to a better future is paved with data #future #data #better future

Turner Crona


What is the future for Android developers

Android is dominating all other mobile operating systems in the market globally. There’s no doubt that Android applications will always be in demand. Companies like Flipkart, Amazon, PayTM, Airtel, and many more are investing heavily in third-party apps. All of these apps, be they native or third-party, are powered by Android. There has also been growth in the prevalence and quality of Android certification these days.

#android tutorials #android future scope #future for android developers #future in android #is android developer a good career

Seamus Quitzon


Java 8 Futures: Introduction & Best Practices

To understand this better, we must first understand what blocking is and why it is bad for our software.

BLOCKING – A blocking/long-running call occurs when a thread is tied up for long periods of time performing computations or waiting for resources. This may be anything – a database call, file I/O, serialization/deserialization of objects, network I/O etc.

There are multiple reasons why blocking negatively affects our code.

  1. During this period the threads, memory and other resources will not be released for use by other processes.
  2. Code following the blocking call will not be executed until a result from the blocked code arrives.
  3. Blocking is a potential bottleneck. It limits an application’s ability to scale.

But how do we solve the problem of blocking?

Blocking is inevitable in most systems. To maintain performance and scalability, we can isolate the blocking operations using Java Futures.

**JAVA FUTURES** – These allow us to isolate blocking operations to a separate thread so that the execution of the main thread continues uninterrupted. The result of the future is handled through a callback.

Futures represent the promise of a value. In Java 8, this promise is represented by a CompletableFuture.

A CompletableFuture eventually resolves into one of two things:

  1. The value of the future

  2. An exception that occurs while resolving the value of the future.
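Both outcomes can be handled without blocking. Here is a minimal sketch, assuming a hypothetical risky operation that may fail:

```java
import java.util.concurrent.CompletableFuture;

public class ExceptionDemo {
    // Hypothetical async operation that either succeeds or throws.
    static CompletableFuture<String> risky(boolean fail) {
        return CompletableFuture.supplyAsync(() -> {
            if (fail) throw new IllegalStateException("backend down");
            return "ok";
        });
    }

    public static void main(String[] args) {
        // exceptionally supplies a fallback value when the future
        // resolves to an exception instead of a value.
        String result = risky(true).exceptionally(ex -> "fallback").join();
        System.out.println(result); // prints "fallback"
    }
}
```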

The syntax of a CompletableFuture is as below:

CompletableFuture<Order> futureOrder = CompletableFuture.supplyAsync(() -> new Order(..));

This future takes in a lambda expression (a Supplier, which accepts no arguments). The lambda body contains all the work that is to be done in the blocking call. When the work is done, the future resolves with the value (in this case the Order object) and hands it back to the calling code. All of this work is done on a separate thread so that the normal flow of the program is not blocked.
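Putting this together, here is a small runnable sketch; the Order class and its id field are hypothetical stand-ins for real domain work:

```java
import java.util.concurrent.CompletableFuture;

public class SupplyAsyncDemo {
    // Hypothetical domain object produced by a blocking call.
    static class Order {
        final int id;
        Order(int id) { this.id = id; }
    }

    static CompletableFuture<Order> loadOrder(int id) {
        // supplyAsync runs the lambda on a worker thread from the
        // common ForkJoinPool, leaving the caller's thread free.
        return CompletableFuture.supplyAsync(() -> new Order(id));
    }

    public static void main(String[] args) {
        // The callback runs when the future completes; main is not blocked.
        CompletableFuture<Void> done =
                loadOrder(42).thenAccept(o -> System.out.println("order " + o.id));
        done.join(); // join only to keep this demo alive until the callback fires
    }
}
```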

Best Practices in Java 8 Futures:

Now we will look at some common scenarios involving futures and the best practices we can leverage in each.

  1. Using .get or .join to obtain the value of a future is not good practice, as these are blocking operations. They force the current thread to wait for the future to complete before moving on.

Best Practice: Rather than waiting for a future to complete, we can use transformations or callbacks to handle the result of the future. The .thenApply function transforms the value of the future using the lambda provided.

CompletableFuture<String> futureString = futureOrder.thenApply((order) -> order.toString());
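A runnable version of this transformation (the names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class ThenApplyDemo {
    static CompletableFuture<String> describe(CompletableFuture<Integer> futureId) {
        // thenApply registers a transformation on the eventual value;
        // it returns a new future and does not block the caller.
        return futureId.thenApply(id -> "order-" + id);
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> futureId = CompletableFuture.supplyAsync(() -> 7);
        System.out.println(describe(futureId).join()); // join only at the program edge
    }
}
```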

2. Sometimes, if a lambda returns a future, we can end up with a nested future.

_Best Practice:_ We can use the .thenCompose function instead of .thenApply to flatten the nested future. This returns a CompletableFuture&lt;Order&gt; instead of a CompletableFuture&lt;CompletableFuture&lt;Order&gt;&gt;.

CompletableFuture<Order> flattenedFutureOrder = futureOrder.thenCompose((order) -> CompletableFuture.completedFuture(order));
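A runnable sketch of the difference, using a hypothetical second lookup that itself returns a future:

```java
import java.util.concurrent.CompletableFuture;

public class ThenComposeDemo {
    // Hypothetical second async step: the lambda returns another future.
    static CompletableFuture<String> lookupCustomer(int orderId) {
        return CompletableFuture.supplyAsync(() -> "customer-of-" + orderId);
    }

    static CompletableFuture<String> customerForOrder(CompletableFuture<Integer> futureOrderId) {
        // thenApply here would produce CompletableFuture<CompletableFuture<String>>;
        // thenCompose flattens the result into a single CompletableFuture<String>.
        return futureOrderId.thenCompose(ThenComposeDemo::lookupCustomer);
    }

    public static void main(String[] args) {
        System.out.println(customerForOrder(
                CompletableFuture.completedFuture(3)).join()); // prints "customer-of-3"
    }
}
```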

3. Futures are executed on separate threads or in a thread pool. Management of these threads is handled by an Executor or ExecutorService. When no Executor is provided for a future, a default thread pool (the common ForkJoinPool) is used. This is convenient, but it also means that all operations, even the fast ones, are competing for the same threads.
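To avoid that contention, blocking work can be given its own pool by passing an Executor to the *Async methods. A minimal sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorDemo {
    public static void main(String[] args) {
        // A dedicated pool for blocking work, so slow I/O-style tasks
        // do not starve the shared common ForkJoinPool.
        ExecutorService blockingPool = Executors.newFixedThreadPool(4);

        CompletableFuture<String> f = CompletableFuture.supplyAsync(
                () -> Thread.currentThread().getName(), // runs on the dedicated pool
                blockingPool);

        System.out.println("ran on: " + f.join());
        blockingPool.shutdown();
    }
}
```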

#functional programming #future #java #scala #tech blogs #futures #java 8

Fannie Zemlak


Interview With Honeycomb Engineer Chris Toshok: Dogfooding OpenTelemetry

At Honeycomb, we talk a lot about eating our own dogfood. Since we use Honeycomb to observe Honeycomb, we have many opportunities to try out UX changes ourselves before rolling them out to all of our users.

UX doesn’t stop at the UI though! Developer experience matters too, especially when getting started with observability. We often get questions about the difference between using our Beeline SDKs compared with other integrations, especially OpenTelemetry (abbreviated “OTel”). That’s why the team decided to do some integration dogfooding by instrumenting our own code with OpenTelemetry alongside existing Beeline instrumentation.

Poodle is the frontend Go service that renders visualizations in the Honeycomb UI after getting query results and traces from the backend. Engineer Chris Toshok has been working on adding OpenTelemetry to the Poodle code and comparing it with our existing Beeline integration. I talked to Chris about his experience setting up OpenTelemetry from the perspective of a practitioner and service owner.

Interview With Chris Toshok

What were your thoughts going into the effort?

The main concern was schema compatibility. We have existing triggers, boards, SLOs, and other things that relied on the schema the Beeline generated. If we only sent to the existing Poodle dataset, we’d have to make sure that the OTel data would end up with the same field names and values as what’s already there, so it wouldn’t break the things that depend on the existing schema.

Alternatively, we could double-send: the Beeline data goes to one dataset and OTel data goes to another. We ended up going this way so that we can look at what the differences are between the two schemas without breaking existing dependencies in our Dogfood team.

Was there anything that surprised you in the process?

While there are lots of little differences between the APIs, the core concepts were the same. The only real sort of surprise was that the Beelines actually let you do stuff that’s kind of scary, because of how the data is stored.

With OTel, as soon as you set a value for a field, you can’t modify that value, only replace it. In the Beeline, it’s just a reference to a Go struct. You can just change the values at any point before the data gets sent to Honeycomb. With some fields, like the team object within a span, changing that could be dangerous. But sometimes you want to change a certain value in the middle, like if you need to sanitize values before sending them to Honeycomb.

The configuration for OTel is a bit different from the Beeline’s. Did you also work on getting set up with that?

We basically just set up the OpenTelemetry-Honeycomb Exporter for Go, written by [engineer] Alyson [van Hardenberg].

In the code, I used dependency injection to create an OTel-compatible initializer that works similarly to the Beeline. Beeline data and OTel data are stored a bit differently before they’re sent out, so we need to handle both cases. I added a shim that looks very similar to the OTel API, to wrap around both OTel and the Beeline and allow us to send to Honeycomb using either or both.

If you could advise a team starting today, would you recommend they use the Beeline or OTel for their Go app?

The Beeline has a much more specific API that maps more closely to Honeycomb’s features. OTel is doing things at a more general layer, so it might not have all the interesting application bits we’ve accumulated after several years of doing Honeycomb-specific work in the Go Beeline. That’s an effect of the Beeline being something we built out alongside Honeycomb itself: we developed it to answer the questions we had about the service, and to make use of Honeycomb’s features.

What sort of differences have you found?

There were a couple of small, but notable differences. OTel sends JSON blobs as strings, which don’t work with the JSON unfurling feature in Honeycomb.

Also, Beelines allow us to set trace-level attributes. Any span that we send can add a trace-level field. OTel doesn’t support trace-level fields added from child spans.

Have you noticed any performance changes in Poodle after adding OTel?

No change, but I’m wishing we had time over time queries. This is something I was interested in checking on and seeing in time series form.

Observing the Impact

Toshok shared with me the queries he ran to check for any impact on Poodle’s performance. Here’s the baseline behavior from the week before Toshok’s change went out:

Poodle performance from the week before the change went out

Here’s the same query, run on the following week. Toshok’s changes got rolled out on 7/21:

Poodle performance from the week the change went out

It makes sense that there would be a minimal impact in this case because of how Toshok implemented the change, but it’s still neat to be able to query for it.

#integration #interview #go #golang #observability #instrumentation #opentelemetry #otel