Cyril Parisian

Thought process: High-throughput data fetching and processing in C#

Let us assume that we have a serious piece of equipment that outputs high volumes of data in a specific format, which needs to be loaded onto a computer in real time and processed, then saved onto a storage medium. You are tasked with creating a .NET 5 program that takes said data from the equipment via a dedicated driver API, does the processing, then outputs the results.

If the above sounds scary, it shouldn’t be. Software development is almost always the result of an evolutionary process. Experience will usually allow you to start later and later in that process, but, while learning, starting from the beginning is what everyone should do.

The setup

The easiest situation to imagine in which this could be necessary is an image acquisition setup. Usually, dedicated hardware operates in real time, whereas a PC does not, and timing becomes crucial for success.

At 60 frames per second of uncompressed 1080p frames (approximately 6 MB per frame), the total amount of data that needs to be transferred, processed and saved is roughly 6 MB × 60 = 360 MB every second.

This means that the time between two frames is slightly less than 17 milliseconds.

Considerations

Usually, since 360 MB/s is quite a hefty load to transfer through a chain of multiple interfaces, drivers, OS layers and software layers, and to process as well, a few things are required for success:

  1. Enough buffer must be available so that, should processing slow down, there is room for data to accumulate without being lost
  2. The processing cycle must be either fast enough or parallel enough that the time between two consecutive moments at which the software can accept new data never exceeds the interval at which the dedicated hardware delivers it
  3. The storage medium must be fast enough, or the compression good enough, that saving each result to non-volatile storage does not overrun the data input times

For point #1, a lot will depend on hardware. Usually, dedicated hardware will have enough buffer that whatever it is attached to has a chance to read the data before that buffer is overwritten by new data. This is usually achieved with a buffer several times larger than the actual data package being received (for simplicity, let’s assume that the size of each data package, in our example a bitmap-format 1080p frame, is constant).
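
As a rough software-side illustration of such a buffer, the sketch below uses a bounded producer/consumer queue from System.Threading.Channels, so that short processing hiccups are absorbed instead of losing frames. The Frame record and the 32-frame capacity are assumptions made for illustration only, not values from the actual equipment.

```csharp
using System.Threading.Channels;

// Hypothetical frame record: one uncompressed 1080p bitmap, roughly 6 MB.
public sealed record Frame(long Index, byte[] Pixels);

public static class FrameBuffer
{
    // Bounded channel: holds up to 32 frames (~192 MB) before the producer
    // has to wait, mirroring the hardware-side buffer idea in software.
    public static Channel<Frame> Create(int capacity = 32) =>
        Channel.CreateBounded<Frame>(new BoundedChannelOptions(capacity)
        {
            SingleWriter = true,   // one acquisition loop reads from the driver
            SingleReader = false,  // several workers may consume frames
            FullMode = BoundedChannelFullMode.Wait
        });
}
```

The acquisition loop would then WriteAsync each incoming frame; if the consumers fall behind, the writer waits instead of overwriting data that has not been read yet.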

Point #2 will mostly depend on software factors. If the requirement is to process into a specific format (say, a series of PNG images), then the processing cycle defines which strategy should be employed. Encoding a still image, for instance, will take a lot longer than 16.6 milliseconds, the gap between two frames at 60 f/s. So, in such a situation, the only strategy that can ensure proper timing is parallel processing.
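
A minimal sketch of that parallel strategy, assuming the bounded channel and Frame record sketched above and a hypothetical EncodeToPng helper standing in for a real encoder: several workers drain frames concurrently, so even though a single PNG encode takes far longer than 16.6 ms, the aggregate throughput can keep up with the input rate.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class FrameProcessor
{
    // Start N workers that each pull frames from the shared channel and
    // encode them independently; with enough workers the average time per
    // frame stays under the ~16.6 ms budget.
    public static Task RunWorkersAsync(ChannelReader<Frame> reader, int workerCount, string outputDir)
    {
        var workers = Enumerable.Range(0, workerCount).Select(async _ =>
        {
            await foreach (var frame in reader.ReadAllAsync())
            {
                byte[] png = EncodeToPng(frame);                          // CPU-bound work
                string path = Path.Combine(outputDir, $"{frame.Index:D8}.png");
                await File.WriteAllBytesAsync(path, png);                 // I/O overlaps with other workers
            }
        });
        return Task.WhenAll(workers);
    }

    // Placeholder for an actual still-image encoder (e.g. ImageSharp, SkiaSharp).
    private static byte[] EncodeToPng(Frame frame) => throw new NotImplementedException();
}
```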

A separate problem that again forces a parallel strategy is the choice of software platform. If you choose a managed platform, such as .NET, you must contend with various timing skews and delays that come from platform-specific sources, such as thread pool management, garbage collection, marshalling, P/Invoke, etc.
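
For the garbage-collection part of those skews specifically, one common mitigation (sketched below, with settings that are only an assumption about what this particular workload can tolerate) is to ask the runtime for a low-latency GC mode and to pre-allocate the large frame buffers instead of allocating ~6 MB per frame:

```csharp
using System.Runtime;

// Ask the runtime to avoid blocking, full collections while acquisition runs.
GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

// Pre-allocate and reuse frame buffers so the Large Object Heap is not
// churned by a fresh ~6 MB allocation every 16.6 ms.
byte[][] pool = new byte[32][];
for (int i = 0; i < pool.Length; i++)
    pool[i] = new byte[1920 * 1080 * 3]; // 24-bit 1080p frame, ~6 MB
```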

Point #3 will mostly be based on requirements and available hardware. If the machine you are running on has dedicated encoding hardware that only takes 5 ms to process a frame, and a dedicated API to call, then of course you won’t have to use a parallel strategy for point #2. But should the requirement be custom encoding, or if no such hardware is available, then you need to plan accordingly.
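
When the results do have to go through the regular file system instead, the main thing point #3 asks for is that sustained sequential writes keep up with the input rate. The snippet below is a hedged sketch of that, using a large, asynchronous FileStream; the 1 MB buffer size and single-output-file layout are illustrative choices, not requirements from the original setup.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public static class ResultWriter
{
    // Stream every encoded result into one large output file, so sequential
    // write speed (not per-file overhead) is what limits throughput.
    public static async Task WriteAllAsync(string path, IAsyncEnumerable<byte[]> encodedFrames)
    {
        await using var stream = new FileStream(
            path, FileMode.Create, FileAccess.Write, FileShare.None,
            bufferSize: 1 << 20,   // 1 MB buffer, illustrative only
            useAsync: true);

        await foreach (var frame in encodedFrames)
            await stream.WriteAsync(frame);
    }
}
```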

Ultimately, should the requirements actually be met, then considerations for points #1–3 will have only determined how many resources you need, and how efficient you are in using them.

However, in real-life scenarios, the considerations for points #1–3 will determine your strategy, as the inability to offer the speed or the timing required for dedicated hardware on a PC will usually greatly limit your range of available strategies.

#software-development #software-engineering #c#
