When we talk about data processing or Big Data ETL, the first thing that comes to mind is Hadoop (or Spark). Today, I would like to show another way of building data processing: with the Actor System. I will be using classic Akka Actors.
Note: This article will not introduce the Actor System or Akka's implementation of it. I will write another essay covering an introduction to Akka Actors, or you can read the documentation on the Akka site.
Why use the Actor System?
Modern distributed systems with demanding requirements encounter challenges that traditional object-oriented programming (OOP) models cannot solve.
One shortcoming of OOP in building distributed systems is the challenge of encapsulation. If you write a regular multi-threaded program in Java, encapsulation does not protect your state across multiple threads; it only holds in a single-threaded model. To keep encapsulated state thread-safe, you need synchronization, such as synchronized blocks. Other shortcomings are discussed in the Akka documentation.
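To make this concrete, here is a minimal sketch in Scala (the Counter class and the thread counts are illustrative, not from any particular library): the field is private and encapsulated, yet two threads can still interleave its read-modify-write steps and lose updates.

```scala
// An encapsulated counter that is still not thread-safe.
class Counter {
  private var count = 0
  def increment(): Unit = count += 1 // read, add, write: not atomic
  def value: Int = count
}

object UnsafeCounterDemo extends App {
  val counter = new Counter
  val threads = (1 to 2).map { _ =>
    new Thread(() => (1 to 100000).foreach(_ => counter.increment()))
  }
  threads.foreach(_.start())
  threads.foreach(_.join())
  // Typically prints less than 200000 because concurrent increments were
  // lost; fixing it requires synchronization, or an actor that owns the state.
  println(counter.value)
}
```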
I will create an application that reads data from a CSV log file and uses a master-worker pattern to process each message.
The goal is to count the occurrences of each HTTP status code in the log file and write the aggregated result to an out.txt file.
Since there are multiple components involved in reading the data from the source file, splitting the work, and aggregating the results, let's examine the different actors' roles in this application.
There are four kinds of actors: the Supervision Actor, the Ingestion Actor, the Master Actor, and the Worker Actor.
The Supervision Actor spawns, supervises, and stops the Ingestion Actor, and handles any failure events that occur within it.
The Ingestion Actor reads the log file from the source and initializes the Master Actor. It splits the log file into lines and sends each line to the Master Actor. Finally, it receives the aggregated value from the Master Actor and writes that value to the out.txt file.
The Master Actor creates the Worker Actors and delegates tasks to them. It also aggregates the results and sends them back to the Ingestion Actor.
The Worker Actor does all the heavy lifting in the process.
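Before we look at the data, here is a minimal sketch of how this hierarchy might be wired together with classic Akka actors. The message types and the Master stub are illustrative placeholders, not the article's actual code, which we walk through below and in part 2.

```scala
import akka.actor.{Actor, ActorLogging, ActorRef, Props}

// Illustrative message protocol; the real messages appear in the walkthrough.
case object StartIngestion
case class ProcessLine(line: String)
case class Aggregate(statusCounts: Map[String, Int])

// Supervision Actor: spawns the Ingestion Actor; its supervision strategy
// decides what happens when the Ingestion Actor fails.
class Supervisor extends Actor with ActorLogging {
  private val ingestion: ActorRef = context.actorOf(Props[Ingestion], "ingestion")
  def receive: Receive = {
    case StartIngestion => ingestion ! StartIngestion
  }
}

// Ingestion Actor: reads the log file, sends lines to the Master Actor,
// and eventually writes the aggregated counts to out.txt.
class Ingestion extends Actor with ActorLogging {
  private val master: ActorRef = context.actorOf(Props[Master], "master")
  def receive: Receive = {
    case StartIngestion =>
      scala.io.Source.fromFile("src/main/resources/log.csv")
        .getLines()
        .foreach(line => master ! ProcessLine(line))
    case Aggregate(statusCounts) =>
      log.info(s"writing to out.txt: $statusCounts")
  }
}

// Master Actor: stubbed here; creating workers and delegating is part 2.
class Master extends Actor {
  def receive: Receive = { case _ => }
}
```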
The data that we will read is a log file containing IP, Time, URL, and Status:
10.128.2.1,[29/Nov/2017:06:58:55,GET /login.php HTTP/1.1,200
However, the fields are not always in the right order, so we need to filter the data while processing it.
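As a rough sketch of such a filter (assuming the comma-separated layout of the sample line above; the article's actual parsing code may differ), we can scan each row for the field that looks like a three-digit status code, drop rows that have none, and count the matches:

```scala
// Because fields may arrive out of order, scan for the field that looks
// like a 3-digit HTTP status code; rows without one are filtered out.
def statusCode(line: String): Option[String] =
  line.split(",").map(_.trim).find(_.matches("""\d{3}"""))

// Aggregate the status codes into occurrence counts.
def countStatuses(lines: Iterator[String]): Map[String, Int] =
  lines.flatMap(statusCode).toSeq.groupBy(identity).map { case (code, hits) => code -> hits.size }

// countStatuses(Iterator("10.128.2.1,[29/Nov/2017:06:58:55,GET /login.php HTTP/1.1,200"))
// => Map("200" -> 1)
```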
In this article, I will walk through the code for the Supervisor Actor and the Ingestion Actor. The Master Actor and Worker Actor will be covered in part 2.
Let’s go forth and conquer.
#programming #data-processing #akka #scala #functional-programming
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
CVDC 2020, the Computer Vision conference of the year, is scheduled for the 13th and 14th of August to bring together leading experts on Computer Vision from around the world. Organised by the Association of Data Scientists (ADaSCi), the premier global professional body of data science and machine learning professionals, it is a first-of-its-kind virtual conference on Computer Vision.
The second day of the conference started with quite an informative talk on the current pandemic situation. Speaking of talks, the second session “Application of Data Science Algorithms on 3D Imagery Data” was presented by Ramana M, who is the Principal Data Scientist in Analytics at Cyient Ltd.
Ramana talked about one of the most important assets of organisations, data, and how the digital world is moving from 2D data to 3D data for highly accurate information along with realistic user experiences.
The agenda of the talk included an introduction to 3D data, its applications and case studies, 3D data alignment, 3D data for object detection, and two general case studies.
This talk discussed the recent advances in 3D data processing, feature extraction methods, object type detection, object segmentation, and object measurements in different body cross-sections. It also covered 3D imagery concepts, various algorithms for faster data processing in a GPU environment, and the application of deep learning techniques for object detection and segmentation.
#developers corner #3d data #3d data alignment #applications of data science on 3d imagery data #computer vision #cvdc 2020 #deep learning techniques for 3d data #mesh data #point cloud data #uav data
The coronavirus pandemic has brought the world to a standstill.
Countries are on a major lockdown. Schools, colleges, theatres, gyms, clubs, and all other public places are shut down, countries' economies are suffering, human health is at stake, people are losing their jobs, and nobody knows how much worse it can get.
Since most places are on lockdown and you are working from home or have enough time to nourish your skills, you should use this time wisely! We always complain that we want some ‘time’ to learn and upgrade our knowledge but don’t get it due to our ‘busy schedules’. So, now is the time to make a ‘list of skills’ and learn and upgrade your skills at home!
And for technology-loving people like us, Knoldus Techhub has already helped us do just that in a short span of time!
If you are still not aware of it, don’t worry as Georgia Byng has well said,
“No time is better than the present”
– Georgia Byng, a British children’s writer, illustrator, actress and film producer.
No matter whether you are a developer (front-end or back-end), a data scientist, a tester, a DevOps person, or a learner with a keen interest in technology, Knoldus Techhub has brought it all for you under one common roof.
From technologies like Scala, Spark, and Elasticsearch to Angular, Go, and machine learning, it covers a total of 20 technologies, including some recently added ones: DAML, test automation, Snowflake, and Ionic.
Every technology in Techhub has a number of templates. Once you click on a specific technology, you'll see all the templates for it. These templates are downloadable: you provide your email address and receive the download link in your mail.
These templates help you learn the practical implementation of a topic with ease. Using them, you can learn and kick-start your development in no time.
Apart from learning, there are some out-of-the-box templates that can help solve your business problem, with all the basic dependencies and implementations already plugged in. Techhub calls these templates xlr8rs (pronounced as accelerators).
xlr8rs make your development really fast: you just add your core business logic to the template.
If the template you are looking for is not available, you can request one, be it for learning or as a solution to your business problem, and Techhub will connect with you to provide it. Isn't this helpful? 🙂
To keep you updated, Knoldus Techhub provides information on the most trending technologies and the most downloaded templates at present. This way, you'll stay informed and can learn the ones that are most trending.
Since we believe:
“There’s always a scope of improvement”
If you still feel like it isn’t helping you in learning and development, you can provide your feedback in the feedback section in the bottom right corner of the website.
#ai #akka #akka-http #akka-streams #amazon ec2 #angular 6 #angular 9 #angular material #apache flink #apache kafka #apache spark #api testing #artificial intelligence #aws #aws services #big data and fast data #blockchain #css #daml #devops #elasticsearch #flink #functional programming #future #grpc #html #hybrid application development #ionic framework #java #java11 #kubernetes #lagom #microservices #ml, ai and data engineering #mlflow #mlops #mobile development #mongodb #non-blocking #nosql #play #play 2.4.x #play framework #python #react #reactive application #reactive architecture #reactive programming #rust #scala #scalatest #slick #software #spark #spring boot #sql #streaming #tech blogs #testing #user interface (ui) #web #web application #web designing #angular #coronavirus #development #golang #ionic #kafka #knoldus #learn #machine learning #ml #pandemic #skills #snowflake #spark streaming #techhub #technology #test automation #time management #upgrade
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
Unbounded data refers to continuous, never-ending data streams with no beginning or end. They are made available over time, and anyone who wishes to act upon them can do so without downloading them first.
As Martin Kleppmann stated in his famous book, unbounded data will never “complete” in any meaningful way.
“In reality, a lot of data is unbounded because it arrives gradually over time: your users produced data yesterday and today, and they will continue to produce more data tomorrow. Unless you go out of business, this process never ends, and so the dataset is never “complete” in any meaningful way.”
— Martin Kleppmann, Designing Data-Intensive Applications
Processing unbounded data requires an entirely different approach than its counterpart, batch processing. This article summarises the value of unbounded data and how you can build systems to harness the power of real-time data.
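For illustration, here is a minimal sketch of acting on an unbounded stream as elements arrive, written with Akka Streams (an assumption for this example, Akka 2.6+; the ticking source stands in for a real feed such as Kafka or a socket):

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.Source
import scala.concurrent.duration._

object UnboundedStreamDemo extends App {
  implicit val system: ActorSystem = ActorSystem("unbounded-demo")

  // An unbounded source: emits an element every second, with no defined end.
  Source
    .tick(0.seconds, 1.second, ())
    .zipWithIndex
    .map { case (_, i) => s"event-$i" }
    // Act on each element as it arrives, instead of waiting for a
    // "complete" dataset that will never exist.
    .runForeach(println)
}
```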
#stream-processing #software-architecture #event-driven-architecture #data-processing #data-analysis #big-data-processing #real-time-processing #data-storage