Vern Greenholt

How to automatically process data with AWS Fargate

Nowadays, every data scientist should know how to integrate their models with a cloud platform, both to enhance their work and to become more valuable as a data scientist. Unfortunately, integration concepts can be hard when you are a beginner. Luckily, this story is for you if you want to build your first machine learning pipeline in the cloud, and more precisely on Amazon Web Services (AWS).

Pipeline architecture

As you can see in the schema, the pipeline's input is an upload of some data to S3, and the pipeline's output is the preprocessed data written back to S3. Everything in the pipeline is automated.

The AWS services I will use for this pipeline are the following:

  • S3 (Simple Storage Service): A service that provides object storage through a web service interface.
  • Lambda: Lets you run code without provisioning or managing servers.
  • SQS (Simple Queue Service): A fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
  • ECS (Elastic Container Service): A fully managed container orchestration service. You can choose to run your ECS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
  • ECR (Elastic Container Registry): A fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
  • Step Functions: Lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly.

If you are already familiar with some AWS services and only interested in part of the tutorial, here is a table of contents so you can go directly to the part you need.

  1. Create an S3 bucket.
  2. Create an SQS queue.
  3. Code your Lambda function, triggered on an S3 event, to send a message to the queue.
  4. Create your data preprocessing scripts.
  5. Build a Docker image to run your scripts.
  6. Push your Docker image to ECR.
  7. Create a Fargate task and cluster.
  8. Orchestrate your pipeline with Step Functions.

This tutorial requires an AWS account configured in the AWS CLI (or an IAM user with permissions) and Docker installed locally on your computer.

Create an S3 bucket

The first step is to create an S3 bucket, which will allow you to upload data files (json, csv, xlsx, …).

Go to the AWS console, open the S3 service, and click on “Create Bucket”. It will ask you for a bucket name, which must be **unique**, and for a location where your bucket will be hosted. Your location must be the same for every service in the same project. Then you can just click ‘Next’ until your bucket is created.
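If you prefer scripting the console steps, here is a minimal Boto3 sketch of the same bucket creation. The bucket name and region are placeholders (bucket names must be globally unique), and the helper split is just for illustration:

```python
def bucket_params(bucket_name, region):
    """Build the create_bucket parameters for a given region."""
    params = {'Bucket': bucket_name}
    # Outside us-east-1, the region must be passed as a LocationConstraint
    if region != 'us-east-1':
        params['CreateBucketConfiguration'] = {'LocationConstraint': region}
    return params

def create_data_bucket(bucket_name, region='eu-west-1'):
    import boto3  # requires boto3 installed and AWS credentials configured
    s3 = boto3.client('s3', region_name=region)
    return s3.create_bucket(**bucket_params(bucket_name, region))
```

Calling `create_data_bucket('my-preprocessing-bucket')` creates the bucket in the placeholder region.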

Create an SQS queue

Now go to the SQS service in the AWS console, create a new queue, give it the name you want, and select ‘Standard Queue’. Then click on ‘Quick-Create Queue’.
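The queue can also be created with Boto3. A minimal sketch follows; the attribute values shown (visibility timeout, retention period) are example choices, not values the console step requires, and note that SQS attribute values must be strings:

```python
def queue_attributes(visibility_timeout_s=300, retention_s=86400):
    """Build the SQS queue attributes; SQS expects string values."""
    return {
        'VisibilityTimeout': str(visibility_timeout_s),
        'MessageRetentionPeriod': str(retention_s),
    }

def create_standard_queue(queue_name):
    import boto3  # requires boto3 installed and AWS credentials configured
    sqs = boto3.client('sqs')
    response = sqs.create_queue(QueueName=queue_name,
                                Attributes=queue_attributes())
    return response['QueueUrl']  # keep this URL, the Lambda function needs it
```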

Code your Lambda function triggered on an S3 event to send a message to the queue

Now that you have an S3 bucket and an SQS queue, the goal is to send a message to the SQS queue when a file is uploaded to S3. The Fargate task will ask the SQS queue what it has to do.
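To give an idea of the consumer side, here is a sketch of how the Fargate task could poll the queue. The attribute names `bucket` and `key` mirror what the Lambda function sends; the polling loop and helper names are assumptions for illustration, not the exact task code:

```python
def parse_message(message):
    """Extract the bucket name and file key from SQS message attributes."""
    attrs = message.get('MessageAttributes', {})
    return attrs['bucket']['StringValue'], attrs['key']['StringValue']

def poll_once(queue_url):
    import boto3  # requires boto3 installed and AWS credentials configured
    sqs = boto3.client('sqs')
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MessageAttributeNames=['All'],  # needed to receive the attributes
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,             # long polling
    )
    for message in response.get('Messages', []):
        bucket, key = parse_message(message)
        # ... preprocess s3://<bucket>/<key> here ...
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=message['ReceiptHandle'])
```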

Go to the Lambda service and create a new function. Indicate the function name and the runtime you want. (I will use Python 3.8.)

Then, in the ‘Execution role’ section, you have to indicate a role with the ‘S3 full access’, ‘SQS full access’ (so the function can send messages to the queue), and ‘Step Functions full access’ policies. If you do not have one, go create one in the IAM console.

It is now time to code. As I said before, this tutorial will be exclusively coded in Python, but feel free to use the language you want.

With Python, the library for interacting with AWS services is Boto3.

The code you put in the Lambda function should look like this:

import boto3

sqs = boto3.client('sqs')
queue_url = 'your_queue_url'

def lambda_handler(s3_event, context):
    # Loop over the records of the S3 event
    for record in s3_event.get("Records"):
        bucket = record.get("s3").get("bucket").get("name")
        key = record.get("s3").get("object").get("key")

        # Send the bucket name and file key to the SQS queue
        response = sqs.send_message(
            QueueUrl=queue_url,
            MessageBody='New file uploaded to S3',
            MessageAttributes={
                'bucket': {
                    'DataType': 'String',
                    'StringValue': bucket
                },
                'key': {
                    'DataType': 'String',
                    'StringValue': key
                }
            }
        )
In this code, you get the bucket and the file key from the S3 event and send these two pieces of information to the queue.

Once the function is coded, you have to trigger it with the S3 event. In the ‘Designer’ section, click on ‘Add trigger’, choose S3, and indicate the bucket whose uploads should trigger your Lambda function. The prefix and suffix are useful to filter which files must trigger the function. If you want to trigger only when a csv file is uploaded to a directory called data, just indicate data/ as the prefix and .csv as the suffix. Then click ‘Add’ and your trigger is now ready.
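The same trigger can be configured programmatically. Here is a sketch of the notification payload you would pass to Boto3's put_bucket_notification_configuration; the Lambda ARN is a placeholder, and you would also need to grant S3 permission to invoke the function:

```python
def notification_config(lambda_arn, prefix='data/', suffix='.csv'):
    """Build the S3 event notification payload for a Lambda trigger."""
    return {
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': lambda_arn,
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': prefix},
                {'Name': 'suffix', 'Value': suffix},
            ]}},
        }]
    }

def attach_trigger(bucket, lambda_arn):
    import boto3  # requires boto3 installed and AWS credentials configured
    boto3.client('s3').put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=notification_config(lambda_arn),
    )
```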

Note that if you write to the same bucket that triggers the function, you must create directories and indicate a prefix; otherwise the trigger will loop infinitely, since each output file would trigger a new run.
