1596511140
Apache Hudi is in use at organizations such as Alibaba Group, EMIS Health, Linknovate, Tathastu.AI, Tencent, and Uber, and is supported as part of Amazon EMR by Amazon Web Services and Google Cloud Platform. Recently, Amazon Athena added support for querying Apache Hudi datasets in an Amazon S3-based data lake. In this blog, I am going to test it and see whether Athena can read Hudi-format datasets in S3.
We need Spark to write the Hudi data. Log in to Amazon EMR and launch a spark-shell:
$ export SCALA_VERSION=2.12
$ export SPARK_VERSION=2.4.4
$ spark-shell \
--packages org.apache.hudi:hudi-spark-bundle_${SCALA_VERSION}:0.5.3,org.apache.spark:spark-avro_${SCALA_VERSION}:${SPARK_VERSION} \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
...
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.4
/_/
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_242)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
Now input the following Scala code to set up the table name, the base path, and a data generator that produces sample records for this article. Here we set the base path to the folder s3://hudi_athena_test/hudi_trips in an Amazon S3 bucket, so we can query it later:
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
val tableName = "hudi_trips"
val basePath = "s3://hudi_athena_test/hudi_trips"
val dataGen = new DataGenerator
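The DataGenerator comes from Hudi's QuickstartUtils and produces sample trip records. As a sketch of the write step that follows (based on the Hudi 0.5.x quickstart; verify the field and option names against your Hudi version), we can generate a few trips and save them as a Hudi dataset under the base path:
// Generate 10 sample trip records and load them into a DataFrame
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
// Write the DataFrame to S3 in Hudi format; "uuid", "partitionpath",
// and "ts" are the fields produced by the quickstart DataGenerator
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Overwrite).
  save(basePath)
After the write completes, the base path holds Hudi's partitioned Parquet files plus the .hoodie metadata folder that Athena's Hudi support reads.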
#data-lake #athena #hudi #aws-emr #spark
1620629020
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
1595595360
We recently wrote an article debunking common myths about data lake architectures, data lake definitions, and data lake analytics. It is called "What is a Data Lake? Get A Leg Up Avoiding The Biggest Myths." In that article, we framed the current conversation about data lakes and how they fit within enterprise data strategies. This topic has historically been confusing and opaque for those wanting to get value from a data lake, due to conflicting advice from consultants and vendors.
One area that can be particularly confusing is the perception that lakes are only for “big data.” If you spend any time reading materials on lakes, you would think there is only one type and it would look like the Caspian Sea (it’s a lake despite “sea” in the name). People describe data lakes as massive, all-encompassing entities, designed to hold all knowledge. The good news is that lakes are not just for “big data,” and you have more opportunities than ever to make them part of your data stack.
Just as they do in nature, lakes come in all different shapes and sizes. Each has a natural state, often reflecting ecosystems of data, just like those in nature reflect ecosystems of fish, birds, or other organisms.
Unfortunately, the “big data” angle gives the impression that lakes are only for “Caspian”-scale data endeavors, which makes the use of data lakes intimidating. Describing things in such massive terms puts the concept of a lake out of reach for those who could benefit from one on a smaller scale. Here are a few data lake examples:
We recently worked with a customer to create a “Domain” type lake. This lake holds Adobe event data in AWS to support an enterprise Oracle Cloud environment. Why AWS to Oracle? It was an efficient and cost-effective data consumption pattern for the customer’s Oracle BI environment, especially considering the agility and economics of using an AWS lake and Athena as the on-demand query service for lake content.
By design, all types of lakes should embrace an abstraction that minimizes risk and affords you greater flexibility. They should also be structured for easy consumption independent of their size. This ensures that data scientists, business users, and analysts alike have an environment structured for easy data consumption.
Being a successful early adopter means taking a business value approach rather than a technology one. Here are a few tips as you think about how to get started:
#big data #data lake #data lakes #data lake architecture #data lake solutions #data analysis
1618053720
Databases store data in a structured form. The structure makes it possible to find and edit data. Thanks to this structure, databases are used for data management, data storage, data evaluation, and targeted processing of data.
In this sense, data is all information that is to be saved and later reused in various contexts: date and time values, text, addresses, numbers, and even pictures. The data should be able to be evaluated and processed later.
The amount of data a database can store is limited, so enterprise companies tend to use data warehouses, which are built for huge streams of data.
#data-warehouse #data-lake #cloud-data-warehouse #what-is-aws-data-lake #data-science #data-analytics #database #big-data #web-monetization
1624546800
As data mesh advocates come to suggest that the data mesh should replace the monolithic, centralized data lake, I wanted to check in with Dipti Borkar, co-founder and Chief Product Officer at Ahana. Dipti has been a tremendous resource for me over the years as she has held leadership positions at Couchbase, Kinetica, and Alluxio.
According to Dipti, while data lakes and data mesh both have use cases they work well for, data mesh can’t replace the data lake unless all data sources are created equal — and for many, that’s not the case.
Not all data sources are equal. There are different dimensions to data:
Each data source has its purpose. Some are built for fast access for small amounts of data, some are meant for real transactions, some are meant for data that applications need, and some are meant for getting insights on large amounts of data.
Things changed when AWS commoditized the storage layer with the S3 object store 15 years ago. Given the ubiquity and affordability of S3 and other cloud storage, companies are moving most of this data to cloud object stores and building data lakes, where it can be analyzed in many different ways.
Because of the low cost, enterprises can store all of their data, whether enterprise, third-party, IoT, or streaming, in an S3 data lake. However, the data cannot be processed in the object store itself; you need engines on top, such as Hive, Presto, and Spark, to process it. Hadoop tried to do this with limited success. Presto and Spark have solved the SQL-on-S3 query problem.
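To make the pattern concrete, here is a minimal sketch of SQL-on-S3 using Spark from a spark-shell session (the bucket, path, and column names are hypothetical):
// Read raw Parquet files directly out of an S3 data lake
// (example bucket and schema, for illustration only)
val events = spark.read.parquet("s3://example-datalake/events/")
// Register a SQL view and run the query where the data lives:
// the engine does the processing while S3 stays a passive store
events.createOrReplaceTempView("events")
spark.sql("SELECT event_type, COUNT(*) AS cnt FROM events GROUP BY event_type ORDER BY cnt DESC").show()
Presto achieves the same thing with an external table over the same S3 path, which is what makes these engines interchangeable layers on top of one lake.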
#big data #big data analytics #data lake #data lake and data mesh #data mesh