GeoSpark stands out for processing geospatial data at scale

In the past decade, the volume of available geospatial data has increased tremendously. Such data includes, but is not limited to, weather maps, socio-economic data, and geo-tagged social media. For example, NASA spacecraft continuously monitor the state of the earth, including land temperature and atmospheric humidity, and NASA has released over 22 PB of satellite data to date. There are now close to 5 billion mobile devices around the world, and the apps running on them generate enormous volumes of geospatial data. For instance, Lyft, Uber, and Mobike collect terabytes of GPS data from millions of riders every day. In fact, everything we do on our mobile devices leaves digital traces on the surface of the Earth. Moreover, the unprecedented popularity of GPS-equipped mobile devices and Internet of Things (IoT) sensors has led to the continuous generation of large-scale location data combined with the status of the surrounding environment. For example, several cities have started installing sensors at road intersections to monitor the environment, traffic, and air quality.

Making sense of the rich geospatial properties hidden in this data could greatly transform our society. Subjects under intense study include climate change analysis, deforestation, population migration, pandemic spread, urban planning, transportation, commerce, and advertisement. These data-intensive geospatial analytics applications rely heavily on the underlying data management systems (DBMSs) to efficiently retrieve, process, wrangle, and manage data.

GeoSpark Overview

GeoSpark is a cluster computing framework that can process geospatial data at scale. GeoSpark extends the Resilient Distributed Dataset (RDD), the core data structure in Apache Spark, to accommodate big geospatial data in a cluster. A Spatial RDD consists of data partitions that are distributed across the Spark cluster. A Spatial RDD can be created by an RDD transformation or loaded from a file on permanent storage. This layer provides a number of APIs that allow users to read heterogeneous spatial objects from various data formats.
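As a minimal Scala sketch of this layer, the snippet below creates a Spatial RDD of points from a CSV file with the RDD API. The file path, column layout, and application name are hypothetical; the GeoSpark classes shown (PointRDD, FileDataSplitter, GeoSparkKryoRegistrator) come from the org.datasyslab.geospark packages.

```scala
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.sql.SparkSession
import org.datasyslab.geospark.enums.FileDataSplitter
import org.datasyslab.geospark.serde.GeoSparkKryoRegistrator
import org.datasyslab.geospark.spatialRDD.PointRDD

// Kryo serialization is recommended so spatial objects are serialized efficiently
val spark = SparkSession.builder()
  .appName("GeoSparkExample") // hypothetical application name
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.kryo.registrator", classOf[GeoSparkKryoRegistrator].getName)
  .getOrCreate()
val sc = JavaSparkContext.fromSparkContext(spark.sparkContext)

// Hypothetical input: a CSV whose first two columns (offset 0) are longitude and latitude;
// the final flag keeps the remaining columns as non-spatial attributes
val pointRDD = new PointRDD(sc, "hdfs://data/checkins.csv", 0, FileDataSplitter.CSV, true)
```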

GeoSpark allows users to issue queries using the out-of-the-box Spatial SQL API and RDD API. The RDD API provides a set of programmatic interfaces in Scala, Java, Python, and R. The Spatial SQL interface offers a declarative language, giving users more flexibility when creating their own applications. The SQL API implements the SQL/MM Part 3 standard, which is widely used in existing spatial databases such as PostGIS (on top of PostgreSQL). Next, we show how to use GeoSpark.
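As a brief sketch of the Spatial SQL API: after registering GeoSpark's SQL functions with a Spark session, SQL/MM constructors and predicates such as ST_GeomFromWKT, ST_PolygonFromEnvelope, and ST_Contains become available in queries. The table and column names below (cities_raw, geom_wkt, name) are made up for illustration.

```scala
import org.datasyslab.geosparksql.utils.GeoSparkSQLRegistrator

// Register GeoSpark's ST_* functions with the existing Spark session
GeoSparkSQLRegistrator.registerAll(spark)

// Hypothetical CSV with a "name" column and a WKT geometry column "geom_wkt"
spark.read.option("header", "true").csv("cities.csv").createOrReplaceTempView("cities_raw")

// Parse the WKT text into geometry objects
spark.sql(
  "SELECT name, ST_GeomFromWKT(geom_wkt) AS geom FROM cities_raw"
).createOrReplaceTempView("cities")

// Spatial range query: which cities fall inside a rectangular query window?
val result = spark.sql(
  """SELECT name
    |FROM cities
    |WHERE ST_Contains(ST_PolygonFromEnvelope(-120.0, 30.0, -110.0, 40.0), geom)""".stripMargin)
result.show()
```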

Supported spatial data sources in GeoSpark

In the past, researchers and practitioners have developed a number of geospatial data formats for different purposes, but these heterogeneous formats make it difficult to integrate geospatial data. For example, WKT is a widely used format that stores geometries as human-readable text, often in tab-separated-value files. A Shapefile is a spatial data file composed of several sub-files, such as an index file and a non-spatial attribute file. In addition, geospatial data usually comes in different shapes such as points, polygons, and trajectories.

Currently, GeoSpark can read WKT, WKB, GeoJSON, Shapefile, and NetCDF/HDF data from external storage systems such as local disk, Amazon S3, and the Hadoop Distributed File System (HDFS) into Spatial RDDs. Spatial RDDs can accommodate the spatial object types Point, Multi-Point, Polygon, Multi-Polygon, LineString, Multi-LineString, GeometryCollection, and Circle. Moreover, spatial objects of different shapes can co-exist in the same Spatial RDD because GeoSpark adopts a flexible design that generalizes the geometrical computation interfaces of the different spatial object types.
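Here is a short sketch of loading some of these formats with GeoSpark's format mappers; the paths are hypothetical, and the reader classes (WktReader, GeoJsonReader, ShapefileReader) come from the org.datasyslab.geospark.formatMapper packages.

```scala
import org.datasyslab.geospark.formatMapper.{GeoJsonReader, WktReader}
import org.datasyslab.geospark.formatMapper.shapefileParser.ShapefileReader

// WKT: geometry text in column 0; allow topologically invalid geometries,
// but do not skip syntactically invalid lines
val countiesRDD = WktReader.readToGeometryRDD(sc, "hdfs://data/counties.tsv", 0, true, false)

// GeoJSON: one feature per line
val parksRDD = GeoJsonReader.readToGeometryRDD(sc, "s3a://my-bucket/parks.json")

// Shapefile: point the reader at the directory holding the .shp/.shx/.dbf sub-files
val roadsRDD = ShapefileReader.readToGeometryRDD(sc, "/data/roads_shapefile")
```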

**Spatial RDD built-in geometrical library:** It is quite common for spatial data scientists to need geometrical attributes of the spatial objects in GeoSpark, such as perimeter, area, and intersection. Spatial RDDs ship with a built-in geometrical library that performs these operations at scale, so users do not have to solve sophisticated computational geometry problems themselves. Currently, GeoSpark provides over 20 functions in this library, grouped into two categories (a short sketch follows the descriptions below).

Regular geometry functions are applied to every single spatial object in a Spatial RDD. For every object, they generate a corresponding result such as its perimeter or area. The output is either a regular RDD or a Spatial RDD.

Geometry aggregation functions are applied to a Spatial RDD to produce an aggregate value. They generate a single value or spatial object for the entire Spatial RDD. For example, GeoSpark can compute the bounding box or the polygonal union of an entire Spatial RDD.
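A minimal sketch of both categories follows, assuming a PolygonRDD named polygonRDD has already been created. The exact names of the library's functions may differ; the per-object and union operations below simply go through the JTS geometries held by the Spatial RDD, while analyze() and boundaryEnvelope are GeoSpark's dataset-level statistics.

```scala
// Regular geometry function: one result per spatial object (here, its area)
val areas = polygonRDD.rawSpatialRDD.rdd.map(geom => geom.getArea)

// Geometry aggregation: one result for the whole Spatial RDD.
// analyze() computes dataset statistics, including the overall bounding box
polygonRDD.analyze()
val boundingBox = polygonRDD.boundaryEnvelope

// Polygonal union of the entire RDD, expressed as a reduce over JTS geometries
val unionPolygon = polygonRDD.rawSpatialRDD.rdd.reduce((a, b) => a.union(b))
```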

Run queries using the RDD API

Here, we outline the steps to create Spatial RDDs and run spatial queries using the GeoSpark RDD API. The example code is written in Scala, but the same APIs are available in Java.

Setup dependencies: Before using GeoSpark, users must add the corresponding package to their project as a dependency. To make dependency management easy, the binary packages of GeoSpark are hosted on the Maven Central Repository. As long as the project is managed by a popular build tool such as Apache Maven or sbt, users can add GeoSpark simply by declaring the artifact in the project specification file, such as pom.xml or build.sbt.
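For example, a minimal build.sbt entry might look like the following; the version number and the Spark-version suffix of the SQL artifact are assumptions and should be matched to your cluster.

```scala
// build.sbt (sketch) – GeoSpark core RDD API and the Spatial SQL module
libraryDependencies ++= Seq(
  "org.datasyslab" % "geospark"         % "1.3.1",
  "org.datasyslab" % "geospark-sql_2.3" % "1.3.1" // pick the suffix matching your Spark version
)
```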

#database #data-science #spatial-analysis #geospatial #gis #data-analysis
