Announcing the new Azure Data Tables Libraries

We’re excited to announce that the Azure Data Tables libraries have been released for .NET, Java, JavaScript/TypeScript, and Python. The Azure Table service stores NoSQL data in the cloud using a schema-less key/attribute store design. Table storage can hold flexible data sets such as user data for web applications, address books, device information, and other types of metadata.

The new libraries follow our Azure SDK Guidelines, making for an idiomatic, consistent, approachable, diagnosable, and dependable library. The new libraries use the language-specific Azure Core packages for handling requests, errors, and credentials.

Note: The Azure Data Tables libraries can target both Azure Table Storage and Azure Cosmos DB Table API endpoints.

SDK Availability

The Azure Data Tables libraries can be downloaded from each language’s preferred package manager.
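
As a quick reference, the install commands below are a sketch (package names as published on each public registry; Java’s coordinates go in your Maven or Gradle build file):

dotnet add package Azure.Data.Tables      # .NET (NuGet)
pip install azure-data-tables             # Python (PyPI)
npm install @azure/data-tables            # JavaScript/TypeScript (npm)
# Java: add com.azure:azure-data-tables as a dependency in Maven or Gradle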

To follow along with these snippets, you’ll need a working developer environment for your preferred programming language (Python, .NET, Java, or JavaScript), a text editor, and a Storage or Cosmos Table account. If you don’t have an account yet, refer to the Getting Started guide for your language.

Migration guides will be added to each project’s homepage, with specific examples for updating your code base to the new Azure Data Tables libraries discussed in this post.

Creating the clients

There are two clients for interacting with the service: the TableServiceClient is used for account-level interactions (creating tables, setting and getting access policies), and the TableClient is used for table-level interactions (creating or deleting entities, querying or listing entities). You can create clients with a key, a Shared Access Signature, or a connection string, all of which can be found in the Azure Portal.
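
As a minimal C# sketch (the connection string and table name here are placeholders, not values from this post):

using Azure.Data.Tables;

// Account-level client: create tables, set and get access policies.
var serviceClient = new TableServiceClient("<your-connection-string>");
await serviceClient.CreateTableIfNotExistsAsync("MyTable");

// Table-level client: create, query, and delete entities in that table.
TableClient tableClient = serviceClient.GetTableClient("MyTable");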

#azure sdk #.net #azure-sdk #java #javascript #python #tables

iOS App Dev

Your Data Architecture: Simple Best Practices for Your Data Strategy

If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.

In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.

#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition

Gerhard Brink

Getting Started With Data Lakes

Frameworks for Efficient Enterprise Analytics

The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.

This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.

Introduction

As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).


This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.

#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management

Gerhard Brink

Introduction to Data Libraries for Small Data Science Teams

At smaller companies, access to and control of data is one of the biggest challenges faced by data analysts and data scientists. The same is true at larger companies when an analytics team is forced to navigate bureaucracy, cybersecurity, and over-taxed IT, rather than benefit from a team of data engineers dedicated to collecting and making good data available.

Creative, persistent analysts find ways to get access to at least some of this data. Through a combination of daily processes (saving email attachments, running database queries, and copying and pasting from internal web pages), one might build up a mighty collection of data sets on a personal computer, in a team shared drive, or even in a database.

But this solution does not scale well, and is rarely documented and understood by others who could take it over if a particular analyst moves on to a different role or company. In addition, it is a nightmare to maintain. One may spend a significant part of each day executing these processes and troubleshooting failures; there may be little time to actually use this data!

I lived this for years at different companies. We found ways to be effective but data management took up way too much of our time and energy. Often, we did not have the data we needed to answer a question. I continued to learn from the ingenuity of others and my own trial and error, which led me to the theoretical framework that I will present in this blog series: building a self-managed data library.

A data library is _not_ a data warehouse, data lake, or any other formal BI architecture. It does not require any particular technology or skill set (coding will not be required, but it will greatly increase the speed at which you can build and the degree of automation possible). So what is a data library, and how can a small data analytics team use it to overcome the challenges I’ve described?

#big data #cloud & devops #data libraries #small data science teams #introduction to data libraries for small data science teams #data science

Ruthie Bugala

Using the new C# Azure.Data.Tables SDK with Azure Cosmos DB

Last month, the Azure SDK team released a new library for Azure Tables for .NET, Java, JS/TS, and Python. This release brings the Table SDK in line with other Azure SDKs: the libraries use the language-specific Azure Core packages for handling requests, errors, and credentials.

Azure Cosmos DB provides a Table API offering that is essentially Azure Table Storage on steroids! If you need a globally distributed table storage service, Azure Cosmos DB should be your go-to choice.

If you’re making a choice between Azure Cosmos DB Table API and regular Azure Table Storage, I’d recommend reading the following article.

In this article, I’ll show you how we can perform simple operations against an Azure Cosmos DB Table API account using the new Azure.Data.Tables C# SDK. Specifically, we’ll go over:

  • Installing the SDK 💻
  • Connecting to our Table Client and Creating a table 🔨
  • Defining our entity 🧾
  • Adding an entity ➕
  • Performing Transactional Batch Operations 💰
  • Querying our Table ❓
  • Deleting an entity ❌

Let’s dive into it!

Installing the SDK 💻

Installing the SDK is pretty simple. We can do so by running the following dotnet command:

dotnet add package Azure.Data.Tables

If you prefer using a UI to install NuGet packages, you can right-click the C# project in Visual Studio, click Manage NuGet Packages, and search for the Azure.Data.Tables package.

Connecting to our Table Client and Creating a table 🔨

The SDK provides us with two clients to interact with the service. A TableServiceClient is used for interacting with the service at the account level. We use this for creating tables, setting access policies, etc.
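
As a minimal sketch of those account-level operations (assuming the same config["StorageConnection"] setting used below):

// Account-level client built from a connection string.
TableServiceClient serviceClient = new TableServiceClient(config["StorageConnection"]);

// Create a table if it doesn't already exist.
await serviceClient.CreateTableIfNotExistsAsync("Customers");

// List the tables in the account.
await foreach (TableItem table in serviceClient.QueryAsync())
{
    Console.WriteLine(table.Name);
}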

We can also use a TableClient, which is used for performing operations on our entities. The TableClient can create tables as well, like so:

TableClient tableClient = new TableClient(config["StorageConnection"], "Customers");
await tableClient.CreateIfNotExistsAsync();

To create our TableClient, I’m passing in my storage connection string from Azure and the name of the table I want to interact with. On the following line, we create the table if it doesn’t exist.

To get our storage connection string, we can copy it from our Cosmos DB account under Connection String.

When we run this code for the first time, we can see that the table has been created in our Data Explorer.

Defining our entity 🧾

In Table Storage, we create entities in our table that require a Partition Key and a Row Key. The combination of these needs to be unique within our table.

Entities have a set of properties, and strongly typed entities need to implement the ITableEntity interface, which exposes PartitionKey, RowKey, ETag, and Timestamp properties. ETag and Timestamp will be generated by Cosmos DB, so we don’t need to set them.

For this tutorial, I’m going to use the above-mentioned properties along with two string properties (Email and PhoneNumber) to make up a CustomerEntity type.
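
A minimal sketch of what that entity might look like (this assumes using Azure; and using Azure.Data.Tables; at the top of the file):

public class CustomerEntity : ITableEntity
{
    // Required by ITableEntity.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public ETag ETag { get; set; }
    public DateTimeOffset? Timestamp { get; set; }

    // Custom properties for this tutorial.
    public string Email { get; set; }
    public string PhoneNumber { get; set; }
}

Adding an entity could then look along these lines (the values are illustrative):

CustomerEntity customer = new CustomerEntity
{
    PartitionKey = "Customers",
    RowKey = Guid.NewGuid().ToString(),
    Email = "jane@example.com",
    PhoneNumber = "555-0100"
};
await tableClient.AddEntityAsync(customer);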

#csharp #programming #azure #data #azure cosmos db #azure

Cyrus Kreiger

How Has COVID-19 Impacted Data Science?

The COVID-19 pandemic disrupted supply chains and brought economies around the world to a standstill. In turn, businesses need access to accurate, timely data more than ever before. As a result, the demand for data analytics is skyrocketing as businesses try to navigate an uncertain future. However, the sudden surge in demand comes with its own set of challenges.

Here is how the COVID-19 pandemic is affecting the data industry and how enterprises can prepare for the data challenges to come in 2021 and beyond.

#big data #data #data analysis #data security #data integration #etl #data warehouse #data breach #elt