Birdie Zboncak

Intro to Cassandra - Tables, Partitions, and Examples

Welcome to the Intro to Cassandra Crash Course! In this video series, we’ll go over the basics of how Apache Cassandra works and get hands-on with implementing a basic Cassandra database in the cloud!

📥 Workshop Materials:
https://github.com/DataStax-Academy/Intro-to-Cassandra-for-Developers

Learn more about Cassandra at Apache (https://cassandra.apache.org/doc/latest/) and DataStax (https://docs.datastax.com/en/landing_page/doc/landing_page/cassandra.html) documentation websites.

#cassandra #developer

Julie Donnelly

Beginner’s Guide to Table Partitioning In PostgreSQL

Table partitioning in SQL, as the name suggests, is a process of dividing large data tables into small manageable parts, such that each part has its own name and characteristics.

Table partitioning helps significantly improve database server performance, as fewer rows have to be read, processed, and returned. We can also use partitioning techniques for dividing indexes and index-organized tables.

Table partitioning can be of two types: vertical partitioning or horizontal partitioning. In vertical partitioning, we divide the table column-wise, while in horizontal partitioning, we divide the table row-wise on the basis of ranges of values in a certain column.

Syntax and parameters

The basic syntax for partitioning a table using range is as follows:

Main table creation:

CREATE TABLE main_table_name (
    column_1 data_type,
    column_2 data_type,
    ...
) PARTITION BY RANGE (column_2);

Partition table creation:

CREATE TABLE partition_name
    PARTITION OF main_table_name
    FOR VALUES FROM (start_value) TO (end_value);

The parameters used in the above syntax are the same as in a regular CREATE TABLE statement, except for these:

PARTITION BY RANGE (column_2): column_2 is the column on whose values the partitions will be based.

partition_name: the name of the partition table.

FROM (start_value) TO (end_value): the range of values of column_2 that belongs to this partition. Note that start_value is inclusive, while end_value is exclusive.

Here is an example to illustrate it further.

Example

Imagine that you are working as a data engineer for an e-commerce firm that gets a huge number of orders on a daily basis. You store data such as order_id, order_at, customer_id, etc. in a SQL table called “e-transactions”. Since the table holds a humongous amount of data, slow load times and long query response times have become a problem for the data analysts who use this table to prepare KPIs on a daily basis.

What can you do to improve this table so that data analysts can run queries quickly?

A logical step would be partitioning the table into smaller parts. Let’s say we create partitions such that each partition stores data for a specified range of order dates only. This way, each partition holds less data and working with it becomes much faster.

We can partition the table using declarative partitioning, i.e. by using the PARTITION BY RANGE (column_name) clause, as shown below.
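
A minimal sketch of what that could look like, assuming a simplified e_transactions table keyed by order date (the column list and partition boundaries are illustrative, not the firm’s actual schema):

CREATE TABLE e_transactions (
    order_id     bigint,
    customer_id  bigint,
    order_at     date,
    amount       numeric
) PARTITION BY RANGE (order_at);

CREATE TABLE e_transactions_2020_q1
    PARTITION OF e_transactions
    FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');

CREATE TABLE e_transactions_2020_q2
    PARTITION OF e_transactions
    FOR VALUES FROM ('2020-04-01') TO ('2020-07-01');

Rows are routed to the matching partition automatically on insert, and queries that filter on order_at only need to scan the relevant partitions.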

#postgresql #drop-table #sql #alter-table #table-partitioning

Fredy Larson

How to alter tables in production when records are in millions

As a developer, I have experienced making changes to an app when it is already in production and its records have grown into the millions. In this specific case, if you want to alter a column, a simple migration will not work, for the following reason:

It is not so easy if your production servers are under heavy load and the database tables have 100 million rows, because such a migration will run for several seconds or even minutes and the table can be locked for that entire period, which is a no-go in a zero-downtime environment.

In this specific case you can use MySQL’s online DDL operations (the ALGORITHM and LOCK clauses). Here is how you can do it in Laravel.

First of all, create a migration. For example, if I want to rename a column, the traditional migration would be:

Schema::table('users', function (Blueprint $table) {
    $table->renameColumn('name', 'first_name');
});

Run the command php artisan migrate --pretend. This command will not run the migration; instead, it will print out its raw SQL:

ALTER TABLE users CHANGE name first_name VARCHAR(191) NOT NULL

Copy that raw SQL, then remove the following code from the migration:

Schema::table('users', function (Blueprint $table) {
    $table->renameColumn('name', 'first_name');
});

Replace it with the following in the migration's up() method:

\DB::statement('ALTER TABLE users CHANGE name first_name VARCHAR(191) NOT NULL');

Add the desired algorithm; in my case the query will look like this:

\DB::statement('ALTER TABLE users CHANGE name first_name VARCHAR(191) NOT NULL, ALGORITHM=INPLACE, LOCK=NONE');
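
For reference, here is a minimal sketch of how the complete migration class might look (the class name is illustrative, and the down() method simply reverses the rename using the same online DDL options):

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

class RenameNameToFirstNameOnUsersTable extends Migration
{
    public function up()
    {
        // Rename the column in place, without locking the table for writes.
        DB::statement('ALTER TABLE users CHANGE name first_name VARCHAR(191) NOT NULL, ALGORITHM=INPLACE, LOCK=NONE');
    }

    public function down()
    {
        // Reverse the rename with the same online DDL options.
        DB::statement('ALTER TABLE users CHANGE first_name name VARCHAR(191) NOT NULL, ALGORITHM=INPLACE, LOCK=NONE');
    }
}

Because ALGORITHM=INPLACE and LOCK=NONE are explicit clauses on the ALTER TABLE statement, MySQL will reject the migration with an error instead of silently falling back to a copying, locking operation if the in-place algorithm is not supported for that change.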

#laravel #mysql #php #alter heavy tables in production laravel #alter table in production laravel #alter tables with million of records in laravel #how to alter heavy table in production laravel #how to alter table in production larave #mysql online ddl operations

Chaz Homenick

Build fault tolerant applications with Cassandra API for Azure Cosmos DB

Azure Cosmos DB is a resource-governed system that allows you to execute a certain number of operations per second, based on the provisioned throughput you have configured. If clients exceed that limit and consume more request units than were provisioned, subsequent requests are rate limited and exceptions are thrown; these are also referred to as 429 errors.

With the help of a practical example, I’ll demonstrate how to incorporate fault-tolerance in your Go applications by handling and retrying operations affected by these rate limiting errors. To help you follow along, the sample application code for this blog is available on GitHub and it uses the gocql driver for Apache Cassandra. In this post, we’ll go through:

  • Initial setup and configuration before running the sample application
  • Execution of various load test scenarios and analysis of the results
  • A quick overview of the Retry Policy implementation.

One way of tackling rate limiting is to adjust the provisioned throughput to meet your application requirements. There are multiple ways to do this, including the Azure portal, the Azure CLI, and CQL (Cassandra Query Language) commands.

But what if you wanted to handle these errors in the application itself?

The good thing is that the Cassandra API for Azure Cosmos DB translates the rate limiting exceptions into overloaded errors on the Cassandra native protocol. Since the gocql driver allows you to plug in your own RetryPolicy, you can write a custom implementation to intercept these errors and retry them after a certain (cool-down) time period. This policy can then be applied to each Query or at a global level using a ClusterConfig.
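
To make the idea concrete, here is a minimal, simplified sketch of such a custom policy (this is not the extension library mentioned below; the fixed cool-down, retry count, and contact point are illustrative):

package main

import (
	"time"

	"github.com/gocql/gocql"
)

// 0x1001 is the Cassandra native-protocol error code for "overloaded",
// which is how the Cassandra API surfaces Cosmos DB 429s.
const overloadedErrCode = 0x1001

// throttleRetryPolicy retries queries that failed with an overloaded error,
// waiting a fixed cool-down between attempts.
type throttleRetryPolicy struct {
	maxRetries int
	coolDown   time.Duration
}

func (p *throttleRetryPolicy) Attempt(q gocql.RetryableQuery) bool {
	if q.Attempts() >= p.maxRetries {
		return false // give up after maxRetries attempts
	}
	time.Sleep(p.coolDown) // crude fixed back-off, just for the sketch
	return true
}

func (p *throttleRetryPolicy) GetRetryType(err error) gocql.RetryType {
	// Only retry overloaded (rate limiting) errors; everything else bubbles up.
	if reqErr, ok := err.(gocql.RequestError); ok && reqErr.Code() == overloadedErrCode {
		return gocql.Retry
	}
	return gocql.Rethrow
}

func main() {
	cluster := gocql.NewCluster("<account>.cassandra.cosmos.azure.com")
	// Apply the policy globally; it could also be set per query via Query.RetryPolicy().
	cluster.RetryPolicy = &throttleRetryPolicy{maxRetries: 5, coolDown: 2 * time.Second}

	session, err := cluster.CreateSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
}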

The Azure Cosmos DB extension library makes it quite easy to use Retry Policies in your Java applications. An equivalent Go version is available on GitHub and has been used in the sample application for this blog post.

Retry Policy in action

As promised, we will walk through the entire process using a simple yet practical example. The sample application used to demonstrate the concepts is a service that exposes a REST endpoint to POST orders data which is persisted to a Cassandra table in Azure Cosmos DB.

You will run a few load tests on this API service to see how rate limiting manifests itself and how it’s handled.

Pre-requisites

Start by installing hey, a load testing program. You can download OS-specific binaries (64-bit) for Linux, Mac, and Windows (please refer to the GitHub repo for the latest information in case you face issues downloading the utility).

You can use any other tool that allows you to generate load on an HTTP endpoint.
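
For instance, a typical hey invocation against the service's orders endpoint could look like this (the URL, payload, and request counts are illustrative and not necessarily the ones used by the sample application):

hey -n 1000 -c 25 -m POST \
    -T "application/json" \
    -d '{"amount": 100, "location": "New York"}' \
    http://localhost:8080/orders

This sends 1000 POST requests with a concurrency of 25, which is usually enough to exceed a modest provisioned throughput and trigger rate limiting.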

Clone this GitHub repo and change into the right directory:

git clone https://github.com/abhirockzz/cosmos-go-rate-limiting
cd cosmos-go-rate-limiting

#cassandra api #apache cassandra #appdev #cassandra #go #paas

Laravel AJAX CRUD Example Tutorial

Hello Guys,

Today I will show you how to create a Laravel AJAX CRUD example tutorial. In this tutorial we implement AJAX CRUD operations in Laravel: insert, update, and delete records using AJAX. You can use this approach in Laravel 6 and Laravel 7, and the records are displayed in a DataTable.

Read More: Laravel AJAX CRUD Example Tutorial

https://www.techsolutionstuff.com/post/laravel-ajax-crud-example-tutorial


Read Also: Laravel 6 CRUD Tutorial with Example

https://techsolutionstuff.com/post/laravel-6-crud-tutorial-with-example

#laravel ajax crud example tutorial #ajax crud example in laravel #laravel crud example #laravel crud example with ajax #laravel #php