1622443500
Choosing the right trigger type is a very important task when designing Data Factory workflows. In this episode we show you four ways to trigger Data Factory pipelines, using schedules, tumbling windows, events, and manual (on-demand) execution with Logic Apps, so you can react to your business needs better.
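If you prefer code for the manual option, a pipeline run can also be kicked off programmatically. Below is a minimal sketch using the azure-mgmt-datafactory Python SDK (a different route than the Logic Apps approach covered in the episode); all resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers: substitute your own subscription, resource group,
# factory, and pipeline names.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
run = client.pipelines.create_run("my-resource-group", "my-data-factory", "my-pipeline", parameters={})
print(run.run_id)  # keep this ID to monitor the run's status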
Subscribe: https://www.youtube.com/c/Azure4Everyone/featured
#azure
1639778400
PySQL is a database framework for Python 3.x built on top of the Python module mysql.connector; it helps you make your code shorter and simpler. Before using this framework you should be familiar with lists, tuples, sets, and dictionaries, because all of its APIs are designed around them. It's totally free and open source.
As mentioned above, this framework is based on mysql.connector, so you have to install mysql.connector on your system first. Then you can import pysql and enjoy coding!
python -m pip install mysql-connector-python
After installing mysql.connector successfully, download/install pysql in the same directory where you want to create your program. You can clone it using a git or npm command, or download it manually from the repository site.
Go to https://pypi.org/project/pysql-framework/ or use command
pip install pysql-framework
git clone https://github.com/rohit-chouhan/pysql
Go to https://www.npmjs.com/package/pysql or use command
$ npm i pysql
Install From Here https://marketplace.visualstudio.com/items?itemName=rohit-chouhan.pysql
To connect to a database on a localhost server (for example, one managed through phpMyAdmin), use the connect method to establish a connection between your Python program and the database server.
import pysql
db = pysql.connect(
    "host",
    "username",
    "password"
)
To create a database on the server, use this method.
import pysql
db = pysql.connect(
    "host",
    "username",
    "password"
)
pysql.createDb(db,"demo")
#execute: CREATE DATABASE demo
To drop a database, use this method.
Syntax Code -
pysql.dropDb([connect_obj,"database_name"])
Example Code -
pysql.dropDb([db,"demo"])
#execute:DROP DATABASE demo
To connect to a specific database, use the connect method again, this time passing the database name as the fourth argument.
import pysql
db = pysql.connect(
    "host",
    "username",
    "password",
    "database"
)
To create a table in the database, use this method, passing each column name as a key and its data type as the value.
Syntax Code -
pysql.createTable([db,"table_name_to_create"],{
    "column_name":"data_type",
    "column_name":"data_type"
})
Example Code -
pysql.createTable([db,"details"],{
    "id":"int(11) primary",
    "name":"text",
    "email":"varchar(50)",
    "address":"varchar(500)"
})
2nd Example Code -
You can use any constraint with the data type:
pysql.createTable([db,"details"],{
    "id":"int NOT NULL PRIMARY KEY",
    "name":"varchar(20) NOT NULL",
    "email":"varchar(50)",
    "address":"varchar(500)"
})
To drop a table in the database, use this method.
Syntax Code -
pysql.dropTable([connect_obj,"table_name"])
Example Code -
pysql.dropTable([db,"users"])
#execute:DROP TABLE users
To select data from a table, pass the connector object with the table name, and pass the column names as a set.
Syntax For All Data (*) -
records = pysql.selectAll([db,"table_name"])
for x in records:
    print(x)
Example -
records = pysql.selectAll([db,"details"])
for x in records:
    print(x)
#execute: SELECT * FROM details
Syntax For Specific Columns -
records = pysql.select([db,"table_name"],{"column","column"})
for x in records:
    print(x)
Example -
records = pysql.select([db,"details"],{"name","email"})
for x in records:
    print(x)
#execute: SELECT name, email FROM details
Syntax For Where and Where Not -
#For Where Column=Data
records = pysql.selectWhere([db,"table_name"],{"column","column"},("column","data"))
#For Where Not Column=Data (use ! with column)
records = pysql.selectWhere([db,"table_name"],{"column","column"},("column!","data"))
for x in records:
    print(x)
Example -
records = pysql.selectWhere([db,"details"],{"name","email"},("country","india"))
for x in records:
    print(x)
#execute: SELECT name, email FROM details WHERE country='india'
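The Where Not form works the same way; appending ! to the column name excludes matching rows. A minimal sketch based on the syntax above (the exact SQL pysql generates for the negated case is an assumption):
records = pysql.selectWhere([db,"details"],{"name","email"},("country!","india"))
for x in records:
    print(x)
#assumed execute: SELECT name, email FROM details WHERE country!='india'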
To add a column to a table, use this method, passing the column name as a key and the data type as the value. Note: you can add only one column per call.
Syntax Code -
pysql.addColumn([db,"table_name"],{
    "column_name":"data_type"
})
Example Code -
pysql.addColumn([db,"details"],{
    "email":"varchar(50)"
})
#execute: ALTER TABLE details ADD email varchar(50);
To modify the data type of a table column, use this method, passing the column name as a key and the new data type as the value.
Syntax Code -
pysql.modifyColumn([db,"table_name"],{
    "column_name":"new_data_type"
})
Example Code -
pysql.modifyColumn([db,"details"],{
    "email":"text"
})
#execute: ALTER TABLE details MODIFY COLUMN email text;
To drop a column from a table, use this method. Note: you can drop only one column per call.
Syntax Code -
pysql.dropColumn([db,"table_name"],"column_name")
Example Code -
pysql.dropColumn([db,"details"],"name")
#execute: ALTER TABLE details DROP COLUMN name
To execute a manual SQL query, use this method.
Syntax Code -
pysql.query(connector_object,your_query)
Example Code -
pysql.query(db,"INSERT INTO users (name) VALUES ('Rohit')")
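Because the query string is passed through as-is, this method can also run statements the helper methods do not cover. A small sketch (the index statement here is hypothetical; it assumes the underlying mysql.connector executes whatever SQL you pass):
pysql.query(db,"CREATE INDEX idx_name ON users (name)")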
To insert data into the database, pass the connector object with the table name, and the data as a dictionary.
Syntax -
data = {
    "db_column":"Data for Insert",
    "db_column":"Data for Insert"
}
pysql.insert([db,"table_name"],data)
Example Code -
data = {
    "name":"Komal Sharma",
    "country":"India"
}
pysql.insert([db,"users"],data)
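#assumed execute: INSERT INTO users (name, country) VALUES ('Komal Sharma', 'India')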
To update data in the database, pass the connector object with the table name, and the data as a tuple.
Syntax For Updating All Data -
data = ("column","data to update")
pysql.updateAll([db,"users"],data)
Example -
data = ("name","Rohit")
pysql.updateAll([db,"users"],data)
#execute: UPDATE users SET name='Rohit'
Syntax For Updating Data (Where and Where Not) -
data = ("column","data to update")
#For Where Column=Data
where = ("column","data")
#For Where Not Column=Data (use ! with column)
where = ("column!","data")
pysql.update([db,"users"],data,where)
Example -
data = ("name","Rohit")
where = ("id",1)
pysql.update([db,"users"],data,where)
#execute: UPDATE users SET name='Rohit' WHERE id=1
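A Where Not update follows the same pattern. A sketch assuming the documented ! convention (the generated SQL is an assumption):
data = ("name","Rohit")
where = ("id!",1)
pysql.update([db,"users"],data,where)
#assumed execute: UPDATE users SET name='Rohit' WHERE id!=1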
To delete data from the database, pass the connector object with the table name.
Syntax For Deleting All Data -
pysql.deleteAll([db,"table_name"])
Example -
pysql.deleteAll([db,"users"])
#execute: DELETE FROM users
Syntax For Deleting Data (Where and Where Not) -
where = ("column","data")
pysql.delete([db,"table_name"],where)
Example -
#For Where Column=Data
where = ("id",1)
pysql.delete([db,"users"],where)
#execute: DELETE FROM users WHERE id=1
#For Where Not Column=Data (use ! with column)
where = ("id!",1)
pysql.delete([db,"users"],where)
[19/06/2021]
- ConnectServer() removed and merged into Connect()
- deleteAll() [Fixed]
- dropTable() [Added]
- dropDb() [Added]
[20/06/2021]
- Where Not Docs [Added]
This module is designed by Rohit Chouhan; contact us for any bug report, feature request, or business inquiry.
Author: rohit-chouhan
Source Code: https://github.com/rohit-chouhan/pysql
License: Apache-2.0 License
1620466520
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
1600624800
In the last article, we had a look at how to get started with Azure DevOps: Getting Started With Audit Streaming With Event Grid.
In this article, we will go to the next step to create a subscription and use webhook event handlers to view those logs in our Azure web application.
#cloud #tutorial #azure #event driven architecture #realtime #signalr #webhook #azure web services #azure event grid #serverless architecture #application integration
1620629020
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
1595932282
With the help of an example, this blog post will walk you through how to use the Azure Data Explorer Go SDK to ingest data from an Azure Blob storage container and query it programmatically using the SDK. After a quick overview of how to set up an Azure Data Explorer cluster (and a database), we will explore the code to understand what's going on (and how), and finally test the application using a simple CLI interface.
The sample data is a CSV file that can be downloaded from here.
Azure Data Explorer (also known as Kusto) is a fast and scalable data exploration service for analyzing large volumes of diverse data from any data source, such as websites, applications, IoT devices, and more. This data can then be used for diagnostics, monitoring, reporting, machine learning, and additional analytics capabilities.
It supports several ingestion methods, including connectors to common services like Event Hub, programmatic ingestion using SDKs such as .NET and Python, and direct access to the engine for exploration purposes. It also integrates with analytics and modeling services for additional analysis and visualization of data using tools such as Power BI.
The Go client SDK allows you to query, control, and ingest into Azure Data Explorer clusters using Go. Please note that this is for interacting with the Azure Data Explorer cluster (and related components such as tables). To create Azure Data Explorer clusters, databases, etc., you should use the admin component (control plane) SDK, which is part of the larger Azure SDK for Go.
API docs - https://godoc.org/github.com/Azure/azure-kusto-go
Before getting started, here is what you will need to try out the sample application:
#tutorial #big data #azure #analytics #go #azure data #azure data explorer