Ruthie Bugala

How to schedule Azure Data Factory pipeline executions using Triggers

In the previous articles, we discussed how to use Azure Data Factory to move data between different data stores, how to transform data before writing it to a predefined sink, how to run an SSIS package using Azure Data Factory, and how to use the iteration and conditional activities in Azure Data Factory.

In this article, we will see how to schedule an Azure Data Factory pipeline using triggers.

Triggers Overview

Previously, we used manual execution of the pipelines, also known as on-demand execution, to test the functionality and the results of the pipelines that we created. In production, however, it makes no sense to employ someone to execute a pipeline during the night, or to rely on a human to remember when a specific pipeline should run. This is where we need an automated way to execute pipelines at the correct time, which is what Triggers provide.

Azure Data Factory Triggers determine when a pipeline execution will be fired, based on the trigger type and the criteria defined in that trigger. There are three main types of Azure Data Factory Triggers: the Schedule trigger, which executes the pipeline on a wall-clock schedule; the Tumbling window trigger, which executes the pipeline on a periodic interval and retains the pipeline state; and the Event-based trigger, which responds to a blob-related event.

Azure Data Factory allows you to assign multiple triggers to execute a single pipeline, or to use a single trigger to execute multiple pipelines; the exception is the tumbling window trigger, which references exactly one pipeline, as shown in the sketch below.
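
To make that one-to-one relationship concrete, here is a rough sketch of a tumbling window trigger definition, written as a Python dictionary that mirrors the underlying JSON; the trigger name, start time, and pipeline name are hypothetical:

# A minimal sketch of a tumbling window trigger definition
# (hypothetical names; field layout follows the documented ADF JSON schema).
tumbling_window_trigger = {
    "name": "HourlyWindowTrigger",  # hypothetical trigger name
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",    # e.g., Minute or Hour
            "interval": 1,          # one window per hour
            "startTime": "2021-06-01T00:00:00Z",
            "maxConcurrency": 1     # how many windows may run in parallel
        },
        # Note the singular "pipeline": a tumbling window trigger references
        # exactly one pipeline, and the window boundaries are exposed to it
        # as trigger outputs.
        "pipeline": {
            "pipelineReference": {
                "referenceName": "CopyPipeline",  # hypothetical pipeline name
                "type": "PipelineReference"
            },
            "parameters": {
                "windowStart": "@trigger().outputs.windowStartTime",
                "windowEnd": "@trigger().outputs.windowEndTime"
            }
        }
    }
}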

Let us discuss these trigger types in detail.

Schedule Trigger

The schedule trigger is used to execute Azure Data Factory pipelines on a wall-clock schedule. You need to specify the reference time zone that will be used for the trigger's start and end dates, when the pipeline will be executed, how frequently it will be executed, and, optionally, the end date for that trigger.

An Azure Data Factory trigger can be created under the Manage page, by clicking on the + New or Create Trigger option in the Triggers window, as shown below:

In the New Azure Data Factory Trigger window, provide a meaningful name for the trigger that reflects its type and usage, the type of the trigger (Schedule here), the start date for the schedule trigger, the time zone that will be used in the schedule, optionally the end date of the trigger, and the frequency of the trigger, with the ability to configure the trigger to fire every specific number of minutes or hours, as shown below:
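
Behind the scenes, this window generates a JSON trigger definition. The sketch below, written as a Python dictionary mirroring that JSON, shows roughly how the fields above map onto it; the trigger name, dates, and pipeline name are hypothetical:

# A minimal sketch of the schedule trigger definition produced by the
# New Trigger window (hypothetical names; field layout follows the
# documented ADF JSON schema).
schedule_trigger = {
    "name": "NightlyRunTrigger",  # hypothetical trigger name
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Hour",   # Minute, Hour, Day, Week, or Month
                "interval": 12,        # fire every 12 hours
                "startTime": "2021-06-01T22:00:00Z",
                "endTime": "2021-12-31T22:00:00Z",  # the optional end date
                "timeZone": "UTC"      # the reference time zone
            }
        },
        # A schedule trigger may reference one or more pipelines.
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "CopyPipeline",  # hypothetical pipeline name
                    "type": "PipelineReference"
                }
            }
        ]
    }
}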

#azure #azure data factory

Grace Lesch

PySQL Tutorial: A Database Framework for Python

PySQL 

PySQL is a database framework for the Python (v3.x) language, built on top of the Python module mysql.connector; it can help you make your code shorter and easier to write. Before using this framework, you should be familiar with lists, tuples, sets, and dictionaries, because all of its calls are designed around them. It's totally free and open source.

Installation

As mentioned before, this framework is based on mysql.connector, so you have to install mysql.connector on your system first. Then you can import pysql and enjoy coding!

python -m pip install mysql-connector-python

After installing mysql.connector successfully, download/install pysql in the same directory where you want to create your program. You can clone it using the git or npm commands, or download it manually from the repository site.

PyPI Command

Go to https://pypi.org/project/pysql-framework/ or use command

pip install pysql-framework

Git Command

git clone https://github.com/rohit-chouhan/pysql

Npm Command

Go to https://www.npmjs.com/package/pysql or use command

$ npm i pysql

Snippet Extension for VS Code

Install From Here https://marketplace.visualstudio.com/items?itemName=rohit-chouhan.pysql

Table of contents

- Connecting a Server
- Create a Database in Server
- Drop Database
- Connecting a Database
- Creating Table in Database
- Drop Table in Database
- Selecting data from Table
- Add New Column to Table
- Modify Column to Table
- Drop Column from Table
- Manual Execute Query
- Inserting data
- Updating data
- Deleting data

Connecting a Server


To connect to a database server (localhost, or the MySQL server behind phpMyAdmin), use the connect method to establish a connection between your Python program and the database server.

import pysql

db = pysql.connect(
    "host",
    "username",
    "password"
 )

Create a Database in Server


To create a database on the server, use this method:

import pysql

db = pysql.connect(
    "host",
    "username",
    "password"
 )
 pysql.createDb(db,"demo")
 #execute: CREATE DATABASE demo

Drop Database


To drop a database, use this method.

Syntax Code -

pysql.dropDb([connect_obj,"table_name"])

Example Code -

pysql.dropDb([db,"demo"])
#execute:DROP DATABASE demo

Connecting a Database


To connect to a specific database, use the connect method with a fourth argument specifying the database name.

import pysql

db = pysql.connect(
    "host",
    "username",
    "password",
    "database"
 )

Creating Table in Database


To create a table in the database, use this method, passing each column name as a key and its data type as the value.

Syntax Code -


pysql.createTable([db,"table_name_to_create"],{
    "column_name":"data_type", 
    "column_name":"data_type"
})

Example Code -


pysql.createTable([db,"details"],{
    "id":"int(11) primary", 
     "name":"text", 
    "email":"varchar(50)",
    "address":"varchar(500)"
})

2nd Example Code -

You can use any constraint with the data type:


pysql.createTable([db,"details"],{
    "id":"int NOT NULL PRIMARY KEY", 
     "name":"varchar(20) NOT NULL", 
    "email":"varchar(50)",
    "address":"varchar(500)"
})

Drop Table in Database


To drop a table in the database, use this method.

Syntax Code -

pysql.dropTable([connect_obj,"table_name"])

Example Code -

pysql.dropTable([db,"users"])
#execute:DROP TABLE users

Selecting data from Table


To select data from a table, you have to pass the connector object together with the table name, and the column names as a set.

Syntax For All Data (*) -

records = pysql.selectAll([db,"table_name"])
for x in records:
  print(x)

Example -

records = pysql.selectAll([db,"details"])
for x in records:
  print(x)
#execute: SELECT * FROM details

Syntax For Specific Columns -

records = pysql.select([db,"table_name"],{"column","column"})
for x in records:
  print(x)

Example -

records = pysql.select([db,"details"],{"name","email"})
for x in records:
  print(x)
#execute: SELECT name, email FROM details

Syntax For Where and Where Not -

#For Where Column=Data
records = pysql.selectWhere([db,"table_name"],{"column","column"},("column","data"))

#For Where Not Column=Data (use ! with column)
records = pysql.selectWhere([db,"table_name"],{"column","column"},("column!","data"))
for x in records:
  print(x)

Example -

records = pysql.selectWhere([db,"details"],{"name","email"},("country","india"))
for x in records:
  print(x)
#execute: SELECT name, email FROM details WHERE country='india'

Add New Column to Table


To add a column to a table, use this method, passing the column name as key and the data type as value. Note: you can add only one column per call.

Syntax Code -


pysql.addColumn([db,"table_name"],{
    "column_name":"data_type"
})

Example Code -


pysql.addColumn([db,"details"],{
    "email":"varchar(50)"
})
#execute: ALTER TABLE details ADD email varchar(50);

Modify Column to Table


To modify the data type of a table column, use this method, passing the column name as key and the new data type as value.

Syntax Code -

pysql.modifyColumn([db,"table_name"],{
    "column_name":"new_data_type"
})

Example Code -

pysql.modifyColumn([db,"details"],{
    "email":"text"
})
#execute: ALTER TABLE details MODIFY COLUMN email text;

Drop Column from Table


To drop a column from a table, use this method. Note: you can drop only one column per call.

Syntax Code -

pysql.dropColumn([db,"table_name"],"column_name")

Example Code -

pysql.dropColumn([db,"details"],"name")
#execute: ALTER TABLE details DROP COLUMN name

Manual Execute Query


To execute a manual SQL query, use this method.

Syntax Code -

pysql.query(connector_object,your_query)

Example Code -

pysql.query(db,"INSERT INTO users (name) VALUES ('Rohit')")

Inserting data


To insert data into the database, pass the connector object together with the table name, and the data as a dictionary (column names as keys, values to insert as values).

Syntax -

data = {
    "db_column":"Data for Insert",
    "db_column":"Data for Insert"
}
pysql.insert([db,"table_name"],data)

Example Code -

data = {
    "name":"Komal Sharma",
    "country":"India"
}
pysql.insert([db,"users"],data)
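#execute: INSERT INTO users (name, country) VALUES ('Komal Sharma', 'India')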

Updating data


To update data in the database, pass the connector object together with the table name, and the data as a tuple.

Syntax For Updating All Data -

data = ("column","data to update")
pysql.updateAll([db,"users"],data)

Example -

data = ("name","Rohit")
pysql.updateAll([db,"users"],data)
#execute: UPDATE users SET name='Rohit'

Syntax For Updating Data (Where and Where Not) -

data = ("column","data to update")
#For Where Column=Data
where = ("column","data")

#For Where Not Column=Data (use ! with column)
where = ("column!","data")
pysql.update([db,"users"],data,where)

Example -

data = ("name","Rohit")
where = ("id",1)
pysql.update([db,"users"],data,where)
#execute: UPDATE users SET name='Rohit' WHERE id=1

Deleting data


To delete data from the database, pass the connector object together with the table name.

Syntax For Deleting All Data -

pysql.deleteAll([db,"table_name"])

Example -

pysql.deleteAll([db,"users"])
#execute: DELETE FROM users

Syntax For Deleting Data (Where and Where Not) -

where = ("column","data")

pysql.delete([db,"table_name"],where)

Example -

#For Where Column=Data
where = ("id",1)

#For Where Not Column=Data (use ! with column)
where = ("id!",1)
pysql.delete([db,"users"],where)
#execute: DELETE FROM users WHERE id=1

--- Finish ---

Change Logs

[19/06/2021]
 - ConnectSever() removed and merged to Connect()
 - deleteAll() [Fixed]
 - dropTable() [Added]
 - dropDb() [Added]
 
[20/06/2021]
 - Where Not Docs [Added]

The module is designed by Rohit Chouhan; contact us for any bug report, feature request, or business inquiry.

Author: rohit-chouhan
Source Code: https://github.com/rohit-chouhan/pysql
License: Apache-2.0 License

#python 

Chloe Butler

Pdf2gerb: Perl Script Converts PDF Files to Gerber format

pdf2gerb

Perl script converts PDF files to Gerber format

Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.


pdf2gerb_cfg.pm

#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)


##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
#traces:
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .012,
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .025,
    .030,  #heavy-current traces; be careful with these ones!
    .040,
    .050,
    .060,
    .080,
    .100,
    .120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINX => 0,
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    CIRCLE_ADJUST_MAXY => 0,
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
    
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#############################################################################################
#junk/experiment:

#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html

#my $caller = "pdf2gerb::";

#sub cfg
#{
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;
#}

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code: https://github.com/swannman/pdf2gerb

License: GPL-3.0 license

#perl 

iOS App Dev

Your Data Architecture: Simple Best Practices for Your Data Strategy

If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.

In what follows, we offer a short overview of the overarching capabilities of a data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.

#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition

How to Debug a Pipeline in Azure Data Factory

In the previous article, How to schedule Azure Data Factory pipeline executions using Triggers, we discussed the three main types of Azure Data Factory triggers, how to configure them, and how to use them to schedule a pipeline.

In this article, we will see how to use the Azure Data Factory debug feature to test the pipeline activities during the development stage.

Why debug

When developing complex and multi-stage Azure Data Factory pipelines, it becomes harder to test the functionality and the performance of the pipeline as one block. Instead, it is highly recommended to test each stage as you develop it, so that you can make sure it is working as expected and returning the correct result with the best performance, before publishing the changes to the data factory.

Take into consideration that debugging any pipeline activity will execute that activity and perform the action configured in it. For example, if the activity is a copy activity from an Azure Storage account to an Azure SQL Database, the data will actually be copied; the only difference is that, in debug mode, the pipeline execution logs are written to the pipeline Output tab only and are not shown under the pipeline runs in the Monitor page.

#azure #sql azure #azure data factory #pipeline