1626873550
https://full-stack-mastery.thinkific.com/courses/full-stack-web-development-master-course
Demand for Full-Stack Web Developers is higher than for any other technology professional, and they are paid exceptionally well, both as employees and as freelancers.
This is because full-stack web developers have a diverse set of skills.
They are good at database development, backend development, and front-end development.
In this course, you will learn to build a web application from scratch using different technology stacks.
Currently, this Master Course teaches you to develop the same web application using different technology stacks, such as:
– MongoDB, MySQL, Microsoft SQL Server, PostgreSQL & SQLite for the database.
– .NET Core & Python Django for the backend.
– Angular 12, React JS & Vue JS for the frontend.
More technologies are being added, and soon you will see additional popular technology stacks in this Master Course.
#angular #react #dotnet #python #django #sql-server
1626238664
When a query fails to execute interactively, I can see what goes wrong and adjust it. But database objects like stored procedures usually get called by an API or by another stored procedure, so there's no one there to notice when something goes wrong; typically you find out hours, days, or weeks later, when a client calls you saying that some data is missing.
Ideally, you'd like to know as soon as possible why, when, and where something went wrong in your database so you can fix your queries. This article shows you an easy way to monitor your database processes and detect when something goes wrong.
SQL Server Management Studio does a good job of catching many syntax errors. Sometimes, though, errors cannot be anticipated. Think about situations where you depend on user input or build queries using dynamic SQL. I'm going to use a simple query that illustrates unexpected user input: a totally unnecessary stored procedure that lets users specify a table and then retrieves that table's record count.
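A sketch of what such a procedure might look like, with a TRY/CATCH block that records failures instead of letting them pass silently (the procedure and log table names here are illustrative, not taken from the article):
-- Hypothetical table for logging captured errors.
CREATE TABLE dbo.ErrorLog (
    LogTime DATETIME NOT NULL DEFAULT GETDATE(),
    ProcedureName SYSNAME NULL,
    ErrorNumber INT NULL,
    ErrorMessage NVARCHAR(4000) NULL
);
GO
-- Counts the rows of a user-supplied table via dynamic SQL.
-- A non-existent table name only fails at runtime, which the CATCH block records.
CREATE PROCEDURE dbo.GetRecordCount @TableName SYSNAME
AS
BEGIN
    BEGIN TRY
        DECLARE @sql NVARCHAR(MAX) = N'SELECT COUNT(*) FROM ' + QUOTENAME(@TableName) + N';';
        EXEC sp_executesql @sql;
    END TRY
    BEGIN CATCH
        INSERT INTO dbo.ErrorLog (ProcedureName, ErrorNumber, ErrorMessage)
        VALUES (OBJECT_NAME(@@PROCID), ERROR_NUMBER(), ERROR_MESSAGE());
        THROW; -- re-raise so the caller still sees the failure
    END CATCH
END
GO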
#database #sql #sql-server
1626084180
Hello guys, if you are a computer science graduate or new to the programming world, interested in learning databases and SQL, and looking for some awesome resources, like books, courses, and tutorials, to start with, then you have come to the right place.
In the past, I have shared some of the best SQL books and websites, and today, I am going to share some of the best SQL and database courses to learn so you can master this useful technology.
If you don't know what SQL is and why you should learn it, let me give you a brief overview for everyone's benefit. SQL is a programming language for working with databases.
You can use SQL to create database objects —like tables, stored procedures, etc. — and also to store and retrieve data from the database.
SQL is one of the most important skills for any programmer, irrespective of technology, framework, or domain. It is even more popular than mainstream programming languages like Java and Python, and it definitely adds a lot of value to your CV.
SQL allows you to play with data, which is the most important asset of today’s world. By learning SQL, you can get answers to your questions. For example, if you are a course creator for Udemy, a popular online course platform, and want to know which course is the best seller and which course is not selling at all, you can use SQL.
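For instance, a query along these lines could answer that question (the table and column names are made up for illustration):
-- Hypothetical sales table: one row per course purchase.
SELECT course_name, COUNT(*) AS total_sales
FROM course_purchases
GROUP BY course_name
ORDER BY total_sales DESC; -- best sellers first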
SQL helps in troubleshooting as well as reporting. Also, it is a very stable technology that has been around for years, and it will be needed in the future. This means any investment you make in learning SQL will serve you for a long time in your career.
#sql #database #sql-server #oracle
1625999220
You must have heard about the top skills required for data science. Do you know where you should start? The easiest and most important skill you can acquire is SQL.
Before developing this skill, you should know the role of SQL in data science and why every data science expert marks it as essential for data scientists. So, let's explore how exactly SQL is crucial for data science.
SQL (Structured Query Language) is the standard language for all relational databases. It is also the standard on current big data platforms, which use SQL as the key API to their relational stores.
We will walk through some of the key aspects of SQL and its relevance in today's data-science-driven landscape. Then, we will proceed to the key elements of SQL required for data science.
#sql-server #data-science #sql
1625993460
SQL is one of the best and easiest languages to learn when you are starting your data career. You might be studying SQL as a subject in your computer science course, or you may simply be interested in upskilling and already use one or more sites as a go-to solution. Yes, there are a lot of sites available to help you study and practice SQL online besides SQL Test.
Most of these sites also offer you a comprehensive study plan to learn SQL syntax, and some are just tools to practice the skills that you pick up:
#sql #sql-server #learn-sql
1625902729
If you are thinking of upskilling, or are just curious about how difficult it would be to pick up SQL for your career goals, you are in for a pleasant surprise. SQL is one of the easiest languages to learn: the concepts, syntax, queries, and data formats are not only easy to remember, but the functions are descriptively named. That is, you are unlikely to be confused by any function or by the concept of tables, and picking up the various RDBMS tools makes it even more exciting.
Since SQL is first and foremost a language for accessing, transforming, and manipulating data, it doesn't have concepts you would have a hard time remembering. Unlike C, C++, Java, or Python, you can expect SQL to be straightforward and easy to remember, without the confusing syntax and concepts of those languages.
Moreover, SQL is a language you can perfect with memory and some practice, unlike C, C++, Java, and so on, with their classes, functions, loops, and numerous other complexities that can and do lead to confusion and a tough learning curve. In SQL, despite some similarities, you can easily remember the syntax, what each statement does, and how to use it; with some practice, you can feel you are already halfway to mastery.
#sql-server #mysql #learn-sql #sql
1625564606
Table partitioning is a technique used in SQL Server to physically organize the data stored in a table into different storage structures. In essence, one large logical structure is split into smaller physical parts. The result is that we can improve performance for certain kinds of queries and, more importantly, move data around using techniques such as partition switching.
It is important to discuss the concept of filegroups in this article because filegroups are the layer of abstraction used to separate data physically. A filegroup is a logical construct that allows SQL Server to see a collection of physical data files as a single logical unit. When SQL Server writes data to a filegroup, the data is spread across the files belonging to that filegroup. One can think of it literally: a filegroup is a group of data files.
Whenever a table is created, it is created on a filegroup, the PRIMARY filegroup by default. Typically, you do not specify the filegroup when creating a table, which means it is created on PRIMARY. You can choose to place a table on a different filegroup if you wish. However, when creating a partitioned table, you should place it on a partition scheme.
A Partition Scheme maps a table to a set of filegroups. A Partition Function defines the criteria by which data is distributed across the filegroups that belong to the desired partition scheme. Thus, it follows logically that in creating a partitioned table, we must create a Partition Function first and then a Partition Scheme.
Two main reasons are usually given for partitioning tables. First, performance: benefits can be observed when the filegroups sit on entirely separate disks and we work with an appropriate degree of parallelism and queries that span one partition; unfortunately, we will not demonstrate this in this article. Second, maintenance: specifically data archiving, which is achieved by switching out partitions. This article shows the entire process: Switching Out Table Partitions in SQL Server: A Walkthrough. Other benefits of partitioning include online index rebuilds, parallel operations, and piecemeal restores of filegroups.
Let us walk through the process of creating a partitioned table. We start by creating a regular table as shown in Listing 1. Since we did not specify any filegroup or partition function, the table sits in the PRIMARY filegroup (See Figure 1).
-- Listing 1: CREATE TABLE Statement
USE DB2
GO
CREATE TABLE memmanofarms (
fname VARCHAR(50)
,lname VARCHAR(50)
,city VARCHAR(50)
,PhoneNo bigint
,email VARCHAR(100) check (email like '%@%')
,gender char(1)
)
Using the code in Listing 2, we populate the table with six unique records, each replicated a different number of times via the GO n batch separator. We can confirm the row count for each city using the queries in Listing 3. Listing 3 helps us in one more way: it gives us a baseline of how the queries execute while the table is not yet partitioned.
-- Listing 2: Populate Table
USE DB2
GO
INSERT INTO memmanofarms VALUES ('Kenneth','Igiri','Accra','23320055444','kenneth@kennethigiri.com','M');
GO 1100
INSERT INTO memmanofarms VALUES ('Vivian','Akeredolu','Lagos','2348020055444','vivian@gmail.com','F');
GO 720
INSERT INTO memmanofarms VALUES ('Emelia','Okoro','Port Harcourt','2348030057324','emelia@yahoo.com','F');
GO 400
INSERT INTO memmanofarms VALUES ('Uche','Igiri','Enugu','2348030057324','uche@yahoo.com','M');
GO 1000
INSERT INTO memmanofarms VALUES ('Kweku','Annan','Kumasi','23354055884','kweku@ymail.com','M');
GO 150
INSERT INTO memmanofarms VALUES ('Aisha','Bello','Kano','2347088057324','aisha@gmail.com','F');
GO 890
-- Listing 3: Count Rows
USE DB2
GO
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM memmanofarms;
SELECT COUNT(*) FROM memmanofarms WHERE city='Accra';
SELECT COUNT(*) FROM memmanofarms WHERE city='Lagos';
SELECT COUNT(*) FROM memmanofarms WHERE city='Port Harcourt';
SELECT COUNT(*) FROM memmanofarms WHERE city='Enugu';
SELECT COUNT(*) FROM memmanofarms WHERE city='Kumasi';
SELECT COUNT(*) FROM memmanofarms WHERE city='Kano';
Taking this further, we use the code in Listing 4 to set up the objects required for table partitioning on the DB2 database. Notice that for N partitions (and N filegroups), there will always be N-1 boundaries.
-- Listing 4: Set Up Partitioning
-- Create a Partition Function
USE [DB2]
GO
CREATE PARTITION FUNCTION
PartFunc (VARCHAR(50))
AS RANGE RIGHT
FOR VALUES
('Accra'
,'Enugu'
,'Kano'
,'Kumasi'
,'Lagos'
,'Port Harcourt'
)
GO
-- Create File Groups
USE [master]
GO
ALTER DATABASE [DB2] ADD FILEGROUP [AC]
ALTER DATABASE [DB2] ADD FILEGROUP [EN]
ALTER DATABASE [DB2] ADD FILEGROUP [KA]
ALTER DATABASE [DB2] ADD FILEGROUP [KU]
ALTER DATABASE [DB2] ADD FILEGROUP [LA]
ALTER DATABASE [DB2] ADD FILEGROUP [PH]
ALTER DATABASE [DB2] ADD FILEGROUP [OT]
GO
-- Add Files to the File Groups
USE [master]
GO
ALTER DATABASE [DB2] ADD FILE ( NAME = N'AC01', FILENAME = N'C:\MSSQL\Data\AC01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [AC];
ALTER DATABASE [DB2] ADD FILE ( NAME = N'EN01', FILENAME = N'C:\MSSQL\Data\EN01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [EN];
ALTER DATABASE [DB2] ADD FILE ( NAME = N'KA01', FILENAME = N'C:\MSSQL\Data\KA01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [KA];
ALTER DATABASE [DB2] ADD FILE ( NAME = N'KU01', FILENAME = N'C:\MSSQL\Data\KU01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [KU];
ALTER DATABASE [DB2] ADD FILE ( NAME = N'LA01', FILENAME = N'C:\MSSQL\Data\LA01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [LA];
ALTER DATABASE [DB2] ADD FILE ( NAME = N'PH01', FILENAME = N'C:\MSSQL\Data\PH01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [PH];
ALTER DATABASE [DB2] ADD FILE ( NAME = N'OT01', FILENAME = N'C:\MSSQL\Data\OT01.ndf' , SIZE = 102400KB , FILEGROWTH = 131072KB ) TO FILEGROUP [OT];
GO
-- Create a Partition Scheme
USE [DB2]
GO
CREATE PARTITION SCHEME PartSch
AS PARTITION PartFunc TO
(
AC,
EN,
KA,
KU,
LA,
PH,
OT
)
GO
Once the foundation is laid, we can move our regular table from the PRIMARY filegroup to the partition scheme we created. We do this by rebuilding the clustered index, as shown in Listing 5. Observe that the partitioning column must be part of the clustered index key. Also notice that when we specify the partition scheme, we must indicate this column: city.
-- Listing 5: Move table to New Partition
CREATE CLUSTERED INDEX [ClusteredIndexCity] ON [dbo].[memmanofarms]
(
[city] ASC,
[PhoneNo] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PartSch](city);
GO
After we run this code, the table now sits on the Partition Scheme as shown in the following Figure.
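Before benchmarking, it can be useful to confirm how the rows were distributed. A standard check (not one of the article's numbered listings) uses the $PARTITION function against our PartFunc:
-- Count rows per partition; each city should map to its own partition.
USE DB2
GO
SELECT $PARTITION.PartFunc(city) AS PartitionNumber, city, COUNT(*) AS RowCnt
FROM memmanofarms
GROUP BY $PARTITION.PartFunc(city), city
ORDER BY PartitionNumber;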
We are using the code in Listing 6 as a benchmark of sorts. When we run both queries against the partitioned and non-partitioned tables, we see little difference in performance. In both cases, SQL Server uses an index seek when the clustered index is in place and a full table scan otherwise. Worth noting, though, is that we see more reads when running these queries against the partitioned table. This is expected, since we have in effect forced a distribution of the data across "distant" pages.
-- Listing 6: Querying Partitioned and Non-Partitioned Tables
SELECT COUNT(*) FROM memmanofarms WHERE city='Accra';
SELECT * FROM memmanofarms WHERE city='Accra';
We have seen in this article the steps for creating a partitioned table. The References section lists more resources that demonstrate the use of partitioning for archiving old data. We have also shown that partitioning does not necessarily bring a significant performance improvement for most use cases without other enhancements, such as the right CPU and a proper MAXDOP configuration.
#sql #sql-server #partitions
1623030217
Microsoft Azure is a cloud computing service for building, testing, deploying, and managing applications through Microsoft data centers. It provides Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and supports various database servers, programming languages, and tools.
Azure allows creating databases on various database server platforms, both open-source and commercial. Deployment is fast and scalable. Besides, the Azure licensing model is flexible, which helps reduce infrastructure costs.
The current article will highlight the following points:
1. Create an Azure SQL Server instance.
2. Connect to the Azure SQL Server using SQL Server Management Studio.
3. Create an Azure SQL database using SQL Server Management Studio.
Let’s proceed to these points.
To create a new Azure SQL Server instance, log in to the Azure portal with your credentials. On the welcome screen, click SQL databases:
In the next SQL Databases screen, click Create SQL Database:
You should specify the following:
1. Subscription or the Resource group.
2. Database Server
3. Compute + storage.
4. Use the SQL elastic pool.
In this demonstration, I have chosen the Pay-As-You-Go subscription.
To create a new resource group, click Create New. Provide the desired resource group name and click OK.
In the Database name field, set the appropriate name for your database. To create a new SQL Server, click Create New:
In the New Server section, provide the following details:
After that, we get to the Create SQL Database screen demonstrating the data we’ve already entered. Here, we need to configure the Compute + storage parameters.
I am using the Basic variant in this demonstration, but you can select another option for your requirements.
Click Next: Networking to proceed to the further configuration step.
The Networking section provides the options for configuring the network access and connectivity for the Azure SQL database.
In this demonstration, I am using the Public endpoint.
Also, I have enabled the Add current client IP Address option to connect to the Azure SQL Database. It adds an entry of my current IP address to the server firewall.
Click Next: Additional settings to configure the following details:
We are installing the AdventureWorksLT sample database, so click Sample. We do not change the collation and do not enable Azure Defender for SQL. Click Review + create.
The deployment process starts. Once it is complete, you can see the Azure SQL Server instance and the Azure SQL database on the All Resources page.
To connect to the Azure SQL Server instance, we need Server Name, Username, and password. These details are present on the SQL Server resource group page.
Log in to the Azure portal and click on the Azure SQL Server instance named myazuresqlserverdb. The server name and admin login of the Azure SQL Server instance will be on its resource page:
Now, let's open SQL Server Management Studio.
In the Connect to Server window, specify the details:
A new dialog window will open. There, you should add the firewall rule.
Note: When the IP address of the workstation/network used to create the Azure SQL Server differs from the IP address of the computer used to connect to it, this dialog window opens. It allows us to add the IP address of the connecting computer.
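As an alternative to the dialog, the firewall rule can also be created with T-SQL in the server's master database using the sp_set_firewall_rule procedure (the rule name and IP address below are placeholders):
-- Run in the master database of the Azure SQL server.
EXECUTE sp_set_firewall_rule
    @name = N'ClientWorkstation',       -- placeholder rule name
    @start_ip_address = '203.0.113.5',  -- placeholder IP address
    @end_ip_address = '203.0.113.5';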
#sql-server #azure-functions #microsoft-azure #azure #sql
1622546040
On this occasion, I would like to walk through, step by step, another of the injections I enjoyed most.
This is a site a colleague brought to me so we could test it together!
The parameter under attack is used to display a PDF document, and it is exactly this that guides us in discovering how many columns we will have for this injection!
Let's start!
First we have the link
site/ejemplo/parametro=25527
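A classic way to find that column count (the general technique, with hypothetical numbers, not the exact payloads of this write-up) is to raise an ORDER BY index until the page errors out:
-- Illustrative payloads appended to the vulnerable parameter:
-- site/ejemplo/parametro=25527 ORDER BY 1--   page renders normally
-- site/ejemplo/parametro=25527 ORDER BY 5--   page renders normally
-- site/ejemplo/parametro=25527 ORDER BY 6--   error, so the query has 5 columns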
#sql-server #cybersecurity #sql-injection #sql
1622188500
This is my 20th year working with SQL (eek!), and I’ve shared my 10 key learnings to help make sure your SQL code is easy to read, debug, and maintain.
The key to success is, of course, to enforce these (or your own standards) across your Enterprise. Tip 10 discusses how you can do this.
So, in no particular order, let’s begin.
If I were given £1 every time I saw something like this, I think I'd be sitting on a tidy sum:
select first_name employee_first_name,
surname employee_last_name,
title,
CASE WHEN employment = 1 THEN 'FT' WHEN employment = 2 THEN 'PT' ELSE 'T' END
as EmploymentStatus,
'Y' AS isValid
,"HR" Employee-source
from employees
WHERE Valid = 1
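For contrast, here is one way the same query could look once a consistent style is applied (one possible convention; your enterprise standard may differ):
SELECT first_name AS employee_first_name,
       surname AS employee_last_name,
       title,
       CASE employment
           WHEN 1 THEN 'FT'
           WHEN 2 THEN 'PT'
           ELSE 'T'
       END AS employment_status,
       'Y' AS is_valid,
       'HR' AS employee_source
FROM employees
WHERE valid = 1;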
#postgres #data-science #mysql #sql #sql-server
1622164800
When writing code, one must aim to follow the DRY Principle (Don’t Repeat Yourself). One way to avoid a repetition of code is to put chunks of code inside functions and invoke them as required.
The concept of functions in SQL is similar to other programming languages like Python; the major difference is how they are implemented. There are two main types of user-defined functions in SQL, based on the data they return:
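In SQL Server, those two types are scalar functions (returning a single value) and table-valued functions (returning a result set). A minimal sketch of each, with illustrative names and tables:
-- Scalar UDF: returns a single value.
CREATE FUNCTION dbo.FullName (@first VARCHAR(50), @last VARCHAR(50))
RETURNS VARCHAR(101)
AS
BEGIN
    RETURN CONCAT(@first, ' ', @last);
END
GO
-- Inline table-valued UDF: returns a table.
CREATE FUNCTION dbo.EmployeesByCity (@city VARCHAR(50))
RETURNS TABLE
AS
RETURN (
    SELECT first_name, last_name
    FROM dbo.employees
    WHERE city = @city
);
GO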
#sql-server #data #function #sql
1622142300
As a DBA, you might come across a request to extract the domain from email addresses stored in a database table. If you want to count the most-used domain names in any given table, you can count the number of domains extracted from the Email column in SQL Server, as shown below.
SQL queries can be used to extract the domain from the email address.
Let us create a table named “email_demo”:
create table email_demo (ID int, Email varchar(200));
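For example, a query along the following lines would do it (a common pattern; the article's own query may differ slightly):
-- Sample data (illustrative).
INSERT INTO email_demo VALUES (1, 'alice@gmail.com'), (2, 'bob@yahoo.com'), (3, 'carol@gmail.com');
-- Extract the domain after '@' and count how often each appears.
SELECT SUBSTRING(Email, CHARINDEX('@', Email) + 1, LEN(Email)) AS Domain,
       COUNT(*) AS DomainCount
FROM email_demo
GROUP BY SUBSTRING(Email, CHARINDEX('@', Email) + 1, LEN(Email))
ORDER BY DomainCount DESC;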
#dbms #sql #sql-server
1622100876
Replication is one of the oldest technologies on MS SQL Server, loved by every database administrator. It is a great and reliable technology for bringing data closer to users, especially for distributed reporting. Thanks to it, the database availability increases across multiple SQL Servers and regions.
Replication was already present in SQL Server 2005 and even earlier versions. Can you believe it's that old? Though newer managed SQL platforms on the cloud are introduced regularly, I believe that SQL replication will remain here. If it were a bug or insect, I would think of it as a cockroach. It's hard to squash!
If you belong to the small population of administrators who have never managed replication, there is official Microsoft documentation on the topic. However, note that it is pretty long and comprehensive and will rob you of some time off from holiday or planned binge-watching of TV series. Also, Codingsight offers the SQL Server Database Replication setup and configuration guide.
But before you get your hands dirty with the technical stuff, and I know you're eager to, it's important to plan for it.
The replication requirement may change with regard to location when you deploy to SQL Servers running in the cloud. But once SQL replication is running like a well-oiled machine and replicating production data, you need to plan how to manage it.
In this post, I will share some tips and T-SQL scripts for checking the many SQL Agent jobs that are created after configuring replication.
When you set up and configure SQL replication, it also creates a set of standalone programs and SQL Agent jobs known as replication agents. Their goal is to carry out the tasks associated with moving your tables (called articles in the replication configuration) from the publisher to the subscriber or subscribers. Replication agents can be run from the command line and by applications that use Replication Management Objects (RMO).
SQL Server replication agents can be monitored and administered via Replication Monitor and SQL Server Management Studio.
The primary concern of a database/replication administrator is making sure that all SQL Agent replication jobs are running. If a replication agent job fails, the subscriber may not receive data, and the distribution database may grow huge because of accumulated rows that won't move to the subscriber database.
To get alerted about any replication agent job failure, you can create another agent job that checks for job failures and emails your DBA team if it identifies problems.
Use the script below:
-- Check for replication agent job failures within the last 30 minutes.
declare @time datetime
set @time = dateadd(n,-30,getdate())
declare @date date
set @date = convert(date,getdate())
declare @publisher varchar(100)
set @publisher = @@SERVERNAME
SELECT LEFT(name,50) as [JobName], run_date AS [RunDate], run_time AS [RunTime], LEFT([message],50) AS [Message]
FROM
(select distinct b.name, a.run_date, a.run_time, a.message
from msdb..sysjobhistory a
inner join msdb..sysjobs b on a.job_id = b.job_id
where b.name like 'servername here%' -- replace with your replication job name prefix
and a.run_status <> 1 -- any outcome other than "succeeded"
and a.message like '%error%'
and convert(date,convert(varchar,a.run_date)) = convert(date,getutcdate())
and a.run_time >= replace(convert(varchar(8),dateadd(n,-30,getutcdate()),108),':','') ) a
Apply the following script:
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'DBA Alerts',
@recipients = 'your dba team email here',
@subject = '[Database name] Replication Jobs Failure',
@query = 'SELECT LEFT(name,50) as [JobName], run_date AS [RunDate], run_time AS [RunTime], LEFT([message],50) AS [Message]
FROM
(select distinct b.name, a.run_date, a.run_time, message
from msdb.dbo.sysjobhistory a inner join msdb.dbo.sysjobs b on a.job_id = b.job_id
where b.name like ''servername here %'' and
convert(date,convert(varchar,a.run_date)) = convert(date,getutcdate()) ) a
',
@attach_query_result_as_file = 0 ;
To monitor the msrepl_commands table, you may use one more script, provided below. Note that this table should not grow too huge or too fast. If it does, the replication agent jobs might be failing, or there could be a problem in the replication configuration.
The script is as follows:
SELECT Getdate() AS CaptureTime, LEFT(Object_name(t.object_id),20) AS TableName, st.row_count
FROM sys.dm_db_partition_stats st WITH (nolock)
INNER JOIN sys.tables t WITH (nolock) ON st.object_id = t.object_id
INNER JOIN sys.schemas s WITH (nolock) ON t.schema_id = s.schema_id
WHERE st.index_id < 2
AND Object_name(t.object_id) IN ('MSsubscriptions', 'MSdistribution_history', 'MSrepl_commands', 'MSrepl_transactions')
ORDER BY st.row_count DESC
The msrepl_commands table growth trend also gives you a hint of how healthy your replication latency is. There are many impact factors. If your environment is in the cloud, the region selection may contribute significantly to replication latency.
You may use the following script:
-- Set Publisher server and database name
Declare @Publisher sysname, @PublisherDB sysname;
Set @Publisher = 'publication server name';
Set @PublisherDB = 'publishing database name';
-- Refresh replication monitor data
USE [distribution]
Exec sys.sp_replmonitorrefreshjob @iterations = 1;
With MaxXact (ServerName, PublisherDBID, XactSeqNo)
As (Select S.name, DA.publisher_database_id, max(H.xact_seqno) From dbo.MSdistribution_history H with(nolock)
Inner Join dbo.MSdistribution_agents DA with(nolock) On DA.id = H.agent_id
Inner Join master.sys.servers S with(nolock) On S.server_id = DA.subscriber_id
Where DA.publisher_db = @PublisherDB
Group By S.name, DA.publisher_database_id), OldestXact (ServerName, OldestEntryTime)
As (Select MX.ServerName, Min(entry_time)
From dbo.msrepl_transactions T with(nolock)
Inner Join MaxXact MX On MX.XactSeqNo < T.xact_seqno And
MX.PublisherDBID = T.publisher_database_id
Group By MX.ServerName)
Select [Replication Status] = Case MD.status
When 1 Then 'Started'
When 2 Then 'Succeeded'
When 3 Then 'In progress'
When 4 Then 'Idle'
When 5 Then 'Retrying'
When 6 Then 'Failed'
End,
Subscriber = SubString(MD.agent_name, Len(MD.publisher) +
Len(MD.publisher_db) + Len(MD.publication) + 4,
Charindex('-', MD.agent_name,
Len(MD.publisher) + Len(MD.publisher_db) +
Len(MD.publication) + 5) -
(Len(MD.publisher) +
Len(MD.publisher_db) + Len(MD.publication) + 4)),
[Subscriber DB] = A.subscriber_db,
[Publisher DB] = MD.publisher_db,
Publisher = MD.publisher,
[Current Latency (sec)] = MD.cur_latency,
[Current Latency (hh:mm:ss)] = Right('00' + Cast(MD.cur_latency/3600 As varchar), 2) +
':' + Right('00' +
Cast((MD.cur_latency%3600)/60 As varchar), 2) +
':' + Right('00' +
Cast(MD.cur_latency%60 As varchar), 2),
[Latency Threshold (min)] = Cast(T.value As Int),
[Agent Last Stopped (hours)] = DateDiff(hour, agentstoptime, getdate()) - 1,
[Agent Last Sync] = MD.last_distsync,
[Last Entry TimeStamp] = OX.OldestEntryTime
From dbo.MSreplication_monitordata MD with(nolock)
Inner Join dbo.MSdistribution_agents A with(nolock) On A.id = MD.agent_id
Inner Join dbo.MSpublicationthresholds T with(nolock) On T.publication_id = MD.publication_id And T.metric_id = 2 -- Latency
Inner Join OldestXact OX On OX.ServerName = SubString(MD.agent_name, Len(MD.publisher) + Len(MD.publisher_db) +
Len(MD.publication) + 4,
Charindex('-', MD.agent_name,
Len(MD.publisher) + Len(MD.publisher_db) +
Len(MD.publication) + 5) -
(Len(MD.publisher) +
Len(MD.publisher_db) + Len(MD.publication) + 4))
Where MD.publisher = @Publisher
And MD.publisher_db = @PublisherDB
And MD.publication_type = 0 -- 0 = Transactional publication
And MD.agent_type = 3; -- 3 = distribution agent
IF (@@ROWCOUNT > 500)
BEGIN
-- send alerts here: more than 500 rows of undistributed transactions (the threshold could be higher); run this on the remote distributor
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'DBA Alert',
@recipients = 'your dba team email here',
@body = 'This is replication latency alert. Check undistributed transactions query.',
@subject = 'Replication Latency Alert' ;
PRINT 'Alert here!' --since email is not yet working
END
If you are working with transactional replication, these operations are extremely important. Here is a script:
SELECT LEFT(srv.srvname, 25) AS publication_server
, LEFT(a.publisher_db, 50) AS publisher_db
, LEFT(p.publication,25) AS publication_name
, LEFT(a.article, 50) AS [article]
, LEFT(a.destination_object,50) AS destination_object
, LEFT(ss.srvname,25) AS subscription_server
, LEFT(s.subscriber_db,25) AS subscriber_db
, LEFT(da.name,50) AS distribution_agent_job_name
FROM distribution..MSArticles a
JOIN distribution..MSpublications p ON a.publication_id = p.publication_id
JOIN distribution..MSsubscriptions s ON p.publication_id = s.publication_id
JOIN master..sysservers ss ON s.subscriber_id = ss.srvid
JOIN master..sysservers srv ON srv.srvid = p.publisher_id
JOIN distribution..MSdistribution_agents da ON da.publisher_id = p.publisher_id AND da.subscriber_id = s.subscriber_id
ORDER BY 1,2,3
To combine all replication statistics and delivered and undelivered commands, you can create a table in the distribution database to contain all the replication details.
From this table, you can create a reporting summary to distribute to the dba team. This table can be refreshed every day as part of the daily replication health check aside from the standard database administrator morning health check.
IF OBJECT_ID('Tempdb.dbo.#ReplStats') IS NOT NULL
DROP TABLE #ReplStats
CREATE TABLE [dbo].[#ReplStats] (
[DistributionAgentName] [nvarchar](100) NOT NULL
,[DistributionAgentStartTime] [datetime] NOT NULL
,[DistributionAgentRunningDurationInSeconds] [int] NOT NULL
,[IsAgentRunning] [bit] NULL
,[ReplicationStatus] [varchar](14) NULL
,[LastSynchronized] [datetime] NOT NULL
,[Comments] [nvarchar](max) NOT NULL
,[Publisher] [sysname] NOT NULL
,[PublicationName] [sysname] NOT NULL
,[PublisherDB] [sysname] NOT NULL
,[Subscriber] [nvarchar](128) NULL
,[SubscriberDB] [sysname] NULL
,[SubscriptionType] [varchar](64) NULL
,[DistributionDB] [sysname] NULL
,[Article] [sysname] NOT NULL
,[UndelivCmdsInDistDB] [int] NULL
,[DelivCmdsInDistDB] [int] NULL
,[CurrentSessionDeliveryRate] [float] NOT NULL
,[CurrentSessionDeliveryLatency] [int] NOT NULL
,[TotalTransactionsDeliveredInCurrentSession] [int] NOT NULL
,[TotalCommandsDeliveredInCurrentSession] [int] NOT NULL
,[AverageCommandsDeliveredInCurrentSession] [int] NOT NULL
,[DeliveryRate] [float] NOT NULL
,[DeliveryLatency] [int] NOT NULL
,[TotalCommandsDeliveredSinceSubscriptionSetup] [int] NOT NULL
,[SequenceNumber] [varbinary](16) NULL
,[LastDistributerSync] [datetime] NULL
,[Retention] [int] NULL
,[WorstLatency] [int] NULL
,[BestLatency] [int] NULL
,[AverageLatency] [int] NULL
,[CurrentLatency] [int] NULL
) ON [PRIMARY]
INSERT INTO #ReplStats
SELECT da.[name] AS [DistributionAgentName]
,dh.[start_time] AS [DistributionAgentStartTime]
,dh.[duration] AS [DistributionAgentRunningDurationInSeconds]
,md.[isagentrunningnow] AS [IsAgentRunning]
,CASE md.[status]
WHEN 1
THEN '1 - Started'
WHEN 2
THEN '2 - Succeeded'
WHEN 3
THEN '3 - InProgress'
WHEN 4
THEN '4 - Idle'
WHEN 5
THEN '5 - Retrying'
WHEN 6
THEN '6 - Failed'
END AS [ReplicationStatus]
,dh.[time] AS [LastSynchronized]
,dh.[comments] AS [Comments]
,md.[publisher] AS [Publisher]
,da.[publication] AS [PublicationName]
,da.[publisher_db] AS [PublisherDB]
,CASE
WHEN da.[anonymous_subid] IS NOT NULL
THEN UPPER(da.[subscriber_name])
ELSE UPPER(s.[name])
END AS [Subscriber]
,da.[subscriber_db] AS [SubscriberDB]
,CASE da.[subscription_type]
WHEN '0'
THEN 'Push'
WHEN '1'
THEN 'Pull'
WHEN '2'
THEN 'Anonymous'
ELSE CAST(da.[subscription_type] AS [varchar](64))
END AS [SubscriptionType]
,md.[distdb] AS [DistributionDB]
,ma.[article] AS [Article]
,ds.[UndelivCmdsInDistDB]
,ds.[DelivCmdsInDistDB]
,dh.[current_delivery_rate] AS [CurrentSessionDeliveryRate]
,dh.[current_delivery_latency] AS [CurrentSessionDeliveryLatency]
,dh.[delivered_transactions] AS [TotalTransactionsDeliveredInCurrentSession]
,dh.[delivered_commands] AS [TotalCommandsDeliveredInCurrentSession]
,dh.[average_commands] AS [AverageCommandsDeliveredInCurrentSession]
,dh.[delivery_rate] AS [DeliveryRate]
,dh.[delivery_latency] AS [DeliveryLatency]
,dh.[total_delivered_commands] AS [TotalCommandsDeliveredSinceSubscriptionSetup]
,dh.[xact_seqno] AS [SequenceNumber]
,md.[last_distsync] AS [LastDistributerSync]
,md.[retention] AS [Retention]
,md.[worst_latency] AS [WorstLatency]
,md.[best_latency] AS [BestLatency]
,md.[avg_latency] AS [AverageLatency]
,md.[cur_latency] AS [CurrentLatency]
FROM [distribution]..[MSdistribution_status] ds
INNER JOIN [distribution]..[MSdistribution_agents] da ON da.[id] = ds.[agent_id]
INNER JOIN [distribution]..[MSArticles] ma ON ma.publisher_id = da.publisher_id
AND ma.[article_id] = ds.[article_id]
INNER JOIN [distribution]..[MSreplication_monitordata] md ON [md].[job_id] = da.[job_id]
INNER JOIN [distribution]..[MSdistribution_history] dh ON [dh].[agent_id] = md.[agent_id]
AND md.[agent_type] = 3
INNER JOIN [master].[sys].[servers] s ON s.[server_id] = da.[subscriber_id]
--Created WHEN your publication has the immediate_sync property set to true. This property dictates
--whether snapshot is available all the time for new subscriptions to be initialized.
--This affects the cleanup behavior of transactional replication. If this property is set to true,
--the transactions will be retained for max retention period instead of it getting cleaned up
--as soon as all the subscriptions got the change.
WHERE da.[subscriber_db] <> 'virtual'
AND da.[anonymous_subid] IS NULL
AND dh.[start_time] = (
SELECT TOP 1 start_time
FROM [distribution]..[MSdistribution_history] a
INNER JOIN [distribution]..[MSdistribution_agents] b ON a.[agent_id] = b.[id]
AND b.[subscriber_db] <> 'virtual'
WHERE [runstatus] <> 1
ORDER BY [start_time] DESC
)
AND dh.[runstatus] <> 1
SELECT 'Transactional Replication Summary' AS [Comments];
SELECT [DistributionAgentName]
,[DistributionAgentStartTime]
,[DistributionAgentRunningDurationInSeconds]
,[IsAgentRunning]
,[ReplicationStatus]
,[LastSynchronized]
,[Comments]
,[Publisher]
,[PublicationName]
,[PublisherDB]
,[Subscriber]
,[SubscriberDB]
,[SubscriptionType]
,[DistributionDB]
,SUM([UndelivCmdsInDistDB]) AS [UndelivCmdsInDistDB]
,SUM([DelivCmdsInDistDB]) AS [DelivCmdsInDistDB]
,[CurrentSessionDeliveryRate]
,[CurrentSessionDeliveryLatency]
,[TotalTransactionsDeliveredInCurrentSession]
,[TotalCommandsDeliveredInCurrentSession]
,[AverageCommandsDeliveredInCurrentSession]
,[DeliveryRate]
,[DeliveryLatency]
,[TotalCommandsDeliveredSinceSubscriptionSetup]
,[SequenceNumber]
,[LastDistributerSync]
,[Retention]
,[WorstLatency]
,[BestLatency]
,[AverageLatency]
,[CurrentLatency]
FROM #ReplStats
GROUP BY [DistributionAgentName]
,[DistributionAgentStartTime]
,[DistributionAgentRunningDurationInSeconds]
,[IsAgentRunning]
,[ReplicationStatus]
,[LastSynchronized]
,[Comments]
,[Publisher]
,[PublicationName]
,[PublisherDB]
,[Subscriber]
,[SubscriberDB]
,[SubscriptionType]
,[DistributionDB]
,[CurrentSessionDeliveryRate]
,[CurrentSessionDeliveryLatency]
,[TotalTransactionsDeliveredInCurrentSession]
,[TotalCommandsDeliveredInCurrentSession]
,[AverageCommandsDeliveredInCurrentSession]
,[DeliveryRate]
,[DeliveryLatency]
,[TotalCommandsDeliveredSinceSubscriptionSetup]
,[SequenceNumber]
,[LastDistributerSync]
,[Retention]
,[WorstLatency]
,[BestLatency]
,[AverageLatency]
,[CurrentLatency]
SELECT 'Transactional Replication Summary Details' AS [Comments];
SELECT [Publisher]
,[PublicationName]
,[PublisherDB]
,[Article]
,[Subscriber]
,[SubscriberDB]
,[SubscriptionType]
,[DistributionDB]
,SUM([UndelivCmdsInDistDB]) AS [UndelivCmdsInDistDB]
,SUM([DelivCmdsInDistDB]) AS [DelivCmdsInDistDB]
FROM #ReplStats
GROUP BY [Publisher]
,[PublicationName]
,[PublisherDB]
,[Article]
,[Subscriber]
,[SubscriberDB]
,[SubscriptionType]
,[DistributionDB]
I hope these T-SQL scripts will help you monitor your replication agents. I highly recommend watching them closely; otherwise, users at the subscriber end may complain endlessly about not having (close to) real-time data.
In the coming articles, I will dig deeper into the SQL technology of replicating data to any part of the globe. Happy monitoring!
Author: Carla Abanes
Originally posted at https://codingsight.com/managing-your-ms-sql-replication/
#sql #sql-server #database #mysql #replication #tsql
1621546380
Hello guys, if you want to learn Microsoft SQL and T-SQL and are looking for free online courses, then you have come to the right place. Earlier, I shared the best SQL Server and database courses, and in this article, I am going to share the best free T-SQL courses for beginners.
Microsoft SQL Server is not just one of the most popular database solutions but also one of the most complicated software offerings from Microsoft. It requires a foundation in networks, databases, and programming.
This wide range of skills is often challenging to obtain without rigorous learning and years of hands-on experience. Since it's difficult to learn and master, the demand for expert SQL Server DBAs and programmers is always high, particularly in the banking sector.
I know many friends in London and around the world who became SQL Server DBAs after starting as programmers, just to work at those big banks and earn very high salaries.
#sql #sql-server #t-sql #microsoft-sql #courses
1620573840
Convert means to change the form or value of something. The CONVERT() function in SQL Server is used to convert a value of one type to another type.
Syntax :
SELECT CONVERT ( target_type ( length ), expression )
Parameters used:
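The parameters are target_type (the data type to convert to), an optional length for that type, and expression (the value being converted); CONVERT also accepts an optional style code for date and string formats. A couple of quick illustrative examples:
-- Convert the current datetime to a dd/mm/yyyy string (style 103).
SELECT CONVERT(VARCHAR(10), GETDATE(), 103) AS FormattedDate;
-- Convert a string to an integer.
SELECT CONVERT(INT, '123') AS WholeNumber;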
#sql #dbms-sql #sql-server