How to Use Types Of SQL Commands

SQL commands are used in SQL to perform various functions. These functions include building database objects, manipulating objects, populating database tables with data, updating existing data in tables, deleting data, performing database queries, controlling database access, and overall database administration.

The main categories of SQL commands are:

  • Data Definition Language (DDL)
  • Data Manipulation Language (DML)
  • Data Query Language (DQL)
  • Data Control Language (DCL)
  • Transaction Control Language (TCL)

Data Definition Language (DDL)

Data Definition Language (DDL) is a set of SQL commands used to define a database's structure, including tables, indexes, and constraints. DDL commands are used to create, modify, and delete database objects. Here are some common DDL commands:

CREATE

This command creates new database objects, such as tables, views, indexes, and constraints.

Example

CREATE TABLE Employee (
   EmployeeID INT PRIMARY KEY,
   FirstName VARCHAR(50),
   LastName VARCHAR(50),
   Email VARCHAR(100),
   HireDate DATE,
   Salary DECIMAL(10, 2),
   DepartmentID INT
);

CREATE VIEW RecentOrders AS
SELECT OrderID, OrderDate, CustomerID, TotalAmount
FROM Orders;

CREATE INDEX idx_Employee_DepartmentID ON Employee (DepartmentID);

CREATE PROCEDURE InsertOrder
  @OrderDate DATE,
  @CustomerID INT,
  @TotalAmount DECIMAL(10,2)
AS
BEGIN
  INSERT INTO Orders (OrderDate, CustomerID, TotalAmount)
  VALUES (@OrderDate, @CustomerID, @TotalAmount)
END;

CREATE FUNCTION GetYearsWithCompany (@EmployeeID INT)
RETURNS INT
AS
BEGIN
  DECLARE @YearsWithCompany INT;
  SELECT @YearsWithCompany = DATEDIFF(YEAR, HireDate, GETDATE())
  FROM Employees
  WHERE EmployeeID = @EmployeeID;
  RETURN @YearsWithCompany;
END;

CREATE TRIGGER OrderAuditTrigger
ON Orders
AFTER INSERT
AS
BEGIN
  INSERT INTO OrderAudit (OrderID, OrderDate, CustomerID, TotalAmount)
  SELECT OrderID, OrderDate, CustomerID, TotalAmount
  FROM inserted;
END;

ALTER

This command is used to modify the structure of existing database objects, such as adding or removing columns from a table or changing the data type of a column.

Examples

ALTER TABLE Employees
ADD EmailAddress varchar(100);

ALTER TABLE Employees
DROP COLUMN EmailAddress;

ALTER TABLE Employees
ALTER COLUMN Salary decimal(10, 2);

ALTER TABLE Employees
ADD CONSTRAINT PK_Employees PRIMARY KEY (EmployeeID);

ALTER VIEW SalesData
AS SELECT ProductID, ProductName, QuantitySold
FROM Sales
WHERE SaleDate BETWEEN '2022-01-01' AND '2022-12-31';

ALTER PROCEDURE GetEmployeesByDepartment
   @DepartmentID int
AS
BEGIN
   SELECT * FROM Employees WHERE DepartmentID = @DepartmentID;
END;

-- SQL Server cannot redefine an index's columns with ALTER INDEX;
-- recreate the index instead:
CREATE INDEX idx_Employees_LastName
ON Employees (LastName, FirstName)
INCLUDE (Email)
WITH (DROP_EXISTING = ON);

ALTER FUNCTION GetTotalSales
(@StartDate DATE, @EndDate DATE)
RETURNS MONEY
AS
BEGIN
   DECLARE @TotalSales MONEY;
   SELECT @TotalSales = SUM(TotalAmount)
   FROM Sales
   WHERE SaleDate BETWEEN @StartDate AND @EndDate;
   RETURN @TotalSales;
END;

ALTER TRIGGER trg_Employees_Insert
ON Employees
AFTER INSERT
AS
BEGIN
   INSERT INTO EmployeeAudit(EmployeeID, AuditDate, EventType)
   SELECT EmployeeID, GETDATE(), 'INSERT'
   FROM inserted;
END;

DROP

This command deletes an existing database object, such as a table, view, or index.

DROP TABLE Employee;
DROP VIEW Get_EmployeeDetail;
DROP INDEX idx_Employees_Name;
DROP PROCEDURE GetEmployeesByDepartment;
DROP FUNCTION my_function;
DROP TRIGGER my_trigger ON my_table;
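
Most engines also support conditional drops, which avoid errors when the object does not exist. A minimal sketch (IF EXISTS is available in SQL Server 2016+, MySQL, and PostgreSQL):

DROP TABLE IF EXISTS Employee;
DROP VIEW IF EXISTS Get_EmployeeDetail;
DROP PROCEDURE IF EXISTS GetEmployeesByDepartment;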

TRUNCATE

This command deletes all data from a table but keeps the table structure intact.

TRUNCATE TABLE suppliers;
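
Unlike DELETE, TRUNCATE cannot target individual rows; it is typically faster and minimally logged, it bypasses row-level DELETE triggers, and in SQL Server it also reseeds identity columns. A quick contrast on the same suppliers table (the country column is hypothetical):

DELETE FROM suppliers WHERE country = 'US'; -- selective; fires DELETE triggers
TRUNCATE TABLE suppliers;                   -- removes every row at once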

RENAME

This command renames an existing database object, such as a table or column.

EXEC sp_rename 'old_table_name', 'new_table_name';
EXEC sp_rename 'table_name.old_column_name', 'new_column_name', 'COLUMN';

Note: SQL Server doesn't support the RENAME keyword in the ALTER TABLE statement. Instead, you can use the sp_rename system stored procedure to rename a table and table column.

COMMENT

This command adds comments to a database object, such as a table or column, to provide additional information about the object.

Single-line comments: These comments start with two hyphens "--" and continue until the end of the line. For example:

SELECT * FROM customers; -- This is a comment

Multi-line comments: These comments start with "/*" and end with "*/". They can span multiple lines. For example:

/* This is a
multi-line comment */
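
These are script comments. To attach a comment to a database object itself, the mechanism is engine-specific: PostgreSQL and Oracle use COMMENT ON, while SQL Server stores object descriptions as extended properties. A minimal sketch for the Employee table created earlier (the description text is just an example):

-- PostgreSQL / Oracle
COMMENT ON TABLE Employee IS 'One row per employee, including salary and department.';

-- SQL Server equivalent, using an extended property
EXEC sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'One row per employee, including salary and department.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Employee';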

In short, DDL commands are used for creating and modifying the structure of a database.

Data Manipulation Language (DML)

Data Manipulation Language (DML) is a set of SQL commands used to manipulate data stored in a database. DML commands retrieve, insert, update, and delete data in tables. Here are some common DML commands:

SELECT

This command retrieves data from one or more tables in a database.

SELECT column1, column2, ..., columnN FROM table_name;
SELECT name, email FROM customers;
SELECT * FROM customers;

INSERT

This command is used to insert new data into a table.

INSERT INTO customers VALUES ('RAJ', 'Raj@yahoo.com', '7019160263'); -- positional: relies on the table's column order
INSERT INTO customers (name, email, phone) VALUES ('RAJ', 'Raj@yahoo.com', '7019160263'); -- explicit column list
INSERT INTO OrderDetail (CustomerName, City, Country)
SELECT Name, City, Country FROM Customers; -- insert the result of a query

UPDATE

This command is used to modify existing data in a table.

UPDATE customers
SET email = 'rj@gmail.com', first_name = 'Rj'
WHERE id = 1;

DELETE

This command is used to delete data from a table.

DELETE FROM customers; -- delete all rows
DELETE FROM customers  -- delete the customer whose id is 5
WHERE id = 5;

MERGE

This command performs insert, update, or delete operations on a target table based on the data in a source table.

-- Insert Merge

MERGE employees AS target
USING employees_new AS source
ON (target.id = source.id)
WHEN NOT MATCHED THEN
  INSERT (id, name, salary)
  VALUES (source.id, source.name, source.salary);

-- Update Merge

MERGE INTO customers c
USING (
  SELECT id, phone, address
  FROM customers
  WHERE email IN ('email1@example.com', 'email2@example.com', 'email3@example.com')
) s
ON (c.id = s.id)
WHEN MATCHED THEN
  UPDATE SET c.phone = s.phone, c.address = s.address;

-- Delete Merge

MERGE INTO orders o
USING (
  SELECT order_id
  FROM orders
  WHERE order_date < '2022-01-01'
) s
ON (o.order_id = s.order_id)
WHEN MATCHED THEN DELETE;
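
The three branches can also be combined into one statement. Here is a sketch of a full synchronization of the same hypothetical employees tables in T-SQL (note that MERGE must be terminated with a semicolon):

MERGE employees AS target
USING employees_new AS source
ON (target.id = source.id)
WHEN MATCHED THEN
  UPDATE SET name = source.name, salary = source.salary
WHEN NOT MATCHED BY TARGET THEN
  INSERT (id, name, salary)
  VALUES (source.id, source.name, source.salary)
WHEN NOT MATCHED BY SOURCE THEN
  DELETE;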

DML commands are essential for managing the data stored in a database. By using DML commands, users can add, update, or delete data in a table, which is crucial for maintaining the data's accuracy and integrity.

Data Query Language (DQL)

Data Query Language (DQL) is a subset of SQL commands used to retrieve data from one or more tables in a database. DQL commands are also known as data retrieval commands.

Here are some common DQL commands:

SELECT

This command retrieves data from one or more tables in a database.

SELECT column1, column2, ..., columnN FROM table_name;
SELECT name, email FROM Employees;
SELECT * FROM Employees;

DISTINCT

This command is used to retrieve unique values from a column in a table.

SELECT DISTINCT category
FROM products;
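
DISTINCT applies to the whole select list, so it can also deduplicate column combinations or feed an aggregate. For example (the brand column is hypothetical):

SELECT DISTINCT category, brand FROM products;               -- unique (category, brand) pairs
SELECT COUNT(DISTINCT category) AS categories FROM products; -- number of unique categories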

WHERE

This command is used to filter data based on specific criteria.

SELECT *
FROM customers
WHERE age > 30;

UPDATE customers
SET email = 'newemail@example.com'
WHERE id = 1;

DELETE
FROM customers
WHERE age > 30 AND email LIKE '%@yahoo.com';

ORDER BY

This command is used to sort data in ascending or descending order.

SELECT *
FROM customers
ORDER BY age DESC;

UPDATE customers
SET age = age + 1
WHERE id IN (
    SELECT TOP 10 id  -- SQL Server only allows ORDER BY in a subquery together with TOP
    FROM customers
    ORDER BY age ASC
);

-- ORDER BY is not valid directly in DELETE; order through a TOP subquery instead
DELETE FROM customers
WHERE id IN (
    SELECT TOP 5 id
    FROM customers
    WHERE age > 50
    ORDER BY age DESC
);

GROUP BY

This command is used to group data based on one or more columns.

SELECT product, SUM(quantity * price) as total_sales
FROM sales
GROUP BY product;

UPDATE s
SET price = (
    SELECT AVG(price)
    FROM sales s2
    WHERE s2.product = s.product  -- the aliases matter: a bare "sales.product" here would refer to the inner table
)
FROM sales s;

DELETE FROM sales
WHERE product IN (
    SELECT product
    FROM sales
    GROUP BY product
    HAVING COUNT(*) = 1
);

JOIN

This command combines data from two or more tables into a single result set.

SELECT orders.id, customers.name, orders.product, orders.quantity, orders.price
FROM orders
INNER JOIN customers ON orders.customer_id = customers.id;

SELECT departments.name AS department_name, employees.name AS employee_name
FROM departments
RIGHT JOIN employees ON departments.id = employees.department_id;

SELECT employees.name AS employee_name, departments.name AS department_name
FROM employees
LEFT JOIN departments ON employees.department_id = departments.id;

SELECT employees.name AS employee_name, departments.name AS department_name
FROM employees
FULL JOIN departments ON employees.department_id = departments.id;

-- Self join: A and B are two aliases of the same Employee table
SELECT A.EmployeeName AS EmployeeName1, B.EmployeeName AS EmployeeName2, A.City
FROM Employee A, Employee B
WHERE A.EmployeeID <> B.EmployeeID
AND A.City = B.City
ORDER BY A.City;

DQL commands are essential for retrieving data from a database. Using DQL commands, users can filter, sort, and group data based on specific criteria, which is crucial for analyzing and interpreting the data stored in the database.

Data Control Language (DCL)

Data Control Language (DCL) is a set of SQL commands used to control access to a database. DCL commands grant permissions to, or revoke them from, users and roles.
Here are some common DCL commands:

GRANT

This command is used to grant permissions to a user or a role.

GRANT SELECT ON mydatabase.mytable TO myuser;
GRANT SELECT, INSERT, UPDATE ON mydatabase.mytable TO myuser;
GRANT SELECT, INSERT ON mydatabase.* TO myuser;
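
Permissions can also be granted to a role and inherited by its members, which scales better than per-user grants. A sketch in SQL Server syntax (the role name is made up):

CREATE ROLE reporting_role;
GRANT SELECT ON mydatabase.mytable TO reporting_role;
ALTER ROLE reporting_role ADD MEMBER myuser;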

REVOKE

This command is used to revoke permissions from a user or a role.

REVOKE SELECT ON mydatabase.mytable FROM myuser;
REVOKE SELECT, INSERT, UPDATE ON mydatabase.mytable FROM myuser;
REVOKE ALL PRIVILEGES ON mydatabase.* FROM myuser;
REVOKE SELECT, INSERT ON mydatabase.* FROM myuser;

DENY

This command is used to deny permissions to a user or a role.

DENY SELECT ON mydatabase.mytable TO myuser;
DENY SELECT, INSERT, UPDATE ON mydatabase.mytable TO myuser;

DCL commands are essential for managing access to a database. Using DCL commands, database administrators can control who has access to the database and what actions they can perform on the data stored in the database. 

This is critical for maintaining the security and integrity of the data stored in the database.

Transaction Control Language (TCL)

Transaction Control Language (TCL) is a set of SQL commands used to manage transactions in a database. A transaction is a sequence of one or more SQL statements treated as a single unit of work. TCL commands are used to commit or roll back transactions. Here are some common TCL commands:

COMMIT

This command permanently saves the changes made by a transaction to the database.

CREATE PROCEDURE update_employee_salary
    @employee_id INT,
    @new_salary DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRANSACTION;

    UPDATE company.employees
    SET salary = @new_salary
    WHERE id = @employee_id;

    COMMIT;
END

ROLLBACK

This command is used to undo the changes made by a transaction and restore the database to its previous state.

CREATE PROCEDURE update_employee_salary
    @employee_id INT,
    @new_salary DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRANSACTION;

    UPDATE company.employees
    SET salary = @new_salary
    WHERE id = @employee_id;

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK;
        RETURN;
    END;

    COMMIT;
END
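
Checking @@ERROR after every statement is easy to forget. A common alternative is TRY...CATCH around the same hypothetical update; this sketch assumes SQL Server 2012+ for THROW:

CREATE PROCEDURE update_employee_salary_safe
    @employee_id INT,
    @new_salary DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE company.employees
        SET salary = @new_salary
        WHERE id = @employee_id;

        COMMIT;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK;
        THROW; -- re-raise the original error to the caller
    END CATCH
END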

SAVEPOINT

This command is used to set a savepoint within a transaction, which allows you to roll back to a specific point in the transaction.

CREATE PROCEDURE transfer_funds
    @from_account INT,
    @to_account INT,
    @amount DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRANSACTION;

    -- Savepoint
    SAVE TRANSACTION transfer_start;

    UPDATE bank.accounts
    SET balance = balance - @amount
    WHERE id = @from_account;

    IF @@ERROR <> 0
    BEGIN
        -- Rollback to savepoint
        ROLLBACK TRANSACTION transfer_start;
        RETURN;
    END;

    UPDATE bank.accounts
    SET balance = balance + @amount
    WHERE id = @to_account;

    IF @@ERROR <> 0
    BEGIN
        -- Rollback entire transaction
        ROLLBACK;
        RETURN;
    END;

    COMMIT;
END

RELEASE SAVEPOINT

This command is used to remove a savepoint within a transaction. RELEASE SAVEPOINT exists in MySQL and PostgreSQL; SQL Server has no equivalent statement, and its savepoints are simply discarded when the transaction commits, as the T-SQL example below notes.

CREATE PROCEDURE example_procedure
AS
BEGIN
    BEGIN TRANSACTION;

    -- Perform some operations
    INSERT INTO myTable (column1, column2) VALUES (1, 'A');
    INSERT INTO myTable (column1, column2) VALUES (2, 'B');
    INSERT INTO myTable (column1, column2) VALUES (3, 'C');

    -- Set a savepoint
    SAVE TRANSACTION mySavepoint;

    -- More operations
    INSERT INTO myTable (column1, column2) VALUES (4, 'D');
    INSERT INTO myTable (column1, column2) VALUES (5, 'E');

    -- Check for errors
    IF @@ERROR <> 0
    BEGIN
        -- Rollback to savepoint
        ROLLBACK TRANSACTION mySavepoint;
        RETURN;
    END;

    -- T-SQL has no RELEASE SAVEPOINT; the savepoint is simply
    -- discarded when the transaction commits

    COMMIT;
END;

TCL commands are essential for managing transactions in a database. Using TCL commands, users can ensure that changes made to the database are consistent and reliable, even if there are errors or failures during the transaction. This is critical for maintaining the integrity of the data stored in the database.

Note: it's generally good practice to use transactions in stored procedures to ensure data integrity and prevent data corruption. Using transactions, and committing or rolling back changes as needed, helps keep your database consistent and reliable.

Conclusion

I hope the blog has helped you understand SQL commands.

Original article source at: https://www.c-sharpcorner.com/

#sql #command #types 

How to Use Types Of SQL Commands

Best 3 Types Of Leadership for Open Organizations

Servant leaders, quiet leaders, and open leaders have traits useful to open organizations.

In the classic movie Born Yesterday, a crime boss repeatedly demonstrates his leadership style by bellowing, "Do what I'm tellin' ya!" in a loud, threatening voice. It's entertaining in a comedy, but it would be a recipe for failure and getting ignored in an open organization.

In this article, I review forms of leadership that can be effective in an open organization. Remember that these leadership forms do not exist in a vacuum or silos. To be an effective manager, you want to mix and match techniques from each leadership style based on the requirements of a situation.

These three approaches to leadership are helpful for open organizations.

Servant leadership

There is a saying that politicians want to get elected either to be something or to do something. This adage applies to any type of leader. Some leaders simply want to be in command. While all leaders are ambitious, for this type of leader, satisfying their ambition is the primary goal. The acquisition of power is an end unto itself; once they have it, they may be uninterested in using it to solve problems or build something. Anything that the organization achieves looks like a personal triumph to them.

By contrast, when you're a servant leader, you see your leadership role as a means to serve people. In the political world, you would view public service as not a cliche but as an opportunity to help the public. As a servant leader, you work to improve things for the people you lead and are primarily concerned about the welfare of those around you.

Servant leadership is also contagious. By focusing on the welfare and development of the people you lead, you're growing the next generation of servant leaders. As a servant leader, you're not interested in taking all the credit. For example, when legendary baseball manager Casey Stengel was congratulated for winning a league championship, he famously remarked, "I couldn't have done it without my players." One of his greatest skills as a manager was maximizing each player's contributions to benefit the whole team.

Quiet leadership

For the past several years, we've been living in the age of the celebrity CEO. They are easy to recognize: They are brash and loud, they promote themselves constantly, and they act as if they know the answer to every problem. They attempt to dominate every interaction, want to be the center of attention, and often lead by telling others what to do. Alice Roosevelt Longworth described her father, US President Theodore Roosevelt, as someone who "wanted to be the corpse at every funeral, the bride at every wedding, and the baby at every christening." Roosevelt was an effective leader who did extraordinary things, such as starting the US National Park Service and building the Panama Canal, but he was anything but quiet.

In contrast, when you're a quiet leader, you lead by example. You don't fixate on problems; instead, you maintain a positive attitude and let your actions speak for themselves. You focus on what can be done. You lead by solving problems and by providing an example to your team. When faced with unexpected issues, the quiet leader doesn't spend time complaining but looks for solutions and implements them.

Open leadership

As a servant leader, you work to assist the members of your organization in growing into leaders. Quiet leaders lead by example. Servant leaders and quiet leaders do not act in an autocratic manner. Open leaders combine many of these characteristics.

An open leader is also not a top-down autocratic leader. As an open leader, you succeed by creating organizations in which teams can thrive. In other words, as an open leader, you create a framework or environment in which your organization can achieve the following goals according to The Open Organization Definition:

  • Greater agility: In an open organization, all team members have a clear understanding of the organization's goals and can, therefore, better work together to achieve those goals.
     
  • Faster innovation: In an open organization, ideas are heard (and reviewed and argued over) regardless of their origin. Ideas are not imposed on the organization by its leaders.
     
  • Increased engagement: Because members of the organization can contribute to decisions about the organization's direction, they have a sense of ownership for the team's goals.

The Open Organization defines the following five characteristics as basic tenants of open organizations:

  • Transparency: The organization's decision-making process is open, as are all supporting project resources. The team is never surprised by decisions made in isolation.
     
  • Inclusivity: All team members are included in discussions and reviews. Rules and protocols are established to ensure that all viewpoints are reviewed and respected.
     
  • Adaptability: Feedback is requested and accepted on an ongoing basis. The team continually adjusts its future actions based on results and inputs.
     
  • Collaboration: Team members work together from the start of a project or task. Work is not performed in isolation or in silos and then presented to the rest of the team for input.
     
  • Community: Team members have shared values regarding how the organization functions. Team leaders model these values. All team members are encouraged to make contributions to the team.

Putting leadership styles to work

How can you, as an open leader, incorporate the characteristics of servant and quiet leadership?

In an open organization, to support an inclusive community, you function as a mentor. Just as a servant leader acts to teach and cultivate future servant leaders, you must walk the walk, leading by example, ensuring transparency and collaboration, and operating according to shared values.

How can a quiet leader contribute to an open organization? Open organizations tend to be, for lack of a better word, noisy. Communication and collaboration in an open organization are constant and can sometimes be overwhelming to people not accustomed to it. The ownership felt by members of open organizations can result in contentious and passionate discussions and disagreements.

Quiet leaders with a positive outlook tend to see paths forward through seemingly contradictory viewpoints. Amid these discussions, a quiet leader cuts through the noise. As a calming influence on an open organization, a quiet leader can help people get past differences while driving solutions.

Original article source at: https://opensource.com/

#types #organization 

Best 3 Types Of Leadership for Open Organizations
Lawrence  Lesch

Lawrence Lesch

1675821861

CSStype: Strict TypeScript and Flow Types for Style Based on MDN Data

CSSType

TypeScript and Flow definitions for CSS, generated by data from MDN. It provides autocompletion and type checking for CSS properties and values.

TypeScript

import type * as CSS from 'csstype';

const style: CSS.Properties = {
  colour: 'white', // Type error on property
  textAlign: 'middle', // Type error on value
};

Flow

// @flow strict
import * as CSS from 'csstype';

const style: CSS.Properties<> = {
  colour: 'white', // Type error on property
  textAlign: 'middle', // Type error on value
};

Further examples below will be in TypeScript!

Getting started

$ npm install csstype

Style types

Properties are categorized in different uses and in several technical variations to provide typings that suits as many as possible.

 DefaultHyphenFallbackHyphenFallback
AllPropertiesPropertiesHyphenPropertiesFallbackPropertiesHyphenFallback
StandardStandardPropertiesStandardPropertiesHyphenStandardPropertiesFallbackStandardPropertiesHyphenFallback
VendorVendorPropertiesVendorPropertiesHyphenVendorPropertiesFallbackVendorPropertiesHyphenFallback
ObsoleteObsoletePropertiesObsoletePropertiesHyphenObsoletePropertiesFallbackObsoletePropertiesHyphenFallback
SvgSvgPropertiesSvgPropertiesHyphenSvgPropertiesFallbackSvgPropertiesHyphenFallback

Categories:

  • All - Includes Standard, Vendor, Obsolete and Svg
  • Standard - Current properties and extends subcategories StandardLonghand and StandardShorthand (e.g. StandardShorthandProperties)
  • Vendor - Vendor prefixed properties and extends subcategories VendorLonghand and VendorShorthand (e.g. VendorShorthandProperties)
  • Obsolete - Removed or deprecated properties
  • Svg - SVG-specific properties

Variations:

  • Default - JavaScript (camel) cased property names
  • Hyphen - CSS (kebab) cased property names
  • Fallback - Also accepts array of values e.g. string | string[]

At-rule types

At-rule interfaces with descriptors.

TypeScript: These will be found in the AtRule namespace, e.g. AtRule.Viewport.
Flow: These will be prefixed with AtRule$, e.g. AtRule$Viewport.

 DefaultHyphenFallbackHyphenFallback
@counter-styleCounterStyleCounterStyleHyphenCounterStyleFallbackCounterStyleHyphenFallback
@font-faceFontFaceFontFaceHyphenFontFaceFallbackFontFaceHyphenFallback
@viewportViewportViewportHyphenViewportFallbackViewportHyphenFallback

Pseudo types

String literals of pseudo classes and pseudo elements

Pseudos

Extends:

AdvancedPseudos

Function-like pseudos e.g. :not(:first-child). The string literal contains the value excluding the parenthesis: :not. These are separated because they require an argument that results in infinite number of variations.

SimplePseudos

Plain pseudos e.g. :hover that can only be one variation.

Generics

All interfaces has two optional generic argument to define length and time: CSS.Properties<TLength = string | 0, TTime = string>

  • Length is the first generic parameter and defaults to string | 0 because 0 is the only length where the unit identifier is optional. You can specify this, e.g. string | number, for platforms and libraries that accepts any numeric value as length with a specific unit.
const style: CSS.Properties<string | number> = {
  width: 100,
};
  • Time is the second generic argument and defaults to string. You can specify this, e.g. string | number, for platforms and libraries that accepts any numeric value as length with a specific unit.
const style: CSS.Properties<string | number, number> = {
  transitionDuration: 1000,
};

Usage

import type * as CSS from 'csstype';

const style: CSS.Properties = {
  width: '10px',
  margin: '1em',
};

In some cases, like for CSS-in-JS libraries, an array of values is a way to provide fallback values in CSS. Using CSS.PropertiesFallback instead of CSS.Properties will add the possibility to use any property value as an array of values.

import type * as CSS from 'csstype';

const style: CSS.PropertiesFallback = {
  display: ['-webkit-flex', 'flex'],
  color: 'white',
};

There's even string literals for pseudo selectors and elements.

import type * as CSS from 'csstype';

const pseudos: { [P in CSS.SimplePseudos]?: CSS.Properties } = {
  ':hover': {
    display: 'flex',
  },
};

Hyphen cased (kebab cased) properties are provided in CSS.PropertiesHyphen and CSS.PropertiesHyphenFallback. It's not not added by default in CSS.Properties. To allow both of them, you can simply extend with CSS.PropertiesHyphen or/and CSS.PropertiesHyphenFallback.

import type * as CSS from 'csstype';

interface Style extends CSS.Properties, CSS.PropertiesHyphen {}

const style: Style = {
  'flex-grow': 1,
  'flex-shrink': 0,
  'font-weight': 'normal',
  backgroundColor: 'white',
};

Adding type checked CSS properties to a HTMLElement.

import type * as CSS from 'csstype';

const style: CSS.Properties = {
  color: 'red',
  margin: '1em',
};

let button = document.createElement('button');

Object.assign(button.style, style);

What should I do when I get type errors?

The goal is to have as perfect types as possible and we're trying to do our best. But with CSS Custom Properties, the CSS specification changing frequently and vendors implementing their own specifications with new releases sometimes causes type errors even if it should work. Here's some steps you could take to get it fixed:

If you're using CSS Custom Properties you can step directly to step 3.

First of all, make sure you're doing it right. A type error could also indicate that you're not 😉

  • Some CSS specs that some vendors has implemented could have been officially rejected or haven't yet received any official acceptance and are therefor not included
  • If you're using TypeScript, type widening could be the reason you get Type 'string' is not assignable to... errors

Have a look in issues to see if an issue already has been filed. If not, create a new one. To help us out, please refer to any information you have found.

Fix the issue locally with TypeScript (Flow further down):

The recommended way is to use module augmentation. Here's a few examples:

// My css.d.ts file
import type * as CSS from 'csstype';

declare module 'csstype' {
  interface Properties {
    // Add a missing property
    WebkitRocketLauncher?: string;

    // Add a CSS Custom Property
    '--theme-color'?: 'black' | 'white';

    // Allow namespaced CSS Custom Properties
    [index: `--theme-${string}`]: any;
    
    // Allow any CSS Custom Properties
    [index: `--${string}`]: any;

    // ...or allow any other property
    [index: string]: any;
  }
}

The alternative way is to use type assertion. Here's a few examples:

const style: CSS.Properties = {
  // Add a missing property
  ['WebkitRocketLauncher' as any]: 'launching',

  // Add a CSS Custom Property
  ['--theme-color' as any]: 'black',
};

Use type assertion. Here's a few examples:

const style: $Exact<CSS.Properties<*>> = {
  // Add a missing property
  [('WebkitRocketLauncher': any)]: 'launching',

  // Add a CSS Custom Property
  [('--theme-color': any)]: 'black',
};

Version 3.0

  • All property types are exposed with namespace
    TypeScript: Property.AlignContent (was AlignContentProperty before)
    Flow: Property$AlignContent
  • All at-rules are exposed with namespace
    TypeScript: AtRule.FontFace (was FontFace before)
    Flow: AtRule$FontFace
  • Data types are NOT exposed
    E.g. Color and Box. Because the generation of data types may suddenly be removed or renamed.
  • TypeScript hack for autocompletion
    Uses (string & {}) for literal string unions and (number & {}) for literal number unions (related issue). Utilize PropertyValue<T> to unpack types from e.g. (string & {}) to string.
  • New generic for time
    Read more on the "Generics" section.
  • Flow types improvements
    Flow Strict enabled and exact types are used.

Contributing

Never modify index.d.ts and index.js.flow directly. They are generated automatically and committed so that we can easily follow any change it results in. Therefor it's important that you run $ git config merge.ours.driver true after you've forked and cloned. That setting prevents merge conflicts when doing rebase.

Commands

  • npm run build Generates typings and type checks them
  • npm run watch Runs build on each save
  • npm run test Runs the tests
  • npm run lazy Type checks, lints and formats everything

Download Details:

Author: Frenic
Source Code: https://github.com/frenic/csstype 
License: MIT license

#typescript #css #flow #types 

CSStype: Strict TypeScript and Flow Types for Style Based on MDN Data
Monty  Boehm

Monty Boehm

1675312560

Guide to IoT Testing Tools, Challenges and Its Types

Introduction to IoT Testing

The Internet of Things is the new emerging trend in digital transformation, which many organizations are adopting as the world moves toward digitalization. According to a survey, it is assumed that there will be billions of connected devices by the end of the year 2020.

IoT testing considers functional and Integration testing to perform the functional type of testing and performance testing to check how a particular software handles large amounts of data transmitted within a few seconds.

What are the benefits of IoT testing?

The Internet of Things (IoT) has the most significant impact on digital transformation and manufacturing products and software services businesses worldwide. According to research, it is assumed that there will be billions of connected devices in upcoming years. Delivery of robust, bug-free IoT solutions earlier to the market could be beneficial for organizations. They are testing many diversified devices, and generating data causes consequential challenges in scaling the items, speed, and different types of variety for internal testing teams.

Traditional software applications’ approaches to controlling the processes are not sufficient for such IoT solutions. Order to gain the standard of such difficult IoT solutions requires a multidisciplinary testing approach. Instead of developing pre-planned testing scenarios and strategies, integrating a suitable automation approach, virtualization, and measuring tools are big in IoT testing.

IoT is connected using the Internet capable of sending the data to the Cloud. Click to explore about, IoT Platform and Internet of Things Applications, Use Cases

What are the challenges involved in IoT Testing?

IoT solutions are the composition of several approaches. Firstly, the mixture of solution components consists of – hardware device, application software, server software, network, and client platforms. Secondly, the large scale and throughput at which they're expected to function across networks. Thirdly, the innumerable user and environmental situations under which they're contemplated to work.

The Scale of Operations

IoT solution deployment necessitates thousands of interconnected devices, which hook up with servers (on-premise or within the cloud) over near real-time networks. Server infrastructure and framework are made on multiple and distinct interconnected services and applications from different vendors. Testing such a posh, multi-vendor environment and simulating real-time situations is often a challenge.

Software-Hardware Interconnection

Testing an IoT solution is not specific to the application or the hardware. It requires a combined IoT testing approach for this interconnected, dynamic, and demanding environment. Beyond ordinary functional and non-functional testing of the actual software and hardware components, testers must examine several practical scenarios, and even hypothetical ones, that exercise the interactions between them.

Platform Heterogeneity

In such varied deployments, there are many software, firmware, and hardware platform combinations. Additionally, there are distinct network protocols and mechanisms for device-to-server connections like MQTT, HTTP, CoAP, and WebSockets. Testing all possible combinations isn't practical. Shortlisting significant test scenarios requires an intensive understanding of end-use situations, domain knowledge with specifications, and a platform-agnostic, automated test suite.

Real-time Data Velocity

Problems from unpredictable network hardware and Internet connections can affect device performance and, ultimately, the IoT solution. Since these devices are mostly remotely connected, such conditions result in frustrating end-user experiences. Testing the responsiveness of devices and applications under real conditions is a constant requirement throughout the IoT solution development life cycle.

User Experience

A consistent, steady user experience covering mobile (typically iOS, Android) and desktop (typically Windows, Mac) environments is fundamental for any IoT solution. Further, preserving a native experience on mobile platforms is also a firm requirement. Testing should consider these different client environments across various brands, versions, and screen sizes.

Security & Privacy

Networked devices and applications exposed on the public Internet are always liable to being hacked. Validating devices and applications against the prescribed security standards is essential. As the Internet of Things grows, hackers are constantly trying to seek out system weaknesses. Constant security upgrades and testing are a must in today's environment.

What are the different types of IoT Testing?

Highlighted below are the various types of IoT testing:

Functional Testing

This is to make sure that the work product interacts correctly with the various other connected devices within the IoT ecosystem and, first and foremost, works consistently for what it was designed to do.

Usability Testing

Usability testing ensures that the interface of the device and the application meets user expectations. The principal focus of these tests is to confirm ease of use for fundamental tasks, responsiveness, preservation of native behavior, graceful handling of errors, and the ability to use the device/application without training or a guide.

Reliability Testing

This is to make sure that the work product performs reliably under distinct changes in environmental, network, and other operational conditions and still delivers what's expected.

Security Testing

Security in its simplest form means authorized access is granted to the protected device and its data, while unauthorized access is restricted. Testing is done using threat modeling tools, static code analysis tools, and runtime check tools, subjecting the device and application to a spread of simulated threats. Security tests also encompass checks for the OWASP Top Ten threats.

Connectivity Testing

This testing involves checking the device and application behavior by subjecting the network to load, partial failures, and total loss of connectivity. By inducing these real-life scenarios, the robustness and resilience of the device, edge, platform, and application are examined thoroughly.

Performance Testing

Load generators and performance measurement tools on the cloud rate system performance under normal and full load. These tests check responsiveness to user actions on the device, and on the platform layer, they check the ability to handle spikes in traffic gracefully. They provide supporting metrics for assessing the responsiveness of the device/application and underlying system performance.

Compatibility Testing

In a complex IoT environment, it's essential that devices, networks, platforms, applications, and end-user desktops/mobiles work in tandem. Each of them has a deep level of variability in firmware and hardware models and versions; network types, speeds, protocols, and versions; operating system types and versions; browser types and versions; and screen sizes and display resolutions, to name a few. Test the solution as a whole across the probable combinations of these versions to reduce failures in the field.

Compliance & Certification Testing

A well-tested IoT product may also require the correct certification to set foot in the market. IoT devices generally must meet distinct certification qualifications for network and protocol compliance, device drivers, app store submissions, etc.

Beta (Pilot) Testing

After testing in a controlled and managed lab environment, the work product must be deployed in its target environment with all the variables to determine its behavior. Beta testing enables acceptance testing because the intended user validates the work product for functionality, usability, reliability, and compatibility. Since end-users do it, beta testing isn't a controlled activity.

Upgrade Testing

Whenever firmware, software, or hardware updates or upgrades occur, thorough regression testing is warranted, as failures may appear because of compatibility issues. To handle this, special tests can be performed in a staging environment before upgrades are pushed over-the-air (OTA) to devices and to server systems. Post-upgrade, data preservation and a smooth system restart are critical.

What are IoT Testing Tools?

To accomplish the wide selection of IoT tests listed above in a staging environment, the use of the proper simulation, virtualization, automation, mocking, and measurement tools is necessary. Some of the tools that can be used are listed below:

Protocol/Device Simulators

Standards-compliant devices and protocols can be simulated using tools. They can be simulated in bulk and configured to map the desired real-life states.

Record & Play Tools

Whether it's devices or applications, system and user actions/data can be recorded and replayed on simulators and apps as a method of automating the test execution process.

Mobile Testing Tools

They provide automated functional mobile testing that replicates the end-user experience and confirms that the application works and performs as expected.

Security Testing Tools

They can be grouped into static code analysis, runtime threat detection, threat modeling, and probing tools. Tools like Micro Focus Fortify on Demand, OWASP ZAP, VGC, and Microsoft Threat Modeling Tool identify threats, prioritize them, and give recommendations on how to fix them. Acunetix and Netsparker are two premier security scanners that can help uncover vulnerabilities.

API Testing Tools

A rapidly increasing number of solutions are now built using REST APIs and web services. Tools like Postman, SoapUI, and Progress Telerik Fiddler test their connectivity, responses, latency, and performance.

Automated Deployment Tools

They are used to create virtual machines, either on-premise or in the cloud, to rapidly commission managed services, and to configure and deploy custom-built services and applications. Tools like Foreman, Ansible Tower®, and Katello ensure that the staging environment is up so automated and manual tests can be triggered on time in continuous build, integration, and deployment environments.

Other Tools

Below are a few tools that can be used for distinct purposes:
Tcpdump and Wireshark to watch traffic over the network, Fiddler to debug HTTP traffic, and a JTAG dongle and a digital storage oscilloscope to check the hardware and monitor its parameters. Additionally, test and defect management tools, including proprietary ones, can improve quality-control execution efficiency, momentum, and effectiveness.

Conclusion

As IoT generates an entirely new set of testing requirements, automated testing has a significant role. Testing tools and strategic approaches will need to verify various communication protocols, including WiFi, Bluetooth, CDMA, and 3G. Simulation models will also be fundamental, given the challenges of real-time testing. IoT solutions are complex and challenging, given the multiple components and the interactions between them. Wide-ranging IoT tests can ensure a quality IoT solution. However, executing IoT test cases requires a good strategic testing approach using appropriate test tools.

Original article source at: https://www.xenonstack.com/

#iot #testing #types 

PGtyped: Typesafe SQL in TypeScript

PgTyped

PgTyped makes it possible to use raw SQL in TypeScript with guaranteed type-safety.
No need to map or translate your DB schema to TypeScript; PgTyped automatically generates types and interfaces for your SQL queries by using your running Postgres database as the source of type information.


Features:

  1. Automatically generates TS types for parameters/results of SQL queries of any complexity.
  2. Supports extracting and typing queries from both SQL and TS files.
  3. Generate query types as you write them, using watch mode.
  4. Useful parameter interpolation helpers for arrays and objects.
  5. No need to define your DB schema in TypeScript, your running DB is the live source of type data.
  6. Prevents SQL injections by not doing explicit parameter substitution. Instead, queries and parameters are sent separately to the DB driver, allowing parameter substitution to be safely done by the PostgreSQL server (see the sketch below).
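
For context, here is a minimal sketch of what sending parameters separately means at the driver level, using node-postgres directly (this is not PgTyped's internals; the connection details are assumptions):

import { Client } from 'pg';

async function demo() {
  const client = new Client({ host: 'localhost', user: 'test', password: 'example', database: 'test' });
  await client.connect();
  // The SQL text and the values array travel separately; the PostgreSQL
  // server performs the substitution, so input is never spliced into the query string.
  const result = await client.query('SELECT * FROM books WHERE id = $1', [5]);
  console.log(result.rows);
  await client.end();
}

demo();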

Documentation

Visit our new documentation page at https://pgtyped.now.sh/

Getting started

  1. npm install @pgtyped/cli @pgtyped/query typescript (typescript is a required peer dependency for pgtyped)
  2. Create a PgTyped config.json file.
  3. Run npx pgtyped -w -c config.json to start PgTyped in watch mode.

Refer to the example app for a preconfigured example.

Example

Let's save some queries in books.sql:

/* @name FindBookById */
SELECT * FROM books WHERE id = :bookId;

PgTyped parses the SQL file, extracting all queries and generating strictly typed TS queries in books.queries.ts:

/** Types generated for queries found in "books.sql" */

//...

/** 'FindBookById' parameters type */
export interface IFindBookByIdParams {
  bookId: number | null;
}

/** 'FindBookById' return type */
export interface IFindBookByIdResult {
  id: number;
  rank: number | null;
  name: string | null;
  author_id: number | null;
}

/**
 * Query generated from SQL:
 * SELECT * FROM books WHERE id = :bookId
 */
export const findBookById = new PreparedQuery<
  IFindBookByIdParams,
  IFindBookByIdResult
>(...);

Query findBookById is now statically typed, with types inferred from the PostgreSQL schema.
This generated query can be imported and executed as follows:

import { Client } from 'pg';
import { findBookById } from './books.queries';

export const client = new Client({
  host: 'localhost',
  user: 'test',
  password: 'example',
  database: 'test',
});

async function main() {
  await client.connect();
  const books = await findBookById.run(
    {
      bookId: 5,
    },
    client,
  );
  console.log(`Book name: ${books[0].name}`);
  await client.end();
}

main();

Resources

  1. Configuring Pgtyped
  2. Writing queries in SQL files
  3. Advanced queries and parameter expansions in SQL files
  4. Writing queries in TS files
  5. Advanced queries and parameter expansions in TS files

Project state:

This project is being actively developed and its APIs might change. All issue reports, feature requests and PRs appreciated.

Download Details:

Author: Adelsz
Source Code: https://github.com/adelsz/pgtyped 
License: MIT license

#nativescript #generator #types 

TS-essentials: All Basic TypeScript Types in one Place

ts-essentials

All essential TypeScript types in one place 🤙

Install

npm install --save-dev ts-essentials

👉 We require typescript>=4.1. If you're looking for support for older TS versions, please have a look at the TypeScript dependency table

👉 As we really want types to be stricter, we require strictNullChecks to be enabled in your project

If you use any of the runtime functions, you should add ts-essentials to your dependencies (npm install --save ts-essentials) to avoid runtime errors in production.

What's inside?

ts-essentials is a set of high-quality, useful TypeScript types that make writing type-safe code easier.

Basic

  • Primitive type matching all primitive values.
  • noop function that takes any arguments and returns nothing, as a placeholder for e.g. callbacks.

Dictionaries

keywords: map

const stringDict: Dictionary<string> = {
  a: "A",
  b: "B",
};

// Specify second type argument to change dictionary keys type
const dictOfNumbers: Dictionary<string, number> = {
  420: "four twenty",
  1337: "HAX",
};

// You may specify union types as key to cover all possible cases. It acts the same as Record from TS's standard library
export type DummyOptions = "open" | "closed" | "unknown";
const dictFromUnionType: Dictionary<number, DummyOptions> = {
  closed: 1,
  open: 2,
  unknown: 3,
};

// and get dictionary values
type StringDictionaryValueType = DictionaryValues<typeof stringDict>;
//   ^? string

// When building a map using JS objects consider using SafeDictionary
const safeDict: SafeDictionary<number> = {};
const value: number | undefined = safeDict["foo"];

// With SafeDictionary you don't need to use all of the sub-types of a finite type.
// If you care about the key exhaustiveness, use a regular Dictionary.
type ConfigKeys = "LOGLEVEL" | "PORT" | "DEBUG";
const configSafeDict: SafeDictionary<number, ConfigKeys> = {
  LOGLEVEL: 2,
};
const maybePort: number | undefined = configSafeDict["PORT"];

const configDict: Dictionary<number, ConfigKeys> = {
  LOGLEVEL: 2,
  PORT: 8080,
  DEBUG: 1,
};
const port: number = configDict["PORT"];

Type checkers

  • IsUnknown checks whether we get unknown or not. If so, we get true. Otherwise, false
// ✅ true
type Test1 = IsUnknown<unknown>;
// ❌ false
type Test2 = IsUnknown<{ name: "Alexey" }>;
  • IsNever checks whether we get never or not. If so, we get true. Otherwise, false
// ✅ true
type Test1 = IsNever<never>;
// ❌ false
type Test2 = IsNever<{ name: "Alexey" }>;
  • IsAny checks whether we get any or not. If so, we get true. Otherwise, false
// ✅ true
type Test1 = IsAny<any>;
// ❌ false
type Test2 = IsAny<{ name: "Alexey" }>;

Deep* wrapper types

  • DeepPartial
  • DeepRequired
  • DeepReadonly
  • DeepNonNullable
  • DeepNullable
  • DeepUndefinable

keywords: recursive, nested, optional

type ComplexObject = {
  simple: number;
  nested: {
    a: string;
    array: [{ bar: number }];
  };
};

type ComplexObjectPartial = DeepPartial<ComplexObject>;
const samplePartial: ComplexObjectPartial = {
  nested: {
    array: [{}],
  },
};

type ComplexObjectAgain = DeepRequired<ComplexObjectPartial>;
const sampleRequired: ComplexObjectAgain = {
  simple: 5,
  nested: {
    a: "test",
    array: [{ bar: 1 }],
  },
};

type ComplexObjectReadonly = DeepReadonly<ComplexObject>;

type ComplexNullableObject = {
  simple: number | null | undefined;
  nested: {
    a: string | null | undefined;
    array: [{ bar: number | null | undefined }] | null | undefined;
  };
};

type ComplexObjectNonNullable = DeepNonNullable<ComplexNullableObject>;
const sampleNonNullable: ComplexObjectNonNullable = {
  simple: 5,
  nested: {
    a: "test",
    array: [{ bar: null }], // Error: Type 'null' is not assignable to type 'number'
  },
};

type ComplexObjectNullable = DeepNullable<ComplexObject>;
const sampleDeepNullable1: ComplexObjectNullable = {
  simple: null,
  nested: {
    a: null,
    array: [{ bar: null }],
  },
};
const sampleDeepNullable2: ComplexObjectNullable = {
  simple: 1,
  nested: {
    array: [null], // OK
    // error -- property `a` missing, should be `number | null`
  },
};

// DeepUndefinable will come in handy if:
//  - you want to explicitly assign values to all of the properties
//  AND
//  - the expression used for the assignment can return an `undefined` value
// In most situations DeepPartial will suffice.
declare function tryGet(name: string): string | undefined;
type ComplexObjectUndefinable = DeepUndefinable<ComplexObject>;
const sampleDeepUndefinable1: ComplexObjectUndefinable = {
  simple: undefined,
  nested: {
    a: tryGet("a-value"),
    array: [{ bar: tryGet("bar-value") }],
  },
};
const sampleDeepUndefinable2: ComplexObjectUndefinable = {
  // error -- property `simple` missing, should be `number | undefined`
  nested: {
    array: [{ bar: undefined }],
    // error -- property `a` missing, should be `string | undefined`
  },
};

Difference between DeepRequired and DeepNonNullable

DeepRequired is closer to Required, while DeepNonNullable is closer to NonNullable.

It means that DeepRequired doesn't remove null and undefined; it only makes fields required. On the other hand, DeepNonNullable only removes null and undefined but doesn't prohibit the fields from being optional.

Let's have a look at the optional nullable field:

type Person = {
  name?: string | null | undefined;
};

type NonNullablePerson = DeepNonNullable<Person>;
// { name?: string | undefined; }
type RequiredPerson = DeepRequired<Person>;
// { name: string | null; }

Let's have a look at the required nullable field:

type FullName = {
  first: string | null | undefined;
};

type NonNullableFullName = DeepNonNullable<FullName>;
// { first: string; }
type RequiredFullName = DeepRequired<FullName>;
// { first: string | null | undefined; }

And there's no difference between DeepNonNullable and DeepRequired if the property is non-nullable and required.

Writable

Make all attributes of object writable.

type Foo = {
  readonly a: number;
  readonly b: string;
};

const foo: Foo = { a: 1, b: "b" };
(foo as Writable<typeof foo>).a = 42;
There is also DeepWritable, which removes readonly recursively:

type Foo = {
  readonly foo: string;
  bar: {
    readonly x: number;
  };
}[];

const test: DeepWritable<Foo> = [
  {
    foo: "a",
    bar: {
      x: 5,
    },
  },
];

// we can freely write to this object
test[0].foo = "b";
test[0].bar.x = 2;

Buildable

keywords: builder

A combination of both DeepWritable and DeepPartial. This type allows building an object step-by-step by assigning values to its attributes in multiple statements.

interface ReadonlyObject
  extends Readonly<{
    simple: number;
    nested: Readonly<{
      a: string;
      array: ReadonlyArray<Readonly<{ bar: number }>>;
    }>;
  }> {}

const buildable: Buildable<ReadonlyObject> = {};
buildable.simple = 7;
buildable.nested = {};
buildable.nested.a = "test";
buildable.nested.array = [];
buildable.nested.array.push({ bar: 1 });
const finished = buildable as ReadonlyObject;

Pick

There's no need for our own implementation of Pick, as the built-in one is already strict:

type Pick<T, K extends keyof T> = { [P in K]: T[P] };
//           ^^^^^^^^^^^^^^^^^

interface Person {
  age: number;
  name: string;
}

// @ts-expect-error: Type '"job"' does not satisfy the constraint 'keyof Person'
type WithJob = Pick<Person, "job">;
//                          ^^^^^

Omit

Our version of Omit is renamed to StrictOmit in v3, since the builtin Omit has become part of TypeScript 3.5

StrictOmit

Usage is similar to the builtin version, but checks the filter type more strictly.

type ComplexObject = {
  simple: number;
  nested: {
    a: string;
    array: [{ bar: number }];
  };
};

type SimplifiedComplexObject = StrictOmit<ComplexObject, "nested">;
//   ^? { simple: number }

// if you want to Omit multiple properties just use union type:
type SimplifiedComplexObject = StrictOmit<ComplexObject, "nested" | "simple">;
//   ^? {}

Comparison between Omit and StrictOmit

Following the code above, we can compare the behavior of Omit and StrictOmit.

// Type '"simple" | "nested" | "nonexistent"' does not satisfy the constraint '"simple" | "nested"'
// @ts-expect-error: Type '"nonexistent"' is not assignable to type '"simple" | "nested"'
type SimplifiedComplexObjectWithStrictOmit = StrictOmit<ComplexObject, "nested" | "simple" | "nonexistent">;
//                                                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

type SimplifiedComplexObjectWithOmit = Omit<ComplexObject, "nested" | "simple" | "nonexistent">;
//   ^? {}

As is shown in the example, StrictOmit ensures that no extra key is specified in the filter.

StrictExtract

Usage is similar to the builtin version, but checks the filter type more strictly.

interface Dog {
  type: "dog";
  woof(): void;
}

interface Cat {
  type: "cat";
  meow(): void;
}

interface Mouse {
  type: "mouse";
  squeak(): void;
}

type Animal = Dog | Cat | Mouse;

type DogAnimal = StrictExtract<Animal, { type: "dog" }>;
//   ^? Dog

// if you want to Extract multiple properties just use union type:

// 1. if you use typescript up to version 4.5
type HouseAnimal = StrictExtract<Animal, { type: "dog" | "cat" }>;
//   ^? Cat | Dog

// 2. otherwise use
type HouseAnimal = StrictExtract<Animal, { type: "dog" } | { type: "cat" }>;
//   ^? Cat | Dog

Comparison between Extract and StrictExtract

Following the code above, we can compare the behavior of Extract and StrictExtract.

// Type '{ type: "dog"; } | { type: "cat"; } | { type: "horse"; }' does not satisfy the constraint 'Partial<Animal>'
//   Type '{ type: "horse"; }' is not assignable to type 'Partial<Animal>'
//     Type '{ type: "horse"; }' is not assignable to type 'Partial<Mouse>'
//       Types of property 'type' are incompatible
// @ts-expect-error: Type '"horse"' is not assignable to type '"mouse"'.
type HouseAnimalWithStrictExtract = StrictExtract<Animal, { type: "dog" } | { type: "cat" } | { type: "horse" }>;
//                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

// no error
type HouseAnimalWithExtract = Extract<Animal, { type: "dog" } | { type: "cat" } | { type: "horse" }>;
//   ^? Dog | Cat

StrictExclude

Usage is similar to the builtin version, but checks the filter type more strictly.

type Animal = "dog" | "cat" | "mouse";

type DogAnimal = StrictExclude<Animal, "dog">;
//   ^? 'cat' | 'mouse'

// if you want to Exclude multiple properties just use union type:
type MouseAnimal = StrictExclude<Animal, "dog" | "cat">;
//   ^? 'mouse'

Comparison between Exclude and StrictExclude

Following the code above, we can compare the behavior of Exclude and StrictExclude.

// Type '"dog" | "cat" | "horse"' is not assignable to type '"dog" | "cat" | "mouse"'
// @ts-expect-error: '"horse"' is not assignable to type '"dog" | "cat" | "mouse"'.
type HouseAnimalWithStrictExclude = StrictExclude<Animal, "dog" | "cat" | "horse">;

// no error
type HouseAnimalWithExclude = Exclude<Animal, "dog" | "cat" | "horse">;

DeepOmit

Recursively omit deep properties according to key names.

Here is the Teacher interface.

interface Teacher {
  name: string;
  gender: string;
  students: { name: string; score: number }[];
}

Now suppose you want to omit gender property of Teacher, and score property of students. You can achieve this with a simple type filter.

In the filter, the properties to be omitted completely should be defined as either never or true. For the properties you want to partially omit, you should recursively define the sub-properties to be omitted.

type TeacherSimple = DeepOmit<
  Teacher,
  {
    gender: never;
    students: {
      score: never;
    }[];
  }
>;
// ^? { name: string; students: { name: string }[] }

NOTE

  • DeepOmit works fine with Arrays and Sets. When applied to a Map, the filter is only applied to its value.
  • If there exists any property in the filter which is not in the original type, an error will occur.

DeepPick

Recursively pick deep properties according to key names.

This type works as complementary type to DeepOmit, in the similar way like Exclude and Extract types complement each other.

The filter syntax is the same as for DeepOmit, so one filter can be used to obtain both DeepPick and DeepOmit types from it.

The properties to be picked completely should be defined as never. For the properties you want to partially pick, you should recursively define the sub-properties to be picked.

interface Teacher {
  name: string;
  gender: string;
  students: { name: string; score: number }[];
}
type TeacherSimple = DeepPick<
  Teacher,
  {
    gender: never;
    students: {
      score: never;
    }[];
  }
>;
// ^? { gender: string; students: { score: number }[] }

OmitProperties

keywords: filter, props

Removes all properties extending type P in type T. NOTE: it works opposite to filtering.

interface Example {
  log(): void;
  version: string;
}

type ExampleWithoutMethods = OmitProperties<Example, Function>;
//   ^? { version: string }

// if you want to Omit multiple properties just use union type like:
type ExampleWithoutMethods = OmitProperties<Example, Function | string>;
//   ^? {}

PickProperties

Pick only properties extending type P in type T.

interface Example {
  log(): void;
  version: string;
  versionNumber: number;
}

type ExampleOnlyMethods = PickProperties<Example, Function>;
//   ^? { log(): void }

// if you want to pick multiple properties just use union type like:
type ExampleOnlyMethodsAndString = PickProperties<Example, Function | string>;
//   ^? { log(): void; version: string }

NonNever

Useful for purifying object types. It improves intellisense but also allows for extracting keys satisfying a conditional type.

type GetDefined<TypesMap extends { [key: string]: any }> = keyof NonNever<{
  [T in keyof TypesMap]: TypesMap[T] extends undefined ? never : TypesMap[T];
}>;
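
For illustration, a hypothetical usage of GetDefined (the ExampleMap name is an assumption): keys whose values are undefined get dropped.

type ExampleMap = { a: string; b: undefined; c: number };
type DefinedKeys = GetDefined<ExampleMap>;
//   ^? "a" | "c"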

NonEmptyObject

Useful for accepting only objects with keys, great after a filter like OmitProperties or PickProperties.

/* return never if the object doesn't have any number value*/
type NumberDictionary<T> = NonEmptyObject<PickProperties<T, number>>;

// return { a: number }
type SomeObject = NumberDictionary<{ a: number; b: string }>;

// return never
type EmptyObject = NumberDictionary<{}>;

NonEmptyArray

Useful for accepting only arrays containing at least one element.

// declare function expression type accepting some rest parameters, but at least one element for the rest parameters is required
type FunctionAcceptingRestParameters = (someString: string, ...args: NonEmptyArray<number>) => void;

// declare some non-empty array variables
const okay: NonEmptyArray<number> = [1, 2];
const alsoOkay: NonEmptyArray<number> = [1];
// @ts-expect-error: Type '[]' is not assignable to type 'NonEmptyArray<number>'. Source has 0 element(s) but target requires 1.
const error: NonEmptyArray<number> = [];

Merge

keywords: override

type Foo = {
  a: number;
  b: string;
};

type Bar = {
  b: number;
};

const xyz: Merge<Foo, Bar> = { a: 4, b: 2 };
//   ^? { a: number; b: number }

MergeN

keywords: override

type Tuple = [
  {
    a: number;
    b: string;
  },
  {
    b: number;
  },
];

const xyz: MergeN<Tuple> = { a: 4, b: 2 };
//   ^? { a: number; b: number }

MarkRequired

Useful when you're sure some optional properties will be set. A real life example: when selecting an object with its related entities from an ORM.

class User {
  id: number;
  posts?: Post[];
  photos?: Photo[];
}
type UserWithPosts = MarkRequired<User, "posts">;

// example usage with a TypeORM repository -- `posts` are now required, `photos` are still optional
async function getUserWithPosts(id: number): Promise<UserWithPosts> {
  return userRepo.findOneOrFail({ id }, { relations: ["posts"] }) as Promise<UserWithPosts>;
}

MarkOptional

Useful when you want to make some properties optional without creating a separate type.

interface User {
  email: string;
  password: string;
}

type UserWithoutPassword = MarkOptional<User, "password">;
//   ^? { email: string; password?: string }

MarkReadonly

Useful when you want to make some properties readonly without creating a separate type.

interface User {
  id: number;
  name: string;
}

type UserThatCannotChangeName = MarkReadonly<User, "name">;
//   ^? { id: number; readonly name: string }

MarkWritable

Useful when you want to make some properties writable (or unset readonly) without creating a separate type.

interface User {
  readonly id: number;
  readonly name: string;
}

type UserThatCanChangeName = MarkWritable<User, "name">;
//   ^? { readonly id: number; name: string }

ReadonlyKeys

Gets keys of an object which are readonly.

type T = {
  readonly a: number;
  b: string;
};

type Result = ReadonlyKeys<T>;
//   ^? 'a'

WritableKeys

Gets keys of an object which are writable.

type T = {
  readonly a: number;
  b: string;
};

type Result = WritableKeys<T>;
//   ^? 'b'

OptionalKeys

Gets keys of an object which are optional.

type T = {
  a: number;
  b?: string;
  c: string | undefined;
  d?: string;
};

type Result = OptionalKeys<T>;
//   ^? 'b' | 'd'

RequiredKeys

Gets keys of an object which are required.

type T = {
  a: number;
  b?: string;
  c: string | undefined;
  d?: string;
};

type Result = RequiredKeys<T>;
//   ^? 'a' | 'c'

PickKeys

Gets keys of properties of given type in object type.

type T = {
  a: number;
  b?: string;
  c: string | undefined;
  d: string;
};

type Result1 = PickKeys<T, string>;
//   ^? 'd'

type Result2 = PickKeys<T, string | undefined>;
//   ^? 'b' | 'c' | 'd'

UnionToIntersection

Useful for converting mapped types with function values to an intersection type (in this case, an overloaded function).

type Foo = {
  bar: string;
  xyz: number;
};

type Fn = UnionToIntersection<{ [K in keyof Foo]: (type: K, arg: Foo[K]) => any }[keyof Foo]>;
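
Here, Fn resolves to an intersection of the per-key function types, which behaves like an overloaded function. A minimal sketch of the effect (the fn binding is an assumption):

// Fn = ((type: "bar", arg: string) => any) & ((type: "xyz", arg: number) => any)
declare const fn: Fn;
fn("bar", "hello"); // OK
fn("xyz", 42); // OK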

Opaque types

Opaque types allow you to create a unique type that can't be assigned to its base type by accident. Good examples of opaque types include:

  • JWTs or other tokens - these are special kinds of string used for authorization purposes. If your app uses multiple types of tokens each should be a separate opaque type to avoid confusion.
  • specific currencies - amount of different currencies shouldn't be mixed
  • bitcoin address - special kind of string

It's critical to understand that each token (second argument to Opaque) has to be unique across your codebase.

We encourage you to leverage a pattern where you have single function to validate base type and create opaque type.

type PositiveNumber = Opaque<number, "PositiveNumber">;
function makePositiveNumber(n: number): PositiveNumber {
  if (n <= 0) {
    throw new Error(`Value ${n} is not positive !`);
  }
  return n as PositiveNumber; // you can cast it directly without unknown and any
}

type NegativeNumber = Opaque<number, "NegativeNumber">;
function makeNegativeNumber(n: number): NegativeNumber {
  if (n >= 0) {
    throw new Error(`Value ${n} is not negative !`);
  }
  return n as NegativeNumber; // you can cast it directly without unknown and any
}

let a = makePositiveNumber(5); // runtime check
let b = makeNegativeNumber(-10); // runtime check

a = b; // error at compile time

Tuple constraint

function foo<T extends Tuple>(tuple: T): T {
  return tuple;
}

const ret = foo(["s", 1]);
// return type of [string, number]

You can also parametrize Tuple type with a type argument to constraint it to certain types, i.e. Tuple<string | number>.
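
For example, a brief sketch of the parametrized form (the fooStrict name is an assumption):

function fooStrict<T extends Tuple<string | number>>(tuple: T): T {
  return tuple;
}

fooStrict(["s", 1]); // OK
// @ts-expect-error: 'boolean' is not assignable to 'string | number'
fooStrict(["s", true]);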

Exhaustive switch cases

function actOnDummyOptions(options: DummyOptions): string {
  switch (options) {
    case "open":
      return "it's open!";
    case "closed":
      return "it's closed";
    case "unknown":
      return "i have no idea";
    default:
      // if you would add another option to DummyOptions, you'll get error here!
      throw new UnreachableCaseError(options);
  }
}

ValueOf type

const obj = {
  id: "123e4567-e89b-12d3-a456-426655440000",
  name: "Test object",
  timestamp: 1548768231486,
};

type ObjectValueType = ValueOf<typeof obj>;
//   ^? string | number

ElementOf type

const array = [1, 2, true, false];

type ArrayElementType = ElementOf<typeof array>;
//   ^? number | boolean

ArrayOrSingle

Useful for the functions where data can be passed as a value or an array

const castArray = <T extends any>(value: ArrayOrSingle<T>): T[] => {
  if (Array.isArray(value)) {
    return value;
  }

  return [value];
};

// number[]
const numbers = castArray(1);
// string[]
const strings = castArray(["a", "b", "c"]);

AsyncOrSync type

Useful as a return type in interfaces or abstract classes with missing implementation

interface CiProvider {
  getSHA(): AsyncOrSync<string>;
  // same as
  getSHA(): Promise<string> | string;
}

class Circle implements CiProvider {
  // implementation can use sync version
  getSHA() {
    return "abc";
  }
}

class Travis implements CiProvider {
  // implementation can use async version when needed
  async getSHA() {
    // do async call
    return "def";
  }
}

// to get original type use AsyncOrSyncType
AsyncOrSyncType<AsyncOrSync<number>> // return 'number'

Awaited type

Unwrap promised type:

Awaited<Promise<number>> // number

Newable

keywords: constructor, class

Type useful when working with classes (not their instances).

class TestCls {
  constructor(arg1: string) {}
}

const t1: Newable<any> = TestCls;
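
A hypothetical factory helper built on top of Newable (the create name is an assumption):

function create<T>(cls: Newable<T>, ...args: any[]): T {
  // instantiate whatever class was passed in as a value
  return new cls(...args);
}

const t2 = create(TestCls, "arg1");
//    ^? TestCls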

Assertions

keywords: invariant

Simple runtime assertion that narrows involved types using assertion functions.

Note: This function is not purely type level and leaves minimal runtime trace in generated code.

const something: string | undefined = "abc" as any;
assert(something, "Something has to be defined!");
// from now on `something` is a string; if that weren't the case, assert would throw

const anything = "abc" as any;
assert(anything instanceof String, "anything has to be a string!");
// from now on `anything` is string

PredicateType

keywords: narrow, guard, validate

Works just like ReturnType but will return the predicate associated with the function instead. This is particularly useful if you need to chain guards to narrow broader types.

// Without PredicateType you can never use a set of functions like this together; how can you resolve ???
// You would need a specific instance of isArrayOf for each type you want to narrow
const isArrayOf = (thing: unknown, validator: (...x: any[]) => boolean): thing is ???[] => {
  return Array.isArray(thing) && thing.every(validator);
};

// With PredicateType you can pull the predicate of the validator into the higher level guard
const isArrayOf = <T extends (...x: any[]) => boolean>(
  thing: unknown,
  validator: T,
): thing is Array<PredicateType<T>> => {
  return Array.isArray(thing) && thing.every(validator);
};

Exact

keywords: same, equals, equality

Exact<TYPE, SHAPE> checks if TYPE is exactly the same as SHAPE; if so, TYPE is returned, otherwise never.

type ABC = { a: number; b: number; c: number }
type BC = { b: number; c: number }
type C = { c: number }

Exact<ABC, C> // returns NEVER
Exact<C, C> // returns C

isExact

isExact<SHAPE>()(value) is a runtime function that returns (on the type level) value if value is exactly of type SHAPE or never otherwise.

type ABC = { a: number; b: number; c: number };
type BC = { b: number; c: number };

let abc: ABC = { a: 1, b: 2, c: 3 };
let bc: BC = { b: 2, c: 3 };

// due to TS limitations, `isExact` has to be a curried function
const isBC = isExact<BC>();

isBC(abc); // returns NEVER -- abc has different structure from BC (excessive property a)
isBC(bc); // works fine

// note: that isExact can be used inline too
isExact<BC>()(abc); // returns NEVER

createFactoryWithConstraint

createFactoryWithConstraint<Constraint>()(value) is a runtime function that returns (on the type level) value, narrowed within constraint type Constraint, or throws type error otherwise

type NumericDictionary = Dictionary<number>;

// due to TS limitations, `createFactoryWithConstraint` has to be a curried function
const createNumericDictionary = createFactoryWithConstraint<NumericDictionary>();

const abNumber = createNumericDictionary({ a: 1, b: 2 });
//    ^? { a: number; b: number }

// @ts-expect-error: Type 'string' is not assignable to type 'number'
createNumericDictionary({ a: "1", b: "2" });

XOR

Gets the XOR (Exclusive-OR) type which could make 2 types exclude each other.

type A = { a: string };
type B = { a: number; b: boolean };
type C = { c: number };

let A_XOR_B: XOR<A, B>;
let A_XOR_C: XOR<A, C>;

// fail
A_XOR_B = { a: 0 };
A_XOR_B = { b: true };
A_XOR_B = { a: "", b: true };
A_XOR_C = { a: "", c: 0 }; // would be allowed with `A | C` type

// ok
A_XOR_B = { a: 0, b: true };
A_XOR_B = { a: "" };
A_XOR_C = { c: 0 };

Functional type essentials

Head & Tail: useful for functional programming, or as building blocks for more complex functional types.

function tail<T extends any[]>(array: T): Tail<T> {
  return array.slice(1) as Tail<T>;
}

type FirstParameter<FnT extends (...args: any) => any> = FnT extends (...args: infer ArgsT) => any
  ? Head<ArgsT>
  : never;
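
A quick sketch of the inferred types for the helpers above (the example values are assumptions):

const rest = tail([1, "a", true] as [number, string, boolean]);
//    ^? [string, boolean]

type First = FirstParameter<(x: number, y: string) => void>;
//   ^? number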

TypeScript dependency table

ts-essentials | typescript | type of dependency
------------- | ---------- | ------------------
^8.0.0        | ^4.1.0     | peer
^5.0.0        | ^3.7.0     | peer
^3.0.1        | ^3.5.0     | peer
^1.0.1        | ^3.2.2     | dev
^1.0.0        | ^3.0.3     | dev

Download Details:

Author: ts-essentials
Source Code: https://github.com/ts-essentials/ts-essentials 
License: MIT license

#typescript #types 

Types and its Use Cases - Blockchain in Cybersecurity

Introduction to Blockchain and Cybersecurity

No cyber defense or traditional system can be considered 100% secure. Given the lucrative nature of cybercrime and the ingenuity of criminals seeking new attack methods, what is most secure today may not be secure the next day.

Blockchain is gaining momentum today, but critics remain who question the scalability, security, and sustainability of the technology. Although it provides data confidentiality, integrity, and availability, organizations should still maintain cybersecurity controls and standards to protect themselves from external attacks.

What is Blockchain?

Blockchain is a decentralized network with millions of users around the world. All users have access to it, and all data on it is protected by encryption. Every member is responsible for ensuring that the data added to it is correct. In short, it is a decentralized, distributed, digital public ledger that enables the recording of transactions and the tracking of assets across corporate networks.

What is Cybersecurity?

It is the exercise of protecting systems, networks, and programs from digital attacks. These cyberattacks usually aim to access, alter, or destroy confidential information, extort money from users, or disrupt normal business processes. Implementing effective cybersecurity measures is especially challenging today when there are more devices than people and attackers are becoming more innovative.

Different types of Security?

Blockchain networks differ in who can participate and who can access data. Networks are typically labeled public or private, indicating who is allowed to join them, and permissioned or permissionless, indicating how participants gain access to the network.

What is Public Blockchain?

Public blockchain networks are generally open to anyone to join, and participants remain anonymous. They use internet-connected computers to validate transactions and reach consensus. Bitcoin is perhaps the most famous example; it achieves consensus through "bitcoin mining." On the Bitcoin network, computers, or "miners," attempt to solve complex cryptographic problems to create proofs of work, thereby validating transactions. Aside from public keys, this kind of network offers little identity and access control.

What is Private Blockchain?

Private blockchains use identity to verify membership and access rights and typically only allow known organizations to join. These organizations form a private, members-only "business network." Participants in a permissioned network reach consensus through a process called "selective confirmation," in which known users verify transactions. Only members with special access rights and privileges can maintain the transaction ledger. This type of network requires more identity and access control.

When building a blockchain application, evaluating which type of network best suits your business goals is essential. Private and permissioned networks can be tightly managed and are desirable for compliance and regulatory reasons. However, public and permissionless networks can provide greater decentralization and dispersion.

  • Public blockchains are open, and anyone can join them and validate transactions.
  • Private blockchains are restricted and generally limited to professional networks. A single entity controls membership.
  • Permissionless blockchains have no restrictions on processors.
  • Permissioned blockchains are restricted to a select set of users who are allowed identities using certificates.

How fraudsters attack?

Hackers and fraudsters threaten blockchains in four primary ways: phishing, routing, Sybil, and 51% attacks.

Phishing attacks

Phishing is a fraudulent attempt to obtain user credentials. Scammers send wallet key owners emails that appear to come from legitimate sources. The email asks the user for credentials via a fake hyperlink. Access to user credentials and other sensitive information can result in losses for the users and the blockchain network.

Routing Attacks

Blockchains rely on large-scale, real-time data transfer. Hackers can intercept data in transit at the internet service provider level. In a routing attack, the threat is usually invisible to blockchain participants, so everything looks fine. However, behind the scenes, scammers are extracting sensitive data and currency.

Sybil Attacks

In a Sybil attack, hackers create and use many fake network identities to flood a network and crash the system. Sybil refers to a famous book character diagnosed with a multiple identity disorder.

Sybil attacks allow hackers to gain disproportionate influence over the honest nodes in the network if they create enough fake identities. They can then refuse to receive or send blocks, blocking other users from the network.

51% Attacks

Mining requires enormous computational power. However, if a miner or a group of miners gathers enough resources, they can obtain more than 50% of a blockchain network's mining power. Controlling more than 50% of the mining power means being able to control and manipulate the ledger.

Use Cases for Cybersecurity

Being one of the most secure ways to conduct transactions in the digital world has made blockchain an alluring proposition for many sectors, including financial services. Organizations can develop cybersecurity solutions for many other applications and technologies by leveraging its integrity assurance.

DNS and DDoS Attack Mitigation

A distributed denial-of-service (DDoS) attack is when a cybercriminal floods a network with malicious traffic that prevents it from functioning correctly. These attacks slow down or completely shut down a website or resource system. In the case of Domain Name System (DNS) attacks, attackers either compromise a network's DNS or exploit underlying attributes to launch a broader attack. Blockchain enables the establishment of peer-to-peer (P2P) and zero-trust networks, removing the need for devices to trust each other and eliminating single centralized points of failure. Organizations can decommission the attacked node and continue working as usual. Even if large parts of the network are attacked, the system will continue to function thanks to its decentralized structure.

KYC Verification

Fraudulent e-KYC renewal is one of the latest methods scammers use to trick naive people. The scammer poses as a service provider and asks for confidential information such as an Aadhaar number and bank account details. This scenario can be avoided by using a blockchain-based KYC verification method.

As a distributed ledger, blockchain allows data from various government and private data portals to be collected and stored in a single, immutable, secure database. Cryptographic keys (a means of encrypting data in machine-readable form) protect each user's private information on the ledger. Hackers and cybercriminals will have difficulty cracking the keys and gaining access to the necessary credentials.

End User Security

Hackers are increasingly using edge devices such as thermostats and routers to penetrate systems. With the rapid proliferation of smart devices and home automation, cybercriminals can easily infiltrate through centralized control points and edge devices.

Blockchain helps secure IoT systems and end-user devices by decentralizing their management. A device can make security decisions without relying on central administrators or permissions. These advantages are among the primary reasons for the popularity of this technology in financial institutions such as banks. For example, end-user security in banking is a complicated problem. Simple logins, centralized IT infrastructure, and weak passwords often allow cyber attackers to penetrate network infrastructure. Blockchain enables passwordless authentication of users and devices with multi-party verification via SSL certificates. The decentralized and distributed nature of the network that checks the integrity of transactions and account balances makes attacks virtually impossible.

Conclusion

Blockchain is an advance in cybersecurity for the digital world and helps ensure compliance with the CIA triad of cybersecurity (confidentiality, integrity, availability). However, the implementation's complexity can cause problems in your application. Xenonstack can help by providing security that can prevent attacks from hackers.

Original article source at: https://www.xenonstack.com/

#blockchain #cybersecurity #types 

Learn Different Types Of Operations Used in Algorithms/Programming

Introduction

This blog will show you the different types of operations we can use in our algorithms/functions.

Operations in programming

Every developer performs various types of operations on input data inside a function to generate correct output. So, it's better to explore them before diving deep into writing programs.

Mostly, we perform the below types of operations (see the sketch after the definitions):

  • Arithmetic operations
  • Logical operations
  • Assignment operations
  • Increment/Decrement operations
  • Relational operations
  • Bitwise operations

Arithmetic operations:   Add (+), Subtract (-), Multiply (*), Divide (/) and Modulus (%)

Logical operations: Logical AND (&&), Logical OR (||) , Logical NOT(!)

Assignment operations:   = , +=, -=, *=, /= etc.

Increment/Decrement operations:  ++ , --

Relational operations:  greater than (>), less than (<), greater than or equal (>=), less than or equal (<=), equal to (==), not equal (!=)

Bitwise operations:  bitwise AND (&), bitwise OR (|), bitwise NOT (~), bitwise XOR(^)
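
The short TypeScript sketch below (values assumed for illustration) exercises one operator from each category:

let a = 10;
let b = 3;

const remainder = a % b;     // arithmetic (modulus): 1
const both = a > 0 && b > 0; // logical AND: true
a += 5;                      // assignment: a is now 15
b--;                         // decrement: b is now 2
const isGreater = a > b;     // relational: 15 > 2 is true
const masked = a & b;        // bitwise AND: 15 & 2 = 2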

Summary

In this blog, we reviewed the different types of operations we can use in our algorithms/functions.

Thank you!

Original article source at: https://www.c-sharpcorner.com/

#algorithms #programming #types 

What Are the Different Types of Database Constraints?

Database constraints are a key feature of database management systems. They ensure that rules defined at data model creation are enforced when the data is manipulated (inserted, updated, or deleted) in a database.

Constraints allow us to rely on the database to ensure integrity, accuracy, and reliability of the data stored in it. They are different from validations or controls we define at application or presentation layers; nor do they depend on the experience or knowledge of the users interacting with the system.

In this article, we will briefly explain how to define the following types of constraints and their usage:

  • DEFAULT
  • CHECK
  • NOT NULL
  • UNIQUE KEY
  • PRIMARY KEY
  • FOREIGN KEY

Why Do We Need to Define Constraints in a Database Model?

Although it is not mandatory, defining constraints ensures at database level that the data is accurate and reliable. Having those rules enforced at database level rather than in the application or presentation layer ensures that the rules are consistently enforced no matter who manipulates the data and how it is manipulated. When a customer uses a mobile or web application, when an employee uses an internal application, or when a DBA executes data manipulation scripts, any changes are evaluated at the database level to guarantee that no constraint is violated.

Are Database Constraints the Same in all RDBMSs?

Not all database management systems support the same types of constraints. When they do, there may be special features or considerations for the specific system. The syntax for creating constraints also depends on the specific RDBMS.

Constraints can be defined when we create a table or can be added later. They can be explicitly named when created (thus allowing us to identify them easily), or they can have system-generated names if an explicit name is omitted.

For the examples in this article, we are going to use Microsoft SQL Server as the platform to show the syntax of adding named constraints to an existing table. We will be starting with a basic table named Product with the following attributes: ProductCode, ProductName, Price, CurrentStock, EntryDate, and Notes.

Note:

You can also read the article “CONSTRAINTS IN POSTGRESQL AND HOW TO MODEL THEM IN VERTABELO” if you want to learn the specifics of PostgreSQL constraints.

Constraint Types

Now, let’s review the different constraint types we can find in most database engines. We are going to start from the most basic then move on to the more complex.

DEFAULT

This type of constraint allows us to define a value to be used for a given column when no data is provided at insert time. If a column with a DEFAULT constraint is omitted in the INSERT statement, then the database will automatically use the defined value and assign it to the column (if there is no DEFAULT defined and the column is omitted, the database will assign a NULL value for it). Defaults can be fixed values or calls to system-provided or user-defined SQL functions.

Let’s look at our example data model. We will start by defining a couple of DEFAULT constraints for our Product table:

  • For the column EntryDate, we will use GETDATE(), a system function that returns the current date.
  • For the column CurrentStock, we will use a fixed value of 0.

We need to issue the following two statements in SQL Server to create these constraints:

ALTER TABLE Product ADD CONSTRAINT DF_Product_EntryDate
DEFAULT GETDATE() FOR EntryDate;

ALTER TABLE Product ADD CONSTRAINT DF_Product_CurrentStock
DEFAULT 0 FOR CurrentStock;

And we can see how they are defined using the VERTABELO DATA MODELER tool:


Once the DEFAULT is created for a column, we can insert a row in our table without specifying the column:

INSERT INTO Product (ProductCode, ProductName, Price)
VALUES ('S10', 'Spoon', 120.50);

A new row will be inserted, the EntryDate column will have today’s date stored for this row, and the CurrentStock column will have the initial value of 0.

Extra Tip #1

Most database engines (like Oracle, SQL Server, DB2, and MySQL) allow the DEFAULT to be explicitly included in INSERT statements, making it clear that the value to be used is the one defined in the constraint rather than omitting the column(s) in question:

INSERT INTO Product (ProductCode, ProductName, Price, CurrentStock, EntryDate)
VALUES ('S10', 'Spoon', 120.50, DEFAULT, DEFAULT);

Extra Tip #2

Most database engines (like Oracle, SQL Server, DB2, and MySQL) also allow the value to be used in UPDATE statements. If there is a DEFAULT defined, you can use the following syntax to set the column to the DEFAULT:

UPDATE Product SET CurrentStock = DEFAULT;

CHECK

CHECK constraints allow us to define a logical condition that will generate an error if it returns FALSE. Every time a row is inserted or updated, the condition is automatically checked, and an error is generated if the condition is false. The condition can be an expression evaluating one or more columns. It can also include hardcoded values, system-provided functions, or user-defined functions.

Now, we are going to define a couple of CHECK constraints for our table, so that we:

  • Do not allow the CurrentStock column to store negative values.
  • Do not allow the Price column to store zero or negative values.

To do so, we need to execute the following statements:

ALTER TABLE Product ADD CONSTRAINT CK_Product_CurrentStock
CHECK (CurrentStock >= 0);

ALTER TABLE Product ADD CONSTRAINT CK_Product_Price
CHECK (Price > 0);

The VERTABELO DATA MODELER tool allows us to define CHECK constraints easily:


Extra Tip

CHECK constraints return an error only when the condition evaluates to FALSE. Be sure to consider handling scenarios where the CHECK condition may return a NULL, since the database would not consider that an error. For example, this UPDATE statement will not return an error:

UPDATE Product SET CurrentStock = NULL;

NOT NULL

By default, all columns in a table accept NULL values. A NOT NULL constraint prevents a column from accepting NULL values. Unlike other constraints, NOT NULL constraints cannot be created with a name, and they require a different syntax when we want to add them to an existing table.

We will continue enhancing our model, modifying our Product table so that all columns except Notes have a NOT NULL constraint in place. In SQL Server, each column is altered with a separate ALTER COLUMN statement:

ALTER TABLE Product ALTER COLUMN ProductCode VARCHAR(20) NOT NULL;

ALTER TABLE Product ALTER COLUMN ProductName VARCHAR(100) NOT NULL;

ALTER TABLE Product ALTER COLUMN Price MONEY NOT NULL;

ALTER TABLE Product ALTER COLUMN CurrentStock INT NOT NULL;

ALTER TABLE Product ALTER COLUMN EntryDate DATE NOT NULL;

Columns are defined to be NOT NULL by default in VERTABELO DATA MODELER, but you can easily change a column behavior by selecting or deselecting the N[ull] checkbox in the column definition:


Extra Tip #1

Some databases implement the NOT NULL constraint as a special class of the CHECK constraint, with the condition to be checked internally generated as “<ColumnName> IS NOT NULL”. This does not change how the NOT NULL is defined, just how it is handled internally by the RDBMS.

Extra Tip #2

Review your data model and ensure that you do not accept NULL in columns that should not be NULL. This will save you time when debugging errors or issues in the future. It may also have performance impact, since the database engine may use different execution plans depending on whether or not a column has NULL values.

UNIQUE KEY

Unique keys are defined at the table level and can include one or more columns. They guarantee that the values in one row do not repeat in any other row. You can create as many unique keys as you need in each table to ensure that all business rules associated with uniqueness are enforced.

We are going to add a couple of unique keys to our Product table to ensure we do not allow duplicate values in the following two columns:

  • ProductCode. Since we use this value to identify a product, we should not accept duplicate values.
  • ProductName. Since this is the description shown when searching for products, we need to be sure that we do not have two products with the same value.

To create those two constraints, we execute the following statements:

ALTER TABLE Product ADD CONSTRAINT UK_Product_ProductCode
UNIQUE (ProductCode);

ALTER TABLE Product ADD CONSTRAINT UK_Product_ProductName
UNIQUE (ProductName);

VERTABELO DATA MODELER allows you to define any unique key in a couple of simple steps:


Extra Tip

Most RDBMSs implement unique keys by using an index to speed up searching for duplicates when a row is inserted or updated. Also, most RDBMSs will automatically create a unique index when a unique key is added. However, you can elect to use an already existing index if one is available.

PRIMARY KEY

A primary key is a constraint defined at table level and can be composed of one or more columns. Each table can have only one primary key defined, which guarantees two things at row level:

  • The combination of the values of the columns that are part of the primary key is unique.
  • All the columns that are part of the primary key have non-null values.

So, we can consider the primary key as a combination of the NOT NULL and UNIQUE constraints.

Continuing with our example, we are now going to add a primary key to our table: first we add a ProductID column that will act as a surrogate primary key (please read the article “WHAT IS A SURROGATE KEY?” to learn about surrogate keys and how they differ from natural primary keys), and then we add the primary key itself.
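A minimal sketch of that first step, assuming an IDENTITY-based INT surrogate (the column name and numbering scheme are our choice, not something mandated by the model):

-- Add the surrogate key column; IDENTITY(1,1) makes the database
-- generate sequential values automatically
ALTER TABLE Product ADD ProductID INT IDENTITY(1,1) NOT NULL;

With the column in place, the syntax to add the constraint is: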

ALTER TABLE Product ADD CONSTRAINT PK_Product PRIMARY KEY (ProductID);

When we verify our model, we can see that the column ProductID does not accept NULL and is identified as part of the new primary key in the table.


Extra Tip #1

Usually, the columns that are part of the primary key are the ones referenced by the foreign keys in child tables (we will explain this a little later in this article). Like any other unique key, a primary key can be created on a single column or on a set of columns. Choosing the right column or columns for each primary key is a critical task when creating a data model. This topic is discussed in several of the articles mentioned here.

Extra Tip #2

When using surrogate primary keys, we can take advantage of built-in features like IDENTITY to populate the column created for the primary key. But if we use GUID (UniqueIdentifier) or another data type, we can also consider adding a DEFAULT constraint to the primary key column that includes a function like NEWID() to populate the column with system-generated values.
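For example, a hedged sketch in SQL Server syntax (the ProductGUID column and constraint name are illustrative, not part of our model):

-- New GUID column populated automatically with system-generated values
ALTER TABLE Product
    ADD ProductGUID UNIQUEIDENTIFIER NOT NULL
    CONSTRAINT DF_Product_ProductGUID DEFAULT NEWID();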

FOREIGN KEY

Foreign keys are vital to maintaining referential integrity in a database. They guarantee that each row in a child table (like Order) has one and only one row associated in a parent table (like Product). Foreign keys are created in child tables, and they “reference” a parent table. To be able to reference a table, a constraint that ensures uniqueness (either a UNIQUE or PRIMARY KEY) must exist for the referenced columns of the parent table.

When a foreign key is defined, the two tables become related, and the database engine will ensure that:

  • Every value or combination of values entered at INSERT or UPDATE in the columns that are part of a foreign key must exist exactly once in the parent table. This means that we cannot insert or update a row in the Order table with a reference to a product that does not exist in the Product table.
  • Every time we try to DELETE a row in the parent table, the database will verify that it does not have child rows associated; the DELETE will fail if it does. This means that we would not be able to remove a row in Product if it has one or more related rows in the Order table.

In our example model, we have created two child tables named PurchaseDetail and OrderDetail, and we need them to reference the existing Product table. Since foreign keys can reference either primary keys (the most common scenario) or unique keys, we are going to use the ProductID column (which is defined as PRIMARY KEY) as reference for our OrderDetail table. However, we will use ProductCode (which is defined as a unique key) as the reference for the PurchaseDetail table. To create the constraints, we need to execute the following two statements:

ALTER TABLE OrderDetail ADD CONSTRAINT FK_OrderDetail_ProductID
    FOREIGN KEY (ProductID) REFERENCES Product (ProductID);

ALTER TABLE PurchaseDetail ADD CONSTRAINT FK_PurchaseDetail_ProductCode
    FOREIGN KEY (ProductCode) REFERENCES Product (ProductCode);

Foreign keys can be created easily in VERTABELO DATA MODELER by relating the parent table to the child table and then confirming the columns that define the relationship.


Extra Tip

A foreign key can ensure that a child row points to an existing parent row and also that a parent row is not deleted if it has child rows. There are additional behaviors discussed in the article “ON DELETE RESTRICT VS NO ACTION.” You may want to read this to take full advantage of the foreign key features.

How Database Constraints Are Classified

Constraints are usually either column-level or table-level, and the classification depends on the sections of the CREATE TABLE statement in which they can be defined. Reviewing the CREATE TABLE syntax, we can easily identify those places:

CREATE TABLE table_name (
   column1 datatype column_level_constraint1 column_level_constraint2,
   column2 datatype column_level_constraint3,
   table_level_constraint1,
   table_level_constraint2
);

All constraint types we have reviewed can be defined at column level as long as they involve only a single column (the column that is being defined). All constraint types except NOT NULL can also be defined at table level, and this is mandatory when a constraint involves more than one column (like complex CHECK conditions and multiple-column unique, primary, or foreign keys). DEFAULT constraints can involve only one column, but they can be defined at either level.
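For example, here is a sketch that combines both levels (the OrderItem table is hypothetical, but it reuses the Product table from our model):

-- Column-level constraints sit next to the column definition;
-- table-level constraints are listed after the columns
CREATE TABLE OrderItem (
    OrderID   INT NOT NULL,
    ProductID INT NOT NULL,
    Quantity  INT NOT NULL DEFAULT 1,
    Price     MONEY NOT NULL CHECK (Price > 0),
    CONSTRAINT PK_OrderItem PRIMARY KEY (OrderID, ProductID),
    CONSTRAINT FK_OrderItem_Product FOREIGN KEY (ProductID)
        REFERENCES Product (ProductID)
);

The following table summarizes where each constraint type can be defined: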

Constraint Type   Table Level              Column Level
DEFAULT           Yes (only one column)    Yes (only one column)
CHECK             Yes (multiple columns)   Yes (only one column)
NOT NULL          No                       Yes (only one column)
UNIQUE            Yes (multiple columns)   Yes (only one column)
FOREIGN KEY       Yes (multiple columns)   Yes (only one column)
PRIMARY KEY       Yes (multiple columns)   Yes (only one column)

Database Constraints: What Next?

We have reviewed the six types of constraints available in most RDBMSs and taken a quick look at how to create them using the Microsoft SQL Server syntax. We have also seen examples using VERTABELO DATA MODELER as our modeling tool. If you want to learn in depth how to create and maintain constraints using Vertabelo, you should follow this up by reading the article “DATABASE CONSTRAINTS: WHAT THEY ARE AND HOW TO DEFINE THEM IN VERTABELO”.

Original article source at: https://www.vertabelo.com/


What Are the Types of Indexes in a Relational Database?

Using database indexes is one of the easiest ways to improve the overall performance of a database (more specifically, query performance) if you select the right type of index. Knowing the types of indexes in SQL is very important, since different indexes provide different benefits. We review commonly used indexes from the most popular RDBMSs and explain when to use them.

What Are Database Indexes?

A database index is an additional data structure created on top of the data in a table. You specify an index over a table and a column or set of columns. This creates an additional data search structure associated with that table and the set of columns.

In this article, we detail what an index is, how to create one, what types of indexes exist, and when to use them. If you’re looking for just the basics of indexes, we have a great article that takes you through the DATABASE INDEX ESSENTIALS.

What Is the Purpose of an Index?

The purpose of a database index is to improve query performance by speeding up data retrieval. This is done at the cost of additional storage space to hold the B-TREE data structure and the pointers to the actual data.

Indexes are used to help the database find data quickly without having to go over every row in a table every time it is queried. Overall, indexes provide a very efficient way of accessing ordered records.

If you want more information about the internal workings of a database index, how the data is structured, and a real-life example of how an index is used, we have AN IN-DEPTH ARTICLE ABOUT DATABASE INDEXES.

How to Create an Index

Different relational database management systems have slightly different syntax for creating an index, and different database engines offer slightly different options, as we see later in this article. However, there is a general syntax for creating a basic index that applies to all database engines. The syntax below creates a basic index on a table:

CREATE INDEX index_name
ON table_name (column_name_1, column_name_2, ...);

Now, let’s apply this to a real table. Suppose we have a Customer table with columns such as FirstName, LastName, and Email. We want to create an index to speed up searches by customer name.


CREATE INDEX IX_CustomerName
ON Customer (FirstName, LastName);

When executed, this syntax creates an index called IX_CustomerName on the Customer table. This allows us to search much faster for data in the FirstName and LastName columns.

The type of index created behind the scenes is sometimes called a nonclustered index or a binary search index. This type of index helps us run optimized queries for scenarios when we run queries similar to this:

SELECT FirstName, LastName, Email
FROM Customer
WHERE FirstName = 'Mark' AND LastName = 'Thompson';

As a rule of thumb, whenever we want to optimize a query, we look at the columns used for filtering and check whether an index exists on those columns. If the columns in the SELECT clause closely match the columns in the filtering clauses, we get an optimized plan and fast execution.
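In SQL Server, for instance, we could extend the index above into a covering index so that Email is answered from the index as well (the index name is illustrative):

-- INCLUDE stores Email in the index leaf pages, so the query
-- above can be answered without touching the table at all
CREATE NONCLUSTERED INDEX IX_CustomerName_Covering
ON Customer (FirstName, LastName)
INCLUDE (Email);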

But this is not a silver bullet, and there’s much more to indexing than just these rules. If you’re using SQL SERVER, we have an article outlining the STRATEGIES YOU CAN APPLY USING DIFFERENT TYPES OF INDEXES depending on your scenario. Although it is specific to SQL Server, take a look through it anyway, since most of the advice can be adjusted to fit other database engines.

What Are the Types of Indexes in SQL?

Now that we have seen how an index is created, let’s review the main types of relational database indexes you can create to improve your queries. Some of these are available only with certain database engines, but we’ll point out where they are available.

Most types of indexes store pointers to the data rows in a data structure called a search tree. The search tree is optimized for searches and is the backbone of the index. It allows for something similar to searching in a BINARY SEARCH TREE, except a bit more complex.

There are different types of indexes, each of which has different internal data structures and is useful for specific scenarios. We go into more detail later in the article. Here, we just briefly mention the types of indexes available:

  • From the point of view of the characteristics of the index attribute:
    • Primary Index
    • Clustered Index
    • Secondary Index
  • From the point of view of the number of index references to a data file:
    • Dense Index
    • Sparse Index
  • Specialized indexes for highly specific scenarios:
    • Bitmap Index
    • Reverse Index
    • Hash Index
    • Filtered Index
    • Function-based Index
    • Spatial Index

Let’s use the Customer table from above as an example. To see what the sample data looks like, we write a simple SELECT query and return everything from the table.
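The query is simply:

SELECT *
FROM Customer;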


Clustered Index

One of the most common indexes available in all modern and full-fledged relational database systems is the clustered or clustering index. A clustered index defines the order in which data is physically stored on DATA PAGES and implicitly in the table.

Let’s look at an example. Say the first two rows fit on Page 1, the third and fourth on Page 2, and the last row on Page 3.


The purpose of a clustered index is to physically store the rows in ascending or descending order based on the selected columns. The data is then always sorted, which helps when searching for a single value and, even more so, when searching for a range of values; that is where a clustered index shines best.

Suppose our reporting dashboards always display customers sorted alphabetically. So, we want to store the data sorted alphabetically by first and last names in our database. To create a clustered index to do so, we write the query below:

CREATE CLUSTERED INDEX CI_FirstName_LastName
ON Customer (FirstName ASC, LastName ASC);

This has a small impact on the previous query that returns all of the data. By creating a clustered index with an ascending sort on the first and last name columns, we have physically reordered the data in the pages, so the rows are now stored in that order.


The data is now sorted by first name and then by last name. If we run a query that sorts the rows alphabetically, this gives us the best performance because the rows are already stored in sorted order, which avoids a sort step in the query.

If we want to show the first 10 customers in alphabetical order, we avoid searching the entire table to find and select the 10 customers. The database just returns the data pages associated with the first 10 customers already sorted.

Bitmap Index

The bitmap index is another type of index, AVAILABLE ONLY ON ORACLE at the time of this writing. It is useful in a very specific scenario: it is ideal when you have to query and filter a table on a column with a small number of distinct values compared to the total number of rows in the table.

Let’s look at a scenario with our sample data where we may benefit from a bitmap index. Imagine our Customer table has not just 5 rows but over 10 million rows. Suppose we have to filter our query for female customers whose last name is Watson.


We may write a query like the following:

SELECT FirstName, LastName, Email
FROM Customer
WHERE Gender = 2 AND LastName = 'Watson';

The bitmap index is ideal here because we have only a few distinct values for gender compared to the 10 million rows of the entire table. To speed up this query with a bitmap index, we create it using the following syntax:

CREATE BITMAP INDEX BMP_Gender
ON Customer (Gender);

And now the database can quickly find “Kate Watson” and her email, along with all other matching rows, among the 10 million rows in the table.


The bitmap index is even more powerful when created as a bitmap join index, for example, when we join the Customer table with the Sales table and filter by Gender. A bitmap join index in this scenario looks something like this:

CREATE BITMAP INDEX BMP_Gender_Sales
ON Sales (Customer.Gender)
FROM Sales, Customer
WHERE Sales.Customer_ID = Customer.ID;

Whenever we have a query that joins these two tables and filters by gender, we get close to peak performance.

Reverse Index

The reverse index is similar to the regular index. However, instead of creating a binary search tree for a faster search in ascending order, the reverse index is optimized to search for data in descending order. The syntax for building a reverse index is very similar to that of the usual nonclustered index, except we specify the reverse or descending order.

Suppose we want to optimize for finding out which customers have placed the 3 most recent orders. The syntax for the index is like this:

CREATE INDEX IX_LastOrder_Customer
ON Customer (LastOrderDate DESC);

The essential keyword in our syntax is DESC. It tells the database engine to create the index in reverse order. So, we get the best performance whenever we query the Customer table for the 3 most recent orders.

What Data Structure Does an Index Use?

As mentioned before, indexes are implemented on top of other data structures to optimize search operations. But what are these data structures?

Balanced Tree

The most common indexes use a BALANCED TREE behind the scenes to speed up queries. Most database engines use either a balanced tree or a variation of a balanced tree like a B+ TREE. The structure of a general balanced tree is shown below.

[Figure: a balanced tree with the value 8 in its root node]

The top node is the root, and those below it are either child nodes or leaf nodes. We always start searching for our row from the root node, comparing whether the value we’re searching for is less than or greater than the value in the node at hand; the result of each comparison tells us whether to go left or right. In the example above, all values lower than 8 take us to the left, while values greater than 8 take us to the right, and so on.

Hash

Hash indexes are built on a hash table, a data structure that provides some of the fastest search speeds. Hashes allow the index to do very fast lookups on the data stored in a table.

The idea behind hashes is that the search key is taken through a HASHING FUNCTION instead of being searched through an index or the entire table. The input search key is converted to a hash value that determines the associated bucket. In the example below, the search key “Mike” is taken through the hash function and is associated with a bucket.

[Figure: the search key “Mike” passed through the hash function and mapped to a bucket]

Every search key is mapped to exactly one bucket in the array of buckets, no matter how many distinct values exist in the column; ideally, the rows are distributed evenly across the buckets. The matching row is then selected and returned from its bucket.

Index Implementations by Relational Database Engine

As you can see, there are multiple types of indexes in a relational database. Each database engine has its proprietary index implementations. Let’s go over the most popular database engines, list the available indexes for each, and discuss when to use them.

PostgreSQL Indexes

PostgreSQL provides A LONG LIST OF INDEXES that are useful in different scenarios:

  • The B-tree index is the most common type of index. It is used to search for equality and range comparisons in columns that can be sorted.
  • A hash index stores a 32-bit hash code derived from the value of the indexed columns. It is used when simple equality comparisons are needed.
  • GiST is not a single index but rather a logical structure within which multiple different indexing strategies can be implemented. It is often used in finding the “nearest neighbor” in geometric data types.
  • SP-GiST, like GiST, is a type of index that implements multiple indexing strategies. It is based on different data structures such as quadtrees, k-d trees, and radix trees and is used in similar scenarios as GiST.
  • GIN is also called an “inverted index.” This type of index is used in scenarios in which the data is formed by an array. The inverted index contains a separate entry for each component value of the array.
  • BRIN stands for “Block Range Index.” It’s used to store summaries of values in consecutive physical data pages within a table. BRIN indexes are best suited when the values in the rows are correlated with the physical order of the data pages.
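As a quick sketch of the PostgreSQL syntax (the table and column names are illustrative; B-tree is the default access method, and the others are selected with USING):

-- B-tree index (the default access method)
CREATE INDEX idx_customer_last_name ON Customer (LastName);

-- Hash index for simple equality comparisons
CREATE INDEX idx_customer_email_hash ON Customer USING HASH (Email);

-- GIN index over a hypothetical array column
CREATE INDEX idx_customer_tags_gin ON Customer USING GIN (Tags);

-- BRIN index over a column correlated with physical row order
CREATE INDEX idx_customer_created_brin ON Customer USING BRIN (CreatedAt);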

Oracle Indexes

Oracle has slightly fewer types of indexes. However, they are more robust in terms of applicability.

  • The B-tree is the standard index type, also used by other database engines. It is great for primary keys and columns with a very large number of distinct values compared to the total number of rows.
  • Bitmap indexes are used for the opposite scenario of a B-tree. Specifically, you want to use them when the number of distinct values in a column is very small compared to the total number of rows.
  • The function-based index is a type of index in which the value stored in the search tree is defined by a function. It provides excellent performance when we have functions in WHERE clauses.
SQL Server Indexes

SQL Server has its own set of index types:

  • A clustered index is not just a way for the database engine to search in queries. It physically reorganizes rows in the data pages so that they are sorted (ascending or descending).
  • The nonclustered index is the equivalent of the standard B-tree index in other database engines. It is generally great for searching through data with a very large number of distinct values.
  • Filtered indexes are created on specific subsets of data. They can be used to optimize searches for data with given criteria when the data is skewed. For example, the value 55 of a numeric column may be searched for frequently but found only in a few rows compared to the total number of rows in the table. You can create a filtered index in a way similar to a nonclustered index but with a WHERE column = 55 condition in the index definition, as sketched below.
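A hedged illustration of that filtered-index scenario in SQL Server syntax (the Orders table, Status column, and index name are hypothetical; the value 55 comes from the example above):

-- Only rows with Status = 55 are stored in the index,
-- keeping it small for a skewed distribution
CREATE NONCLUSTERED INDEX IX_Orders_Status_55
ON Orders (Status)
WHERE Status = 55;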
MySQL Indexes

MySQL ties its main index types to key definitions:

  • The primary key creates a unique index that allows for very efficient and fast query performance when searching for unique values. It also benefits from the NOT NULL optimization because it can’t have NULL values. It is always used in defining the primary key and is created automatically when you specify the PRIMARY KEY keywords.
  • The unique index is similar to the primary key index. However, it is more lenient in the sense that it does allow NULL values to be stored multiple times. It can be used to enforce additional uniqueness if a primary key has already been created.

Keep Database Indexes in Your Toolbox

If you have made it this far, that means you like reading about database indexes! I hope you have found the information in this article useful and learned a few new things. Knowing what indexes are available in a specific database engine helps you improve performance when queries start to bog down.

Sometimes the general B-tree index is not enough and may not fit the schema and/or the data. Knowing all other types of indexes in a relational database is like having a Swiss army knife in your toolbox. To read more about database indexes, have a look at our LIST OF ARTICLES ON THIS TOPIC.

Original article source at: https://www.vertabelo.com/


Explanation with Examples: Data Model Types

Data modeling is an essential part of designing a database. If you want to achieve the best outcome, make sure to utilize the available data models. Read on to find out more.

Every relational database has clearly defined objects and relationships among these objects. Together, they comprise the data model.

This article presents the concept of data modeling. First, we’ll go over data modeling and the steps of its process. Then we’ll jump into the various types of data models. You’ll see examples of conceptual, logical, and physical data models. I’ll also mention some of the more specific data models.

Let’s get started!

About Data Modeling

Relational databases organize data into tables that have connections among them. But before creating a physical database, you should model your data. Data models help visualize data and group it logically. Below are the three data models we’ll be focusing on in this article.


The base is the conceptual data model. What follows are the logical and physical data models. We’ll find out more about these data models in the following sections.

Data modeling is a vast subject, and it is essential to the database design process. So make sure to check out our other articles on data modeling, such as WHAT IS DATA MODELING, THE BENEFITS OF DATA MODELING, and WHY DO YOU NEED DATA MODELING. And if you’re still asking why we need data modeling processes and diagrams, read THIS ARTICLE to learn about common database design errors that can be avoided by following the data modeling process.

The Data Modeling Process

There are several steps to be followed during the data modeling process. Let’s go through them one by one.

Step 1. Identifying entities

This step is part of conceptual data modeling. Here, we decide on data groups according to the business rules. For example, when visualizing a grocery shop database, we would have entities such as Products, Orders, and Customers.


Step 2. Identifying connections between entities

This step is part of conceptual data modeling. Here, we decide on the relationships (i.e. connections) between entities. For example, each customer would have one or more orders, and each order would have one or more products.


Step 3. Identifying entities’ attributes

This step is part of logical data modeling. Each entity is assigned its attributes; this becomes the base for the physical data model. For example, each order would have an order ID, a customer who placed the order (customer_id), and the products ordered.


Step 4. Deciding attributes’ specific data types

This step is part of physical data modeling. Here, we assign database-specific data types to the attributes of each entity. For example, an order_id would be an INTEGER and a customer name would be a VARCHAR.


Step 5. Identifying many-to-many relationships and implementing junction tables

This step is also part of physical data modeling. Here, we create an additional table that stores the many-to-many relationship data. For example, each order can have one or more products and, at the same time, each product can be ordered zero or more times. A sketch of such a junction table is shown below.
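As a minimal SQL sketch (the table and column names are illustrative; the exact DDL would come from your physical model):

-- Junction table resolving the many-to-many relationship
-- between orders and products
CREATE TABLE order_product (
    order_id   INT NOT NULL REFERENCES orders (order_id),
    product_id INT NOT NULL REFERENCES products (product_id),
    quantity   INT NOT NULL DEFAULT 1,
    PRIMARY KEY (order_id, product_id)
);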


Step 6. Creating database constraints, indices, triggers, and other database-specific objects

This step is part of physical data modeling. Here, we focus on implementing database-specific features. For example, let’s mark the primary keys and foreign keys (if needed) for each table.


Vertabelo lets you create an SQL script from the physical data model; when you complete the data modeling process, you can create your physical database in no time by executing the Vertabelo-provided SQL script.

Data modeling is part of database modeling. Check out THIS ARTICLE to get a different perspective on the database modeling process as a whole.

Common Data Models

You now know the basics of the data modeling process. Let’s see how you might use it in practice.

Imagine that the local zoo hired you to design their database. We’ll create conceptual, logical, and physical data models to complete the entire database design process.

Conceptual Data Model

The conceptual data model focuses on identifying entities and relationships among them. We take into consideration business data, business rules, and business processes.

This data model is a strictly abstract representation of data. Its components include:

  • Entities representing groups of objects that share attributes (which are defined later, in the logical model).
  • Relationships between entities.

Conceptual data models are typically created by data architects to present a high-level data overview to business stakeholders.

Example

First, let’s identify the entities.

  1. Zoo_Employee stores data about the employees of the zoo.
  2. Zoo_Animal stores data about the animals living in the zoo.
  3. Animal_Species stores data on the animal species present in the zoo.
  4. Animal_Food_Type stores the types of food eaten by the zoo’s animals.
  5. Food_Provider stores data about companies or organizations that provide food types.

Now let’s discuss the relationships among the entities.

  • Each animal has one caretaker, who is an employee of the zoo.
  • Each employee can be a caretaker of zero or more animals.
  • Each animal has a species and eats a specific food type.
  • Each food type is provided by one or more food providers, and each food provider can provide one or more food types.

Together, these entities and relationships make up the conceptual model for this data.


Next, let’s move on to the logical data model.

Logical Data Model

A logical data model dives deep into the data structure, assigns attributes to each entity, and specifies the database implementation details.

This data model is a base for the physical data model. The only difference is that logical data models are not database-specific (as opposed to physical data models, which are designed for one database management system like Oracle or MySQL).

Example

We can create the logical data model in Vertabelo. Notice how many more details there are.

In addition to the attribute names, we have general data types (e.g. integer or varchar) and indicators for mandatory or non-nullable columns (M) and primary identifier fields (PI). PI fields will become primary keys in the physical data model.

This data model is still database-agnostic. The attributes’ data types are abstract, but Vertabelo converts them into database-specific data types when generating a physical data model.

Physical Data Model

The physical data model includes all database-specific features, such as data types, database indices, triggers, constraints, and more.

This data model is directly related to the database, as we can generate the database creation script solely based on this data model. It includes primary and foreign keys, column and value constraints, and other database-specific features.

Example

Let’s generate a physical data model from our logical data model in Vertabelo.


This data model is database-specific. Here, we’re using PostgreSQL. To learn even more about the conceptual, logical, and physical data models, read THIS ARTICLE.

Now that we’ve learned about the fundamental data models, let’s look at other available data models.

Other Data Model Examples

There are many different data models. The UNIFIED MODELING LANGUAGE (UML) offers various models used in software engineering. Some of them, such as a class diagram, are helpful in data modeling. Let’s look at some other useful data models.

Dimensional Data Model

Dimensional data models are used to implement data warehousing systems. These data models are handy in facilitating the analysis and retrieval of data.

The elements of a dimensional data model include:

  • Facts, i.e. business processes whose information can be retrieved.
  • Dimensions, i.e. details for each fact. These usually answer the questions of who, where, and what.

For example, if we consider the feeding of an animal business process to be a fact, then the possible dimensions include the caretaker dimension, food type dimension, and feeding time dimension.

Object-Oriented Data Model

An object-oriented data model helps us more easily relate complex real-world objects. The elements of this model include:

  • Class, i.e. a template for object creation.
  • Object, i.e. an instance of a class.
  • Attributes that characterize each object.
  • Methods that describe the behavior of objects.

Below we have a sample object-oriented data model.


This data model provides more information on the specifics of each object/entity.

Entity-Relationship Data Model

The entity-relationship data model falls under the category of conceptual data models. It consists of entities, their attributes, and any relationships among entities.


Conceptual data models like this one are all about capturing an accurate, high-level view of the data.

Try Your Hand at Different Data Model Types!

Any database design process begins with visualizing the data using various data modeling tools and diagrams. We usually use a top-down approach, starting with a general overview of the available data (conceptual models) and then drilling down to more and more details (logical and physical models).

Following this approach, the first step is to create a conceptual data model. It helps us initially organize the data and decide on the objects/entities and the relationships among them. Next comes a logical data model, which provides more details on the data structure, such as the attributes of each entity. Finally, we convert the logical data model into a physical data model, which is an exact blueprint of your database.

With that knowledge, you’re ready to design your own database.

Good luck!

Original article source at: https://www.vertabelo.com/

Data Types and Declaration of Variables & Strings in PHP

In the previous PHP tutorial, we learned the various ways to differentiate PHP code from HTML code so that it becomes easy for the PHP parser to interpret the code. In this post, let’s learn how to store information in a PHP script. Storing information is necessary so that you can use the value anywhere in the program. A variable is a name given to a stored value that can be used throughout the program. In PHP, variable names are preceded by the ‘$’ symbol.

Example:

<?php
$a = 10;
$b = 10;
$c = $a + $b;
echo $c; // prints 20
?>

Variable Naming:

There are certain rules that must be followed when naming the variables. They are:

Variables are defined with a preceding dollar sign ($)

PHP variables must start with a letter or an underscore (_)

PHP variables should contain only alphanumeric characters and underscores

A variable name cannot start with a number

We can separate words in variable names by using underscores, e.g. $employee_details

Variable names are case-sensitive ($y and $Y are two different variables)

Values are assigned using the “=” operator

Another important thing in PHP is that all statements must end with a semicolon (;)

The first step in using a variable is to assign a value to it.

Assigning Variables:

Variable assignment is simple: just write the name, add a single equal sign (=), and then the expression that we want to assign to that variable.

Example: $pi=3 + 0.1489;

Reassigning Variables:

After assigning a value to the variable, if it has to be changed in the later stages of the program, you can also reassign the values to the same variable name.

Example:

$my_num_var = "this should be a number - hope it is reassigned later";
$my_num_var = 5;

Unassigned Variables:

Many programming languages report an error if you try to use variables before they have been assigned a value. PHP, however, tolerates such unassigned variables: the default error-reporting settings let you use them without reporting any errors. If you would like to be warned about variables that have not been assigned, raise the error-reporting level to E_ALL from the default level.

This can be done in two ways:

By including the statement error_reporting(E_ALL) at the top of the script.

By changing the php.ini file to set the default level.
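The first approach is a one-liner at the top of the script; a minimal sketch:

<?php
// Report all errors and notices, including the use of
// unassigned (undefined) variables
error_reporting(E_ALL);

echo $not_assigned_yet; // now triggers a notice instead of being silently treated as NULL
?>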

Default Values:

When you do not pass a value for a function parameter, PHP uses the value declared for that parameter in the function definition; this is called the default value (see the sketch below).
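A minimal sketch (the greet() function and its default are hypothetical):

<?php
// $name falls back to "World" when no argument is passed
function greet($name = "World") {
    return "Hello, $name!";
}

echo greet();      // Hello, World!
echo greet("PHP"); // Hello, PHP!
?>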

Variables in PHP do not have intrinsic types

A variable does not know in advance whether it will be used to store a number or a string of characters

The type of a variable is interpreted depending on the context in which it is used

The data to be stored and used in the PHP script can be of various types. In order to differentiate this, PHP provides various Data Types.

Data Types in PHP

The values stored and used in the PHP script are of different types. The Data Types define the type of data being used in the program. PHP has 8 Data Types. They are:

Integers are whole numbers, without a decimal point, like 495

Doubles are floating point numbers, like 3.1415 or 49.0

Booleans have only two possible values: TRUE and FALSE

NULL is a special data type. It has only one value: NULL

Strings are series of characters

Arrays are collections of values

Objects are instances of classes, which can access both data and functions of that specific class

Resources are special variables that hold references to resources external to PHP (e.g., a database connection)

These data types are classified into 2 major types:

Simple Types – integers, doubles, Booleans, NULL, and strings

Compound Types – arrays, objects, and resources

Simple Data Types:

Integers:

Integers are of the simplest type. They correspond to simple whole numbers, both positive and negative.

Example:

$int_var = 6789;
$another_int = -1245 + 1245; // will equal zero

Doubles and Floating Point Numbers:

Real numbers (i.e., numbers containing a decimal point)

Example:

$first_double = 568.998;
$second_double = 0.444;
$even_double = 6.0;

Booleans:

Booleans are true-or-false values, which are used in control constructs like the testing portion of an if statement

Boolean Constants:

To use Booleans, PHP provides a couple of constants: TRUE and FALSE

Example:

if (TRUE)
    print("this will always print");
else
    print("this will never print");

NULL:

The NULL type, however, takes the idea of a special value to its logical extreme: the type NULL has only one possible value, which is NULL

Example:

$my_var = NULL;

A variable that has been assigned NULL has the following properties:

It evaluates to FALSE in a Boolean context

It returns FALSE when tested with isset()

Apart from numbers, PHP also supports “strings”, where a sequence of characters is treated as a single unit.

Declarations of Strings in PHP

Strings are sequences of characters that are treated as one unit. Strings in PHP are declared in two ways:

Single quoted

Double quoted

Single Quoted String:

Here, the text within the single quotes is displayed as it is, without any variable interpolation.

Example:

<?php
$string_variable = "name";
$literally = 'My $string_variable is Happy!';
print($literally);
?>

Output:

My $string_variable is Happy!

Double Quoted String:

Here, the contents of the double quotes are interpreted: variables are replaced by their values before the output is displayed.

Example:

<?php
$string_variable = "name";
$literally = "My $string_variable is Happy!";
print($literally);
?>

Output:

My name is Happy!

When we want to reuse a block of statements, we use functions.

Stay tuned for our next post on how to pass parameter to functions and the various built-in functions supported in PHP.

Got a question for us? Please mention them in the comments section and we will get back to you.

Original article source at: https://www.edureka.co/


How to Solve PHP Fatal Error: Unsupported Operand Types

Learn how to solve the PHP fatal error “Unsupported operand types”.

The PHP Fatal Error: Unsupported operand types is an error message that indicates you have used operators on values that are not supported.

PHP operators like + and - can be used on int type values. When you try to combine values like array or string with an int, the error message will appear:

// 👇 unsupported operand types: string + int
print "Hello" + 29;

This fatal error commonly occurs in PHP v8, where operand type handling is stricter.

In PHP v7 or below, the string will be ignored and PHP will print out the int value:

// 👇 php v7 or below
print "Hello" + 29; // 29

To solve this fatal error, you need to make sure that the operands (the values on the left or right side of the operator) are supported by PHP.
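For instance, if a value arrives as a numeric string, one hedged fix is to cast it explicitly before doing arithmetic (the variable names are illustrative):

<?php
$count = "29";             // e.g. a numeric string from a form field
$total = (int) $count + 1; // cast first, then add
echo $total;               // 30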

Arithmetic operators like +, -, *, and / support int and float types.

You can’t add an array and an int, or a string and a float when using these operators:

<?php
$arr = [
    "name" => "Orange",
    "price" => 5,
];

// 👇 unsupported operands: array + int
$arr + 8;

Even PHP v5 will show “Unsupported operand types” when you add an array and an int as in the example above.

When you don’t know how to fix the issue, use var_dump() to see the type of the variable like this:

<?php
$arr = [
    "name" => "Orange",
    "price" => 5,
];

var_dump($arr);

The above example will output the content of the $arr variable:

array(2) {
  ["name"]=>
  string(6) "Orange"
  ["price"]=>
  int(5)
}

Knowing that it’s an array, you can choose to perform addition to the number element, which is the price element:

<?php
$arr = [
    "name" => "Orange",
    "price" => 5,
];

// 👇 add to the price element
$arr['price'] += 8;

print $arr['price']; // 13

PHP requires you to use supported operand types for the code.

When you see the message “Fatal Error: Unsupported operand types”, you need to pay attention to the variables used as operands.

They are the variables on the left or right side of the operator symbol. You can find information about them using the var_dump() function.

Once you know the operand types, adjust your code accordingly to solve this error.

And that’s how you handle PHP Fatal Error: Unsupported operand types. Nice work! 😉

Original article source at: https://sebhastian.com/
