A Big Data platform refers to an IT solution that combines servers, Big Data tools, and utilities into one packaged offering, which is then used to manage and analyze Big Data. Why this is needed is covered later in the blog, but consider how much data is created every day: if enterprises do not maintain this data well, they are bound to lose customers.
Such a solution combines the capabilities and features of many Big Data applications into a single package. It generally consists of servers, storage, databases, management utilities, and business intelligence tools.
It also focuses on providing users with efficient analytics tools for massive datasets. These platforms are often used by data engineers to aggregate, clean, and prepare data for business analysis. Data scientists use them to discover relationships and patterns in large data sets using machine learning algorithms. Users of such platforms can build custom applications for their own use case, such as calculating customer loyalty in an e-commerce scenario; there are countless other use cases.
A Big Data platform centers on four letters, S, A, P, S, which stand for Scalability, Availability, Performance, and Security. Various tools are responsible for managing the hybrid data of IT systems. The main platforms are listed below:
It is an open-source software platform managed by the Apache Software Foundation. It is used to store and manage large data sets at low cost and with great efficiency.
It provides a wide range of tools to work with; this functionality comes in handy for IoT use cases.
This layer is the first step in the journey of data arriving from various sources. Data is prioritized and categorized here, so that it flows smoothly into the subsequent layers of the process.
It provides a single self-service environment to users, helping them find, understand, and trust the data source. It also helps users discover new data sources if there are any. Discovering and understanding data sources are the initial steps for registering the sources. Users search the data catalog based on their needs and filter the appropriate results. In enterprises, a data lake is needed by business intelligence teams, data scientists, and ETL developers who must find the right data; they use catalog discovery to find the data that fits their needs.
This platform can be used to build data-transformation pipelines and even schedule their runs. Get more insight on ETL.
The essential components are as follows:
Recommendation engines
In this article, we covered the platforms used in the Big Data environment. Based on your requirements, you can choose from these technologies to manage, operate, develop, and deploy your organization's Big Data securely.
Original article source at: https://www.xenonstack.com/
In a relational database, each table should have a primary key (PK). A primary key has multiple advantages for a table, including the following:
A primary key adds a UNIQUE constraint to a column. This ensures that the data in that column is not duplicated. If an object with the same primary key value is already present in the table, we should update the object instead of creating another one.
A database index is automatically created together with a primary key. This makes data searches faster. Indexes work like the table of contents in a book – they allow the database to quickly locate a specific row without scanning the whole table.
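A rough SQL illustration of both points (the customer table here is just an example, not taken from this article):

CREATE TABLE customer (
    customer_id INT PRIMARY KEY,  -- adds the UNIQUE constraint and creates an index
    full_name   VARCHAR(100)
);

INSERT INTO customer VALUES (1, 'Alice');
INSERT INTO customer VALUES (1, 'Bob');  -- fails: duplicate primary key value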
To set the primary key in Vertabelo, select the table. In the right pane, find the Columns section and check the PK box next to the column name.
Note that you can choose multiple columns to create a composite (i.e. multi-column) primary key.
Vertabelo checks if each table has a primary key; if a table doesn’t have a PK, Vertabelo displays a warning. You can find out more about MODEL VALIDATION IN VERTABELO HERE.
There is also a special property section for each table called Primary key. This section is useful when your table needs a multicolumn key. Here, you can set the order of the columns used to form the PK and set the key name; this will be the name of the database constraint for this key.
Original article source at: https://www.vertabelo.com/
A foreign key is one of the fundamental concepts of relational databases. You don’t store all your data in one table, but many different tables. Nonetheless, all your data is related. That’s where the foreign key comes into play. It facilitates the process of linking the tables. Read on to find out more.
This article focuses on the concept of the foreign key in a physical model. First, we’ll briefly go over foreign key basics. Next, we’ll dive into the details of defining a foreign key in a physical model of the database. We’ll discuss the difference between candidate keys, primary keys, and alternate keys so you can understand which one to use as a reference column. You’ll also see how vital the foreign key concept is for joining the tables. Finally, I’ll show you how to fetch the SQL code of the physical model from VERTABELO.
Let’s get started.
A foreign key creates a link between two tables. This link is based upon a column shared by these tables. Look at the picture below:
The two tables presented above store information about books and authors. The line between them signifies that one author can have one or more books, but one book can have only one author.
The Books and Authors tables share the AuthorId column. In the Authors table, which is a primary table, the AuthorId column is a primary key (PK). In the Books table (a foreign table), the AuthorId column is a foreign key (FK).
Let’s look at some data.
The Books table assigns an author to each book using the AuthorId column. You can trace the AuthorId column to the Authors table to fetch information about the author. Please note that the AuthorId column in Books can use only the values present in the AuthorId column of Authors, as Authors is the primary table.
The information about authors is stored in the Authors table, not in the Books table. This way, there is no duplicated data. Otherwise, our data would all be stored in one table, like this:
We aim to avoid data duplication. Foreign keys let us divide data into tables and then link these tables.
To summarize, the foreign key constraint in a physical model joins the tables on a shared column (or columns). Here, the shared column is the AuthorId column.
Now we’re ready to move on to defining a foreign key in a physical model.
VERTABELO enables us to create ER diagrams easily. ERDs include all the information necessary to construct a database; as such, they let us define foreign keys.
Let’s see how to define a foreign key in a physical model.
We’ll start by defining our Books and Authors tables with their respective primary key columns.
First, we need to switch the cursor from (1) Select to (4) Add new reference.
Let’s add a new reference by dragging the line from our primary table (Authors) to our foreign table (Books).
We can now customize our new reference and add the remaining columns to our tables.
We’ve modified the name of the foreign key column in the Books table to be the same as in the Authors table. Also, we added the remaining columns to each table. Furthermore, the cardinality was adjusted to emphasize that one author can have one or more books.
Let’s take a closer look at the Reference Properties section.
By default, the name of our reference is Books_Authors, but you can change it into anything you want. The cardinality defines the link specification. Here, it says that one row from the primary table can be assigned to one or more (that is, 1..*) rows of the foreign table. As mentioned before, one author can have one or more books, but one book can have only one author.
Later, there is a block that stores the primary and foreign tables’ information. We see that our primary table is Authors, and our foreign table is Books. These tables share the AuthorId column, which is a reference column. Later on, we’ll see how to modify the reference column(s).
Lastly, the two drop-downs at the bottom let you select the action on update/delete of the primary table’s rows. (More on that later.) You can also ADD MULTIPLE REFERENCES BETWEEN THE TWO TABLES.
Why does this matter? Because the primary table column that’s used as a reference doesn’t always need to be that table’s primary key column.
We could choose a column, or a combination of columns, from the set of candidate key columns. The candidate key columns fulfill the requirements to be the primary key. Let’s visualize it:
What the picture above says is candidate keys = primary key + alternate keys. The set of candidate keys contains all the columns that could well be the primary key. You can GET MORE INSIGHT ON CANDIDATE KEYS IN THIS ARTICLE.
So how do we choose a primary key – especially if all the candidate keys could fill this role? Plus, there are different types of primary keys, such as SURROGATE KEYS or NATURAL KEYS. In this article, you can LEARN HOW TO CHOOSE A GOOD PRIMARY KEY.
You can also CREATE A REFERENCE AS AN ALTERNATE KEY if you don’t want to use the PK as the reference column between tables.
There is an easy way to modify reference columns.
In Vertabelo, we can add more reference columns by choosing the columns from the drop-down menus for the primary and foreign tables and clicking the Add button next to it.
We can also remove the reference columns by clicking the x button next to the already-added ones.
In Vertabelo, the Reference Properties section lets us define the specific actions taken when we update or delete a row from the primary table. Let’s look at the possibilities:
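In plain SQL, these referential actions are spelled out in the foreign key definition. A generic sketch (not Vertabelo's generated output) might look like this:

ALTER TABLE Books ADD CONSTRAINT Books_Authors
    FOREIGN KEY (AuthorId) REFERENCES Authors (AuthorId)
    ON UPDATE CASCADE
    ON DELETE SET NULL;  -- other common options: NO ACTION, RESTRICT, SET DEFAULT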
After creating your ER diagram in VERTABELO, you can grab the SQL codes straight from there and save yourself additional work!
To do so, click the Generate SQL Script button from the toolbar.
Every update/delete action made to the primary table’s AuthorId column results in the foreign table’s AuthorId column value(s) set to its default value.
You can choose whether to generate a SQL script to create or remove the objects. You can also select which elements are included in the SQL script.
Once you click the Generate button, the window is extended. You can choose a file name and save the file in Vertabelo (click the Save button) or download it to your local environment (click the Download button).
Now your SQL script is ready to run!
First, we create the Books and Authors tables using the CREATE TABLE statement. After that, we add the foreign key constraint to the Books table. To do so, we use the ALTER TABLE statement. This constraint’s name is Books_Authors. It links the Books and Authors tables using the AuthorId column.
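A minimal sketch of what that generated script could look like (the extra columns and the data types are assumptions, not Vertabelo's exact output):

CREATE TABLE Authors (
    AuthorId int NOT NULL,
    FirstName varchar(100),
    LastName varchar(100),
    CONSTRAINT Authors_pk PRIMARY KEY (AuthorId)
);

CREATE TABLE Books (
    BookId int NOT NULL,
    Title varchar(255),
    AuthorId int NOT NULL,
    CONSTRAINT Books_pk PRIMARY KEY (BookId)
);

ALTER TABLE Books ADD CONSTRAINT Books_Authors
    FOREIGN KEY (AuthorId) REFERENCES Authors (AuthorId);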
A foreign key is a column that references a column in another table. It gives you the means to relate data stored in different tables. Now you can divide and conquer your data! But it is still possible to put it all back together, thanks to the references created between the tables.
Check out our VIDEO TUTORIAL about references between the tables. And don’t forget to practice on your own.
Good luck!
Original article source at: https://www.vertabelo.com/
A foreign key is a field used to establish a relationship between two tables via the primary key (you can also reference a non-primary field, but it is not recommended).
In this tutorial, I show how you can add a foreign key constraint while creating a table using migration in the Laravel 9 project.
Open the .env file and specify the host, database name, username, and password.
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=tutorial
DB_USERNAME=root
DB_PASSWORD=
Create countries, states, and cities tables using migration. I am adding foreign keys on the states and cities tables: the states table is linked to the countries table, and the cities table is linked to the states table.

Countries table –

php artisan make:migration create_countries_table

Navigate to the database/migrations/ folder from the project root, find the PHP file whose name ends with create_countries_table, and open it. Define the table structure in the up() method.

public function up()
{
Schema::create('countries', function (Blueprint $table) {
$table->id();
$table->string('name');
$table->timestamps();
});
}
States table –

php artisan make:migration create_states_table

Find the PHP file whose name ends with create_states_table in the database/migrations/ folder and open it. Define the table structure in the up() method.

Adding foreign key –

Add a country_id field: $table->unsignedBigInteger('country_id');. The datatype must be UNSIGNED and the same as the datatype of the linked field in the parent table (the countries table id field has a biginteger datatype).

$table->foreign('country_id')
    ->references('id')->on('countries')->onDelete('cascade');

Values –
public function up()
{
Schema::create('states', function (Blueprint $table) {
$table->id();
$table->unsignedBigInteger('country_id');
$table->string('name');
$table->timestamps();
$table->foreign('country_id')
->references('id')->on('countries')->onDelete('cascade');
});
}
Cities table –

php artisan make:migration create_cities_table

Find the PHP file whose name ends with create_cities_table in the database/migrations/ folder and open it. Define the table structure in the up() method.

Adding foreign key –

Add a state_id field: $table->unsignedBigInteger('state_id');. The datatype must be UNSIGNED and the same as the datatype of the linked field in the parent table (the states table id field has a biginteger datatype).

$table->foreign('state_id')
    ->references('id')->on('states')->onDelete('cascade');

Values –
public function up()
{
Schema::create('cities', function (Blueprint $table) {
$table->id();
$table->unsignedBigInteger('state_id');
$table->string('name');
$table->foreign('state_id')
->references('id')->on('states')->onDelete('cascade');
$table->timestamps();
});
}
Run the migration –

php artisan migrate
Create Countries, States, and Cities models.
Countries Model –

php artisan make:model Countries

Open the app/Models/Countries.php file. Specify the fillable field – name – using the $fillable property.

Completed Code
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Countries extends Model
{
use HasFactory;
protected $fillable = [
'name'
];
}
States Model –

php artisan make:model States

Open the app/Models/States.php file. Specify the fillable fields – country_id and name – using the $fillable property.

Completed Code
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class States extends Model
{
use HasFactory;
protected $fillable = [
'country_id','name'
];
}
Cities Model –

php artisan make:model Cities

Open the app/Models/Cities.php file. Specify the fillable fields – state_id and name – using the $fillable property.

Completed Code
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Cities extends Model
{
use HasFactory;
protected $fillable = [
'state_id','name'
];
}
In the example, I added a single foreign key to a table but you can add more than one foreign key to a table by following the same steps.
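For instance, a sketch of a migration with two foreign keys (the orders table and its columns are hypothetical, not part of this tutorial):

public function up()
{
    Schema::create('orders', function (Blueprint $table) {
        $table->id();
        $table->unsignedBigInteger('country_id');
        $table->unsignedBigInteger('state_id');
        $table->string('reference');
        $table->timestamps();

        // Each foreign key is declared the same way as in the examples above
        $table->foreign('country_id')
            ->references('id')->on('countries')->onDelete('cascade');
        $table->foreign('state_id')
            ->references('id')->on('states')->onDelete('cascade');
    });
}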
You can learn more about it from here.
If you found this tutorial helpful then don't forget to share.
Original article source at: https://makitweb.com/
Among the many database constraints available to us, the UNIQUE key constraint ensures the uniqueness of data in a column or a set of columns. Read on to find out more about the UNIQUE key constraint and how to define it in Vertabelo.
In this article, we’ll focus on the UNIQUE key constraint. We’ll start with its basic definition and usage and gradually build up to more advanced options. Also, we’ll jump into Vertabelo and create an ER diagram that uses the UNIQUE key constraint. You’ll see that the UNIQUE key offers different options in various database engines.
Let’s get started.
The UNIQUE key is one of the database constraints that allow you to set rules for your data. It prevents the column from storing duplicate values.

You can learn about the OTHER DATABASE CONSTRAINTS IN THIS ARTICLE.

Let’s look at the various ways we can define the UNIQUE key constraint on a single column.
The UNIQUE key constraint can be defined with the column definitions, like this …
CREATE TABLE Persons (
Id int PRIMARY KEY,
FirstName varchar(50),
LastName varchar(50),
SSN varchar(9) UNIQUE);
… or after all the columns are defined, like this:
CREATE TABLE Persons (
Id int PRIMARY KEY,
FirstName varchar(50),
LastName varchar(50),
SSN varchar(9),
CONSTRAINT unique_ssn UNIQUE(SSN));
Sometimes, we decide to make a column unique after table creation. Here’s how we do it:
CREATE TABLE Persons (
Id int PRIMARY KEY,
FirstName varchar(50),
LastName varchar(50),
SSN varchar(9));
ALTER TABLE Persons ADD UNIQUE(SSN);
We use the ALTER TABLE statement to add or remove database constraints.

The constraint implemented on the SSN column is ready to be tested. First, we insert some data into our Persons table:
INSERT INTO Persons VALUES(1, 'David', 'Anderson', '123123123');
But what if we insert another data row with the same SSN value?
INSERT INTO Persons VALUES(2, 'Anne', 'Johns', '123123123');
This results in an error:
SQL Error [23505]: ERROR: duplicate key value violates unique constraint "unique_ssn"
Detail: Key (ssn)=(123123123) already exists.
The UNIQUE key constraint works as expected!
Let’s see how to define the UNIQUE key on a set of columns:
CREATE TABLE Persons (
Id int PRIMARY KEY,
FirstName varchar(50),
LastName varchar(50),
SSN varchar(9),
CONSTRAINT unique_name UNIQUE(FirstName, LastName));
Or, after table creation:
ALTER TABLE Persons ADD UNIQUE(FirstName, LastName);
Please note that in this case, only the combination of columns must be unique, but not each column individually. So, the following INSERT statements are all valid.
INSERT INTO Persons VALUES(1, 'David', 'Anderson', '123123123');
INSERT INTO Persons VALUES(2, 'Anne', 'Johns', '123123123');
INSERT INTO Persons VALUES(3, 'David', 'Johns', '123123123');
INSERT INTO Persons VALUES(4, 'Anne', 'Anderson', '123123123');
The UNIQUE key constraint implemented on a set of columns doesn’t let duplicate value groups sneak in, just like the one implemented on a single column doesn’t let duplicate values in.

A candidate key is a column or a set of columns that identifies each row uniquely. So, the UNIQUE key qualifies as a candidate key. Check out this ARTICLE TO LEARN MORE ABOUT DATABASE KEYS.
Have you heard about the foreign key constraint? It lets you link data stored in different tables. Get the BASICS ON FOREIGN KEYS IN THIS ARTICLE and then see how to IMPLEMENT FOREIGN KEYS IN A PHYSICAL MODEL HERE!
In this section, you’ll learn how to define the UNIQUE key in Vertabelo. Also, we’ll take a look at some of the more advanced options.

Let’s define the UNIQUE key constraint in Vertabelo. To do so, create a table and navigate to the Alternate (unique) keys section in the right-side panel. Now, we are ready to add the UNIQUE key.

Click on the Add key button and expand the data. Don’t forget to name your UNIQUE key constraint by filling in the Name field. To give it some context, use the Comment field. And, most importantly, add the column(s) that will implement this constraint.

That’s all! The UNIQUE key constraint is now ready.
You can also generate the SQL code, like this:
And here it is:
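For the Persons table used earlier, the generated code would look roughly like this (the exact statement and constraint names depend on the target database engine):

CREATE TABLE Persons (
    Id int NOT NULL,
    FirstName varchar(50),
    LastName varchar(50),
    SSN varchar(9),
    CONSTRAINT Persons_pk PRIMARY KEY (Id),
    CONSTRAINT unique_ssn UNIQUE (SSN)
);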
In the next section, you’ll learn about the additional options available in various database engines.
There are quite a few advanced constraint options that are database-specific. Let’s look at them one by one.
Available In | Option | Description
---|---|---
PostgreSQL | Deferrable | The available options are DEFERRABLE and NOT DEFERRABLE.
PostgreSQL | Initially deferred | Here, we have either INITIALLY DEFERRED or INITIALLY IMMEDIATE.
PostgreSQL | Index tablespace | This option lets us define the tablespace where the unique index (associated with the UNIQUE key) resides.
PostgreSQL | With | This clause is optional. It specifies storage parameters for a table or index.
MySQL | Using | This clause lets us define the type of the UNIQUE key index, e.g. BTREE or HASH.
MySQL | Key block size | This specifies the size of index key blocks in bytes. The database engine treats it as a hint.
Microsoft SQL Server | Is clustered | This indicates whether we deal with a clustered or non-clustered index.
Microsoft SQL Server | WITH index options | The WITH clause lets you specify index options, such as FILLFACTOR, PAD_INDEX, or ONLINE.
Microsoft SQL Server | ON clause | The ON clause lets you specify the partition scheme name, filegroup name, or default filegroup.
In Vertabelo, you can define your database when creating a new physical model. All the UNIQUE key constraint options are available in the right-side panel.
For PostgreSQL database engine:
For more info, see our article on DATABASE CONSTRAINTS IN POSTGRESQL.
For the MySQL database engine:
For further reading, we have an article about DATABASE CONSTRAINTS IN MYSQL.
For the Microsoft SQL Server database engine:
And here’s an ARTICLE about MICROSOFT SQL SERVER DATABASE CONSTRAINTS.
Database constraints are a crucial part of any database design. Make sure to check out our database-specific articles to learn more about the different constraints available.
The UNIQUE key constraint is a very straightforward concept. It simply prevents duplicate values in a column. Try out some examples on your own and you’ll see!

There are many more database constraints, such as primary and foreign keys, the CHECK constraint, the DEFAULT constraint, and the NOT NULL constraint. Continue to our article on DATABASE CONSTRAINTS: WHAT THEY ARE AND HOW TO DEFINE THEM IN VERTABELO to get a glimpse of them all.
Good luck!
Original article source at: https://www.vertabelo.com/
The PRIMARY KEY constraint gives every row in a table a unique identification.
A table can have only one primary key, which can be made up of one or more fields.
Syntax
CREATE TABLE <TABLE_NAME>(
<COLUMN_NAME> <DATATYPE> PRIMARY KEY,
<COLUMN_NAME> <DATATYPE>,
<COLUMN_NAME> <DATATYPE>
);
Example
CREATE TABLE Employee(
Emp_id Integer PRIMARY KEY,
Emp_Name Varchar(20),
Emp_Gender Varchar(10),
Dept_Id Integer
);
Syntax
ALTER TABLE <TABLE_NAME> ADD PRIMARY KEY (<COLUMN_NAME>);
Example
ALTER TABLE Employee ADD PRIMARY KEY (Emp_id);
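Since a primary key can be made up of more than one field, a composite key can be declared like this (the Emp_Project table is only an illustration, not part of the example above):

CREATE TABLE Emp_Project(
    Emp_id Integer,
    Project_id Integer,
    Role Varchar(20),
    PRIMARY KEY (Emp_id, Project_id)
);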
The PRIMARY KEY constraint uniquely identifies each row in a table.
Original article source at: https://www.c-sharpcorner.com/
Get Mastodon's green checkmark with PGP and Keyoxide.
Mastodon permits its users to self-verify. The easiest method to do this is through a verification link. For advanced verification, though, you can use the power of shared encrypted keys, which Mastodon can link to thanks to the open source project Keyoxide.
Pretty good privacy (PGP) is a standard for shared key encryption. All PGP keys come in pairs. There's a public key, for use by anyone in the world, and a secret key, for use by only you. Anyone with your public key can encode data for you, and once it's encrypted only your secret key (which only you have access to) can decode it again.
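For example, that asymmetry looks like this with GnuPG (the file names and the recipient address are placeholders):

# Anyone can encrypt a file to your public key:
gpg2 --encrypt --recipient tux@example.com message.txt

# Only the matching secret key can decrypt the result:
gpg2 --decrypt message.txt.gpg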
If you don't already have a key pair, the first step for encrypted verification is to generate one.
There are many ways to generate a PGP key pair, but I recommend the open source GnuPG suite.
On Linux, GnuPG is already installed.
On Windows, download and install GPG4Win, which includes the Kleopatra desktop application.
On macOS, download and install GPGTools.
If you already have a GPG key pair, you can skip this step. You do not need to create a unique key just for Mastodon.
To create a new key, you can use the Kleopatra application. Go to the File menu and select New key pair. In the Key Pair Creation Wizard that appears, click Create a personal OpenPGP key pair. Enter your name and a valid email address, and select the Protect the generated key with a passphrase option. Click Create to generate your key pair.
(Seth Kenlon, CC BY-SA 4.0)
Alternately, you can use the terminal:
$ gpg2 --full-generate-key
Follow the prompts until you have generated a key pair.
Now that you have a key, you must add special metadata to it. This step requires the use of the terminal (Powershell on Windows) but it's highly interactive and isn't very complex.
First, take a look at your secret key:
gpg2 --list-secret-keys
The output displays your GnuPG keyring, containing at least one secret key. Locate the one you want to use for Mastodon (this might be the only key, if you've just created your first one today.) In the output, there's a long alphanumeric string just above a line starting with uid. That long number is your key's fingerprint. Here's an example:
sec rsa4096 2022-11-17 [SC]
22420E443871CF4313B9E90D50C9169F563E50CF
uid [ultimate] Tux <tux@example.com>
ssb rsa4096 2022-11-17 [E]
This example key's fingerprint is 22420E443871CF4313B9E90D50C9169F563E50CF. Select your key's fingerprint with your mouse, and then right-click and copy it to your clipboard. Then copy it into a document somewhere, because you're going to need it a lot during this process.
Now you can add metadata to the key. Enter the GnuPG interface using the gpg2 --edit-key command along with the fingerprint:
gpg2 --edit-key 22420E443871CF4313B9E90D50C9169F563E50CF
At the GnuPG prompt, select the user ID (that's your name and email address) you want to use as your verification method. In this example, there's only one user ID (uid [ultimate] Tux <tux@example.com>), so that's user ID 1:
gpg> uid 1
Designate this user as the primary user of the key:
gpg> primary
For Keyoxide to recognize your Mastodon identity, you must add a special notation:
gpg> notation
The notation metadata, at least in this context, is data formatted to the Ariadne specification. The metadata starts with proof@ariadne.id= and is followed by the URL of your Mastodon profile page.
In a web browser, navigate to your Mastodon profile page and copy the URL. For me, in this example, the URL is https://example.com/@tux, so I enter the notation at the GnuPG prompt:
gpg> notation
Enter the notation: proof@ariadne.id=https://example.com/@tux
That's it. Type save to save and exit GnuPG:
gpg> save
Next, export your key. To do this in Kleopatra, select your key and click the Export button in the top toolbar.
Alternately, you can use the terminal. Reference your key by its fingerprint (I told you that you'd be using it a lot):
gpg2 --armor --export \
22420E443871CF4313B9E90D50C9169F563E50CF > pubkey.asc
Either way, you end up with a public key ending in .asc. It's always safe to distribute your public key. (You would never, of course, distribute your secret key because it is, as its very name implies, meant to be secret.)
Open your web browser and navigate to keys.openpgp.org.
On the keys.openpgp.org website, click the Upload link to upload your exported key. Do this even if you've had a GPG key for years and know all about the --send-key option. This is a unique step in the Keyoxide process, so don't skip it.
(Seth Kenlon, CC BY-SA 4.0)
After your key's been uploaded, click the Send confirmation email button next to your email address so you can confirm that you own the email your key claims it belongs to. It can take 15 minutes or so, but when you receive an email from Openpgp.org, click the confirmation link to verify your email.
Now that everything's set up, you can use Keyoxide as your verification link for Mastodon. Go to your Mastodon profile page and click the Edit profile link.
On the Edit profile page, scroll down to the Profile Metadata section. Type PGP (or anything you want) into the Label field. In the Content field, type https://keyoxide.org/hkp/ and then your key fingerprint. For me, in this example, the full URL is https://keyoxide.org/hkp/22420E443871CF4313B9E90D50C9169F563E50CF.
Click the Save button and then return to your profile page.
(Seth Kenlon, CC BY-SA 4.0)
You can click the Keyoxide link in your profile to see your Keyoxide "profile" page. This page is actually just a rendering of the GPG key you created. Keyoxide's job is to parse your key, and to be a valid destination when you need to link to it (from Mastodon, or any other online service.)
The old Twitter verification method was opaque and exclusive. Somebody somewhere claimed that somebody else somewhere else was really who they said they were. It proved nothing, unless you agree to accept that somewhere there's a reliable network of trust. Most people choose to believe that, because Twitter was a big corporation with lots at stake, and relatively few people (of the relative few who were granted it) complained about the accuracy of the Twitter blue checkmark.
Open source verification is different. It's available to everyone, and it proves as much as Twitter verification did. But you can do even better. When you use encrypted keys for verification, you grant yourself the ability to have your peers review your identity and digitally sign your PGP key as a testament that you are who you claim to be. It's a method of building a web of trust, so that when you link from Mastodon to your PGP key through Keyoxide, people can trust that you're really the owner of that digital key. It also means that several community members have met you in person and signed your key.
Help build human-centric trust online, and use PGP and Keyoxide to verify your identity. And if you see me at a tech conference, let's sign keys!
Original article source at: https://opensource.com/
Hi all, today we are going to do some automation. Using this script, we will delete all unused key pairs from our AWS account using boto3. By doing this, we can ensure that our AWS account remains secure to some extent.
First, we will write code to list all the key pairs that exist in our AWS account. Below is the code.
import boto3
session = boto3.Session()
# Creating empty lists for all, used, and unused key pairs
key_pairs = []
used_key_pairs = []
unused_key_pairs = []
# List the key pairs in the selected region
ec2 = session.client('ec2')
key_pairs = list(map(lambda i: i['KeyName'], ec2.describe_key_pairs()['KeyPairs']))
print(key_pairs)
Below is the code output. I am getting awslearning1, check-1, and check-2.
Below is a picture of my AWS account showing the same 3 key pairs.
Now we are going to list all used key pairs.
ec2 = session.client('ec2')
instance_groups = list(map(lambda i: i['Instances'], ec2.describe_instances()['Reservations']))
for group in instance_groups:
for i in group:
if i['KeyName'] not in used_key_pairs:
used_key_pairs.append(i['KeyName'])
print(used_key_pairs)
The picture below shows the output I am getting, as I only have one used key pair, associated with one of my EC2 instances.
Since we have listed all key pairs and the used key pairs, we can easily derive the unused key pairs. So, we will first find them and then delete them.
for key in key_pairs:
if key not in used_key_pairs:
unused_key_pairs.append(key)
print(unused_key_pairs)
Now we have the unused key pairs. Lastly, we will delete these unused key pairs.
for key in unused_key_pairs:
print(key)
ec2.delete_key_pair(KeyName=key)
The above picture is the code output. You can now see that both unused key pairs have been deleted from my AWS account.
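As an optional extension (not part of the original script), the same logic could be wrapped in a function with a dry-run switch, so you can review the list before anything is deleted. This sketch ignores API pagination for brevity:

import boto3

def delete_unused_key_pairs(region_name=None, dry_run=True):
    """List unused EC2 key pairs and optionally delete them."""
    ec2 = boto3.Session(region_name=region_name).client('ec2')

    all_keys = [k['KeyName'] for k in ec2.describe_key_pairs()['KeyPairs']]

    used_keys = set()
    for reservation in ec2.describe_instances()['Reservations']:
        for instance in reservation['Instances']:
            # Instances launched without a key pair have no 'KeyName' entry
            if 'KeyName' in instance:
                used_keys.add(instance['KeyName'])

    unused_keys = [k for k in all_keys if k not in used_keys]
    for key in unused_keys:
        if dry_run:
            print(f'Would delete: {key}')
        else:
            ec2.delete_key_pair(KeyName=key)
            print(f'Deleted: {key}')
    return unused_keys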
In this blog, we have seen how to list all key pairs, used key pairs, and finally unused key pairs in your AWS account, and then how to delete the unused ones, all using the boto3 package in Python. For security purposes, it is recommended not to keep unused key pairs in your AWS account. You can automate this as well. For more details about key pairs, please visit https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
Please visit https://blog.knoldus.com/tag/aws/ for more blogs on AWS
Original article source at: https://blog.knoldus.com/
In a database, a foreign key is a field that references another table.
They keep track of related records and which table they exist in. They also let you know what record they relate to, which means updating them is simple and quick.
In this tutorial, I show how you can add a foreign key while creating table using migration in CodeIgniter 4.
Open the .env file, which is available at the project root. NOTE – If the dot (.) is not added at the start, then rename the file to .env.
Specify database.default.hostname, database.default.database, database.default.username, database.default.password, and database.default.DBDriver.

database.default.hostname = 127.0.0.1
database.default.database = testdb
database.default.username = root
database.default.password =
database.default.DBDriver = MySQLi
I am creating 2 tables – departments and employees. I am adding a foreign key depart_id field on the employees table.

departments table –

php spark migrate:create create_departments_table

Navigate to the app/Database/Migrations/ folder. Find the PHP file whose name ends with CreateDepartmentsTable.php and open it. In the up() method, define the table structure. The down() method deletes the departments table and is called when undoing the migration.

<?php
namespace App\Database\Migrations;
use CodeIgniter\Database\Migration;
class CreateDepartmentsTable extends Migration
{
public function up() {
$this->forge->addField([
'id' => [
'type' => 'INT',
'constraint' => 5,
'unsigned' => true,
'auto_increment' => true,
],
'name' => [
'type' => 'VARCHAR',
'constraint' => '100',
]
]);
$this->forge->addKey('id', true);
$this->forge->createTable('departments');
}
public function down() {
$this->forge->dropTable('departments');
}
}
employees table –

php spark migrate:create create_employees_table

Navigate to the app/Database/Migrations/ folder and open the PHP file whose name ends with CreateEmployeesTable.php.

Add foreign key –

Add a depart_id field to define the foreign key, and use the $this->forge->addForeignKey() method to set the foreign key:

$this->forge->addForeignKey('depart_id', 'departments', 'id', 'CASCADE', 'CASCADE');

Delete the employees table in the down() method, which is called when undoing the migration.

<?php
namespace App\Database\Migrations;
use CodeIgniter\Database\Migration;
class CreateEmployeesTable extends Migration
{
public function up() {
$this->forge->addField([
'id' => [
'type' => 'INT',
'constraint' => 5,
'unsigned' => true,
'auto_increment' => true,
],
'depart_id' => [
'type' => 'INT',
'constraint' => 5,
'unsigned' => true,
],
'name' => [
'type' => 'VARCHAR',
'constraint' => '100',
]
]);
$this->forge->addKey('id', true);
$this->forge->addForeignKey('depart_id', 'departments', 'id', 'CASCADE', 'CASCADE');
$this->forge->createTable('employees');
}
public function down() {
$this->forge->dropTable('employees');
}
}
Run the migration –
php spark migrate
Create 2 models –

Departments Model –

php spark make:model Departments

Open the app/Models/Departments.php file. In the $allowedFields Array, specify the field names – ['name'] – that can be set during insert and update.

Completed Code
<?php
namespace App\Models;
use CodeIgniter\Model;
class Departments extends Model
{
protected $DBGroup = 'default';
protected $table = 'departments';
protected $primaryKey = 'id';
protected $useAutoIncrement = true;
protected $insertID = 0;
protected $returnType = 'array';
protected $useSoftDeletes = false;
protected $protectFields = true;
protected $allowedFields = ['name'];
// Dates
protected $useTimestamps = false;
protected $dateFormat = 'datetime';
protected $createdField = 'created_at';
protected $updatedField = 'updated_at';
protected $deletedField = 'deleted_at';
// Validation
protected $validationRules = [];
protected $validationMessages = [];
protected $skipValidation = false;
protected $cleanValidationRules = true;
// Callbacks
protected $allowCallbacks = true;
protected $beforeInsert = [];
protected $afterInsert = [];
protected $beforeUpdate = [];
protected $afterUpdate = [];
protected $beforeFind = [];
protected $afterFind = [];
protected $beforeDelete = [];
protected $afterDelete = [];
}
Employees Model –

php spark make:model Employees

Open the app/Models/Employees.php file. In the $allowedFields Array, specify the field names – ['depart_id','name'] – that can be set during insert and update.

Completed Code
<?php
namespace App\Models;
use CodeIgniter\Model;
class Employees extends Model
{
protected $DBGroup = 'default';
protected $table = 'employees';
protected $primaryKey = 'id';
protected $useAutoIncrement = true;
protected $insertID = 0;
protected $returnType = 'array';
protected $useSoftDeletes = false;
protected $protectFields = true;
protected $allowedFields = ['depart_id','name'];
// Dates
protected $useTimestamps = false;
protected $dateFormat = 'datetime';
protected $createdField = 'created_at';
protected $updatedField = 'updated_at';
protected $deletedField = 'deleted_at';
// Validation
protected $validationRules = [];
protected $validationMessages = [];
protected $skipValidation = false;
protected $cleanValidationRules = true;
// Callbacks
protected $allowCallbacks = true;
protected $beforeInsert = [];
protected $afterInsert = [];
protected $beforeUpdate = [];
protected $afterUpdate = [];
protected $beforeFind = [];
protected $afterFind = [];
protected $beforeDelete = [];
protected $afterDelete = [];
}
If you don’t want to make any changes on the child table when the delete/update action is performed on the parent table then remove CASCADE while defining the foreign key.
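For example, a sketch of the same call with no cascading actions, simply omitting the last two arguments (which are optional):

$this->forge->addForeignKey('depart_id', 'departments', 'id');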
If you found this tutorial helpful then don't forget to share.
Original article source at: https://makitweb.com/
These are replacements for BottomNavigationBar, IndexedStack, and TabController that use item keys instead of numeric indexes.
With traditional widgets, you write something like
const tabFavorites = 0;
const tabSearch = 1;
// ...
if (tabIndex == tabFavorites) {
// ...
}
If items in your bar can change, you get an error-prone conversion from indexes to meanings. Also, with a mature architecture you tend to use an enum for your tabs, and even with constant bar items you must write code to convert between enum and int.

This package provides widgets to be used with any type instead of int. In most cases you will use an enum.
Some advantages of enum over indexes:

String is also a good type to use with these widgets if you have dynamic or potentially unlimited tabs (like in a browser or a document editor) but still want meaningful keys instead of indexes.
This example uses an enum for selectable navigation items.

This example is runnable. Download the repository and open the example project, then run nav_stack.dart.
enum TabEnum { favorites, search }
class _MyScreenState extends State<MyScreen> {
TabEnum _tab = TabEnum.favorites;
@override
Widget build(BuildContext context) {
// This is a simplified example: IndexedStack and KeyedStack are only
// meaningful if they contain stateful widgets to preserve state
// between switches.
return Scaffold(
body: KeyedStack<TabEnum>(
itemKey: _tab,
children: const {
TabEnum.favorites: Center(
key: ValueKey('favorites_pane'),
child: Text('Favorites'),
),
TabEnum.search: Center(
key: ValueKey('search_pane'),
child: Text('Search'),
),
},
),
bottomNavigationBar: KeyedBottomNavigationBar<TabEnum>(
currentItemKey: _tab,
items: const {
TabEnum.favorites: BottomNavigationBarItem(
icon: Icon(Icons.star),
label: 'Favorites',
),
TabEnum.search: BottomNavigationBarItem(
icon: Icon(Icons.search),
label: 'Search',
),
},
onTap: (tab) => setState(() {
_tab = tab;
}),
),
);
}
}
KeyedBottomNavigationBar and KeyedStack support all the arguments of their traditional counterparts. The only difference is that the current keys are required and do not default to the first element.
This example uses an enum for tabs.

This example is runnable. Download the repository and open the example project, then run tabs.dart.
enum TabEnum { one, two, three }
class _MyScreenState extends State<MyScreen> with TickerProviderStateMixin {
late final _tabController = KeyedTabController<TabEnum>(
initialKey: TabEnum.three,
keys: [TabEnum.one, TabEnum.two, TabEnum.three],
vsync: this,
);
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('${_tabController.currentKey}'),
bottom: KeyedTabBar(
tabs: {
for (final key in _tabController.keys) key: Tab(text: '$key'),
},
controller: _tabController,
),
),
body: KeyedTabBarView(
children: {
for (final key in _tabController.keys)
key: Center(child: Text('$key content')),
},
controller: _tabController,
),
);
}
}
The ordinary TabBar and TabBarView must have exactly as many children as their controller is set to. This means that if you need to hide some tabs, three locations in your code must know about that:

- The controller's length.
- The TabBar widget with tab headers.
- The TabBarView widget with tab contents.

This is extremely error-prone.
With this package, KeyedTabBar and KeyedTabBarView have maps of children, so they can contain more widgets than the controller wants to show.
This means that you can unconditionally pass all children for all possible tabs to them, and the only location in your code that needs to know what tabs to show is the code that updates the controller.
KeyedTabController implements TabController and is immediately usable as one. If you ever need to get the tab index or select a tab by index, do it as you normally would.
With the ordinary TabController, you need a TickerProvider to create it. And this limits the usage: you must create the TabController in a widget. If you want your BLoC or other business logic code to be aware of tabs or control them, it may be tricky to pass the controller there.

This package provides UnanimatedKeyedTabController, which has the logic core of KeyedTabController but not its animation. You can create this controller anywhere and then add the animation in your widget.
Create it like this:
final unanimatedController = UnanimatedKeyedTabController<TabEnum>(
keys: [TabEnum.one, TabEnum.two, TabEnum.three],
initialKey: TabEnum.three,
);
Then create a KeyedTabController in your widget:
class _MyScreenState extends State<MyScreen> with TickerProviderStateMixin {
late final _tabController = KeyedTabController<TabEnum>.fromUnanimated(
controller: unanimatedController,
vsync: this,
);
// ...
This binds the two controllers. If you change the tab via UnanimatedKeyedTabController, then KeyedTabController gets updated, and the tab change is animated in the UI.
And if the user changes the tab by interacting with it, both controllers get updated.
The ordinary TabController has a fixed length and animationDuration. If you need to change them, you must create a new controller and replace it everywhere. KeyedTabController has these mutable.

You can change the tabs at any time by setting the KeyedTabController.keys property. If the currently selected tab also exists in the new set, its selection is preserved; otherwise, the first new tab gets selected.

This is possible because KeyedTabController does not extend but contains a TabController, so it can re-create its internal TabController with different parameters without disturbing its own listeners.
In Flutter's tab examples, you often see the widget's State created with SingleTickerProviderStateMixin. This only allows one TabController to be created in it. However, KeyedTabController re-creates its TabController if you change keys or animationDuration, so it will break if created with SingleTickerProviderStateMixin.

You should use TickerProviderStateMixin for your widgets instead. It allows many TabController objects to be created with it.
Flutter provides the DefaultTabController widget, which accepts the number of tabs, creates a TabController, and provides it to all tab-related widgets under it.

It has the following advantages:

1. You do not need to create a TabController and dispose it.
2. You can replace it with a new TabController, and all widgets under it are updated for the new number of tabs.

This is matched by DefaultKeyedTabController. Although tabs get mutable with this package, advantage #1 still stands.
This widget comes in two forms:
DefaultKeyedTabController.fromKeys(
keys: [TabEnum.one, TabEnum.two],
child: ...
),
Use this when you know the keys to show in your widget.
DefaultKeyedTabController.fromUnanimated(
controller: unanimatedController,
child: ...
),
Use this if you use UnanimatedKeyedTabController.
In Flutter, both the TabBar and TabBarView widgets can be created without the controller argument. In this case, they rely on a DefaultTabController widget present in the tree above them and break if it is missing.

This is error-prone because the controller argument may simply be forgotten, and this cannot be detected at compile time.

In this package, the controller argument to KeyedTabBar and KeyedTabBarView is required. To rely on the DefaultKeyedTabController, use the .withDefaultController static methods of those widgets instead of their default constructors.

There is still no check at compile time that the default controller is present in the tree, but at least you must explicitly declare that you want it and not just have forgotten to pass the controller.

All things equal, prefer DefaultKeyedTabController over manual KeyedTabController creation. This is because that widget will dispose the controller for you when it is not needed anymore.
This example shows:

- DefaultKeyedTabController widgets.
- UnanimatedKeyedTabController.
- An UnanimatedKeyedTabController that a KeyedTabController is linked to.

This example is runnable. Download the repository and open the example project, then run nav_stack_tabs.dart.
Although enum enhances type safety for tabs, it is still not absolute. In widgets, you may still forget to use all keys in the children map and only learn that at runtime.

You can make this compile-time safe by using the enum_map package, which generates maps that are guaranteed to have all keys at compile time (see that package's README for more info):
@unmodifiableEnumMap // CHANGED
enum TabEnum { one, two, three }
class _MyScreenState extends State<MyScreen> with TickerProviderStateMixin {
late final _tabController = KeyedTabController<TabEnum>(
initialKey: TabEnum.three,
keys: [TabEnum.one, TabEnum.two, TabEnum.three],
vsync: this,
);
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('${_tabController.currentKey}'),
bottom: KeyedTabBar(
controller: _tabController,
tabs: const UnmodifiableTabEnumMap( // CHANGED
one: Tab(text: 'One'), // CHANGED
two: Tab(text: 'Two'), // CHANGED
three: Tab(text: 'Three'), // CHANGED
), // CHANGED
),
),
body: KeyedTabBarView(
controller: _tabController,
children: const UnmodifiableTabEnumMap( // CHANGED
one: Center(child: Text('One content')), // CHANGED
two: Center(child: Text('Two content')), // CHANGED
three: Center(child: Text('Three content')), // CHANGED
), // CHANGED
),
);
}
}
Do you have any questions? Feel free to ask in the Telegram Support Chat.
Or even just join to say 'Hi!'. I like to hear from the users.
Run this command:
With Flutter:
$ flutter pub add keyed_collection_widgets
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
keyed_collection_widgets: ^0.4.1
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:keyed_collection_widgets/keyed_collection_widgets.dart';
example
All examples here are runnable. Download this repository to your computer and open it in your editor. In Android Studio, you can run examples like this:
This example shows the KeyedBottomNavigationBar and KeyedStack widgets:

This example shows KeyedTabController and the widgets KeyedTabBar and KeyedTabBarView:
This is a variation of tabs.dart that uses enum_map for compile-time type safety with tabs.

This is a variation of tabs.dart that uses UnanimatedKeyedTabController.
This is the ultimate advanced example that shows:

- DefaultKeyedTabController widgets.
- UnanimatedKeyedTabController.
- An UnanimatedKeyedTabController that a KeyedTabController is linked to.

Download Details:
Author: alexeyinkin
Source Code: https://github.com/alexeyinkin/flutter-keyed-collection-widgets
How you can search multidimensional arrays in PHP for entries that match specific values.
To handle searching a multidimensional array, you can use either the foreach statement or the array_search() function.
A PHP multidimensional array can be searched to see if it has a certain value.
Let’s see an example of performing the search. Suppose you have a multidimensional array with the following structure:
$users = [
[
"uid" => "111",
"name" => "Nathan",
"age" => 29,
],
[
"uid" => "254",
"name" => "Jessie",
"age" => 25,
],
[
"uid" => "305",
"name" => "Michael",
"age" => 30,
],
];
To search the array by its value, you can use the foreach statement. You need to loop over the array and see if one of the child arrays has a specific value.

For example, suppose you want to get the array with the uid value of 111:
$id = "111";
foreach ($users as $k => $v) {
if ($v["uid"] === $id) {
print_r($users[$k]);
}
}
Note that the comparison in the code above uses the triple-equals operator (===). This means the types of the compared values must be the same.
The code above will produce the following output:
Array
(
[uid] => 111
[name] => Nathan
[age] => 29
)
In PHP 5.5 and above, you can also use the array_search() function combined with the array_column() function to find an array that matches a condition.
See the example below:
$name = "Michael";
$key = array_search($name, array_column($users, "name"));
print_r($users[$key]);
The above code will produce the following output:
Array
(
[uid] => 305
[name] => Michael
[age] => 30
)
Let’s create a custom function from the search code so that you can perform a more dynamic search based on key and value.
This custom function accepts three parameters:

- The key you want to search
- The value you want the key to have
- The array you want to search

The function can be written as follows:
function find_array($k, $v, $array) {
$key = array_search($v, array_column($array, $k));
return $array[$key];
}
To handle a case where the specific value is not found, you need to add an if condition to the function. You can return false or null when the $key is not found:
function find_array($k, $v, $array) {
$key = array_search($v, array_column($array, $k));
// 👇 key is found, return the array
if ($key !== false) {
return $array[$key];
}
// 👇 key is not found, return false
return false;
}
Now you can use the find_array() function anytime you need to search a multidimensional array.
Here are some examples:
// 👇 value exists
$result = find_array("name", "Jessie", $users);
if ($result) {
print_r($result);
} else {
print "Array with that value is not found!";
}
// 👇 value doesn't exists
$result = find_array("uid", "1000", $users);
if ($result) {
print_r($result);
} else {
print "Array with that value is not found!";
}
The code above will produce the following output:
Array
(
[uid] => 254
[name] => Jessie
[age] => 25
)
Array with that value is not found!
Now you’ve learned how to search a multidimensional array in PHP.
When you need to find an array with specific values, you only need to call the find_array() function above.
Feel free to use the function in your PHP project. 👍
Original article source at: https://sebhastian.com/
Convenience functions for dictionaries with Symbol keys.

Create a Dict{Symbol,}:
@SymDict(a=1, b=2)
Dict{Symbol,Any}(:a=>1,:b=>2)
Capture local variables in a dictionary:
a = 1
b = 2
@SymDict(a,b)
Dict{Symbol,Any}(:a=>1,:b=>2)
a = 1
b = 2
@SymDict(a,b,c=3)
Dict{Symbol,Any}(:a=>1,:b=>2,:c=>3)
Capture varags key,value arguments in a dictionary:
function f(x; option="Option", args...)
@SymDict(x, option, args...)
end
f("X", foo="Foo", bar="Bar")
Dict{Symbol,Any}(:x=>"X",:option=>"Option",:foo=>"Foo",:bar=>"Bar")
Merge new entries into a dictionary:
d = @SymDict(a=1, b=2)
merge!(d, c=3, d=4)
Dict{Symbol,Any}(:a=>1,:b=>2,:c=>3,:d=>4)
Convert to/from Dict{AbstractString,}:
d = @SymDict(a=1, b=2)
d = stringdict(d)
Dict{String,Any}("a"=>1,"b"=>2)
d = symboldict(d)
Dict{Symbol,Any}(:a=>1,:b=>2)
Author: JuliaCloud
Source Code: https://github.com/JuliaCloud/SymDict.jl
License: View license
The goal of keys is to add hotkeys to Shiny applications using Mousetrap. With keys, you can:
Install the released version of keys from CRAN:
install.packages("keys")
Or install the development version from GitHub with:
# install.packages("devtools")
devtools::install_github("r4fun/keys")
You can also install keys with conda-forge. More information here: https://github.com/conda-forge/r-keys-feedstock
To use keys, start by adding a dependency to it using useKeys(). Then, you can add a keysInput to the UI:
library(shiny)
library(keys)
hotkeys <- c(
"1",
"command+shift+k",
"up up down down left right left right b a enter"
)
ui <- fluidPage(
useKeys(),
keysInput("keys", hotkeys)
)
server <- function(input, output, session) {
observeEvent(input$keys, {
print(input$keys)
})
}
shinyApp(ui, server)
You can add bindings after application launch using addKeys.
library(shiny)
library(keys)
ui <- fluidPage(
useKeys(),
actionButton("add", "Add keybinding")
)
server <- function(input, output, session) {
observeEvent(input$add, {
addKeys("keys", c("a", "b", "c"))
})
observeEvent(input$keys, {
print(input$keys)
})
}
shinyApp(ui, server)
Bindings can be removed after application launch using removeKeys.
library(shiny)
library(keys)
ui <- fluidPage(
useKeys(),
keysInput("keys", c("a", "b", "c")),
actionButton("rm", "Remove `a` keybinding")
)
server <- function(input, output, session) {
observeEvent(input$rm, {
removeKeys("a")
})
observeEvent(input$keys, {
print(input$keys)
})
}
shinyApp(ui, server)
For more information about what types of hotkeys you can use, please take a look at the mousetrap github repository.
All credit goes to Craig Campbell, who is the author of Mousetrap.
Author: r4fun
Source Code: https://github.com/r4fun/keys
License: View license
Driftwood is a tool that can enable you to lookup whether a private key is used for things like TLS or as a GitHub SSH key for a user.
Driftwood performs lookups with the computed public key, so the private key never leaves where you run the tool. Additionally it supports some basic password cracking for encrypted keys.
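For intuition, the public key really can be recomputed locally from a private key. For an OpenSSH-compatible RSA key, for example, something like this prints the corresponding public key without the private key ever leaving your machine:

$ ssh-keygen -y -f path/to/privatekey.pem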
Three easy ways to get started.
cat private.key | docker run --rm -i trufflesecurity/driftwood --pretty-json -
Download the binary from the releases page and run it.
go install github.com/trufflesecurity/driftwood@latest
Minimal usage is
$ driftwood path/to/privatekey.pem
Run with --help to see more options.
Packages under pkg/ are libraries that can be used for external consumption. Packages under pkg/exp/ are considered experimental and may have breaking changes.
Author: Trufflesecurity
Source Code: https://github.com/trufflesecurity/driftwood
License: Apache-2.0 license
Generate an RSA PEM key pair from pure JS
var keypair = require('keypair');
var pair = keypair();
console.log(pair);
outputs
$ node example.js
{ public: '-----BEGIN RSA PUBLIC KEY-----\r\nMIGJAoGBAM3CosR73CBNcJsLv5E90NsFt6qN1uziQ484gbOoule8leXHFbyIzPQRozgEpSpi\r\nwhr6d2/c0CfZHEJ3m5tV0klxfjfM7oqjRMURnH/rmBjcETQ7qzIISZQ/iptJ3p7Gi78X5ZMh\r\nLNtDkUFU9WaGdiEb+SnC39wjErmJSfmGb7i1AgMBAAE=\r\n-----END RSA PUBLIC KEY-----\n',
private: '-----BEGIN RSA PRIVATE KEY-----\r\nMIICXAIBAAKBgQDNwqLEe9wgTXCbC7+RPdDbBbeqjdbs4kOPOIGzqLpXvJXlxxW8iMz0EaM4\r\nBKUqYsIa+ndv3NAn2RxCd5ubVdJJcX43zO6Ko0TFEZx/65gY3BE0O6syCEmUP4qbSd6exou/\r\nF+WTISzbQ5FBVPVmhnYhG/kpwt/cIxK5iUn5hm+4tQIDAQABAoGBAI+8xiPoOrA+KMnG/T4j\r\nJsG6TsHQcDHvJi7o1IKC/hnIXha0atTX5AUkRRce95qSfvKFweXdJXSQ0JMGJyfuXgU6dI0T\r\ncseFRfewXAa/ssxAC+iUVR6KUMh1PE2wXLitfeI6JLvVtrBYswm2I7CtY0q8n5AGimHWVXJP\r\nLfGV7m0BAkEA+fqFt2LXbLtyg6wZyxMA/cnmt5Nt3U2dAu77MzFJvibANUNHE4HPLZxjGNXN\r\n+a6m0K6TD4kDdh5HfUYLWWRBYQJBANK3carmulBwqzcDBjsJ0YrIONBpCAsXxk8idXb8jL9a\r\nNIg15Wumm2enqqObahDHB5jnGOLmbasizvSVqypfM9UCQCQl8xIqy+YgURXzXCN+kwUgHinr\r\nutZms87Jyi+D8Br8NY0+Nlf+zHvXAomD2W5CsEK7C+8SLBr3k/TsnRWHJuECQHFE9RA2OP8W\r\noaLPuGCyFXaxzICThSRZYluVnWkZtxsBhW2W8z1b8PvWUE7kMy7TnkzeJS2LSnaNHoyxi7Ia\r\nPQUCQCwWU4U+v4lD7uYBw00Ga/xt+7+UqFPlPVdz1yyr4q24Zxaw0LgmuEvgU5dycq8N7Jxj\r\nTubX0MIRR+G9fmDBBl8=\r\n-----END RSA PRIVATE KEY-----\n' }
Performance greatly depends on the bit size of the generated private key. With 1024 bits you get a key in 0.5s-2s, with 2048 bits it takes 8s-20s, on the same machine. As this will block the event loop while generating the key, make sure that's ok or to spawn a child process or run it inside a webworker.
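For example, if generation speed matters more than key strength, you can pass a smaller size through the bits option (documented below); a quick sketch:

var keypair = require('keypair');

// 1024-bit keys generate much faster than the 2048-bit default
var pair = keypair({ bits: 1024 });
console.log(pair.public);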
@maxogden found out how to use this module to create entries for the authorized_keys file:
var keypair = require('keypair');
var forge = require('node-forge');
var pair = keypair();
var publicKey = forge.pki.publicKeyFromPem(pair.public);
var ssh = forge.ssh.publicKeyToOpenSSH(publicKey, 'user@domain.tld');
console.log(ssh);
Get an RSA PEM key pair.
opts can be:

- bits: the size for the private key in bits. Default: 2048.
- e: the public exponent to use. Default: 65537.

With npm do
$ npm install keypair
Thanks to digitalbazaar for their forge project; this library is merely a wrapper around some of forge's functions.
Author: juliangruber
Source Code: https://github.com/juliangruber/keypair
License: View license