NEAR Lake is an indexer built on top of the NEAR Indexer microframework that watches the network and stores all the events as JSON files on AWS S3.
We used to have NEAR Indexer for Explorer, which watched the network and stored all the events in a PostgreSQL database. PostgreSQL became the main bottleneck for us. After some brainstorming sessions and research, we decided to go with the SingleStore database.
Since NEAR Explorer is not the only project that uses the Indexer for Explorer database, we wanted to come up with a concept that would let even more projects benefit from the data from NEAR Protocol.
That's why we decided to store the data from the blockchain as JSON files in an AWS S3 bucket that can be used as a data source for different projects.
As part of the "Indexer for Explorer Remake" project, near-lake will act as a data writer. Another project will read from the AWS S3 bucket and store all the data in a SingleStore database. At some point this will replace the NEAR Indexer for Explorer PostgreSQL database and become the main source for NEAR Explorer.
The final setup consists of the following components:
Before you proceed, make sure you have the following software installed:
- Rust compiler of the version mentioned in the rust-toolchain file in the root of the nearcore project
- AWS credentials configured (a default profile)

For example, the files generated by the AWS CLI for a default profile configured with aws configure look similar to the following:

~/.aws/credentials
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Build NEAR Lake:
$ cargo build --release
To connect NEAR Lake to a specific chain you need the corresponding configs; you can generate them as follows:
$ ./target/release/near-lake --home ~/.near/testnet init --chain-id testnet --download-config --download-genesis
The above command downloads the official genesis config and generates the necessary configs. You can replace testnet in the command above with a different network ID (betanet, mainnet).
NB! Due to changes in nearcore config generation, not all the necessary fields are filled in the generated config file. While this issue is open (https://github.com/nearprotocol/nearcore/issues/3156), you need to download the config you want and replace the generated one manually.
Configs for the specified network are in the folder provided via --home. We need to ensure that NEAR Lake follows all the necessary shards, so the "tracked_shards" parameter in ~/.near/testnet/config.json needs to be configured properly. Currently, nearcore treats an empty value for "tracked_shards" as "do not track any shard" and any non-empty value as "track all shards". For example, to track all shards, you just add shard #0 to the list:
...
"tracked_shards": [0],
...
Commands to run NEAR Lake, passed after ./target/release/near-lake:
Command | Key/Subcommand | Required/Default | Responsible for
---|---|---|---
 | --home | Default ~/.near | Tells the node where to look for the necessary files: config.json, genesis.json, node_key.json, and the data folder
init | | | Tells the node to generate config files in --home-dir
 | --chain-id | Required: localnet, testnet, or mainnet | Defines the chain to generate config files for
 | --download-config | Optional | If provided, tells the node to download config.json from the public URL; you can also download it manually (testnet config.json, mainnet config.json)
 | --download-genesis | Optional | If provided, tells the node to download genesis.json from the public URL; you can also download it manually (testnet genesis.json, mainnet genesis.json)
 | TODO: Other neard keys | |
run | | | Runs the node
 | --bucket | Required | AWS S3 bucket name
 | --region | Required | AWS S3 bucket region
 | --endpoint | Optional | AWS S3-compatible API endpoint
 | --stream-while-syncing | Optional | If provided, the indexer streams blocks as they appear on the node instead of waiting for the node to be fully synced
 | --concurrency | Default 1 | Defines the concurrency for the process of saving block data to AWS S3
 | sync-from-latest | One of the sync- subcommands is required | Tells the node to start indexing from the latest block in the network
 | sync-from-interruption | One of the sync- subcommands is required | Tells the node to start indexing from the block it was interrupted on (on a first start it falls back to sync-from-latest)
 | sync-from-block --height N | One of the sync- subcommands is required | Tells the node to start indexing from the specified block height N (ensure your node data contains the block you want to start from)

For example:
$ ./target/release/near-lake --home ~/.near/testnet run --stream-while-syncing --concurrency 50 sync-from-latest
After the network is synced, you should see logs of every block height currently received by NEAR Lake.
Whenever you run NEAR Lake for any network except localnet, you'll need to sync with the network. This is required because it is the natural behavior of a nearcore node, and NEAR Lake is a wrapper around a regular nearcore node. In order to work and index data, your node must be synced with the network. This process can take a while, so we suggest downloading a fresh backup of the data folder and putting it in the --home-dir of your choice (by default it is ~/.near).
Running your NEAR Lake node on top of backup data reduces the syncing time because your node only has to download the data produced after the backup was cut, which takes a reasonable amount of time.
All the backups can be downloaded from the public S3 bucket, which contains the latest daily snapshots:
It's not strictly necessary, but in order to index everything in the network it is better to start from the genesis. A nearcore node runs in non-archival mode by default, which means the node keeps data only for the last 5 epochs. In order to index data from the genesis, we need to switch the node to archival mode.
To do that, update config.json located in your --home-dir (by default it is ~/.near). Find the following keys in the config and update them as follows:
{
...
"archive": true,
"tracked_shards": [0],
...
}
The syncing process in archival mode can take a lot of time, so it's better to download a backup provided by NEAR and put it in your data folder. After that, your node will only download the data produced after the backup was cut, which takes a reasonable amount of time.
All the backups can be downloaded from the public S3 bucket, which contains the latest daily snapshots:
See https://docs.near.org/docs/roles/integrator/exchange-integration#running-an-archival-node for reference
We write all the data to AWS S3 buckets:
- near-lake-data-testnet (eu-central-1 region) for testnet
- near-lake-data-mainnet (eu-central-1 region) for mainnet

In case you want to run your own near-lake instance and store data in some S3-compatible storage (MinIO or Localstack, for example), you can override the default S3 API endpoint by using the --endpoint option:
$ mkdir -p /data/near-lake-custom && minio server /data
$ ./target/release/near-lake --home ~/.near/testnet run --endpoint http://127.0.0.1:9000 --bucket near-lake-custom sync-from-latest
The data structure we use is the following:
<block_height>/
block.json
shard_0.json
shard_1.json
...
shard_N.json
- <block_height> is a 12-character-long u64 string with leading zeros (e.g. 000042839521). See this issue for the reasoning.
- block.json contains the JSON-serialized BlockView struct. NB! This struct might change in the future; we will announce it.
- shard_N.json, where N is a u64 starting from 0, represents the index number of the shard. To find out the expected number of shards in the block, look at .header.chunks_included in block.json.
All NEAR Lake AWS S3 buckets have the AWS "Requester Pays" feature enabled. This means that anyone with their own AWS credentials can list and read the buckets' content and will be charged for it by AWS; connections to the buckets have to be made with AWS credentials provided. See NEAR Lake Framework for a reference, or the sketch below.
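For illustration, here is a minimal sketch in Rust (not the NEAR Lake Framework itself) of reading one block under the layout above with the AWS SDK for Rust. It assumes aws-config, aws-sdk-s3, serde_json, and tokio as dependencies and a configured default AWS profile; in older SDK versions RequestPayer lives under aws_sdk_s3::model rather than aws_sdk_s3::types.

use aws_sdk_s3::{types::RequestPayer, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Credentials come from the default profile (~/.aws/credentials).
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    // Block heights are zero-padded to 12 characters, e.g. 42839521 -> "000042839521".
    let height: u64 = 42_839_521;
    let block_key = format!("{:012}/block.json", height);

    // Requester Pays buckets reject anonymous requests, so the request
    // must declare that the caller pays for the transfer.
    let object = client
        .get_object()
        .bucket("near-lake-data-testnet")
        .key(&block_key)
        .request_payer(RequestPayer::Requester)
        .send()
        .await?;
    let bytes = object.body.collect().await?.into_bytes();
    let block: serde_json::Value = serde_json::from_slice(&bytes)?;

    // The expected number of shard_N.json files is .header.chunks_included.
    let chunks_included = block["header"]["chunks_included"].as_u64().unwrap_or(0);
    for shard_id in 0..chunks_included {
        println!("{:012}/shard_{}.json", height, shard_id);
    }
    Ok(())
}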
Once we set up public access to the buckets, anyone will be able to build their own code to read the data.
For our own needs we are working on the NEAR Lake Framework, which provides a simple way to create an indexer on top of the data stored by NEAR Lake itself.
See the official announcement of the NEAR Lake Framework on the NEAR Gov Forum.
Download Details:
Author: near
Source Code: https://github.com/near/near-lake
License: GPL-3.0 license
Before we learn anything about ETL testing, it's important to learn about Business Intelligence and Data Warehousing. Let's get started.
What is BI?
Business Intelligence is the process of collecting raw data or business data and turning it into information that is useful and more meaningful. The raw data is the record of an organization's daily transactions, such as interactions with customers, administration of finance, and management of employees. This data is used for reporting, analysis, data mining, data quality and interpretation, and predictive analysis.
What is a Data Warehouse?
A data warehouse is a database designed for query and analysis rather than for transaction processing. A data warehouse is constructed by integrating data from multiple heterogeneous sources. It enables a company or organization to consolidate data from several sources and separates the analysis workload from the transaction workload. Data is turned into high-quality information to meet all enterprise reporting requirements for all levels of users.
What is ETL?
ETL stands for Extract-Transform-Load; it is the process by which data is loaded from the source system into the data warehouse. Data is extracted from an OLTP database, transformed to match the data warehouse schema, and loaded into the data warehouse database. Many data warehouses also incorporate data from non-OLTP systems such as text files, legacy systems, and spreadsheets.
Let's see how it works.
For example, there is a retail store which has different departments like sales, marketing, logistics, etc. Each of them handles customer information independently, and the way they store that data is quite different. The sales department stores it by the customer's name, while the marketing department stores it by customer ID.
Now if they want to check the history of the customer and want to know which products he/she bought as a result of different marketing campaigns, it would be very tedious.
The solution is to use a data warehouse to store information from different sources in a uniform structure using ETL. ETL can transform dissimilar data sets into a unified structure. BI tools can later be used to derive meaningful insights and reports from this data.
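To make the transform step concrete, here is a toy sketch in Rust (hypothetical record types, not tied to any real store's data) of mapping the two departments' formats into one unified schema:

#[derive(Debug)]
struct UnifiedCustomer {
    customer_id: Option<u64>, // marketing keys customers by ID
    name: String,             // sales keys customers by name
}

// Transform: map each department's native format into the unified structure.
fn from_sales(name: &str) -> UnifiedCustomer {
    UnifiedCustomer { customer_id: None, name: name.to_string() }
}

fn from_marketing(id: u64, name: &str) -> UnifiedCustomer {
    UnifiedCustomer { customer_id: Some(id), name: name.to_string() }
}

fn main() {
    // Extract: rows as each department stores them.
    let sales_row = "Jane Doe";
    let marketing_row = (42u64, "Jane Doe");

    let records = vec![
        from_sales(sales_row),
        from_marketing(marketing_row.0, marketing_row.1),
    ];

    // Load: a real pipeline would insert into the warehouse; here we just print.
    for record in &records {
        println!("{:?}", record);
    }
}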
The following diagram gives you the roadmap of the ETL process:
[Diagram: roadmap of the ETL process]
1. Extract
2. Transform
3. Load
What is ETL Testing?
ETL testing is done to ensure that the data that has been loaded from a source to the destination after business transformation is accurate. It also involves verifying the data at the various intermediate stages between source and destination. ETL stands for Extract-Transform-Load.
ETL Testing Process
Like other testing processes, ETL testing also goes through different phases. The phases of the ETL testing process are as follows.
ETL testing is performed in five stages:
1. Identifying data sources and requirements
2. Data acquisition
3. Implementing business logic and dimensional modelling
4. Building and populating data
5. Building reports
These are the basics of the ETL testing course. If you want to know more, please visit the Online IT Guru website.
We recently wrote an article debunking common myths about data lake architectures, data lake definitions, and data lake analytics. It is called "What is a Data Lake? Get A Leg Up Avoiding The Biggest Myths." In that article, we framed the current conversation about data lakes and how they fit within enterprise data strategies. This topic has historically been confusing and opaque for those wanting to get value from a data lake due to conflicting advice from consultants and vendors.
One area that can be particularly confusing is the perception that lakes are only for "big data." If you spend any time reading materials on lakes, you would think there is only one type and it would look like the Caspian Sea (it's a lake despite "sea" in the name). People describe data lakes as massive, all-encompassing entities, designed to hold all knowledge. The good news is that lakes are not just for "big data" and you have more opportunities than ever to have them be part of your data stack.
Just as they do in nature, lakes come in all different shapes and sizes. Each has a natural state, often reflecting ecosystems of data, just like those in nature reflect ecosystems of fish, birds, or other organisms.
Unfortunately, the “big data” angle gives the impression that lakes are only for “Caspian” scale data endeavors. This certainly makes the use of data lakes intimidating. As a result, describing things in such massive terms makes the concept of a lake inaccessible to those who can benefit from them on a smaller scale. Here are a few data lake examples;
We recently worked with a customer to create a "domain" type lake. This lake holds Adobe event data in AWS to support an enterprise Oracle Cloud environment. Why AWS to Oracle? It was an efficient and cost-effective data consumption pattern for the customer's Oracle BI environment, especially considering the agility and economics of using an AWS lake and Athena as the on-demand query service for lake content.
By design, all types of lakes should embrace an abstraction that minimizes risk and affords you greater flexibility. Also, they should be structured for easy consumption independent of their size. This ensures a lake used by a data scientist or business user or analyst all have an environment structured for easy data consumption.
Being a successful early adopter means taking a business value approach rather than a technology one. Here are a few tips as you think about how to get started:
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
NEAR is a decentralized application platform that is secure enough to manage high value assets like money or identity and performant enough to make them useful for everyday people, putting the power of the Open Web in their hands.
So what is NEAR (aka “the NEAR Platform”)? NEAR is a decentralized development platform built on top of the NEAR Protocol, which is a public, sharded, developer-friendly, proof-of-stake blockchain. Put another way, NEAR is like a public community-run cloud platform. That means it is a highly scalable, low cost platform for developers to create decentralized apps on top of. While it’s built on top of the NEAR Protocol blockchain, the NEAR Platform also contains a wide range of tooling from explorers to CLI tools to wallet apps to interoperability components which help developers build much more easily and the ecosystem to scale more widely.
Whereas most other “scalable” blockchains use approaches that centralize processing on high-end hardware to provide a temporary boost in throughput, NEAR Protocol’s approach allows the platform’s capacity to scale nearly linearly up to billions of transactions in a fully decentralized way.
NEAR is being built by the NEAR Collective, a global collection of people and organizations who are collaboratively building this massive open source project. Everyone in this Collective is fanatically focused on enabling usability improvements for both developers and their end-users so the next wave of apps can cross the chasm to a more general audience that has thus far been unable to consistently work with blockchain-based apps built on today’s platforms.
This Collective contains a number of extraordinary teams and includes championship-level competitive programmers who have built some of the only at-scale sharded database systems in the world. In a space dominated by academic research projects and failures to launch, NEAR has a team well accustomed to shipping. It is also backed by financial and community contributions by the best names in the crypto industry.
Why are we doing this? Because this is an opportunity to build the ground floor for a far better Internet which puts the user in control of their money, their data and their identity. With the potential for creating composable open-state services, it’s a chance to kick start the biggest wave of innovation — and business progress — since the Internet. This vision is the biggest sort — there are few opportunities in the world larger than the one we are tackling.
Let’s be clear: NEAR is not a side chain, an ERC20 token or a highly specialized task-specific blockchain… it is nothing less than a brand new and fundamentally reimagined layer 1 protocol designed to independently power the base of the emerging Open Web stack.
Ok… Let’s back up and assume you’re still getting up to speed on what the heck a public, sharded, developer-friendly, proof-of-stake blockchain really is.
From 10,000 feet…
The NEAR Collective is basically building the infrastructure for a new Internet which makes it harder for giant companies to steal your data and for bad guy countries to shut it down. People have been trying to figure this stuff out with similar technologies since 2008 but it’s been slow going.
You’ve heard of Bitcoin, that digital currency everyone thinks is only used by criminals and third-world dictators. Well, despite a really bad reputation and a lot of hiccups along the way, ten years later they still haven’t been able to kill Bitcoin so you know it’s built on very resilient technology. We’re basically trying to use that same kind of technology to power a brand new Internet which is just as hard to kill or screw up.
A few other projects, especially one called Ethereum, tried doing this a few years ago and got a really good start but ultimately got totally bogged down in the growing pains of early technology and they were too slow and expensive to get mainstream adoption. Now, a lot of really smart people are working on a way to speed this up and keep costs low while making sure this new Internet is just as hard to mess up as Bitcoin.
You can read more about this journey in the Evolution of the Open Web.
From 1,000 feet…
We aren’t building the only blockchain which is taking on the scaling and cost problems but NEAR has a team that’s all aces and we’re coming at this in a slightly different way.
To set the stage, we’re building a “base-layer blockchain”, meaning that it’s on the same level of the infrastructure as projects like Ethereum, EOS or Polkadot. That means everything else will be built on top of NEAR.
It’s a general-purpose platform that allows developers to create and deploy decentralized applications on top of it. A decent analogy is that it’s sort of like Amazon’s AWS platform, which is where most of the applications you know and love host their servers, except the NEAR platform isn’t actually run and controlled by one company, it’s run and controlled by thousands or even millions of people. You can call it a “community-operated cloud” but we usually prefer to simply call it a “decentralized application platform.”
It’s worth checking in quickly with how we got here because it will help you understand the context of the current ecosystem. A recent post goes into greater detail but here’s the quick version:
Bitcoin is the original “programmable money” or “digital gold”. It has been doing a pretty good job of fulfilling those functions but its use so far as a more general-purpose computing platform (like we’re building) is mostly an accident. Essentially, developers saw they could hack together some basic programs on top of the limited functionality that Bitcoin provided and they began using Bitcoin as the base for some of these new applications because it’s now highly trusted and secure.
Unfortunately, transactions are very costly and, because this was definitely NOT what the Bitcoin platform was meant for, the functionality is very limited. The platform there is slow (roughly 4 transactions per second), costly, and a massive waste of global energy.
Ethereum, back in 2014, tried to directly address this use case by creating a platform that was, from day 1, intended to use the same blockchain technology to build a global virtual computer which any application could be built on top of.
So, if Bitcoin was really just a basic calculator, Ethereum was a fancy TI-83 graphing calculator on which you could write some interesting, if basic, games. While it put lots of good ideas into place, it is also rather slow (14 transactions per second) and still quite costly for developers to use. They’ve tried to upgrade this but are now having difficulty pivoting because of how much technical work, value storage and community growth has already occurred in their legacy model.
“Layer 2” scaling solutions, including “state channels” and “side chains”, have popped up to try and improve the performance and cost of these slower (but rather secure) platforms by taking some of the work off the main chain and doing it elsewhere. They exist for both Bitcoin and Ethereum but haven’t achieved the adoption we hoped.
The first serious challenger blockchains launched in 2017–2018 with a wide variety of approaches to helping the scaling problem. They generally tried centralizing more of the hardware (eg EOS) but most of the approaches are still ultimately bounded by a fixed limit because every single one of the “nodes” that make up the network are repeating the exact same work, whether there are 21 of them or 1,000. So these approaches have been able to achieve throughputs of thousands (or more) transactions per second but often sacrifice decentralization to do so.
Next generation scalable blockchains like NEAR represent the new wave. In this case, NEAR breaks free from the idea that every single node which participates in the network has to run all of the code because that essentially creates one big wasteful bottleneck and slows down all of the other approaches.
To fix this, NEAR uses a technique called “sharding” from the database world (technical explanation) which splits the network so that much of the computation is actually being done in parallel. This allows the network’s capacity to scale up as the number of nodes in the network increases so there isn’t a theoretical limit on the network’s capacity.
Unlike a lot of other sharding approaches, which still require nodes to be run on increasingly complex hardware (reducing the ability of more people to participate in the network), NEAR’s technique allows nodes to stay small enough to run on simple cloud-hosted instances.
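To make the parallelism concrete, here is a toy sketch in Rust (a deliberately simplified illustration, not NEAR's actual shard-assignment or consensus logic): accounts are deterministically mapped to shards, and each shard processes its own transactions concurrently, so adding shards adds capacity.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::thread;

// Deterministically assign an account to one of `num_shards` shards.
fn shard_for(account_id: &str, num_shards: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    account_id.hash(&mut hasher);
    hasher.finish() % num_shards
}

fn main() {
    let num_shards: u64 = 4;
    let txs = vec!["alice.near", "bob.near", "carol.near", "dave.near"];

    // Partition transactions by shard.
    let mut buckets: Vec<Vec<&str>> = vec![Vec::new(); num_shards as usize];
    for account in txs {
        buckets[shard_for(account, num_shards) as usize].push(account);
    }

    // Each shard works on its own bucket in parallel; no node has to
    // re-execute every other shard's work.
    let handles: Vec<_> = buckets
        .into_iter()
        .enumerate()
        .map(|(shard, bucket)| {
            thread::spawn(move || {
                for account in bucket {
                    println!("shard {shard} processing tx from {account}");
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}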
But it’s not all about scaling. In fact, for scaling to even be a benefit, developers need to be able to create apps that people actually use and current blockchains make this difficult on both the developer and the end-user. Many of these issues have to be addressed by setting up the protocol properly from the beginning and few projects who focus on scalability have taken this properly into account.
For example, many scalability solutions require developers to build and provision their own blockchain (or "app chain"), which is a massive amount of work and maintenance, and it seems as unnecessary for most teams as building an on-premise server farm would be for most traditional web developers. By comparison, NEAR allows developers to just deploy their app without thinking too much about how the infrastructure around it operates or scales, which is more like modern clouds such as Amazon AWS, GCP, or Azure, which drive almost all of today's web applications.
There are a few kinds of projects that sort of fit into the landscape but won’t be covered much further here:
So, as we said before, the collection of teams that make up the NEAR Collective is building the NEAR Platform, which is built on top of the NEAR Protocol, which is a sharded, developer-friendly, proof-of-stake blockchain that developers can build decentralized apps on top of.
Let’s dig into what the NEAR Protocol actually does…
As mentioned above, NEAR is similar in principle to the "cloud-based" infrastructure that developers currently build applications on top of, except the cloud is no longer controlled by a single company running a giant data center. That data center is actually made up of all the people around the world who are operating nodes of that decentralized network. Instead of a "company-operated cloud" it is a "community-operated cloud".
Here are a couple of perspectives on why this decentralization is useful:
Let’s dig a bit deeper into how this network is run.
So who actually runs all those individual “nodes” that make up this decentralized network? Anyone who feels properly incentivized to do so! The incentives for this permissionless network are powered by the NEAR token.
The NEAR token is how people who use applications on the network pay to submit transactions to the nodes who actually run the network. The token is thus a utility — if you hold it, you can use applications hosted on the network.
This is a little different from today’s web, where applications are owned by single developers or corporations who pay their cloud hosting bills on behalf of their users. Some aspects of the NEAR protocol allow developers to do this as well but, for simplicity, we will assume users generally pay directly for their use of the network.
Because NEAR is a permissionless protocol, anyone can run one of the nodes which operate the network by validating transactions which have been submitted to the network. But running infrastructure, even simple code that you can run from a laptop, costs some money and time, so few people would do it for free. Thus, in exchange for performing that service, you earn a portion of the transaction fees paid by users during each block where you are validating transactions.
How does the network make sure you’re actually running the code you’re supposed to and not just freeloading and earning income? You are required to “stake” your tokens (which basically means putting them in escrow) as a gesture of good faith. If you perform any malicious behavior (like trying to hack the system or mess with other people’s transactions), you will lose your stake. The system figures this out by coming to “consensus” among the nodes in each period and determining how the code should have been run so it’s easy to identify who did so improperly.
Luckily, you really don’t have to think about this stuff because, as long as you download and start up the standard node program from a reputable source, it all happens behind the scenes by the application code you downloaded and so you aren’t likely to lose your stake.
Good question. For starters, NEAR is not a company!
One of the biggest stumbling blocks people seem to have with blockchains is figuring out how they work as if they were traditional businesses. And that’s totally valid since the 2017 bubble saw all manner of convoluted monetization schemes which actually didn’t make any sense and usually didn’t require a token anyway.
The key to understanding this is realizing that the entire economics supporting the NEAR network are embedded at the protocol level and allow anyone to participate in the protocol by running a validating node themselves. Users of the network pay costs to use this network and the providers of the network capacity receive rewards from this activity. There is no shadowy company behind it all which is secretly trying to sell subscriptions or anything like that. The protocol has self-sustaining economics.
The people who build the early technology are rewarded with participation in the initial allocation of tokens and funded by fiat contributions from early financial backers.
NEAR Collective is the globally distributed group of teams, made up of many individual organizations and contributors, who self-organize in order to bring this technology to life. It is not a business or anything nearly so formal. Think of it instead like the groups of people who run large open-source software projects.
One of the Collective’s projects is writing the initial code and the reference implementation for the open source NEAR network, sort of like building the rocket boosters on the space shuttle. Their job is to do the necessary R&D work to help the blockchain get into orbit. The code for that chain is open source so literally anyone can contribute to or run it.
It’s important to stress that networks like NEAR are designed to be totally decentralized. This means they ultimately operate completely on their own and can’t actually be censored, shut down or otherwise messed with by third parties… not even the teams who initially built them! So, while members of this collective are here to get the ball rolling on building the reference implementation, they quickly become nonessential to the operation of the network once it has started running. In fact, once it’s launch-ready, anyone could modify and run the NEAR Protocol code to start up their own blockchain because it’s all open source and any changes would have to be democratically accepted by the independent validators who run it.
That said, the core teams can (and hopefully will) stick around to keep updating the system and performing bug fixes. After the network has been launched, any ongoing development work will hopefully be supported by the governance of the network through grant funding or other means.
One Collective member worth noting is the NEAR Foundation, a nonprofit entity whose entire goal is to build a vibrant and active long-term ecosystem around the blockchain and which has commissioned the development of the reference implementation of that chain. The Foundation helps coordinate some of the early development work and governance activities.
The NEAR blockchain is only one of the NEAR Collective’s projects, so there are plenty of other areas where we can help the ecosystem going forward.
In the early phases of this market, projects focused on strutting a bunch of vanity metrics in order to get as many retail investors on board for a big ICO (initial coin offering). That meant trying to have the biggest Telegram community, the most big-name advisors, the best-looking proof-of-concept projects from big businesses and so on.
The ICO boom finished in 2018 so, luckily, we’re able to focus more on what actually matters. In this case, it’s all about community and adoption.
We have extraordinary technical teams who are working to make sure the technology is implemented properly but everything they build will be open source. That means anyone in the world could, theoretically, just copy the code and run their own NEAR blockchain. Now, it’s not quite as easy as that and our teams’ expertise puts this version ahead should anyone try that, but ultimately technology is merely a time-based advantage. The real traction comes from building a great community.
The ecosystem is most useful when there are lots of applications and lots of users who want to use those applications. Because we see a world where the major development paradigm is to build in a decentralized fashion, there is lots of ground to cover between here and there. So the key metric we’re shooting for is adoption and usage of the platform.
A lot of projects are focusing on targeting developers to build apps on top of their platforms. That’s obviously important since this is a highly technical field but it’s not the only thing that matters. Specifically, if you look at the history of major development platforms, the ones that are truly successful need to support real, at-scale businesses not just a bunch of side projects.
So building an ecosystem that has the kind of functional breadth and technical depth to create at-scale businesses for the long term is everything to us. That requires significant efforts on the dimensions of community building, education and overall user experience. And plenty of cool technical tools as well, of course!
Luckily, if we succeed, we have the opportunity to drive the greatest wealth creation since the original Internet by helping developers and entrepreneurs everywhere access new markets and build new kinds of businesses on top of the NEAR Protocol. The upside of what we’re doing is super exciting!
You can take this from two perspectives — what makes the NEAR Protocol likely to actually get to market and what makes it better to developers/end-users once we get it there? Let’s explore both.
Execution Matters
What says the NEAR team is going to out-execute the competition and get into the market with the right technology on the right timeline?
For starters… NEAR has already launched its MainNet genesis! And it handed off operation to the community as planned :). But also…
If you hop over to the NEAR project website, you'll see the best team in the industry and, importantly, a team that has shipped sharded systems before in production settings. Having that kind of experienced team behind it already sets the NEAR Protocol apart from almost every other network out there.
This technology is complicated! Technically speaking, sharding is a scaling approach that’s being seriously attempted by roughly a dozen chains, including the market-leading Ethereum… but they’ve taken years to put together their proposal and expect to take more years to implement it. So it’s non-trivial to get this started.
To round things out, the NEAR team has the support of the best financial and non-financial backers in the space and a cadre of informal advisors who span everything from economics and mechanism design to cryptography and blockchain design.
Killer Features for Everyone
This market — and the technology — is still in the “roll up your sleeves, it’s going to be messy” phase. Every major stakeholder group has challenges:
All of these problems require a fanatical focus on user experience and user needs as the driving force for creating technology instead of starting by driving technology forward and then seeing what happens. We’ve been developing the NEAR Protocol and many supporting tools with a focus on these experiences while stepping forward to engage the broader community on solving these issues. It’s not going to be fast or easy, but it is the priority for us.
We describe the NEAR platform as “developer-friendly” or “usable” because it implements approaches at the protocol level which address each of those problems:
One of the hottest questions is what the use cases are for blockchains precisely because it hasn’t been definitively answered. Sure, it’s being used in everything from supply chain tracking to cross border payments, but most of these cases are still early enough that they haven’t achieved mainstream adoption. We’re still in a phase of trying new things to see what new primitives they unlock and how they extend.
In our case, we’re talking to hundreds of existing businesses and opportunistic entrepreneurs about how a truly scalable, usable blockchain can unlock new business opportunities. I won’t spoil the surprises by getting into that here but suffice it to say, there are a few burning areas where people are begging us to solve their problems and existing blockchains have been too slow or expensive or painful for users. A couple of areas to start include gaming and decentralized finance but that’s just the beginning. In many cases, it starts by taking the things people already do on less performant chains — like Open Finance on Ethereum — and scaling them out to more potential users and uses by providing a more versatile and performant chain.
In the long run, just like with the original Internet revolution, the earliest use cases are likely just going to bridge the gap until people invent entirely new business models. So, while we’re excited about addressing things in the short term, we’re especially excited about building a toolkit that future entrepreneurs can combine with their creativity to change the world in ways we can’t even imagine now.
This is where new ideas around composability of microservices and open/transferrable state (a paradigm we call the Open Web) are really exciting. How different will it be to create businesses which sit atop a portfolio of open services rather than gated APIs and centralized platforms? These are new development paradigms that we’ve only scratched the surface of so far.
If you’ve gotten this far, you’re probably curious to learn more. There are a lot of resources out there but I’ll recommend a reasonable path through them. Blockchain is a highly technical field that covers just about every discipline under the sun so it can feel like drinking from a firehose without proper guidance.
If you’re pretty new to blockchain, start here to get a strong overview of the ecosystem and its development:
After that, it’s a good idea to start diving a little deeper into the functional disciplines. The next layer deeper is to explore the NEAR Protocol White Paper, which is a human-readable deconstruction of every aspect of the project. You can find it along with the other technical papers at:
No matter what role you have, you’ll need to get up to speed technically because the lingo is going to come up repeatedly. That doesn’t necessarily mean you have to understand all the computer science underneath it but you should know the basics.
Generally:
NEAR specific:
We’re focusing on building a great developer and end-user experience. And here are a few resources that will be helpful from a design perspective.
Podcasts
Podcasts are one of the best ways to educate yourself because the back catalogs can be very informative. I recommend taking up a time-consuming endurance sport where you can listen to them for hours on end and then consuming the following:
Newsletters
There aren’t too many great newsletters and you should avoid anything that is primarily price or trading focused. That said, check out:
Writing
The NEAR token (aka $NEAR) is a utility token which powers the NEAR Protocol blockchain and all applications which use it. NEAR Protocol is a fully operational, open source blockchain designed to solve both the scalability and usability problems faced by other protocols. It is designed from the ground up to give builders the best tools to build scalable applications that real people can actually use.
As described in the Economics section below, $NEAR uses a block-rewards-with-burn model that, at high rates of usage, means token supply will be reduced over time.
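As a purely hypothetical arithmetic illustration of why high usage can shrink supply under a rewards-with-burn model (made-up numbers, not NEAR's actual protocol parameters):

fn main() {
    // Hypothetical per-epoch figures, NOT actual NEAR protocol parameters.
    let minted_rewards: f64 = 1_000.0; // new tokens issued to validators
    let fees_burned: f64 = 1_250.0;    // transaction fees destroyed at high usage
    let net_change = minted_rewards - fees_burned;
    // A negative net change means total token supply decreases over time.
    println!("net supply change per epoch: {net_change}");
}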
Validators & Delegators
The nodes which run the network are compensated by inflationary rewards. Token holders of any size can stake by lending their tokens to a validating pool and earn a return by helping to secure the network this way.
You can earn $NEAR by taking part in development bounties, by running a community which helps people build on NEAR, by winning a NEAR hackathon or otherwise being an active part of the community. If you are able to attract other people to lend you tokens for staking, you can also earn $NEAR by running a validator.
$NEAR is available on several major exchanges (see below), where you can sign up and buy the token using either fiat currency or crypto.
You don’t have to have a NEAR account to receive NEAR tokens! The “NEAR Drop” approach allows your friend to pre-fund a new account and send you a hot link to retrieve the tokens.
NEAR has been listed on a number of crypto exchanges; unlike some other major cryptocurrencies, it cannot be directly purchased with fiat money. However, you can still easily buy this coin by first buying Bitcoin, ETH, or USDT from any large exchange and then transferring it to an exchange that trades this coin. In this guide we will walk you through the steps to buy NEAR in detail.
You will have to first buy one of the major cryptocurrencies, usually either Bitcoin (BTC), Ethereum (ETH), Tether (USDT)…
We will use Binance Exchange here as it is one of the largest crypto exchanges that accept fiat deposits.
Binance is a popular cryptocurrency exchange which started in China but then moved its headquarters to the crypto-friendly island of Malta in the EU. Binance is popular for its crypto-to-crypto exchange services. Binance exploded onto the scene in the mania of 2017 and has since gone on to become the top crypto exchange in the world.
Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), or Tether (USDT).
Step by Step Guide : What is Binance | How to Create an account on Binance (Updated 2021)
After the deposit is confirmed, you may then purchase NEAR on the following exchanges: Binance, Huobi Global, OKEx, BitZ, and Upbit.
Apart from the exchange(s) above, there are a few popular crypto exchanges with decent daily trading volumes and a huge user base. This will ensure you can sell your coins at any time, and the fees will usually be lower. It is suggested that you also register on these exchanges, since once NEAR gets listed there it will attract a large amount of trading volume from their users, which means you will have some great trading opportunities!
Top exchanges for token trading:
☞ https://www.bittrex.com
☞ https://www.poloniex.com
☞ https://www.bitfinex.com
☞ https://www.huobi.com
☞ https://www.mxc.ai
☞ https://www.probit.com
☞ https://www.gate.io
☞ https://www.coinbase.com
Find more information about NEAR:
☞ Website
☞ Whitepaper
☞ Source Code
☞ Social Channel
☞ Message Board
☞ Coinmarketcap
🔺DISCLAIMER: Trading Cryptocurrency is VERY risky. Make sure that you understand these risks if you are a beginner. The Information in the post is my OPINION and not financial advice. You are responsible for what you do with your funds
Learn about Cryptocurrency in this article ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
I hope this post will help you. If you liked it, please share it with others. Thank you!