The Safe Network is a fully autonomous data and communications network. For a general introduction and more information about its features and the problems it intends to solve, please see The Safe Network Primer.
The network is implemented in Rust. This repository is a workspace consisting of 4 crates. Currently, the network can be used via the 3 published crates; to see how, a good place to start is the README for the CLI. You can run your own local network or perhaps participate in a remote network.
Some of the `safe_network` tests require a live network to test against.

You should first ensure that your local machine does not have any artefacts from prior runs. E.g., on unix:

```
killall sn_node || true && rm -rf ~/.safe/node/local-test-network || true
```

This will kill any running `sn_node` instances and remove any data stored by prior runs.
You can then run a local testnet with:

```
NODE_COUNT=15 RUST_LOG=safe_network=trace cargo run --release --bin testnet
```
`NODE_COUNT` defaults to 33 nodes, which will give you a split section; 15 nodes as above will give only one section. How many nodes you run will depend on your hardware, but 15 can be considered the minimum for a viable section.
Once you have your network running, you can simply run:

```
cargo test --release --features=test-utils
```

The `test-utils` feature is needed to enable some of the test setup for the clients. This will run all tests in `safe_network`.
Note: if you're running from the root directory, either `cd sn` before running tests or include `-p safe_network` in the cargo command to target only that package. Otherwise you'll be running tests from all crates, including `sn_api` and `sn_cli`. E.g.:
```
cargo test --release -p safe_network --features=test-utils
```
In general it can be useful to scope your test runs. E.g.:

```
cargo test --release --features=test-utils client_api
```

will run only the client tests. Or perhaps you want to skip the proptests, as they can take quite a long time (note that `--skip` is an argument to the test binary, so it must come after the `--` separator):

```
cargo test --release --features=test-utils client_api -- --skip proptest
```
Safe is being developed iteratively and has frequent releases. You can use these to experiment with new features when they become available. It's also possible to participate in community 'testnets' hosted by members of The Safe Network Forum.
Want to contribute? Great :tada:
There are many ways to give back to the project, whether it be writing new code, fixing bugs, or just reporting errors. All forms of contributions are encouraged!
For instructions on how to contribute, see our guide to contributing.
If you accumulate data on which you base your decision-making as an organization, you most likely need to think about your data architecture and consider best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes for precise outcomes can all be traced back to an organization's capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of a data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, the challenge is finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats coming from a growing number of sources. Organizations then need the analytical capabilities and flexibility to turn this data into insights that meet their specific business objectives.

This Refcard dives into how a data lake helps tackle these challenges at both ends: from its enhanced architecture, designed for efficient data ingestion, storage, and management, to its advanced analytics functionality and performance flexibility. You'll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
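To make the contrast with a warehouse concrete, here is a minimal, illustrative Python sketch of the "schema-on-read" idea that distinguishes a data lake: raw records of mixed shape are landed as-is, and a schema is only projected at query time. All names here (`lake/events`, the record fields) are hypothetical, not from any particular product.

```python
import json
from pathlib import Path

# Hypothetical landing zone: a data lake stores raw records as-is,
# regardless of shape (schema-on-read).
lake_dir = Path("lake/events")
lake_dir.mkdir(parents=True, exist_ok=True)

raw_events = [
    {"source": "iot", "device_id": "t-17", "temp_c": 21.4},
    {"source": "social", "user": "alice", "text": "hello"},  # different shape
]
for i, event in enumerate(raw_events):
    (lake_dir / f"event_{i}.json").write_text(json.dumps(event))

# A schema is applied only when reading, per analytical question:
# here we project just the fields an IoT analysis needs.
def read_iot_temps(lake: Path):
    for f in sorted(lake.glob("*.json")):
        record = json.loads(f.read_text())
        if record.get("source") == "iot":
            yield record["device_id"], record["temp_c"]

print(list(read_iot_temps(lake_dir)))  # [('t-17', 21.4)]
```

A data warehouse, by contrast, would reject the second record at load time unless it matched a predeclared schema (schema-on-write).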
The COVID-19 pandemic disrupted supply chains and brought economies around the world to a standstill. Businesses now need access to accurate, timely data more than ever before, and the demand for data analytics is skyrocketing as they try to navigate an uncertain future. However, the sudden surge in demand comes with its own set of challenges.
Here is how the COVID-19 pandemic is affecting the data industry and how enterprises can prepare for the data challenges to come in 2021 and beyond.
CVDC 2020, the Computer Vision conference of the year, is scheduled for the 13th and 14th of August to bring together leading experts on Computer Vision from around the world. Organised by the Association of Data Scientists (ADaSCi), the premier global professional body of data science and machine learning professionals, it is a first-of-its-kind virtual conference on Computer Vision.
The second day of the conference started with an informative talk on the current pandemic situation. The second session, "Application of Data Science Algorithms on 3D Imagery Data", was presented by Ramana M, Principal Data Scientist in Analytics at Cyient Ltd.

Ramana talked about data, one of the most important assets of organisations, and how the digital world is moving from 2D data to 3D data for highly accurate information along with realistic user experiences.

The agenda of the talk included an introduction to 3D data, its applications and case studies, 3D data alignment, 3D data for object detection, and two general case studies.
This talk discussed the recent advances in 3D data processing, feature extraction methods, object type detection, object segmentation, and object measurements in different body cross-sections. It also covered 3D imagery concepts, various algorithms for faster data processing in a GPU environment, and the application of deep learning techniques for object detection and segmentation.
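As a toy illustration of the kind of 3D measurement described above (not code from the talk), the sketch below takes a small point cloud and computes an axis-aligned bounding box plus a horizontal cross-section slice; the `point_cloud` values and slice parameters are made up.

```python
# Toy point cloud: (x, y, z) samples, e.g. from a LiDAR or UAV scan.
point_cloud = [
    (0.1, 0.2, 0.0), (1.9, 0.3, 0.1),
    (0.4, 1.7, 1.2), (1.2, 1.1, 2.0),
]

def bounding_box(points):
    """Axis-aligned bounding box: a basic 3D object measurement."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def cross_section(points, z_level, tolerance=0.2):
    """Points lying near a horizontal slice, as in cross-section measurements."""
    return [p for p in points if abs(p[2] - z_level) <= tolerance]

box_min, box_max = bounding_box(point_cloud)
print("bbox min:", box_min, "max:", box_max)
print("slice at z=0.1:", cross_section(point_cloud, 0.1))
```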
Vendors of data integration solutions typically advocate that one approach, either ETL or ELT, is better than the other. In reality, both ETL (extract, transform, load) and ELT (extract, load, transform) serve indispensable roles in the data integration space.

Because ETL and ELT present different strengths and weaknesses, many organizations are using a hybrid "ETLT" approach to get the best of both worlds. In this guide, we'll help you understand the "why, what, and how" of ETLT, so you can determine if it's right for your use case.
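As a rough sketch of the distinction (with made-up extracted rows and an in-memory SQLite database standing in for the warehouse), ETL transforms data in the pipeline before loading, while ELT loads raw data first and pushes the transformation into the target system:

```python
import sqlite3

# Hypothetical extracted rows: (customer, amount in cents as raw text)
extracted = [("alice", "1250"), ("bob", "3300")]

# --- ETL: transform in the pipeline, then load the clean result ---
etl_db = sqlite3.connect(":memory:")
etl_db.execute("CREATE TABLE sales (customer TEXT, amount_usd REAL)")
transformed = [(c, int(cents) / 100) for c, cents in extracted]   # transform
etl_db.executemany("INSERT INTO sales VALUES (?, ?)", transformed)  # load

# --- ELT: load raw data first, transform inside the target system ---
elt_db = sqlite3.connect(":memory:")
elt_db.execute("CREATE TABLE raw_sales (customer TEXT, amount_cents TEXT)")
elt_db.executemany("INSERT INTO raw_sales VALUES (?, ?)", extracted)  # load
elt_db.execute(
    "CREATE TABLE sales AS "
    "SELECT customer, CAST(amount_cents AS REAL) / 100 AS amount_usd "
    "FROM raw_sales"
)  # transform, using the target engine itself

print(etl_db.execute("SELECT * FROM sales").fetchall())
print(elt_db.execute("SELECT * FROM sales").fetchall())
```

An "ETLT" pipeline would combine the two: light transformations (e.g. masking sensitive fields) before loading, with heavier modeling done afterwards inside the warehouse.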