Saving Data in JavaScript Without a Database

You’ve just written a great piece of JavaScript. But when the running process stops, or the user refreshes, all of that nice data disappears into the ether…
Is this you?
When prototyping, or otherwise working on tiny projects, it can be helpful to manage some state without resorting to a database solution that wasn’t designed for that creative itch you’re trying to scratch.
We’re going to explore some options that I wish I knew about when I started tinkering on the web. We’ll look at JavaScript in the browser and Node.js on the back end. We’ll also look at some lightweight databases that use the local file system.
First up is JSON serializing your data and saving it to disk. The MDN Docs have a great article if you haven’t worked with JSON before.
const fs = require('fs');

const users = {
  'Bob': {
    age: 25,
    language: 'Python'
  },
  'Alice': {
    age: 36,
    language: 'Haskell'
  }
}

fs.writeFile('users.json', JSON.stringify(users), (err) => {
  // Catch this!
  if (err) throw err;
  console.log('Users saved!');
});
We created our users object, converted it to JSON with JSON#stringify, and called fs#writeFile. We passed in a filename, our serialized data, and an arrow function as a callback to execute when the write operation finishes. Your program will continue executing code in the meantime.
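To get that data back on the next run, you could read the file and parse it. A minimal sketch, assuming users.json was written as above:

const fs = require('fs');

// Read the JSON string back in and turn it into an object again
fs.readFile('users.json', 'utf8', (err, data) => {
  if (err) throw err;
  const users = JSON.parse(data);
  console.log(users.Alice.language); // "Haskell"
});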
You can also use this method to write normal serialized data by passing in anything that can be cast to a string. If you’re storing text data, you may find fs#appendFile useful. It uses an almost identical API but sends the data to the end of the file, keeping the existing contents.
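For example, a quick sketch of logging to a (hypothetical) visits.log file, adding a line each time instead of overwriting:

const fs = require('fs');

// appendFile creates the file if it doesn't exist, then adds to the end of it
fs.appendFile('visits.log', `${new Date().toISOString()}\n`, (err) => {
  if (err) throw err;
  console.log('Visit logged!');
});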
There is a synchronous option, fs#writeFileSync, but it isn’t recommended, as your program will be unresponsive until the write operation finishes. In JavaScript, you should aim to Never Block.
If you’re dealing with CSV files, reach for the battle-hardened node-csv project.
SQLite uses a local file as a database — and is one of my favorite pieces of software in the world. It enables many of my smaller projects to exist with low maintenance and little deployment hassle.
SQLite is also famously reliable and rigorously tested. Seriously, How SQLite Is Tested is a wild ride.
In Node.js, we commonly use the sqlite3 npm package. I’ll be using some code from Glitch’s hello-sqlite template, which you can play around with and remix without an account.
// hello-sqlite
var fs = require('fs');
var dbFile = './.data/sqlite.db'; // Our database file
var exists = fs.existsSync(dbFile); // Sync is okay since we're booting up
var sqlite3 = require('sqlite3').verbose(); // For long stack traces
var db = new sqlite3.Database(dbFile);
Through this db object, we can interact with our local database like we would through a connection to an outside database.
We can create tables.
db.run('CREATE TABLE Dreams (dream TEXT)');
Insert data (with error handling).
db.run('INSERT INTO Dreams (dream) VALUES (?)', ['Well tested code'], function(err) {
  if (err) {
    console.error(err);
  } else {
    console.log('Dream saved!');
  }
});
Select that data back.
db.all('SELECT * from Dreams', function(err, rows) {
  console.log(JSON.stringify(rows));
});
You may want to consider serializing some of your database queries. Each command inside the serialize() function is guaranteed to finish executing before the next one starts. The sqlite3 documentation is expansive. Keep an eye on the SQLite Data Types as they can be a little different to other databases.
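A minimal sketch of db.serialize(), assuming the same db object from above, where the table is guaranteed to exist before the insert and select run:

db.serialize(function() {
  db.run('CREATE TABLE IF NOT EXISTS Dreams (dream TEXT)');
  db.run('INSERT INTO Dreams (dream) VALUES (?)', ['Ordered queries']);
  db.all('SELECT * from Dreams', function(err, rows) {
    if (err) console.error(err);
    else console.log(JSON.stringify(rows));
  });
});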
If even SQLite seems like too much overhead for your project, consider lowdb (also remixable on Glitch). lowdb is exciting because it’s a small local JSON database powered by Lodash (it supports Node, Electron, and the browser). Not only does it work as a wrapper for JSON files on the back end, it also provides an API which wraps localStorage in the browser.
From their examples:
import low from 'lowdb'
import LocalStorage from 'lowdb/adapters/LocalStorage'
const adapter = new LocalStorage('db')
const db = low(adapter)
db.defaults({ posts: [] })
  .write()

// Data is automatically saved to localStorage
db.get('posts')
  .push({ title: 'lowdb' })
  .write()
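The same pattern works on the back end. A sketch assuming lowdb 1.x, where the FileSync adapter persists to a local JSON file instead of localStorage:

import low from 'lowdb'
import FileSync from 'lowdb/adapters/FileSync'

// db.json is created next to your script if it doesn't already exist
const adapter = new FileSync('db.json')
const db = low(adapter)

db.defaults({ posts: [] })
  .write()

db.get('posts')
  .push({ title: 'Saved to a file this time' })
  .write()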
This brings us to the front end. window#localStorage is the modern replacement for storing data in HTTP cookies, which MDN no longer recommends for general storage.
Let’s interact with it right now. If you’re on desktop, open your dev console (F12 on Chrome) and see what DEV is storing for you:
for (const thing in localStorage) {
  console.log(thing, localStorage.getItem(thing))
}
// Example of one thing:
// pusherTransportTLS {"timestamp":1559581571665,"transport":"ws","latency":543}
We saw how lowdb interacted with localStorage, but for our small projects it’s probably easier to talk to the API directly. Like this:
// As a script, or in console
localStorage.setItem('Author', 'Andrew') // returns undefined
localStorage.getItem('Author') // returns "Andrew"
localStorage.getItem('Unset key') // returns null
It gets easier still: you can treat it like an object, although MDN recommends the API over this shortcut.
console.log(localStorage['Author']); // prints "Andrew"
If you don’t want to store data on the user’s computer forever (it can be cleared with localStorage.clear(), but don’t run that on DEV), you may be interested in sessionStorage, which has a near-identical API and only stores data for the duration of the page session.
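A quick sketch in the console, with a made-up key:

// sessionStorage mirrors the localStorage API; the data is gone once the tab is closed
sessionStorage.setItem('draft', 'Only for this session')
sessionStorage.getItem('draft') // returns "Only for this session"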
I read somewhere that SQLite is used onboard the International Space Station in some capacity, but I haven’t been able to find a source. My fiancée wants you to know that SQLite is a database and the title of this post is incorrect.
#javascript
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
The COVID-19 pandemic disrupted supply chains and brought economies around the world to a standstill. In turn, businesses need access to accurate, timely data more than ever before. As a result, the demand for data analytics is skyrocketing as businesses try to navigate an uncertain future. However, the sudden surge in demand comes with its own set of challenges.
Here is how the COVID-19 pandemic is affecting the data industry and how enterprises can prepare for the data challenges to come in 2021 and beyond.
#big data #data #data analysis #data security #data integration #etl #data warehouse #data breach #elt
CVDC 2020, the Computer Vision conference of the year, is scheduled for 13th and 14th of August to bring together the leading experts on Computer Vision from around the world. Organised by the Association of Data Scientists (ADaSCi), the premier global professional body of data science and machine learning professionals, it is a first-of-its-kind virtual conference on Computer Vision.
The second day of the conference started with quite an informative talk on the current pandemic situation. Speaking of talks, the second session “Application of Data Science Algorithms on 3D Imagery Data” was presented by Ramana M, who is the Principal Data Scientist in Analytics at Cyient Ltd.
Ramana talked about one of the most important assets of organisations, data and how the digital world is moving from using 2D data to 3D data for highly accurate information along with realistic user experiences.
The agenda of the talk included an introduction to 3D data, its applications and case studies, 3D data alignment, 3D data for object detection, and two general case studies.
This talk discussed the recent advances in 3D data processing, feature extraction methods, object type detection, object segmentation, and object measurements in different body cross-sections. It also covered the 3D imagery concepts, the various algorithms for faster data processing on the GPU environment, and the application of deep learning techniques for object detection and segmentation.
#developers corner #3d data #3d data alignment #applications of data science on 3d imagery data #computer vision #cvdc 2020 #deep learning techniques for 3d data #mesh data #point cloud data #uav data
Data integration solutions typically advocate that one approach – either ETL or ELT – is better than the other. In reality, both ETL (extract, transform, load) and ELT (extract, load, transform) serve indispensable roles in the data integration space.
Because ETL and ELT present different strengths and weaknesses, many organizations are using a hybrid “ETLT” approach to get the best of both worlds. In this guide, we’ll help you understand the “why, what, and how” of ETLT, so you can determine if it’s right for your use case.
#data science #data #data security #data integration #etl #data warehouse #data breach #elt #bid data