## About
An easy to use, lightweight, thread-safe and append-only in-memory data structure modeled as a Log. The `Log` also serves as an abstraction and building block. See `sharded.Log` for an implementation of a sharded variant of `memlog.Log`.
❌ Note: this package is not about providing an in-memory logging library. To read more about the ideas behind `memlog`, please see "The Log: What every software engineer should know about real-time data's unifying abstraction".
I keep hitting the same user story (use case) over and over again: one or more clients connected to my application want to read an immutable stream of data, e.g. events or sensor data, in-order, concurrently (thread-safe), asynchronously (at their own pace), and in a resource (memory) efficient way.
There are many solutions to this problem, e.g. exposing some sort of streaming API (gRPC, HTTP/REST long-polling) based on custom logic using Go channels or an internal ring buffer, or putting data into an external platform like Kafka, Redis Streams or RabbitMQ Streams.
The challenge I faced with these solutions was that they were either too complex (or simply overkill) for my problem, or the system I had to integrate with and read data from did not have a nice streaming API or Go SDK, leaving me to repeatedly write complex internal caching, buffering and concurrency handling logic for the client APIs.
I looked around and could not find a simple and easy to use Go library for this problem, so I created `memlog`: an easy to use, lightweight (in-memory), thread-safe, append-only log inspired by popular streaming systems, with a minimal API using Go's standard library primitives 🤩
💡 For an end-to-end API modernization example using `memlog`, see the vsphere-event-streaming project, which transforms a SOAP-based events API into an HTTP/REST streaming API.
True, an in-memory append-only log that is not durable sounds like an oxymoron. Why would someone use (or build) one? I'm glad you asked 😀
This library certainly is not intended to replace messaging, queuing or streaming systems. It was built for use cases where a durable data/event source already exists, e.g. a legacy system, REST API, database, etc., that can't (or should not) be changed, but where the (source) data should be made available over a streaming-like API, e.g. gRPC, or be processed by a Go application which requires the properties of a `Log`.
`memlog` helps bridge these different APIs and use cases: it serves as a building block to extract and store data `Record`s from an external system into an in-memory `Log` (think ordered cache).
These `Record`s can then be internally processed (lightweight ETL) or served asynchronously, in-order (`Offset`-based) and concurrently over a modern streaming API, e.g. gRPC or HTTP/REST (chunked encoding via long polling), to remote clients.
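To make this concrete, below is a minimal sketch of such an HTTP streaming endpoint. It uses the `Log.Stream` API described in the Usage section further down; the iterator shape (`Next`/`Err`) and the handler wiring are my assumptions for illustration, not code from this package, so check pkg.go.dev for the exact signatures.

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/embano1/memlog"
)

// recordsHandler is a hypothetical handler pushing Records to a client over
// HTTP chunked encoding. The Stream iterator shape (Next/Err) is an
// assumption here; see pkg.go.dev for the exact signatures.
func recordsHandler(l *memlog.Log) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}

		// start at Offset 0; a real API would let the client pass the last
		// Offset it consumed
		stream := l.Stream(r.Context(), 0)
		for {
			rec, ok := stream.Next()
			if !ok {
				break // the stream ends when the client disconnects
			}
			fmt.Fprintf(w, "%s\n", rec.Data)
			flusher.Flush() // push each Record to the client immediately
		}
	}
}

func main() {
	ctx := context.Background()
	l, err := memlog.New(ctx)
	if err != nil {
		panic(err)
	}
	if _, err := l.Write(ctx, []byte("Hello World")); err != nil {
		panic(err)
	}

	http.Handle("/records", recordsHandler(l))
	_ = http.ListenAndServe(":8080", nil)
}
```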
Given that the data source needs to be durable in this design, one can optionally build periodic checkpointing logic using the `Record` `Offset` as the checkpoint value.
💡 When running in Kubernetes, kvstore provides a nice abstraction on top of a `ConfigMap` for such requirements.
If the `memlog` process crashes, it can then load the changes since the last checkpointed `Offset` from the source and resume streaming.
💡 This approach is quite similar to the Kubernetes `ListerWatcher()` pattern. See memlog_test.go for some inspiration.
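Here is a minimal sketch of what such checkpointing logic could look like, assuming a user-provided `saveCheckpoint` function (hypothetical, e.g. backed by kvstore) and a channel reporting processed `Offset`s; only the `memlog.Offset` type is from this package.

```go
package main

import (
	"context"
	"time"

	"github.com/embano1/memlog"
)

// checkpoint periodically persists the latest processed Offset so that a
// restarted process can reload newer changes from the durable source.
// saveCheckpoint is a hypothetical, user-provided function, e.g. backed by
// kvstore/ConfigMap when running in Kubernetes.
func checkpoint(ctx context.Context, processed <-chan memlog.Offset, saveCheckpoint func(memlog.Offset) error) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	var latest memlog.Offset
	for {
		select {
		case <-ctx.Done():
			return
		case o := <-processed:
			latest = o // remember the most recently processed Record
		case <-ticker.C:
			if err := saveCheckpoint(latest); err != nil {
				continue // log the error and retry on the next tick
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	processed := make(chan memlog.Offset, 1)
	saveCheckpoint := func(o memlog.Offset) error {
		// persist o to a durable store here
		return nil
	}

	go checkpoint(ctx, processed, saveCheckpoint)
	processed <- 0 // report the first processed Offset
	time.Sleep(100 * time.Millisecond)
}
```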
## Usage
The API is intentionally kept minimal. A new `Log` is constructed with `memlog.New(ctx, options...)`. Data as `[]byte` is written to the log with `Log.Write(ctx, data)`.
The first write to the `Log` using default `Options` starts at position (`Offset`) `0`. Every write creates an immutable `Record` in the `Log`. `Record`s are purged from the `Log` when the history segment is replaced (see notes below).
The earliest and latest `Offset` available in a `Log` can be retrieved with `Log.Range(ctx)`.
A specific `Record` can be read with `Log.Read(ctx, offset)`.
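A small sketch combining both calls (I'm assuming `Log.Range` returns the earliest and latest `Offset` plus an error; see pkg.go.dev for the exact signature):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/embano1/memlog"
)

func main() {
	ctx := context.Background()
	l, err := memlog.New(ctx)
	if err != nil {
		fmt.Printf("create log: %v", err)
		os.Exit(1)
	}

	for i := 0; i < 3; i++ {
		if _, err := l.Write(ctx, []byte(fmt.Sprintf("record-%d", i))); err != nil {
			fmt.Printf("write: %v", err)
			os.Exit(1)
		}
	}

	// assumption: Range returns the earliest and latest Offset plus an error
	earliest, latest, err := l.Range(ctx)
	if err != nil {
		fmt.Printf("range: %v", err)
		os.Exit(1)
	}
	fmt.Printf("log contains offsets %d through %d\n", earliest, latest)

	// read the oldest Record still retained in the Log
	record, err := l.Read(ctx, earliest)
	if err != nil {
		fmt.Printf("read: %v", err)
		os.Exit(1)
	}
	fmt.Printf("oldest data: %s\n", record.Data)
}
```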
💡 Instead of manually polling the `Log` for new `Record`s, the streaming API `Log.Stream(ctx, startOffset)` should be used.
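A minimal streaming sketch, assuming an iterator-style API where `Stream` returns a value with `Next() (Record, bool)` and `Err() error` semantics (check pkg.go.dev for the exact signatures):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/embano1/memlog"
)

func main() {
	// a bounded context so the example terminates; a server would tie the
	// stream to the client's connection context instead
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	l, err := memlog.New(ctx)
	if err != nil {
		fmt.Printf("create log: %v", err)
		os.Exit(1)
	}

	if _, err := l.Write(ctx, []byte("Hello World")); err != nil {
		fmt.Printf("write: %v", err)
		os.Exit(1)
	}

	// assumption: Next blocks until a new Record arrives or ctx ends
	stream := l.Stream(ctx, 0)
	for {
		if r, ok := stream.Next(); ok {
			fmt.Printf("data says: %s\n", r.Data)
			continue
		}
		break
	}

	// a cancelled (expired) context also ends the stream and surfaces here
	if err := stream.Err(); err != nil {
		fmt.Printf("stream ended: %v\n", err)
	}
}
```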
### One `Log` to rule them all
One is not constrained to creating just one `Log`. For certain use cases, creating multiple `Log`s might be useful. For example:
- different `Log` sizes (i.e. retention times), e.g. premium users will have access to a larger history of `Record`s
💡 For use cases where you want to order the log by key(s), consider using the specialised `sharded.Log`.
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/embano1/memlog"
)

func main() {
	ctx := context.Background()
	l, err := memlog.New(ctx)
	if err != nil {
		fmt.Printf("create log: %v", err)
		os.Exit(1)
	}

	offset, err := l.Write(ctx, []byte("Hello World"))
	if err != nil {
		fmt.Printf("write: %v", err)
		os.Exit(1)
	}

	fmt.Printf("reading record at offset %d\n", offset)
	record, err := l.Read(ctx, offset)
	if err != nil {
		fmt.Printf("read: %v", err)
		os.Exit(1)
	}

	fmt.Printf("data says: %s", record.Data)

	// reading record at offset 0
	// data says: Hello World
}
```
## Log
The `Log` is divided into an active and a history segment. When the active segment is full (configurable via `WithMaxSegmentSize()`), it is sealed (i.e. becomes read-only) and turned into the history segment. A new empty active segment is created for subsequent writes. If there is an existing history segment, it is replaced, i.e. all its `Record`s are purged.
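To make the purging behavior concrete, here is a sketch using `WithMaxSegmentSize()`; I'm assuming the option takes the number of `Record`s per segment, and the value `10` is purely illustrative.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/embano1/memlog"
)

func main() {
	ctx := context.Background()

	// assumption: WithMaxSegmentSize sets the number of Records per segment
	l, err := memlog.New(ctx, memlog.WithMaxSegmentSize(10))
	if err != nil {
		fmt.Printf("create log: %v", err)
		os.Exit(1)
	}

	// 25 writes fill the first segment (offsets 0-9), seal it as history,
	// and later replace it when the second segment (10-19) is sealed
	for i := 0; i < 25; i++ {
		if _, err := l.Write(ctx, []byte(fmt.Sprintf("record-%d", i))); err != nil {
			fmt.Printf("write: %v", err)
			os.Exit(1)
		}
	}

	earliest, latest, err := l.Range(ctx)
	if err != nil {
		fmt.Printf("range: %v", err)
		os.Exit(1)
	}

	// offsets 0-9 were purged with the replaced history segment, so this
	// should print: earliest: 10 latest: 24
	fmt.Printf("earliest: %d latest: %d\n", earliest, latest)
}
```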
See pkg.go.dev for the API reference and examples.
## Benchmark
I haven't done any extensive benchmarking or code optimization. Feel free to chime in and provide meaningful feedback/optimizations.
One could argue whether using two slices (the active and history data `[]Record` inside the individual segments) is a good engineering choice, e.g. compared to a single growable slice as an alternative.
The reason I went with two segments is that, for me, dividing the `Log` into multiple fixed-size (and fixed-capacity) segments was easier to reason about in the code (and I followed my intuition about how log-structured data platforms do it). I did not inspect the Go compiler optimizations, e.g. it might actually be smart and create one growable slice under the hood. 🤓
These are some results on my MacBook using a log size of `1,000` (records), i.e. where the `Log` history is constantly purged and new segments (slices) are created.
```
go test -v -run=none -bench=. -cpu 1,2,4,8,16 -benchmem
goos: darwin
goarch: amd64
pkg: github.com/embano1/memlog
cpu: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
BenchmarkLog_write        9973622    116.7 ns/op    89 B/op    1 allocs/op
BenchmarkLog_write-2     10612510    111.4 ns/op    89 B/op    1 allocs/op
BenchmarkLog_write-4     10465269    112.2 ns/op    89 B/op    1 allocs/op
BenchmarkLog_write-8     10472682    112.7 ns/op    89 B/op    1 allocs/op
BenchmarkLog_write-16    10525519    113.6 ns/op    89 B/op    1 allocs/op
BenchmarkLog_read        19875546    59.97 ns/op    32 B/op    1 allocs/op
BenchmarkLog_read-2      22287092    55.22 ns/op    32 B/op    1 allocs/op
BenchmarkLog_read-4      21024020    54.66 ns/op    32 B/op    1 allocs/op
BenchmarkLog_read-8      20789745    55.03 ns/op    32 B/op    1 allocs/op
BenchmarkLog_read-16     22367100    55.74 ns/op    32 B/op    1 allocs/op
PASS
ok      github.com/embano1/memlog    13.125s
```
Author: Embano1
Source Code: https://github.com/embano1/memlog
License: Apache-2.0 License