Gordon Taylor

Firespray: Blazingly Fast Streaming Charts

Firespray

Streaming charts library developed by Boundary

Bar charts

Stacked, percent bar charts

Line charts

Stacked area charts

Mirror line charts, everything is stylable

Range selector

Optimized for large datasets

Progressive rendering using render-slicer

Canvas and SVG renderers

Canvas for large datasets, SVG for crisp lines at any zoom level

Live examples

Author: Boundary
Source Code: https://github.com/boundary/firespray 
License: MIT license

#javascript #d3 #streaming #charts 


Gilbert Elena

Game Recording Software Allows To Record Your Game In HD Quality

Game recording software is a type of program that lets you record your gameplay in high definition. You can share your clips on social platforms such as Facebook, WhatsApp, and Twitter. These programs can save recordings in formats such as MOV, MP4, and GIF, and many of them can also capture live streams.

Visit: https://smartphonecrunch.com/game-recording-software/

#game #recording #software #desktop #videos #streaming #live

Nigel Uys

Fast, Concurrent, Streaming Access to Amazon S3, including Gof3r, CLI

s3gof3r 

s3gof3r provides fast, parallelized, pipelined streaming access to Amazon S3. It includes a command-line interface: gof3r.

It is optimized for high speed transfer of large objects into and out of Amazon S3. Streaming support allows for usage like:

  $ tar -czf - <my_dir/> | gof3r put -b <s3_bucket> -k <s3_object>    
  $ gof3r get -b <s3_bucket> -k <s3_object> | tar -zx

Speed Benchmarks

On an EC2 instance, gof3r can exceed 1 Gbps for both puts and gets:

  $ gof3r get -b test-bucket -k 8_GB_tar | pv -a | tar -x
  Duration: 53.201632211s
  [ 167MB/s]
  

  $ tar -cf - test_dir/ | pv -a | gof3r put -b test-bucket -k 8_GB_tar
  Duration: 1m16.080800315s
  [ 119MB/s]

These tests were performed on an m1.xlarge EC2 instance with a virtualized 1 Gigabit ethernet interface. See Amazon EC2 Instance Details for more information.

Features

Speed: Especially for larger S3 objects, where parallelism can be exploited, s3gof3r will saturate the bandwidth of an EC2 instance. See the benchmarks above.

Streaming Uploads and Downloads: As the above examples illustrate, streaming allows the gof3r command-line tool to be used with linux/unix pipes. This allows transformation of the data in parallel as it is uploaded or downloaded from S3.

End-to-end Integrity Checking: s3gof3r calculates the md5 hash of the stream in parallel while uploading and downloading. On upload, a file containing the md5 hash is saved in s3. This is checked against the calculated md5 on download. On upload, the content-md5 of each part is calculated and sent with the header to be checked by AWS. s3gof3r also checks the 'hash of hashes' returned by S3 in the Etag field on completion of a multipart upload. See the S3 API Reference for details.

Retry Everything: Every HTTP request and every part is retried on both uploads and downloads. Requests to S3 frequently time out, especially under high load, so this is essential for completing large uploads or downloads.

Memory Efficiency: Memory used to upload and download parts is recycled. For an upload or download with the default concurrency of 10 and part size of 20 MB, the maximum memory usage is less than 300 MB. Memory footprint can be further reduced by reducing part size or concurrency.

Installation

s3gof3r is written in Go and requires Go 1.5 or later. It can be installed with go get, which downloads and compiles it from source. To install the command-line tool gof3r, set GO15VENDOREXPERIMENT=1 in your environment and run:

$ go get github.com/rlmcpherson/s3gof3r/gof3r

To install just the package for use in other Go programs:

$ go get github.com/rlmcpherson/s3gof3r
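
Once the package is installed, uploads and downloads can be streamed directly from Go code. Here is a minimal sketch of a parallel streaming upload based on the package's godoc; the config fields used for tuning (Concurrency, PartSize) and DefaultConfig are assumptions to verify against the current API:

package main

import (
    "io"
    "log"
    "os"

    "github.com/rlmcpherson/s3gof3r"
)

func main() {
    // Read AWS keys from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
    keys, err := s3gof3r.EnvKeys()
    if err != nil {
        log.Fatal(err)
    }

    // "" uses the default S3 domain; pass a custom endpoint for S3-compatible services.
    s3 := s3gof3r.New("", keys)
    bucket := s3.Bucket("my-bucket")

    // Adjust the package default config: memory use is roughly Concurrency * PartSize,
    // so lowering either value reduces the footprint (field names assumed from the godoc).
    conf := s3gof3r.DefaultConfig
    conf.Concurrency = 10
    conf.PartSize = 20 * 1024 * 1024 // 20 MB parts

    file, err := os.Open("large_file.bin")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    // PutWriter returns an io.WriteCloser that uploads parts in parallel as data is written.
    w, err := bucket.PutWriter("backups/large_file.bin", nil, conf)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := io.Copy(w, file); err != nil {
        log.Fatal(err)
    }
    // Close completes the multipart upload.
    if err := w.Close(); err != nil {
        log.Fatal(err)
    }
}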

Release Binaries

To try the latest release of the gof3r command-line interface without installing go, download the statically-linked binary for your architecture from Github Releases.

gof3r (command-line interface) usage:

  To stream up to S3:
     $  <input_stream> | gof3r put -b <bucket> -k <s3_path>
  To stream down from S3:
     $ gof3r get -b <bucket> -k <s3_path> | <output_stream>
  To upload a file to S3:
     $ gof3r cp <local_path> s3://<bucket>/<s3_path>
  To download a file from S3:
     $ gof3r cp s3://<bucket>/<s3_path> <local_path>

Set AWS keys as environment variables:

  $ export AWS_ACCESS_KEY_ID=<access_key>
  $ export AWS_SECRET_ACCESS_KEY=<secret_key>

gof3r also supports IAM role-based keys from EC2 instance metadata. If they are available and environment variables are not set, these keys are used automatically.

Examples:

$ tar -cf - /foo_dir/ | gof3r put -b my_s3_bucket -k bar_dir/s3_object -m x-amz-meta-custom-metadata:abc123 -m x-amz-server-side-encryption:AES256
$ gof3r get -b my_s3_bucket -k bar_dir/s3_object | tar -x    

See the gof3r man page for complete usage.

Documentation

s3gof3r package: see the godocs for API documentation.

gof3r CLI: godoc and gof3r man page

Have a question? Ask it on the s3gof3r Mailing List

Author: rlmcpherson
Source Code: https://github.com/rlmcpherson/s3gof3r 
License: MIT license

#go #golang #cli #amazon #streaming 


Parse & generate m3u8 playlists for Apple HTTP Live Streaming (HLS)

go-m3u8

go-m3u8 provides easy generation and parsing of m3u8 playlists defined in the HTTP Live Streaming (HLS) Internet Draft published by Apple.

  • The library completely implements version 20 of the HLS Internet Draft.
  • Provides parsing of an m3u8 playlist into an object model from any File, io.Reader or string.
  • Provides ability to write playlist to a string via String()
  • Distinction between a master and media playlist is handled automatically (single Playlist type).
  • Optionally, the library can automatically generate the audio/video codecs string used in the CODEC attribute based on specified H.264, AAC, or MP3 options (such as Profile/Level).

Installation

go get github.com/quangngotan95/go-m3u8

Usage (creating playlists)

Create a master playlist and child playlists for adaptive bitrate streaming:

import (
    "github.com/quangngotan95/go-m3u8/m3u8"
    "github.com/AlekSi/pointer"
)

playlist := m3u8.NewPlaylist()

Create a new playlist item:

item := &m3u8.PlaylistItem{
    Width:      pointer.ToInt(1920),
    Height:     pointer.ToInt(1080),
    Profile:    pointer.ToString("high"),
    Level:      pointer.ToString("4.1"),
    AudioCodec: pointer.ToString("aac-lc"),
    Bandwidth:  540,
    URI:        "test.url",
}
playlist.AppendItem(item)

Add alternate audio, camera angles, closed captions and subtitles by creating MediaItem instances and adding them to the Playlist:

item := &m3u8.MediaItem{
    Type:          "AUDIO",
    GroupID:       "audio-lo",
    Name:          "Francais",
    Language:      pointer.ToString("fre"),
    AssocLanguage: pointer.ToString("spoken"),
    AutoSelect:    pointer.ToBool(true),
    Default:       pointer.ToBool(false),
    Forced:        pointer.ToBool(true),
    URI:           pointer.ToString("frelo/prog_index.m3u8"),
}
playlist.AppendItem(item)

Create a standard playlist and add MPEG-TS segments via SegmentItem. You can also specify options for this type of playlist; however, these options are ignored if the playlist becomes a master playlist (i.e., anything other than segments is added):

playlist := &m3u8.Playlist{
    Target:   12,
    Sequence: 1,
    Version:  pointer.ToInt(1),
    Cache:    pointer.ToBool(false),
    Items: []m3u8.Item{
        &m3u8.SegmentItem{
            Duration: 11,
            Segment:  "test.ts",
        },
    },
}

You can also access the playlist as a string:

var str string
str = playlist.String()
...
fmt.Print(playlist)

Alternatively, you can set the codecs string yourself rather than having it generated automatically:

item := &m3u8.PlaylistItem{
    Width:     pointer.ToInt(1920),
    Height:    pointer.ToInt(1080),
    Codecs:    pointer.ToString("avc1.66.30,mp4a.40.2"),
    Bandwidth: 540,
    URI:       "test.url",
}

Usage (parsing playlists)

Parse from file

playlist, err := m3u8.ReadFile("path/to/file")

Read from string

playlist, err := m3u8.ReadString(string)

Read from generic io.Reader

playlist, err := m3u8.Read(reader)

Access items in playlist:

gore> playlist.Items[0]
(*m3u8.SessionKeyItem)#EXT-X-SESSION-KEY:METHOD=AES-128,URI="https://priv.example.com/key.php?r=52"
gore> playlist.Items[1]
(*m3u8.PlaybackStart)#EXT-X-START:TIME-OFFSET=20.2
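
Because Items is a slice of the m3u8.Item interface, a type switch is the natural way to work with a parsed playlist in Go. A short sketch using the item types that appear in the examples above (other item types, such as MediaItem, can be handled the same way):

import (
    "fmt"

    "github.com/quangngotan95/go-m3u8/m3u8"
)

func printItems(playlist *m3u8.Playlist) {
    for _, item := range playlist.Items {
        switch v := item.(type) {
        case *m3u8.SegmentItem:
            // Media segment: duration and URI, as in the creation example above.
            fmt.Println("segment:", v.Segment, "duration:", v.Duration)
        case *m3u8.PlaylistItem:
            // Variant stream entry from a master playlist.
            fmt.Println("variant:", v.URI, "bandwidth:", v.Bandwidth)
        default:
            // Other item types print via their tag representation.
            fmt.Println("item:", item)
        }
    }
}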

Misc

Codecs:

  • Values for audio_codec (codec name): aac-lc, he-aac, mp3
  • Values for profile (H.264 Profile): baseline, main, high.
  • Values for level (H.264 Level): 3.0, 3.1, 4.0, 4.1.

Not all levels and profiles can be combined, and validation is not currently implemented; consult the H.264 documentation for further details.

Contributing

  1. Fork it https://github.com/quangngotan95/go-m3u8/fork
  2. Create your feature branch git checkout -b my-new-feature
  3. Run tests go test ./test/..., make sure they all pass and new features are covered
  4. Commit your changes git commit -am "Add new features"
  5. Push to the branch git push origin my-new-feature
  6. Create a new Pull Request

Go package for m3u8 (a port of the m3u8 Ruby gem: https://github.com/sethdeckard/m3u8)

Author: Quangngotan95
Source Code: https://github.com/quangngotan95/go-m3u8 
License: MIT license

#go #golang #streaming 


Oppressor: Streaming Http Compression Response Negotiator

oppressor

streaming http compression response negotiator

example

You can use plain old streams:

var oppressor = require('oppressor');
var fs = require('fs');
var http = require('http');

var server = http.createServer(function (req, res) {
    fs.createReadStream(__dirname + '/data.txt')
        .pipe(oppressor(req))
        .pipe(res)
    ;
});
server.listen(8000);

or you can use fancy streaming static file server modules like filed that set handy things like etag, last-modified, and content-type headers for you:

var oppressor = require('oppressor');
var filed = require('filed');
var http = require('http');

var server = http.createServer(function (req, res) {
     filed(__dirname + '/data.txt')
        .pipe(oppressor(req))
        .pipe(res)
    ;
});
server.listen(8000);

methods

var oppressor = require('oppressor')

var stream = oppressor(req)

Returns a duplex stream that will be compressed with gzip, deflate, or no compression, depending on the accept-encoding header sent.

oppressor will emulate calls to http.ServerResponse methods like writeHead() so that modules like filed that expect to be piped directly to the response object will work.

install

With npm do:

npm install oppressor

Author: Substack
Source Code: https://github.com/substack/oppressor 
License: View license

#node #streaming #http 


Fstream: Advanced FS Streaming for Node

Like FS streams, but with stat on them, and supporting directories and symbolic links, as well as normal files. Also, you can use this to set the stats on a file, even if you don't change its contents, or to create a symlink, etc.

So, for example, you can "write" a directory, and it'll call mkdir. You can specify a uid and gid, and it'll call chown. You can specify an mtime and atime, and it'll call utimes. You can call it a symlink and provide a linkpath and it'll call symlink.

Note that it won't automatically resolve symbolic links. So, if you call fstream.Reader('/some/symlink') then you'll get an object that stats and then ends immediately (since it has no data). To follow symbolic links, do this: fstream.Reader({path:'/some/symlink', follow: true }).

There are various checks to make sure that the bytes emitted are the same as the intended size, if the size is set.

Examples

fstream
  .Writer({ path: "path/to/file"
          , mode: 0755
          , size: 6
          })
  .write("hello\n")
  .end()

This will create the directories if they're missing, and then write hello\n into the file, chmod it to 0755, and assert that 6 bytes have been written when it's done.

fstream
  .Writer({ path: "path/to/file"
          , mode: 0755
          , size: 6
          , flags: "a"
          })
  .write("hello\n")
  .end()

You can pass flags in, if you want to append to a file.

fstream
  .Writer({ path: "path/to/symlink"
          , linkpath: "./file"
          , SymbolicLink: true
          , mode: "0755" // octal strings supported
          })
  .end()

If isSymbolicLink is a function, it'll be called, and if it returns true, then it'll treat it as a symlink. If it's not a function, then any truish value will make a symlink, or you can set type: 'SymbolicLink', which does the same thing.

Note that the linkpath is relative to the symbolic link location, not the parent dir or cwd.

fstream
  .Reader("path/to/dir")
  .pipe(fstream.Writer("path/to/other/dir"))

This will do roughly what cp -Rp path/to/dir path/to/other/dir does. If the other dir exists and isn't a directory, then it'll emit an error. It'll also set the uid, gid, mode, etc. to be identical. In this way, it's more like rsync -a than simply a copy.

Author: npm
Source Code: https://github.com/npm/fstream 
License: ISC license

#node #streaming 


Ndjson: Streaming Line Delimited json Parser + Serializer

ndjson

streaming newline delimited json parser + serializer. Available as a JS API or a command line tool


usage

var ndjson = require('ndjson')

ndjson.parse(opts)

returns a transform stream that accepts newline delimited json and emits objects

example newline delimited json:

data.txt:

{"foo": "bar"}
{"hello": "world"}

If you want to discard non-valid JSON messages, you can call ndjson.parse({strict: false})

usage:

fs.createReadStream('data.txt')
  .pipe(ndjson.parse())
  .on('data', function(obj) {
    // obj is a javascript object
  })

ndjson.serialize() / ndjson.stringify()

returns a transform stream that accepts json objects and emits newline delimited json

example usage:

var serialize = ndjson.serialize()
serialize.on('data', function(line) {
  // line is a line of stringified JSON with a newline delimiter at the end
})
serialize.write({"foo": "bar"})
serialize.end()

Development of this npm package has moved to https://github.com/ndjson/ndjson.js

Author: Maxogden
Source Code: https://github.com/maxogden/ndjson 
License: BSD-3-Clause

#node #streaming #json 


Readdirp: Recursive Version Of Fs.readdir with Streaming Api

readdirp 

Recursive version of fs.readdir. Exposes a stream API and a promise API.

npm install readdirp
const readdirp = require('readdirp');

// Use streams to achieve small RAM & CPU footprint.
// 1) Streams example with for-await.
for await (const entry of readdirp('.')) {
  const {path} = entry;
  console.log(`${JSON.stringify({path})}`);
}

// 2) Streams example, non for-await.
// Print out all JS files along with their size within the current folder & subfolders.
readdirp('.', {fileFilter: '*.js', alwaysStat: true})
  .on('data', (entry) => {
    const {path, stats: {size}} = entry;
    console.log(`${JSON.stringify({path, size})}`);
  })
  // Optionally call stream.destroy() in `warn()` in order to abort and cause 'close' to be emitted
  .on('warn', error => console.error('non-fatal error', error))
  .on('error', error => console.error('fatal error', error))
  .on('end', () => console.log('done'));

// 3) Promise example. More RAM and CPU than streams / for-await.
const files = await readdirp.promise('.');
console.log(files.map(file => file.path));

// Other options.
readdirp('test', {
  fileFilter: '*.js',
  directoryFilter: ['!.git', '!*modules'],
  // directoryFilter: (di) => di.basename.length === 9
  type: 'files_directories',
  depth: 1
});

For more examples, check out examples directory.

API

Stream API: const stream = readdirp(root[, options])

  • Reads given root recursively and returns a stream of entry infos
  • Optionally can be used like for await (const entry of stream) with node.js 10+ (asyncIterator).
  • on('data', (entry) => {}) entry info for every file / dir.
  • on('warn', (error) => {}) non-fatal Error that prevents a file / dir from being processed. Example: inaccessible to the user.
  • on('error', (error) => {}) fatal Error which also ends the stream. Example: illegal options were passed.
  • on('end') — we are done. Called when all entries were found and no more will be emitted.
  • on('close') — stream is destroyed via stream.destroy(). Could be useful if you want to manually abort even on a non-fatal error. At that point the stream is no longer readable and no more entries, warnings, or errors are emitted.
  • To learn more about streams, consult the very detailed nodejs streams documentation or the stream-handbook

Promise API: const entries = await readdirp.promise(root[, options]). Returns a list of entry infos.

The first argument is always root, the path in which to start reading and recursing into subdirectories.

options

  • fileFilter: ["*.js"]: filter to include or exclude files. A Function, Glob string or Array of glob strings.
    • Function: a function that takes an entry info as a parameter and returns true to include or false to exclude the entry
    • Glob string: a string (e.g., *.js) which is matched using picomatch, so go there for more information. Globstars (**) are not supported since specifying a recursive pattern for an already recursive function doesn't make sense. Negated globs (as explained in the minimatch documentation) are allowed, e.g., !*.txt matches everything but text files.
    • Array of glob strings: either need to be all inclusive or all exclusive (negated) patterns, otherwise an error is thrown. ['*.json', '*.js'] includes all JavaScript and JSON files. ['!.git', '!node_modules'] includes all directories except '.git' and 'node_modules'.
    • Directories that do not pass a filter will not be recursed into.
  • directoryFilter: ['!.git']: filter to include/exclude directories found and to recurse into. Directories that do not pass a filter will not be recursed into.
  • depth: 5: depth at which to stop recursing even if more subdirectories are found
  • type: 'files': determines if data events on the stream should be emitted for 'files' (default), 'directories', 'files_directories', or 'all'. Setting to 'all' will also include entries for other types of file descriptors like character devices, unix sockets and named pipes.
  • alwaysStat: false: always return stats property for every file. Default is false, readdirp will return Dirent entries. Setting it to true can double readdir execution time - use it only when you need file size, mtime etc. Cannot be enabled on node <10.10.0.
  • lstat: false: include symlink entries in the stream along with files. When true, fs.lstat will be used instead of fs.stat

EntryInfo

Has the following properties:

  • path: 'assets/javascripts/react.js': path to the file/directory (relative to given root)
  • fullPath: '/Users/dev/projects/app/assets/javascripts/react.js': full path to the file/directory found
  • basename: 'react.js': name of the file/directory
  • dirent: fs.Dirent: built-in dir entry object - only with alwaysStat: false
  • stats: fs.Stats: built in stat object - only with alwaysStat: true

Changelog

  • 3.5 (Oct 13, 2020) disallows recursive directory-based symlinks. Previously, it could have entered an infinite loop.
  • 3.4 (Mar 19, 2020) adds support for directory-based symlinks.
  • 3.3 (Dec 6, 2019) stabilizes RAM consumption and enables perf management with highWaterMark option. Fixes race conditions related to for-await looping.
  • 3.2 (Oct 14, 2019) improves performance by 250% and makes streams implementation more idiomatic.
  • 3.1 (Jul 7, 2019) brings bigint support to stat output on Windows. This is backwards-incompatible for some cases. Be careful. If you use it incorrectly, you'll see "TypeError: Cannot mix BigInt and other types, use explicit conversions".
  • 3.0 brings huge performance improvements and stream backpressure support.
  • Upgrading 2.x to 3.x:
    • Signature changed from readdirp(options) to readdirp(root, options)
    • Replaced callback API with promise API.
    • Renamed entryType option to type
    • Renamed entryType: 'both' to 'files_directories'
    • EntryInfo
      • Renamed stat to stats
        • Emitted only when alwaysStat: true
        • dirent is emitted instead of stats by default with alwaysStat: false
      • Renamed name to basename
      • Removed parentDir and fullParentDir properties
  • Supported node.js versions:
    • 3.x: node 8+
    • 2.x: node 0.6+

Author: Paulmillr
Source Code: https://github.com/paulmillr/readdirp 
License: MIT license

#node #api #streaming #javascript 

Nigel Uys

Archiver: Easily Create, Extract Archives, Compress & Decompress Files

archiver 

Introducing Archiver 4.0 - a cross-platform, multi-format archive utility and Go library. A powerful and flexible library meets an elegant CLI in this generic replacement for several platform-specific or format-specific archive utilities.

⚠️ v4 is in ALPHA. The core library APIs work pretty well but the command has not been implemented yet, nor have most automated tests. If you need the arc command, stick with v3 for now.

Features

  • Stream-oriented APIs
  • Automatically identify archive and compression formats:
    • By file name
    • By header
  • Traverse directories, archive files, and any other file uniformly as io/fs file systems
  • Compress and decompress files
  • Create and extract archive files
  • Walk or traverse into archive files
  • Extract only specific files from archives
  • Insert (append) into .tar files
  • Numerous archive and compression formats supported
  • Extensible (add more formats just by registering them)
  • Cross-platform, static binary
  • Pure Go (no cgo)
  • Multithreaded Gzip
  • Adjust compression levels
  • Automatically add compressed files to zip archives without re-compressing
  • Open password-protected RAR archives

Supported compression formats

  • brotli (br)
  • bzip2 (bz2)
  • flate (zip)
  • gzip (gz)
  • lz4
  • snappy (sz)
  • xz
  • zstandard (zst)

Supported archive formats

  • .zip
  • .tar (including any compressed variants like .tar.gz)
  • .rar (read-only)

Tar files can optionally be compressed using any compression format.

Command use

Coming soon for v4. See the last v3 docs.

Library use

$ go get github.com/mholt/archiver/v4

Create archive

Creating archives can be done entirely without needing a real disk or storage device since all you need is a list of File structs to pass in.

However, creating archives from files on disk is very common, so you can use the FilesFromDisk() function to help you map filenames on disk to their paths in the archive. Then create and customize the format type.

In this example, we add 4 files and a directory (which includes its contents recursively) to a .tar.gz file:

// map files on disk to their paths in the archive
files, err := archiver.FilesFromDisk(nil, map[string]string{
    "/path/on/disk/file1.txt": "file1.txt",
    "/path/on/disk/file2.txt": "subfolder/file2.txt",
    "/path/on/disk/file3.txt": "",              // put in root of archive as file3.txt
    "/path/on/disk/file4.txt": "subfolder/",    // put in subfolder as file4.txt
    "/path/on/disk/folder":    "Custom Folder", // contents added recursively
})
if err != nil {
    return err
}

// create the output file we'll write to
out, err := os.Create("example.tar.gz")
if err != nil {
    return err
}
defer out.Close()

// we can use the CompressedArchive type to gzip a tarball
// (compression is not required; you could use Tar directly)
format := archiver.CompressedArchive{
    Compression: archiver.Gz{},
    Archival:    archiver.Tar{},
}

// create the archive
err = format.Archive(context.Background(), out, files)
if err != nil {
    return err
}

The first parameter to FilesFromDisk() is an optional options struct, allowing you to customize how files are added.

Extract archive

Extracting an archive, extracting from an archive, and walking an archive are all the same function.

Simply use your format type (e.g. Zip) to call Extract(). You'll pass in a context (for cancellation), the input stream, the list of files you want out of the archive, and a callback function to handle each file.

If you want all the files, pass in a nil list of file paths.

// the type that will be used to read the input stream
format := archiver.Zip{}

// the list of files we want out of the archive; any
// directories will include all their contents unless
// we return fs.SkipDir from our handler
// (leave this nil to walk ALL files from the archive)
fileList := []string{"file1.txt", "subfolder"}

handler := func(ctx context.Context, f archiver.File) error {
    // do something with the file
    return nil
}

err := format.Extract(ctx, input, fileList, handler)
if err != nil {
    return err
}

Identifying formats

Have an input stream with unknown contents? No problem, archiver can identify it for you. It will try matching based on filename and/or the header (which peeks at the stream):

format, input, err := archiver.Identify("filename.tar.zst", input)
if err != nil {
    return err
}
// you can now type-assert format to whatever you need;
// be sure to use returned stream to re-read consumed bytes during Identify()

// want to extract something?
if ex, ok := format.(archiver.Extractor); ok {
    // ... proceed to extract
}

// or maybe it's compressed and you want to decompress it?
if decom, ok := format.(archiver.Decompressor); ok {
    rc, err := decom.OpenReader(unknownFile)
    if err != nil {
        return err
    }
    defer rc.Close()

    // read from rc to get decompressed data
}

Identify() works by reading an arbitrary number of bytes from the beginning of the stream (just enough to check for file headers). It buffers them and returns a new reader that lets you re-read them anew.

Virtual file systems

This is my favorite feature.

Let's say you have a file. It could be a real directory on disk, an archive, a compressed archive, or any other regular file. You don't really care; you just want to use it uniformly no matter what it is.

Use archiver to simply create a file system:

// filename could be:
// - a folder ("/home/you/Desktop")
// - an archive ("example.zip")
// - a compressed archive ("example.tar.gz")
// - a regular file ("example.txt")
// - a compressed regular file ("example.txt.gz")
fsys, err := archiver.FileSystem(filename)
if err != nil {
    return err
}

This is a fully-featured fs.FS, so you can open files and read directories, no matter what kind of file the input was.

For example, to open a specific file:

f, err := fsys.Open("file")
if err != nil {
    return err
}
defer f.Close()

If you opened a regular file, you can read from it. If it's a compressed file, reads are automatically decompressed.

If you opened a directory, you can list its contents:

if dir, ok := f.(fs.ReadDirFile); ok {
    // 0 gets all entries, but you can pass > 0 to paginate
    entries, err := dir.ReadDir(0)
    if err != nil {
        return err
    }
    for _, e := range entries {
        fmt.Println(e.Name())
    }
}

Or get a directory listing this way:

entries, err := fsys.ReadDir("Playlists")
if err != nil {
    return err
}
for _, e := range entries {
    fmt.Println(e.Name())
}

Or maybe you want to walk all or part of the file system, but skip a folder named .git:

err := fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
    if err != nil {
        return err
    }
    if path == ".git" {
        return fs.SkipDir
    }
    fmt.Println("Walking:", path, "Dir?", d.IsDir())
    return nil
})
if err != nil {
    return err
}

Compress data

Compression formats let you open writers to compress data:

// wrap underlying writer w
compressor, err := archiver.Zstd{}.OpenWriter(w)
if err != nil {
    return err
}
defer compressor.Close()

// writes to compressor will be compressed

Decompress data

Similarly, compression formats let you open readers to decompress data:

// wrap underlying reader r
decompressor, err := archiver.Brotli{}.OpenReader(r)
if err != nil {
    return err
}
defer decompressor.Close()

// reads from decompressor will be decompressed

Append to tarball

Tar archives can be appended to without creating a whole new archive by calling Insert() on a tar stream. However, this requires that the tarball is not compressed (due to complexities with modifying compression dictionaries).

Here is an example that appends a file to a tarball on disk:

tarball, err := os.OpenFile("example.tar", os.O_RDWR, 0644)
if err != nil {
    return err
}
defer tarball.Close()

// prepare a text file for the root of the archive
files, err := archiver.FilesFromDisk(nil, map[string]string{
    "/home/you/lastminute.txt": "",
})
if err != nil {
    return err
}

err = archiver.Tar{}.Insert(context.Background(), tarball, files)
if err != nil {
    return err
}

Author: Mholt
Source Code: https://github.com/mholt/archiver 
License: MIT License

#go #golang #streaming 

Elian Harber

Gosd: A Library for Scheduling When to Dispatch A Message To A Channel

gosd

go-schedulable-dispatcher (gosd), is a library for scheduling when to dispatch a message to a channel.

Implementation

The implementation provides an easy-to-use API with both an ingress (ingest) channel and an egress (dispatch) channel. Messages are ingested and processed into a heap-based priority queue for dispatching. At most two separate goroutines are used: one for processing messages from both the ingest channel and the heap, and the other as a timer. Order is not guaranteed by default when messages have the same scheduled time, but this can be changed through the config. Guaranteeing order makes performance slightly worse. If strict ordering isn't critical to your application, it's recommended to keep the default setting.

Example

// create instance of dispatcher
dispatcher, err := gosd.NewDispatcher(&gosd.DispatcherConfig{
    IngressChannelSize:  100,
    DispatchChannelSize: 100,
    MaxMessages:         100,
    GuaranteeOrder:      false,
})
checkErr(err)

// spawn process
go dispatcher.Start()

// schedule a message
dispatcher.IngressChannel() <- &gosd.ScheduledMessage{
    At:      time.Now().Add(1 * time.Second),
    Message: "Hello World in 1 second!",
}

// wait for the message
msg := <-dispatcher.DispatchChannel()

// type assert
msgStr := msg.(string)
fmt.Println(msgStr)
// Hello World in 1 second!

// shutdown without deadline
dispatcher.Shutdown(context.Background(), false)

More examples under examples.

Benchmarking

Tested with Intel Core i7-8700K CPU @ 3.70GHz, DDR4 RAM and 1000 messages per iteration.

Benchmark_integration_unordered-12                        142       8654906 ns/op
Benchmark_integration_unorderedSmallBuffer-12             147       9503403 ns/op
Benchmark_integration_unorderedSmallHeap-12               122       8860732 ns/op
Benchmark_integration_ordered-12                           96      13354174 ns/op
Benchmark_integration_orderedSmallBuffer-12               121      10115702 ns/op
Benchmark_integration_orderedSmallHeap-12                 129      10441857 ns/op
Benchmark_integration_orderedSameTime-12                   99      12575961 ns/op

Author: Alexsniffin
Source Code: https://github.com/alexsniffin/gosd 
License: MIT License

#go #golang #streaming 

Elian Harber

Centrifugo: Real-time Messaging (Websockets Or SockJS) Server in Go

Centrifugo is a scalable real-time messaging server that works in a language-agnostic way.

Centrifugo works in conjunction with an application backend written in any programming language. It runs as a separate service and keeps persistent connections from application clients established over several supported transports (WebSocket, SockJS, EventSource, GRPC, HTTP-streaming). When you need to deliver an event to your clients in real time, you publish it to the Centrifugo server API, and Centrifugo then broadcasts the event to all connected clients interested in it (clients subscribed to the event channel). In other words, Centrifugo is a user-facing PUB/SUB server.
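
As an illustration of that flow, the sketch below publishes an event from a Go backend using only the standard library. The address, channel name, and API key are placeholders for your own deployment; the request shape follows the publish method of Centrifugo's server HTTP API as described in the documentation:

package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    // Publish {"text": "hello"} into channel "news" via the Centrifugo HTTP API.
    body := []byte(`{"method": "publish", "params": {"channel": "news", "data": {"text": "hello"}}}`)

    req, err := http.NewRequest("POST", "http://localhost:8000/api", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    // API key auth as configured on the server (placeholder value here).
    req.Header.Set("Authorization", "apikey <API_KEY>")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // Centrifugo then broadcasts the event to all clients subscribed to "news".
    fmt.Println("publish status:", resp.Status)
}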


How to install

See installation instructions in Centrifugo documentation.

Demo

Try our demo instance on Heroku (admin password is password, token_hmac_secret_key is secret, API key is api_key). Or deploy your own Centrifugo instance in one click.

Highlights

  • Centrifugo is fast and capable of scaling to millions of simultaneous connections
  • Simple integration with any application – works as a separate service, provides HTTP and GRPC API
  • Client connectors for popular frontend environments – for both web and mobile development
  • Strict client protocol based on Protobuf schema
  • Bidirectional transport support (WebSocket and SockJS) for full-featured communication
  • Unidirectional transport support without the need for client connectors - use native APIs (SSE, Fetch, WebSocket, GRPC)
  • User authentication with a JWT or via a connection request proxy to a configured HTTP/GRPC endpoint
  • Proper connection management and expiration control
  • Various types of channels: anonymous, authenticated, private, user-limited
  • Various types of subscriptions: client-side or server-side
  • Transform RPC calls over WebSocket/SockJS into calls to a configured HTTP or GRPC endpoint
  • Presence information for channels (show all active clients in a channel)
  • History information for channels (last messages published into a channel)
  • Join/leave events for channels (client subscribed/unsubscribed)
  • Automatic recovery of missed messages between reconnects over a configured retention period
  • Built-in administrative web panel
  • Cross platform – works on Linux, macOS and Windows
  • Ready to deploy (Docker, RPM/DEB packages, automatic TLS certificates, Prometheus instrumentation, Grafana dashboard)
  • Open-source license

Backing

This repository is hosted by packagecloud.io.


Also thanks to JetBrains for supporting OSS (most of the code here is written in GoLand).


For more information, see the Centrifugo documentation site.

Author: Centrifugal
Source Code: https://github.com/centrifugal/centrifugo 
License: Apache-2.0 License

#go #golang #streaming #redis 

Sheldon Grant

WebTorrent: Streaming torrent client for the web

WebTorrent 

The streaming torrent client. For node.js and the web.

WebTorrent is a streaming torrent client for node.js and the browser. YEP, THAT'S RIGHT. THE BROWSER. It's written completely in JavaScript – the language of the web – so the same code works in both runtimes.

In node.js, this module is a simple torrent client, using TCP and UDP to talk to other torrent clients.

In the browser, WebTorrent uses WebRTC (data channels) for peer-to-peer transport. It can be used without browser plugins, extensions, or installations. It's Just JavaScript™. Note: WebTorrent does not support UDP/TCP peers in browser.

Simply include the webtorrent.min.js script on your page to start fetching files over WebRTC using the BitTorrent protocol, or require('webtorrent') with browserify. See demo apps and code examples below.

To make BitTorrent work over WebRTC (which is the only P2P transport that works on the web) we made some protocol changes. Therefore, a browser-based WebTorrent client or "web peer" can only connect to other clients that support WebTorrent/WebRTC.

To seed files to web peers, use a client that supports WebTorrent, e.g. WebTorrent Desktop, a desktop client with a familiar UI that can connect to web peers, webtorrent-hybrid, a command line program, or Instant.io, a website. Established torrent clients like Vuze have already added WebTorrent support so they can connect to both normal and web peers. We hope other clients will follow.


Features

  • Torrent client for node.js & the browser (same npm package!)
  • Insanely fast
  • Download multiple torrents simultaneously, efficiently
  • Pure Javascript (no native dependencies)
  • Exposes files as streams
    • Fetches pieces from the network on-demand so seeking is supported (even before torrent is finished)
    • Seamlessly switches between sequential and rarest-first piece selection strategy
  • Supports advanced torrent client features
  • Comprehensive test suite (runs completely offline, so it's reliable and fast)
  • Check all the supported BEPs here

Browser/WebRTC environment features

  • WebRTC data channels for lightweight peer-to-peer communication with no plugins
  • No silos. WebTorrent is a P2P network for the entire web. WebTorrent clients running on one domain can connect to clients on any other domain.
  • Stream video torrents into a <video> tag (webm (vp8, vp9) or mp4 (h.264))
  • Supports Chrome, Firefox, Opera and Safari.


Install

To install WebTorrent for use in node or the browser with require('webtorrent'), run:

npm install webtorrent

To install a webtorrent command line program, run:

npm install webtorrent-cli -g

To install a WebTorrent desktop application for Mac, Windows, or Linux, see WebTorrent Desktop.

Ways to help

Who is using WebTorrent today?

Lots of folks!

WebTorrent API Documentation

Read the full API Documentation.

Usage

WebTorrent is the first BitTorrent client that works in the browser, using open web standards (no plugins, just HTML5 and WebRTC)! It's easy to get started!

In the browser

Downloading a file is simple:

const WebTorrent = require('webtorrent')

const client = new WebTorrent()
const magnetURI = '...'

client.add(magnetURI, function (torrent) {
  // Got torrent metadata!
  console.log('Client is downloading:', torrent.infoHash)

  torrent.files.forEach(function (file) {
    // Display the file by appending it to the DOM. Supports video, audio, images, and
    // more. Specify a container element (CSS selector or reference to DOM node).
    file.appendTo('body')
  })
})

Seeding a file is simple, too:

const dragDrop = require('drag-drop')
const WebTorrent = require('webtorrent')

const client = new WebTorrent()

// When user drops files on the browser, create a new torrent and start seeding it!
dragDrop('body', function (files) {
  client.seed(files, function (torrent) {
    console.log('Client is seeding:', torrent.infoHash)
  })
})

There are more examples in docs/get-started.md.

Browserify

WebTorrent works great with browserify, an npm package that lets you use node-style require() to organize your browser code and load modules installed by npm (as seen in the previous examples).

Webpack

WebTorrent also works with webpack, another module bundler. However, webpack requires the following extra configuration:

{
  target: 'web',
  node: {
    fs: 'empty'
  }
}

Or, you can just use the pre-built version via require('webtorrent/webtorrent.min.js') and skip the webpack configuration.

Script tag

WebTorrent is also available as a standalone script (webtorrent.min.js) which exposes WebTorrent on the window object, so it can be used with just a script tag:

<script src="webtorrent.min.js"></script>

The WebTorrent script is also hosted on fast, reliable CDN infrastructure (Cloudflare and MaxCDN) for easy inclusion on your site:

<script src="https://cdn.jsdelivr.net/npm/webtorrent@latest/webtorrent.min.js"></script>

Chrome App

If you want to use WebTorrent in a Chrome App, you can include the following script:

<script src="webtorrent.chromeapp.js"></script>

Be sure to enable the chrome.sockets.udp and chrome.sockets.tcp permissions!

In Node.js

WebTorrent also works in node.js, using the same npm package! It's mad science!

NOTE: To connect to "web peers" (browsers) in addition to normal BitTorrent peers, use webtorrent-hybrid which includes WebRTC support for node.

As a command line app

WebTorrent is also available as a command line app. Here's how to use it:

$ npm install webtorrent-cli -g
$ webtorrent --help

To download a torrent:

$ webtorrent magnet_uri

To stream a torrent to a device like AirPlay or Chromecast, just pass a flag:

$ webtorrent magnet_uri --airplay

There are many supported streaming options:

--airplay               Apple TV
--chromecast            Chromecast
--mplayer               MPlayer
--mpv                   MPV
--omx [jack]            omx [default: hdmi]
--vlc                   VLC
--xbmc                  XBMC
--stdout                standard out [implies --quiet]

In addition to magnet uris, WebTorrent supports many ways to specify a torrent.

Talks about WebTorrent

Modules

Most of the active development is happening inside of small npm packages which are used by WebTorrent.

The Node Way™

"When applications are done well, they are just the really application-specific, brackish residue that can't be so easily abstracted away. All the nice, reusable components sublimate away onto github and npm where everybody can collaborate to advance the commons." — substack from "how I write modules"



These are the main modules that make up WebTorrent:

  • webtorrent: torrent client (this module)
  • bittorrent-dht: distributed hash table client
  • bittorrent-peerid: identify client name/version
  • bittorrent-protocol: bittorrent protocol stream
  • bittorrent-tracker: bittorrent tracker server/client
  • bittorrent-lsd: bittorrent local service discovery
  • create-torrent: create .torrent files
  • magnet-uri: parse magnet uris
  • parse-torrent: parse torrent identifiers
  • render-media: intelligently render media files
  • torrent-discovery: find peers via dht, tracker, and lsd
  • ut_metadata: metadata for magnet uris (protocol extension)
  • ut_pex: peer discovery (protocol extension)

Enable debug logs

In node, enable debug logs by setting the DEBUG environment variable to the name of the module you want to debug (e.g. bittorrent-protocol, or * to print all logs).

DEBUG=* webtorrent

In the browser, enable debug logs by running this in the developer console:

localStorage.debug = '*'

Disable by running this:

localStorage.removeItem('debug')

Author: Webtorrent
Source Code: https://github.com/webtorrent/webtorrent 
License: MIT License

#node #javascript #streaming 


instant.io: Streaming File Transfer Over Webtorrent

Streaming file transfer over WebTorrent (torrents on the web) 

Download/upload files using the WebTorrent protocol (BitTorrent over WebRTC). This is a beta.

Powered by WebTorrent, the first torrent client that works in the browser without plugins. WebTorrent is powered by JavaScript and WebRTC. Supports Chrome, Firefox, Opera (desktop and Android). Run localStorage.debug = '*' in the console and refresh to get detailed log output.

Install

If you just want to do file transfer on your site, or fetch/seed files over WebTorrent, then there's no need to run a copy of instant.io on your own server. Just use the WebTorrent script directly. You can learn more at https://webtorrent.io.

The client-side code that instant.io uses is here.

Run a copy of this site on your own server

To get a clone of https://instant.io running on your own server, follow these instructions.

Get the code:

git clone https://github.com/webtorrent/instant.io
cd instant.io
npm install

Modify the configuration options in config.js to set the IP/port you want the server to listen on.

Copy secret/index-sample.js to secret/index.js and update the options in there to a valid TURN server if you want a NAT traversal service (to help peers connect when behind a firewall).

To start the server, run npm start. That should be it!

Tips

Create a shareable link by adding a torrent infohash or magnet link to the end of the URL. For example: https://instant.io#INFO_HASH or https://instant.io/#MAGNET_LINK.

You can add multiple torrents in the same browser window.

Author: Webtorrent
Source Code: https://github.com/webtorrent/instant.io 
License: MIT License

#network #javascript #node #streaming 

Elian Harber

Smart-open: Utils for streaming large files (S3, HDFS, gzip, bz2...)

smart_open — utils for streaming large files in Python

What?

smart_open is a Python 3 library for efficient streaming of very large files from/to storages such as S3, GCS, Azure Blob Storage, HDFS, WebHDFS, HTTP, HTTPS, SFTP, or local filesystem. It supports transparent, on-the-fly (de-)compression for a variety of different formats.

smart_open is a drop-in replacement for Python's built-in open(): it can do anything open can (100% compatible, falls back to native open wherever possible), plus lots of nifty extra stuff on top.

Python 2.7 is no longer supported. If you need Python 2.7, please use smart_open 1.10.1, the last version to support Python 2.

Why?

Working with large remote files, for example using Amazon's boto3 Python library, is a pain. boto3's Object.upload_fileobj() and Object.download_fileobj() methods require gotcha-prone boilerplate to use successfully, such as constructing file-like object wrappers. smart_open shields you from that. It builds on boto3 and other remote storage libraries, but offers a clean unified Pythonic API. The result is less code for you to write and fewer bugs to make.

How?

smart_open is well-tested, well-documented, and has a simple Pythonic API:

>>> from smart_open import open
>>>
>>> # stream lines from an S3 object
>>> for line in open('s3://commoncrawl/robots.txt'):
...    print(repr(line))
...    break
'User-Agent: *\n'

>>> # stream from/to compressed files, with transparent (de)compression:
>>> for line in open('smart_open/tests/test_data/1984.txt.gz', encoding='utf-8'):
...    print(repr(line))
'It was a bright cold day in April, and the clocks were striking thirteen.\n'
'Winston Smith, his chin nuzzled into his breast in an effort to escape the vile\n'
'wind, slipped quickly through the glass doors of Victory Mansions, though not\n'
'quickly enough to prevent a swirl of gritty dust from entering along with him.\n'

>>> # can use context managers too:
>>> with open('smart_open/tests/test_data/1984.txt.gz') as fin:
...    with open('smart_open/tests/test_data/1984.txt.bz2', 'w') as fout:
...        for line in fin:
...           fout.write(line)
74
80
78
79

>>> # can use any IOBase operations, like seek
>>> with open('s3://commoncrawl/robots.txt', 'rb') as fin:
...     for line in fin:
...         print(repr(line.decode('utf-8')))
...         break
...     offset = fin.seek(0)  # seek to the beginning
...     print(fin.read(4))
'User-Agent: *\n'
b'User'

>>> # stream from HTTP
>>> for line in open('http://example.com/index.html'):
...     print(repr(line))
...     break
'<!doctype html>\n'

Other examples of URLs that smart_open accepts:

s3://my_bucket/my_key
s3://my_key:my_secret@my_bucket/my_key
s3://my_key:my_secret@my_server:my_port@my_bucket/my_key
gs://my_bucket/my_blob
azure://my_bucket/my_blob
hdfs:///path/file
hdfs://path/file
webhdfs://host:port/path/file
./local/path/file
~/local/path/file
local/path/file
./local/path/file.gz
file:///home/user/file
file:///home/user/file.bz2
[ssh|scp|sftp]://username@host//path/file
[ssh|scp|sftp]://username@host/path/file
[ssh|scp|sftp]://username:password@host/path/file

Documentation

Installation

smart_open supports a wide range of storage solutions, including AWS S3, Google Cloud and Azure. Each individual solution has its own dependencies. By default, smart_open does not install any dependencies, in order to keep the installation size small. You can install these dependencies explicitly using:

pip install smart_open[azure] # Install Azure deps
pip install smart_open[gcs] # Install GCS deps
pip install smart_open[s3] # Install S3 deps

Or, if you don't mind installing a large number of third party libraries, you can install all dependencies using:

pip install smart_open[all]

Be warned that this option increases the installation size significantly, e.g. over 100MB.

If you're upgrading from smart_open versions 2.x and below, please check out the Migration Guide.

Built-in help

For detailed API info, see the online help:

help('smart_open')

or click here to view the help in your browser.

More examples

For the sake of simplicity, the examples below assume you have all the dependencies installed, i.e. you have done:

pip install smart_open[all]

>>> import os, boto3
>>>
>>> # stream content *into* S3 (write mode) using a custom session
>>> session = boto3.Session(
...     aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
...     aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
... )
>>> url = 's3://smart-open-py37-benchmark-results/test.txt'
>>> with open(url, 'wb', transport_params={'client': session.client('s3')}) as fout:
...     bytes_written = fout.write(b'hello world!')
...     print(bytes_written)
12

# stream from HDFS
for line in open('hdfs://user/hadoop/my_file.txt', encoding='utf8'):
    print(line)

# stream from WebHDFS
for line in open('webhdfs://host:port/user/hadoop/my_file.txt'):
    print(line)

# stream content *into* HDFS (write mode):
with open('hdfs://host:port/user/hadoop/my_file.txt', 'wb') as fout:
    fout.write(b'hello world')

# stream content *into* WebHDFS (write mode):
with open('webhdfs://host:port/user/hadoop/my_file.txt', 'wb') as fout:
    fout.write(b'hello world')

# stream from a completely custom s3 server, like s3proxy:
for line in open('s3u://user:secret@host:port@mybucket/mykey.txt'):
    print(line)

# Stream to Digital Ocean Spaces bucket providing credentials from boto3 profile
session = boto3.Session(profile_name='digitalocean')
client = session.client('s3', endpoint_url='https://ams3.digitaloceanspaces.com')
transport_params = {'client': client}
with open('s3://bucket/key.txt', 'wb', transport_params=transport_params) as fout:
    fout.write(b'here we stand')

# stream from GCS
for line in open('gs://my_bucket/my_file.txt'):
    print(line)

# stream content *into* GCS (write mode):
with open('gs://my_bucket/my_file.txt', 'wb') as fout:
    fout.write(b'hello world')

# stream from Azure Blob Storage
connect_str = os.environ['AZURE_STORAGE_CONNECTION_STRING']
transport_params = {
    'client': azure.storage.blob.BlobServiceClient.from_connection_string(connect_str),
}
for line in open('azure://mycontainer/myfile.txt', transport_params=transport_params):
    print(line)

# stream content *into* Azure Blob Storage (write mode):
connect_str = os.environ['AZURE_STORAGE_CONNECTION_STRING']
transport_params = {
    'client': azure.storage.blob.BlobServiceClient.from_connection_string(connect_str),
}
with open('azure://mycontainer/my_file.txt', 'wb', transport_params=transport_params) as fout:
    fout.write(b'hello world')

Compression Handling

The top-level compression parameter controls compression/decompression behavior when reading and writing. The supported values for this parameter are:

  • infer_from_extension (default behavior)
  • disable
  • .gz
  • .bz2

By default, smart_open determines the compression algorithm to use based on the file extension.

>>> from smart_open import open, register_compressor
>>> with open('smart_open/tests/test_data/1984.txt.gz') as fin:
...     print(fin.read(32))
It was a bright cold day in Apri

You can override this behavior to either disable compression, or explicitly specify the algorithm to use. To disable compression:

>>> from smart_open import open, register_compressor
>>> with open('smart_open/tests/test_data/1984.txt.gz', 'rb', compression='disable') as fin:
...     print(fin.read(32))
b'\x1f\x8b\x08\x08\x85F\x94\\\x00\x031984.txt\x005\x8f=r\xc3@\x08\x85{\x9d\xe2\x1d@'

To specify the algorithm explicitly (e.g. for non-standard file extensions):

>>> from smart_open import open, register_compressor
>>> with open('smart_open/tests/test_data/1984.txt.gzip', compression='.gz') as fin:
...     print(fin.read(32))
It was a bright cold day in Apri

You can also easily add support for other file extensions and compression formats. For example, to open xz-compressed files:

>>> import lzma, os
>>> from smart_open import open, register_compressor

>>> def _handle_xz(file_obj, mode):
...      return lzma.LZMAFile(filename=file_obj, mode=mode, format=lzma.FORMAT_XZ)

>>> register_compressor('.xz', _handle_xz)

>>> with open('smart_open/tests/test_data/1984.txt.xz') as fin:
...     print(fin.read(32))
It was a bright cold day in Apri

lzma is in the standard library in Python 3.3 and greater. For 2.7, use backports.lzma.

Transport-specific Options

smart_open supports a wide range of transport options out of the box, including:

  • S3
  • HTTP, HTTPS (read-only)
  • SSH, SCP and SFTP
  • WebHDFS
  • GCS
  • Azure Blob Storage

Each option involves setting up its own set of parameters. For example, for accessing S3, you often need to set up authentication, like API keys or a profile name. smart_open's open function accepts a keyword argument transport_params which accepts additional parameters for the transport layer. Here are some examples of using this parameter:

>>> import boto3
>>> fin = open('s3://commoncrawl/robots.txt', transport_params=dict(client=boto3.client('s3')))
>>> fin = open('s3://commoncrawl/robots.txt', transport_params=dict(buffer_size=1024))

For the full list of keyword arguments supported by each transport option, see the documentation:

help('smart_open.open')

S3 Credentials

smart_open uses the boto3 library to talk to S3. boto3 has several mechanisms for determining the credentials to use. By default, smart_open will defer to boto3 and let the latter take care of the credentials. There are several ways to override this behavior.

The first is to pass a boto3.Client object as a transport parameter to the open function. You can customize the credentials when constructing the session for the client. smart_open will then use the session when talking to S3.

session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN,
)
client = session.client('s3', endpoint_url=..., config=...)
fin = open('s3://bucket/key', transport_params=dict(client=client))

Your second option is to specify the credentials within the S3 URL itself:

fin = open('s3://aws_access_key_id:aws_secret_access_key@bucket/key', ...)

Important: The two methods above are mutually exclusive. If you pass an AWS client and the URL contains credentials, smart_open will ignore the latter.

Important: smart_open ignores configuration files from the older boto library. Port your old boto settings to boto3 in order to use them with smart_open.

Iterating Over an S3 Bucket's Contents

Since going over all (or select) keys in an S3 bucket is a very common operation, there's also an extra function smart_open.s3.iter_bucket() that does this efficiently, processing the bucket keys in parallel (using multiprocessing):

>>> from smart_open import s3
>>> # get data corresponding to 2010 and later under "silo-open-data/annual/monthly_rain"
>>> # we use workers=1 for reproducibility; you should use as many workers as you have cores
>>> bucket = 'silo-open-data'
>>> prefix = 'annual/monthly_rain/'
>>> for key, content in s3.iter_bucket(bucket, prefix=prefix, accept_key=lambda key: '/201' in key, workers=1, key_limit=3):
...     print(key, round(len(content) / 2**20))
annual/monthly_rain/2010.monthly_rain.nc 13
annual/monthly_rain/2011.monthly_rain.nc 13
annual/monthly_rain/2012.monthly_rain.nc 13

GCS Credentials

smart_open uses the google-cloud-storage library to talk to GCS. google-cloud-storage uses the google-cloud package under the hood to handle authentication. There are several options to provide credentials. By default, smart_open will defer to google-cloud-storage and let it take care of the credentials.

To override this behavior, pass a google.cloud.storage.Client object as a transport parameter to the open function. You can customize the credentials when constructing the client. smart_open will then use the client when talking to GCS. To follow along with the example below, refer to Google's guide to setting up GCS authentication with a service account.

import os

from google.cloud.storage import Client
from smart_open import open

# Build a GCS client from a service-account JSON key file.
service_account_path = os.environ['GOOGLE_APPLICATION_CREDENTIALS']
client = Client.from_service_account_json(service_account_path)
fin = open('gs://gcp-public-data-landsat/index.csv.gz', transport_params=dict(client=client))

If you need more credential options, you can create an explicit google.auth.credentials.Credentials object and pass it to the Client. To create an API token for use in the example below, refer to the GCS authentication guide.

import os

from google.cloud.storage import Client
from google.oauth2.credentials import Credentials
from smart_open import open

# Build a GCS client from an explicit OAuth2 access token.
token = os.environ['GOOGLE_API_TOKEN']
credentials = Credentials(token=token)
client = Client(credentials=credentials)
fin = open('gs://gcp-public-data-landsat/index.csv.gz', transport_params=dict(client=client))

Azure Credentials

smart_open uses the azure-storage-blob library to talk to Azure Blob Storage. By default, smart_open will defer to azure-storage-blob and let it take care of the credentials.

Azure Blob Storage has no mechanism for inferring credentials, so passing an azure.storage.blob.BlobServiceClient object as a transport parameter to the open function is required. You can customize the credentials when constructing the client, and smart_open will then use that client when talking to Azure Blob Storage. To follow along with the example below, refer to Azure's guide to setting up authentication.

import os

from azure.storage.blob import BlobServiceClient
from smart_open import open

# Build a Blob Storage client from a connection string.
azure_storage_connection_string = os.environ['AZURE_STORAGE_CONNECTION_STRING']
client = BlobServiceClient.from_connection_string(azure_storage_connection_string)
fin = open('azure://my_container/my_blob.txt', transport_params=dict(client=client))

If you need more credential options, refer to the Azure Storage authentication guide.
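
One common alternative, for instance, is token-based authentication via the azure-identity package. This is a minimal sketch and is not taken from the smart_open documentation; the account URL, container, and blob names are placeholders:

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
from smart_open import open

# DefaultAzureCredential tries environment variables, managed identity, the Azure CLI, etc.
credential = DefaultAzureCredential()
client = BlobServiceClient(
    account_url='https://myaccount.blob.core.windows.net',
    credential=credential,
)
fin = open('azure://my_container/my_blob.txt', transport_params=dict(client=client))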

File-like Binary Streams

The open function also accepts file-like objects. This is useful when you already have a binary file open, and would like to wrap it with transparent decompression:

>>> import io, gzip
>>>
>>> # Prepare some gzipped binary data in memory, as an example.
>>> # Any binary file will do; we're using BytesIO here for simplicity.
>>> buf = io.BytesIO()
>>> with gzip.GzipFile(fileobj=buf, mode='w') as fout:
...     _ = fout.write(b'this is a bytestring')
>>> _ = buf.seek(0)
>>>
>>> # Use case starts here.
>>> buf.name = 'file.gz'  # add a .name attribute so smart_open knows what compressor to use
>>> import smart_open
>>> smart_open.open(buf, 'rb').read()  # will gzip-decompress transparently!
b'this is a bytestring'

In this case, smart_open relied on the .name attribute of our binary I/O stream buf object to determine which decompressor to use. If your file object doesn't have one, set the .name attribute to an appropriate value. Furthermore, that value has to end with a known file extension (see the register_compressor function). Otherwise, the transparent decompression will not occur.
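
If your compression format is not supported out of the box, you can register a handler for its extension with register_compressor. The sketch below assumes the .xz extension and wraps the stream with the standard-library lzma module:

import lzma

import smart_open

def _handle_xz(file_obj, mode):
    # Wrap the raw binary stream in an LZMA (de)compressor.
    return lzma.LZMAFile(filename=file_obj, mode=mode, format=lzma.FORMAT_XZ)

# Streams whose .name ends in '.xz' will now be (de)compressed transparently.
smart_open.register_compressor('.xz', _handle_xz)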

Drop-in replacement of pathlib.Path.open

smart_open.open can also be used with Path objects. The built-in Path.open() cannot read text from compressed files, so use patch_pathlib to replace it with smart_open.open() when working with compressed files.

>>> from pathlib import Path
>>> from smart_open.smart_open_lib import patch_pathlib
>>>
>>> _ = patch_pathlib()  # replace `Path.open` with `smart_open.open`
>>>
>>> path = Path("smart_open/tests/test_data/crime-and-punishment.txt.gz")
>>>
>>> with path.open("r") as infile:
...     print(infile.readline()[:41])
В начале июля, в чрезвычайно жаркое время

How do I ...?

See this document.

Extending smart_open

See this document.

Testing smart_open

smart_open comes with a comprehensive suite of unit tests. Before you can run the test suite, install the test dependencies:

pip install -e .[test]

Now, you can run the unit tests:

pytest smart_open

The tests also run automatically with Travis CI on every push and pull request.

Comments, bug reports

smart_open lives on GitHub. You can file issues or pull requests there. Suggestions, pull requests, and improvements are welcome!


smart_open is open source software released under the MIT license. Copyright (c) 2015-now Radim Řehůřek.

Author: RaRe-Technologies
Source Code: https://github.com/RaRe-Technologies/smart_open 
License: MIT License

#python #streaming 

Smart-open: Utils for streaming large files (S3, HDFS, gzip, bz2...)

7 Scripts To Develop Live Video Streaming Apps - Streambiz

Seven scripts that will help you build a live video streaming application. A live video streaming app can serve live events, or let individuals stream for fun and earn money online.

StreamBiz is aimed at anyone looking for a live stream video app builder, whether you need a video hosting app for your company, educational institute, or events. StreamBiz is an app presented by BSETEC to cover any kind of video streaming business plan online. Perhaps you are a marketer who wants to present your brand more vividly on social media, or you want to launch a personal video app for your own business idea; either way, you can start without worry. Take the free live streaming script app called StreamBiz, available on Google Play or the Apple App Store, and get the latest features and technology support for free.

Here Are the 7 Best Scripts for Live Video Streaming Apps

1. StreamBiz Live Video Script

To build a Bigo Live clone or a Periscope clone, all you need is the StreamBiz free video app script, which lets a creator build a unique video app for a particular audience or for anyone to present online. The script is available on Google Play or the Apple App Store without any technical setup or sign-up hurdles. It can be used in media, technology, education, sports, corporate, or government settings. The app is provided by BSETEC, a leading technology company.

2. Zoom Clone Script

Live video streaming apps have become more important than ever, and Zoom, like the StreamBiz suite for video app development, is suitable for almost everyone. The Zoom video app is mostly used for live conferences, and a Zoom clone script is now available on our website. It offers features you can apply while creating a video streaming app for your company or business: a user can start a video session and share it with others, and anyone can have a personal chat room, group chat, and meeting recording, setting a high bar for other video app clone scripts.

3. Periscope Clone Script Live Streaming App

Periscope is a live video broadcasting platform both for a targeted audience that is already logged in and for anyone who wants to join and watch hosted videos. It also allows streaming a live moment that can be shared on any social media app. Periscope works well for speakers, educators, and business professionals who need to reach a large audience at once. If you want to clone this app, we offer a free live streaming script for video apps.


4. Live TV Stream Script

This well-known script empowers users to create a live TV streaming app that is compatible with desktop and mobile. Viewers can easily watch the stream and share it with others on social media, and the owner gets a full control panel to edit, filter, and do much more. The script can be cloned easily, and it includes transaction options for viewers, which helps businesses earn money through paid access to streams of various events.

5. Castasy Video Streaming App 

Castasy is a well-known app that can be used for multiple purposes, and cloning this type of template is easy and convenient. Anyone can use Castasy to bring a new set of video streaming apps to market with features that are unique and different for everyone. To clone this app, contact the experts at BSETEC.

6. Gentle Ninja Meerkat Turnkey App Script

This app script is generating real buzz, and it can be cloned with our free open-source code online. Its features include social media login, live re-streaming, scheduling, follow-ups, a full admin management panel, and much more. You can start with our free live streaming script and build an exclusive video app in less time.


7. Bigo Live App 

The Bigo Live app can be cloned easily with the help of StreamBiz, a smart option that lets the end user log in, like, follow, and share, play interesting games, and earn money online. This sets it apart from applications like Zoom or Periscope. Contact our experts for more details.


Conclusion:

Ask us anything about live video streaming app cloning: free advice, full technical support, and a free download at https://www.bsetec.com/periscope-clone/

 

#bsetec #bigolive #clonescripts #livestreamingscript #live video #streaming  #apps #live video streaming app #create a live streaming video #live streaming php script #periscopeclone #streambiz 
 

7 Scripts To Develop Live Video Streaming Apps - Streambiz