A `JSONDecodeError: Expecting value` when running Python code means you are trying to decode an invalid JSON string. This error can happen in three different cases:

Case 1: Decoding invalid JSON content
Case 2: Loading an empty or invalid .json file
Case 3: A request you made didn't return valid JSON

The following article shows how to resolve this error in each case.
The Python `json` library requires you to pass valid JSON content when calling the `load()` or `loads()` function. Suppose you pass a string to the `loads()` function as follows:

import json

data = '{"name": Nathan}'
res = json.loads(data)

Because the `loads()` function expects a valid JSON string, the code above raises this error:
Traceback (most recent call last):
File ...
res = json.loads(data)
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
To resolve this error, you need to ensure that the JSON string you pass to the `loads()` function is valid. You can use a try-except block to check whether your data is a valid JSON string like this:
import json

data = '{"name": Nathan}'
try:
    res = json.loads(data)
    print("data is a valid JSON string")
except json.decoder.JSONDecodeError:
    print("data is not a valid JSON string")
By using a try-except block, you will be able to catch when the JSON string is invalid. You still need to find out why an invalid JSON string is passed to the `loads()` function, though. Most likely, you have a typo somewhere in your JSON string, as in the case above.
Note that the value `Nathan` is not enclosed in double quotes:
data = '{"name": Nathan}' # ❌ wrong
data = '{"name": "Nathan"}' # ✅ correct
If you see this in your code, you need to fix the data so it conforms to the JSON standard.
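One way to sidestep quoting mistakes entirely is to build the JSON string from a Python object with `json.dumps()`, which always emits valid JSON:

```python
import json

# Build the JSON string from a Python dict instead of writing it by hand;
# json.dumps() always emits standard-conformant JSON.
payload = {"name": "Nathan"}
data = json.dumps(payload)
print(data)  # {"name": "Nathan"}

# Round-trip: loads() accepts anything dumps() produced.
res = json.loads(data)
print(res["name"])  # Nathan
```

This is especially useful when the JSON is assembled from variables at runtime, where manual quoting is error-prone.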
Another case where this error may happen is when you load an empty `.json` file. Suppose you try to load a file named `data.json` with the following code:
import json

with open("data.json", "r") as file:
    data = json.loads(file.read())
If the `data.json` file is empty, Python will respond with an error:
Traceback (most recent call last):
File ...
data = json.loads(file.read())
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The same error also occurs if the file contains invalid JSON content, such as:

JSON string with invalid format

To avoid this error, you need to make sure that the `.json` file you load is not empty and contains valid JSON.
You can use a try-except block in this case to catch the error:
import json

try:
    with open("data.json", "r") as file:
        data = json.loads(file.read())
    print("file has valid JSON content")
except json.decoder.JSONDecodeError:
    print("file is empty or contains invalid JSON")
If you want to validate the source file, you can use jsonlint.com.
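Besides catching the exception, you can check that the file is non-empty before parsing. A minimal sketch, reusing the `data.json` name from above (the snippet writes an example file first so it is self-contained):

```python
import json
import os

path = "data.json"

# Write an example file so this snippet can run on its own.
with open(path, "w") as f:
    f.write('{"name": "Nathan"}')

# An empty file can never contain valid JSON, so check the size first.
if os.path.getsize(path) == 0:
    print(f"{path} is empty")
else:
    with open(path) as f:
        data = json.load(f)  # json.load() parses the file object directly
    print(data)  # {'name': 'Nathan'}
```

Note that the size check only rules out the empty-file case; you still need the try-except to catch malformed content.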
When you send an HTTP request using the `requests` library, you may use the `.json()` method on the response object to extract the JSON content:
import requests
response = requests.get('https://api.github.com')
data = response.json()
print(data)
But if the response object doesn't contain a valid JSON encoding, then a `JSONDecodeError` will be raised:
Traceback (most recent call last):
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ...
data = response.json()
requests.exceptions.JSONDecodeError: Expecting value: line 7 column 1 (char 6)
As you can see, the exception raised by `requests` also carries the `JSONDecodeError: Expecting value` message.
To resolve this error, surround the call to `response.json()` with a try-except block as follows:
import requests

response = requests.get('https://api.github.com')
try:
    data = response.json()
    print(data)
except requests.exceptions.JSONDecodeError:
    print("Error from server: " + str(response.content))
When the except block is triggered, the response content is printed as a string. You need to inspect the print output for more insight into why the response is not valid JSON.
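The same defensive pattern can be factored into a small helper that returns `None` instead of raising when a payload isn't JSON. This sketch uses only the standard library, so it works on any text you received, regardless of the HTTP client:

```python
import json
from typing import Any, Optional

def parse_json_or_none(text: str) -> Optional[Any]:
    """Return the decoded object, or None when text is not valid JSON."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

print(parse_json_or_none('{"ok": true}'))             # {'ok': True}
print(parse_json_or_none("<html>error page</html>"))  # None
```

Returning `None` (rather than crashing) lets the caller decide how to handle non-JSON responses, such as HTML error pages from a server.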
In this article, we have seen how to fix the `JSONDecodeError: Expecting value` error when using Python.
This error can happen in three different cases: when you decode invalid JSON content, load an empty or invalid .json file, and make an HTTP request that doesn’t return a valid JSON.
By following the steps in this article, you will be able to debug and fix this error when it occurs.
Until next time, happy coding! 🙌
Original article source at: https://sebhastian.com/
Himotoki (紐解き) is a type-safe JSON decoding library written purely in Swift. This library is highly inspired by the popular Swift JSON parsing libraries Argo and ObjectMapper. Himotoki means 'decoding' in Japanese.
Himotoki supports decoding into `struct`s with non-optional `let` properties. Let's take a look at a simple example:
struct Group: Himotoki.Decodable {
    let name: String
    let floor: Int
    let locationName: String
    let optional: [String]?

    // MARK: Himotoki.Decodable

    static func decode(_ e: Extractor) throws -> Group {
        return try Group(
            name: e <| "name",
            floor: e <| "floor",
            locationName: e <| [ "location", "name" ], // Parse nested objects
            optional: e <|? "optional" // Parse optional arrays of values
        )
    }
}
func testGroup() {
    var JSON: [String: AnyObject] = [ "name": "Himotoki", "floor": 12 ]

    let g = try? Group.decodeValue(JSON)
    XCTAssert(g != nil)
    XCTAssert(g?.name == "Himotoki")
    XCTAssert(g?.floor == 12)
    XCTAssert(g?.optional == nil)

    JSON["name"] = nil
    do {
        try Group.decodeValue(JSON)
    } catch let DecodeError.MissingKeyPath(keyPath) {
        XCTAssert(keyPath == "name")
    } catch {
        XCTFail()
    }
}
:warning: Please note that you need to add the module name `Himotoki` to `Decodable` (`Himotoki.Decodable`) to avoid a type name collision with `Foundation.Decodable` in Xcode 9 or later. :warning:
Implementing the `decode` method for your models: to implement the `decode` method for your models conforming to the `Decodable` protocol, you can use the following extraction methods on `Extractor`:
public func value<T: Decodable>(_ keyPath: KeyPath) throws -> T
public func valueOptional<T: Decodable>(_ keyPath: KeyPath) throws -> T?
public func array<T: Decodable>(_ keyPath: KeyPath) throws -> [T]
public func arrayOptional<T: Decodable>(_ keyPath: KeyPath) throws -> [T]?
public func dictionary<T: Decodable>(_ keyPath: KeyPath) throws -> [String: T]
public func dictionaryOptional<T: Decodable>(_ keyPath: KeyPath) throws -> [String: T]?
Himotoki also supports the following operators to decode JSON elements, where `T` is a generic type conforming to the `Decodable` protocol.
Operator | Decode element as | Remarks |
---|---|---|
<| | T | A value |
<|? | T? | An optional value |
<|| | [T] | An array of values |
<||? | [T]? | An optional array of values |
<|-| | [String: T] | A dictionary of values |
<|-|? | [String: T]? | An optional dictionary of values |
You can transform an extracted value into an instance of a non-`Decodable` type by passing the value to a `Transformer` instance as follows:
// Creates a `Transformer` instance.
let URLTransformer = Transformer<String, URL> { urlString throws -> URL in
    if let url = URL(string: urlString) {
        return url
    }

    throw customError("Invalid URL string: \(urlString)")
}
let url: URL = try URLTransformer.apply(e <| "foo_url")
let otherURLs: [URL] = try URLTransformer.apply(e <| "bar_urls")
Himotoki 4.x requires / supports the following environments:
Currently Himotoki supports installation via the package managers Carthage and CocoaPods.

Himotoki is Carthage compatible. Add `github "ikesyo/Himotoki" ~> 3.1` to your Cartfile, then run `carthage update`.

Himotoki can also be used with CocoaPods. Add the following to your Podfile:

use_frameworks!
pod "Himotoki", "~> 3.1"

Then run `pod install`.
Author: ikesyo
Source Code: https://github.com/ikesyo/Himotoki
License: MIT license
`base64-js` does basic base64 encoding/decoding in pure JS.
Many browsers already have base64 encoding/decoding functionality, but it is for text data, not all-purpose binary data.
Sometimes encoding/decoding binary data in the browser is useful, and that is what this module does.
With npm do:
npm install base64-js
and var base64js = require('base64-js')
For use in web browsers do:
<script src="base64js.min.js"></script>
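For a rough sense of the conversions this module performs, Node's built-in `Buffer` does the same byte-array/base64 round trip (base64-js exists to provide it in browsers, which lack `Buffer`); a comparison sketch:

```javascript
// The same conversions base64-js exposes as fromByteArray/toByteArray,
// done here with Node's built-in Buffer for comparison.
const bytes = Uint8Array.from([104, 101, 108, 108, 111]); // "hello"

// byte array -> base64 string (fromByteArray equivalent)
const b64 = Buffer.from(bytes).toString('base64');
console.log(b64); // aGVsbG8=

// base64 string -> byte array (toByteArray equivalent)
const back = new Uint8Array(Buffer.from(b64, 'base64'));
console.log(back.join(',')); // 104,101,108,108,111
```

In browser code you would call `base64js.fromByteArray(bytes)` and `base64js.toByteArray(b64)` instead.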
`base64js` has three exposed functions, `byteLength`, `toByteArray`, and `fromByteArray`, each of which takes a single argument.

`byteLength` - Takes a base64 string and returns the length of the byte array
`toByteArray` - Takes a base64 string and returns a byte array
`fromByteArray` - Takes a byte array and returns a base64 string

Author: Beatgammit
Source Code: https://github.com/beatgammit/base64-js
License: MIT license
A Simple to use javascript .GIF decoder.
We needed to be able to efficiently load and manipulate GIF files for the Ruffle hybrid app (for mobiles). There are a couple of example libraries out there like jsgif & its derivative libgif-js, however these are admittedly inefficient, and a mess. After pulling our hair out trying to understand the ancient, mystic gif format (hence the project name), we decided to just roll our own. This library also removes any specific drawing code, and simply parses and decompresses gif files so that you can manipulate and display them however you like. We do include `imageData` patch construction though to get you most of the way there.
You can see a demo of this library in action here
Installation:
npm install gifuct-js
Decoding:
This decoder uses js-binary-schema-parser to parse the gif files (you can examine the schema in the source). This means the gif file must first be converted into a `Uint8Array` buffer in order to decode it. Some examples:
Using `fetch`:
import { parseGIF, decompressFrames } from 'gifuct-js'
var promisedGif = fetch(gifURL)
  .then(resp => resp.arrayBuffer())
  .then(buff => {
    var gif = parseGIF(buff)
    var frames = decompressFrames(gif, true)
    return gif;
  });
Using `XMLHttpRequest`:
import { parseGIF, decompressFrames } from 'gifuct-js'
var oReq = new XMLHttpRequest();
oReq.open("GET", gifURL, true);
oReq.responseType = "arraybuffer";

oReq.onload = function (oEvent) {
  var arrayBuffer = oReq.response; // Note: not oReq.responseText
  if (arrayBuffer) {
    var gif = parseGIF(arrayBuffer);
    var frames = decompressFrames(gif, true);
    // do something with the frame data
  }
};

oReq.send(null);
Result:
The `decompressFrames(gif, buildPatch)` function returns an array of all the GIF image frames and their metadata. Here is an example frame:
{
  // The color table lookup index for each pixel
  pixels: [...],
  // the dimensions of the gif frame (see disposal method)
  dims: {
    top: 0,
    left: 10,
    width: 100,
    height: 50
  },
  // the time in milliseconds that this frame should be shown
  delay: 50,
  // the disposal method (see below)
  disposalType: 1,
  // an array of colors that the pixel data points to
  colorTable: [...],
  // An optional color index that represents transparency (see below)
  transparentIndex: 33,
  // Uint8ClampedArray color converted patch information for drawing
  patch: [...]
}
Automatic Patch Generation:
If the `buildPatch` param of the `decompressFrames()` function is `true`, the parser will not only return the parsed and decompressed gif frames, but will also create canvas-ready `Uint8ClampedArray` arrays for each gif frame image, so that they can easily be drawn using `ctx.putImageData()`, for example. This requirement is common, however it was made optional because it makes assumptions about transparency. The demo makes use of this option.
Disposal Method:
The `pixels` data is stored as a list of indexes for each pixel. These each point to a value in the `colorTable` array, which contains the color that each pixel should be drawn in. Each frame of the gif may not be the full size, but instead a patch that needs to be drawn over a particular location. The `disposalType` defines how that patch should be drawn over the gif canvas. In most cases, that value will be `1`, indicating that the gif frame should simply be drawn over the existing gif canvas without altering any pixels outside the frame's patch dimensions. More can be read about this here.
Transparency:
If a `transparentIndex` is defined for a frame, it means that any pixel within the pixel data that matches this index should not be drawn. When drawing the patch using canvas, this means setting the alpha value for this pixel to `0`.
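The patch construction described above can be sketched in a few lines; this is an illustration of the idea, not the library's internal code. Each entry in `pixels` is mapped through `colorTable`, and pixels matching `transparentIndex` get alpha 0 so they are not drawn:

```javascript
// Sketch: build canvas-ready RGBA data from a decompressed frame object
// shaped like the example frame above (pixels, colorTable, transparentIndex, dims).
function buildPatch(frame) {
  const { pixels, colorTable, transparentIndex, dims } = frame;
  const patch = new Uint8ClampedArray(dims.width * dims.height * 4);
  pixels.forEach((colorIndex, i) => {
    const [r, g, b] = colorTable[colorIndex];
    patch[i * 4] = r;
    patch[i * 4 + 1] = g;
    patch[i * 4 + 2] = b;
    // transparent pixels get alpha 0 so canvas skips them when drawing
    patch[i * 4 + 3] = colorIndex === transparentIndex ? 0 : 255;
  });
  return patch;
}

// Tiny 2x1 frame: one red pixel, one transparent pixel.
const patch = buildPatch({
  pixels: [0, 1],
  colorTable: [[255, 0, 0], [0, 0, 0]],
  transparentIndex: 1,
  dims: { top: 0, left: 0, width: 2, height: 1 },
});
console.log(Array.from(patch)); // [255, 0, 0, 255, 0, 0, 0, 0]
```

The resulting array has the layout `ctx.putImageData()` expects, which is exactly what the library's `patch` field provides for you.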
Check out the demo for an example of how to draw/manipulate a gif using this library. We wanted the library to be drawing agnostic to allow users to do what they wish with the raw gif data, rather than impose a method that has to be altered. On this note however, we provide an easy interface for creating commonly used canvas pixel data for drawing ease.
We underestimated the convolutedness of the GIF format, so this library couldn't have been made without the help of:
Author: Matt-way
Source Code: https://github.com/matt-way/gifuct-js
License: MIT license
A small, fast and advanced PNG / APNG encoder and decoder. It is the main PNG engine for Photopea image editor.
Download and include the `UPNG.js` file in your code.
UPNG.js supports APNG and the interface expects "frames". Regular PNG is just a single-frame animation (single-item array).
`UPNG.encode(imgs, w, h, cnum, [dels])`

`imgs`: array of frames. A frame is an ArrayBuffer containing the pixel data (RGBA, 8 bits per channel)
`w`, `h`: width and height of the image
`cnum`: number of colors in the result; 0: all colors (lossless PNG)
`dels`: array of millisecond delays for each frame (only when 2 or more frames)

UPNG.js can do a lossy minification of PNG files, similar to TinyPNG and other tools. It performed quantization with the k-means algorithm in the past, but now we use k-d trees.

Lossy compression is enabled by the last parameter, `cnum`. Set it to zero for lossless compression, or set it to the number of allowed colors in the image. Smaller values produce smaller files. Or just use 0 for lossless / 256 for lossy.
// Read RGBA from canvas and encode with UPNG
var dta = ctx.getImageData(0,0,200,300).data; // ctx is Context2D of a Canvas
// dta = new Uint8Array(200 * 300 * 4); // or generate pixels manually
var png = UPNG.encode([dta.buffer], 200, 300, 0);
console.log(new Uint8Array(png));
`UPNG.encodeLL(imgs, w, h, cc, ac, depth, [dels])` - low-level encode

`imgs`: array of frames. A frame is an ArrayBuffer containing the pixel data (corresponding to the following parameters)
`w`, `h`: width and height of the image
`cc`, `ac`: number of color channels (1 or 3) and alpha channels (0 or 1)
`depth`: bit depth of pixel data (1, 2, 4, 8, 16)
`dels`: array of millisecond delays for each frame (only when 2 or more frames)

This function does not do any optimizations, it just stores what you give it. There are two cases when it is useful:
Supports all color types (including Grayscale and Palettes), all channel depths (1, 2, 4, 8, 16), interlaced images etc. Opens PNGs which other libraries can not open (tested with PngSuite).
`UPNG.decode(buffer)`

`buffer`: ArrayBuffer containing the PNG file

The returned image object has the following properties:

`width`: the width of the image
`height`: the height of the image
`depth`: number of bits per channel
`ctype`: color type of the file (Truecolor, Grayscale, Palette ...)
`frames`: additional info about frames (frame delays etc.)
`tabs`: additional chunks of the PNG file
`data`: pixel data of the image

PNG files may have a various number of channels and a various color depth. The interpretation of `data` depends on the current color type and color depth (see the PNG specification).
`UPNG.toRGBA8(img)`

`img`: PNG image object (returned by `UPNG.decode()`)

var img = UPNG.decode(buff); // put ArrayBuffer of the PNG file into UPNG.decode
var rgba = UPNG.toRGBA8(img)[0]; // UPNG.toRGBA8 returns array of frames, size: width * height * 4 bytes.
PNG format uses the Inflate algorithm. Right now, UPNG.js calls Pako.js for the Inflate and Deflate method.
UPNG.js contains a very good Quantizer of 4-component 8-bit vectors (i.e. pixels). It can be used to generate nice color palettes (e.g. Photopea uses UPNG.js to make palettes for GIF images).
Quantization consists of two important steps: finding a nice palette, and finding the closest color in the palette for each sample (non-trivial for large palettes). UPNG performs both steps.
var res = UPNG.quantize(data, psize);

`data`: ArrayBuffer of samples (byte length is a multiple of four)
`psize`: palette size (how many colors you want to have)

The result object `res` has the following properties:

`abuf`: ArrayBuffer corresponding to `data`, where colors are remapped by a palette
`inds`: Uint8Array: the index of a color for each sample (only when `psize` <= 256)
`plte`: Array: the palette - a list of colors, where `plte[i].est.q` and `plte[i].est.rgba` hold the color value

Author: Photopea
Source Code: https://github.com/photopea/UPNG.js
License: MIT license
A pure javascript JPEG encoder and decoder for node.js
NOTE: this is a synchronous (i.e. CPU-blocking) library that is much slower than native alternatives. If you don't need a pure javascript implementation, consider using async alternatives like sharp in node or the Canvas API in the browser.
This module is installed via npm:
$ npm install jpeg-js
`jpeg.decode()` will decode a buffer or typed array into a `Buffer`:
var jpeg = require('jpeg-js');
var jpegData = fs.readFileSync('grumpycat.jpg');
var rawImageData = jpeg.decode(jpegData);
console.log(rawImageData);
/*
{ width: 320,
height: 180,
data: <Buffer 5b 40 29 ff 59 3e 29 ff 54 3c 26 ff 55 3a 27 ff 5a 3e 2f ff 5c 3c 31 ff 58 35 2d ff 5b 36 2f ff 55 35 32 ff 5a 3a 37 ff 54 36 32 ff 4b 32 2c ff 4b 36 ... > }
*/
To decode directly into a `Uint8Array`, pass `useTArray: true` in the `decode` options:
var jpeg = require('jpeg-js');
var jpegData = fs.readFileSync('grumpycat.jpg');
var rawImageData = jpeg.decode(jpegData, {useTArray: true}); // return as Uint8Array
console.log(rawImageData);
/*
{ width: 320,
height: 180,
data: { '0': 91, '1': 64, ... } } // typed array
*/
Option | Description | Default |
---|---|---|
colorTransform | Transform alternate colorspaces like YCbCr. undefined means respect the default behavior encoded in metadata. | undefined |
useTArray | Decode pixels into a typed Uint8Array instead of a Buffer . | false |
formatAsRGBA | Decode pixels into RGBA vs. RGB. | true |
tolerantDecoding | Be more tolerant when encountering technically invalid JPEGs. | true |
maxResolutionInMP | The maximum resolution image that jpeg-js should attempt to decode in megapixels. Images larger than this resolution will throw an error instead of decoding. | 100 |
maxMemoryUsageInMB | The (approximate) maximum memory that jpeg-js should allocate while attempting to decode the image, in mebibytes (MiB). Images requiring more memory than this will throw an error instead of decoding. | 512 |
var jpeg = require('jpeg-js');
var width = 320,
height = 180;
var frameData = Buffer.alloc(width * height * 4); // new Buffer() is deprecated
var i = 0;
while (i < frameData.length) {
frameData[i++] = 0xff; // red
frameData[i++] = 0x00; // green
frameData[i++] = 0x00; // blue
frameData[i++] = 0xff; // alpha - ignored in JPEGs
}
var rawImageData = {
data: frameData,
width: width,
height: height,
};
var jpegImageData = jpeg.encode(rawImageData, 50);
console.log(jpegImageData);
/*
{ width: 320,
height: 180,
data: <Buffer 5b 40 29 ff 59 3e 29 ff 54 3c 26 ff 55 3a 27 ff 5a 3e 2f ff 5c 3c 31 ff 58 35 2d ff 5b 36 2f ff 55 35 32 ff 5a 3a 37 ff 54 36 32 ff 4b 32 2c ff 4b 36 ... > }
*/
// write to file
fs.writeFileSync('image.jpg', jpegImageData.data);
jpeg-js is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the CONTRIBUTING.md file for more details.
jpeg-js is only possible due to the excellent work of the following contributors:
Adobe | GitHub/adobe |
---|---|
Yury Delendik | GitHub/notmasteryet |
Eugene Ware | GitHub/eugeneware |
Michael Kelly | GitHub/mrkelly |
Peter Liljenberg | GitHub/petli |
XadillaX | GitHub/XadillaX |
strandedcity | GitHub/strandedcity |
wmossman | GitHub/wmossman |
Patrick Hulce | GitHub/patrickhulce |
Ben Wiley | GitHub/benwiley4000 |
Author: jpeg-js
Source Code: https://github.com/jpeg-js/jpeg-js
License: View license
In today's post we will learn about 10 Popular Golang Libraries for Parsers/Encoders/Decoders.
What is a Parser?
A parser is a compiler or interpreter component that breaks data into smaller elements for easy translation into another language. A parser takes input in the form of a sequence of tokens, interactive commands, or program instructions and breaks them up into parts that can be used by other components in programming.
A parser usually checks all data provided to ensure it is sufficient to build a data structure in the form of a parse tree or an abstract syntax tree.
What is an Encoder?
An encoder is a mechanism that can transform a data signal into a message that can be read by some type of control device. In other words, combinational circuits that compress 2^N input lines into N output lines are known as encoders.
What is a Decoder?
Combinational circuits that expand N input lines into 2^N output lines are called decoders.
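To make the definitions concrete, here is an illustrative model (not part of any library in this list) of a 2-to-4 line decoder in Go, where N = 2 inputs select one of 2^N = 4 one-hot outputs:

```go
package main

import "fmt"

// decode2to4 models a 2-to-4 line decoder: the N = 2 input bits select
// exactly one of the 2^N = 4 output lines (a one-hot output).
func decode2to4(a, b bool) [4]bool {
	var out [4]bool
	idx := 0
	if a {
		idx |= 1 // a is the low-order input bit
	}
	if b {
		idx |= 2 // b is the high-order input bit
	}
	out[idx] = true
	return out
}

func main() {
	fmt.Println(decode2to4(true, false)) // [false true false false]
	fmt.Println(decode2to4(true, true))  // [false false false true]
}
```

An encoder is the inverse mapping: given a one-hot input on 2^N lines, it emits the N-bit index of the active line.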
Table of contents:

1. allot - placeholder and wildcard text parsing for CLI tools and bots
2. codetree - parses indented code and returns a tree structure
3. commonregex - a collection of common regular expressions for Go
4. did - DID (Decentralized Identifiers) parser and stringer in Go
5. doi - document object identifier (doi) parser in Go
6. editorconfig-core-go - Editorconfig file parser and manipulator for Go
7. encoding - a generic interface to encoders and decoders
8. go-fasttld - high performance top level domains (TLD) extraction module
9. go-nmea - NMEA parser library for the Go language
10. go-vcard - parse and format vCard
Placeholder and wildcard text parsing for CLI tools and bots.
allot is a small Golang library to match and parse commands with pre-defined strings. For example, use allot to define a list of commands your CLI application or Slackbot supports and check if incoming requests match your commands. The allot library supports placeholders and regular expressions for parameter matching and parsing.
cmd := allot.NewCommand("revert <commits:integer> commits on <project:string> at (stage|prod)")
match, err := cmd.Match("revert 12 commits on example at prod")

if err == nil {
	commits, _ := match.Integer("commits")
	project, _ := match.String("project")
	env, _ := match.Match(2)

	fmt.Printf("Revert \"%d\" on \"%s\" at \"%s\"", commits, project, env)
} else {
	fmt.Println("Request did not match command.")
}
See the hanu Slackbot framework for a use case of allot.
Parses indented code (python, pixy, scarlet, etc.) and returns a tree structure.
go get github.com/aerogo/codetree
tree, err := codetree.New(reader)
defer tree.Close()
parent1
	child1
	child2
	child3
		child3.1
		child3.2
	child4
parent2
	child1
See CodeTree structure.
The root node always starts with `Indent` being `-1`.
A collection of common regular expressions for Go.
This is a collection of often used regular expressions. It provides these as simple functions for getting the matched strings corresponding to specific patterns.
go get github.com/mingrammer/commonregex
import (
cregex "github.com/mingrammer/commonregex"
)
func main() {
text := `John, please get that article on www.linkedin.com to me by 5:00PM on Jan 9th 2012. 4:00 would be ideal, actually. If you have any questions, You can reach me at (519)-236-2723x341 or get in touch with my associate at harold.smith@gmail.com`
dateList := cregex.Date(text)
// ['Jan 9th 2012']
timeList := cregex.Time(text)
// ['5:00PM', '4:00']
linkList := cregex.Links(text)
// ['www.linkedin.com', 'harold.smith@gmail.com']
phoneList := cregex.PhonesWithExts(text)
// ['(519)-236-2723x341']
emailList := cregex.Emails(text)
// ['harold.smith@gmail.com']
}
DID (Decentralized Identifiers) Parser and Stringer in Go.
`did` is a Go package that provides tools to work with Decentralized Identifiers (DIDs).
go get github.com/ockam-network/did
package main
import (
"fmt"
"log"
"github.com/ockam-network/did"
)
func main() {
d, err := did.Parse("did:example:q7ckgxeq1lxmra0r")
if err != nil {
log.Fatal(err)
}
fmt.Printf("%#v", d)
}
The above example parses the input string according to the rules defined in the DID Grammar and prints the following value of DID type.
&did.DID{
Method:"example",
ID:"q7ckgxeq1lxmra0r",
IDStrings:[]string{"q7ckgxeq1lxmra0r"},
Path:"",
PathSegments:[]string(nil),
Query:"",
Fragment:""
}
The input string may also be a DID Reference with a DID Path:
d, err := did.Parse("did:example:q7ckgxeq1lxmra0r/abc/pqr")
which would result in:
&did.DID{
Method:"example",
ID:"q7ckgxeq1lxmra0r",
IDStrings:[]string{"q7ckgxeq1lxmra0r"},
Path:"abc/pqr",
PathSegments:[]string{"abc", "pqr"},
Query:"",
Fragment:""
}
or a DID Reference with a DID Path and a DID Query:
d, err := did.Parse("did:example:q7ckgxeq1lxmra0r/abc/pqr?xyz")
fmt.Println(d.Query)
// Output: xyz
or a DID Reference with a DID Fragment:
d, err := did.Parse("did:example:q7ckgxeq1lxmra0r#keys-1")
fmt.Println(d.Fragment)
// Output: keys-1
This package also implements the Stringer interface for the DID type. It is easy to convert DID type structures into valid DID strings:
d := &did.DID{Method: "example", ID: "q7ckgxeq1lxmra0r"}
fmt.Println(d.String())
// Output: did:example:q7ckgxeq1lxmra0r
or with a reference with a fragment:
d := &did.DID{Method: "example", ID: "q7ckgxeq1lxmra0r", Fragment: "keys-1"}
fmt.Println(d.String())
// Output: did:example:q7ckgxeq1lxmra0r#keys-1
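The Stringer side of this can be sketched with only the standard library; the struct below is a simplified stand-in for the package's `did.DID` type (field names taken from the examples above; this is an illustration, not the library's code):

```go
package main

import "fmt"

// DID is a simplified stand-in for did.DID, showing how implementing
// fmt.Stringer turns parsed fields back into a DID string.
type DID struct {
	Method, ID, Fragment string
}

// String implements fmt.Stringer, so fmt.Println(d) prints the DID form.
func (d DID) String() string {
	s := "did:" + d.Method + ":" + d.ID
	if d.Fragment != "" {
		s += "#" + d.Fragment
	}
	return s
}

func main() {
	d := DID{Method: "example", ID: "q7ckgxeq1lxmra0r", Fragment: "keys-1"}
	fmt.Println(d) // did:example:q7ckgxeq1lxmra0r#keys-1
}
```

Because `fmt` checks for the `Stringer` interface, any value with a `String() string` method prints in its canonical form automatically.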
For more documentation and examples, please see godoc.
Document object identifier (doi) parser in Go.
For dealing with DOIs in Go:
d, err := doi.Parse("11.1038/123456")
if err == nil {
	println(d.ToString())
}
if d.IsValid() {
println("We are happy!")
}
Editorconfig file parser and manipulator for Go.
We recommend the use of Go 1.17+ modules for this package. Lower versions, such as 1.13, should be fine.
Import using the same path. The package name you will use to access it is `editorconfig`.
import "github.com/editorconfig/editorconfig-core-go/v2"
fp, err := os.Open("path/to/.editorconfig")
if err != nil {
log.Fatal(err)
}
defer fp.Close()
editorConfig, err := editorconfig.Parse(fp)
if err != nil {
log.Fatal(err)
}
data := []byte("...")
editorConfig, err := editorconfig.ParseBytes(data)
if err != nil {
log.Fatal(err)
}
This method builds a definition for a given filename. The definition is a merge of the properties whose selectors matched the given filename. Later sections of the file take precedence over earlier ones.
def := editorConfig.GetDefinitionForFilename("my/file.go")
This definition has the following properties:
type Definition struct {
Selector string
Charset string
IndentStyle string
IndentSize string
TabWidth int
EndOfLine string
TrimTrailingWhitespace *bool
InsertFinalNewline *bool
Raw map[string]string
}
Package provides a generic interface to encoders and decoders.
package main
import (
"flag"
"fmt"
"log"
"strings"
"github.com/mickep76/encoding"
_ "github.com/mickep76/encoding/json"
_ "github.com/mickep76/encoding/toml"
_ "github.com/mickep76/encoding/yaml"
)
type Message struct {
Name, Text string
}
type Messages struct {
Messages []*Message
}
func main() {
codec := flag.String("codec", "json", fmt.Sprintf("Codecs: [%s].", strings.Join(encoding.Codecs(), ", ")))
indent := flag.String("indent", "", "Indent encoding (only supported by JSON codec)")
flag.Parse()
in := Messages{
Messages: []*Message{
&Message{Name: "Ed", Text: "Knock knock."},
&Message{Name: "Sam", Text: "Who's there?"},
&Message{Name: "Ed", Text: "Go fmt."},
&Message{Name: "Sam", Text: "Go fmt who?"},
&Message{Name: "Ed", Text: "Go fmt yourself!"},
},
}
var opts []encoding.Option
if *indent != "" {
opts = append(opts, encoding.WithIndent(*indent))
}
c, err := encoding.NewCodec(*codec, opts...)
if err != nil {
log.Fatal(err)
}
b, err := c.Encode(in)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Codec: %s\n", *codec)
fmt.Printf("Encoded:\n%s\n", string(b))
out := Messages{}
if err := c.Decode(b, &out); err != nil {
log.Fatal(err)
}
fmt.Println("Decoded:")
for _, m := range out.Messages {
fmt.Printf("%s: %s\n", m.Name, m.Text)
}
}
High performance top level domains (TLD) extraction module.
go get github.com/elliotwutingfeng/go-fasttld
First, build the CLI application.
# `git clone` and `cd` to the go-fasttld repository folder first
make build_cli
Afterwards, try extracting subcomponents from a URL.
# `git clone` and `cd` to the go-fasttld repository folder first
./dist/fasttld extract https://user@a.subdomain.example.a%63.uk:5000/a/b\?id\=42
All of the following examples can be found at examples/demo.go
. To play the demo, run the following command:
# `git clone` and `cd` to the go-fasttld repository folder first
make demo
// Initialise fasttld extractor
extractor, _ := fasttld.New(fasttld.SuffixListParams{})
// Extract URL subcomponents
url := "https://user@a.subdomain.example.a%63.uk:5000/a/b?id=42"
res, _ := extractor.Extract(fasttld.URLParams{URL: url})
// Display results
fasttld.PrintRes(url, res) // Pretty-prints res.Scheme, res.UserInfo, res.SubDomain etc.
Scheme | UserInfo | SubDomain | Domain | Suffix | RegisteredDomain | Port | Path | HostType |
---|---|---|---|---|---|---|---|---|
https:// | user | a.subdomain | example | a%63.uk | example.a%63.uk | 5000 | /a/b?id=42 | hostname |
extractor, _ := fasttld.New(fasttld.SuffixListParams{})
url := "https://127.0.0.1:5000"
res, _ := extractor.Extract(fasttld.URLParams{URL: url})
Scheme | UserInfo | SubDomain | Domain | Suffix | RegisteredDomain | Port | Path | HostType |
---|---|---|---|---|---|---|---|---|
https:// | | | 127.0.0.1 | | 127.0.0.1 | 5000 | | ipv4 address |
extractor, _ := fasttld.New(fasttld.SuffixListParams{})
url := "https://[aBcD:ef01:2345:6789:aBcD:ef01:2345:6789]:5000"
res, _ := extractor.Extract(fasttld.URLParams{URL: url})
Scheme | UserInfo | SubDomain | Domain | Suffix | RegisteredDomain | Port | Path | HostType |
---|---|---|---|---|---|---|---|---|
https:// | | | aBcD:ef01:2345:6789:aBcD:ef01:2345:6789 | | aBcD:ef01:2345:6789:aBcD:ef01:2345:6789 | 5000 | | ipv6 address |
NMEA parser library for the Go language.
To install go-nmea, use `go get`:
go get github.com/adrianmo/go-nmea
This will then make the `github.com/adrianmo/go-nmea` package available to you. To update go-nmea to the latest version, use `go get -u github.com/adrianmo/go-nmea`.
package main
import (
"fmt"
"log"
"github.com/adrianmo/go-nmea"
)
func main() {
sentence := "$GPRMC,220516,A,5133.82,N,00042.24,W,173.8,231.8,130694,004.2,W*70"
s, err := nmea.Parse(sentence)
if err != nil {
log.Fatal(err)
}
if s.DataType() == nmea.TypeRMC {
m := s.(nmea.RMC)
fmt.Printf("Raw sentence: %v\n", m)
fmt.Printf("Time: %s\n", m.Time)
fmt.Printf("Validity: %s\n", m.Validity)
fmt.Printf("Latitude GPS: %s\n", nmea.FormatGPS(m.Latitude))
fmt.Printf("Latitude DMS: %s\n", nmea.FormatDMS(m.Latitude))
fmt.Printf("Longitude GPS: %s\n", nmea.FormatGPS(m.Longitude))
fmt.Printf("Longitude DMS: %s\n", nmea.FormatDMS(m.Longitude))
fmt.Printf("Speed: %f\n", m.Speed)
fmt.Printf("Course: %f\n", m.Course)
fmt.Printf("Date: %s\n", m.Date)
fmt.Printf("Variation: %f\n", m.Variation)
}
}
Output:
$ go run main/main.go
Raw sentence: $GPRMC,220516,A,5133.82,N,00042.24,W,173.8,231.8,130694,004.2,W*70
Time: 22:05:16.0000
Validity: A
Latitude GPS: 5133.8200
Latitude DMS: 51° 33' 49.200000"
Longitude GPS: 042.2400
Longitude DMS: 0° 42' 14.400000"
Speed: 173.800000
Course: 231.800000
Date: 13/06/94
Variation: -4.200000
Parse and format vCard.
f, err := os.Open("cards.vcf")
if err != nil {
log.Fatal(err)
}
defer f.Close()
dec := vcard.NewDecoder(f)
for {
card, err := dec.Decode()
if err == io.EOF {
break
} else if err != nil {
log.Fatal(err)
}
log.Println(card.PreferredValue(vcard.FieldFormattedName))
}
Thank you for following this article.
final emvdecode = EMVMPM.decode(emvqr);
debugPrint("emv decode ------> ${emvdecode.toJson()}");
final emv = EMVQR();
emv.setPayloadFormatIndicator("00");
emv.setPointOfInitiationMethod("12");
/// merchant account information
final mcAccountInfo = MerchantAccountInformation();
mcAccountInfo.setGloballyUniqueIdentifier("IT");
mcAccountInfo.addPaymentNetworkSpecific(id: "01", value: "abc");
mcAccountInfo.addPaymentNetworkSpecific(id: "02", value: "def");
emv.addMerchantAccountInformation(id: "03", value: mcAccountInfo);
final additionalData = AdditionalDataFieldTemplate();
additionalData.setBillNumber("aaaa");
additionalData.setMerchantTaxID("111");
additionalData.setMerchantChannel("cha");
additionalData.addRfuForEMVCo(id: "12", value: "00");
additionalData.addPaymentSystemSpecific(id: "50", value: "123");
additionalData.addPaymentSystemSpecific(id: "51", value: "123");
emv.setAdditionalDataFieldTemplate(additionalData);
final mcInfoLang = MerchantInformationLanguageTemplate();
mcInfoLang.setLanguagePreferencer("LA");
mcInfoLang.setMerchantCity("Vientaine");
mcInfoLang.setMerchantName("MW");
mcInfoLang.addRfuForEMVCo(id: "03", value: "asfg");
mcInfoLang.addRfuForEMVCo(id: "04", value: "asfg");
emv.setMerchantInformationLanguageTemplate(mcInfoLang);
emv.addRfuForEMVCo(id: "66", value: "bbb");
final unreserved = UnreservedTemplate();
unreserved.setGloballyUniqueIdentifier("abs");
unreserved.addContextSpecificData(id: "01", value: "qw12");
emv.addUnreservedTemplate(id: "89", value: unreserved);
debugPrint("emv body ----------> ${emv.value.toJson()}");
final emvEncode = EMVMPM.encode(emv);
debugPrint("emv encode -------> ${emvEncode.toJson()}");
Run this command:
With Dart:
$ dart pub add emvqrcode
With Flutter:
$ flutter pub add emvqrcode
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get
):
dependencies:
emvqrcode: ^1.0.4
Alternatively, your editor might support dart pub get
or flutter pub get
. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:emvqrcode/emvqrcode.dart';
example/main.dart
import 'package:emvqrcode/emvqrcode.dart';
void main(List<String> args) {
/**
* generate emv QR code
*/
final emv = EMVQR();
emv.setPayloadFormatIndicator("00");
emv.setPointOfInitiationMethod("12");
/// merchant account information
final mAccountInfo = MerchantAccountInformation();
mAccountInfo.setGloballyUniqueIdentifier("IT");
mAccountInfo.addPaymentNetworkSpecific(id: "01", value: "abc");
mAccountInfo.addPaymentNetworkSpecific(id: "02", value: "def");
emv.addMerchantAccountInformation(id: "03", value: mAccountInfo);
final mAccountInfo2 = MerchantAccountInformation();
mAccountInfo2.setGloballyUniqueIdentifier("IT");
mAccountInfo2.addPaymentNetworkSpecific(id: "01", value: "abc");
mAccountInfo2.addPaymentNetworkSpecific(id: "02", value: "def");
emv.addMerchantAccountInformation(id: "04", value: mAccountInfo2);
final additionalData = AdditionalDataFieldTemplate();
additionalData.setBillNumber("0qwea");
additionalData.setMerchantTaxID("tax id");
additionalData.setMerchantChannel("cha");
additionalData.addRfuForEMVCo(id: "12", value: "00");
additionalData.addRfuForEMVCo(id: "13", value: "13");
additionalData.addPaymentSystemSpecific(id: "50", value: "123");
additionalData.addPaymentSystemSpecific(id: "51", value: "123");
emv.setAdditionalDataFieldTemplate(additionalData);
final mInfoLang = MerchantInformationLanguageTemplate();
mInfoLang.setLanguagePreferencer("LA");
mInfoLang.setMerchantCity("Vientaine");
mInfoLang.setMerchantName("MW");
mInfoLang.addRfuForEMVCo(id: "03", value: "asfg");
mInfoLang.addRfuForEMVCo(id: "04", value: "asfg");
emv.setMerchantInformationLanguageTemplate(mInfoLang);
emv.addRfuForEMVCo(id: "65", value: "bbc");
emv.addRfuForEMVCo(id: "66", value: "bbb");
final unreserved1 = UnreservedTemplate();
unreserved1.setGloballyUniqueIdentifier("abs");
unreserved1.addContextSpecificData(id: "01", value: "qw12");
unreserved1.addContextSpecificData(id: "02", value: "qw12");
emv.addUnreservedTemplate(id: "89", value: unreserved1);
// encode data
final emvEncode = EMVMPM.encode(emv);
print("result -------> ${emvEncode.toJson()}");
// result -------> {emvqr: 00020001021203200002IT0103abc0203def04200002IT0103abc0203def625201050qwea1006tax id1103cha1202001302135003123510312364410002LA0102MW0209Vientaine0304asfg0404asfg6503bbc6603bbb89230003abs0104qw120204qw126304735A, error: null}
/**
* decode emv qr code
*/
final emvqrcode =
"00020001021203200002IT0103abc0203def04200002IT0103abc0203def625201050qwea1006tax id1103cha1202001302135003123510312364410002LA0102MW0209Vientaine0304asfg0404asfg6503bbc6603bbb89230003abs0104qw120204qw126304735A";
final emvDecode = EMVMPM.decode(emvqrcode);
print("result -------> ${emvDecode.toJson()}");
/**
* decode wrong emv qr code
*
* check crc checksum qr code
*/
String emvQrcode =
"00020101021138670016A00526628466257701082771041802030010324ZPOSUALNJBWWVYSEIRIESGFE6304D1B9";
final emvdecode = EMVMPM.decode(emvQrcode);
print("result ------> ${emvdecode.toJson()}");
//result ------> {emvqr: null, error: {type: EmvErrorType.verifyqrErr, message: The emv data was wrong}}
/**
* decode not emv qr code
*/
String notEmvQrcode = "https://laoitdev.com";
final notEmvDecode = EMVMPM.decode(notEmvQrcode);
print("result ------> ${notEmvDecode.toJson()}");
//result ------> {emvqr: null, error: {type: EmvErrorType.verifyqrErr, message: The emv data was wrong}}
}
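The encoded strings shown in the comments are flat TLV streams: every data object is a two-digit ID, a two-digit decimal length, then that many characters of value (templates repeat the same framing inside their value). A minimal sketch of that framing, in Go for illustration (a hypothetical helper, not part of this package):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseTLV splits an EMV MPM payload into top-level (id, value) pairs.
// Each data object is a two-digit ID, a two-digit decimal length,
// then that many characters of value.
func parseTLV(s string) (map[string]string, error) {
	out := map[string]string{}
	for i := 0; i < len(s); {
		if i+4 > len(s) {
			return nil, fmt.Errorf("truncated object at offset %d", i)
		}
		id := s[i : i+2]
		n, err := strconv.Atoi(s[i+2 : i+4])
		if err != nil {
			return nil, err
		}
		if i+4+n > len(s) {
			return nil, fmt.Errorf("length %d overruns payload", n)
		}
		out[id] = s[i+4 : i+4+n]
		i += 4 + n
	}
	return out, nil
}

func main() {
	// "0002" + "00" then "0102" + "12": the payload format indicator
	// and point of initiation method from the example above.
	fields, err := parseTLV("000200010212")
	if err != nil {
		panic(err)
	}
	fmt.Println(fields["00"], fields["01"])
	// Output: 00 12
}
```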
Author: LaoitdevOpen
Source Code: https://github.com/LaoitdevOpen/dart-emv-code
License: MIT license
1660135980
Faraday middleware for decoding XML responses.
Add this line to your application's Gemfile:
gem "faraday-decode_xml"
And then execute:
bundle install
Or install it yourself as:
gem install faraday-decode_xml
require "faraday/decode_xml"
Faraday.new { |faraday| faraday.response :xml }
After checking out the repo, run bin/setup
to install dependencies.
Then, run bin/test
to run the tests.
To install this gem onto your local machine, run rake build
.
To release a new version, make a commit with a message such as "Bumped to 0.0.2" and then run rake release
.
To run prettier, run rake prettier
.
Bug reports and pull requests are welcome on GitHub.
Author: Soberstadt
Source Code: https://github.com/soberstadt/faraday-decode_xml
License: MIT license
1651837140
binstruct
A Golang binary decoder that maps binary data into structs
Install
go get -u github.com/ghostiam/binstruct
package main
import (
"encoding/binary"
"fmt"
"log"
"os"
"github.com/ghostiam/binstruct"
)
func main() {
file, err := os.Open("testdata/file.bin")
if err != nil {
log.Fatal(err)
}
type dataStruct struct {
Arr []int16 `bin:"len:4"`
}
var actual dataStruct
decoder := binstruct.NewDecoder(file, binary.BigEndian)
// decoder.SetDebug(true) // you can enable the output of bytes read for debugging
err = decoder.Decode(&actual)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%+v", actual)
// Output:
// {Arr:[1 2 3 4]}
}
package main
import (
"fmt"
"log"
"github.com/ghostiam/binstruct"
)
func main() {
data := []byte{
0x00, 0x01,
0x00, 0x02,
0x00, 0x03,
0x00, 0x04,
}
type dataStruct struct {
Arr []int16 `bin:"len:4"`
}
var actual dataStruct
err := binstruct.UnmarshalBE(data, &actual) // UnmarshalLE() or Unmarshal()
if err != nil {
log.Fatal(err)
}
fmt.Printf("%+v", actual)
// Output: {Arr:[1 2 3 4]}
}
If you do not want to use the struct-mapping functionality, you can use the Reader interface to read data directly from the stream (io.ReadSeeker):
type Reader interface {
io.ReadSeeker
// Peek returns the next n bytes without advancing the reader.
Peek(n int) ([]byte, error)
// ReadBytes reads up to n bytes. It returns the number of bytes
// read, bytes and any error encountered.
ReadBytes(n int) (an int, b []byte, err error)
// ReadAll reads until an error or EOF and returns the data it read.
ReadAll() ([]byte, error)
// ReadByte read and return one byte
ReadByte() (byte, error)
// ReadBool read one byte and return boolean value
ReadBool() (bool, error)
// ReadUint8 read one byte and return uint8 value
ReadUint8() (uint8, error)
// ReadUint16 read two bytes and return uint16 value
ReadUint16() (uint16, error)
// ReadUint32 read four bytes and return uint32 value
ReadUint32() (uint32, error)
// ReadUint64 read eight bytes and return uint64 value
ReadUint64() (uint64, error)
// ReadUintX read X bytes and return uint64 value
ReadUintX(x int) (uint64, error)
// ReadInt8 read one byte and return int8 value
ReadInt8() (int8, error)
// ReadInt16 read two bytes and return int16 value
ReadInt16() (int16, error)
// ReadInt32 read four bytes and return int32 value
ReadInt32() (int32, error)
// ReadInt64 read eight bytes and return int64 value
ReadInt64() (int64, error)
// ReadIntX read X bytes and return int64 value
ReadIntX(x int) (int64, error)
// ReadFloat32 read four bytes and return float32 value
ReadFloat32() (float32, error)
// ReadFloat64 read eight bytes and return float64 value
ReadFloat64() (float64, error)
// Unmarshal parses the binary data and stores the result
// in the value pointed to by v.
Unmarshal(v interface{}) error
// WithOrder changes the byte order for the new Reader
WithOrder(order binary.ByteOrder) Reader
}
Example:
package main
import (
"encoding/binary"
"fmt"
"log"
"github.com/ghostiam/binstruct"
)
func main() {
data := []byte{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F}
reader := binstruct.NewReaderFromBytes(data, binary.BigEndian, false)
i16, err := reader.ReadInt16()
if err != nil {
log.Fatal(err)
}
fmt.Println(i16)
i32, err := reader.ReadInt32()
if err != nil {
log.Fatal(err)
}
fmt.Println(i32)
b, err := reader.Peek(4)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Peek bytes: %#v\n", b)
an, b, err := reader.ReadBytes(4)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Read %d bytes: %#v\n", an, b)
other, err := reader.ReadAll()
if err != nil {
log.Fatal(err)
}
fmt.Printf("Read all: %#v\n", other)
// Output:
// 258
// 50595078
// Peek bytes: []byte{0x7, 0x8, 0x9, 0xa}
// Read 4 bytes: []byte{0x7, 0x8, 0x9, 0xa}
// Read all: []byte{0xb, 0xc, 0xd, 0xe, 0xf}
}
Decode to fields
type test struct {
// Read 1 byte
Field bool
Field byte
Field [1]byte
Field int8
Field uint8
// Read 2 bytes
Field int16
Field uint16
Field [2]byte
// Read 4 bytes
Field int32
Field uint32
Field [4]byte
// Read 8 bytes
Field int64
Field uint64
Field [8]byte
// You can override length
Field int64 `bin:"len:2"`
// Or even use very weird byte lengths for int
Field int64 `bin:"len:3"`
Field int64 `bin:"len:5"`
Field int64 `bin:"len:7"`
// Fields of type int, uint and string are not read automatically
// because the size is not known, you need to set it manually
Field int `bin:"len:2"`
Field uint `bin:"len:4"`
Field string `bin:"len:42"`
// Can read arrays and slices
Array [2]int32 // read 8 bytes (4+4byte for 2 int32)
Slice []int32 `bin:"len:2"` // read 8 bytes (4+4byte for 2 int32)
// Also two-dimensional slices work (binstruct_test.go:307 Test_SliceOfSlice)
Slice2D [][]int32 `bin:"len:2,[len:2]"`
// and even three-dimensional slices (binstruct_test.go:329 Test_SliceOfSliceOfSlice)
Slice3D [][][]int32 `bin:"len:2,[len:2,[len:2]]"`
// Structures and embedding are also supported.
Struct struct {
...
}
OtherStruct Other
Other // embedding
}
type Other struct {
...
}
Tags
type test struct {
IgnoredField []byte `bin:"-"` // ignore field
CallMethod []byte `bin:"MethodName"` // Call method "MethodName"
ReadLength []byte `bin:"len:42"` // read 42 bytes
// Offsets test binstruct_test.go:9
Offset byte `bin:"offset:42"` // move to 42 bytes from current position and read byte
OffsetStart byte `bin:"offsetStart:42"` // move to 42 bytes from start position and read byte
OffsetEnd byte `bin:"offsetEnd:-42"` // move to -42 bytes from end position and read byte
OffsetStart byte `bin:"offsetStart:42, offset:10"` // also works, equivalent to `offsetStart:52`
// Calculations support +, -, / and *, evaluated from left to right: 2+2*2=8, not 6!
CalcTagValue []byte `bin:"len:10+5+2+3"` // equivalent to len:20
// You can refer to another field to get the value.
DataLength int // actual length
ValueFromOtherField string `bin:"len:DataLength"`
CalcValueFromOtherField string `bin:"len:DataLength+10"` // also work calculations
// You can change the byte order directly from the tag
UInt16LE uint16 `bin:"le"`
UInt16BE uint16 `bin:"be"`
// Or when you call the method, it will contain the Reader with the byte order you need
CallMethodWithLEReader uint16 `bin:"MethodNameWithLEReader,le"`
CallMethodWithBEReader uint16 `bin:"be,MethodNameWithBEReader"`
}
// Method can be:
func (*test) MethodName(r binstruct.Reader) (error) {}
// or
func (*test) MethodName(r binstruct.Reader) (FieldType, error) {}
See the tests and examples for more information.
Examples
Author: Ghostiam
Source Code: https://github.com/ghostiam/binstruct
License: MIT License
1649361840
MPO Decoder Library
Simple Go JPEG MPO (Multi Picture Object) Decoder - Library and CLI Tool
The library and CLI tool can convert MPO to stereoscopic JPEG as well as various color combinations of anaglyph.
mpo2img
usage: mpo2img <mpofile>
-format string
Output format [stereo|red-cyan|cyan-red|red-green|green-red] (default "stereo")
-help
Displays this text
-outfile string
Output filename (default "output.jpg")
A Web UI for this library exists here:
https://donatstudios.com/MPO-to-JPEG-Stereo
Binaries are available for Darwin (macOS), Linux and Windows on the release page:
https://github.com/donatj/mpo/releases
go get -u github.com/donatj/mpo/cmd/mpo2img
Author: Donatj
Source Code: https://github.com/donatj/mpo
License: MIT License
1646684040
minimp3
$ go get -u github.com/tosone/minimp3
import "github.com/tosone/minimp3"
Example 1: decode the whole MP3 and play it.
package main
import (
"io/ioutil"
"log"
"time"
"github.com/hajimehoshi/oto"
"github.com/tosone/minimp3"
)
func main() {
var err error
var file []byte
if file, err = ioutil.ReadFile("test.mp3"); err != nil {
log.Fatal(err)
}
var dec *minimp3.Decoder
var data []byte
if dec, data, err = minimp3.DecodeFull(file); err != nil {
log.Fatal(err)
}
var context *oto.Context
if context, err = oto.NewContext(dec.SampleRate, dec.Channels, 2, 1024); err != nil {
log.Fatal(err)
}
var player = context.NewPlayer()
player.Write(data)
<-time.After(time.Second)
dec.Close()
if err = player.Close(); err != nil {
log.Fatal(err)
}
}
Example 2: decode and play.
package main
import (
"io"
"log"
"os"
"sync"
"time"
"github.com/hajimehoshi/oto"
"github.com/tosone/minimp3"
)
func main() {
var err error
var file *os.File
if file, err = os.Open("../test.mp3"); err != nil {
log.Fatal(err)
}
var dec *minimp3.Decoder
if dec, err = minimp3.NewDecoder(file); err != nil {
log.Fatal(err)
}
started := dec.Started()
<-started
log.Printf("Convert audio sample rate: %d, channels: %d\n", dec.SampleRate, dec.Channels)
var context *oto.Context
if context, err = oto.NewContext(dec.SampleRate, dec.Channels, 2, 1024); err != nil {
log.Fatal(err)
}
var waitForPlayOver = new(sync.WaitGroup)
waitForPlayOver.Add(1)
var player = context.NewPlayer()
go func() {
for {
var data = make([]byte, 1024)
_, err := dec.Read(data)
if err == io.EOF {
break
}
if err != nil {
break
}
player.Write(data)
}
log.Println("over play.")
waitForPlayOver.Done()
}()
waitForPlayOver.Wait()
<-time.After(time.Second)
dec.Close()
if err = player.Close(); err != nil {
log.Fatal(err)
}
}
Example 3: play audio from the network.
package main
import (
"io"
"log"
"net/http"
"os"
"sync"
"time"
"github.com/hajimehoshi/oto"
"github.com/tosone/minimp3"
)
func main() {
var err error
var args = os.Args
if len(args) != 2 {
log.Fatal("Run test like this:\n\n\t./networkAudio.test [mp3url]\n\n")
}
var response *http.Response
if response, err = http.Get(args[1]); err != nil {
log.Fatal(err)
}
var dec *minimp3.Decoder
if dec, err = minimp3.NewDecoder(response.Body); err != nil {
log.Fatal(err)
}
<-dec.Started()
log.Printf("Convert audio sample rate: %d, channels: %d\n", dec.SampleRate, dec.Channels)
var context *oto.Context
if context, err = oto.NewContext(dec.SampleRate, dec.Channels, 2, 4096); err != nil {
log.Fatal(err)
}
var waitForPlayOver = new(sync.WaitGroup)
waitForPlayOver.Add(1)
var player = context.NewPlayer()
go func() {
defer response.Body.Close()
for {
var data = make([]byte, 512)
_, err = dec.Read(data)
if err == io.EOF {
break
}
if err != nil {
log.Fatal(err)
break
}
player.Write(data)
}
log.Println("over play.")
waitForPlayOver.Done()
}()
waitForPlayOver.Wait()
<-time.After(time.Second)
dec.Close()
player.Close()
}
MP3 decoding is based on https://github.com/lieff/minimp3.
Author: Tosone
Source Code: https://github.com/tosone/minimp3
License: MIT License
1645409220
he (for “HTML entities”) is a robust HTML entity encoder/decoder written in JavaScript. It supports all standardized named character references as per HTML, handles ambiguous ampersands and other edge cases just like a browser would, has an extensive test suite, and — contrary to many other JavaScript solutions — he handles astral Unicode symbols just fine. An online demo is available.
Via npm:
npm install he
Via Bower:
bower install he
Via Component:
component install mathiasbynens/he
In a browser:
<script src="he.js"></script>
In Node.js, io.js, Narwhal, and RingoJS:
var he = require('he');
In Rhino:
load('he.js');
Using an AMD loader like RequireJS:
require(
{
'paths': {
'he': 'path/to/he'
}
},
['he'],
function(he) {
console.log(he);
}
);
he.version
A string representing the semantic version number.
he.encode(text, options)
This function takes a string of text and encodes (by default) any symbols that aren’t printable ASCII symbols and &
, <
, >
, "
, '
, and `
, replacing them with character references.
he.encode('foo © bar ≠ baz 𝌆 qux');
// → 'foo &#xA9; bar &#x2260; baz &#x1D306; qux'
As long as the input string contains allowed code points only, the return value of this function is always valid HTML. Any (invalid) code points that cannot be represented using a character reference in the input are not encoded:
he.encode('foo \0 bar');
// → 'foo \0 bar'
However, enabling the strict
option causes invalid code points to throw an exception. With strict
enabled, he.encode
either throws (if the input contains invalid code points) or returns a string of valid HTML.
The options
object is optional. It recognizes the following properties:
useNamedReferences
The default value for the useNamedReferences
option is false
. This means that encode()
will not use any named character references (e.g. &copy;
) in the output — hexadecimal escapes (e.g. &#xA9;
) will be used instead. Set it to true
to enable the use of named references.
Note that if compatibility with older browsers is a concern, this option should remain disabled.
// Using the global default setting (defaults to `false`):
he.encode('foo © bar ≠ baz 𝌆 qux');
// → 'foo &#xA9; bar &#x2260; baz &#x1D306; qux'
// Passing an `options` object to `encode`, to explicitly disallow named references:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'useNamedReferences': false
});
// → 'foo &#xA9; bar &#x2260; baz &#x1D306; qux'
// Passing an `options` object to `encode`, to explicitly allow named references:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'useNamedReferences': true
});
// → 'foo &copy; bar &ne; baz &#x1D306; qux'
decimal
The default value for the decimal
option is false
. If the option is enabled, encode
will generally use decimal escapes (e.g. &#169;
) rather than hexadecimal escapes (e.g. &#xA9;
). Apart from this replacement, the basic behavior remains the same when combined with other options. For example: if both useNamedReferences
and decimal
are enabled, named references (e.g. &copy;
) are used over decimal escapes. HTML entities without a named reference are encoded using decimal escapes.
// Using the global default setting (defaults to `false`):
he.encode('foo © bar ≠ baz 𝌆 qux');
// → 'foo &#xA9; bar &#x2260; baz &#x1D306; qux'
// Passing an `options` object to `encode`, to explicitly disable decimal escapes:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'decimal': false
});
// → 'foo &#xA9; bar &#x2260; baz &#x1D306; qux'
// Passing an `options` object to `encode`, to explicitly enable decimal escapes:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'decimal': true
});
// → 'foo &#169; bar &#8800; baz &#119558; qux'
// Passing an `options` object to `encode`, to explicitly allow named references and decimal escapes:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'useNamedReferences': true,
'decimal': true
});
// → 'foo &copy; bar &ne; baz &#119558; qux'
encodeEverything
The default value for the encodeEverything
option is false
. This means that encode()
will not use any character references for printable ASCII symbols that don’t need escaping. Set it to true
to encode every symbol in the input string. When set to true
, this option takes precedence over allowUnsafeSymbols
(i.e. setting the latter to true
in such a case has no effect).
// Using the global default setting (defaults to `false`):
he.encode('foo © bar ≠ baz 𝌆 qux');
// → 'foo &#xA9; bar &#x2260; baz &#x1D306; qux'
// Passing an `options` object to `encode`, to explicitly encode all symbols:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'encodeEverything': true
});
// → '&#x66;&#x6F;&#x6F;&#x20;&#xA9;&#x20;&#x62;&#x61;&#x72;&#x20;&#x2260;&#x20;&#x62;&#x61;&#x7A;&#x20;&#x1D306;&#x20;&#x71;&#x75;&#x78;'
// This setting can be combined with the `useNamedReferences` option:
he.encode('foo © bar ≠ baz 𝌆 qux', {
'encodeEverything': true,
'useNamedReferences': true
});
// → '&#x66;&#x6F;&#x6F;&#x20;&copy;&#x20;&#x62;&#x61;&#x72;&#x20;&ne;&#x20;&#x62;&#x61;&#x7A;&#x20;&#x1D306;&#x20;&#x71;&#x75;&#x78;'
strict
The default value for the strict
option is false
. This means that encode()
will encode any HTML text content you feed it, even if it contains any symbols that cause parse errors. To throw an error when such invalid HTML is encountered, set the strict
option to true
. This option makes it possible to use he as part of HTML parsers and HTML validators.
// Using the global default setting (defaults to `false`, i.e. error-tolerant mode):
he.encode('\x01');
// → '&#x1;'
// Passing an `options` object to `encode`, to explicitly enable error-tolerant mode:
he.encode('\x01', {
'strict': false
});
// → '&#x1;'
// Passing an `options` object to `encode`, to explicitly enable strict mode:
he.encode('\x01', {
'strict': true
});
// → Parse error
allowUnsafeSymbols
The default value for the allowUnsafeSymbols
option is false
. This means that characters that are unsafe for use in HTML content (&
, <
, >
, "
, '
, and `
) will be encoded. When set to true
, only non-ASCII characters will be encoded. If the encodeEverything
option is set to true
, this option will be ignored.
he.encode('foo © and & ampersand', {
'allowUnsafeSymbols': true
});
// → 'foo &#xA9; and & ampersand'
Overriding the default encode
options globally
The global default setting can be overridden by modifying the he.encode.options
object. This saves you from passing in an options
object for every call to encode
if you want to use the non-default setting.
// Read the global default setting:
he.encode.options.useNamedReferences;
// → `false` by default
// Override the global default setting:
he.encode.options.useNamedReferences = true;
// Using the global default setting, which is now `true`:
he.encode('foo © bar ≠ baz 𝌆 qux');
// → 'foo &copy; bar &ne; baz &#x1D306; qux'
he.decode(html, options)
This function takes a string of HTML and decodes any named and numerical character references in it using the algorithm described in section 12.2.4.69 of the HTML spec.
he.decode('foo &copy; bar &ne; baz &#x1D306; qux');
// → 'foo © bar ≠ baz 𝌆 qux'
The options
object is optional. It recognizes the following properties:
isAttributeValue
The default value for the isAttributeValue
option is false
. This means that decode()
will decode the string as if it were used in a text context in an HTML document. HTML has different rules for parsing character references in attribute values — set this option to true
to treat the input string as if it were used as an attribute value.
// Using the global default setting (defaults to `false`, i.e. HTML text context):
he.decode('foo&ampbar');
// → 'foo&bar'
// Passing an `options` object to `decode`, to explicitly assume an HTML text context:
he.decode('foo&ampbar', {
'isAttributeValue': false
});
// → 'foo&bar'
// Passing an `options` object to `decode`, to explicitly assume an HTML attribute value context:
he.decode('foo&ampbar', {
'isAttributeValue': true
});
// → 'foo&ampbar'
strict
The default value for the strict
option is false
. This means that decode()
will decode any HTML text content you feed it, even if it contains any entities that cause parse errors. To throw an error when such invalid HTML is encountered, set the strict
option to true
. This option makes it possible to use he as part of HTML parsers and HTML validators.
// Using the global default setting (defaults to `false`, i.e. error-tolerant mode):
he.decode('foo&ampbar');
// → 'foo&bar'
// Passing an `options` object to `decode`, to explicitly enable error-tolerant mode:
he.decode('foo&ampbar', {
'strict': false
});
// → 'foo&bar'
// Passing an `options` object to `decode`, to explicitly enable strict mode:
he.decode('foo&ampbar', {
'strict': true
});
// → Parse error
Overriding the default decode
options globally
The global default settings for the decode
function can be overridden by modifying the he.decode.options
object. This saves you from passing in an options
object for every call to decode
if you want to use a non-default setting.
// Read the global default setting:
he.decode.options.isAttributeValue;
// → `false` by default
// Override the global default setting:
he.decode.options.isAttributeValue = true;
// Using the global default setting, which is now `true`:
he.decode('foo&ampbar');
// → 'foo&ampbar'
he.escape(text)
This function takes a string of text and escapes it for use in text contexts in XML or HTML documents. Only the following characters are escaped: &
, <
, >
, "
, '
, and `
.
he.escape('<img src=\'x\' onerror="prompt(1)">');
// → '&lt;img src=&#x27;x&#x27; onerror=&quot;prompt(1)&quot;&gt;'
he.unescape(html, options)
he.unescape
is an alias for he.decode
. It takes a string of HTML and decodes any named and numerical character references in it.
Using the he
binary
To use the he
binary in your shell, simply install he globally using npm:
npm install -g he
After that you will be able to encode/decode HTML entities from the command line:
$ he --encode 'föo ♥ bår 𝌆 baz'
f&#xF6;o &#x2665; b&#xE5;r &#x1D306; baz
$ he --encode --use-named-refs 'föo ♥ bår 𝌆 baz'
f&ouml;o &hearts; b&aring;r &#x1D306; baz
$ he --decode 'f&ouml;o &hearts; b&aring;r &#x1D306; baz'
föo ♥ bår 𝌆 baz
Read a local text file, encode it for use in an HTML text context, and save the result to a new file:
$ he --encode < foo.txt > foo-escaped.html
Or do the same with an online text file:
$ curl -sL "http://git.io/HnfEaw" | he --encode > escaped.html
Or, the opposite — read a local file containing a snippet of HTML in a text context, decode it back to plain text, and save the result to a new file:
$ he --decode < foo-escaped.html > foo.txt
Or do the same with an online HTML snippet:
$ curl -sL "http://git.io/HnfEaw" | he --decode > decoded.txt
See he --help
for the full list of options.
he has been tested in at least:
After cloning this repository, run npm install
to install the dependencies needed for he development and testing. You may want to install Istanbul globally using npm install istanbul -g
.
Once that’s done, you can run the unit tests in Node using npm test
or node tests/tests.js
. To run the tests in Rhino, Ringo, Narwhal, and web browsers as well, use grunt test
.
To generate the code coverage report, use grunt cover
.
Thanks to Simon Pieters (@zcorpan) for the many suggestions.
Author: Mathiasbynens
Source Code: https://github.com/mathiasbynens/he
License: MIT License
1633439047
A set of codecs for encoding and decoding data.
Base16: supports the hex alphabet and custom alphabets
Base32: supports Rfc, RfcHex, Crockford, ZBase, GeoHash, WordSafe and custom alphabets
Base58: supports Bitcoin, Flickr, Ripple and custom alphabets
Base85: supports Ascii85, ZeroMq and IPv6
Run this command:
With Dart:
$ dart pub add base_codecs
With Flutter:
$ flutter pub add base_codecs
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get
):
dependencies:
base_codecs: ^1.0.0
Alternatively, your editor might support dart pub get
or flutter pub get
. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:base_codecs/base_codecs.dart';
example/base_codecs_example.dart
import 'dart:convert';
import 'dart:typed_data';
import 'package:base_codecs/base_codecs.dart';
void main() {
const testString =
"Man is distinguished, not only by his reason, but by this singular passion from other animals, which is a lust of the mind, that by a perseverance of delight in the continued and indefatigable generation of knowledge, exceeds the short vehemence of any carnal pleasure.";
/// Base16 (Hex)
const data = [0xDE, 0xAD, 0xBE, 0xEF];
hexEncode(Uint8List.fromList(data)); //DEADBEEF
hexDecode('DEADBEEF'); // == data
/// Base16 Custom
const custom = Base16CodecCustom('ABCDEF9876543210');
final customData = Uint8List.fromList(
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
);
custom.encode(customData); // AAABACADAEAFA9A8A7A6A5A4A3A2A1A0
custom.decode('AAABACADAEAFA9A8A7A6A5A4A3A2A1A0'); // == customData
/// Base32
/// RFC
const encoded =
"JVQW4IDJOMQGI2LTORUW4Z3VNFZWQZLEFQQG433UEBXW43DZEBRHSIDINFZSA4TFMFZW63RMEBRHK5BAMJ4SA5DINFZSA43JNZTXK3DBOIQHAYLTONUW63RAMZZG63JAN52GQZLSEBQW42LNMFWHGLBAO5UGSY3IEBUXGIDBEBWHK43UEBXWMIDUNBSSA3LJNZSCYIDUNBQXIIDCPEQGCIDQMVZHGZLWMVZGC3TDMUQG6ZRAMRSWY2LHNB2CA2LOEB2GQZJAMNXW45DJNZ2WKZBAMFXGIIDJNZSGKZTBORUWOYLCNRSSAZ3FNZSXEYLUNFXW4IDPMYQGW3TPO5WGKZDHMUWCAZLYMNSWKZDTEB2GQZJAONUG64TUEB3GK2DFNVSW4Y3FEBXWMIDBNZ4SAY3BOJXGC3BAOBWGKYLTOVZGKLQ=";
base32RfcEncode(Uint8List.fromList(utf8.encode(testString)));
base32RfcDecode(encoded);
/// RFC HEX
const encodedRFCHex =
"9LGMS839ECG68QBJEHKMSPRLD5PMGPB45GG6SRRK41NMSR3P41H7I838D5PI0SJ5C5PMURHC41H7AT10C9SI0T38D5PI0SR9DPJNAR31E8G70OBJEDKMURH0CPP6UR90DTQ6GPBI41GMSQBDC5M76B10ETK6IOR841KN683141M7ASRK41NMC83KD1II0RB9DPI2O83KD1GN8832F4G6283GCLP76PBMCLP62RJ3CKG6UPH0CHIMOQB7D1Q20QBE41Q6GP90CDNMST39DPQMAP10C5N68839DPI6APJ1EHKMEOB2DHII0PR5DPIN4OBKD5NMS83FCOG6MRJFETM6AP37CKM20PBOCDIMAP3J41Q6GP90EDK6USJK41R6AQ35DLIMSOR541NMC831DPSI0OR1E9N62R10E1M6AOBJELP6ABG=";
base32RfcHexEncode(Uint8List.fromList(utf8.encode(testString)));
base32RfcHexDecode(encodedRFCHex);
/// ZBase
const encodedZbase =
"JIOSHEDJQCOGE4MUQTWSH35IPF3SO3MRFOOGH55WRBZSH5D3RBT81EDEPF31YHUFCF3S65TCRBT8K7BYCJH1Y7DEPF31YH5JP3UZK5DBQEO8YAMUQPWS65TYC33G65JYP74GO3M1RBOSH4MPCFS8GMBYQ7WG1A5ERBWZGEDBRBS8KH5WRBZSCEDWPB11Y5MJP31NAEDWPBOZEEDNXROGNEDOCI38G3MSCI3GN5UDCWOG63TYCT1SA4M8PB4NY4MQRB4GO3JYCPZSH7DJP34SK3BYCFZGEEDJP31GK3UBQTWSQAMNPT11Y35FP31ZRAMWPFZSHEDXCAOGS5UXQ7SGK3D8CWSNY3MACP1SK3DURB4GO3JYQPWG6HUWRB5GK4DFPI1SHA5FRBZSCEDBP3H1YA5BQJZGN5BYQBSGKAMUQI3GKMO";
base32ZBaseEncode(Uint8List.fromList(utf8.encode(testString)));
base32ZBaseDecode(encodedZbase);
const encodedCrockford =
"9NGPW839ECG68TBKEHMPWSVND5SPGSB45GG6WVVM41QPWV3S41H7J838D5SJ0WK5C5SPYVHC41H7AX10C9WJ0X38D5SJ0WV9DSKQAV31E8G70RBKEDMPYVH0CSS6YV90DXT6GSBJ41GPWTBDC5P76B10EXM6JRV841MQ683141P7AWVM41QPC83MD1JJ0VB9DSJ2R83MD1GQ8832F4G6283GCNS76SBPCNS62VK3CMG6YSH0CHJPRTB7D1T20TBE41T6GS90CDQPWX39DSTPAS10C5Q68839DSJ6ASK1EHMPERB2DHJJ0SV5DSJQ4RBMD5QPW83FCRG6PVKFEXP6AS37CMP20SBRCDJPAS3K41T6GS90EDM6YWKM41V6AT35DNJPWRV541QPC831DSWJ0RV1E9Q62V10E1P6ARBKENS6ABG";
base32CrockfordEncode(Uint8List.fromList(utf8.encode(testString)));
base32CrockfordDecode(encodedCrockford);
/// WordSafe
const encodedWordSafe =
"FfRgrC5FPJR8CpHXPVcgrmqfM7mgRmH67RR8rqqc63hgrq5m63V9WC5CM7mW2rX7J7mgwqVJ63V9Gv32JFrW2v5CM7mW2rqFMmXhGq53PCR92jHXPMcgwqV2Jmm8wqF2Mvp8RmHW63RgrpHMJ7g98H32Pvc8WjqC63ch8C5363g9Grqc63hgJC5cM3WW2qHFMmW4jC5cM3RhCC54Q6R84C5RJfm98mHgJfm84qX5JcR8wmV2JVWgjpH9M3p42pHP63p8RmF2JMhgrv5FMmpgGm32J7h8CC5FMmW8GmX3PVcgPjH4MVWW2mq7MmWh6jHcM7hgrC5QJjR8gqXQPvg8Gm59Jcg42mHjJMWgGm5X63p8RmF2PMc8wrXc63q8Gp57MfWgrjq763hgJC53MmrW2jq3PFh84q32P3g8GjHXPfm8GHR";
base32WordSafeEncode(Uint8List.fromList(utf8.encode(testString)));
base32WordSafeDecode(encodedWordSafe);
/// GeoHash
const encodedGeoHash =
"9phqw839fdh68ucmfjnqwtvpe5tqhtc45hh6wvvn41rqwv3t41j7k838e5tk0wm5d5tqyvjd41j7bx10d9wk0x38e5tk0wv9etmrbv31f8h70scmfenqyvj0dtt6yv90exu6htck41hqwuced5q76c10fxn6ksv841nr683141q7bwvn41rqd83ne1kk0vc9etk2s83ne1hr8832g4h6283hdpt76tcqdpt62vm3dnh6ytj0djkqsuc7e1u20ucf41u6ht90derqwx39etuqbt10d5r68839etk6btm1fjnqfsc2ejkk0tv5etkr4scne5rqw83gdsh6qvmgfxq6bt37dnq20tcsdekqbt3m41u6ht90fen6ywmn41v6bu35epkqwsv541rqd831etwk0sv1f9r62v10f1q6bscmfpt6bch";
base32GeoHashEncode(Uint8List.fromList(utf8.encode(testString)));
base32GeoHashDecode(encodedGeoHash);
/// Custom
const codec = Base32CodecCustom("0123456789ABCDEFGHJKMNPQRSTVWXYZ", '');
const encodedCustom =
"9NGPW839ECG68TBKEHMPWSVND5SPGSB45GG6WVVM41QPWV3S41H7J838D5SJ0WK5C5SPYVHC41H7AX10C9WJ0X38D5SJ0WV9DSKQAV31E8G70RBKEDMPYVH0CSS6YV90DXT6GSBJ41GPWTBDC5P76B10EXM6JRV841MQ683141P7AWVM41QPC83MD1JJ0VB9DSJ2R83MD1GQ8832F4G6283GCNS76SBPCNS62VK3CMG6YSH0CHJPRTB7D1T20TBE41T6GS90CDQPWX39DSTPAS10C5Q68839DSJ6ASK1EHMPERB2DHJJ0SV5DSJQ4RBMD5QPW83FCRG6PVKFEXP6AS37CMP20SBRCDJPAS3K41T6GS90EDM6YWKM41V6AT35DNJPWRV541QPC831DSWJ0RV1E9Q62V10E1P6ARBKENS6ABG";
codec.encode(Uint8List.fromList(utf8.encode(testString)));
codec.decode(encodedCustom);
/// Base58
const encodedBitcoin =
"2KG5obUH7D2G2qLPjujWXdCd1FK6heTdfCjVn1MwP3unVrwcoTz3QyxtBe8Dpxfc5Afnf6VL2b4Ae9RWHEJ957WJpTXTXKcSyFZb17ALWU1BcBsNv2Cncqm5qTadzLcryeftfjtFZfJ14EKKf7UVd5h7UXFSqpmB144w2Eyb9gwvh7mofZpc7oSQv4ZSso9tD1589EjLERTebQoFtt8isgKarX4HGRWUpQCRkAPWuiNrYeV4XEmE4ez4f2mWN1vgGPcX8mKm7RXjYnQ1aGF3oZvKrQQ1ySEq4b5fLvQxcGzCp9xsVdfgK3pXC1RQPf8nyhik8JEnGdXV999wjaj7ggrcEtmkZH41ynpvSYkDecL8nNMT";
base58BitcoinEncode(Uint8List.fromList(utf8.encode(testString)));
base58BitcoinDecode(encodedBitcoin);
/// Flickr
const encodedFlickr =
'2jg5NAth7d2g2QkoJUJvwCcC1fj6GDsCEcJuM1mWo3UMuRWBNsZ3pYXTbD8dPXEB5aEME6uk2A4aD9qvhei957viPswswjBrYfyA17akvt1bBbSnV2cMBQL5QszCZkBRYDETEJTfyEi14ejjE7tuC5G7twfrQPLb144W2eYA9FWVG7LNEyPB7NrpV4yrSN9Td1589eJkeqsDApNfTT8HSFjzRw4hgqvtPpcqKaovUHnRxDu4weLe4DZ4E2Lvn1VFgoBw8LjL7qwJxMp1zgf3NyVjRpp1YreQ4A5EkVpXBgZcP9XSuCEFj3Pwc1qpoE8MYGHK8ieMgCwu999WJzJ7FFRBeTLKyh41YMPVrxKdDBk8Mnms';
base58FlickrEncode(Uint8List.fromList(utf8.encode(testString)));
base58FlickrDecode(encodedFlickr);
/// Ripple
const encodedRipple =
'pKGnob7HfDpGpqLPjujWXdUdrEKa6eTdCUjV8rMAPsu8ViAcoTzsQyxtBe3DFxCcnwC8CaVLpbhwe9RWHNJ9nfWJFTXTXKcSyEZbrfwLW7rBcB14vpU8cqmnqT2dzLciyeCtCjtEZCJrhNKKCf7Vdn6f7XESqFmBrhhApNyb9gAv6fmoCZFcfoSQvhZS1o9tDrn39NjLNRTebQoEtt351gK2iXhHGRW7FQURkwPWu54iYeVhXNmNhezhCpmW4rvgGPcX3mKmfRXjY8Qr2GEsoZvKiQQrySNqhbnCLvQxcGzUF9x1VdCgKsFXUrRQPC38y65k3JN8GdXV999Aj2jfggicNtmkZHhry8FvSYkDecL384MT';
base58RippleEncode(Uint8List.fromList(utf8.encode(testString)));
base58RippleDecode(encodedRipple);
/// Base58Check
const encodedCheck = "5Kd3NBUAdUnhyzenEwVLy9pBKxSwXvE9FMPyR4UKZvpe6E3AgLr";
const decodedCheck =
"80EDDBDC1168F1DAEADBD3E44C1E3F8F5A284C2029F78AD26AF98583A499DE5B19";
base58CheckEncode(base16.decode(decodedCheck));
base16.encode(base58CheckDecode(encodedCheck));
/// Base85
/// Ascii
final zeroDecoded = Uint8List.fromList(
[0, 0, 0, 0, 0, 0, 0, 0, 0xd9, 0x47, 0xa3, 0xd5, 0, 0, 0, 0],
);
base85AsciiEncode(Uint8List.fromList(utf8.encode(testString)));
base85AsciiEncode(zeroDecoded);
/// Z85
const rfcTestData = [0x86, 0x4F, 0xD2, 0x6F, 0xB5, 0x59, 0xF7, 0x5B];
const rfcTestDataEncoded = "HelloWorld";
base85ZEncode(
Uint8List.fromList(rfcTestData),
);
base85ZDecode(rfcTestDataEncoded);
/// IPv6
// 1080:0:0:0:8:800:200C:417A from RFC 1924
const address = '108000000000000000080800200C417A';
const encodedIPv6 = "4)+k&C#VzJ4br>0wv%Yp";
base85IPv6Encode(
Uint8List.fromList(base16.decode(address)),
);
base85IPv6Decode(encodedIPv6);
}
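Two of the codecs exercised above are easy to cross-check outside Dart. The sketch below is a minimal Python re-implementation (it is not the base_codecs package's code) of the standard schemes: Base32 regroups bytes into 5-bit symbols looked up in a caller-supplied 32-character alphabet (here the Crockford alphabet, the same one passed to `Base32CodecCustom` above), and Base58Check appends the first four bytes of a double SHA-256 checksum to the payload before Base58-encoding it.

```python
import hashlib

# Illustrative stand-ins for Base32CodecCustom / base58CheckEncode above;
# these assume the standard Crockford Base32 and Bitcoin Base58Check schemes.
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # alphabet from the Custom example

def base32_encode(data: bytes, alphabet: str, pad: str = "") -> str:
    """Regroup bytes into 5-bit symbols and map them through `alphabet`."""
    acc = bits = 0
    out = []
    for byte in data:
        acc = (acc << 8) | byte
        bits += 8
        while bits >= 5:
            bits -= 5
            out.append(alphabet[(acc >> bits) & 0x1F])
    if bits:  # left-align any leftover bits into one final symbol
        out.append(alphabet[(acc << (5 - bits)) & 0x1F])
    while pad and len(out) % 8:  # optional padding to an 8-symbol boundary
        out.append(pad)
    return "".join(out)

def base58check_encode(payload: bytes) -> str:
    """Base58Check: payload + first 4 bytes of double SHA-256, Base58-encoded."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    zeros = len(data) - len(data.lstrip(b"\x00"))  # each leading 0x00 -> '1'
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = B58_ALPHABET[rem] + out
    return "1" * zeros + out
```

Feeding the `decodedCheck` payload from the example into `base58check_encode` should reproduce the `encodedCheck` string, since a 0x80-prefixed 37-byte WIF payload always Base58Check-encodes to a string starting with "5".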
Download Details:
Author: KirsApps
Source Code: https://github.com/KirsApps/base_codecs
1625507100
Today we will create an Onboarding Travel App UI with React Native. The main focus of this application is displaying onboarding tips for a travel application.
In this design, we will use the react-native-app-intro-slider library.
Source code: https://github.com/Minte-grace/React-Native-Onboarding
Author: DeCode