
Honeytrap: Advanced Honeypot Framework

Honeytrap

Honeytrap is an extensible, open-source system for running, monitoring and managing honeypots.

Features

  • Combine multiple services into one honeypot, e.g. a LAMP server
  • The Honeytrap Agent downloads its configuration from the Honeytrap Server
  • Use the Honeytrap Agent to redirect traffic out of the network to a separate network
  • Deploy a large number of agents with a single Honeytrap Server; configuration is downloaded automatically and logging is centralized
  • Payload detection determines which service should handle a request, so one port can handle multiple protocols
  • Monitor lateral movement within your network with the Sensor listener. The sensor will complete the handshake (in the case of TCP) and store the payload
  • Create high-interaction honeypots using the LXC or remote-host directors; traffic is man-in-the-middle proxied while information is extracted
  • Extend Honeytrap with existing honeypots (like Cowrie or Glutton), while using the logging and listening framework of Honeytrap
  • Advanced logging system with filtering and logging to Elasticsearch, Kafka, Splunk, Raven, file or console
  • Services are easily extensible and will extract as much information as possible
  • Low- to high-interaction honeypots, where connections are seamlessly upgraded to high interaction

To start using Honeytrap

See our documentation on docs.honeytrap.io.

Community

Join the honeytrap-users mailing list to discuss all things Honeytrap.

Creators

DutchSec’s mission is to safeguard the evolution of technology and therewith humanity. By delivering groundbreaking and solid, yet affordable security solutions we make sure no people, companies or institutes are harmed while using technology. We aim to make cyber security available for everyone.

Our team consists of boundary pushing cyber crime experts, grey hat hackers and developers specialized in big data, machine learning, data- and context driven security. By building open source and custom-made security tooling we protect and defend data, both offensively and proactively.

We work on the front line of security development and explore undiscovered grounds to fulfill our social (and corporate) responsibility. We are driven by the power of shared knowledge and constant learning, and hope to instigate critical thinking in all who use technology in order to increase worldwide safety. We therefore stimulate an open culture, without competition or rivalry, for our own team, as well as our clients. Security is what we do, safety is what you get.

Author: Honeytrap
Source Code: https://github.com/honeytrap/honeytrap 
License: View license

#go #golang #security #framework 


Colston.js: Fast, Lightweight and Zero Dependency Framework for Bunjs

🍥 Colston.js

Fast, lightweight and zero dependency framework for bunjs 🚀   

Background

Bun is the latest and arguably the fastest runtime environment for JavaScript, similar to Node and Deno. Unlike Node and Deno, Bun uses the JSC (JavaScriptCore) engine, which is part of the reason it is faster.

Bun is written in Zig, a low-level programming language with manual memory management.

Bun supports ~90% of the native Node.js APIs, including fs, path, etc., and also distributes its packages via npm, so both yarn and npm are supported.

Colstonjs is a fast, minimal and highly configurable TypeScript-based API framework, heavily inspired by Express.js and Fastify, for building high-performance APIs. Colstonjs is built entirely on Bun.

Prerequisite

🐎 Bun - Bun needs to be installed locally on your development machine.

Installation

💻 To install bun, head over to the official website and follow the installation instructions.

🧑‍💻 To install colstonjs, run

$ bun add colstonjs

NOTE

Although colstonjs is distributed via npm, it is only available for Bun; Node and Deno are not currently supported.

Usage

Import colstonjs into the application:

import Colston from "colstonjs";

// initializing Colston 
const serverOptions = {
  port: 8000,
  env: "development"
};

// initialize app with server options
const app: Colston = new Colston(serverOptions);

A simple get request

// server.ts
...
app.get("/", function(ctx) {
  return ctx.status(200).text("OK"); // OK
});
...

To allow the application to accept requests, we have to call the start() method with an optional port and/or callback function.

This will start an HTTP server listening on all interfaces (0.0.0.0) on the specified port.

// server.ts
...
app.start(port?, cb?);

NOTE

The port number can be passed to the app through the server options or as the first argument of the start() method. If a port number is passed both in the server options and to start(), the one passed to start() takes priority. If neither is provided, the app defaults to port 3000.

The callback is invoked once the connection is successfully established and the application is ready to accept requests.
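
Below is a minimal sketch of that precedence (the port numbers and log message are illustrative, not part of the framework):

// server.ts
import Colston from "colstonjs";

// port 8000 comes from the server options; 9000 is passed to start()
const app: Colston = new Colston({ port: 8000, env: "development" });

// the argument to start() takes priority, so this listens on port 9000
app.start(9000, () => console.log("server listening on port 9000"));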

Examples

Hello Bun

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.set("port", 8000);

app.get("/", (ctx: Context) => {
  return ctx.status(200).json({ message: "Hello World!" });
});

// start the server 
app.start(app.get('port'), () => console.log(`server listening on port ${app.get("port")}`));

Read request body as json or text

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.get("/", async (ctx: Context) => {
  const body = await ctx.request.json();
  const body2 = await ctx.request.text();

  return ctx.status(200).json({ body, body2 });
});

app.start(8000);

Using named parameters

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.get("/user/:id/name/:name", async (ctx: Context) => {
  const user = ctx.request.params;

  // make an API call to a backend datastore to retrieve user details
  const userDetails = await getUserDetails(user.id); // e.g: { id: 12345, name: "jane"}

  return ctx.status(200).json({ user: userDetails});
});

app.start(8000);

Using query parameters

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app.get('/?name&age', async (ctx: Context) => {
  const query = ctx.request.query;

  return ctx.status(200).json(query); // { name: "jane", age: 50 }
});

app.start(8000);

Method chaining

Colstonjs also provides the flexibility of method chaining: create one app instance and chain all methods on that single instance.

// server.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

app
  .get("/one", (ctx: Context) => {
      return ctx.status(200).text("One");
  })
  .post("/two", (ctx: Context) => {
      return ctx.status(200).text("Two");
  })
  .patch("/three", (ctx: Context) => {
      return ctx.status(200).text("Three");
  });

app.start(8000);

Running the demo note-app

Follow the steps below to run the demo note-taking API application in the examples directory.

  • Clone this repository
  • Change directory into the note-app folder by running cd examples/note-app
  • Start the http server to listen on port 8000 by running bun app.js
  • Use your favourite HTTP client (e.g. Postman) to make requests to the listening HTTP server.

 

Middleware

Colstonjs supports both route-level and application-level middleware.

Application-level middleware

This middleware is called on each request made to the server; one use case is logging.

// logger.ts
export function logger(ctx) {
  const { pathname } = new URL(ctx.request.url);
  console.info([new Date()], " - - " + ctx.request.method + " " + pathname + " HTTP 1.1" + " - ");
}

// server.ts
import Colston, { type Context } from "colstonjs";
import { logger } from "./logger";

const app: Colston = new Colston({ env: "development" });

// middleware
app.use(logger); // [2022-07-16T01:01:00.327Z] - - GET / HTTP 1.1 - 

app.get("/", (ctx: Context) => {
  return ctx.status(200).text("Hello logs...");
});

app.start(8000);

The .use() method accepts k middleware functions.

...
app.use(fn1, fn2, fn3, ..., fnK)
...

Route-level middleware

Route-level middleware, on the other hand, is added in between the route path and the handler function.

// request-id.ts
import crypto from "crypto";

export function requestID(ctx) {
  ctx.request.id = crypto.randomBytes(18).toString('hex');
}

// server.ts
import Colston, { type Context } from "colstonjs";
import { requestID } from "./request-id";

const app: Colston = new Colston({ env: "development" });

app.get("/", requestID, (ctx: Context) => {
  return ctx.status(200).text(`id: ${ctx.request.id}`); // id: 410796b6d64e3dcc1802f290dc2f32155c5b
});

app.start(8000);

It is also worth noting that we can have k route-level middleware functions

// server.ts
...
app.get("/", middleware1, middleware2, middleware3, ..., middlewareK, (ctx: Context) => { 
  return ctx.status(200).text(`id: ${ctx.request.id}`);
});
...

Context locals

ctx.locals is a plain javascript object that is specifically added to allow sharing of data amongst the chain of middlewares and/or handler functions.

// server.ts
...
let requestCount = 0;
app.post("/request-count", (ctx, next) => {
  /**
   * ctx.locals can be used to pass
   * data from one middleware to another 
   */
  ctx.locals.requestCount = requestCount;
  next();
}, (ctx, next) => {
  ++ctx.locals.requestCount;
  next();
}, (ctx) => {
  let count = ctx.locals.requestCount;
  return ctx.status(200).text(count); // 1
});

Router

Instantiating Router class

The Router class provides a way to separate route-specific declarations/blocks from the app logic, adding an extra abstraction layer to your project.

// router.ts
import Router from "Router";

// instantiate the router class
const router1 = new Router();
const router2 = new Router();

// define user routes - can be in a separate file or module.
router1.post('/user', (ctx) => { return ctx.status(200).json({ user }) });
router1.get('/users', (ctx) => { return ctx.json({ users }) });
router1.delete('/user?id', (ctx) => { return ctx.status(204).head() });

// define the notes route - can also be in separate module.
router2.get('/note/:id', (ctx) => { return ctx.json({ note }) });
router2.get('/notes', (ctx) => { return ctx.json({ notes }) });
router2.post('/note', (ctx) => { return ctx.status(201).json({ note }) });

export { router1, router2 };

Injecting Router instance into the app

// server.ts
import Colston from "colstonjs";
import { router1, router2 } from "./router";

const app: Colston = new Colston();

app.all(router1, router2);

// other routes can still be defined here
app.get("/", (ctx) => {
  return ctx.status(200).text("Welcome to colstonjs framework for bun");
});

app.start(8000)

The app.all() method takes k router instance objects, e.g. app.all(router1, router2, ..., routerK);. The examples folder contains a full note-taking backend app that utilizes this pattern.

Application instance cache

We can cache simple data which will live throughout the application instance lifecycle.

import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

// set properties to cache
app.set("age", 50);
app.set("name", "jane doe");

// check if a key exists in the cache
app.has("age"); // true
app.has("name"); // true

// retrieve the value stored in a given key
app.get("age"); // 50
app.get("name"); // jane doe

app.start(8000);

Error handler

Errors are handled internally by colstonjs; however, this error handler method can also be customised.

// index.ts
import Colston, { type Context } from "colstonjs";

const app: Colston = new Colston({ env: "development" });

// a broken route
app.get("/error", (ctx) => {
  throw new Error("This is a broken route");
});

// Custom error handler
app.error = async function (error) {
  console.error("This is an error...");
  // build and return a custom error response here
  const err = JSON.stringify(error);
  return Response.json(
    { message: error.message || "An error occurred", error: err },
    { status: 500 }
  );
}

app.start(8000);

Benchmark


Benchmarking was performed using the k6 load-testing library.

Colstonjs

Colstonjs on the Bun runtime environment

import Colston from "colstonjs";

const app = new Colston({ env: "development" });

app.get("/", (ctx) => {
  return ctx.text("OK");
});

app.start(8000)
$ ./k6 run index.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: index.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 40s max duration (incl. graceful stop):
           * default: 100 looping VUs for 10s (gracefulStop: 30s)


running (10.0s), 000/100 VUs, 240267 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs  10s

     ✓ success

     checks.........................: 100.00% ✓ 240267       ✗ 0     
     data_received..................: 16 MB   1.6 MB/s
     data_sent......................: 19 MB   1.9 MB/s
     http_req_blocked...............: avg=1.42µs  min=0s       med=1µs    max=9.24ms  p(90)=1µs    p(95)=2µs   
     http_req_connecting............: avg=192ns   min=0s       med=0s     max=2.18ms  p(90)=0s     p(95)=0s    
     http_req_duration..............: avg=4.1ms   min=89µs     med=3.71ms max=41.18ms p(90)=5.3ms  p(95)=6.53ms
       { expected_response:true }...: avg=4.1ms   min=89µs     med=3.71ms max=41.18ms p(90)=5.3ms  p(95)=6.53ms
     http_req_failed................: 0.00%   ✓ 0            ✗ 240267
     http_req_receiving.............: avg=24.17µs min=7µs      med=12µs   max=15.01ms p(90)=18µs   p(95)=21µs  
     http_req_sending...............: avg=6.33µs  min=3µs      med=4µs    max=14.78ms p(90)=7µs    p(95)=8µs   
     http_req_tls_handshaking.......: avg=0s      min=0s       med=0s     max=0s      p(90)=0s     p(95)=0s    
     http_req_waiting...............: avg=4.07ms  min=75µs     med=3.69ms max=41.16ms p(90)=5.27ms p(95)=6.48ms
     http_reqs......................: 240267  24011.563111/s
     iteration_duration.............: avg=4.15ms  min=117.88µs med=3.74ms max=41.25ms p(90)=5.37ms p(95)=6.62ms
     iterations.....................: 240267  24011.563111/s
     vus............................: 100     min=100        max=100 
     vus_max........................: 100     min=100        max=100 

Express

Express.js on the Node.js runtime environment

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("OK");
});

app.listen(8000);
$ ~/k6 run index.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: index.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 40s max duration (incl. graceful stop):
           * default: 100 looping VUs for 10s (gracefulStop: 30s)


running (10.0s), 000/100 VUs, 88314 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs  10s

     ✓ success

     checks.........................: 100.00% ✓ 88314       ✗ 0    
     data_received..................: 20 MB   2.0 MB/s
     data_sent......................: 7.1 MB  705 kB/s
     http_req_blocked...............: avg=1.54µs  min=0s     med=1µs     max=2.04ms  p(90)=1µs     p(95)=2µs    
     http_req_connecting............: avg=451ns   min=0s     med=0s      max=1.99ms  p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=11.28ms min=1.22ms med=10.04ms max=90.96ms p(90)=15.04ms p(95)=18.71ms
       { expected_response:true }...: avg=11.28ms min=1.22ms med=10.04ms max=90.96ms p(90)=15.04ms p(95)=18.71ms
     http_req_failed................: 0.00%   ✓ 0           ✗ 88314
     http_req_receiving.............: avg=18.18µs min=10µs   med=15µs    max=10.16ms p(90)=22µs    p(95)=25µs   
     http_req_sending...............: avg=6.53µs  min=3µs    med=5µs     max=12.61ms p(90)=8µs     p(95)=9µs    
     http_req_tls_handshaking.......: avg=0s      min=0s     med=0s      max=0s      p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=11.25ms min=1.2ms  med=10.01ms max=90.93ms p(90)=15ms    p(95)=18.68ms
     http_reqs......................: 88314   8818.015135/s
     iteration_duration.............: avg=11.32ms min=1.25ms med=10.08ms max=91.01ms p(90)=15.08ms p(95)=18.76ms
     iterations.....................: 88314   8818.015135/s
     vus............................: 100     min=100       max=100
     vus_max........................: 100     min=100       max=100

From the above results we can see that Colstonjs on Bun handles ~2.72x the number of requests per second compared with Express.js on Node. The benchmarking files can be found in this repository.

Contribute

PRs for features, enhancements and bug fixes are welcome. ✨ You can also look at the todo file for feature contributions. 🙏🏽

Todo

See the TODO doc here, feel free to also add to the list by editing the TODO file.

DevNote:

Although this version is fairly stable, it is still under active development (as is Bun) and might contain some bugs; hence, it is not ideal for a production app.

Author: Ajimae
Source Code: https://github.com/ajimae/colstonjs 
License: MIT license

#javascript #typescript #framework #dependency 


Bubbletea: A Powerful Little TUI Framework

Bubble Tea  

The fun, functional and stateful way to build terminal apps. A Go framework based on The Elm Architecture. Bubble Tea is well-suited for simple and complex terminal applications, either inline, full-window, or a mix of both.

Bubble Tea Example

Bubble Tea is in use in production and includes a number of features and performance optimizations we’ve added along the way. Among those is a standard framerate-based renderer, a renderer for high-performance scrollable regions which works alongside the main renderer, and mouse support.

Getting Started

We recommend starting with the basics tutorial followed by the commands tutorial, both of which should give you a good understanding of how things work.

There are a bunch of examples, too!

Components

For a bunch of basic user interface components check out Bubbles, the official Bubble Tea component library.

Text Input Example from Bubbles

Debugging

Debugging with Delve

Since Bubble Tea apps assume control of stdin and stdout, you’ll need to run delve in headless mode and then connect to it:

# Start the debugger
$ dlv debug --headless .
API server listening at: 127.0.0.1:34241

# Connect to it from another terminal
$ dlv connect 127.0.0.1:34241

Note that the default port used will vary on your system and per run, so check which address the first dlv run tells you to connect to.

Logging Stuff

You can log to a debug file to help debug Bubble Tea applications. To do so, include something like…

if len(os.Getenv("DEBUG")) > 0 {
    f, err := tea.LogToFile("debug.log", "debug")
    if err != nil {
        fmt.Println("fatal:", err)
        os.Exit(1)
    }
    defer f.Close()
}

…before you start your Bubble Tea program. To see what’s printed in real time, run tail -f debug.log while you run your program in another window.

Libraries we use with Bubble Tea

  • Bubbles: Common Bubble Tea components such as text inputs, viewports, spinners and so on
  • Lip Gloss: Style, format and layout tools for terminal applications
  • Harmonica: A spring animation library for smooth, natural motion
  • BubbleZone: Easy mouse event tracking for Bubble Tea components
  • Termenv: Advanced ANSI styling for terminal applications
  • Reflow: Advanced ANSI-aware methods for working with text

Bubble Tea in the Wild

For some Bubble Tea programs in production, see:

  • AT CLI: a utility for executing AT Commands via serial port connections
  • Aztify: bring Microsoft Azure resources under Terraform
  • Canard: an RSS client
  • charm: the official Charm user account manager
  • clidle: a Wordle clone for your terminal
  • container-canary: a container validator
  • dns53: dynamic DNS with Amazon Route53. Expose your EC2 quickly, securely and privately
  • fm: a terminal-based file manager
  • flapioca: Flappy Bird on the CLI!
  • fztea: connect to your Flipper's UI over serial or make it accessible via SSH
  • fork-cleaner: cleans up old and inactive forks in your GitHub account
  • gambit: play chess in the terminal
  • gembro: a mouse-driven Gemini browser
  • gh-b: GitHub CLI extension to easily manage your branches
  • gh-dash: GitHub CLI extension to display a dashboard of PRs and issues
  • gitflow-toolkit: a GitFlow submission tool
  • Glow: a markdown reader, browser and online markdown stash
  • gocovsh: explore Go coverage reports from the CLI
  • httpit: a rapid http(s) benchmark tool
  • IDNT: batch software uninstaller
  • kboard: a typing game
  • mergestat: run SQL queries on git repositories
  • mc: the official MinIO client
  • pathos: a CLI for editing a PATH env variable
  • portal: securely send transfers between computers
  • redis-viewer: browse Redis databases
  • Slides: a markdown-based presentation tool
  • Soft Serve: a command-line-first Git server that runs a TUI over SSH
  • StormForge Optimize Controller: a tool for experimenting with application configurations in Kubernetes
  • STTG: teletext client for SVT, Sweden’s national public television station
  • sttr: run various text transformations
  • tasktimer: a dead-simple task timer
  • termdbms: a keyboard and mouse driven database browser
  • ticker: a terminal stock watcher and stock position tracker
  • tran: securely transfer stuff between computers (based on portal)
  • tz: an aid for scheduling across multiple time zones
  • ugm: a unix user and group browser
  • Typer: a typing test
  • wishlist: an SSH directory

Feedback

We'd love to hear your thoughts on this tutorial. Feel free to drop us a note!

Acknowledgments

Bubble Tea is based on the paradigms of The Elm Architecture by Evan Czaplicki et alia and the excellent go-tea by TJ Holowaychuk. It’s inspired by the many great Zeichenorientierte Benutzerschnittstellen of days past.


Part of Charm.

The Charm logo

Charm热爱开源 • Charm loves open source

Download Details: 

Author: Charmbracelet
Source Code: https://github.com/charmbracelet/bubbletea 
License: MIT license

#go #golang #cli #framework 


Bun-bakery: A Web Framework for Bun

Bun Bakery

Bun-Bakery is a web framework for Bun. It uses a file-based router in the style of SvelteKit, so there is no need to define routes at runtime.

Quick Start

bun add @kapsonfire/bun-bakery

In your main script, import Router from bun-bakery and define your paths, e.g. in main.ts:

import {Router} from "@kapsonfire/bun-bakery"

new Router({
    assetsPath: import.meta.dir + '/assets/',
    routesPath: import.meta.dir + '/routes/'
})

After that, run the server and open http://localhost:3000 in your browser:

bun main.ts

Routing

Routes are added automatically when you create files inside your routesPath that export functions named after the corresponding HTTP methods. Given the example above, create index.ts inside routes/ and export a GET function that calls ctx.sendResponse().

import {Context} from "@kapsonfire/bun-bakery"

export async function GET(ctx: Context) {
    ctx.sendResponse(new Response('hello world!'));
}

Parameters

Routes can have parameters inside the dirname and/or filename. Just put the parameter name inside brackets and it will be added to ctx.params. For example, given routes/user/[username].ts, opening http://localhost:3000/user/kapsonfire

import {Context} from "@kapsonfire/bun-bakery"

export async function GET(ctx: Context) {
    ctx.sendResponse(new Response('hello '+ ctx.params.username +'!'));
}

will output hello kapsonfire!

Spread Parameters

Routes can also have wildcard/spread parameters. For example, given routes/users/[...usernames].ts, opening http://localhost:3000/users/kapsonfire/jarred/tricked

import {Context} from "@kapsonfire/bun-bakery"

export async function GET(ctx: Context) {
    ctx.sendAsJson(JSON.stringify(ctx.params));
}

will output

{"usernames":["kapsonfire","jarred","tricked"]}

Handlers

Inside the context variable you can access the native bun Request object inside ctx.request. ctx.sendResponse expects a native bun Response object.
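
As a small sketch, a hypothetical routes/echo.ts that reads the native Request from ctx.request and answers through ctx.sendResponse() (the file name and the POST handler are assumptions based on the conventions above):

import {Context} from "@kapsonfire/bun-bakery"

export async function POST(ctx: Context) {
    // ctx.request is the native Request object, so standard methods like json() are available
    const body = await ctx.request.json();

    // ctx.sendResponse() expects a native Response object
    ctx.sendResponse(new Response(JSON.stringify(body), {
        headers: { 'content-type': 'application/json' }
    }));
}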

Middlewares

bun-bakery supports several life-cycle hooks for adding middleware:

  • onRequest will be called before the router handles the request
  • onRoute will be called before the route function will be called
  • onResponse will be called after the route function finished

router.addMiddleware({
    onRequest: (ctx: Context) => { ctx.params.injected = "1"; console.log('onRequest', ctx) },
    onRoute: (ctx: Context) => console.log('onRoute', ctx),
    onResponse: (ctx: Context) => {
        ctx.response.headers.set('content-type', 'application/jsonx');
        console.log('onResponse', ctx)
    },
});

Author: Kapsonfire-DE
Source Code: https://github.com/Kapsonfire-DE/bun-bakery 
License: MIT license

#typescript #framework 


Baojs: A Fast, Minimalist Web Framework for The Bun JavaScript Runtime

🥟 Bao.js

A fast, minimalist web framework for the Bun JavaScript runtime.

⚡️ Bao.js is 3.7x faster than Express.js and has similar syntax for an easy transition.

Background

Bun was released as a fast, modern JavaScript runtime. One of the many improvements over Node.js was the 2.5x increase in HTTP request throughput when compared to the Node.js http module (reference).

Bao.js uses Bun's built-in Bun.serve module to serve routes and uses a radix tree for finding those routes, resulting in exceptionally low-latency response times. Bao is loosely syntactically modeled after Express.js and Koa.js, with a handful of changes and upgrades to help with speed and improve the developer experience.

Bao works by creating a Context object (ctx) for each request and passing that through middleware and the destination route. This Context object also has various shortcut methods to make life easier such as by having standard response types (e.g. ctx.sendJson({ "hello": "world" })). When a route or middleware is finished, it should return the Context object to pass it along the chain until it is sent to the user.

The code is well documented and uses TypeScript, but more complete documentation will be added here in the future. It is not recommended to use this in production yet as both Bun and Bao.js are in beta.

Install

Although this package is distributed via NPM, it is only compatible with Bun (not Node.js) as it uses native Bun libraries.

You must first install Bun and use it to run your server.

🧑‍💻 To install Bao.js, in your project directory, run bun add baojs

Usage

You can import Bao by using

import Bao from "baojs";

const app = new Bao();

To create a GET route, run

app.get("/", (ctx) => {
  return ctx.sendText("OK");
});

Then to get Bao to listen for requests, run

app.listen();

This will start a web server on the default port 3000 listening on all interfaces (0.0.0.0). The port can be modified in the listen() options

app.listen({ port: 8080 });

Examples

Hello World

Run bun index.ts

// index.ts
import Bao from "baojs";

const app = new Bao();

app.get("/", (ctx) => {
  return ctx.sendText("Hello World!");
});

app.listen();

Read request body

Run bun index.ts

// index.ts
import Bao from "baojs";

const app = new Bao();

app.post("/pretty", async (ctx) => {
  const json = await ctx.req.json();
  return ctx.sendPrettyJson(json);
});

app.listen();

Named parameters

Run bun index.ts

// index.ts
import Bao from "baojs";

const app = new Bao();

app.get("/user/:user", async (ctx) => {
  const user = await getUser(ctx.params.user);
  return ctx.sendJson(user);
});

app.get("/user/:user/:post/data", async (ctx) => {
  const user = await getUser(ctx.params.user);
  const post = await getPost(ctx.params.post);
  return ctx.sendJson({ post: post, byUser: user });
});

app.listen();

Wildcards

Wildcards differ from named parameters in that wildcards must be at the end of a path, as they catch everything after them.

The example below would produce the following:

  • GET /posts/123 => /123
  • GET /posts/123/abc => /123/abc

Run bun index.ts

// index.ts
import Bao from "baojs";

const app = new Bao();

app.get("/posts/*post", (ctx) => {
  return ctx.sendText(ctx.params.post);
});

app.listen();

Custom error handler

Run bun index.ts

// index.ts
import Bao from "baojs";

const app = new Bao();

app.get("/", (ctx) => {
  return ctx.sendText("Hello World!");
});

// A perpetually broken POST route
app.post("/broken", (ctx) => {
  throw "An intentional error has occurred in POST /broken";
  return ctx.sendText("I will never run...");
});

// Custom error handler
app.errorHandler = (error: Error) => {
  logErrorToLoggingService(error);
  return new Response("Oh no! An error has occurred...");
};

// Custom 404 not found handler
app.notFoundHandler = () => {
  return new Response("Route not found...");
};

app.listen();

Middleware

Middleware is split into middleware that runs before the routes, and middleware that runs after them. This helps to contribute to the performance of Bao.js.

// index.ts
import Bao from "baojs";

const app = new Bao();

// Runs before the routes
app.before((ctx) => {
  const user = getUser(ctx.headers.get("Authorization"));
  if (user === null) return ctx.sendEmpty({ status: 403 }).forceSend();
  ctx.extra["auth"] = user;
  return ctx;
});

app.get("/", (ctx) => {
  return ctx.sendText(`Hello ${ctx.extra.auth.displayName}!`);
});

// Runs after the routes
app.after((ctx) => {
  ctx.res.headers.append("version", "1.2.3");
  return ctx;
});

app.listen();

The .forceSend() method tells Bao to not pass the Context object to anything else but instead send it straight to the user. This is useful in cases like this where we don't want unauthenticated users to be able to access our routes and so we just reject their request before it can make it to the route handler.

Benchmarks

Benchmarks were conducted using wrk with the results shown below.

Bao.js

$ wrk -t12 -c 500 -d10s http://localhost:3000/
Running 10s test @ http://localhost:3000/
  12 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.38ms    1.47ms  39.19ms   76.59%
    Req/Sec     2.67k   195.60     2.90k    82.33%
  318588 requests in 10.01s, 24.31MB read
  Socket errors: connect 0, read 667, write 0, timeout 0
Requests/sec:  31821.34
Transfer/sec:      2.43MB
import Bao from "baojs";

const app = new Bao();

app.get("/", (ctx) => {
  return ctx.sendText("OK");
});

app.listen();

Express.js

Bao.js can handle 3.7x more requests per second, with an equal 3.7x reduction in latency per request when compared to Express.js.

$ wrk -t12 -c 500 -d10s http://localhost:5000/
Running 10s test @ http://localhost:5000/
  12 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    56.34ms   13.42ms 246.38ms   90.62%
    Req/Sec   729.26    124.31     0.88k    86.42%
  87160 requests in 10.01s, 18.95MB read
  Socket errors: connect 0, read 928, write 0, timeout 0
Requests/sec:   8705.70
Transfer/sec:      1.89MB
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("OK");
});

app.listen(5000);

Koa.js

Bao.js can handle 1.2x more requests per second, with an equal 1.2x reduction in latency per request when compared to the modern Koa.js.

$ wrk -t12 -c 500 -d10s http://localhost:1234/
Running 10s test @ http://localhost:1234/
  12 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.12ms    2.47ms  65.03ms   91.12%
    Req/Sec     2.26k   280.16     4.53k    90.46%
  271623 requests in 10.11s, 42.74MB read
  Socket errors: connect 0, read 649, write 0, timeout 0
Requests/sec:  26877.94
Transfer/sec:      4.23MB
const Koa = require("koa");
const app = new Koa();

app.use((ctx) => {
  ctx.body = "OK";
});

app.listen(1234);

Fastify

Bao.js is about equal to Fastify in both throughput and latency.

$ wrk -t12 -c 500 -d10s http://localhost:5000/
Running 10s test @ http://localhost:5000/
  12 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.32ms    1.90ms  60.53ms   78.74%
    Req/Sec     2.68k   274.95     3.25k    72.08%
  319946 requests in 10.01s, 50.65MB read
  Socket errors: connect 0, read 681, write 0, timeout 0
Requests/sec:  31974.36
Transfer/sec:      5.06MB
const fastify = require("fastify");
const app = fastify({ logger: false });

app.get("/", () => "OK");

app.listen({ port: 5000 });

Contribute

PRs are welcome! If you're looking for something to do, maybe take a look at the Issues?

If updating the README, please stick to the standard-readme specification.

Author: Mattreid1
Source Code: https://github.com/mattreid1/baojs 
License: MIT license

#typescript #framework #javascript 


SitePen/dstore: A Data infrastructure Framework

dstore

The dstore package is a data infrastructure framework, providing the tools for modelling and interacting with data collections and objects. dstore is designed to work with a variety of data storage mediums, and provide a consistent interface for accessing data across different user interface components. There are several key entities within dstore:

Collection

A Collection is the interface for a collection of items, which can be filtered or sorted to create new collections. When implementing this interface, every method and property is optional, and is only needed if the functionality it provides is required. However, all the included collections implement every method. Note that the objects in the collection might not be immediately retrieved from the underlying data storage until they are actually accessed through forEach(), fetch(), or fetchRange(). These fetch methods return a snapshot of the data, and if the data has changed, these methods can later be used to retrieve the latest data.

Querying

Several methods are available for querying collections. These methods allow you to define a query through several steps. Normally, stores are queried first by calling filter() to specify which objects to be included, if the filtering is needed. Next, if an order needs to be specified, the sort() method is called to ensure the results will be sorted. A typical query from a store would look like:

store.filter({priority: 'high'}).sort('dueDate').forEach(function (object) {
    // called for each item in the final result set
});

In addition, the track() method may be used to track store changes, ensuring notifications include index information about object changes, and keeping result sets up-to-date after a query. The fetch() method is an alternate way to retrieve results, providing a promise to an array for accessing query results. The sections below describe each of these methods and how to use them.

Filtering

Filtering is used to specify a subset of objects to be returned in a filtered collection. The simplest use of the filter() method is to call it with a plain object as the argument, specifying name-value pairs that the returned objects must match. Alternatively, a filter builder can be used to construct more sophisticated filter conditions. To use the filter builder, first construct a new filter object from the Filter constructor on the collection you would be querying:

var filter = new store.Filter();

We now have a filter object that represents a filter without any operators applied yet. We can create new filter objects by calling the operator methods on the filter object. The operator methods will return new filter objects that hold the operator condition. For example, to specify that we want to retrieve objects with a priority property with a value of "high", and a stars property with a value greater than 5, we could write:

var highPriorityFiveStarFilter = filter.eq('priority', 'high').gt('stars', 5);

This filter object can then be passed as the argument to the filter() method on a collection/store:

var highPriorityFiveStarCollection = store.filter(highPriorityFiveStarFilter);

The following methods are available on the filter objects. First are the property filtering methods, which each take a property name as the first argument, and a property value to compare for the second argument:

  • eq: Property values must equal the filter value argument.
  • ne: Property values must not equal the filter value argument.
  • lt: Property values must be less than the filter value argument.
  • lte: Property values must be less than or equal to the filter value argument.
  • gt: Property values must be greater than the filter value argument.
  • gte: Property values must be greater than or equal to the filter value argument.
  • in: An array should be passed in as the second argument, and property values must be equal to one of the values in the array.
  • match: Property values must match the provided regular expression.
  • contains: Filters for objects where the specified property's value is an array and the array contains any value that equals the provided value or satisfies the provided expression.

The following are combinatorial methods:

  • and: This takes two arguments that are other filter objects, that both must be true.
  • or: This takes two arguments that are other filter objects, where one of the two must be true.
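
As a small sketch combining the property and combinatorial operators above (the property names and values are illustrative):

var filter = new store.Filter();

// high-priority items, or items whose status is one of the listed values
var highPriority = filter.eq('priority', 'high');
var flagged = filter.in('status', ['blocked', 'overdue']);

var results = store.filter(filter.or(highPriority, flagged));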

Nesting

A few of the filters can also be built upon with other collections (potentially from other stores). In particular, you can provide a collection as the argument for the in or contains filter. This provides functionality similar to nested queries or joins. This generally needs to be combined with a select to return the correct values for matching. For example, suppose we want to find all the tasks in high-priority projects, where the task store has a projectId property/column that is a foreign key referencing objects in a project store. We can perform our nested query:

var tasksOfHighPriorityProjects = taskStore.filter(
    new Filter().in('projectId',
        projectStore.filter({ priority: 'high' }).select('id')
    )
);

Implementations

Different stores may implement filtering in different ways. The dstore/Memory will perform filtering in memory. The dstore/Request/dstore/Rest stores will translate the filters into URL query strings to send to the server. Simple queries will be in standard URL-encoded query format and complex queries will conform to RQL syntax (which is a superset of standard query format).

New filter methods can be created by subclassing dstore/Filter and adding new methods. New methods can be created by calling Filter.filterCreator and by providing the name of the new method. If you will be using new methods with stores that mix in SimpleQuery like memory stores, you can also add filter comparators by overriding the _getFilterComparator method, returning comparators for the additional types, and delegating to this.inherited for the rest.

dstore/SimpleQuery provides a simple shorthand for nested property queries - a side-effect of this is that property names that contain the period character are not supported. Example nested property query:

store.filter({ 'name.last': 'Smith' })

This would match the object:

{
	name: {
		first: 'John',
		last: 'Smith'
	}
}

For the dstore/Request/dstore/Rest stores, you can define alternate serializations of filters to URL queries for existing or new methods by overriding the _renderFilterParams. This method is called with a filter object (and by default is recursively called by combinatorial operators), and should return a string serialization of the filter, that will be inserted into the query string of the URL sent to the server.

The filter objects themselves consist of tree structures. Each filter object has two properties, the operator type, which corresponds to whichever operator was used (like eq or and), and the args, which is an array of values provided to the operator. With and and or operators, the arguments are other filter objects, forming a hierarchy. When filter operators are chained together (through sequential calls), they are combined with the and operator (each operator defined in a sub-filter object).
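
For instance, the high-priority/five-star filter built earlier corresponds roughly to the following structure (shown as a sketch, not a literal API guarantee):

var filter = new store.Filter();
var highPriorityFiveStarFilter = filter.eq('priority', 'high').gt('stars', 5);

// conceptually, the chained calls combine into an 'and' filter object:
// {
//     type: 'and',
//     args: [
//         { type: 'eq', args: ['priority', 'high'] },
//         { type: 'gt', args: ['stars', 5] }
//     ]
// }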

Collection API

The following property and methods are available on dstore collections:

Property Summary

  • Model - This constructor represents the data model class to use for the objects returned from the store. All objects returned from the store should have their prototype set to the prototype property of the model, such that objects from this store should return true from object instanceof collection.Model.

Method Summary

filter(query)

This filters the collection, returning a new subset collection. The query can be an object, or a filter object, with the properties defining the constraints on matching objects. Some stores, like server or RQL stores, may accept string-based queries. Stores with in-memory capabilities (like dstore/Memory) may accept a function for filtering as well, but using the filter builder will ensure the greatest cross-store compatibility.

matchesFilter(item)

This tests the provided item to see if it matches the current filter or not.

sort(property, [descending])

This sorts the collection, returning a new ordered collection. Note that if sort is called multiple times, previous sort calls may be ignored by the store (it is up to the store implementation how to handle that). If a multiple sort order is desired, use the array of sort orders described below.

sort([highestSortOrder, nextSortOrder...])

This also sorts the collection, but can be called to define multiple sort orders by priority. Each argument is an object with a property property and an optional descending property (defaults to ascending, if not set), to define the order. For example:

collection.sort([
	{ property: 'lastName' },
	{ property: 'firstName' }
])

would result in a new collection sorted by lastName, with firstName used to sort identical lastName values.

select([property, ...])

This selects specific properties that should be included in the returned objects.

select(property)

This will indicate that the return results will consist of the values of the given property of the queried objects. For example, this would return a collection of name values, pulled from the original collection of objects:

collection.select('name');

forEach(callback, thisObject)

This iterates over the query results. Note that this may be executed asynchronously and the callback may be called after this function returns. This will return a promise to indicate the completion of the iteration. This method forces a fetch of the data.

fetch()

Normally collections may defer the execution (like making an HTTP request) required to retrieve the results until they are actually accessed. Calling fetch() will force the data to be retrieved, returning a promise to an array.

fetchRange({start: start, end: end})

This fetches a range of objects from the collection, returning a promise to an array. The returned (and resolved) promise should have a totalLength property with a promise that resolves to a number indicating the total number of objects available in the collection.
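
A brief sketch of fetching a range and reading the total length, following the descriptions above (the filter and page size are illustrative):

var resultsPromise = store.filter({ inStock: true }).fetchRange({ start: 0, end: 25 });

resultsPromise.then(function (results) {
    console.log('first page of objects:', results);
});

// totalLength is a promise resolving to the total number of matching objects
resultsPromise.totalLength.then(function (total) {
    console.log('total available:', total);
});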

on(type, listener, filterEvents)

This allows you to define a listener for events that take place on the collection or parent store. When an event takes place, the listener will be called with an event object as the single argument. The following event types are defined:

  • add - This indicates that a new object was added to the store. The new object is available on the target property.
  • update - This indicates that an object in the store was updated. The updated object is available on the target property.
  • delete - This indicates that an object in the store was removed. The id of the object is available on the id property.

Setting filterEvents to true indicates the listener will be called only when the emitted event references an item (event.target) that satisfies the collection's current filter query. Note: if filterEvents is set to true for type update, the listener will be called only when the item passed to put matches the collection's query. The original item will not be evaluated. For example, a store contains items marked "to-do" and items marked "done", and one collection uses a query looking for "to-do" items while another looks for "done" items. Both collections are listening for "update" events. If an item is updated from "to-do" to "done", only the "done" collection will be notified of the update.

If detecting when an item is removed from a collection due to an update is desired, set filterEvents to false and use the matchesFilter(item) method to test if each item updated is currently in the collection.

There is also a corresponding emit(type, event) method (from the Store interface) that can be used to emit events when objects have changed.
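
For example, a sketch of listening for additions that match a collection's filter (the store and filter values are illustrative):

var doneTasks = store.filter({ status: 'done' });

// filterEvents = true: only notify when event.target satisfies this collection's filter
doneTasks.on('add', function (event) {
    console.log('a task was completed:', event.target);
}, true);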

track()

This method will create a new collection that will be tracked and updated as the parent collection changes. This will cause the events sent through the resulting collection to include an index and previousIndex property to indicate the position of the change in the collection. This is an optional method, and is usually provided by dstore/Trackable. For example, you can create an observable store class, by using dstore/Trackable as a mixin:

var TrackableMemory = declare([Memory, Trackable]);

Trackable requires client side querying functionality. Client side querying functionality is available in dstore/SimpleQuery (and inherited by dstore/Memory). If you are using a Request, Rest, or other server side store, you will need to implement client-side query functionality (by implementing querier methods), or mixin SimpleQuery:

var TrackableRest = declare([Rest, SimpleQuery, Trackable]);

Once we have created a new instance from this store, we can track a collection, which could be the top level store itself, or a downstream filtered or sorted collection:

var store = new TrackableMemory({ data: [...] });
var filteredSorted = store.filter({ inStock: true }).sort('price');
var tracked = filteredSorted.track();

Once we have a tracked collection, we can listen for notifications:

tracked.on('add, update, delete', function (event) {
    var newIndex = event.index;
    var oldIndex = event.previousIndex;
    var object = event.target;
});

Trackable requires fetched data to determine the position of modified objects and can work with either full or partial data. We can do a fetch() or forEach() to access all the items in the filtered collection:

tracked.fetch();

Or we can do a fetchRange() to make individual range requests for items in the collection:

tracked.fetchRange({ start: 0, end: 10 });

Trackable will keep track of each page of data, and send out notifications based on the data it has available, along with index information, indicating the new and old position of the object that was modified. Regardless of whether full or partial data is fetched, tracked events and the indices they report are relative to the entire collection, not relative to individual fetched ranges. Tracked events also include a totalLength property indicating the total length of the collection.

If an object is added or updated, and falls outside of all of the fetched ranges, the index will be undefined. However, if the object falls between fetched ranges (but not within one), there will also be a beforeIndex that indicates the index of the first object that the new or updated object comes before.

Custom Querying

Custom query methods can be created using the dstore/QueryMethod module. We can define our own query method, by extending a store, and defining a method with the QueryMethod. The QueryMethod constructor should be passed an object with the following possible properties:

  • type - This is a string, identifying the query method type.
  • normalizeArguments - This can be a function that takes the arguments passed to the method, and normalizes them for later execution.
  • applyQuery - This is an optional function that can be called on the resulting collection that is returned from the generated query method.
  • querierFactory - This is an optional function that can be used to define the computation of the set of objects returned from a query, on client-side or in-memory stores. It is called with the normalized arguments, and then returns a new function that will be called with an array, and is expected to return a new array.

For example, we could create a getChildren method that queries for children objects, by simply returning the children property array from a parent:

declare([Memory], {
    getChildren: new QueryMethod({
        type: 'children',
        querierFactory: function (parent) {
            var parentId = this.getIdentity(parent);

            return function (data) {
                // note: in this case, the input data is ignored as this querier
                // returns an object's array of children instead

                // return the children of the parent
                // or an empty array if the parent no longer exists
                var parent = this.getSync(parentId);
                return parent ? parent.children : [];
            };
        }
	})
});

Store

A store is an extension of a collection and is an entity that not only contains a set of objects, but also provides an interface for identifying, adding, modifying, removing, and querying data. Below is the definition of the store interface. Every method and property is optional, and is only needed if the functionality it provides is required (although the provided full stores (Rest and Memory) implement all the methods except transaction() and getChildren()). Every method returns a promise for the specified return value, unless otherwise noted.

In addition to the methods and properties inherited from Collections, the Store API also exposes the following properties and methods.

Property Summary

  • idProperty - If the store has a single primary key, this indicates the property to use as the identity property. The values of this property should be unique. This defaults to "id".
  • Model - This is the model class to use for all the data objects that originate from this store. By default this will be set to null, so that all objects will be plain objects, but this property can be set to the class from dmodel/Model or any other model constructor. You can create your own model classes (and schemas), and assign them to a store. All objects that come from the store will have their prototype set such that they will be instances of the model. The default value of null will disable any prototype modifications and leave data as plain objects.
  • defaultNewToStart - If a new object is added to a store, this indicates whether it should go to the start or the end. By default, it will be placed at the end.

Method Summary

  • get(id) - This retrieves an object by its identity. This returns a promise for the object. If no object was found, the resolved value should be undefined.
  • getIdentity(object) - This returns an object's identity (note: this should always execute synchronously).
  • put(object, [directives]) - This stores an object. It can be used to update or create an object. This returns a promise that may resolve to the object after it has been saved.
  • add(object, [directives]) - This creates an object, and throws an error if the object already exists. This should return a promise for the newly created object.
  • remove(id) - This deletes an object, using the identity to indicate which object to delete. This returns a promise that resolves to a boolean value indicating whether the object was successfully removed.
  • transaction() - Starts a transaction and returns a transaction object. The transaction object should include a commit() and abort() method to commit and abort transactions, respectively. Note that a store user might not call transaction() prior to using put, delete, etc., in which case these operations effectively could be thought of as "auto-commit" style actions.
  • create(properties) - Creates and returns a new instance of the data model. The returned object will not be stored in the object store until its save() method is called, or the store's add() is called with this object. This should always execute synchronously.
  • getChildren(parent) - This retrieves the children of the provided parent object. This should return a new collection representing the children.
  • mayHaveChildren(parent) - This should return true or false indicating whether or not a parent might have children. This should always return synchronously, as a way of checking if children might exist before actually retrieving all the children.
  • getRootCollection() - This should return a collection of the top level objects in a hierarchical store.
  • emit(type, event) - This can be used to dispatch event notifications, indicating changes to the objects in the collection. This should be called by put, add, and remove methods if the autoEmit property is false. This can also be used to notify stores if objects have changed from other sources (if a change has occurred on the server, from another user). There is a corresponding on method on collections for listening to data change events. Also, the Trackable mixin can be used to add index/position information to the events.
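
A short sketch of the asynchronous store methods above in use (the object shape and id are illustrative):

store.add({ id: 1, name: 'new item' }).then(function () {
    // update the object we just created
    return store.put({ id: 1, name: 'renamed item' });
}).then(function () {
    return store.get(1);
}).then(function (item) {
    console.log(item.name); // 'renamed item'
    // remove it again; resolves to a boolean indicating success
    return store.remove(1);
});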

Synchronous Methods

Stores that can perform synchronous operations may provide analogous methods for get, put, add, and remove that end with Sync to provide synchronous support. For example getSync(id) will directly return an object instead of a promise. The dstore/Memory store provides Sync methods in addition to the promise-based methods. This behavior has been separated into distinct methods to provide consistent return types.

It is generally advisable to always use the asynchronous methods so that client code does not have to be updated in case the store is changed. However, if you have very performance intensive store accesses, the synchronous methods can be used to avoid the minor overhead imposed by promises.

  • getSync(id) - This retrieves an object by its identity. If no object was found, the returned value should be undefined.
  • putSync(object, [directives]) - This stores an object. It can be used to update or create an object. This returns the object after it has been saved.
  • addSync(object, [directives]) - This creates an object, and throws an error if the object already exists. This should return the newly created object.
  • removeSync(id) - This deletes an object, using the identity to indicate which object to delete. This returns a boolean value indicating whether the object was successfully removed.
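
For example, a brief sketch of the synchronous variants on a Memory store (the data is illustrative):

var memory = new Memory({ data: [ { id: 1, name: 'one' } ] });

var item = memory.getSync(1);            // returns the object directly, not a promise
memory.putSync({ id: 1, name: 'uno' });  // returns the saved object
var removed = memory.removeSync(1);      // true if the object was removed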

Included Stores

The dstore package includes several store implementations that can be used for the needs of different applications. These include:

  • Memory - This is a simple memory-based store that takes an array and provides access to the objects in the array through the store interface.
  • Request - This is a simple server-based collection that sends HTTP requests following REST conventions to access and modify data requested through the store interface.
  • Rest - This is a store built on Request that implements add, remove, and update operations using HTTP requests following REST conventions.
  • RequestMemory - This is a Memory-based store that will retrieve its contents from a server/URL.
  • LocalDB - This is a store based on the browser's local database/storage capabilities. Data stored in this store will be persisted in the local browser.
  • Cache - This is a store mixin that combines a master and caching store to provide caching functionality.
  • Trackable - This is a store mixin that adds index information to add, update, and remove events of tracked store instances. This adds a track() method for tracking stores.
  • Tree - This is a store mixin that provides hierarchical querying functionality, defining a parent/child relationships for the display of data in a tree.
  • SimpleQuery - This is a mixin with basic querying functionality, which is extended by the Memory store, and can be used to add client side querying functionality to the Request/Rest store.
  • Store - This is a base store, with the base methods that are used by all other stores.

Constructing Stores

All the stores can be instantiated with an options argument to the constructor, to provide properties to be copied to the store. This can include methods to be added to the new store.

Stores can also be constructed by combining a base store with mixins. The various store mixins are designed to be combined through dojo declare to create a class to instantiate a store. For example, if you wish to add tracking and tree functionality to a Memory store, we could combine these:

// create the class based on the Memory store with added functionality
var TrackedTreeMemoryStore = declare([Memory, Trackable, Tree]);
// now create an instance
var myStore = new TrackedTreeMemoryStore({ data: [...] });

The store mixins can only be used as mixins, but stores can be combined with other stores as well. For example, if we wanted to add the Rest functionality to the RequestMemory store (so the entire store data was retrieved from the server on construction, but data changes are sent to the server), we could write:

var RestMemoryStore = declare([Rest, RequestMemory]);
// now create an instance
var myStore = new RestMemoryStore({ target: '/data-source/' });

Another common case is needing to add tracking to the dstore/Rest store, which requires client side querying, which can be provided by dstore/SimpleQuery:

var TrackedRestStore = declare([Rest, SimpleQuery, Trackable]);

Memory

The Memory store is a basic client-side in-memory store that can be created from a simple JavaScript array. When creating a Memory store, the data (an array of objects) is provided in the data property of the constructor options. All the objects are considered to be existing objects and must have identities (this is not "creating" new objects: no events are fired for the objects that are provided, nor are identities assigned).

For example:

myStore = new Memory({
    data: [{
        id: 1,
        aProperty: ...,
        ...
    }]
});

The array supplied as the data property will not be copied; it will be used as-is as the store's data. It can be changed at run-time with the setData method.

Methods

The Memory store provides synchronous equivalents of standard asynchronous store methods that directly return objects or results, without a promise.

  • getSync(id) - Retrieve an object by its identity. If no object is found, the returned value is undefined.
  • addSync(object, options) - Create an object (throws an error if the object already exists). Returns the newly created object.
  • putSync(object, options) - Store an object. Can be used to update or create an object. Returns the object after it has been saved.
  • removeSync(id) - Delete an object, using the identity to indicate which object to delete. Returns a boolean value indicating whether the object was successfully removed.
  • setData(data) - Set the store's data to the specified array.
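
For example, assuming a Memory store like the one above has already been created (the item values here are illustrative), the synchronous methods and setData can be used directly:

// a minimal sketch; assumes dstore/Memory has been loaded (e.g. via require)
var inventoryStore = new Memory({
    data: [
        { id: 1, name: 'widget', price: 9 },
        { id: 2, name: 'gadget', price: 15 }
    ]
});

var item = inventoryStore.getSync(1);   // returns the object directly, not a promise
item.price = 12;
inventoryStore.putSync(item);           // returns the saved object
inventoryStore.removeSync(2);           // returns true if the object was removed

// replace the store's data entirely
inventoryStore.setData([{ id: 3, name: 'sprocket', price: 4 }]);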

Request

This is a simple collection for accessing data by retrieval from a server (typically through XHR). The target URL path to use for requests can be defined with the target property. A request for data will be sent to the server when a fetch occurs (due to a call to fetch(), fetchRange(), or forEach()). Request supports several properties for defining the generation of query strings (a configuration sketch follows this list):

  • sortParam - This will specify the query parameter to use for specifying the sort order. This will default to sort(<properties>) in the query string.
  • selectParam - This will specify the query parameter to use for specifying the select properties. This will default to select(<properties>) in the query string.
  • rangeStartParam and rangeCountParam - This will specify the query parameter to use for specifying the range. This will default to limit(<count>,<start>) in the query string.
    • e.g. limit(50,200) will request items 200-249
  • useRangeHeaders - This will specify that range information should be specified in the Range (or X-Range) header.
    • e.g. Range: items 200-249 will request items 200-249
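
For example, a hypothetical Request collection configured with these properties might look like the sketch below; the target URL and parameter names are made up for illustration, and the exact serialization may differ slightly:

// assumes dstore/Request has been loaded
var remoteCollection = new Request({
    target: '/api/items/',
    sortParam: 'sort',          // roughly: ?sort=+name instead of sort(+name)
    selectParam: 'fields',      // roughly: ?fields=name,price
    rangeStartParam: 'offset',  // used together with rangeCountParam
    rangeCountParam: 'limit'    // roughly: ?offset=200&limit=50
});

// triggers a request for items 200-249, e.g. /api/items/?sort=+name&offset=200&limit=50
remoteCollection.sort('name').fetchRange({ start: 200, end: 250 });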

Server considerations for a Request/Rest store

The response should be in JSON format. It should include the data and a number indicating the total number of items:

  • data: the response can either be a JSON array containing the items or a JSON object with an items property that is an array containing the items
  • total: if the response is an array then the total should be specified in the Content-Range header, e.g.:
    • Content-Range: items 0-24/500
    • If the response is an object then the total should be specified on the total property of the object, e.g.:
{
    "total": 500,
    "items": [ /* ...items */ ]
}

Rest

This store extends the Request store, adding functionality for adding, updating, and removing objects. All modifications trigger HTTP requests to the server using the corresponding RESTful HTTP methods. A get() triggers a GET, remove() triggers a DELETE, and add() and put() trigger a PUT when an id is available or provided; a POST is used to create new objects whose ids are assigned by the server.

For example:

myStore = new Rest({
    target: '/PathToData/'
});

All modification or retrieval methods (except getIdentity()) on Request and Rest execute asynchronously, returning a promise.

The server must respond to GET requests for an item by ID with an object representing the item (not an array).
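
For example, the asynchronous methods on a Rest store return promises that resolve once the corresponding HTTP request completes (the URL and data below are illustrative):

var restStore = new Rest({ target: '/PathToData/' });

// GET /PathToData/42
restStore.get(42).then(function (item) {
    item.done = true;
    // PUT /PathToData/42 (an id is available, so PUT rather than POST)
    return restStore.put(item);
}).then(function (saved) {
    // DELETE /PathToData/42
    return restStore.remove(saved.id);
});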

Store

This is the base class used for all stores, providing basic functionality for tracking collection states and converting objects to be model instances. This (or any of the other classes above) can be extended for creating custom stores.

RequestMemory

This store provides client-side querying functionality, but will load its data from the server up-front, using the provided URL. This is an asynchronous store since queries and data retrieval may be made before the data has been retrieved from the server.

RequestMemory accepts the same target option for its URL as Request and Rest. Additionally, it supports a refresh method which can be called (and optionally passed a new target URL) to reload data from the server endpoint.
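
For example (the URLs are placeholders):

var requestMemoryStore = new RequestMemory({ target: '/data/items.json' });

// queries run client-side once the data has been loaded from the server
requestMemoryStore.filter({ active: true }).fetch().then(function (results) {
    // work with the matching items
});

// later, reload the data, optionally from a different URL
requestMemoryStore.refresh('/data/other-items.json');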

LocalDB

This is a store based on the browser's local database/storage capabilities. Data stored in this store will be persisted in the local browser. LocalDB will automatically load the best storage implementation based on the browser's capabilities. These storage implementations follow the same interface. LocalDB will attempt to load one of these stores (highest priority first; these can also be used directly if you do not want automatic selection):

  • dstore/db/IndexedDB - This uses the IndexedDB API. This is available on the latest version of all major browsers (introduced in IE 10 and Safari 7.1/8, but with some serious bugs).
  • dstore/db/SQL - This uses the WebSQL API. This is available on Safari and Chrome.
  • dstore/db/LocalStorage - This uses the localStorage API. This is available on all major browsers, going back to IE8. The localStorage API does not provide any indexed querying, so this loads the entire database in memory. This can be very expensive for large datasets, so this store is generally avoided, except to provide functionality on old versions of IE.
  • dstore/db/has - This is not a store, but provides feature detection (has tests) for indexeddb and sql.

The LocalDB store requires a few extra parameters not needed by other stores. First, it needs a database configuration object. A database configuration object defines all the stores or tables that are used by the stores, and which properties to index. There should be a single database configuration object for the entire application, and it should be passed to all the store instances. The configuration object should include a version (which should be incremented whenever the configuration is changed), and a set of store definitions in the stores object. Within each store definition, every property that will be used should be defined, and each property value should be a property configuration object with the following optional properties:

  • preference - This defines the priority of using this property for index-based querying. This should be a larger number for more unique properties. A boolean property would generally have a preference of 1, and a completely unique property should be 100.
  • indexed - This is a boolean indicating if a property should be indexed. This defaults to true.
  • multiEntry - This indicates the property will have an array of values, and should be indexed correspondingly. Internet Explorer's implementation of IndexedDB does not currently support multiEntry.
  • autoIncrement - This indicates if a property should automatically increment.

Alternately a number can be provided as a property configuration, and will be used as the preference.

An example database configuration object is:

var dbConfig = {
    version: 5,
    stores: {
        posts: {
            name: 10,
            id: {
                autoIncrement: true,
                preference: 100
            },
            tags: {
                multiEntry: true,
                preference: 5
            },
            content: {
                indexed: false
            }
        },
        comments: {
            author: {},
            content: {
                indexed: false
            }
        }
    }
};

In addition, each store should define a storeName property to identify which database store corresponds to the store instance. For example:

var postsStore = new LocalDB({ dbConfig: dbConfig, storeName: 'posts' });
var commentsStore = new LocalDB({ dbConfig: dbConfig, storeName: 'comments' });

Once created, these stores can be used like any other store.
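
For example, continuing with the posts store above (the property values are illustrative), data can be added and queried like any other asynchronous store, and it will persist across page reloads:

postsStore.add({
    name: 'First post',
    tags: ['intro', 'news'],
    content: 'Hello world'
}).then(function (post) {
    // the id is generated because the id property is configured with autoIncrement
    return postsStore.filter({ name: 'First post' }).fetch();
}).then(function (posts) {
    // posts were read back from the local browser database
});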

Cache

This is a mixin that can be used to add caching functionality to a store. This can also be used to wrap an existing store, by using the static create function:

var cachedStore = Cache.create(existingStore, {
    cachingStore: new Memory()
});

This store has the following properties and methods:

  • cachingStore - This can be used to define the store to be used for caching the data. By default a Memory store will be used.
  • isValidFetchCache - This is a flag that indicates if the data fetched for a collection/store can be cached to fulfill subsequent fetches. This is false by default, and the value will be inherited by downstream collections. It is important to note that only full fetch() requests will fill the cache for subsequent fetch() requests. fetchRange() requests will not fulfill a collection, and subsequent fetchRange() requests will not go to the cache unless the collection has been fully loaded through a fetch() request.
  • allLoaded - This is a flag indicating that the given collection/store has its data loaded. This can be useful if you want to provide a caching store prepopulated with data for a given collection. If you are setting this to true, make sure you set isValidFetchCache to true as well to indicate that the data is available for fetching.
  • canCacheQuery(method, args) - This can be a boolean or a method that will indicate if a collection can be cached (if it should have isValidFetchCache set to true), based on the query method and arguments used to derive the collection.
  • isLoaded(object) - This can be defined to indicate if a given object in a query can be cached (by default, objects are cached).
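
For example, a minimal sketch wrapping a Rest store, assuming isValidFetchCache can be passed as a constructor option like any other property (the target URL is illustrative):

var cachedRest = Cache.create(new Rest({ target: '/PathToData/' }), {
    cachingStore: new Memory(),
    isValidFetchCache: true // allow full fetch() results to serve later fetches
});

// the first fetch() goes to the server and fills the caching store
cachedRest.fetch().then(function () {
    // subsequent full fetches can be answered from the Memory cache
    return cachedRest.fetch();
});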

Tree

This is a mixin that provides basic support for hierarchical data. This implements several methods that can then be used by hierarchical UI components (like dgrid with a tree column). This mixin uses a parent-based approach to finding children, retrieving the children of an object by querying for objects that have a parent property with the id of the parent object. In addition, objects may have a hasChildren property to indicate whether they have children (if the property is absent, it is assumed that they may have children). This mixin implements the following methods:

  • getChildren(parent) - This returns a collection representing the children of the provided parent object. This is produced by filtering for objects that have a parent property with the id of the parent object.
  • mayHaveChildren(parent) - This synchronously returns a boolean indicating whether or not the parent object might have children (the actual children may need to be retrieved asynchronously).
  • getRootCollection() - This returns the root collection, the collection of objects with parent property that is null.

The Tree mixin may serve as an example for alternate hierarchical implementations. By reimplementing these methods as they are defined in dstore/Tree, one could change the property names for data that uses different parent references or indications of children. Another option would be to define the children of an object as direct references from the parent object. In this case, you would define getChildren to associate the parent object with the returned collection and override fetch and fetchRange to return a promise for the array of the children of the parent.
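
For example, using the TrackedTreeMemoryStore class defined earlier (the data shape is illustrative, using the default parent and hasChildren properties):

var treeStore = new TrackedTreeMemoryStore({
    data: [
        { id: 1, name: 'root item', parent: null },
        { id: 2, name: 'child item', parent: 1, hasChildren: false }
    ]
});

treeStore.getRootCollection().forEach(function (rootItem) {
    if (treeStore.mayHaveChildren(rootItem)) {
        treeStore.getChildren(rootItem).forEach(function (child) {
            // child.parent === rootItem.id
        });
    }
});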

Trackable

The Trackable mixin adds functionality for tracking the index positions of objects as they are added, updated, or deleted. The Trackable mixin adds a track() method to create a new tracked collection. When events are fired (from modification operations, or other sources), the tracked collection can match the changes from the events to any cached data in the collection (which may be ordered by sorting, or filtered), and decorates the events with index positions. More information about tracked collections and events can be found in the collections documentation.

Resource Query Language

Resource Query Language (RQL) is a query language specifically designed to be easily embedded in URLs (it is a compatible superset of standard encoded query parameters), as well as easily interpreted within JavaScript for client-side querying. RQL is therefore suitable for consistent client and server-delegated queries. The dstore package serializes complex filters/queries into RQL (since RQL is a superset of standard query parameters, simple queries are simply serialized as standard query parameters).

dstore also includes support for using RQL as the query language for filtering. This can be enabled by mixing dstore/extensions/RqlQuery into your collection type:

require([
    'dojo/_base/declare',
    'dstore/Memory',
    'dstore/extensions/RqlQuery'
], function (declare, Memory, RqlQuery) {
    var RqlStore = declare([ Memory, RqlQuery ]);
    var rqlStore = new RqlStore({
        ...
    });

    rqlStore.filter('price<10|rating>3').forEach(function (product) {
        // return each product that has a price less than 10 or a rating greater than 3
    });
});

Make sure you have installed/included the rql package if you are using the RQL query engine.

Collection

A Collection is the interface for a collection of items, which can be filtered or sorted to create new collections. When implementing this interface, every method and property is optional, and is only needed if the functionality it provides is required. However, all the included collections implement every method. Note that the objects in the collection may not be retrieved from the underlying data storage until they are actually accessed through forEach(), fetch(), or fetchRange(). These fetch methods return a snapshot of the data, and if the data has changed, these methods can later be used to retrieve the latest data.

Querying

Several methods are available for querying collections. These methods allow you to define a query through several steps. Normally, stores are queried first by calling filter() to specify which objects should be included, if filtering is needed. Next, if an order needs to be specified, the sort() method is called to ensure the results will be sorted. A typical query from a store would look like:

store.filter({priority: 'high'}).sort('dueDate').forEach(function (object) {
    // called for each item in the final result set
});

In addition, the track() method may be used to track store changes, ensuring notifications include index information about object changes, and keeping result sets up-to-date after a query. The fetch() method is an alternate way to retrieve results, providing a promise to an array for accessing query results. The sections below describe each of these methods and how to use them.

Filtering

Filtering is used to specify a subset of objects to be returned in a filtered collection. The simplest use of the filter() method is to call it with a plain object as the argument that specifies name-value pairs the returned objects must match. Alternately, a filter builder can be used to construct more sophisticated filter conditions. To use the filter builder, first construct a new filter object from the Filter constructor on the collection you would be querying:

var filter = new store.Filter();

We now have a filter object, that represents a filter, without any operators applied yet. We can create new filter objects by calling the operator methods on the filter object. The operator methods will return new filter objects that hold the operator condition. For example, to specify that we want to retrieve objects with a priority property with a value of "high", and stars property with a value greater than 5, we could write:

var highPriorityFiveStarFilter = filter.eq('priority', 'high').gt('stars', 5);

This filter object can then be passed as the argument to the filter() method on a collection/store:

var highPriorityFiveStarCollection = store.filter(highPriorityFiveStarFilter);

The following methods are available on the filter objects. First are the property filtering methods, which each take a property name as the first argument, and a property value to compare for the second argument:

  • eq: Property values must equal the filter value argument.
  • ne: Property values must not equal the filter value argument.
  • lt: Property values must be less than the filter value argument.
  • lte: Property values must be less than or equal to the filter value argument.
  • gt: Property values must be greater than the filter value argument.
  • gte: Property values must be greater than or equal to the filter value argument.
  • in: An array should be passed in as the second argument, and property values must be equal to one of the values in the array.
  • match: Property values must match the provided regular expression.
  • contains: Filters for objects where the specified property's value is an array and the array contains any value that equals the provided value or satisfies the provided expression.

The following are combinatorial methods (a usage sketch follows this list):

  • and: This takes two arguments that are other filter objects, that both must be true.
  • or: This takes two arguments that are other filter objects, where one of the two must be true.
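
For example, a filter matching objects that are either high priority or have more than five stars could be built like this (the property names are illustrative):

var filterBuilder = new store.Filter();
var highPriority = filterBuilder.eq('priority', 'high');
var popular = filterBuilder.gt('stars', 5);

// either condition matching is sufficient
var highPriorityOrPopular = filterBuilder.or(highPriority, popular);
var matchingCollection = store.filter(highPriorityOrPopular);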

Nesting

A few of the filters can also be built upon with other collections (potentially from other stores). In particular, you can provide a collection as the argument for the in or contains filter. This provides functionality similar to nested queries or joins. This generally will need to be combined with a select to return the correct values for matching. For example, suppose we want to find all the tasks in high-priority projects, where the task store has a projectId property/column that is a foreign key referencing objects in a project store. We can perform our nested query:

var tasksOfHighPriorityProjects = taskStore.filter(
    new Filter().in('projectId',
        projectStore.filter({ priority: 'high' }).select('id')
    )
);

Implementations

Different stores may implement filtering in different ways. The dstore/Memory will perform filtering in memory. The dstore/Request/dstore/Rest stores will translate the filters into URL query strings to send to the server. Simple queries will be in standard URL-encoded query format and complex queries will conform to RQL syntax (which is a superset of standard query format).

New filter methods can be created by subclassing dstore/Filter and adding new methods. New methods can be created by calling Filter.filterCreator and by providing the name of the new method. If you will be using new methods with stores that mix in SimpleQuery like memory stores, you can also add filter comparators by overriding the _getFilterComparator method, returning comparators for the additional types, and delegating to this.inherited for the rest.

dstore/SimpleQuery provides a simple shorthand for nested property queries - a side-effect of this is that property names that contain the period character are not supported. Example nested property query:

store.filter({ 'name.last': 'Smith' })

This would match the object:

{
	name: {
		first: 'John',
		last: 'Smith'
	}
}

For the dstore/Request/dstore/Rest stores, you can define alternate serializations of filters to URL queries for existing or new methods by overriding the _renderFilterParams. This method is called with a filter object (and by default is recursively called by combinatorial operators), and should return a string serialization of the filter, that will be inserted into the query string of the URL sent to the server.

The filter objects themselves consist of tree structures. Each filter object has two properties, the operator type, which corresponds to whichever operator was used (like eq or and), and the args, which is an array of values provided to the operator. With and and or operators, the arguments are other filter objects, forming a hierarchy. When filter operators are chained together (through sequential calls), they are combined with the and operator (each operator defined in a sub-filter object).
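
As an illustration, the high-priority/five-star filter built earlier corresponds roughly to a structure like this (shown here as a plain object; the actual values are Filter instances):

{
    type: 'and',
    args: [
        { type: 'eq', args: ['priority', 'high'] },
        { type: 'gt', args: ['stars', 5] }
    ]
}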

Collection API

The following property and methods are available on dstore collections:

Property Summary

  • Model - This constructor represents the data model class to use for the objects returned from the store. All objects returned from the store should have their prototype set to the prototype property of the model, such that objects from this store should return true from object instanceof collection.Model.

Method Summary

filter(query)

This filters the collection, returning a new subset collection. The query can be an object, or a filter object, with the properties defining the constraints on matching objects. Some stores, like server or RQL stores, may accept string-based queries. Stores with in-memory capabilities (like dstore/Memory) may accept a function for filtering as well, but using the filter builder will ensure the greatest cross-store compatibility.

matchesFilter(item)

This tests the provided item to see if it matches the current filter or not.

sort(property, [descending])

This sorts the collection, returning a new ordered collection. Note that if sort is called multiple times, previous sort calls may be ignored by the store (it is up to the store implementation how to handle that). If a multiple sort order is desired, use the array of sort orders described below.

sort([highestSortOrder, nextSortOrder...])

This also sorts the collection, but can be called to define multiple sort orders by priority. Each argument is an object with a property property and an optional descending property (defaults to ascending, if not set), to define the order. For example:

collection.sort([
	{ property: 'lastName' },
	{ property: 'firstName' }
])

would result in a new collection sorted by lastName, with firstName used to sort identical lastName values.

select([property, ...])

This selects specific properties that should be included in the returned objects.

select(property)

This will indicate that the return results will consist of the values of the given property of the queried objects. For example, this would return a collection of name values, pulled from the original collection of objects:

collection.select('name');

forEach(callback, thisObject)

This iterates over the query results. Note that this may be executed asynchronously and the callback may be called after this function returns. This will return a promise to indicate the completion of the iteration. This method forces a fetch of the data.

fetch()

Normally collections may defer the execution (like making an HTTP request) required to retrieve the results until they are actually accessed. Calling fetch() will force the data to be retrieved, returning a promise to an array.

fetchRange({start: start, end: end})

This fetches a range of objects from the collection, returning a promise to an array. The returned (and resolved) promise should have a totalLength property with a promise that resolves to a number indicating the total number of objects available in the collection.
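
For example:

var rangeResults = collection.fetchRange({ start: 0, end: 25 });

rangeResults.then(function (items) {
    // items contains at most 25 objects from the collection
});
rangeResults.totalLength.then(function (total) {
    // total is the number of objects available in the whole collection
});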

on(type, listener, filterEvents)

This allows you to define a listener for events that take place on the collection or parent store. When an event takes place, the listener will be called with an event object as the single argument. The following event types are defined:

  • add - This indicates that a new object was added to the store. The new object is available on the target property.
  • update - This indicates that an object in the store was updated. The updated object is available on the target property.
  • delete - This indicates that an object in the store was removed. The id of the object is available on the id property.

Setting filterEvents to true indicates the listener will be called only when the emitted event references an item (event.target) that satisfies the collection's current filter query. Note: if filterEvents is set to true for type update, the listener will be called only when the item passed to put matches the collection's query; the original item will not be evaluated. For example, suppose a store contains items marked "to-do" and items marked "done", one collection uses a query looking for "to-do" items while another looks for "done" items, and both collections are listening for "update" events. If an item is updated from "to-do" to "done", only the "done" collection will be notified of the update.

If detecting when an item is removed from a collection due to an update is desired, set filterEvents to false and use the matchesFilter(item) method to test if each item updated is currently in the collection.

There is also a corresponding emit(type, event) method (from the Store interface) that can be used to emit events when objects have changed.
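
For example (a minimal sketch; the property names are illustrative):

// listen only for added objects that match this collection's filter
var highPriority = store.filter({ priority: 'high' });
highPriority.on('add', function (event) {
    var newObject = event.target;
    // react to a newly added high-priority object
}, true);

// notify listeners about a change that happened elsewhere (e.g. on the server)
var updatedObject = { id: 1, priority: 'high', title: 'Revised title' };
store.emit('update', { target: updatedObject });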

track()

This method will create a new collection that will be tracked and updated as the parent collection changes. This will cause the events sent through the resulting collection to include an index and previousIndex property to indicate the position of the change in the collection. This is an optional method, and is usually provided by dstore/Trackable. For example, you can create an observable store class, by using dstore/Trackable as a mixin:

var TrackableMemory = declare([Memory, Trackable]);

Trackable requires client side querying functionality. Client side querying functionality is available in dstore/SimpleQuery (and inherited by dstore/Memory). If you are using a Request, Rest, or other server side store, you will need to implement client-side query functionality (by implementing querier methods), or mixin SimpleQuery:

var TrackableRest = declare([Rest, SimpleQuery, Trackable]);

Once we have created a new instance from this store, we can track a collection, which could be the top level store itself, or a downstream filtered or sorted collection:

var store = new TrackableMemory({ data: [...] });
var filteredSorted = store.filter({ inStock: true }).sort('price');
var tracked = filteredSorted.track();

Once we have a tracked collection, we can listen for notifications:

tracked.on('add, update, delete', function (event) {
    var newIndex = event.index;
    var oldIndex = event.previousIndex;
    var object = event.target;
});

Trackable requires fetched data to determine the position of modified objects and can work with either full or partial data. We can do a fetch() or forEach() to access all the items in the filtered collection:

tracked.fetch();

Or we can do a fetchRange() to make individual range requests for items in the collection:

tracked.fetchRange({ start: 0, end: 10 });

Trackable will keep track of each page of data, and send out notifications based on the data it has available, along with index information, indicating the new and old position of the object that was modified. Regardless of whether full or partial data is fetched, tracked events and the indices they report are relative to the entire collection, not relative to individual fetched ranges. Tracked events also include a totalLength property indicating the total length of the collection.

If an object is added or updated, and falls outside of all of the fetched ranges, the index will be undefined. However, if the object falls between fetched ranges (but not within one), there will also be a beforeIndex that indicates the index of the first object that the new or updated object comes before.

Custom Querying

Custom query methods can be created using the dstore/QueryMethod module. We can define our own query method, by extending a store, and defining a method with the QueryMethod. The QueryMethod constructor should be passed an object with the following possible properties:

  • type - This is a string, identifying the query method type.
  • normalizeArguments - This can be a function that takes the arguments passed to the method, and normalizes them for later execution.
  • applyQuery - This is an optional function that can be called on the resulting collection that is returned from the generated query method.
  • querierFactory - This is an optional function that can be used to define the computation of the set of objects returned from a query, on client-side or in-memory stores. It is called with the normalized arguments, and then returns a new function that will be called with an array, and is expected to return a new array.

For example, we could create a getChildren method that queries for child objects by simply returning the children property array from a parent:

declare([Memory], {
    getChildren: new QueryMethod({
        type: 'children',
        querierFactory: function (parent) {
            var parentId = this.getIdentity(parent);

            return function (data) {
                // note: in this case, the input data is ignored as this querier
                // returns an object's array of children instead

                // return the children of the parent
                // or an empty array if the parent no longer exists
                var parent = this.getSync(parentId);
                return parent ? parent.children : [];
            };
        }
	})
});

Store

A store is an extension of a collection and is an entity that not only contains a set of objects, but also provides an interface for identifying, adding, modifying, removing, and querying data. Below is the definition of the store interface. Every method and property is optional, and is only needed if the functionality it provides is required (although the provided full stores (Rest and Memory) implement all the methods except transaction() and getChildren()). Every method returns a promise for the specified return value, unless otherwise noted.

In addition to the methods and properties inherited from Collections, the Store API also exposes the following properties and methods.

Property Summary

  • idProperty - If the store has a single primary key, this indicates the property to use as the identity property. The values of this property should be unique. This defaults to "id".
  • Model - This is the model class to use for all the data objects that originate from this store. By default this will be set to null, so that all objects will be plain objects, but this property can be set to the class from dmodel/Model or any other model constructor. You can create your own model classes (and schemas), and assign them to a store. All objects that come from the store will have their prototype set such that they will be instances of the model. The default value of null will disable any prototype modifications and leave data as plain objects.
  • defaultNewToStart - If a new object is added to a store, this indicates whether it should go to the start or the end. By default, it will be placed at the end.
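
For example, a store whose objects are identified by a property other than id can be configured through idProperty (a minimal sketch using a Memory store; the data is illustrative):

var productStore = new Memory({
    idProperty: 'sku',
    data: [
        { sku: 'A-100', name: 'widget' },
        { sku: 'B-200', name: 'gadget' }
    ]
});

productStore.getIdentity(productStore.getSync('A-100')); // 'A-100'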

Method Summary

  • get(id) - This retrieves an object by its identity. This returns a promise for the object. If no object was found, the resolved value should be undefined.
  • getIdentity(object) - This returns an object's identity (note: this should always execute synchronously).
  • put(object, [directives]) - This stores an object. It can be used to update or create an object. This returns a promise that may resolve to the object after it has been saved.
  • add(object, [directives]) - This creates an object, and throws an error if the object already exists. This should return a promise for the newly created object.
  • remove(id) - This deletes an object, using the identity to indicate which object to delete. This returns a promise that resolves to a boolean value indicating whether the object was successfully removed.
  • transaction() - Starts a transaction and returns a transaction object. The transaction object should include a commit() and abort() to commit and abort transactions, respectively. Note that a store user might not call transaction() prior to using put, delete, etc., in which case these operations effectively could be thought of as “auto-commit” style actions.
  • create(properties) - Creates and returns a new instance of the data model. The returned object will not be stored in the object store until its save() method is called, or the store's add() is called with this object. This should always execute synchronously.
  • getChildren(parent) - This retrieves the children of the provided parent object. This should return a new collection representing the children.
  • mayHaveChildren(parent) - This should return true or false indicating whether or not a parent might have children. This should always return synchronously, as a way of checking if children might exist before actually retrieving all the children.
  • getRootCollection() - This should return a collection of the top level objects in a hierarchical store.
  • emit(type, event) - This can be used to dispatch event notifications, indicating changes to the objects in the collection. This should be called by put, add, and remove methods if the autoEmit property is false. This can also be used to notify stores if objects have changed from other sources (if a change has occurred on the server, from another user). There is a corresponding on method on collections for listening to data change events. Also, the Trackable mixin can be used to add index/position information to the events.

Synchronous Methods

Stores that can perform synchronous operations may provide analogous methods for get, put, add, and remove that end with Sync to provide synchronous support. For example getSync(id) will directly return an object instead of a promise. The dstore/Memory store provides Sync methods in addition to the promise-based methods. This behavior has been separated into distinct methods to provide consistent return types.

It is generally advisable to always use the asynchronous methods so that client code does not have to be updated in case the store is changed. However, if you have very performance intensive store accesses, the synchronous methods can be used to avoid the minor overhead imposed by promises.

  • getSync(id) - This retrieves an object by its identity. If no object was found, the returned value should be undefined.
  • putSync(object, [directives]) - This stores an object. It can be used to update or create an object. This returns the object after it has been saved.
  • addSync(object, [directives]) - This creates an object, and throws an error if the object already exists. This should return the newly created object.
  • removeSync(id) - This deletes an object, using the identity to indicate which object to delete. This returns a boolean value indicating whether the object was successfully removed.

Promise-based API and Synchronous Operations

All CRUD methods, such as get, put, remove, and fetch, return promises. However, stores and collections may provide synchronous versions of those methods with a "Sync" suffix (e.g., Memory#fetchSync to fetch synchronously from a Memory store).
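
For example, with a Memory store both forms are available (a minimal sketch):

var memoryStore = new Memory({ data: [{ id: 1, name: 'widget' }] });

// promise-based, works the same against any store implementation
memoryStore.get(1).then(function (item) {
    // item.name === 'widget'
});

// synchronous, available on stores such as Memory that can answer immediately
var sameItem = memoryStore.getSync(1);
var allItems = memoryStore.fetchSync();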

Data Modelling

In addition to handling collections of items, dstore works with the dmodel package to provide robust data modeling capabilities for managing individual objects. dmodel provides a data model class that includes multiple methods on data objects, for saving, validating, and monitoring objects for changes. By setting a model on stores, all objects returned from a store, whether a single object returned from a get() or an array of objects returned from a fetch(), will be an instance of the store's data model.

For more information, please see the dmodel project.

Adapters

Adapters make it possible to work with legacy Dojo object stores and widgets that expect Dojo object stores. dstore also includes an adapter for using a store with charts. See the Adapters section for more information.

Testing

dstore uses Intern as its test runner. A full description of how to set up testing is available in the project's testing documentation. Tests can be run either in the browser or using Sauce Labs. More information on writing your own tests with Intern can be found in the Intern wiki.

Dependencies

dstore's only required dependency is Dojo version 1.8 or higher. Running the unit tests requires the intern-geezer package (see the testing docs for more information). The extensions/RqlQuery module can leverage the rql package, but the rql package is only needed if you use this extension.

Contributing

We welcome contributions, but please read the contributing documentation to help us be able to effectively receive your contributions and pull requests.

Download Details: 

Author: SitePen
Source Code: https://github.com/SitePen/dstore 
License: View license

#javascript #framework #data #infrastructure 

SitePen/dstore: A Data infrastructure Framework

Laravel The Preferred Web-Application Development Framework

Are you looking to boost your business through feature-rich, sleek web applications with polished UIs?

Choose Laravel, an open-source PHP framework freely accessible from online repositories that ships with a wide array of built-in PHP modules, components, security mechanisms, and database elements. It lets you craft lightweight, platform-independent, and easy-to-customize web applications tailored to your requirements, which helps your services reach a global audience and accelerates your business productivity.

#laravel  #webappdevelopment #framework #programming #technologies 

Laravel The Preferred Web-Application Development Framework
Royce  Reinger

Royce Reinger

1658755620

Omnicat: A Generalized Rack Framework for Text Classifications

OmniCat 

A generalized framework for text classifications.

Installation

Add this line to your application's Gemfile:

gem 'omnicat'

And then execute:

$ bundle

Or install it yourself as:

$ gem install omnicat

Usage

The stand-alone version of omnicat is just a strategy holder for developers. Its aim is to provide omnification of methods for text classification gems, with lossless conversion from one strategy to another. End users should see the 'classifier strategies' section and the 'changing classifier strategy' subsection.

Changing classifier strategy

OmniCat allows you to change strategy on runtime.

# Declare classifier with Naive Bayes classifier
classifier = OmniCat::Classifier.new(OmniCat::Classifiers::Bayes.new())
...
# do some operations like adding category, training, etc...
...
# make some classification using Bayes
classifier.classify('I am happy :)')
...
# change strategy to Support Vector Machine (SVM) on runtime
classifier.strategy = OmniCat::Classifiers::SVM.new
# now you do not need to re-train, add category and so on..
# just classify with new strategy
classifier.classify('I am happy :)')

Classifier strategies

Here is the classifier list available for OmniCat.

Naive Bayes classifier

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Author: Mustafaturan
Source Code: https://github.com/mustafaturan/omnicat 
License: MIT license

#ruby #classification #texts #framework 

Omnicat: A Generalized Rack Framework for Text Classifications
Rupert  Beatty

Rupert Beatty

1658669280

Stapler-based File Upload Package for The Laravel Framework

laravel-stapler

Laravel-Stapler is a Stapler-based file upload package for the Laravel framework. It provides a full set of Laravel commands, a migration generator, and a cascading package config on top of the Stapler package. It also bootstraps Stapler with very sensible defaults for use with Laravel. If you are wanting to use Stapler with Laravel, it is strongly recommended that you use this package to do so.

Requirements

This package currently requires php >= 5.4 as well as Laravel >= 4, up to 5.4 (5.4 is the last version of Laravel this package will officially support). Due to the recent inconsistencies/changes introduced into Eloquent and the fact that Laravel now ships with Disks/Flysystem support, I have decided not to try and maintain this package for future versions of Laravel. I am not adding a hard requirement for Laravel <= 5.4 due to the fact that some folks are already using it in their Laravel > 5.4 projects. If you want to use this package in newer versions of Laravel, you may do so at your own risk.

If you're going to be performing image processing as part of your file upload, you'll also need GD, Gmagick, or Imagick (your preference) installed as part of your php environment.

Installation

Laravel-Stapler is distributed as a composer package, which is how it should be used in your app.

Install the package using Composer. Edit your project's composer.json file to require codesleeve/laravel-stapler.

  "require": {
    "laravel/framework": "4.*",
    "codesleeve/laravel-stapler": "1.0.*"
  }

Once this operation completes, the final step is to add the service provider.

For Laravel 4, Open app/config/app.php, and add a new item to the providers array:

    'Codesleeve\LaravelStapler\Providers\L4ServiceProvider'

For Laravel 5, Open config/app.php, and add a new item to the providers array:

    'Codesleeve\LaravelStapler\Providers\L5ServiceProvider'

Deprecations

As of 1.0.04, the 'Codesleeve\LaravelStapler\LaravelStaplerServiceProvider' service provider has been deprecated (this provider will be removed in the next major release). Instead, you should now be using the corresponding service provider for the specific version of Laravel that you're using.

migrating-from-Stapler-v1.0.0-Beta4

If you've been using Stapler (prior to v1.0.0-Beta4) in your Laravel app, you now need to be using this package instead. Uninstall Stapler (remove it from your composer.json, remove the service provider, etc) and install this package following the instructions above. Once installed, the following changes may need to be made in your application:

In your models that are using Stapler, change use Codesleeve\Stapler\Stapler to use Codesleeve\Stapler\ORM\EloquentTrait. Your models will also need to implement Codesleeve\Stapler\ORM\StaplerableInterface.

If you published stapler's config, you'll need to rename config folder from app/config/packages/codesleeve/stapler to app/config/packages/codesleeve/laravel-stapler.

Image processing libraries are now referenced by their full class name from the Imagine Image package (e.g. gd is now referenced by Imagine\Gd\Imagine).

In your s3 configuration, instead of passing 'key', 'secret', 'region', and 'scheme' options, you'll now need to pass a single 's3_client_config' array containing these options (and any others you might want). These will be passed directly to the s3ClientFactory when creating an S3 client. Passing the params as an array now allows you to configure your s3 client (for a given model/attachment) however you like. See: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/configuration.html#client-configuration-options

In your s3 configuration, instead of passing 'Bucket' and 'ACL', you'll now need to pass a single 's3_object_config' array containing these values (this is used by the S3Client::putObject() method). See: http://docs.aws.amazon.com/aws-sdk-php/latest/class-Aws.S3.S3Client.html#_putObject

The ':laravel_root' interpolation has been changed to ':app_root'

Quickstart

In the document root of your application (most likely the public folder), create a folder named system and grant your application write permissions to it. For this quickstart, we assume an existing User model to which we're going to add an avatar image.

In your model:

use Codesleeve\Stapler\ORM\StaplerableInterface;
use Codesleeve\Stapler\ORM\EloquentTrait;

class User extends Eloquent implements StaplerableInterface {
    use EloquentTrait;

    // Add the 'avatar' attachment to the fillable array so that it's mass-assignable on this model.
    protected $fillable = ['avatar', 'first_name', 'last_name'];

    public function __construct(array $attributes = array()) {
        $this->hasAttachedFile('avatar', [
            'styles' => [
                'medium' => '300x300',
                'thumb' => '100x100'
            ]
        ]);

        parent::__construct($attributes);
    }
}

Make sure that the hasAttachedFile() method is called right before parent::__construct() of your model.

From the command line, use the migration generator:

php artisan stapler:fasten users avatar
php artisan migrate

In your new view:

<?= Form::open(['url' => action('UsersController@store'), 'method' => 'POST', 'files' => true]) ?>
    <?= Form::input('first_name') ?>
    <?= Form::input('last_name') ?>
    <?= Form::file('avatar') ?>
    <?= Form::submit('save') ?>
<?= Form::close() ?>

In your controller:

public function store()
{
    // Create and save a new user, mass assigning all of the input fields (including the 'avatar' file field).
    $user = User::create(Input::all());
}

In your show view:

<img src="<?= $user->avatar->url() ?>" >
<img src="<?= $user->avatar->url('medium') ?>" >
<img src="<?= $user->avatar->url('thumb') ?>" >

To detach (reset) a file, simply assign the constant STAPLER_NULL to the attachment and then save:

$user->avatar = STAPLER_NULL;
$user->save();

This will ensure that the corresponding attachment fields in the database table record are cleared and the current file is removed from storage. The database table record itself will not be destroyed and can be used normally (or even assigned a new file upload) as needed.

Commands

fasten

This package provides a fasten command that can be used to generate migrations for adding image file fields to existing tables. The method signature for this command looks like this: php artisan stapler:fasten <tablename> <attachment>

In the quickstart example above, calling php artisan stapler:fasten users avatar followed by php artisan migrate added the following fields to the users table:

  • (string) avatar_file_name
  • (integer) avatar_file_size
  • (string) avatar_content_type
  • (timestamp) avatar_updated_at

refresh

The refresh command can be used to reprocess uploaded images on a model's attachments. It works by calling the reprocess() method on each of the model's attachments (or on specific attachments only). This is very useful for adding new styles to an existing attachment when a file has already been uploaded for that attachment.

Reprocess all attachments for the ProfilePicture model: php artisan stapler:refresh ProfilePicture

Reprocess only the photo attachment on the ProfilePicture model: php artisan stapler:refresh ProfilePicture --attachments="photo"

Reprocess a list of attachments on the ProfilePicture model: php artisan stapler:refresh ProfilePicture --attachments="foo, bar, baz, etc"

Troubleshooting

Before you submit an issue or create a pull request, please take a look at the Troubleshooting section of the Stapler package. There's a very good chance that many (if not all) of the issues you're having with this package are related to the base stapler package and have already been addressed there.

Contributing

This package is always open to contributions:

  • Master will always contain the newest work (bug fixes, new features, etc), however it may not always be stable; use at your own risk. Every new tagged release will come from the work done on master, once things have stabilized.

Laravel-Stapler was created by Travis Bennett.

Author: CodeSleeve
Source Code: https://github.com/CodeSleeve/laravel-stapler 
License: MIT license

#laravel #framework 

Stapler-based File Upload Package for The Laravel Framework
Royce  Reinger

Royce Reinger

1658503800

Wechat: API, Command and Message Handling for WeChat in Rails

WeChat   

Wechat is a Chinese multi-purpose messaging, social media and mobile payment app developed by Tencent. It was first released in 2011, and by 2018 it was one of the world's largest standalone mobile apps by monthly active users, with over 1 billion monthly active users (902 million daily active users). (According to wiki)

The WeChat gem helps Rails developers integrate the WeChat Official Accounts Platform or WeChat mini programs easily, with features including:

  • Sending message API (can be accessed via either the console or a rails server)
  • Receiving messages (a rails server is required to be running)
  • Wechat JS-SDK config signature
  • OAuth 2.0 authentication
  • Record session when receiving messages from users (optional)

The wechat command shares the same API in the console, so you can interact with the wechat server quickly without starting up the web environment/code.

A responder DSL can be used in a Rails controller, which gives an event-based interface to handle messages sent by end users.

If Wechat OAuth 2.0 is required by your app, omniauth-wechat-oauth2 is recommended in order to apply devise authentication.

If Tencent's weui UI style is adopted in your project, the gem weui-rails is available for you.

For web-page-only wechat applications, please use wechat_api, which only contains the web features, as opposed to the traditional message-handling wechat_responder.

There is a more complete wechat-starter demo available, which further includes the payment SDK feature.

Installation

Use gem install

gem install "wechat"
# If your ruby version < 2.6
# gem install wechat -v 0.12.4

Or add it to your app's Gemfile:

gem 'wechat'
# If your rails version < 6.0
# gem 'wechat', '~> 0.12.4'

Run the following command to install it:

bundle install

Run the generator:

rails generate wechat:install

rails g wechat:install will generate the initial wechat.yml configuration file, including a sample wechat controller and corresponding routes.

Enable session record:

rails g wechat:session
rake db:migrate

Enabling session support will generate two files in the Rails folder. You can add more columns to the wechat_session table and add a declaration to link it to the users table; it's also possible to store data directly in hash_store. If you are using PostgreSQL, using hstore/json may be better, but the best way is to add a dedicated column to record the data (the Rails way).

Using Redis to store wechat token and ticket:

rails g wechat:redis_store

Redis storage supports Rails applications running on multiple servers. It is recommended to use the default file storage if there is only one single server. Note that the wechat command won't read tokens/tickets stored in Redis.

Enable database wechat configurations:

rails g wechat:config
rake db:migrate

After running the migration, a wechat_configs table will be created that allows storage of multiple wechat accounts.

Configuration

Configure wechat for the first time

Make sure to finish all the setup on the Rails side first, then submit those settings to the Tencent wechat management website. Otherwise, wechat will raise an error.

URL address for wechat created by running rails g wechat:install is http://your-server.com/wechat

See the sections below for how to set up the appid/corpid and secret.

Configure for command line

To use the standalone wechat command, you need to create the configuration file ~/.wechat.yml and include the content below for a public account. The access_token will be written to the file /var/tmp/wechat_access_token.

appid: "my_appid"
secret: "my_secret"
access_token: "/var/tmp/wechat_access_token"

For an enterprise account, you need to use corpid instead of appid, as an enterprise account supports multiple applications (Tencent calls them agents). Obtaining the corpsecret is a little bit tricky: it must be obtained in management mode -> privilege setting, by creating a management group. Since Tencent currently only provides a Chinese interface for their management console, it's highly recommended you find a colleague who knows Mandarin to help you obtain the corpsecret.

Windows users need to store .wechat.yml at C:/Users/[user_name]/ (replace with your user name), also pay attention to the direction of folder separator.

corpid: "my_appid"
corpsecret: "my_secret"
agentid: 1 # Integer, which can be obtained from application settings
access_token: "C:/Users/[user_name]/wechat_access_token"

Configure for Rails

The Rails configuration file supports different environments, similar to database.yml. After running rails generate wechat:install you can find the configuration file at config/wechat.yml.

Public account configuration example:

default: &default
  appid: "app_id"
  secret: "app_secret"
  token:  "app_token"
  access_token: "/var/tmp/wechat_access_token"
  jsapi_ticket: "/var/tmp/wechat_jsapi_ticket"

production:
  appid: <%= ENV['WECHAT_APPID'] %>
  secret: <%= ENV['WECHAT_APP_SECRET'] %>
  token:   <%= ENV['WECHAT_TOKEN'] %>
  access_token: <%= ENV['WECHAT_ACCESS_TOKEN'] %>
  jsapi_ticket: <%= ENV['WECHAT_JSAPI_TICKET'] %>
  oauth2_cookie_duration: <%= ENV['WECHAT_OAUTH2_COOKIE_DURATION'] %> # seconds

development:
  <<: *default
  trusted_domain_fullname: "http://your_dev.proxy.qqbrowser.cc"

test:
  <<: *default

Although it's optional for a public account, it is highly recommended to enable encrypt mode by adding these two items to wechat.yml:

default: &default
  encrypt_mode: true
  encoding_aes_key:  "my_encoding_aes_key"

Enterprise account must use encrypt mode (encrypt_mode: true is on by default, no need to configure).

The token and encoding_aes_key can be obtained from the management console -> one of the agent applications -> mode selection; select callback mode to get/set them.

default: &default
  corpid: "corpid"
  corpsecret: "corpsecret"
  agentid:  1
  access_token: "C:/Users/[user_name]/wechat_access_token"
  token:    ""
  encoding_aes_key:  ""
  jsapi_ticket: "C:/Users/[user_name]/wechat_jsapi_ticket"

production:
  corpid:     <%= ENV['WECHAT_CORPID'] %>
  corpsecret: <%= ENV['WECHAT_CORPSECRET'] %>
  agentid:    <%= ENV['WECHAT_AGENTID'] %>
  access_token:  <%= ENV['WECHAT_ACCESS_TOKEN'] %>
  token:      <%= ENV['WECHAT_TOKEN'] %>
  timeout:    30
  skip_verify_ssl: true # not recommend
  encoding_aes_key:  <%= ENV['WECHAT_ENCODING_AES_KEY'] %>
  jsapi_ticket: <%= ENV['WECHAT_JSAPI_TICKET'] %>
  oauth2_cookie_duration: <%= ENV['WECHAT_OAUTH2_COOKIE_DURATION'] %>

development:
  <<: *default
  trusted_domain_fullname: "http://your_dev.proxy.qqbrowser.cc"

test:
  <<: *default

 # Multiple Accounts
 #
 # wx2_development:
 #  <<: *default
 #  appid: "my_appid"
 #  secret: "my_secret"
 #  access_token: "tmp/wechat_access_token2"
 #  jsapi_ticket: "tmp/wechat_jsapi_ticket2"
 #
 # wx2_test:
 #  <<: *default
 #  appid: "my_appid"
 #  secret: "my_secret"
 #
 # wx2_production:
 #  <<: *default
 #  appid: "my_appid"
 #  secret: "my_secret"

Notes about supporting multiple accounts of WeChat Official Accounts Platform / WeChat Enterprise (for example, adding account wx2):

Configuration for multiple accounts is similar to multi-database configuration in config/database.yml: the development, test, and production segments are the default configuration, and one needs to add wx2_development, wx2_test, and wx2_production in order to add an additional account named wx2.

Declaration of additional wechat_responder:

wechat_responder account: :wx2

Use Wechat.api or Wechat.api(:default) for the default wechat api. Use Wechat.api(:wx2) to call the wechat api of account wx2.
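
For example, a minimal sketch calling the same method against both accounts (the openid arguments are placeholders; user is the API call used later in this document):

Wechat.api.user('OPENID_FROM_DEFAULT_ACCOUNT')      # default account, same as Wechat.api(:default)
Wechat.api(:wx2).user('OPENID_FROM_WX2_ACCOUNT')    # account wx2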

When using the Wechat command line, you can switch to another wechat account by adding the optional parameter -a ACCOUNT (or --account=ACCOUNT).
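
For example (a sketch, assuming the wx2 account above is configured):

$ wechat users -a wx2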

For details about supporting multiple accounts, please check PR 150

For a wechat mini program account, specify it with the type item:

# Mini Program Accounts

  mini_development:
    <<: *default
    appid: "my_appid"
    secret: "my_secret"
    # `mp` is short for **mini program**
    type: 'mp'

Database wechat account configuration

After enabling database account configuration, the following table will be created:

Attribute               | Type    | Annotation
------------------------|---------|-----------
environment             | string  | Required. Environment of account configuration. Typical values are: production, development and test. For example, a production config will only be available in production. Default to development.
account                 | string  | Required. Custom wechat account name. Account names must be unique within each environment.
enabled                 | boolean | Required. Whether this configuration is activated. Default to true.
appid                   | string  | Public account id. Either this attribute or corpid must be specified.
secret                  | string  | Public account configuration. Required when appid exists.
corpid                  | string  | Corp account id. Either this attribute or appid must be specified.
corpsecret              | string  | Corp account configuration. Required when corpid exists.
agentid                 | integer | Corp account configuration. Required when corpid exists.
encrypt_mode            | boolean |
encoding_aes_key        | string  | Required when encrypt_mode is true.
token                   | string  | Required.
access_token            | string  | Required. Path to access token storage file.
jsapi_ticket            | string  | Required. Path to jsapi ticket storage file.
skip_verify_ssl         | boolean |
timeout                 | integer | Default to 20.
trusted_domain_fullname | string  |

After updating database account configurations, you need to restart the server, or call Wechat.reload_config! to reload the updates.

Configure priority

Running the wechat command in the root folder of a Rails application will use the Rails configuration first (the default section); if it cannot be found, it will fall back to ~/.wechat.yml. Such behavior lets you manage more wechat public accounts and enterprise accounts without changing your home ~/.wechat.yml file.

When database account configuration is enabled, database configurations will be loaded after the yml configuration file or environment parameters. When configurations with the same account name exist in both the database and the yml file or environment parameters, the one in the database takes precedence.

Wechat server timeout setting

The stability of Tencent's wechat servers varies, so setting a longer timeout may be needed; the default is 20 seconds if not set.
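
For example, a sketch adding a 30-second timeout to the default section of wechat.yml:

default: &default
  timeout: 30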

Skip the SSL verification

SSL certificate verification has also been reported to fail for various reasons in China; if it happens to you, you can set skip_verify_ssl: true (not recommended).

Configure individual responder with different appid

Sometimes you may want to host more than one enterprise/public wechat account in one Rails application, so you can provide this configuration info when calling wechat_responder or wechat_api:

class WechatFirstController < ActionController::Base
   wechat_responder account: :new_account, account_from_request: Proc.new{ |request| request.params[:wechat] }

   on :text, with:"help", respond: "help content"
end

Or you can provide full list of options.

class WechatFirstController < ActionController::Base
   wechat_responder appid: "app1", secret: "secret1", token: "token1", access_token: Rails.root.join("tmp/access_token1"),
                    account_from_request: Proc.new{ |request| request.params[:wechat] }

   on :text, with:"help", respond: "help content"
end

account_from_request is a Proc that takes the request as its parameter and returns the corresponding wechat account name. In the above examples, the controller will choose the account based on the wechat parameter passed in the request. If account_from_request is not specified, or the Proc evaluates to nil, the configuration specified by account or by the full list of options will be used.

JS-SDK helper

The JS-SDK gives you control over Wechat App behavior in html by injecting a config signature; the helper wechat_config_js does that in a simple way.

To make wechat_config_js work, you need to put wechat_api or wechat_responder in the controller first.
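
For example, a minimal controller sketch (the controller name is hypothetical), followed by the view that uses the helper:

class PagesController < ApplicationController
  wechat_api   # or wechat_responder; makes the wechat_config_js helper available to this controller's views
end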

<body>
<%= wechat_config_js debug: false, api: %w(hideMenuItems closeWindow) -%>
<script type="application/javascript">
  wx.ready(function() {
      wx.hideOptionMenu();
  });
</script>
<a href="javascript:wx.closeWindow();">Close</a>
</body>

Configure trusted_domain_fullname if you are in development mode and the app is running behind a reverse proxy server, otherwise the wechat gem won't be able to get the correct url to be signed later.

OAuth2.0 authentication

For a public account, the code below will get the info of the user who follows the account.

class CartController < ActionController::Base
  wechat_api
  def index
    wechat_oauth2 do |openid|
      @current_user = User.find_by(wechat_openid: openid)
      @articles = @current_user.articles
    end

    # specify account_name to use arbitrary wechat account configuration
    # wechat_oauth2('snsapi_base', nil, account_name) do |openid|
    #  ...
    # end
  end
end

For an enterprise account, the code below will get the enterprise member's userinfo.

class WechatsController < ActionController::Base
  layout 'wechat'
  wechat_responder
  def apply_new
    wechat_oauth2 do |userid|
      @current_user = User.find_by(wechat_userid: userid)
      @apply = Apply.new
      @apply.user_id = @current_user.id
    end
  end
end

wechat_oauth2 already implements the necessary OAuth2.0 and cookie logic. userid is the enterprise member UserID. openid identifies the user who follows the public account; also notice that the same user will have a different openid for each public account they follow.

Notice:

  • If you use wechat_responder in your controller, you cannot define your own create and show actions in that controller, otherwise it will throw errors.
  • If you get a redirect_uri parameter error message, make sure you set the correct callback url value in the wechat management console under Development center / Webpage service / Webpage authorization for retrieving user basic information.

The API privilege

The wechat gem won't handle any privilege exceptions (except token timeout, which is retried and recovered automatically inside the gem, so it's not important to you). However, Tencent controls many privileges based on your public account type and certification; for more info please refer to the official documentation.

Command line mode

The available API differs between public accounts and enterprise accounts, so the wechat gem provides a different set of commands for each.

Don't worry if you cannot read the Chinese in the comments; it is kept there to make it easier to copy and find things in the official documentation.

Public account command line

$ wechat
Wechat Public Account commands:
  wechat addvoicetorecofortext [VOICE_ID]                       # AI 开放接口 - 提交语音
  wechat callbackip                                             # 获取微信服务器 IP 地址
  wechat clear_quota                                            # 接口调用次数清零
  wechat custom_image [OPENID, IMAGE_PATH]                      # 发送图片客服消息
  wechat custom_music [OPENID, THUMBNAIL_PATH, MUSIC_URL]       # 发送音乐客服消息
  wechat custom_news [OPENID, NEWS_YAML_PATH]                   # 发送图文客服消息
  wechat custom_text [OPENID, TEXT_MESSAGE]                     # 发送文字客服消息
  wechat custom_video [OPENID, VIDEO_PATH]                      # 发送视频客服消息
  wechat custom_voice [OPENID, VOICE_PATH]                      # 发送语音客服消息
  wechat customservice_getonlinekflist                          # 获取在线客服接待信息
  wechat group_create [GROUP_NAME]                              # 创建分组
  wechat group_delete [GROUP_ID]                                # 删除分组
  wechat group_update [GROUP_ID, NEW_GROUP_NAME]                # 修改分组名
  wechat groups                                                 # 查询所有分组
  wechat material_get [MEDIA_ID, PATH]                          # 永久媒体下载
  wechat material_add [MEDIA_TYPE, PATH]                        # 永久媒体上传
  wechat material_add_news [MPNEWS_YAML_PATH]                   # 永久图文素材上传
  wechat material_count                                         # 获取永久素材总数
  wechat material_delete [MEDIA_ID]                             # 删除永久素材
  wechat material_list [TYPE, OFFSET, COUNT]                    # 获取永久素材列表
  wechat media [MEDIA_ID, PATH]                                 # 媒体下载
  wechat media_hq [MEDIA_ID, PATH]                              # 高清音频下载
  wechat media_create [MEDIA_TYPE, PATH]                        # 媒体上传
  wechat media_uploadimg [IMAGE_PATH]                           # 上传图文消息内的图片
  wechat media_uploadnews [MPNEWS_YAML_PATH]                    # 上传图文消息素材
  wechat menu                                                   # 当前菜单
  wechat menu_addconditional [CONDITIONAL_MENU_YAML_PATH]       # 创建个性化菜单
  wechat menu_create [MENU_YAML_PATH]                           # 创建菜单
  wechat menu_delconditional [MENU_ID]                          # 删除个性化菜单
  wechat menu_delete                                            # 删除菜单
  wechat menu_trymatch [USER_ID]                                # 测试个性化菜单匹配结果
  wechat message_mass_delete [MSG_ID]                           # 删除群发消息
  wechat message_mass_get [MSG_ID]                              # 查询群发消息发送状态
  wechat message_mass_preview [WX_NAME, MPNEWS_MEDIA_ID]        # 预览图文消息素材
  wechat qrcode_create_limit_scene [SCENE_ID_OR_STR]            # 请求永久二维码
  wechat qrcode_create_scene [SCENE_ID_OR_STR, EXPIRE_SECONDS]  # 请求临时二维码
  wechat qrcode_download [TICKET, QR_CODE_PIC_PATH]             # 通过 ticket 下载二维码
  wechat queryrecoresultfortext [VOICE_ID]                      # AI 开放接口 - 获取语音识别结果
  wechat shorturl [LONG_URL]                                    # 长链接转短链接
  wechat tag [TAGID]                                            # 获取标签下粉丝列表
  wechat tag_add_user [TAG_ID, OPEN_IDS]                        # 批量为用户打标签
  wechat tag_create [TAGNAME, TAG_ID]                           # 创建标签
  wechat tag_del_user [TAG_ID, OPEN_IDS]                        # 批量为用户取消标签
  wechat tag_delete [TAG_ID]                                    # 删除标签
  wechat tag_update [TAG_ID, TAGNAME]                           # 更新标签名字
  wechat tags                                                   # 获取所有标签
  wechat template_message [OPENID, TEMPLATE_YAML_PATH]          # 模板消息接口
  wechat translatecontent [CONTENT]                             # AI 开放接口 - 微信翻译
  wechat user [OPEN_ID]                                         # 获取用户基本信息
  wechat user_batchget [OPEN_ID_LIST]                           # 批量获取用户基本信息
  wechat user_change_group [OPEN_ID, TO_GROUP_ID]               # 移动用户分组
  wechat user_group [OPEN_ID]                                   # 查询用户所在分组
  wechat user_update_remark [OPEN_ID, REMARK]                   # 设置备注名
  wechat users                                                  # 关注者列表
  wechat wxa_msg_sec_check [CONTENT]                            # 检查一段文本是否含有违法违规内容。
  wechat wxacode_download [WXA_CODE_PIC_PATH, PATH, WIDTH]      # 下载小程序码

Enterprise account command line

$ wechat
Wechat Enterprise Account commands:
  wechat agent [AGENT_ID]                                  # 获取企业号应用详情
  wechat agent_list                                        # 获取应用概况列表
  wechat batch_job_result [JOB_ID]                         # 获取异步任务结果
  wechat batch_replaceparty [BATCH_PARTY_CSV_MEDIA_ID]     # 全量覆盖部门
  wechat batch_replaceuser [BATCH_USER_CSV_MEDIA_ID]       # 全量覆盖成员
  wechat batch_syncuser [SYNC_USER_CSV_MEDIA_ID]           # 增量更新成员
  wechat callbackip                                        # 获取微信服务器 IP 地址
  wechat clear_quota                                       # 接口调用次数清零
  wechat convert_to_openid [USER_ID]                       # userid 转换成 openid
  wechat convert_to_userid [OPENID]                        # openid 转换成 userid
  wechat custom_image [OPENID, IMAGE_PATH]                 # 发送图片客服消息
  wechat custom_music [OPENID, THUMBNAIL_PATH, MUSIC_URL]  # 发送音乐客服消息
  wechat custom_news [OPENID, NEWS_YAML_PATH]              # 发送图文客服消息
  wechat custom_text [OPENID, TEXT_MESSAGE]                # 发送文字客服消息
  wechat custom_video [OPENID, VIDEO_PATH]                 # 发送视频客服消息
  wechat custom_voice [OPENID, VOICE_PATH]                 # 发送语音客服消息
  wechat department [DEPARTMENT_ID]                        # 获取部门列表
  wechat department_create [NAME, PARENT_ID]               # 创建部门
  wechat department_delete [DEPARTMENT_ID]                 # 删除部门
  wechat department_update [DEPARTMENT_ID, NAME]           # 更新部门
  wechat getusercumulate [BEGIN_DATE, END_DATE]            # 获取累计用户数据
  wechat getusersummary [BEGIN_DATE, END_DATE]             # 获取用户增减数据
  wechat invite_user [USER_ID]                             # 邀请成员关注
  wechat material [MEDIA_ID, PATH]                         # 永久媒体下载
  wechat material_add [MEDIA_TYPE, PATH]                   # 永久媒体上传
  wechat material_count                                    # 获取永久素材总数
  wechat material_delete [MEDIA_ID]                        # 删除永久素材
  wechat material_list [TYPE, OFFSET, COUNT]               # 获取永久素材列表
  wechat media [MEDIA_ID, PATH]                            # 媒体下载
  wechat media_create [MEDIA_TYPE, PATH]                   # 媒体上传
  wechat media_hq [MEDIA_ID, PATH]                         # 高清音频媒体下载
  wechat media_uploadimg [IMAGE_PATH]                      # 上传图文消息内的图片
  wechat menu                                              # 当前菜单
  wechat menu_addconditional [CONDITIONAL_MENU_YAML_PATH]  # 创建个性化菜单
  wechat menu_create [MENU_YAML_PATH]                      # 创建菜单
  wechat menu_delconditional [MENU_ID]                     # 删除个性化菜单
  wechat menu_delete                                       # 删除菜单
  wechat menu_trymatch [USER_ID]                           # 测试个性化菜单匹配结果
  wechat message_send [OPENID, TEXT_MESSAGE]               # 发送文字消息
  wechat qrcode_download [TICKET, QR_CODE_PIC_PATH]        # 通过 ticket 下载二维码
  wechat tag [TAG_ID]                                      # 获取标签成员
  wechat tag_add_department [TAG_ID, PARTY_IDS]            # 增加标签部门
  wechat tag_add_user [TAG_ID, USER_IDS]                   # 增加标签成员
  wechat tag_create [TAGNAME, TAG_ID]                      # 创建标签
  wechat tag_del_department [TAG_ID, PARTY_IDS]            # 删除标签部门
  wechat tag_del_user [TAG_ID, USER_IDS]                   # 删除标签成员
  wechat tag_delete [TAG_ID]                               # 删除标签
  wechat tag_update [TAG_ID, TAGNAME]                      # 更新标签名字
  wechat tags                                              # 获取所有标签
  wechat template_message [OPENID, TEMPLATE_YAML_PATH]     # 模板消息接口
  wechat upload_replaceparty [BATCH_PARTY_CSV_PATH]        # 上传文件方式全量覆盖部门
  wechat upload_replaceuser [BATCH_USER_CSV_PATH]          # 上传文件方式全量覆盖成员
  wechat user [OPEN_ID]                                    # 获取用户基本信息
  wechat user_batchdelete [USER_ID_LIST]                   # 批量删除成员
  wechat user_create [USER_ID, NAME]                       # 创建成员
  wechat user_delete [USER_ID]                             # 删除成员
  wechat user_list [DEPARTMENT_ID]                         # 获取部门成员详情
  wechat user_simplelist [DEPARTMENT_ID]                   # 获取部门成员
  wechat user_update_remark [OPEN_ID, REMARK]              # 设置备注名

Note: the replaceparty full department upload only supports a single root department node and does not support multiple parallel root nodes.

Command line usage demo (partial)

Fetch all users' openids

$ wechat users

{"total"=>4, "count"=>4, "data"=>{"openid"=>["oCfEht9***********", "oCfEhtwqa***********", "oCfEht9oMCqGo***********", "oCfEht_81H5o2***********"]}, "next_openid"=>"oCfEht_81H5o2***********"}

Fetch user info

$ wechat user "oCfEht9***********"

{"subscribe"=>1, "openid"=>"oCfEht9***********", "nickname"=>"Nickname", "sex"=>1, "language"=>"zh_CN", "city"=>"徐汇", "province"=>"上海", "country"=>"中国", "headimgurl"=>"http://wx.qlogo.cn/mmopen/ajNVdqHZLLBd0SG8NjV3UpXZuiaGGPDcaKHebTKiaTyof*********/0", "subscribe_time"=>1395715239}

Fetch menu

$ wechat menu

{"menu"=>{"button"=>[{"type"=>"view", "name"=>"保护的", "url"=>"http://***/protected", "sub_button"=>[]}, {"type"=>"view", "name"=>"公开的", "url"=>"http://***", "sub_button"=>[]}]}}

Menu create

Run the command rails g wechat:menu to generate a menu definition yaml file:

button:
 -
  name: "Want"
  sub_button:
   -
    type: "scancode_waitmsg"
    name: "绑定用餐二维码"
    key: "BINDING_QR_CODE"
   -
    type: "click"
    name: "预订午餐"
    key:  "BOOK_LUNCH"
   -
    type: "miniprogram"
    name: "小程序示例"
    url:  "http://ericguo.com/"
    appid: "wx1234567890"
    pagepath: "pages/index"
 -
  name: "Query"
  sub_button:
   -
    type: "click"
    name: "进出记录"
    key:  "BADGE_IN_OUT"
   -
    type: "click"
    name: "年假余额"
    key:  "ANNUAL_LEAVE"
 -
  type: "view"
  name: "About"
  url:  "http://blog.cloud-mes.com/"

Run the command below to upload the menu:

$ wechat menu_create menu.yaml

Caution: make sure you have management privileges for this application, otherwise you will get a 60011 error.

Send custom news

The custom_news to send should also be defined in a yaml file, e.g. articles.yml:

articles:
 -
  title: "习近平在布鲁日欧洲学院演讲"
  description: "新华网比利时布鲁日 4 月 1 日电 国家主席习近平 1 日在比利时布鲁日欧洲学院发表重要演讲"
  url: "http://news.sina.com.cn/c/2014-04-01/232629843387.shtml"
  pic_url: "http://i3.sinaimg.cn/dy/c/2014-04-01/1396366518_bYays1.jpg"

After that, you can run this command:

$ wechat custom_news oCfEht9oM*********** articles.yml

Send template message

Sending a template message via a yaml file is similar: define template.yml, and its content is just the template content.

template:
  template_id: "o64KQ62_xxxxxxxxxxxxxxx-Qz-MlNcRKteq8"
  url: "http://weixin.qq.com/download"
  topcolor: "#FF0000"
  data:
    first:
      value: "Hello, you successfully registered"
      color: "#0A0A0A"
    keynote1:
      value: "5km Health Running"
      color: "#CCCCCC"
    keynote2:
      value: "2014-09-16"
      color: "#CCCCCC"
    keynote3:
      value: "Centry Park, Pudong, Shanghai"
      color: "#CCCCCC"
    remark:
      value: "Welcome back"
      color: "#173177"

After that, you can run this command:

$ wechat template_message oCfEht9oM*********** template.yml

In code:

template = YAML.load(File.read(template_yaml_path))
Wechat.api.template_message_send Wechat::Message.to(openid).template(template["template"])

If wechat_api or wechat_responder is used in the controller, you can also use wechat as a shortcut (supports multiple accounts):

template = YAML.load(File.read(template_yaml_path))
wechat.template_message_send Wechat::Message.to(openid).template(template["template"])

wechat_api - Rails Controller Wechat API

Although you can always access all wechat features via Wechat.api, it's highly recommended to use wechat directly in the controller. Not only is it required if you plan to support multiple accounts, it also helps to separate the wechat-specific logic from the model layer.

class WechatReportsController < ApplicationController
  wechat_api
  layout 'wechat'

  def index
    @lots = Lot.with_preloading.wip_lot
  end
end

Using wechat api at ActiveJob/Rake tasks

Use Wechat.api to access the wechat api functions from anywhere.

Below is an example via rails console to call AI Voice Recognition API:

# Audio file with ID3 version 2.4.0, contains:MPEG ADTS, layer III, v2,  40 kbps, 16 kHz, Monaural
test_voice_file='test_voice.mp3'
Wechat.api.addvoicetorecofortext('test_voice_id', File.open(test_voice_file))
Wechat.api.queryrecoresultfortext 'test_voice_id'

Checking the signature

Use Wechat.decrypt(encrypted_data, session_key, iv) to decode the data; see Signature Checking in the official documentation.
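
A minimal sketch (encrypted_data, session_key and iv are placeholders supplied by the client session):

decrypted_data = Wechat.decrypt(encrypted_data, session_key, iv)
# decrypted_data now holds the decoded payload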

wechat_responder - Rails Responder Controller DSL

In order to respond to the messages users send, a Rails developer needs to create a wechat responder controller and define the routing in routes.rb:

  resource :wechat, only: [:show, :create]

The ActionController should then be defined as below:

class WechatsController < ActionController::Base
  wechat_responder

  # default text responder when no other match
  on :text do |request, content|
    request.reply.text "echo: #{content}" # Just echo
  end

  # When 'help' is received, this responder will be triggered
  on :text, with: 'help' do |request|
    request.reply.text 'help content'
  end

  # When '<n> news' is received, this will match and the count <n> will be passed as a parameter
  on :text, with: /^(\d+) news$/ do |request, count|
    # A wechat article can contain at most 8 items; items beyond 8 will be dropped.
    news = (1..count.to_i).each_with_object([]) { |n, memo| memo << { title: 'News title', content: "No. #{n} news content" } }
    request.reply.news(news) do |article, n, index| # article is return object
      article.item title: "#{index} #{n[:title]}", description: n[:content], pic_url: 'http://www.baidu.com/img/bdlogo.gif', url: 'http://www.baidu.com/'
    end
  end

  on :event, with: 'subscribe' do |request|
    request.reply.text "#{request[:FromUserName]} subscribe now"
  end

  # When an unsubscribed user scans qrcode qrscene_xxxxxx to subscribe to the public account.
  # Notice the user subscribes to the public account at the same time, so wechat won't trigger the subscribe event anymore.
  on :scan, with: 'qrscene_xxxxxx' do |request, ticket|
    request.reply.text "Unsubscribe user #{request[:FromUserName]} Ticket #{ticket}"
  end

  # When an already-subscribed user scans scene_id in the public account
  on :scan, with: 'scene_id' do |request, ticket|
    request.reply.text "Subscribe user #{request[:FromUserName]} Ticket #{ticket}"
  end

  # When no on :scan responder matches the scene_id scanned by a subscribed user
  on :event, with: 'scan' do |request|
    if request[:EventKey].present?
      request.reply.text "event scan got EventKey #{request[:EventKey]} Ticket #{request[:Ticket]}"
    end
  end

  # When an enterprise user presses the BINDING_QR_CODE menu item and successfully scans a bar code
  on :scan, with: 'BINDING_QR_CODE' do |request, scan_result, scan_type|
    request.reply.text "User #{request[:FromUserName]} ScanResult #{scan_result} ScanType #{scan_type}"
  end

  # Besides QR codes, wechat can also scan CODE_39 bar codes in an enterprise account
  on :scan, with: 'BINDING_BARCODE' do |message, scan_result|
    if scan_result.start_with? 'CODE_39,'
      message.reply.text "User: #{message[:FromUserName]} scan barcode, result is #{scan_result.split(',')[1]}"
    end
  end

  # When user clicks the menu button
  on :click, with: 'BOOK_LUNCH' do |request, key|
    request.reply.text "User: #{request[:FromUserName]} click #{key}"
  end

  # When user views URL in the menu button
  on :view, with: 'http://wechat.somewhere.com/view_url' do |request, view|
    request.reply.text "#{request[:FromUserName]} view #{view}"
  end

  # When user sends an image
  on :image do |request|
    request.reply.image(request[:MediaId]) # Echo the sent image to user
  end

  # When user sends a voice
  on :voice do |request|
    # Echo the sent voice to user
    # request.reply.voice(request[:MediaId])

    voice_id = request[:MediaId]
    # It's only available for Service Accounts and must be enabled in the dashboard.
    recognition = request[:Recognition]
    request.reply.text "#{voice_id} #{recognition}"
  end

  # When user sends a video
  on :video do |request|
    nickname = wechat.user(request[:FromUserName])['nickname'] # Call wechat api to get sender nickname
    request.reply.video(request[:MediaId], title: 'Echo', description: "Got #{nickname} sent video") # Echo the sent video to user
  end

  # When user sends location message with label
  on :label_location do |request|
    request.reply.text("Label: #{request[:Label]} Location_X: #{request[:Location_X]} Location_Y: #{request[:Location_Y]} Scale: #{request[:Scale]}")
  end

  # When user sends location
  on :location do |request|
    request.reply.text("Latitude: #{request[:Latitude]} Longitude: #{request[:Longitude]} Precision: #{request[:Precision]}")
  end

  on :event, with: 'unsubscribe' do |request|
    request.reply.success # user can not receive this message
  end

  # When user enters the app / agent app
  on :event, with: 'enter_agent' do |request|
    request.reply.text "#{request[:FromUserName]} enter agent app now"
  end

  # When batch job "create/update user (incremental)" is finished.
  on :batch_job, with: 'sync_user' do |request, batch_job|
    request.reply.text "sync_user job #{batch_job[:JobId]} finished, return code #{batch_job[:ErrCode]}, return message #{batch_job[:ErrMsg]}"
  end

  # When batch job "replace user (full sync)" is finished.
  on :batch_job, with: 'replace_user' do |request, batch_job|
    request.reply.text "replace_user job #{batch_job[:JobId]} finished, return code #{batch_job[:ErrCode]}, return message #{batch_job[:ErrMsg]}"
  end

  # When batch job "invite user" is finished.
  on :batch_job, with: 'invite_user' do |request, batch_job|
    request.reply.text "invite_user job #{batch_job[:JobId]} finished, return code #{batch_job[:ErrCode]}, return message #{batch_job[:ErrMsg]}"
  end

  # When batch job "replace department (full sync)" is finished.
  on :batch_job, with: 'replace_party' do |request, batch_job|
    request.reply.text "replace_party job #{batch_job[:JobId]} finished, return code #{batch_job[:ErrCode]}, return message #{batch_job[:ErrMsg]}"
  end

  # mass sent job finish result notification
  on :event, with: 'masssendjobfinish' do |request|
    # https://mp.weixin.qq.com/wiki?action=doc&id=mp1481187827_i0l21&t=0.03571905015619936#8
    request.reply.success # request is XML result hash.
  end

  # Callback event when the customer agrees to chat content archiving
  on :change_external_contact do |request|
    # https://open.work.weixin.qq.com/api/doc/90000/90135/92005
    request.reply.success # request is XML result hash.
  end

  # Session event callback
  on :msgaudit_notify do |request|
    # https://open.work.weixin.qq.com/api/doc/90000/90135/95039
    request.reply.success # request is XML result hash.
  end

  # If no match above will fallback to below
  on :fallback, respond: 'fallback message'
end

So the only important statement is wechat_responder; everything else is just a DSL:

on <message_type> do |message|
 message.reply.text "some text"
end

The code in the block will be run to respond to the user's message.

Below are currently supported message_types:

  • :text text message, using :with to match text content like on(:text, with:'help'){|message, content| ...}
  • :image image message
  • :voice voice message
  • :shortvideo shortvideo message
  • :video video message
  • :label_location location message with label
  • :link link message
  • :event event message, using :with to match a particular event; supports regular expression matching similar to text messages.
  • :click virtual event message, wechat still sends an event message, but the gem maps it to a menu click event.
  • :view virtual view message, wechat still sends an event message, but the gem maps it to a menu view page event.
  • :scan virtual scan message, wechat still sends an event message, but the gem maps it to a scan event.
  • :batch_job virtual batch job message
  • :location virtual location message
  • :fallback default message; when no other responder can handle the incoming message, it will be used as the fallback handler

Transfer to customer service

class WechatsController < ActionController::Base
  # When no other responder can handle the incoming message, transfer it to human customer service.
  on :fallback do |message|
    message.reply.transfer_customer_service
  end
end

Caution: do not set a default text responder if you want to use multiple human customer service agents, otherwise text messages cannot be transferred.

Notifications

Example:

ActiveSupport::Notifications.subscribe('wechat.responder.after_create') do |name, started, finished, unique_id, data|
  WechatLog.create request: data[:request], response: data[:response]
end

Known Issues

  • Sometimes an enterprise account cannot receive the menu message because Tencent's server is unable to resolve DNS, so using an IP as the callback URL is more stable; this never happens for user-sent text messages.
  • Enterprise batch "replace users" uses a CSV file, but using the downloaded template directly does not work; you must open the CSV file in Excel first, then save it as CSV again, as Tencent seems to only support Excel's "Save as CSV" file format.
  • If you are using unicorn behind nginx and https, you need to set trusted_domain_fullname and point it to https, otherwise it will be http and will lead to an invalid signature in the JS-SDK.

中文文档 Chinese document

Author: Eric-Guo
Source Code: https://github.com/Eric-Guo/wechat 
License: MIT license

#ruby #wechat #sdk #framework 

Wechat: API, Command and Message Handling for WeChat in Rails
Rupert  Beatty

Rupert Beatty

1658441100

Teamwork: User to Team Associations with invitation System

Teamwork

This package supports Laravel 6 and above.     

Teamwork is the fastest and easiest method to add a User / Team association with Invites to your Laravel 6+ project.

Installation

composer require mpociot/teamwork

The Teamwork facade will be discovered by Laravel automatically.

Configuration

To publish Teamwork's configuration and migration files, run the vendor:publish command.

php artisan vendor:publish --provider="Mpociot\Teamwork\TeamworkServiceProvider"

This will create a teamwork.php in your config directory. The default configuration should work just fine for you, but you can take a look at it, if you want to customize the table / model names Teamwork will use.

User relation to teams

Run the migration command to generate all tables needed for Teamwork. If your users are stored in a table other than users, be sure to modify the published migration.

php artisan migrate

After the migration, 3 new tables will be created:

  • teams — stores team records
  • team_user — stores many-to-many relations between users and teams
  • team_invites — stores pending invites for email addresses to teams

You will also notice that a new column current_team_id has been added to your users table. This column defines the Team the user is currently assigned to.

Models

Team

Create a Team model inside app/Team.php using the following example:

<?php namespace App;

use Mpociot\Teamwork\TeamworkTeam;

class Team extends TeamworkTeam
{
}

The Team model has two main attributes:

  • owner_id — Reference to the User model that owns this Team.
  • name — Human readable name for the Team.

The owner_id is an optional attribute and is nullable in the database.

When extending TeamworkTeam, remember to change the team_model variable in config/teamwork.php to your new model. For instance: 'team_model' => App\Team::class

User

Add the UserHasTeams trait to your existing User model:

<?php namespace App;

use Mpociot\Teamwork\Traits\UserHasTeams;

class User extends Model {

    use UserHasTeams; // Add this trait to your model
}

This enables the relation with Team and adds the following methods to your User model: teams(), ownedTeams(), currentTeam(), invites(), isTeamOwner(), isOwnerOfTeam($team), attachTeam($team, $pivotData = []), detachTeam($team), attachTeams($teams), detachTeams($teams) and switchTeam($team).

Don't forget to dump the composer autoloader:

composer dump-autoload

Middleware

If you would like to use the middleware to restrict routes to the current team owner, just register the middleware in your app\Http\Kernel.php file.

    protected $routeMiddleware = [
        ...
        'teamowner' => \Mpociot\Teamwork\Middleware\TeamOwner::class,
        ...
    ];

Afterwards you can use the teamowner middleware in your routes file like so.

Route::get('/owner', function(){
    return "Owner of current team.";
})->middleware('auth', 'teamowner');

Now only an authenticated user who is the owner of the current team can access that route.

This middleware is intended to protect routes where only the owner of the team may edit/create/delete the model.

And you are ready to go.

Usage

Scaffolding

The easiest way to give your new Laravel project Team abilities is by using the make:teamwork command.

php artisan make:teamwork

This command will create all views, routes and controllers to make your new project team-ready.

Out of the box, the following parts will be created for you:

  • Team listing
  • Team creation / editing / deletion
  • Invite new members to teams

Think of it as the make:auth command for Teamwork.

To get started, take a look at the newly installed /teams route in your project.

Basic concepts

Let's start by creating two different Teams.

$team    = new Team();
$team->owner_id = User::where('username', '=', 'sebastian')->first()->getKey();
$team->name = 'My awesome team';
$team->save();

$myOtherCompany = new Team();
$myOtherCompany->owner_id = User::where('username', '=', 'marcel')->first()->getKey();
$myOtherCompany->name = 'My other awesome team';
$myOtherCompany->save();

Now thanks to the UserHasTeams trait, assigning the Teams to the user is super easy:

$user = User::where('username', '=', 'sebastian')->first();

// team attach alias
$user->attachTeam($team, $pivotData); // First parameter can be a Team object, array, or id

// or eloquent's original technique
$user->teams()->attach($team->id); // id only

By using the attachTeam method, if the User has no Teams assigned, the current_team_id column will automatically be set.

Get to know my team(s)

The currently assigned Team of a user can be accessed through the currentTeam relation like this:

echo "I'm currently in team: " . Auth::user()->currentTeam->name;
echo "The team owner is: " . Auth::user()->currentTeam->owner->username;

echo "I also have these teams: ";
print_r( Auth::user()->teams );

echo "I am the owner of these teams: ";
print_r( Auth::user()->ownedTeams );

echo "My team has " . Auth::user()->currentTeam->users->count() . " users.";

The Team model has access to these methods:

  • invites() — Returns a many-to-many relation to associated invitations.
  • users() — Returns a many-to-many relation with all users associated to this team.
  • owner() — Returns a one-to-one relation with the User model that owns this team.
  • hasUser(User $user) — Helper function to determine if a user is a team member

Team owner

If you need to check if the User is a team owner (regardless of the current team) use the isTeamOwner() method on the User model.

if( Auth::user()->isTeamOwner() )
{
    echo "I'm a team owner. Please let me pay more.";
}

Additionally if you need to check if the user is the owner of a specific team, use:

$team = Auth::user()->currentTeam;
if( Auth::user()->isOwnerOfTeam( $team ) )
{
    echo "I'm a specific team owner. Please let me pay even more.";
}

The isOwnerOfTeam method also allows an array or id as team parameter.

Switching the current team

If your Users are members of multiple teams you might want to give them access to a switch team mechanic in some way.

This means that the user has one "active" team, that is currently assigned to the user. All other teams still remain attached to the relation!

Glad we have the UserHasTeams trait.

try {
    Auth::user()->switchTeam( $team_id );
    // Or remove a team association at all
    Auth::user()->switchTeam( null );
} catch( UserNotInTeamException $e )
{
    // Given team is not allowed for the user
}

Just like the isOwnerOfTeam method, switchTeam accepts a Team object, array, id or null as a parameter.

Inviting others

The best team is of no avail if you're the only team member.

To invite other users to your teams, use the Teamwork facade.

Teamwork::inviteToTeam( $email, $team, function( $invite )
{
    // Send email to user / let them know that they got invited
});

You can also send invites by providing an object with an email property like:

$user = Auth::user();

Teamwork::inviteToTeam( $user , $team, function( $invite )
{
    // Send email to user / let them know that they got invited
});

This method will create a TeamInvite model and pass it to the callable given as the third parameter.

This model has these attributes:

  • email — The email that was invited.
  • accept_token — Unique token used to accept the invite.
  • deny_token — Unique token used to deny the invite.

In addition to these attributes, the model has these relations:

  • user() — one-to-one relation using the email as a unique identifier on the User model.
  • team() — one-to-one relation returning the Team that the invite was aimed at.
  • inviter() — one-to-one relation returning the User that created the invite.

Note: The inviteToTeam method will not check if the given email already has a pending invite. To check for pending invites use the hasPendingInvite method on the Teamwork facade.

Example usage:

if( !Teamwork::hasPendingInvite( $request->email, $request->team) )
{
    Teamwork::inviteToTeam( $request->email, $request->team, function( $invite )
    {
                // Send email to user
    });
} else {
    // Return error - user already invited
}

Accepting invites

Once you have invited other users to join your team, use the Teamwork facade once again to accept the invitation.

$invite = Teamwork::getInviteFromAcceptToken( $request->token ); // Returns a TeamworkInvite model or null

if( $invite ) // valid token found
{
    Teamwork::acceptInvite( $invite );
}

The acceptInvite method does two things:

  • Calls attachTeam with the invite's team on the currently authenticated user.
  • Deletes the invitation afterwards.

Denying invites

Just like accepting invites:

$invite = Teamwork::getInviteFromDenyToken( $request->token ); // Returns a TeamworkInvite model or null

if( $invite ) // valid token found
{
    Teamwork::denyInvite( $invite );
}

The denyInvite method is only responsible for deleting the invitation from the database.

Attaching/Detaching/Invite Events

If you need to run additional processes after attaching/detaching a team from a user or inviting a user, you can listen for these events:

\Mpociot\Teamwork\Events\UserJoinedTeam

\Mpociot\Teamwork\Events\UserLeftTeam

\Mpociot\Teamwork\Events\UserInvitedToTeam

In your EventServiceProvider add your listener(s):

/**
 * The event listener mappings for the application.
 *
 * @var array
 */
protected $listen = [
    ...
    \Mpociot\Teamwork\Events\UserJoinedTeam::class => [
        App\Listeners\YourJoinedTeamListener::class,
    ],
    \Mpociot\Teamwork\Events\UserLeftTeam::class => [
        App\Listeners\YourLeftTeamListener::class,
    ],
    \Mpociot\Teamwork\Events\UserInvitedToTeam::class => [
        App\Listeners\YourUserInvitedToTeamListener::class,
    ],
];

The UserJoinedTeam and UserLeftTeam events expose the User and the Team's ID. In your listener, you can access them like so:

<?php

namespace App\Listeners;

use Mpociot\Teamwork\Events\UserJoinedTeam;

class YourJoinedTeamListener
{
    /**
     * Create the event listener.
     *
     * @return void
     */
    public function __construct()
    {
        //
    }

    /**
     * Handle the event.
     *
     * @param  UserJoinedTeam  $event
     * @return void
     */
    public function handle(UserJoinedTeam $event)
    {
        // $user = $event->getUser();
        // $teamId = $event->getTeamId();

        // Do something with the user and team ID.
    }
}

The UserInvitedToTeam event contains an invite object which could be accessed like this:

<?php

namespace App\Listeners;

use Mpociot\Teamwork\Events\UserInvitedToTeam;

class YourUserInvitedToTeamListener
{
    /**
     * Create the event listener.
     *
     * @return void
     */
    public function __construct()
    {
        //
    }

    /**
     * Handle the event.
     *
     * @param  UserInvitedToTeam  $event
     * @return void
     */
    public function handle(UserInvitedToTeam $event)
    {
        // $user = $event->getInvite()->user;
        // $teamId = $event->getTeamId();

        // Do something with the user and team ID.
    }
}

Limit Models to current Team

If your models are somehow limited to the current team you will find yourself writing this query over and over again: Model::where('team_id', auth()->user()->currentTeam->id)->get();.

To automate this process, you can let your models use the UsedByTeams trait. This trait will automatically append the current team id of the authenticated user to all queries and will also add it to a field called team_id when saving the models.

Note:

This assumes that the model has a field called team_id

Usage

use Mpociot\Teamwork\Traits\UsedByTeams;

class Task extends Model
{
    use UsedByTeams;
}

When using this trait, all queries will append WHERE team_id=CURRENT_TEAM_ID. If there's a place in your app where you really want to retrieve all models, no matter what team they belong to, you can use the allTeams scope.

Example:

// gets all tasks for the currently active team of the authenticated user
Task::all();

// gets all tasks from all teams globally
Task::allTeams()->get();

Author: mpociot
Source Code: https://github.com/mpociot/teamwork 
License: MIT license

#laravel #framework 

Teamwork: User to Team Associations with invitation System
Royce  Reinger

Royce Reinger

1658439780

Botframework-ruby: Microsoft Bot Framework Ruby Client

BotFramework

Ruby client to make stateful bots using the Microsoft Bot Framework.

Currently under development; don't try this in production until v1.0

Installation

Add this line to your application's Gemfile:

gem 'bot_framework'

And then execute:

$ bundle

Or install it yourself as:

$ gem install bot_framework

Usage

Simple echo bot:

BotFramework.configure do |connector|
  connector.app_id = ENV['MICROSOFT_APP_ID']
  connector.app_secret = ENV['MICROSOFT_APP_SECRET']
end

BotFramework::Bot.on :activity do |activity|
  # Activity.id , identifier of the activity
  # activity.timestamp
  # activity.channel_id
  # activity.from, sender 
  # activity.conversation
  # activity.topic_name
  # activity.locale
  # activity.text
  # and so on

  reply(activity,activity.text)
end

Emulator

You can use the Bot Framework Emulator to test your bot on your local system.


Development

After checking out the repo, run bin/setup to install dependencies. Then, run rake spec to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/tachyons/botframework-ruby. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

Author: Tachyons
Source Code: https://github.com/tachyons/botframework-ruby 
License: MIT license

#ruby #bot #framework 

Botframework-ruby: Microsoft Bot Framework Ruby Client
Monty  Boehm

Monty Boehm

1658436600

GradientBoost.jl: Gradient Boosting Framework for Julia

GradientBoost

This package covers the gradient boosting paradigm: a framework that builds additive expansions based on any fitting criteria.

In machine learning parlance, this is typically referred to as gradient boosting machines, generalized boosted models and stochastic gradient boosting.

Normally, gradient boosting implementations cover a specific algorithm: gradient boosted decision trees. This package covers the framework itself, including such implementations.

References:

  • Friedman, Jerome H. "Greedy function approximation: a gradient boosting machine." Annals of Statistics (2001): 1189-1232.
  • Friedman, Jerome H. "Stochastic gradient boosting." Computational Statistics & Data Analysis 38.4 (2002): 367-378.
  • Hastie, Trevor, et al. The elements of statistical learning. Vol. 2. No. 1. New York: Springer, 2009.
  • Ridgeway, Greg. "Generalized Boosted Models: A guide to the gbm package." Update 1.1 (2007).
  • Pedregosa, Fabian, et al. "Scikit-learn: Machine learning in Python." The Journal of Machine Learning Research 12 (2011): 2825-2830.
  • Natekin, Alexey, and Alois Knoll. "Gradient boosting machines, a tutorial." Frontiers in neurorobotics 7 (2013).

Machine Learning API

Module GradientBoost.ML is provided for users who are only interested in using existing gradient boosting algorithms for prediction. To get a feel for the API, we will run a demonstration of gradient boosted decision trees on the iris dataset.

Obtain Data

At the moment only two-class classification is handled, so our learner will attempt to separate "setosa" from the other species.

using GradientBoost.ML
using RDatasets

# Obtain iris dataset
iris = dataset("datasets", "iris")
instances = array(iris[:, 1:end-1])
labels = [species == "setosa" ? 1.0 : 0.0 for species in array(iris[:, end])]

# Obtain training and test set (20% test)
num_instances = size(instances, 1)
train_ind, test_ind = GradientBoost.Util.holdout(num_instances, 0.2)

Build Learner

The gradient boosting (GB) learner comprises a GB algorithm and the output it must produce. In this case, we shall assign a gradient boosted decision tree to output classes.

# Build GBLearner
gbdt = GBDT(;
  loss_function = BinomialDeviance(),
  sampling_rate = 0.6,
  learning_rate = 0.1,
  num_iterations = 100
)
gbl = GBLearner(
  gbdt,  # Gradient boosting algorithm
  :class # Output (:class, :class_prob, :regression)
)

Train and Predict

Currently Matrix{Float64} instances and Vector{Float64} labels are the only handled types for training and prediction. In this case, it is not an issue.

# Train
ML.fit!(gbl, instances[train_ind, :], labels[train_ind])

# Predict
predictions = ML.predict!(gbl, instances[test_ind, :])

Evaluate

If all is well, we should obtain better than baseline accuracy (67%).

# Obtain accuracy
accuracy = mean(predictions .== labels[test_ind]) * 100.0
println("GBDT accuracy: $(accuracy)")

That concludes the demonstration. Detailed below are the available GB learners.

Algorithms

Documented below are the currently implemented gradient boosting algorithms.

GB Decision Tree

Gradient Boosted Decision Tree algorithm backed by DecisionTree.jl regression trees. Current loss functions covered are: LeastSquares, LeastAbsoluteDeviation and BinomialDeviance.

gbdt = GBDT(;
  loss_function = BinomialDeviance(), # Loss function
  sampling_rate = 0.6,                # Sampling rate
  learning_rate = 0.1,                # Learning rate
  num_iterations = 100,               # Number of iterations
  tree_options = {                    # Tree options (DecisionTree.jl regressor)
    :maxlabels => 5,
    :nsubfeatures => 0
  }
)

GB Base Learner

Gradient boosting with a given base learner. Current loss functions covered are: LeastSquares and LeastAbsoluteDeviation. In order to use this, ML.learner_fit and ML.learner_predict functions must be extended. Example provided below for linear regression found in GLM.jl.

import GLM: fit, predict, LinearModel

# Extend functions
function ML.learner_fit(lf::LossFunction, 
  learner::Type{LinearModel}, instances, labels)
  
  model = fit(learner, instances, labels)
end
function ML.learner_predict(lf::LossFunction,
  learner::Type{LinearModel}, model, instances)
  
  predict(model, instances)
end

Once this is done, the algorithm can be instantiated with the respective base learner.

gbl = GBBL(
  LinearModel;                    # Base Learner
  loss_function = LeastSquares(), # Loss function
  sampling_rate = 0.8,            # Sampling rate
  learning_rate = 0.1,            # Learning rate
  num_iterations = 100            # Number of iterations
)
gbl = GBLearner(gbl, :regression)

Gradient Boosting Framework

All previously developed algorithms follow the framework provided by GradientBoost.GB.

As this package is in its preliminary stage, major changes may occur in the near future and as such we provide minimal README documentation.

Everything that is required to be implemented is shown in the example below:

import GradientBoost.GB
import GradientBoost.LossFunctions: LossFunction

# Must subtype from GBAlgorithm defined in GB module.
type ExampleGB <: GB.GBAlgorithm
  loss_function::LossFunction
  sampling_rate::FloatingPoint
  learning_rate::FloatingPoint
  num_iterations::Int
end

# Model training and co-efficient optimization should be done here.
function GB.build_base_func(
  gb::ExampleGB, instances, labels, prev_func_pred, psuedo)

  model_const = 0.5
  model_pred = (instances) -> Float64[
    sum(instances[i,:]) for i = 1:size(instances, 1)
  ]

  return (instances) -> model_const .* model_pred(instances)
end

A relatively light algorithm that implements GBAlgorithm is GBBL, found in src/gb_bl.jl.

Misc

The links provided below will only work if you are viewing this in the GitHub repository.

Changes

See CHANGELOG.yml.

Future Work

See FUTUREWORK.md.

Contributing

See CONTRIBUTING.md.

Author: svs14
Source Code: https://github.com/svs14/GradientBoost.jl 
License: View license

#julia #machinelearning #framework 

GradientBoost.jl: Gradient Boosting Framework for Julia
Monty  Boehm

Monty Boehm

1658421731

FunctionalData.jl: Functional, Efficient Data Manipulation Framework

FunctionalData

FunctionalData is a package for fast and expressive data modification.

Built around a simple memory layout convention, it provides a small set of general purpose functional constructs as well as routines for efficient computation with dense numerical arrays.

Optionally, it supplies a syntax for clean, concise code:

wordcount(filename) = @p read filename String | lines | map split | flatten | length

Memory Layout

Indexing is simplified for dense n-dimensional arrays, which are viewed as collections of (n-1)-dimensional items.

For example, this allows you to use the exact same code for 2D patches and 3D blocks:

a = [1 2 3; 4 5 6]
b = ones(2, 2, 10)          #  10 2D patches
c = ones(2, 2, 2, 10)       #  10 3D blocks

len(a)       =>   3
len(b)       =>  10
len(c)       =>  10

at(a,2)      =>  [2 5]'
part(a,2:3)  =>  [2 3; 5 6]

normsum(x) = x/sum(x)

map(b, normsum)   =>  [0.25 ...  ] of size 2 x 2 x 10
map(c, normsum)   =>  [0.125 ... ] of size 2 x 2 x 2 x 10

#  Result shape may change:
map(b, sum)       =>  [4 ... ]     of size 1 x 10
map(c, sum)       =>  [8 ... ]     of size 1 x 10

Efficiency

Using a custom View type based on this memory layout assumption, the provided map operations can be considerably faster than built-ins. Given our data and desired operation:

a = rand(10, 1000000)   #  =>  80 MB

csum!(x) = for i = 2:length(x) x[i] += x[i-1] end
csumoncopy(x) = (for i = 2:length(x) x[i] += x[i-1] end; x)

we can use the following simple, general and efficient statement:

map!(a, csum!) 
#  elapsed time: 0.027491752 seconds (256 bytes allocated)

Built-in alternatives are either slower or require manual inlining, for a specific data layout:

mapslices(csumoncopy, a, [1])
#  elapsed time: 0.85726391 seconds (404 MB allocated, 5.34% gc time)

f(a) = for i = 1:size(a,2)  a[:,i] = csumoncopy(a[:,i])  end
#  elapsed time: 0.110978216 seconds (144 MB allocated, 3.86% gc time)

f2(a) = for i = 1:size(a,2)  csum!(sub(a,:,i))  end
#  elapsed time: 0.071394038 seconds (160 MB allocated, 16.46% gc time)

function f3(a)
    for n = 1:size(a,2)
        for m = 2:size(a,1)  a[m,n] += a[m-1,n]  end
    end
end
#  elapsed time: 0.017072235 seconds (80 bytes allocated)

function f4(a)
    for n = 1:size(a,1):length(a)
        for m = 1:size(a,1)-1  a[n+m] += a[n+m-1]  end
    end
end
#  elapsed time: 0.013347679 seconds (80 bytes allocated)

With the exact same syntax we can easily parallelize our code using the local workers via shared memory or Julia's inter-process serialization, either on the local host or across all machines:

shmap!(a, csum!)      # local processes, shared memory
lmap!(a, csum!)       # local processes
pmap!(a, csum!)       # all available processes

For each of these variants there are optimized functions available for in-place operation on the input array, in-place operation on a new output array, or fallback options for functions which do not work in-place. For details, see the section on map and Friends.

News

0.0.9

  • version requirement for 0.4 build
  • map and mapmap for Dict
  • fix typed

0.0.7 / 0.0.8

  • fixed repeat for numeric arrays
  • made test_equal more robust
  • reworked map and view for Array{T,1} / scalar return values
  • fix partsoflen, concat
  • add takelast(a), unequal, sortpermrev, filter
  • fix map for Dict

0.0.6

  • added localworkers and hostpids
  • added hmap and variants, which map tasks to the first pid of each machine
  • removed makeliteral, as the built-in repr does the same
  • sped up matrix
  • added map2, map3, map4, map5
  • fixed unzip
  • added flip, flipdims
  • added extract, removed @getfield

Documentation

Please see the overview below for one-line descriptions of each function. More details and examples can then be found in the following sections (work in progress)

Overview

Length and Size [details]

len(a)                              # length
siz(a)                              # lsize, ndims x 1
siz3(a)                             # lsize, 3 x 1

Data Access [details]

at(a, i)                            # item i
setat!(a, i, value)                 # set item i to value
fst(a)                              # first item
snd(a)                              # second item
third(a)                            # third item
last(a)                             # last item
part(a, ind)                        # items at indices ind
trimmedpart(a, ind)                 # items at ind, no error if a is too short
take(a, n)                          # the first up to n elements
takelast(a,n=1)                     # the last up to n elements
drop(a,n)                           # a, except for the first n elements
droplast(a,n=1)                     # a, except for the last n elements
partition(a, n)                     # partition into n parts
partsoflen(a, n)                    # partition into parts of length n
extract(a, field, default)          # get key x of dict or field x of composite type instance

Data Layout [details]

row(a)                              # reshape into row vector
col(a)                              # reshape into column vector
reshape(a, siz)                     # reshape into size in ndim x 1 vector siz
split(a, x or f)                    # split a where item == x or f(item) == true                         
concat(a...)                        # same as flatten([a...])
subtoind(sub, a)                    # transform ndims x npoints sub to linear ind for a
indtosub(ind, a)                    # transform linear ind to ndims x len(ind) sub for a
stack(a)                            # concat along the n + 1st dim of the items in a
flatten(a)                          # reduce the nestedness of a
unstack(a)                          # split the dense array a into array of items
riffle(a, x)                        # insert x between the items of a
matrix(a)                           # reshape items of a to column vectors
unmatrix(a, example)                # reshape the column vector items in a according to example
lines(a)                            # split the text a into array of lines
unlines(a)                          # concat a with newlines 
unzip(a)                            # unzip items
findsub(a)                          # return sub for the non-zero entries
randsample(a, n)                    # draw n items from a with repetition
randperm(a)                         # randomly permute order of items
flip(a)                             # reverse the order of items
flipdims(a,d1,d2)                   # flip dims d1 and d2

Pipeline Syntax [details]

r = @p f1 a b | f2 | f3 c           # pipeline macro, equals f3(f2(f1(a,b)),c)
r = @p f1 a | f2 b _ | f3 e         # equals f3(f2(b,f1(a)),c)

Efficient Views [details]

view(a,i)                           # lightweight view of item i of a
view(a,i,v)                         # lightweight view of item i of a, reusing v
next!(v)                            # make v point to the i + 1th item of a
trytoview(a,v)                      # for dense array, use view, otherwise part
trytoview(a,v,i)                    # for dense array, use view reusing v, otherwise part

Computing: map and Friends [details]

map(a, f)                           # apply f to each item
map!(a, f!)                         # apply f! to each item in-place
map!r(a, f)                         # apply f to each item, overwriting a                         
map2!(a, f, f!)                     # apply f to fst(a), f! to other items
map2!(a, r, f!)                     # apply f!(resultitem, item) to each item
shmap(a, f)                         # parallel map f to shared array a, across procs(a)
shmap!(a, f!)                       # in-place shmap f!, overwriting a, across procs(a)
shmap!r(a, f)                       # apply f to each item, overwriting a, across procs(a)
shmap2!(a, f, f!)                   # apply f to fst(a), f! to other items, across procs(a)
shmap2!(a, r, f!)                   # apply f!(resultitem, item), across procs(a)
pmap(a, f)                          # parallel map of f across all workers
lmap(a, f)                          # parallel map of f across local workers
mapmap(a, f)                        # shorthand for map(a, x->map(x,f))
map2(a,b,f), map3, map4, map5       # map over a and b invoking f(x,y)
work(a, f)                          # apply f to each item, no result value
pwork, lwork, shwork, workwork      # like the corresponding map variants
any(a, f)                           # is any f(item) true
anyequal(a, x)                      # is any item == x
all(a, f)                           # are all f(item) true
allequal(a, x)                      # are all items == x
unequal(a,b)                        # shortcut for !isequal(a,b)
sort(a, f; kargs...)                # sort a according to f(item)
uniq(a[, f])                        # unique elements of a or uniq(a,map(a,f))
table(f, a...)                      # like [f(m,n) for m in a[1], n in a[2]], for any length of a
ptable, ltable                      # parallel table using all workers, local workers
tableany, ptableany, ltableany      # like table, but does not flatten result

Output [details]

showinfo
tee

I/O [details]

read
write
existsfile
mkdir 
filenames
filepaths
dirnames
dirpaths
readmat
writemat

Helpers [details]

zerossiz(s, typ)                    # zeros(s...), default typ is Float64
shzerossiz(s, typ)                  # shared zerossiz
shzeros([typ,] s...)                # shared zeros
onessiz(s, typ)                     # ones(s...), default typ is Float64
shonessiz(s, typ)                   # shared onessiz
shones([typ,] s...)                 # shared ones
randsiz(s, typ)                     # rand(s...), default typ is Float64
shrandsiz(s, typ)                   # shared randsiz
shrand([typ,] s...)                 # shared rand
randnsiz(s, typ)                    # randn(s...), default typ is Float64
shrandnsiz(s, typ)                  # shared randnsiz
shrandn([typ,] s...)                # shared randn
zeroel(a)                           # zero(eltype(a))
oneel(a)                            # one(eltype(a))
@dict a b c ...                     # Dict("a" => a, "b" => b, "c" => c, ...)
+
* 
repeat(a, n)                        # repeat a n times
nop()                               # no-op
id(a...)                            # returns a...
istrue(a or f)                      # is a or result of f true
isfalse(a or f)                     # !istrue
not                                 # alias for !
or                                  # alias for ||
and                                 # alias for &&
plus                                # alias for .+
minus                               # alias for .-
times                               # alias for .*
divby                               # alias for ./

Unit Tests [details]

@test_equal a b                     # test a and b for equality, show detailed info if not equal
@assert_equal a b                   # like @test_equal, but throws an error on failure
@test_almostequal a b maxdiff       # like @test_equal, but allows a difference of up to maxdiff

Author: Rened
Source Code: https://github.com/rened/FunctionalData.jl 
License: View license

#julia #functional #framework 

FunctionalData.jl: Functional, Efficient Data Manipulation Framework
Rupert  Beatty

Rupert Beatty

1658271420

Laravel-form-builder: Laravel form Builder For Version 5+!

Laravel 5 form builder

Form builder for Laravel 5 inspired by Symfony's form builder. With the help of Laravel's FormBuilder class it creates forms that can be easily modified and reused. By default it supports Bootstrap 3.

Installation

Using Composer

composer require kris/laravel-form-builder

Or manually by modifying the composer.json file:

{
    "require": {
        "kris/laravel-form-builder": "1.*"
    }
}

And run composer install

Then add the service provider to config/app.php:

    'providers' => [
        // ...
        Kris\LaravelFormBuilder\FormBuilderServiceProvider::class
    ]

And the facade (also in config/app.php):

    'aliases' => [
        // ...
        'FormBuilder' => Kris\LaravelFormBuilder\Facades\FormBuilder::class
    ]

Notice: This package will add the laravelcollective/html package and load its aliases (Form, Html) if they do not exist in the IoC container.

Quick start

Creating form classes is easy with a simple artisan command:

php artisan make:form Forms/SongForm --fields="name:text, lyrics:textarea, publish:checkbox"

The form is created at app/Forms/SongForm.php with the following content:

<?php

namespace App\Forms;

use Kris\LaravelFormBuilder\Form;
use Kris\LaravelFormBuilder\Field;

class SongForm extends Form
{
    public function buildForm()
    {
        $this
            ->add('name', Field::TEXT, [
                'rules' => 'required|min:5'
            ])
            ->add('lyrics', Field::TEXTAREA, [
                'rules' => 'max:5000'
            ])
            ->add('publish', Field::CHECKBOX);
    }
}

If you want to instantiate an empty form without any fields, just skip the --fields parameter:

php artisan make:form Forms/PostForm

Gives:

<?php

namespace App\Forms;

use Kris\LaravelFormBuilder\Form;

class PostForm extends Form
{
    public function buildForm()
    {
        // Add fields here...
    }
}

After that, instantiate the class in the controller and pass it to the view:

<?php

namespace App\Http\Controllers;

use Illuminate\Routing\Controller as BaseController;
use Kris\LaravelFormBuilder\FormBuilder;

class SongsController extends BaseController {

    public function create(FormBuilder $formBuilder)
    {
        $form = $formBuilder->create(\App\Forms\SongForm::class, [
            'method' => 'POST',
            'url' => route('song.store')
        ]);

        return view('song.create', compact('form'));
    }

    public function store(FormBuilder $formBuilder)
    {
        $form = $formBuilder->create(\App\Forms\SongForm::class);

        if (!$form->isValid()) {
            return redirect()->back()->withErrors($form->getErrors())->withInput();
        }

        // Do saving and other things...
    }
}

Alternative example:

<?php

namespace App\Http\Controllers;

use Illuminate\Routing\Controller as BaseController;
use Kris\LaravelFormBuilder\FormBuilder;
use App\Forms\SongForm;

class SongsController extends BaseController {

    public function create(FormBuilder $formBuilder)
    {
        $form = $formBuilder->create(SongForm::class, [
            'method' => 'POST',
            'url' => route('song.store')
        ]);

        return view('song.create', compact('form'));
    }

    public function store(FormBuilder $formBuilder)
    {
        $form = $formBuilder->create(SongForm::class);

        if (!$form->isValid()) {
            return redirect()->back()->withErrors($form->getErrors())->withInput();
        }

        // Do saving and other things...
    }
}

If you want to store a model after the form is submitted, considering that all fields are model properties:

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Kris\LaravelFormBuilder\FormBuilder;
use App\SongForm;

class SongFormController extends Controller
{
    public function store(FormBuilder $formBuilder)
    {
        $form = $formBuilder->create(\App\Forms\SongForm::class);
        $form->redirectIfNotValid();
        
        SongForm::create($form->getFieldValues());

        // Do redirecting...
    }
}

Or you can save only the properties you need:

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Kris\LaravelFormBuilder\FormBuilder;
use Illuminate\Http\Request;
use App\SongForm;

class SongFormController extends Controller
{
    public function store(FormBuilder $formBuilder, Request $request)
    {
        $form = $formBuilder->create(\App\Forms\SongForm::class);
        $form->redirectIfNotValid();
        
        $songForm = new SongForm();
        $songForm->fill($request->only(['name', 'artist']))->save();

        // Do redirecting...
    }
}

Or you can update an existing model after the form is submitted:

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Kris\LaravelFormBuilder\FormBuilder;
use Illuminate\Http\Request;
use App\SongForm;

class SongFormController extends Controller
{
    public function update(int $id, Request $request)
    {
        $songForm = SongForm::findOrFail($id);

        $form = $this->getForm($songForm);
        $form->redirectIfNotValid();

        $songForm->update($form->getFieldValues());

        // Do redirecting...
    }
}
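
The update() example above calls a getForm() helper that is not defined in this README. Below is a minimal sketch of what such a helper could look like, assuming the package's 'model' option is used to bind the existing model to the form; the method name and route setup are illustrative, not prescribed by the package:

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use Kris\LaravelFormBuilder\FormBuilder;
use App\SongForm;

class SongFormController extends Controller
{
    // Build the SongForm for an existing model; binding the model via the
    // 'model' option lets the form populate and validate against its
    // current attributes.
    protected function getForm(SongForm $songForm)
    {
        $formBuilder = app(FormBuilder::class);

        return $formBuilder->create(\App\Forms\SongForm::class, [
            'method' => 'PUT',
            'model'  => $songForm,
        ]);
    }
}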

Create the routes

// app/Http/routes.php
Route::get('songs/create', [
    'uses' => 'SongsController@create',
    'as' => 'song.create'
]);

Route::post('songs', [
    'uses' => 'SongsController@store',
    'as' => 'song.store'
]);

Print the form in the view with the form() helper function:

<!-- resources/views/song/create.blade.php -->

@extends('app')

@section('content')
    {!! form($form) !!}
@endsection

Go to /songs/create; the above code will generate this HTML:

<form method="POST" action="http://example.dev/songs">
    <input name="_token" type="hidden" value="FaHZmwcnaOeaJzVdyp4Ml8B6l1N1DLUDsZmsjRFL">
    <div class="form-group">
        <label for="name" class="control-label">Name</label>
        <input type="text" class="form-control" id="name">
    </div>
    <div class="form-group">
        <label for="lyrics" class="control-label">Lyrics</label>
        <textarea name="lyrics" class="form-control" id="lyrics"></textarea>
    </div>
    <div class="form-group">
        <label for="publish" class="control-label">Publish</label>
        <input type="checkbox" name="publish" id="publish">
    </div>
</form>

Or you can generate forms more easily by using a simple array:

<?php

namespace App\Http\Controllers;

use Illuminate\Routing\Controller as BaseController;
use Kris\LaravelFormBuilder\FormBuilder;
use Kris\LaravelFormBuilder\Field;
use App\Forms\SongForm;

class SongsController extends BaseController {

    public function create(FormBuilder $formBuilder)
    {
        $form = $formBuilder->createByArray([
                [
                    'name' => 'name',
                    'type' => Field::TEXT,
                ],
                [
                    'name' => 'lyrics',
                    'type' => Field::TEXTAREA,
                ],
                [
                    'name' => 'publish',
                    'type' => Field::CHECKBOX,
                ],
            ], [
                'method' => 'POST',
                'url' => route('song.store')
            ]);

        return view('song.create', compact('form'));
    }
}

Laravel 4

For the Laravel 4 version, check laravel4-form-builder.

Bootstrap 4 support

To use Bootstrap 4 instead of Bootstrap 3, install laravel-form-builder-bs4.

Upgrade to 1.6

If you upgraded to 1.6.* or later from 1.5.* or earlier and are having problems with form value binding, rename default_value to value.
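
For example, a field definition that previously set a default value would change like this (the field name and value below are illustrative):

// Before (1.5.* and earlier)
$this->add('name', Field::TEXT, [
    'default_value' => 'Unknown'
]);

// After (1.6.* and later)
$this->add('name', Field::TEXT, [
    'value' => 'Unknown'
]);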

More info in changelog.

Documentation

For detailed documentation refer to https://kristijanhusak.github.io/laravel-form-builder/.

Changelog

Changelog can be found here.

Contributing

The project follows the PSR-2 standard and is covered with PHPUnit tests. Pull requests should include tests and pass the Travis CI build.

To run the tests, first install the dependencies with composer install.

After that, the tests can be run with vendor/bin/phpunit.

Author: Kristijanhusak
Source Code: https://github.com/kristijanhusak/laravel-form-builder 
License: MIT license

#laravel #builder #php #framework 

Laravel-form-builder: Laravel form Builder For Version 5+!