2021-04-17
Adobe Flash became the latest internet technology to reach end-of-life on 31st December, 2020, with a built-in kill switch that took effect on 12th January, 2021. From Adobe’s announcement:
Since Adobe will no longer be supporting Flash Player after December 31, 2020 and Adobe will block Flash content from running in Flash Player beginning January 12, 2021, Adobe strongly recommends all users immediately uninstall Flash Player to help protect their systems.
Flash was used in the early days of the internet to provide interactive graphics and applications, and later as a mechanism for distributing ads, viruses, and malware. Its popularity for writing games gave it the biggest use case on the internet over the last couple of decades, in part because its vector graphics format made creating portable games easy, without having to worry about whatever the latest incompatible graphics drivers were for various operating systems. It also gained popularity by being able to play MPEG-4 and H.264 video in a portable way without being constrained by the browser’s own expected video formats.
#web development #web #internet #html #development #news
2022-05-01
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important. serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| --- | --- | --- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing invoke urls to external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql \| postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
appsync-simulator:
location: '.webpack/service' # use webpack build directory
dynamoDb:
endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
appsync-simulator:
watch:
- ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
- "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out by leaving an empty array or setting the option to false.
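For example, a minimal sketch that disables watching entirely:
custom:
  appsync-simulator:
    watch: false # opt out of hot-reloading (an empty array [] also works)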
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.
Resource CloudFormation functions resolution
This plugin supports some resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to the documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
environment:
BUCKET_NAME:
Ref: MyBucket # resolves to `my-bucket-name`
resources:
Resources:
MyDbTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: myTable
...
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket-name
...
# in your appsync config
dataSources:
- type: AMAZON_DYNAMODB
name: dynamosource
config:
tableName:
Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.
refMap takes a mapping of resource name to value pairs.
getAttMap takes a mapping of resource name to attribute/value pairs.
importValueMap takes a mapping of import name to value pairs.
Example:
custom:
appsync-simulator:
refMap:
# Override `MyDbTable` resolution from the previous example.
MyDbTable: 'mock-myTable'
getAttMap:
# define ElasticSearchInstance DomainName
ElasticSearchInstance:
DomainEndpoint: 'localhost:9200'
importValueMap:
other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
- type: AMAZON_ELASTICSEARCH
name: elasticsource
config:
# endpoint resolves as 'https://localhost:9200'
endpoint:
Fn::Join:
- ''
- - https://
- Fn::GetAtt:
- ElasticSearchInstance
- DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks - refMap, getAttMap and importValueMap:
provider:
environment:
FINISH_ACTIVITY_FUNCTION_ARN:
Fn::ImportValue: other-service-api-${self:provider.stage}-url
custom:
serverless-appsync-simulator:
importValueMap:
- key: other-service-api-${self:provider.stage}-url
value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you have the need of resolving others, feel free to open an issue and explain your use case.
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url which should point to a REST method. Make sure you specify "get" or "post" for the method. Default is "get", but you probably want "post".
custom:
appsync-simulator:
functions:
addUser:
url: http://localhost:3016/2015-03-31/functions/addUser/invocations
method: post
addPost:
url: https://jsonplaceholder.typicode.com/posts
method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
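The following request template builds an INSERT statement from $ctx.args.input: camelCase field names are converted to snake_case column names, booleans become 1/0, and other values are quoted. The <name-of-table> placeholder is yours to fill in: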
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
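An UPDATE request template can be built from the same input, accumulating a column=value list for the SET clause: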
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
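A soft delete can be expressed as an UPDATE that stamps a deleted_at column instead of removing the row: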
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
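On the way back, a response template can clean up the raw RDS result: timestamptz columns are reformatted as ISO 8601, and snake_case column names are converted back to camelCase before the JSON is returned: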
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; inject numbers directly into the statement if needed. null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
"variableMap": {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
2022-04-03
Generis
Versatile Go code generator.
Generis is a lightweight code preprocessor adding the following features to the Go language :
package main;
// -- IMPORTS
import (
"html"
"io"
"log"
"net/http"
"net/url"
"strconv"
);
// -- DEFINITIONS
#define DebugMode
#as true
// ~~
#define HttpPort
#as 8080
// ~~
#define WriteLine( {{text}} )
#as log.Println( {{text}} )
// ~~
#define local {{variable}} : {{type}};
#as var {{variable}} {{type}};
// ~~
#define DeclareStack( {{type}}, {{name}} )
#as
// -- TYPES
type {{name}}Stack struct
{
ElementArray []{{type}};
}
// -- INQUIRIES
func ( stack * {{name}}Stack ) IsEmpty(
) bool
{
return len( stack.ElementArray ) == 0;
}
// -- OPERATIONS
func ( stack * {{name}}Stack ) Push(
element {{type}}
)
{
stack.ElementArray = append( stack.ElementArray, element );
}
// ~~
func ( stack * {{name}}Stack ) Pop(
) {{type}}
{
local
element : {{type}};
element = stack.ElementArray[ len( stack.ElementArray ) - 1 ];
stack.ElementArray = stack.ElementArray[ : len( stack.ElementArray ) - 1 ];
return element;
}
#end
// ~~
#define DeclareStack( {{type}} )
#as DeclareStack( {{type}}, {{type:PascalCase}} )
// -- TYPES
DeclareStack( string )
DeclareStack( int32 )
// -- FUNCTIONS
func HandleRootPage(
response_writer http.ResponseWriter,
request * http.Request
)
{
local
boolean : bool;
local
natural : uint;
local
integer : int;
local
real : float64;
local
escaped_html_text,
escaped_url_text,
text : string;
local
integer_stack : Int32Stack;
boolean = true;
natural = 10;
integer = 20;
real = 30.0;
text = "text";
escaped_url_text = "&escaped text?";
escaped_html_text = "<escaped text/>";
integer_stack.Push( 10 );
integer_stack.Push( 20 );
integer_stack.Push( 30 );
#write response_writer
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title><%= request.URL.Path %></title>
</head>
<body>
<% if ( boolean ) { %>
<%= "URL : " + request.URL.Path %>
<br/>
<%@ natural %>
<%# integer %>
<%& real %>
<br/>
<%~ text %>
<%^ escaped_url_text %>
<%= escaped_html_text %>
<%= "<%% ignored %%>" %>
<%% ignored %%>
<% } %>
<br/>
Stack :
<br/>
<% for !integer_stack.IsEmpty() { %>
<%# integer_stack.Pop() %>
<% } %>
</body>
</html>
#end
}
// ~~
func main()
{
http.HandleFunc( "/", HandleRootPage );
#if DebugMode
WriteLine( "Listening on http://localhost:HttpPort" );
#end
log.Fatal(
http.ListenAndServe( ":HttpPort", nil )
);
}
Constants and generic code can be defined with the following syntax :
#define old code
#as new code
#define old code
#as
new
code
#end
#define
old
code
#as new code
#define
old
code
#as
new
code
#end
The #define directive can contain one or several parameters :
{{variable name}} : hierarchical code (with properly matching brackets and parentheses)
{{variable name#}} : statement code (hierarchical code without semicolon)
{{variable name$}} : plain code
{{variable name:boolean expression}} : conditional hierarchical code
{{variable name#:boolean expression}} : conditional statement code
{{variable name$:boolean expression}} : conditional plain code
They can include a boolean expression to require that they match specific conditions :
HasText text
HasPrefix prefix
HasSuffix suffix
HasIdentifier text
false
true
!expression
expression && expression
expression || expression
( expression )
The #define directive must not start or end with a parameter.
The #as directive can use the value of the #define parameters :
{{variable name}}
{{variable name:filter function}}
{{variable name:filter function:filter function:...}}
Their value can be changed through one or several filter functions :
LowerCase
UpperCase
MinorCase
MajorCase
SnakeCase
PascalCase
CamelCase
RemoveComments
RemoveBlanks
PackStrings
PackIdentifiers
ReplacePrefix old_prefix new_prefix
ReplaceSuffix old_suffix new_suffix
ReplaceText old_text new_text
ReplaceIdentifier old_identifier new_identifier
AddPrefix prefix
AddSuffix suffix
RemovePrefix prefix
RemoveSuffix suffix
RemoveText text
RemoveIdentifier identifier
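For example (a hypothetical macro, assuming filters are applied with the {{name:FilterFunction}} syntax shown in the DeclareStack sample above) :
#define PrintVariable( {{variable}} )
#as log.Println( "{{variable:UpperCase}} =", {{variable}} )
With this definition, PrintVariable( integer ) should expand to log.Println( "INTEGER =", integer ).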
Conditional code can be defined with the following syntax :
#if boolean expression
#if boolean expression
...
#else
...
#end
#else
#if boolean expression
...
#else
...
#end
#end
The boolean expression can use the following operators :
false
true
!expression
expression && expression
expression || expression
( expression )
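For instance (a small sketch reusing the DebugMode constant and the WriteLine macro defined in the sample above) :
#if !DebugMode
    WriteLine( "release build" );
#else
    WriteLine( "debug build" );
#end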
Templated HTML code can be sent to a stream writer using the following syntax :
#write writer expression
<% code %>
<%@ natural expression %>
<%# integer expression %>
<%& real expression %>
<%~ text expression %>
<%= escaped text expression %>
<%! removed content %>
<%% ignored tags %%>
#end
The --join option requires the statements to end with a semicolon.
The #write directive is only available for the Go language.
Install the DMD 2 compiler (using the MinGW setup option on Windows).
Build the executable with the following command line :
dmd -m64 generis.d
generis [options]
--prefix # : set the command prefix
--parse INPUT_FOLDER/ : parse the definitions of the Generis files in the input folder
--process INPUT_FOLDER/ OUTPUT_FOLDER/ : reads the Generis files in the input folder and writes the processed files in the output folder
--trim : trim the HTML templates
--join : join the split statements
--create : create the output folders if needed
--watch : watch the Generis files for modifications
--pause 500 : time to wait before checking the Generis files again
--tabulation 4 : set the tabulation space count
--extension .go : generate files with this extension
generis --process GS/ GO/
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder.
generis --process GS/ GO/ --create
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed.
generis --process GS/ GO/ --create --watch
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, creating the output folders if needed and watching the Generis files for modifications.
generis --process GS/ GO/ --trim --join --create --watch
Reads the Generis files in the GS/ folder and writes Go files in the GO/ folder, trimming the HTML templates, joining the split statements, creating the output folders if needed and watching the Generis files for modifications.
Version: 2.0
Author: Senselogic
Source Code: https://github.com/senselogic/GENERIS
License: View license
2020-07-14
Multiple vulnerabilities in the Citrix Application Delivery Controller (ADC) and Gateway would allow code injection, information disclosure and denial of service, the networking vendor announced Tuesday. Four of the bugs are exploitable by an unauthenticated, remote attacker.
The Citrix products (formerly known as NetScaler ADC and Gateway) are used for application-aware traffic management and secure remote access, respectively, and are installed in at least 80,000 companies in 158 countries, according to a December assessment from Positive Technologies.
Other flaws announced Tuesday also affect Citrix SD-WAN WANOP appliances, models 4000-WO, 4100-WO, 5000-WO and 5100-WO.
Attacks on the management interface of the products could result in system compromise by an unauthenticated user on the management network; or system compromise through cross-site scripting (XSS). Attackers could also create a download link for the device which, if downloaded and then executed by an unauthenticated user on the management network, could result in the compromise of a local computer.
“Customers who have configured their systems in accordance with Citrix recommendations [i.e., to have this interface separated from the network and protected by a firewall] have significantly reduced their risk from attacks to the management interface,” according to the vendor.
Threat actors could also mount attacks on Virtual IPs (VIPs). VIPs, among other things, are used to provide users with a unique IP address for communicating with network resources for applications that do not allow multiple connections or users from the same IP address.
The VIP attacks include denial of service against either the Gateway or Authentication virtual servers by an unauthenticated user; or remote port scanning of the internal network by an authenticated Citrix Gateway user.
“Attackers can only discern whether a TLS connection is possible with the port and cannot communicate further with the end devices,” according to the critical Citrix advisory. “Customers who have not enabled either the Gateway or Authentication virtual servers are not at risk from attacks that are applicable to those servers. Other virtual servers e.g. load balancing and content switching virtual servers are not affected by these issues.”
A final vulnerability has been found in Citrix Gateway Plug-in for Linux that would allow a local logged-on user of a Linux system with that plug-in installed to elevate their privileges to an administrator account on that computer, the company said.
#vulnerabilities #adc #citrix #code injection #critical advisory #cve-2020-8187 #cve-2020-8190 #cve-2020-8191 #cve-2020-8193 #cve-2020-8194 #cve-2020-8195 #cve-2020-8196 #cve-2020-8197 #cve-2020-8198 #cve-2020-8199 #denial of service #gateway #information disclosure #patches #security advisory #security bugs
2022-03-23
This is a general purpose OpenAPI code generator. It is currently used to completely generate the HTTP code in the Java SDK, and generate some of the HTTP code in our Golang SDK.
We currently have two HTTP endpoints, one for algod and one for indexer, so in most cases this tool would be run once with each OpenAPI spec.
~$ mvn package -DskipTests
~$ java -jar target/generator-*-jar-with-dependencies.jar -h
You'll see that there are a number of subcommands:
The command line interface uses JCommander to define the command line interface. See Main.java.
The main code involves an OpenAPI parser / event generator and several listeners for the actual generation.
The template subcommand uses Apache Velocity as the underlying template engine. Things like variables, loops, and statements are all supported, so business logic can technically be implemented in the template if it's actually necessary.
There are three phases: client, query, and model. Each phase must provide two templates, one for the file generation and one to specify the filename to be used. For query and model generation, the template will be executed once for each query / model. If all results should go to the same file, return the same filename twice in a row and the processing will exit early.
phase | filename | purpose |
---|---|---|
client | client.vm | Client class with functions to call each query. |
client | client_filename.vm | File to write to the client output directory. |
query | query.vm | Template to use for generating query files. |
query | query_filename.vm | File to write to the query output directory. |
model | model.vm | Template to use for generating model files. |
model | model_filename.vm | File to write to the model output directory. |
The template command will only run the templates for which an output directory is provided. So if you just want to regenerate models, only use the -m option.
-c, --clientOutputDir
Directory to write client file(s).
-m, --modelsOutputDir
Directory to write model file(s).
-q, --queryOutputDir
Directory to write query file(s).
The template subcommand accepts a --propertyFiles option. It can be provided multiple times, or as a comma separated list of files. Property files will be processed and bound to a velocity variable available to templates.
For details on a type you can put it directly into your template. It will be serialized along with its fields for your reference. Here is a high level description of what is available:
template | variable | type | purpose |
---|---|---|---|
all | str | StringHelpers.java | Some string utilities are available. See StringHelpers.java for details. There are simple things like $str.capitalize("someData") -> SomeData , and also some more complex helpers like $str.formatDoc($query.doc, "// ") which will split the document at the word boundary nearest to 80 characters without going over, and add a prefix to each new line. |
all | order | OrderHelpers.java | Some ordering utilities available. See OrderHelpers.java for details. An example utility function is $order.propertiesWithOrdering($props, $preferred_order) , where $props is a list of properties and $preferred_order is a string list to use when ordering the properties list. |
all | propFile | Properties | The contents of all property files are available with this variable. For example if package=com.algorand.v2.algod is in the property file, the template may use ${propFile.package} . |
all | models | HashMap<StructDef, List<TypeDef>> | A list of all models. |
all | queries | List<QueryDef> | A list of all queries. |
query | q | QueryDef | The current query definition. |
model | def | StructDef | The current model definition if multiple files are being generated. |
model | props | List<TypeDef> | A list of properties for the current model. |
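As a hedged illustration (this exact template is hypothetical, but $q and $str are the documented variables above), a query_filename.vm that names one Go file per query could be as short as:
## query_filename.vm : emit a file named after the capitalized query name
${str.capitalize($q.name)}.go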
In the following example, we are careful to generate the algod code first because the algod models are a strict subset of the indexer models. For that reason, we are able to reuse some overlapping models from indexer in algod.
~$ java -jar generator*jar template
-s algod.oas2.json
-t go_templates
-c algodClient
-m allModels
-q algodQueries
-p common_config.properties,algod_config.properties
~$ java -jar generator*jar template
-s indexer.oas2.json
-t go_templates
-c indexerClient
-m allModels
-q indexerQueries
-p common_config.properties,indexer_config.properties
There is a test template that gives you some basic usage in the test_templates directory.
You can generate the test code in the output directory with the following commands:
~$ mkdir output
~$ java -jar target/generator-*-jar-with-dependencies.jar \
template \
-s /path/to/a/spec/file/indexer.oas2.json \
-t test_templates/ \
-m output \
-q output \
-c output \
-p test_templates/my.properties
The Golang templates are in the go_templates directory.
The Golang HTTP API is only partially generated. The hand-written parts were not totally consistent with the spec, which makes it difficult to regenerate them. Regardless, an attempt has been made. In the templates there are some macros which map "generated" values to the hand-written ones. For example, the query types have this mapping:
#macro ( queryType )
#if ( ${str.capitalize($q.name)} == "SearchForAccounts" )
SearchAccounts## The hand written client doesn't quite match the spec...
#elseif ( ${str.capitalize($q.name)} == "GetStatus" )
Status##
#elseif ( ${str.capitalize($q.name)} == "GetPendingTransactionsByAddress" )
PendingTransactionInformationByAddress##
#elseif ( ${str.capitalize($q.name)} == "GetPendingTransactions" )
PendingTransactions##
#else
${str.capitalize($q.name)}##
#end
#end
Other mappings are more specific to the language, such as the OpenAPI type to SDK type:
#macro ( toQueryType $param )##
#if ( $param.algorandFormat == "RFC3339 String" )
string##
#elseif ( $param.type == "integer" )
uint64##
#elseif ( $param.type == "string" )
string##
#elseif ( $param.type == "boolean" )
bool##
#elseif( $param.type == "binary" )
string##
#else
UNHANDLED TYPE
- ref: $!param.refType
- type: $!param.type
- array type: $!param.arrayType
- algorand format: $!param.algorandFormat
- format: $!param.format
##$unknown.type ## force a template failure because $unknown.type does not exist.
#end
#end
Because of this, we are phasing in code generation gradually by skipping some types. The skipped types are specified in the property files:
common_config.properties
model_skip=AccountParticipation,AssetParams,RawBlockJson,etc,...
algod_config.properties
query_skip=Block,BlockRaw,SendRawTransaction,SuggestedParams,etc,...
indexer_config.properties
query_skip=LookupAssetByID,LookupAccountTransactions,SearchForAssets,LookupAssetBalances,LookupAssetTransactions,LookupBlock,LookupTransactions,SearchForTransactions
The Java templates are in the java_templates directory.
These are not used yet; they are the initial experiments for the template engine. Since the Java SDK has used code generation from the beginning, we should be able to fully migrate to the template engine eventually.
In general, the automation pipeline will build and run whatever Dockerfile is found in a repository's templates directory. For instructions on how to configure the templates directory, look at the repository template directory example.
If you are trying to verify that automatic code generation works as intended, we recommend creating a testing branch from that repository and using the SKIP_PR=true environment variable to avoid creating pull requests. If all goes according to plan, generated files should be available in the container's /repo directory.
The automatic generator scripts depend on certain prerequisites that are listed in automation/REQUIREMENTS.md. Once those conditions have been satisfied, automatically generating code for external repositories should be as easy as building and running a particular SDK's templates/Dockerfile file.
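A hedged sketch of that flow (the image tag is arbitrary; SKIP_PR and the /repo path are described above):
~$ docker build -t sdk-templates -f templates/Dockerfile .
~$ docker run --rm -e SKIP_PR=true sdk-templates
# generated files should now be available under /repo inside the container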
Download Details:
Author: algorand
Source Code: https://github.com/algorand/generator
License:
#algorand #blockchain #cryptocurrency #java #golang #openapi