In the software field, every tool offers the flexibility to use its functionality according to the user's requirements, and this flexibility usually comes in the form of "configurations". Jenkins likewise provides configuration options so that users can adapt it to their own needs. In this article, we will discuss some of the most important and widely used Jenkins configurations. Let's see how the Jenkins configure options work by covering the details under the following topics:
#jenkins #jenkins configuration
Serverless AppSync Simulator
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline
plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important: serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing an invoke url for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | ["*.graphql", "*.vtl"] | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
appsync-simulator:
location: '.webpack/service' # use webpack build directory
dynamoDb:
endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
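For example, on macOS watchman can be installed with brew install watchman; see the watchman documentation for other platforms.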
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
appsync-simulator:
watch:
- ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
- "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out of hot-reloading entirely by setting the option to an empty array ([]) or to false.
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.
Resource CloudFormation functions resolution
This plugin supports resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to the documentation
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
environment:
BUCKET_NAME:
Ref: MyBucket # resolves to `my-bucket-name`
resources:
Resources:
MyDbTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: myTable
...
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket-name
...
# in your appsync config
dataSources:
- type: AMAZON_DYNAMODB
name: dynamosource
config:
tableName:
Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from Cloudformation; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options:
- refMap takes a mapping of resource name to value pairs
- getAttMap takes a mapping of resource name to attribute/values pairs
- importValueMap takes a mapping of import name to values pairs
Example:
custom:
appsync-simulator:
refMap:
# Override `MyDbTable` resolution from the previous example.
MyDbTable: 'mock-myTable'
getAttMap:
# define ElasticSearchInstance DomainName
ElasticSearchInstance:
DomainEndpoint: 'localhost:9200'
importValueMap:
other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
- type: AMAZON_ELASTICSEARCH
name: elasticsource
config:
# endpoint resolves as 'https://localhost:9200'
endpoint:
Fn::Join:
- ''
- - https://
- Fn::GetAtt:
- ElasticSearchInstance
- DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks - refMap, getAttMap and importValueMap:
provider:
environment:
FINISH_ACTIVITY_FUNCTION_ARN:
Fn::ImportValue: other-service-api-${self:provider.stage}-url
custom:
appsync-simulator:
importValueMap:
- key: other-service-api-${self:provider.stage}-url
value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you have the need of resolving others, feel free to open an issue and explain your use case.
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".
custom:
appsync-simulator:
functions:
addUser:
url: http://localhost:3016/2015-03-31/functions/addUser/invocations
method: post
addPost:
url: https://jsonplaceholder.typicode.com/posts
method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin:
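The request mapping template below sketches an RDS INSERT resolver: it snake-cases each key of $ctx.args.input, converts booleans to 1/0, and emits an INSERT followed by a SELECT of the newly inserted row (<name-of-table> is a placeholder you must fill in):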
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
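The next request template sketches the matching UPDATE resolver, assembling the SET clause from the input and selecting the updated row by id: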
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
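A soft delete can be expressed as an UPDATE that stamps deleted_at rather than removing the row: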
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
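Finally, a response mapping template for these RDS resolvers: it parses $ctx.result, rewrites timestamptz columns to ISO-8601, camelCases the snake_case column names, and returns the row as JSON: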
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed. null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
"variableMap": {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
React Native Background Fetch
Background Fetch is a very simple plugin which attempts to awaken an app in the background about every 15 minutes, providing a short period of background running-time. This plugin will execute your provided callbackFn
whenever a background-fetch event occurs.
There is no way to increase the rate at which a fetch event occurs, and this plugin sets the rate to the most frequent possible; you will never receive an event faster than every 15 minutes. The operating system will automatically throttle the rate of background-fetch events based upon usage patterns. E.g. if the user hasn't turned on their phone for a long period of time, fetch events will occur less frequently, or if an iOS user disables background refresh, they may not happen at all.
:new: Background Fetch now provides a scheduleTask method for scheduling arbitrary "one-shot" or periodic tasks.
- scheduleTask seems only to fire when the device is plugged into power.
- There is no such thing as stopOnTerminate: false for iOS.
- On Android, events can continue after app termination via the headless mechanism (@config enableHeadless).
⚠️ If you have a previous version of react-native-background-fetch < 2.7.0 installed into react-native >= 0.60, you should first unlink your previous version, as react-native link is no longer required.
$ react-native unlink react-native-background-fetch
yarn
$ yarn add react-native-background-fetch
npm
$ npm install --save react-native-background-fetch
ℹ️ This repo contains its own Example App. See /example
import React from 'react';
import {
SafeAreaView,
StyleSheet,
ScrollView,
View,
Text,
FlatList,
StatusBar,
} from 'react-native';
import {
Header,
Colors
} from 'react-native/Libraries/NewAppScreen';
import BackgroundFetch from "react-native-background-fetch";
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
events: []
};
}
componentDidMount() {
// Initialize BackgroundFetch ONLY ONCE when component mounts.
this.initBackgroundFetch();
}
async initBackgroundFetch() {
// BackgroundFetch event handler.
const onEvent = async (taskId) => {
console.log('[BackgroundFetch] task: ', taskId);
// Do your background work...
await this.addEvent(taskId);
// IMPORTANT: You must signal to the OS that your task is complete.
BackgroundFetch.finish(taskId);
}
// Timeout callback is executed when your Task has exceeded its allowed running-time.
// You must stop what you're doing and immediately call BackgroundFetch.finish(taskId)
const onTimeout = async (taskId) => {
console.warn('[BackgroundFetch] TIMEOUT task: ', taskId);
BackgroundFetch.finish(taskId);
}
// Initialize BackgroundFetch only once when component mounts.
let status = await BackgroundFetch.configure({minimumFetchInterval: 15}, onEvent, onTimeout);
console.log('[BackgroundFetch] configure status: ', status);
}
// Add a BackgroundFetch event to <FlatList>
addEvent(taskId) {
// Simulate a possibly long-running asynchronous task with a Promise.
return new Promise((resolve, reject) => {
this.setState(state => ({
events: [...state.events, {
taskId: taskId,
timestamp: (new Date()).toString()
}]
}));
resolve();
});
}
render() {
return (
<>
<StatusBar barStyle="dark-content" />
<SafeAreaView>
<ScrollView
contentInsetAdjustmentBehavior="automatic"
style={styles.scrollView}>
<Header />
<View style={styles.body}>
<View style={styles.sectionContainer}>
<Text style={styles.sectionTitle}>BackgroundFetch Demo</Text>
</View>
</View>
</ScrollView>
<View style={styles.sectionContainer}>
<FlatList
data={this.state.events}
renderItem={({item}) => (<Text>[{item.taskId}]: {item.timestamp}</Text>)}
keyExtractor={item => item.timestamp}
/>
</View>
</SafeAreaView>
</>
);
}
}
const styles = StyleSheet.create({
scrollView: {
backgroundColor: Colors.lighter,
},
body: {
backgroundColor: Colors.white,
},
sectionContainer: {
marginTop: 32,
paddingHorizontal: 24,
},
sectionTitle: {
fontSize: 24,
fontWeight: '600',
color: Colors.black,
},
sectionDescription: {
marginTop: 8,
fontSize: 18,
fontWeight: '400',
color: Colors.dark,
},
});
export default App;
In addition to the default background-fetch task defined by BackgroundFetch.configure, you may also execute your own arbitrary "oneshot" or periodic tasks (iOS requires additional Setup Instructions). However, all events will be fired into the Callback provided to BackgroundFetch#configure:
- scheduleTask on iOS seems only to run when the device is plugged into power.
- scheduleTask on iOS is designed for low-priority tasks, such as purging cache files; it tends to be unreliable for mission-critical tasks. scheduleTask will never run as frequently as you want.
- The default fetch event is much more reliable and fires far more often.
- Tasks registered with scheduleTask on iOS stop when the user terminates the app. There is no such thing as stopOnTerminate: false for iOS.
// Step 1: Configure BackgroundFetch as usual.
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15
}, async (taskId) => { // <-- Event callback
// This is the fetch-event callback.
console.log("[BackgroundFetch] taskId: ", taskId);
// Use a switch statement to route task-handling.
switch (taskId) {
case 'com.foo.customtask':
print("Received custom task");
break;
default:
print("Default fetch task");
}
// Finish, providing received taskId.
BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
BackgroundFetch.finish(taskId);
});
// Step 2: Schedule a custom "oneshot" task "com.foo.customtask" to execute 5000ms from now.
BackgroundFetch.scheduleTask({
taskId: "com.foo.customtask",
forceAlarmManager: true,
delay: 5000 // <-- milliseconds
});
API Documentation
@param {Integer} minimumFetchInterval [15]
The minimum interval in minutes to execute background fetch events. Defaults to 15 minutes. Note: Background-fetch events will never occur at a frequency higher than every 15 minutes. Apple uses a secret algorithm to adjust the frequency of fetch events, presumably based upon usage patterns of the app. Fetch events can occur less often than your configured minimumFetchInterval.
@param {Integer} delay (milliseconds)
ℹ️ Valid only for BackgroundFetch.scheduleTask. The minimum number of milliseconds in the future that the task should execute.
@param {Boolean} periodic [false]
ℹ️ Valid only for BackgroundFetch.scheduleTask. Defaults to false. Set true to execute the task repeatedly. When false, the task will execute just once.
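For instance, a small sketch of a repeating custom task; the taskId and delay values here are illustrative only, not defaults:
BackgroundFetch.scheduleTask({
  taskId: 'com.foo.periodictask', // <-- hypothetical task id
  delay: 15 * 60 * 1000, // <-- milliseconds; first execution in ~15 minutes
  periodic: true // <-- repeat until stopped
});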
@config {Boolean} stopOnTerminate [true]
Set false to continue background-fetch events after the user terminates the app. Defaults to true.
@config {Boolean} startOnBoot [false]
Set true to initiate background-fetch events when the device is rebooted. Defaults to false.
❗ NOTE: startOnBoot requires stopOnTerminate: false (see the sketch below).
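A minimal sketch combining the two flags, with the callback bodies reduced to the bare minimum:
let status = await BackgroundFetch.configure({
  minimumFetchInterval: 15,
  stopOnTerminate: false, // <-- continue firing events after app terminate (required for startOnBoot)
  startOnBoot: true // <-- resume fetch events after device reboot
}, async (taskId) => { // <-- Event callback
  BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
  BackgroundFetch.finish(taskId);
});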
@config {Boolean} forceAlarmManager [false]
By default, the plugin will use Android's JobScheduler when possible. The JobScheduler API prioritizes battery life, throttling task-execution based upon device usage and battery level.
Configuring forceAlarmManager: true will bypass JobScheduler in favor of Android's older AlarmManager API, resulting in more accurate task-execution at the cost of higher battery usage.
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15,
forceAlarmManager: true
}, async (taskId) => { // <-- Event callback
console.log("[BackgroundFetch] taskId: ", taskId);
BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
BackgroundFetch.finish(taskId);
});
.
.
.
// And with #scheduleTask
BackgroundFetch.scheduleTask({
taskId: 'com.foo.customtask',
delay: 5000, // milliseconds
forceAlarmManager: true,
periodic: false
});
@config {Boolean} enableHeadless [false]
Set true to enable React Native's Headless JS mechanism, for handling fetch events after app termination.
Register your headless task in your index.js (it MUST be in index.js):
import BackgroundFetch from "react-native-background-fetch";
let MyHeadlessTask = async (event) => {
// Get task id from event {}:
let taskId = event.taskId;
let isTimeout = event.timeout; // <-- true when your background-time has expired.
if (isTimeout) {
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately call BackgroundFetch.finish(taskId)
console.log('[BackgroundFetch] Headless TIMEOUT:', taskId);
BackgroundFetch.finish(taskId);
return;
}
console.log('[BackgroundFetch HeadlessTask] start: ', taskId);
// Perform an example HTTP request.
// Important: await asynchronous tasks when using HeadlessJS.
let response = await fetch('https://reactnative.dev/movies.json');
let responseJson = await response.json();
console.log('[BackgroundFetch HeadlessTask] response: ', responseJson);
// Required: Signal to native code that your task is complete.
// If you don't do this, your app could be terminated and/or assigned
// battery-blame for consuming too much time in background.
BackgroundFetch.finish(taskId);
}
// Register your BackgroundFetch HeadlessTask
BackgroundFetch.registerHeadlessTask(MyHeadlessTask);
@config {integer} requiredNetworkType [BackgroundFetch.NETWORK_TYPE_NONE]
Set a basic description of the kind of network your job requires.
If your job doesn't need a network connection, you don't need to use this option, as the default value is BackgroundFetch.NETWORK_TYPE_NONE.
NetworkType | Description |
---|---|
BackgroundFetch.NETWORK_TYPE_NONE | This job doesn't care about network constraints, either any or none. |
BackgroundFetch.NETWORK_TYPE_ANY | This job requires network connectivity. |
BackgroundFetch.NETWORK_TYPE_CELLULAR | This job requires network connectivity that is a cellular network. |
BackgroundFetch.NETWORK_TYPE_UNMETERED | This job requires network connectivity that is unmetered. Most WiFi networks are unmetered, as in "you can upload as much as you like". |
BackgroundFetch.NETWORK_TYPE_NOT_ROAMING | This job requires network connectivity that is not roaming (being outside the country of origin) |
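As a sketch, a configuration that only fires fetch events on un-metered connections (constants as listed in the table above):
let status = await BackgroundFetch.configure({
  minimumFetchInterval: 15,
  requiredNetworkType: BackgroundFetch.NETWORK_TYPE_UNMETERED // e.g. most WiFi networks
}, async (taskId) => {
  BackgroundFetch.finish(taskId);
}, async (taskId) => {
  BackgroundFetch.finish(taskId);
});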
@config {Boolean} requiresBatteryNotLow [false]
Specify that to run this job, the device's battery level must not be low.
This defaults to false. If true, the job will only run when the battery level is not low, which is generally the point where the user is given a "low battery" warning.
@config {Boolean} requiresStorageNotLow [false]
Specify that to run this job, the device's available storage must not be low.
This defaults to false. If true, the job will only run when the device is not in a low storage state, which is generally the point where the user is given a "low storage" warning.
@config {Boolean} requiresCharging [false]
Specify that to run this job, the device must be charging (or be a non-battery-powered device connected to permanent power, such as Android TV devices). This defaults to false.
@config {Boolean} requiresDeviceIdle [false]
When set true, ensure that this job will not run if the device is in active use.
The default is false: that is, the job is runnable even when someone is interacting with the device.
This state is a loose definition provided by the system. In general, it means that the device is not currently being used interactively, and has not been in use for some time. As such, it is a good time to perform resource heavy jobs. Bear in mind that battery usage will still be attributed to your application, and shown to the user in battery stats.
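Putting these constraints together, a hedged sketch of a job restricted to charging, idle, healthy-battery and healthy-storage conditions:
let status = await BackgroundFetch.configure({
  minimumFetchInterval: 15,
  requiresCharging: true, // only run while plugged into power
  requiresDeviceIdle: true, // only run when the device is not in active use
  requiresBatteryNotLow: true, // skip when the battery is low
  requiresStorageNotLow: true // skip when storage is low
}, async (taskId) => {
  BackgroundFetch.finish(taskId);
}, async (taskId) => {
  BackgroundFetch.finish(taskId);
});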
Method Name | Arguments | Returns | Notes |
---|---|---|---|
configure | {FetchConfig} , callbackFn , timeoutFn | Promise<BackgroundFetchStatus> | Configures the plugin's callbackFn and timeoutFn . This callback will fire each time a background-fetch event occurs in addition to events from #scheduleTask . The timeoutFn will be called when the OS reports your task is nearing the end of its allowed background-time. |
scheduleTask | {TaskConfig} | Promise<boolean> | Executes a custom task. The task will be executed in the same Callback function provided to #configure . |
status | callbackFn | Promise<BackgroundFetchStatus> | Your callback will be executed with the current status (Integer) 0: Restricted , 1: Denied , 2: Available . These constants are defined as BackgroundFetch.STATUS_RESTRICTED , BackgroundFetch.STATUS_DENIED , BackgroundFetch.STATUS_AVAILABLE (NOTE: Android will always return STATUS_AVAILABLE ) |
finish | String taskId | Void | You MUST call this method in your callbackFn provided to #configure in order to signal to the OS that your task is complete. iOS provides only 30s of background-time for a fetch-event -- if you exceed this 30s, iOS will kill your app. |
start | none | Promise<BackgroundFetchStatus> | Start the background-fetch API. Your callbackFn provided to #configure will be executed each time a background-fetch event occurs. NOTE the #configure method automatically calls #start . You do not have to call this method after you #configure the plugin |
stop | [taskId:String] | Promise<boolean> | Stop the background-fetch API and all #scheduleTask from firing events. Your callbackFn provided to #configure will no longer be executed. If you provide an optional taskId , only that #scheduleTask will be stopped. |
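For example, a short sketch of the status and stop methods from the table above (log messages are illustrative):
BackgroundFetch.status((status) => {
  switch (status) {
    case BackgroundFetch.STATUS_RESTRICTED:
      console.log('BackgroundFetch restricted');
      break;
    case BackgroundFetch.STATUS_DENIED:
      console.log('BackgroundFetch denied');
      break;
    case BackgroundFetch.STATUS_AVAILABLE:
      console.log('BackgroundFetch is enabled');
      break;
  }
});
// Stop the background-fetch API and all #scheduleTask tasks:
BackgroundFetch.stop();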
Debugging on iOS with the BGTaskScheduler API for iOS 13+:
- Click the [||] button in Xcode to initiate a Breakpoint.
- At the (lldb) prompt, paste the following command (Note: use cursor up/down keys to cycle through previously run commands):
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateLaunchForTaskWithIdentifier:@"com.transistorsoft.fetch"]
- Click the [ > ] button to continue. The task will execute and the Callback function provided to BackgroundFetch.configure will receive the event.
Only the new BGTaskScheduler api supports simulated task-timeout events. To simulate a task-timeout, your fetchCallback must not call BackgroundFetch.finish(taskId):
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15
}, async (taskId) => { // <-- Event callback.
// This is the task callback.
console.log("[BackgroundFetch] taskId", taskId);
//BackgroundFetch.finish(taskId); // <-- Disable .finish(taskId) when simulating an iOS task timeout
}, async (taskId) => { // <-- Event timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
print("[BackgroundFetch] TIMEOUT taskId:", taskId);
BackgroundFetch.finish(taskId);
});
Then simulate the task-timeout by pasting this command at the (lldb) prompt:
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateExpirationForTaskWithIdentifier:@"com.transistorsoft.fetch"]
With the old BackgroundFetch API, simulate fetch events in Xcode via Debug->Simulate Background Fetch.
Debugging on Android: observe the plugin logs with $ adb logcat:
$ adb logcat *:S ReactNative:V ReactNativeJS:V TSBackgroundFetch:V
Simulate a background-fetch event on a device running Android API 21+:
$ adb shell cmd jobscheduler run -f <your.application.id> 999
For devices running API <21, simulate a "Headless JS" event with (insert <your.application.id>):
$ adb shell am broadcast -a <your.application.id>.event.BACKGROUND_FETCH
Download Details:
Author: transistorsoft
Source Code: https://github.com/transistorsoft/react-native-background-fetch
License: MIT license
Serverless APIGateway Service Proxy
This Serverless Framework plugin supports the AWS service proxy integration feature of API Gateway. You can directly connect API Gateway to AWS services without Lambda.
Run serverless plugin install
in your Serverless project.
serverless plugin install -n serverless-apigateway-service-proxy
Here is the list of services this plugin supports for now; it will expand to other services in the future. Please open a pull request if you are interested.
Define settings of the AWS services you want to integrate under custom > apiGatewayServiceProxies and run serverless deploy.
Sample syntax for Kinesis proxy in serverless.yml:
custom:
apiGatewayServiceProxies:
- kinesis: # partition key is set to the API Gateway request id by default
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey: 'hardcodedkey' # use a static partition key
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis/{myKey} # use path parameter
method: post
partitionKey:
pathParam: myKey
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey:
bodyParam: data.myKey # use body parameter
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey:
queryStringParam: myKey # use query string param
streamName: { Ref: 'YourStream' }
cors: true
- kinesis: # PutRecords
path: /kinesis
method: post
action: PutRecords
streamName: { Ref: 'YourStream' }
cors: true
resources:
Resources:
YourStream:
Type: AWS::Kinesis::Stream
Properties:
ShardCount: 1
Sample request after deploying.
curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/kinesis -d '{"message": "some data"}' -H 'Content-Type:application/json'
Sample syntax for SQS proxy in serverless.yml:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs -d '{"message": "testtest"}' -H 'Content-Type:application/json'
If you'd like to pass additional data to the integration request, you can do so by including your custom API Gateway request parameters in serverless.yml like so:
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
requestParameters:
'integration.request.querystring.MessageAttribute.1.Name': "'cognitoIdentityId'"
'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'context.identity.cognitoIdentityId'
'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
'integration.request.querystring.MessageAttribute.2.Name': "'cognitoAuthenticationProvider'"
'integration.request.querystring.MessageAttribute.2.Value.StringValue': 'context.identity.cognitoAuthenticationProvider'
'integration.request.querystring.MessageAttribute.2.Value.DataType': "'String'"
The alternative way to pass MessageAttribute
parameters is via a request body mapping template.
See the SQS section under Customizing request body mapping templates
Simplified response template customization
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message: "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Full response customization
If you want more control over the integration response, you can provide an array of objects for the response
value:
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
response:
- statusCode: 200
selectionPattern: '2\\d{2}'
responseParameters: {}
responseTemplates:
application/json: |-
{ "message": "accepted" }
The object keys correspond to the API Gateway integration response object.
Sample syntax for S3 proxy in serverless.yml:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
key: static-key.json # use static key
cors: true
- s3:
path: /s3/{myKey} # use path param
method: get
action: GetObject
bucket:
Ref: S3Bucket
key:
pathParam: myKey
cors: true
- s3:
path: /s3
method: delete
action: DeleteObject
bucket:
Ref: S3Bucket
key:
queryStringParam: key # use query string param
cors: true
resources:
Resources:
S3Bucket:
Type: 'AWS::S3::Bucket'
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/s3 -d '{"message": "testtest"}' -H 'Content-Type:application/json'
Similar to the SQS support, you can customize the default request parameters in serverless.yml like so:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
If you'd like to use custom API Gateway request templates, you can do so as follows:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: get
action: GetObject
bucket:
Ref: S3Bucket
request:
template:
application/json: |
#set ($specialStuff = $context.request.header.x-special)
#set ($context.requestOverride.path.object = $specialStuff.replaceAll('_', '-'))
{}
Note that if the client does not provide a Content-Type header in the request, API Gateway defaults to application/json.
A new customization parameter lets the user set a custom Path Override in API Gateway other than the default {bucket}/{object}. This parameter is optional; if not set, it falls back to {bucket}/{object}. The Path Override will add {bucket}/ automatically in front.
Please keep in mind that key or path.object still needs to be set at the moment (maybe this will be made optional later on).
Usage (with 2 path parameters, folder and file, and a fixed file extension):
custom:
apiGatewayServiceProxies:
- s3:
path: /s3/{folder}/{file}
method: get
action: GetObject
pathOverride: '{folder}/{file}.xml'
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.folder': 'method.request.path.folder'
'integration.request.path.file': 'method.request.path.file'
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
This will result in API Gateway setting the Path Override attribute to {bucket}/{folder}/{file}.xml. So, for example, if you navigate to the API Gateway endpoint /language/en, it will fetch the file in S3 from {bucket}/language/en.xml.
Greedy path parameters can be used for deeper folders
The aforementioned example can also be shortened by a greedy approach. Thanks to @taylorreece for mentioning this.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3/{myPath+}
method: get
action: GetObject
pathOverride: '{myPath}.xml'
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.myPath': 'method.request.path.myPath'
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
This will translate, for example, /s3/a/b/c to a/b/c.xml.
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
key: static-key.json
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message: "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Sample syntax for SNS proxy in serverless.yml:
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
resources:
Resources:
SNSTopic:
Type: AWS::SNS::Topic
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sns -d '{"message": "testtest"}' -H 'Content-Type:application/json'
Simplified response template customization
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message: "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Full response customization
If you want more control over the integration response, you can provide an array of objects for the response
value:
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
response:
- statusCode: 200
selectionPattern: '2\d{2}'
responseParameters: {}
responseTemplates:
application/json: |-
{ "message": "accepted" }
The object keys correspond to the API Gateway integration response object.
Content Handling and Pass Through Behaviour customization
If you want to work with binary data, you can specify contentHandling and passThrough inside the request object.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
request:
contentHandling: CONVERT_TO_TEXT
passThrough: WHEN_NO_TEMPLATES
The allowed values correspond with the API Gateway Method integration for ContentHandling and PassthroughBehavior
Sample syntax for DynamoDB proxy in serverless.yml. Currently, the supported DynamoDB operations are PutItem, GetItem and DeleteItem:
custom:
apiGatewayServiceProxies:
- dynamodb:
path: /dynamodb/{id}/{sort}
method: put
tableName: { Ref: 'YourTable' }
hashKey: # set pathParam or queryStringParam as a partitionkey.
pathParam: id
attributeType: S
rangeKey: # required if also using sort key. set pathParam or queryStringParam.
pathParam: sort
attributeType: S
action: PutItem # specify the action you want for the table
condition: attribute_not_exists(Id) # optional Condition Expressions parameter for the table
cors: true
- dynamodb:
path: /dynamodb
method: get
tableName: { Ref: 'YourTable' }
hashKey:
queryStringParam: id # use query string parameter
attributeType: S
rangeKey:
queryStringParam: sort
attributeType: S
action: GetItem
cors: true
- dynamodb:
path: /dynamodb/{id}
method: delete
tableName: { Ref: 'YourTable' }
hashKey:
pathParam: id
attributeType: S
action: DeleteItem
cors: true
resources:
Resources:
YourTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: YourTable
AttributeDefinitions:
- AttributeName: id
AttributeType: S
- AttributeName: sort
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
- AttributeName: sort
KeyType: RANGE
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
Sample request after deploying.
curl -XPUT https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/dynamodb/<hashKey>/<sortkey> \
-d '{"name":{"S":"john"},"address":{"S":"xxxxx"}}' \
-H 'Content-Type:application/json'
Sample syntax for EventBridge proxy in serverless.yml:
custom:
apiGatewayServiceProxies:
- eventbridge: # source and detailType are hardcoded; detail defaults to POST body
path: /eventbridge
method: post
source: 'hardcoded_source'
detailType: 'hardcoded_detailType'
eventBusName: { Ref: 'YourBus' }
cors: true
- eventbridge: # source and detailType as path parameters
path: /eventbridge/{detailTypeKey}/{sourceKey}
method: post
detailType:
pathParam: detailTypeKey
source:
pathParam: sourceKey
eventBusName: { Ref: 'YourBus' }
cors: true
- eventbridge: # source, detail, and detailType as body parameters
path: /eventbridge/{detailTypeKey}/{sourceKey}
method: post
detailType:
bodyParam: data.detailType
source:
bodyParam: data.source
detail:
bodyParam: data.detail
eventBusName: { Ref: 'YourBus' }
cors: true
resources:
Resources:
YourBus:
Type: AWS::Events::EventBus
Properties:
Name: YourEventBus
Sample request after deploying.
curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/eventbridge -d '{"message": "some data"}' -H 'Content-Type:application/json'
To set CORS configurations for your HTTP endpoints, simply modify your event configurations as follows:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors: true
Setting cors to true assumes a default configuration which is equivalent to:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
headers:
- Content-Type
- X-Amz-Date
- Authorization
- X-Api-Key
- X-Amz-Security-Token
- X-Amz-User-Agent
allowCredentials: false
Configuring the cors property sets the Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Credentials headers in the CORS preflight response. To enable the Access-Control-Max-Age preflight response header, set the maxAge property in the cors object:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
maxAge: 86400
If you are using CloudFront or another CDN for your API Gateway, you may want to set up a Cache-Control header to allow OPTIONS requests to be cached to avoid the additional hop.
To enable the Cache-Control header on the preflight response, set the cacheControl property in the cors object:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
headers:
- Content-Type
- X-Amz-Date
- Authorization
- X-Api-Key
- X-Amz-Security-Token
- X-Amz-User-Agent
allowCredentials: false
cacheControl: 'max-age=600, s-maxage=600, proxy-revalidate' # Caches on browser and proxy for 10 minutes and doesn't allow the proxy to serve out-of-date content
You can pass in any supported authorization type:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
# optional - defaults to 'NONE'
authorizationType: 'AWS_IAM' # can be one of ['NONE', 'AWS_IAM', 'CUSTOM', 'COGNITO_USER_POOLS']
# when using 'CUSTOM' authorization type, one should specify authorizerId
# authorizerId: { Ref: 'AuthorizerLogicalId' }
# when using 'COGNITO_USER_POOLS' authorization type, one can specify a list of authorization scopes
# authorizationScopes: ['scope1','scope2']
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
Source: AWS::ApiGateway::Method docs
You can indicate whether the method requires clients to submit a valid API key using the private flag:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
private: true
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
which is the same syntax used in the Serverless framework.
Source: Serverless: Setting API keys for your Rest API
Source: AWS::ApiGateway::Method docs
By default, the plugin will generate a role with the required permissions for each service type that is configured.
You can configure your own role by setting the roleArn
attribute:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
roleArn: # Optional. A default role is created when not configured
Fn::GetAtt: [CustomS3Role, Arn]
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
CustomS3Role:
# Custom Role definition
Type: 'AWS::IAM::Role'
The plugin allows one to specify which parameters the API Gateway method accepts.
A common use case is to pass custom data to the integration request:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
cors: true
acceptParameters:
'method.request.header.Custom-Header': true
requestParameters:
'integration.request.querystring.MessageAttribute.1.Name': "'custom-Header'"
'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'method.request.header.Custom-Header'
'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
resources:
Resources:
SqsQueue:
Type: 'AWS::SQS::Queue'
Any published SQS message will have the Custom-Header
value added as a message attribute.
If you'd like to add content types or customize the default templates, you can do so by including your custom API Gateway request mapping template in serverless.yml
like so:
# Required for using Fn::Sub
plugins:
- serverless-cloudformation-sub-variables
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'MyStream' }
request:
template:
text/plain:
Fn::Sub:
- |
#set($msgBody = $util.parseJson($input.body))
#set($msgId = $msgBody.MessageId)
{
"Data": "$util.base64Encode($input.body)",
"PartitionKey": "$msgId",
"StreamName": "#{MyStreamArn}"
}
- MyStreamArn:
Fn::GetAtt: [MyStream, Arn]
It is important that the mapping template returns a valid application/json string.
Source: How to connect SNS to Kinesis for cross-account delivery via API Gateway
Customizing SQS request templates requires us to force all requests to use an application/x-www-form-urlencoded style body. The plugin sets the Content-Type header to application/x-www-form-urlencoded for you, but API Gateway will still look for the template under the application/json request template type, so that is where you need to configure your request body in serverless.yml:
custom:
apiGatewayServiceProxies:
- sqs:
path: /{version}/event/receiver
method: post
queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
request:
template:
application/json: |-
#set ($body = $util.parseJson($input.body))
Action=SendMessage##
&MessageGroupId=$util.urlEncode($body.event_type)##
&MessageDeduplicationId=$util.urlEncode($body.event_id)##
&MessageAttribute.1.Name=$util.urlEncode("X-Custom-Signature")##
&MessageAttribute.1.Value.DataType=String##
&MessageAttribute.1.Value.StringValue=$util.urlEncode($input.params("X-Custom-Signature"))##
&MessageBody=$util.urlEncode($input.body)
Note that the ##
at the end of each line is an empty comment. In VTL this has the effect of stripping the newline from the end of the line (as it is commented out), which makes API Gateway read all the lines in the template as one line.
Be careful when mixing additional requestParameters
into your SQS endpoint as you may overwrite the integration.request.header.Content-Type
and stop the request template from being parsed correctly. You may also unintentionally create conflicts between parameters passed using requestParameters
and those in your request template. Typically you should only use the request template if you need to manipulate the incoming request body in some way.
Your custom template must also set the Action
and MessageBody
parameters, as these will not be added for you by the plugin.
When using a custom request body, headers sent by a client will no longer be passed through to the SQS queue (PassthroughBehavior
is automatically set to NEVER
). You will need to pass through headers sent by the client explicitly in the request body. Also, any custom querystring parameters in the requestParameters
array will be ignored. These also need to be added via the custom request body.
Similar to the Kinesis support, you can customize the default request mapping templates in serverless.yml
like so:
# Required for using Fn::Sub
plugins:
- serverless-cloudformation-sub-variables
custom:
apiGatewayServiceProxies:
- kinesis:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
request:
template:
application/json:
Fn::Sub:
- "Action=Publish&Message=$util.urlEncode('This is a fixed message')&TopicArn=$util.urlEncode('#{MyTopicArn}')"
- MyTopicArn: { Ref: MyTopic }
It is important that the mapping template returns a valid application/x-www-form-urlencoded string.
Source: Connect AWS API Gateway directly to SNS using a service integration
You can customize the response body by providing mapping templates for success, server errors (5xx) and client errors (4xx).
Templates must be in JSON format. If a template isn't provided, the integration response will be returned as-is to the client.
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'MyStream' }
response:
template:
success: |
{
"success": true
}
serverError: |
{
"success": false,
"errorMessage": "Server Error"
}
clientError: |
{
"success": false,
"errorMessage": "Client Error"
}
Author: Serverless-operations
Source Code: https://github.com/serverless-operations/serverless-apigateway-service-proxy
License:
Pdf2Gerb: a Perl script that converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows:
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in the main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors don't work on Windows; this must be set before the first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license