Julia package for limiting the rate at which expressions are evaluated. This can be useful for rate limiting access to network resources (e.g. websites). All methods are thread safe.
This example uses the Token-Bucket algorithm to limit how quickly the functions can be called.
using RateLimiter
tokens_per_second = 2
max_tokens = 100
initial_tokens = 0
limiter = TokenBucketRateLimiter(tokens_per_second, max_tokens, initial_tokens)
function f_cheap()
    println("cheap")
    return 1
end

function f_costly()
    println("costly")
    return 2
end
result = 0
for i in 1:10
    result += @rate_limit limiter 1 f_cheap()
    result += @rate_limit limiter 10 f_costly()
end
println("RESULT: $result")
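For intuition, the token-bucket algorithm itself can be sketched in a few lines of Python (a generic illustration of the technique, not this package's implementation):

```python
import time

class TokenBucket:
    """Tokens accrue at `rate` per second up to `max_tokens`; each call
    spends `cost` tokens, sleeping until enough have accumulated."""

    def __init__(self, rate, max_tokens, tokens=0.0):
        self.rate = rate
        self.max_tokens = max_tokens
        self.tokens = tokens
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.max_tokens, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def acquire(self, cost):
        self._refill()
        if self.tokens < cost:
            time.sleep((cost - self.tokens) / self.rate)  # wait out the deficit
            self._refill()
        self.tokens -= cost

bucket = TokenBucket(rate=100, max_tokens=100)  # 100 tokens/second
start = time.monotonic()
for _ in range(5):
    bucket.acquire(10)  # 5 calls x 10 tokens at 100 tokens/s: roughly 0.5 s total
elapsed = time.monotonic() - start
```

A cheap call spends few tokens and a costly one spends many, which is what the 1 and 10 arguments to @rate_limit express in the example above.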
See https://chipkent.github.io/RateLimiter.jl/dev/.
Pull requests will publish documentation to https://chipkent.github.io/RateLimiter.jl/previews/PR##.
Author: Chipkent
Source Code: https://github.com/chipkent/RateLimiter.jl
License: MIT license
Various popular libraries, pre-compiled to be compatible with AWS Lambda.
Currently includes support (for at least Python 2.7) for:
This project is intended for use by Zappa, but could also be used by any Python/Lambda project.
pip install lambda-packages
The best way to use these packages is with Zappa, which will automatically install the right packages during deployment, and do a million other useful things. Whatever you're currently trying to do on Lambda, it'll be way easier for you if you just use Zappa right now. Trust me. It's awesome. As a bonus, Zappa now also provides support for manylinux wheels, which adds support for hundreds of other packages.
But, if you want to use this project the other (wrong) way, just put the contents of the .tar.gz archive into your lambda .zip.
lambda-packages also includes a manifest with information about the included packages and the paths to their binaries.
from lambda_packages import lambda_packages
print lambda_packages['psycopg2']
#{
#  'python2.7': {
#    'version': '2.6.1',
#    'path': '<absolute-local-path>/lambda_packages/psycopg2/python2.7-psycopg2-2.6.1.tar.gz'
#  }
#}
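Since the manifest is just a nested dictionary, it is easy to query programmatically. A small sketch, using a hand-built stand-in for the real lambda_packages dictionary that mirrors the structure shown above:

```python
# stand-in for the real lambda_packages manifest, mirroring its structure
lambda_packages = {
    "psycopg2": {
        "python2.7": {
            "version": "2.6.1",
            "path": "/opt/lambda_packages/psycopg2/python2.7-psycopg2-2.6.1.tar.gz",
        }
    }
}

def packages_for(runtime):
    """Return {name: version} for every package built for the given runtime."""
    return {
        name: info[runtime]["version"]
        for name, info in lambda_packages.items()
        if runtime in info
    }

print(packages_for("python2.7"))  # {'psycopg2': '2.6.1'}
```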
To add support for more packages, send a pull request containing a gzipped tarball (tar -zcvf <package-name>.tar.gz <list-of-files>) of the package (built on Amazon Linux and tested on AWS Lambda) in the appropriate directory, an updated manifest, and deterministic build instructions for creating the archive.
You may find the build.sh script useful as a starting point.
Before contributing, you should also make sure that there is no manylinux wheel on PyPI for your package, as Zappa will automatically use those in addition to lambda-packages.
You may also be interested in this guide on deploying with Zappa and Docker.
Useful targets which don't have manylinux wheel versions include:
Do you need help with..
Good news! We're currently available for remote and on-site consulting for small, large and enterprise teams. Please contact miserlou@gmail.com with your needs and let's work together!
Author: Miserlou
Source Code: https://github.com/Miserlou/lambda-packages
Laravel does not come with every package pre-installed. You will need to install additional packages manually before you can make use of them in your project.
Installing these packages using composer is simple if you know the exact package names or their GitHub repositories.
In this tutorial, I show how you can add remove package using composer in Laravel.
For example purposes, I am installing the Yajra Datatables package –
composer require yajra/laravel-datatables-oracle:"^10.0"
After installation, a new line is added under the require section of the composer.json file –
"require": {
    "php": "^8.0.2",
    "guzzlehttp/guzzle": "^7.2",
    "laravel/framework": "^9.19",
    "laravel/sanctum": "^3.0",
    "laravel/tinker": "^2.7",
    "yajra/laravel-datatables-oracle": "^10.0"
},
To remove the above-installed package execute the following command –
composer remove yajra/laravel-datatables-oracle
The package name is removed from the composer.json file –
"require": {
    "php": "^8.0.2",
    "guzzlehttp/guzzle": "^7.2",
    "laravel/framework": "^9.19",
    "laravel/sanctum": "^3.0",
    "laravel/tinker": "^2.7"
},
Make sure to also remove any references to the package stored in the project. For example, when setting up Yajra Datatables you also update the config/app.php file, and import and call the package wherever you use it.
In this case, remove the Yajra entries from 'providers' and 'aliases' in the config/app.php file, and remove the related code from any pages where it is used.
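For illustration, the entries to delete from config/app.php look roughly like this (the exact class names depend on the Yajra Datatables version, and newer versions can register the provider via package auto-discovery, so these lines exist only if you added them manually):

```php
'providers' => [
    // ...
    Yajra\DataTables\DataTablesServiceProvider::class, // remove this line
],

'aliases' => [
    // ...
    'DataTables' => Yajra\DataTables\Facades\DataTables::class, // remove this line
],
```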
If a package is no longer needed then it is better to remove it from the project. If you mistakenly removed a package, you can reinstall it following the above method.
If you found this tutorial helpful then don't forget to share.
Original article source at: https://makitweb.com/
Learn how to change your project's package name in Android Studio
An Android application is written using Java or Kotlin (which compiles to the same bytecode as Java for Android apps).
The package system is a folder structure that groups together relevant files. It forms the structure of an Android project.
A newly generated Android application usually has one package under the src/main/java folder. The package name follows the name you specify in the New Project wizard.
For example, the Android app below uses the package name com.example.newapp:
But when you try to change the package name through the Refactor > Rename… menu, you’ll find that you can only change the newapp part of the package com.example.newapp.
See the example below:
To change all three parts of the package name in Android Studio, you first need to un-compact the package name display.
In your Android Studio, click the gear icon ⚙ on the right side of the project view, then uncheck the Compact Middle Packages option:
Android Studio compact middle packages option
You’ll see the package names being separated into their own directories, like how it appears in the file explorer:
Android Studio separated package names
Now you should be able to rename the com and example parts of the package name one by one.
Right-click on the package name you want to change and use the Refactor > Rename… option again.
Select Rename all when prompted by Android Studio.
In the following example, the package name has been changed to org.metapx.myapp.
And that’s how you change a package name in Android Studio. You can select the Compact Middle Packages option again to shorten the package in the sidebar.
If you are changing the main package name generated by Android Studio, then there are two optional changes that you can do for your application.
First, open the AndroidManifest.xml file in the manifests/ folder.
Change the package attribute of the <manifest> tag as shown below:
<manifest
    xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.newapp">
Next, open the build.gradle file in your app module.
It’s the second one under the Gradle Scripts menu as shown below:
You can change the applicationId property as shown below:
android {
    compileSdk 31

    defaultConfig {
        applicationId "com.example.newapp"
        minSdk 21
        targetSdk 31
        // ...
    }
}
These two changes are optional because the application ID and your package name have been decoupled in recent versions of Android.
In the past, the applicationId and the manifest’s package attribute had to be changed together.
And that’s how you change the package name from Android Studio. You can use the method above to change any package name you have in your project.
Original article source at: https://sebhastian.com/
Facilitates wrapping Julia functions into a remote callable API via message queues (e.g. ZMQ, RabbitMQ) and HTTP.
It can plug in to a different messaging infrastructure through an implementation of transport (AbstractTransport) and message format (AbstractMsgFormat). Multiple instances of the front (HTTP API) and back (Julia methods) end can help scale an application. Bundled with the package are implementations for:
- Channel for transport within the same process
- Dict as message format, for use within the same process
Combined with a HTTP/Messaging frontend (like JuliaBox), it helps deploy Julia packages and code snippets as hosted, auto-scaling HTTP APIs.
Some amount of basic request filtering and pre-processing is possible by registering a pre-processor with the HTTP frontend. The pre-processor is run at the HTTP server side, where it has access to the complete request. It can examine headers and data and take decision whether to allow calling the service or respond directly and immediately. It can also rewrite the request before passing it on to the service.
A pre-processor can be used to implement features like authentication, request rewriting and such. See example below.
Create a file srvr.jl with the following code:
# Load required packages
using JuliaWebAPI
# Define functions testfn1 and testfn2 that we shall expose
function testfn1(arg1, arg2; narg1="1", narg2="2")
    return (parse(Int, arg1) * parse(Int, narg1)) + (parse(Int, arg2) * parse(Int, narg2))
end
testfn2(arg1, arg2; narg1="1", narg2="2") = testfn1(arg1, arg2; narg1=narg1, narg2=narg2)
# Expose testfn1 and testfn2 via a ZMQ listener
process(
    JuliaWebAPI.create_responder([
        (testfn1, true),
        (testfn2, false)
    ], "tcp://127.0.0.1:9999", true, "")
)
Start the server process in the background. This process will run the ZMQ listener.
julia srvr.jl &
Then, on a Julia REPL, run the following code
using JuliaWebAPI #Load package
#Create the ZMQ client that talks to the ZMQ listener above
const apiclnt = APIInvoker("tcp://127.0.0.1:9999");
#Start the HTTP server in current process (Ctrl+C to interrupt)
run_http(apiclnt, 8888)
Then, on your browser, navigate to http://localhost:8888/testfn1/4/5?narg1=6&narg2=4
This will return the following JSON response to your browser, which is the result of running the testfn1 function defined above: {"data":44,"code":0}
Example of an authentication filter implemented using a pre-processor:
function auth_preproc(req::HTTP.Request)
    if !validate(req)
        return HTTP.Response(401)
    end
    return nothing
end
run_http(apiclnt, 8888, auth_preproc)
Author: JuliaWeb
Source Code: https://github.com/JuliaWeb/JuliaWebAPI.jl
License: View license
This Julia package handles some of the low-level details for writing cache-efficient, possibly-multithreaded code for multidimensional arrays. A "tile" corresponds to a chunk of a larger array, typically a region that is large enough to encompass any "local" computations you need to perform; some of these computations may require temporary storage.
This package offers two basic kinds of functionality: the management of temporary buffers for processing on tiles, and the iteration over disjoint tiles of a larger array.
The main use for these simple types is in distributing work across threads, usually in circumstances that do not require multidimensional locality as provided by TileIterator. SplitAxis splits a single array axis, and SplitAxes splits multidimensional axes along the final axis. For example:
julia> using TiledIteration
julia> A = rand(3, 20);
julia> collect(SplitAxes(axes(A), 4))
4-element Vector{Tuple{UnitRange{Int64}, UnitRange{Int64}}}:
(1:3, 1:5)
(1:3, 6:10)
(1:3, 11:15)
(1:3, 16:20)
You can also reduce the amount of work assigned to thread 1 (often the main thread is responsible for scheduling the other threads):
julia> collect(SplitAxes(axes(A), 3.5))
4-element Vector{Tuple{UnitRange{Int64}, UnitRange{Int64}}}:
(1:3, 1:2)
(1:3, 3:8)
(1:3, 9:14)
(1:3, 15:20)
Using "3.5 chunks" forces the later workers to perform 6 columns of work (rounding 20/3.5 up to the next integer), leaving only two columns remaining for the first thread.
More general iteration over disjoint tiles of a larger array can be done with TileIterator:
using TiledIteration
A = rand(1000,1000); # our big array
for tileaxs in TileIterator(axes(A), (128,8))
    @show tileaxs
end
This produces
tileaxs = (1:128,1:8)
tileaxs = (129:256,1:8)
tileaxs = (257:384,1:8)
tileaxs = (385:512,1:8)
tileaxs = (513:640,1:8)
tileaxs = (641:768,1:8)
tileaxs = (769:896,1:8)
tileaxs = (897:1000,1:8)
tileaxs = (1:128,9:16)
tileaxs = (129:256,9:16)
tileaxs = (257:384,9:16)
tileaxs = (385:512,9:16)
...
You can see that the total axes range is split up into chunks, which are of size (128,8) except at the edges of A. Naturally, these axes serve as the basis for processing individual chunks of the array.
As a further example, suppose you've started julia with JULIA_NUM_THREADS=4; then
function fillid!(A, tilesz)
    tileinds_all = collect(TileIterator(axes(A), tilesz))
    Threads.@threads for i = 1:length(tileinds_all)
        tileaxs = tileinds_all[i]
        A[tileaxs...] .= Threads.threadid()
    end
    A
end
A = zeros(Int, 8, 8)
fillid!(A, (2,2))
would yield
8×8 Array{Int64,2}:
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
1 1 2 2 3 3 4 4
See also "EdgeIterator" below.
Stencil computations typically require "padding" values, so the inputs to a computation may be of a different size than the resulting outputs. Naturally, you can set the tile size manually; a simple convenience function, padded_tilesize, attempts to pick reasonable choices for you depending on the size of your kernel (stencil) and the element type you'll be using:
julia> padded_tilesize(UInt8, (3,3))
(768,18)
julia> padded_tilesize(UInt8, (3,3), 4) # we want 4 of these to fit in L1 cache at once
(512,12)
julia> padded_tilesize(Float64, (3,3))
(96,18)
julia> padded_tilesize(Float32, (3,3,3))
(64,6,6)
To allocate temporary storage while working with tiles, use TileBuffer:
julia> tileaxs = (-1:15, 0:7) # really this might have come from TileIterator
julia> buf = TileBuffer(Float32, tileaxs)
TiledIteration.TileBuffer{Float32,2,2} with indices -1:15×0:7:
0.0 0.0 2.38221f-44 0.0 0.0 0.0 9.3887f-44 0.0
0.0 1.26117f-44 0.0 0.0 0.0 8.26766f-44 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 6.02558f-44 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 7.28675f-44 0.0 0.0 0.0
0.0 1.54143f-44 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 9.94922f-44 0.0
0.0 0.0 0.0 0.0 0.0 8.82818f-44 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 9.10844f-44 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.03696f-43 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
This returns an uninitialized buffer for use over the indicated domain. You can reuse this same storage for the next tile, even if the tile is smaller because it corresponds to the edge of the original array:
julia> pointer(buf)
Ptr{Float32} @0x00007f79131fd550
julia> buf = TileBuffer(buf, (16:20, 0:7))
TiledIteration.TileBuffer{Float32,2,2} with indices 16:20×0:7:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 1.54143f-44 0.0 0.0 0.0
0.0 0.0 0.0 1.26117f-44 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 2.38221f-44 0.0
julia> pointer(buf)
Ptr{Float32} @0x00007f79131fd550
When you use it again at the top of the next block of columns, it returns to its original size while still reusing the same memory:
julia> buf = TileBuffer(buf, (-1:15, 8:15))
TiledIteration.TileBuffer{Float32,2,2} with indices -1:15×8:15:
0.0 0.0 2.38221f-44 0.0 0.0 0.0 9.3887f-44 0.0
0.0 1.26117f-44 0.0 0.0 0.0 8.26766f-44 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 6.02558f-44 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 7.28675f-44 0.0 0.0 0.0
0.0 1.54143f-44 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 9.94922f-44 0.0
0.0 0.0 0.0 0.0 0.0 8.82818f-44 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 9.10844f-44 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.03696f-43 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> pointer(buf)
Ptr{Float32} @0x00007f79131fd550
When performing stencil operations, oftentimes the edge of the array requires special treatment. Several approaches to handling the edges (adding explicit padding, or executing special code just when on the boundaries) can slow your algorithm down because of extra steps or branches.
This package helps support implementations which first handle the "interior" of an array (for example using TileIterator over just the interior) using a "fast path," and then handle just the edges by a (possibly) less carefully optimized algorithm. The key component of this is EdgeIterator:
julia> outerrange = CartesianIndices((-1:4, 0:3));

julia> innerrange = CartesianIndices((1:3, 1:2));

julia> for I in EdgeIterator(outerrange, innerrange)
           @show I
       end
I = CartesianIndex(-1, 0)
I = CartesianIndex(0, 0)
I = CartesianIndex(1, 0)
I = CartesianIndex(2, 0)
I = CartesianIndex(3, 0)
I = CartesianIndex(4, 0)
I = CartesianIndex(-1, 1)
I = CartesianIndex(0, 1)
I = CartesianIndex(4, 1)
I = CartesianIndex(-1, 2)
I = CartesianIndex(0, 2)
I = CartesianIndex(4, 2)
I = CartesianIndex(-1, 3)
I = CartesianIndex(0, 3)
I = CartesianIndex(1, 3)
I = CartesianIndex(2, 3)
I = CartesianIndex(3, 3)
I = CartesianIndex(4, 3)
The time required to visit these edge sites is on the order of the number of edge sites, not the order of the number of sites encompassed by outerrange, and consequently is efficient.
Author: JuliaArrays
Source Code: https://github.com/JuliaArrays/TiledIteration.jl
License: View license
GrammaticalEvolution provides the evolutionary technique of Grammatical Evolution optimization for Julia. The library focuses on providing the tools necessary to construct a grammar and evolve a population without forcing the user to use a large framework. No XML or configuration files are necessary.
One important aspect of the GrammaticalEvolution library is that it uses the strength of Julia to insert arbitrary code into the program. Instead of creating a string that then has to be parsed, GrammaticalEvolution directly generates Julia code that can then be run. The advantages of doing so are: speed, simplification of the grammar, and the grammar does not have to be defined in an auxiliary file.
The following code defines a grammar:
@grammar <name> begin
    rule1 = ...
    rule2 = ...
end
The following rules are supported:
- Expr (a Julia expression)
- alternation: a | b | c
- sequences: a + b + c
- grouping: (a + b) | (c + d)
Example
In the examples directory there is an example that learns an arbitrary mathematical equation. Below is an annotated version of that code.
First, we can define the grammar:
@grammar example_grammar begin
    start = ex
    ex = number | sum | product | (ex) | value
    sum = Expr(:call, :+, ex, ex)
    product = Expr(:call, :*, ex, ex)
    value = :x | :y | number
    number[convert_number] = digit + '.' + digit
    digit = 0:9
end
Every grammar must define a start symbol. This grammar supports equations that add and multiply values together, but it's trivial to build more mathematical operations into the grammar.
There are several items of note:
- The sum and product rules use Expr. This results in Julia code that makes a function call to the selected function.
- The number rule is followed by [convert_number]. This appends an action to the rule which is invoked when the rule gets applied.
- The range digit = 0:9 is translated into digit = 0 | 1 | 2 | .. | 9.
The action convert_number is defined as:
convert_number(lst) = parse(Float64, join(lst))
Next, the individuals generated by the grammar can be defined as:
mutable struct ExampleIndividual <: Individual
    genome::Array{Int64, 1}
    fitness::Float64
    code

    function ExampleIndividual(size::Int64, max_value::Int64)
        genome = rand(1:max_value, size)
        return new(genome, -1.0, nothing)
    end
    ExampleIndividual(genome::Array{Int64, 1}) = new(genome, -1.0, nothing)
end
with a population of the individuals defined as:
mutable struct ExamplePopulation <: Population
    individuals::Array{ExampleIndividual, 1}

    function ExamplePopulation(population_size::Int64, genome_size::Int64)
        individuals = ExampleIndividual[]
        for i = 1:population_size
            push!(individuals, ExampleIndividual(genome_size, 1000))
        end
        return new(individuals)
    end
end
Finally, the evaluation function for the individuals can be defined as:
function evaluate!(grammar::Grammar, ind::ExampleIndividual)
    fitness = Float64[]

    # transform the individual's genome into Julia code
    try
        ind.code = transform(grammar, ind)
        @eval fn(x, y) = $(ind.code)
    catch e
        println("exception = $e")
        ind.fitness = Inf
        return
    end

    # evaluate the generated code at multiple points
    for x = 0:10
        for y = 0:10
            # this is the value of the individual's code
            value = fn(x, y)

            # the squared difference from the ground truth
            diff = (value - gt(x, y))^2
            if !isnan(diff) && diff > 0
                push!(fitness, sqrt(diff))
            elseif diff == 0
                push!(fitness, 0.0)
            end
        end
    end

    # total fitness is the average over all of the sample points
    ind.fitness = mean(fitness)
end
To use our defined grammar and population we can write:
# here is our ground truth
gt(x, y) = 2*x + 5*y

# create population
pop = ExamplePopulation(500, 100)

fitness = Inf
generation = 1
while fitness > 1.0
    # generate a new population (based off of fitness)
    global pop = generate(example_grammar, pop, 0.1, 0.2, 0.2)

    # population is sorted, so the first entry is the best
    global fitness = pop[1].fitness
    println("generation: $generation, max fitness=$fitness, code=$(pop[1].code)")
    global generation += 1
end
A couple items of note:
- While generate is defined in the base package, it is easy to write your own version.
- Both Population and Individual are user defined. The fields of these types can differ from what are used in ExamplePopulation and ExampleIndividual. However, this may require several methods to be defined so the library can index into the population and genome.
The following piece of code in evaluate! may look strange:
ind.code = transform(grammar, ind)
@eval fn(x, y) = $(ind.code)
So it's worth explaining in a little more detail. The first part, ind.code = transform(grammar, ind), takes the current genome of the individual and turns it into unevaluated Julia code. That code uses the variables x and y, which at this point are not yet defined.
The variables x and y are defined in the loop further below. However, they must first be bound to the generated code. Simply running eval won't work because it is designed to only use variables defined in the global scope. The line @eval fn(x, y) = $(ind.code) creates a function that has its body bound to an evaluated version of the code. When fn(x, y) is called, the variables x and y will now be in scope of the code.
Author: Abeschneider
Source Code: https://github.com/abeschneider/GrammaticalEvolution
License: View license
ggdist is an R package that provides a flexible set of ggplot2 geoms and stats designed especially for visualizing distributions and uncertainty. It is designed for both frequentist and Bayesian uncertainty visualization, taking the view that uncertainty visualization can be unified through the perspective of distribution visualization: for frequentist models, one visualizes confidence distributions or bootstrap distributions (see vignette("freq-uncertainty-vis")); for Bayesian models, one visualizes probability distributions (see the tidybayes package, which builds on top of ggdist).
The geom_slabinterval() / stat_slabinterval() family (see vignette("slabinterval")) makes it easy to visualize point summaries and intervals, eye plots, half-eye plots, ridge plots, CCDF bar plots, gradient plots, histograms, and more:
The geom_dotsinterval() / stat_dotsinterval() family (see vignette("dotsinterval")) makes it easy to visualize dot+interval plots, Wilkinson dotplots, beeswarm plots, and quantile dotplots (and combined with half-eyes, composite plots like rain cloud plots):
The geom_lineribbon() / stat_lineribbon() family (see vignette("lineribbon")) makes it easy to visualize fit lines with an arbitrary number of uncertainty bands:
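As a quick taste of the API, a half-eye plot can be drawn with stat_halfeye() (a minimal sketch; the data frame and column names here are invented):

```r
library(ggplot2)
library(ggdist)

# two groups of draws, e.g. posterior samples or bootstrap replicates
df <- data.frame(
  group = rep(c("a", "b"), each = 500),
  value = c(rnorm(500, mean = 1), rnorm(500, mean = 2))
)

ggplot(df, aes(x = value, y = group)) +
  stat_halfeye()  # density slab plus point summary and interval
```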
All stats in ggdist also support visualizing analytical distributions and vectorized distribution data types like distributional objects or posterior::rvar() objects. This is particularly useful when visualizing uncertainty in frequentist models (see vignette("freq-uncertainty-vis")) or when visualizing priors in a Bayesian analysis.
The ggdist geoms and stats also form a core part of the tidybayes package (in fact, they originally were part of tidybayes). For examples of the use of ggdist geoms and stats for visualizing uncertainty in Bayesian models, see the vignettes in tidybayes, such as vignette("tidybayes", package = "tidybayes") or vignette("tidy-brms", package = "tidybayes").
You can install the currently-released version from CRAN with this R command:
install.packages("ggdist")
Alternatively, you can install the latest development version from GitHub with these R commands:
install.packages("devtools")
devtools::install_github("mjskay/ggdist")
I welcome feedback, suggestions, issues, and contributions! I am not particularly reliable over email, though you can contact me at mjskay@northwestern.edu. On Twitter I am more reliable. If you have found a bug, please file it here with minimal code to reproduce the issue. Pull requests should be filed against the dev branch.
To cite ggdist: Matthew Kay (2022). ggdist: Visualizations of Distributions and Uncertainty. R package version 3.2.0, https://mjskay.github.io/ggdist/. DOI: 10.5281/zenodo.3879620.
Author: mjskay
Source Code: https://github.com/mjskay/ggdist
License: GPL-3.0 license
A Perl script that converts PDF files to Gerber format.
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows:
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pfg2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, radius, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
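As a rough illustration of the panelization arithmetic (a sketch under the assumption that xpad/ypad margins are applied to both outer edges; the board dimensions below are hypothetical):

```python
def panel_dimensions(board_w, board_h, x, y, xpad, ypad):
    """Overall panel size after repeating the board x by y times.
    Assumes the xpad/ypad margins are added to both outer edges."""
    return (board_w * x + 2 * xpad, board_h * y + 2 * ypad)

# hypothetical 2x3 panel of a 1.5" x 2.0" board with 0.1" margins
w, h = panel_dimensions(1.5, 2.0, 2, 3, 0.1, 0.1)
```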
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
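The dpi arithmetic here is easy to sanity-check: a PDF point is 1/72 inch, and dividing 72 points per inch by the .12 scale factor yields the 600 dpi grid:

```python
POINTS_PER_INCH = 72          # PDF "points" per inch
PDF_SCALE = 0.12              # scale factor mentioned above

dpi = POINTS_PER_INCH / PDF_SCALE        # 72 / 0.12 = 600
inches_per_point = 1 / POINTS_PER_INCH   # ~= 0.0138888889
```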
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
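The CURVE shape above stores a start point, two control points, and an end point, i.e. a cubic Bezier; BEZIER_PRECISION is the number of straight segments used to approximate it. A Python sketch of that sampling:

```python
def bezier_points(p0, c0, c1, p1, precision=36):
    """Sample a cubic Bezier into `precision` straight segments
    (precision + 1 points), mirroring BEZIER_PRECISION above."""
    pts = []
    for i in range(precision + 1):
        t = i / precision
        mt = 1.0 - t
        x = mt**3 * p0[0] + 3 * mt**2 * t * c0[0] + 3 * mt * t**2 * c1[0] + t**3 * p1[0]
        y = mt**3 * p0[1] + 3 * mt**2 * t * c0[1] + 3 * mt * t**2 * c1[1] + t**3 * p1[1]
        pts.append((x, y))
    return pts
```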
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
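At FILL_WIDTH = .01", filling a region line-by-line takes roughly height / FILL_WIDTH passes; a sketch of that count (the real script's stepping/overlap logic may differ):

```python
import math

FILL_WIDTH = 0.01  # inches covered per fill pass, as above

def fill_line_count(height_inches):
    """Number of parallel fill lines needed to cover a region of the
    given height (a sketch; rounding behavior is an assumption)."""
    return math.ceil(height_inches / FILL_WIDTH)
```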
# The max number of bytes to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
1667408880
XCLogParser is a CLI tool that parses the SLF serialization format used by Xcode and xcodebuild to store its Build and Test logs (xcactivitylog files).
You can find more information about the format used in the logs here. You can also check Erick Camacho's talk at AltConf 2019 about it.
The tool supports creating reports of different kinds to analyze the content of the logs. XCLogParser can give a lot of insights regarding build times for every module and file in your project, as well as warnings, errors and unit test results.
This is an example of a report created from the Build Log of the Kickstarter iOS open source app.
XCLogParser is written as an SPM executable and supports three commands:
- dump: dumps the whole xcactivitylog into a JSON document.
- parse: parses the xcactivitylog into different kinds of reports (json, flatJson, summaryJson, chromeTracer, issues and html).
- manifest: outputs the LogStoreManifest.plist file into a JSON document.

Depending on your needs, there are various use cases where XCLogParser can help you:
You can compile the executable with the command rake build[debug] or rake build[release], or simply use the Swift Package Manager commands directly. You can also run rake install to install the executable in your /usr/local/bin directory.
$ brew install xclogparser
We are currently working on adding more installation options.
You can automate the parsing of xcactivitylog files with a post-scheme build action. This way, the last build log can be parsed as soon as a build finishes. To do that, open the scheme editor in a project and expand the "Build" panel on the left side. You can then add a new "Post-action" run script and invoke the xclogparser executable with the required parameters:
xclogparser parse --project MyApp --reporter html
This script assumes that the xclogparser executable is installed and present in your PATH.
The run script is executed in a temporary directory by Xcode, so you may find it useful to immediately open the generated output with open MyAppLogs at the end of the script. The Finder will automatically open the output folder after a build completes and you can then view the generated HTML page that contains a nice visualization of your build! ✨
Errors thrown in post-action run scripts are silenced, so it could be hard to notice simple mistakes.
Since Xcode 11, xcodebuild only generates the .xcactivitylog build logs when the option -resultBundlePath is present. If you're compiling with that command and not with Xcode, be sure to set that option to a valid path ending in .xcresult.
Xcode likes to wait for all subprocesses to exit before completing the build. For this reason, you may notice a delayed "Build Succeeded" message if your post-scheme action takes too long to execute. You can work around this by offloading the execution to another script run in the background that immediately closes the input, output and error streams, in order to let Xcode and xcodebuild finish cleanly. Create the following launcher script and invoke it from your post-scheme action as follows: launcher command-that-parses-the-log-here
#!/bin/sh
# The first argument is the directory of the executable you want to run.
# The following arguments are directly forwarded to the executable.
# We execute the command in the background and immediately close the input, output
# and error streams in order to let Xcode and xcodebuild finish cleanly.
# This is done to prevent Xcode and xcodebuild being stuck in waiting for all
# subprocesses to end before exiting.
executable=$1
shift;
$executable "$@" <&- >&- 2>&- &
The post-scheme action is not executed in case the build fails. An undocumented feature in Xcode allows you to execute it even in this case: set the attribute runPostActionsOnFailure to YES in your scheme's BuildAction as follows:
<BuildAction buildImplicitDependencies='YES' parallelizeBuildables='YES' runPostActionsOnFailure='YES'>
The xcactivitylog files are created by Xcode/xcodebuild a few seconds after a build completes. The log is placed in the DerivedData/YourProjectName-UUID/Logs/Build directory. It is a binary file in the SLF format compressed with gzip.
In the same directory, you will find a LogStoreManifest.plist file with the list of xcactivitylog files generated for the project. This file can be monitored in order to get notified every time a new log is ready.
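Such monitoring could be sketched in Python by re-reading the manifest and diffing the listed file names. Note that the top-level "logs" dictionary below is an assumed structure for the plist; verify it against your Xcode version:

```python
import plistlib

def manifest_log_files(manifest_path):
    """Return the .xcactivitylog file names listed in LogStoreManifest.plist.
    Assumes log entries live under a top-level 'logs' dictionary keyed by
    unique identifier (hypothetical structure; check your Xcode version)."""
    with open(manifest_path, "rb") as f:
        manifest = plistlib.load(f)
    return sorted(entry["fileName"] for entry in manifest.get("logs", {}).values())
```

Polling this list (e.g. on a timer, or when the plist's mtime changes) and acting on newly appeared names is one simple way to trigger parsing as soon as a log is written.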
The test logs are created inside the DerivedData/YourProjectName-UUID/Logs/Test directory. Xcode and xcodebuild create different logs. You can find a good description of which ones are created in this blog post.
Dumps the whole content of an xcactivitylog file as a JSON document. You can use this command if you want a raw but easy-to-parse representation of a log.
Examples:
xclogparser dump --file path/to/log.xcactivitylog --output activity.json
xclogparser dump --project MyProject --output activity.json --redacted
An example output has been omitted for brevity since it can contain a lot of information regarding a build.
Available parameters
Parameter Name | Description | Required |
---|---|---|
--file | The path to the xcactivitylog . | No * |
--project | The name of the project if you don't know the path to the log. The tool will try to find the latest Build log in a folder that starts with that name inside the DerivedData directory. Use --strictProjectName for stricter name matching. | No * |
--workspace | The path to the xcworkspace file if you don't know the path to the log. It will generate the folder name for the project in the DerivedData folder using Xcode's hash algorithm and it will try to locate the latest Build Log inside that directory. | No * |
--xcodeproj | The path to the xcodeproj file if you don't know the path to the log and if the project doesn't have a xcworkspace file. It will generate the folder name for the project in the DerivedData folder using Xcode's hash algorithm and it will try to locate the latest Build Log inside that directory. | No * |
--derived_data | The path to the derived data folder if you are using xcodebuild to build your project with the -derivedDataPath option. | No |
--output | If specified, the JSON file will be written to the given path. If not defined, the command will output to the standard output. | No |
--redacted | If specified, the username will be replaced by the word redacted in the file paths contained in the logs. Useful for privacy reasons but slightly decreases the performance. | No |
--without_build_specific_info | If specified, build specific information will be removed from the logs (for example bolnckhlbzxpxoeyfujluasoupft will be removed from DerivedData/Product-bolnckhlbzxpxoeyfujluasoupft/Build ). Useful for grouping logs by their content. | No |
--strictProjectName | Used in conjunction with --project . If specified, a stricter name matching will be done for the project name. | No |
No *: One of --file, --project, --workspace, --xcodeproj parameters is required.
Parses the build information from an xcactivitylog and converts it into different representations such as a JSON file, flat JSON file, summary JSON file, issues JSON file, Chrome Tracer file or a static HTML page.
This command supports parsing additional data if some flags are passed to Xcode/xcodebuild:
- swiftc-reported compilation times. To use that feature, you need to build your project with the options -Xfrontend -debug-time-expression-type-checking and -Xfrontend -debug-time-function-bodies.
- Linker statistics. Add -Xlinker -print_statistics to Xcode's "Other Linker Flags"; it's useful for tracking linking time regressions.
- Clang time traces. If the -ftime-trace flag is specified, clang will generate a json tracing file for each translation unit, and XCLogParser will collect them and add their data to the parser output.

Examples:
xclogparser parse --project MyApp --reporter json --output build.json
xclogparser parse --file /path/to/log.xcactivitylog --reporter chromeTracer
xclogparser parse --workspace /path/to/MyApp.xcworkspace --derived_data /path/to/custom/DerivedData --reporter html --redacted
Example output available in the reporters section.
Available parameters
Parameter Name | Description | Required |
---|---|---|
--reporter | The reporter used to transform the logs. It can be either json , flatJson , summaryJson , chromeTracer , issues or html . (required) | Yes |
--file | The path to the xcactivitylog . | No * |
--project | The name of the project if you don't know the path to the log. The tool will try to find the latest Build log in a folder that starts with that name inside the DerivedData directory. Use --strictProjectName for stricter name matching. | No * |
--workspace | The path to the xcworkspace file if you don't know the path to the log. It will generate the folder name for the project in the DerivedData folder using Xcode's hash algorithm and it will try to locate the latest Build Log inside that directory. | No * |
--xcodeproj | The path to the xcodeproj file if you don't know the path to the log and if the project doesn't have a xcworkspace file. It will generate the folder name for the project in the DerivedData folder using Xcode's hash algorithm and it will try to locate the latest Build Log inside that directory. | No * |
--derived_data | The path to the derived data folder if you are using xcodebuild to build your project with the -derivedDataPath option. | No |
--output | If specified, the JSON file will be written to the given path. If not defined, the command will output to the standard output. | No |
--rootOutput | If specified, the HTML report will be written to the given folder. It takes precedence over --output . If the folder doesn't exist, it will be created. Relative paths from the home directory (~ ) are supported. | No |
--redacted | If specified, the username will be replaced by the word redacted in the file paths contained in the logs. Useful for privacy reasons but slightly decreases the performance. | No |
--without_build_specific_info | If specified, build specific information will be removed from the logs (for example bolnckhlbzxpxoeyfujluasoupft will be removed from DerivedData/Product-bolnckhlbzxpxoeyfujluasoupft/Build ). Useful for grouping logs by their content. | No |
--strictProjectName | Used in conjunction with --project . If specified, a stricter name matching will be done for the project name. | No |
--machine_name | If specified, the machine name will be used to create the buildIdentifier . If it is not specified, the host name will be used. | No |
--omit_warnings | Omit the warnings details in the final report. This is useful if there are too many of them and the report's size is too big with them. | No |
--omit_notes | Omit the notes details in the final report. This is useful if there are too many of them and the report's size is too big with them. | No |
--trunc_large_issues | If an individual task has more than 100 issues (warnings, notes, errors), they are truncated to 100. This is useful to reduce the amount of memory used. | No |
No *: One of --file, --project, --workspace, --xcodeproj parameters is required.
Outputs the contents of LogStoreManifest.plist, which lists all the xcactivitylog files generated for the project, as JSON.
Example:
xclogparser manifest --project MyApp
Example output:
{
"scheme" : "MyApp",
"timestampEnd" : 1548337458,
"fileName" : "D6539DED-8AC8-4508-9841-46606D0C794A.xcactivitylog",
"title" : "Build MyApp",
"duration" : 46,
"timestampStart" : 1548337412,
"uniqueIdentifier" : "D6539DED-8AC8-4508-9841-46606D0C794A",
"type" : "xcode"
}
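The duration field in that output is simply the difference of the two timestamps, which is easy to verify by loading a trimmed copy of the example with Python's json module:

```python
import json

# trimmed copy of the manifest example output above
manifest_json = """{
  "scheme": "MyApp",
  "timestampEnd": 1548337458,
  "duration": 46,
  "timestampStart": 1548337412
}"""

manifest = json.loads(manifest_json)
computed = manifest["timestampEnd"] - manifest["timestampStart"]  # build duration in seconds
```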
Available parameters
Parameter Name | Description | Required |
---|---|---|
--log_manifest | The path to an existing LogStoreManifest.plist . | No * |
--project | The name of the project if you don't know the path to the log. The tool will try to find the latest Build log in a folder that starts with that name inside the DerivedData directory. Use --strictProjectName for stricter name matching. | No * |
--workspace | The path to the xcworkspace file if you don't know the path to the log. It will generate the folder name for the project in the DerivedData folder using Xcode's hash algorithm and it will try to locate the latest Build Log inside that directory. | No * |
--xcodeproj | The path to the xcodeproj file if you don't know the path to the log and if the project doesn't have a xcworkspace file. It will generate the folder name for the project in the DerivedData folder using Xcode's hash algorithm and it will try to locate the latest Build Log inside that directory. | No * |
--derived_data | The path to the derived data folder if you are using xcodebuild to build your project with the -derivedDataPath option. | No |
--output | If specified, the JSON file will be written to the given path. If not defined, the command will output to the standard output. | No |
--strictProjectName | Used in conjunction with --project . If specified, a stricter name matching will be done for the project name. | No |
No *: One of --log_manifest, --project, --workspace, --xcodeproj parameters is required.
The parse command has different types of reporters built-in that can represent and visualize the data of the logs:
This reporter parses the log and outputs it as JSON. It contains information about the duration of each step in the build, along other metadata and interesting information such as errors and warnings.
Example:
xclogparser parse --project MyApp --reporter json
Example Output
{
"detailStepType" : "swiftCompilation",
"startTimestamp" : 1545143336.649699,
"endTimestamp" : 1545143336.649699,
"schema" : "MyApp",
"domain" : "com.apple.dt.IDE.BuildLogSection",
"parentIdentifier" : "095709ba230e4eda80ab43be3b68f99c_1545299644.4805899_20",
"endDate" : "2018-12-18T14:28:56.650000+0000",
"title" : "Compile \/Users\/<redacted>\/projects\/MyApp\/Libraries\/Utilities\/Sources\/Disposables\/Cancelable.swift",
"identifier" : "095709ba230e4eda80ab43be3b68f99c_1545299644.4805899_185",
"signature" : "CompileSwift normal x86_64 \/Users\/<redacted>\/MyApp\/Libraries\/Utilities\/Sources\/Disposables\/Cancelable.swift",
"type" : "detail",
"buildStatus" : "succeeded",
"subSteps" : [
],
"startDate" : "2018-12-18T14:28:56.650000+0000",
"buildIdentifier" : "095709ba230e4eda80ab43be3b68f99c_1545299644.4805899",
"machineName" : "095709ba230e4eda80ab43be3b68f99c",
"duration" : 5.5941859483718872,
"errorCount" : 0,
"warningCount" : 0,
"errors" : [],
"warnings" : [],
"swiftFunctionTimes" : [
{
"durationMS" : 0.08,
"occurrences" : 5,
"startingColumn" : 36,
"startingLine" : 48,
"file" : "file:\/\/\/Users\/<redacted>\/MyApp\/Libraries\/Utilities\/Sources\/Disposables\/Cancelable.swift",
"signature" : "getter description"
}
],
"swiftTypeCheckTimes" : [
{
"durationMS" : 0.5,
"occurrences" : 2,
"startingColumn" : 16,
"startingLine" : 9,
"file" : "file:\/\/\/Users\/<redacted>\/MyApp\/Libraries\/Utilities\/Sources\/Disposables\/Cancelable.swift"
}
]
}
For more information regarding each field, check out the JSON format documentation.
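Fields like swiftFunctionTimes lend themselves to post-processing; for instance, ranking functions by total cost (durationMS times occurrences). The sketch below runs against the structure shown in the example (the second entry is made up for illustration):

```python
def slowest_functions(step, top=3):
    """Rank swiftFunctionTimes entries by total cost (durationMS * occurrences)."""
    times = step.get("swiftFunctionTimes", [])
    return sorted(times, key=lambda t: t["durationMS"] * t["occurrences"], reverse=True)[:top]

step = {"swiftFunctionTimes": [
    {"signature": "getter description", "durationMS": 0.08, "occurrences": 5},
    {"signature": "init(coder:)", "durationMS": 2.0, "occurrences": 1},  # hypothetical entry
]}
ranked = slowest_functions(step)
```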
Parses the log as an array of JSON objects with no nested steps (the field subSteps is always empty). Useful for dumping the data into a database so it's easier to analyze. The format of the JSON objects in the array is the same as the one used in the json reporter.
Example:
xclogparser parse --file path/to/log.xcactivitylog --reporter flatJson
Example Output
[
{
"parentIdentifier" : "",
"title" : "Build MobiusCore",
"warningCount" : 0,
"duration" : 0,
"startTimestamp" : 1558590748,
"signature" : "Build MobiusCore",
"endDate" : "2019-05-23T05:52:28.274000Z",
"errorCount" : 0,
"domain" : "Xcode.IDEActivityLogDomainType.BuildLog",
"type" : "main",
"identifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253_1",
"buildStatus" : "succeeded",
"schema" : "MobiusCore",
"subSteps" : [
],
"endTimestamp" : 1558590748,
"architecture" : "",
"machineName" : "68a2bbd0048a454d91b3734b5d5dc45e",
"buildIdentifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253",
"startDate" : "2019-05-23T05:52:28.244000Z",
"documentURL" : "",
"detailStepType" : "none"
},
{
"parentIdentifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253_1",
"title" : "Prepare build",
"warningCount" : 0,
"duration" : 0,
"startTimestamp" : 1558590748,
"signature" : "Prepare build",
"endDate" : "2019-05-23T05:52:28.261000Z",
"errorCount" : 0,
"domain" : "Xcode.IDEActivityLogDomainType.XCBuild.Preparation",
"type" : "target",
"identifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253_2",
"buildStatus" : "succeeded",
"schema" : "MobiusCore",
"subSteps" : [
],
"endTimestamp" : 1558590748,
"architecture" : "",
"machineName" : "68a2bbd0048a454d91b3734b5d5dc45e",
"buildIdentifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253",
"startDate" : "2019-05-23T05:52:28.254000Z",
"documentURL" : "",
"detailStepType" : "none"
},{
"parentIdentifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253_1",
"title" : "Build target MobiusCore",
"warningCount" : 0,
"duration" : 4,
"startTimestamp" : 1558590708,
"signature" : "MobiusCore-fmrwijcuutzbrmbgantlsfqxegcg",
"endDate" : "2019-05-23T05:51:51.890000Z",
"errorCount" : 0,
"domain" : "Xcode.IDEActivityLogDomainType.target.product-type.framework",
"type" : "target",
"identifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253_3",
"buildStatus" : "succeeded",
"schema" : "MobiusCore",
"subSteps" : [
],
"endTimestamp" : 1558590712,
"architecture" : "",
"machineName" : "68a2bbd0048a454d91b3734b5d5dc45e",
"buildIdentifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253",
"startDate" : "2019-05-23T05:51:48.206000Z",
"documentURL" : "",
"detailStepType" : "none"
},
...
]
For more information regarding each field, check out the JSON format documentation.
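Because flatJson has no nesting, it loads directly into tabular tooling; for example, totaling durations per step type in Python (using a trimmed version of the array above):

```python
import json
from collections import defaultdict

def duration_by_type(steps):
    """Sum the duration field per step type across a flatJson array."""
    totals = defaultdict(float)
    for step in steps:
        totals[step["type"]] += step["duration"]
    return dict(totals)

# trimmed copy of the flatJson example above
steps = json.loads("""[
  {"title": "Build MobiusCore", "type": "main", "duration": 0},
  {"title": "Prepare build", "type": "target", "duration": 0},
  {"title": "Build target MobiusCore", "type": "target", "duration": 4}
]""")
totals = duration_by_type(steps)
```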
Parses the log as a JSON object, with no nested steps (the field subSteps is always empty). Useful for getting a high-level summary of the build.
Example:
xclogparser parse --file path/to/log.xcactivitylog --reporter summaryJson
Example Output
{
"parentIdentifier" : "",
"title" : "Build MobiusCore",
"warningCount" : 0,
"duration" : 0,
"startTimestamp" : 1558590748,
"signature" : "Build MobiusCore",
"endDate" : "2019-05-23T05:52:28.274000Z",
"errorCount" : 0,
"domain" : "Xcode.IDEActivityLogDomainType.BuildLog",
"type" : "main",
"identifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253_1",
"buildStatus" : "succeeded",
"schema" : "MobiusCore",
"subSteps" : [
],
"endTimestamp" : 1558590748,
"architecture" : "",
"machineName" : "68a2bbd0048a454d91b3734b5d5dc45e",
"buildIdentifier" : "68a2bbd0048a454d91b3734b5d5dc45e_1558640253",
"startDate" : "2019-05-23T05:52:28.244000Z",
"documentURL" : "",
"detailStepType" : "none"
}
For more information regarding each field, check out the JSON format documentation.
Parses the xcactivitylog as an array of JSON objects in the format used by the Chrome tracer. You can use this JSON to visualize the build times in the Chrome tracing tool inside Chrome: chrome://tracing.
Example:
xclogparser parse --file path/to/log.xcactivitylog --reporter chromeTracer
Example Output
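For reference, a Chrome trace "complete" event built from a flat step might look like the sketch below. This mirrors the general Chrome Trace Event Format (ph "X" events with microsecond timestamps); XCLogParser's own chromeTracer output may differ in detail:

```python
def to_trace_event(step, pid=1, tid=1):
    """Build a Chrome Trace Event Format 'complete' (ph='X') event from a
    flat step; timestamps are converted from seconds to microseconds."""
    return {
        "name": step["title"],
        "ph": "X",
        "ts": int(step["startTimestamp"] * 1_000_000),
        "dur": int((step["endTimestamp"] - step["startTimestamp"]) * 1_000_000),
        "pid": pid,
        "tid": tid,
    }

# values taken from the flatJson example above
event = to_trace_event({"title": "Build target MobiusCore",
                        "startTimestamp": 1558590708, "endTimestamp": 1558590712})
```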
Outputs the list of Errors and Warnings found in the log as a JSON document. Useful when you only want to check the issues found while building.
Example:
xclogparser parse --file path/to/log.xcactivitylog --reporter issues
Example Output
```json
{
"errors" : [
{
"characterRangeStart" : 0,
"startingColumnNumber" : 5,
"endingColumnNumber" : 30,
"characterRangeEnd" : 18446744073709551615,
"title" : "use of undeclared type 'AType'",
"endingLineNumber" : 10,
"type" : "swiftError",
"documentURL" : "file:\/\/\/MyProject\/MyFile.swift",
"startingLineNumber" : 10,
"severity" : 1,
"detail": "\/MyProject\/MyFile.swift:10:5: error: use of undeclared type 'AType'\r func doSomething(completion: @escaping () -> AType) -> void) {\r^~~~\r"
}
],
"warnings" : [
{
"characterRangeStart" : 0,
"startingColumnNumber" : 5,
"endingColumnNumber" : 30,
"characterRangeEnd" : 18446744073709551615,
"title" : "Warning",
"endingLineNumber" : 10,
"type" : "swiftWarning",
"documentURL" : "file:\/\/\/MyProject\/MyFile.swift",
"startingLineNumber" : 10,
"severity" : 1
}
]
}
```
Generates an HTML report to visualize build times per module and file, along with warning and error messages.
Example:
xclogparser parse --file path/to/log.xcactivitylog --reporter html --output build/reports
Example Output
Environment | Version |
---|---|
🛠 Xcode | 11.0 |
🐦 Language | Swift 5.0 |
XCLogParser is currently in alpha status. We are using it internally and tested it on various projects, but we need the help from the community to test and improve it with more iOS and Mac applications.
MacOS:
- git clone git@github.com:MobileNativeFoundation/XCLogParser.git
- Run rake gen_resources to generate a static resource Swift file that is needed to compile the app.
- Run swift package generate-xcodeproj to generate an Xcode project (or use any text editor).
- Run rake test to run the tests.

Linux:
- docker build --tag xclogparser .
- ./run-in-docker.sh
If you find a bug or you would like to propose an improvement, you're welcome to create an issue.
To release a new version:
- Create a release with a tag in the format vx.x.x. Provide release title and description.
- Archive the macOS binary with DEVELOPER_DIR=<path_to_xcode_version> rake archive. Use an Xcode version matching the Requirements section.
- Build the Linux binary with ./build_release_in_docker.sh
- Upload releases/XCLogParser-x.x.x.zip and releases/linux/XCLogParser-x.x.x-Linux.zip to the release.
This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.
Author: MobileNativeFoundation
Source Code: https://github.com/MobileNativeFoundation/XCLogParser
License: Apache-2.0 license
1667189836
GitScrum is a Project Management Tool, developed to help entrepreneurs, freelancers, managers, and teams Skyrocket their Productivity with the Agile methodology and Gamification.
It’s a powerful and functional tool you can use to organize your projects and manage your team's tasks within workspaces.
GitScrum brings all important features for you to establish high standard objectives and lead your team to reach them, working in an interactive and collaborative way. It facilitates tasks delegation and monitoring, with visual resources that help you guide your team throughout all projects execution, from start to finish.
Get rid of outdated, confusing methods that mix dozens of isolated apps, files, emails and physical sticky notes, and replace the communication madness with an effective, "all in one" project management tool that has everything you need: GitScrum Board with dynamic Kanban boards, GitScrum Sprints to associate your tasks with milestones, GitScrum Gantt Charts for an agenda view, slick integrations and many more interactive features.
GitScrum is a way better option to manage your tasks, because not only will you take notes on your pending actions, but you’ll turn them into objectives and accomplish them.
Promote collaboration among team members, partners, clients and stakeholders working together in all project stages to develop innovative solutions. Create discussions, comment on each other's actions, recall attention by mentioning each other’s names.
Improve your products and services with GitScrum User Stories - small reports on end-users’ needs and wishes, applying the agile principles of welcoming changes and delivering value constantly.
Make GitScrum yours, choosing your preference among 23 languages, dozens of project templates and, the best: with the possibility to showcase YOUR BRAND and domain with the GitScrum White Label feature.
Is that all? No! GitScrum helps you turn your team members into Superstars, with the power of gamification. Meet the GitScrum Rock Star Team feature to add joy and healthy competitiveness to your work environment.
Our team of Scrum and Agile specialists have developed the ultimate tool for you to create amazing projects, manage your tasks smartly, lead your team to enthusiasm and reach unprecedented results.
Skyrocket your productivity with GitScrum!
Site: https://site.gitscrum.com
Learn more about GitScrum and Agile Methodologies: https://magazine.gitscrum.com
GitScrum's goal is to "Transform your IT Team into Instant Rock Stars" !!!
Facebook Group: https://www.facebook.com/groups/gitscrum/
Follow us on Twitter: https://twitter.com/gitscrum
This version available here is the first code of the GitScrum application, developed in 2016 and supported until 2017.
It's a free and open source version. If you want to know the current GitScrum, go to our website [ https://site.gitscrum.com ]
GitScrum can be integrated with GitHub, GitLab or Bitbucket.
Product Backlog contains the Product Owner's assessment of business value
User Story is a description consisting of one or more sentences in the everyday or business language that captures what a user does or needs to do as part of his or her job function.
Features: Acceptance criteria, prioritization using MoSCoW, definition of done checklist, pie chart, assign labels, team members, activities, comments and issues.
Sprint Backlog is the property of the development team, and all included estimates are provided by the development team. Often an accompanying sprint planning board is used to see and change the state of the issues.
Features: Sprint planning using Kanban board, burndown chart, definition of done checklist, effort, attachments, activities, comments and issues.
An Issue is added to a sprint backlog through a user story, or directly to the sprint backlog. Generally, each issue should be small enough to be easily completed within a single day.
Features: Progress state (e.g. to do, in progress, done or archived), issue type (e.g. Improvement, Support Request, Feedback, Customer Problem, UX, Infrastructure, Testing Task, etc...), definition of done checklist, assign labels, effort, attachments, comments, activities, team members.
The requirements for the Laravel GitScrum application are:
Use Docker - Containers: php7, nginx and mysql57
$ composer create-project gitscrum-community-edition/laravel-gitscrum --stability=stable --keep-vcs
$ cd laravel-gitscrum
Important: If you have not yet installed composer: Installation - Linux / Unix / OSX
$ git clone git@github.com:GitScrum-Community/laravel-gitscrum.git
$ cd laravel-gitscrum
$ composer update
$ composer run-script post-root-package-install
Important: If you do not have the .env file in the root folder, you must copy or rename .env.example to .env
.env file
APP_URL=http://yourdomain.tld (you must use protocol http or https)
Options: en | zh | zh_cn | ru | de | es | pt | it | id | fr | hu
.env file
APP_LANG=en
Can you help us translate a few phrases into different languages? See: https://github.com/GitScrum-Community/laravel-gitscrum/tree/feature/language-pack/resources/lang
.env file
DB_CONNECTION=mysql
DB_HOST=XXXXXX
DB_PORT=3306
DB_DATABASE=XXXXX
DB_USERNAME=XXXX
DB_PASSWORD=XXXXX
Remember: Create the database for GitScrum before running the artisan commands.
php artisan migrate
php artisan db:seed --class=SettingSeeder
You must create a new GitHub App: visit GitHub's New OAuth Application page, fill out the form, and grab your Client ID and Secret.
Application name: gitscrum
Homepage URL: URL (Same as APP_URL at .env)
Application description: gitscrum
Authorization callback URL: http://{same URL as APP_URL}/auth/provider/github/callback
.env file
GITHUB_CLIENT_ID=XXXXX
GITHUB_CLIENT_SECRET=XXXXXXXXXXXXXXXXXX
You must create a new GitLab application: visit GitLab's new application page, fill out the form, and grab your Application ID and Secret.
name: gitscrum
Redirect URI: http://{same URL as APP_URL}/auth/provider/gitlab/callback
Scopes: api and read_user
.env file
GITLAB_KEY=XXXXX -> Application Id
GITLAB_SECRET=XXXXXXXXXXXXXXXXXX
GITLAB_INSTANCE_URI=https://gitlab.com/
You must create a new Bitbucket OAuth Consumer: visit Bitbucket's new consumer guide, and make sure you grant write permissions (repositories, issues) when creating the consumer.
name: gitscrum
Callback URL: http://{same URL as APP_URL}/auth/provider/bitbucket/callback
URL: http://{same URL as APP_URL}
Uncheck (This is a private consumer)
.env file
BITBUCKET_CLIENT_ID=XXXXX -> Bitbucket Key
BITBUCKET_CLIENT_SECRET=XXXXXXXXXXXXXXXXXX -> Bitbucket Secret
.env file
PROXY_PORT=
PROXY_METHOD=
PROXY_SERVER=
PROXY_USER=
PROXY_PASS=
Renato Marinho: Facebook / LinkedIn / Skype: renatomarinho13
Contributions are always welcome! https://github.com/GitScrum-Community/laravel-gitscrum/graphs/contributors
Author: Gitscrum-team
Source Code: https://github.com/gitscrum-team/laravel-gitscrum
License: MIT license
1666912320
The R package forecast provides methods and tools for displaying and analysing univariate time series forecasts including exponential smoothing via state space models and automatic ARIMA modelling.
This package is now retired in favour of the fable package. The forecast package will remain in its current state, and maintained with bug fixes only. For the latest features and development, we recommend forecasting with the fable package.
You can install the stable version from CRAN.
install.packages('forecast', dependencies = TRUE)
You can install the development version from GitHub:
# install.packages("remotes")
remotes::install_github("robjhyndman/forecast")
library(forecast)
library(ggplot2)
# ETS forecasts
USAccDeaths %>%
ets() %>%
forecast() %>%
autoplot()
# Automatic ARIMA forecasts
WWWusage %>%
auto.arima() %>%
forecast(h=20) %>%
autoplot()
# ARFIMA forecasts
library(fracdiff)
x <- fracdiff.sim(100, ma = -0.4, d = 0.3)$series
arfima(x) %>%
forecast(h=30) %>%
autoplot()
# Forecasting with STL
USAccDeaths %>%
stlm(modelfunction=ar) %>%
forecast(h=36) %>%
autoplot()
AirPassengers %>%
stlf(lambda=0) %>%
autoplot()
USAccDeaths %>%
stl(s.window='periodic') %>%
forecast() %>%
autoplot()
# TBATS forecasts
USAccDeaths %>%
tbats() %>%
forecast() %>%
autoplot()
taylor %>%
tbats() %>%
forecast() %>%
autoplot()
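A natural next step after fitting any of these models is out-of-sample evaluation. The following sketch (not from the original README) holds out the last 24 months of USAccDeaths, fits an ETS model to the rest, and scores the forecasts with accuracy():

```r
library(forecast)

# Fit on observations up to the end of 1976
train <- window(USAccDeaths, end = c(1976, 12))
fit <- ets(train)
fc <- forecast(fit, h = 24)

# Compare forecasts against the held-out 1977-1978 observations
accuracy(fc, window(USAccDeaths, start = c(1977, 1)))
```

The same pattern works with auto.arima(), tbats(), or stlf() in place of ets(), which makes it easy to compare models on the same holdout.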
Author: Robjhyndman
Source Code: https://github.com/robjhyndman/forecast
1666753500
Robust Shortest Path Finder for the Julia Language.
This package provides functions to find robust shortest paths. Please see the reference papers below.
Install
julia> Pkg.add("RobustShortestPath")
This will also install LightGraphs.jl, if you don't have it installed in your Julia system already.
To check that it works:
julia> Pkg.test("RobustShortestPath")
get_robust_path
This function solves the robust shortest path problem proposed by Bertsimas and Sim (2003) and integrates the idea of Lee and Kwon (2014).
get_robust_path_two
This function solves the robust shortest path problem with two multiplicative uncertain cost coefficients proposed by Kwon et al. (2013).
Example
Example network and data from Kwon et al. (2013):
The above network data should be prepared in the column vector form as follows:
data = [
1 4 79 31 66 28;
1 2 59 97 41 93;
2 4 31 21 50 40;
2 3 90 52 95 38;
2 5 9 23 95 59;
2 6 32 57 73 7;
3 9 89 100 38 21;
3 8 66 13 4 72;
3 6 68 95 58 58;
3 7 47 12 56 20;
4 3 14 19 36 84;
4 9 95 65 88 42;
4 8 88 13 62 54;
5 3 44 8 62 53;
5 6 83 66 30 19;
6 7 33 3 7 8;
6 8 37 99 29 46;
7 11 79 54 23 3;
7 12 10 37 35 43;
8 7 95 71 85 56;
8 10 0 95 16 64;
8 12 30 38 16 3;
9 10 5 69 51 71;
9 11 44 60 60 17;
10 13 79 78 16 59;
10 14 91 59 64 61;
11 14 53 38 84 77;
11 15 80 85 78 6;
11 13 56 23 26 85;
12 15 75 80 31 38;
12 14 1 100 18 40;
13 14 48 28 45 33;
14 15 25 71 33 56;
]
start_node = data[:,1] #first column of data
end_node = data[:,2] #second column of data
p = data[:,3] #third
q = data[:,4] #fourth
c = data[:,5] #fifth
d = data[:,6] #sixth
For a single-coefficient case as in Bertsimas and Sim (2003):
using RobustShortestPath
Gamma=3
origin=1
destination=15
robust_path, robust_x, worst_case_cost = get_robust_path(start_node, end_node, c, d, Gamma, origin, destination)
The result will look like:
([1,4,8,12,15],[1,0,0,0,0,0,0,0,0,0 … 0,0,0,0,0,0,1,0,0,0],295)
For a two-coefficient case as in Kwon et al. (2013):
using RobustShortestPath
Gamma_u=2
Gamma_v=3
origin=1
destination=15
robust_path, robust_x, worst_case_cost = get_robust_path_two(start_node, end_node, p, q, c, d, Gamma_u, Gamma_v, origin, destination)
The result should look like:
([1,4,3,7,12,14,15],[1,0,0,0,0,0,0,0,0,1 … 0,0,0,0,0,0,0,1,0,1],25314.0)
See runtest.jl for more information.
get_shortest_path
This package also provides an interface to dijkstra_shortest_paths of LightGraphs.jl.
path, x = get_shortest_path(start_node, end_node, link_length, origin, destination)
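To illustrate what this computes, here is a plain-Python Dijkstra over the example network above, treating each data row as a directed arc and using the c column as the link length. This is an illustrative cross-check only, not part of the package:

```python
import heapq

# Directed arcs (start_node, end_node, c) taken from the example data above
edges = [
    (1, 4, 66), (1, 2, 41), (2, 4, 50), (2, 3, 95), (2, 5, 95),
    (2, 6, 73), (3, 9, 38), (3, 8, 4), (3, 6, 58), (3, 7, 56),
    (4, 3, 36), (4, 9, 88), (4, 8, 62), (5, 3, 62), (5, 6, 30),
    (6, 7, 7), (6, 8, 29), (7, 11, 23), (7, 12, 35), (8, 7, 85),
    (8, 10, 16), (8, 12, 16), (9, 10, 51), (9, 11, 60), (10, 13, 16),
    (10, 14, 64), (11, 14, 84), (11, 15, 78), (11, 13, 26),
    (12, 15, 31), (12, 14, 18), (13, 14, 45), (14, 15, 33),
]

def dijkstra(edges, origin, destination):
    """Return (path, cost) of the shortest path by summed link length."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    dist, prev = {origin: 0}, {}
    pq = [(0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == destination:
            break  # destination settled; its distance is final
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    # Walk predecessors back from the destination to rebuild the path
    path, node = [], destination
    while node != origin:
        path.append(node)
        node = prev[node]
    path.append(origin)
    return path[::-1], dist[destination]

path, cost = dijkstra(edges, 1, 15)
print(path, cost)  # -> [1, 4, 3, 8, 12, 15] 153
```

Running get_shortest_path with the c column as link_length should identify the same minimum-cost route over this network.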
Contributor
This package is written and maintained by Changhyun Kwon.
Author: Chkwon
Source Code: https://github.com/chkwon/RobustShortestPath.jl
License: View license
1666751564
Check out how to implement badge numbers on Flutter app icons: https://youtu.be/VDNS5f6awYM
Visit my channel for more awesome Flutter content: http://youtube.com/c/vijaycreationsflutter
#flutter #flutterdev #flutterpackage #package #flutterapp #appdev #mobileapp #developer #ui #ux #uiux #app
1666455108
Check out how to implement the flutter_credit_card package in your Flutter apps: https://youtu.be/pPFFtgu8Ows
Visit my channel for more awesome Flutter content: http://youtube.com/c/vijaycreationsflutter
#flutter #flutterdev #flutterpackage #package #flutterapp #appdev #mobileapp #developer #ui #ux #uiux #app