This package provides load support for Stata, SPSS, and SAS files under the FileIO.jl package.
Use Pkg.add("StatFiles") in Julia to install StatFiles and its dependencies.
To read a Stata, SPSS, or SAS file into a DataFrame, use the following Julia code:
using StatFiles, DataFrames
df = DataFrame(load("data.dta"))
The call to load returns a struct that implements the IterableTables.jl interface, so it can be passed to any function that can handle iterable tables, i.e. all the sinks in IterableTables.jl. Here are some examples of materializing a Stata, SPSS, or SAS file into data structures other than a DataFrame:
using StatFiles, DataTables, IndexedTables, TimeSeries, Temporal, Gadfly
# Load into a DataTable
dt = DataTable(load("data.dta"))
# Load into an IndexedTable
it = IndexedTable(load("data.dta"))
# Load into a TimeArray
ta = TimeArray(load("data.dta"))
# Load into a TS
ts = TS(load("data.dta"))
# Plot directly with Gadfly
plot(load("data.dta"), x=:a, y=:b, Geom.line)
load also supports the pipe syntax. For example, to load a Stata, SPSS, or SAS file into a DataFrame, one can use the following code:
using StatFiles, DataFrames
df = load("data.dta") |> DataFrame
The pipe syntax is especially useful when combining it with Query.jl queries. For example, one can easily load a Stata, SPSS, or SAS file, pipe it into a query, then pipe it to the save function to store the results in a new file.
Author: Queryverse
Source Code: https://github.com/queryverse/StatFiles.jl
License: View license
BedgraphFiles.jl
This project follows the semver pro forma and uses the git-flow branching model.
This package provides load and save support for Bedgraph under the FileIO package, and also implements the IterableTables interface for easy conversion between tabular data structures.
You can install BedgraphFiles from the Julia REPL. Press ] to enter pkg mode, then enter the following:
add BedgraphFiles
If you are interested in the cutting edge of the development, please check out the develop branch to try new features before release.
To load a bedGraph file into a Vector{Bedgraph.Record}, use the following Julia code:
using FileIO, BedgraphFiles, Bedgraph
records = Vector{Bedgraph.Record}(load("data.bedgraph"))
records = collect(Bedgraph.Record, load("data.bedgraph"))
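For context, each bedGraph data line is simply chrom, start, end, and value separated by whitespace. A minimal Python sketch of parsing such lines (illustrative only, not this package's implementation):

```python
def parse_bedgraph(lines):
    """Yield (chrom, start, end, value) tuples from bedGraph data lines.

    Header/metadata lines (track, browser, comments) are skipped.
    """
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("track", "browser", "#")):
            continue
        chrom, start, end, value = line.split()[:4]
        yield chrom, int(start), int(end), float(value)
```

The real package returns Bedgraph.Record values and preserves header information where it can; the sketch above only shows the shape of the data lines.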
Note: saving on top of an existing file will overwrite metadata/header information with a minimal working header.
The following example saves a Vector{Bedgraph.Record} to a bedGraph file:
using FileIO, BedgraphFiles, Bedgraph
records = [Bedgraph.Record("chr", i, i + 99, rand()) for i in 1:100:1000]
save("output.bedgraph", records)
The execution of load returns a struct that adheres to the IterableTables interface, and can be passed to any function that also implements the interface, i.e. all the sinks in IterableTables.jl.
The following code shows an example of loading a bedGraph file into a DataFrame:
using FileIO, BedgraphFiles, DataFrames
df = DataFrame(load("data.bedgraph"))
Here are some more examples of materialising a bedGraph file into other data structures:
using FileIO, BedgraphFiles, DataTables, IndexedTables, Gadfly
# Load into a DataTable
dt = DataTable(load("data.bedgraph"))
# Load into an IndexedTable
it = IndexedTable(load("data.bedgraph"))
# Plot directly with Gadfly
plot(load("data.bedgraph"), xmin=:leftposition, xmax=:rightposition, y=:value, Geom.bar)
The following code saves any compatible source to a bedGraph file:
using FileIO, BedgraphFiles
it = getiterator(data)
save("output.bedgraph", it)
Both load and save also support the pipe syntax. For example, to load a bedGraph file into a DataFrame, one can use the following code:
using FileIO, BedgraphFiles, DataFrames
df = load("data.bedgraph") |> DataFrame
To save an iterable table, one can use the following form:
using FileIO, BedgraphFiles, DataFrames
df = # Acquire a DataFrame somehow.
df |> save("output.bedgraph")
The save method returns the data provided or a Vector{Bedgraph.Record}. This is useful when periodically saving your work during a sequence of operations.
records = some sequence of operations |> save("output.bedgraph")
The pipe syntax is especially useful when combining it with Query.jl queries. For example, one can easily load a bedGraph file, pipe its data into a query, and then store the query result by piping it to the save function.
using FileIO, BedgraphFiles, Query
load("data.bedgraph") |> @filter(_.chrom == "chr19") |> save("data-chr19.bedgraph")
This package is largely -- if not completely -- inspired by the work of David Anthoff. Other influences are from the BioJulia community.
Author: CiaranOMara
Source Code: https://github.com/CiaranOMara/BedgraphFiles.jl
License: MIT license
This package provides load support for Parquet files under the FileIO.jl package.
Use ] add ParquetFiles in Julia to install ParquetFiles and its dependencies.
To read a Parquet file into a DataFrame, use the following Julia code:
using ParquetFiles, DataFrames
df = DataFrame(load("data.parquet"))
The call to load returns a struct that implements the IterableTables.jl interface, so it can be passed to any function that can handle iterable tables, i.e. all the sinks in IterableTables.jl. Here are some examples of materializing a Parquet file into data structures other than a DataFrame:
using ParquetFiles, IndexedTables, TimeSeries, Temporal, VegaLite
# Load into an IndexedTable
it = IndexedTable(load("data.parquet"))
# Load into a TimeArray
ta = TimeArray(load("data.parquet"))
# Load into a TS
ts = TS(load("data.parquet"))
# Plot directly with VegaLite
@vlplot(:point, data=load("data.parquet"), x=:a, y=:b)
load also supports the pipe syntax. For example, to load a Parquet file into a DataFrame, one can use the following code:
using ParquetFiles, DataFrames
df = load("data.parquet") |> DataFrame
The pipe syntax is especially useful when combining it with Query.jl queries. For example, one can easily load a Parquet file, pipe it into a query, then pipe it to the save function to store the results in a new file.
Author: Queryverse
Source Code: https://github.com/queryverse/ParquetFiles.jl
License: View license
This package provides load and save support for Feather files under the FileIO.jl package.
Use Pkg.add("FeatherFiles") in Julia to install FeatherFiles and its dependencies.
To read a feather file into a DataFrame, use the following Julia code:
using FeatherFiles, DataFrames
df = DataFrame(load("data.feather"))
The call to load returns a struct that implements the IterableTables.jl interface, so it can be passed to any function that can handle iterable tables, i.e. all the sinks in IterableTables.jl. Here are some examples of materializing a feather file into data structures other than a DataFrame:
using FeatherFiles, DataTables, IndexedTables, TimeSeries, Temporal, Gadfly
# Load into a DataTable
dt = DataTable(load("data.feather"))
# Load into an IndexedTable
it = IndexedTable(load("data.feather"))
# Load into a TimeArray
ta = TimeArray(load("data.feather"))
# Load into a TS
ts = TS(load("data.feather"))
# Plot directly with Gadfly
plot(load("data.feather"), x=:a, y=:b, Geom.line)
The following code saves any iterable table as a feather file:
using FeatherFiles
save("output.feather", it)
This will work as long as it is any of the types supported as sources in IterableTables.jl.
Both load and save also support the pipe syntax. For example, to load a feather file into a DataFrame, one can use the following code:
using FeatherFiles, DataFrames
df = load("data.feather") |> DataFrame
To save an iterable table, one can use the following form:
using FeatherFiles, DataFrames
df = # Acquire a DataFrame somehow
df |> save("output.feather")
The pipe syntax is especially useful when combining it with Query.jl queries. For example, one can easily load a feather file, pipe it into a query, then pipe it to the save function to store the results in a new file.
Author: Queryverse
Source Code: https://github.com/queryverse/FeatherFiles.jl
License: View license
tigris
tigris is an R package that allows users to directly download and use TIGER/Line shapefiles (https://www.census.gov/geographies/mapping-files/time-series/geo/tiger-line-file.html) from the US Census Bureau.
To install the package from CRAN, issue the following command in R:
install.packages('tigris')
Or, get the development version from GitHub:
devtools::install_github('walkerke/tigris')
tigris functions return simple features objects with a default year of 2020. To get started, choose a function from the table below and use it with a state and/or county if required. You'll get back an sf object for use in your mapping and spatial analysis projects:
library(tigris)
library(ggplot2)
manhattan_roads <- roads("NY", "New York")
ggplot(manhattan_roads) +
geom_sf() +
theme_void()
tigris only returns feature geometries for US Census data, which default to the coordinate reference system NAD 1983 (EPSG: 4269). For US Census demographic data (optionally pre-joined to tigris geometries), try the tidycensus package. For help deciding on an appropriate coordinate reference system for your project, take a look at the crsuggest package.
To learn more about how to use tigris, read Chapter 5 of the book Analyzing US Census Data: Methods, Maps, and Models in R.
Available datasets:
Please note: cartographic boundary files in tigris are not available for 2011 and 2012.
Function | Datasets available | Years available |
---|---|---|
nation() | cartographic (1:5m; 1:20m) | 2013-2021 |
divisions() | cartographic (1:500k; 1:5m; 1:20m) | 2013-2021 |
regions() | cartographic (1:500k; 1:5m; 1:20m) | 2013-2021 |
states() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 1990, 2000, 2010-2021 |
counties() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 1990, 2000, 2010-2021 |
tracts() | TIGER/Line; cartographic (1:500k) | 1990, 2000, 2010-2021 |
block_groups() | TIGER/Line; cartographic (1:500k) | 1990, 2000, 2010-2021 |
blocks() | TIGER/Line | 2000, 2010-2021 |
places() | TIGER/Line; cartographic (1:500k) | 2011-2021 |
pumas() | TIGER/Line; cartographic (1:500k) | 2012-2021 |
school_districts() | TIGER/Line; cartographic | 2011-2021 |
zctas() | TIGER/Line; cartographic (1:500k) | 2000, 2010, 2012-2021 |
congressional_districts() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 2011-2021 |
state_legislative_districts() | TIGER/Line; cartographic (1:500k) | 2011-2021 |
voting_districts() | TIGER/Line | 2012 |
area_water() | TIGER/Line | 2011-2021 |
linear_water() | TIGER/Line | 2011-2021 |
coastline() | TIGER/Line | 2013-2021 |
core_based_statistical_areas() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 2011-2021 |
combined_statistical_areas() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 2011-2021 |
metro_divisions() | TIGER/Line | 2011-2021 |
new_england() | TIGER/Line; cartographic (1:500k) | 2011-2021 |
county_subdivisions() | TIGER/Line; cartographic (1:500k) | 2010-2021 |
urban_areas() | TIGER/Line; cartographic (1:500k) | 2012-2021 |
primary_roads() | TIGER/Line | 2011-2021 |
primary_secondary_roads() | TIGER/Line | 2011-2021 |
roads() | TIGER/Line | 2011-2021 |
rails() | TIGER/Line | 2011-2021 |
native_areas() | TIGER/Line; cartographic (1:500k) | 2011-2021 |
alaska_native_regional_corporations() | TIGER/Line; cartographic (1:500k) | 2011-2021 |
tribal_block_groups() | TIGER/Line | 2011-2021 |
tribal_census_tracts() | TIGER/Line | 2011-2021 |
tribal_subdivisions_national() | TIGER/Line | 2011-2021 |
landmarks() | TIGER/Line | 2011-2021 |
military() | TIGER/Line | 2011-2021 |
Author: Walkerke
Source Code: https://github.com/walkerke/tigris
License: View license
Routines for reading and manipulating GWAS data in .bed files
Data from genome-wide association studies are often saved as a PLINK binary biallelic genotype table, or .bed file. To be useful, such files should be accompanied by a .fam file, containing metadata on the rows of the table, and a .bim file, containing metadata on the columns. The .fam and .bim files are in tab-separated format.
The table contains the observed allelic type at n single-nucleotide polymorphism (SNP) positions for m individuals.
A SNP corresponds to a nucleotide position on the genome where some degree of variation has been observed in a population, with each individual having one of two possible alleles at that position on each of a pair of chromosomes. The three possible types that can be observed are: homozygous allele 1, coded as 0x00, heterozygous, coded as 0x10, and homozygous allele 2, coded as 0x11. Missing values are coded as 0x01.
A single column - one SNP position over all m individuals - is packed into an array of div(m + 3, 4) bytes (UInt8 values).
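The 2-bit packing described above can be sketched in Python. This is an illustration of the layout, not the package's code; the helper names are hypothetical, and the bit order (first individual in the lowest-order bits) follows the usual PLINK convention:

```python
# Each genotype takes 2 bits, so one byte holds 4 individuals.
# Codes as quoted above: 0b00 homozygous allele 1, 0b01 missing,
# 0b10 heterozygous, 0b11 homozygous allele 2.

def pack_column(genotypes):
    """Pack one SNP column (values 0-3) into (m + 3) // 4 bytes."""
    out = bytearray((len(genotypes) + 3) // 4)
    for i, g in enumerate(genotypes):
        out[i // 4] |= (g & 0b11) << (2 * (i % 4))
    return bytes(out)

def unpack_column(data, m):
    """Recover the m 2-bit genotype codes from a packed column."""
    return [(data[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(m)]
```

For example, a column of 5 genotypes packs into (5 + 3) // 4 = 2 bytes, matching the div(m + 3, 4) formula.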
This package requires Julia v0.7.0 or later, which can be obtained from https://julialang.org/downloads/ or by building Julia from the sources in the JuliaLang/julia repository.
The package has not yet been registered and must be installed using the repository location. Start julia and use the ] key to switch to the package manager REPL:
(v0.7) pkg> add https://github.com/dmbates/BEDFiles.jl.git#master
Updating git-repo `https://github.com/dmbates/BEDFiles.jl.git`
Updating registry at `~/.julia/registries/Uncurated`
Updating git-repo `https://github.com/JuliaRegistries/Uncurated.git`
Resolving package versions...
Updating `~/.julia/environments/v0.7/Project.toml`
[6f44c9a6] + BEDFiles v0.1.0 #master (https://github.com/dmbates/BEDFiles.jl.git)
Updating `~/.julia/environments/v0.7/Manifest.toml`
[6f44c9a6] + BEDFiles v0.1.0 #master (https://github.com/dmbates/BEDFiles.jl.git)
[6fe1bfb0] + OffsetArrays v0.6.0
[10745b16] + Statistics
Use the backspace key to return to the Julia REPL.
Please see the documentation for usage.
Author: dmbates
Source Code: https://github.com/dmbates/BEDFiles.jl
Go bindings to systemd. The project has several packages:
activation - for writing and using socket activation from Go
daemon - for notifying systemd of service status changes
dbus - for starting/stopping/inspecting running services and units
journal - for writing to systemd's logging service, journald
sdjournal - for reading from journald by wrapping its C API
login1 - for integration with the systemd logind API
machine1 - for registering machines/containers with systemd
unit - for (de)serialization and comparison of unit files
The daemon package is an implementation of the sd_notify protocol. It can be used to inform systemd of service start-up completion, watchdog events, and other status changes.
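The sd_notify protocol itself is simple: the service sends datagrams such as READY=1 to the Unix socket named in the NOTIFY_SOCKET environment variable. A minimal Python sketch of that protocol, for illustration only (the Go daemon package is the real implementation):

```python
import os
import socket

def sd_notify(state):
    """Send a state string (e.g. 'READY=1' or 'WATCHDOG=1') to systemd.

    Returns False when not running under systemd (no NOTIFY_SOCKET set).
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):              # abstract socket namespace
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```

Under systemd, a service declaring Type=notify is considered started only once it has sent READY=1.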
The dbus package connects to the systemd D-Bus API and lets you start, stop and introspect systemd units. API documentation is available online.
Create /etc/dbus-1/system-local.conf that looks like this:
<!DOCTYPE busconfig PUBLIC
"-//freedesktop//DTD D-Bus Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
<policy user="root">
<allow eavesdrop="true"/>
<allow eavesdrop="true" send_destination="*"/>
</policy>
</busconfig>
Using the pure-Go journal package you can submit journal entries directly to systemd's journal, taking advantage of features like indexed key/value pairs for each log entry.
The sdjournal package provides read access to the journal by wrapping around journald's native C API; consequently it requires cgo and the journal headers to be available.
The login1 package provides functions to integrate with the systemd logind API.
The machine1 package allows interaction with the systemd machined D-Bus API.
The unit package provides various functions for working with systemd unit files.
An example HTTP server using socket activation can be quickly set up by following this README on a Linux machine running systemd:
https://github.com/coreos/go-systemd/tree/main/examples/activation/httpserver
Author: Coreos
Source Code: https://github.com/coreos/go-systemd
License: Apache-2.0 license
Detect gene fusion directly from fastq files, written in Julia language
Julia is a fresh programming language with C/C++-like performance and Python-like ease of use.
On Ubuntu, you can install Julia with sudo apt-get install julia, and type julia to open the Julia interactive prompt.
# from Julia REPL
Pkg.add("FusionDirect")
using FusionDirect
# the reference folder, which contains chr1.fa, chr2.fa...
# download from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz and gunzip it
ref = "/opt/ref/hg19chr"
# a gene list with their coordinate intervals; see the example bed files in the data folder
bed = Pkg.dir("FusionDirect") * "/data/test_panel.bed"
read1 = "R1.fq.gz"
read2 = "R2.fq.gz"
detect(ref, bed, read1, read2)
copy src/fusion.jl to anywhere you want, run
julia fusion.jl -f <REF_FILE_OR_FOLDER> -b <BED_FILE> -l <READ1_FILE> -r <READ2_FILE> > output.fa
# here is an example
# (hg19chr is downloaded and gunzipped from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz )
julia fusion.jl -f ~/hg19chr -b ~/.julia/v0.5/FusionDirect/data/lung_cancer_hg19.bed -l R1.fq -r R2.fq > output.fa
Can be downloaded from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz
You should run gunzip chromFa.tar.gz, then pass the folder containing the fa files to -f <REF>.
A bed file gives a gene list (chr, start, end, genename); it usually includes the gene panel of your targeted sequencing and other genes you are interested in (like EML4). You can use data/lung_cancer_hg19.bed if you don't know how to make one.
Here is an example:
chr9 133588266 133763062 ABL1
chr14 105235686 105262088 AKT1
chr19 40736224 40791443 AKT2
chr2 29415640 30144432 ALK
chrX 66764465 66950461 AR
chr11 108093211 108239829 ATM
chr3 142168077 142297668 ATR
chr2 111876955 111926024 BCL2L11
chr7 140419127 140624564 BRAF
chr17 41196312 41277500 BRCA1
chr2 42396490 42559688 EML4
In the output, each record name encodes information about the detected fusion:
- The number after > is the number of duplicated reads (including this displayed read), so it is at least 1.
- merged, read1, read2 or crosspair indicates where the fusion is detected: on the merged sequence, on read1, on read2, or on a pair whose read1/read2 are not on the same contig.
- fusion_site indicates at which base the fusion happens. If the fusion is detected on the merged sequence, the number refers to the merged sequence; for crosspair, this value is set to 0.
- conjunct_pos gives the two fusion genes, the intron/exon number and the global fusion coordinates. + or - means forward strand or reverse strand. Note that the fusion is on double-stranded DNA, so both + and - can exist in the same fusion.
- Original reads carry /1 or /2 at the tail of the read name; a merged sequence carries /merged at the tail of its read name.
#Fusion:ALK-EML4 (total: 3, unique: 2)
>2_merged_120_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/1
AATTGAACCTGTGTATTTATCCTCCTTAAGCTAGATTTCCATCATACTTAGAAATACTAATAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATC
>2_merged_120_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/2
GCTGCAAACTAATCAGGAATCGATCGGATTGTAAGGACATTGATTGGACGACATAGCTTTGCATTTACTTAAAATCATGCTTCAATTAAAGACACACCTTCTTTAATCATTTTATTAGTATTTCTAAGTATGATGGAAATCTATCTTAA
>2_merged_120_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/merged
AATTGAACCTGTGTATTTATCCTCCTTAAGCTAGATTTCCATCATACTTAGAAATACTAATAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATCCGATCGATTCCTGATTAGTTTGCAGC
>1_merged_60_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/1
TAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATCCGATCGATTCCTGATTAGTTTGCAGCCATTTGGAATGTCCCCTTTAAATTTAGAAACAG
>1_merged_60_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/2
GTAAAAGTGGCTAGTTTGAATCAAGATGCACTTTCAAATACATTTGTACACAAGCACTATGATTATACTTCCTGTTTCTAAATTTAAAGGGGACATTCCAAATGGCTGCAAACTAATCAGGAATCGATCGGATTGTAAGGACATTGATT
>1_merged_60_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/merged
TAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATCCGATCGATTCCTGATTAGTTTGCAGCCATTTGGAATGTCCCCTTTAAATTTAGAAACAGGAAGTATAATCATAGTGCTTGTGTACAAATGTATTTGAAAGTGCATCTTGATTCAAACTAGCCACTTTTAC
Author: OpenGene
Source Code: https://github.com/OpenGene/FusionDirect.jl
License: View license
Utilities to read/write FASTA format files in Julia.
To install the module, use Julia's package manager: start pkg mode by pressing ] and then enter:
(v1.3) pkg> add FastaIO
Dependencies will be installed automatically. The module can then be loaded like any other Julia module:
julia> using FastaIO
See also the examples in the examples/ directory.
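For context, the FASTA format alternates > header lines with one or more sequence lines. A minimal Python reader of the format (illustrative only, not FastaIO's API):

```python
def read_fasta(lines):
    """Yield (description, sequence) pairs from FASTA-formatted lines."""
    desc, chunks = None, []
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            if desc is not None:
                yield desc, "".join(chunks)   # emit the previous record
            desc, chunks = line[1:], []
        elif line:
            chunks.append(line)
    if desc is not None:
        yield desc, "".join(chunks)           # emit the final record
```

FastaIO provides the Julia equivalent of this (plus writing support and gzip handling); the sketch only shows the structure of the format.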
Author: Carlobaldassi
Source Code: https://github.com/carlobaldassi/FastaIO.jl
License: View license
JLDArchives
JLD is an HDF5-based file format for storing data for the Julia language.
This is a repository of old *.jld files, useful for ensuring that we preserve backwards compatibility.
At present, there is no "runtime" code in this repository: it merely includes *.jld files and some test scripts. At some point if it becomes difficult to maintain backwards compatibility, this might become a home for converting old formats to more modern ones. (For example, by archiving old versions of the JLD format implementation, and then saving all the same variables using the modern format.)
To test backwards compatibility, simply say Pkg.test("JLDArchives") from Julia, or run the runtests.jl script from inside the test folder.
Author: JuliaIO
Source Code: https://github.com/JuliaIO/JLDArchives.jl
License: View license
Laravel Langman
Langman is a language file manager for your artisan console. It helps you search, update, add, and remove translation lines with ease. Taking care of a multilingual interface is not a headache anymore.
Begin by installing the package through Composer. Run the following command in your terminal:
$ composer require themsaid/laravel-langman
Once done, add the following line in your providers array of config/app.php:
Themsaid\Langman\LangmanServiceProvider::class
This package has a single configuration option that points to the resources/lang directory. If you need to change the path, publish the config file:
php artisan vendor:publish --provider="Themsaid\Langman\LangmanServiceProvider"
php artisan langman:show users
You get:
+---------+---------------+-------------+
| key | en | nl |
+---------+---------------+-------------+
| name | name | naam |
| job | job | baan |
+---------+---------------+-------------+
php artisan langman:show users.name
Brings only the translation of the name key in all languages.
php artisan langman:show users.name.first
Brings the translation of a nested key.
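Nested keys like users.name.first resolve through nested arrays in the language files. The lookup can be illustrated in Python (a hypothetical helper; Laravel does the equivalent with dot notation over nested PHP arrays):

```python
def get_nested(translations, dotted_key):
    """Resolve a dotted key like 'users.name.first' in a nested dict."""
    node = translations
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return None                    # key missing at this level
        node = node[part]
    return node
```

Here the first segment names the language file (users) and the remaining segments walk the nested translation array inside it.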
php artisan langman:show package::users.name
Brings the translation of a vendor package language file.
php artisan langman:show users --lang=en,it
Brings the translation of only the "en" and "it" languages.
php artisan langman:show users.nam -c
Brings only the translation lines with keys matching the given key via close match, so searching for nam brings values for keys like (name, username, branch_name_required, etc...).
In the table returned by this command, if a translation is missing it'll be marked in red.
php artisan langman:find 'log in first'
You get a table of language lines where any of the values matches the given phrase by close match.
php artisan langman:sync
This command will look into all files in resources/views and app, and find all translation keys that are not covered in your translation files; after that, it appends those keys to the files with a value equal to an empty string.
php artisan langman:missing
It'll collect all the keys that are missing in any of the languages or have values equal to an empty string, prompt you to give a translation for each, and finally save the given values to the files.
php artisan langman:trans users.name
php artisan langman:trans users.name.first
php artisan langman:trans users.name --lang=en
php artisan langman:trans package::users.name
Using this command you may set a language key (plain or nested) for a given group, you may also specify which language you wish to set leaving the other languages as is.
This command will add a new key if it does not exist, and update the key if it is already there.
php artisan langman:remove users.name
php artisan langman:remove package::users.name
It'll remove that key from all language files.
php artisan langman:rename users.name full_name
This will rename users.name to users.full_name; the console will output a list of files where the key used to exist.
langman:sync, langman:missing, langman:trans, and langman:remove will update your language files by rewriting them completely, meaning that any comments or special styling will be removed, so I recommend you back up your files.
If you want a web interface to manage your language files instead, I recommend Laravel 5 Translation Manager by Barry vd. Heuvel.
Author: Themsaid
Source Code: https://github.com/themsaid/laravel-langman
License: MIT license
using SVMLightLoader
# load the whole file
vectors, labels = load_svmlight_file("test.txt")
# the vector dimension can be specified
ndim = 20
vectors, labels = load_svmlight_file("test.txt", ndim)
println(size(vectors, 1)) # 20
# iterate the file line by line
for (vector, label) in SVMLightFile("test.txt")
dosomething(vector, label)
end
for (vector, label) in SVMLightFile("test.txt", ndim)
dosomething(vector, label)
end
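For reference, the SVMLight format stores one sample per line as a label followed by index:value pairs (indices are 1-based, omitted entries are zero). A minimal Python parser of one such line, for illustration only (not this package's code):

```python
def parse_svmlight_line(line, ndim=None):
    """Parse 'label idx:val idx:val ...' into (dense_vector, label).

    ndim fixes the vector length; otherwise the largest index is used.
    """
    fields = line.split("#", 1)[0].split()   # drop trailing comments
    label = float(fields[0])
    pairs = [f.split(":") for f in fields[1:]]
    size = ndim or (max(int(i) for i, _ in pairs) if pairs else 0)
    vector = [0.0] * size
    for i, v in pairs:
        vector[int(i) - 1] = float(v)        # convert 1-based to 0-based
    return vector, label
```

Passing ndim, like the ndim argument shown above, is useful when different lines of a file touch different maximum indices but all vectors must share one dimension.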
Author: IshitaTakeshi
Source Code: https://github.com/IshitaTakeshi/SVMLightLoader.jl
License: View license
This package allows you to easily cache responses as static files on disk for lightning fast page loads.
While static site builders such as Jekyll and Jigsaw are extremely popular these days, dynamic PHP sites still offer a lot of value even for a site that is mostly static. A proper PHP site allows you to easily add dynamic functionality wherever needed, and also means that there's no build step involved in pushing updates to the site.
That said, for truly static pages on a site there really is no reason to have to boot up a full PHP app just to serve a static page. Serving a simple HTML page from disk is infinitely faster and less taxing on the server.
The solution? Full page caching.
Using the middleware included in this package, you can selectively cache the response to disk for any given request. Subsequent calls to the same page will be served directly as a static HTML page!
Install the page-cache package with composer:
$ composer require silber/page-cache
Note: If you're using Laravel 5.5+, the service provider will be registered automatically. You can simply skip this step entirely.
Open config/app.php and add a new item to the providers array:
Silber\PageCache\LaravelServiceProvider::class,
Open app/Http/Kernel.php and add a new item to the web middleware group:
protected $middlewareGroups = [
'web' => [
\Silber\PageCache\Middleware\CacheResponse::class,
/* ... keep the existing middleware here */
],
];
The middleware is smart enough to only cache responses with a 200 HTTP status code, and only for GET requests.
If you want to selectively cache only specific requests to your site, you should instead add a new mapping to the routeMiddleware array:
protected $routeMiddleware = [
'page-cache' => \Silber\PageCache\Middleware\CacheResponse::class,
/* ... keep the existing mappings here */
];
Once registered, you can then use this middleware on individual routes.
In order to serve the static files directly once they've been cached, you need to properly configure your web server to check for those static files.
For nginx:
Update your location block's try_files directive to include a check in the page-cache directory:
location = / {
try_files /page-cache/pc__index__pc.html /index.php?$query_string;
}
location / {
try_files $uri $uri/ /page-cache/$uri.html /page-cache/$uri.json /page-cache/$uri.xml /index.php?$query_string;
}
For apache:
Open public/.htaccess and add the following before the block labeled Handle Front Controller:
# Serve Cached Page If Available...
RewriteCond %{REQUEST_URI} ^/?$
RewriteCond %{DOCUMENT_ROOT}/page-cache/pc__index__pc.html -f
RewriteRule .? page-cache/pc__index__pc.html [L]
RewriteCond %{DOCUMENT_ROOT}/page-cache%{REQUEST_URI}.html -f
RewriteRule . page-cache%{REQUEST_URI}.html [L]
RewriteCond %{DOCUMENT_ROOT}/page-cache%{REQUEST_URI}.json -f
RewriteRule . page-cache%{REQUEST_URI}.json [L]
RewriteCond %{DOCUMENT_ROOT}/page-cache%{REQUEST_URI}.xml -f
RewriteRule . page-cache%{REQUEST_URI}.xml [L]
To make sure you don't commit your locally cached files to your git repository, add this line to your .gitignore file:
/public/page-cache
Note: If you've added the middleware to the global web group, then all successful GET requests will automatically be cached; there is no need to put the middleware again directly on the route. If you instead registered it as a route middleware, you should use the middleware on whichever routes you want to be cached.
To cache the response of a given request, use the page-cache middleware:
Route::middleware('page-cache')->get('posts/{slug}', 'PostController@show');
Every post will now be cached to a file under the public/page-cache directory, closely matching the URL structure of the request. All subsequent requests for this post will be served directly from disk, never even hitting your app!
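The URL-to-file mapping that the web-server rules above check can be sketched like this (a Python illustration with a hypothetical helper name, mirroring the nginx/apache configuration shown earlier, including the special pc__index__pc name for the root URL):

```python
def cache_file_for(uri, extension="html"):
    """Map a request URI to its static cache file under public/page-cache."""
    path = uri.strip("/")
    if not path:
        # The root URL has no name of its own, so a sentinel file is used.
        return "page-cache/pc__index__pc.html"
    return f"page-cache/{path}.{extension}"
```

Because the cache path simply mirrors the request path, the web server can test for the file's existence with try_files or RewriteCond without consulting the application at all.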
Since the responses are cached to disk as static files, any updates to those pages in your app will not be reflected on your site. To update pages on your site, you should clear the cache with the following command:
php artisan page-cache:clear
As a rule of thumb, it's good practice to add this to your deployment script. That way, whenever you push an update to your site the page cache will automatically be cleared.
If you're using Forge's Quick Deploy feature, you should add this line to the end of your Deploy Script. This'll ensure that the cache is cleared whenever you push an update to your site.
You may optionally pass a URL slug to the command, to only delete the cache for a specific page:
php artisan page-cache:clear {slug}
To clear everything under a given path, use the --recursive flag:
php artisan page-cache:clear {slug} --recursive
For example, imagine you have a category resource under /categories, with the following cached pages:
/categories/1
/categories/2
/categories/5
To clear the cache for all categories, use --recursive with the categories path:
php artisan page-cache:clear categories --recursive
By default, all GET requests with a 200 HTTP response code are cached. If you want to change that, create your own middleware that extends the package's base middleware, and override the shouldCache method with your own logic.
Run the make:middleware Artisan command to create your middleware file:
php artisan make:middleware CacheResponse
Replace the contents of the file at app/Http/Middleware/CacheResponse.php with this:
<?php
namespace App\Http\Middleware;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Silber\PageCache\Middleware\CacheResponse as BaseCacheResponse;
class CacheResponse extends BaseCacheResponse
{
protected function shouldCache(Request $request, Response $response)
{
// In this example, we don't ever want to cache pages if the
// URL contains a query string. So we first check for it,
// then defer back up to the parent's default checks.
if ($request->getQueryString()) {
return false;
}
return parent::shouldCache($request, $response);
}
}
Finally, update the middleware references in your app/Http/Kernel.php file to point to your own middleware.
Author: JosephSilber
Source Code: https://github.com/JosephSilber/page-cache
License: MIT license
Returns the difference between two JSON files.
Add this line to your application's Gemfile:
gem 'json-compare'
And then execute:
$ bundle
Or install it yourself as:
$ gem install json-compare
require 'yajl'
require 'json-compare'
json1 = File.new('spec/fixtures/twitter-search.json', 'r')
json2 = File.new('spec/fixtures/twitter-search2.json', 'r')
old, new = Yajl::Parser.parse(json1), Yajl::Parser.parse(json2)
result = JsonCompare.get_diff(old, new)
If you want to exclude some keys from the comparison, use the exclusion param:
exclusion = ["from_user", "to_user_id"]
result = JsonCompare.get_diff(old, new, exclusion)
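The idea behind get_diff can be sketched as a recursive comparison. This is a Python illustration of the approach, not the gem's actual implementation, and the result keys (remove, append, update) are chosen for the sketch:

```python
def json_diff(old, new, exclusion=()):
    """Return a dict describing keys that were removed, added, or changed."""
    if isinstance(old, dict) and isinstance(new, dict):
        diff = {}
        for key in set(old) | set(new):
            if key in exclusion:
                continue                     # skip excluded keys entirely
            if key not in new:
                diff.setdefault("remove", {})[key] = old[key]
            elif key not in old:
                diff.setdefault("append", {})[key] = new[key]
            else:
                sub = json_diff(old[key], new[key], exclusion)
                if sub:
                    diff.setdefault("update", {})[key] = sub
        return diff
    # Leaves (and arrays, for simplicity) compare by equality.
    return {"old": old, "new": new} if old != new else {}
```

An empty dict means the two documents are equal once excluded keys are ignored.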
To contribute:
1. Fork it
2. Create your feature branch (git checkout -b my-new-feature)
3. Commit your changes (git commit -am 'Added some feature')
4. Push to the branch (git push origin my-new-feature)
5. Create a new Pull Request
Author: a2design-inc
Source Code: https://github.com/a2design-inc/json-compare
License: MIT license
A plugin to render simple calendars.
This branch is for CakePHP 4.2+. For details see version map.
Features:
- year/month URL pieces (copy-paste and link/redirect friendly).
- ics calendar file output.
To install, use composer:
composer require dereuromark/cakephp-calendar
Then make sure the plugin is loaded in bootstrap:
bin/cake plugin load Calendar
See the demo Calendar example at the sandbox.
See Documentation.
Author: Dereuromark
Source Code: https://github.com/dereuromark/cakephp-calendar
License: MIT license