StatFiles.jl: FileIO.jl integration for Stata, SPSS, and SAS Files

StatFiles  

Overview

This package provides load support for Stata, SPSS, and SAS files under the FileIO.jl package.

Installation

Use Pkg.add("StatFiles") in Julia to install StatFiles and its dependencies.

Usage

Load a Stata, SPSS, or SAS file

To read a Stata, SPSS, or SAS file into a DataFrame, use the following Julia code:

using StatFiles, DataFrames

df = DataFrame(load("data.dta"))

The call to load returns a struct that implements the IterableTables.jl interface, so it can be passed to any function that can handle iterable tables, i.e. all the sinks in IterableTables.jl. Here are some examples of materializing a Stata, SPSS, or SAS file into data structures other than a DataFrame:

using StatFiles, DataTables, IndexedTables, TimeSeries, Temporal, Gadfly

# Load into a DataTable
dt = DataTable(load("data.dta"))

# Load into an IndexedTable
it = IndexedTable(load("data.dta"))

# Load into a TimeArray
ta = TimeArray(load("data.dta"))

# Load into a TS
ts = TS(load("data.dta"))

# Plot directly with Gadfly
plot(load("data.dta"), x=:a, y=:b, Geom.line)

Using the pipe syntax

load also supports the pipe syntax. For example, to load a Stata, SPSS, or SAS file into a DataFrame, one can use the following code:

using StatFiles, DataFrames

df = load("data.dta") |> DataFrame

The pipe syntax is especially useful when combining it with Query.jl queries: for example, one can easily load a Stata, SPSS, or SAS file, pipe it into a query, and then pipe it to the save function to store the results in a new file.
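As a minimal sketch of that workflow (the `age` column is hypothetical, and FeatherFiles.jl is assumed to provide a `save` sink; both are assumptions for illustration):

```julia
using StatFiles, FeatherFiles, Query

# Load the Stata file, keep rows where the hypothetical `age` column
# exceeds 30, and store the result in a new feather file.
load("data.dta") |> @filter(_.age > 30) |> save("filtered.feather")
```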

Download Details:

Author: Queryverse
Source Code: https://github.com/queryverse/StatFiles.jl 
License: View license


BedgraphFiles.jl: FileIO.jl integration for BedGraph Files

BedgraphFiles.jl  

This project follows Semantic Versioning (SemVer) and uses the git-flow branching model.

Overview

This package provides load and save support for bedGraph files under the FileIO.jl package, and also implements the IterableTables.jl interface for easy conversion between tabular data structures.

Installation

You can install BedgraphFiles from the Julia REPL. Press ] to enter pkg mode, then enter the following:

add BedgraphFiles

If you are interested in the cutting edge of the development, please check out the develop branch to try new features before release.

Usage

Loading bedGraph files

To load a bedGraph file into a Vector{Bedgraph.Record}, use the following Julia code:

using FileIO, BedgraphFiles, Bedgraph

records = Vector{Bedgraph.Record}(load("data.bedgraph"))
records = collect(Bedgraph.Record, load("data.bedgraph"))

Saving bedGraph files

Note: saving on top of an existing file will overwrite metadata/header information with a minimal working header.

The following example saves a Vector{Bedgraph.Record} to a bedGraph file:

using FileIO, BedgraphFiles, Bedgraph

records = [Bedgraph.Record("chr", i, i + 99, rand()) for i in 1:100:1000]

save("output.bedgraph", records)

IterableTables

The call to load returns a struct that adheres to the IterableTables.jl interface and can be passed to any function that also implements the interface, i.e. all the sinks in IterableTables.jl.

The following code shows an example of loading a bedGraph file into a DataFrame:

using FileIO, BedgraphFiles, DataFrames

df = DataFrame(load("data.bedgraph"))

Here are some more examples of materialising a bedGraph file into other data structures:

using FileIO, BedgraphFiles, DataTables, IndexedTables, Gadfly

# Load into a DataTable
dt = DataTable(load("data.bedgraph"))

# Load into an IndexedTable
it = IndexedTable(load("data.bedgraph"))

# Plot directly with Gadfly
plot(load("data.bedgraph"), xmin=:leftposition, xmax=:rightposition, y=:value, Geom.bar)

The following code saves any compatible source to a bedGraph file:

using FileIO, BedgraphFiles

it = getiterator(data)  # `data` is any iterable table source

save("output.bedgraph", it)

Using the pipe syntax

Both load and save also support the pipe syntax. For example, to load a bedGraph file into a DataFrame, one can use the following code:

using FileIO, BedgraphFiles, DataFrames

df = load("data.bedgraph") |> DataFrame

To save an iterable table, one can use the following form:

using FileIO, BedgraphFiles, DataFrames

df = # Acquire a DataFrame somehow.

df |> save("output.bedgraph")

The save method returns the data it was provided (or the materialized Vector{Bedgraph.Record}). This is useful when periodically saving your work during a sequence of operations.

records = some sequence of operations |> save("output.bedgraph")

The pipe syntax is especially useful when combining it with Query.jl queries. For example, one can easily load a bedGraph file, pipe its data into a query, and then store the query result by piping it to the save function.

using FileIO, BedgraphFiles, Query
load("data.bedgraph") |> @filter(_.chrom == "chr19") |> save("data-chr19.bedgraph")

Acknowledgements

This package is largely -- if not completely -- inspired by the work of David Anthoff. Other influences are from the BioJulia community.

Download Details:

Author: CiaranOMara
Source Code: https://github.com/CiaranOMara/BedgraphFiles.jl 
License: MIT license


ParquetFiles.jl: FileIO.jl integration for Parquet files

ParquetFiles

Overview

This package provides load support for Parquet files under the FileIO.jl package.

Installation

Use ] add ParquetFiles in Julia to install ParquetFiles and its dependencies.

Usage

Load a Parquet file

To read a Parquet file into a DataFrame, use the following Julia code:

using ParquetFiles, DataFrames

df = DataFrame(load("data.parquet"))

The call to load returns a struct that implements the IterableTables.jl interface, so it can be passed to any function that can handle iterable tables, i.e. all the sinks in IterableTables.jl. Here are some examples of materializing a Parquet file into data structures other than a DataFrame:

using ParquetFiles, IndexedTables, TimeSeries, Temporal, VegaLite

# Load into an IndexedTable
it = IndexedTable(load("data.parquet"))

# Load into a TimeArray
ta = TimeArray(load("data.parquet"))

# Load into a TS
ts = TS(load("data.parquet"))

# Plot directly with VegaLite
@vlplot(:point, data=load("data.parquet"), x=:a, y=:b)

Using the pipe syntax

load also supports the pipe syntax. For example, to load a Parquet file into a DataFrame, one can use the following code:

using ParquetFiles, DataFrames

df = load("data.parquet") |> DataFrame

The pipe syntax is especially useful when combining it with Query.jl queries: for example, one can easily load a Parquet file, pipe it into a query, and then pipe it to the save function to store the results in a new file.
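As a minimal sketch (the columns `a` and `b` are hypothetical, and FeatherFiles.jl is assumed for the `save` step, since ParquetFiles itself only supports loading):

```julia
using ParquetFiles, FeatherFiles, Query

# Load the Parquet file, keep rows where the hypothetical column `a`
# is positive, project two columns, and save to a new feather file.
load("data.parquet") |> @filter(_.a > 0) |> @map({_.a, _.b}) |> save("result.feather")
```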

Download Details:

Author: Queryverse
Source Code: https://github.com/queryverse/ParquetFiles.jl 
License: View license


FeatherFiles.jl: FileIO.jl integration for Feather Files

FeatherFiles   

Overview

This package provides load and save support for Feather files under the FileIO.jl package.

Installation

Use Pkg.add("FeatherFiles") in Julia to install FeatherFiles and its dependencies.

Usage

Load a feather file

To read a feather file into a DataFrame, use the following Julia code:

using FeatherFiles, DataFrames

df = DataFrame(load("data.feather"))

The call to load returns a struct that implements the IterableTables.jl interface, so it can be passed to any function that can handle iterable tables, i.e. all the sinks in IterableTables.jl. Here are some examples of materializing a feather file into data structures other than a DataFrame:

using FeatherFiles, DataTables, IndexedTables, TimeSeries, Temporal, Gadfly

# Load into a DataTable
dt = DataTable(load("data.feather"))

# Load into an IndexedTable
it = IndexedTable(load("data.feather"))

# Load into a TimeArray
ta = TimeArray(load("data.feather"))

# Load into a TS
ts = TS(load("data.feather"))

# Plot directly with Gadfly
plot(load("data.feather"), x=:a, y=:b, Geom.line)

Save a feather file

The following code saves any iterable table as a feather file:

using FeatherFiles

save("output.feather", it)

This will work as long as it is one of the types supported as sources in IterableTables.jl.

Using the pipe syntax

Both load and save also support the pipe syntax. For example, to load a feather file into a DataFrame, one can use the following code:

using FeatherFiles, DataFrames

df = load("data.feather") |> DataFrame

To save an iterable table, one can use the following form:

using FeatherFiles, DataFrames

df = # Acquire a DataFrame somehow

df |> save("output.feather")

The pipe syntax is especially useful when combining it with Query.jl queries: for example, one can easily load a feather file, pipe it into a query, and then pipe it to the save function to store the results in a new file.
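As a minimal sketch (the `score` column is a hypothetical example):

```julia
using FeatherFiles, Query

# Load the feather file, keep rows where the hypothetical `score`
# column is at least 0.5, and save the result to a new feather file.
load("data.feather") |> @filter(_.score >= 0.5) |> save("high_scores.feather")
```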

Download Details:

Author: Queryverse
Source Code: https://github.com/queryverse/FeatherFiles.jl 
License: View license


Tigris: Download and Use Census TIGER/Line Shapefiles in R

tigris  

tigris is an R package that allows users to directly download and use TIGER/Line shapefiles (https://www.census.gov/geographies/mapping-files/time-series/geo/tiger-line-file.html) from the US Census Bureau.

To install the package from CRAN, issue the following command in R:

install.packages('tigris')

Or, get the development version from GitHub:

devtools::install_github('walkerke/tigris')

tigris functions return simple features objects with a default year of 2020. To get started, choose a function from the table below and use it with a state and/or county if required. You'll get back an sf object for use in your mapping and spatial analysis projects:

library(tigris)
library(ggplot2)

manhattan_roads <- roads("NY", "New York")

ggplot(manhattan_roads) + 
  geom_sf() + 
  theme_void()

tigris only returns feature geometries for US Census data which default to the coordinate reference system NAD 1983 (EPSG: 4269). For US Census demographic data (optionally pre-joined to tigris geometries), try the tidycensus package. For help deciding on an appropriate coordinate reference system for your project, take a look at the crsuggest package.

To learn more about how to use tigris, read Chapter 5 of the book Analyzing US Census Data: Methods, Maps, and Models in R.

Available datasets:

Please note: cartographic boundary files in tigris are not available for 2011 and 2012.

Function | Datasets available | Years available
nation() | cartographic (1:5m; 1:20m) | 2013-2021
divisions() | cartographic (1:500k; 1:5m; 1:20m) | 2013-2021
regions() | cartographic (1:500k; 1:5m; 1:20m) | 2013-2021
states() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 1990, 2000, 2010-2021
counties() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 1990, 2000, 2010-2021
tracts() | TIGER/Line; cartographic (1:500k) | 1990, 2000, 2010-2021
block_groups() | TIGER/Line; cartographic (1:500k) | 1990, 2000, 2010-2021
blocks() | TIGER/Line | 2000, 2010-2021
places() | TIGER/Line; cartographic (1:500k) | 2011-2021
pumas() | TIGER/Line; cartographic (1:500k) | 2012-2021
school_districts() | TIGER/Line; cartographic | 2011-2021
zctas() | TIGER/Line; cartographic (1:500k) | 2000, 2010, 2012-2021
congressional_districts() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 2011-2021
state_legislative_districts() | TIGER/Line; cartographic (1:500k) | 2011-2021
voting_districts() | TIGER/Line | 2012
area_water() | TIGER/Line | 2011-2021
linear_water() | TIGER/Line | 2011-2021
coastline() | TIGER/Line | 2013-2021
core_based_statistical_areas() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 2011-2021
combined_statistical_areas() | TIGER/Line; cartographic (1:500k; 1:5m; 1:20m) | 2011-2021
metro_divisions() | TIGER/Line | 2011-2021
new_england() | TIGER/Line; cartographic (1:500k) | 2011-2021
county_subdivisions() | TIGER/Line; cartographic (1:500k) | 2010-2021
urban_areas() | TIGER/Line; cartographic (1:500k) | 2012-2021
primary_roads() | TIGER/Line | 2011-2021
primary_secondary_roads() | TIGER/Line | 2011-2021
roads() | TIGER/Line | 2011-2021
rails() | TIGER/Line | 2011-2021
native_areas() | TIGER/Line; cartographic (1:500k) | 2011-2021
alaska_native_regional_corporations() | TIGER/Line; cartographic (1:500k) | 2011-2021
tribal_block_groups() | TIGER/Line | 2011-2021
tribal_census_tracts() | TIGER/Line | 2011-2021
tribal_subdivisions_national() | TIGER/Line | 2011-2021
landmarks() | TIGER/Line | 2011-2021
military() | TIGER/Line | 2011-2021

Download Details:

Author: Walkerke
Source Code: https://github.com/walkerke/tigris 
License: View license


BEDFiles.jl: Routines for Reading and Manipulating GWAS Data in .bed Files

BEDFiles.jl

Routines for reading and manipulating GWAS data in .bed files

Data from Genome-wide association studies are often saved as a PLINK binary biallelic genotype table or .bed file. To be useful, such files should be accompanied by a .fam file, containing metadata on the rows of the table, and a .bim file, containing metadata on the columns. The .fam and .bim files are in tab-separated format.

The table contains the observed allelic type at n single-nucleotide polymorphism (SNP) positions for m individuals.

A SNP corresponds to a nucleotide position on the genome where some degree of variation has been observed in a population, with each individual having one of two possible alleles at that position on each of a pair of chromosomes. The three types that can be observed are: homozygous allele 1, coded as 0b00; heterozygous, coded as 0b10; and homozygous allele 2, coded as 0b11. Missing values are coded as 0b01.

A single column - one SNP position over all m individuals - is packed into an array of div(m + 3, 4) bytes (UInt8 values).
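The packing scheme can be sketched in a few lines of Julia (a standalone illustration of the byte layout, not BEDFiles.jl's actual API): each byte holds four 2-bit genotype codes, with the first individual in the least-significant pair of bits.

```julia
# Unpack one packed SNP column (div(m + 3, 4) bytes) into m two-bit codes.
function unpack_column(bytes::Vector{UInt8}, m::Int)
    codes = Vector{UInt8}(undef, m)
    for i in 1:m
        byte = bytes[div(i - 1, 4) + 1]   # which byte holds individual i
        shift = 2 * rem(i - 1, 4)         # bit offset within that byte
        codes[i] = (byte >> shift) & 0b11 # extract the 2-bit code
    end
    return codes
end
```

For example, the byte 0xD8 (0b11011000) unpacks to the codes 0b00, 0b10, 0b01, 0b11 for four individuals.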

Installation

This package requires Julia v0.7.0 or later, which can be obtained from https://julialang.org/downloads/ or by building Julia from the sources in the JuliaLang/julia repository.

The package has not yet been registered and must be installed using the repository location. Start julia and press the ] key to switch to the package manager REPL:

(v0.7) pkg> add https://github.com/dmbates/BEDFiles.jl.git#master
  Updating git-repo `https://github.com/dmbates/BEDFiles.jl.git`
  Updating registry at `~/.julia/registries/Uncurated`
  Updating git-repo `https://github.com/JuliaRegistries/Uncurated.git`
 Resolving package versions...
  Updating `~/.julia/environments/v0.7/Project.toml`
  [6f44c9a6] + BEDFiles v0.1.0 #master (https://github.com/dmbates/BEDFiles.jl.git)
  Updating `~/.julia/environments/v0.7/Manifest.toml`
  [6f44c9a6] + BEDFiles v0.1.0 #master (https://github.com/dmbates/BEDFiles.jl.git)
  [6fe1bfb0] + OffsetArrays v0.6.0
  [10745b16] + Statistics 

Use the backspace key to return to the Julia REPL.

Please see the documentation for usage.

Download Details:

Author: dmbates
Source Code: https://github.com/dmbates/BEDFiles.jl 


Go-systemd: Go Bindings to Systemd Socket Activation, Journal, D-Bus

go-systemd 

Go bindings to systemd. The project has several packages:

  • activation - for writing and using socket activation from Go
  • daemon - for notifying systemd of service status changes
  • dbus - for starting/stopping/inspecting running services and units
  • journal - for writing to systemd's logging service, journald
  • sdjournal - for reading from journald by wrapping its C API
  • login1 - for integration with the systemd logind API
  • machine1 - for registering machines/containers with systemd
  • unit - for (de)serialization and comparison of unit files

systemd Service Notification

The daemon package is an implementation of the sd_notify protocol. It can be used to inform systemd of service start-up completion, watchdog events, and other status changes.

D-Bus

The dbus package connects to the systemd D-Bus API and lets you start, stop and introspect systemd units. API documentation is available online.

Debugging

Create /etc/dbus-1/system-local.conf that looks like this:

<!DOCTYPE busconfig PUBLIC
"-//freedesktop//DTD D-Bus Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
    <policy user="root">
        <allow eavesdrop="true"/>
        <allow eavesdrop="true" send_destination="*"/>
    </policy>
</busconfig>

Journal

Writing to the Journal

Using the pure-Go journal package you can submit journal entries directly to systemd's journal, taking advantage of features like indexed key/value pairs for each log entry.

Reading from the Journal

The sdjournal package provides read access to the journal by wrapping around journald's native C API; consequently it requires cgo and the journal headers to be available.

logind

The login1 package provides functions to integrate with the systemd logind API.

machined

The machine1 package allows interaction with the systemd machined D-Bus API.

Units

The unit package provides various functions for working with systemd unit files.

Socket Activation

An example HTTP server using socket activation can be quickly set up by following this README on a Linux machine running systemd:

https://github.com/coreos/go-systemd/tree/main/examples/activation/httpserver

Download Details:

Author: Coreos
Source Code: https://github.com/coreos/go-systemd 
License: Apache-2.0 license


FusionDirect.jl: Detect Gene Fusion Directly From Raw Fastq Files

FusionDirect

Detect gene fusions directly from fastq files, written in the Julia language

Features

  • no alignment needed; it just reads fastq files from paired-end sequencing
  • outputs the fusion pattern (gene and position), along with the reads supporting the fusion
  • ultra sensitive compared to delly, factera, and other tools
  • the output file is a standard fasta file, which can be used to verify fusions with blast or other tools
  • very suitable for detecting fusions from cancer target sequencing data (exome seq or panel seq)

Julia

Julia is a young programming language with C/C++-like performance and Python-like ease of use.
On Ubuntu, you can install Julia with sudo apt-get install julia, then type julia to open the Julia interactive prompt.

Install FusionDirect

# from Julia REPL
Pkg.add("FusionDirect")

Use FusionDirect as a package

using FusionDirect

# the reference folder, which contains chr1.fa, chr2.fa, ...
# download from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz and gunzip it
ref = "/opt/ref/hg19chr"
# a gene list with coordinate intervals; see the example bed files in the data folder
bed = Pkg.dir("FusionDirect") * "/data/test_panel.bed"
read1 = "R1.fq.gz"
read2 = "R2.fq.gz"
detect(ref, bed, read1, read2)

Use FusionDirect as a standalone script from commandline

Copy src/fusion.jl to anywhere you want, then run:

julia fusion.jl -f <REF_FILE_OR_FOLDER> -b <BED_FILE> -l <READ1_FILE> -r <READ2_FILE> > output.fa
# an example
# (hg19chr is downloaded and gunzipped from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz )
julia fusion.jl -f ~/hg19chr -b ~/.julia/v0.5/FusionDirect/data/lung_cancer_hg19.bed -l R1.fq -r R2.fq > output.fa

Get the reference

The reference can be downloaded from http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/chromFa.tar.gz.
Run gunzip chromFa.tar.gz, then pass the folder containing the .fa files to -f <REF>.

Prepare the bed

A bed file gives the gene list (chr, start, end, genename); it usually includes the gene panel of your target sequencing plus any other genes of interest (like EML4). You can use data/lung_cancer_hg19.bed if you don't know how to make one.
For example:

chr9    133588266   133763062   ABL1
chr14   105235686   105262088   AKT1
chr19   40736224    40791443    AKT2
chr2    29415640    30144432    ALK
chrX    66764465    66950461    AR
chr11   108093211   108239829   ATM
chr3    142168077   142297668   ATR
chr2    111876955   111926024   BCL2L11
chr7    140419127   140624564   BRAF
chr17   41196312    41277500    BRCA1
chr2    42396490    42559688    EML4

Understand the output

  • fasta: The output is a standard fasta file, which can be used directly to double-check these fusions with blast (http://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastn&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome)
  • duplication number: the first number after > is the number of duplicated reads (including the displayed read), so it is at least 1.
  • fusion_site: the word that follows can be merged, read1, read2 or crosspair, meaning the fusion was detected on the merged sequence, on read1, on read2, or that read1/read2 are not on the same contig.
  • conjunct_pos: the number after fusion_site, giving the base at which the fusion happens. If fusion_site is merged, the number refers to the merged sequence. If fusion_site is crosspair, this value is set to 0.
  • fusion_genes: following conjunct_pos, the two fusion genes, their intron/exon numbers, and the global fusion coordinates are given. + or - means forward or reverse strand. Note that the fusion is on double-stranded DNA, so both + and - can occur in the same fusion.
  • original_reads: the original reads are given for read1/read2. See /1 or /2 at the tail of the read name.
  • merged_sequence: if the pair of reads can be merged automatically, fusion detection is done on the merged sequence. In this case, the merged sequence is given with /merged at the tail of its read name.

#Fusion:ALK-EML4 (total: 3, unique: 2)
>2_merged_120_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/1
AATTGAACCTGTGTATTTATCCTCCTTAAGCTAGATTTCCATCATACTTAGAAATACTAATAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATC
>2_merged_120_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/2
GCTGCAAACTAATCAGGAATCGATCGGATTGTAAGGACATTGATTGGACGACATAGCTTTGCATTTACTTAAAATCATGCTTCAATTAAAGACACACCTTCTTTAATCATTTTATTAGTATTTCTAAGTATGATGGAAATCTATCTTAA
>2_merged_120_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/merged
AATTGAACCTGTGTATTTATCCTCCTTAAGCTAGATTTCCATCATACTTAGAAATACTAATAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATCCGATCGATTCCTGATTAGTTTGCAGC
>1_merged_60_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/1
TAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATCCGATCGATTCCTGATTAGTTTGCAGCCATTTGGAATGTCCCCTTTAAATTTAGAAACAG
>1_merged_60_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/2
GTAAAAGTGGCTAGTTTGAATCAAGATGCACTTTCAAATACATTTGTACACAAGCACTATGATTATACTTCCTGTTTCTAAATTTAAAGGGGACATTCCAAATGGCTGCAAACTAATCAGGAATCGATCGGATTGTAAGGACATTGATT
>1_merged_60_ALK:intron:19|+chr2:29446598_EML4:exon:21|-chr2:42553364/merged
TAAAATGATTAAAGAAGGTGTGTCTTTAATTGAAGCATGATTTAAAGTAAATGCAAAGCTATGTCGTCCAATCAATGTCCTTACAATCCGATCGATTCCTGATTAGTTTGCAGCCATTTGGAATGTCCCCTTTAAATTTAGAAACAGGAAGTATAATCATAGTGCTTGTGTACAAATGTATTTGAAAGTGCATCTTGATTCAAACTAGCCACTTTTAC

Download Details:

Author: OpenGene
Source Code: https://github.com/OpenGene/FusionDirect.jl 
License: View license


FastaIO.jl: Utilities to Read/write FASTA format Files in Julia

FastaIO.jl

Utilities to read/write FASTA format files in Julia.

Installation and usage

Installation

To install the module, use Julia's package manager: start pkg mode by pressing ] and then enter:

(v1.3) pkg> add FastaIO

Dependencies will be installed automatically. The module can then be loaded like any other Julia module:

julia> using FastaIO

Documentation

  • STABLE: the most recently tagged version of the documentation.
  • DEV: the in-development version of the documentation.

See also the examples in the examples/ directory.

Download Details:

Author: Carlobaldassi
Source Code: https://github.com/carlobaldassi/FastaIO.jl 
License: View license


JLDArchives.jl: A Repository Of Julia *.jld Files

JLDArchives

JLD is an HDF5-based file format for storing data for the Julia language.

This is a repository of old *.jld files, useful for ensuring that we preserve backwards compatibility.

At present, there is no "runtime" code in this repository: it merely includes *.jld files and some test scripts. At some point if it becomes difficult to maintain backwards compatibility, this might become a home for converting old formats to more modern ones. (For example, by archiving old versions of the JLD format implementation, and then saving all the same variables using the modern format.)

To test backwards compatibility, simply run Pkg.test("JLDArchives") from Julia, or run the runtests.jl script from inside the test folder.

Download Details:

Author: JuliaIO
Source Code: https://github.com/JuliaIO/JLDArchives.jl 
License: View license


Laravel-langman: Language files manager in your artisan console

Laravel Langman

Langman is a language file manager for your artisan console; it helps you search, update, add, and remove translation lines with ease. Taking care of a multilingual interface is not a headache anymore.

Installation

Begin by installing the package through Composer. Run the following command in your terminal:

$ composer require themsaid/laravel-langman

Once done, add the following line in your providers array of config/app.php:

Themsaid\Langman\LangmanServiceProvider::class

This package has a single configuration option, which points to the resources/lang directory. If you need to change the path, publish the config file:

php artisan vendor:publish --provider="Themsaid\Langman\LangmanServiceProvider"

Usage

Showing lines of a translation file

php artisan langman:show users

You get:

+---------+---------------+-------------+
| key     | en            | nl          |
+---------+---------------+-------------+
| name    | name          | naam        |
| job     | job           | baan        |
+---------+---------------+-------------+

php artisan langman:show users.name

Brings only the translation of the name key in all languages.


php artisan langman:show users.name.first

Brings the translation of a nested key.


php artisan langman:show package::users.name

Brings the translation of a vendor package language file.


php artisan langman:show users --lang=en,it

Brings the translation of only the "en" and "it" languages.


php artisan langman:show users.nam -c

Brings only the translation lines with keys matching the given key via close match, so searching for nam brings values for keys like (name, username, branch_name_required, etc...).

In the table returned by this command, if a translation is missing it'll be marked in red.

Finding a translation line

php artisan langman:find 'log in first'

You get a table of language lines where any of the values matches the given phrase by close match.

Searching view files for missing translations

php artisan langman:sync

This command will look into all files in resources/views and app and find all translation keys that are not covered in your translation files; it then appends those keys to the files with a value equal to an empty string.

Filling missing translations

php artisan langman:missing

It'll collect all the keys that are missing in any of the languages or have values equal to an empty string, prompt you for a translation for each, and finally save the given values to the files.

Translating a key

php artisan langman:trans users.name
php artisan langman:trans users.name.first
php artisan langman:trans users.name --lang=en
php artisan langman:trans package::users.name

Using this command you may set a language key (plain or nested) for a given group; you may also specify which language to set, leaving the other languages as they are.

This command will add a new key if it doesn't exist, and update the key if it is already there.

Removing a key

php artisan langman:remove users.name
php artisan langman:remove package::users.name

It'll remove that key from all language files.

Renaming a key

php artisan langman:rename users.name full_name

This will rename users.name to users.full_name; the console will output a list of files where the key used to exist.

Notes

langman:sync, langman:missing, langman:trans, and langman:remove will update your language files by rewriting them completely, meaning that any comments or special styling will be removed, so I recommend you back up your files.

Web interface

If you want a web interface to manage your language files instead, I recommend Laravel 5 Translation Manager by Barry vd. Heuvel.

Author: Themsaid
Source Code: https://github.com/themsaid/laravel-langman 
License: MIT license


SVMLightLoader.jl: Loader Of Svmlight / Liblinear format Files

SVMLightLoader

A loader for svmlight / liblinear format files

Usage

using SVMLightLoader


# load the whole file
vectors, labels = load_svmlight_file("test.txt")

# the vector dimension can be specified
ndim = 20
vectors, labels = load_svmlight_file("test.txt", ndim)
println(size(vectors, 1))  # 20

# iterate the file line by line
for (vector, label) in SVMLightFile("test.txt")
    dosomething(vector, label)
end

for (vector, label) in SVMLightFile("test.txt", ndim)
    dosomething(vector, label)
end

Author: IshitaTakeshi
Source Code: https://github.com/IshitaTakeshi/SVMLightLoader.jl 
License: View license


Caches Responses As Static Files on Disk for Lightning Fast Page Loads

Laravel Page Cache

This package allows you to easily cache responses as static files on disk for lightning fast page loads.

Introduction

While static site builders such as Jekyll and Jigsaw are extremely popular these days, dynamic PHP sites still offer a lot of value even for a site that is mostly static. A proper PHP site allows you to easily add dynamic functionality wherever needed, and also means that there's no build step involved in pushing updates to the site.

That said, for truly static pages on a site there really is no reason to have to boot up a full PHP app just to serve a static page. Serving a simple HTML page from disk is infinitely faster and less taxing on the server.

The solution? Full page caching.

Using the middleware included in this package, you can selectively cache the response to disk for any given request. Subsequent calls to the same page will be served directly as a static HTML page!

Installation

Install the page-cache package with composer:

$ composer require silber/page-cache

Service Provider

Note: If you're using Laravel 5.5+, the service provider will be registered automatically. You can simply skip this step entirely.

Open config/app.php and add a new item to the providers array:

Silber\PageCache\LaravelServiceProvider::class,

Middleware

Open app/Http/Kernel.php and add a new item to the web middleware group:

protected $middlewareGroups = [
    'web' => [
        \Silber\PageCache\Middleware\CacheResponse::class,
        /* ... keep the existing middleware here */
    ],
];

The middleware is smart enough to only cache responses with a 200 HTTP status code, and only for GET requests.

If you want to selectively cache only specific requests to your site, you should instead add a new mapping to the routeMiddleware array:

protected $routeMiddleware = [
    'page-cache' => \Silber\PageCache\Middleware\CacheResponse::class,
    /* ... keep the existing mappings here */
];

Once registered, you can then use this middleware on individual routes.

URL rewriting

In order to serve the static files directly once they've been cached, you need to properly configure your web server to check for those static files.

For nginx:

Update your location block's try_files directive to include a check for a cached file in the page-cache directory:

location = / {
    try_files /page-cache/pc__index__pc.html /index.php?$query_string;
}

location / {
    try_files $uri $uri/ /page-cache/$uri.html /page-cache/$uri.json /page-cache/$uri.xml /index.php?$query_string;
}

For apache:

Open public/.htaccess and add the following before the block labeled Handle Front Controller:

# Serve Cached Page If Available...
RewriteCond %{REQUEST_URI} ^/?$
RewriteCond %{DOCUMENT_ROOT}/page-cache/pc__index__pc.html -f
RewriteRule .? page-cache/pc__index__pc.html [L]
RewriteCond %{DOCUMENT_ROOT}/page-cache%{REQUEST_URI}.html -f
RewriteRule . page-cache%{REQUEST_URI}.html [L]
RewriteCond %{DOCUMENT_ROOT}/page-cache%{REQUEST_URI}.json -f
RewriteRule . page-cache%{REQUEST_URI}.json [L]
RewriteCond %{DOCUMENT_ROOT}/page-cache%{REQUEST_URI}.xml -f
RewriteRule . page-cache%{REQUEST_URI}.xml [L]

Ignoring the cached files

To make sure you don't commit your locally cached files to your git repository, add this line to your .gitignore file:

/public/page-cache

Usage

Using the middleware

Note: If you've added the middleware to the global web group, all successful GET requests will automatically be cached. There's no need to also apply the middleware directly to individual routes.

If you instead registered it as a route middleware, you should use the middleware on whichever routes you want to be cached.

To cache the response of a given request, use the page-cache middleware:

Route::middleware('page-cache')->get('posts/{slug}', 'PostController@show');

Every post will now be cached to a file under the public/page-cache directory, closely matching the URL structure of the request. All subsequent requests for this post will be served directly from disk, never even hitting your app!
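For example, after requests to the homepage and to two posts, the cache directory would look roughly like this (the post file names are illustrative; the index page uses the special pc__index__pc name that also appears in the URL-rewriting rules above):

```text
public/page-cache/
├── pc__index__pc.html
└── posts/
    ├── my-first-post.html
    └── second-post.html
```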

Clearing the cache

Since the responses are cached to disk as static files, any updates to those pages in your app will not be reflected on your site. To update pages on your site, you should clear the cache with the following command:

php artisan page-cache:clear

As a rule of thumb, it's good practice to add this command to your deployment script. That way, whenever you push an update to your site, the page cache is automatically cleared.

If you're using Forge's Quick Deploy feature, add this line to the end of your Deploy Script. This ensures the cache is cleared whenever you push an update to your site.
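A minimal deployment-script sketch might look like this (the path, branch, and surrounding steps are hypothetical — only the page-cache:clear line comes from this package):

```shell
# Hypothetical Forge-style deploy script -- adjust paths to your setup.
cd /home/forge/example.com
git pull origin main
composer install --no-interaction --prefer-dist --optimize-autoloader

# Clear the page cache so stale static files are regenerated
# on the next request to each page.
php artisan page-cache:clear
```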

You may optionally pass a URL slug to the command, to only delete the cache for a specific page:

php artisan page-cache:clear {slug}

To clear everything under a given path, use the --recursive flag:

php artisan page-cache:clear {slug} --recursive

For example, imagine you have a category resource under /categories, with the following cached pages:

  • /categories/1
  • /categories/2
  • /categories/5

To clear the cache for all categories, use --recursive with the categories path:

php artisan page-cache:clear categories --recursive

Customizing what to cache

By default, all GET requests with a 200 HTTP response code are cached. If you want to change that, create your own middleware that extends the package's base middleware, and override the shouldCache method with your own logic.

Run the make:middleware Artisan command to create your middleware file:

php artisan make:middleware CacheResponse

Replace the contents of the file at app/Http/Middleware/CacheResponse.php with this:

<?php

namespace App\Http\Middleware;

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Silber\PageCache\Middleware\CacheResponse as BaseCacheResponse;

class CacheResponse extends BaseCacheResponse
{
    protected function shouldCache(Request $request, Response $response)
    {
        // In this example, we don't ever want to cache pages if the
        // URL contains a query string. So we first check for it,
        // then defer back up to the parent's default checks.
        if ($request->getQueryString()) {
            return false;
        }

        return parent::shouldCache($request, $response);
    }
}

Finally, update the middleware reference in your app/Http/Kernel.php file to point to your own middleware class.

Author: JosephSilber
Source Code: https://github.com/JosephSilber/page-cache 
License: MIT license

#laravel #cache #files 

Royce Reinger

1657674600

JsonCompare: Returns The Difference Between Two JSON Files

JsonCompare

Returns the difference between two JSON files.

Installation

Add this line to your application's Gemfile:

gem 'json-compare'

And then execute:

$ bundle

Or install it yourself as:

$ gem install json-compare

Usage

require 'yajl'
require 'json-compare'

json1 = File.new('spec/fixtures/twitter-search.json', 'r')
json2 = File.new('spec/fixtures/twitter-search2.json', 'r')
old, new = Yajl::Parser.parse(json1), Yajl::Parser.parse(json2)
result = JsonCompare.get_diff(old, new)

If you want to exclude some keys from the comparison, use the exclusion parameter:

exclusion = ["from_user", "to_user_id"]
result = JsonCompare.get_diff(old, new, exclusion)
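The exact shape of the returned diff is defined by the gem, but the idea can be sketched with a small, self-contained Ruby analog. Note that simple_diff below is a hypothetical helper written for illustration — it is not json-compare's implementation, and the gem's real output format may differ:

```ruby
require 'json'

# A toy structural diff to illustrate the concept: report keys that were
# removed, appended, or updated between two parsed JSON documents,
# skipping any keys listed in `exclusion`.
def simple_diff(old_h, new_h, exclusion = [])
  diff = { remove: {}, append: {}, update: {} }
  ((old_h.keys | new_h.keys) - exclusion).each do |key|
    if !new_h.key?(key)
      diff[:remove][key] = old_h[key]
    elsif !old_h.key?(key)
      diff[:append][key] = new_h[key]
    elsif old_h[key] != new_h[key]
      diff[:update][key] = { old: old_h[key], new: new_h[key] }
    end
  end
  diff
end

old_doc = JSON.parse('{"id": 1, "text": "hello", "from_user": "a"}')
new_doc = JSON.parse('{"id": 1, "text": "hi", "lang": "en", "from_user": "b"}')

diff = simple_diff(old_doc, new_doc, ["from_user"])
# "text" is updated and "lang" is appended; "from_user" changes are ignored.
```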

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Author: a2design-inc
Source Code: https://github.com/a2design-inc/json-compare 
License: MIT license

#ruby #json #files 


Cakephp-calendar: CakePHP Calendar Plugin

CakePHP Calendar plugin

A plugin to render simple calendars.

This branch is for CakePHP 4.2+. For details see version map.

Features

  • Simple and robust
  • No JS needed, more responsive than solutions like fullcalendar
  • Persistent year/month URL pieces (copy-paste and link/redirect friendly)
  • IcalView class for .ics calendar file output.

Setup

composer require dereuromark/cakephp-calendar

Then make sure the plugin is loaded in bootstrap:

bin/cake plugin load Calendar

Demo

See the demo Calendar example at the sandbox.

Usage

See Documentation.

Author: Dereuromark
Source Code: https://github.com/dereuromark/cakephp-calendar 
License: MIT license

#php #cakephp #calendar #files 
