IntelVectorMath.jl: Julia bindings for the Intel Vector Math Library

IntelVectorMath.jl (formerly VML.jl)

This package provides bindings to the Intel MKL Vector Mathematics Functions. This is often substantially faster than broadcasting Julia's built-in functions, especially when applying a transcendental function over a large array. Until Julia 0.6 the package was registered as VML.jl.

Similar packages are Yeppp.jl, which wraps the open source Yeppp library, and AppleAccelerate.jl, which provides access to macOS's Accelerate framework.

Warning for macOS

There is currently a known issue between the CompilerSupportLibraries_jll artifact, which is used for example by SpecialFunctions.jl, and MKL_jll. Unless MKL_jll is loaded first, a small number of functions may return wrong results for particular input array lengths. If you are unsure whether any of the packages you use load this artifact, loading IntelVectorMath as the very first package should be safe.
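
A minimal sketch of the recommended load order in this situation (SpecialFunctions is just one example of a package that can pull in CompilerSupportLibraries_jll):

julia> using IntelVectorMath   # load first, so that MKL_jll is loaded before the other artifact

julia> using SpecialFunctions  # packages that load CompilerSupportLibraries_jll come afterwards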

Basic install

To install IntelVectorMath.jl run

julia> ] add IntelVectorMath

Since version 0.4 IntelVectorMath uses the MKL_jll artifact, which is shared with other packages that use MKL, removing several other dependencies. This has the side effect that from version 0.4 onwards this package requires at least Julia 1.3.

For older versions of Julia, IntelVectorMath v0.3 downloads its own copy of MKL and keeps only the required files in its own directory. Installing MKL.jl or MKL directly from Intel is therefore not required, and having them installed may mean some duplicate files; the artifact system adopted in v0.4 resolves this duplication. In the event that MKL was not installed properly you will get an error when first using the package. Please try running

julia> ] build IntelVectorMath

If this does not work, please open an issue and include the output of <packagedir>/deps/build.log.

Renaming from VML

If you used this package prior to its renaming, you may have to run ] rm VML first. Otherwise there will be a conflict due to the UUID.

Using IntelVectorMath

After loading IntelVectorMath, the supported functions listed below are available, for example IntelVectorMath.sin(rand(100)). These should provide a significant speed-up over broadcasting the Base functions. Since the package name is quite long, an alias IVM is also exported, allowing IVM.sin(rand(100)) after using the package. If you import the package instead, you can add this alias via const IVM = IntelVectorMath. Equally, you can replace IVM with another alias of your choice.
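
For instance, a minimal sketch of defining the alias yourself when loading the package with import rather than using:

julia> import IntelVectorMath

julia> const IVM = IntelVectorMath;  # the alias is only exported automatically when loading via using

julia> IVM.sin(rand(100));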

Example

julia> using IntelVectorMath, BenchmarkTools

julia> a = randn(10^4);

julia> @btime sin.($a);     # apply Base.sin to each element
  102.128 μs (2 allocations: 78.20 KiB)

julia> @btime IVM.sin($a);  # apply IVM.sin to the whole array
  20.900 μs (2 allocations: 78.20 KiB)

julia> b = similar(a);

julia> @btime IVM.sin!(b, a);  # in-place version
  20.008 μs (0 allocations: 0 bytes)

Accuracy

By default, IntelVectorMath uses VML_HA mode, which corresponds to an accuracy of <1 ulp, matching the accuracy of Julia's built-in openlibm implementation, although the exact results may be different. To specify low accuracy, use vml_set_accuracy(VML_LA). To specify enhanced performance, use vml_set_accuracy(VML_EP). More documentation regarding these options is available on Intel's website.
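
For example, switching between the accuracy modes named above after loading IntelVectorMath:

julia> vml_set_accuracy(VML_LA)  # low accuracy

julia> vml_set_accuracy(VML_EP)  # enhanced performance (lowest accuracy)

julia> vml_set_accuracy(VML_HA)  # back to the default high-accuracy mode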

Performance

Summary of Results:

Relative speed of IntelVectorMath vs. Base: the height of each bar shows how much faster IntelVectorMath is than broadcasting the corresponding function from Base.

[Figure: IntelVectorMath performance comparison]

[Figure: IntelVectorMath complex performance comparison]

Full Results:

Real Functions - Full Benchmark Results

[Benchmark plots for dimension sets 1-10]

Complex Functions - Full Benchmark Results

[Benchmark plots for dimension sets 1-10]

Real Functions - Performance over dimensions

[Per-function benchmark plots: abs, abs2, acos, acosh, asin, asinh, atan, atanh, cbrt, ceil, cis, cos, cosh, erf, erfc, erfcinv, exp, expm1, floor, gamma, hypot, log, round, sin, sinh, sqrt, tan, tanh, trunc]

Tests were performed on an Intel(R) Core(TM) i5-8250U @ 1.60 GHz (1800 MHz). The dashed line indicates equivalent performance for IntelVectorMath versus the implementations in Base.

Supported functions

IntelVectorMath.jl supports the following functions, most of them for Float32 and Float64 arguments; some also accept complex numbers.

Unary functions

Allocating forms have signature f(A). Mutating forms have signatures f!(A) (in place) and f!(out, A) (out of place). The last 9 functions have been moved from Base to SpecialFunctions.jl or have no Base equivalent.

Allocating    Mutating
acos          acos!
asin          asin!
atan          atan!
cos           cos!
sin           sin!
tan           tan!
acosh         acosh!
asinh         asinh!
atanh         atanh!
cosh          cosh!
sinh          sinh!
tanh          tanh!
cbrt          cbrt!
sqrt          sqrt!
exp           exp!
expm1         expm1!
log           log!
log10         log10!
log1p         log1p!
abs           abs!
abs2          abs2!
ceil          ceil!
floor         floor!
round         round!
trunc         trunc!
erf           erf!
erfc          erfc!
erfinv        erfinv!
erfcinv       erfcinv!
gamma         gamma!
lgamma        lgamma!
inv_cbrt      inv_cbrt!
inv_sqrt      inv_sqrt!
pow2o3        pow2o3!
pow3o2        pow3o2!
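
As a brief sketch of the three unary call forms described above, using sqrt as the example function:

julia> x = rand(100);

julia> y = IVM.sqrt(x);     # allocating: returns a new array

julia> IVM.sqrt!(x);        # mutating in place: overwrites x

julia> out = similar(x);

julia> IVM.sqrt!(out, x);   # mutating out of place: writes the result into out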

Binary functions

Allocating forms have signature f(A, B). Mutating forms have signature f!(out, A, B).

Allocating    Mutating
atan          atan!
hypot         hypot!
pow           pow!
divide        divide!
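
And a minimal sketch of the binary signatures above, using pow as the example function:

julia> A, B = rand(100), rand(100);

julia> C = IVM.pow(A, B);     # allocating: elementwise A .^ B

julia> out = similar(A);

julia> IVM.pow!(out, A, B);   # mutating: writes the result into out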

Next steps

Next steps for this package

  •  Windows support
  •  Basic Testing
  •  Avoiding overloading base and optional overload function
  •  Travis and AppVeyor testing
  •  Adding CIS function
  •  Move Testing to GitHub Actions
  •  Add test for using standalone MKL
  •  Update Benchmarks
  •  Add tests for mutating functions
  •  Add own dependency management via BinaryProvider
  •  Update function list in README
  •  Adopt Julia 1.3 artifact system, breaking backwards compatibility

Advanced

IntelVectorMath.jl uses CpuId.jl to detect whether your processor supports the newer AVX2 instructions; if not, it defaults to libmkl_vml_avx. If your system does not have AVX, this package will currently not work for you. If the CPU feature detection does not work for you, please open an issue.

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/IntelVectorMath.jl 
License: View license

#julia #binding #cryptography 


VSL.jl: Julia Bindings to the Intel Vector Statistics Library

VSL.jl

This package provides bindings to the Intel Vector Statistics Library.

Using VSL.jl

You must have the Intel® Math Kernel Library installed to use VSL.jl, and the shared library must be in a directory known to the linker.

VSL.jl provides several basic random number generators (BRNGs) and distributions, and each distribution has at least one method to generate random numbers. After VSL.jl is loaded, you can use the distributions as follows:

julia> using VSL

julia> brng = BasicRandomNumberGenerator(VSL_BRNG_MT19937, 12345);
# Create a BRNG, with 12345 as the random seed.

julia> u = Uniform(brng, 0.0, 1.0); # Create a uniform distribution between 0.0 and 1.0.

julia> rand(u) # Generate one random number.
0.41661986871622503

julia> rand(u, 2, 3) # Generate a random 2×3 array.
2×3 Array{Float64,2}:
 0.732685   0.820175  0.802848
 0.0101692  0.825207  0.29864 

julia> A = Array{Float64}(undef, 3, 4);

julia> rand!(u, A) # Fill an array with random numbers.
3×4 Array{Float64,2}:
 0.855138  0.193661  0.436228  0.124267
 0.368412  0.270245  0.161688  0.874174
 0.931785  0.566008  0.373064  0.432936

Basic random number generators

Use the Enum BRNGType to set the type of BRNG.

BRNGType Enum
VSL_BRNG_MCG31
VSL_BRNG_R250
VSL_BRNG_MRG32K3A
VSL_BRNG_MCG59
VSL_BRNG_WH
VSL_BRNG_SOBOL
VSL_BRNG_NIEDERR
VSL_BRNG_MT19937
VSL_BRNG_MT2203
VSL_BRNG_SFMT19937
VSL_BRNG_NONDETERM
VSL_BRNG_ARS5
VSL_BRNG_PHILOX4X32X10
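
For instance, following the constructor pattern shown earlier, another generator from this enum can be selected (the seed value here is arbitrary):

julia> brng_sfmt = BasicRandomNumberGenerator(VSL_BRNG_SFMT19937, 2023);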

Supported distributions

Continuous: Uniform, Gaussian, GaussianMV, Exponential, Laplace, Weibull, Cauchy, Rayleigh, Lognormal, Gumbel, Gamma, Beta

Discrete: UniformDiscrete, UniformBits, UniformBits32, UniformBits64, Bernoulli, Geometric, Binomial, Hypergeometric, Poisson, PoissonV, NegBinomial

Notes

Most of the discrete distributions return 32-bit integer values. Please be careful when using those distributions.
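
As a sketch only, assuming the discrete-distribution constructors follow the same BRNG-plus-parameters pattern as Uniform above (the exact signature should be checked against the package documentation):

julia> p = Poisson(brng, 3.0);  # hypothetical call: the BRNG plus the rate parameter, by analogy with Uniform

julia> rand(p, 5)               # per the note above, expect 32-bit integer (Int32) values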

For more information, please refer to the Intel® Math Kernel Library Developer Reference

Download Details:

Author: Sunoru
Source Code: https://github.com/sunoru/VSL.jl 
License: MIT license

#julia #cryptography #vector 

FFTW.jl: Julia Bindings to the FFTW Library for Fast Fourier Transforms

FFTW.jl

This package provides Julia bindings to the FFTW library for fast Fourier transforms (FFTs), as well as functionality useful for signal processing. These functions were formerly a part of Base Julia.

Usage and documentation

]add FFTW
using FFTW
fft([0; 1; 2; 1])

returns

4-element Array{Complex{Float64},1}:
  4.0 + 0.0im
 -2.0 + 0.0im
  0.0 + 0.0im
 -2.0 + 0.0im

The documentation of generic FFT functionality can be found in the AbstractFFTs.jl package. Additional functionalities supported by the FFTW library are documented in the present package.

MKL

Alternatively, the FFTs in Intel's Math Kernel Library (MKL) can be used by running FFTW.set_provider!("mkl"). MKL will be provided through MKL_jll. This change of provider is persistent and has to be done only once; i.e., the package will keep using MKL when building and updating. Note, however, that MKL provides only a subset of the functionality provided by FFTW. See Intel's documentation for more information about potential differences or gaps in functionality. In case MKL no longer fits your needs, FFTW.set_provider!("fftw") reverts the change of provider.
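
For example, using the two provider calls mentioned above (the switch is persistent and takes effect when the package is next built and loaded):

julia> using FFTW

julia> FFTW.set_provider!("mkl")   # use MKL-backed FFTs via MKL_jll

julia> FFTW.set_provider!("fftw")  # revert to the default FFTW provider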

Download Details:

Author: JuliaMath
Source Code: https://github.com/JuliaMath/FFTW.jl 
License: MIT license

#julia #math #binding 

Critical Intel Flaw Afflicts Several Motherboards, Server Systems, Compute Modules

Intel is warning of a rare critical-severity vulnerability affecting several of its motherboards, server systems and compute modules. The flaw could allow an unauthenticated, remote attacker to achieve escalated privileges.

The recently patched flaw (CVE-2020-8708) ranks 9.6 out of 10 on the CVSS scale, making it critical. Dmytro Oleksiuk, who discovered the flaw, told Threatpost that it exists in the firmware of Emulex Pilot 3. This baseboard-management controller is a service processor that monitors the physical state of a computer, network server or other hardware devices via specialized sensors.

Emulex Pilot 3 is used by various motherboards, which aggregate all the server components into one system. Also impacted are various server operating systems, and some Intel compute modules, which are electronic circuits, packaged onto a circuit board, that provide various functions.

The critical flaw stems from improper-authentication mechanisms in these Intel products before version 1.59.

In bypassing authentication, an attacker would be able to access the KVM console of the server. The KVM console can access the system consoles of network devices to monitor and control their functionality. The KVM console is like a remote desktop implemented in the baseboard management controller – it provides an access point to the display, keyboard and mouse of the remote server, Oleksiuk told Threatpost.

The flaw is dangerous as it’s remotely exploitable, and attackers don’t need to be authenticated to exploit it – though they need to be located in the same network segment as the vulnerable server, Oleksiuk told Threatpost.

“The exploit is quite simple and very reliable because it’s a design flaw,” Oleksiuk told Threatpost.

Beyond this critical flaw, Intel also fixed bugs tied to 22 critical-, high-, medium- and low-severity CVEs affecting its server board, systems and compute modules. Other high-severity flaws include a heap-based overflow (CVE-2020-8730) that’s exploitable as an authenticated user; incorrect execution-assigned permissions in the file system (CVE-2020-8731); and a buffer overflow in daemon (CVE-2020-8707) — all three of which enable escalated privileges.

Oleksiuk was credited with reporting CVE-2020-8708, as well as CVE-2020-8706, CVE-2020-8707. All other CVEs were found internally by Intel.

Affected server systems include: The R1000WT and R2000WT families, R1000SP, LSVRP and LR1304SP families and R1000WF and R2000WF families.

Impacted motherboards include: The S2600WT family, S2600CW family, S2600KP family, S2600TP family, S1200SP family, S2600WF family, S2600ST family and S2600BP family.

Finally, impacted compute modules include: The HNS2600KP family, HNS2600TP family and HNS2600BP family. More information regarding patches is available in Intel’s security advisory.

Intel also issued an array of other security advisories addressing high-severity flaws across its product lines, including ones that affect Intel Graphics Drivers, Intel’s RAID web console 3 for Windows, Intel Server Board M10JNP2SB and Intel NUCs.

#vulnerabilities #compute module #critical flaw #cve-2020-8708 #intel #intel critical flaw #intel flaw #intel motherboard #intel server board #patch #privilege escalation #security vulnerability #server system

GraphViz.jl: Julia Binding to the GraphViz Library

GraphViz.jl

This package provides an interface to the GraphViz package for graph visualization. There are two primary entry points:

  • The GraphViz.load function (not exported) to load graphs from a file
  • The dot""" string macro for literal inline specifications of graphs

Both of these accept graphs in the DOT format. To load a graph from a non-constant string, use GraphViz.load with an IOBuffer.
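
For example, a small sketch of the IOBuffer route mentioned above (the DOT source here is just an illustration):

using GraphViz
src = "digraph { a -> b; b -> c; }"   # DOT source held in a non-constant string
g = GraphViz.load(IOBuffer(src))      # load from an IO object instead of a file path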

Getting started

If you already have a graph you would like to work with, the following code snippets may be helpful. If not, have a look at the "Simple Examples" section below.

using GraphViz
GraphViz.load("mygraph.dot")
dot"""
 digraph graphname {
     a -> b -> c;
     b -> d;
 }
""")

Usage

After obtaining the package through the package manager, the following suffices to load the package:

using GraphViz

Note that graphviz has many configuration options. In particular, both the Cairo and the GTK backends may be disabled by default.

Simple Examples

Try the following in an IJulia Notebook (this example is taken from here):

dot"""
graph graphname {
     // The label attribute can be used to change the label of a node
     a [label="Foo"];
     // Here, the node shape is changed.
     b [shape=box];
     // These edges both have different line properties
     a -- b -- c [color=blue];
     b -- d [style=dotted];
 }
"""

Download Details:

Author: JuliaGraphs
Source Code: https://github.com/JuliaGraphs/GraphViz.jl 
License: View license

#julia #graphs #binding