
# Fast Matrix Multiplication and Division for Toeplitz Matrices in Julia

## ToeplitzMatrices.jl

Fast matrix multiplication and division for Toeplitz, Hankel and circulant matrices in Julia

Note

Multiplication of large matrices and `sqrt`, `inv`, `LinearAlgebra.eigvals`, `LinearAlgebra.ldiv!`, and `LinearAlgebra.pinv` for circulant matrices are computed with FFTs. To be able to use these methods, you have to install and load a package that implements the AbstractFFTs.jl interface such as FFTW.jl:

```
using FFTW
```

If you perform multiple calculations with FFTs, it can be more efficient to initialize the required arrays and plan the FFT only once. You can precompute the FFT factorization with `LinearAlgebra.factorize` and then use the factorization for the FFT-based computations.
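To see why precomputing helps, here is a small numpy sketch (numpy is purely illustrative here and stands in for the Julia package): the "factorization" of a circulant matrix is just the FFT of its first column, and reusing it across many products mirrors planning the FFT once instead of per call.

```python
import numpy as np

def circulant_factor(vc):
    """Precompute the eigenvalues fft(vc) of the circulant with first column vc.

    Reusing this 'factorization' across many multiplications mirrors
    planning the FFT once instead of once per product.
    """
    return np.fft.fft(vc)

def circulant_matvec(lam, x):
    # C @ x == ifft(fft(vc) * fft(x)): O(n log n) instead of O(n^2).
    return np.real(np.fft.ifft(lam * np.fft.fft(x)))

vc = np.array([1.0, 2.0, 3.0, 4.0])
n = len(vc)
# Dense reference: C[i, j] = vc[(i - j) mod n]
C = np.array([[vc[(i - j) % n] for j in range(n)] for i in range(n)])

lam = circulant_factor(vc)
x = np.array([1.0, 0.5, -2.0, 3.0])
assert np.allclose(circulant_matvec(lam, x), C @ x)
```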

Supported matrices

## Toeplitz

A Toeplitz matrix has constant diagonals. It can be constructed using

```
Toeplitz(vc,vr)
```

where `vc` are the entries in the first column and `vr` are the entries in the first row, and `vc[1]` must equal `vr[1]`. For example,

```
Toeplitz(1:3, [1.,4.,5.])
```

is a sparse representation of the matrix

```
[ 1.0  4.0  5.0
  2.0  1.0  4.0
  3.0  2.0  1.0 ]
```
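The structure can be checked with a plain numpy sketch (numpy is illustrative only, not part of the package): entry `(i, j)` of a Toeplitz matrix depends only on `i - j`, so the first column fills the lower triangle and the first row the strict upper triangle.

```python
import numpy as np

def toeplitz_dense(vc, vr):
    """Build a dense Toeplitz matrix from first column vc and first row vr.

    Assumes vc[0] == vr[0], matching the constructor's requirement.
    """
    vc, vr = np.asarray(vc, float), np.asarray(vr, float)
    assert vc[0] == vr[0], "first column and first row must agree at (1, 1)"
    m, n = len(vc), len(vr)
    return np.array([[vc[i - j] if i >= j else vr[j - i]
                      for j in range(n)] for i in range(m)])

T = toeplitz_dense([1.0, 2.0, 3.0], [1.0, 4.0, 5.0])
# Matches the example above
expected = np.array([[1.0, 4.0, 5.0],
                     [2.0, 1.0, 4.0],
                     [3.0, 2.0, 1.0]])
assert np.allclose(T, expected)
```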

## SymmetricToeplitz

A symmetric Toeplitz matrix is a symmetric matrix with constant diagonals. It can be constructed with

```
SymmetricToeplitz(vc)
```

where `vc` are the entries of the first column. For example,

```
SymmetricToeplitz([1.0, 2.0, 3.0])
```

is a sparse representation of the matrix

```
[ 1.0  2.0  3.0
  2.0  1.0  2.0
  3.0  2.0  1.0 ]
```

## TriangularToeplitz

A triangular Toeplitz matrix can be constructed using

```
TriangularToeplitz(ve,uplo)
```

where `uplo` is either `:L` or `:U` and `ve` gives the entries of the first column or the first row, respectively. For example,

```
TriangularToeplitz([1.,2.,3.],:L)
```

is a sparse representation of the matrix

```
[ 1.0  0.0  0.0
  2.0  1.0  0.0
  3.0  2.0  1.0 ]
```

## Hankel

A Hankel matrix has constant anti-diagonals. It can be constructed using

```
Hankel(vc,vr)
```

where `vc` are the entries in the first column and `vr` are the entries in the last row, and `vc[end]` must equal `vr[1]`. For example,

```
Hankel([1.,2.,3.], 3:5)
```

is a sparse representation of the matrix

```
[ 1.0  2.0  3.0
  2.0  3.0  4.0
  3.0  4.0  5.0 ]
```

## Circulant

A circulant matrix is a special case of a Toeplitz matrix with periodic end conditions. It can be constructed using

```
Circulant(vc)
```

where `vc` is a vector with the entries for the first column. For example:

```
Circulant([1.0, 2.0, 3.0])
```

is a sparse representation of the matrix

```
[ 1.0  3.0  2.0
  2.0  1.0  3.0
  3.0  2.0  1.0 ]
```
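Because the DFT diagonalizes every circulant matrix, a linear solve reduces to pointwise division in Fourier space. A numpy sketch of this (illustrative only; the package does the equivalent internally for `ldiv!`, `inv`, etc.):

```python
import numpy as np

vc = np.array([1.0, 2.0, 3.0])
n = len(vc)
# Dense circulant from the example above: column j is the first column
# cyclically shifted down by j.
C = np.array([[vc[(i - j) % n] for j in range(n)] for i in range(n)])

# Eigenvalues of a circulant are fft of its first column, so C x = b
# becomes pointwise division in Fourier space.
lam = np.fft.fft(vc)
b = np.array([1.0, -1.0, 2.0])
x = np.real(np.fft.ifft(np.fft.fft(b) / lam))
assert np.allclose(C @ x, b)
```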

Author: JuliaMatrices
Source Code: https://github.com/JuliaMatrices/ToeplitzMatrices.jl


## Introduction

Matrix is an ambitious new ecosystem for open federated Instant Messaging and VoIP. The basics you need to know to get up and running are:

• Everything in Matrix happens in a room. Rooms are distributed and do not exist on any single server. Rooms can be located using convenience aliases like `#matrix:matrix.org` or `#test:localhost:8448`.
• Matrix user IDs look like `@matthew:matrix.org` (although in the future you will normally refer to yourself and others using a third party identifier (3PID): email address, phone number, etc., rather than manipulating Matrix user IDs)

The overall architecture is:

```
client <----> homeserver <=====================> homeserver <----> client
       https://somewhere.org/_matrix        https://elsewhere.net/_matrix
```

`#matrix:matrix.org` is the official support room for Matrix, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or via IRC bridge at irc://irc.libera.chat/matrix.

Synapse is currently in rapid development, but as of version 0.5 we believe it is sufficiently stable to be run as an internet-facing service for real usage!

Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard, which handle:

• Creating and managing fully distributed chat rooms with no single points of control or failure
• Eventually-consistent cryptographically secure synchronisation of room state across a global open network of federated servers and services
• Sending and receiving extensible messages in a room with (optional) end-to-end encryption
• Inviting, joining, leaving, kicking, banning room members
• Managing user accounts (registration, login, logout)
• Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers, Facebook accounts to authenticate, identify and discover users on Matrix.
• Placing 1:1 VoIP and Video calls

These APIs are intended to be implemented on a wide range of servers, services and clients, letting developers build messaging and VoIP functionality on top of the entirely open Matrix ecosystem rather than using closed or proprietary solutions. The hope is for Matrix to act as the building blocks for a new generation of fully open and interoperable messaging and VoIP apps for the internet.

Synapse is a Matrix "homeserver" implementation developed by the matrix.org core team, written in Python 3/Twisted.

In Matrix, every user runs one or more Matrix clients, which connect through to a Matrix homeserver. The homeserver stores all their personal chat history and user account information - much as a mail client connects through to an IMAP/SMTP server. Just like email, you can either run your own Matrix homeserver and control and own your own communications and history or use one hosted by someone else (e.g. matrix.org) - there is no single point of control or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.

We'd like to invite you to join #matrix:matrix.org (via https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look at the Matrix spec, and experiment with the APIs and Client SDKs.

Thanks for using Matrix!

## Support

For support installing or managing Synapse, please join `#synapse:matrix.org` (from a matrix.org account if necessary) and ask questions there. We do not use GitHub issues for support requests, only for bug reports and feature requests.

Synapse's documentation is nicely rendered on GitHub Pages, with its source available in `docs`.

## Connecting to Synapse from a client

The easiest way to try out your new Synapse installation is by connecting to it from a web client.

Unless you are running a test instance of Synapse on your local machine, in general, you will need to enable TLS support before you can successfully connect from a client: see TLS certificates.

An easy way to get started is to login or register via Element at https://app.element.io/#/login or https://app.element.io/#/register respectively. You will need to change the server you are logging into from `matrix.org` and instead specify a Homeserver URL of `https://<server_name>:8448` (or just `https://<server_name>` if you are using a reverse proxy). If you prefer to use another client, refer to our client breakdown.

If all goes well you should at least be able to log in, create a room, and start sending messages.

### Registering a new user from a client

By default, registration of new users via Matrix clients is disabled. To enable it, specify `enable_registration: true` in `homeserver.yaml`. (It is then recommended to also set up CAPTCHA - see docs/CAPTCHA_SETUP.md.)

Once `enable_registration` is set to `true`, it is possible to register a user via a Matrix client.

Your new user name will be formed partly from the `server_name`, and partly from a localpart you specify when you create the account. Your name will take the form of:

```
@localpart:my.domain.name
```

(pronounced "at localpart on my dot domain dot name").

As when logging in, you will need to specify a "Custom server". Specify your desired `localpart` in the 'User name' box.

## Security note

Matrix serves raw, user-supplied data in some APIs -- specifically the content repository endpoints.

Whilst we make a reasonable effort to mitigate against XSS attacks (for instance, by using CSP), a Matrix homeserver should not be hosted on a domain hosting other web applications. This especially applies to sharing the domain with Matrix web clients and other sensitive applications like webmail. See https://developer.github.com/changes/2014-04-25-user-content-security for more information.

Ideally, the homeserver should not simply be on a different subdomain, but on a completely different registered domain (also known as top-level site or eTLD+1). This is because some attacks are still possible as long as the two applications share the same registered domain.

To illustrate this with an example, if your Element Web or other sensitive web application is hosted on `A.example1.com`, you should ideally host Synapse on `example2.com`. Some amount of protection is offered by hosting on `B.example1.com` instead, so this is also acceptable in some scenarios. However, you should not host your Synapse on `A.example1.com`.

Note that all of the above refers exclusively to the domain used in Synapse's `public_baseurl` setting. In particular, it has no bearing on the domain mentioned in MXIDs hosted on that server.

Following this advice ensures that even if an XSS is found in Synapse, the impact to other applications will be minimal.

The instructions for upgrading synapse are in the upgrade notes. Please check these instructions as upgrading may require extra steps for some versions of synapse.

## Using a reverse proxy with Synapse

It is recommended to put a reverse proxy such as nginx, Apache, Caddy, HAProxy or relayd in front of Synapse. One advantage of doing so is that it means that you can expose the default https port (443) to Matrix clients without needing to run Synapse with root privileges.

For information on configuring one, see docs/reverse_proxy.md.

## Identity Servers

Identity servers have the job of mapping email addresses and other 3rd Party IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs before creating that mapping.

They are not where accounts or credentials are stored - these live on home servers. Identity Servers are just for mapping 3rd party IDs to matrix IDs.

This process is very security-sensitive, as there is obvious risk of spam if it is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer term, we hope to create a decentralised system to manage it (matrix-doc #712), but in the meantime, the role of managing trusted identity in the Matrix ecosystem is farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix Identity Servers' such as Sydent, whose role is purely to authenticate and track 3PID logins and publish end-user public keys.

You can host your own copy of Sydent, but this will prevent you reaching other users in the Matrix ecosystem via their email address, and prevent them finding you. We therefore recommend that you use one of the centralised identity servers at `https://matrix.org` or `https://vector.im` for now.

To reiterate: the Identity server will only be used if you choose to associate an email address with your account, or send an invite to another user via their email address.

Users can reset their password through their client. Alternatively, a server admin can reset a user's password using the admin API or by directly editing the database as shown below.

First calculate the hash of the new password:

```
$ ~/synapse/env/bin/hash_password
$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Then update the `users` table in the database:

```
UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    WHERE name='@test:test.com';
```

## Synapse Development

The best place to get started is our guide for contributors. This is part of our larger documentation, which includes information for synapse developers as well as synapse administrators.

Developers might be particularly interested in:

Alongside all that, join our developer community on Matrix: #synapse-dev:matrix.org, featuring real humans!

### Quick start

Before setting up a development environment for synapse, make sure you have the system dependencies (such as the python header files) installed - see Platform-specific prerequisites.

To check out a synapse for development, clone the git repo into a working directory of your choice:

```
git clone https://github.com/matrix-org/synapse.git
cd synapse
```

Synapse has a number of external dependencies. We maintain a fixed development environment using Poetry. First, install poetry. We recommend:

```
pip install --user pipx
pipx install poetry
```

as described here. (See poetry's installation docs for other installation methods.) Then ask poetry to create a virtual environment from the project and install Synapse's dependencies:

```
poetry install --extras "all test"
```

This will run a process of downloading and installing all the needed dependencies into a virtual env.

We recommend using the demo which starts 3 federated instances running on ports 8080 - 8082:

```
poetry run ./demo/start.sh
```

(to stop, you can use `poetry run ./demo/stop.sh`)

If you just want to start a single instance of the app and run it directly:

```
# Create the homeserver.yaml config once
poetry run synapse_homeserver \
    --server-name my.domain.name \
    --config-path homeserver.yaml \
    --generate-config \
    --report-stats=[yes|no]

# Start the app
poetry run synapse_homeserver --config-path homeserver.yaml
```

### Running the unit tests

After getting up and running, you may wish to run Synapse's unit tests to check that everything is installed correctly:

```
poetry run trial tests
```

This should end with a 'PASSED' result (note that exact numbers will differ):

```
Ran 1337 tests in 716.064s

PASSED (skips=15, successes=1322)
```

For more tips on running the unit tests, like running a specific test or to see the logging output, see the CONTRIBUTING doc.

### Running the Integration Tests

Synapse is accompanied by SyTest, a Matrix homeserver integration testing suite, which uses HTTP requests to access the API as a Matrix client would. It is able to run Synapse directly from the source tree, so installation of the server is not required.

Testing with SyTest is recommended for verifying that changes related to the Client-Server API are functioning correctly. See the SyTest installation instructions for details.

## Platform dependencies

Synapse uses a number of platform dependencies such as Python and PostgreSQL, and aims to follow supported upstream versions. See the docs/deprecation_policy.md document for more details.

## Troubleshooting

### Running out of File Handles

If synapse runs out of file handles, it typically fails badly - live-locking at 100% CPU, and/or failing to accept new TCP connections (blocking the connecting client). Matrix currently can legitimately use a lot of file handles, thanks to busy rooms like #matrix:matrix.org containing hundreds of participating servers. The first time a server talks in a room it will try to connect simultaneously to all participating servers, which could exhaust the available file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow to respond. (We need to improve the routing algorithm used to be better than full mesh, but as of March 2019 this hasn't happened yet).

If you hit this failure mode, we recommend increasing the maximum number of open file handles to be at least 4096 (assuming a default of 1024 or 256). This is typically done by editing `/etc/security/limits.conf`

Separately, Synapse may leak file handles if inbound HTTP requests get stuck during processing - e.g. blocked behind a lock or talking to a remote server etc. This is best diagnosed by matching up the 'Received request' and 'Processed request' log lines and looking for any 'Processed request' lines which take more than a few seconds to execute. Please let us know at #synapse:matrix.org if you see this failure mode so we can help debug it, however.

### Help!! Synapse is slow and eats all my RAM/CPU!

First, ensure you are running the latest version of Synapse, using Python 3 with a PostgreSQL database.

Synapse's architecture is quite RAM hungry currently - we deliberately cache a lot of recent room data and metadata in RAM in order to speed up common requests. We'll improve this in the future, but for now the easiest way to reduce the RAM usage (at the risk of slowing things down) is to set the almost-undocumented `SYNAPSE_CACHE_FACTOR` environment variable. The default is 0.5, which can be decreased to reduce RAM usage in memory-constrained environments, or increased if performance starts to degrade.

However, degraded performance due to a low cache factor, common on machines with slow disks, often leads to explosions in memory use due to backlogged requests. In this case, reducing the cache factor will make things worse. Instead, try increasing it drastically. 2.0 is a good starting value.

Using libjemalloc can also yield a significant improvement in overall memory use, and especially in terms of giving back RAM to the OS. To use it, the library must simply be put in the LD_PRELOAD environment variable when launching Synapse. On Debian, this can be done by installing the `libjemalloc1` package and adding this line to `/etc/default/matrix-synapse`:

```
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
```

This can make a significant difference on Python 2.7 - it's unclear how much of an improvement it provides on Python 3.x.

If you're encountering high CPU use by the Synapse process itself, you may be affected by a bug with presence tracking that leads to a massive excess of outgoing federation requests (see discussion). If metrics indicate that your server is also issuing far more outgoing federation requests than can be accounted for by your users' activity, this is a likely cause. The misbehavior can be worked around by setting the following in the Synapse config file:

```
presence:
  enabled: false
```

### People can't accept room invitations from me

The typical failure mode here is that you send an invitation to someone to join a room or direct chat, but when they go to accept it, they get an error (typically along the lines of "Invalid signature"). They might see something like the following in their logs:

```
2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>
```

This is normally caused by a misconfiguration in your reverse-proxy. See docs/reverse_proxy.md and double-check that your settings are correct.

Author: matrix-org
Source Code: https://github.com/matrix-org/synapse


## Multiple File Upload in Laravel 7, 6

In this post, I will show you easy steps for multiple file upload in Laravel 7 and 6, as well as how to validate file type and size before saving to the database.

### Laravel 7/6 Multiple File Upload

You can easily upload multiple files with validation in a Laravel application using the following steps:

2. Setup Database Credentials
3. Generate Migration & Model For File
5. Create File Controller & Methods
6. Create Multiple File Blade View
7. Run Development Server

## LowRankApprox

This Julia package provides fast low-rank approximation algorithms for BLAS/LAPACK-compatible matrices based on some of the latest technology in adaptive randomized matrix sketching. Currently implemented algorithms include:

• sketch methods:
  • random Gaussian
  • random subset
  • subsampled random Fourier transform
  • sparse random Gaussian
• partial range finder
• partial factorizations:
  • QR decomposition
  • interpolative decomposition
  • singular value decomposition
  • Hermitian eigendecomposition
  • CUR decomposition
• spectral norm estimation

By "partial", we mean essentially that these algorithms are early-terminating, i.e., they are not simply post-truncated versions of their standard counterparts. There is also support for "matrix-free" linear operators described only through their action on vectors. All methods accept a number of options specifying, e.g., the rank, estimated absolute precision, and estimated relative precision of approximation.

Our implementation borrows heavily from the perspective espoused by N. Halko, P.G. Martinsson, J.A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53 (2): 217-288, 2011, except that we choose the interpolative decomposition (ID) as our basic form of approximation instead of matrix range projection. The reason is that the latter requires expensive matrix-matrix multiplication to contract to the relevant subspace, while the former can sometimes be computed much faster, depending on the accelerated sketching strategy employed.

This package has been developed with performance in mind, and early tests have shown large speedups over similar codes written in MATLAB and Python (and even some in Fortran and C). For example, computing an ID of a Hilbert matrix of order 1024 to relative precision ~1e-15 takes:

• ~0.02 s using LowRankApprox in Julia
• ~0.07 s using SciPy in Python (calling a Fortran backend; see PyMatrixID)
• ~0.3 s in MATLAB

This difference can be attributed in part to both algorithmic improvements as well as to some low-level optimizations.

## Installation

To install LowRankApprox, simply type:

```
Pkg.add("LowRankApprox")
```

at the Julia prompt. The package can then be imported as usual with `using` or `import`.

## Getting Started

To illustrate the usage of this package, let's consider the computation of a partial QR decomposition of a Hilbert matrix, which is well known to have low rank. First, we load LowRankApprox via:

```
using LowRankApprox
```

Then we construct a Hilbert matrix with:

```
n = 1024
A = matrixlib(:hilb, n, n)
```
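The Hilbert matrix's low numerical rank can be verified with a plain numpy cross-check (numpy is illustrative only; the package works on any BLAS/LAPACK-compatible matrix):

```python
import numpy as np

# Hilbert matrix: H[i, j] = 1 / (i + j + 1) with 0-based indices.
n = 64
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)

# Its singular values decay geometrically, so the numerical rank at
# double precision is far below n.
s = np.linalg.svd(H, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-15 * s[0]))
assert numerical_rank < 40
```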

A partial QR decomposition can then be computed using:

```
F = pqrfact(A)
```

This returns a `PartialQR` factorization with variables `Q`, `R`, and `p` denoting the unitary, trapezoidal, and permutation factors, respectively, constituting the decomposition. Alternatively, these can be extracted directly with:

```
Q, R, p = pqr(A)
```

but the factorized form is often more convenient as it also implements various arithmetic operations. For example, the commands:

```
x = rand(n)
y = F *x
z = F'*x
```

automatically invoke specialized multiplication routines to rapidly compute `y` and `z` using the low-rank structure of `F`.

The rank of the factorization can be retrieved by `F[:k]`, which in this example usually gives 26 or 27. The reason for this variability is that the default interface uses randomized Gaussian sketching for acceleration. Likewise, the actual approximation error is also random but can be efficiently and robustly estimated:

```
aerr = snormdiff(A, F)
rerr = aerr / snorm(A)
```

This computes the absolute and relative errors in the spectral norm using power iteration. Here, the relative error achieved should be on the order of machine epsilon. (You may see a warning about exceeding the iteration limit, but this is harmless in this case.)
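The power-iteration idea behind this estimate can be sketched in a few lines of numpy (illustrative only; `snorm`/`snormdiff` are the package's routines): iterating with `A'A` drives a random vector toward the top right singular vector, and the norm of `A` applied to it approaches the largest singular value.

```python
import numpy as np

def snorm_est(A, niter=50, seed=0):
    """Estimate the spectral norm of A by power iteration on A' A.

    The estimate converges to the largest singular value from below.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(niter):
        x = A.T @ (A @ x)   # one step of power iteration on A' A
        x /= np.linalg.norm(x)
    return np.linalg.norm(A @ x)

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
est = snorm_est(A)
assert abs(est - np.linalg.svd(A, compute_uv=False)[0]) < 1e-6
```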

The default interface requests ~1e-15 estimated relative precision. To request, say, only 1e-12 relative precision, use:

```
F = pqrfact(A, rtol=1e-12)
```

which returns a factorization of rank ~22. We can also directly control the rank instead with, e.g.:

```
F = pqrfact(A, rank=20)
```

Using both together as in:

```
F = pqrfact(A, rank=20, rtol=1e-12)
```

sets two separate termination criteria: one on reaching rank 20 and the other on achieving estimated relative precision 1e-12, with the computation completing as soon as either is fulfilled. Many other options are available as well. All keyword arguments can also be encapsulated into an `LRAOptions` type for convenience. For example, we can equivalently write the above as:

```
opts = LRAOptions(rank=20, rtol=1e-12)
F = pqrfact(A, opts)
```

For further details, see the Options section.

All aforementioned considerations also apply when the input is a linear operator, i.e., when the matrix is described not by its entries but by its action on vectors. To demonstrate, we can convert `A` to type `LinearOperator` as follows:

```
L = LinearOperator(A)
```

which inherits and stores methods for applying the matrix and its adjoint. (This command actually recognizes `A` as Hermitian and forms `L` as a `HermitianLinearOperator`.) A partial QR decomposition can then be computed with:

```
F = pqrfact(L)
```

just as in the previous case. Of course, there is no real benefit to doing so in this particular example; the advantage comes when considering complicated matrix products that can be represented implicitly as a single `LinearOperator`. For instance, `A*A` can be represented as `L*L` without ever forming the resultant matrix explicitly, and we can even encapsulate entire factorizations as linear operators to exploit fast multiplication:

```
L = LinearOperator(F)
```

Linear operators can be scaled, added, and composed together using the usual syntax. All methods in LowRankApprox transparently support both matrices and linear operators.

## Low-Rank Factorizations

We now detail the various low-rank approximations implemented, which all nominally return compact `Factorization` types storing the matrix factors in structured form. All such factorizations provide optimized multiplication routines. Furthermore, the rank of any factorization `F` can be queried with `F[:k]` and the matrix approximant defined by `F` can be reconstructed as `Matrix(F)`. For concreteness of exposition, assume in the following that `A` has size `m` by `n` with factorization rank `F[:k] = k`. Note that certain matrix identities below should be interpreted only as equalities up to the approximation precision.

### QR Decomposition

A partial QR decomposition is a factorization `A[:,p] = Q*R`, where `Q` is `m` by `k` with orthonormal columns, `R` is `k` by `n` and upper trapezoidal, and `p` is a permutation vector. Such a decomposition can be computed with:

```
F = pqrfact(A, args...)
```

or more explicitly with:

```
Q, R, p = pqr(A, args...)
```

The former returns a `PartialQR` factorization with access methods:

• `F[:Q]`: `Q` factor as type `Matrix`
• `F[:R]`: `R` factor as type `UpperTrapezoidal`
• `F[:p]`: `p` permutation as type `Vector`
• `F[:P]`: `p` permutation as type `ColumnPermutation`

Both `F[:R]` and `F[:P]` are represented as structured matrices, complete with their own arithmetic operations, and together permit the alternative approximation formula `A*F[:P] = F[:Q]*F[:R]`. The factorization form additionally supports least squares solution by left-division.

We can also compute a partial QR decomposition of `A'` (that is, pivoting on rows instead of columns) without necessarily constructing the matrix transpose explicitly by writing:

```
F = pqrfact(:c, A, args...)
```

and similarly with `pqr`. The default interface is equivalent to, e.g.:

```
F = pqrfact(:n, A, args...)
```

for "no transpose". It is also possible to generate only a subset of the partial QR factors for further efficiency; see Options.

The above methods do not modify the input matrix `A` and may make a copy of the data in order to enforce this (whether this is actually necessary depends on the type of input and the sketch method used). Potentially more efficient versions that reserve the right to overwrite `A` are available as `pqrfact!` and `pqr!`, respectively.

### Interpolative Decomposition (ID)

The ID is based on the approximation `A[:,rd] = A[:,sk]*T`, where `sk` is a set of `k` "skeleton" columns, `rd` is a set of `n - k` "redundant" columns, and `T` is a `k` by `n - k` interpolation matrix. It follows that `A[:,p] = C*V`, where `p = [sk; rd]`, `C = A[:,sk]`, and `V = [Matrix(I,k,k) T]`. An ID can be computed by:

```
V = idfact(A, args...)
```

or:

```
sk, rd, T = id(A, args...)
```

Here, `V` is of type `IDPackedV` and defines the `V` factor above but can also implicitly represent the entire ID via:

• `V[:sk]`: `sk` columns as type `Vector`
• `V[:rd]`: `rd` columns as type `Vector`
• `V[:p]`: `p` permutation as type `Vector`
• `V[:P]`: `p` permutation as type `ColumnPermutation`
• `V[:T]`: `T` factor as type `Matrix`

To actually produce the ID itself, use:

```
F = ID(A, V)
```

or:

```
F = ID(A, sk, rd, T)
```

which returns an `ID` factorization that can be directly compared with `A`. This factorization has access methods:

• `F[:C]`: `C` factor as type `Matrix`
• `F[:V]`: `V` factor as type `IDPackedV`

in addition to those defined for `IDPackedV`.
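The interpolation identity underlying the ID can be checked with a small numpy sketch (illustrative only; the skeleton indices here are chosen naively, whereas real ID codes select them via column-pivoted QR for stability):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 40, 30, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # exact rank k

# Take the first k columns as the skeleton; for this generic random
# matrix they are linearly independent and span the column space.
sk = np.arange(k)
rd = np.arange(k, n)
C = A[:, sk]

# Interpolation matrix: express the redundant columns in the skeleton basis.
T, *_ = np.linalg.lstsq(C, A[:, rd], rcond=None)

# Reconstruct: A[:, [sk; rd]] = C * [I T]
V = np.hstack([np.eye(k), T])
assert np.allclose(A[:, np.concatenate([sk, rd])], C @ V)
```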

As with the partial QR decomposition, an ID can be computed for `A'` instead (that is, finding skeleton rows as opposed to columns) in the same way, e.g.:

```
V = idfact(:c, A, args...)
```

The default interface is equivalent to passing `:n` as the first argument. Moreover, modifying versions of the above are available as `idfact!` and `id!`.

### Singular Value Decomposition (SVD)

A partial SVD is a factorization `A = U*S*V'`, where `U` and `V` are `m` by `k` and `n` by `k`, respectively, both with orthonormal columns, and `S` is `k` by `k` and diagonal with nonincreasing nonnegative real entries. It can be computed with:

``````F = psvdfact(A, args...)
``````

or:

``````U, S, V = psvd(A, args...)
``````

The factorization is of type `PartialSVD` and has access methods:

• `F[:U]`: `U` factor as type `Matrix`
• `F[:S]`: `S` factor as type `Vector`
• `F[:V]`: `V` factor as type `Matrix`
• `F[:Vt]`: `V'` factor as type `Matrix`

Note that the underlying SVD routine forms `V'` as output, so `F[:Vt]` is cheaper to extract than `F[:V]`. Least-squares solutions are also supported via left-division. Furthermore, if just the singular values are required, then we can use:

``````S = psvdvals(A, args...)
``````
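For intuition, a partial SVD of rank `k` can be mimicked with the stdlib by truncating a full SVD (an illustrative sketch only; the package avoids the full decomposition):

```julia
using LinearAlgebra, Random

Random.seed!(0)
m, n, k = 10, 7, 3
A = randn(m, k) * randn(k, n)      # exactly rank-k, so a rank-k SVD is exact

F = svd(A)
U, S, V = F.U[:, 1:k], F.S[1:k], F.V[:, 1:k]
@assert norm(A - U * Diagonal(S) * V') < 1e-8 * norm(A)
@assert issorted(S, rev=true)      # nonincreasing nonnegative singular values
```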

### Hermitian Eigendecomposition

A partial Hermitian eigendecomposition of an `n` by `n` Hermitian matrix `A` is a factorization `A = U*S*U'`, where `U` is `n` by `k` with orthonormal columns and `S` is `k` by `k` and diagonal with nondecreasing real entries. It is very similar to a partial Hermitian SVD and can be computed by:

``````F = pheigfact(A, args...)
``````

or:

``````vals, vecs = pheig(A, args...)
``````

where we have followed the Julia convention of letting `vals` denote the eigenvalues comprising `S` and `vecs` denote the eigenvector matrix `U`. The factorization is of type `PartialHermitianEigen` and has access methods:

• `F[:values]`: `vals` as type `Vector`
• `F[:vectors]`: `vecs` as type `Matrix`

It also supports least squares solution by left-division. If only the eigenvalues are desired, use instead:

``````vals = pheigvals(A, args...)
``````
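The structure of such a factorization can be illustrated with the stdlib `eigen` (a sketch under the assumption of an exactly rank-`k` positive-semidefinite matrix, not the package's routine):

```julia
using LinearAlgebra, Random

Random.seed!(0)
n, k = 8, 3
X = randn(n, k)
A = Symmetric(X * X')              # Hermitian (here real symmetric), rank k

vals, vecs = eigen(A)              # eigenvalues in ascending order
vals, vecs = vals[n-k+1:n], vecs[:, n-k+1:n]   # keep the k largest
@assert norm(A - vecs * Diagonal(vals) * vecs') < 1e-8 * norm(A)
```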

### CUR Decomposition

A CUR decomposition is a factorization `A = C*U*R`, where `C = A[:,cols]` and `R = A[rows,:]` consist of `k` columns and rows, respectively, from `A` and `U = inv(A[rows,cols])`. The basis rows and columns can be computed with:

``````U = curfact(A, args...)
``````

or:

``````rows, cols = cur(A, args...)
``````

The former is of type `CURPackedU` (or `HermitianCURPackedU` if `A` is Hermitian or `SymmetricCURPackedU` if symmetric) and has access methods:

• `U[:cols]`: `cols` columns as type `Vector`
• `U[:rows]`: `rows` rows as type `Vector`

To produce the corresponding CUR decomposition, use:

``````F = CUR(A, U)
``````

or:

``````F = CUR(A, rows, cols)
``````

which returns a `CUR` factorization (or `HermitianCUR` if `A` is Hermitian or `SymmetricCUR` if symmetric), with access methods:

• `F[:C]`: `C` factor as type `Matrix`
• `F[:U]`: `U` factor as type `Factorization`
• `F[:R]`: `R` factor as type `Matrix`

in addition to those defined for `CURPackedU`. If `F` is of type `HermitianCUR`, then `F[:R] = F[:C]'`, while if `F` has type `SymmetricCUR`, then `F[:R] = transpose(F[:C])`. Note that because of conditioning issues, `U` is not stored explicitly but rather in factored form, nominally as type `SVD` but practically as `PartialHermitianEigen` if `U` has type `HermitianCURPackedU` or `PartialSVD` otherwise (for convenient arithmetic operations).

Modifying versions of the above are available as `curfact!` and `cur!`.
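The CUR identity can be checked directly with the stdlib (an illustrative sketch: skeleton rows and columns are chosen here via pivoted QR for conditioning, which is not necessarily how the package selects them):

```julia
using LinearAlgebra, Random

Random.seed!(0)
m, n, k = 9, 7, 2
A = randn(m, k) * randn(k, n)              # exactly rank-k

cols = qr(A, ColumnNorm()).p[1:k]          # well-conditioned column skeleton
rows = qr(Matrix(A'), ColumnNorm()).p[1:k] # well-conditioned row skeleton
C, R = A[:, cols], A[rows, :]
U = inv(A[rows, cols])
@assert norm(A - C * U * R) < 1e-8 * norm(A)
```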

## Sketch Methods

Matrix sketching is a core component of this package and its proper use is critical for high performance. For an `m` by `n` matrix `A`, a sketch of order `k` takes the form `B = S*A`, where `S` is a `k` by `m` sampling matrix (see below). Sketches can similarly be constructed for sampling from the right or for multiplying against `A'`. The idea is that `B` contains a compressed representation of `A` up to rank approximately `k`, which can then be efficiently processed to recover information about `A`.

The default sketch method defines `S` as a Gaussian random matrix. Other sketch methods can be specified using the keyword `sketch`. For example, setting:

``````opts = LRAOptions(sketch=:srft, args...)
``````

or equivalently:

``````opts = LRAOptions(args...)
opts.sketch = :srft
``````

then passing to, e.g.:

``````V = idfact(A, opts)
``````

computes an ID with sketching via a subsampled random Fourier transform (SRFT). This can also be done more directly with:

``````V = idfact(A, sketch=:srft)
``````

A list of supported sketch methods is given below. To disable sketching altogether, use:

``````opts.sketch = :none
``````

In addition to its integration with low-rank factorization methods, sketches can also be generated independently by:

``````B = sketch(A, order, args...)
``````

Other interfaces include:

• `B = sketch(:left, :n, A, order, args...)` to compute `B = S*A`
• `B = sketch(:left, :c, A, order, args...)` to compute `B = S*A'`
• `B = sketch(:right, :n, A, order, args...)` to compute `B = A*S`
• `B = sketch(:right, :c, A, order, args...)` to compute `B = A'*S`
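The four interface forms correspond to the following dense products (a stdlib illustration with Gaussian sampling matrices; shapes only, not the package's internals):

```julia
using LinearAlgebra, Random

Random.seed!(0)
m, n, k = 6, 5, 3
A = randn(m, n)

B1 = randn(k, m) * A       # sketch(:left,  :n, ...): B = S*A,  k by n
B2 = randn(k, n) * A'      # sketch(:left,  :c, ...): B = S*A', k by m
B3 = A * randn(n, k)       # sketch(:right, :n, ...): B = A*S,  m by k
B4 = A' * randn(m, k)      # sketch(:right, :c, ...): B = A'*S, n by k
@assert size(B1) == (k, n) && size(B2) == (k, m)
@assert size(B3) == (m, k) && size(B4) == (n, k)
```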

We also provide adaptive routines to automatically sketch with increasing orders until a specified error tolerance is met, as detected by early termination of an unaccelerated partial QR decomposition. This adaptive sketching forms the basis for essentially all higher-level algorithms in LowRankApprox and can be called with:

``````F = sketchfact(A, args...)
``````

Like `sketch`, a more detailed interface is also available as:

``````F = sketchfact(side, trans, A, args...)
``````
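The adaptive strategy can be sketched as follows (a simplified stdlib illustration: `adaptive_range` is a hypothetical helper that doubles the sketch order until the residual meets a tolerance, whereas the package detects termination via an unaccelerated partial QR):

```julia
using LinearAlgebra, Random

# Hypothetical helper: double the sketch order until the residual passes rtol.
function adaptive_range(A; rtol=1e-8)
    n = size(A, 2)
    order = 2
    Q = Matrix(qr(A * randn(n, order)).Q)
    while norm(A - Q * Q' * A) > rtol * norm(A) && order < n
        order *= 2                     # increase the sketch order and retry
        Q = Matrix(qr(A * randn(n, order)).Q)
    end
    return Q, order
end

Random.seed!(0)
A = randn(30, 5) * randn(5, 20)        # numerical rank 5
Q, order = adaptive_range(A)
@assert norm(A - Q * Q' * A) <= 1e-8 * norm(A)
```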

### Random Gaussian

The canonical sampling matrix is a Gaussian random matrix with entries drawn independently from the standard normal distribution (or with real and imaginary parts each drawn independently if `A` is complex). To use this sketch method, set:

``````opts.sketch = :randn
``````

There is also support for power iteration to improve accuracy when the spectral gap (up to rank `k`) is small. This computes, e.g., `B = S*(A*A')^p*A` (or simply `B = S*A^(p + 1)` if `A` is Hermitian) instead of just `B = S*A`, with all intermediate matrix products orthogonalized for stability.

For generic `A`, Gaussian sketching has complexity `O(k*m*n)`. In principle, this can make it the most expensive stage of computing a fast low-rank approximation (though in practice it is still very effective), which motivates sketch methods with lower computational cost, addressed in part by the following techniques.
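One cycle of orthogonalized power iteration can be written out with the stdlib (a sketch in the transposed, range-finding orientation `Y = A*Ω`, not the package's internal routine):

```julia
using LinearAlgebra, Random

Random.seed!(0)
m, n, k = 20, 15, 3
A = randn(m, k) * randn(k, n)      # rank-k target matrix

Ω = randn(n, k + 4)                # slightly oversampled Gaussian sampler
Q = Matrix(qr(A * Ω).Q)            # basic sketch of the column space
Q = Matrix(qr(A' * Q).Q)           # power iteration: pass through A' ...
Q = Matrix(qr(A * Q).Q)            # ... and back, re-orthogonalizing each step
@assert norm(A - Q * Q' * A) < 1e-8 * norm(A)
```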

### Random Subset

Perhaps the simplest matrix sketch is just a random subset of rows or columns, with complexity `O(k*m)` or `O(k*n)` as appropriate. This can be specified with:

``````opts.sketch = :sub
``````

The linear growth in matrix dimension is obviously attractive, but note that this method can fail if the matrix is not sufficiently "regular", e.g., if it contains a few large isolated entries. Random subselection is only implemented for type `AbstractMatrix`.

### Subsampled Random Fourier Transform (SRFT)

An alternative approach based on imposing structure in the sampling matrix is the SRFT, which has the form `S = R*F*D` (if applying from the left), where `R` is a random permutation matrix of size `k` by `m`, `F` is the discrete Fourier transform (DFT) of order `m`, and `D` is a random diagonal unitary scaling. Due to the DFT structure, this can be applied in only `O(m*n*log(k))` operations (but beware that the constant is quite high). To use this method, set:

``````opts.sketch = :srft
``````

For real `A`, our SRFT implementation uses only real arithmetic by separately computing real and imaginary parts as in a standard real-to-real DFT. Only `AbstractMatrix` types are supported.
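The structure `S = R*F*D` can be made concrete with a dense (and hence slow, `O(m^2*n)`) illustration; the actual implementation applies `F` via an FFT and, for real input, works in real arithmetic:

```julia
using LinearAlgebra, Random

Random.seed!(0)
m, n, k = 8, 5, 3
A = randn(m, n)

F = [exp(-2π * im * (p - 1) * (q - 1) / m) for p in 1:m, q in 1:m]  # dense DFT matrix
D = Diagonal(exp.(2π * im * rand(m)))  # random diagonal unitary scaling
rows = randperm(m)[1:k]                # random row selection, i.e. R

B = (F * (D * A))[rows, :]             # SRFT sketch B = R*F*D*A, formed densely here
@assert size(B) == (k, n)
```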

### Sparse Random Gaussian

As a modification of Gaussian sketching, we provide also a "sparse" random Gaussian sampling scheme, wherein `S` is restricted to have only `O(m)` or `O(n)` nonzeros, depending on the dimension to be contracted. Considering the case `B = S*A` for concreteness, each row of `S` is taken to be nonzero in only `O(m/k)` columns, with full coverage of `A` maintained by evenly spreading these nonzero indices among the rows of `S`. The complexity of computing `B` is `O(m*n)`. Sparse Gaussian sketching can be specified with:

``````opts.sketch = :sprn
``````

and is only implemented for type `AbstractMatrix`. Power iteration is not supported since any subsequent matrix application would devolve back to having `O(k*m*n)` cost.
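The sparsity pattern described above can be sketched with the stdlib `SparseArrays` (an illustrative construction, not the package's sampler): each column of `S` is nonzero in exactly one row, so `S` has only `m` nonzeros and full coverage of `A`.

```julia
using LinearAlgebra, Random, SparseArrays

Random.seed!(0)
m, n, k = 12, 5, 3
A = randn(m, n)

# assign each of the m columns of S to one of the k rows, spread evenly
rows = [mod1(j, k) for j in 1:m]
S = sparse(rows, 1:m, randn(m), k, m)
B = S * A                              # costs O(m*n) since S has only m nonzeros
@assert nnz(S) == m && size(B) == (k, n)
```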

## Other Capabilities

We also provide a few other relevant algorithms, as follows. Let `A` be an `m` by `n` matrix.

### Partial Range

A basis for the partial range of `A` of rank `k` is an `m` by `k` matrix `Q` with orthonormal columns such that `A = Q*Q'*A`. Such a basis can be computed with:

``````Q = prange(A, args...)
``````

Fast range approximation using sketching is supported.

The default interface computes a basis for the column space of `A`. To capture the row space instead, use:

``````Q = prange(:c, A, args...)
``````

which is equivalent to computing the partial range of `A'`. The resulting matrix `Q` is `n` by `k` with orthonormal columns and satisfies `A = A*Q*Q'`. It is also possible to approximate both the row and column spaces simultaneously with:

``````Q = prange(:b, A, args...)
``````

Then `A = Q*Q'*A*Q*Q'`.

A possibly modifying version is available as `prange!`.
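The row-space variant can be illustrated with a stdlib randomized range finder (a sketch under the assumption of an exactly rank-`k` matrix, not the package's `prange` implementation):

```julia
using LinearAlgebra, Random

Random.seed!(0)
m, n, k = 10, 8, 3
A = randn(m, k) * randn(k, n)      # exactly rank-k

Q = Matrix(qr(A' * randn(m, k)).Q) # row-space basis, as from prange(:c, A)
@assert size(Q) == (n, k)
@assert norm(A - A * Q * Q') < 1e-8 * norm(A)
```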

### Spectral Norm Estimation

The spectral norm of `A` can be rapidly computed using randomized power iteration via:

``````err = snorm(A, args...)
``````

Similarly, the spectral norm difference of two matrices `A` and `B` can be computed with:

``````err = snormdiff(A, B, args...)
``````

which provides a convenient and efficient way to test the accuracy of our low-rank approximations.
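The idea behind randomized power iteration for the spectral norm can be sketched with the stdlib (`specnorm` is a hypothetical helper for illustration, not the package's `snorm`):

```julia
using LinearAlgebra, Random

# Power iteration on the Gram matrix A'A from a random start; the estimate
# converges to the largest singular value of A.
function specnorm(A; niter=1000)
    x = normalize!(randn(size(A, 2)))
    s = 0.0
    for _ in 1:niter
        y = A' * (A * x)           # one power step on A'A
        s = sqrt(norm(y))          # since x is normalized, norm(y) -> σ₁²
        x = y / norm(y)
    end
    return s
end

Random.seed!(0)
A = randn(20, 10)
@assert abs(specnorm(A) - opnorm(A)) < 1e-6 * opnorm(A)
```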

## Core Algorithm

The underlying algorithm behind LowRankApprox is the pivoted QR decomposition, with the magnitudes of the pivots providing an estimate of the approximation error incurred at each truncation rank. Here, we use an early-terminating variant of the LAPACK routine GEQP3. The partial QR decomposition so constructed is then leveraged into an ID to support the various other factorizations.
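The pivot-based truncation can be sketched with the stdlib pivoted QR (an illustration of the error-estimation idea; the package's variant terminates early inside GEQP3 rather than truncating afterward):

```julia
using LinearAlgebra, Random

Random.seed!(0)
A = randn(20, 4) * randn(4, 15) + 1e-10 * randn(20, 15)  # numerical rank 4

F = qr(A, ColumnNorm())            # column-pivoted QR (LAPACK GEQP3)
d = abs.(diag(F.R))                # pivot magnitudes, nonincreasing
k = count(>(5e-8 * d[1]), d)       # truncation rank at relative tolerance 5e-8
@assert k == 4
```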

Optional determinant-maximization post-processing is also available to obtain a (strong) rank-revealing QR (RRQR) decomposition. This ensures near-optimal column pivots and can further improve numerical precision and stability.

## Options

Numerous options are exposed by the `LRAOptions` type, which we will cover by logical function below.

### Accuracy Options

The accuracy of any low-rank approximation (in the spectral norm) is controlled by the following parameters:

• `atol`: absolute tolerance of approximation (default: `0`)
• `rtol`: relative tolerance of approximation (default: `5*eps()`)
• `rank`: maximum rank of approximation (default: `-1`)

Each parameter specifies an independent termination criterion; the computation completes when any of them are met. Currently, `atol` and `rtol` are checked against QR pivot magnitudes and thus accuracy can only be approximately guaranteed, though the resulting errors should be of the correct order.

Iterative RRQR post-processing is also available:

• `maxdet_niter`: maximum number of iterations for determinant maximization (default: `-1`)
• `maxdet_tol`: relative tolerance for determinant maximization (default: `-1`)

If `maxdet_tol < 0`, no post-processing is done; otherwise, as above, each parameter specifies an independent termination criterion. These options have an impact on all factorizations (i.e., not just QR) since they all involve, at some level, approximations based on the QR. For example, computing an ID via an RRQR guarantees that the interpolation matrix `T` satisfies `maxabs(T) < 1 + maxdet_tol` (assuming no early termination due to `maxdet_niter`).

The parameters `atol` and `rtol` are also used for the spectral norm estimation routines `snorm` and `snormdiff` to specify the requested precision of the (scalar) norm output.

### Sketching Options

The following parameters govern matrix sketching:

• `sketch`: sketch method, one of `:none`, `:randn` (default), `:srft`, `:sub`, or `:sprn`
• `sketch_randn_niter`: number of power iterations for Gaussian sketching (default: `0`)
• `sketchfact_adap`: whether to compute a sketched factorization adaptively by successively doubling the sketch order (default: `true`); setting this to `false` takes effect only if `rank >= 0`, in which case a single sketch of order `rank` is (partially) factorized
• `sketchfact_randn_samp`: oversampling function for Gaussian sketching (default: `n -> n + 8`)
• `sketchfact_srft_samp`: oversampling function for SRFT sketching (default: `n -> n + 8`)
• `sketchfact_sub_samp`: oversampling function for subset sketching (default: `n -> 4*n + 8`)

The oversampling functions take as input a desired approximation rank and return a corresponding sketch order designed to be able to capture it with high probability. No oversampling function is used for sparse random Gaussian sketching due to its special form.

### Other Options

Other available options include:

• `nb`: computational block size, used in various settings (default: `32`)
• `pheig_orthtol`: eigenvalue relative tolerance to identify degenerate subspaces, within which eigenvectors are re-orthonormalized (to work around a LAPACK issue; default: `sqrt(eps())`)
• `pqrfact_retval`: string containing keys indicating which outputs to return from `pqrfact` (default: `"qr"`)
• `"q"`: orthonormal `Q` matrix
• `"r"`: trapezoidal `R` matrix
• `"t"`: interpolation `T` matrix (for ID)
• `snorm_niter`: maximum number of iterations for spectral norm estimation (default: `32`)
• `verb`: whether to print verbose messages, used sparingly (default: `true`)

Note that `pqrfact` always returns the permutation vector `p` so that no specification is needed in `pqrfact_retval`. If `pqrfact_retval = "qr"` (in some order), then the output factorization has type `PartialQR`; otherwise, it is of type `PartialQRFactors`, which is simply a container type with no defined arithmetic operations. All keys other than `"q"`, `"r"`, and `"t"` are ignored.

## Computational Complexity

Below, we summarize the leading-order computational costs of each factorization function depending on the sketch type. Assume an input `AbstractMatrix` of size `m` by `n` with numerical rank `k << min(m, n)` and `O(1)` cost to compute each entry. Then, first, for a non-adaptive computation (i.e., `k` is known essentially a priori):

The cost given for the ID is for the default column-oriented version; to obtain the operation count for a row-oriented ID, simply switch the roles of `m` and `n`. Note also that `pheig` is only applicable to square matrices, i.e., `m = n`.

All of the above remain unchanged for `sketchfact_adap = true` with the exception of the following, in which case the costs become:

• `sketch = :srft`: `m*n*log(k)^2`
• `sketch = :sprn`: `m*n*log(k)`

uniformly across all functions.

Author: JuliaMatrices
Source Code: https://github.com/JuliaMatrices/LowRankApprox.jl


## Laravel 7 Multiple Image Upload with Preview

Here, I will show you how to upload multiple images with previews using Ajax in Laravel.

## Laravel 7 Ajax Multiple Image Upload with Preview

Just follow the steps below to upload multiple images using Ajax, with previews, in Laravel applications:

• Install Laravel Fresh Setup
• Setup Database Credentials
• Create Route
• Generate Controller By Command