How To Set Up and Upload Files to Amazon S3 in Laravel

Streaming Files to Amazon S3

Laravel already ships with all the tools needed to upload a file to Amazon S3. If you're not familiar with them, take a look at the putFile and putFileAs methods on the Storage facade. With either of these two methods, Laravel will automatically manage streaming a file to a storage location, such as Amazon S3. All you need to do is something like this:

Storage::disk('s3')->putFile('photos', new File('/path/to/photo'));
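
If you need control over the name the file is stored under, putFileAs works the same way; a minimal sketch (the snippet assumes these imports):

use Illuminate\Http\File;
use Illuminate\Support\Facades\Storage;

// Same automatic streaming as putFile, but you choose the stored filename
Storage::disk('s3')->putFileAs('photos', new File('/path/to/photo'), 'photo.jpg');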

Streaming a file to S3 may take a long time, depending on the network speed. Even though putFile and putFileAs stream the file in segments and won't consume a lot of memory, the operation can still take long enough to cause timeouts. That's why it's recommended to use queued jobs for this operation.

Using Queued Jobs

Queues allow you to defer the processing of time-consuming tasks. Deferring them drastically speeds up web requests to your application.

We will use two separate queued jobs, one to encrypt the file and another one to upload the encrypted file to Amazon S3.

In Laravel, you can chain queued jobs so that the jobs will run in sequence. This way, we can start uploading the file to S3 immediately after the file has been encrypted.
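
The general pattern looks like this (a sketch with hypothetical FirstJob and SecondJob names; we'll dispatch our real jobs from the controller later):

FirstJob::withChain([
    new SecondJob($filename),
])->dispatch($filename);

Here, SecondJob only runs once FirstJob has completed successfully.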

Let’s Start Coding

In this tutorial, we will build the encryption and S3 upload functionality on top of the app created in our previous tutorial.

As a quick recap, we have built a simple app where users can log in and upload files that will be encrypted as soon as the upload finishes.

Configure Amazon S3

First, you will need to configure S3 on the Amazon side and create a bucket where we will store the encrypted files. This tutorial does a great job of explaining how to create a bucket, add the proper policies, associate an IAM user with it, and add the AWS variables to your .env file.
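
For reference, these are the AWS variables that Laravel's default s3 disk reads from your .env file (the values below are placeholders):

AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket-name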

As per the Laravel docs, we also need to install the Flysystem adapter package via Composer:

composer require league/flysystem-aws-s3-v3

We also need to install an additional package for a cached adapter, which caches filesystem metadata so that repeated lookups don't result in extra calls to the S3 API:

composer require league/flysystem-cached-adapter
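
Installing the package alone isn't enough: to enable caching, you add a cache directive to the s3 disk in config/filesystems.php. A minimal sketch based on the Laravel docs, assuming a memcached cache store (use whichever cache store your app has configured):

's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),

    // Cache filesystem metadata (existence checks, listings) for 10 minutes
    'cache' => [
        'store' => 'memcached',
        'expire' => 600,
        'prefix' => 'cache-prefix',
    ],
],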

Creating Queueable Jobs

Next, let's create the two queueable jobs that we'll use for encryption and for uploading to S3:

php artisan make:job EncryptFile

php artisan make:job MoveFileToS3

This will create two files in app/Jobs: EncryptFile.php and MoveFileToS3.php. Both jobs accept a parameter in the constructor, representing the filename, and the encryption and uploading functionality goes in each job's handle method. This is what the two jobs look like:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use SoareCostin\FileVault\Facades\FileVault;

class EncryptFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $filename;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($filename)
    {
        $this->filename = $filename;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        FileVault::encrypt($this->filename);
    }
}
And this is the MoveFileToS3 job:

<?php

namespace App\Jobs;

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Http\File;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;

class MoveFileToS3 implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $filename;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($filename)
    {
        $this->filename = $filename . '.enc';
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        // Upload file to S3
        $result = Storage::disk('s3')->putFileAs(
            '/',
            new File(storage_path('app/' . $this->filename)),
            $this->filename
        );

        // Forces collection of any existing garbage cycles
        // If we don't add this, in some cases the file remains locked
        gc_collect_cycles();

        if ($result === false) {
            throw new Exception("Couldn't upload file to S3");
        }

        // delete file from local filesystem
        if (!Storage::disk('local')->delete($this->filename)) {
            throw new Exception('File could not be deleted from the local filesystem');
        }
    }
}

As you can see, the EncryptFile job is simple — we are just using the FileVault package to encrypt a file and save it into the same directory, with the same name and the .enc extension. It’s exactly what we were doing before, in the HomeController’s store method.

For the MoveFileToS3 job, we are first using the Laravel putFileAs method that will automatically stream our file to S3, following the same directory convention as we had on the local filesystem.

We are then calling the PHP gc_collect_cycles function in order to force collection of any existing garbage cycles. In some cases, if we don't run this function, the file will remain locked and we won't be able to delete it in the next step.

Finally, we are deleting the file from the filesystem and throwing Exceptions if the upload or the delete processes fail.
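
As a side note, if you want to react when one of these jobs fails permanently (after exhausting its retries), Laravel lets you define a failed method directly on the job class. A minimal sketch for MoveFileToS3 (the logging is just an example and assumes the Log facade is imported):

public function failed(Exception $exception)
{
    // Called once the job has failed for good, so the upload isn't lost silently
    Log::error('Could not move ' . $this->filename . ' to S3: ' . $exception->getMessage());
}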

Updating the Controller

Now let’s update the HomeController.php file to match the new functionality.

Instead of encrypting the file inline in the store method using the FileVault package, we dispatch the newly created queued jobs, chained together:

EncryptFile::withChain([
    new MoveFileToS3($filename),
])->dispatch($filename);

Next, in the index method, we send both the local files and the S3 files of a user to the view, so we can display the files that are still being encrypted and streamed to S3 alongside the files that are already encrypted and stored in S3:

$localFiles = Storage::files('files/' . auth()->user()->id);
$s3Files = Storage::disk('s3')->files('files/' . auth()->user()->id);

return view('home', compact('localFiles', 's3Files'));

We also update our downloadFile method, specifying that we want to download and stream the file from S3 instead of the local filesystem. We just chain a disk('s3') call to both the Storage and FileVault facades.

This is what the HomeController.php file looks like:

<?php

namespace App\Http\Controllers;

use App\Jobs\EncryptFile;
use App\Jobs\MoveFileToS3;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use SoareCostin\FileVault\Facades\FileVault;

class HomeController extends Controller
{
    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('auth');
    }

    /**
     * Show the application dashboard.
     *
     * @return \Illuminate\Contracts\Support\Renderable
     */
    public function index()
    {
        $localFiles = Storage::files('files/' . auth()->user()->id);
        $s3Files = Storage::disk('s3')->files('files/' . auth()->user()->id);

        return view('home', compact('localFiles', 's3Files'));
    }

    /**
     * Store a user uploaded file
     *
     * @param  \Illuminate\Http\Request $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        if ($request->hasFile('userFile') && $request->file('userFile')->isValid()) {
            $filename = Storage::putFile('files/' . auth()->user()->id, $request->file('userFile'));

            // check if we have a valid file uploaded
            if ($filename) {
                EncryptFile::withChain([
                    new MoveFileToS3($filename),
                ])->dispatch($filename);
            }
        }

        return redirect()->route('home')->with('message', 'Upload complete');
    }

    /**
     * Download a file
     *
     * @param  string  $filename
     * @return \Illuminate\Http\Response
     */
    public function downloadFile($filename)
    {
        // Basic validation to check if the file exists and is in the user directory
        if (!Storage::disk('s3')->has('files/' . auth()->user()->id . '/' . $filename)) {
            abort(404);
        }

        return response()->streamDownload(function () use ($filename) {
            FileVault::disk('s3')->streamDecrypt('files/' . auth()->user()->id . '/' . $filename);
        }, Str::replaceLast('.enc', '', $filename));
    }

}

Updating the View

The last thing we need to do is update the home.blade.php view file, so that we can display not only the user files that have been encrypted and are stored to S3 but also the files that are being encrypted and uploaded to S3 at that moment.

Note: You can make this step much more engaging by using JavaScript to show a spinning icon for the files that are being encrypted and streamed to S3, and refreshing the table once the files have been uploaded. Because we want to keep this tutorial focused on deferring the encryption and S3 upload to a separate process, we'll stick to a basic solution that requires a manual refresh to see any updates to the queued jobs' status.

<h4>Your files</h4>
<ul class="list-group">
    @forelse ($s3Files as $file)
        <li class="list-group-item">
            <a href="{{ route('downloadFile', basename($file)) }}">
                {{ basename($file) }}
            </a>
        </li>
    @empty
        <li class="list-group-item">You have no files</li>
    @endforelse
</ul>

@if (!empty($localFiles))
<hr />
<h4>Uploading and encrypting...</h4>
<ul class="list-group">
    @foreach ($localFiles as $file)
        <li class="list-group-item">
            {{ basename($file) }}
        </li>
    @endforeach
</ul>
@endif

Queue Configuration

If you haven't made any changes to the queue configuration, you are most likely using the synchronous (sync) driver that is set by default in Laravel. This driver executes jobs immediately and is designed for local use. However, we want to see how deferring our two queued jobs will work in production, so we will configure the queues to work with the [database](https://laravel.com/docs/6.x/queues#driver-prerequisites) driver.

In order to use the database queue driver, you will need a database table to hold the jobs. To generate a migration that creates this table, run the queue:table Artisan command. Once the migration has been created, you may migrate your database using the migrate command:

php artisan queue:table

php artisan migrate 

The last step is updating your QUEUE_CONNECTION variable in your .env file to use the database driver:

QUEUE_CONNECTION=database

Running the Queue Worker

Next, we need to run the queue worker. Laravel includes a queue worker that will process new jobs as they are pushed onto the queue. You may run the worker using the queue:work Artisan command, and you can specify the maximum number of times a job should be attempted using the --tries switch:

php artisan queue:work --tries=3
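
One thing to keep in mind: streaming a large file to S3 can take longer than the worker's default 60-second job timeout, so you may also want to raise it with the --timeout switch, for example:

php artisan queue:work --tries=3 --timeout=300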

Time to Test

We’re now ready to test our changes. Once you upload a file, you should see that the file is immediately displayed in the “Uploading and encrypting…” section.

If you switch to the terminal where you initiated the queue worker, you should see that the jobs are starting in sequence. Once both jobs are completed, the file should be found in S3 and no longer in the local filesystem.

Refreshing the user dashboard after the jobs have finished should display the file in the "Your files" section, with a link to stream-download it from S3.

You can find the entire Laravel app in this GitHub repo and the changes made above in this commit.

Thank you for reading!
