Is it secure to keep my webpages (PHP pages) inside the public_html folder on a production server?

This is my first PHP project and I'm going to make it live very soon. Recently I read some articles about not keeping PHP scripts inside the public folder, because if the server is misconfigured the scripts might be served as plain text, which is a big security concern if they contain sensitive information (like DB credentials). But I believe I shouldn't be concerned, as my PHP pages mainly consist of multiple include/require statements. Here is an example:

home.php

<?php
require_once ('../resources/app_config.php');
require_once ('../resources/includes/functions.php');
require_once ('../resources/includes/header.php');
?>
<body>
The body elements...
</body>
<?php
require_once ('../resources/includes/footer.php');
?>

Here is the directory structure of my project:

resources
|___ app_config.php
|___ includes
|    |___ functions.php
public_html
|___ css_dir
|___ js_dir
|___ images_dir
|___ index.php
|___ home.php
|___ profile.php

So my question is: should I be concerned about moving my PHP pages out of the public folder, or is there nothing to worry about? Thank you.

How to Secure Credentials for PHP with Docker

A simple configuration file is all it takes to set up an environment, and it can be used to rebuild that environment at any time. So, if we move forward with current technology, we need a way to secure our credentials in a Docker-based environment that makes use of PHP-FPM and Nginx.

Environment Overview

Before we dive into the configurations and files required to make this all happen, I wanted to take a brief look at the pieces involved here and how they'll fit together. I've mentioned some of them above in passing but here's the list all in one place:

  1. Docker to build and manage the environment
  2. Nginx to handle the web requests and responses
  3. PHP-FPM to parse and execute the PHP for the request
  4. Vault to store and manage the secrets

I'll also be making use of a simple Vault client - psecio/vaultlib - to make the requests to Vault for the secrets. With a combination of these technologies and a bit of configuration, getting a working system isn't too difficult.

Protecting Credentials

There are several ways to get secrets into a Docker-based environment, some being more secure than others. Here's a list of some of these options and their pros and cons:

Passing them in as command-line options

One option that Docker allows is the passing in of values on the command-line when you're bringing up the container. For example, if you wanted to execute a command inside of a container and pass in values that become environment variables, you could use the -e option:

docker run -e "test=foo" whoami

In this command, we're executing the whoami command and passing in an environment variable of test with a value of foo. While this is useful, it's limited to only being used in single commands and not in the environment as a whole when it starts up. Additionally, when you run a command on the command-line, the command and all of its arguments could show up in the process list. This would expose the plain-text version of the variable to anyone with access to the server.

Using Docker "secrets"

Another option that's one of the most secure in the list is the use of Docker's own "secrets" handling. This functionality allows you to store secret values inside of an encrypted storage location but still allows them to be accessed from inside of the Docker containers. You use the docker secret command to set the value and grant access to the services that should have access. Their documentation has several examples of setting it up and how to use it in more real-world situations (such as a WordPress blog).

While this storage option is one of the better ones, it also comes with a caveat: it can only be used in a Docker Swarm situation. Docker Swarm is functionality built into Docker that makes it easier to manage a cluster of Docker instances rather than just one. If you're not using Swarm mode, you're out of luck on using this "secrets" storage method.
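When Swarm secrets are in use, Docker mounts each secret granted to a service as an in-memory file under /run/secrets/ inside the container. As a sketch (the helper name and the db_password secret are my own, not from Docker's API), reading one from PHP looks like this:

```php
<?php
// Docker Swarm mounts each secret granted to a service as a file at
// /run/secrets/<secret-name>. This hypothetical helper reads one,
// returning null when the secret isn't available.
function readDockerSecret(string $name, string $base = '/run/secrets'): ?string
{
    $path = $base . '/' . $name;
    return is_readable($path) ? trim(file_get_contents($path)) : null;
}

// e.g. $dbPassword = readDockerSecret('db_password');
```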

Hard-coding them in the docker-compose configuration

There's another option with Docker Compose to get values pushed into the environment as variables: through settings in the docker-compose.yml configuration file.
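For completeness, here's a sketch of what hard-coding looks like (the variable name is just an example). The obvious downside is that the value sits in plain text in a file that usually ends up in version control, which makes this the least secure option of the three:

```yaml
# docker-compose.yml (sketch) - value hard-coded in the environment section
services:
  php:
    image: php:7-fpm
    environment:
      - DB_PASSWORD=plaintext-password-here   # visible to anyone who can read this file
```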

The Setup

Before I get too far along in the setup, I want to outline the file and directory structure of what we'll be working with. There are several configuration files involved and I wanted to call them out so they're all in place.

For the examples, we'll be working in a project1/ directory which will contain the following files:

  • docker-compose.yml
  • www.conf
  • site.conf
  • .env

Starting with Docker

To start, we need to build out the environment our application is going to live in. This is a job for Docker or, more specifically Docker Compose. For those not familiar with Docker Compose, you can think of it as a layer that sits on top of Docker and makes building out the environments simpler than having a bunch of Dockerfile configuration files lying around. It joins the different containers together as "services" and provides a configuration structure that abstracts away much of the manual commands that just using the docker command line tool would require.

In a Compose configuration file, you define the "services" that you want to create and various settings about them. For example, if we just wanted to create a simple server with Nginx running on port 8080, we could create a docker-compose.yml configuration like this:

version: '2'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

Easy, right? You can create the same kind of thing with just Dockerfile configurations but Compose makes it a bit simpler.

The docker-compose.yml configuration

Using this structure we're going to create our environment that includes:

  • A container running Nginx that mounts our code/ directory to its document root
  • A container running PHP-FPM (PHP 7) to handle the incoming PHP requests (linked to the Nginx container)
  • The Vault container that runs the Vault service (linked to the PHP container)

Here's what that looks like:

version: '2'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
    links:
      - php

  php:
    image: php:7-fpm
    volumes:
      - ./www.conf:/usr/local/etc/php-fpm.d/www.conf
      - ./code:/code
    environment:
      - VAULT_KEY=${VAULT_KEY}
      - VAULT_TOKEN=${VAULT_TOKEN}
      - ENC_KEY=${ENC_KEY}

  vault:
    image: vault:latest
    links:
      - php
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200

Let's walk through this so you can understand each part. First we create the web service - this is our Nginx container, built from the nginx:latest image. It then defines the ports to use, mapping port 8080 on the host machine to port 80 inside the container (the default port for HTTP). The volumes section defines two things to mount from the local system into the container: our code/ directory and the site.conf file that's copied over to the Nginx configuration path of /etc/nginx/conf.d/site.conf. Finally, in the links section, we tell Docker that we want to link the web and php containers so they're aware of each other. This link makes it possible for the Nginx configuration to call PHP-FPM as a handler on *.php requests. The contents of the site.conf file are explained in a later section of this article.

Next is the php service. This service installs from the php:7-fpm image, loading in the latest version of PHP-FPM that uses a 7.x version. Again we have a volumes section that copies over the code/ to the container but this time we're moving in a different configuration file: the www.conf configuration. This is the configuration PHP-FPM uses when processing PHP requests. More on this configuration will be shared later too.

What about the environment settings in the php service, you might be asking. Don't worry, I'll get to those later but those are one of the keys to how we'll be getting values from Docker pushed into the service containers for later use.

Finally, we get to the vault service. This service uses the vault:latest image to pull in the latest version of the Vault container and runs the setup process. There's also a link over to the php service so that Vault and PHP can talk. The last part there, the environment setting, is just a Vault-specific setting so that we know a predictable address and port to access the Vault service from PHP.

The site.conf configuration (Nginx)

I mentioned this configuration before when walking through the docker-compose.yml configuration, but let's get into a bit more detail. First, here's the contents of our site.conf:

server {
    index index.php index.html;
    server_name php-docker.local;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

If you've ever worked with PHP-FPM and Nginx before, this configuration probably looks pretty similar. It sets up the server configuration (with the hostname php-docker.local - point this at 127.0.0.1 in /etc/hosts) to hand off any requests for .php scripts to PHP-FPM via FastCGI. Our index setting lets us use either an index.php or an index.html file for the base without having to specify it in the URL. Pretty simple, right?

When we fire up Docker Compose this configuration will be copied into the container at the /etc/nginx/conf.d/site.conf path. With that settled, we'll move on to the next file: the PHP-FPM configuration.

The www.conf configuration (PHP-FPM)

This configuration sets up how the PHP-FPM process behaves when Nginx passes the incoming request over to it. I've reduced down the contents of the file (removing extra comments) to help make it clearer here. Here are the contents of the file:

[www]
user = www-data
group = www-data

listen = 9000

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

clear_env = no

env[VAULT_KEY] = $VAULT_KEY
env[VAULT_TOKEN] = $VAULT_TOKEN

While most of this configuration is default settings, there are a few things to note here, starting with the clear_env line. PHP-FPM, by default, will not import environment variables that were set when the process started up. Setting clear_env to no tells it to keep those values and make them accessible to the PHP process. In the next few lines, a few values are manually defined with the env[] directive. These are variables that come from the environment and are then passed along to the PHP process as $_ENV values.

If you're paying attention, you might notice how things are starting to line up between the configuration files and how environment variables are being passed around.

This configuration will be copied into place by Compose to the /usr/local/etc/php-fpm.d/www.conf path. With this file in place, we get to the last piece of the puzzle: the .env file.
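Once PHP-FPM has imported them, the values can be read from PHP like any other environment variable. A minimal sketch (the requiredEnv helper is my own, not part of any library):

```php
<?php
// Sketch: fetch a required value that PHP-FPM imported from its environment.
// With clear_env = no the value appears in $_ENV under FPM;
// getenv() covers other SAPIs such as the CLI.
function requiredEnv(string $name): string
{
    $value = $_ENV[$name] ?? getenv($name);
    if ($value === false || $value === null || $value === '') {
        throw new RuntimeException("$name is not set in the environment");
    }
    return $value;
}

// e.g. $vaultToken = requiredEnv('VAULT_TOKEN');
```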

The .env configuration (Environment variables)

One of the handy things about Docker Compose is its ability to read from a default .env file when the build/up commands are run and automatically import them. In this case we have a few settings that we don't want to hard-code in the docker-compose.yml configuration and don't want to hard-code in our actual PHP code:

  • the key to seal/unseal the Vault
  • the token used to access the Vault API
  • the key used for the encryption of configuration values

We can define these in our .env file in the base project1/ directory:

VAULT_KEY=[ key to use for locking/unlocking ]
VAULT_TOKEN=[ token to use for API requests]

Obviously, you'll want to replace the [...] strings with your values when creating the files.

NOTE: DO NOT use the root token and key in a production environment. They're used here only for example purposes, to avoid getting into further setup and configuration of other credentials on the Vault instance. See the Vault documentation for more information about its authentication method options.

One of the tricky things to note here is that, when you (re)build the Vault container, it starts from scratch and will drop any users you've created (and even reset the root key/token). The key here is to grab these values once the environment is built, put them into the project1/.env and then rebuild the php service to pull the new environment values in:

docker-compose up -d --build php

It's all about the code

Alright, now that we've worked through the four configuration files needed to set up the environment, we need to talk about code. In this case, it's the PHP code that lives in project1/code/. Since we're going to keep this super simple, the example will only have one file: index.php. The basic idea behind the code is to extract the secret values we'll need in our application from the Vault server. Since we're going to use the psecio/vaultlib library, we need to install it via Composer:

composer require psecio/vaultlib

If you run that on your local system in the project1/code/ directory, it will set up the vendor/ directory with everything you need. Since the code/ directory is mounted as a volume on the php service, it will pull it from the local version when you make the web request.

With this installed, we can then initialize our Vault connection and set our first value:

<?php
require_once __DIR__.'/vendor/autoload.php';

$accessToken = $_ENV['VAULT_TOKEN'];
$baseUrl = 'http://vault:8200';

$client = new \Psecio\Vaultlib\Client($accessToken, $baseUrl);

// If the vault is sealed, unseal it
if ($client->isSealed() == true) {
    $client->unseal($_ENV['VAULT_KEY']);
}

// Now set our secret value for "my-secret"
$result = $client->setSecret('my-secret', ['testing1' => 'foo']);
echo 'Result: '.var_export($result, true);

?>

Now if you make a request to the local instance on port 8080 and all goes well, you should see the message "Result: true". If you see exceptions there might be something up with the container build. You can use docker-compose down to destroy all of the current instances and then docker-compose build; docker-compose up to bring them all back up. If you do this, be sure to swap out the Vault token and key and rebuild the php service.

In the code above we create an instance of the Psecio\Vaultlib\Client and pass in our token pulled from an environment variable. This variable exists because of a few special lines in our configuration file. Here's the flow:

  1. The values are set in the .env file for Docker to pull in.
  2. Those values are pushed into the php container using the environment section in the docker-compose.yml configuration.
  3. The PHP-FPM configuration then imports the environment variables and makes them available for use in the $_ENV superglobal.

These secrets exist in-memory in the containers and don't have to be written to a file inside of the container itself where they could potentially be compromised at rest. Once the Docker containers have started up, the .env file can be removed without impacting the values inside of the containers.

The tricky part here is that, if you remove the .env file once the containers are up and running, you'll need to put it back if there's ever a need to run the build command again.
But why is this good?

I started this article off by giving examples of a few methods you could use for secret storage when Docker is in use but they all had rather large downsides. With this method, there's a huge plus that you won't get with the other methods: the secrets defined in the .env file will only live in-memory but are still accessible to the PHP processes. This provides a pretty significant layer of protection for them and makes it more difficult for an attacker to access them directly.

I will say one thing, however. Much like the fact that nothing is 100% secure, this method isn't either. It does protect the secrets by not requiring them to be sitting at rest somewhere but it doesn't prevent the $_ENV values from being accessed directly. If an attacker were able to perform a remote code execution attack - tricking your application to run their code - they would be able to access these values.

Unfortunately, because of the way that PHP works there's not a very good built-in method for protecting values. That's why Vault is included in this environment. It's designed specifically to store secret values and protect them at rest. By only passing in the token and key to access it, we're reducing the risk level of the system overall. Vault also includes controls to let you fine-tune the access levels of your setup. This would allow you to do something like creating a read-only user your application can use. Even if there was a compromise, at least your secret values would be protected from change.

Hopefully, with the code, configuration and explanation I've provided here, you have managed to get an environment up and running and can use it to test out your own applications and secrets management.

I hope this tutorial helps you - if you liked it, please consider sharing it with others.

This post was originally published here

Tips for Writing Clean and Secure PHP Code

Any code when written in a clean, easy to understand and formatted way is readily accepted and acclaimed by one and all. It is essential that the codes we write should be able to be understood by all, because the same programmers need not necessarily work on the same set of codes always. For easy identification and understanding of the codes for any programmer who works on it later, it is essential that the codes are structured, clean, secured and easily maintainable.

Explained below are a few of the best practices that are followed to maintain clean and easy-to-understand PHP code. They are not listed in any order of importance - all of the practices mentioned are essential and carry equal weight:

1. Commenting on every important action performed is essential.

This not only helps in easy identification of the need of that particular code, but also gives a neat look to the codes as well.

// Function for login checking
if (!$user_login) {
    header("Location: https://www.macronimous.com/");
    die();
}

2. Avoid unwanted usage of conditional statements:

This not only increases the execution time but also makes the coding long and complex.
For example,

<?php
if ($condition1 == true) {
    // code which satisfies the above condition
} else {
    die(); // or exit();
}
?>

The same set of codes can be written as:

<?php
if (!$condition1) {
    // display warning message.
    die("Invalid statement");
}
?>

This reduces the execution time and also makes the codes easily maintainable.

The same can also be written as

<?php
$response_text = ( $action == "edit" ) ?  "the action equals edit" : "the action does not equal edit";
echo $response_text;
?>

Here ternary operators have been used, instead of using conditional statements, to simplify the coding further.

3. Code indentation, in order to highlight statement beginnings and endings.

<?php
if (mysql_num_rows($res) > 0) {
    while ($a = mysql_fetch_object($res)) {
        echo $a->first_name;
    } // end of while loop
} // end of if condition
?>

4. Avoid unwanted HTML tags in the PHP code:

In the example given below, the PHP interpreter has to process every line of the code and execute each echo call, which is time-consuming.

For example:

<?php
echo "<table>";
echo "<tr>";
echo "<td>";
echo "Hai welcome to php";
echo "</td>";
echo "</tr>";
echo "</table>";
?>

Instead of the above we can simply say,

<html>
<body>
<table>
<tr>
<td><?php echo "Hai welcome to php"; ?></td>
</tr>
</table>
</body>
</html>

Here the PHP interpreter executes only the server-side code (the single echo in the example) instead of generating the HTML tags, since the HTML markup is passed through untouched. This cuts down unnecessary work for the PHP interpreter, thereby saving execution time.

5. Clear code when assigning values to MySQL arguments:

For example

$sql="select first_name,last_name,email_address from tbl_user where user_id=".$user_id." and member_type='".$member_type."'";

mysql_query($sql);

In the above example, you can see that the PHP values are included in the query condition. Also there are lots of concatenations done to the variables within the query.

Instead,

$sql = "select first_name,last_name,email_address from tbl_user where user_id=%d and member_type='%s'";

mysql_query(sprintf($sql, $user_id, $member_type));

By using this query, the values are automatically assigned to the appropriate positions, saving execution time, and programmers can easily match each value to the argument it fills. (Note that sprintf() only formats the values; %d coerces the ID to an integer, but string values still need escaping or parameterization to guard against SQL injection.)
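The mysql_* functions used above were removed in PHP 7; a sketch of the same query with PDO prepared statements, which handle quoting for you, follows. An in-memory SQLite database stands in for MySQL here so the example is self-contained; in a real application you'd use your MySQL DSN and credentials instead.

```php
<?php
// Sketch: the same lookup using PDO prepared statements.
// SQLite in-memory is used only to make the example runnable;
// swap the DSN for 'mysql:host=...;dbname=...' in real code.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE tbl_user (user_id INTEGER, member_type TEXT,
            first_name TEXT, last_name TEXT, email_address TEXT)');
$pdo->exec("INSERT INTO tbl_user VALUES (1, 'gold', 'Alice', 'Smith', 'alice@example.com')");

// Placeholders are bound by the driver, so no quoting or concatenation is needed.
$stmt = $pdo->prepare('SELECT first_name, last_name, email_address
                       FROM tbl_user
                       WHERE user_id = :user_id AND member_type = :member_type');
$stmt->execute([':user_id' => 1, ':member_type' => 'gold']);
$row = $stmt->fetch(PDO::FETCH_ASSOC);
```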

6. Using Arrays:

It is always better to use arrays in PHP, as they are quite easy to manage and easily understandable as well.

<?php
$products=array("Dove","Apple","Nokia");
?>

Looping over arrays is also a very common task in PHP coding. However, there is more than one way to loop over an array. Rather than:

<?php
for ($iC = 0; $iC < count($products); $iC++) {
    echo $products[$iC]."<br>";
}
?>

Instead, the codes can be written as follows:

<?php
foreach ($products as $product_value) {
    echo $product_value;
}
?>

foreach is designed specifically for arrays. Using it reduces both the code length and the execution time.

7. Consistent naming:

It is always advisable to name classes, objects and other items consistently. This helps later programmers identify things easily when they work on the project. Names of files and local directories should also be easy to understand.

8. Using Objects [class]

Though they may seem complicated to newcomers, objects are very useful, as they reduce code repetition and make changes to the code easier. Using a class makes the code more flexible to work with.

A simple class functionality is explained below:

<?php

class shopping_cart {

    var $cart_items;

    function add_product_item($cart_number, $quantity) {
        $this->cart_items[$cart_number] += $quantity;
    }

}

// Call the class function
$cart = new shopping_cart();
$cart->add_product_item("123", 10);

?>

9. Appropriate use of Looping codes:

There are many looping constructs available in PHP. It is very important to choose the right one for the purpose at hand, so that execution time is saved and the work gets done sooner.

For example, instead of:

$res=mysql_query("select * from tbl_products");
for($iC=0;$iC< mysql_num_rows($res);$iC++){
echo mysql_result($res,$iC);
}

The same can be coded this way, in order to reduce the execution time:

$res=mysql_query("select * from tbl_products");
while($obj=mysql_fetch_object($res)){
echo $obj->column_name1;
}

10. Using case switches:

It is definitely advantageous to use case switches instead of long if/elseif chains. A switch statement is equivalent to a series of if statements on the same expression, but the expression is evaluated only once and the result is easier to read.

Example of using a series of If statements:

if($checking_value1==$value){
echo "result1";
}elseif($checking_value2==$value){
echo "result2";
}elseif($checking_value3==$value){
echo "result3";
}else{
echo "result 4";
}

The same thing can be expressed in a simpler way using the switch case, which greatly reduces the operational time:

switch($checking_value){
case Value1:
echo "result1";
break;

case Value2:
echo "result2";
break;

case Value3:
echo "result3";
break;

default:
echo "result4";
break;

}

11. Using single quotes instead of double quotes:

Though the two serve different purposes, using single quotes results in faster execution than using double quotes, because double-quoted strings are parsed for variables.

For example, printing 1500 lines of information can be done in two ways:

//Using double quotes
print "SerialNo : $serialno. WorkDone : $workdone. Location: $location";

The same can be written with single quotes:

//Using single quotes
print 'SerialNo : '.$serialno.'. WorkDone : '.$workdone.'. Location: '.$location.'.';

Here, the second line of code works much faster than the first, where the string has to be fully parsed for variables every one of the 1500 times. In the second version, no string parsing takes place; the strings and variables are simply concatenated to produce the output.

A few of these tips might sound quite familiar, but they have been included here to refresh the best practices that can be followed to produce clean, secure and easy-to-maintain PHP code.

Thank you for reading ! Please share if you liked it!

How to build Secure Microservices in PHP

Microservices for Your PHP App

The emergence of the DevOps discipline and the ability to automatically build, integrate, deploy and scale software led to the birth of Service-Oriented Architecture (SOA). It predates microservices, but it’s based on the same core principle - organize the application into separate units that can be accessed remotely through a well-defined interface and can be updated independently, without affecting the rest of the system. However, SOA remains vague about how you organize and deploy your application, how separated the different units are (they might even be using the same database) and how the various units interact with each other - you might be using remote procedure calls, or some sort of inter-process communication on the same host, or indirectly (through message queues), or over HTTP.

Of course, there is no formal, industry-accepted definition or specification of microservices either, but they introduce some key concepts that are absent from, or differ in, SOA:

  • service granularity - microservices are small, fine-grained services, while SOA can include large-scale services that encompass a whole system or product.
  • bounded context - microservices are built on the ‘share as little as possible’ architecture style. Each microservice owns its own data and communicates with clients/other services only through a well-defined, strictly enforced interface. Microservices are independently scaled and resilient to failure.
  • communication protocols - microservices tend to rely on REST/gRPC for synchronous calls and message queues for asynchronous calls, while SOA has no prescribed limits.

What You Will Build: Design Your PHP Microservices Application

I will repeat: the prudent way to start your application is to build a monolith and let it evolve naturally into a different architecture when needed. However, in this article, you will implement a microservice architecture from the start for purely educational purposes.

Using REST APIs with JSON payloads for intra-service communication is a simple and popular solution, but it’s not ideal for all use cases since it’s most often synchronous. You should always consider the alternatives when building a distributed system - many situations call for asynchronous communication via work queues or event-driven processing.

The application you’ll build is a simulation of an audio transcription service. It consists of the following microservices:

  • Transcription Gateway - a service that exposes an API which allows it to accept a request from another machine to transcribe an audio file. Each new request is put on a queue (the ‘transcription’ queue) to be handled asynchronously (since audio transcriptions can be slow). Requests must be authorized.
  • Transcriber - an auto-scaling service that uses multiple workers (you wish: after building the example you’ll end up with a single service and no scaling but it’s easy to scale it, especially on cloud providers like AWS). Each worker listens to the queue for transcription requests, and when there is a new request, the first available worker takes it off the queue, transcribes the audio and puts the result on another queue (the ‘notification’ queue).
  • Notifier - a service that listens to the notification queue and when a transcription has been processed, it notifies the end user of the result via an email (a real system would also use a push notification back to the web app/mobile app that originated the request).

The application doesn’t use a database (or any persistent storage, for that matter) for simplicity. It also doesn’t have any error handling, automated retries on failure, etc. It goes without saying that you would need all of that (and much more) in a production system.

You’ll use Lumen to build the Transcription Gateway, and plain PHP scripts to build the Transcriber and Notifier services (they are not directly accessible because they’re behind the Gateway, and only communicate with the other services via private queues).

Prerequisites: PHP, Composer, an Okta account (used for authentication), an AWS account (you’ll use Amazon SQS for the queues), and some SMTP account you can use for sending emails programmatically (you can sign up at mailtrap.io for an easy solution to test email sending from your app).

Microservices in PHP – Security and Authentication

In monolithic web applications, there is a client (user) and a server. The client would submit credentials via a web form and the server would set a cookie/create a server-side session (or probably use a JWT token) to identify the user in future requests.

In a microservice architecture, you can’t rely on this scheme since you also need to have the different services communicate with each other.

For your application, you’ll use the OAuth 2.0 authorization protocol and Okta as the identity provider. There are different authentication flows in Okta, depending on if the client application is public or private, if there is a user involved, or if the communication is machine-to-machine only. The Client Credentials Flow that you’ll implement is best suited for machine-to-machine communication where the client application is private and can be trusted to hold a secret. You won’t bother to build a user-facing application, but obviously if you had such an application, it should authenticate and authorize its users as well (using one of the other available Okta flows, such as the Authorization Code Flow).

Why Okta for Secure Microservices in PHP?

Okta is an API service that allows you to create, edit, and securely store user accounts and user account data, and connect them with one or more applications. Register for a forever-free developer account, and when you’re done, come back to learn more about building microservices in PHP.

Create an Account for User Management in PHP

In this section, I’ll show you how to create a machine-to-machine application in Okta and how to get JWT access tokens from your Okta authorization server so you can authenticate your requests to the Transcription Gateway service using the Client Credentials Flow.

The Client Credentials Flow is best suited for machine-to-machine communication (where the client can be trusted to hold a secret). Here’s the documentation of the flow: Okta: Client Credentials Flow.

If you still haven’t created your forever-free Okta developer account, do it now and then continue with the tutorial.

Log in and go to Applications, then click Add Application:

Select Service (Machine-to-Machine) and click Next:

Enter a title for your application and click Done. Take note of the values in the Client ID and Client secret fields displayed on the next screen; you’ll need them when building the app.

Before building the application, there’s one more thing to configure in Okta: you need to create a scope for your application.

Go to API > Authorization Servers, take note of the Issuer URI field (you will need it when configuring the app), and click on the default authorization server. Go to the Scopes tab and click Add Scope. Set up your scope like this:

You should’ve copied four values if you did everything correctly: Client ID, Client Secret, Issuer URI, and Scope ('token_auth'). Keep these handy because we’ll need them later!

Build and Test the Transcription Gateway

In this section, you’ll build the first draft of the Transcription Gateway - a simple service that exposes an API with a single endpoint, POST /transcription, which allows other apps to submit requests for audio file transcriptions.

First, you’ll install the Lumen installer and initialize a new Lumen application:

composer global require "laravel/lumen-installer"
lumen new transcription-gateway

Change directories into the new folder, and run your new Lumen app using the built-in PHP server:

cd transcription-gateway
php -S 127.0.0.1:8080 -t public

Load http://localhost:8080/ and you should see something like:

Lumen (5.8.4) (Laravel Components 5.8.*)

Create a route for the API endpoint:

/routes/web.php

$router->post('transcription', 'TranscriptionController@create');

Create the Controller file and create() method (for now, it will simply validate the request and return either a 422 response with the validation errors or a 202 Accepted response with the input):

/app/Http/Controllers/TranscriptionController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class TranscriptionController extends Controller
{
    public function create(Request $request)
    {
        $this->validate($request, [
            'email' => 'required|email',
            'audio-file-url' => 'required|url'
        ]);

        $message = [
            'user-email'          => $request->input('email'),
            'user-audio-file-url' => $request->input('audio-file-url')
        ];

        return response()->json($message, 202);
    }
}

Test it (including the validation) with curl or Postman by making various POST requests to http://localhost:8080/transcription.
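For example, with curl (the email address and file URL below are arbitrary test values, not anything specific to the tutorial):

```shell
# A valid request should return 202 Accepted with the input echoed back:
curl -i -X POST http://localhost:8080/transcription \
  -d "email=user@example.com" \
  -d "audio-file-url=https://example.com/interview.mp3"

# A request with a malformed email should return 422 with validation errors:
curl -i -X POST http://localhost:8080/transcription \
  -d "email=not-an-email" \
  -d "audio-file-url=https://example.com/interview.mp3"
```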

The next thing you’ll do is add authentication to the TranscriptionGateway so it only accepts authenticated requests.

Secure Your Transcription Gateway

Add the following to the .env.example file:

CLIENT_ID=
CLIENT_SECRET=
ISSUER=
SCOPE=

Copy the file to .env, open it and fill in the values from the previous section.

Install the dependencies required for authentication:

composer require nesbot/carbon:"2.17.0 as 1.22" firebase/php-jwt okta/jwt-verifier guzzlehttp/psr7

Create a new file /app/Http/Middleware/AuthenticateWithOkta.php which will hold your Okta authentication middleware:

<?php

namespace App\Http\Middleware;

use Closure;

class AuthenticateWithOkta
{
    /**
     * Handle an incoming request.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        if ($this->isAuthorized($request)) {
            return $next($request);
        } else {
            return response('Unauthorized.', 401);
        }
    }

    public function isAuthorized($request)
    {
        if (!$request->header('Authorization')) {
            return false;
        }

        $authType = null;
        $authData = null;

        // Extract the auth type and the data from the Authorization header.
        @list($authType, $authData) = explode(" ", $request->header('Authorization'), 2);

        // If the Authorization header is not a Bearer type, return a 401.
        if ($authType != 'Bearer') {
            return false;
        }

        // Attempt authorization with the provided token
        try {
            // Set up the JWT verifier
            $jwtVerifier = (new \Okta\JwtVerifier\JwtVerifierBuilder())
                ->setAudience('api://default')
                ->setClientId(getenv('CLIENT_ID'))
                ->setIssuer(getenv('ISSUER'))
                ->build();

            // Verify the JWT from the Authorization header.
            $jwt = $jwtVerifier->verify($authData);
        } catch (\Exception $e) {
            // We encountered an error, return a 401.
            return false;
        }

        return true;
    }
}

Register the middleware:

/bootstrap/app.php (add to the file)

$app->routeMiddleware([
    'auth' => App\Http\Middleware\AuthenticateWithOkta::class,
]);

Modify the API route so it requires authentication:

/routes/web.php

$router->post('transcription', [
    'middleware' => 'auth',
    'uses' => 'TranscriptionController@create'
]);

If you attempt a POST to http://localhost:8080/transcription now you should get a 401 Unauthorized response. You need to get a valid Okta token to proceed.

Here’s the easiest way to get a valid token (since you have no user-facing application, and you’re doing machine-to-machine communication only):

source .env
curl $ISSUER/v1/token -d grant_type=client_credentials -d client_id=$CLIENT_ID -d client_secret=$CLIENT_SECRET -d scope=$SCOPE

Copy the value of the access_token property from the JSON response.
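If you have jq installed (an extra tool, not required by the tutorial), you can fetch the token and extract the access_token field in one step:

```shell
source .env

# Request a token and pull the access_token field out of the JSON response:
TOKEN=$(curl -s "$ISSUER/v1/token" \
  -d grant_type=client_credentials \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET" \
  -d scope="$SCOPE" | jq -r '.access_token')

echo "$TOKEN"
```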

Now modify your original curl or Postman request to include the following header:

Authorization: Bearer <put your access token here>

Now you should be able to run successful requests again, at least until your token expires (then you can simply get a new one).
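Put together, the authenticated request looks like this (substitute the access token you copied in the previous step; the email and file URL are arbitrary test values):

```shell
# Replace the value of TOKEN with the access_token you copied:
TOKEN="<put your access token here>"

curl -i -X POST http://localhost:8080/transcription \
  -H "Authorization: Bearer $TOKEN" \
  -d "email=user@example.com" \
  -d "audio-file-url=https://example.com/interview.mp3"
```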

Put the Transcription Request on a Queue

Things are about to get interesting! Your gateway receives requests but doesn’t do anything with them yet. Let’s put the requests on a queue and then you’ll build a separate microservice to get jobs from the queue and perform the actual audio transcription (or pretend to do so).

Sign up for a free AWS account, then find SQS in the menu:

Create two SQS queues with names TRANSCRIBE and NOTIFY using Standard queues (not FIFO) and the default settings. Copy the URLs of the queues:

Then find the IAM service and create a user with programmatic access:

Attach the AmazonSQSFullAccess policy to the user:

Finally, copy the access key ID and secret access key of the user.

Add the following variables to .env.example:

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_QUEUE_URL_TRANSCRIBE=
AWS_QUEUE_URL_NOTIFY=

Copy them to .env and fill in your AWS details.

Install the AWS SDK for PHP:

composer require "aws/aws-sdk-php"

Modify the TranscriptionController to put the message on the TRANSCRIBE queue:

/app/Http/Controllers/TranscriptionController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Aws\Sqs\SqsClient;

class TranscriptionController extends Controller
{
    public function create(Request $request)
    {
        $this->validate($request, [
            'email' => 'required|email',
            'audio-file-url' => 'required|url'
        ]);

        $message = [
            'user-email'          => $request->input('email'),
            'user-audio-file-url' => $request->input('audio-file-url')
        ];

        // of course, this should be extracted to a service
        // instead of using a private method on the controller:
        $this->putMessageOnQueue($message);

        return response()->json($message, 202);
    }

    private function putMessageOnQueue($message)
    {
        $key = getenv('AWS_ACCESS_KEY_ID');
        $secret = getenv('AWS_SECRET_ACCESS_KEY');

        $client = SqsClient::factory([
            'key' => $key,
            'secret' => $secret,
            'version' => '2012-11-05',
            // modify the region if necessary:
            'region' => 'us-east-1',
        ]);

        $result = $client->sendMessage(array(
            'QueueUrl'    => getenv('AWS_QUEUE_URL_TRANSCRIBE'),
            'MessageBody' => json_encode($message)
        ));

        return $result;
    }
}

Test the TranscriptionGateway with Postman again and you should see a new message pop up on the TRANSCRIBE queue. You’ll be able to see a dot appear in your AWS console under the Monitoring tab of the SQS queue.
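If you have the AWS CLI and jq set up locally (neither is required by the tutorial, but both are handy for debugging), you can also check the message count from the command line:

```shell
source .env

# Ask SQS for the approximate number of visible messages on the queue:
aws sqs get-queue-attributes \
  --queue-url "$AWS_QUEUE_URL_TRANSCRIBE" \
  --attribute-names ApproximateNumberOfMessages \
  | jq -r '.Attributes.ApproximateNumberOfMessages'
```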

Implement the Transcriber Service for Your PHP Microservices

The message appears on the queue, but you need a service to process it. Create a new directory /transcriber, and inside the new directory, create the following files:

.env.example

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_QUEUE_URL_TRANSCRIBE=
AWS_QUEUE_URL_NOTIFY=

Copy this file to .env and fill in your details (same as in the previous section).

.gitignore

/vendor
.env

composer.json

{
    "require": {
        "aws/aws-sdk-php": "2.*",
        "vlucas/phpdotenv": "^3.3"
    }
}

worker.php

<?php

require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;
use Dotenv\Dotenv;

$dotenv = Dotenv::create(__DIR__);
$dotenv->load();

$key = getenv('AWS_ACCESS_KEY_ID');
$secret = getenv('AWS_SECRET_ACCESS_KEY');
$queueUrl = getenv('AWS_QUEUE_URL_TRANSCRIBE');
$notificationQueueUrl = getenv('AWS_QUEUE_URL_NOTIFY');

$client = SqsClient::factory([
    'key' => $key,
    'secret' => $secret,
    'version' => '2012-11-05',
    // modify the region if necessary:
    'region' => 'us-east-1',
]);

while (true) {
    // wait for messages with 10-second long polling
    $result = $client->receiveMessage([
        'QueueUrl' => $queueUrl,
        'WaitTimeSeconds' => 10,
    ]);

    // if we have a message, get the receipt handle and message body and process it
    if ($result->getPath('Messages')) {
        $receiptHandle = $result->getPath('Messages/*/ReceiptHandle')[0];
        $messageBody = $result->getPath('Messages/*/Body')[0];
        $decodedMessage = json_decode($messageBody, true);

        // simulate processing the message here:
        // wait 2 seconds
        sleep(2);

        // put a message on the notification queue:
        $result = $client->sendMessage(array(
            'QueueUrl'    => $notificationQueueUrl,
            'MessageBody' => $messageBody
        ));

        // delete the transcription message:
        $client->deleteMessage([
            'QueueUrl' => $queueUrl,
            'ReceiptHandle' => $receiptHandle,
        ]);
    }
}

The worker script simply runs an endless loop, waiting for jobs to appear on the queue. When it gets a new job, it waits for 2 seconds (to simulate working on the transcription), then puts the message on the notification queue.

Run composer install inside the directory to load the dependencies. Then run the worker from the command line:

php worker.php

You can test by sending some more jobs to the gateway service. Each of those jobs should appear on the TRANSCRIBE queue, and two seconds later it should be ‘processed’ and moved to the NOTIFY queue.
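A quick way to generate a few test jobs is a small shell loop (the token and the email/URL values are placeholders you’d substitute yourself):

```shell
TOKEN="<put your access token here>"

# Submit three transcription jobs in a row:
for i in 1 2 3; do
  curl -s -X POST http://localhost:8080/transcription \
    -H "Authorization: Bearer $TOKEN" \
    -d "email=user$i@example.com" \
    -d "audio-file-url=https://example.com/audio-$i.mp3"
  echo
done
```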

Implement the Notification Service

The final piece is the notification service, which listens for jobs on the NOTIFY queue and, when it gets a job, notifies the user by email that the audio file has been transcribed. This service is very similar to the transcription service, but it has an additional dependency on SwiftMailer and needs an SMTP account for sending emails.

Create a new directory /notifier and create the following files inside the new directory:

.env.example

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_QUEUE_URL_TRANSCRIBE=
AWS_QUEUE_URL_NOTIFY=

SMTP_HOST=
SMTP_USERNAME=
SMTP_PASSWORD=

Copy this file to .env and fill in your details (same as in the previous section, plus the new SMTP details).

.gitignore

/vendor
.env

composer.json

{
    "require": {
        "aws/aws-sdk-php": "2.*",
        "vlucas/phpdotenv": "^3.3",
        "swiftmailer/swiftmailer": "^6.0"
    }
}

worker.php

<?php

require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;
use Dotenv\Dotenv;

$dotenv = Dotenv::create(__DIR__);
$dotenv->load();

$key = getenv('AWS_ACCESS_KEY_ID');
$secret = getenv('AWS_SECRET_ACCESS_KEY');
$notificationQueueUrl = getenv('AWS_QUEUE_URL_NOTIFY');

$client = SqsClient::factory([
    'key' => $key,
    'secret' => $secret,
    'version' => '2012-11-05',
    // modify the region if necessary:
    'region' => 'us-east-1',
]);

while (true) {
    // wait for messages with 10-second long polling
    $result = $client->receiveMessage([
        'QueueUrl' => $notificationQueueUrl,
        'WaitTimeSeconds' => 10,
    ]);

    // if we have a message, get the receipt handle and message body and process it
    if ($result->getPath('Messages')) {
        $receiptHandle = $result->getPath('Messages/*/ReceiptHandle')[0];
        $messageBody = $result->getPath('Messages/*/Body')[0];
        $decodedMessage = json_decode($messageBody, true);

        // Create the Transport
        $transport = (new Swift_SmtpTransport(getenv('SMTP_HOST'), 587, 'tls'))
            ->setUsername(getenv('SMTP_USERNAME'))
            ->setPassword(getenv('SMTP_PASSWORD'))
            ->setAuthMode('PLAIN');

        // Create the Mailer using your created Transport
        $mailer = new Swift_Mailer($transport);

        // Create a message (replace the sender address with your own)
        $message = (new Swift_Message('Your file has been transcribed!'))
            ->setFrom(['transcription@example.com' => 'Audio Transcription Service'])
            ->setTo([$decodedMessage['user-email']])
            ->setBody($decodedMessage['user-audio-file-url']);

        // Send the message
        $result = $mailer->send($message);

        // delete the notification message:
        $client->deleteMessage([
            'QueueUrl' => $notificationQueueUrl,
            'ReceiptHandle' => $receiptHandle,
        ]);
    }
}

This script also runs an endless loop, waiting for jobs to appear on the notification queue. When it gets a new job, it sends an email to the address specified in the user-email field of the message.

Run composer install inside the directory to load the dependencies. Then run the worker from the command line:

php worker.php

If you do a test now, it should go through the whole loop:

Transcription Gateway -> Transcriber -> Notifier -> Your email inbox.

I hope you enjoyed this introduction to building microservices in PHP!

Monoliths vs. Microservices in PHP

Most web applications are born as monoliths, and many thrive or die as monoliths. Here’s the big secret: there’s no shame in starting your application as a monolith, and letting it grow until it reaches its limits. I would even argue that this is the prudent thing to do because monoliths are easy to build and deploy (at least while the application is small). They use a centralized database which simplifies the design and organization of the data.

You should change a monolithic system only when you have no other choice. If you’re sitting there wondering if you should use microservices for your next project idea, here’s your answer: you shouldn’t. When you get to the point where you need microservices, you’ll know. Once your application starts growing, and changes in the code start impacting unrelated features, or different features have different scalability/reliability requirements, the time has come to look at the microservice architecture.

Benefits and Drawbacks of Microservices in PHP

There are clear benefits to using microservices:

  • Separation of concerns (microservices follow the single responsibility principle - they do one thing, and do it well).
  • Smaller projects - you can easily refactor or even rewrite parts of the system using the appropriate platform, and without affecting the other parts.
  • Scaling and deployment are easier and faster.
  • Isolation and resilience - if the service that prints invoice PDFs crashes, it won’t take down the rest of your billing system. You also avoid the dreaded ‘dependency hell’ problem - where different parts of your application rely on different versions of the same package.

However, implementing a microservice architecture comes with its own challenges. There are specific anti-patterns you have to avoid, and trade-offs to consider:

  • Data migrations and duplication of data - because of bounded contexts and the ‘shared-nothing architecture’, handling data appropriately can become a big issue. However, you absolutely have to make sure to avoid ‘reach-ins’ where a service pulls data directly from the data repository of another service (or, even worse, modifies data that should be owned by a different service).
  • Timeouts - you have to define the acceptable standards for service responsiveness and use patterns like Circuit Breaker to avoid poor user experience because of prolonged, repeated service timeouts.
  • Code dependencies - you should generally prevent code sharing between services. You can extract the shared code into its own service, or practice service consolidation where you combine the different services that rely on shared code into a single service.
  • Transactions - achieving ACID-level database transactions over multiple services is impossible (because of the bounded contexts and communication latency). You have to analyze your needs very carefully and use techniques such as service consolidation (so you can implement transactions within the context of a single service) and event sourcing/CQRS to guarantee eventual data consistency over multiple services.
  • Developing without a cause/jumping on the bandwagon - it’s outside the scope of this article to describe all aspects and patterns of the microservice architecture in detail, but I hope you understand the complexity involved and you can use this article as a starting point for further study. Just like any other architecture, the use of microservices should be driven by business needs and should achieve specific business outcomes.

Thank you!

Originally published on https://developer.okta.com