There are a lot of possible ways to deploy a Laravel app. After 10 years of deploying big Laravel apps, and trying just about every approach and tool available, here’s the approach I prefer. It’s an opinionated approach, and you’re welcome to disagree. Something this big won’t be needed for every app - but the following approach is scalable, stable and can handle a lot.
I’m not going to get into much detail here: this is a very high-level view of the tools you might use. I might get around to putting some sample code files on Github - I’ll come back here and update if I do.
Here are some of the key tools we’ll get into:
- Ubuntu servers running on AWS
- Terraform, Packer, and Ansible for building and managing servers
- Prometheus, Grafana, and related tools for observability
For a complex app I would usually set up a bastion server, along with one or more manager or back-end servers, and then my web servers in an autoscaling group on AWS. I don’t usually use AWS Autoscaling’s own scaling policies; instead I let the app control scaling itself by setting the desired server count.
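To make that concrete, here’s a minimal Terraform sketch of a web-server autoscaling group along those lines. All names, the AMI variable, and the instance type are hypothetical placeholders; the key ideas are booting from a pre-built image and telling Terraform to ignore `desired_capacity` so it doesn’t fight the app’s own scaling decisions.

```hcl
# Hypothetical sketch - names and variables are placeholders.
resource "aws_launch_template" "web" {
  name_prefix   = "myapp-web-"
  image_id      = var.web_ami_id   # an AMI pre-baked with Packer
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "web" {
  name                = "myapp-web"
  min_size            = 2
  max_size            = 20
  desired_capacity    = 2          # the app adjusts this itself via the AWS API
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  lifecycle {
    ignore_changes = [desired_capacity]  # don't fight the app's scaling
  }
}
```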
Kubernetes and Docker are key tools here. Packaging your app in Docker with its dependencies gives you a nice stable environment. Combine this with tools like Packer and you can bake most of the provisioning into the machine image itself, which helps keep your environment very stable. Booting your autoscaling servers from a customised image you’ve created yourself lets you avoid long-running tasks like installing nginx at boot.
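As a sketch of the image-baking side, here’s roughly what a Packer template for that web AMI could look like. The region, names, and playbook path are assumptions for illustration; the point is that the provisioning runs once at build time, not at every boot.

```hcl
# Hypothetical Packer template - bake nginx, PHP etc. into an Ubuntu AMI
# so instances boot ready to serve.
source "amazon-ebs" "web" {
  ami_name      = "myapp-web-{{timestamp}}"
  instance_type = "t3.small"
  region        = "eu-west-1"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"]  # Canonical
    most_recent = true
  }
  ssh_username = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.web"]

  # Reuse the same Ansible roles you use for config management
  provisioner "ansible" {
    playbook_file = "./playbooks/web.yml"
  }
}
```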
You may well need additional services. I’m often running Redis, ElasticSearch (OpenSearch these days), a database like MariaDB or MongoDB, and data analysis tools like Apache Spark. There are a few ways to run these:
- Create a server for the service
- Make Docker images and deploy with Kubernetes
Creating a dedicated server for each service can be simpler and easier to manage. It’s how you probably imagine setting something like this up: create a MySQL installation on its own server and point your other servers at its IP. However, using Docker images and Kubernetes can provide greater flexibility and scalability. If your workload isn’t steady, containerising services lets you dynamically adjust the resources available to them. The choice depends on your specific needs and resources.
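For the containerised route, here’s a minimal sketch of what running a service like Redis in Kubernetes could look like. The resource numbers are placeholder assumptions - the point is that adjusting them is a one-line change rather than re-provisioning a server. For anything holding real data you’d want a StatefulSet with persistent volumes rather than a bare Deployment.

```yaml
# Hypothetical sketch - names and resource limits are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          resources:
            requests: { cpu: 250m, memory: 256Mi }
            limits:   { cpu: "1",  memory: 1Gi }
          ports:
            - containerPort: 6379
```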
Terraform and Ansible
If you deal with servers at all, tools like Terraform and Ansible are must-haves. Terraform allows you to define your infrastructure as code, Packer enables you to build machine images, and Ansible automates server configuration management.
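On the Ansible side, a server-configuration playbook might look something like this sketch. The host group, package versions, and template path are assumptions for illustration.

```yaml
# Hypothetical playbook: configure a web server with nginx and PHP-FPM.
- hosts: web
  become: true
  tasks:
    - name: Install nginx and PHP-FPM
      apt:
        name:
          - nginx
          - php8.2-fpm
        state: present
        update_cache: true

    - name: Deploy the nginx site config
      template:
        src: templates/myapp.conf.j2
        dest: /etc/nginx/sites-available/myapp.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded
```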
I like to keep Terraform and Ansible in their own repos, separate from the actual app code. This makes it easier to manage repo permissions separately, if that applies to your team, and can help with reusability. So you might have something like myapp/myapp-terraform/myapp-ansible as three separate git repos. But you can just as easily keep your Terraform/Ansible all in one repo with your app.
Deploying the App
There are endless approaches to deployment you can take. One obvious choice given our current toolset is to use Ansible. This is certainly how I’d start, at least to define what the process is.
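A common shape for such a playbook is a timestamped release directory with a `current` symlink, so a rollback is just re-pointing the symlink. This is a hypothetical sketch - the repo URL, paths, and PHP version are placeholders:

```yaml
# Hypothetical deployment playbook, rolling through servers one at a time.
- hosts: web
  serial: 1
  vars:
    release: "{{ ansible_date_time.epoch }}"
  tasks:
    - name: Clone the release
      git:
        repo: git@github.com:example/myapp.git
        dest: "/var/www/myapp/releases/{{ release }}"
        version: main

    - name: Install composer dependencies
      command: composer install --no-dev --optimize-autoloader
      args:
        chdir: "/var/www/myapp/releases/{{ release }}"

    - name: Point current at the new release
      file:
        src: "/var/www/myapp/releases/{{ release }}"
        dest: /var/www/myapp/current
        state: link

    - name: Reload PHP-FPM
      service:
        name: php8.2-fpm
        state: reloaded
```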
My usual flow is to have a CI/CD server running something like Jenkins to run the tests, notified by your code hosting service so that a push to a certain branch triggers a deployment.
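In Jenkins terms, that flow could be sketched as a declarative pipeline like the one below. The stage names, inventory path, and playbook name are assumptions, not a definitive setup:

```groovy
// Hypothetical Jenkinsfile: test every push, deploy only from main.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'composer install'
                sh 'php artisan test'
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                sh 'ansible-playbook -i inventories/production deploy.yml'
            }
        }
    }
}
```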
This is where a tool like Kubernetes really comes into its own. Kubernetes can handle deployments in a zero downtime safe way and is a great approach to take.
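The zero-downtime behaviour comes from the Deployment’s rolling update strategy plus a readiness probe, which stops Kubernetes sending traffic to a new pod until it’s actually healthy. A minimal sketch, with a placeholder image and health endpoint:

```yaml
# Hypothetical sketch - image name and probe path are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired pod count
      maxSurge: 1         # bring up one new pod at a time
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:1.2.3
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 80
```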
For the last few years I’ve been a big fan of using Prometheus in combination with Grafana. Prometheus is both a metrics monitoring system and an efficient time-series database for storing them. Grafana is a visualisation tool that provides a user-friendly way to view and analyse the data collected by Prometheus. Using these tools, you can create custom dashboards that show metrics relevant to your app, such as request latency, error rate, and resource utilisation.
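A Prometheus scrape config for this kind of setup might look like the fragment below. The target IPs and ports are placeholders - in practice on AWS you’d likely use EC2 service discovery rather than static targets:

```yaml
# Hypothetical prometheus.yml fragment - targets are placeholders.
scrape_configs:
  - job_name: myapp
    metrics_path: /metrics
    static_configs:
      - targets: ["10.0.1.10:80", "10.0.1.11:80"]

  - job_name: node        # node_exporter on each server
    static_configs:
      - targets: ["10.0.1.10:9100", "10.0.1.11:9100"]
```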
Grafana has a host of other tools: Loki, which provides an easy way to search, filter, and analyse logs wherever they are generated; Tempo for application traces; alerting; even on-call scheduling.
For complex setups, I like to keep this outside the main application stack, even with its own Terraform repo. This helps ensure you can still access your metrics and logs in the event of a catastrophic failure of the app. I would also make use of third-party services, such as those offered by AWS or PagerDuty, to ensure absolute coverage for alerts.