Laravel Vapor
Laravel Vapor is the deployment tool of choice for Laravel applications. It is a SaaS orchestration tool that deploys serverless web applications on AWS. Vapor creates all of the necessary AWS infrastructure for you, making it a fast & easy way to get your application deployed.
A full explanation of what Vapor provides is available in their documentation.
If you would like to use Vapor, contact the ADOES-CSI team for access.
Implementation
We have structured our Vapor account similar to AWS: a nonprod team and a prod team, each tied to the matching AWS account. This does require creating an application's project in both teams, which gives them different project IDs.
The initial setup for an application requires you to:
- Create a project (in prod & nonprod)
- Generate certificate request(s) for each environment & validate them
    - Make sure your vapor CLI is logged in to the proper team: `vapor team:current` (& use `vapor team:switch` if needed)
    - Run `vapor cert my-subdomain.northwestern.edu` from the CLI, with a subsequent request to the SOC for the validation records
- Request subnet allocations in the Northwestern VPC
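The CLI portion of that setup looks roughly like the sketch below, assuming the Vapor CLI is installed as a Composer dev dependency (`my-subdomain.northwestern.edu` is a placeholder hostname):

```shell
# Confirm which Vapor team the CLI is currently operating on
./vendor/bin/vapor team:current

# Switch teams if the wrong one is active (the command prompts for a team)
./vendor/bin/vapor team:switch

# Request a certificate for the environment's hostname; the SOC will
# need to publish the validation records before it can be issued
./vendor/bin/vapor cert my-subdomain.northwestern.edu
```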
Optional Vapor Services
Vapor natively supports provisioning an optional RDS cluster, S3 buckets, and an ElastiCache service.
We have found managing these easier long-term when done via Terraform instead. We can be more deliberate and precise about when to upgrade the database engine version, or apply specific policies to S3 buckets.
It is recommended to use IaC to provision these services.
You are free to terraform additional AWS resources (e.g. RDS, S3 buckets, EFS for shared persistent disk for the Lambdas) for use by your application. The AWS PHP SDK can be used from your app to access these additional resources.
Review the Vapor documentation for more information on Vapor-izing your app, what AWS resources are natively supported, and how to configure your `vapor.yml` file.
If adding a new environment to a branch (like a playground env on the prod branch):

Use the CLI to set up the new environment in the appropriate vapor account:

```shell
./vendor/bin/vapor team:switch     # To switch to the correct vapor account
./vendor/bin/vapor env playground  # WHERE 'playground' is the name of the new environment
```
Create any certificate requests you need for the app, and add the hostname to the `vapor.yml`.
- Request these in `us-east-1`: they will be used by CloudFront to create an edge-optimized API gateway, and it can only read certificates from `us-east-1`, regardless of where you are deploying the Lambdas.
- Note that you can attach several hostnames to one environment. If you have a temporary name, you can do the certificate requests for both up-front.

```shell
./vendor/bin/vapor certificate my-cool-site.northwestern.edu
```
Once the certificate(s) have been issued, deploy the environment.
- The deployment will give you a CloudFront hostname.
- Submit a Request DNS/DHCP Add form and request a new CNAME for `my-cool-site.northwestern.edu` with the target value of the `letters.cloudfront.net` hostname.
In total, you will send two DNS requests to the SOC per hostname:
- A new CNAME for AWS certificate validation
- A CNAME from the ugly CloudFront address to the pretty final URL

You do NOT need to send the custom domain that Vapor creates (e.g. `something.vapor-farm-b1.com`).
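Once the SOC has published both records, you can verify them from any machine with `dig`. (The `_abc123` validation name below is a made-up placeholder; use the actual record name from your certificate request.)

```shell
# The ACM validation record should resolve to an acm-validations.aws target
dig +short CNAME _abc123.my-cool-site.northwestern.edu

# The public hostname should resolve to the CloudFront distribution
dig +short CNAME my-cool-site.northwestern.edu
```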
After the new environment is built and you have the pretty URL back from the SOC, don't forget to add the callback URL in the Azure AD console.
- Go to https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps
- Select the appropriate app.
- Go to 'Authentication' and add a Redirect URI callback.
- Go to 'Certificates & Secrets' to confirm a secret for that environment is in place.
Limitations
Deploying an application on Vapor comes with the following limitations.
These limitations are a combination of serverless architecture constraints & Vapor product decisions:
- API Gateway (HTTP request -> the lambda) has a timeout of 29 seconds
- Lambdas can only run for 15 minutes max
- Long-running async events dispatched from SQS may need to be cut up if they're liable to go over the limit
- The same is true for any long-running jobs from the scheduler
- File uploads must be done through S3
- You cannot easily customize what PHP extensions are loaded, what binaries are available, or the `php.ini`
    - See the vapor-php-build repository to see what the Vapor runtime includes
    - You can do it -- as a last resort, extend the Vapor Docker image and deploy that. We have no experience doing this, but it is an option.
Best Practices
Traditionally, we've put some configuration in a `.env.{environment}` file, put other things into the GitHub Actions Secrets store, and done a bunch of work in the pipeline to merge those two data sources together.

Now, we instead maintain a very small number of secrets in GitHub Secrets and commit one `.env.{environment}.encrypted` file that contains almost all of an environment's settings. This simplifies the process considerably: if you want to update something in the `.env`, you just decrypt it, make your changes, re-encrypt it, and submit a pull request. Leads and other team members cannot see exactly what changed in the pull request UI (since it diffs the ciphertext), but this prompts them to decrypt the files and verify your changes.
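That day-to-day workflow uses Laravel's built-in environment encryption commands (available since Laravel 9.32). A sketch, assuming the production environment and a key retrieved from CyberArk (the `base64:...` value is a placeholder):

```shell
# Decrypt .env.production.encrypted into a plaintext .env.production
php artisan env:decrypt --env=production --key="base64:..."

# ... edit .env.production with your changes ...

# Re-encrypt, then remove the plaintext: only the .encrypted file is committed
php artisan env:encrypt --env=production --key="base64:..." --force
rm .env.production
```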
The encryption key becomes a single secret that must be managed in CyberArk & pushed to AWS when deploying.
Some values -- like the RDS cluster hostname -- must be pushed up as "secrets", since these are not known until Terraform builds the RDS cluster and cannot be included in the `.env.{environment}.encrypted` file ahead of deploying the environment. These should be outputs from the IaC module, piped back into the Vapor CLI to push the values into AWS Parameter Store.
Deploying from GitHub Actions
Vapor deployments should be triggered from GitHub Actions.
Here is an example that pushes the `.env` file encryption key from the GitHub Secrets store, pushes some values from the IaC module, and then triggers the deployment. The steps assume the pipeline has run `composer install`, the `vapor-cli` package is a dev dependency, and `terraform init` has been run:
```yaml
steps:
  - name: Terraform Apply and Output 🧱
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets[env.TERRAFORM_KEY_VAR_NAME] }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets[env.TERRAFORM_SECRET_VAR_NAME] }}
    run: |
      cd iac/${BRANCH_NAME}
      terraform apply -auto-approve -no-color -var="master_password=${{ secrets.DB_PASSWORD }}"

      db_name=$(terraform output -no-color -raw db_name)
      db_endpoint=$(terraform output -no-color -raw db_endpoint)
      db_username=$(terraform output -no-color -raw master_username)
      bucket_name=$(terraform output -no-color -raw file_uploads_bucket_name)

      echo "DB_HOST=$db_endpoint" >> $GITHUB_ENV
      echo "DB_USERNAME=$db_username" >> $GITHUB_ENV
      echo "DB_DATABASE=$db_name" >> $GITHUB_ENV
      echo "BUCKET_NAME=$bucket_name" >> $GITHUB_ENV

  - name: Deploy App 🚀
    env:
      VAPOR_API_TOKEN: ${{ secrets[env.VAPOR_API_VAR_NAME] }}
      SHA: ${{ github.sha }}
    run: |
      echo "${{ secrets.LARAVEL_ENV_ENCRYPTION_KEY }}" | vendor/bin/vapor secret --name "LARAVEL_ENV_ENCRYPTION_KEY" ${BRANCH_NAME}
      echo "${DB_HOST}" | vendor/bin/vapor secret --name "DB_HOST" ${BRANCH_NAME}
      echo "${DB_USERNAME}" | vendor/bin/vapor secret --name "DB_USERNAME" ${BRANCH_NAME}
      echo "${DB_DATABASE}" | vendor/bin/vapor secret --name "DB_DATABASE" ${BRANCH_NAME}
      echo "${{ secrets.DB_PASSWORD }}" | vendor/bin/vapor secret --name "DB_PASSWORD" ${BRANCH_NAME}
      echo "${BUCKET_NAME}" | vendor/bin/vapor secret --name "AWS_BUCKET" ${BRANCH_NAME}

      vendor/bin/vapor deploy --no-ansi ${BRANCH_NAME} --commit=${SHA}
```
There are a few gotchas.
Vapor API Token
The Vapor API credential is an organization-level secret. Access to this credential has to be granted to repositories, and it's only given on an as-needed basis. There's no reason to give an Anypoint repository access to the Vapor API, for example.
If you need this available to your repository, ask the ADOES-CSI team.
Project ID
Vapor was designed with one "team" corresponding to one AWS account, with all environments living in a single project under one team.
We break this model: we split our prod & nonprod environments across AWS accounts. There are two projects with distinct IDs: one for subprod environments, and one for the production environment.
The top of the `vapor.yml` file has an `id` for the project. When deploying, this `id` should be adjusted in the `vapor.yml` based on the environment being deployed.
If you are using GitHub deployment environments, you can leave the field empty in `vapor.yml`, add an environment variable, and add the value to the file early in the pipeline:

```shell
sed -i 's/id:/id: ${{ vars.VAPOR_ID }}/' vapor.yml
```
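As a sanity check, the substitution behaves like this on a minimal `vapor.yml` (the literal `12345` stands in for the expanded `${{ vars.VAPOR_ID }}` value):

```shell
# Hypothetical vapor.yml committed with an empty id field
printf 'id:\nname: my-app\n' > vapor.yml

# Fill in the project ID in place, as the pipeline step does
sed -i 's/id:/id: 12345/' vapor.yml

cat vapor.yml
# → id: 12345
#   name: my-app
```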