Depending on your knowledge of Django, deploying Wagtail CMS in production can be difficult. We put together some of our thoughts to help you transition from the Wagtail docs to getting your application deployed somewhere like AWS.
We expect most people reading this post to be coming from the Wagtail tutorials; perhaps this is your first deployment. We're going to explain how to set up the database and the application, and how to serve both static assets and dynamic media, the latter from Amazon S3. If this isn't your first deployment, feel free to skip ahead to the bits you need.
Maybe you're using SQLite and want to move to something more suitable for your production application. We recommend using PostgreSQL with Wagtail, and here is the approach we take to adding a production database.
The first step is to define the DATABASES section in your application's settings.py. Depending on your environment and desired workflow, this might be as simple as overriding the setting in base.py, which ensures PostgreSQL is used in both development and production. If you only want PostgreSQL in production, override the setting in production.py instead.
- Because we are using PostgreSQL for this example, we have opted for the postgresql_psycopg2 backend engine.
- The NAME attribute specifies the name of the database you wish to use in your project.
- The HOST attribute specifies the location of the target database, defaulting to localhost if unspecified.
- The PORT attribute specifies the port of the target database, in this case the default PostgreSQL port 5432.
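Putting those attributes together, a DATABASES definition might look like the sketch below. The database name, user and environment variable names are placeholders of our own; swap in your own values.

```python
# settings/base.py (or production.py) -- a minimal PostgreSQL configuration.
# The database name, user and environment variable names are placeholders.
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('DB_NAME', 'wagtaildb'),
        'USER': os.environ.get('DB_USER', 'wagtail'),
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}
```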
To run your Wagtail application in production mode, there are a few commands that need to be run. We typically include these in a script that executes during deployment:
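A typical deployment script, sketched under the assumption that your dependencies live in a requirements.txt, might run:

```shell
pip install -r requirements.txt     # install Python dependencies
python manage.py migrate --noinput  # apply any pending database migrations
```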
In addition to these commands, you need to specify that the application should run in production mode. This is done by setting the environment variable DJANGO_SETTINGS_MODULE to ####.settings.production, where the hashes are replaced by your project name. This ensures Django uses the appropriate settings.py file.
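This can be set from your shell or your hosting platform's environment configuration:

```shell
# Replace #### with your project name
export DJANGO_SETTINGS_MODULE=####.settings.production
```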
production.py comes pre-built with barely any additional setup beyond the DEBUG = False flag. This flag tells Django to run the application in production mode, but there are additional configuration values needed to prepare your application for production:
Django requires a secret key for the cryptographic functions in the application, and it is bad practice (and insecure) to store this directly in production.py. Instead, define it as an environment variable and reference it from your settings.
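A minimal sketch of this, assuming the key has been exported as an environment variable named SECRET_KEY (the variable name is our own choice):

```python
# production.py -- read the secret key from the environment rather than
# hard-coding it; deployment fails fast with a KeyError if it is missing.
import os

SECRET_KEY = os.environ['SECRET_KEY']
```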
Finally, there are two commands to be run to make your application's static files available in production.
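Assuming the django-compressor setup that ships with the Wagtail project template, those two commands are:

```shell
python manage.py collectstatic --noinput  # gather static files into STATIC_ROOT
python manage.py compress --force         # pre-compress CSS/JS (django-compressor)
```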
These commands collect all of the static files in your application and compress them, ready to be served to users. At this stage, your application should be deployable. Opening a browser and navigating to your Wagtail application's admin page should just work. The first issue you will notice is that there are no static assets, i.e. no styling. That's because we are not yet serving those assets in production.
Serving Static Assets
If you are new to Wagtail or Django, you might be unfamiliar with the way your site content, assets and media are served in production. In short, they aren't. In development, assets are served insecurely by Django's development server. In production, Django, and therefore Wagtail, sticks to what it does well and leaves the serving of assets to third parties. There are numerous ways of serving this content, but for the purpose of this tutorial we will be using WhiteNoise, a Python library that allows Django to serve static assets in production.
The next step is to configure wsgi.py in your base site folder, at the same level as the settings directory. WSGI is the primary deployment platform for Django, and we configure it to make use of the WhiteNoise library.
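A sketch of the configured wsgi.py, using the DjangoWhiteNoise wrapper from WhiteNoise versions prior to 4.0 (newer releases replace it with a middleware class, so check your installed version):

```python
# ####/wsgi.py -- the hashes are your Wagtail project name
import os

from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise

os.environ.setdefault('DJANGO_SETTINGS_MODULE', '####.settings.production')

# Wrap the standard Django WSGI application so WhiteNoise can serve
# static files directly from the application process.
application = DjangoWhiteNoise(get_wsgi_application())
```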
Where #### is replaced by the name of your Wagtail project.
The next step is to configure our application in production.py to make use of WhiteNoise, enable offline compression and generate the admin section assets.
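In production.py this amounts to a couple of settings; the storage path below is the one used by WhiteNoise releases prior to 4.0, so check it against your installed version:

```python
# production.py -- serve hashed, gzipped static files via WhiteNoise
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'

# Compress assets ahead of time (via `manage.py compress`) instead of
# on first request, so the admin assets are ready at deploy time.
COMPRESS_OFFLINE = True
```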
With these changes added, you should be able to re-deploy your application and see the majority of your assets served. At this stage, the deployment setup is almost complete. The final step is to ensure your dynamic media assets are served to users. WhiteNoise won’t handle this for you, and the recommended solution we use is to serve our assets using Amazon’s S3 service.
Serving Dynamic Media
If you've stuck with us this far, great, you're almost done! The last step of the deployment is to ensure media assets generated at runtime are served to users, which is quite important in a CMS solution. We do this by utilising Amazon S3.
Creating an IAM User
First, we need to set up an IAM user. This provides us with a key pair that allows our Wagtail application to access the S3 bucket we will shortly create to store our media.
To do this, log into AWS and head over to the IAM section. Create a new user and don't apply any rules or policies to it just yet. Make sure you download the CSV file containing the credentials as you won't be able to retrieve the private key again (if you lose your private key you need to generate new keys for that user).
Make sure that you note down your user's ARN.
Setting up your S3 Bucket
Now we need to create our S3 bucket. This is where all of our uploaded media will be stored. Head over to the AWS S3 management console this time and click Create Bucket.
Create your bucket with the bare minimum settings, and don't allow any access to it just yet in the wizard - we're going to create our own policy for that! Your bucket name needs to be unique across all buckets in S3 but make sure it's clear to you what it's for so that you don't get confused if you ever end up with a lot of different buckets for different things.
After creating the bucket, click on it to see the bucket management screen and then click on Permissions > Bucket Policy. This is where you input the JSON bucket policy that controls who and what has access to your stored files. Copy the snippet below and paste it into your bucket policy, remembering to change the YOUR-BUCKET-NAME and YOUR-IAM-ARN stubs to match your own bucket name and the user ARN you noted down earlier.
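A policy along these lines does the job; treat it as a sketch and tighten the actions to suit your own needs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        },
        {
            "Sid": "IAMUserCrudAccess",
            "Effect": "Allow",
            "Principal": { "AWS": "YOUR-IAM-ARN" },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}
```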
The first part of this policy gives everyone read access to your media, which is important for users to be able to see your images and download any files you're providing them with.
The second part of the policy gives your newly created IAM user CRUD capabilities in the bucket, that's to say your Wagtail application will be able to create, read, update and/or delete any of the files there. Smashing!
Setting IAM User Permissions
So here's where AWS can be a bit weird. You'd think the bucket permissions clearly gave your IAM user the ability to interact with the bucket and a lot of tutorials stop there. However, I've found that you usually still need to apply a permissions policy to your IAM user itself to make sure it can get as far as trying to access the bucket. To do this, head back to the IAM console, click on Policies in the sidebar and then on Create Policy.
AWS gives you three options for creating policies:
- Copy an AWS Managed Policy
- Use the AWS Policy Generator (great for people new to AWS)
- Create Your Own Policy using JSON
You can use the Policy Generator if you want, but this is a relatively simple policy that we are going to create, so I'll show you how to do it manually with JSON. After clicking Create Your Own Policy you'll see a screen similar to the one we saw when we input our S3 bucket policy. Enter a name and description for your policy and then copy and paste the snippet below, remembering to swap out YOUR-BUCKET-NAME for your own bucket's name (easy-peasy!).
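A minimal policy of this shape works; note that the ListBucket statement targets the bucket itself, while the object actions target its contents:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}
```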
Once you've saved your policy, view it and click Attached Entities > Attach. This is where you select the IAM user we set up earlier and attach this new policy to them. Once you've selected your user and clicked Attach Policy you're done with the AWS side of things - well done!
Connecting Wagtail with S3
To make use of S3 in Wagtail, you will need to set up a small amount of configuration. The first step is to install two packages.
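The two packages are typically django-storages and its boto3 backend (we assume the boto3-based backend here; older setups used boto):

```shell
pip install django-storages boto3
```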
You’ll need to add ‘storages’ to your INSTALLED_APPS in base.py.
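For example (the surrounding apps are whatever your project already lists):

```python
# base.py
INSTALLED_APPS = [
    # ... your existing apps ...
    'storages',
]
```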
Now your application has the requirements to store dynamic media in S3. The final step is to configure the S3 storage settings in your settings.py. We prefer to do this in our production.py, passing in the values from environment variables.
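A sketch of those settings, with #### stubs where your credentials and bucket name belong; the region and the environment variable names are our own assumptions:

```python
# production.py -- S3 media storage via django-storages.
import os

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

# We pass these in from the environment; the #### fallbacks mark where
# your own credentials and bucket name belong.
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID', '####')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY', '####')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME', '####')

# A specific region, plus forcing the more secure v4 signatures
# (the default can fall back to v2).
AWS_S3_REGION_NAME = os.environ.get('AWS_S3_REGION_NAME', 'eu-west-2')
AWS_S3_SIGNATURE_VERSION = 's3v4'
```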
This setup configures your application to use S3 to store your media assets. Replace the hashes with your Amazon S3 connection credentials and bucket name where appropriate. In our deployment we used a specific region and forced the signature version to the more secure v4, as the default can fall back to v2.
Your application should now support dynamic media storage in S3. You can confirm this by going to your admin section and uploading an image.
If you have any issues or require further understanding, we have compiled a list of documentation used to create this post. If you’re still stumped, throw us an email.