As I’ve mentioned in previous posts, I’m a big fan of Jekyll for creating static websites. The one obvious shortcoming of Jekyll is that the entire site must be regenerated when adding even a single post. Lately I’ve been experimenting with Micro.blog for personal posts not appropriate here. It has an iOS app for making quick updates from your phone, and hooks into Twitter and Facebook for automatic cross-posting to your social feeds. Yes, I could post directly on Twitter or Facebook, but I prefer hosting my content independently. That said, Micro.blog does have one shortcoming for my use: anything beyond a simple photo must be hosted elsewhere and linked. Of course, I could post my media on any number of free services, but I want to ensure the links don’t break if a service changes its URL scheme or happens to go under. For this reason, I decided to host my files on Amazon’s S3 service.
During my research, I found many posts out there on hosting content using Amazon S3, but none had a process that worked specifically for me. I’ve documented here the steps I followed, mainly for my benefit, but I’m hoping you will also find it useful.
First off, to use Amazon Web Services, you need an Amazon account. Since most everyone has one, I’m not covering those steps here; if you don’t have an account, set one up before going further.
- Go to the Amazon Web Services page and sign into the AWS console using your account[^1].
- Once logged in, under Services, select S3.
- On the S3 page, click the button labelled Create Bucket.
- Enter a bucket name and select an appropriate region for hosting, then click Next. In this example, I have used files.bftsystems.ca. Any name would do, but this makes it easy for me to remember which URL links to this bucket.
- You will be prompted for properties to associate with this bucket. As this will be used for simple hosting of public files, no special properties are needed. Just click Next to accept the defaults.
- You will now be prompted to select permissions for this bucket. The only setting to change is the public permissions. Change this to Grant public read access to this bucket and click Next.
- Lastly, a summary of the bucket settings will be shown. Confirm everything, then click Create bucket.
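If you prefer the command line, the bucket setup above can also be sketched with the AWS CLI. This assumes the CLI is installed and configured with your credentials; the bucket name is my example, and the region is just a sample:

```shell
# Create the bucket ("aws s3 mb" handles the region's location constraint).
aws s3 mb s3://files.bftsystems.ca --region ca-central-1

# Grant public read access on the bucket, matching the console setting above.
aws s3api put-bucket-acl --bucket files.bftsystems.ca --acl public-read
```

Note that newer AWS accounts block public ACLs by default, so you may also need to relax the bucket’s Block Public Access settings before this takes effect.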
Once the bucket is created, it will appear on the S3 page. Click the bucket name in the list to open it.
When the bucket opens, click the Upload button to begin adding files. There are many tools available to simplify uploading files to S3[^2], but for the purposes of this tutorial, I’ll cover the steps to upload using the website.
- When prompted, click the Add files link to open a file browser and select the files to upload, or drag and drop the files to the browser window. Once all the files to upload are in the list, click Next to begin the upload process.
- Once again, when prompted for permissions, select Grant public read access to these object(s) to allow access from the web.
- As no special properties are required, you can safely click the Upload button at this point to skip the remaining steps.
When the upload completes, the files will appear in the bucket list (pun intended), but they are not yet publicly accessible. To enable this, select the files, then click the Make public option under the More button. You will see a warning that the objects will now be made public. Click the Make public button to proceed.
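The upload steps can likewise be scripted with the AWS CLI; the file and folder names here are hypothetical, and the bucket is my example:

```shell
# Upload a single file and grant public read access in one step,
# which skips the separate "Make public" step needed in the console.
aws s3 cp photo.jpg s3://files.bftsystems.ca/ --acl public-read

# Or sync a whole local folder of media into the bucket.
aws s3 sync ./media s3://files.bftsystems.ca/ --acl public-read
```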
Once the steps above are completed, all the files uploaded will be accessible using the Amazon S3 URL at files.bftsystems.ca.s3.amazonaws.com. That works, but I would prefer the URL be a sub-domain of my site. This can be done by adding a CNAME record for my domain pointing to the URL of the S3 bucket. How this is done will vary depending on your web host, but here is the screenshot from my web host console.
Once this DNS record has been added, all the files I uploaded can now be linked at the URL files.bftsystems.ca, totally transparent to my readers.
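The exact form of the record varies by DNS host, but in BIND zone-file notation it would look roughly like this for my example domain (note the CNAME only resolves correctly because the bucket name matches the hostname):

```
; hypothetical entry in the bftsystems.ca zone
files  IN  CNAME  files.bftsystems.ca.s3.amazonaws.com.
```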
I have one last recommendation. Although Amazon S3 rates are exceedingly low, Amazon does charge by usage[^3]. To avoid web crawlers indexing my S3 buckets and racking up charges, I always add a robots.txt file with the following text to ensure the bucket is not indexed (at least by web spiders that follow the rules).
```
User-agent: *
Disallow: /
```
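Creating and uploading that robots.txt can also be done from the shell, again assuming the AWS CLI is configured and using my example bucket:

```shell
# Write the robots.txt locally...
printf 'User-agent: *\nDisallow: /\n' > robots.txt

# ...then upload it to the bucket root with public read access,
# so crawlers can actually fetch it.
aws s3 cp robots.txt s3://files.bftsystems.ca/ --acl public-read
```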
[^1]: As a reminder, your Amazon credentials have full root access to all your AWS resources, so use a strong password. I also recommend enabling two-factor authentication for added security. See the steps here to enable multi-factor authentication in AWS.