Read the Docs is an open source tool for creating documentation. It uses the Sphinx documentation generator and is free for public repos. It offers the following features:
Free hosting for all documentation.
Documentation available in online and offline formats.
Automatic builds in response to Git commits.
Document versioning in response to Git branches and version control settings.
I begin by accessing the Read The Docs tutorial GitHub template and use this to create a repo on my GitHub account. I then sign up for a Read The Docs account and authorise it to interact with my GitHub account:
This allows Read The Docs to view the public repos in my GitHub account. They are then displayed in my Read The Docs console:
I select my ReadTheDocs-Tutorial repo and Read The Docs immediately starts building the documentation for it. Builds usually take around 30 to 40 seconds and Read The Docs gives updates throughout the process:
The end result is a site like the one below:
So far everything has been going well. What will happen when I try it out with the GitHub repository I made last time?
As before, the build takes around 30 seconds and gives me a link to my documentation. This time the site shows an autogenerated template instead:
This is because there is an important difference between the repos. The ReadMe in my repo is an .md (Markdown) file, whereas the Read The Docs tutorial documentation uses .rst (reStructuredText) files.
I’m currently getting to know Read The Docs and .rst, so I’ll use my tutorial repo for the remainder of this post and let my experiences guide my next steps.
Discovering .rst
Now that I’m more clued up on how Read The Docs works behind the scenes, let’s examine what .rst files look like and how they can be changed.
Included within the Read The Docs tutorials repo is a docs folder, which contains a source folder with four files:
api.rst
conf.py
index.rst
usage.rst
These files mirror the site generated by Read The Docs. For example, index.rst:
Welcome to Lumache's documentation!
===================================
**Lumache** (/lu'make/) is a Python library for cooks and food lovers
that creates recipes mixing random ingredients.
It pulls data from the `Open Food Facts database <https://world.openfoodfacts.org/>`_
and offers a *simple* and *intuitive* API.
Check out the :doc:`usage` section for further information, including
how to :ref:`installation` the project.
.. note::

   This project is under active development.

Lumache has its documentation hosted on Read the Docs.

Contents
--------

.. toctree::

   usage
   api
This mirrors the page at readthedocs.io/en/latest/index.html:
Let’s make some changes. I update index.rst to include new code on lines 18, 20 and 29:
Welcome to Lumache's documentation!
===================================
**Lumache** (/lu'make/) is a Python library for cooks and food lovers
that creates recipes mixing random ingredients.
It pulls data from the `Open Food Facts database <https://world.openfoodfacts.org/>`_
and offers a *simple* and *intuitive* API.
Check out the :doc:`usage` section for further information, including
how to :ref:`installation` the project.
.. note::

   This project is under active development.

Lumache has its documentation hosted on Read the Docs.

.. note::

   This page also now holds test content for `EDMTracksLosslessS3Upload-PowerShell <https://github.com/MrDamienJones/EDMTracksLosslessS3Upload-PowerShell>`_.

Contents
--------

.. toctree::

   usage
   api
   instructions
I also create a new instructions.rst file:

Instructions
============
.. _instructions:
Installation
------------
EDMTracksLosslessS3Upload is a PowerShell script for uploading local lossless music files to Amazon S3. The script includes:
- Recording outputs using the ``Start-Transcript`` cmdlet.
- Checking there are files in the local folder.
**(Some text removed to avoid unnecessary scrolling)**
Please use the most recent version. Previous versions are included for completeness.
.. _usage:
Usage
------------
When everything is in place, run the PowerShell script. PowerShell will then move through the script, producing outputs as work is completed. A typical example of a successful transcript is as follows:
.. code-block:: console

   **********************
   Transcript started, output file is C:\Users\Files\EDMTracksLosslessS3Upload.log

   **(Some text removed to avoid unnecessary scrolling)**

   All files processed. Exiting.
   **********************
   Windows PowerShell transcript end
   End time: 20220617153926
   **********************
instructions.rst on GitHub
The GitHub commit triggers a new Read The Docs build:
The new build updates the Index page with a new note and additional links in the Contents menu:
On paper, the reStructuredText format is compelling. It avoids having a single ReadMe file that can easily get large and unwelcoming. The documentation produced by .rst is comparable to a wiki, and GitHub supports it in preview and display modes.
That said, Markdown has embedded itself in more places and has found more buy-in as a result. Applications like Trello, Azure DevOps and, crucially, Visual Studio Code support it out of the box. This gives more opportunities to practise and use Markdown, essentially making it the de facto winner of this unofficial format war.
However, while Markdown is designed for writing for the web, .rst is specifically designed for writing technical documentation. Support is out there – Sphinx has an .rst tutorial and some .rst previewers exist. The versatility of .rst and its ability to auto-generate documentation and navigation is also of interest.
I’m likely to give it a go when I have some beefier documentation to write and see how it works out. There are still parts of the tutorial I haven’t touched on, and the documentation is, perhaps unsurprisingly, very good. So it looks like Read The Docs would be a good tool to use for the right project.
Summary
In this post, I tried out the open source documentation tool Read The Docs. I made some sample documentation and experienced the reStructuredText format for the first time. Then I committed some changes to work with the .rst format and get a feel for how it works.
For several months I’ve been going through some music from an old hard drive. These music files are currently on my laptop, and exist mainly as lossless .flac files.
For each file I’m doing the following:
Creating an .mp3 copy of each lossless file.
Storing the .mp3 file on my laptop.
Uploading a copy of the lossless file to S3 Glacier.
Transferring the original lossless file from my laptop to my desktop PC.
I usually do the uploads using the S3 console, and have been meaning to automate the process for some time. So I decided to write some code to upload files to S3 for me, in this case using PowerShell.
Prerequisites
Before starting to write my PowerShell script, I have done the following on my laptop:
Version 0 gets the basic functionality in place. No bells and whistles here – I just want to upload a file to an S3 bucket prefix, stored using the Glacier Flexible Retrieval storage class.
V0: Writing To S3
I am using the PowerShell Write-S3Object cmdlet to upload my files to S3. This cmdlet needs a couple of parameters to do what’s required:
-BucketName: The S3 bucket receiving the files.
-Folder: The folder on my laptop containing the files.
-KeyPrefix: The S3 bucket key prefix to assign to the uploaded objects.
-StorageClass: The S3 storage class to assign to the uploaded objects.
I create a variable for each of these so that my script is easier to read as I continue its development. I couldn't find the accepted values for the -StorageClass parameter in the Write-S3Object documentation. In the end, I found them in the S3 PutObject API Reference.
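Putting that together, Version 0 amounts to something like the sketch below. The values are placeholders rather than my real bucket and folder names:

#Set variables (placeholder values)
$LocalSource = "C:\Users\Files\"
$S3BucketName = "my-s3-bucket"
$S3KeyPrefix = "Folder\SubFolder\"
$S3StorageClass = "GLACIER"

#Upload everything in the local folder to the S3 bucket prefix
Write-S3Object -BucketName $S3BucketName -Folder $LocalSource -KeyPrefix $S3KeyPrefix -StorageClass $S3StorageClass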
I don’t have to log onto the S3 console for uploads anymore.
Forgetting to specify Glacier Flexible Retrieval as the S3 storage class is no longer a problem. The script does this for me.
Starting an upload to S3 is now as simple as right-clicking the script and selecting Run With PowerShell from the Windows Context Menu.
Version 0 works great, but I'll give away one of my S3 bucket names if I start sharing a non-redacted version. This has been known to cause security issues in severe cases. Ideally, I'd like to separate the variables from the PowerShell commands, so let's work on that next.
Version 1: Security
Version 1 enhances the security of my script by separating my variables from my PowerShell commands. To make this work without breaking things, I'm using two features: PowerShell dot sourcing and a Git .gitignore file.
To take advantage of these features, I’ve made two new files in my repo:
Variables.ps1 for my variables.
V1Security.ps1 for my Write-S3Object command.
So let’s now talk about how this all works.
V1: Isolating Variables With Dot Sourcing
At the moment, my script is broken. Running Variables.ps1 will create the variables but do nothing with them. Running V1Security.ps1 will fail as the variables aren’t in that script anymore.
This is where Dot Sourcing comes in. Using Dot Sourcing lets PowerShell look for code in other places. Here, when I run V1Security.ps1 I want PowerShell to look for variables in Variables.ps1.
To dot source a script, type a dot (.) and a space before the script path. As both of my files are in the same folder, PowerShell doesn’t even need the full path:
. .\EDMTracksLosslessS3Upload-Variables.ps1
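With the variables dot sourced, V1Security.ps1 is little more than the upload command itself. A minimal sketch, assuming the variable names stay the same as in Version 0:

#Load the variables from the separate script via dot sourcing
. .\EDMTracksLosslessS3Upload-Variables.ps1

#Upload the local folder to S3 using the dot-sourced variables
Write-S3Object -BucketName $S3BucketName -Folder $LocalSource -KeyPrefix $S3KeyPrefix -StorageClass $S3StorageClass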
Now my script works again! But I still have the same problem – if Variables.ps1 is committed to GitHub at any point then my variables are still visible. How can I stop that?
This time it’s Git to the rescue. I need a .gitignore file.
V1: Selective Tracking With .gitignore
.gitignore is a way of telling Git what not to include in commits. Entering a file, folder or pattern into a repo’s .gitignore file tells Git not to track it.
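For this repo, the .gitignore only needs to name the variables file. A minimal sketch, assuming the file name used in the dot sourcing example above:

# Keep the real variables file out of version control
EDMTracksLosslessS3Upload-Variables.ps1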
When Visual Studio Code finds a .gitignore file, it helps out by making visual changes in response to the file's contents, greying out any ignored files in the Explorer. I create a .gitignore file, add Variables.ps1 to it, and commit a blank template, VariablesBlank.ps1, in its place:
#Set Variables
#The local file path for objects to upload to S3
#E.g. "C:\Users\Files\"
$LocalSource =
#The S3 bucket to upload the objects to
#E.g. "my-s3-bucket"
$S3BucketName =
#The S3 bucket prefix / folder to upload the objects to (if applicable)
#E.g. "Folder\SubFolder\"
$S3KeyPrefix =
#The S3 Storage Class to upload to
#E.g. "GLACIER"
$S3StorageClass =
Version 1 VariablesBlank.ps1 On GitHub
V1: Evaluation
Version 1 now gives me the benefits of Version 0 with the following additions:
My variables and commands have now been separated.
I can now call Variables.ps1 from other scripts in the same folder, knowing the variables will be the same each time for each script.
I can use .gitignore to make sure Variables.ps1 is never uploaded to my GitHub repo.
The next problem is one of visibility. I have no way of knowing whether my uploads have been successful or whether they were duplicated, and I don't have any auditing.
The S3 console gives me a summary at the end of each upload:
It would be great to have something similar with my script! In addition, some error handling and quality control checks would increase my confidence levels.
Let’s get to work!
Version 2: Visibility
Version 2 enhances the visibility of my script. The length of the script grows a lot here, so let’s run through the changes and I’ll explain what’s going on.
As a starting point, I copied V1Security.ps1 and renamed it to V2Visibility.ps1.
V2: Variables.ps1 And .gitignore Changes
Additions are being made to these files as a result of the Version 2 changes. I’ll mention them as they come up, but it makes sense to cover a few things up-front:
I added External to all variable names in Variables.ps1 to keep track of them in the script. For example, $S3BucketName is now $ExternalS3BucketName.
There are some additional local file paths in Variables.ps1 that I’m using for transcripts and some post-upload checks.
The first change is perhaps the simplest. PowerShell has built-in cmdlets for creating transcripts:
Start-Transcript creates a record of all or part of a PowerShell session in a separate file.
Stop-Transcript stops a transcript that was started by the Start-Transcript cmdlet.
These go at the start and end of V2Visibility.ps1, along with a local file path for the EDMTracksLosslessS3Upload.log file I’m using to record everything.
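In V2Visibility.ps1 these appear as the first and last commands:

#Start Transcript
Start-Transcript -Path $ExternalTranscriptPath -IncludeInvocationHeader

#(The rest of the script runs here)

#Stop Transcript
Stop-Transcript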
This new path is stored in Variables.ps1. In addition, EDMTracksLosslessS3Upload.log has been added to .gitignore.
V2: Check If There Are Any Files
Now the error handling begins. I want the script to fail gracefully, and I start by checking that there are files in the correct folder. First I count the files using Get-ChildItem and Measure-Object:
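$LocalSourceCount = (Get-ChildItem -Path $ExternalLocalSource | Measure-Object).Count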
And then stop the script running if no files are found:
If ($LocalSourceCount -lt 1)
{
Write-Output "No Local Files Found. Exiting."
Start-Sleep -Seconds 10
Stop-Transcript
Exit
}
There are a couple of commands here that make several appearances in Version 2:
Start-Sleep suspends PowerShell activity for the time stated. This gives me time to read the output when I’m running the script using the context menu.
Exit causes PowerShell to completely stop everything it’s doing. In this case, there’s no point continuing as there’s nothing in the folder.
If files are found, PowerShell displays the count and carries on:
Else
{
Write-Output "$LocalSourceCount Local Files Found"
}
V2: Check If The Files Are Lossless
Next, I want to stop any file uploads that don’t belong in the S3 bucket. The bucket should only contain lossless music – anything else should be rejected.
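In the script, this is a ForEach loop that checks each file's extension against a short list of lossless formats and exits if anything else is found:

ForEach ($LocalSourceObjectFileExtension In $LocalSourceObjectFileExtensions)
{
    #If any extension is unacceptable, output this and stop the script.
    If ($LocalSourceObjectFileExtension -NotIn ".flac", ".wav", ".aif", ".aiff")
    {
        Write-Output "Unacceptable $LocalSourceObjectFileExtension file found. Exiting."
        Start-Sleep -Seconds 10
        Stop-Transcript
        Exit
    }
    #If the extension is fine, output it and continue.
    Else
    {
        Write-Output "Acceptable $LocalSourceObjectFileExtension file."
    }
}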
So now, if I attempt to upload an unacceptable .log file, the transcript will say:
**********************
Transcript started, output file is C:\Files\EDMTracksLosslessS3Upload.log
Checking extensions are valid for each local file.
Unacceptable .log file found. Exiting.
**********************
Whereas an acceptable .flac file will produce:
**********************
Transcript started, output file is C:\Files\EDMTracksLosslessS3Upload.log
Checking extensions are valid for each local file.
Acceptable .flac file.
**********************
And when uploading multiple files:
**********************
Transcript started, output file is C:\Files\EDMTracksLosslessS3Upload.log
Checking extensions are valid for each local file.
Acceptable .flac file.
Acceptable .wav file.
Acceptable .flac file.
**********************
V2: Check If The Files Are Already In S3
The next step checks if the files are already in S3. This might not seem like a problem, as S3 usually overwrites an object if it already exists.
Thing is, this bucket is replicated. This means it’s also versioned. As a result, S3 will keep both copies in this scenario. In the world of Glacier this doesn’t cost much, but it will distort the bucket’s S3 Inventory. This could lead to confusion when I check them with Athena. And if I can stop this situation with some automation then I might as well.
I’m going to use the Get-S3Object cmdlet to query my bucket for each file. For this to work, I need two things:
-BucketName: This is in Variables.ps1.
-Key: The object's S3 file path. For example, Folder\SubFolder\Music.flac.
As the files shouldn't be in S3 yet, these keys shouldn't exist. So I'll have to make them using PowerShell.
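Each key is built by joining the S3 key prefix to the filename, and is then passed to Get-S3Object:

#Create S3 object key using $ExternalS3KeyPrefix and current object's filename
$LocalSourceObjectFileNameS3Key = $ExternalS3KeyPrefix + $LocalSourceObjectFileName

#Attempt to get S3 object data using $LocalSourceObjectFileNameS3Key
$LocalSourceObjectFileNameS3Check = Get-S3Object -BucketName $ExternalS3BucketName -Key $LocalSourceObjectFileNameS3Key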
Get-S3Object should return null as the object shouldn’t exist.
If this doesn’t happen then the object is already in the bucket. In this situation, PowerShell identifies the file causing the problem and then exits the script:
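If ($null -ne $LocalSourceObjectFileNameS3Check)
{
    Write-Output "File already exists in S3 bucket: $LocalSourceObjectFileName. Please review. Exiting."
    Start-Sleep -Seconds 10
    Stop-Transcript
    Exit
}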
If the file isn’t found then PowerShell continues to run:
Else
{
Write-Output "$LocalSourceObjectFileName does not currently exist in S3 bucket."
}
Assuming no files are found at this point, the log will read as follows:
Checking if local files already exist in S3 bucket.
Checking S3 bucket for Artist-Track-ExtendedMix.flac
Artist-Track-ExtendedMix.flac does not currently exist in S3 bucket.
Checking S3 bucket for Artist-Track-OriginalMix.flac
Artist-Track-OriginalMix.flac does not currently exist in S3 bucket.
V2: Uploading Files Instead Of Folders
Now to start uploading to S3!
In Version 2 I’ve altered how this is done. Previously my script’s purpose was to upload a folder to S3 using the PowerShell cmdlet Write-S3Object.
Version 2 now uploads individual files instead. There is a reason for this that I’ll go into shortly.
This means I have to change things around as Write-S3Object now needs different parameters:
Instead of telling the -Folder parameter where the local folder is, I now need to tell the -File parameter where each file is located.
Instead of telling the -KeyPrefix parameter where to store the uploaded objects in S3, I now need to tell the -Key parameter the full S3 path for each object.
I’ll do -Key first. I start by opening another ForEach loop, and create an S3 key for each file in the same way I did earlier:
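ForEach ($LocalSourceObjectFileName In $LocalSourceObjectFileNames)
{
    #Create S3 object key using $ExternalS3KeyPrefix and current object's filename
    $LocalSourceObjectFileNameS3Key = $ExternalS3KeyPrefix + $LocalSourceObjectFileName

    #Create local filepath for each object for the file move
    $LocalSourceObjectFilepath = $ExternalLocalSource + "\" + $LocalSourceObjectFileName

    #Write object to S3 bucket
    Write-S3Object -BucketName $ExternalS3BucketName -File $LocalSourceObjectFilepath -Key $LocalSourceObjectFileNameS3Key -StorageClass $ExternalS3StorageClass

    #(The loop continues with the upload checks below)
}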
The main benefit of this approach is that, if something goes wrong mid-upload, the transcript will tell me which uploads were successful. Version 1’s script would only tell me that uploads had started, so in the event of failure I’d need to check the S3 bucket’s contents.
Speaking of failure, wouldn’t it be good to check that the uploads worked?
V2: Were The Uploads Successful?
For this, I’m still working in the ForEach loop I started for the uploads. After an upload finishes, PowerShell checks if the object is in S3 using the Get-S3Object command I wrote earlier:
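#Attempt to get S3 object data using $LocalSourceObjectFileNameS3Key
$LocalSourceObjectFileNameS3Check = Get-S3Object -BucketName $ExternalS3BucketName -Key $LocalSourceObjectFileNameS3Key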
This time I want the object to be found, so null is a bad result.
Next, I get PowerShell to do some heavy lifting for me. I’ve created a pair of new local folders called S3WriteSuccess and S3WriteFail. The paths for these are stored in Variables.ps1.
If my S3 upload check doesn’t find anything and returns null, PowerShell moves the file from the source folder to S3WriteFail using Move-Item:
If ($null -eq $LocalSourceObjectFileNameS3Check)
{
Write-Output "S3 Upload Check FAIL: $LocalSourceObjectFileName. Moving to local Fail folder"
Move-Item -Path $LocalSourceObjectFilepath -Destination $ExternalLocalDestinationFail
}
If the object is found, PowerShell moves the file to S3WriteSuccess:
Else
{
Write-Output "S3 Upload Check Success: $LocalSourceObjectFileName. Moving to local Success folder"
Move-Item -Path $LocalSourceObjectFilepath -Destination $ExternalLocalDestinationSuccess
}
The ForEach loop then repeats with the next file until all are processed.
So now, a failed upload produces the following log:
**********************
Beginning S3 Upload Checks On Following Objects: StephenJKroos-Micrsh-OriginalMix
S3 Upload Check: StephenJKroos-Micrsh-OriginalMix.flac
S3 Upload Check FAIL: StephenJKroos-Micrsh-OriginalMix. Moving to local Fail folder
**********************
Windows PowerShell transcript end
**********************
While a successful S3 upload produces this one:
**********************
Beginning S3 Upload Checks On Following Objects: StephenJKroos-Micrsh-OriginalMix
S3 Upload Check: StephenJKroos-Micrsh-OriginalMix.flac
S3 Upload Check Success: StephenJKroos-Micrsh-OriginalMix. Moving to local Success folder
**********************
Windows PowerShell transcript end
**********************
PowerShell then shows a final message before ending the transcript:
Write-Output "All files processed. Exiting."
Start-Sleep -Seconds 10
Stop-Transcript
The full V2Visibility.ps1 script is as follows:

##################################
####### EXTERNAL VARIABLES #######
##################################
#Load External Variables Via Dot Sourcing
. .\EDMTracksLosslessS3Upload-Variables.ps1
#Start Transcript
Start-Transcript -Path $ExternalTranscriptPath -IncludeInvocationHeader
###############################
####### LOCAL VARIABLES #######
###############################
#Get count of items in $ExternalLocalSource
$LocalSourceCount = (Get-ChildItem -Path $ExternalLocalSource | Measure-Object).Count
#Get list of extensions in $ExternalLocalSource
$LocalSourceObjectFileExtensions = Get-ChildItem -Path $ExternalLocalSource | ForEach-Object -Process { [System.IO.Path]::GetExtension($_) }
#Get list of filenames in $ExternalLocalSource
$LocalSourceObjectFileNames = Get-ChildItem -Path $ExternalLocalSource | ForEach-Object -Process { [System.IO.Path]::GetFileName($_) }
##########################
####### OPERATIONS #######
##########################
#Check there are files in local folder.
Write-Output "Counting files in local folder."
#If local folder less than 1, output this and stop the script.
If ($LocalSourceCount -lt 1)
{
Write-Output "No Local Files Found. Exiting."
Start-Sleep -Seconds 10
Stop-Transcript
Exit
}
#If files are found, output the count and continue.
Else
{
Write-Output "$LocalSourceCount Local Files Found"
}
#Check extensions are valid for each file.
Write-Output " "
Write-Output "Checking extensions are valid for each local file."
ForEach ($LocalSourceObjectFileExtension In $LocalSourceObjectFileExtensions)
{
#If any extension is unacceptable, output this and stop the script.
If ($LocalSourceObjectFileExtension -NotIn ".flac", ".wav", ".aif", ".aiff")
{
Write-Output "Unacceptable $LocalSourceObjectFileExtension file found. Exiting."
Start-Sleep -Seconds 10
Stop-Transcript
Exit
}
#If extension is fine, output the extension for each file and continue.
Else
{
Write-Output "Acceptable $LocalSourceObjectFileExtension file."
}
}
#Check if local files already exist in S3 bucket.
Write-Output " "
Write-Output "Checking if local files already exist in S3 bucket."
#Do following actions for each file in local folder
ForEach ($LocalSourceObjectFileName In $LocalSourceObjectFileNames)
{
#Create S3 object key using $ExternalS3KeyPrefix and current object's filename
$LocalSourceObjectFileNameS3Key = $ExternalS3KeyPrefix + $LocalSourceObjectFileName
#Create local filepath for each object for the file move
$LocalSourceObjectFilepath = $ExternalLocalSource + "\" + $LocalSourceObjectFileName
#Output that S3 upload check is starting
Write-Output "Checking S3 bucket for $LocalSourceObjectFileName"
#Attempt to get S3 object data using $LocalSourceObjectFileNameS3Key
$LocalSourceObjectFileNameS3Check = Get-S3Object -BucketName $ExternalS3BucketName -Key $LocalSourceObjectFileNameS3Key
#If local file found in S3, output this and stop the script.
If ($null -ne $LocalSourceObjectFileNameS3Check)
{
Write-Output "File already exists in S3 bucket: $LocalSourceObjectFileName. Please review. Exiting."
Start-Sleep -Seconds 10
Stop-Transcript
Exit
}
#If local file not found in S3, report this and continue.
Else
{
Write-Output "$LocalSourceObjectFileName does not currently exist in S3 bucket."
}
}
#Output that S3 uploads are starting - count and file names
Write-Output " "
Write-Output "Starting S3 Upload Of $LocalSourceCount Local Files."
Write-Output "These files are as follows: $LocalSourceObjectFileNames"
Write-Output " "
#Do following actions for each file in local folder
ForEach ($LocalSourceObjectFileName In $LocalSourceObjectFileNames)
{
#Create S3 object key using $ExternalS3KeyPrefix and current object's filename
$LocalSourceObjectFileNameS3Key = $ExternalS3KeyPrefix + $LocalSourceObjectFileName
#Create local filepath for each object for the file move
$LocalSourceObjectFilepath = $ExternalLocalSource + "\" + $LocalSourceObjectFileName
#Output that S3 upload is starting
Write-Output "Starting S3 Upload Of $LocalSourceObjectFileName"
#Write object to S3 bucket
Write-S3Object -BucketName $ExternalS3BucketName -File $LocalSourceObjectFilepath -Key $LocalSourceObjectFileNameS3Key -StorageClass $ExternalS3StorageClass
#Output that S3 upload check is starting
Write-Output "Starting S3 Upload Check Of $LocalSourceObjectFileName"
#Attempt to get S3 object data using $LocalSourceObjectFileNameS3Key
$LocalSourceObjectFileNameS3Check = Get-S3Object -BucketName $ExternalS3BucketName -Key $LocalSourceObjectFileNameS3Key
#If $LocalSourceObjectFileNameS3Key doesn't exist in S3, move to local Fail folder.
If ($null -eq $LocalSourceObjectFileNameS3Check)
{
Write-Output "S3 Upload Check FAIL: $LocalSourceObjectFileName. Moving to local Fail folder"
Move-Item -Path $LocalSourceObjectFilepath -Destination $ExternalLocalDestinationFail
}
#If $LocalSourceObjectFileNameS3Key does exist in S3, move to local Success folder.
Else
{
Write-Output "S3 Upload Check Success: $LocalSourceObjectFileName. Moving to local Success folder"
Move-Item -Path $LocalSourceObjectFilepath -Destination $ExternalLocalDestinationSuccess
}
}
#Stop Transcript
Write-Output " "
Write-Output "All files processed. Exiting."
Start-Sleep -Seconds 10
Stop-Transcript
V2Visibility.ps1 On GitHub
VariablesBlank.ps1 Version 2
##################################
####### EXTERNAL VARIABLES #######
##################################
#The local file path for the transcript file
#E.g. "C:\Users\Files\"
$ExternalTranscriptPath =
#The local file path for objects to upload to S3
#E.g. "C:\Users\Files\"
$ExternalLocalSource =
#The S3 bucket to upload objects to
#E.g. "my-s3-bucket"
$ExternalS3BucketName =
#The S3 bucket prefix / folder to upload objects to (if applicable)
#E.g. "Folder\SubFolder\"
$ExternalS3KeyPrefix =
#The S3 Storage Class to upload to
#E.g. "GLACIER"
$ExternalS3StorageClass =
#The local file path for moving successful S3 uploads to
#E.g. "C:\Users\Files\"
$ExternalLocalDestinationSuccess =
#The local file path for moving failed S3 uploads to
#E.g. "C:\Users\Files\"
$ExternalLocalDestinationFail =
Version 2 VariablesBlank.ps1 On GitHub
V2: Evaluation
Overall I’m very happy with how this all turned out! Version 2 took a script that worked with some supervision, and turned it into something I can set and forget.
The various checks now have my back if I select the wrong files or if my connection breaks. And, while the Get-S3Object checks mean that I’m making more S3 API calls, the increase won’t cause any bill spikes.
The following is a typical transcript that my script produces following a successful upload of two .flac files:
**********************
Transcript started, output file is C:\Users\Files\EDMTracksLosslessS3Upload.log
Counting files in local folder.
2 Local Files Found
Checking extensions are valid for each local file.
Acceptable .flac file.
Acceptable .flac file.
Checking if local files already exist in S3 bucket.
Checking S3 bucket for MarkOtten-Tranquility-OriginalMix.flac
MarkOtten-Tranquility-OriginalMix.flac does not currently exist in S3 bucket.
Checking S3 bucket for StephenJKroos-Micrsh-OriginalMix.flac
StephenJKroos-Micrsh-OriginalMix.flac does not currently exist in S3 bucket.
Starting S3 Upload Of 2 Local Files.
These files are as follows: MarkOtten-Tranquility-OriginalMix StephenJKroos-Micrsh-OriginalMix.flac
Starting S3 Upload Of MarkOtten-Tranquility-OriginalMix.flac
Starting S3 Upload Check Of MarkOtten-Tranquility-OriginalMix.flac
S3 Upload Check Success: MarkOtten-Tranquility-OriginalMix.flac. Moving to local Success folder
Starting S3 Upload Of StephenJKroos-Micrsh-OriginalMix.flac
Starting S3 Upload Check Of StephenJKroos-Micrsh-OriginalMix.flac
S3 Upload Check Success: StephenJKroos-Micrsh-OriginalMix.flac. Moving to local Success folder
All files processed. Exiting.
**********************
Windows PowerShell transcript end
End time: 20220617153926
**********************
In this post, I created a script to upload lossless music files from my laptop to one of my Amazon S3 buckets using PowerShell.
I introduced automation to perform checks before and after each upload, and logged the outputs to a transcript. I then produced a repo for the scripts, accompanied by a ReadMe document.
In this post I talk about my recent experience with the AWS Certified Developer – Associate certification, discuss why and how I studied for the exam and explain why part of the process was like an early 90s puzzle game.
Motivation For Earning The AWS Developer Associate
Firstly I’ll explain why I took the exam. I like to use certifications as evidence of my current knowledge and skillset, and as mechanisms to introduce me to new topics that I wouldn’t otherwise have interacted with.
There’s a gap of around 18 months between my last AWS certification and this one. There are a few reasons for that:
I wanted to use my new skills for the AWS data migration project at work.
My role at the time didn’t involve many of the services covered in the Developer Associate exam.
After the AWS migration was completed and I became a Data Engineer, I felt that the time was right for the Developer Associate. My new role brought with it new responsibilities, and the AWS migration made new tooling available to the business. I incorporated the Developer Associate into the upskilling for my new role over a four month period.
The benefits of the various sections and modules of the Developer Associate can be split across:
Projects the Data Engineering team is currently working on.
Future projects the Data Engineering team is likely to receive.
Projects I can undertake in my own time to augment my skillset.
Current Work Projects
Our ETLs are built using Python on AWS Lambda. The various components of Lambda were a big part of the exam and helped me out when writing new ETLs and modernising legacy components.
Git repos are a big part of the Data Engineering workstream. I am a relative newcomer to Git, and the sections on CodeCommit helped me better understand the fundamentals.
Build tests and deployments are managed by the Data Engineering CICD pipelines. The CodeBuild, CodeDeploy and CodePipeline sections have shown me what these pipelines are capable of and how they function.
Some Data Engineering pipelines use Docker. The ECS and Fargate sections helped me understand containers conceptually and the benefits they offer.
Future Work Projects
Sections about CloudWatch and SNS will be useful for setting up new monitoring and alerting as the Data Engineering team’s use of AWS services increases.
The DynamoDB module will be helpful when new data sources are introduced that either don’t need a relational database or are prone to schema changes.
Sections about Kinesis will help me design streams for real-time data processing and analytics.
Future Personal Projects
The CloudFormation and SAM modules will help me build and deploy applications in my AWS account for developing my Python knowledge.
Sections on Cognito will help me secure these applications against unauthorized and malicious activity.
The API Gateway module will let me define how my applications can be interacted with and how incoming requests should be handled.
Sections on KMS will help me secure my data and resources when releasing homemade applications.
Resources For The AWS Developer Associate
Because AWS certifications are very popular, there are many resources to choose from. I used the following resources for my AWS Developer Associate preparation.
Stéphane Maarek Udemy Course
I’ve been a fan of Stéphane Maarek for some time, having used his courses for all of my AWS associate exams. His Ultimate AWS Certified Developer Associate is exceptional, with 32 hours of well-presented and informative videos covering all exam topics. His code and slides are included too.
Stéphane is big on passing on real-world skills as opposed to just teaching enough to pass exams, and his dedication to keeping his content updated is clearly visible in the course.
À votre santé Stéphane!
Tutorials Dojo Learning Portal
Tutorials Dojo, headed by Jon Bonso, is a site with plentiful resources for AWS, Microsoft Azure and Google Cloud. Their practice exams are known for being hard but fair and are comparable to the AWS exams. All questions include detailed explanations of both the correct and incorrect answers. These practice exams were an essential part of my preparation.
Want to mimic the exam? Timed Mode poses 65 questions against the clock.
Prefer immediate feedback? Review Mode shows answers and explanations after every question.
Practising a weak area? Section-Based Mode limits questions to specific topics.
Tutorials Dojo also offers a variety of Cheat Sheets and Study Guides. These are free, comprehensive and regularly updated.
AWS Documentation & FAQs
AWS documentation is the origin of most questions in the exam and Stéphane and Jon both reference it in their content. I refer to it in situations where a topic isn’t making sense, or if a topic is a regular stumbling block in the practice exams.
For example, I didn’t understand API Gateway integration types until I read the API Gateway Developer Guide page. I am a visual learner, but sometimes there’s no substitute for reading the instruction manual! The KMS FAQs cleared up a few problem areas for me as well.
AWS also have their own learning services, including the AWS Skill Builder. While I didn’t use it here, some of my AWS certifications will expire in 2023 so I’ll definitely be looking to test out Skill Builder then.
Anki
Anki is a free and open-source flashcard program. It has a great user guide that includes an explanation of how it aids learning. I find Anki works best for short pieces of information that I want regular exposure to via their mobile app.
One of my cards, for example, covered the process of migrating a Git repo to CodeCommit. PULL = NULL was a way for me to remember that pulling objects from the Git repo was incorrect in that scenario.
If an Anki card goes over two lines I use pen and paper for it instead. Previous experience has taught me that I can visualise small notes better on Anki and large notes better on paper.
Blogging
My best exam performance is with the AWS services I am most familiar with. Towards the end of my exam preparation, I wanted to fill some knowledge gaps by getting my hands dirty!
My posts about creating security alerts and enhanced S3 notifications let me get to grips with CloudTrail, CloudWatch, EventBridge and SNS. These all made an appearance in my exam so this was time well spent!
I also ran through an AWS guide about Building A Serverless Web Application to get some quick experience using API Gateway, CodeCommit and Cognito. This has given me some ideas for future blog projects, so stay tuned!
Approach To Studying The AWS Developer Associate
This section goes into detail about how I approached my studies. I didn’t realise it at the time but, on review, the whole process is basically a long ETL. With sword fighting.
Extract
I started by watching Stéphane’s course in its entirety, ‘extracting’ notes as I went. Since Stéphane provided his slides and since I already knew some topics from previous experience, the notes were mostly on topics that I either didn’t know or was out of practice with.
Transform
Having finished Stéphane’s course, I started the Tutorials Dojo practice exams. The aim here is to ‘transform’ my knowledge from notes and slides to answers to exam questions.
I have a spreadsheet template in Google Sheets for this process:
As I work through a practice exam, I record how I feel about my answers:
I can choose from:
Confident: I’m totally confident with my answer
5050: I’m torn between two answers but have eliminated some
Guess: I have no idea what the answer is
When I get the results of the practice exam, I add the outcomes:
The Gut Feel and Outcome columns then populate tables elsewhere on the spreadsheet:
I use these tables for planning my next moves:
The top table quantifies overall confidence, and can answer questions like “Is my confidence improving between practice exams?”, “How often am I having to guess answers?” and “How confident am I about taking the real exam?”
I can get the middle table from Tutorials Dojo, but have it on the sheet for convenience.
The bottom table shows me an analysis of Gut Feel against Outcome. This shows me how many of my correct answers were down to knowledge, and how many were down to luck.
I then update the Question column of the spreadsheet depending on the results in the bottom table:
I assume that anything listed as Confident and Correct is well known. Nothing is changed.
All 5050s and Correct Guesses are coloured orange. Here some knowledge is apparent, but more revision is needed.
All Incorrect Guesses are coloured red, because there are clear knowledge gaps here.
Anything listed as Confident and Incorrect is also coloured red. These are the biggest red flags of all, as here knowledge has either been misread or misunderstood.
Load
As the knowledge gaps and development areas became clear, I began to ‘load’ the topics that still didn’t make sense or were proving hard to remember.
Based on the Tutorials Dojo practice exam outcomes, I made a second set of notes that were more concise than the first. So where the first set was mostly “Things I Don’t Know”, the second set was mostly “Things I Can’t Remember”.
As you might imagine, this uses a fair amount of paper. I recycle this afterwards because I’m an environmentally-conscious shark.
Insult Sword Fighting
I’ve come to know part of the ‘load’ as Insult Sword Fighting. Some people will know exactly what I’m talking about here, while others will quite rightly need some explanation.
Insult Sword Fighting is part of the 1990 point and click adventure game The Secret of Monkey Island. In this section of the game, the player wins fights by knowing the correct responses to an opponent’s insults.
For example, during a fight the opponent might say:
“You fight like a dairy farmer.”
To which the player’s response should be:
“How appropriate. You fight like a cow!”
The player starts out with two insult-response pairs, and learns more during subsequent fights.
The aim of the section is to learn enough to defeat the Sword Master. However, her insults are different to the ones the player has previously seen. For the final challenge, the player must match their existing knowledge to the new insults.
So if the Sword Master says:
“I will milk every drop of blood from your body!”
The player should pick up on the word “milk” and respond with:
“How appropriate. You fight like a cow!”
OK But What Does This Have To Do With The Exam?
So let me explain. The first attempt at a practice exam is like the player’s first Insult Sword Fight. Most responses are unknown or unfamiliar, so things usually don’t go well.
The player gets better at Insult Sword Fighting by challenging new opponents. This time the player will know some responses, but will also encounter new insults to learn.
In the same way, the subsequent practice exams will pose some questions that are similar to those in the previous exam. Of course there will also be entirely new questions that need further investigation.
The player will decide they are ready to face the Sword Master when they are able to win the majority of their Insult Sword Fights because they know the logic behind the correct responses.
Like the insults, the logic behind the practice exam questions can also be learned. Knowing the logic well enough to regularly answer these questions correctly is a good indicator that attempting the real exam is a good idea.
The Sword Master’s insults are different to the ones the player has trained with. To win, the player must look for key words and phrases in the new insults and match them to their existing responses during battle.
The real exam will use unfamiliar questions. However the key words and phrases in the questions will match the knowledge built up during the practice exams, revealing the logic to arrive at the correct answers!
For those wondering how I made these images, I direct you to this awesome tool.
Next Steps
Now that the Developer Associate exam is over, I have a number of ideas for blog posts and projects to try out:
Building an ETL for my own data
Creating an API to query that data
Deploying the solution using Git and CICD
Plus I have a bookmarks folder and Trello board full of ideas to consider. So plenty to keep me busy!