Categories
Data & Analytics

Silver Layer Python ETL With The AWS Glue ETL Job Script Editor

In this post, I create my WordPress data pipeline’s Silver ETL process using Python and the AWS Glue ETL Job Script Editor.

Table of Contents

Introduction

Last time I worked on my WordPress AWS data pipeline, I produced my Bronze layer data and created a Glue Crawler to derive the schema of the Bronze S3 objects. It’s now time to start cleaning that data to prepare it for reporting, aggregation and consumption.

I’m also currently studying for the AWS Certified Data Engineer – Associate certification. While revising for this I learned the capabilities of the AWS Glue ETL Job Script Editor, and it seemed an ideal fit for my Silver ETL process. So I decided to make a post out of it and see how things went!

Firstly, I’ll examine the AWS Glue ETL Job Script Editor and how it will benefit my Silver ETL process. Then I’ll define the architecture of the Silver ETL job and how it fits into the existing data pipeline. Next, I’ll script and test the job. Finally, I’ll integrate it into the pipeline and explore the job’s costs.

Glue ETL Job Script Editor

This section examines the AWS Glue ETL Script Editor and Python Shell and considers some of Python Shell’s benefits and limitations.

Script Editor & Python Shell

Script Editor is a feature of AWS Glue. It offers serverless Spark, Ray and Python shells, enabling data transformation, preparation and cleaning with no infrastructure management. Scripts can be both uploaded and created from scratch, and version control is configurable to several Git services.

This post focuses on AWS Glue Python Shell. Introduced in 2019, Python Shell jobs suit small to medium-sized tasks as part of an ETL workflow.

Python Shell Pros

This section examines some of Python Shell’s benefits.

Low Cost

Python Shell jobs are the cheapest of the Glue job types to run. Glue charges are based on data processing units (DPUs). A single standard DPU currently provides 4 vCPU and 16 GB of memory. While regular Glue ETL jobs using Apache Spark need at least 2 DPUs, Python Shell jobs default to using only 1/16 (or 0.0625) DPU!

This can also be extended to 1 DPU, resulting in faster completion times. Like AWS Lambda, charges accrue based on resource usage and duration. So increased resource allocation can potentially create further savings.
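As a rough worked example (my assumption: the $0.44 per DPU-hour rate that applies in most regions – check the pricing page for yours), a minimum-length Python Shell run costs a fraction of a cent:

Python
# Hedged cost sketch. Python Shell jobs bill per second with a 1-minute minimum.
dpu = 0.0625               # Default Python Shell DPU allocation
billed_seconds = 60        # 1-minute billing minimum
rate_per_dpu_hour = 0.44   # Assumed rate - varies by region

cost = dpu * (billed_seconds / 3600) * rate_per_dpu_hour
print(f"${cost:.6f}")      # ~$0.000458 for a minimum-length run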

This section was correct as of August 2024 – the latest pricing data is on the AWS Glue pricing site.

Low Barrier To Entry

Python Shell jobs offer accessibility for those from a scripting background. When creating a new script in the console, users only need to choose the engine (in this case Python) and whether the script is being uploaded or created fresh. And that’s it! No configuring interpreters, environments or dependencies.

Python Shell jobs also integrate with other AWS services. They can easily connect to data sources like S3, RDS and DynamoDB. They can be automated with Glue Workflows and Triggers. IAM can also control access to both the Python Shell job and the AWS services it interacts with.

Included Python Libraries

AWS Glue Python Shell includes a variety of built-in Python libraries that are useful for ETL tasks. These libraries cover a range of functionalities such as data processing, machine learning, and interacting with AWS services.

They include boto3 and pandas among others – this AWS post has a full table of included libraries and their versions. Additional libraries can be installed and imported using pip.

Some people will quickly see issues with this list though…

Python Shell Cons

This section examines some of Python Shell’s limitations.

Outdated Python Versions & Libraries

While the included libraries are welcome, they are also quite outdated. For example, boto3‘s included version is 1.21.21 while the current version is 1.34.150. pandas is at 1.4.2 in the table and 2.2.2 online.

This is likely due to the supported Python versions – currently Python 3.6 and Python 3.9. Now, while Python 3.9 isn’t out of support until October 2025, it was released back in October 2020 and has had three major upgrades since. Worse, Python 3.6 ended life at Christmas 2021!

With the Data Engineer Associate certification drawing attention to various AWS data services, it’s a shame that this feature is so far behind. This would be a great modernisation tool for importing legacy Python scripts into Glue, but the last feature update was in 2022 and it’s really starting to lag behind now.

No Visual Editor

Yes I know it’s a script editor but hear me out.

Let’s briefly segue to AWS IAM. In the early days, updating IAM policies could cost you an afternoon, lost to missing braces or errant commas. There was no native AWS validation tooling, and the whole thing felt like a dark art to the less experienced.

Then AWS released an IAM visual policy editor. And things went from this:

2024 07 30 IAMPolicyJSON

To this!

2024 07 30 IAMPolicyDown

This transformed the IAM policy-writing process. The guesswork was gone – new policies could be written using dropdowns and checkboxes. And AWS would generate the same code each time, in the same way and to the same standard.

In today’s AWS console, IAM can be administered both visually and as JSON. Updates made in the visual editor reflect in the code in real time, and vice versa. And the IAM IDE immediately flags syntax issues, unclosed keypairs and whatnot.

This interface would work so well with Glue Script Editor. It would simplify and encourage using Script Editor, creating standardised code by default and reducing development time. No more syntax violations, verbose comments or missing dependencies – AWS could handle all that.

This doesn’t even need AI – it would just be procedural code generation. Something like selecting awswrangler from a dropdown list, then selecting an S3 location to read or write and a file type to expect. Or even a list of code snippets for the included libraries. These features could all lighten the dev load.

Limited IDE

Let’s consider AWS Lambda’s IDE:

2024 07 30 LambdaIDE

Its benefits include:

  • Code autocompletion
  • Integrated testing
  • Integrated monitoring

And tons of other user-focused functionality. Conversely, this is the Glue Script Editor IDE:

2024 07 30 GlueIDE

Hmm.

Now don’t get me wrong – I’m not asking for Lambda Lite. But something a bit more than Notepad would be nice. AWS are currently making a massive deal of Amazon CodeWhisperer and Amazon Q Developer‘s autocomplete actions, but here pandas isn’t even suggested when I type import pan. And it’s an included library!

The obvious solution is to just use Lambda. But Glue Script Editor offers a sweet spot where it runs custom Python while operating entirely within the AWS Glue service. This is helpful for features like Glue Triggers and Workflows that can’t currently trigger Lambda functions. It’s also helpful with AWS Organizations, where using Glue Script Editor for Python ETL can enable SCPs that entirely block access to AWS Lambda for data-centric accounts.

So Why Use It?

So are Glue Python Shell jobs worth considering despite these limitations? Definitely! There are several use cases favouring them:

  • Legacy ETL jobs that either can’t use recent Python versions and libraries, or simply don’t need them.
  • Simple, lightweight tasks that don’t require the more advanced (and expensive) features of Apache Spark or Ray.
  • Tasks that need to run quickly, as Python Shells have faster startup times than the Spark environments used by regular Glue ETL jobs.
  • Long-running ETL tasks unsuitable for AWS Lambda, as Python Shell jobs can run for up to 48 hours compared to Lambda’s 15 minutes. Thanks to Yan Cui‘s blog for that one!

For my requirements, a Python Shell job makes sense because I’m doing simple transformations on small volumes of data.

Architecture

This section examines the architecture of my proposed solution. Much of this architecture is similar to the Bronze layer. I’ll examine the new Silver ETL job, followed by the updated data pipeline Step Function workflow.

Glue Silver ETL Job

Firstly, this is the Glue Silver ETL job:

[Architecture diagram: the AWS Glue Silver ETL Job reads from the Amazon S3 Bronze Bucket (1) and writes to the Amazon S3 Silver Bucket (2), logging to Amazon CloudWatch Logs throughout.]

While updating CloudWatch Logs throughout:

  1. Silver Glue ETL job extracts data from wordpress-api Bronze S3 objects and performs Python transformations.
  2. Silver Glue ETL job loads the transformed data into Silver S3 bucket as Parquet objects.

Step Function Workflow

Next, the updated Step Function workflow:

[Architecture diagram: an EventBridge Schedule starts the Step Functions state machine, which invokes the Raw Lambda function, the Bronze Lambda function, the Glue Bronze Crawler and the Glue Silver ETL Job in sequence. Each step logs to CloudWatch Logs, and any failure publishes to an SNS topic that notifies the user.]

While updating the workflow’s CloudWatch Log Group throughout:

  1. An EventBridge Schedule executes the Step Functions workflow.
  2. Raw Lambda function is invoked.
    • Invocation Fails: Publish SNS message. Workflow ends.
    • Invocation Succeeds: Invoke Bronze Lambda function.
  3. Bronze Lambda function is invoked.
    • Invocation Fails: Publish SNS message. Workflow ends.
    • Invocation Succeeds: Run Glue Crawler.
  4. Glue Crawler runs.
    • Run Fails: Publish SNS message. Workflow ends.
    • Run Succeeds: Update Glue Data Catalog. Run Glue Silver ETL job.
  5. Glue Silver ETL job runs.
    • Run Fails: Publish SNS message. Workflow ends.
    • Run Succeeds: Workflow ends.

An SNS message is published if the Step Functions workflow fails.

Silver ETL Job

In this section, I create the Silver ETL Python script for the AWS Glue Script Editor. Firstly I’ll define the script’s requirements. Next, I’ll translate them into Python code, and finally I’ll create the ETL script and upload it to Git.

Requirements

Firstly, let’s define the requirements for this data pipeline layer. So what does a typical Silver ETL process involve?

Databricks defines the Silver layer as cleansed and conformed data:

In the Silver layer of the lakehouse, the data from the Bronze layer is matched, merged, conformed and cleansed (“just-enough”) so that the Silver layer can provide an “Enterprise view” of all its key business entities, concepts and transactions. (e.g. master customers, stores, non-duplicated transactions and cross-reference tables).

https://www.databricks.com/glossary/medallion-architecture

Because my data source is a WordPress MySQL database, most of the cleansing and conforming work I’d expect to do has already been done there! That said, there’s data that I definitely won’t need, as well as other transformations I can apply to help downstream reporting.

Some of the following transformations can be done at the SQL reporting level with date and string functions. However, these add repetitive load and complexity to queries, which can be avoided by some cleaning transformations. Roche’s Maxim of Data Transformation applies here:

Data should be transformed as far upstream as possible, and as far downstream as necessary.

https://ssbipolar.com/2021/05/31/roches-maxim/

The Silver layer transformations I’m doing here are:

Column Removal

Many columns are empty or unneeded, so now is the time to remove them. This will reduce the data held in the Silver objects, making them cheaper to store and faster to query.

My script uses the pandas.DataFrame.drop function to remove columns by specifying column names. Here, a term_order column is dropped from the DataFrame df:

Python
df = df.drop(columns=['term_order'])

Date Splitting

Dates are tough to analyse and don’t aggregate well, as each date is effectively three different data points in one field. Splitting dates into years, months and days improves data bucketing, query granularity and time series analytics.

My script uses the pandas to_datetime function to convert scalar, array-like, Series or DataFrame/dict-like objects to pandas datetime objects.

Here, values in the date column of the DataFrame df are converted from strings to datetime objects and stored in a new date_todate column. Next, the year attribute of each date_todate column object is extracted and stored in a new date_year column. Finally, the same happens for month and day attributes:

Python
df['date_todate'] = pd.to_datetime(df['date'])

df['date_year'] = df['date_todate'].dt.year
df['date_month'] = df['date_todate'].dt.month
df['date_day'] = df['date_todate'].dt.day

String Editing

Some columns use HTML character entity names for reserved characters. For example, &amp; in place of &. This is great for rendering HTML but not great for analytics.

My script uses the pandas str.replace method to return a copy of each string with all occurrences of the specified substring replaced by a new one. Here, all instances of &amp; in the name column are overwritten with &:

Python
df['name'] = df['name'].str.replace('&amp;', '&')

So that’s the transformations. What else is the script doing?

Python Script

Most of the Silver script processes are similar to the Bronze script ones, including:

  • Logging
  • Getting parameters
  • Accessing S3 objects

So most functionality is reused from my Bronze Lambda function, which is fully documented in this post. To summarise the imports:

Python
import logging                          # Logging
import boto3                            # AWS Interactions
import botocore                         # AWS Exceptions
import awswrangler as wr                # S3 Interactions
import pandas as pd                     # Data Manipulation
from botocore.client import BaseClient  # AWS Type Hints

Some changes have been made for the Silver script:

  • Parameters, object names and logs have been updated from Bronze to Silver:
Python
parametername_snstopic: str = '/sns/data/lakehouse/silver'

logging.info("Getting S3 Silver parameter...")

s3_bucket_silver = get_parameter_from_ssm(client_ssm, parametername_s3bucket_silver)
  • New functionality identifies the AWS AccountID the script is running in:
Python
# Get & display AWS AccountID
identity = client_sts.get_caller_identity()
account_id = identity['Account']
logging.info(f"Starting in AWS Account ID {account_id}")

This is more of a sanity check for me – I have several AWS accounts and want to check I’ve accessed the right one!

  • A test that stops the current loop iteration if the object name doesn’t match one of the expected ones:
Python
# Check if object is mapped and bypass if not.
if object_name not in {'posts', 'statistics_pages', 'term_relationship', 'term_taxonomy', 'terms'}:
    logging.warning(f'{object_name} is not currently mapped. Skipping transform...')
    object_count_failure += 1
    continue

Finally, I wrote a new function for my Silver transformation logic. This isn’t included here (although it is in my repo) because it’s long. Very long! My first thought was to decouple the ETL processes from each other and write separate scripts for each object. So 5 in total.

However, Python Shell jobs are billed per second with a 1-minute minimum. So 5 jobs = 5 minutes billed. But the job only takes around 60 seconds to process all five objects! I’d have run up 5 times the usage and 5 times the cost for no real benefit.

The full script is in my GitHub repo.

Testing was quick because it was effectively repeating the Bronze script tests with new parameters. After successfully testing the script locally, it’s time to get it working in AWS!

Uploading & Testing

In this section I upload my Silver ETL script, integrate it with AWS Glue Script Editor and AWS Step Functions and test everything works as expected.

Creating The Python Shell Job

Firstly, let’s get my script into AWS Glue. There are several ways of doing this. If the script is uploaded to S3, a Glue ETL job can be created with the AWS CLI create-job command:

Bash

aws glue create-job --name python-job-cli --role Glue_DefaultRole \
    --command '{"Name": "pythonshell", "PythonVersion": "3.9", "ScriptLocation": "s3://DOC-EXAMPLE-BUCKET/scriptname.py"}' \
    --max-capacity 0.0625

And with the AWS CloudFormation AWS::Glue::Job resource:

YAML
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Python39Job:
    Type: 'AWS::Glue::Job'
    Properties:
      Command:
        Name: pythonshell
        PythonVersion: '3.9'
        ScriptLocation: 's3://DOC-EXAMPLE-BUCKET/scriptname.py'
      MaxRetries: 0
      Name: python-39-job
      Role: RoleName

Scripts can also be pulled from Git repositories. Here I’ll create my Silver ETL job in the Glue Script Editor console. This creates a new Python script in an S3 bucket location of s3://aws-glue-assets-[AWSAccountID]-[Region]/scripts/.

Next, the new job needs an IAM role with appropriate permissions for the AWS services the script interacts with. Other parameters, including maximum DPU, job timeout value and Python version, can also be set. In addition, Glue Data Quality checks are also supported. And, once saved, the Glue job can have a schedule applied.

Testing Job Execution

AWS Glue records data for each job execution and publishes extensive details and logs:

2024 08 09 AWSGlueJobRun

Glue stores details about the job and Python environment, and logs are published and stored in Amazon CloudWatch.

And so begins the testing! Initially, I was getting one of my own Python boto3 exceptions:

ValueError: No SNS topic returned.

Easy to fix. The job’s IAM policy was based on the one my Bronze Lambda function uses, but the Silver ETL script uses different AWS resources, so some of the policy ARNs needed to change. Specifically, the Silver ETL job’s IAM role needs to allow:

  • ssm:GetParameter on the required Parameter Store parameters.
  • sns:Publish on the required SNS topics.
  • s3:GetObject on the data-lakehouse-silver/wordpress_api/* objects.

With these changes, the Silver ETL job runs perfectly and creates new objects in the Silver S3 bucket:

2024 08 09 MonitoringTimeline

With the Glue job running and S3 object creation verified successfully, it’s time to validate the data.

Data Integration & Validation

Validating the data involves two processes:

  • Integrating the data into the Glue Data Catalog.
  • Querying the data with Amazon Athena.

There are several ways to update the Glue Data Catalog, and here I’ll create a new Glue Crawler using a similar setup to my Bronze Crawler. This time the crawler is reading objects from the Silver S3 bucket instead of the Bronze one, and the new Glue Data Catalog tables are prefixed with silver- instead of bronze-.
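For reference, a similar crawler could also be created from the CLI with something like this (the crawler name here is illustrative; the bucket, database and prefix are the ones used in this post):

Bash
aws glue create-crawler \
    --name wordpress-silver-crawler \
    --role Glue_DefaultRole \
    --database-name wordpress_api \
    --table-prefix silver- \
    --targets '{"S3Targets": [{"Path": "s3://data-lakehouse-silver/wordpress_api/"}]}'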

The Silver crawler creates these new tables in the Glue Data Catalog’s wordpress_api database:

2024 08 06 GlueDataCatalog

This gives Athena visibility of the tables, enabling data validation via SQL query execution. Querying wordpress_api.silver-terms shows the removed column and updated strings:

2024 08 06 AthenaSilverTerms

And querying wordpress_api.silver-statistics_pages shows the split dates:

2024 08 06 AthenaSilverStatistics pages

Looks good! Now that everything has been validated, let’s add these steps to the WordPress Data Pipeline.

Step Function Update

The WordPress Data Pipeline Step Function workflow that I started back in March continues to grow. There’s a new job and a second crawler to add to it now!

The Silver crawler is added in the same way as the Bronze one (including the IAM changes) so let’s focus on adding the new Glue Python Shell ETL job.

Adding Glue ETL jobs to a Step Function workflow is well documented. The task uses the StartJobRun Glue API action under the hood and has an optimized integration that enables the .sync integration pattern. Enabling this means the Step Functions workflow waits for the StartJobRun request to complete before progressing to the next state.
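As a minimal sketch, a .sync Glue task state in Amazon States Language looks something like this (the state name is illustrative; the job name is this post’s Silver job):

JSON
"StartGlueJob": {
  "Type": "Task",
  "Resource": "arn:aws:states:::glue:startJobRun.sync",
  "Parameters": {
    "JobName": "wordpressapi_silver"
  },
  "End": true
}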

However, my workflow currently lacks IAM permissions to run the Silver Glue ETL job. So I make a new IAM policy that allows the glue:StartJobRun action on the Silver Glue ETL job and attach it to the workflow’s IAM role:

JSON
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": [
				"glue:StartJobRun"
			],
			"Resource": "arn:aws:glue:eu-west-1:[REDACTED]:job/wordpressapi_silver"
		}
	]
}

My Step Function workflow now looks like this:

2024 08 09 stepfunctions graph

Let’s execute the Step Function workflow and check it works.

Step Function Test

Upon execution, everything works as intended. The new StartGlueJob action is triggered and the Glue ETL job is successful:

2024 08 06 GlueJobDetails

But the Step Function doesn’t transition to the next step. In fact it continued running to the point I had to stop it myself after several minutes:

2024 08 06 StepFunctionsStop

So what’s going on? I asked Amazon Q about this behaviour, and in its response were the following points:

  1. Step Functions uses a “sync” integration with AWS Glue, which means it relies on polling the status of the Glue job using the GetJobRun API call.
  2. The polling schedule is designed to be once per minute for the first 10 minutes, and then every 5 minutes thereafter. This is to avoid excessive API calls to Glue.
Amazon Q

Q also linked to this AWS repost answer with further details:

This is an expected behavior in case of .sync integration with AWS Glue. Service integrations that use the .sync pattern require additional IAM permissions where Step Functions will make use of a managed Eventbridge rule to monitor the status of the job. However, AWS Glue does not support Eventbridge integration and thus, Step Functions polls the job status using the GetJobRun API call to fetch the status of the job.

https://repost.aws/questions/QUFFlHcbvIQFe-bS3RAi7TWA/a-glue-job-in-a-step-function-is-taking-so-long-to-continue-the-next-step

This made things clearer. When Step Functions starts a Glue ETL job using a StartGlueJob action with optimized integration, Step Functions determines that job’s status (and thus when to transition to the next action) by calling Glue’s GetJobRun API.

However, my workflow’s IAM role doesn’t have permission to do that! And because Step Functions can’t determine the ETL job’s status, it doesn’t know that the job has finished and the next state transition never happens! Everything stops!

This is resolved by adding the glue:GetJobRun action to the workflow’s IAM policy:

JSON
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": [
				"glue:StartJobRun",
				"glue:GetJobRun"
			],
			"Resource": "arn:aws:glue:eu-west-1:REDACTED:job/wordpressapi_silver"
		}
	]
}

This time, the Glue GetJobRun API calls are successful. The Step Functions workflow validates that the ETL job has finished, moves to the next state as intended and ultimately completes successfully:

2024 08 06 ExecutionSuccessFull

Thanks Amazon Q!

Costs

Finally, let’s look at the costs for my Glue Script Editor Silver ETL Job resources.

This graph shows all Glue API costs between 2024-07-31 (first AWS job execution) and 2024-08-09:

2024 08 09 CostExplorerGlue

Of the $0.38:

  • $0.37 is the CrawlerRun API for the two Glue Crawlers I’m running.
  • $0.01 is the Jobrun API for the 15 job runs between 2024-07-31 and 2024-08-09.

So all things considered, very manageable!

Summary

In this post, I created my WordPress data pipeline’s Silver ETL process using Python and the AWS Glue ETL Job Script Editor.

I found the Script Editor jobs very useful. They offer Lambda’s benefits of scalability, managed infrastructure and integration with other AWS services, combined with data-centric libraries and features that make it easier to hit the ground running development-wise. They have clear limitations and could do with some AWS TLC, but they were a good fit here and rival Lambda for some future ETL processes I have planned.

If this post has been useful then the button below has links for contact, socials, projects and sessions:

SharkLinkButton 1

Thanks for reading ~~^~~

Categories
Developing & Application Integration

WordPress Bronze Data Orchestration With AWS

In this post, I create my WordPress pipeline’s bronze data orchestration process using AWS Lambda layers and AWS Step Functions.

Table of Contents

Introduction

In recent posts, I’ve written a Python script to extract WordPress API data and automated the script’s invocation with AWS services. This script creates five JSON objects in an S3 bucket at 07:00 each morning.

Now, I want to transform the data from semi-structured raw JSON into a more structured and query-friendly ‘bronze’ format to prepare it for downstream partitioning, cleansing and filtration.

Firstly, I’ll cover the additions and changes to my pipeline architecture. Next, I’ll examine both my new bronze Python function and the changes made to the existing raw function.

Finally, I’ll deploy the bronze script to AWS Lambda and create my WordPress pipeline orchestration process with AWS Step Functions. This process will ensure both Lambdas run in a set order each day.

Let’s start by examining my latest architectural decisions.

Architectural Decisions

In this section, I examine my architectural decisions for the bronze AWS Lambda function and the WordPress pipeline orchestration. Note that these decisions are in addition to my previous ones here and here.

AWS SDK For pandas

AWS SDK For pandas is an open-source Python initiative using the pandas library. It integrates with AWS services including Athena, Glue, Redshift, DynamoDB and S3, offering abstracted functions to execute various data processes.

AWS SDK For pandas used to be called awswrangler until AWS renamed it for clarity. It now exists as AWS SDK For pandas in documentation and awswrangler in code.

AWS Lambda Layers

A Lambda layer is an archive containing code like libraries, dependencies, or custom runtimes. Layers can be created manually or provided by AWS and third parties. Each Lambda function can include up to five layers.

Layers can be shared between functions, reducing code duplication and package sizes. This reduces storage costs and lets the smaller packages deploy markedly faster. Layers also separate dependencies from function code, supporting decoupling and separation of concerns.

AWS Step Functions

AWS Step Functions is a serverless orchestration service that integrates with other AWS services to build application workflows as a series of event-driven steps. For example, chaining Athena queries and ML model training.

Central to the Step Functions service are the concepts of States and State Machines:

  • States represent single steps or tasks in a workflow, and can be one of several types. The Step Functions Developer Guide has a full list of states.
  • State Machines are the workflows themselves: collections of states and the transitions between them, defined using the Amazon States Language.

The AWS Step Functions Developer Guide’s welcome page has more details including workflow types, use cases and a variety of sample projects.

Apache Parquet

Onto the data architecture! Let’s start by choosing a structured file type for the bronze data:

Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.

Databricks: What is Parquet?

There’s a more detailed explanation in the Parquet documentation too. So why choose Parquet over something like CSV? Well:

  • Size: Parquet supports highly efficient compression, so files take up less space and are cheaper to store than CSVs.
  • Performance: Parquet files store metadata about the data they hold. Query engines can use this metadata to find only the data needed, whereas with CSVs the whole file must be read first. This reduces the amount of processed data and enhances query performance.
  • Compatibility: Parquet is an open standard supported by various data processing frameworks including Apache Spark, Apache Hive and Presto. This means that data stored in Parquet format can be read and processed across many platforms and services.
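As a quick, hedged illustration of the size difference (a minimal sketch – it assumes pyarrow is installed for Parquet support):

Python
import os
import pandas as pd

# Write the same DataFrame as both CSV and Parquet and compare file sizes
df = pd.DataFrame({"id": range(100_000), "category": ["posts", "terms"] * 50_000})

df.to_csv("sample.csv", index=False)
df.to_parquet("sample.parquet", index=False)  # Parquet compresses repeated values well

print(f"CSV: {os.path.getsize('sample.csv')} bytes")
print(f"Parquet: {os.path.getsize('sample.parquet')} bytes")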

Data Lakehouse

A Data Lakehouse is an emerging data architecture combining the centralized storage of raw data synonymous with Data Lakes with the transactional and analytical processing associated with Data Warehouses.

The result is a unified platform for efficient data management, analytics, and insights. Lakehouses have gained popularity as cloud services increasingly support them, with AWS, Azure and GCP all providing Lakehouse services.

This segues neatly into…

Medallion Architecture

Medallion Architecture is a data design pattern for logically organizing data in a Lakehouse. It aims to improve data quality as it flows through various layers incrementally. Names for these layers vary, tending to be Bronze, Silver, and Gold.

Implementations of the Medallion Architecture also vary. I like this Advancing Analytics video, which maps the Medallion Architecture to their approach. Despite the title it’s not a negative video, instead outlining how the three layers don’t necessarily fit every use case.

I’m using Raw and Bronze layers here because they best fit what I’m doing with my data.

Architectural Updates

In this section, I examine the changes made to my existing architecture.

Amazon S3

I’ve created a new data-lakehouse-bronze S3 bucket in the same region as the data-lakehouse-raw bucket to separate the two data layers.

Why use two buckets instead of one bucket with two prefixes? Well, after much research I’ve not found a right or wrong answer for this. There’s no difference in cost, performance or availability as long as all objects are stored in the same AWS region.

I chose two buckets because I find it easier to manage multiple buckets with flat structures and small bucket policies, as opposed to single buckets with deep structures and large bucket policies.

The truest answer is ‘it depends’, as other factors can come into play like:

  • Data Sovereignty: S3 bucket prefixes exist in the same region as the parent bucket. Regulations like GDPR and CCPA may require using separate buckets in order to isolate data within designated locations.

AWS SNS

I previously had two standard SNS Topics:

  • wordpress-api-raw for Lambda function alerts
  • failure-lambda for Lambda Destination alerts.

Firstly, there’s now an additional failure-stepfunction topic for any state machine failures.

Secondly, I’ve replaced my wordpress-api-raw topic with a data-lakehouse-raw topic to simplify my alerting channels and allow resource reuse. I’ve also created a new data-lakehouse-bronze topic for bronze process alerts.

Why two data topics? Well, different teams and services care about different things. A bronze-level failure may only concern the Data Engineering team as no other teams consume the data. Conversely, a gold-level failure will concern the AI and MI teams as it impacts their models and reports. Having separate SNS topics for each layer type enables granular monitoring controls.

AWS Parameter Store

Finally, Parameter Store needs the new S3 bucket name and SNS ARNs. I’ve replaced the /sns/pipeline/wordpressapi/raw parameter with /sns/data/lakehouse/raw, keeping the parameter names aligned with the new SNS topic names.

I’m now storing five parameters:

  • 2x S3 Bucket names (Raw and Bronze)
  • 2x SNS Topic ARNs (Raw and Bronze notifications)
  • WordPress API Endpoints (unchanged)
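For reference, each parameter can be created or updated with a CLI call like this one for the bronze SNS topic (the ARN pattern matches this post’s resources):

Bash
aws ssm put-parameter \
    --name /sns/data/lakehouse/bronze \
    --type String \
    --value arn:aws:sns:eu-west-1:REDACTED:data-lakehouse-bronze \
    --overwrite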

Architectural Diagram

There are two diagrams this time! Firstly, here is the data_wordpressapi_bronze AWS Lambda function:

Where:

  1. AWS Lambda calls Parameter Store for S3 and SNS parameters. Parameter Store returns these to AWS Lambda.
  2. Lambda function gets raw WordPress JSON data from S3 Raw Bucket.
  3. Lambda function transforms the raw WordPress JSON data to bronze WordPress Parquet data and puts the new object in the S3 Bronze Bucket.

Meanwhile, Lambda is writing to a CloudWatch Log Group throughout its invocation. If there’s a failure, the Lambda function publishes a message to an SNS topic. SNS then delivers this message to the user’s subscribed email address.

Next, this is the AWS Step Functions WordPress bronze orchestration process:

Where:

  1. EventBridge Schedule invokes the State Machine.
  2. State Machine invokes the Raw Lambda function.
  3. State Machine invokes the Bronze Lambda function.

The State Machine also has its own logging and alerting channels.

Python

In this section, I work on my raw and bronze Python scripts for the WordPress pipeline orchestration process.

Raw Script Updates

I try to update my existing resources when I find something pertinent online. My latest find was this Indently video that covers, amongst other things, type annotations:

So how are type annotations different from type hints? Type annotations were released in 2006 and aimed to standardize function parameters and return value annotation. Type hints (released in 2014) then added updated definitions and conventions to enrich type annotations further.

The type hints PEP shows this difference between the two:

When used in a type hint, the expression None is considered equivalent to type(None)

https://peps.python.org/pep-0484/#using-none

So in this function:

Python
def send_email(name: str, message: str) -> None:
  • name: str is an example of type annotation because the parameter name is of type string.
  • -> None is an example of a type hint because although None isn’t a type, it confirms that the function has no output.

So what’s changed in my raw script?

Updated Import & Functions

Let’s open with a new import:

Python
from botocore.client import BaseClient

BaseClient serves as a foundational base class for AWS service clients within botocore – a low-level library providing the core functionality of boto3 (the AWS Python SDK) and the AWS CLI.

I’m using it here to add type annotations to my boto3 clients. For example, send_sns_message already had these annotations:

Python
def send_sns_message(sns_client, topic_arn: str, subject:str, message: str):

I’ve now annotated sns_client with BaseClient to indicate its boto3 relation. I’ve also added a -> None type hint to confirm the function has no output:

Python
def send_sns_message(sns_client: BaseClient, topic_arn: str, subject:str, message: str) -> None:

Elsewhere, I’ve added the BaseClient annotation to get_parameter_from_ssm‘s ssm_client parameter:

Python
def get_parameter_from_ssm(ssm_client: BaseClient, parameter_name: str) -> str:

And put_s3_object‘s s3_client parameter:

Python
def put_s3_object(s3_client: BaseClient, bucket: str, prefix:str, name: str, json_data: str, suffix: str) -> bool:

put_s3_object also has new prefix and suffix parameters. Before this, it was hard-coded to create JSON objects in a wordpress-api S3 prefix:

Python
    try:
        logging.info(f"Attempting to put {name} data in {bucket} bucket...")
        s3_client.put_object(
            Body = json_data,
            Bucket = bucket,
            Key = f"wordpress-api/{name}.json"
        )

Not any more! The S3 prefix and object suffix can now be changed dynamically:

Python
    try:
        logging.info(f"Attempting to put {name} data in {bucket} bucket's {prefix}/{name} prefix...")
        s3_client.put_object(
            Body = json_data,
            Bucket = bucket,
            Key = f"{prefix}/{name}/{name}.{suffix}"
        )

This improves put_s3_object‘s reusability as I can now pass any prefix and suffix to it during a function call. For example, this call creates a JSON object:

Python
ok = put_s3_object(client_s3, s3_bucket, data_source, object_name, api_json_string, 'json')

While this creates a CSV object:

Python
ok = put_s3_object(client_s3, s3_bucket, data_source, object_name, api_json_string, 'csv')

Likewise, this creates a TXT object:

Python
ok = put_s3_object(client_s3, s3_bucket, data_source, object_name, api_json_string, 'txt')

I can also set data_source (which I’ll cover shortly) to any S3 prefix, giving total control over where the object is stored.

Updated Variables

Next, some of my variables need to change. My SNS parameter name needs updating from:

Python
# AWS Parameter Store Names
parametername_s3bucket = '/s3/lakehouse/name/raw'
parametername_snstopic = '/sns/pipeline/wordpressapi/raw'
parametername_wordpressapi = '/wordpress/amazonwebshark/api/mysqlendpoints'

To:

Python
# AWS Parameter Store Names
parametername_s3bucket = '/s3/lakehouse/name/raw'
parametername_snstopic = '/sns/data/lakehouse/raw'
parametername_wordpressapi = '/wordpress/amazonwebshark/api/mysqlendpoints'

I also need to lay the groundwork for put_s3_object‘s new prefix parameter. I used to have a lambdaname variable that was used in the logs:

Python
# Lambda name for messages
lambdaname = 'data_wordpressapi_raw'

I’ve replaced this with two new variables. data_source records the data’s origin, which matches my S3 prefix naming schema. function_name then adds context to data_source to match my Lambda function naming schema:

Python
# Lambda name for messages
data_source = 'wordpress_api'
function_name = f'data_{data_source}_raw'

data_source is then passed to the put_s3_object function call when creating raw objects:

Python
ok = put_s3_object(client_s3, s3_bucket, data_source, object_name, api_json_string)

While function_name is used in the logs when referring to the Lambda function:

Python
    # Check an S3 bucket has been returned.
    if not s3_bucket_raw:
        message = f"{function_name}: No S3 Raw bucket returned."
        subject = f"{function_name}: Failed"

Updated Script Body

My variables all now have type annotations. They’ve gone from:

Python
    # AWS Parameter Store Names
    parametername_s3bucket = '/s3/lakehouse/name/raw'
    parametername_snstopic = '/sns/data/lakehouse/raw'
    parametername_wordpressapi = '/wordpress/amazonwebshark/api/mysqlendpoints'

    # Lambda name for messages
    data_source = 'wordpress_api'
    function_name = f'data_{data_source}_raw'

    # Counters
    api_call_timeout = 30
    endpoint_count_all = 0
    endpoint_count_failure = 0
    endpoint_count_success = 0

To:

Python
    # AWS Parameter Store Names
    parametername_s3bucket: str = '/s3/lakehouse/name/raw'
    parametername_snstopic: str = '/sns/data/lakehouse/raw'
    parametername_wordpressapi: str = '/wordpress/amazonwebshark/api/mysqlendpoints'

    # Lambda name for messages
    data_source: str = 'wordpress_api'
    function_name: str = f'data_{data_source}_raw'

    # Counters
    api_call_timeout: int = 30
    endpoint_count_all: int = 0
    endpoint_count_failure: int = 0
    endpoint_count_success: int = 0

This is helpful when values are passed in from settings files or external services and their types aren’t immediately apparent. So a good habit to get into!

Bronze Script

Now let’s talk about the new script, which transforms raw S3 JSON objects into bronze S3 Parquet objects. Both raw and bronze WordPress scripts will then feed into an AWS orchestration workflow.

Reused Raw Functions

The following functions are re-used from the Raw script with no changes:

Get Filename Function

Here, I want to get each S3 path’s object name. The object name has some important uses:

  • Using it instead of the full S3 path makes the logs easier to read and cheaper to store.
  • Using it during bronze S3 object creation ensures consistent naming.

A typical S3 path has the schema s3://bucket/prefix/object.suffix, from which I want object.

This function is a remake of the raw script’s Get Filename function. This time, the source string is an S3 path instead of an API endpoint:

I define a get_objectname_from_s3_path function, which expects a path argument with a string type hint and returns a new string.

Firstly, my name_full variable uses the rsplit method to capture the substring I need, using forward slashes as separators. This converts s3://bucket/prefix/object.suffix to object.suffix.

Next, my name_full_last_period_index variable uses the rfind method to find the last occurrence of the period character in the name_full string.

Finally, my name_partial variable uses slicing to extract a substring from the beginning of the name_full string up to (but not including) the index specified by name_full_last_period_index. This converts object.suffix to object.

If the function cannot return a string, an exception is logged and a blank string is returned instead.
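Putting that together, a minimal sketch of the function (reconstructed from the description above rather than copied from the repo, and assuming the script’s existing logging import):

Python
def get_objectname_from_s3_path(path: str) -> str:
    """Converts s3://bucket/prefix/object.suffix to object."""
    try:
        # Keep the substring after the final forward slash: object.suffix
        name_full = path.rsplit('/', 1)[-1]

        # Find the index of the last period in object.suffix
        name_full_last_period_index = name_full.rfind('.')

        # Slice up to (but not including) that index: object
        name_partial = name_full[:name_full_last_period_index]
        return name_partial

    except Exception as error:
        logging.error(f"Unable to get object name from {path}: {error}")
        return ''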

Get Data Function

Next, I want to read data from an S3 JSON object in my Raw bucket and store it in a pandas DataFrame.

Here, I define a get_data_from_s3_object function that returns a pandas DataFrame and expects three arguments:

  • boto3_session: the authenticated session to use with a BaseClient type hint.
  • s3_object: the S3 object path with a string type hint.
  • name: the S3 object name with a string type hint (used for logging).

This function uses AWS SDK For pandas s3.read_json to read the data from the S3 object path using the existing boto3_session authentication.

If data is found then get_data_from_s3_object returns a populated DataFrame. Otherwise, an empty DataFrame is returned instead.
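A minimal sketch of this function, based on the description above (the repo holds the full version):

Python
def get_data_from_s3_object(boto3_session: BaseClient, s3_object: str, name: str) -> pd.DataFrame:
    """Reads a raw JSON S3 object into a pandas DataFrame."""
    try:
        logging.info(f"Getting {name} data...")
        return wr.s3.read_json(path=s3_object, boto3_session=boto3_session)

    except Exception as error:
        logging.error(f"Unable to read {name}: {error}")
        return pd.DataFrame()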

Put Data Function

Finally, I want to convert the DataFrame to Parquet and store it in my bronze S3 bucket.

I define a put_s3_parquet_object function that expects four arguments:

  • df: the pandas DataFrame containing the raw data.
  • name: the S3 object name.
  • s3_object_bronze: the S3 path for the new bronze object
  • session: the authenticated boto3 session to use.

I give string type hints to the name and s3_object_bronze parameters. session gets the same BaseClient hint as before, and df is identified as a pandas DataFrame.

I open a try except block that uses s3.to_parquet with the existing boto3_session to upload the DataFrame data to S3 as a Parquet object. If this operation succeeds, the function returns True. If it fails, a botocore exception is logged and the function returns False.
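Again, as a sketch of the described logic:

Python
def put_s3_parquet_object(df: pd.DataFrame, name: str, s3_object_bronze: str, session: BaseClient) -> bool:
    """Writes a pandas DataFrame to S3 as a Parquet object."""
    try:
        logging.info(f"Attempting to put {name} parquet object in S3...")
        wr.s3.to_parquet(df=df, path=s3_object_bronze, boto3_session=session)
        return True

    except botocore.exceptions.ClientError as error:
        logging.error(f"Unable to upload {name}: {error}")
        return False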

Imports & Variables

The bronze script has two new imports to examine: awswrangler and pandas:

Python
import logging
import boto3
import botocore
import awswrangler as wr
import pandas as pd
from botocore.client import BaseClient

I’ve used both before. Here, pandas handles my in-memory data storage and awswrangler handles my S3 interactions.

There are also parameter changes. I’ve added Parameter Store names for both the bronze S3 bucket and the SNS topic. I’ve kept the raw S3 bucket parameter as awswrangler needs it for the get_data_from_s3_object function.

Python
parametername_s3bucket_raw: str = '/s3/lakehouse/name/raw'
parametername_s3bucket_bronze: str = '/s3/lakehouse/name/bronze'
parametername_snstopic: str = '/sns/data/lakehouse/bronze'

I’ve also swapped out _raw for _bronze in function_name, and renamed the counters from endpoint_count to object_count to reflect their new function:

Python
    # Lambda name for messages
    data_source: str = 'wordpress_api'
    function_name: str = f'data_{data_source}_bronze'

    # Counters
    object_count_all: int = 0
    object_count_failure: int = 0
    object_count_success: int = 0

Script Body

Most of the bronze script is reused from the raw script. Tasks like logging config, name parsing and validation checks only needed the updated parameters! There are some changes though, as S3 is now my data source and I’m also doing additional tasks.

Firstly, I need to get the raw S3 objects. The AWS SDK For pandas S3 class has a list_objects function which is purpose-built for this:

Python
s3_objects_raw = wr.s3.list_objects(
      path = f's3://{s3_bucket_raw}/{data_source}',
      suffix = 'json',
      boto3_session = session)
  • path is the S3 location to list – in this case the raw S3 bucket’s wordpress_api prefix.
  • suffix filters the list by the specified suffix.
  • boto3_session specifies my existing boto3_session to prevent unnecessary re-authentication.

During the loop, my script checks if the pandas DataFrame returned from get_data_from_s3_object contains data. If it’s empty then the loop ends, otherwise the column and row counts are logged:

Python
if df.empty:
    logging.warning(f"{object_name} DataFrame is empty!")
    object_count_failure += 1
    continue

logging.info(f'{object_name} DataFrame has {len(df.columns)} columns and {len(df)} rows.')

Assuming all checks succeed, I want to put a new Parquet object into my bronze S3 bucket. AWS SDK For pandas has an s3.to_parquet function that does this using a pandas DataFrame and an S3 path.

I already have the DataFrame, so let’s make the path. This is done with the s3_object_bronze variable, which joins existing parameters with additional characters. It is then passed to put_s3_parquet_object:

Python
s3_object_bronze = f's3://{s3_bucket_bronze}/{data_source}/{object_name}/{object_name}.parquet'

logging.info(f"Attempting {object_name} S3 Bronze upload...")
ok = put_s3_parquet_object(df, object_name, s3_object_bronze, session)

That’s the bronze script done. Now to deploy it to AWS Lambda.

Lambda

In this section, I configure and deploy my Bronze Lambda function.

Hitting Size Limits

So, remember when I said that I expected my future Lambda deployments to improve? Well, this was the result of retrying the virtual environment deployment process that the Raw Lambda used:

2024 03 08 LambdaError

While my zipped raw function is 19.1 MB, my zipped bronze function is over five times bigger at 101.6 MB! My poorly optimised package wouldn’t cut it this time, so I prepared for some pruning. Until I discovered something…

Using A Layer

There’s a managed AWS SDK for pandas Lambda layer!

2024 03 05 LayerAWSSDKPandas

It can be selected in the Lambda console or programmatically called from this list of AWS SDK for pandas Managed Layer ARNs, which covers:

  • All AWS commercial regions.
  • All Python versions supported by Lambda (currently 3.8+).
  • Both Lambda architectures.

Additionally, the Lambda Python 3.12 runtime includes boto3 and botocore. So by using this runtime and the managed layer, I’ve gone from a large deployment package to no deployment package! And because my function is now basically just code, I can view and edit that code in the Lambda console directly.
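As an illustration, attaching the managed layer from the CLI looks something like this (the layer version is a placeholder – current ARNs are in the list linked above; note that --layers replaces the function’s whole layer list):

Bash
aws lambda update-function-configuration \
    --function-name data_wordpressapi_bronze \
    --layers arn:aws:lambda:eu-west-1:336392948345:layer:AWSSDKPandas-Python312:VERSION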

Lambda Config

My Bronze Lambda function borrows several config settings from the raw one. Where it differs is the IAM setup. I needed additional permissions anyway because this function now reads from two S3 buckets, but by the time I was done the policy was hard to read, maintain and troubleshoot:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "sns:Publish",
                "s3:ListBucket",
                "logs:CreateLogGroup"
            ],
            "Resource": [
                "arn:aws:s3:::data-lakehouse-raw/wordpress_api/*",
                "arn:aws:s3:::data-lakehouse-bronze/wordpress_api/*",
                "arn:aws:s3:::data-lakehouse-raw",
                "arn:aws:s3:::data-lakehouse-bronze",
                "arn:aws:logs:eu-west-1:REDACTED:*",
                "arn:aws:sns:eu-west-1:REDACTED:data-lakehouse-raw",
                "arn:aws:sns:eu-west-1:REDACTED:data-lakehouse-bronze"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ssm:GetParameter"
            ],
            "Resource": [
                "arn:aws:logs:eu-west-1:REDACTED:log-group:/aws/lambda/data_wordpressapi_bronze:*",
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/s3/lakehouse/name/raw",
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/s3/lakehouse/name/bronze",
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/sns/data/lakehouse/raw",
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/sns/data/lakehouse/bronze"
            ]
        }
    ]
}

So let’s refactor it! The below policy has the same actions, grouped by service and with appropriately named Sids:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchLogGroupActions",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup"
            ],
            "Resource": [
                "arn:aws:logs:eu-west-1:REDACTED:*"
            ]
        },
        {
            "Sid": "CloudWatchLogStreamActions",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:eu-west-1:REDACTED:log-group:/aws/lambda/data_wordpressapi_bronze:*"
            ]
        },
        {
            "Sid": "S3BucketActions",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::data-lakehouse-raw",
                "arn:aws:s3:::data-lakehouse-bronze"
            ]
        },
        {
            "Sid": "S3ObjectActions",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::data-lakehouse-raw/wordpress_api/*",
                "arn:aws:s3:::data-lakehouse-bronze/wordpress_api/*"
            ]
        },
        {
            "Sid": "SNSActions",
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:eu-west-1:REDACTED:data-lakehouse-bronze"
            ]
        },
        {
            "Sid": "ParameterStoreActions",
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": [
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/s3/lakehouse/name/raw",
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/s3/lakehouse/name/bronze",
                "arn:aws:ssm:eu-west-1:REDACTED:parameter/sns/data/lakehouse/bronze"
            ]
        }
    ]
}

Much better! This policy is now far easier to read and update.

There’s also a clear distinction between the bucket-level s3:ListBucket operation and the object-level s3:PutObject and s3:GetObject operations now. Getting these wrong can have big consequences, so the clearer the better!

One deployment and test later, and I have some new S3 objects!

[INFO]: WordPress API Bronze process complete: 5 Successful | 0 Failed.

REPORT RequestId: 899d1658-f7de-4e74-8d64-b4f029fe2bec	Duration: 7108.50 ms	Billed Duration: 7109 ms	Memory Size: 250 MB	Max Memory Used: 250 MB	Init Duration: 4747.38 ms

So now I have two Lambda functions with some requirements around them:

  • They need to run sequentially.
  • The Raw Lambda must finish before the Bronze Lambda starts.
  • If the Raw Lambda fails then the Bronze Lambda shouldn’t run at all.

Now that AWS Lambda is creating WordPress raw and bronze objects, it’s time to start thinking about orchestration!

Step Functions & EventBridge

In this section, I create both an AWS Step Functions State Machine and an Amazon EventBridge Schedule for my WordPress bronze orchestration process.

State Machine Requirements

Before writing any code, let’s outline the steps I need the state machine to perform:

  1. data_wordpressapi_raw Lambda function is invoked. If it succeeds then move to the next step. If it fails then send a notification and end the workflow reporting failure.
  2. data_wordpressapi_bronze Lambda function is invoked. If it succeeds then end the workflow reporting success. If it fails then send a notification and end the workflow reporting failure.

With the states defined, it’s time to create the state machine.

State Machine Creation

The following state machine was created using Step Functions Workflow Studio – a low-code visual designer released in 2021, with drag-and-drop functionality that auto-generates code in real-time:

Workflow Studio produced this section’s code and diagrams.

Firstly I create a data_wordpressapi_raw task state to invoke my Raw Lambda. This task uses the lambda:invoke action to invoke my data_wordpressapi_raw function. I set the next state as data_wordpressapi_bronze and add a Catch block that sends all errors to a PublishFailure state (which I’ll define later):

JSON
    "data_wordpressapi_raw": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "Payload.$": "$",
        "FunctionName": "arn:aws:lambda:eu-west-1:REDACTED:function:data_wordpressapi_raw:$LATEST"
      },
      "Next": "data_wordpressapi_bronze",
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "ResultPath": "$.Error",
          "Next": "PublishFailure"
        }
      ],
      "TimeoutSeconds": 120
    }

Note the TimeoutSeconds parameter. All my task states will have 120-second timeouts. These stop the state machine from waiting indefinitely if the task becomes unresponsive, and are recommended best practice. Also note that state machines wait for Lambda invocations to finish by default, so no additional config is needed for this.

Next, I create a data_wordpressapi_bronze task state to invoke my Bronze Lambda. This task uses the lambda:invoke action to invoke my data_wordpressapi_bronze function. I then add a Catch block that sends all errors to a PublishFailure state.

Finally, "End": true designates this state as a terminal state which ends the execution if the task is successful:

JSON
    "data_wordpressapi_bronze": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "Payload.$": "$",
        "FunctionName": "arn:aws:lambda:eu-west-1:973122011240:function:data_wordpressapi_bronze:$LATEST"
      },
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "ResultPath": "$.Error",
          "Next": "PublishFailure"
        }
      ],
      "TimeoutSeconds": 120,
      "End": true
    }

Finally, I create a PublishFailure task state that publishes failure notifications. This task uses the sns:Publish action to publish a simple message to the failure-stepfunction SNS Topic ARN. "End": true marks this task as the other potential way the state machine execution can end:

JSON
    "PublishFailure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:eu-west-1:REDACTED:failure-stepfunction",
        "Message": "An error occurred in the state machine: { \"error\": \"$.Error\" }"
      },
      "End": true,
      "TimeoutSeconds": 120
    }

While both Lambdas already have SNS alerting, the state machine itself may also fail so the added observability is justified. This Marcia Villalba video was very helpful here:

And that’s everything I need! At this point Workflow Studio gives me two things – firstly the state machine’s code, which I’ve committed to GitHub. And secondly this handy downloadable diagram:

stepfunctions graph

State Machine Config

It’s now time to think about security and monitoring.

When new state machines are created in the AWS Step Functions console, an IAM Role is created with policies based on the state machine’s resources. The nuances and templates are covered in the Step Functions Developer Guide, so let’s examine my WordPress_Raw_To_Bronze state machine’s auto-generated IAM Role consisting of two policies:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "xray:PutTraceSegments",
                "xray:PutTelemetryRecords",
                "xray:GetSamplingRules",
                "xray:GetSamplingTargets"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

This supports the AWS X-Ray integration with AWS Step Functions. If X-Ray tracing is never enabled then this policy is unused.

Besides X-Ray tracing, there is also an option to log a state machine’s execution history to CloudWatch Logs. There are three log levels available plus a fourth default choice: OFF. Each state machine retains recent execution history and I’ve got no need to keep that history long-term, so I leave the log retention disabled. Remember – CloudWatch Logs is only free for the first 5GB!

State Machine Testing

There are various ways to test a state machine. The testing and debugging section in the developer guide goes into further detail on the main options; I’ll focus on console testing here.

Both individual states and the entire state machine can be tested in the console. Each state can be tested in isolation (using the TestState API under the hood) with customisable inputs and IAM roles. This is great for checking the state outputs are correct, and that the attached IAM role is sufficient.

The state machine itself can also be tested via on-demand execution. The Execution Details page shows the state machine’s statistics and events, and has great coverage in the developer guide.
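
As an illustration, here’s a hedged boto3 sketch of calling that TestState API directly for the data_wordpressapi_bronze task state. The role ARN is hypothetical, and the single-state definition is trimmed from the full machine:

Python
    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    # ASL for just the state under test
    state_definition = json.dumps({
        "Type": "Task",
        "Resource": "arn:aws:states:::lambda:invoke",
        "Parameters": {
            "Payload.$": "$",
            "FunctionName": "arn:aws:lambda:eu-west-1:REDACTED:function:data_wordpressapi_bronze:$LATEST"
        },
        "TimeoutSeconds": 120,
        "End": True
    })

    response = sfn.test_state(
        definition=state_definition,
        roleArn="arn:aws:iam::REDACTED:role/WordPress_Raw_To_Bronze-role",  # hypothetical role
        input=json.dumps({})
    )
    print(response["status"])  # SUCCEEDED, FAILED, RETRIABLE or CAUGHT_ERROR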

During testing, my WordPress_Raw_To_Bronze state machine returned this error:

States.Runtime in step: data_wordpressapi_bronze.

An error occurred while executing the state 'data_wordpressapi_bronze' (entered at the event id #7). Unable to apply Path transformation to null or empty input.

This turned out to be a problem with the OutputPath parameter, which Workflow Studio enables by default:

2024 03 04 StepFunctionsOutPutPath

I’m not using this setting for anything, so I disabled it to solve this problem.

EventBridge Schedule

Finally, I want to automate the execution of my state machine. This calls for an EventBridge Schedule!

EventBridge makes this quite simple, using mostly the same process as last time. The Step Functions StartExecution operation is a templated target like Lambda’s Invoke operation, so it’s a case of selecting the WordPress_Raw_To_Bronze state machine from the list and updating the schedule’s IAM role accordingly.

And that’s it! EventBridge now executes the state machine at 07:00 each morning. The state machine then sequentially invokes both Lambda functions and catches any errors.
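
For reference, here’s a boto3 sketch of roughly what that schedule amounts to. The schedule and role names are hypothetical, and the role needs states:StartExecution on the state machine:

Python
    import boto3

    scheduler = boto3.client("scheduler")

    scheduler.create_schedule(
        Name="wordpress-raw-to-bronze-daily",  # hypothetical name
        ScheduleExpression="cron(0 7 * * ? *)",
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": "arn:aws:states:eu-west-1:REDACTED:stateMachine:WordPress_Raw_To_Bronze",
            "RoleArn": "arn:aws:iam::REDACTED:role/eventbridge-scheduler-role"  # hypothetical role
        }
    )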

Costs

In this section, I’ll examine my recent AWS WordPress bronze orchestration process costs.

Let’s start with Step Functions. There are two kinds of Step Functions workflow:

  • Standard workflows are charged based on the number of state transitions, which are counted each time a workflow step is executed. The first 4000 transitions each month are free; after that, every 1000 transitions cost $0.025 (see the sketch after this list).
  • Express workflows are priced by the number of executions, duration, and memory consumption. The specifics of these criteria, coupled with full details of all charges are on the Step Functions pricing page.
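
As a quick sanity check of the standard workflow charges, here’s a minimal sketch using the rates above:

Python
    def standard_workflow_cost(transitions: int) -> float:
        """Estimate monthly Step Functions standard workflow cost in USD."""
        free_tier = 4000
        billable = max(0, transitions - free_tier)
        return (billable / 1000) * 0.025

    print(standard_workflow_cost(118))     # 0.0 - comfortably inside the free tier
    print(standard_workflow_cost(10_000))  # 0.15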

I’m using standard workflows, and as of 26 March I’ve used 118 state transitions. In other words, free! Elsewhere, my costs are broadly on par with previous months. These are my S3 costs from 2024-02-01 to 2024-03-26:

| S3 Actions | Month | Usage | Cost ($) |
| --- | --- | --- | --- |
| PUT, COPY, POST, or LIST requests | 2024-02 | 64,196 | 0.32 |
| PUT, COPY, POST, or LIST requests | 2024-03 | 17,566 | 0.09 |
| GET and all other requests | 2024-02 | 101,462 | 0.04 |
| GET and all other requests | 2024-03 | 8,656 | 0.00 |
| GB-month of storage used | 2024-02 | 0.109 | 0.00 |
| GB-month of storage used | 2024-03 | 0.161 | 0.00 |

And this is my recent free tier usage from 2024-02-01 to 2024-03-26:

| Service | Month | Usage |
| --- | --- | --- |
| EventBridge | 2024-02 | 31 Invocations |
| EventBridge | 2024-03 | 25 Invocations |
| Lambda | 2024-02 | 122.563 Second Compute |
| Lambda | 2024-02 | 84 Requests |
| Lambda | 2024-03 | 82.376 Second Compute |
| Lambda | 2024-03 | 58 Requests |
| Parameter Store | 2024-02 | 34 API Requests |
| Parameter Store | 2024-03 | 25 API Requests |
| SNS | 2024-02 | 8 Email-JSON Notifications |
| SNS | 2024-02 | 438 API Requests |
| SNS | 2024-03 | 3 Email-JSON Notifications |
| SNS | 2024-03 | 205 API Requests |

So my only costs are still for storage.

Resources

The following items have been checked into the amazonwebshark GitHub repo for the AWS WordPress bronze orchestration process, available via the button below:

  • Updated data_wordpressapi_raw Python script & requirements.txt file.
  • New data_wordpressapi_bronze Python script & requirements.txt file.
  • WordPress_Raw_To_Bronze state machine JSON.
GitHub-BannerSmall

Summary

In this post, I created my WordPress pipeline’s bronze data orchestration process using AWS Lambda layers and AWS Step Functions.

I’ve wanted to try Step Functions out for a while, and all things considered they’re great! Workflow Studio is easy to use, and the templates and tutorials undoubtedly highlight the value that Step Functions can bring.

Additionally, the integration with both EventBridge Scheduler and other AWS services makes Step Functions a compelling orchestration service for both my ongoing WordPress bronze work and the future projects in my pipeline. This combined with some extra Lambda layers will reduce my future dev and test time.

If this post has been useful then the button below has links for contact, socials, projects and sessions:

SharkLinkButton 1

Thanks for reading ~~^~~

Categories
Developing & Application Integration

WordPress Data Extraction Automation With AWS

In this post, I set up the automation of my WordPress API data extraction Python script with AWS managed serverless services.

Table of Contents

Introduction

In my previous post, I wrote a Python script for extracting WordPress API data. While it works fine, it relies on me logging in and pressing buttons. This isn’t convenient, and would be completely out of the question in a commercial use case. Wouldn’t it be great if something could run the script for me?

Enter some AWS managed serverless services that are very adept at automation! In this post, I’ll integrate these services into my existing architecture, test that everything works and see what my AWS costs are to date.

A gentle reminder: this is my first time setting up some of these services from scratch. This post doesn’t represent best practices, may be poorly optimised or include unexpected bugs, and may become obsolete. I expect to find better ways of doing these processes in the coming months and will link updates where appropriate.

Architectural Decisions

In this section, I examine my architectural decisions before starting work. Which AWS services will perform my WordPress data extraction automation? Note that these decisions are in addition to my previous ones.

AWS Lambda

Probably no surprises here. Whenever AWS and serverless come up, Lambda is usually the first service that comes to mind.

And with good reason! AWS Lambda deploys quickly and scales on demand. It supports several programming languages and practically every AWS service. It also has a generous free tier and requires no infrastructure management.

Lambda will provide my compute resources. This includes the runtime, execution environment and network connectivity for my Python script.

Amazon Cloudwatch

Amazon CloudWatch is a monitoring service that can collect and track performance data, generate insights and respond to resource state changes. It provides features such as metrics, alarms, and logs, letting users monitor and troubleshoot their applications and infrastructure in real time.

CloudWatch will record and store my Lambda function’s logs. I can see when my function is invoked, how long it takes to run and any errors that may occur.

So if something does go wrong, how will I know?

Amazon SNS

Amazon Simple Notification Service (SNS) is a messaging service that delivers notifications to a set of recipients or endpoints. It supports various messaging protocols like SMS, email and HTTP, making it helpful for building scalable and decoupled applications.

SNS will be the link between AWS and my email inbox. It will deliver messages from AWS about my Lambda function.

So that’s my alerting sorted. How does the function get invoked?

Amazon EventBridge

Amazon EventBridge is an event bus service that enables communication between different services using events. It offers a serverless and scalable platform with advanced event routing, integration capabilities and, crucially, scheduling and time expression functionality.

EventBridge is here to handle my automation requirements. Using a CRON expression, it’ll invoke my Lambda function regularly with no user input required.

Architectural Diagram

This is an architectural diagram of the AWS automation of my WordPress data extraction process:

  1. EventBridge invokes AWS Lambda function.
  2. AWS Lambda calls Parameter Store for WordPress, S3 and SNS parameters. Parameter Store returns these to AWS Lambda.
  3. Lambda Function calls WordPress API. WordPress API returns data.
  4. API data is written to S3 bucket.

If there’s a failure, the Lambda function publishes a message to an SNS topic. SNS then delivers this message to the user’s subscribed email address.

Meanwhile, Lambda is writing to a CloudWatch Log Group throughout its invocation.

SNS & Parameter Store

In this section, I configure Amazon SNS and update AWS Parameter Store to enable my WordPress data extraction automation alerting. This won’t take long!

SNS Configuration

SNS has two fundamental concepts:

  • Topics: communication channels for publishing messages.
  • Subscriptions: endpoints to send messages to.

Firstly, I create a new wordpress-api-raw standard SNS Topic. This topic doesn’t need encryption or delivery policies, so all the defaults are fine. An Amazon Resource Name (ARN) is assigned to the new SNS Topic, which I’ll put into Parameter Store.

Next, I create a new SNS Subscription for my SNS Topic that emails me when invoked.
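
For anyone preferring code to the console, this is a rough boto3 equivalent. The email address is a placeholder, and the subscription needs confirming via the email SNS sends out:

Python
    import boto3

    sns = boto3.client("sns")

    # Create the topic - the defaults are fine, so no extra attributes needed
    topic = sns.create_topic(Name="wordpress-api-raw")
    topic_arn = topic["TopicArn"]

    # Email subscription (placeholder address)
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="email",
        Endpoint="me@example.com"
    )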

There’s not much else to add here! That said, SNS can do far more than this. Check out SNS’s features and capabilities in the Developer Guide.

Parameter Store Configuration

Next, I need to add the new SNS Topic ARN to AWS Parameter Store.

I create a new string parameter, and assign the SNS Topic’s ARN as the value. That’s… it! With some changes, my Python script can now get the SNS parameter in the same way as the S3 and WordPress parameters.
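
The boto3 equivalent is a single call – sketched here with the parameter name the script uses later:

Python
    import boto3

    ssm = boto3.client("ssm")

    ssm.put_parameter(
        Name="/sns/pipeline/wordpressapi/raw",
        Value=topic_arn,  # the ARN returned by sns.create_topic
        Type="String"
    )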

Speaking of changing the Python script…

Python

In this section, I integrate SNS into my existing Python script and test the new outputs.

Function Updates

My script now has a new send_sns_message function:

It expects four arguments:

  • sns_client: the boto3 client used to contact AWS.
  • topic_arn: the SNS topic to use for the message.
  • subject: the message’s subject.
  • message: the message to send.

Everything bar sns_client has string type hints. No return value is needed.

I create a try except block that attempts to send a message using the sns_client’s publish method and the supplied values. The log is updated with publish’s success or failure.
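
Putting that description together, a sketch of send_sns_message might look like this (the exact log wording is my assumption):

Python
    import logging
    import botocore.exceptions

    def send_sns_message(sns_client, topic_arn: str, subject: str, message: str):
        """Publish a message to an SNS topic, logging success or failure."""
        try:
            sns_client.publish(
                TopicArn=topic_arn,
                Subject=subject,
                Message=message
            )
            logging.info(f"SNS message sent: {subject}")

        except botocore.exceptions.ClientError as e:
            logging.error(f"Error sending SNS message: {e}")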

Separately, I’ve also added a ParamValidationError exception to my get_parameter_from_ssm function. Previously the exceptions were:

Python
    except ssm_client.exceptions.ParameterNotFound:
        logging.warning(f"Parameter {parameter_name} not found.")
        return ""

    except botocore.exceptions.ClientError as e:
        logging.error(f"Error getting parameter {parameter_name}: {e}")
        return ""

They are now:

Python
    except ssm_client.exceptions.ParameterNotFound as pnf:
        logging.warning(f"Parameter {parameter_name} not found: {pnf}")
        return ""

    except botocore.exceptions.ParamValidationError as epv:
        logging.error(f"Error getting parameter {parameter_name}: {epv}")
        return ""

    except botocore.exceptions.ClientError as ec:
        logging.error(f"Error getting parameter {parameter_name}: {ec}")
        return ""

Variable Updates

My send_sns_message function needs some new variables. Firstly, I create an SNS Client using my existing boto3 session and assign it to client_sns:

Python
    # AWS sessions and clients
    session = boto3.Session()
    client_ssm = session.client('ssm')
    client_s3 = session.client('s3')
    client_sns = session.client('sns')
    requests_session = requests.Session()

Next, I assign the new SNS parameter name to a parametername_snstopic object:

Python
    # AWS Parameter Store Names
    parametername_s3bucket = '/s3/lakehouse/name/raw'
    parametername_snstopic = '/sns/pipeline/wordpressapi/raw'
    parametername_wordpressapi = '/wordpress/amazonwebshark/api/mysqlendpoints'

Finally, I create a new lambdaname object which I’ll use for SNS notifications in my Python script’s body.

Python
    # Lambda name for messages
    lambdaname = 'wordpress_api_raw.py'

Script Body Updates

These changes integrate SNS failure messages into my script. There are no success messages…because I get enough emails as it is.

SNS Parameter Retrieval & Check

There’s now a third use of get_parameter_from_ssm, using parametername_snstopic to get the SNS topic ARN from AWS Parameter Store:

Python
    # Get SNS topic from Parameter Store
    logging.info("Getting SNS parameter...")
    sns_topic = get_parameter_from_ssm(client_ssm, parametername_snstopic)

I’ve also added an SNS parameter check. It behaves differently to the other checks, as it’ll raise a ValueError if nothing is found:

Python
    # Check an SNS topic has been returned.
    if not sns_topic:
        message = "No SNS topic returned."
        logging.warning(message)
        raise ValueError(message)

I want to cause an invocation failure in this situation, as not having the SNS topic ARN is a critical and unrecoverable problem which the automation process will have no way to alert me about.

However, the AWS Lambda service can warn me about invocation failures. This is something I’ll set up later on.

Failure Getting Other Parameters

The get_parameter_from_ssm response checks have changed. Previously, if a parameter request (the API endpoints in this case) returned a blank string, a warning was logged and the invocation ended:

Python
    # Check the API list isn't empty
    if not any(api_endpoints_list):
        logging.warning("No API endpoints returned.")
        return

Now, new subject and message objects are created with details about the error. The message string is added to the log, and both objects are passed to send_sns_message along with the SNS client and SNS topic ARN:

Python
    # Check the API list isn't empty
    if not any(api_endpoints_list):
        message = "No API endpoints returned."
        subject = f"{lambdaname}: Failed"

        logging.warning(message)
        send_sns_message(client_sns, sns_topic, subject, message)
        return

The S3 check now works similarly:

Python
    # Check an S3 bucket has been returned.
    if not s3_bucket:
        message = "No S3 bucket returned."
        subject = f"{lambdaname}: Failed"

        logging.warning(message)
        send_sns_message(client_sns, sns_topic, subject, message)
        return

If either of these checks fail, no WordPress API calls are made and the invocation stops.

Failure During For Loop

Previously, the script’s final output was a log entry showing the endpoint_count_success and endpoint_count_failure values:

Python
    logging.info("WordPress API Raw process complete: " \
                 f"{endpoint_count_success} Successful | {endpoint_count_failure} Failed.")

This section has now been expanded. If endpoint_count_failure is greater than zero, a message object is created including the number of failures.

message is then written to the log, and is passed to send_sns_message with a subject and the SNS client and SNS topic ARN:

Python
    logging.info("WordPress API Raw process complete: " \
                 f"{endpoint_count_success} Successful | {endpoint_count_failure} Failed.")

    # Send SNS notification if any failures found
    if endpoint_count_failure > 0:
        message = f"{lambdaname} ran with {endpoint_count_failure} errors.  Please check logs."
        subject = f"{lambdaname}: Ran With Failures"

        logging.warning(message)
        send_sns_message(client_sns, sns_topic, subject, message)

If a loop iteration fails, the script ends it and starts the next. One or more loop iterations can fail while the others succeed.

That completes the script changes. Next, I’ll test the failure responses.

SNS Notification Testing

SNS should now send me one of two emails depending on which failure occurs. I can test these locally by inverting the logic of some if conditions.

Firstly, I set the S3 bucket check to fail if a bucket name is returned:

Python
    # Check an S3 bucket has been returned.
    if s3_bucket:
        message = "No S3 bucket returned."
        subject = f"{lambdaname}: Failed"

        logging.warning(message)
        send_sns_message(client_sns, sns_topic, subject, message)
        return

Upon invocation, an email arrives with details of the failure:

2024 02 06 wordpress api raw.py Failed

Secondly, I change the loop’s data check condition to fail if data is returned:

Python
        # If no data returned, record failure & end current iteration
        if api_json:
            logging.warning("Skipping attempt due to API call failure.")
            endpoint_count_failure += 1
            continue

This ends the current loop iteration and increments the endpoint_count_failure value. Then, in a check after the loop, an SNS message is triggered when endpoint_count_failure is greater than 0:

Python
    # Send SNS notification if any failures found
    if endpoint_count_failure > 0:
        message = f"{lambdaname} ran with {endpoint_count_failure} errors.  Please check logs."
        subject = f"{lambdaname}: Ran With Failures"

        logging.warning(message)
        send_sns_message(client_sns, sns_topic, subject, message)

Now, a different email arrives with the number of failures:

2024 02 06 wordpress api raw.py RanWithFailures

Success! Now the Python script is working as intended, it’s time to deploy it to AWS.

Lambda & CloudWatch

In this section, I start creating the automation of my WordPress data extraction process by creating and configuring a new AWS Lambda function. Then I deploy my Python script, set some error handling and test everything works.

I made extensive use of Martyn Kilbryde’s AWS Lambda Deep Dive course on A Cloud Guru while completing this section. It was exactly the kind of course I needed – a bridge between theoretical certification content and hands-on experience in my own account.

This section is the result of my first pass through the course. There are better ways of doing what I’ve done here, but ultimately I have to start somewhere. I have several course sections to revisit, so watch this space!

Let’s begin with creating a new Lambda function.

Function Creation

Lambda function creation steps vary depending on whether the function is being written from scratch, or if it uses a blueprint or container image. I’m writing from scratch, so after choosing a name I must choose the function’s runtime. Runtimes consist of the programming language and the specific version. In my case, this is Python 3.12.

Next are the permissions. By design, AWS services need IAM roles granting the right permissions to interact with other services. A Lambda function with no IAM role cannot complete actions like S3 reads or CloudWatch writes.

Thankfully, AWS are one step ahead. By default, Lambda creates a basic execution role for each new function with some essential Amazon CloudWatch actions. With this role, the function can record invocations, resource utilization and billing details in a log stream. Additional IAM actions can be assigned to the role as needed.
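
As a rough boto3 equivalent of those console steps – the role ARN, handler and package name are all hypothetical:

Python
    import boto3

    lambda_client = boto3.client("lambda")

    with open("deployment_package.zip", "rb") as f:  # hypothetical package name
        package_bytes = f.read()

    lambda_client.create_function(
        FunctionName="data_wordpressapi_raw",
        Runtime="python3.12",
        Role="arn:aws:iam::REDACTED:role/lambda-basic-execution",  # hypothetical role
        Handler="wordpress_api_raw.lambda_handler",                # hypothetical handler
        Code={"ZipFile": package_bytes}
    )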

Script Deployment

Now I have a function, I need to upload my Python script. There are many ways of doing this! I followed the virtual environment process, as I already had one from developing the script in VSCode. This environment’s contents are in the requirements.txt file listed in the Resources section.

While this was successful, the resulting deployment package is probably far bigger than it needs to be. Additionally, I didn’t make use of any of the toolkits, frameworks or pipelines with Lambda functionality. I expect my future deployments to improve!

Lambda Destination

There’s one more Lambda feature I want to use: a Lambda Destination.

From the AWS Compute blog:

With Destinations, you can route asynchronous function results as an execution record to a destination resource without writing additional code. An execution record contains details about the request and response in JSON format including version, timestamp, request context, request payload, response context, and response payload.

https://aws.amazon.com/blogs/compute/introducing-aws-lambda-destinations/

Here, I want a destination that will email me if my Lambda function fails to run. This helps with visibility, and will be vital if the SNS parameter isn’t returned!

With no Destination, the failure would only appear in the function’s log and I might not know about it for days. With a Destination enabled, I’ll know about the failure as soon as the email comes through.

My destination uses the following config:

  • Invocation Type: Asynchronous
  • Condition: On Failure
  • Destination Type: SNS topic

The SNS topic is a general Failed Lambda one that I already have. The Lambda service can use this SNS topic regardless of any script problems.
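
Sketched in boto3, the Destination config might look like this – the topic name is hypothetical, and the same call also controls the retry attempts I’ll adjust later for testing:

Python
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.put_function_event_invoke_config(
        FunctionName="data_wordpressapi_raw",
        MaximumRetryAttempts=2,  # drop to 0 when deliberately forcing failures
        DestinationConfig={
            "OnFailure": {
                "Destination": "arn:aws:sns:eu-west-1:REDACTED:failed-lambda"  # hypothetical topic
            }
        }
    )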

Lambda & CloudWatch Testing

With the function created and deployed, it’s testing time! Does my function work and log as intended?

Error: Timeout Exceeded

It doesn’t take long to hit my first problem:

Task timed out after 3.02 seconds

All Lambdas created in the console start with a three-second timeout. This is great at preventing runaway invocations, but I clearly need longer than three seconds.

After some local testing, I increased the timeout to two minutes in the function’s config:

2023 12 19 LambdaTimeout

Error: Access Denied

Next, I start hitting permission errors:

An error occurred (AccessDeniedException) when calling the GetParameter operation: User is not authorized to perform: ssm:GetParameter on resource because no identity-based policy allows the ssm:GetParameter action.

My Lambda’s basic execution role can interact with CloudWatch, but nothing else. This is by design in the interests of security. However, this IAM role is currently too restrictive for my needs.

The role’s policy needs to allow additional actions – here, ssm:GetParameter for the Parameter Store reads, s3:PutObject for the bucket writes and sns:Publish for the failure alerts.

To follow IAM best practice, I should also apply least-privilege permissions. Instead of a wildcard character, I should restrict the policy to the specific ARNs of my AWS resources.

For example, this IAM policy is too permissive as it allows access to all parameters in Parameter Store:

JSON
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "Statement1",
			"Effect": "Allow",
			"Action": [
				"ssm:GetParameter"
			],
			"Resource": [
				"*"
			]
		}
	]
}

Conversely, this IAM policy allows access to specific parameter ARNs only.

(Well, it did before the ARNs were redacted – Ed.)

JSON
"Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": [
                "arn:aws:ssm:REDACTED",
                "arn:aws:ssm:REDACTED",
                "arn:aws:ssm:REDACTED"
            ]
        }

My S3 policy does have a wildcard value, but it’s at the prefix level:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::REDACTED/wordpress-api/*"
            ]
        }
    ]
}

My Lambda function can now write to my bucket, but only to the wordpress-api prefix. A good way to understand the distinction is to look at an AWS example:

arn:aws:s3:::my_corporate_bucket/*
arn:aws:s3:::my_corporate_bucket/Development/*

In this example, line 1 covers the entire my_corporate_bucket S3 bucket. Line 2 is more focused, covering only the objects in the Development prefix of the my_corporate_bucket bucket.

Error: Memory Exceeded

With the new policy, my function runs smoothly. Until:

Runtime exited with error: signal: killed Runtime.ExitError

This one was weird because the function kept suddenly stopping at different points! I then checked further down the test summary:

2023 12 19 LambdaMaxMemoryHighlight

It’s running out of memory! Lambda assigns a default 128MB RAM to each function, and here my function was hitting 129MB. RAM can be changed in the function’s general configuration. But changed to what?

When a Lambda function runs successfully, it logs memory metrics:

Memory Size: 500 MB	Max Memory Used: 197 MB

After some trial and error, I set the function’s RAM to 250MB and have had no problems since.
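
Both this memory change and the earlier timeout change can be made with a single boto3 call. A sketch:

Python
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_function_configuration(
        FunctionName="data_wordpressapi_raw",
        Timeout=120,     # two minutes, up from the 3-second default
        MemorySize=250   # MB, up from the 128MB default
    )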

Incomplete CloudWatch Logs

The last issue wasn’t an error so much as a bug. CloudWatch was showing my Lambda invocation start and end, but none of the function’s logs:

2023 12 22 LambdaNoLogs

The solution was found in the docstring of Python’s logging.basicConfig:

This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True.

basicConfig docstring

Well, AWS Lambda does have built-in logging. And my basicConfig isn’t forcing anything! One swift update and redeployment later:

Python
    logging.basicConfig(
        level = logging.INFO,
        format = "%(asctime)s [%(levelname)s]: %(message)s",
        datefmt = "%Y-%m-%d %H:%M:%S",
        force = True
        )

And my CloudWatch Log Stream is now far more descriptive!

2023 12 22 LambdaLogs

In the long run I plan to investigate Lambda’s logging capabilities, but for now this does what I need.

SNS Destination Email

Finally, I want to make sure my Lambda Destination is working as expected. My function works now, so I need to force a failure. There are many ways of doing this. In this case, I used three steps:

  • Temporarily alter the function’s timeout to 3 seconds.
  • Reconfigure the function’s Asynchronous Invocation retry attempts to zero.
  • Invoke the function with a one-time EventBridge Schedule.

The low timeout guarantees a function invocation failure. Setting zero retries prevents unnecessary retries (because I want the failure to happen!). Finally, the one-time schedule will asynchronously invoke my function, which is what the Destination is looking for.

And…(redacted) success!

2024 02 09 DestinationEmail

I could clean this email up with an EventBridge Input Path (which I’ve done before), but that’s mostly cosmetic in this case.

EventBridge

In this section I configure EventBridge – the AWS service that schedules the automation of my WordPress data extraction process. While I’ve used EventBridge Rules before, this is my first time using EventBridge Scheduler. So what’s the difference?

EventBridge Scheduler 101

From the AWS EventBridge product page:

Amazon EventBridge Scheduler is a serverless scheduler that enables you to schedule tasks and events at scale. With EventBridge Scheduler you have the flexibility to configure scheduling patterns, set a delivery window, and define retry policies to ensure your critical tasks and events are reliably triggered right when you need them.

https://aws.amazon.com/eventbridge/scheduler/

EventBridge Scheduler is a fully managed service that integrates with over 200 AWS services. It supports one-time schedules and start and end dates, and can account for daylight saving time.

Cost-wise, EventBridge Schedules are charged per invocation. EventBridge’s free tier covers the first 14 million(!) invocations each month, after which each further million currently costs $1.00. These invocations can be staggered using Flexible Time Windows to avoid throttling.

AWS has published a table showing the main differences between EventBridge Scheduler and EventBridge Rules. Essentially, EventBridge Rules are best suited for event-based activity, while EventBridge Scheduler is best suited for time-based activity.

Schedule Setup

Let’s create a new EventBridge Schedule. After choosing a name, I need a schedule pattern. Here, I want a recurring CRON-based schedule that runs at a specific time.

EventBridge Cron expressions have six required fields which are separated by white space. My cron expression is 0 7 * * ? * which translates to:

  • At minute 0
  • Of the 7th hour (07:00)
  • On every day of the month
  • In every month
  • On any day of the week
  • In every year

In response, EventBridge shows some of the future trigger dates so I can check my expression is correct:

Sat, 02 Feb 2024 07:00:00 (UTC+00:00)
Sun, 03 Feb 2024 07:00:00 (UTC+00:00)
Mon, 04 Feb 2024 07:00:00 (UTC+00:00)
Tue, 05 Feb 2024 07:00:00 (UTC+00:00)
Wed, 06 Feb 2024 07:00:00 (UTC+00:00)

I then need to choose a flexible time window setting. This setting distributes AWS service API calls to help prevent throttling, but that’s not a problem here so I select Off.

Next, I choose the target. I have two choices: templated targets or universal targets. Templated targets are a set of popular AWS service operations, needing only the relevant ARN during setup. Universal targets can target any AWS service but require more configuration details. Lambda’s Invoke operation is a templated target, so I use that.

Next are some optional encryption, retry and state settings. EventBridge Scheduler IAM roles are handled here too, allowing EventBridge to send events to the targeted AWS services. Finally, a summary screen shows the full schedule before creation.

The schedule then appears on the EventBridge console:

2024 02 09 AmazonEventBridgeScheduler

EventBridge Testing

Testing time! Does CloudWatch show Lambda function invocations at 07:00?

It does!

2024 02 08 CloudWatchLogs

While I’m in CloudWatch, I’ll change the log group’s retention setting. It defaults to Never Expire, but I don’t need an indefinite history for this function! Three months is fine – long enough to troubleshoot any errors, but not so long that I’m storing and potentially paying for logs I’ll never need.
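
Retention is one boto3 call, assuming the default Lambda log group naming pattern:

Python
    import boto3

    logs = boto3.client("logs")

    logs.put_retention_policy(
        logGroupName="/aws/lambda/data_wordpressapi_raw",  # default naming pattern
        retentionInDays=90  # roughly three months
    )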

Costs

In this section, I examine the current AWS costs for my WordPress data extraction and automation processes using the Billing & Cost Management console.

I began creating pipeline resources in December 2023 using various workshops and tutorials. This table shows my AWS service costs (excluding tax) accrued over December 2023 and January 2024 (the months I currently have full billing periods for):

2024 02 09 Cost Explorer

I’ll examine these costs in two parts:

  • S3 Costs: my AWS costs are all storage-based. I’ll examine my S3 API calls and how each S3 API contributes to my bill.
  • Free Tier Usage: everything else has zero cost. I’ll examine what I used and how it compares to the free tier allowances.

I’ll also take a quick look at February’s costs to date. I’ve not tagged any of the pipeline resources, so these figures are for all activity in this AWS account.

S3 Costs

S3 is the only AWS service in my WordPress data extraction and automation processes that is generating a cost. This Cost Explorer chart shows my S3 API usage over the last two full months:

2024 02 10 Cost ExplorerS3APICalls

PutObject is clearly the most used S3 API, which isn’t surprising given S3’s storage nature. Cost Explorer can also show API request totals, as shown below:

2024 02 10 Cost ExplorerS3APICallsDec23
2024 02 10 Cost ExplorerS3APICallsJan24

Remember that this includes S3 API calls from other services like S3 Inventory, CloudTrail Log Streams and Athena queries.

AWS bills summarise these figures for easier reading. This is my December 2023 S3 bill, where S3 PUT, COPY, POST and LIST requests are grouped:

2024 02 09 Billing202312

January 2024’s bill:

2024 02 09 Billing202401

Going into this depth for $0.08 might not seem worth it. But if the bill suddenly becomes $8 or $80 then having this knowledge is very useful!

The AWS Storage blog has a great post on analyzing S3 API operations that really helped here.

Free Tier Usage

The following services had no cost because my usage fell within their free tier allowances. For each zero cost on the bill, I’ll show the service and, where appropriate, the respective free tier allowance.

CloudTrail:

  • 2023-12: 7970 Events recorded.
  • 2024-01: 6605 Events recorded.

CloudWatch was the same for both months:

  • Sub 1GB-Mo log storage used of 5GB-mo log storage free tier
  • Sub 1GB log data ingested of 5GB log data ingestion free tier

Lambda 2023-12:

  • 36.976 GB-Seconds used of 400,000 GB-seconds Compute free tier
  • 47 Requests used of 1,000,000 Request free tier

Lambda 2024-01:

  • 9.572 GB-Seconds used of 400,000 GB-seconds Compute free tier
  • 8 Requests used of 1,000,000 Request free tier

Parameter Store (billed as Secrets Manager):

  • 2023-12: 31 API Requests used of 10,000 API Request free tier
  • 2024-01: 41 API Requests used of 10,000 API Request free tier

February 2024 Costs

At this time I don’t have full billing data for February, but I wanted to show the EventBridge and SNS usage to date:

EventBridge (billed as CloudWatch Events):

  • 16 Invocations used of 14 million free tier

SNS:

  • 3 Notifications used of 1,000 Email-JSON Notification free tier
  • 227 API Requests used of 1,000,000 API Request free tier

As of Feb 15, Lambda is on 71.742 GB-Seconds and 34 Requests, while S3 is on 8,821 PUT/COPY/POST/LIST requests, 3,764 GET and other requests and 0.0052 GB-Mo storage.

Resources

The full Python script has been checked into the amazonwebshark GitHub repo, available via the button below. Included is a requirements.txt file for the Python libraries used to extract the WordPress API data. This file is unchanged from last time but is included for completeness.

GitHub-BannerSmall

Summary

In this post, I set up the automation of my WordPress API data extraction script with AWS managed serverless services.

On the one hand, there’s plenty more to do here. I have lots to learn about Lambda, like deployment improvement and resource optimisation. This will improve with time and experience.

However, my function’s logging and alerting are in place, my IAM policies meet AWS standards and I’m using the optimal services for my compute and scheduling. And, most importantly, my automation pipeline works!

My attention now turns to the data itself. My next WordPress Data Pipeline post will look at transforming and loading the data so I can put it to use! If this post has been useful, the button below has links for contact, socials, projects and sessions:

SharkLinkButton 1

Thanks for reading ~~^~~