Categories
Developing & Application Integration

Low-Code S3 Key Validation With AWS Step Functions & JSONata

In this post, I use JSONata to add low-code S3 object key validation to an AWS Step Functions state machine.

Table of Contents

Introduction

In 2024, I worked a lot with AWS Step Functions. I built several for different tasks, wrote multiple blog posts about them and talked about them a fair bit. So when AWS introduced JSONata support for Step Functions last year, I was very interested. Although I had no prior JSONata experience, I heard positive feedback and made a mental note to explore its use cases.

Well, there’s no time like the present! And as I was starting to create the first Project Wolfie resources I realised some of my requirements were a perfect fit.

Firstly, I will examine what JSONata is, how it works and why it’s useful. Next, I will outline my architecture and create some low-code S3 key validation JSONata expressions. Finally, I’ll test these expressions and review their outputs.

JSONata & AWS

This section introduces JSONata and examines its syntax and benefits.

Introducing JSONata

JSONata is a lightweight query and transformation language for JSON, developed by Andrew Coleman in 2016. Specifically inspired by XPath and SQL, it enables sophisticated queries using a compact and intuitive notation.

JSONata provides built-in operators and functions for efficiently extracting and transforming data into any JSON structure. It also supports user-defined functions, allowing for advanced expressions that enhance the querying of dynamic JSON data.

For a visual introduction, check out this JSONata overview:

JSONata Syntax Essentials

JSONata has a simple and expressive syntax. Its path-based approach lets developers easily navigate nested structures. It combines functional programming with dot notation for navigation, brackets for filtering and pipeline operators for chaining.

JSONata operations include transformations like:

  • Arithmetic ($price * 1.2)
  • Conditional Logic ($price > 100 ? 'expensive' : 'affordable').
  • Filtering ($orders[status = 'shipped'])
  • String Operations ($firstName & ' ' & $lastName)
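As a rough point of comparison, here are equivalents of those four transformations in plain Python (the variable names mirror the JSONata examples and are illustrative only):

```python
# Rough Python equivalents of the JSONata operations listed above.
# Names (price, orders, first_name, last_name) are illustrative,
# mirroring the JSONata examples rather than real data.
price = 100.0
vat_price = price * 1.2                               # Arithmetic

label = 'expensive' if price > 100 else 'affordable'  # Conditional logic

orders = [{"id": 1, "status": "shipped"}, {"id": 2, "status": "pending"}]
shipped = [o for o in orders if o["status"] == "shipped"]  # Filtering

first_name, last_name = "Jane", "Doe"
full_name = first_name + " " + last_name              # String concatenation

print(vat_price, label, shipped, full_name)
```

JSONata packs the same operations into single expressions, which is what makes it such a good fit for inline use in state machine definitions.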

The JSONata site includes full documentation and a JSONata Exerciser for experimenting.

JSONata In AWS Step Functions

JSONata was introduced to AWS Step Functions in November 2024. Using JSONata in Step Functions requires setting the QueryLanguage field to JSONata in the state machine definition. This action replaces the traditional JSONPath fields with two JSONata fields:

  • Arguments: Used to customise data sent to state actions.
  • Output: Used to transform results into custom state output.

Additionally, the Assign field sets variables that can be stored and reused across the workflow.

In AWS Step Functions, JSONata expressions are enclosed in {% %} delimiters but otherwise follow standard JSONata syntax. They access data using the $states reserved variable with the following structures:

  • State input is accessed using $states.input
  • Context information is accessed using $states.context
  • Task results (if successful) are accessed using $states.result
  • Error outputs (if existing) are accessed using $states.errorOutput

Step Functions includes standard JSONata functions as well as AWS-specific additions like $partition, $range, $hash, $random, and $uuid. Some functions, such as $eval, are not supported.

Here are some JSONata examples from the AWS Step Functions Developer Guide:

Plaintext
{% $states.input.title %}

{% $current_price <= $states.input.desired_priced %}

{% $parse($states.input.json_string) %}

Talking more about this subject is AWS Principal Developer Advocate Eric Johnson:

JSONata Benefits

So why is JSONata in AWS a big deal?

Low Maintenance: JSONata use removes the need for Lambda runtime updates, dependency management and security patching. JSONata expressions are self-contained and version-free, reducing debugging and testing effort.

Simpler Development Workflow: JSONata’s standardised syntax removes decisions about languages, runtimes and tooling. This improves consistency, simplifies collaboration and speeds up development.

Releases Capacity: JSONata use reduces reliance on AWS Lambda, freeing up Lambda concurrency slots for more complex tasks. This minimises throttling risks and can lower Lambda costs.

Faster Execution: JSONata runs inside AWS services, avoiding cold starts, IAM role checks and network latency. Most JSONata transformations are complete in milliseconds, making it ideal for high-throughput APIs and real-time systems.

Architecture

This section explains the key features and events used in my low-code S3 validation architecture with JSONata.

Object Created Event

My process starts when an S3 object is created. For this post, I’m using Amazon EventBridge‘s sample S3 Object Created event:

JSON
{
  "version": "0",
  "id": "17793124-05d4-b198-2fde-7ededc63b103",
  "detail-type": "Object Created",
  "source": "aws.s3",
  "account": "123456789012",
  "time": "2021-11-12T00:00:00Z",
  "region": "ca-central-1",
  "resources": ["arn:aws:s3:::example-bucket"],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "example-bucket"
    },
    "object": {
      "key": "example-key",
      "size": 5,
      "etag": "b1946ac92492d2347c6235b4d2611184",
      "version-id": "IYV3p45BT0ac8hjHg1houSdS1a.Mro8e",
      "sequencer": "00617F08299329D189"
    },
    "request-id": "N4N7GDK58NMKJ12R",
    "requester": "123456789012",
    "source-ip-address": "1.2.3.4",
    "reason": "PutObject"
  }
}

Here, the key field is vital, as it identifies the uploaded object. This field will be used in the validation processes.

Choice State

In AWS Step Functions, Choice states introduce conditional logic to a state machine. They assess conditions and guide execution accordingly, allowing workflows to branch dynamically based on input data. When used with JSONata, a Choice state must contain the following fields:

  • Condition field – a JSONata expression that evaluates to true/false.
  • Next field – a value that must match a state name in the state machine.

For example, this Choice state checks if a variable foo equals 1:

Plaintext
{"Condition": "{% $foo = 1 %}",  "Next": "NumericMatchState"}

If $foo = 1, the condition is true and the workflow transitions to a NumericMatchState state.
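Conceptually, a Choice state behaves like this Python sketch (a simplified emulation with an illustrative default state, not the real Step Functions engine):

```python
# Simplified emulation of Choice state routing: each rule pairs a
# condition with the name of the next state, evaluated in order.
choices = [
    {"condition": lambda state: state["foo"] == 1, "next": "NumericMatchState"},
]
default_state = "DefaultState"  # illustrative fallback state name

def route(state: dict) -> str:
    """Return the name of the next state for the given input."""
    for rule in choices:
        if rule["condition"](state):
            return rule["next"]
    return default_state

print(route({"foo": 1}))  # matches the condition
print(route({"foo": 2}))  # falls through to the default
```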

Architecture Diagram

Now let’s put this all together into an architecture diagram:

Here,

  1. A file is uploaded to an Amazon S3 Bucket.
  2. S3 creates an Object Created event.
  3. Amazon EventBridge matches the event record to an event rule.
  4. EventBridge executes the AWS Step Functions state machine and passes the event to it as JSON input.
  5. The state machine transitions through the various choice states.
  6. The state machine transitions to the fail state if any choice state criteria are not met.
  7. The state machine transitions to the success state if all choice state criteria are met.

Expression Creation

In this section, I create JSONata expressions to perform low-code S3 validation. For clarity, I’ll use this sample S3 event including an object key which closely resembles my actual S3 path:

JSON
{
  "version": "0",
  ...
  "detail": {
    "version": "0",
    "bucket": {
      "name": "data-lakehouse-raw"
    },
    "object": {
      "key": "iTunes/iTunes-AllTunes-2025-02-01.txt",
      "size": 5,
      ...
    },
    "request-id": "N4N7GDK58NMKJ12R",
    "requester": "123456789012",
    "source-ip-address": "1.2.3.4",
    "reason": "PutObject"
  }
}

S3 Key TXT Suffix Check

This JSONata expression checks if the S3 object key ends with txt:

Plaintext
{% $lowercase($split($split($states.input.detail.object.key, '/')[-1], '.')[-1]) = 'txt' %}

For better readability:

Plaintext
{% 
  $lowercase(
    $split(
      $split($states.input.detail.object.key, '/')[-1], 
    '.')[-1]
  ) = 'txt' 
%}

Let’s walk through this step by step:

1. Accessing The S3 Object Key

Extract the key from the event using $states.input:

Plaintext
$states.input.detail.object.key

Output: "iTunes/iTunes-AllTunes-2025-02-01.txt"

2. Splitting By / To Extract The Filename

Break the key into an array with $split, using / as the delimiter:

Plaintext
$split($states.input.detail.object.key, '/')

Output: ["iTunes", "iTunes-AllTunes-2025-02-01.txt"]

Now, retrieve the array’s last element (the object name) using [-1]:

Plaintext
$split(...)[-1]

Output: "iTunes-AllTunes-2025-02-01.txt"

3. Splitting By . To Extract The File Suffix

Break the filename with $split again, using . as the delimiter:

Plaintext
$split($split(...)[-1], '.')

Output: ["iTunes-AllTunes-2025-02-01", "txt"]

Now, retrieve the last element (the suffix) using [-1]:

Plaintext
$split($split(...)[-1], '.')[-1]

Output: "txt"

4. Converting To Lowercase For Case-Insensitive Matching

Use $lowercase to convert the suffix to lowercase:

Plaintext
$lowercase($split($split(...)[-1], '.')[-1])

Output: "txt"

The $lowercase function ensures consistency, as files with TXT, Txt, or tXt extensions will still match correctly. Here, there is no change as txt is already lowercase.

5. Comparing Against txt

Finally, compare the result to 'txt':

Plaintext
$lowercase($split($split(...)[-1], '.')[-1]) = 'txt'

Output: true

This means that files ending in .txt pass validation, while others fail.
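The whole expression can be sanity-checked with equivalent Python string operations (a sketch using the sample key from the event above):

```python
# Python equivalent of the JSONata txt-suffix check.
key = "iTunes/iTunes-AllTunes-2025-02-01.txt"

filename = key.split("/")[-1]     # last path segment: the object name
suffix = filename.split(".")[-1]  # text after the final dot
is_txt = suffix.lower() == "txt"  # case-insensitive comparison

print(is_txt)  # True
```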

S3 Key iTunes String Check

This JSONata expression checks if the S3 object key contains iTunes.

Plaintext
{% $contains($split($states.input.detail.object.key, '/')[-1], 'iTunes') %}

For better readability:

Plaintext
{% 
  $contains(
    $split(
      $states.input.detail.object.key, '/')[-1],
    'iTunes'
  ) 
%}

I’m not using $lowercase this time, as iTunes is the correct spelling.

1. Extract The Filename

This is unchanged from the last expression:

Plaintext
$split($states.input.detail.object.key, '/')[-1]

Output: "iTunes-AllTunes-2025-02-01.txt"

2. Check If The String Contains iTunes

The $contains function checks if the string contains the specified substring. It returns true if the substring exists; otherwise, it returns false.

Plaintext
$contains($split(...)[-1], 'iTunes')

Output: true ✅ if 'iTunes' appears anywhere in the filename.

So:

  • "iTunes-AllTunes-2025-02-01.txt" → true
  • "itunes-AllTunes-2025-02-01.txt" → false (case-sensitive)
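Again, equivalent Python makes the case sensitivity easy to verify (a sketch using the same sample key):

```python
# Python equivalent of the JSONata iTunes-substring check.
key = "iTunes/iTunes-AllTunes-2025-02-01.txt"

filename = key.split("/")[-1]
has_itunes = "iTunes" in filename  # case-sensitive, matching $contains

print(has_itunes)                                     # True
print("iTunes" in "itunes-AllTunes-2025-02-01.txt")   # False: wrong case
```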

S3 Key Date Check

This JSONata expression checks if the S3 object key contains a date with format YYYY-MM-DD.

Plaintext
{% $exists($match($split($states.input.detail.object.key, '/')[-1], /\d{4}-\d{2}-\d{2}/)) %}

For better readability:

Plaintext
{% 
  $exists(
    $match(
      $split($states.input.detail.object.key, '/')[-1], 
      /\d{4}-\d{2}-\d{2}/
    )
  )
%}

1. Extract The Filename

This is unchanged from the first expression:

Plaintext
$split($states.input.detail.object.key, '/')[-1]

Output: "iTunes-AllTunes-2025-02-01.txt"

2. Apply The Regex Match

The $match function applies the provided regular expression (regex) to the string. If a match is found, an array of objects is returned containing the following fields:

  • match – the substring that was matched by the regex.
  • index – the offset (starting at zero) of the match within the string.
  • groups – if the regex contains capturing groups (parentheses), this contains an array of strings representing each captured group.

In this JSONata expression:

Plaintext
$match(..., /\d{4}-\d{2}-\d{2}/)

The regex looks for:

  • \d{4} → Four digits (year)
  • - → Hyphen separator
  • \d{2} → Two digits (month)
  • - → Another hyphen
  • \d{2} → Two digits (day)

Output:

JSON
{
  "match": "2025-02-01",
  "index": 16,
  "groups": []
}

3. Convert To Boolean With $exists

I can’t use the $match output yet as the Choice state needs a boolean output. Enter $exists. This function returns true for a successful match; otherwise, it returns false.

Plaintext
$exists($match(..., /\d{4}-\d{2}-\d{2}/))

Output: true ✅ if a date is found.

Here, $exists returns true as a date is present. However, note that JSONata lacks built-in functions to validate dates. For example:

  • "2025-02-01" → true (valid date)
  • "2025-02-31" → true (invalid date but still matches format)

An AWS Lambda function would be needed for strict date validation.
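Both behaviours can be sketched in Python: re.search mirrors the JSONata format check, while datetime.strptime performs the stricter real-date check that a Lambda function could handle (an illustrative sketch, not actual Lambda code):

```python
import re
from datetime import datetime

def has_date_format(filename: str) -> bool:
    """Mirror the JSONata check: does a YYYY-MM-DD pattern exist?"""
    return re.search(r"\d{4}-\d{2}-\d{2}", filename) is not None

def has_real_date(filename: str) -> bool:
    """Strict check: the matched string must be a real calendar date."""
    match = re.search(r"\d{4}-\d{2}-\d{2}", filename)
    if match is None:
        return False
    try:
        datetime.strptime(match.group(), "%Y-%m-%d")
        return True
    except ValueError:  # e.g. 2025-02-31: format matches, date impossible
        return False

print(has_date_format("iTunes-AllTunes-2025-02-31.txt"))  # True
print(has_real_date("iTunes-AllTunes-2025-02-31.txt"))    # False
```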

Combining JSONata Expressions

Although I’ve created separate Choice states for each JSONata expression in this section, all three expressions can also be combined into a single Choice state using and:

Plaintext
{% $lowercase($split($split($states.input.detail.object.key, '/')[-1], '.')[-1]) = 'txt' and $contains($split($states.input.detail.object.key, '/')[-1], 'iTunes') and $exists($match($split($states.input.detail.object.key, '/')[-1], /\d{4}-\d{2}-\d{2}/)) %}

For better readability:

Plaintext
{% 
  $lowercase(
    $split(
      $split(
        $states.input.detail.object.key, '/')[-1], '.')[-1]) = 'txt' 
and 
  $contains(
    $split(
      $states.input.detail.object.key, '/')[-1], 'iTunes') 
and 
  $exists(
    $match(
      $split(
        $states.input.detail.object.key, '/')[-1], /\d{4}-\d{2}-\d{2}/)) 
%}

When deciding whether to do this, consider these benefits:

  • Simplified Structure: Reducing the number of states can make the state machine easier to understand and maintain visually. Instead of multiple branching paths, all logic is in one centralised Choice state.
  • Cost Optimisation: AWS Step Functions Standard Workflows pricing is based on the number of state transitions. Combining multiple Choice states into one reduces transitions, potentially lowering costs for high-volume workflows.
  • Minimises Transition Latency: Each state transition adds a slight delay. By managing all logic within a single Choice state, the workflow runs more efficiently due to the reduced transitions.

Against these tradeoffs:

  • Added Complexity: A complex Choice state with many conditions can be difficult to read, debug, and modify. It may require deeply nested logic, which makes future updates challenging.
  • Limited Observability: If multiple conditions are combined into one state, debugging failures becomes more difficult as it is unclear which condition caused an unexpected transition.
  • Potential Scaling Difficulty: As the workflow evolves, adding more conditions to a single Choice state can become unmanageable. Ultimately, this situation may require breaking it up.
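To see the combined logic end to end, here is an equivalent single Python function (an illustrative sketch of the three checks joined with and):

```python
import re

def validate_key(key: str) -> bool:
    """Apply all three checks to an S3 object key: txt suffix,
    iTunes substring and a YYYY-MM-DD date in the filename."""
    filename = key.split("/")[-1]
    has_txt_suffix = filename.split(".")[-1].lower() == "txt"
    has_itunes = "iTunes" in filename
    has_date = re.search(r"\d{4}-\d{2}-\d{2}", filename) is not None
    return has_txt_suffix and has_itunes and has_date

print(validate_key("iTunes/iTunes-AllTunes-2025-02-01.txt"))  # True
print(validate_key("iTunes/itunes-2025-02-01.txt"))           # False: wrong case
```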

Final Workflows

Finally, let’s see what the workflows look like. Firstly, this workflow has separate Choice states for each JSONata expression:

stepfunctions graph Data Ingestion iTunes

Data-Ingestion-iTunes ASL on GitHub.

Next, this workflow has one Choice state for all JSONata expressions:

stepfunctions graph Data Ingestion iTunes all

Data-Ingestion-iTunes-All ASL on GitHub.

Testing

To ensure my low-code JSONata expressions work as expected, I ran several tests against different S3 object keys. These tests validate:

  • File Suffix (.txt)
  • Key Content (iTunes)
  • Date Format (YYYY-MM-DD)

Suffix Validation Tests

Test Case | S3 Key | Expected | Actual
Valid Suffix (.txt) | "iTunes/iTunes-2025-02-01.txt" | Proceed to iTunes Check | ✅ Success → Next: iTunes String Check
Invalid Suffix (.csv) | "iTunes/iTunes-2025-02-01.csv" | Fail (No further checks) | ❌ Failure → No further checks
Missing Suffix | "iTunes/iTunes-2025-02-01" | Fail (No further checks) | ❌ Failure → No further checks

Key Content Validation Tests

Test Case | S3 Key | Expected | Actual
Valid “iTunes” Key | "iTunes/iTunes-2025-02-01.txt" | Proceed to Date Check | ✅ Success → Next: Date Check
Incorrect Case (itunes instead of iTunes) | "iTunes/itunes-2025-02-01.txt" | Fail (No further checks) | ❌ Failure → No further checks
Missing Key String | "" | Fail (No further checks) | ❌ Failure → No further checks

Date Format Validation Tests

Test Case | S3 Key | Expected | Actual
Correct Date Format (YYYY-MM-DD) | "iTunes/iTunes-2025-02-01.txt" | Success (Validation complete) | ✅ Success → Validation complete!
Incorrect Date Format (Missing Day) | "iTunes/iTunes-2025-02.txt" | Fail (No further checks) | ❌ Failure → No further checks
Missing Date | "iTunes/iTunes.txt" | Fail (No further checks) | ❌ Failure → No further checks

Edge Case: Impossible Date

Test Case | S3 Key | Expected | Actual
⚠️ Impossible Date (2025-02-31) | "iTunes/iTunes-2025-02-31.txt" | Fail (Ideally) | Unexpected Success (JSONata does not validate real-world dates)

These tests confirm that JSONata expressions can effectively validate S3 object keys based on file suffixes, key contents and date formats. However, while JSONata can check formatting (YYYY-MM-DD) it does not validate real-world dates. If strict date validation is needed then an AWS Lambda function would be required.

Summary

In this post, I used JSONata to add low-code S3 object key validation to an AWS Step Functions state machine. This approach simplifies the validation process and reduces the reliance on more complex Lambda functions.

My first impressions of JSONata are very good! It’s already reduced both the number and size of Project Wolfie’s Lambda functions, and there’s still lots of JSONata to explore. In the meantime, these further videos by Eric Johnson explore more advanced JSONata Step Function applications:

If this post has been useful then the button below has links for contact, socials, projects and sessions:


Thanks for reading ~~^~~

Categories
Me

Project Wolfie: Honouring An Absent Friend With Music

In this post, I discuss our late German Shepherd Wolfie and outline a project that utilises music data in his memory.

Table of Contents

Introduction

On 24 January 2025, our German Shepherd Wolfie sadly passed away.


Wolfie was a long, fluffy boi with a big presence and a big mane. He enjoyed sniffing things, tilting his head and barking at foxes. He was loud, proud and bushy-browed.

Wolfie was also psychic. No one could leave the house without his knowledge, and no one could enter it without having a snout-first search.


Wolfie struggled with various health issues throughout his life, including eating difficulties, muscle problems and genetic defects. Despite this, Wolfie was a cherished family member for four years before he sadly lost his battle with kidney failure and suspected cancer.

Our walks often involved music, and I regularly exposed Wolfie to my music library. He grew so accustomed to iPods during walks that he would bark whenever I picked one up!


After he was gone, I found myself revisiting the songs we shared. Music became a way to cherish those memories, so I wanted to create something meaningful in his memory…

Project Wolfie

This section explores what Project Wolfie is, the music data it utilises and its goals.

Definition

This project has been on my mind for some time now, and I suppose this was the push it needed to take shape. Project Wolfie is a data-driven initiative that explores the patterns hidden in my music collection. It analyses track metadata, listening habits and technical attributes to find insights, trends and recommendations.

Here, Wolfie is short for:

Waveform Observations Library For Intelligence Engineering

Let’s break this down:

Waveform: A visual illustrating a track’s traits like timbre, pitch and dynamics. Time is represented on the horizontal axis, while the vertical axis reflects amplitude.

Here is a sample waveform:

Observations Library: A consolidated data repository containing information about my music’s properties and my listening habits. The data consists of various types, structures and formats, and will be stored, cleaned and enriched for further use.

For Intelligence Engineering: The AI and BI use cases for the observations library. Here, interactive data visualisation and machine learning services will use the data to uncover patterns, predict trends and generate personalised recommendations.

Data

Music files contain more than just sound – they hold layers of metadata that are crucial to Project Wolfie.

This section explores the different types of metadata related to my music collection, highlighting their functions and purposes. I have assigned these categories using my understanding and intended use of the data.

Technical Metadata

Technical Metadata refers to the measurable and technical attributes of a music file. It tends to include numerical values and audio properties, and is commonly found by analysing the track using applications like Audacity, foobar2000 and MixedInKey, as well as Python libraries like Librosa.

Examples include:

  • What is the track’s initial tempo and key?
  • What is the track’s duration, and how loud is it?
  • What are the track’s spectrographic and harmonic properties?

Descriptive Metadata

Descriptive Metadata refers to the contextual and identifying information about a music track. It tends to include text-based details and is commonly found both within the track’s properties and on websites like Beatport and Discogs.

Examples include:

  • Who produced the track, and what is it called?
  • What is the track’s genre?
  • Which label published the track, and when?

Interaction Metadata

Interaction Metadata refers to engagement and listening behaviours. It typically includes dates, integers and timestamps, and is commonly generated by digital music services like iTunes and Spotify.

Examples include:

  • When was the last time a track was played or skipped?
  • How many times has a track been played?
  • What rating has a track been assigned?
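One way to picture the three categories together is a single record per track. This sketch uses entirely hypothetical field names to show how the observations library might model them:

```python
# Hypothetical record combining the three metadata categories.
# All field names and values are illustrative, not Project Wolfie's schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TrackObservation:
    # Technical Metadata: measurable audio attributes
    bpm: float
    key: str
    duration_seconds: float
    # Descriptive Metadata: contextual, text-based details
    title: str
    artist: str
    genre: str
    # Interaction Metadata: engagement and listening behaviour
    play_count: int = 0
    rating: Optional[int] = None
    last_played: Optional[datetime] = None

track = TrackObservation(
    bpm=128.0, key="G minor", duration_seconds=372.0,
    title="Example Track", artist="Example Artist", genre="House",
    play_count=42, rating=5,
)
print(track.genre, track.play_count)
```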

Deliverables

Here are the objectives I’m pursuing in Project Wolfie. Given their complexity, they will be divided into multiple epics and spread out over an extended period.

Data Lakehouse

So far, I have discussed the importance, types, and applications of data. To this end, I need to fulfil a few requirements:

  • Ingesting and storing data from multiple sources.
  • Transforming and cleaning data at scale.
  • Enriching and aggregating data for analytics and consumption.

In short, I need a Data Lakehouse. I’ve written about them before and have followed the Medallion Architecture through bronze, silver and gold layers. For Project Wolfie and moving forward, I’ll be using the well-documented and supported AWS reference architecture:

I find this clearer and more regimented than the Medallion Architecture. It also aligns with the points made in Simon Whiteley‘s Advancing Analytics video, which I agree with.

Of course, a good Data Lakehouse isn’t possible without good data…

Quality & Observability

A Data Lakehouse’s effectiveness depends on data quality and observability. Project Wolfie must address factors like:

Veracity & Validation Checks: Verify data accuracy. Checks such as schema validation, null checks and data quality rules can identify issues early, stopping incorrect data from propagating downstream.

Anomaly Detection: Identify patterns often missed by validation like volume spikes and missing periods. Timely anomaly detection shields downstream resources from requiring remedial measures and lowers unforeseen cloud and developer expenses.

Lineage Tracking: Track the data’s journey from ingestion to consumption, documenting all transformations and processes. Vital for debugging, auditing and validation.

Governance & Security

A Data Lakehouse must balance accessibility and control. Governance and security protocols protect data while encouraging responsible usage.

I own all Project Wolfie data, so I have permission to process it. Additionally, there is no sensitive information or PII. However, there are other factors to consider:

Access Controls: Establish guidelines for who and what can access Project Wolfie resources. This safeguards data and services from unauthorised access, misuse and malicious activities.

Data Controls: Establish criteria for availability, backups, and structure. This aids in managing costs, ensuring disaster recovery, and maintaining schema consistency.

Monitoring & Logging: Track access patterns and record changes to data and infrastructure. This improves visibility into both potential threats and cost-related opportunities and vulnerabilities.

AI & BI Use Cases

Finally, I want to extract value and insights from Project Wolfie using Artificial Intelligence (AI) and Business Intelligence (BI). I have data from 2021 onwards from a music collection I started in the early 2000s, so I have lots to work with!

BI Use Cases (Dashboards, Analytics, Insights)

Listening Trends: Identify traits of my collection’s most frequently played and best-represented music. Analyse listening patterns over time to find trends.

Library Optimisation: Find rarely played tracks to add to playlists. Recognise songs that are often played and recommend alternatives for variety.

Distribution Analysis: Analyse my collection’s main genres, publishers and record labels, and investigate the connections between different elements (e.g., “The most popular tracks are typically in the 120-130 BPM range”). Create reports that show diversity and spread (e.g., “90% of house tracks are in five minor keys”).

AI Use Cases (Machine Learning, Automation, Predictions)

AI-Powered Personalised Playlists: Create playlists using the existing library based on properties like BPMs, keys and previous listening patterns, similar to Spotify Wrapped.

Smart Music Recommendations: Use collaborative filtering to suggest search criteria for new music based on my existing collection and listening habits (e.g., “Try G minor tracks at 128 BPM from the early 2010s”).

Predictive Analysis: Use Technical and Descriptive Metadata from new tracks to predict how they will be rated based on my existing library’s metadata (e.g., “This track has harmonic similarities to 70% of your highly rated tracks.”).

Summary

In this post, I discussed our late German Shepherd Wolfie and outlined a project that utilises music data in his memory.

Wolfie enjoyed scent games and retrieving toys, making Project Wolfie’s mission to find and return data and insights a fitting tribute. As the project evolves, I will strengthen its capabilities using new architectures and technologies, honouring Wolfie’s spirit – one track at a time.

Wolfie was more than just a pet; he was a companion and a guardian each day. I miss you big man. Take care out there.


If this post has been useful then the button below has links for contact, socials, projects and sessions:


Thanks for reading ~~^~~