Categories
Data & Analytics

Visualising FinOps With AWS Cost Management Dashboards

In this post, I explore how AWS Billing & Cost Management Dashboards streamline FinOps, monitor service costs and create clear, shareable visual narratives.

(Also, yes – amazonwebshark has been around since 2021 and I can’t believe this is the first time FinOps has been mentioned – Ed.)


Introduction

On 20 August 2025, AWS announced the general availability of Billing and Cost Management Dashboards. Before this, visualising costs meant flipping between various invoices and Cost Explorer windows. This new feature eliminates that. Users can now visualise and analyse AWS spending in a single view using custom dashboards, combining data from AWS Cost Explorer with Savings Plans and Reserved Instance coverage and utilisation reports.

As a long-time data professional and analytics geek, I loves me some graphs. And as someone who regularly uses Cost Explorer in various ways, I was keen to check this out. One thing led to another and this post emerged!

I’ll begin by examining the benefits of the new AWS Billing and Cost Management Dashboards and how to access them. Then I’ll build two dashboards – one in a standalone AWS account and another in my AWS Organisation Management account. Finally, I’ll examine how to share dashboards between AWS accounts.

Firstly though, I’ll be talking about FinOps, visualisation narratives and data storytelling in this post. These might be new concepts to some, so let’s start with some explainers.

About FinOps

FinOps is short for ‘Financial Operations’. It’s the practice of bringing together finance, engineering, and business teams to maximise the value of cloud spend. Instead of cloud bills being something only Finance is concerned with, FinOps makes cost awareness part of everyday decisions – both non-technical and technical.

FinOps isn’t just for accountants. Engineers can see how their services contribute to the monthly bill, Finance can track patterns and generate forecasts, and leadership gets high-level visuals they can view and share without needing to interact with spreadsheets and raw data. FinOps can also help during negotiations with service providers and cloud platforms, from SLAs to resource reservations.

Data Storytelling & Narratives

Visualisation narratives and data storytelling both focus on using charts and visuals to add context to raw data. They combine data, visuals and narrative to show both what is happening and why it matters. The goal is to create a unified message that moves from context to evidence to insight, rather than using isolated charts.

In cost management, this means structuring dashboards so visuals tell a story: a high-level view of overall spend, followed by the accounts or services driving these costs, and then the details that provide supporting evidence. This turns a dashboard into a coherent narrative that links costs to activity and business goals.

The value lies in this clarity. Narratives reduce noise, highlight what matters, and make cost information accessible to both technical and non-technical teams. They also reflect FinOps guidance on timely and accessible reporting, aligning with the AWS Well-Architected focus on continual optimisation.

Dashboard Benefits

AWS Billing and Cost Management Dashboards help support key industry-standard cost guidance. An example is the FinOps Foundation’s FinOps Principles, including:

  • Enabling teams and account owners to monitor and manage cloud spend without relying on external teams or tools.
  • Allowing centralised FinOps teams to highlight and promote key cost metrics consistently across the organisation.
  • Providing real-time updates, ensuring accuracy and constant access without requiring data team oversight.
  • Supporting collaboration between finance and technology teams to understand costs and their alignment with business goals.

AWS Billing and Cost Management Dashboards also align with the AWS Well-Architected Framework’s Cost Optimization Pillar goals. For example:

  • Encouraging active and ongoing management of cloud costs, rather than end-of-month reporting.
  • Increasing awareness of usage and expenditure to enable informed decisions.
  • Making it simple to identify resources or services that may not be cost-effective.
  • Revealing usage trends against demand to ensure resources scale appropriately without overspending.
  • Showing long-term patterns to validate optimisation efforts and drive continuous improvement.

Additionally, sharing dashboards from AWS Organization Management accounts means fewer people need direct access to the account itself, supporting security best practices. And because Billing and Cost Management Dashboards are free to use and require no knowledge of Amazon QuickSight, they come with almost no technical or financial overhead.

Creating Dashboards

The new AWS Billing & Cost Management Dashboards are accessible via Billing & Cost Management → Dashboards:

2025 08 24 AWSBIllingDashboardsNew

The Dashboards console then appears, showing all dashboards by default and a tab for shared dashboards:

2025 08 24 AWSBIllingDashboardsAll

Finally, this is the screen for adding widgets to a new dashboard:

2025 08 24 AWSBIllingDashboardsEmpty

There are two types of widgets:

  • Custom widgets for bespoke reporting needs.
  • Predefined widgets for common use cases. These can also be customised as needed.

These widget types are explained fully in the AWS Cost Management widget types documentation. Once a widget has been selected and positioned, it can be customised using the Cost Explorer UI and features, including filters, dimensions and granularity.

(Aside – there are several widgets aimed at Reservations and Savings Plans. I don’t really use these in my AWS accounts, so you won’t see them being used in this post – Ed)

AWS Cost Explorer and Cost Management Dashboards use the same billing data but serve different purposes. Cost Explorer is ideal for digging into details, while Dashboards focus on building clean, repeatable and shareable views that fit into reports or presentations for technical and non-technical stakeholders.

When creating or editing dashboards, time periods can be set at both the dashboard and widget levels:

  • Dashboard-level time periods apply temporarily to all widgets and reset when leaving or refreshing the dashboard.
  • Widget-level time periods are saved with each widget and persist until changed.

Single Account Dashboard

In this section, the focus is on building a dashboard for a standalone AWS account. Using a mix of predefined and custom widgets, it’s possible to track costs at both the service and API operation level, reveal usage patterns over time, and spot trends that may indicate opportunities for optimisation.

Monthly Service Costs

Let’s start with a predefined Monthly Costs By Service widget:

2025 08 24 SingleAccMonthlyService

This chart displays six months of service usage, with S3 accounting for the majority and showing a recent downward trend. While there are empty sections for services with very low or no cost, I’ve left the widget unfiltered in case that changes in future.

Daily Service Costs

Next, let’s include a predefined Daily Costs widget:

2025 08 24 SingleAccDaily

This is the default bar chart, and it doesn’t tell me much here. So let’s make some changes. This menu is available for all widgets:

2025 08 24 SingleOptions

Under Change Visualisation Type, there are options for a bar chart, line chart, stacked bar chart and table. Given that I want to track my cost trends over time, a line chart is best suited here.

The Daily Costs line graph looks like this:

2025 08 24 SingleAccDailyLine

This still isn’t great, as the regular spikes are off-putting. These spikes are a combination of Route 53 hosted zones and Tax. However, both costs are only debited once a month, on the 1st, so while the spikes make the chart look alarming, they’re entirely routine events.

Looking back at the earlier chart, the biggest cost by far is S3. Let’s adjust the graph to analyse that by updating the Service filter to only include S3. A quick update of the chart’s title and description produces this:

2025 08 24 SingleAccDailyLineS3

This is much more helpful! Easier to read and comprehend, and has a clear message and narrative. Daily S3 costs in this account were around 20¢ per day from February to May, then almost halved in June and were under 1¢ per day by the end of July.

Spotting sudden drops like this is useful, as it can flag lifecycle rules kicking in, data movement between storage classes or workload shifts. Equally, a steady rise can indicate the need for lifecycle policies or changes in access patterns.
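Checks like this can also run outside the console. As a quick illustration (plain Python over an exported daily cost series – not an AWS feature), here’s one way to flag any day whose cost falls well below its trailing average:

```python
def find_cost_drops(daily_costs, window=7, threshold=0.5):
    """Return the indices of days whose cost is below `threshold`
    times the average of the preceding `window` days."""
    drops = []
    for i in range(window, len(daily_costs)):
        trailing_avg = sum(daily_costs[i - window:i]) / window
        if trailing_avg > 0 and daily_costs[i] < trailing_avg * threshold:
            drops.append(i)
    return drops

# Steady ~20c/day, then a sudden fall to ~1c/day
costs = [0.20] * 7 + [0.01] * 3
print(find_cost_drops(costs))  # → [7, 8, 9]
```

The window and threshold values here are arbitrary starting points – tune them to the account’s own cost rhythm.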

API Operation Costs

Let’s go deeper into the account spend with a custom Cost widget grouped by the API Operation dimension:

2025 08 24 SingleAccCostsAPI

Urgh. Couple of problems here:

  • The chart’s narrative is hard to understand, as the bars are sorted by total expenditure across the chart’s entire time period. For example, StandardIAStorage is huge in May, barely there in June and gone entirely by July – yet it’s always the first bar because it has the biggest spend overall. Confused yet?
  • The legend confuses further. No Operation is actually Tax – correct within the context of the API Operation dimension, but no help to the chart’s story. And Others is no help at all.
  • Finally, that axis is no use. What was the cost of PutObject in May? And how does it compare to July? No idea.

Given that I want to examine individual API-level costs here, a table is a better choice. It provides precise totals with no need for axis interpretation, shows $0 spend as a value rather than an absent column, and removes the need to compress everything into a summarised, non-scrolling visual – eliminating the vague Others legend along the way.

Finally, let’s exclude Tax from the Service filter (yes, Tax is a service) and I get something far closer to what I want:

2025 08 24 SingleAccCostsAPITable

This dashboard allows me to track both monthly and daily spending, analyse costs by service and API operation, and identify any unusual spikes or drops. It simplifies monitoring trends, such as changes in S3 usage, and helps me pinpoint exactly where expenses are occurring. This way, I can quickly focus on areas that may require attention, turning detailed cost data into a clear and understandable overview of account activity.
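For anyone wanting this breakdown outside the console, the same numbers are available programmatically via the Cost Explorer API. Here’s a sketch of the request behind the table above – note that, unlike the dashboards, Cost Explorer API calls are charged per request, and the boto3 call at the end assumes credentials with ce:GetCostAndUsage:

```python
def monthly_costs_by_operation(start, end, exclude_services=("Tax",)):
    """Build a GetCostAndUsage request mirroring the API Operation table:
    monthly granularity, grouped by operation, Tax excluded."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "OPERATION"}],
        "Filter": {
            "Not": {"Dimensions": {"Key": "SERVICE", "Values": list(exclude_services)}}
        },
    }

params = monthly_costs_by_operation("2025-05-01", "2025-08-01")

# With credentials configured, the request would be sent as:
# import boto3
# response = boto3.client("ce").get_cost_and_usage(**params)
```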

AWS Organisations Dashboard

In this section, the focus shifts to dashboards in an AWS Organisation Management account. The goal is to track costs across multiple linked accounts, understand how AWS credits are being used, and monitor S3 Standard usage.

Note that the charts in this section appear slightly different, as they exclude my organisation’s AWS account legends. While the account names are fine to share, the legends also include AWS account IDs, which I consider sensitive and prefer not to share.

Linked Account Costs

Let’s start with a default predefined Monthly Costs By Linked Account widget:

2025 08 25 MultiLinked

Ok, so there’s a lot of empty space here. Although this organisation has existed for a while, it only began generating costs in May 2025. Additionally, the chart shows no costs for July because I applied my AWS Community Builder credits then; while those credits last, future months will look the same by default.

Let’s make this chart more useful by changing the date range from the last six months to the last three months and amending the Charge Type filter to exclude AWS credits, thereby showing my original spend:

2025 08 25 MultiLinkedAmended

As the July spend now dwarfs that of the other months, the axis makes the visual fairly useless. What’s July’s blue bar value? For that matter, what’s June’s green bar value? No idea at all.

Given that I want exact values, and that these values can be wildly different from month to month, this visual works far better as a table:

2025 08 25 MultiLinkedAmendedTable

The monthly spend for each member account is now far easier to see.

AWS Credits Usage

In the first chart I excluded my AWS credits to see my original spend. But it’d also be helpful to know more about my Community Builder credit usage. Am I burning through them quicker than anticipated? To what extent are the credits covering my AWS spend? And, given they’ll expire eventually, should I be bolder with my cloud spend to get the most out of my credits while I have them?

To visualise this, let’s make a custom Cost widget focusing on the Charge Type dimension:

2025 08 25 MultiCostType

This is already helpful but, like the first chart, a table is better here for precision and clarity:

2025 08 25 MultiCostTypeTable

And let’s update the widget’s title and description to communicate what is being shown:

2025 08 25 MultiCostTypeTableDescription

S3 Standard Usage

Finally, I want to create an early warning system. When storing objects in S3, the default is usually Standard. There’s nothing wrong with this, and S3 Standard is a good choice for short-lived data.

However, it’s also among the most expensive of the S3 storage classes, and if multiple accounts in my organisation are using S3 Standard when they don’t need to, then I’m neither following best practice nor am I well-architected.

So monitoring my organisation member accounts’ use of S3 Standard is a good idea. This will show when my S3 Standard utilisation is trending upwards, highlighting where to focus my optimisation efforts if they’re needed. I can do this using a Custom Usage widget, configured with a Usage Type Group of S3 Standard:

2025 08 25 MultiUsageS3

As this value is being tracked over time, a line chart is more suitable:

2025 08 25 MultiUsageS3Line

I experimented with changing the granularity from monthly to daily, but I wanted to keep this dashboard for monthly reporting – observability rather than alerting. Alerting would be better suited to a custom usage AWS Budget configured to monitor a daily S3 Storage: Standard usage type group.
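That alerting idea can be sketched as an AWS Budgets request. This builds the body for boto3’s create_budget – the 10 GB limit is a placeholder, and the usage type group string is my assumption of the console name, so verify it before use:

```python
def s3_standard_usage_budget(account_id, limit_gb="10"):
    """Request body for a daily S3 Standard usage budget
    (for the boto3 `budgets` client's create_budget call)."""
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": "s3-standard-daily-usage",
            "BudgetType": "USAGE",
            "TimeUnit": "DAILY",
            "BudgetLimit": {"Amount": limit_gb, "Unit": "GB"},
            # Assumed usage type group name - check it in the Budgets console
            "CostFilters": {"UsageTypeGroup": ["S3: Storage - Standard"]},
        },
    }

request = s3_standard_usage_budget("123456789012")
# With credentials: boto3.client("budgets").create_budget(**request)
```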

This trend can be further tracked by adjusting the dashboard’s date range. The visuals below show cost data from 01 July to 25 August, showing a downward S3 Standard usage trend and my August 2025 costs up to that point:

2025 08 25 MultiAugust

This multi-account dashboard allows me to track monthly spending across linked accounts. It provides insights into how AWS credits are being used to offset costs and helps me monitor trends in S3 Standard storage across the organisation. With this dashboard, I can easily identify which accounts are driving costs, understand how credits are applied, and pinpoint areas where S3 usage may need optimisation. It transforms multiple streams of raw billing data into a simple, cohesive view.

Sharing Dashboards

Once dashboards are created, they can be shared. Sharing allows teams, finance stakeholders, and other account holders to view or collaborate on dashboards without requiring direct access to the underlying AWS account. This makes it easier to align on costs, promote FinOps practices, and ensure visibility across the organisation.

Dashboards can be shared with accounts both within and outside of an AWS Organization:

2025 08 28 SelectRecip

Behind the scenes, both sharing options are handled by AWS Resource Access Manager (RAM). If an active AWS Organization exists, then the dropdown list is populated with the member accounts. Alternatively, account IDs can be entered manually.

While this view is the same whether AWS Organizations is enabled or not, accounts not in an AWS Organization will see an error when interacting with this list:

2025 08 24 SharingError

As accounts are selected, their access can be set as:

  • Can View: Recipients can view the dashboard but cannot make changes.
  • Can Edit: Recipients can view and modify the dashboard configuration.

The selection process is very flexible. A single sharing configuration can include both internal and external accounts, and can assign these accounts to either permission scope. Accounts are added to the Added Recipients section as they are selected, showing which accounts can access the dashboard and with what scope:

2025 08 28 AddedRecipients

These accounts then see the dashboard in the Shared With Me tab of their Billing & Cost Management console. Recipients can view the dashboard layout and widget configurations but not the owner’s underlying data – the cost data each recipient sees is determined by their own IAM permissions.

Sharing dashboards enables collaboration among teams and finance stakeholders, offering visibility into costs while eliminating the need for direct account access.

Summary

In this post, I explored how AWS Billing & Cost Management Dashboards streamline FinOps, monitor service costs and create clear, shareable visual narratives.

As demonstrated, I’m already using this feature and am a very happy customer! I love how simple and expressive it is, and especially appreciate not having to manage any backend ETLs or pipelines. I am 100% the type of user this feature was built for, and it delivers exactly what I need to monitor, understand and communicate AWS costs across accounts with minimal effort.

I’ve got a few wishlist items. Exporting daily dashboard snapshots via SNS to Slack or email would be useful. This is a PowerBI feature that would work well here, especially since the data wouldn’t be shared – only a snapshot of the dashboard and a link to the resource. Support for CloudFormation and CDK would also make adoption and repeatability easier.

AWS Billing & Cost Management Dashboards make it simpler to build cost narratives, share insights, and track usage without the overhead of QuickSight or third-party tools. They are available at no additional cost in all AWS commercial regions.

Like this post? Click the button below for links to contact, socials, projects and sessions:

SharkLinkButton 1

Thanks for reading ~~^~~

Categories
Training & Community

Exploring The AWS Free Tier Changes

AWS has announced changes to its Free Tier. In this post, learn what’s changed, what’s included, and what it means for new and returning users.


Introduction

Out of the blue on 11 July, AWS announced fundamental changes to the AWS free tier:

AWS accounts launched before 15 July retain their current free tier duration, allowances and terms and conditions. Accounts created after this date choose between two options – a Paid Plan and a Free Plan.

For those familiar with AWS, the Paid Plan resembles the AWS we know and are used to. This plan is designed for production applications, grants access to all AWS services and features, and provides payment options like pay-as-you-go and savings plans.

2025 07 15 AWSConsoleSignupPaid

The new Paid Plan also includes the existing always-free services.

Then there’s the Free Plan:

2025 07 15 AWSConsoleSignupFree

The free plan also includes the always-free services alongside some entirely new aspects, so let’s take a closer look at its main features.

Major Changes

This section examines the main features of the new AWS Free Plan.

Credits

Where previously new users had free tier allowances on several services, they now receive $100 USD in AWS credits at signup.

A further $100 USD in credits can be earned by completing activities using foundational AWS services. This includes launching an EC2 instance, creating an AWS Lambda-backed web app and, brilliantly, setting up an AWS Budgets cost budget! Incentivising this for new AWS users is long overdue.

Free plan credits expire 12 months after the date of issuance. However, this doesn’t equate to having twelve months of account access…

Account Expiry

With the previous free tier offering, accounts remained open after the free tier period ended. Now there’s an in-built expiry:

Your free plan expires the earlier of (1) 6 months from the date you opened your AWS account, or (2) once you have exhausted your Free Tier credits.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier-FAQ.html

When a free plan expires, the account will close automatically and access to current resources and data will be lost. AWS retains the data for 90 days after the free plan’s expiry, after which it will be entirely erased.

Retrieval after this point is possible, but requires an upgrade to a paid plan to reopen the account. Note that this isn’t automatic – users must consent to being charged as part of the upgrade process.

The expiration date, credit balance, and remaining days of a free tier account can be monitored through the Cost and Usage widget in the AWS Management Console Home, or programmatically using the AWS SDK and command line at no cost via the GetAccountPlanState API. AWS will also send periodic email alerts regarding credit balances and the end of the free plan period.

Service Restrictions

Where previously a new account could use most AWS offerings immediately, free plan accounts now have some limitations. This is the AWS rationale:

Additionally, free account plans don’t have access to certain AWS services that would rapidly consume the entire AWS Free Tier credit amount, or hardware purchases.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier.html

There’s roughly a 50/50 eligibility split of the AWS service catalogue, with some interesting choices that I’ll go into…

New User Considerations

This section examines considerations of the AWS free tier changes for beginners with no prior AWS experience.

Usage-Linked Closure Is Good…

The new Free Plan stops one of the tales as old as time, where new AWS users join up, try out all their shiny new toys and then get spiked by a massive bill. Or their access keys are exposed and stolen, creating a massive bill. Or they spin up an EC2 instance outside of the free tier and get a massive bill. And so on.

Well now, the user only spends their credits. And when the credits are used up, the account closes. The user loses their free plan, but they don’t lose the shirt off their back. Nor do they have to go to AWS cap in hand.

This also addresses another common concern: “I forgot my account was open, and now it’s been hacked!” Not anymore – accounts will close automatically after six months. This feature also helps limit financial damage from DDoS attacks, exposed credentials and similar risks.

Sounds great, right?

…But Isn’t Infallible

There are circumstances where having account closure linked to a credit balance is less desirable:

  • A user builds something that explodes in popularity.
  • Online attackers deliberately target an account.
  • A user misconfigures a resource.

These circumstances, and others, will quickly eat through the credits and trigger the account’s closure. What would happen in this situation is currently unclear – would AWS hit the brakes immediately? Is there a grace period of any sort? Either way, observability and monitoring are vital – the budget alert is a great start, and CloudWatch is included in the Free Plan.

Potential Credits Confusion

Finally, there may be confusion between the free plan credits, which expire after twelve months, and the free plan itself, which expires after six. My interpretation is that free users upgrading to a paid plan after six months will be able to continue using any remaining credits for the following six months.

I feel that some new users will see their account expiry coming up while their credits have over six months remaining, assume the account expiry is wrong and then be surprised when their account shuts. It sounds like AWS will make this as obvious as possible to account owners. I guess we’ll find out on Reddit in six months…
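To make the interplay concrete, here’s a small illustration of the expiry rule (the dates are hypothetical, and the six months is approximated as 182 days):

```python
from datetime import date, timedelta
from typing import Optional

def free_plan_end(opened: date, credits_exhausted: Optional[date]) -> date:
    """The free plan ends at the earlier of ~6 months after opening
    or the day the credits run out."""
    six_months = opened + timedelta(days=182)  # rough 6-month approximation
    if credits_exhausted is None:
        return six_months
    return min(six_months, credits_exhausted)

# Credits outlast the plan: the account still closes at the 6-month mark,
# even though the credits themselves remain valid for 12 months
print(free_plan_end(date(2025, 8, 1), None))  # → 2026-01-30
```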

Experienced User Considerations

This section discusses the AWS free tier changes for users with prior AWS experience.

Free Tier Policing

I’ve already seen this ruffle some Internet feathers.

Traditionally, AWS were fairly flexible with new accounts. While officially only one email address can be associated with an account, AWS kinda ignored plus addressing. This allowed users to have multiple free tier accounts, and to start a new account when the free tier on their existing one expired.

Well not any more! AWS make it very clear in their FAQs:

“You would be ineligible for free plan or Free Tier credits if you have an existing AWS account, or had one in the past. The free plan and Free Tier credits are available only to new AWS customers.”

https://aws.amazon.com/free/free-tier-faqs/

Now, if a user has an existing account and tries to make a new one, even with plus addressing, they will see this message at the end of the process:

2025 07 15 AWSConsoleNotEligable

No doubt there are parts of the Internet that will find ways around this. I haven’t pursued it personally as I was only interested in checking the restrictions of certain services. AWS themselves don’t have this problem of course, and have their own blog post about the Free Tier update with various screenshots and explanations.

Speaking of restrictions…

Unusual Service (In)Eligibility Choices

This section is based on the original Excel sheet given by AWS in July 2025 and may be subject to change – Ed

As mentioned earlier, AWS now limit the available services on their Free plan:

Free account plans don’t have access to certain AWS services that would rapidly consume the entire AWS Free Tier credit amount, or hardware purchases.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier.html

That said, there are some unusual choices here regarding services that are and aren’t eligible for the free plan.

Firstly, Glue is enabled, but Athena isn’t. So new users can create Glue resources, but can’t interact with them using Athena. I’m confused by this – for Athena to be costly, it usually requires querying data in the TB range that a new AWS account simply wouldn’t contain. Nor does it need specialised hardware. AWS even credits Athena with “Simple and predictable pricing” on its feature page, so why the Free Plan exclusion?

Also confusingly, CodeBuild and CodePipeline are eligible, but CodeDeploy isn’t. Can’t say I understand the logic behind this either!

Other exclusions make more sense. S3 is eligible, but Glacier services aren’t. Fair enough – Glacier is for long-lived storage, while free plans have six-month limits. Presumably, S3 Intelligent Tiering also excludes Glacier on the Free Plan.

Elsewhere, EC2 is eligible but I’ve not been able to check how limited the offering is. Trawling Reddit suggests only the t3.micro instance is available, but if this isn’t the case then many instance types exist that could rapidly burn through $200.

ec2 free limits

AWS CloudHSM is also eligible, with average costs around $1.50 per instance per hour. That’s about $36 per day, burning through $100 in under three days – somewhat contradicting AWS’s reasoning for the limitations. And while users could be frugal with it, these are new users likely using AWS for the first time.

There’s a list of Free Plan eligible services, but it’s not easy to browse.

Immediate Credit Expiry

Finally, new users should be aware that certain actions immediately forfeit free tier credits. Most notably:

When your account joins AWS Organizations or sets up an AWS Control Tower landing zone, your AWS Free Tier credits expire immediately and your account will not be eligible to earn more AWS Free Tier credits.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier-FAQ.html

Now, these are hardly services that a new user would need. However, an organisation or educational body would want to bear this in mind if they were encouraging staff or students to try AWS out. The free accounts must remain under the ownership of individual users. Any attempt to bring them into an existing AWS Organisation will kill their free tier!

Separately, this simplifies things for those of us already using Organisations or Control Tower – accounts created using these services will immediately be on the paid plan with no usage restrictions.

Summary

This blog post focused on the recent changes to AWS’s Free Tier, which allows new users to select either a Paid Plan or a Free Plan. It highlighted the main modifications made, specified which services were included or excluded, and considered the impact of these changes on both novice and seasoned users.

Overall, I see this as a positive change. The AWS Free Tier offering has been divisive for some time, and these changes go a long way towards softening many of its rough edges. While not everyone will get what they want, these changes greatly help to address the concerns and challenges faced by newbies in the past.

New users of AWS in 2025 should consider the same advice as in years prior:

  • Security first, always.
  • Check the cost of services before spinning them up.
  • Turn unused services off.
  • And finally, don’t forget to set that budget alarm!
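On that last point, a budget alert can even be scripted. This sketch builds the request body for boto3’s create_budget – the $10 limit and email address are placeholders to replace:

```python
def starter_cost_budget(account_id, email, limit_usd="10"):
    """Monthly cost budget that emails when actual spend passes 80%
    (for the boto3 `budgets` client's create_budget call)."""
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": "starter-monthly-cost",
            "BudgetType": "COST",
            "TimeUnit": "MONTHLY",
            "BudgetLimit": {"Amount": limit_usd, "Unit": "USD"},
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
            }
        ],
    }

request = starter_cost_budget("123456789012", "me@example.com")
# With credentials: boto3.client("budgets").create_budget(**request)
```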

New users can sign up for an AWS Free Plan at aws.amazon.com/free.

2025 07 15 AWSFreeTierStart


Categories
Developing & Application Integration

Simplified Data Workflows With AWS Step Functions Variables

In this post, I use AWS Step Functions variables and JSONata to create a simplified API data capture workflow with Lambda and DynamoDB.


Introduction

I’ve become an AWS Step Functions convert in recent times. Back in 2020 when I first studied it for some AWS certifications, Step Functions defined workflows entirely in JSON, making it less approachable and often overlooked.

How times change! With 2021’s inclusion of a visual editor, Step Functions became far more accessible, helping it become a key tool in serverless application design. And in 2024 two major updates significantly enhanced Step Functions’ flexibility: JSONata support, which I recently explored, and built-in variables, which simplify state transitions and data management. This post focuses on the latter.

To demonstrate the power of Step Functions variables, I’ll walk through a practical example: fetching API data, verifying the response, and inserting it into DynamoDB. Firstly, I’ll examine the services and features I’ll use. Then I’ll create a state machine and examine each state’s use of variables. Finally, I’ll complete some test executions to ensure everything works as expected.

If a ‘simplified’ workflow seems hard to justify as a 20-minute read…that’s fair. But mastering Step Functions variables now can save hours of debugging and development in the long run! – Ed

Also, special thanks to AWS Community Builder Md. Mostafa Al Mahmud for generously providing AWS credits to support this and future posts!

Architecture

This section provides a top-level view of the architecture behind my simplified Step Functions variables workflow, highlighting the main AWS services involved in getting and processing API data. I’ll briefly cover the data being used, the role of Step Functions variables and the integration of DynamoDB within the workflow.

API Data

The data comes from a RESTful API that provides UK car details. The API needs both an authentication key and query parameters. Response data is provided in JSON.

The data used in this post is about my car. As some of it is sensitive, I will only use data that is already publicly available:

JSON
{
    "make": "FORD",
    "yearOfManufacture": 2014,
    "engineCapacity": 1242,
    "co2Emissions": 120,
    "fuelType": "PETROL",
    "markedForExport": false,
    "colour": "GREY"
}

There are several data types here. This will be important when writing to DynamoDB!
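Why does it matter? DynamoDB’s low-level API wraps every attribute in a type descriptor (S for strings, N for numbers, BOOL for booleans and so on). boto3’s TypeSerializer handles this properly; the sketch below just shows how the values above would map:

```python
def to_dynamodb(value):
    """Wrap a Python value in a DynamoDB attribute-value type descriptor."""
    if isinstance(value, bool):           # must check bool before int!
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}          # DynamoDB numbers travel as strings
    if isinstance(value, str):
        return {"S": value}
    raise TypeError(f"unhandled type: {type(value)}")

item = {k: to_dynamodb(v) for k, v in {
    "make": "FORD",
    "yearOfManufacture": 2014,
    "markedForExport": False,
}.items()}
print(item["yearOfManufacture"])  # → {'N': '2014'}
```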

AWS Step Functions Variables

In my last post, I talked about JSONata in AWS Step Functions. This time let’s talk about Step Functions variables, which were introduced alongside JSONata in November 2024.

Step Functions variables offer a simple way to store and reuse data within a state machine, enabling dynamic workflows without complex transformations. They work well with both JSONata and JSONPath and are available at no extra cost in all AWS regions that support Step Functions.

Variables are set using Assign. They can hold static, fixed values:

JSON
"Assign": {
    "productName": "product1",
    "count" : 42,
    "available" : true
}

They can also hold dynamic values that change at runtime. To set a variable dynamically, Step Functions evaluates JSONata expressions within {% ... %}. The following example extracts productName and available from the state input using the JSONata $states reserved variable:

JSON
"Assign": {
    "product": "{% $states.input.productName %}",
    "available": "{% $states.input.available %}"
}

Variables are then referenced using dollar signs ($), e.g. $productName.
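As a minimal sketch (with hypothetical state names, assuming a JSONata-mode state machine), a variable assigned in one state can be referenced in any state that follows it:

```json
{
  "Set Product": {
    "Type": "Pass",
    "Assign": { "productName": "product1" },
    "Next": "Use Product"
  },
  "Use Product": {
    "Type": "Pass",
    "Output": "{% 'Selected: ' & $productName %}",
    "End": true
  }
}
```

Here the second state never sees the first state's output directly; it reads $productName from the variable store instead.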

There’s tonnes more to this. For details on name syntax, ASL integration and creating JSONPath variables, check the Step Functions Developer Guide variables section. Additionally, watch AWS Principal Developer Advocate Eric Johnson’s related video:

With Step Functions variables handling data transformation and persistence, the next step is storing processed data efficiently. This is where Amazon DynamoDB comes in.

Amazon DynamoDB

DynamoDB is a fully managed NoSQL database built for high performance and seamless scalability. Its flexible, schema-less design makes it perfect for storing and retrieving JSON-like data with minimal overhead.

DynamoDB can automatically scale to manage millions of requests per second while maintaining low latency. It integrates seamlessly with AWS services like Lambda and API Gateway, providing built-in security, automated backups, and global replication to ensure reliability at any scale.

Popular use cases include:

  • Serverless backends (paired with AWS Lambda/API Gateway) for API-driven apps.
  • Real-time workloads like user sessions, shopping carts, or live leaderboards.
  • High-velocity data streams from IoT devices or clickstream analytics.

Diagram

Finally, here is an architectural diagram of my simplified Step Functions variables workflow:

In which:

  1. The user triggers an AWS Step Functions state machine with a JSON key-value pair as input.
  2. A Lambda function is invoked with the input payload.
  3. The Lambda function sends a POST request to a third-party API.
  4. The API server responds with JSON data.
  5. The state machine assigns Step Functions variables to store API response values, then enters a Choice state that checks the API response code.
  6. If the Choice state condition fails, SNS publishes a failure notification email.
  7. The state machine terminates with an ExecutionFailed status.
  8. If the Choice state condition passes, the processed API response data is written to a DynamoDB table.
  9. SNS publishes a success notification email.
  10. The state machine terminates with an ExecutionSucceeded status.

If an error occurs at any point in execution, SNS publishes a failure notification email and the state machine terminates with an ExecutionFailed status.

Resources

In this section, I create and configure my simplified Step Functions variables workflow resources, which are:

  • AWS Lambda function
  • AWS Step Functions Choice state
  • Amazon DynamoDB table
  • Amazon SNS topic

AWS Lambda

This section details how I created an AWS Lambda function to interact with a REST API. The function makes a secure POST request, manages potential errors and parses the response for use in downstream states. Additionally, I will discuss an issue I encountered while attempting to create variables from the function’s output using JSONata.

Lambda Function

First, I need to write a function. It’s nothing special – I have the API link and security key I need, and I just want to retrieve some data. So I decided to give Amazon Q Developer a try!

I used this prompt:

I have an HTTPS hyperlink for a REST API and an API key. I want to send a POST request to the API using the key. I have a USERID parameter that I will pass as JSON.

Please write a Python function for AWS Lambda with error handling. Please use security best practices and Pythonic standards.

Yes I do say Please in my prompts.

Amazon Q produced a decent script with:

  • API endpoint and key stored as environment variables.
  • Use of the requests module with a timeout to prevent indefinite hangs.
  • Error handling for request failures, validation errors, and unexpected issues.
  • A structured JSON response including the status code and parsed response.

Python
import json
import os
import logging
import requests  # not bundled with the Lambda Python runtime; deploy via a layer or package dependency
from typing import Dict, Any

# Configure logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    """
    AWS Lambda function to make POST request to API endpoint
    Args:
        event: Lambda event object containing request data
        context: Lambda context object
    Returns:
        API response as dictionary
    """
    try:
        # Get API endpoint and key from environment variables
        api_endpoint = os.environ['API_ENDPOINT']
        api_key = os.environ['API_KEY']

        # Extract user ID from event
        user_id = event.get('userId')
        if not user_id:
            raise ValueError("userId is required in the request")

        # Set request headers
        headers = {
            'Content-Type': 'application/json',
            'x-api-key': api_key
        }

        # Prepare request payload
        payload = {
            'userId': user_id
        }

        # Make POST request
        response = requests.post(
            api_endpoint,
            headers=headers,
            json=payload,
            timeout=30  # Set timeout for the request
        )

        # Raise exception for bad status codes
        response.raise_for_status()

        return {
            'statusCode': response.status_code,
            'body': response.json()
        }

    except requests.exceptions.RequestException as e:
        logger.error(f"API request failed: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Failed to make API request'})
        }

    except ValueError as e:
        logger.error(f"Validation error: {str(e)}")
        return {
            'statusCode': 400,
            'body': json.dumps({'error': str(e)})
        }

    except Exception as e:
        logger.error(f"Unexpected error: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal server error'})
        }

It needed some tweaks for my purposes, but was still faster than typing it all out manually!

Step Functions Config

The Lambda: Invoke action defaults to using the state input as the payload, so "Payload": "{% $states.input %}" is generated automatically:

JSON
    "Lambda Invoke": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Output": "{% $states.result.Payload %}",
      "Arguments": {
        "FunctionName": "[LAMBDA_ARN]:$LATEST",
        "Payload": "{% $states.input %}"
      },
      "Next": "Check API Status Code"
    }

This is going to be helpful in the next section!

Step Functions manages retries and error handling. If my Lambda function fails, it will retry up to three times with exponential backoff before sending a failure notification through SNS:

JSON
    "Lambda Invoke": {
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException",
            "Lambda.TooManyRequestsException"
          ],
          "IntervalSeconds": 1,
          "MaxAttempts": 3,
          "BackoffRate": 2,
          "JitterStrategy": "FULL"
        }
      ],
      "Next": "Check API Status Code",
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "Next": "SNS Publish: Fail"
        }
      ]
    }

Next, let’s talk about the function’s outputs.

Outputs & JSONata Variables

The Lambda function returns a nested JSON structure. Here’s a redacted example of it:

JSON
{
  "output": {
    "ExecutedVersion": "$LATEST",
    "Payload": {
      "statusCode": 200,
      "body": {
        "make": "FORD",
        "yearOfManufacture": 2014,
        "engineCapacity": 1242,
        "co2Emissions": 120,
        "fuelType": "PETROL",
        "markedForExport": false,
        "colour": "GREY"
      }
    },
    "SdkHttpMetadata": {
      "AllHttpHeaders": {
        "REDACTED": "REDACTED"
      },
      "HttpHeaders": {
        "REDACTED": "REDACTED"
      },
      "HttpStatusCode": 200
    },
    "SdkResponseMetadata": {
      "REDACTED": "REDACTED"
    },
    "StatusCode": 200
  }
}

Earlier, I mentioned Lambda: Invoke’s default Payload setting. This default creates a {% $states.result.Payload %} JSONata expression output that I can use to assign variables for downstream states.

In this example, {% $states.result.Payload %} returns this:

JSON
{
  "Payload": {
      "statusCode": 200,
      "body": {
        "make": "FORD",
        "yearOfManufacture": 2014,
        "engineCapacity": 1242,
        "co2Emissions": 120,
        "fuelType": "PETROL",
        "markedForExport": false,
        "colour": "GREY"
      }
    }
}

Let’s make a variable for statusCode. In the response, statusCode is a property of Payload:

JSON
{
  "Payload": {
      "statusCode": 200
    }
}

In JSONata this is expressed as {% $states.result.Payload.statusCode %}. Then I can assign the JSONata expression to a statusCode variable via JSON. In the AWS console, I do this via:

JSON
{
  "statusCode": "{% $states.result.Payload.statusCode %}"
}

And in Step Functions ASL via:

JSON
"Assign": {"statusCode": "{% $states.result.Payload.statusCode %}"}

I can then call this variable using $statusCode. Here, this will return 200.

Next, let’s make a make variable. This is slightly more involved as make is a property of body, which is itself a property of Payload:

JSON
{
  "Payload": {
      "body": {
        "make": "FORD"
      }
    }
}

So this time I need:

JSON
CONSOLE:
"make": "{% $states.result.Payload.body.make %}"

ASL:
"Assign": {"make": "{% $states.result.Payload.body.make %}"}

And now $make will return "FORD".
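In plain Python terms (a sketch using hypothetical sample data), these JSONata paths behave like ordinary dictionary traversal:

```python
# Hypothetical sample of the Lambda result, trimmed to the relevant keys
result = {
    "Payload": {
        "statusCode": 200,
        "body": {"make": "FORD"},
    }
}

# {% $states.result.Payload.statusCode %} ~ one level down
status_code = result["Payload"]["statusCode"]

# {% $states.result.Payload.body.make %} ~ two levels down
make = result["Payload"]["body"]["make"]

print(status_code, make)  # 200 FORD
```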

So let’s do the other values:

JSON
"Assign": {
    "statusCode": "{% $states.result.Payload.statusCode %}",
    "make": "{% $states.result.Payload.body.make %}",
    "yearOfManufacture": "{% $string($states.result.Payload.body.yearOfManufacture) %}",
    "engineCapacity": "{% $string($states.result.Payload.body.engineCapacity) %}",
    "co2Emissions": "{% $string($states.result.Payload.body.co2Emissions) %}",
    "fuelType": "{% $states.result.Payload.body.fuelType %}",
    "markedForExport": "{% $states.result.Payload.body.markedForExport %}",
    "colour": "{% $states.result.Payload.body.colour %}"
}

Note that variables returning numbers from the response body like yearOfManufacture have an additional $string JSONata expression. I’ll explain the reason for this in the DynamoDB section.

Lambda Issues

When I first started using Step Functions variables, I used a different Lambda function for the API call and kept getting this error:

An error occurred.

The JSONata expression '$states.input.body.make' specified for the field 'Assign/make' returned nothing (undefined).

After getting myself confused, I checked the function’s return statement and found this:

Python
return {
    'statusCode': response.status_code,
    'body': response.text
}

Here, response.text returns the response body as a JSON-formatted string rather than as a nested dictionary:

Plaintext
{
  "statusCode": 200,
  "body": "{\"make\":\"FORD\",\"yearOfManufacture\":2014,\"engineCapacity\":1242,\"co2Emissions\":120,\"fuelType\":\"PETROL\",\"markedForExport\":false,\"colour\":\"GREY\"}"
}

That string isn’t compatible with dot notation: $states.input.body matches the whole string, but $states.input.body.make can’t match anything because a string can’t be traversed. Nothing is returned, hence the error.

Using response.json() fixes this, as the response is now correctly structured for JSONata expressions:

Python
return {
    'statusCode': response.status_code,
    'body': response.json()
}
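The difference between the two return shapes is easy to reproduce in plain Python (hypothetical values, trimmed to two fields):

```python
import json

# response.text shape: body is a JSON-formatted string
broken = {"statusCode": 200,
          "body": '{"make": "FORD", "yearOfManufacture": 2014}'}
# broken["body"]["make"] raises TypeError - a string can't be traversed,
# just as $states.input.body.make returns nothing in JSONata

# response.json() shape: body is a nested structure that CAN be traversed
fixed = {"statusCode": 200, "body": json.loads(broken["body"])}
print(fixed["body"]["make"])  # FORD
```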

Choice State

The Choice state here is very similar to a previous one. This Choice state checks the Lambda function’s API response and routes accordingly.

Here, the Choice state uses the JSONata expression {% $statusCode = 200 %} to check the $statusCode variable value. By default, it will transition to the SNS Publish: Fail state. However, if $statusCode equals 200, then the Choice state will transition to the DynamoDB PutItem state instead:

JSON
    "Check API Status Code": {
      "Type": "Choice",
      "Choices": [
        {
          "Next": "DynamoDB PutItem",
          "Condition": "{% $statusCode = 200 %}"
        }
      ],
      "Default": "SNS Publish: Fail"
    }

This step prevents silent failures by ensuring unsuccessful API responses trigger an SNS notification instead of proceeding to DynamoDB. It also helps maintain data integrity by isolating success and failure paths, and ensuring only valid responses are saved in DynamoDB.

So now I’ve captured the data and confirmed its integrity. Next, let’s store it somewhere!

Amazon DynamoDB

It’s time to think about storing the API data. Enter DynamoDB! This section covers creating a table, writing data and integrating DynamoDB with AWS Step Functions and JSONata. I’ll share key lessons learned, especially about handling data types correctly.

Let’s start by creating a table.

Creating A Table

Before inserting data into DynamoDB, I need to create a table. Since DynamoDB is a schemaless database, all that is required to create a new table is a table name and a primary key. Naming the table is straightforward, so let’s focus on the key.

DynamoDB has two types of key:

  • Partition key (required): Part of the table’s primary key. Its value is hashed to determine which partition stores the item, distributing data across hosts for scalability and availability.
  • Sort key (optional): The second part of a table’s primary key. The sort key enables sorting or searching among all items sharing the same partition key.

Let’s look at an example using a Login table. In this table, the user ID serves as the partition key, while the login date acts as the sort key. This structure enables efficient lookups and sorting, allowing quick retrieval of a user’s login history while minimising operational overhead.

To use a physical analogy, consider the DynamoDB table as a filing cabinet, the Partition key as a drawer, and the Sort key as a folder. If I wanted to retrieve User 123‘s logins for 2025, I would:

  • Access the Logins filing cabinet (DynamoDB table).
  • Find User 123’s drawer (Partition Key).
  • Get User 123’s 2025 folder (Sort Key).

DynamoDB provides many features beyond those discussed here. For the latest features, please refer to the Amazon DynamoDB Developer Guide.
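Assuming boto3 and the hypothetical Login table above, the table definition might be sketched like this (the client call itself is commented out, as it needs AWS credentials):

```python
# Table spec for the hypothetical Login example: UserID partitions the
# data (HASH) and LoginDate sorts items within each partition (RANGE).
table_spec = {
    "TableName": "Login",
    "KeySchema": [
        {"AttributeName": "UserID", "KeyType": "HASH"},      # partition key
        {"AttributeName": "LoginDate", "KeyType": "RANGE"},  # sort key
    ],
    # Only key attributes need declaring up front - the rest is schemaless
    "AttributeDefinitions": [
        {"AttributeName": "UserID", "AttributeType": "S"},
        {"AttributeName": "LoginDate", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
}

# import boto3
# boto3.client("dynamodb").create_table(**table_spec)
```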

Writing Data

So now I have a table, how do I put data in it?

DynamoDB offers several ways to write data, and a common one is PutItem. This lets me insert or replace an item in my table. Here’s a basic example of adding a login event to a UserLogins table:

JSON
{
    "TableName": "UserLogins",
    "Item": {
        "UserID": { "S": "123" },
        "LoginDate": { "S": "2025-02-25T12:00:00Z" },
        "Device": { "S": "Laptop" }
    }
}

Here:

  • TableName specifies the name of the DynamoDB table where the item will be stored.
  • Item represents the data being inserted into the table. It contains key-value pairs, where the attributes (e.g. UserID) are mapped to their corresponding data types (e.g. "S") and values (e.g. "123").
  • UserID is an attribute in the item being inserted.
  • "S" is a data type descriptor, ensuring that DynamoDB knows how to store and index it.
  • "123" is the value assigned to the UserID attribute.

While DynamoDB is NoSQL, it still enforces strict data types and naming rules to ensure consistency. These are detailed in the DynamoDB Developer Guide, but here’s a quick rundown of supported data types as of March 2025:

  • S – String
  • N – Number
  • B – Binary
  • BOOL – Boolean
  • NULL – Null
  • M – Map
  • L – List
  • SS – String Set
  • NS – Number Set
  • BS – Binary Set
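To illustrate these descriptors, here’s a small hedged helper (not an official AWS API – boto3 ships its own TypeSerializer for this) mapping Python scalars to DynamoDB attribute-value form:

```python
def to_dynamodb_attr(value):
    """Map a Python scalar to its DynamoDB attribute-value form."""
    if isinstance(value, bool):            # check bool before int: True is an int!
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}           # numbers travel as strings
    if isinstance(value, str):
        return {"S": value}
    if value is None:
        return {"NULL": True}
    raise TypeError(f"Unsupported type: {type(value).__name__}")

print(to_dynamodb_attr("FORD"))  # {'S': 'FORD'}
print(to_dynamodb_attr(2014))    # {'N': '2014'}
print(to_dynamodb_attr(False))   # {'BOOL': False}
```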

Step Functions Config

So how do I apply this to Step Functions? Well, remember when I set variables in the output of the Lambda function? Step Functions lets me reference those variables here.

Here’s how I store a make attribute in DynamoDB, using my $make variable in a JSONata expression:

JSON
{
    "TableName": "REDACTED",
    "Item": {
        "make": { "S": "{% $make %}" }
    }
}

This is equivalent to:

JSON
{
    "TableName": "REDACTED",
    "Item": {
        "make": { "S": "FORD" }
    }
}

Using JSONata, I can dynamically inject values during execution instead of hardcoding them.

Now let’s add a yearOfManufacture attribute:

JSON
{
    "TableName": "REDACTED",
    "Item": {
        "make": { "S": "{% $make %}" },
        "yearOfManufacture": { "N": "{% $yearOfManufacture %}" }
    }
}

This pattern continues for my other attributes:

JSON
{
  "TableName": "REDACTED",
  "Item": {
    "make": {
      "S": "{% $make %}"
    },
    "yearOfManufacture": {
      "N": "{% $yearOfManufacture %}"
    },
    "engineCapacity": {
      "N": "{% $engineCapacity %}"
    },
    "co2Emissions": {
      "N": "{% $co2Emissions %}"
    },
    "fuelType": {
      "S": "{% $fuelType %}"
    },
    "markedForExport": {
      "BOOL": "{% $markedForExport %}"
    },
    "colour": {
      "S": "{% $colour %}"
    }
  }
}

All this is then passed as an Argument to the DynamoDB: PutItem action in the state machine’s ASL:

JSON
    "DynamoDB PutItem": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:putItem",
      "Arguments": {
        "TableName": "REDACTED",
        "Item": {
          "make": {
            "S": "{% $make %}"
          },
          "yearOfManufacture": {
            "N": "{% $yearOfManufacture %}"
          },
          "engineCapacity": {
            "N": "{% $engineCapacity %}"
          },
          "co2Emissions": {
            "N": "{% $co2Emissions %}"
          },
          "fuelType": {
            "S": "{% $fuelType %}"
          },
          "markedForExport": {
            "BOOL": "{% $markedForExport %}"
          },
          "colour": {
            "S": "{% $colour %}"
          }
        }
      }
    }

Finally, DynamoDB: PutItem gets the same retry and error handling as Lambda: Invoke.

So I got all this working first time, right? Well…

DynamoDB Issues

During my first attempts, I got this error:

An error occurred while executing the state 'DynamoDB PutItem'.

The Parameters '{"TableName":"REDACTED","Item":{"make":{"S":"FORD"},"yearOfManufacture":{"N":2014}}}' could not be used to start the Task:

[The value for the field 'N' must be a STRING]

Ok. Not the first time I’ve seen data type problems. I’ll just change the yearOfManufacture data type to "S" (string) and try again…

An error occurred while executing the state 'DynamoDB PutItem'.

The Parameters '{"TableName":"REDACTED","Item":{"make":{"S":"FORD"},"yearOfManufacture":{"S":2014}}}' could not be used to start the Task:

[The value for the field 'S' must be a STRING]

DynamoDB rejected both approaches (╯°□°)╯︵ ┻━┻

The issue wasn’t the data type, but how it was formatted. DynamoDB’s JSON representation passes number values as strings (to preserve precision across languages and libraries), so even numeric attributes must be wrapped in quotes.

In the case of yearOfManufacture, where I was providing 2014:

Plaintext
"yearOfManufacture": {"N": 2014}

DynamoDB needed "2014":

Plaintext
"yearOfManufacture": {"N": "2014"}

Thankfully, JSONata came to the rescue again! Remember the $string function from the Lambda section? Well, $string casts the given argument to a string!

So this:

JSON
"yearOfManufacture": "{% $states.result.Payload.body.yearOfManufacture %}"

> 2014

Becomes this:

JSON
"yearOfManufacture": "{% $string($states.result.Payload.body.yearOfManufacture) %}"

> "2014"

This solved the problem with no Lambda function changes or additional states!

Amazon SNS

After successfully writing data to DynamoDB, I want to include a confirmation step by sending a notification through Amazon SNS.

While this approach is not recommended for high-volume use cases because of potential costs and notification fatigue, it can be helpful for testing, monitoring, and debugging. Additionally, it offers an opportunity to reuse variables from previous states and dynamically format a message using JSONata.

The goal is to send an email notification like this:

A 2014 GREY FORD has been added to DynamoDB on (current date and time)

To do this, I’ll use:

  • $yearOfManufacture for the vehicle’s year (2014)
  • $colour for the vehicle’s colour (GREY)
  • $make for the manufacturer (FORD)

Plus the JSONata $now() function for the current date and time. This generates a UTC timestamp in ISO 8601-compatible format and returns it as a string. E.g. "2025-02-25T19:12:59.152Z"
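For comparison, the same shape of timestamp can be produced in plain Python (a sketch, not part of the workflow):

```python
from datetime import datetime, timezone

# JSONata's $now() returns an ISO 8601 UTC timestamp such as
# "2025-02-25T19:12:59.152Z"; this produces the equivalent string
now = (datetime.now(timezone.utc)
       .isoformat(timespec="milliseconds")
       .replace("+00:00", "Z"))
print(now)
```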

So the code will look something like:

A $yearOfManufacture $colour $make has been added to DynamoDB on $now()

Which translates to this JSONata expression:

Plaintext
{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}

Let’s analyse each part of the JSONata expression to understand how it builds the final message:

Plaintext
{%

  'A '
& 
  $yearOfManufacture 
& 
  ' ' 
& 
  $colour 
& 
  ' ' 
& 
  $make 
& 
  ' has been added to DynamoDB on ' 
& 
  $now() 
  
%}

Each part of this expression plays a specific role:

  • 'A ' | ' has been added to DynamoDB on ': Static strings.
  • $yearOfManufacture | $colour | $make: Dynamic values.
  • $now(): JSONata function.
  • ' ': Static spaces to separate JSONata variable outputs.

The static spaces are important! Without them, I’d get this:

2014GREYFORD

Instead of the expected:

2014 GREY FORD
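The same spacing behaviour is easy to check in plain Python:

```python
parts = ["2014", "GREY", "FORD"]

print("".join(parts))   # 2014GREYFORD - no separators, values run together
print(" ".join(parts))  # 2014 GREY FORD - spaces restore readability
```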

This JSONata expression is passed as the Message argument in the SNS:Publish action, ensuring the notification contains the correctly formatted message:

JSON
"Message": "{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}"

Finally, to integrate this with Step Functions it is included in the SNS Publish: Success task ASL:

JSON
"SNS Publish: Success": {
    "Type": "Task",
    "Resource": "arn:aws:states:::sns:publish",
    "Arguments": {
      "Message": "{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}",
      "TopicArn": "arn:aws:sns:REDACTED:success-stepfunction"
    }
}

Final Workflow

Finally, let’s see what the workflows look like. Here’s the workflow graph:

stepfunctions graph

And here’s the workflow ASL on GitHub.

Testing

In this section, I run some test executions against my simplified Step Functions workflow and check the variables. I’ll test four requests – two valid and two invalid.

Valid Request: Ford

Firstly, what happens when a valid API request is made and everything works as expected?

The Step Functions execution succeeds:

stepfunctions graph testsuccess

Each state completes successfully:

2025 02 26 StateViewSuccess

My DynamoDB table now contains one item:

2025 02 26 DyDBTable1

I receive a confirmation email from SNS:

2025 02 26 SNSSuccessFord

If I send the same request again, the existing DynamoDB item is overwritten because the primary key remains the same.

Valid Request: Audi

Next, what happens if I make a valid request for a different car? The steps repeat as above, and my DynamoDB table now has two items:

2025 02 26 DyDBTable2

And I get a different email:

2025 02 26 SNSSuccessAudi

Invalid Request

Next, what happens if the car in my request doesn’t exist? Well, it does fail, but in an unexpected way:

stepfunctions graphfail

The API returns an error response:

JSON
"Payload": {
      "statusCode": 500,
      "body": "{\"error\": \"API request failed: 400 Client Error: Bad Request for url\"}"
    }

I’d expected the response to be passed to the Choice state, which would then notice the 500 status code and start the Fail process. But this happened instead:

2025 02 26 StateViewFail

The failure occurs during variable assignment on the Lambda Invoke state! It attempts to assign a yearOfManufacture value from the API response body to a variable, but since there is no response body the assignment fails:

JSON
{
  "cause": "An error occurred while executing the state 'Lambda Invoke' (entered at the event id #2). The JSONata expression '$states.result.Payload.body.yearOfManufacture ' specified for the field 'Assign/yearOfManufacture ' returned nothing (undefined).",
  "error": "States.QueryEvaluationError",
  "location": "Assign/registrationNumber",
  "state": "Lambda Invoke"
}

I also get an email, but this one is less fancy as it just dumps the whole output:

2025 02 26 SNSFail

So I still get my Fail outcome – just not in the expected way. Despite this, the Choice state remains valuable for preventing invalid data from entering DynamoDB.

No Request

Finally, what happens if no data is passed to the state machine at all?

Actually, this situation is very similar to the invalid request! There’s a different error message in the log:

JSON
"Payload": {
      "statusCode": 400,
      "body": "{\"error\": \"Registration number not provided\"}"
    }

But otherwise it’s the same events and outcome. The Lambda variable assignment fails, triggering an SNS email and an ExecutionFailed result.

Cost Analysis

This section examines the costs of my simplified Step Functions variables workflow. It’s brief, since every service used falls within the AWS Free Tier! For transparency, here are my billing metrics for the month. These are account-wide, and I’m still nowhere near paying AWS anything!

DynamoDB:

  • $0.1415 per million read request units (EU (Ireland)): 30.5 ReadRequestUnits
  • $0.705 per million write request units (EU (Ireland)): 13 WriteRequestUnits

Lambda:

  • AWS Lambda – Compute Free Tier – 400,000 GB-Seconds – EU (Ireland): 76.219 Second
  • AWS Lambda – Requests Free Tier – 1,000,000 Requests – EU (Ireland): 110 Request

SNS:

  • First 1,000 Amazon SNS Email/Email-JSON Notifications per month are free: 19 Notifications
  • First 1,000,000 Amazon SNS API Requests per month are free: 289 Requests

Step Functions:

  • $0 for first 4,000 state transitions: 431 StateTransitions

This experiment demonstrates how cost-effective Step Functions can be. As long as my usage remains within the Free Tier, I pay nothing! If my workflow grows, I’ll monitor costs and optimise accordingly.

Summary

In this post, I used AWS Step Functions variables and JSONata to create a simplified API data capture workflow with Lambda and DynamoDB.

With a background in SQL and Python, I’m no stranger to variables, and I love that they’re now a native part of Step Functions. AWS keeps enhancing Step Functions every few months, making it more powerful and versatile. The introduction of variables unlocks new possibilities for data manipulation, serverless applications and event-driven workflows, and I’m excited to explore them further in the coming months!

For a visual walkthrough of Step Functions variables and JSONata, check out this Serverless Office Hours episode with AWS Principal Developer Advocates Eric Johnson and Julian Wood:

If this post has been useful then the button below has links for contact, socials, projects and sessions:


Thanks for reading ~~^~~