Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully-featured services from data centres globally.
For those familiar with AWS, the Paid Plan resembles the AWS we know and are used to. This plan is designed for production applications, grants access to all AWS services and features, and provides payment options like pay-as-you-go and savings plans.
The new Paid Plan also includes the existing always-free services, such as:
When a free plan expires, the account will close automatically and access to current resources and data will be lost. AWS retains the data for 90 days after the free plan’s expiry, after which it will be entirely erased.
Retrieval during this 90-day window is possible, but requires an upgrade to a paid plan to reopen the account. Note that this isn’t automatic – users must consent to being charged as part of the upgrade process.
The expiration date, credit balance, and remaining days of a free tier account can be monitored through the Cost and Usage widget in the AWS Management Console Home, or programmatically using the AWS SDK and command line at no cost via the GetAccountPlanState API. AWS will also send periodic email alerts regarding credit balances and the end of the free plan period.
Service Restrictions
Where previously a new account could use most AWS offerings immediately, free plan accounts now have some limitations. This is the AWS rationale:
Additionally, free account plans don’t have access to certain AWS services that would rapidly consume the entire AWS Free Tier credit amount, or hardware purchases.
There’s roughly a 50/50 eligibility split of the AWS service catalogue, with some interesting choices that I’ll go into…
New User Considerations
This section examines considerations of the AWS free tier changes for beginners with no prior AWS experience.
Usage-Linked Closure Is Good…
The new Free Plan stops one of the tales as old as time, where new AWS users join up, try out all their shiny new toys and then get spiked by a massive bill. Or their access keys are exposed and stolen, creating a massive bill. Or they spin up an EC2 instance outside of the free tier and get a massive bill. And so on.
Well now, the user only spends their credits. And when the credits are used up, the account closes. The user loses their free plan, but they don’t lose the shirt off their back. Nor do they have to go to AWS cap in hand.
This also addresses another common concern: “I forgot my account was open, and now it’s been hacked!” Not anymore – accounts will close automatically after six months. This feature also helps limit financial damage from DDoS attacks, exposed credentials and similar risks.
Sounds great, right?
…But Isn’t Infallible
There are circumstances where having account closure linked to a credit balance is less desirable:
A user builds something that explodes in popularity.
Online attackers deliberately target an account.
A user misconfigures a resource.
These circumstances, and others, will quickly eat through the credits and trigger the account’s closure. What would happen in this situation is currently unclear – would AWS hit the brakes immediately? Is there a grace period of any sort? Either way, observability and monitoring are vital – the budget alert is a great start, and CloudWatch is included in the Free Plan.
Potential Credits Confusion
Finally, there may be some confusion between the free plan credits that expire in twelve months and the free plan that expires in six months. My interpretation is that free users upgrading to a paid plan after six months will be able to continue using any remaining credits for the following six months.
I feel that some new users will see their account expiry coming up while their credits have over six months remaining, assume the account expiry is wrong and then be surprised when their account shuts. It sounds like AWS will make this as obvious as possible to account owners. I guess we’ll find out on Reddit in six months…
Experienced User Considerations
This section discusses the AWS free tier changes for users with prior AWS experience.
Free Tier Policing
I’ve already seen this ruffle some Internet feathers.
Traditionally, AWS were fairly flexible with new accounts. While officially only one email address can be associated with an account, AWS kinda ignored plus addressing. This allowed users to have multiple free tier accounts, and to start a new account when the free tier on their existing one expired.
Well not any more! AWS make it very clear in their FAQs:
“You would be ineligible for free plan or Free Tier credits if you have an existing AWS account, or had one in the past. The free plan and Free Tier credits are available only to new AWS customers.”
Now, if a user has an existing account and tries to make a new one, even with plus addressing, they will see this message at the end of the process:
No doubt there are parts of the Internet that will find ways around this. I haven’t pursued it personally as I was only interested in checking the restrictions of certain services. AWS themselves don’t have this problem of course, and have their own blog post about the Free Tier update with various screenshots and explanations.
Speaking of restrictions…
Unusual Service (In)Eligibility Choices
This section is based on the original Excel sheet given by AWS in July 2025 and may be subject to change – Ed
As mentioned earlier, AWS now limit the available services on their Free plan:
Free account plans don’t have access to certain AWS services that would rapidly consume the entire AWS Free Tier credit amount, or hardware purchases.
That said, there are some unusual choices here regarding services that are and aren’t eligible for the free plan.
Firstly, Glue is enabled, but Athena isn’t. So new users can create Glue resources, but can’t interact with them using Athena. I’m confused by this – for Athena to be costly, it usually requires querying data in the TB range that a new AWS account simply wouldn’t contain. Nor does it need specialised hardware. AWS even credits Athena with “Simple and predictable pricing” on its feature page, so why the Free Plan exclusion?
Also confusingly, CodeBuild and CodePipeline are eligible, but CodeDeploy isn’t. Can’t say I understand the logic behind this either!
Other exclusions make more sense. S3 is eligible, but Glacier services aren’t. Fair enough – Glacier is for long-lived storage, while free plans have six-month limits. Presumably, S3 Intelligent Tiering also excludes Glacier on the Free Plan.
Elsewhere, EC2 is eligible but I’ve not been able to check how limited the offering is. Trawling Reddit suggests only the t3.micro instance is available, but if this isn’t the case then many instance types exist that could rapidly burn through $200.
AWS CloudHSM is also eligible, with average costs around $1.50 per instance per hour. That’s roughly $36 per day, or over $100 in three days – somewhat contradicting AWS’s reasoning for the limitations. And while users could be frugal with it, these are new users who are likely using AWS for the first time.
Finally, new users should be aware that certain actions immediately forfeit free tier credits. Most notably:
When your account joins AWS Organizations or sets up an AWS Control Tower landing zone, your AWS Free Tier credits expire immediately and your account will not be eligible to earn more AWS Free Tier credits.
Now, these are hardly services that a new user would need. However, an organisation or educational body would want to bear this in mind if they were encouraging staff or students to try AWS out. The free accounts must remain under the ownership of individual users. Any attempt to bring them into an existing AWS Organisation will kill their free tier!
Separately, this simplifies things for those of us already using Organisations or Control Tower – accounts created using these services will immediately be on the paid plan with no usage restrictions.
Summary
This blog post focused on the recent changes to AWS’s Free Tier, which allows new users to select either a Paid Plan or a Free Plan. It highlighted the main modifications made, specified which services were included or excluded, and considered the impact of these changes on both novice and seasoned users.
Overall, I see this as a positive change. The AWS Free Tier offering has been divisive for some time, and these changes go a long way towards softening many of its rough edges. While not everyone will get what they want, these changes greatly help to address the concerns and challenges faced by newbies in the past.
New users of AWS in 2025 should consider the same advice as in years prior:
Security first, always.
Check the cost of services before spinning them up.
Turn unused services off.
And finally, don’t forget to set that budget alarm!
Data validation is a crucial component of any data project. It ensures that data is accurate, consistent and reliable. It verifies that data meets set criteria and rules to maintain its quality, and stops erroneous or unreliable information from entering downstream systems. I’ve written about it, scripted it and talked about it.
Validation will be a crucial aspect of Project Wolfie. It is an ongoing process that should occur from data ingestion to exposure, and should be automated wherever possible. Thankfully, most data processes within Project Wolfie are (and will be) built using Python, which provides several libraries to simplify data validation. These include Pandera, Great Expectations and the focus of this post – Pydantic (specifically, version 2).
Firstly, I’ll explore the purpose and benefits of Pydantic. Next, I’ll import some iTunes data and use it to explore key Pydantic validation concepts. Finally, I’ll explore how Pydantic handles observability and test its findings. The complete code will be in a GitHub repo.
Let’s begin!
Introducing Pydantic
This section introduces Pydantic and examines some of its benefits.
About Pydantic
Pydantic is an open-source data validation Python library. It uses established Python notation and constructs to define data structures, types and constraints. These can then validate the provided data, generating clear error messages when issues occur.
Pydantic is a widely used tool for managing application settings, validating API requests and responses, and streamlining data transfer between Python objects and formats like JSON. By integrating both existing and custom elements, it offers a powerful and Pythonic method for ensuring data quality and consistency within projects. This makes data handling in Python more reliable and reduces the likelihood of errors through its intuitive definition and validation processes.
Pydantic Benefits
Pydantic’s benefits are thoroughly documented, and the ones I want to highlight here are:
Intuitive: Pydantic’s use of type hints, functions and classes fits well with my current Python skill level, so I can focus on learning Pydantic without also having to explore unfamiliar Python concepts.
Fast: Pydantic’s core validation logic is written in Rust, which enables rapid development, testing, and validation. This speed has contributed towards…
Before I can start using Pydantic, I need some data. This section examines the data I am using and how I prepare it for Pydantic.
iTunes Data
Firstly, let’s extract some data from iTunes. I create iTunes Export files using the iTunes > Export Playlist command. Apple has documented this, but WikiHow’s documentation is more illustrative. The export file type choices are…interesting. The one closest to matching my needs is the txt format, although the files are technically tab-separated files (TSVs).
iTunes Exports contain many metadata columns. I’m not including them all here (after all, this is a Pydantic post not an iTunes one), but I will be using the following subset (using my existing metadata definitions):
Note that the starred Album and Track Number columns have purposes that differ from the column names. The reasons for this are…not ideal.
Track Number contains BPM data as, although iTunes does have a BPM column, it isn’t included in the exports. And the exports can’t be customised! To include BPMs in an export, I had to repurpose an existing column.
Great. But that’s not as bad as…
Album contains musical keys, as iTunes doesn’t even have a key column, despite MP3s having a native Initial Key metadata field! Approaches to dealing with this vary – I chose to use another donor column. I’ll explain Camelot Notations later on.
That’s enough about the iTunes data for now – I’ll go into more detail in future Project Wolfie posts. Now let’s focus on getting this data into memory for Python.
Data Capture
Next, let’s get the iTunes data into memory. Starting with a familiar library…
pandas
I’ll be using pandas to ingest the iTunes data. This is a well-established and widely supported module. It also has its own data validation functions and will assist with issues like handling spaces in column names.
While iTunes files aren’t CSVs, the pandas read_csv function can still read their data into a DataFrame. It needs some help though – the delimiter parameter must be set to \t so pandas treats the tabs as delimiters.
So let’s read the iTunes metadata into memory and…
Python
df = pd.read_csv(csv_path, delimiter='\t')

>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Oh. pandas can’t read the file. The error says it’s trying the utf-8 codec, so the export must be using something else. Fortunately, there’s another Python library that can help!
charset_normalizer
charset_normalizer is an open-source encoding detector. It determines the encoding of a file or text and records the result. It’s related to the older chardet library but is faster, has a more permissive MIT license and supports more encodings.
Here, I’m using charset_normalizer.detect in a detect_file_encoding function to detect the export’s codec:
I define a detect_file_encoding function that expects a filepath and returns a string.
detect_file_encoding opens the file, reads the data and stores it as raw_data.
charset_normalizer detects raw_data‘s codec and stores this as detection_result.
detect_file_encoding returns either the successfully detected codec, or the common utf-8 codec if the attempt fails.
I can then pass the export’s filepath to the detect_file_encoding function, capture the results as encoding and pass this as a parameter to pandas.read_csv:
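Here’s a minimal sketch of how that fits together, based on the steps above (the exact function body is a reconstruction rather than the original code):

Python
from pathlib import Path

import charset_normalizer
import pandas as pd


def detect_file_encoding(file_path: Path) -> str:
    """Detect a file's encoding, falling back to utf-8 if detection fails."""
    with open(file_path, "rb") as f:
        raw_data = f.read()

    detection_result = charset_normalizer.detect(raw_data)

    # Return the detected codec, or the common utf-8 codec if detection fails
    return detection_result.get("encoding") or "utf-8"


encoding = detect_file_encoding(csv_path)  # csv_path is the iTunes export's filepath
df = pd.read_csv(csv_path, delimiter="\t", encoding=encoding)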
There’s one more action to take before moving on. Some columns contain spaces. This will become a problem as spaces are not allowed in Python identifiers!
As the data is now in a pandas DataFrame, I can use pandas.DataFrame.rename to remove these spaces:
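A small sketch of that rename – assuming the affected columns are Track Number and My Rating, inferred from the TrackNumber and MyRating field names used later:

Python
# Remove spaces so the column names are valid Python identifiers for Pydantic
df = df.rename(columns={
    "Track Number": "TrackNumber",
    "My Rating": "MyRating",
})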
In this section, I tell Pydantic about my data model and the types of data it should expect for validation.
Introducing BaseModel
At the core of Pydantic is the BaseModel class – used for defining data models. Every Pydantic model inherits from it, and by doing so gains features like type enforcement, automatic data parsing and built-in validation.
By subclassing BaseModel, a schema for the data is defined using standard Python type hints. Pydantic uses these hints to validate and convert input data automatically.
Let’s explore BaseModel by creating a new Track class.
Creating A Track Class
Pydantic supports standard library types like string and integer. This reduces Pydantic’s learning curve and simplifies integration into existing Python processes.
Here are the very beginnings of my Track data model. I have a new Track class inheriting from Pydantic’s BaseModel, and a Name field with string data type:
Python
class Track(BaseModel):
    Name: str
Next, I add a Year field with integer data type:
Python
class Track(BaseModel):
    Name: str
    Year: int
And so on for each field I want to validate with Pydantic:
Python
class Track(BaseModel):
    Name: str
    Artist: str
    Album: str
    Work: str
    Genre: str
    TrackNumber: int
    Year: int
    MyRating: int
    Location: str
Now, if any field is missing or has the wrong type, Pydantic will raise a ValidationError. But there’s far more to Pydantic data types than this…
Defining Special Data Types
Where no standards exist or where validation rules are more complex to determine, Pydantic offers further type coverage. These include:
One of my Track fields will immediately benefit from this:
Python
class Track(BaseModel):
    Location: str
Currently, my Location field validation is highly permissive. It will accept any string. I can improve this using Pydantic’s FilePath data type:
Python
class Track(BaseModel):
    Location: FilePath
Now, Pydantic will check that the given location is a path that exists and links to a valid file. No custom code; no for loops – the FilePath type handles everything for me.
So I now have data type validation in my Pydantic data model. What else can I have?
Pydantic Built-In Validation
This section explores the native data validation features of Pydantic, including field annotation and constraints.
Introducing Field
In Pydantic models, data attributes are typically defined using Python type hints. The Field function enables further customisation like constraints, schema metadata and default values.
While type hints define what kind of data is allowed, Field defines how that data should behave, what happens if it’s missing and how it should be documented. It adds clarity to models and helps Pydantic enforce stricter rules.
Let’s run through some examples.
Custom Schema Metadata
One of the challenges in creating data pipelines is that the data fields can sometimes be unclear or difficult to explain. This can cause confusion and delay when building ETLs, examining repos and interacting with code.
Field helps here by adding custom fields to annotate data within Pydantic classes. Examples include description:
Python
class Track(BaseModel):
    Name: str = Field(description="Track's name and mix.")
And examples:
Python
class Track(BaseModel):
    Name: str = Field(
        description="Track's name and mix.",
        examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"]
    )
Using these throughout my Track class simplifies the code and reduces context switching:
Python
class Track(BaseModel):
    Name: str = Field(
        description="Track's name and mix.",
        examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"]
    )
    Artist: str = Field(
        description="The artist(s) of the track.",
        examples=["Above & Beyond", "Armin van Buuren"]
    )
    Album: str = Field(
        description="Track's Camelot Notation indicating the key.",
        examples=["01A-Abm", "02B-GbM"]
    )
    Work: str = Field(
        description="The record label that published the track.",
        examples=["Armada Music", "Anjunabeats"]
    )
    Genre: str = Field(
        description="Track's musical genre.",
        examples=["Trance", "Progressive House"]
    )
    TrackNumber: int = Field(
        description="Track's BPM (Beats Per Minute).",
        examples=[130, 140]
    )
    Year: int = Field(
        description="Track's release year.",
        examples=[1998, 2004]
    )
    MyRating: int = Field(
        description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100",
        examples=[60, 80]
    )
    Location: FilePath = Field(
        description="Track's Location on the filesystem.",
        examples=[r"C:\Users\User\Music\iTunes\TranquilityBase-GettingAway-OriginalMix.mp3"]
    )
This is especially useful for Album and TrackNumber given their unique properties.
Field Constraints
Field can also constrain the data that a class accepts. This includes string constraints:
max_length: Maximum length of the string.
min_length: Minimum length of the string.
pattern: A regular expression that the string must match.
ge & le – greater than or equal to/less than or equal to
gt & lt – greater/less than
multiple_of – multiple of a given number
Constraints can also be combined as needed. For example, iTunes exports record MyRating values in increments of 20, where 1 star is 20 and 2 stars are 40, rising to the maximum 5 stars being 100.
I can express this within the Track class as:
Python
class Track(BaseModel):
    MyRating: int = Field(
        description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100",
        examples=[60, 80],
        ge=20,
        le=100,
        multiple_of=20
    )
Here, MyRating must be greater than or equal to 20 (ge=20), less than or equal to 100 (le=100), and must be a multiple of 20 (multiple_of=20).
I can also parameterise these constraints using variables instead of hard-coded values:
Python
ITUNES_RATING_RAW_LOWEST = 20
ITUNES_RATING_RAW_HIGHEST = 100


class Track(BaseModel):
    MyRating: int = Field(
        description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100",
        examples=[60, 80],
        ge=ITUNES_RATING_RAW_LOWEST,
        le=ITUNES_RATING_RAW_HIGHEST,
        multiple_of=20
    )
This property lets me use Pydantic with other Python libraries. Here, my Year validation checks for years greater than or equal to 1970 and less than or equal to the current year (using the datetime library):
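For illustration, that looks roughly like this (the constant names match those used in the full Track class below):

Python
from datetime import datetime

from pydantic import BaseModel, Field

YEAR_EARLIEST = 1970
YEAR_CURRENT = datetime.now().year  # re-evaluated each run, so the ceiling moves with time


class Track(BaseModel):
    Year: int = Field(
        description="Track's release year.",
        examples=[1998, 2004],
        ge=YEAR_EARLIEST,
        le=YEAR_CURRENT
    )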
No track in the collection should exist beyond the current year – this constraint will now update itself as time passes.
Having applied other constraints, my Track class looks like this:
Python
class Track(BaseModel):
    """Pydantic model for validating iTunes track metadata."""

    Name: str = Field(
        description="Track's name and mix type.",
        examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"]
    )
    Artist: str = Field(
        description="The artist(s) of the track.",
        examples=["Above & Beyond", "Armin van Buuren"]
    )
    Album: str = Field(
        description="Track's Camelot Notation indicating the key.",
        examples=["01A-Abm", "02B-GbM"]
    )
    Work: str = Field(
        description="The record label that published the track.",
        examples=["Armada Music", "Anjunabeats"]
    )
    Genre: str = Field(
        description="Track's musical genre.",
        examples=["Trance", "Progressive House"]
    )
    TrackNumber: int = Field(
        description="Track's BPM (Beats Per Minute).",
        examples=[130, 140],
        ge=BPM_LOWEST,
        le=BPM_HIGHEST
    )
    Year: int = Field(
        description="Track's release year.",
        examples=[1998, 2004],
        ge=YEAR_EARLIEST,
        le=YEAR_CURRENT
    )
    MyRating: int = Field(
        description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100",
        examples=[60, 80],
        ge=ITUNES_RATING_RAW_LOWEST,
        le=ITUNES_RATING_RAW_HIGHEST,
        multiple_of=20
    )
    Location: FilePath = Field(
        description="Track's Location on the filesystem.",
        examples=[r"C:\Users\User\Music\iTunes\AboveAndBeyond-AloneTonight-OriginalMix.mp3"]
    )
This is already very helpful. Next, let’s examine my custom requirements.
Pydantic Custom Validation
This section discusses how to create custom data validation using Pydantic. I will outline what the requirements are, and then examine how these validations are defined and implemented.
Introducing Decorators
In Python, decorators modify or enhance the behaviour of functions or methods without changing their actual code. Decorators are usually written using the @ symbol followed by the decorator name, just above the function definition:
Python
@my_decorator
def my_function():
    ...
For example, consider this logger_decorator function:
Python
def logger_decorator(func):
    def wrapper():
        print(f"Running {func.__name__}...")
        func()  # Execute the supplied function
        print("Done!")
    return wrapper
This function takes another function (func) as an argument, printing a message before and after execution. If the logger_decorator function is then used as a decorator when running this greet function:
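For example, a minimal greet function matching the output shown below:

Python
@logger_decorator
def greet():
    print("Hello, world!")

greet()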
Python will add the logging behaviour of logger_decorator without modifying greet:
Python
Running greet...
Hello, world!
Done!
Introducing Field Validators
In addition to the built-in data validation capabilities of Pydantic, custom validators with more specific rules can be defined for individual fields using Field Validators. These use the field_validator() decorator, and are declared as class methods within a class inheriting from Pydantic’s BaseModel.
Here’s a basic example using my Track model:
Python
class Track(BaseModel):
    Name: str = Field(
        description="Track's name and mix.",
        examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"]
    )

    @field_validator("Name")
    @classmethod
    def validate_name(cls, value):
        # custom validation logic here
        return value
Where:
@field_validator("Name") tells Pydantic to use the function to validate the Name field.
@classmethod lets the validator access the Track class (cls).
The validator executes the validate_name function with the field value (in this case Name) as input, performs the checks and must either:
return the validated value, or
raise a ValueError or TypeError if validation fails.
Let’s see this in action.
Null Checks
Firstly, let’s perform a common data validation check by identifying empty fields. I have two variants of this – one for strings and another for numbers.
The first – validate_non_empty_string – uses pandas.isna to catch missing values and strip() to catch empty strings. This field validator applies to the Artist, Work and Genre columns:
Python
@field_validator("Artist", "Work", "Genre")
@classmethod
def validate_non_empty_string(cls, value, info):
    """Validate that a string field is not empty."""
    if pd.isna(value) or str(value).strip() == "":
        raise ValueError(f"{info.field_name} must not be null or empty")
    return value
The second – validate_non_null_numeric – checks the TrackNumber, Year and MyRating numeric columns for empty values using pandas.isna:
Python
@field_validator("TrackNumber", "Year", "MyRating", mode="before")
@classmethod
def validate_non_null_numeric(cls, value, info):
    """Validate that a numeric field is not null."""
    if pd.isna(value):
        raise ValueError(f"{info.field_name} must not be null")
    return value
Also, it uses Pydantic’s before validator (mode="before"), ensuring the data validation happens before Pydantic coerces types. This catches edge cases like "" or "NaN" before they become None or float("nan") values.
Character Check
Now let’s create a validator for something a little more challenging to define. All tracks in my collection follow a Track Name (Mix) schema. This can take many forms:
Original track: Getting Away (Original Mix)
Remixed track: Shapes (Oliver Smith Remix)
Updated remixed track: Distant Planet (Menno de Jong Interpretation) (2020 Remaster)
…and many more variants.
But generally, there should be at least one instance of text enclosed by parentheses. However, some tracks have no remixer and are released with just a title:
Getting Away
Shapes
Distant Planet
This not only looks untidy (eww!), but also breaks some of my downstream automation that expects the Track Name (Mix) schema. So any track without a remixer gets (Original Mix) added to the Name field upon download:
Getting Away (Original Mix)
Shapes (Original Mix)
Distant Planet (Original Mix)
Expressing this is possible with RegEx, but I can make a more straightforward and more understandable check with a field validator:
Python
@field_validator("Name")
@classmethod
def validate_name(cls, value):
    if pd.isna(value) or str(value).strip() == "":
        raise ValueError("Name must not be null or empty")

    value_str = str(value)

    if '(' not in value_str:
        raise ValueError("Name must contain an opening parenthesis '('")
    if ')' not in value_str:
        raise ValueError("Name must contain a closing parenthesis ')'")

    return value
This validator checks that the value isn’t empty and then performs additional checks for parentheses. This could be one check, but having it as two checks improves log readability (insert foreshadowing – Ed). I could also have added Name to the validate_non_empty_string validation, but this way I have all my Name checks in the same place.
Parameterised Checks
Like constraints, field validators can also be parameterised. Let’s examine Album.
As iTunes exports can’t be customised, I use Album for a track’s Camelot Notation. These are based on the Camelot Wheel – MixedInKey‘s representation of the Circle Of Fifths. DJs generally favour Camelot Notation as it is simpler than traditional music notation for human understanding and application sorting.
Importantly, there are only twenty-four possible notations:
For example:
1A (A-Flat Minor)
6A (G Minor)
6B (B-Flat Major)
10A (B Minor)
So let’s capture these values in a CAMELOT_NOTATIONS list:
(Note the leading zeros. Without them, iTunes sorts the Album column as (10, 11, 12, 1, 2, 3…) – you can imagine how I felt about that – Ed)
Next, I pass the CAMELOT_NOTATIONS list to an Album field validator that checks if the given value is in the list:
Python
@field_validator("Album")
@classmethod
def validate_album(cls, value):
    if pd.isna(value) or str(value).strip() == "":
        raise ValueError("Album must not be null or empty")
    if str(value) not in CAMELOT_NOTATIONS:
        raise ValueError(f"Album must be a valid Camelot notation: {value} is not in the valid list")
    return value
Pydantic now fails any value not found in the CAMELOT_NOTATIONS list.
Now I have my validation needs fully covered. What observability does Pydantic give me over these data validation checks?
Pydantic Observability
In this section, I assess and adjust the default Pydantic observability abilities to ensure my data validation is accurately recorded.
Default Output
Pydantic automatically generates data validation error messages if validation fails. These detailed messages provide a structured overview of the issues encountered, including:
The index of the failing input (e.g., a DataFrame row number).
The model class where the error occurred.
The field name that failed validation.
A human-readable explanation of the issue.
The offending input value and its type.
A direct link to relevant documentation for further guidance.
Here’s an example of Pydantic’s output when a string field receives a NaN value:
Python
Row 2353: 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
In this example:
Row 2353 indicates the problematic input row.
Track is the Pydantic model where validation failed.
Work is the failing field.
Pydantic detects that the input is nan (a float) and not a valid string.
Pydantic provides a URL to the string_type documentation.
Here’s another example, this time for a MyRating error:
Python
Row 3040: 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
In this case, a field validator raised a ValueError because MyRating must not be null.
Pydantic’s error reporting is clear and actionable, making it suitable for debugging and systemic data validation tasks. However, for larger datasets or more user-friendly outputs (such as reports or UI feedback), further customisation is helpful, such as…
Terminal Output Customisation
As good as Pydantic’s default output is, it’s not that human-readable. For example, in this Terminal output I have no idea which tracks are on rows 2353, 2495 and 3040:
Plaintext
Row 2353: 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type

Row 2495: 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type

Row 3040: 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
While I can find this out, it would be better to know at a glance. Fortunately, I can improve this when capturing the errors by appending the artist and name to each row of the errors object:
Python
except (ValidationError, ValueError) as e:
    artist = row['Artist'] if not pd.isna(row['Artist']) else "Unknown Artist"
    name = row['Name'] if not pd.isna(row['Name']) else "Unknown Name"
    errors.append((index, artist, name, str(e)))
Now, Artist and Name are added to each row:
Plaintext
Row 2353: Ben Stone - Mercure (Extended Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type

Row 2495: DJ Hell - My Definition Of House Music (Resistance D Remix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type

Row 3040: York - Reachers Of Civilisation (In Search Of Sunrise Mix): 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
This makes it far easier to find the problematic files in my collection. As long as there aren’t many findings…
Creating An Error File
There are three main problems with Pydantic printing all data validation errors in the Terminal:
They don’t persist outside of the Terminal session.
The Terminal isn’t that easy to read when it’s full of text.
The Terminal may run out of space if there are a large number of errors.
So let’s capture the errors in a file instead. This write_error_report function generates a text-based error report from validation failures, saving it in a logs subfolder adjacent to the input file:
Firstly, it constructs a timestamped filename using the original file’s stem (e.g., 20250529-142304-PydanticErrors-data.txt) and the logs subfolder, creating the latter if it doesn’t exist:
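A rough sketch of that construction (variable names like input_path are assumptions; the timestamp format matches the example filename above):

Python
from datetime import datetime
from pathlib import Path

timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")

# Create a logs subfolder next to the input file if it doesn't already exist
logs_dir = input_path.parent / "logs"
logs_dir.mkdir(exist_ok=True)

# e.g. logs/20250529-142304-PydanticErrors-data.txt
error_output_path = logs_dir / f"{timestamp}-PydanticErrors-{input_path.stem}.txt"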
Next, Python orders the errors by the sorted_fields input, displays error counts per field and formats each error message with clear section dividers. A structured report listing all validation errors by field is saved in the logs subfolder:
Python
with open(error_output_path, 'w', encoding='utf-8') as f:
    f.write(f"Validation Error Report - {timestamp}\n")
    f.write("=" * 80 + "\n")

    for field in sorted_fields:
        messages = field_error_details.get(field, [])
        if messages:
            f.write(f"\n{field} Errors ({len(messages)}):\n")
            f.write("-" * 80 + "\n")
            for message in messages:
                f.write(message + "\n\n")
Finally, the filesystem path of the generated report is returned:
Python
return error_output_path
When executed, the Terminal tells me the error file path:
Plaintext
Detailed error log written to: 20250513-133743-PydanticErrors-iTunes-Elec-Dance-Club-Main.txt
And stores the findings in a local txt file, grouped by error type for simpler readability:
Plaintext
Validation Error Report - 20250513-133743
================================================================================

MyRating Errors (5):
--------------------------------------------------------------------------------
Row 3040: York - Reachers Of Civilisation (In Search Of Sunrise Mix): 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error

Work Errors (22):
--------------------------------------------------------------------------------
Row 223: Dave Angel - Artech (Original Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Adding A Terminal Summary
Finally, I created a Terminal summary of Pydantic’s findings:
Python
print("\nValidation Summary:\n")sorted_fields =sorted(Track.model_fields.keys())for field in sorted_fields: count = error_analysis['counts'].get(field, 0)print(f"{field} findings: {count}")
In this section, I test that my Pydantic data validation and observability processes are working correctly using iTunes export files and pytest unit tests.
Recent File Test
The first test used a recent export from the end of April 2025. Here is the Terminal output:
Plaintext
Processing file: iTunes-Elec-Dance-Club-Main-2025-04-28.txt
Reading iTunes-Elec-Dance-Club-Main-2025-04-28.txt with detected encoding UTF-16
Loaded 4407 rows
Validated 4379 rows
Found 28 errors!

Validation Summary for iTunes-Elec-Dance-Club-Main-2025-04-28.txt:

Album errors: 0
Artist errors: 0
Genre errors: 0
Location errors: 0
MyRating errors: 5
Name errors: 1
TrackNumber errors: 0
Work errors: 22
Year errors: 0

Detailed error log written to: 20250521-164324-PydanticErrors-iTunes-Elec-Dance-Club-Main-2025-04-28.txt
Good first impressions – the 4407 row count matches the export file, the summary is shown in the Terminal and an error log is created. So what’s in the log?
Firstly, five tracks have no MyRating values. For example:
Plaintext
MyRating Errors (5):
--------------------------------------------------------------------------------
Row 558: Reel People Feat Angela Johnson - Can't Stop (Michael Gray Instrumental Remix): 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
This is correct, as this export was created when I added some new tracks to my collection.
Next, one track has a Name issue:
Plaintext
Name Errors (1):
--------------------------------------------------------------------------------
Row 1292: The Prodigy - Firestarter (Original Mix}: 1 validation error for Track
Name
  Value error, Name must contain a closing parenthesis ')' [type=value_error, input_value='Firestarter (Original Mix}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
This one confused me at first, until I looked at the error more closely and realised the closing parenthesis is wrong! } is used instead of )! This is why my validate_name field validator has separate checks for each character – it makes it easier to understand the results!
Finally, twenty-two tracks are missing record label metadata in Work:
Plaintext
Work Errors (22):
--------------------------------------------------------------------------------
Row 223: Dave Angel - Artech (Original Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
This means some tracks are missing full metadata. This won’t break any downstream processes as I have no reliance on this field. That said, it’s good to know about this in case my future needs change.
Older File Test
The next test uses an older file from March 2025. Let’s see what the Terminal says this time…
There are fewer rows here – 4381 vs 4407. This is correct, as my collection was smaller in March. But no rows were validated successfully!
I don’t have to go far to find out why:
Plaintext
Location Errors (4381):
--------------------------------------------------------------------------------
Row 0: Ariel - A9 (Original Mix): 1 validation error for Track
Location
  Path does not point to a file [type=path_not_file, input_value='C:\\Users\\User\\Folder...riel-A9-OriginalMix.mp3', input_type=str]
All the location checks failed. But this is actually a successful test!
In the time between these two exports, I reorganised my music collection. As a result, the file paths in this export no longer exist. Remember – the Location field uses the FilePath data type, which checks that the given paths exist and link to valid files. And these don’t!
The Name results are the same as the first test. This has been around for a while apparently…
Plaintext
Name Errors (1):
--------------------------------------------------------------------------------
Row 1292: The Prodigy - Firestarter (Original Mix}: 1 validation error for Track
Name
  Value error, Name must contain a closing parenthesis ')' [type=value_error, input_value='Firestarter (Original Mix}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
There are also TrackNumber errors in this export:
Plaintext
TrackNumber Errors (2):
--------------------------------------------------------------------------------
Row 485: Andrew Bayer Feat Alison May - Brick (Original Mix): 2 validation errors for Track
TrackNumber
  Input should be greater than or equal to 100 [type=greater_than_equal, input_value=90, input_type=int]
    For further information visit https://errors.pydantic.dev/2.11/v/greater_than_equal
Two tracks have BPM values lower than the set range. Both files were moved during my reorganisation, but were included in this export at the time and therefore fail this validation check.
Finally, the Work errors are the same as the first test (although more have crept in since!):
Plaintext
Work Errors (17):
--------------------------------------------------------------------------------
Row 223: Dave Angel - Artech (Original Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Ultimately, both tests match expectations!
Unit Tests With Amazon Q
Finally, I wanted to include some unit tests for this project. Unit testing is always a good idea, especially in this context where I can verify function outputs and error generation without needing to create numerous test files.
I figured this was a good opportunity to test Amazon Q Developer and see what it came up with. I gave it a fairly basic prompt, using the @workspace context to allow Q access to my project’s entire workspace as context for its responses:
Plaintext
@workspace write unit tests for this script using pytest
I tend to use pytest for my Python testing, as I find it simpler and more flexible than Python’s standard unittest library.
Q promptly provided several reasonable tests in response. This initiated a half-hour exchange between us focused on calibrating the existing tests and creating new ones. To be fair to Q, my initial prompt was quite basic and could have been much more detailed.
Amongst Q’s tests was this one testing an empty Artist field:
Python
@patch('pathlib.Path.exists')
def test_empty_artist(self, mock_exists):
    """Test that an empty artist fails validation."""
    # Mock file existence check
    mock_exists.return_value = True

    invalid_track_data = {
        "Name": "Test Track (Original Mix)",
        "Artist": "",  # Empty artist
        "Album": "01A-Abm",
        "Work": "Test Label",
        "Genre": "Trance",
        "TrackNumber": 130,
        "Year": 2020,
        "MyRating": 80,
        "Location": "C:\\Music\\test_track.mp3"
    }

    # Assumed assertion (truncated in the original): the validator raises on an empty Artist
    with pytest.raises(ValueError, match="Artist must not be null or empty"):
        Track(**invalid_track_data)
This one, checking an invalid Camelot Notation:
Python
@patch('pathlib.Path.exists')
def test_invalid_album_not_camelot(self, mock_exists):
    """Test that an invalid Camelot notation fails validation."""
    # Mock file existence check
    mock_exists.return_value = True

    invalid_track_data = {
        "Name": "Test Track (Original Mix)",
        "Artist": "Test Artist",
        "Album": "Invalid Key",  # Not a valid Camelot notation
        "Work": "Test Label",
        "Genre": "Trance",
        "TrackNumber": 130,
        "Year": 2020,
        "MyRating": 80,
        "Location": "C:\\Music\\test_track.mp3"
    }

    with pytest.raises(ValueError, match="Album must be a valid Camelot notation"):
        Track(**invalid_track_data)
And this one, checking what happens with an incomplete DataFrame:
Python
@patch('wolfie_exportvalidator_itunes.detect_file_encoding')
@patch('pandas.read_csv')
def test_load_itunes_data_missing_columns(self, mock_read_csv, mock_detect_encoding):
    """Test loading iTunes data with missing columns."""
    # Setup mocks
    mock_detect_encoding.return_value = 'utf-8'
    mock_df = pd.DataFrame({
        'Name': ['Test Track (Original Mix)'],
        'Artist': ['Test Artist'],
        # Missing required columns
    })
    mock_read_csv.return_value = mock_df

    # Call function and verify it raises an error
    with pytest.raises(ValueError, match="Missing expected columns"):
        load_itunes_data(Path('dummy_path.txt'))
I’ll include the whole test suite in my GitHub repo. Let’s conclude with pytest‘s output:
I had a very positive experience overall! Working with Amazon Q allowed me to write the tests more quickly than I could have done on my own. We would have been even faster if I had put more thought into my initial prompt. Additionally, since Q Developer offers a generous free tier, it didn’t cost me anything.
GitHub Repo
I have committed my Pydantic data validation script, test suite and documentation in the repo below:
Note that the parameters are decoupled from the Pydantic script. This will allow me to reuse some parameters across future validation scripts and has enabled me to exclude the system parameters from the repository.
Summary
In this post, I used the Pydantic Python library to create data validation and observability processes for my Project Wolfie iTunes data.
I found Pydantic very impressive! Its simplicity, functionality and interoperability make it an attractive addition to Python data pipelines, and its strong community support keeps Pydantic relevant and current. Additionally, Pydantic’s presence in FastAPI, PydanticAI and a managed AWS Lambda layer enables rapid integration and seamless deployment. I see many applications for it within Project Wolfie.
There’s lots more to Pydantic – this Pixegami video is a great walkthrough of Pydantic in action:
If this post has been useful then the button below has links for contact, socials, projects and sessions:
I’ve become an AWS Step Functions convert in recent times. Back in 2020 when I first studied it for some AWS certifications, Step Functions defined workflows entirely in JSON, making it less approachable and often overlooked.
How times change! With 2021’s inclusion of a visual editor, Step Functions became far more accessible, helping it become a key tool in serverless application design. And in 2024 two major updates significantly enhanced Step Functions’ flexibility: JSONata support, which I recently explored, and built-in variables, which simplify state transitions and data management. This post focuses on the latter.
To demonstrate the power of Step Functions variables, I’ll walk through a practical example: fetching API data, verifying the response, and inserting it into DynamoDB. Firstly, I’ll examine the services and features I’ll use. Then I’ll create a state machine and examine each state’s use of variables. Finally, I’ll complete some test executions to ensure everything works as expected.
If a ‘simplified’ workflow seems hard to justify as a 20-minute read…that’s fair. But mastering Step Functions variables now can save hours of debugging and development in the long run! – Ed
Also, special thanks to AWS Community Builder Md. Mostafa Al Mahmud for generously providing AWS credits to support this and future posts!
Architecture
This section provides a top-level view of the architecture behind my simplified Step Functions variables workflow, highlighting the main AWS services involved in getting and processing API data. I’ll briefly cover the data being used, the role of Step Functions variables and the integration of DynamoDB within the workflow.
API Data
The data comes from a RESTful API that provides UK car details. The API needs both an authentication key and query parameters. Response data is provided in JSON.
The data used in this post is about my car. As some of it is sensitive, I will only use data that is already publicly available:
Step Functions variables offer a simple way to store and reuse data within a state machine, enabling dynamic workflows without complex transformations. They work well with both JSONata and JSONPath and are available at no extra cost in all AWS regions that support Step Functions.
Variables are set using Assign. They can be given static values for fixed data, as well as dynamic values that change per execution. To set variables dynamically, Step Functions uses JSONata expressions within {% ... %}. The following example extracts productName and available from the state input using the JSONata $states reserved variable:
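Here’s a hedged sketch of what such an Assign block might look like – the static environment value is illustrative, not taken from AWS documentation:

JSON
"Assign": {
  "environment": "production",
  "productName": "{% $states.input.productName %}",
  "available": "{% $states.input.available %}"
}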
Variables are then referenced using dollar signs ($), e.g. $productName.
There’s tonnes more to this. For details on name syntax, ASL integration and creating JSONPath variables, check the Step Functions Developer Guide variables section. Additionally, watch AWS Principal Developer Advocate Eric Johnson‘s related video:
With Step Functions variables handling data transformation and persistence, the next step is storing processed data efficiently. This is where Amazon DynamoDB comes in.
Amazon DynamoDB
DynamoDB is a fully managed NoSQL database built for high performance and seamless scalability. Its flexible, schema-less design makes it perfect for storing and retrieving JSON-like data with minimal overhead.
DynamoDB can automatically scale to manage millions of requests per second while maintaining low latency. It integrates seamlessly with AWS services like Lambda and API Gateway, providing built-in security, automated backups, and global replication to ensure reliability at any scale.
Popular use cases include:
Serverless backends (paired with AWS Lambda/API Gateway) for API-driven apps.
Real-time workloads like user sessions, shopping carts, or live leaderboards.
High-velocity data streams from IoT devices or clickstream analytics.
Diagram
Finally, here is an architectural diagram of my simplified Step Functions variables workflow:
In which:
The user triggers an AWS Step Functions state machine with a JSON key-value pair as input.
A Lambda function is invoked with the input payload.
The Lambda function sends a POST request to a third-party API.
The API server responds with JSON data.
The Lambda function assigns Step Functions variables to store API response values and enters a Choice state that checks the API response code.
If the Choice state condition fails, SNS publishes a failure notification email.
The state machine terminates with an ExecutionFailed status.
If the Choice state condition passes, the processed API response data is written to a DynamoDB table.
SNS publishes a success notification email.
The state machine terminates with an ExecutionSucceeded status.
If an error occurs at any point in execution, SNS publishes a failure notification email and the state machine terminates with an ExecutionFailed status.
Resources
In this section, I create and configure my simplified Step Functions variables workflow resources, which are:
AWS Lambda function
AWS Step Functions Choice state
Amazon DynamoDB table
Amazon SNS topic
AWS Lambda
This section details how I created an AWS Lambda function to interact with a REST API. The function makes a secure POST request, manages potential errors and parses the response for use in downstream states. Additionally, I will discuss an issue I encountered while attempting to create variables from the function’s output using JSONata.
Lambda Function
First, I need to write a function. It’s nothing special – I have the API link and security key I need, and I just want to retrieve some data. So I decided to give Amazon Q Developer a try!
I used this prompt:
I have an HTTPS hyperlink for a REST API and an API key. I want to send a POST request to the API using the key. I have a USERID parameter that I will pass as JSON.
Please write a Python function for AWS Lambda with error handling. Please use security best practises and Pythonic standards.
Yes I do say Please in my prompts.
Amazon Q produced a decent script with:
API endpoint and key stored as environment variables.
Use of the requests module with a timeout to prevent indefinite hangs.
Error handling for request failures, validation errors, and unexpected issues.
A structured JSON response including the status code and parsed response.
Python
import json
import os
import logging
import requests
from typing import Dict, Any
from botocore.exceptions import ClientError

# Configure logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    """
    AWS Lambda function to make POST request to API endpoint

    Args:
        event: Lambda event object containing request data
        context: Lambda context object

    Returns:
        API response as dictionary
    """
    try:
        # Get API endpoint and key from environment variables
        api_endpoint = os.environ['API_ENDPOINT']
        api_key = os.environ['API_KEY']

        # Extract user ID from event
        user_id = event.get('userId')
        if not user_id:
            raise ValueError("userId is required in the request")

        # Set request headers
        headers = {
            'Content-Type': 'application/json',
            'x-api-key': api_key
        }

        # Prepare request payload
        payload = {
            'userId': user_id
        }

        # Make POST request
        response = requests.post(
            api_endpoint,
            headers=headers,
            json=payload,
            timeout=30  # Set timeout for the request
        )

        # Raise exception for bad status codes
        response.raise_for_status()

        return {
            'statusCode': response.status_code,
            'body': response.json()
        }

    except requests.exceptions.RequestException as e:
        logger.error(f"API request failed: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Failed to make API request'})
        }
    except ValueError as e:
        logger.error(f"Validation error: {str(e)}")
        return {
            'statusCode': 400,
            'body': json.dumps({'error': str(e)})
        }
    except Exception as e:
        logger.error(f"Unexpected error: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal server error'})
        }
It needed some tweaks for my purposes, but was still faster than typing it all out manually!
Step Functions Config
The Lambda: Invoke action defaults to using the state input as the payload, so "Payload": "{% $states.input %}" is scripted automatically:
JSON
"Lambda Invoke": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Output": "{% $states.result.Payload %}",
  "Arguments": {
    "FunctionName": "[LAMBDA_ARN]:$LATEST",
    "Payload": "{% $states.input %}"
  },
  "Next": "Check API Status Code"
}
This is going to be helpful in the next section!
Step Functions manages retries and error handling. If my Lambda function fails, it will retry up to three times with exponential backoff before sending a failure notification through SNS:
I mentioned Lambda: Invoke‘s default Payload setting earlier. This default produces a {% $states.result.Payload %} JSONata expression as the state output, which I can use to assign variables for downstream states.
In this example, {% $states.result.Payload %} returns this:
Let’s make a variable for statusCode. In the response, statusCode is a property of Payload:
JSON
{
  "Payload": {
    "statusCode": 200
  }
}
In JSONata this is expressed as {% $states.result.Payload.statusCode %}. Then I can assign the JSONata expression to a statusCode variable via JSON. In the AWS console, I do this via:
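Expressed in the state machine’s JSON, the resulting Assign block could look roughly like this (the body field paths are assumptions based on the variables referenced later, such as $make and $yearOfManufacture):

JSON
"Assign": {
  "statusCode": "{% $states.result.Payload.statusCode %}",
  "make": "{% $states.result.Payload.body.make %}",
  "yearOfManufacture": "{% $string($states.result.Payload.body.yearOfManufacture) %}"
}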
Note that variables returning numbers from the response body like yearOfManufacture have an additional $string JSONata expression. I’ll explain the reason for this in the DynamoDB section.
Lambda Issues
When I first started using Step Functions variables, I used a different Lambda function for the API call and kept getting this error:
An error occurred.
The JSONata expression '$states.input.body.make' specified for the field 'Assign/make' returned nothing (undefined).
After getting myself confused, I checked the function’s return statement and found this:
That string isn’t compatible with dot notation. So while $states.input.body will match the whole body, $states.input.body.make can’t match anything because the string can’t be traversed. So nothing is returned, causing the error.
Using response.json() fixes this, as the response is now correctly structured for JSONata expressions:
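To illustrate the difference, here’s a small standalone sketch – the assumption being that the original function serialised the body with json.dumps (the actual original return statement isn’t shown):

Python
import json

body = {"make": "FORD", "yearOfManufacture": 2014}

# Before (assumed): the body is a JSON *string*, so JSONata dot notation can't traverse it
output_broken = {"statusCode": 200, "body": json.dumps(body)}

# After: the body is left as a dict, so '$states.input.body.make' resolves correctly
output_fixed = {"statusCode": 200, "body": body}

print(type(output_broken["body"]))  # <class 'str'>
print(type(output_fixed["body"]))   # <class 'dict'>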
The Choice state here is very similar to a previous one. This Choice state checks the Lambda function’s API response and routes accordingly.
Here, the Choice state uses the JSONata expression {% $statusCode = 200 %} to check the $statusCode variable value. By default, it will transition to the SNS Publish: Fail state. However, if $statusCode equals 200, then the Choice state will transition to the DynamoDB PutItem state instead:
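As a sketch, the Choice state could look something like this (the state names are taken from elsewhere in the workflow; the exact rule layout is an assumption):

JSON
"Check API Status Code": {
  "Type": "Choice",
  "Choices": [
    {
      "Condition": "{% $statusCode = 200 %}",
      "Next": "DynamoDB PutItem"
    }
  ],
  "Default": "SNS Publish: Fail"
}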
This step prevents silent failures by ensuring unsuccessful API responses trigger an SNS notification instead of proceeding to DynamoDB. It also helps maintain data integrity by isolating success and failure paths, and ensuring only valid responses are saved in DynamoDB.
So now I’ve captured the data and confirmed its integrity. Next, let’s store it somewhere!
Amazon DynamoDB
It’s time to think about storing the API data. Enter DynamoDB! This section covers creating a table, writing data and integrating DynamoDB with AWS Step Functions and JSONata. I’ll share key lessons learned, especially about handling data types correctly.
Let’s start by creating a table.
Creating A Table
Before inserting data into DynamoDB, I need to create a table. Since DynamoDB is a schemaless database, all that is required to create a new table is a table name and a primary key. Naming the table is straightforward, so let’s focus on the key.
DynamoDB has two types of key:
Partition key (required): Part of the table’s primary key. It’s a hash value that is used to retrieve items from the table and allocate data across hosts for scalability and availability.
Sort key (optional): The second part of a table’s primary key. The sort key enables sorting or searching among all items sharing the same partition key.
Let’s look at an example using a Login table. In this table, the user ID serves as the partition key, while the login date acts as the sort key. This structure enables efficient lookups and sorting, allowing quick retrieval of a user’s login history while minimizing operational overhead.
To use a physical analogy, consider the DynamoDB table as a filing cabinet, the Partition key as a drawer, and the Sort key as a folder. If I wanted to retrieve User 123’s logins for 2025, I would:
Access the Logins filing cabinet (DynamoDB table).
Find User 123’s drawer (Partition Key).
Get User 123’s 2025 folder (Sort Key).
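To make this concrete, here’s roughly what a CreateTable request for that Logins table could look like (the attribute names and billing mode here are assumptions for illustration):
JSON
{
  "TableName": "Logins",
  "AttributeDefinitions": [
    {"AttributeName": "UserID", "AttributeType": "S"},
    {"AttributeName": "LoginDate", "AttributeType": "S"}
  ],
  "KeySchema": [
    {"AttributeName": "UserID", "KeyType": "HASH"},
    {"AttributeName": "LoginDate", "KeyType": "RANGE"}
  ],
  "BillingMode": "PAY_PER_REQUEST"
}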
DynamoDB provides many features beyond those discussed here. For the latest features, please refer to the Amazon DynamoDB Developer Guide.
Writing Data
So now I have a table, how do I put data in it?
DynamoDB offers several ways to write data, and a common one is PutItem. This lets me insert or replace an item in my table. Here’s a basic example of adding a login event to a UserLogins table:
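Something along these lines (the LoginDate attribute and its value are illustrative):
JSON
{
  "TableName": "UserLogins",
  "Item": {
    "UserID": {"S": "123"},
    "LoginDate": {"S": "2025-03-01T09:00:00Z"}
  }
}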
TableName specifies the name of the DynamoDB table where the item will be stored.
Item represents the data being inserted into the table. It contains key-value pairs, where the attributes (e.g. UserID) are mapped to their corresponding data types (e.g. "S") and values (e.g. "123").
UserID is an attribute in the item being inserted.
"S" is a data type descriptor, ensuring that DynamoDB knows how to store and index it.
"123" is the value assigned to the UserID attribute.
While DynamoDB is NoSQL, it still enforces strict data types and naming rules to ensure consistency. These are detailed in the DynamoDB Developer Guide, but here’s a quick rundown of supported data types as of March 2025:
S – String
N – Number
B – Binary
BOOL – Boolean
NULL – Null
M – Map
L – List
SS – String Set
NS – Number Set
BS – Binary Set
Step Functions Config
So how do I apply this to Step Functions? Well, remember when I set variables in the output of the Lambda function? Step Functions lets me reference those variables here.
Here’s how I store a make attribute in DynamoDB, using my $make variable in a JSONata expression:
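A minimal sketch of the relevant PutItem Arguments fragment (table name redacted as elsewhere in this post):
JSON
"Arguments": {
  "TableName": "REDACTED",
  "Item": {
    "make": {"S": "{% $make %}"}
  }
}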
Finally, DynamoDB:PutItem gets the same error handling as Lambda:Invoke.
So I got all this working first time, right? Well…
DynamoDB Issues
During my first attempts, I got this error:
An error occurred while executing the state 'DynamoDB PutItem'.
The Parameters '{"TableName":"REDACTED","Item":{"make":{"S":"FORD"},"yearOfManufacture":{"N":2014}}}' could not be used to start the Task:
[The value for the field 'N' must be a STRING]
Ok. Not the first time I’ve seen data type problems. I’ll just change the yearOfManufacture data type to "S" (string) and try again…
An error occurred while executing the state 'DynamoDB PutItem'.
The Parameters '{"TableName":"REDACTED","Item":{"make":{"S":"FORD"},"yearOfManufacture":{"S":2014}}}' could not be used to start the Task:
[The value for the field 'S' must be a STRING]
DynamoDB rejected both approaches (╯°□°)╯︵ ┻━┻
The issue wasn’t the data type, but how the value was formatted. DynamoDB expects number values to be passed as strings in its JSON representation, so even when using the N type the value must be wrapped in quotes.
In the case of yearOfManufacture, where I was providing 2014:
Plaintext
"yearOfManufacture": {"N": 2014}
DynamoDB needed "2014":
Plaintext
"yearOfManufacture": {"N": "2014"}
Thankfully, JSONata came to the rescue again! Remember the $string function from the Lambda section? Well, $string casts the given argument to a string!
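Since the cast happens in the Assign block back in the Lambda section, the PutItem fragment can pass the already-stringified variable straight into the N field, roughly:
JSON
"yearOfManufacture": {"N": "{% $yearOfManufacture %}"}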
This solved the problem with no Lambda function changes or additional states!
Amazon SNS
After successfully writing data to DynamoDB, I want to include a confirmation step by sending a notification through Amazon SNS.
While this approach is not recommended for high-volume use cases because of potential costs and notification fatigue, it can be helpful for testing, monitoring, and debugging. Additionally, it offers an opportunity to reuse variables from previous states and dynamically format a message using JSONata.
The goal is to send an email notification like this:
A 2014 GREY FORD has been added to DynamoDB on (current date and time)
To do this, I’ll use:
$yearOfManufacture for the vehicle’s year (2014)
$colour for the vehicle’s colour (GREY)
$make for the manufacturer (FORD)
Plus the JSONata $now() function for the current date and time. This generates a UTC timestamp in ISO 8601-compatible format and returns it as a string. E.g. "2025-02-25T19:12:59.152Z"
So the code will look something like:
A $yearOfManufacture $colour $make has been added to DynamoDB on $now()
Which translates to this JSONata expression:
Plaintext
{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}
Let’s analyse each part of the JSONata expression to understand how it builds the final message:
Plaintext
{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}
Each part of this expression plays a specific role:
‘A ‘ | ‘ has been added to DynamoDB on ‘: Static strings describing the event.
‘ ‘: Static spaces to separate the JSONata variable outputs.
$yearOfManufacture, $colour and $make: The variables assigned earlier in the Lambda Invoke state.
&: The JSONata string concatenation operator joining each part together.
$now(): The current UTC timestamp.
The static spaces are important! Without them, I’d get this:
2014GREYFORD
Instead of the expected:
2014 GREY FORD
This JSONata expression is passed as the Message argument in the SNS:Publish action, ensuring the notification contains the correctly formatted message:
JSON
"Message": "{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}"
Finally, to integrate this with Step Functions, the expression is included in the SNS Publish: Success task’s ASL:
JSON
"SNS Publish: Success": {"Type": "Task","Resource": "arn:aws:states:::sns:publish","Arguments": {"Message": "{% 'A ' & $yearOfManufacture & ' ' & $colour & ' ' & $make & ' has been added to DynamoDB on ' & $now() %}","TopicArn": "arn:aws:sns:REDACTED:success-stepfunction"}
Final Workflow
Finally, let’s see what the workflows look like. Here’s the workflow graph:
In this section, I run some test executions against my simplified Step Functions workflow and check the variables. I’ll test four requests – two valid and two invalid.
Valid Request: Ford
Firstly, what happens when a valid API request is made and everything works as expected?
The Step Functions execution succeeds:
Each state completes successfully:
My DynamoDB table now contains one item:
I receive a confirmation email from SNS:
If I send the same request again, the existing DynamoDB item is overwritten because the primary key remains the same.
Valid Request: Audi
Next, what happens if I make a valid request for a different car? The steps repeat as above, and my DynamoDB table now has two items:
And I get a different email:
Invalid Request
Next, what happens if the car in my request doesn’t exist? Well, it does fail, but in an unexpected way:
The API returns an error response:
JSON
"Payload": {"statusCode": 500,"body": "{\"error\": \"API request failed: 400 Client Error: Bad Request for url"}" }
I’d expected the response to be passed to the Choice state, which would then notice the 500 status code and start the Fail process. But this happened instead:
The failure occurs during variable assignment in the Lambda Invoke state! It attempts to assign a yearOfManufacture value from the API response body to a variable, but since there is no response body the assignment fails:
JSON
{"cause": "An error occurred while executing the state 'Lambda Invoke' (entered at the event id #2). The JSONata expression '$states.result.Payload.body.yearOfManufacture ' specified for the field 'Assign/yearOfManufacture ' returned nothing (undefined).","error": "States.QueryEvaluationError","location": "Assign/registrationNumber","state": "Lambda Invoke"}
I also get an email, but this one is less fancy as it just dumps the whole output:
So I still get my Fail outcome – just not in the expected way. Despite this, the Choice state remains valuable for preventing invalid data from entering DynamoDB.
No Request
Finally, what happens if no data is passed to the state machine at all?
Actually, this situation is very similar to the invalid request! There’s a different error message in the log:
JSON
"Payload": {"statusCode": 400,"body": "{\"error\": \"Registration number not provided\"}" }
But otherwise it’s the same events and outcome. The Lambda variable assignment fails, triggering an SNS email and an ExecutionFailed result.
Cost Analysis
This section examines the costs of my simplified Step Functions variables workflow. It’s a brief one, since all services used in this workflow fall within the AWS Free Tier! For transparency, I’ll include my billing metrics for the month. These are account-wide, and I’m still nowhere near paying AWS anything!
DynamoDB:
$0.1415 per million read request units (EU (Ireland))
30.5 ReadRequestUnits
$0.705 per million write request units (EU (Ireland))
SNS:
First 1,000 Amazon SNS Email/Email-JSON Notifications per month are free
19 Notifications
First 1,000,000 Amazon SNS API Requests per month are free
289 Requests
Step Functions:
$0 for first 4,000 state transitions
431 StateTransitions
This experiment demonstrates how cost-effective Step Functions can be. As long as my usage remains within the Free Tier, I pay nothing! If my workflow grows, I’ll monitor costs and optimise accordingly.
Summary
In this post, I used AWS Step Functions variables and JSONata to create a simplified API data capture workflow with Lambda and DynamoDB.
With a background in SQL and Python, I’m no stranger to variables, and I love that they’re now a native part of Step Functions. AWS keeps enhancing Step Functions every few months, making it more powerful and versatile. The introduction of variables unlocks new possibilities for data manipulation, serverless applications and event-driven workflows, and I’m excited to explore them further in the coming months!