Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully-featured services from data centres globally.
In this post, I explore how AWS Billing & Cost Management Dashboards streamline FinOps, monitor service costs and create clear, shareable visual narratives.
(Also, yes – amazonwebshark has been around since 2021 and I can’t believe this is the first time FinOps has been mentioned – Ed.)
As a long-time data professional and analytics geek, I loves me some graphs. And as someone who regularly uses Cost Explorer in various ways, I was keen to check this out. One thing led to another and this post emerged!
I’ll begin by examining the benefits of the new AWS Billing and Cost Management Dashboards and how to access them. Then I’ll build two dashboards – one in a standalone AWS account and another in my AWS Organisation Management account. Finally, I’ll examine how to share dashboards between AWS accounts.
FinOps is short for ‘Financial Operations’. It’s the practice of bringing together finance, engineering, and business teams to maximise the value of cloud spend. Instead of cloud bills being something only Finance is concerned with, FinOps makes cost awareness part of everyday decisions – both non-technical and technical.
FinOps isn’t just for accountants. Engineers can see how their services contribute to the monthly bill, Finance can track patterns and generate forecasts, and leadership gets high-level visuals they can view and share without needing to interact with spreadsheets and raw data. FinOps can also help during negotiations with service providers and cloud platforms, from SLAs to resource reservations.
Data Storytelling & Narratives
Visualisation narratives and data storytelling both focus on using charts and visuals to add context to raw data. They combine data, visuals and narrative to show both what is happening and why it matters. The goal is to create a unified message that moves from context to evidence to insight, rather than using isolated charts.
In cost management, this means structuring dashboards so visuals tell a story: a high-level view of overall spend, followed by the accounts or services driving these costs, and then the details that provide supporting evidence. This turns a dashboard into a coherent narrative that links costs to activity and business goals.
The value lies in this clarity. Narratives reduce noise, highlight what matters, and make cost information accessible to both technical and non-technical teams. They also reflect FinOps guidance on timely and accessible reporting, aligning with the AWS Well-Architected focus on continual optimisation.
Dashboard Benefits
AWS Billing and Cost Management Dashboards support key industry-standard cost guidance. An example of this is the FinOps Foundation's FinOps Principles, including:
Enabling teams and account owners to monitor and manage cloud spend without relying on external teams or tools.
Allowing centralised FinOps teams to highlight and promote key cost metrics consistently across the organisation.
Providing real-time updates, ensuring accuracy and constant access without requiring data team oversight.
Supporting collaboration between finance and technology teams to understand costs and their alignment with business goals.
AWS Billing and Cost Management Dashboards also align with the AWS Well-Architected Framework’s Cost Optimization Pillar goals. For example:
Encouraging active and ongoing management of cloud costs, rather than end-of-month reporting.
Increasing awareness of usage and expenditure to enable informed decisions.
Making it simple to identify resources or services that may not be cost-effective.
Revealing usage trends against demand to ensure resources scale appropriately without overspending.
Showing long-term patterns to validate optimisation efforts and drive continuous improvement.
Additionally, sharing dashboards from AWS Organization Management accounts means fewer people need direct access to the account itself, supporting security best practices. And because Billing and Cost Management Dashboards are free to use and require no knowledge of Amazon QuickSight, they come with almost no technical or financial overhead.
Creating Dashboards
The new AWS Billing & Cost Management Dashboards are accessible via Billing & Cost Management → Dashboards:
The Dashboards console then appears, showing all dashboards by default and a tab for shared dashboards:
Finally, this is the screen for adding widgets to a new dashboard:
There are two types of widgets:
Custom widgets for bespoke reporting needs.
Predefined widgets for common use cases. These can also be customised as needed.
These widget types are explained fully in the AWS Cost Management widget types documentation. Having selected and positioned a widget, it can then be customised using the Cost Explorer UI and features, including filters, dimensions, and granularity.
(Aside – there are several widgets aimed at Reservations and Savings Plans. I don’t really use these in my AWS accounts, so you won’t see them being used in this post – Ed)
AWS Cost Explorer and Cost Management Dashboards use the same billing data but serve different purposes. Cost Explorer is ideal for digging into details, while Dashboards focus on building clean, repeatable and shareable views that fit into reports or presentations for technical and non-technical stakeholders.
When creating or editing dashboards, time periods can be set at both the dashboard and widget levels:
Dashboard-level time periods apply temporarily to all widgets and reset when leaving or refreshing the dashboard.
Widget-level time periods are saved with each widget and persist until changed.
Single Account Dashboard
In this section, the focus is on building a dashboard for a standalone AWS account. Using a mix of predefined and custom widgets, it’s possible to track costs at both the service and API operation level, reveal usage patterns over time, and spot trends that may indicate opportunities for optimisation.
Monthly Service Costs
Let’s start with a predefined Monthly Costs By Service widget:
This chart displays six months of service usage, with S3 accounting for the majority and showing a recent downward trend. While there are empty sections showing service utilisation with very low or no cost, I’ve left the widget filterless in case that changes in future.
Daily Service Costs
Next, let’s include a predefined Daily Costs widget:
This is the default bar chart, and it isn't much use here – it tells me very little at a glance. So let's make some changes. This menu is available for all widgets:
Under Change Visualisation Type, there are options for a bar chart, line chart, stacked bar chart and table. Given that I want to track my cost trends over time, a line chart is best suited here.
The Daily Costs line graph looks like this:
This still isn't great, as the regular spikes are off-putting. They're a combination of Route 53 hosted zones and Tax. Both costs are debited once a month, at the same time each month (the 1st), so although the spikes make for an alarming-looking chart, in reality they're standard events.
Looking back at the earlier chart, the biggest cost by far is S3. Let’s adjust the graph to analyse that by updating the Service filter to only include S3. A quick update of the chart’s title and description produces this:
This is much more helpful! Easier to read and comprehend, and has a clear message and narrative. Daily S3 costs in this account were around 20¢ per day from February to May, then almost halved in June and were under 1¢ per day by the end of July.
Spotting sudden drops like this is useful, as it can flag lifecycle rules kicking in, data movement between storage classes or workload shifts. Equally, a steady rise can indicate the need for lifecycle policies or changes in access patterns.
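For repeatability, the same filtered view can be rebuilt outside the console with the Cost Explorer API. This is a minimal sketch (the helper name and dates are mine; the service value is Cost Explorer's label for S3) of the GetCostAndUsage parameters behind a widget like this:

```python
def build_daily_s3_cost_query(start: str, end: str) -> dict:
    """Build GetCostAndUsage parameters mirroring the widget:
    daily granularity, unblended cost, filtered to S3 only."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "Filter": {
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Simple Storage Service"],
            }
        },
    }

# With AWS credentials that allow ce:GetCostAndUsage, this feeds straight into boto3:
# import boto3
# ce = boto3.client("ce")
# response = ce.get_cost_and_usage(**build_daily_s3_cost_query("2025-02-01", "2025-08-01"))
```

Keeping the parameters in a function like this makes it easy to reuse the same filter for ad-hoc checks alongside the dashboard.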
API Operation Costs
Let’s go deeper into the account spend with a custom Cost widget grouped by the API Operation dimension:
Urgh. Couple of problems here:
The chart’s narrative is hard to understand as the bars are sorted by total expenditure across the entire time period of the chart. For example, StandardIAStorage is huge in May, barely there in June and gone in July. And it’s not even in the July bars at all. Yet it’s always the first bar because it’s the biggest spend overall. Confused yet?
The legend confuses further. "No Operation" is Tax – while this is correct within the context of the API Operation dimension, it doesn't help the chart's story. And "Others" is no help at all.
Finally, that axis is no use. What was the cost of PutObject in May? And how does it compare to July? No idea.
Given that I want to examine individual API-level costs here, a table is a better choice. It provides precise totals with no need for axis interpretation, shows a $0 spent as a value rather than the absence of a column, and eliminates the requirement to compress everything into a summarised, non-scrolling visual, thereby removing the vague Others legend and axis.
Finally, let’s exclude Tax from the Service filter (yes Tax is a service) and I get something far closer to what I want:
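As a rough code equivalent, the table's underlying query groups costs by operation and excludes the Tax "service". The helper below is a hypothetical sketch of those GetCostAndUsage parameters, not the console's exact request:

```python
def build_operation_cost_query(start: str, end: str) -> dict:
    """GetCostAndUsage parameters approximating the table widget:
    monthly costs grouped by API operation, with Tax excluded
    via a Not filter on the SERVICE dimension."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "OPERATION"}],
        "Filter": {
            "Not": {"Dimensions": {"Key": "SERVICE", "Values": ["Tax"]}}
        },
    }

# Usable with boto3's Cost Explorer client in the same way as any other query:
# response = boto3.client("ce").get_cost_and_usage(**build_operation_cost_query("2025-05-01", "2025-08-01"))
```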
This dashboard allows me to track both monthly and daily spending, analyse costs by service and API operation, and identify any unusual spikes or drops. It simplifies monitoring trends, such as changes in S3 usage, and helps me pinpoint exactly where expenses are occurring. This way, I can quickly focus on areas that may require attention, turning detailed cost data into a clear and understandable overview of account activity.
AWS Organisations Dashboard
In this section, the focus shifts to dashboards in an AWS Organisation Management account. The goal is to track costs across multiple linked accounts, understand how AWS credits are being used, and monitor S3 Standard usage.
Note that the charts in this section appear slightly different, as I've excluded my organisation's AWS account IDs. While the account names are fine to share, I consider the IDs sensitive and prefer not to show them.
Linked Account Costs
Let’s start with a default predefined Monthly Costs By Linked Account widget:
OK, so there's a lot of empty space here. Although this organisation has existed for a while, it only began generating costs in May 2025. Additionally, the chart shows no costs for July. This is because I applied my AWS Community Builder credits then, and by default future months will behave the same way.
Let’s make this chart more useful by changing the date range from the last six months to the last three months and amending the Charge Type filter to exclude AWS credits, thereby showing my original spend:
As the July spend now dwarfs that of the other months, the axis makes the visual fairly useless. What’s July’s blue bar value? For that matter, what’s June’s green bar value? No idea at all.
Given that I want exact values, and that these values can be wildly different from month to month, this visual works far better as a table:
The monthly spend for each member account is now far easier to see.
AWS Credits Usage
In the first chart I excluded my AWS credits to see my original spend. But it’d also be helpful to know more about my Community Builder credit usage. Am I burning through them quicker than anticipated? To what extent are the credits covering my AWS spend? And, given they’ll expire eventually, should I be bolder with my cloud spend to get the most out of my credits while I have them?
To visualise this, let’s make a custom Cost widget focusing on the Charge Type dimension:
This is already helpful but, like the first chart, a table is better here for precision and clarity:
And let’s update the widget’s title and description to communicate what is being shown:
S3 Standard Usage
Finally, I want to create an early warning system. When storing objects in S3, the default is usually Standard. There’s nothing wrong with this, and S3 Standard is a good choice for short-lived data.
However, it’s also among the most expensive of the S3 storage classes, and if multiple accounts in my organisation are using S3 Standard when they don’t need to, then I’m neither following best practice nor am I well-architected.
So monitoring my organisation member accounts' use of S3 Standard is a good idea. This will show when my S3 Standard utilisation is trending upwards, telling me where to focus my optimisation efforts if they are needed. I can do this using a custom Usage widget, configured with a Usage Type Group of S3 Standard:
As this value is being tracked over time, a line chart is more suitable:
I experimented with changing the granularity from monthly to daily, but I wanted to keep this dashboard for monthly reporting, intended for observability rather than alerting. That’d be better suited to a custom usage AWS Budget configured to monitor a daily S3 Storage: Standard usage type group.
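For the alerting side, a daily usage budget could be defined programmatically. This is a sketch only – the budget name and limit are mine, and the CostFilters value is my assumption based on the console's usage type group label:

```python
def build_s3_standard_usage_budget(limit_gb: float) -> dict:
    """A daily USAGE-type budget definition for the Budgets API,
    tracking the S3 Standard usage type group. The filter value
    mirrors the console label and may need adjusting."""
    return {
        "BudgetName": "s3-standard-daily-usage",
        "BudgetType": "USAGE",
        "TimeUnit": "DAILY",
        "BudgetLimit": {"Amount": str(limit_gb), "Unit": "GB"},
        "CostFilters": {"UsageTypeGroup": ["S3: Storage - Standard"]},
    }

# Created via boto3 (account ID is a placeholder):
# boto3.client("budgets").create_budget(
#     AccountId="123456789012",
#     Budget=build_s3_standard_usage_budget(50.0),
# )
```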
This trend can be further tracked by adjusting the dashboard's date range. The visuals below cover 01 July to 25 August, showing a downward S3 Standard usage trend and my August 2025 costs up to that point:
This multi-account dashboard allows me to track monthly spending across linked accounts. It provides insights into how AWS credits are being used to offset costs and helps me monitor trends in S3 Standard storage across the organisation. With this dashboard, I can easily identify which accounts are driving costs, understand how credits are applied, and pinpoint areas where S3 usage may need optimisation. It transforms multiple streams of raw billing data into a simple, cohesive view.
Sharing Dashboards
Once dashboards are created, they can be shared. Sharing allows teams, finance stakeholders, and other account holders to view or collaborate on dashboards without requiring direct access to the underlying AWS account. This makes it easier to align on costs, promote FinOps practices, and ensure visibility across the organisation.
Dashboards can be shared both within and outside of an AWS Organization:
Behind the scenes, both sharing options are handled by AWS Resource Access Manager (RAM). If an active AWS Organization exists, then the dropdown list is populated with the member accounts. Alternatively, account IDs can be entered manually.
While this view is the same whether AWS Organizations is enabled or not, accounts not in an AWS Organization will see an error when interacting with this list:
As accounts are selected, their access can be set as:
Can View: Recipients can view the dashboard but cannot make changes.
Can Edit: Recipients can view and modify the dashboard configuration.
The selection process is very flexible. A single sharing configuration can include both internal and external accounts, and can assign these accounts to either permission scope. Accounts are added to the Added Recipients section as they are selected, showing which accounts can access the dashboard and with what scope:
These accounts will then view the dashboard in the Shared With Me tab of their Billing & Cost Management console. While users can view the dashboard layout and widget configurations, they don’t have access to the underlying data. Also, the data they do see is based on their IAM permissions.
Sharing dashboards enables collaboration among teams and finance stakeholders, offering visibility into costs while eliminating the need for direct account access.
Summary
In this post, I explored how AWS Billing & Cost Management Dashboards streamline FinOps, monitor service costs and create clear, shareable visual narratives.
As demonstrated, I’m already using this feature and am a very happy customer! I love how simple and expressive it is, and especially appreciate not having to manage any backend ETLs or pipelines. I am 100% the type of user this feature was built for, and it delivers exactly what I need to monitor, understand and communicate AWS costs across accounts with minimal effort.
I’ve got a few wishlist items. Exporting daily dashboard snapshots via SNS to Slack or email would be useful. This is a PowerBI feature that would work well here, especially since the data wouldn’t be shared – only a snapshot of the dashboard and a link to the resource. Support for CloudFormation and CDK would also make adoption and repeatability easier.
AWS Billing & Cost Management Dashboards make it simpler to build cost narratives, share insights, and track usage without the overhead of QuickSight or third-party tools. They are available at no additional cost in all AWS commercial regions.
For those familiar with AWS, the Paid Plan resembles the AWS we know and are used to. This plan is designed for production applications, grants access to all AWS services and features, and provides payment options like pay-as-you-go and savings plans.
The new Paid Plan also includes the existing always-free services.
When a free plan expires, the account will close automatically and access to current resources and data will be lost. AWS retains the data for 90 days after the free plan’s expiry, after which it will be entirely erased.
Retrieval within this 90-day window is possible, but requires an upgrade to a paid plan to reopen the account. Note that this isn't automatic – users must consent to being charged as part of the upgrade process.
The expiration date, credit balance, and remaining days of a free tier account can be monitored through the Cost and Usage widget in the AWS Management Console Home, or programmatically using the AWS SDK and command line at no cost via the GetAccountPlanState API. AWS will also send periodic email alerts regarding credit balances and the end of the free plan period.
Service Restrictions
Where previously a new account could use most AWS offerings immediately, free plan accounts now have some limitations. This is the AWS rationale:
Additionally, free account plans don’t have access to certain AWS services that would rapidly consume the entire AWS Free Tier credit amount, or hardware purchases.
There’s roughly a 50/50 eligibility split of the AWS service catalogue, with some interesting choices that I’ll go into…
New User Considerations
This section examines considerations of the AWS free tier changes for beginners with no prior AWS experience.
Usage-Linked Closure Is Good…
The new Free Plan stops one of the tales as old as time, where new AWS users join up, try out all their shiny new toys and then get spiked by a massive bill. Or their access keys are exposed and stolen, creating a massive bill. Or they spin up an EC2 instance outside of the free tier and get a massive bill. And so on.
Well now, the user only spends their credits. And when the credits are used up, the account closes. The user loses their free plan, but they don’t lose the shirt off their back. Nor do they have to go to AWS cap in hand.
This also addresses another common concern: “I forgot my account was open, and now it’s been hacked!” Not anymore – accounts will close automatically after six months. This feature also helps limit financial damage from DDoS attacks, exposed credentials and similar risks.
Sounds great, right?
…But Isn’t Infallible
There are circumstances where having account closure linked to a credit balance is less desirable:
A user builds something that explodes in popularity.
Online attackers deliberately target an account.
A user misconfigures a resource.
These circumstances, and others, will quickly eat through the credits and trigger the account’s closure. What would happen in this situation is currently unclear – would AWS hit the brakes immediately? Is there a grace period of any sort? Either way, observability and monitoring are vital – the budget alert is a great start, and CloudWatch is included in the Free Plan.
Potential Credits Confusion
Finally, I can see potential confusion between the free plan credits, which expire in twelve months, and the free plan itself, which expires in six months. My interpretation is that free users upgrading to a paid plan after six months will be able to continue using any remaining credits for the following six months.
Some new users will see their account expiry coming up while their credits have over six months remaining, assume the account expiry is wrong, and then be surprised when their account shuts. It sounds like AWS will make this as obvious as possible to account owners. I guess we'll find out on Reddit in six months…
Experienced User Considerations
This section discusses the AWS free tier changes for users with prior AWS experience.
Free Tier Policing
I’ve already seen this ruffle some Internet feathers.
Traditionally, AWS were fairly flexible with new accounts. While officially only one email address can be associated with an account, AWS kinda ignored plus addressing. This allowed users to have multiple free tier accounts, and to start a new account when the free tier on their existing one expired.
Well not any more! AWS make it very clear in their FAQs:
“You would be ineligible for free plan or Free Tier credits if you have an existing AWS account, or had one in the past. The free plan and Free Tier credits are available only to new AWS customers.”
Now, if a user has an existing account and tries to make a new one, even with plus addressing, they will see this message at the end of the process:
No doubt there are parts of the Internet that will find ways around this. I haven’t pursued it personally as I was only interested in checking the restrictions of certain services. AWS themselves don’t have this problem of course, and have their own blog post about the Free Tier update with various screenshots and explanations.
Speaking of restrictions…
Unusual Service (In)Eligibility Choices
(This section is based on the original Excel sheet given by AWS in July 2025 and may be subject to change – Ed)
As mentioned earlier, AWS now limit the available services on their Free plan:
Free account plans don’t have access to certain AWS services that would rapidly consume the entire AWS Free Tier credit amount, or hardware purchases.
That said, there are some unusual choices here regarding services that are and aren’t eligible for the free plan.
Firstly, Glue is enabled, but Athena isn’t. So new users can create Glue resources, but can’t interact with them using Athena. I’m confused by this – for Athena to be costly, it usually requires querying data in the TB range that a new AWS account simply wouldn’t contain. Nor does it need specialised hardware. AWS even credits Athena with “Simple and predictable pricing” on its feature page, so why the Free Plan exclusion?
Also confusingly, CodeBuild and CodePipeline are eligible, but CodeDeploy isn’t. Can’t say I understand the logic behind this either!
Other exclusions make more sense. S3 is eligible, but Glacier services aren’t. Fair enough – Glacier is for long-lived storage, while free plans have six-month limits. Presumably, S3 Intelligent Tiering also excludes Glacier on the Free Plan.
Elsewhere, EC2 is eligible but I’ve not been able to check how limited the offering is. Trawling Reddit suggests only the t3.micro instance is available, but if this isn’t the case then many instance types exist that could rapidly burn through $200.
AWS CloudHSM is also eligible, with average costs around $1.50 per instance per hour. That's about $36 per day, or over $100 in just three days – somewhat contradicting AWS's reasoning for the limitations. And while users could be frugal with it, these are new users who are likely using AWS for the first time.
Finally, new users should be aware that certain actions immediately forfeit free tier credits. Most notably:
When your account joins AWS Organizations or sets up an AWS Control Tower landing zone, your AWS Free Tier credits expire immediately and your account will not be eligible to earn more AWS Free Tier credits.
Now, these are hardly services that a new user would need. However, an organisation or educational body would want to bear this in mind if they were encouraging staff or students to try AWS out. The free accounts must remain under the ownership of individual users. Any attempt to bring them into an existing AWS Organisation will kill their free tier!
Separately, this simplifies things for those of us already using Organisations or Control Tower – accounts created using these services will immediately be on the paid plan with no usage restrictions.
Summary
This blog post focused on the recent changes to AWS’s Free Tier, which allows new users to select either a Paid Plan or a Free Plan. It highlighted the main modifications made, specified which services were included or excluded, and considered the impact of these changes on both novice and seasoned users.
Overall, I see this as a positive change. The AWS Free Tier offering has been divisive for some time, and these changes go a long way towards softening many of its rough edges. While not everyone will get what they want, these changes greatly help to address the concerns and challenges faced by newbies in the past.
New users of AWS in 2025 should consider the same advice as in years prior:
Security first, always.
Check the cost of services before spinning them up.
Turn unused services off.
And finally, don’t forget to set that budget alarm!
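That budget alarm can be set up in code as well as the console. This is a hedged sketch using the AWS Budgets API shape – the budget name, limit, threshold and email address are all placeholders:

```python
def build_monthly_cost_budget(limit_usd: float) -> dict:
    """A simple monthly COST budget definition for the Budgets API."""
    return {
        "BudgetName": "monthly-cost-guardrail",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
    }

def build_email_alert(threshold_pct: float, email: str) -> dict:
    """Notify by email when actual spend passes a percentage of the budget."""
    return {
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": threshold_pct,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
    }

# Wired together with boto3 (account ID and email are placeholders):
# boto3.client("budgets").create_budget(
#     AccountId="123456789012",
#     Budget=build_monthly_cost_budget(10.0),
#     NotificationsWithSubscribers=[build_email_alert(80.0, "me@example.com")],
# )
```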
Data validation is a crucial component of any data project. It ensures that data is accurate, consistent and reliable. It verifies that data meets set criteria and rules to maintain its quality, and stops erroneous or unreliable information from entering downstream systems. I’ve written about it, scripted it and talked about it.
Validation will be a crucial aspect of Project Wolfie. It is an ongoing process that should occur from data ingestion to exposure, and should be automated wherever possible. Thankfully, most data processes within Project Wolfie are (and will be) built using Python, which provides several libraries to simplify data validation. These include Pandera, Great Expectations and the focus of this post – Pydantic (specifically, version 2).
Firstly, I’ll explore the purpose and benefits of Pydantic. Next, I’ll import some iTunes data and use it to explore key Pydantic validation concepts. Finally, I’ll explore how Pydantic handles observability and test its findings. The complete code will be in a GitHub repo.
Let’s begin!
Introducing Pydantic
This section introduces Pydantic and examines some of its benefits.
About Pydantic
Pydantic is an open-source data validation Python library. It uses established Python notation and constructs to define data structures, types and constraints. These can then validate the provided data, generating clear error messages when issues occur.
Pydantic is a widely used tool for managing application settings, validating API requests and responses, and streamlining data transfer between Python objects and formats like JSON. By integrating both existing and custom elements, it offers a powerful and Pythonic method for ensuring data quality and consistency within projects. This makes data handling in Python more reliable and reduces the likelihood of errors through its intuitive definition and validation processes.
Pydantic Benefits
Pydantic’s benefits are thoroughly documented, and the ones I want to highlight here are:
Intuitive: Pydantic’s use of type hints, functions and classes fits well with my current Python skill level, so I can focus on learning Pydantic without also having to explore unfamiliar Python concepts.
Fast: Pydantic’s core validation logic is written in Rust, which enables rapid development, testing, and validation. This speed has contributed towards…
Before I can start using Pydantic, I need some data. This section examines the data I am using and how I prepare it for Pydantic.
iTunes Data
Firstly, let’s extract some data from iTunes. I create iTunes Export files using the iTunes > Export Playlist command. Apple has documented this, but WikiHow’s documentation is more illustrative. The export file type choices are…interesting. The one closest to matching my needs is the txt format, although the files are technically tab-separated files (TSVs).
iTunes Exports contain many metadata columns. I’m not including them all here (after all, this is a Pydantic post not an iTunes one), but I will be using the following subset (using my existing metadata definitions):
Note that the starred Album and Track Number columns have purposes that differ from the column names. The reasons for this are…not ideal.
Track Number contains BPM data as, although iTunes does have a BPM column, it isn’t included in the exports. And the exports can’t be customised! To include BPMs in an export, I had to repurpose an existing column.
Great. But that’s not as bad as…
Album contains musical keys, as iTunes doesn’t even have a key column, despite MP3s having a native Initial Key metadata field! Approaches to dealing with this vary – I chose to use another donor column. I’ll explain Camelot Notations later on.
That’s enough about the iTunes data for now – I’ll go into more detail in future Project Wolfie posts. Now let’s focus on getting this data into memory for Python.
Data Capture
Next, let’s get the iTunes data into memory. Starting with a familiar library…
pandas
I’ll be using pandas to ingest the iTunes data. This is a well-established and widely supported module. It also has its own data validation functions and will assist with issues like handling spaces in column names.
While iTunes export files aren't CSVs, the pandas read_csv function can still read their data into a DataFrame. It needs some help though – the delimiter parameter must be set to \t so pandas splits on tabs.
So let’s read the iTunes metadata into memory and…
Python
df = pd.read_csv(csv_path, delimiter='\t')

>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Oh. pandas can’t read the file. The error says it’s trying the utf-8 codec, so the export must be using something else. Fortunately, there’s another Python library that can help!
charset_normalizer
charset_normalizer is an open-source encoding detector. It determines the encoding of a file or text and records the result. It’s related to the older chardet library but is faster, has a more permissive MIT license and supports more encodings.
Here, I’m using charset_normalizer.detect in a detect_file_encoding function to detect the export’s codec:
I define a detect_file_encoding function that expects a filepath and returns a string.
detect_file_encoding opens the file, reads the data and stores it as raw_data.
charset_normalizer detects raw_data‘s codec and stores this as detection_result.
detect_file_encoding returns either the successfully detected codec, or the common utf-8 codec if the attempt fails.
I can then pass the export’s filepath to the detect_file_encoding function, capture the results as encoding and pass this as a parameter to pandas.read_csv:
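Putting those steps together, detect_file_encoding could look something like the sketch below. The ImportError fallback is my addition so the snippet still runs without charset_normalizer installed; it simply checks for the UTF-16 byte order mark that iTunes exports carry:

```python
from pathlib import Path

def detect_file_encoding(filepath: str) -> str:
    """Detect a file's encoding, falling back to utf-8 on failure."""
    raw_data = Path(filepath).read_bytes()
    try:
        from charset_normalizer import detect
        detection_result = detect(raw_data)
        detected = detection_result.get("encoding")
    except ImportError:
        # Fallback when charset_normalizer isn't available:
        # recognise a UTF-16 byte order mark at the start of the file.
        detected = "utf-16" if raw_data[:2] in (b"\xff\xfe", b"\xfe\xff") else None
    return detected or "utf-8"

# Feeding the result into pandas (filename is illustrative):
# encoding = detect_file_encoding("playlist.txt")
# df = pd.read_csv("playlist.txt", delimiter="\t", encoding=encoding)
```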
There’s one more action to take before moving on. Some columns contain spaces. This will become a problem as spaces are not allowed in Python identifiers!
As the data is now in a pandas DataFrame, I can use pandas.DataFrame.rename to remove these spaces:
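A minimal sketch of that rename step, using a tiny stand-in DataFrame rather than the real export:

```python
import pandas as pd

# Example frame with the problematic spaced column names.
df = pd.DataFrame({"Track Number": [128], "My Rating": [100], "Name": ["Adagio"]})

# Strip spaces so every column becomes a valid Python identifier.
df = df.rename(columns=lambda col: col.replace(" ", ""))

print(list(df.columns))  # → ['TrackNumber', 'MyRating', 'Name']
```

Passing a callable to rename applies it to every column, so new spaced columns are handled automatically if the export format changes.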
In this section, I tell Pydantic about my data model and the types of data it should expect for validation.
Introducing BaseModel
At the core of Pydantic is the BaseModel class – used for defining data models. Every Pydantic model inherits from it, and by doing so gains features like type enforcement, automatic data parsing and built-in validation.
By subclassing BaseModel, a schema for the data is defined using standard Python type hints. Pydantic uses these hints to validate and convert input data automatically.
Let’s explore BaseModel by creating a new Track class.
Creating A Track Class
Pydantic supports standard library types like str and int. This reduces Pydantic's learning curve and simplifies integration into existing Python processes.
Here are the very beginnings of my Track data model. I have a new Track class inheriting from Pydantic’s BaseModel, and a Name field with string data type:
Python
class Track(BaseModel):
    Name: str
Next, I add a Year field with integer data type:
Python
class Track(BaseModel):
    Name: str
    Year: int
And so on for each field I want to validate with Pydantic:
Python
class Track(BaseModel):
    Name: str
    Artist: str
    Album: str
    Work: str
    Genre: str
    TrackNumber: int
    Year: int
    MyRating: int
    Location: str
Now, if any field is missing or has the wrong type, Pydantic will raise a ValidationError. But there’s far more to Pydantic data types than this…
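As a quick sketch of that behaviour, here is a cut-down Track model rejecting a non-integer Year:

```python
from pydantic import BaseModel, ValidationError


class Track(BaseModel):
    Name: str
    Year: int


try:
    # Year cannot be coerced to an integer, so validation fails
    Track(Name="Getting Away (Original Mix)", Year="not a year")
except ValidationError as error:
    print(error)  # reports that Year should be a valid integer
```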
Defining Special Data Types
Where standard types aren't enough, or where validation rules are more complex to determine, Pydantic offers further type coverage with specialised types such as FilePath, EmailStr and HttpUrl.
One of my Track fields will immediately benefit from this:
Python
class Track(BaseModel):
    Location: str
Currently, my Location field validation is highly permissive. It will accept any string. I can improve this using Pydantic’s FilePath data type:
Python
class Track(BaseModel):
    Location: FilePath
Now, Pydantic will check that the given location is a path that exists and links to a valid file. No custom code; no for loops – the FilePath type handles everything for me.
So I now have data type validation in my Pydantic data model. What else can I have?
Pydantic Built-In Validation
This section explores the native data validation features of Pydantic, including field annotation and constraints.
Introducing Field
In Pydantic models, data attributes are typically defined using Python type hints. The Field function enables further customisation like constraints, schema metadata and default values.
While type hints define what kind of data is allowed, Field defines how that data should behave, what happens if it’s missing and how it should be documented. It adds clarity to models and helps Pydantic enforce stricter rules.
Let’s run through some examples.
Custom Schema Metadata
One of the challenges in creating data pipelines is that the data fields can sometimes be unclear or difficult to explain. This can cause confusion and delay when building ETLs, examining repos and interacting with code.
Field helps here by adding custom fields to annotate data within Pydantic classes. Examples include description:
Python
class Track(BaseModel):
    Name: str = Field(description="Track's name and mix.")
And examples:
Python
class Track(BaseModel):
    Name: str = Field(
        description="Track's name and mix.",
        examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"]
    )
Using these throughout my Track class simplifies the code and reduces context switching:
Python
class Track(BaseModel):
    Name: str = Field(description="Track's name and mix.", examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"])
    Artist: str = Field(description="The artist(s) of the track.", examples=["Above & Beyond", "Armin van Buuren"])
    Album: str = Field(description="Track's Camelot Notation indicating the key.", examples=["01A-Abm", "02B-GbM"])
    Work: str = Field(description="The record label that published the track.", examples=["Armada Music", "Anjunabeats"])
    Genre: str = Field(description="Track's musical genre.", examples=["Trance", "Progressive House"])
    TrackNumber: int = Field(description="Track's BPM (Beats Per Minute).", examples=[130, 140])
    Year: int = Field(description="Track's release year.", examples=[1998, 2004])
    MyRating: int = Field(description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100", examples=[60, 80])
    Location: FilePath = Field(description="Track's Location on the filesystem.", examples=[r"C:\Users\User\Music\iTunes\TranquilityBase-GettingAway-OriginalMix.mp3"])
This is especially useful for Album and TrackNumber, as both are repurposed fields – Album holds Camelot Notation and TrackNumber holds BPM, rather than their usual iTunes meanings.
Field Constraints
Field can also constrain the data that a class accepts. This includes string constraints:
max_length: Maximum length of the string.
min_length: Minimum length of the string.
pattern: A regular expression that the string must match.
It also includes numeric constraints:
ge & le: Greater than or equal to / less than or equal to.
gt & lt: Strictly greater than / less than.
multiple_of: A number the value must be a multiple of.
Constraints can also be combined as needed. For example, iTunes exports record MyRating values in increments of 20, where 1 star is 20, 2 stars are 40, and so on up to 100 for the maximum 5 stars.
I can express this within the Track class as:
Python
class Track(BaseModel):
    MyRating: int = Field(
        description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100",
        examples=[60, 80],
        ge=20,
        le=100,
        multiple_of=20
    )
Here, MyRating must be greater than or equal to 20 (ge=20), less than or equal to 100 (le=100), and must be a multiple of 20 (multiple_of=20).
I can also parameterise these constraints using variables instead of hard-coded values:
Python
ITUNES_RATING_RAW_LOWEST = 20
ITUNES_RATING_RAW_HIGHEST = 100


class Track(BaseModel):
    MyRating: int = Field(
        description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100",
        examples=[60, 80],
        ge=ITUNES_RATING_RAW_LOWEST,
        le=ITUNES_RATING_RAW_HIGHEST,
        multiple_of=20
    )
Parameterised constraints also let me combine Pydantic with other Python libraries. Here, my Year validation checks for years greater than or equal to 1970 and less than or equal to the current year (using the datetime library):
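A sketch of that Year field, with YEAR_EARLIEST and YEAR_CURRENT derived as described:

```python
from datetime import datetime

from pydantic import BaseModel, Field

YEAR_EARLIEST = 1970
YEAR_CURRENT = datetime.now().year  # re-evaluated each run


class Track(BaseModel):
    Year: int = Field(
        description="Track's release year.",
        examples=[1998, 2004],
        ge=YEAR_EARLIEST,
        le=YEAR_CURRENT,
    )
```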
No track in the collection should exist beyond the current year – this constraint will now update itself as time passes.
Having applied other constraints, my Track class looks like this:
Python
class Track(BaseModel):
    """Pydantic model for validating iTunes track metadata."""
    Name: str = Field(description="Track's name and mix type.", examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"])
    Artist: str = Field(description="The artist(s) of the track.", examples=["Above & Beyond", "Armin van Buuren"])
    Album: str = Field(description="Track's Camelot Notation indicating the key.", examples=["01A-Abm", "02B-GbM"])
    Work: str = Field(description="The record label that published the track.", examples=["Armada Music", "Anjunabeats"])
    Genre: str = Field(description="Track's musical genre.", examples=["Trance", "Progressive House"])
    TrackNumber: int = Field(description="Track's BPM (Beats Per Minute).", examples=[130, 140], ge=BPM_LOWEST, le=BPM_HIGHEST)
    Year: int = Field(description="Track's release year.", examples=[1998, 2004], ge=YEAR_EARLIEST, le=YEAR_CURRENT)
    MyRating: int = Field(description="Personal Rating. Stars expressed as 0, 20, 40, 60, 80, or 100", examples=[60, 80], ge=ITUNES_RATING_RAW_LOWEST, le=ITUNES_RATING_RAW_HIGHEST, multiple_of=20)
    Location: FilePath = Field(description="Track's Location on the filesystem.", examples=[r"C:\Users\User\Music\iTunes\AboveAndBeyond-AloneTonight-OriginalMix.mp3"])
This is already very helpful. Next, let’s examine my custom requirements.
Pydantic Custom Validation
This section discusses how to create custom data validation using Pydantic. I will outline what the requirements are, and then examine how these validations are defined and implemented.
Introducing Decorators
In Python, decorators modify or enhance the behaviour of functions or methods without changing their actual code. Decorators are usually written using the @ symbol followed by the decorator name, just above the function definition:
Python
@my_decorator
def my_function():
    ...
For example, consider this logger_decorator function:
Python
def logger_decorator(func):
    def wrapper():
        print(f"Running {func.__name__}...")
        func()  # Execute the supplied function
        print("Done!")
    return wrapper
This function takes another function (func) as an argument, printing a message before and after execution. If the logger_decorator function is then used as a decorator when running this greet function:
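Reconstructing that example, the greet function and decorator together might look like this:

```python
def logger_decorator(func):
    def wrapper():
        print(f"Running {func.__name__}...")
        func()  # Execute the supplied function
        print("Done!")
    return wrapper


@logger_decorator
def greet():
    print("Hello, world!")


greet()
```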
Python will add the logging behaviour of logger_decorator without modifying greet:
Plaintext
Running greet...
Hello, world!
Done!
Introducing Field Validators
In addition to the built-in data validation capabilities of Pydantic, custom validators with more specific rules can be defined for individual fields using Field Validators. These use the field_validator() decorator, and are declared as class methods within a class inheriting from Pydantic’s BaseModel.
Here’s a basic example using my Track model:
Python
class Track(BaseModel):
    Name: str = Field(
        description="Track's name and mix.",
        examples=["Track Title (Original Mix)", "Track Title (Extended Mix)"]
    )

    @field_validator("Name")
    @classmethod
    def validate_name(cls, value):
        # custom validation logic here
        return value
Where:
@field_validator("Name") tells Pydantic to use the function to validate the Name field.
@classmethod lets the validator access the Track class (cls).
The validator executes the validate_name function with the field value (in this case Name) as input, performs the checks and must either:
return the validated value, or
raise a ValueError or TypeError if validation fails.
Let’s see this in action.
Null Checks
Firstly, let’s perform a common data validation check by identifying empty fields. I have two variants of this – one for strings and another for numbers.
The first – validate_non_empty_string – uses pandas.isna to catch missing values and strip() to catch empty strings. This field validator applies to the Artist, Work and Genre columns:
Python
@field_validator("Artist", "Work", "Genre")
@classmethod
def validate_non_empty_string(cls, value, info):
    """Validate that a string field is not empty."""
    if pd.isna(value) or str(value).strip() == "":
        raise ValueError(f"{info.field_name} must not be null or empty")
    return value
The second – validate_non_null_numeric – checks the TrackNumber, Year and MyRating numeric columns for empty values using pandas.isna:
Python
@field_validator("TrackNumber", "Year", "MyRating", mode="before")
@classmethod
def validate_non_null_numeric(cls, value, info):
    """Validate that a numeric field is not null."""
    if pd.isna(value):
        raise ValueError(f"{info.field_name} must not be null")
    return value
It also uses Pydantic's before mode (mode="before"), ensuring the data validation happens before Pydantic coerces types. This catches edge cases like "" or "NaN" before they become None or float("nan") values.
Character Check
Now let’s create a validator for something a little more challenging to define. All tracks in my collection follow a Track Name (Mix) schema. This can take many forms:
Original track: Getting Away (Original Mix)
Remixed track: Shapes (Oliver Smith Remix)
Updated remixed track: Distant Planet (Menno de Jong Interpretation) (2020 Remaster)
…and many more variants.
But generally, there should be at least one instance of text enclosed by parentheses. However, some tracks have no remixer and are released with just a title:
Getting Away
Shapes
Distant Planet
This not only looks untidy (eww!), but also breaks some of my downstream automation that expects the Track Name (Mix) schema. So any track without a remixer gets (Original Mix) added to the Name field upon download:
Getting Away (Original Mix)
Shapes (Original Mix)
Distant Planet (Original Mix)
Expressing this is possible with RegEx, but I can make a more straightforward and more understandable check with a field validator:
Python
@field_validator("Name")
@classmethod
def validate_name(cls, value):
    if pd.isna(value) or str(value).strip() == "":
        raise ValueError("Name must not be null or empty")
    value_str = str(value)
    if '(' not in value_str:
        raise ValueError("Name must contain an opening parenthesis '('")
    if ')' not in value_str:
        raise ValueError("Name must contain a closing parenthesis ')'")
    return value
This validator checks that the value isn’t empty and then performs additional checks for parentheses. This could be one check, but having it as two checks improves log readability (insert foreshadowing – Ed). I could also have added Name to the validate_non_empty_string validation, but this way I have all my Name checks in the same place.
Parameterised Checks
Like constraints, field validators can also be parameterised. Let’s examine Album.
As iTunes exports can’t be customised, I use Album for a track’s Camelot Notation. These are based on the Camelot Wheel – MixedInKey’s representation of the Circle Of Fifths. DJs generally favour Camelot Notation as it is simpler than traditional music notation for human understanding and application sorting.
Importantly, there are only twenty-four possible notations:
For example:
1A (A-Flat Minor)
6A (G Minor)
6B (B-Flat Major)
10A (B Minor)
So let’s capture these values in a CAMELOT_NOTATIONS list:
(Note the leading zeros. Without them, iTunes sorts the Album column as (10, 11, 12, 1, 2, 3…) – you can imagine how I felt about that – Ed)
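A reconstruction of that list is below. The notation format follows the Album examples shown earlier ("01A-Abm", "02B-GbM"), but the exact key spellings (e.g. F#m versus Gbm) are my own assumption, not the author's verbatim list:

```python
# Reconstruction of the CAMELOT_NOTATIONS list: 12 minor (A) and 12 major (B)
# keys, with leading zeros so the Album column sorts correctly in iTunes
CAMELOT_NOTATIONS = [
    "01A-Abm", "02A-Ebm", "03A-Bbm", "04A-Fm", "05A-Cm", "06A-Gm",
    "07A-Dm", "08A-Am", "09A-Em", "10A-Bm", "11A-F#m", "12A-Dbm",
    "01B-BM", "02B-GbM", "03B-DbM", "04B-AbM", "05B-EbM", "06B-BbM",
    "07B-FM", "08B-CM", "09B-GM", "10B-DM", "11B-AM", "12B-EM",
]
```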
Next, I pass the CAMELOT_NOTATIONS list to an Album field validator that checks if the given value is in the list:
Python
@field_validator("Album")
@classmethod
def validate_album(cls, value):
    if pd.isna(value) or str(value).strip() == "":
        raise ValueError("Album must not be null or empty")
    if str(value) not in CAMELOT_NOTATIONS:
        raise ValueError(f"Album must be a valid Camelot notation: {value} is not in the valid list")
    return value
Pydantic now fails any value not found in the CAMELOT_NOTATIONS list.
Now I have my validation needs fully covered. What observability does Pydantic give me over these data validation checks?
Pydantic Observability
In this section, I assess and adjust Pydantic's default observability capabilities to ensure my data validation is accurately recorded.
Default Output
Pydantic automatically generates data validation error messages if validation fails. These detailed messages provide a structured overview of the issues encountered, including:
The index of the failing input (e.g., a DataFrame row number).
The model class where the error occurred.
The field name that failed validation.
A human-readable explanation of the issue.
The offending input value and its type.
A direct link to relevant documentation for further guidance.
Here’s an example of Pydantic’s output when a string field receives a NaN value:
Plaintext
Row 2353: 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
In this example:
Row 2353 indicates the problematic input row.
Track is the Pydantic model where validation failed.
Work is the failing field.
Pydantic detects that the input is nan (a float) and not a valid string.
Pydantic provides a URL to the string_type documentation.
Here’s another example, this time for a MyRating error:
Plaintext
Row 3040: 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
In this case, a field validator raised a ValueError because MyRating must not be null.
Pydantic’s error reporting is clear and actionable, making it suitable for debugging and systemic data validation tasks. However, for larger datasets or more user-friendly outputs (such as reports or UI feedback), further customisation is helpful, such as…
Terminal Output Customisation
As good as Pydantic’s default output is, it’s not that human-readable. For example, in this Terminal output I have no idea which tracks are on rows 2353, 2495 and 3040:
Plaintext
Row 2353: 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Row 2495: 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Row 3040: 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
While I can find this out, it would be better to know at a glance. Fortunately, I can improve this when capturing the errors by appending the artist and name to each row of the errors object:
Python
except (ValidationError, ValueError) as e:
    artist = row['Artist'] if not pd.isna(row['Artist']) else "Unknown Artist"
    name = row['Name'] if not pd.isna(row['Name']) else "Unknown Name"
    errors.append((index, artist, name, str(e)))
Now, Artist and Name are added to each row:
Plaintext
Row 2353: Ben Stone - Mercure (Extended Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Row 2495: DJ Hell - My Definition Of House Music (Resistance D Remix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Row 3040: York - Reachers Of Civilisation (In Search Of Sunrise Mix): 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
This makes it far easier to find the problematic files in my collection. As long as there aren’t many findings…
Creating An Error File
There are three main problems with Pydantic printing all data validation errors in the Terminal:
They don’t persist outside of the Terminal session.
The Terminal isn’t that easy to read when it’s full of text.
The Terminal may run out of space if there are a large number of errors.
So let’s capture the errors in a file instead. This write_error_report function generates a text-based error report from validation failures, saving it in a logs subfolder adjacent to the input file:
Firstly, it constructs a timestamped filename using the original file’s stem (e.g., 20250529-142304-PydanticErrors-data.txt) and the logs subfolder, creating the latter if it doesn’t exist:
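A minimal sketch of that step. Note that build_error_output_path is a hypothetical helper name for illustration; the author's actual implementation may structure this differently:

```python
from datetime import datetime
from pathlib import Path


def build_error_output_path(input_path: Path) -> Path:
    """Build a timestamped error-report path in a logs subfolder."""
    timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    logs_dir = input_path.parent / "logs"
    logs_dir.mkdir(exist_ok=True)  # create the subfolder if it doesn't exist
    # e.g. logs/20250529-142304-PydanticErrors-data.txt
    return logs_dir / f"{timestamp}-PydanticErrors-{input_path.stem}.txt"
```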
Next, the function orders the errors by the sorted_fields input, displays error counts per field and formats each error message with clear section dividers. A structured report listing all validation errors by field is saved in the logs subfolder:
Python
with open(error_output_path, 'w', encoding='utf-8') as f:
    f.write(f"Validation Error Report - {timestamp}\n")
    f.write("=" * 80 + "\n")
    for field in sorted_fields:
        messages = field_error_details.get(field, [])
        if messages:
            f.write(f"\n{field} Errors ({len(messages)}):\n")
            f.write("-" * 80 + "\n")
            for message in messages:
                f.write(message + "\n\n")
Finally, the filesystem path of the generated report is returned:
Python
return error_output_path
When executed, the Terminal tells me the error file path:
Plaintext
Detailed error log written to: 20250513-133743-PydanticErrors-iTunes-Elec-Dance-Club-Main.txt
And stores the findings in a local txt file, grouped by error type for simpler readability:
Plaintext
Validation Error Report - 20250513-133743
================================================================================

MyRating Errors (5):
--------------------------------------------------------------------------------
Row 3040: York - Reachers Of Civilisation (In Search Of Sunrise Mix): 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error

Work Errors (22):
--------------------------------------------------------------------------------
Row 223: Dave Angel - Artech (Original Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Adding A Terminal Summary
Finally, I created a Terminal summary of Pydantic’s findings:
Python
print("\nValidation Summary:\n")
sorted_fields = sorted(Track.model_fields.keys())
for field in sorted_fields:
    count = error_analysis['counts'].get(field, 0)
    print(f"{field} findings: {count}")
In this section, I test that my Pydantic data validation and observability processes are working correctly using iTunes export files and pytest unit tests.
Recent File Test
The first test used a recent export from the end of April 2025. Here is the Terminal output:
Plaintext
Processing file: iTunes-Elec-Dance-Club-Main-2025-04-28.txt
Reading iTunes-Elec-Dance-Club-Main-2025-04-28.txt with detected encoding UTF-16
Loaded 4407 rows
Validated 4379 rows
Found 28 errors!

Validation Summary for iTunes-Elec-Dance-Club-Main-2025-04-28.txt:
Album errors: 0
Artist errors: 0
Genre errors: 0
Location errors: 0
MyRating errors: 5
Name errors: 1
TrackNumber errors: 0
Work errors: 22
Year errors: 0

Detailed error log written to: 20250521-164324-PydanticErrors-iTunes-Elec-Dance-Club-Main-2025-04-28.txt
Good first impressions – the 4407 row count matches the export file, the summary is shown in the Terminal and an error log is created. So what’s in the log?
Firstly, five tracks have no MyRating values. For example:
Plaintext
MyRating Errors (5):
--------------------------------------------------------------------------------
Row 558: Reel People Feat Angela Johnson - Can't Stop (Michael Gray Instrumental Remix): 1 validation error for Track
MyRating
  Value error, MyRating must not be null [type=value_error, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
This is correct, as this export was created when I added some new tracks to my collection.
Next, one track has a Name issue:
Plaintext
Name Errors (1):
--------------------------------------------------------------------------------
Row 1292: The Prodigy - Firestarter (Original Mix}: 1 validation error for Track
Name
  Value error, Name must contain a closing parenthesis ')' [type=value_error, input_value='Firestarter (Original Mix}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
This one confused me at first, until I looked at the error more closely and realised the closing parenthesis is wrong! } is used instead of )! This is why my validate_name field validator has separate checks for each character – it makes it easier to understand the results!
Finally, twenty-two tracks are missing record label metadata in Work:
Plaintext
Work Errors (22):
--------------------------------------------------------------------------------
Row 223: Dave Angel - Artech (Original Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
This means some tracks are missing full metadata. This won’t break any downstream processes as I have no reliance on this field. That said, it’s good to know about this in case my future needs change.
Older File Test
The next test uses an older file from March 2025. Let’s see what the Terminal says this time…
There are fewer rows here – 4381 vs 4407. This is correct, as my collection was smaller in March. But no rows were validated successfully!
I don’t have to go far to find out why:
Plaintext
Location Errors (4381):
--------------------------------------------------------------------------------
Row 0: Ariel - A9 (Original Mix): 1 validation error for Track
Location
  Path does not point to a file [type=path_not_file, input_value='C:\\Users\\User\\Folder...riel-A9-OriginalMix.mp3', input_type=str]
All the location checks failed. But this is actually a successful test!
In the time between these two exports, I reorganised my music collection. As a result, the file paths in this export no longer exist. Remember – the Location field uses the FilePath data type, which checks that the given paths exist and link to valid files. And these don’t!
The Name results are the same as the first test. This has been around for a while apparently…
Plaintext
Name Errors (1):
--------------------------------------------------------------------------------
Row 1292: The Prodigy - Firestarter (Original Mix}: 1 validation error for Track
Name
  Value error, Name must contain a closing parenthesis ')' [type=value_error, input_value='Firestarter (Original Mix}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
There are also TrackNumber errors in this export:
Plaintext
TrackNumber Errors (2):
--------------------------------------------------------------------------------
Row 485: Andrew Bayer Feat Alison May - Brick (Original Mix): 2 validation errors for Track
TrackNumber
  Input should be greater than or equal to 100 [type=greater_than_equal, input_value=90, input_type=int]
    For further information visit https://errors.pydantic.dev/2.11/v/greater_than_equal
Two tracks have BPM values lower than the set range. Both files were moved during my reorganisation, but were included in this export at the time and therefore fail this validation check.
Finally, the Work errors are the same as the first test (although more have crept in since!):
Plaintext
Work Errors (17):
--------------------------------------------------------------------------------
Row 223: Dave Angel - Artech (Original Mix): 1 validation error for Track
Work
  Input should be a valid string [type=string_type, input_value=nan, input_type=float]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
Ultimately, both tests match expectations!
Unit Tests With Amazon Q
Finally, I wanted to include some unit tests for this project. Unit testing is always a good idea, especially in this context where I can verify function outputs and error generation without needing to create numerous test files.
I figured this was a good opportunity to test Amazon Q Developer and see what it came up with. I gave it a fairly basic prompt, using the @workspace context to allow Q access to my project’s entire workspace as context for its responses:
Plaintext
@workspace write unit tests for this script using pytest
I tend to use pytest for my Python testing, as I find it simpler and more flexible than Python’s standard unittest library.
Q promptly provided several reasonable tests in response. This initiated a half-hour exchange between us focused on calibrating the existing tests and creating new ones. To be fair to Q, my initial prompt was quite basic and could have been much more detailed.
Amongst Q’s tests was this one testing an empty Artist field:
Python
@patch('pathlib.Path.exists')
def test_empty_artist(self, mock_exists):
    """Test that an empty artist fails validation."""
    # Mock file existence check
    mock_exists.return_value = True

    invalid_track_data = {
        "Name": "Test Track (Original Mix)",
        "Artist": "",  # Empty artist
        "Album": "01A-Abm",
        "Work": "Test Label",
        "Genre": "Trance",
        "TrackNumber": 130,
        "Year": 2020,
        "MyRating": 80,
        "Location": "C:\\Music\\test_track.mp3"
    }

    # Expect validation to fail for the empty Artist
    with pytest.raises(ValueError, match="must not be null or empty"):
        Track(**invalid_track_data)
This one, checking an invalid Camelot Notation:
Python
@patch('pathlib.Path.exists')
def test_invalid_album_not_camelot(self, mock_exists):
    """Test that an invalid Camelot notation fails validation."""
    # Mock file existence check
    mock_exists.return_value = True

    invalid_track_data = {
        "Name": "Test Track (Original Mix)",
        "Artist": "Test Artist",
        "Album": "Invalid Key",  # Not a valid Camelot notation
        "Work": "Test Label",
        "Genre": "Trance",
        "TrackNumber": 130,
        "Year": 2020,
        "MyRating": 80,
        "Location": "C:\\Music\\test_track.mp3"
    }

    with pytest.raises(ValueError, match="Album must be a valid Camelot notation"):
        Track(**invalid_track_data)
And this one, checking what happens with an incomplete DataFrame:
Python
@patch('wolfie_exportvalidator_itunes.detect_file_encoding')
@patch('pandas.read_csv')
def test_load_itunes_data_missing_columns(self, mock_read_csv, mock_detect_encoding):
    """Test loading iTunes data with missing columns."""
    # Setup mocks
    mock_detect_encoding.return_value = 'utf-8'
    mock_df = pd.DataFrame({
        'Name': ['Test Track (Original Mix)'],
        'Artist': ['Test Artist'],
        # Missing required columns
    })
    mock_read_csv.return_value = mock_df

    # Call function and verify it raises an error
    with pytest.raises(ValueError, match="Missing expected columns"):
        load_itunes_data(Path('dummy_path.txt'))
I’ll include the whole test suite in my GitHub repo. Let’s conclude with pytest‘s output:
I had a very positive experience overall! Working with Amazon Q allowed me to write the tests more quickly than I could have done on my own. We would have been even faster if I had put more thought into my initial prompt. Additionally, since Q Developer offers a generous free tier, it didn’t cost me anything.
GitHub Repo
I have committed my Pydantic data validation script, test suite and documentation in the repo below:
Note that the parameters are decoupled from the Pydantic script. This will allow me to reuse some parameters across future validation scripts and has enabled me to exclude the system parameters from the repository.
Summary
In this post, I used the Pydantic Python library to create data validation and observability processes for my Project Wolfie iTunes data.
I found Pydantic very impressive! Its simplicity, functionality and interoperability make it an attractive addition to Python data pipelines, and its strong community support keeps Pydantic relevant and current. Additionally, Pydantic’s presence in FastAPI, PydanticAI and a managed AWS Lambda layer enables rapid integration and seamless deployment. I see many applications for it within Project Wolfie.
There’s lots more to Pydantic – this Pixegami video is a great walkthrough of Pydantic in action:
If this post has been useful then the button below has links for contact, socials, projects and sessions: