
YearCompass 2022-2023

In this post, I use the free YearCompass booklet to reflect on 2022 and to plan some professional goals for 2023.


Introduction

I’ve never been a fan of New Year’s Resolutions. Life moves fast! The idea that goals set on a given day will still be relevant in six or three months (or sometimes even one!) should be taken with a pinch of salt – especially after working in technology for a while!

I prefer not to wait. My attitude towards forward planning is best summed up by these quotes:

The best time to write a story is yesterday. The next best time is today.

R. A. Lafferty

The best time to do something significant is between yesterday and tomorrow.

Zig Ziglar

Enter YearCompass – a free tool for reflection and planning. I first heard about YearCompass from Brent Ozar:

I put it on my calendar for December and, well, here it is.

Additionally, on December 1st Paul Randal of SQLSkills posted an offer of mentorship. Paul wants to see a blog post from interested parties, which overlaps with YearCompass pretty well.

With this post, I can meet both goals at the same time. Whichever choice Paul makes, I’ll have a strong list of 2023 goals that I can refer to in the new year.

Let’s start by examining YearCompass.

YearCompass

From the YearCompass site:

YearCompass is a free booklet that helps you reflect on the year and plan the next one. With a set of carefully selected questions and exercises, YearCompass helps you uncover your own patterns and design the ideal year for yourself.

YearCompass.com

YearCompass started as a reflection tool for a small group of friends and was made publicly available in 2012. It is available as an A4 and A5 PDF, with options to fill out the booklet both digitally and by hand. YearCompass is currently available in 52 languages.

YearCompass positions itself as an alternative to New Year’s Resolutions. Each PDF has two sections. The first half examines the previous year and the second half considers the next one.

Each section consists of a series of prompts and questions. These guide the user through the reflection process and help them identify their priorities and plan for the future.

Some of the questions are:

  • What are you most proud of?
  • Who are the three people who influenced you the most?
  • What does the year ahead of you look like?

While prompts include:

  • List your three biggest challenges from last year.
  • This year, I will be bravest when…
  • I want to achieve these three things the most.

There are no hard and fast rules for completing YearCompass. The booklet suggests a break between sections, although some prefer to do the whole thing in one sitting.

Personally, I dedicated an hour to each section on separate days, then went back to it for the rest of the week as I remembered other things. This helped a lot with the sections I struggled with.

YearCompass 2023 Goals

In this section, I examine my 2023 professional goals from my YearCompass booklet.

Confidence Building & Anxiety Management

One of the reasons I started amazonwebshark was in response to the imposter syndrome I felt after becoming a Data Engineer in 2021. In the first half of 2022 I got the balance wrong, as what I was posting wasn’t really improving me as a Data Engineer. This ended up fuelling the very anxiety I was trying to control!

My recent projects and posts have boosted my confidence and improved my data skills. I’ve been able to apply learnings from here to my working role, and have tried things here that have increased my fluency with our current codebase.

In 2023 I want to continue this momentum. I have some ideas for future projects that will flex my creativity, focus my development and further boost my confidence such as:

I also have subscriptions for A Cloud Guru and DataCamp, and want to explore those sites more next year. I’m going to have a proper think about prioritisation over Christmas, and will get some peer advice when I have some ideas.

Collaborating & Communicating

In the first few months of 2022, my confidence and anxiety issues led to slips in my communication and teamwork. I found it hard to ask for help and struggled to articulate myself, and was in a bad place for a while.

Thankfully I was able to get some help and turn this around over Summer. Although I still mix things up when I talk at the moment, I’ve been able to bolster my communication skills and increase my value within my team.

If 2022 was about repair, then I want 2023 to be about strengthening. I have more Data Engineering knowledge than I did a year ago, and feel like my finger is more on the pulse at work now. I want to continue adding value, resilience and agility to the projects we are responsible for.

Knowledge Sharing & Presenting

This year I’ve learned about a range of languages, tools and methodologies as part of my role. I’ve also earned the AWS Certified Developer Associate certification and the Microsoft SC-900 and AI-900 certifications, so I’ve improved my knowledge of topics like development, deployment, security, monitoring and machine learning.

While all this knowledge is great, it’s no good if it just stays in my head! On the back of boosting my confidence and bolstering my communication skills, I want to improve my knowledge-sharing and presentation skills.

I want to link my knowledge sharing to my efforts to improve my confidence and communication. Opportunities to apply my knowledge and skills at work are frequent, and being able to give knowledgeable, confident and persuasive suggestions and feedback will help both me and my team create value and meet our goals.

I also want to improve my presentation skills. We have regular departmental meetings that encourage lightning talks, and having presented twice this year I now feel that I have some good foundations to build on. I want to get more competent at presenting, with long-term ambitions to speak at a user group or community event when my confidence allows!

Summary

In this post, I used the free YearCompass booklet to reflect on 2022 and to plan some professional goals for 2023.

On reflection, my YearCompass 2023 goals relate to each other pretty well. Improving my communication skills and anxiety management will make it easier to collaborate and will help me present better. Improving my confidence will help me become more influential and persuasive, and I will feel more comfortable when sharing knowledge.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~


Microsoft AI-900: Artificial Fintelligence

In this post, I talk about my recent experience with the Microsoft AI-900 certification and the resources I used to study for it.


Introduction

On 04 November 2022, I earned the Microsoft Certified Azure AI Fundamentals certification. I’ve had my eye on the AI-900 since passing the SC-900 over Summer. Last month I found the time to sit down with it properly! This is my fourth Microsoft certification, joining my other badges on Credly.

Firstly, I’ll talk about my motivation for studying for the Microsoft AI-900. Then I’ll talk about the resources I used and how they fitted into my learning plan.

Motivation

In this section, I’ll talk about my reasons for studying for the Microsoft AI-900.

Increased Effectiveness

A common Data Engineering task is extracting data. This usually involves structured data, which have well-defined data models that help to organise and map the data available.

Sources of structured data include:

  • CSV data extracts.
  • Excel spreadsheets.
  • SQL database tables.

Increasingly, insights are being sought from unstructured data. This is harder to extract, as unstructured data aren’t arranged according to preset data models or schemas.

Examples of unstructured data sources include:

  • Inbound correspondence.
  • Recorded calls.
  • Social media activity.

Historically, extracting unstructured data needed special equipment, complex software and dedicated personnel. In recent years, public cloud providers have produced Artificial Intelligence and Machine Learning services aimed at quickly and easily extracting unstructured data.

In the case of Microsoft Azure, these include:

Knowing that these tools exist and understanding their use cases will help me create future data pipelines and ETL processes for unstructured data sources. This will add value to the data and will make me a more effective Data Engineer.

And on that note…

Skill Diversification

Recently I was introduced to the idea of T-shaped skills in a CollegeInfoGeek article by Ransom Patterson. Ransom summarises a T-shaped person as having:

…deep knowledge/skills in one area and a broad base of general supporting knowledge/skills.

“The T-Shaped Person: Building Deep Expertise AND a Wide Knowledge Base” – Ransom Patterson on CollegeInfoGeek

Ransom’s article made me realise that I’ve been developing T-shaped skills for a while. I’ve then applied these skills back to my Data Engineering role. For example:

My studying for the AI-900 is a continuation of this. This isn’t me saying “I want to be a Machine Learning Engineer now!” This is me seeing a topic, being interested in it and examining how it could be useful for my personal and professional interests.

Multi-Cloud Fluency

This kind of follows on from T-shaped skills.

Earlier in 2022, Forrest Brazeal examined the benefits of multi-cloud fluency, and built a case summarised in one of his tweets:

This applies to the data world pretty well, as many public cloud services can interact with each other across vendor boundaries.

For example:

With multi-cloud fluency, decisions can be made based on using the best services for the job as opposed to choosing services based on vendor or familiarity alone.

This GuyInACube video gives an example of this using the Microsoft Power BI Service:

To connect the Power BI Service to an AWS data source, a data gateway needs to be running on an EC2 instance to handle authentication. This introduces server costs and network management overhead.

Conversely, data stored in Azure (Azure SQL Database in the video) can be accessed by other Azure services with a single click. As a multi-cloud fluent Data Engineer in this scenario, I now have options where previously there was only one choice.

Improved multi-cloud fluency means I can use AWS for some jobs and Azure for others, in the same way that I use Windows for some jobs and Linux for others. It’s about having the knowledge and skills to choose the best tools for the job.

Resources

In this section, I’ll talk about the resources I used to study for the Microsoft AI-900.

John Savill

John Savill’s Technical Training YouTube channel started in 2008. Since then he’s created a wide range of videos from deep dives to weekly updates. In addition, he has numerous playlists for many Microsoft certifications including the AI-900.

Having watched John’s SC-900 video I knew I was in good hands. John has a talent for simple, straightforward discussions of important topics. His AI-900 video was the first resource I used when starting to study, and the last resource I used before taking the exam.

Exceptional work as usual John!

Microsoft Learn


Microsoft Learn was my main study resource for the AI-900. It has a lot going for it! The content is up to date, the structure makes it easy to dip in and out, and the knowledge checks and XP system keep the momentum up.

To start, I attended one of Microsoft’s Virtual Training Days. The courses are free, and their AI Fundamentals course currently provides a free certification voucher on completion. Microsoft Product Manager Loraine Lawrence presented the course and it was a great introduction to the various Azure AI services.

Complementing this, Microsoft Learn has a free learning path with six modules tailored for the AI-900 exam. These modules are well-organised and communicate important knowledge without being too complex.

The modules include supporting labs for learning reinforcement. The labs are well documented and use the Azure Portal, Azure Cloud Shell and Git to build skills and real experience.

I didn’t end up using the labs due to time constraints, but someone else had me covered on that front…

Andrew Brown

Andrew Brown is the CEO of ExamPro. He has numerous freeCodeCamp videos, including his free AI-900 one.

I’ve used some of Andrew’s AWS resources before and found this to be of his usual high standard. The video is four hours long, with dozens of small lectures that are time-stamped in the video description. This made it easy to replay sections during my studies.

Andrew also includes two hours of him using Azure services like Computer Vision, Form Recognizer and QnA Maker. This partnered with the Microsoft Learn material very well and helped me understand and visualise topics I wasn’t 100% on.

Summary

In this post, I talked about my recent experience with the Microsoft AI-900 certification and the resources I used to study for it. I can definitely use the skills I’ve picked up moving forwards, and the certification is some great self-validation!

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~


Production Code Qualities

In this post, I respond to November 2022’s T-SQL Tuesday #156 Invitation and give my thoughts on some production code qualities.



Introduction

This month, Tomáš Zíka’s T-SQL Tuesday invitation was as follows:

Which quality makes code production grade?

Please be as specific as possible with your examples and include your reasoning.

Good question!

In each section, I’ll use a different language. Firstly I’ll create a script, and then show a problem the script could encounter in production. Finally, I’ll show how a different approach can prevent that problem from occurring.

I’m limiting myself to three production code qualities to keep the post at a reasonable length, and so I can show some good examples.

Precision

In this section, I use T-SQL to show how precise code in production can save a data pipeline from unintended failure.

Setting The Scene

Consider the following SQL table:

USE [amazonwebshark]
GO

CREATE TABLE [2022].[sharkspecies](
	[shark_id] [int] IDENTITY(1,1) NOT NULL,
	[name_english] [varchar](100) NOT NULL,
	[name_scientific] [varchar](100) NOT NULL,
	[length_max_cm] [int] NULL,
	[url_source] [varchar](1000) NULL
)
GO

This table contains a list of sharks, courtesy of the Shark Foundation.

Now, let’s say that I have a data pipeline that uses data in amazonwebshark.2022.sharkspecies for transformations further down the pipeline.

No problem – I create a #tempsharks temp table and insert everything from amazonwebshark.2022.sharkspecies using SELECT *:
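The original post shows this script as a screenshot. A minimal reconstruction of what it describes might look like this, with the temp table definition assumed to mirror the five columns above:

USE [amazonwebshark]
GO

--Create a temp table with the same five columns as 2022.sharkspecies
CREATE TABLE #tempsharks(
	[shark_id] [int] NOT NULL,
	[name_english] [varchar](100) NOT NULL,
	[name_scientific] [varchar](100) NOT NULL,
	[length_max_cm] [int] NULL,
	[url_source] [varchar](1000) NULL
)
GO

--Insert everything from the source table
INSERT INTO #tempsharks
SELECT *
FROM [2022].[sharkspecies]
GO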

When this script runs in production, I get two tables with the same data:

[Screenshot: the sharkspecies table and the #tempsharks temp table returning the same data]

What’s The Problem?

One day a new last_evaluated column is needed in the amazonwebshark.2022.sharkspecies table. I add the new column and backfill it with 2019:

ALTER TABLE [2022].sharkspecies
ADD last_evaluated INT DEFAULT 2019 WITH VALUES
GO

However, my script now fails when trying to insert data into #tempsharks:

(1 row affected)

(4 rows affected)

Msg 213, Level 16, State 1, Line 17
Column name or number of supplied values does not match table definition.

Completion time: 2022-11-02T18:00:43.5997476+00:00

#tempsharks has five columns but amazonwebshark.2022.sharkspecies now has six. My script is now trying to insert all six sharkspecies columns into the temp table, causing the Msg 213 error.

Doing Things Differently

The solution here is to replace row 21’s SELECT * with the precise columns to insert from amazonwebshark.2022.sharkspecies:
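A sketch of the revised insert, naming the five columns from the table definition above:

--Insert only the five columns that #tempsharks expects
INSERT INTO #tempsharks
SELECT [shark_id], [name_english], [name_scientific], [length_max_cm], [url_source]
FROM [2022].[sharkspecies]
GO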

While amazonwebshark.2022.sharkspecies now has six columns, my script is only inserting five of them into the temp table:

[Screenshot: query results showing the five columns inserted into #tempsharks]

I can add the last_evaluated column into #tempsharks in future, but its absence in the temp table isn’t causing any immediate problems.

Works The Same In Other Environments

In this section, I use Python to show the value of production code that works the same in non-production.

Setting The Scene

Here I have a Python script that reads data from an Amazon S3 bucket using a boto3 session. I pass my AWS_ACCESSKEY and AWS_SECRET credentials in from a secrets manager, and create an s3bucket variable for the S3 bucket path:
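The script itself appears as an image in the original post. A minimal sketch of the setup it describes might look like this; the credential placeholders and object key are illustrative:

import boto3

# AWS_ACCESSKEY and AWS_SECRET come from a secrets manager in the original
# script; placeholder values are shown here
AWS_ACCESSKEY = "<access-key-from-secrets-manager>"
AWS_SECRET = "<secret-key-from-secrets-manager>"

# Hard-coded bucket path - the cause of the problem discussed below
s3bucket = "s3://dev-bucket"

session = boto3.session.Session(
    aws_access_key_id=AWS_ACCESSKEY,
    aws_secret_access_key=AWS_SECRET,
)
s3 = session.client("s3")

# Illustrative object key - the real script reads whatever the pipeline needs
response = s3.get_object(Bucket=s3bucket.removeprefix("s3://"), Key="data/example.csv")
data = response["Body"].read()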

When I deploy this script to my dev environment it works fine.

What’s The Problem?

When I deploy this script to production, s3bucket will still be s3://dev-bucket. The potential impact of this depends on the AWS environment setup:

Different AWS account for each environment:

  • dev-bucket doesn’t exist in Production. The script fails.

Same AWS account for all environments:

  • Production IAM roles might not have any permissions for dev-bucket. The script fails.
  • Production processes might start using a dev resource. The script succeeds but now data has unintentionally crossed environment boundaries.

Doing Things Differently

A solution here is to dynamically set the s3bucket variable based on the ID of the AWS account the script is running in.

I can get the AccountID using AWS STS. I’m already using boto3, so I can use it to create an STS client with my AWS credentials.

STS then has a GetCallerIdentity action that returns the AWS AccountID linked to the AWS credentials. I capture this AccountID in an account_id variable, then use that to set s3bucket’s value:
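Building on the sketch above, that might look something like this; the account IDs and bucket names are illustrative:

import boto3

# Same credentials as the earlier sketch (retrieved from a secrets manager)
sts = boto3.client(
    "sts",
    aws_access_key_id=AWS_ACCESSKEY,
    aws_secret_access_key=AWS_SECRET,
)

# GetCallerIdentity returns the account ID linked to the credentials in use
account_id = sts.get_caller_identity()["Account"]

# Illustrative mapping of AWS account IDs to environment buckets
S3_BUCKETS = {
    "111111111111": "s3://dev-bucket",
    "222222222222": "s3://prod-bucket",
}

s3bucket = S3_BUCKETS.get(account_id)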

More details about get_caller_identity can be found in the AWS Boto3 documentation.

For bonus points, I can terminate the script if the AWS AccountID isn’t defined. This prevents undesirable states if the script is run in an unexpected account.
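Continuing the sketch above, a guard along these lines would stop the script before any S3 calls are made:

# account_id and S3_BUCKETS come from the previous sketch
if account_id not in S3_BUCKETS:
    raise SystemExit(f"Unrecognised AWS account {account_id}. Exiting.")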

Speaking of which…

Prevents Undesirable States

In this section, I use PowerShell to demonstrate how to stop production code from doing unintended things.

Setting The Scene

In June I started writing a PowerShell script to upload lossless music files from my laptop to one of my S3 buckets.

I worked on it in stages. This made it easier to script and test the features I wanted. By the end of Version 1, I had a script that dot-sourced its variables and wrote everything in my local folder $ExternalLocalSource to my S3 bucket $ExternalS3BucketName:

#Load Variables Via Dot Sourcing
. .\EDMTracksLosslessS3Upload-Variables.ps1


#Upload File To S3
Write-S3Object -BucketName $ExternalS3BucketName -Folder $ExternalLocalSource -KeyPrefix $ExternalS3KeyPrefix -StorageClass $ExternalS3StorageClass

What’s The Problem?

NOTE: There were several problems with Version 1, all of which were fixed in Version 2. In the interests of simplicity, I’ll focus on a single one here.

In this script, Write-S3Object will upload everything in the local folder $ExternalLocalSource to the S3 bucket $ExternalS3BucketName.

Problem is, the $ExternalS3BucketName S3 bucket isn’t for everything! It should only contain lossless music files!

At best, Write-S3Object will upload everything in the local folder to S3 whether it’s music or not.

At worst, if the script is pointing at a different folder it will start uploading everything there instead! PowerShell commonly defaults to C:\Windows, so this could cause all kinds of problems.

Doing Things Differently

I decided to limit the extensions that the PowerShell script could upload.

Firstly, the script captures the extensions for each file in the local folder $ExternalLocalSource using Get-ChildItem and [System.IO.Path]::GetExtension:

$LocalSourceObjectFileExtensions = Get-ChildItem -Path $ExternalLocalSource | ForEach-Object -Process { [System.IO.Path]::GetExtension($_) }

Then it checks each extension using a ForEach loop. If an extension isn’t in the list, PowerShell reports this and terminates the script:

ForEach ($LocalSourceObjectFileExtension In $LocalSourceObjectFileExtensions)
{
    If ($LocalSourceObjectFileExtension -NotIn ".flac", ".wav", ".aif", ".aiff")
    {
        Write-Output "Unacceptable $LocalSourceObjectFileExtension file found.  Exiting."
        Start-Sleep -Seconds 10
        Exit
    }
}

So now, if I attempt to upload an unacceptable .log file, PowerShell reports this and terminates the script:

**********************
Transcript started, output file is C:\Files\EDMTracksLosslessS3Upload.log

Checking extensions are valid for each local file.
Unacceptable .log file found.  Exiting.
**********************

While an acceptable .flac file will produce this message:

**********************
Transcript started, output file is C:\Files\EDMTracksLosslessS3Upload.log

Checking extensions are valid for each local file.
Acceptable .flac file.
**********************

To see the code in full, as well as the other problems I solved, please check out my post from June.

Summary

In this post, I responded to November 2022’s T-SQL Tuesday #156 Invitation and gave my thoughts on some production code qualities. I gave examples of each quality and showed how they could save time and prevent unintended problems in a production environment.

Thanks to Tomáš for this month’s topic! My previous T-SQL Tuesday posts are here.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~