
amazonwebshark’s Abandoned 2019 AWS Architecture

In this post, I respond to January 2024’s T-SQL Tuesday #170 Invitation by examining amazonwebshark’s abandoned 2019 AWS architecture.



Introduction

amazonwebshark is two years old today!


I wrote an analysis post last year, and when deciding on the second birthday’s topic I saw this month’s T-SQL Tuesday invitation from Reitse Eskens:

“What projects did you abandon but learn a lot from?”

One immediately sprang to mind! Since this T-SQL Tuesday falls on amazonwebshark’s second birthday, it seemed a good time to evaluate it.

Rewind to 2019. I was new to AWS and was studying towards their Certified Cloud Practitioner certification. To that end, I set up an AWS account and tried several tutorials including an S3 static website.

After earning the certification, I kept the site going to continue my learning journey. I made the site into a blog and chose a snappy (Groan – Ed) name…amazonwebshark. In fact, that site is still around!

I’ll start by looking at the site architecture, then examine what went wrong and end with how it influenced the current amazonwebshark site. For the rest of this post, I’ll refer to amazonwebshark 2019 as awshark2019 and the current version as awshark2021.

How awshark2019 Was Built

In this section, I examine the architecture behind awshark2019.

Hugo Static Site Generator

Hugo is an open-source static site generator written in the Go programming language. Go is known for its efficiency and performance, making Hugo’s build process very fast.

Hugo’s content files are written in Markdown which enables easy post creation and formatting. These Markdown posts are then converted to static HTML files at build time. The built site has a file system structure and can be deployed to platforms like traditional web servers, content delivery networks (CDNs), and cloud storage services.
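As a rough sketch of that workflow (my awshark2019 commands are long gone, so the site and post names here are placeholders):

# Create a new site and a first post
hugo new site myblog
cd myblog
hugo new posts/my-first-post.md

# Build the Markdown content into static HTML in the public/ folder
hugo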

Speaking of which…

S3 Static Site

awshark2019 has been operating out of a public S3 bucket since its creation:

[Screenshot: the awshark2019 S3 website bucket overview]

This won’t be a particularly technical section, as the AWS documentation and tutorial are already great resources for this S3 feature. So let’s talk about the benefits of static sites instead:

  • Since static websites consist of pre-built HTML, CSS, and JavaScript files, they load quickly and can scale rapidly.
  • Static websites are inherently more secure and maintainable because there’s no server-side code execution, database vulnerabilities or plugin updates.
  • All site processing is done before deployment, so the only ongoing cost is for storage. awshark2019 weighs in at around 4MB, so in the four years it has been live this has been essentially free.
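For anyone curious, the hosting setup itself boils down to a couple of AWS CLI calls (a sketch with a placeholder bucket name; the AWS tutorial covers the full process, including the bucket policy):

# Create the bucket and enable static website hosting
aws s3 mb s3://example-bucket
aws s3 website s3://example-bucket/ --index-document index.html --error-document error.html

# (A bucket policy allowing public reads is also needed - see the AWS tutorial)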

So far this all sounds good. What went wrong?

Why awshark2019 Failed

In this section, I examine awshark2019’s problems. Why was the 2019 architecture abandoned?

Unclear Objectives

Firstly, awshark2019 had no clear purpose.

In my experience, good blogs have their purpose nailed down. It could be automation, data, biscuits…anything, as long as it stays consistent and plays to the creator’s strengths.

With awshark2019, some posts are about S3 Static Sites and Billing Alerts. These are good topics to explore. However, almost half of the posts are about creating the site and are in a web design category. But the blog isn’t about web design, and I’ve never been a web designer!

Rounding things off, the About page is…the Hugo default. So who is the site for? If I, as the blog creator, don’t know that then what chance does anyone else have?

Poor Content

Secondly, as awshark2019’s objectives were unclear, the content was…not very good. The topic choices are disjointed, some of the posts are accidental documentation rehashes, and ultimately there’s little value.

Let’s take the example of Adding An Elastic IP To An Amazon Linux EC2 Instance. The post explores the basics, shows the AWS console changes and mentions costs. This is fine, but there’s not much else here. If I wrote this post today, I’d define a proper use case and explore the problem further by pinging the instance’s IP before and after a stoppage. That would show the problem instead of telling it.

Another post examines Setting Up A Second AWS Account With AWS Organizations. There’s more here than the IP address post, but there’s no context. What am I doing with the second account? Why does my use case call for AWS Organizations? What problems is it helping me solve?

There’s nothing in these posts that I can’t get from the AWS documentation and no new insights for readers.

Awkward To Publish

Finally, awshark2019 was too complex to publish. More accurately, Hugo’s deployment process wasn’t the problem. The way I was doing it was.

Hugo sites can be deployed in several ways. These centre around putting files and folders in a location accessible by the deployment service. So far so good.

But instead of automating this process, I had a horrible manual workflow of creating and testing the site locally, and then manually overwriting the existing S3 objects. This quickly got so tedious that I eventually ran out of enthusiasm.
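For comparison, the automation I was missing could have been as small as this (a hypothetical sketch, assuming a configured AWS CLI and using a placeholder bucket):

# Build the site locally
hugo

# Mirror the build output to the bucket; --delete removes stale objects
aws s3 sync ./public s3://example-bucket --delete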

What I Learned

In this section, I examine what I learned from the abandoned 2019 architecture when creating awshark2021.

Decide On Scope

My first key awshark2021 decision was the blog’s purpose.

While ‘Welcome To My Blog’ posts are something of a cliché, I took the time to write Introducing amazonwebshark as a standard to hold myself to:

By writing about my experiences I can check and confirm my understanding of new topics, give myself points of reference for future projects and exam revision, evidence my development where necessary and help myself out in the moments when my imposter syndrome sees an opportunity to strike.

Introducing amazonwebshark: What Is amazonwebshark For?

awshark2021 took as much admin away as possible, letting me explore topics and my curiosity instead. amazonwebshark was, and is, a place for me to:

  • Try things
  • Make mistakes
  • Improve myself
  • Be creative

While this is firstly a technology and cloud computing blog, I allow myself some freedom (for example the Me category) as long as the outcome is potentially useful. To this end, I’ve also written about life goals, problem-solving and public speaking.

Add Value

Secondly, let’s examine the posts themselves.

I probably average about eight hours of writing per post. I want to get the most out of that time investment, so I try to ensure my posts add value to their subject matter. There’s no set process for this, as value can take many forms like:

  • Examining how I apply services to my situation or use case.
  • Raising awareness of topics with low coverage.
  • Detailing surprising or unexpected event handling.

My attitude has always been that I’m not here to tell people how and why to do things. I’m here to tell people how and why I did things. Through this process, I can potentially help others in the technology community while also helping myself.

Post introspection and feedback have also led to improvements in my working practices.

It could be argued that amazonwebshark is a big ongoing peer review. It’s made me a better engineer and has hopefully helped others out too.

Keep It Simple

Finally, let’s discuss architecture.

awshark2021 is a WordPress blog, currently hosted on Hostinger servers. While this architecture isn’t free and has tradeoffs, it offers a fast, reliable deployment path managed by organisations specialising in this field.

This is exactly what I wanted for awshark2021:

…my main focus was to get the ball rolling and get something online. I’ve wanted to start a blog for some time, but have run into problems like knowledge gaps, time pressures and running out of enthusiasm.

Introducing amazonwebshark: Why Didn’t You Use AWS For Hosting?

I enjoy writing, so my priority is there. If I begin seriously considering a serverless amazonwebshark, one of the core tests will be the deployment process. For now, the managed services I’m paying for meet my needs and let me focus on creativity over admin.

Summary

In this post, I responded to January 2024’s T-SQL Tuesday #170 Invitation by examining amazonwebshark’s abandoned 2019 AWS architecture.

It’s unfair to blame the architecture. Rather, my implementation of it was at fault. awshark2019 was a good idea, but it suffered from poor and over-ambitious implementation decisions. I’ve considered deleting it. But if nothing else it reminds me of a few things:

  • I won’t always get it right first time.
  • It doesn’t have to be perfect.
  • Enjoy the process.

awshark2019’s lessons have allowed awshark2021 to reach two years. Happy birthday!


Thanks for reading ~~^~~


Automating Application Management With Winget

In this post, I try automating my laptop’s application management with the Windows Package Manager tool Winget.


Introduction

After much frustration with my laptop’s performance, I finally booked it in for upgrades: an SSD and 16GB of RAM. It’s now very responsive and far faster!

The shop originally planned to clone my existing HDD onto the new SSD. Unfortunately, the clone kept failing due to some bad sectors. Fortunately, this didn’t present a risk of data loss – most of my files are in OneDrive, and everything else is either in Amazon S3 or on external drives.

The failing clone meant that none of my previously installed programs and packages were on the new drive. I wasn’t flying blind here though, as I regularly use the free Belarc Advisor tool to create a list of installed programs.

But this is a heavily manual process, and the Belarc Advisor files contain a lot of unnecessary data that isn’t easy to use. So I found myself looking for an alternative!

User Story

In this section, I outline the problem I want to solve.

I want to capture a list of all applications installed on a given Windows device so that I can audit my device and have a better disaster recovery strategy.

ACCEPTANCE CRITERIA:

The process must be fully automated. I don’t want another job to do – I want the device to own this process.

The process must be efficient. Belarc Advisor gets the job done, but it takes time to load and does a bunch of other stuff that I don’t need.

There is no budget. Belarc Advisor isn’t ideal, but it’s free. I don’t want to start spending money on this problem now.

Introducing Winget

This section explains what Winget is and examines some of the features and benefits it offers.

What Is Winget?

Winget is a Windows Package Manager that helps install, upgrade, configure and delete applications on Windows 10 and Windows 11.

Package Managers look through configured repositories like the Windows Package Manager Community Repository for applications. If the application is available, it will be downloaded from the repository and installed onto the device.

Microsoft has open-sourced Winget and hosts the source code on GitHub. After installation, Winget is accessible via the Windows Terminal, PowerShell, and the Command Prompt.
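For example, finding and installing an application is a two-command job (a quick sketch; PowerToys is just an example package):

# Search the configured repositories
winget search powertoys

# Install using the package identifier from the search results
winget install Microsoft.PowerToys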

Package Manager Benefits

Package Managers like Winget offer several benefits over traditional methods:

  • Applications are installed as CLI commands, so there is no need to navigate to different websites or go through multiple installation steps.
  • Their repositories enforce a strict submission policy and use standardized package formats, so applications are installed consistently and reliably.
  • They manage application dependencies. If a desired application needs another application to work, the package manager will automatically install that application as well.
  • They lend themselves well to CI/CD pipelines, IAC and disaster recovery, as package manager commands can be used in scripts and automated processes.
  • Community tools like winstall exist that can create batch-installation Winget commands and scripts using a web GUI.

Winget Commands

Winget regularly receives new commands, a list of which is maintained by Microsoft. These commands can be loosely grouped by purpose. For this post, I will be focusing on the group that queries and exports the installed applications.

winget list displays a list of installed applications. The list includes the current version and the package’s source, and has several filtration options.

The winget list syntax is:

winget list [[-q] <query>] [<options>]
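For example, the query argument filters the list down to matching applications (a quick illustration; the query is just an example):

# Show only installed applications matching a query
winget list -q powertoys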

winget export creates and exports a JSON file of apps to a specified path.

This JSON file can then be used with the winget import command to batch-install applications and create build environments.

winget export’s JSON files do not include applications that are unavailable in the Windows Package Manager Community Repository. In these cases, the export command will show a warning.

The winget export syntax is:

winget export [-o] <output> [<options>]
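And the import counterpart, for completeness (a sketch based on Winget’s documented syntax; the filename is an example):

winget import [-i] <import-file> [<options>]

# Batch-install from a previous export, skipping anything Winget can't source
winget import -i .\WingetExport.json --ignore-unavailable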

Winget Scripting With VSCode

In this section, I write a script that will run the Winget commands.

I’m writing the script using Visual Studio Code, as this allows me to write the Winget script in the same way as other PowerShell scripts I’ve written.

Unique Filename

Firstly, I want to give each file a unique filename to make sure nothing is overwritten. A good way to do that here is by capturing Get-Date’s output in an ISO 8601-style format:

$RunDate = Get-Date -Format 'yyyy-MM-dd-HHmm'

This returns a string with an appropriate level of granularity, as I’m not going to be running this script multiple times a minute:

2023-04-26-1345

Winget Export Code

Next, I’ll script my export command.

I need to tell Winget where to create the file, and what to call it. I create a new folder for the exports and capture its path in a $ExportsFilePath variable.

Then I create a $ExportsFileName variable for the first part of the export file’s name. It uses a WingetExport string and the device’s name, which PowerShell can access using $env:computername:

$ExportsFileName = 'WingetExport' + '-' + $env:computername + '-'

Including the computer’s name means I can run this script on different devices and know which export files belong to which device:

WingetExport-LAPTOP-IFIJ32T-

My third $ExportsOutput variable joins everything together to produce an acceptable string for winget export’s output argument:

$ExportsOutput = $ExportsFilePath + '\' + $ExportsFileName + $RunDate + '.json'

An example of which is:

C:\{PATH}\WingetExport-LAPTOP-IFIJ32T-2023-04-26-1345.json

Finally, I can script the full command. This command creates an export file at the desired location and includes application version numbers for accuracy and auditing:

winget export --output $ExportsOutput --include-versions

Here are some sample exports:

{
  "$schema": "https://aka.ms/winget-packages.schema.2.0.json",
  "CreationDate": "2023-04-27T11:02:04.321-00:00",
  "Sources": [
    {
      "Packages": [
        {
          "PackageIdentifier": "Git.Git",
          "Version": "2.40.0"
        },
        {
          "PackageIdentifier": "Anki.Anki",
          "Version": "2.1.61"
        },
        {
          "PackageIdentifier": "Microsoft.PowerToys",
          "Version": "0.69.1"
        }
      ],
      "SourceDetails": {
        "Argument": "https://cdn.winget.microsoft.com/cache",
        "Identifier": "Microsoft.Winget.Source_8wekyb3d8bbwe",
        "Name": "winget",
        "Type": "Microsoft.PreIndexed.Package"
      }
    }
  ],
  "WinGetVersion": "1.4.10173"
}

As a reminder, these exports don’t include applications that are unavailable in Winget. This means winget export alone doesn’t meet the user story requirements, so there is still work to do!

Winget List Code

Finally, I’ll script my list command. This is mostly similar to the export command and I create the file path in the same way:

$ListsOutput = $ListsFilePath + '\' + $ListsFileName + $RunDate + '.txt'

The filename is changed for accuracy, and the extension is now .txt as no JSON is produced:

WingetList-LAPTOP-IFIJ32T-2023-04-25-2230.txt

Now, while winget list shows all applications on the device, it has no argument to save this list anywhere. For that, I need to pipe the winget list output to a PowerShell command that does create files – Out-File:

winget list | Out-File -FilePath $ListsOutput

Out-File writes the list to the $ListsOutput path, producing rows like these:

Name                       Id                 Version  Available  Source
Anki                       Anki.Anki          2.1.61              winget
Audacity 2.4.2             Audacity.Audacity  2.4.2    3.2.4      winget
DBeaver 23.0.2             dbeaver.dbeaver    23.0.2              winget
S3 Browser version 10.8.1  S3 Browser_is1     10.8.1

The entire script takes around 10 seconds to run in an open PowerShell session and produces no CPU spikes or memory load. The script is on my GitHub with redacted file paths.
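For reference, the assembled script looks something like this (a sketch: the full version is on my GitHub, and the {PATH} folders stand in for the redacted locations):

# ApplicationInventory.ps1 - capture installed applications with Winget

# Timestamp for unique, sortable filenames
$RunDate = Get-Date -Format 'yyyy-MM-dd-HHmm'

# Placeholder folders - swap in real paths
$ExportsFilePath = 'C:\{PATH}\WingetExport'
$ListsFilePath = 'C:\{PATH}\WingetList'

# The device name in each filename identifies which machine produced the files
$ExportsFileName = 'WingetExport' + '-' + $env:computername + '-'
$ListsFileName = 'WingetList' + '-' + $env:computername + '-'

$ExportsOutput = $ExportsFilePath + '\' + $ExportsFileName + $RunDate + '.json'
$ListsOutput = $ListsFilePath + '\' + $ListsFileName + $RunDate + '.txt'

# JSON export of repository-available applications, with versions
winget export --output $ExportsOutput --include-versions

# Plain-text list of everything installed, repository-available or not
winget list | Out-File -FilePath $ListsOutput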

Automation With Task Scheduler

In this section, I put Task Scheduler in charge of automating my application management Winget script.

What Is The Task Scheduler?

Task Scheduler began life on Windows 95 and is still used today by applications including Dropbox, Edge and OneDrive. Parts of it aren’t great: the Send Email and Display Message features are deprecated, and monitoring and error handling rely on creating additional tasks that are triggered by failure events.

However, it’s handy for running local scripts and has no dependencies as it’s built into Windows. It supports a variety of use cases which can be scripted or created in the GUI. Existing tasks are exportable as XML.

Creating A New Task

There is plentiful documentation for the Task Scheduler. The Microsoft Learn developer resources cover every inch of it, and these Windows Central and Windows Reports guides are great resources with extensive coverage.

In my case, I create a new ApplicationInventory task, set to trigger every time I log on to Windows:

[Screenshot: the ApplicationInventory task’s logon trigger in Task Scheduler]

The task starts powershell.exe, passing an argument of -file "C:\{PATH}\ApplicationInventory.ps1".

This works, but will force a PowerShell window to open every time the schedule runs. This can be stopped by configuring the task to Run whether user is logged on or not. Yup – it feels a bit hacky. But it works!
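Incidentally, the same task can also be created in script form (a sketch using the ScheduledTasks PowerShell cmdlets; the file path is a placeholder like before):

# Define what to run and when to run it
$Action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File "C:\{PATH}\ApplicationInventory.ps1"'
$Trigger = New-ScheduledTaskTrigger -AtLogOn

# Register the task with Task Scheduler
Register-ScheduledTask -TaskName 'ApplicationInventory' -Action $Action -Trigger $Trigger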

I now have a new scheduled task:

[Screenshot: the new ApplicationInventory task in Task Scheduler]

Testing

An important part of automating my application management with Winget is making sure everything works! In this section, I check the script and automation processes are working as expected.

I’ll start with the task automation. Task Scheduler has a History tab, which filters events from Event Viewer. Upon checking this tab, I can see the chain of events marking a successful execution:

[Screenshot: Task Scheduler’s History tab showing a successful execution]

When I check the WingetExport folder, it contains an export file created on 25/04/2023 at 22:30:

[Screenshot: the WingetExport folder containing the new export file]

And there are similar findings in the WingetList folder:

[Screenshot: the WingetList folder containing the new list file]

Both files open successfully and contain the expected data. Success!

Summary

In this post, I try automating my laptop’s application management with the Windows Package Manager tool Winget.


Thanks for reading ~~^~~


Can SQL Upgrades Be Avoided In The Cloud?

In this post I consider the February 2022 T-SQL Tuesday #147 Invitation “Upgrade Strategies” and look at the importance of upgrades in the cloud.

For this month’s T-SQL Tuesday, VoiceOfTheDBA’s invitation was as follows:

This month I want you to write about how you look at SQL Server upgrades. A few things you might think about:

  • Why we wait to upgrade?
  • Strategies for testing an upgrade
  • Smoke tests or other ways to verify the upgrade worked
  • Moving to the cloud to avoid upgrades
  • Using compatibility levels to upgrade an instance but not a database
  • Checklists of things to use in planning
  • The time it takes to upgrade your environment
  • What you evaluate in making a decision to upgrade or not?
  • Anything else

Immediately I was drawn to “Moving to the cloud to avoid upgrades”. Some perceive the cloud as a ‘set it and forget it’ environment. The reality is that cloud services still require upgrades, and leaving them unchecked can lead to security vulnerabilities and data issues.

What follows are some SQL-based observations from my experience to date. While it’s true this list is AWS-specific, it isn’t AWS-exclusive, as Azure and GCP operate similar services with similar considerations.

EC2 Upgrades

When I create a new EC2 instance I can generally expect it to be running the latest build of my chosen OS. However, an instance that has been running for a while will soon find itself needing system updates like any other computer. Some updates offer performance improvements or new features and are essentially optional. Others fix security vulnerabilities and bugs and are non-negotiable.

If that instance is running my relational database of choice, that too will need a range of updates from the desirable to the critical. AWS views this as a customer responsibility, with the AWS Shared Responsibility Model including the following:

Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
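On an Amazon Linux instance, for example, the OS side of that responsibility is a familiar routine (assuming shell access; Ubuntu-based instances would use apt instead):

# Apply outstanding updates, including security patches
sudo yum update -y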

However the managed services are viewed differently:

For abstracted services, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

So what if I get AWS to run my database for me?

RDS Upgrades

Amazon Relational Database Service (RDS) offers managed relational databases including Microsoft SQL Server, MySQL and PostgreSQL. RDS still uses EC2 instances but here they are managed by AWS and are not accessible by the user. This means OS management is no longer a customer responsibility.

Upgrades to the database engine are still a factor though. AWS try to make this as painless as possible – upgrades can be done using the console, the AWS CLI or the RDS API. It is still a manual process, although some minor engine versions can be upgraded automatically.

However, even on rails it’s still possible for an update to go wrong. AWS have a nine-point checklist for testing an upgrade that wouldn’t be out of place on-premises. AWS also encourage database snapshots and non-production testing. While RDS removes infrastructure complexity, the data is still the customer’s responsibility and needs the same care as ever.
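As an illustration, a scripted engine upgrade might look like this (a sketch with placeholder identifiers and version; valid upgrade targets depend on the engine):

# Snapshot first, then apply the engine upgrade
aws rds create-db-snapshot --db-instance-identifier example-db --db-snapshot-identifier example-db-pre-upgrade

aws rds modify-db-instance --db-instance-identifier example-db --engine-version 15.4 --allow-major-version-upgrade --apply-immediately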

Operational Upgrades

AWS constantly release new services intended to simplify workflows and reduce costs. Even when an organisation’s cloud setup is fully mature, it can still benefit from upgrading to these services.

When Athena debuted in 2016, it enabled the analysis of data in S3 using standard SQL. This removed the need for complex ETLs and data warehouses, and with Athena being serverless it was faster to set up and cheaper to operate than EC2, RDS or Redshift.

In 2020 Amazon announced new EBS GP3 volumes. GP3s have separate settings for performance and storage, and are recommended for applications like MySQL that need high performance at low costs. This meant organisations could save money by reducing their use of the more expensive IO1 volumes.
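Better still, existing volumes can be migrated in place with a single call (a sketch with a placeholder volume ID):

# Change an existing volume's type to gp3 without detaching it
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3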

More recently, AWS announced a new S3 Glacier Instant Retrieval storage class in 2021. This made S3 less expensive for a range of use cases including SQL backup storage and data lake archival.

Conclusion

The cloud offers numerous opportunities for individuals and organisations to develop, build and deploy more quickly and easily. But upgrades are a fact of life in technology regardless of platform. The cloud is still a collection of computers, which still need to respond to changing requirements and threats.

A well maintained and fully upgraded cloud environment is reliable, scalable and secure. A poorly maintained one can, at best, be expensive, slow and unwieldy. At worst it can be unreliable, vulnerable and in breach of terms of service.

If you want to check the health of your AWS account, AWS offers their Trusted Advisor and Well-Architected Tool services. These give free architectural advice, security recommendations, cost optimisations and best practice guidance.


Thanks for reading ~~^~~