
Can SQL Upgrades Be Avoided In The Cloud?

In this post I consider the February 2022 T-SQL Tuesday #147 Invitation “Upgrade Strategies” and look at the importance of upgrades in the cloud.

For this month’s T-SQL Tuesday, VoiceOfTheDBA’s invitation was as follows:

This month I want you to write about how you look at SQL Server upgrades. A few things you might think about:

Why we wait to upgrade?

Strategies for testing an upgrade

Smoke tests or other ways to verify the upgrade worked

Moving to the cloud to avoid upgrades

Using compatibility levels to upgrade an instance but not a database.

Checklists of things to use in planning

The time it takes to upgrade your environment

What you evaluate in making a decision to upgrade or not?

Anything else

Immediately I was drawn to “Moving to the cloud to avoid upgrades”. Some perceive the cloud as a ‘set it and forget it’ environment. The reality is that cloud services still require upgrades, and neglecting them can lead to security vulnerabilities and data issues.

What follows are some SQL-based observations from my experience to date. While this list is AWS-specific, it isn’t AWS-exclusive: Azure and GCP operate similar services with similar considerations.

EC2 Upgrades

When I create a new EC2 instance I can generally expect it to be running the latest build of my chosen OS. However, an instance that has been running for a while will soon find itself needing system updates like any other computer. Some updates offer performance improvements or new features and are essentially optional. Others fix security vulnerabilities and bugs and are non-negotiable.
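As a quick sketch of what that looks like in practice (placeholder instance ID, and assuming the SSM agent and a suitable IAM role are already in place), patching can be triggered through AWS Systems Manager rather than by logging on to each box:

import boto3

ssm = boto3.client("ssm")

# Ask Systems Manager to install missing patches on a hypothetical instance.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
)

# The command ID can be used to track the patch run.
print(response["Command"]["CommandId"])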

If that instance is running my relational database of choice, that too will need a range of updates from the desirable to the critical. AWS views this as a customer responsibility, with the AWS Shared Responsibility Model including the following:

Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

However, the managed services are viewed differently:

For abstracted services, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

So what if I get AWS to run my database for me?

RDS Upgrades

Amazon Relational Database Service (RDS) offers managed relational databases including Microsoft SQL Server, MySQL and PostgreSQL. RDS still uses EC2 instances but here they are managed by AWS and are not accessible by the user. This means OS management is no longer a customer responsibility.

Upgrades to the database engine are still a factor though. AWS try to make this as painless as possible: upgrades can be done using the console, the AWS CLI or the RDS API. Major version upgrades remain a manual process, although some minor engine versions can be upgraded automatically.
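For example, here is a minimal boto3 sketch of requesting an engine upgrade through the RDS API. The instance identifier and engine version are placeholders rather than values from a real environment:

import boto3

rds = boto3.client("rds")

# Request an engine version upgrade for a hypothetical instance.
# AllowMajorVersionUpgrade is only needed when crossing major versions.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sql-server-instance",
    EngineVersion="15.00.4236.7.v1",
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)

# Alternatively, opt in to automatic minor version upgrades.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sql-server-instance",
    AutoMinorVersionUpgrade=True,
)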

However, even on rails it’s still possible for an update to go wrong. AWS have a nine-point checklist for testing an upgrade that wouldn’t be out of place on-premises. AWS also encourage database snapshots and non-production testing. While RDS removes infrastructure complexity, the data is still the customer’s responsibility and needs the same care as ever.
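Taking a manual snapshot first gives a rollback point. Another hedged sketch with placeholder names:

import boto3

rds = boto3.client("rds")

# Take a manual snapshot of a hypothetical instance before upgrading,
# then wait for it to become available.
rds.create_db_snapshot(
    DBInstanceIdentifier="my-sql-server-instance",
    DBSnapshotIdentifier="pre-upgrade-2022-02",
)

waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier="pre-upgrade-2022-02")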

Operational Upgrades

AWS constantly release new services intended to simplify workflows and reduce costs. Even when an organisation’s cloud setup is fully mature, it can still benefit from upgrading to these services.

When Athena debuted in 2016 it enabled the analysis of data in S3 using standard SQL. For many ad hoc workloads this removed the need for complex ETL pipelines and data warehouses, and with Athena being serverless it was faster to set up and cheaper to operate than EC2, RDS or Redshift.

In 2020 Amazon announced new EBS GP3 volumes. GP3 volumes allow performance (IOPS and throughput) to be provisioned separately from storage size, and are recommended for applications like MySQL that need high performance at low cost. This meant organisations could save money by reducing their use of the more expensive IO1 volumes.
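Existing volumes can be changed in place. A minimal sketch, assuming a placeholder volume ID and gp3’s baseline performance figures:

import boto3

ec2 = boto3.client("ec2")

# Convert a hypothetical volume to gp3 in place.
# 3000 IOPS and 125 MiB/s are the gp3 baseline and cost nothing extra.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="gp3",
    Iops=3000,
    Throughput=125,
)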

More recently, AWS announced a new S3 Glacier Instant Retrieval storage class in 2021. This made S3 less expensive for a range of use cases including SQL backup storage and data lake archival.
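Objects can be written straight into the new class by setting their storage class at upload time. A small sketch with placeholder bucket, key and file names:

import boto3

s3 = boto3.client("s3")

# Upload a hypothetical SQL backup directly to Glacier Instant Retrieval.
s3.upload_file(
    Filename="backups/sales.bak",
    Bucket="my-backup-bucket",
    Key="sql/sales.bak",
    ExtraArgs={"StorageClass": "GLACIER_IR"},
)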

Conclusion

The cloud offers numerous opportunities for individuals and organisations to develop, build and deploy more quickly and easily. But upgrades are a fact of life in technology regardless of platform. The cloud is still a collection of computers, and those computers still need to respond to changing requirements and threats.

A well maintained and fully upgraded cloud environment is reliable, scalable and secure. A poorly maintained one can, at best, be expensive, slow and unwieldy. At worst it can be unreliable, vulnerable and in breach of terms of service.

If you want to check the health of your AWS account, AWS offers the Trusted Advisor and Well-Architected Tool services. These provide architectural advice, security recommendations, cost optimisation checks and best practice guidance, with the core checks available at no extra cost.


Thanks for reading ~~^~~


Using Athena To Query S3 Inventory Parquet Objects

In this post I’ll be using Amazon Athena to query data created by the S3 Inventory service.

When I wrote about my first impressions of S3 Glacier Instant Retrieval last month, I noticed some of my S3 Inventory graphs showed figures I didn’t expect. I couldn’t remember many of the objects in the InMotion bucket, and didn’t know that some were in Standard! I went through the bucket manually and found the Standard objects, but still had other questions that I wasn’t keen on solving by hand.

So while I was on-call over Christmas I decided to take a closer look at Athena – the AWS serverless query service designed to analyse data in S3. I’ve used existing setups at work but this was my first time experiencing it from scratch, and I made use of the AWS documentation about querying Amazon S3 Inventory with Amazon Athena and the Andy Grimes blog “Manage and analyze your data at scale using Amazon S3 Inventory and Amazon Athena” to fill in the blanks.

We’ve Got a File On You

First I created an empty s3inventory Athena database. Then I created a s3inventorytable table using the script below, specifying the 2022-01-01 symlink.txt Hive object created by S3 Inventory as the data source:

CREATE EXTERNAL TABLE s3inventorytable(
         bucket string,
         key string,
         version_id string,
         is_latest boolean,
         is_delete_marker boolean,
         size bigint,
         last_modified_date bigint,
         e_tag string,
         storage_class string,
         is_multipart_uploaded boolean,
         replication_status string,
         encryption_status string,
         object_lock_retain_until_date bigint,
         object_lock_mode string,
         object_lock_legal_hold_status string,
         intelligent_tiering_access_tier string,
         bucket_key_status string
) 
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
  STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
  LOCATION 's3://[REDACTED]/hive/dt=2022-01-01-01-00/';

Then I ran a query to determine the storage classes in use in the InMotion bucket and the number of objects assigned to each:

SELECT storage_class, count(*) 
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

The results were as follows:

[Image: Athena results showing the object count for each storage class]

41 Standard objects?! I wasn’t sure what they were and so added object size into the query:

SELECT storage_class, count(*), sum(size)
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class
[Image: Athena results showing the count and total size for each storage class]

The zero total size, along with some further investigation, confirmed that the Standard objects were prefixes and so presented no problems.

Next, I wanted to check for unwanted previous versions of objects using the following query:

SELECT key, size 
FROM "s3inventory"."s3inventorytable" 
WHERE is_latest = FALSE

This query returned another prefix, so again there were no actions needed:

[Image: Athena results for the previous-versions query, returning a single prefix]

Further investigation found that this prefix also had no storage class assigned to it, as seen in the results above.

For Old Time’s Sake

I then wanted to see the youngest and oldest objects for each storage class, and ran the following query:

SELECT storage_class, 
MIN(last_modified_date), 
MAX(last_modified_date) 
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

What I got back was unexpected:

[Image: Athena results with last_modified_date shown as raw epoch numbers]

S3 Inventory stores dates as Unix Epoch Time, so I needed a function to transform the data to a human-legible format. Traditionally this would involve CAST or CONVERT, but as Athena uses Presto, additional functions such as from_unixtime are available:

from_unixtime(unixtime) → timestamp

Returns the UNIX timestamp unixtime as a timestamp.

I updated the query to include this function:

SELECT storage_class, 
MIN(from_unixtime(last_modified_date)),
MAX(from_unixtime(last_modified_date))
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

This time the dates were human-legible but completely inaccurate:

[Image: Athena results with human-legible but inaccurate dates]

I then found a solution on Stack Overflow, where a user suggested dividing an inflated Unix Epoch Time value by 1000. The last_modified_date values here are in milliseconds, while from_unixtime expects seconds, so I applied this suggestion to my query by dividing the last modified dates by 1000:

SELECT storage_class, 
MIN(from_unixtime(last_modified_date/1000)),
MAX(from_unixtime(last_modified_date/1000))
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

The results after this looked far more reasonable:

[Image: Final Athena results with accurate MIN and MAX dates per storage class]

And EpochConverter confirmed the human time was correct for the Deep Archive MIN(last_modified_date) Unix value of 1620147401000.
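The same check can be reproduced locally; here is a minimal sketch in Python using only the standard library:

from datetime import datetime, timezone

# The value is in milliseconds, so divide by 1000 before converting.
epoch_ms = 1620147401000
print(datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc))
# 2021-05-04 16:56:41+00:00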

So there we go! An introduction to Athena and a practical use of the data from S3 Inventory!


Thanks for reading ~~^~~


Introducing amazonwebshark.com

It gives me great pleasure to introduce amazonwebshark.com! A project that I started in October 2021 is now seeing the light of day and I thought I’d take the time to go through some of the decisions behind it and what my ambitions for the site are.

What Is amazonwebshark For?

At the time of writing I’ve been a Data Engineer for a few months, and there are several elements of the role that I’d like to get more familiar with – for example, my Python skills need sharpening and I’ve not used Git properly yet. By writing about my experiences I can check and confirm my understanding of new topics, give myself points of reference for future projects and exam revision, evidence my development where necessary, and help myself out in the moments when my imposter syndrome sees an opportunity to strike.

How Did You Set amazonwebshark Up?

The domain was purchased using Amazon Route 53. This keeps my setup and overall billing somewhat simplified and also means I can try out Route 53’s integrations with other AWS services.

My hosting is via Bluehost. For around £2.66 a month they’re sorting out my server, database, CDN and SSL. That price was a Black Friday special and goes up after a year, but so far I’ve been very impressed with their communication and customer service, so I’ll see how it goes.

Why Didn’t You Use AWS For Hosting?

Besides the bargain price, Bluehost brings with it a level of convenience. I could have used my own EC2/RDS setup. Or Lightsail. Or even a static S3 site. But my main focus was to get the ball rolling and get something online. I’ve wanted to start a blog for some time, but have run into problems like knowledge gaps, time pressures and running out of enthusiasm. Bluehost offered a quick and simple process that sorted out everything I needed, letting me get the actual blog started.

This is not to say I’m unwilling to roll my own in the future, of course. Stay tuned.

Why Start A Blog At All When LinkedIn And Medium Are Around?

I’m not keen on the Medium model. Many times I’ve seen an article that I fancy reading, only to be met with forced login requests or paywalls, both of which are deeply frustrating. LinkedIn is great for articles, but this way I have all my content ringfenced together and under my control. Social media platforms change constantly and there is no guarantee that features, formats and content availability will stay the same or continue to exist from month to month. This site will hopefully let me sidestep any unpleasant surprises of that nature.

What’s The Plan For The Next Few Months?

I’m currently studying towards the AWS Certified Developer – Associate certification, and want to try using my work with the Strava API as the basis for a Lambda function and a CI/CD pipeline to get experience with some AWS services I’ve yet to use. Plus I have a Trello board full of ideas and an unused Raspberry Pi, so plenty to get on with!

Why amazonwebshark?

The name for the site comes from the original Amazon Web Shark – Terabyte:

[Image: Terabyte]

He hails from Lanzarote and his favourite topics are FinTech, Dy-Nom-oDB and S3 (Shark Storage Service).

Thanks for reading ~~^~~