Categories
Data & Analytics

Using Athena To Query S3 Inventory Parquet Objects

In this post I’ll be using Amazon Athena to query data created by the S3 Inventory service.

When I wrote about my first impressions of S3 Glacier Instant Retrieval last month, I noticed some of my S3 Inventory graphs showed figures I didn’t expect. I couldn’t remember what many of the objects in the InMotion bucket were, and didn’t know that some were in the Standard storage class! I went through the bucket manually and found the Standard objects, but still had other questions that I wasn’t keen on answering by hand.

So while I was on-call over Christmas I decided to take a closer look at Athena – the AWS serverless query service designed to analyse data in S3. I’ve used existing setups at work, but this was my first time setting it up from scratch, so I used the AWS documentation on querying Amazon S3 Inventory with Amazon Athena and the Andy Grimes blog “Manage and analyze your data at scale using Amazon S3 Inventory and Amazon Athena” to fill in the blanks.

We’ve Got a File On You

First I created an empty s3inventory Athena database (a one-line statement, shown after the table definition below). Then I created an s3inventorytable table using the script below, specifying the 2022-01-01 symlink.txt Hive object created by S3 Inventory as the data source:

CREATE EXTERNAL TABLE s3inventorytable(
         bucket string,
         key string,
         version_id string,
         is_latest boolean,
         is_delete_marker boolean,
         size bigint,
         last_modified_date bigint,
         e_tag string,
         storage_class string,
         is_multipart_uploaded boolean,
         replication_status string,
         encryption_status string,
         object_lock_retain_until_date bigint,
         object_lock_mode string,
         object_lock_legal_hold_status string,
         intelligent_tiering_access_tier string,
         bucket_key_status string
) 
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
  STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
  LOCATION 's3://[REDACTED]/hive/dt=2022-01-01-01-00/';
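For completeness, the empty database itself only needs a single statement. This is a minimal sketch assuming Athena’s default settings, with no custom location or properties:

CREATE DATABASE s3inventory;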

Then I ran a query to determine the storage classes in use in the InMotion bucket and the number of objects assigned to each:

SELECT storage_class, count(*) 
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

The results were as follows:

[Results: the number of objects in each storage class, including 41 in Standard]

41 Standard objects?! I wasn’t sure what they were and so added object size into the query:

SELECT storage_class, count(*), sum(size)
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

[Results: each storage class with its object count and total size, with the Standard objects totalling zero bytes]

The zero size and subsequent investigations confirmed that the Standard objects were prefixes, and so presented no problems.
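As a quick way of double-checking objects like these, a query along the following lines would list the zero-byte Standard keys. This is a sketch against the same table, and assumes the inventory reports the class as STANDARD:

SELECT key, size
FROM "s3inventory"."s3inventorytable"
WHERE storage_class = 'STANDARD'
AND size = 0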

Next, I wanted to check for unwanted previous versions of objects using the following query:

SELECT key, size 
FROM "s3inventory"."s3inventorytable" 
WHERE is_latest = FALSE

This query returned another prefix, so again there were no actions needed:

[Results: a single prefix key]

Further investigation found that this prefix also has no storage class assigned to it, as seen in the results above.
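A similar hypothetical check could surface any keys with no storage class recorded at all; the empty-string comparison is an assumption about how the inventory represents such entries:

SELECT key, size, is_delete_marker
FROM "s3inventory"."s3inventorytable"
WHERE storage_class IS NULL
OR storage_class = ''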

For Old Time’s Sake

I then wanted to see the youngest and oldest objects for each storage class, and ran the following query:

SELECT storage_class, 
MIN(last_modified_date), 
MAX(last_modified_date) 
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

What I got back was unexpected:

[Results: MIN and MAX last_modified_date for each storage class, returned as long integers rather than dates]

S3 Inventory stores dates as Unix Epoch Time, so I needed a function to transform the data into a human-legible format. Traditionally this would involve CAST or CONVERT, but as Athena uses Presto, additional functions are available such as from_unixtime:

from_unixtime(unixtime) → timestamp

Returns the UNIX timestamp unixtime as a timestamp.

I updated the query to include this function:

SELECT storage_class, 
MIN(from_unixtime(last_modified_date)),
MAX(from_unixtime(last_modified_date))
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

This time the dates were human-legible but completely inaccurate:

[Results: human-readable timestamps, but with wildly incorrect far-future dates]

I then found a solution on Stack Overflow, where a user pointed out that values like these are in milliseconds and need converting to seconds before being passed to from_unixtime. I applied this suggestion to my query by dividing the last modified dates by 1000:

SELECT storage_class, 
MIN(from_unixtime(last_modified_date/1000)),
MAX(from_unixtime(last_modified_date/1000))
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

The results after this looked far more reasonable:

[Results: sensible MIN and MAX last modified dates for each storage class]

And EpochConverter confirmed that the converted time was correct for the Deep Archive MIN(last_modified_date) value of 1620147401000.
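The same conversion can also be sanity-checked in Athena itself with a one-off SELECT. This is just a quick sketch using the raw value above, which should come back as a timestamp in early May 2021:

-- Check the Deep Archive MIN value: milliseconds divided by 1000, then converted
SELECT from_unixtime(1620147401000 / 1000) AS deep_archive_min_date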

So there we go! An introduction to Athena, and a practical use for the data produced by S3 Inventory!

If this post has been useful, please feel free to follow me for future updates.

Thanks for reading ~~^~~

Categories
Me

Introducing amazonwebshark.com

It gives me great pleasure to introduce amazonwebshark.com! A project I started in October 2021 is now seeing the light of day, so I thought I’d take the time to go through some of the decisions behind it and my ambitions for the site.

What Is amazonwebshark For?

At the time of writing I’ve been a Data Engineer for a few months, and there are several elements of the role that I’d like to get more familiar with – for example, my Python skills need sharpening and I’ve not used Git properly yet. By writing about my experiences I can check and confirm my understanding of new topics, give myself points of reference for future projects and exam revision, evidence my development where necessary and help myself out in the moments when my imposter syndrome sees an opportunity to strike.

How Did You Set amazonwebshark Up?

The domain was purchased using Amazon Route 53. This keeps my setup and overall billing somewhat simplified and also means I can try out Route 53’s integrations with other AWS services.

My hosting is via Bluehost. For around £2.66 a month they’re sorting out my server, database, CDN and SSL. That price was a Black Friday special and goes up after a year, but so far I’ve been very impressed with their communication and customer service, so I’ll see how it goes.

Why Didn’t You Use AWS For Hosting?

Besides the bargain price, Bluehost brings with it a level of convenience. I could have used my own EC2/RDS setup. Or Lightsail. Or even a static S3 site. But my main focus was to get the ball rolling and get something online. I’ve wanted to start a blog for some time, but have run into problems like knowledge gaps, time pressures and running out of enthusiasm. Bluehost offered a quick and simple process that sorted out everything I needed, letting me get the actual blog started.

This is not to say I’m unwilling to roll my own in the future, of course. Stay tuned.

Why Start A Blog At All When LinkedIn And Medium Are Around?

I’m not keen on the Medium model. Many times I’ve seen an article that I fancy reading, only to be met with forced login requests or paywalls. Both of which are deeply frustrating. And LinkedIn is great for articles, but this way I have all my content ringfenced together and under my control. Social media platforms change constantly and there is no guarantee that features, formats and content availability will stay the same or continue to exist from month to month. This site will hopefully let me sidestep any unpleasant surprises of that nature.

What’s The Plan For The Next Few Months?

I’m currently studying towards the AWS Certified Developer – Associate certification, and want to try using my work with the Strava API as the basis for a Lambda function and a CI/CD pipeline to get experience with some AWS services I’ve yet to use. Plus I have a Trello board full of ideas and an unused Raspberry Pi, so plenty to get on with!

Why amazonwebshark?

The name for the site comes from the original Amazon Web Shark – Terabyte:

amazonwebshark Terabyte

He hails from Lanzarote and his favourite topics are FinTech, Dy-Nom-oDB and S3 (Shark Storage Service).

Thanks for reading ~~^~~

Categories
Architecture & Resilience

S3 Glacier Instant Retrieval: First Impressions

On 30/11/2021, AWS introduced S3 Glacier Instant Retrieval – a new archive storage class for S3 that operates alongside S3 Glacier (now renamed S3 Glacier Flexible Retrieval) and S3 Glacier Deep Archive. AWS published announcements for the launch, and a summary of all Glacier classes is available on the S3 Glacier product page.

I already use most of the S3 storage classes in my AWS accounts. Earlier in the year I got tired of my laptop backups needing to run overnight, so I made an S3 cross-account replication setup: whatever I upload to the AtRest bucket in my main account gets replicated to the AtRest bucket in my backup account and stored as S3 Glacier Deep Archive. This way I have two versions of each object in different regions and different accounts, and although there are data transfer costs they are offset by the reduced cost of using S3 Glacier Deep Archive for the backup objects.

Objects in my main account use different classes depending on their purpose. Before I upload any objects there I consider whether the object is in motion or at rest and what my access pattern for the object is likely to be, then choose a storage class accordingly. S3 Storage Lens shows the current storage class distribution for all buckets in my main account.

The arrival of S3 Glacier Instant Retrieval is of interest to me as it might offer cost savings and accessibility improvements over my current setup. So far my decisions over S3 storage classes have usually boiled down to trade-offs. For example:

  • For Object X I could use S3 Intelligent Tiering or S3 Infrequent Access. S3 Infrequent Access has a minimum storage duration of 30 days and has retrieval costs, but S3 Intelligent Tiering has a handling fee per 1,000 objects and each object will spend the first 30 days in, and be charged as, S3 Standard. So if I know I’m not going to touch this object for at least a month, which class is most suitable?
  • For Object Y I could use S3 Glacier or S3 Glacier Deep Archive. Deep Archive costs less for storage, but its retrieval fees are higher than Glacier’s and its minimum storage duration is 180 days where Glacier’s is only 90 days. Plus I can get objects out of Glacier far quicker, as its standard retrieval time is 3 to 5 hours compared to Deep Archive’s 12 hours. So could I afford to wait half a day for this object if I needed it? And how long do I see this object being around for?

Comparisons With Other S3 Storage Classes

So how does S3 Glacier Instant Retrieval compare to S3 Infrequent Access and S3 Glacier Flexible Retrieval? I loaded the S3 pricing site and had a look at various costs in eu-west-1 for S3 Infrequent Access (IFA), S3 Glacier Instant Retrieval (GIR) and S3 Glacier Flexible Retrieval (GFR), then used the S3 calculator to get some estimates based on my current S3 Storage Lens statistics and November 2021 bill.

Storage:

  • IFA $0.0125 per GB
  • GIR $0.004 per GB
  • GFR $0.0036 per GB

PUT, COPY, POST, LIST requests (per 1,000 requests):

  • IFA $0.01
  • GIR $0.02
  • GFR $0.33

GET, SELECT, and all other requests (per 1,000 requests):

  • IFA $0.001
  • GIR $0.01
  • GFR $0.0004

Data Retrieval requests (per 1,000 requests):

  • IFA N/A
  • GIR N/A
  • GFR $0.055 (Standard)

Data retrievals (per GB):

  • IFA $0.01
  • GIR $0.03
  • GFR $0.01 (Standard)

Estimated cost for storing 200GB per month (with an average object size of 4.4MB for Glacier Flexible Retrieval), 24,265 PUT, COPY, POST, LIST requests, 10,402 GET, SELECT, and all other requests, and retrieval of 50GB per month (using 1 Standard request for Glacier Flexible Retrieval):

  • IFA $3.25
  • GIR $2.89
  • GFR $2.38
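As a back-of-envelope check of my own against the eu-west-1 prices listed above, the Glacier Instant Retrieval figure breaks down roughly as follows:

Storage:               200 GB x $0.004 per GB      = $0.80
PUT/COPY/POST/LIST:    24,265 x $0.02 per 1,000    = $0.49
GET/SELECT and others: 10,402 x $0.01 per 1,000    = $0.10
Data retrieval:        50 GB x $0.03 per GB        = $1.50
Total                                              ≈ $2.89

At these volumes the storage and retrieval charges account for most of the total.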

A couple of other items of note:

  • S3 Glacier Instant Retrieval has a minimum billable object size of 128 KB, which it shares with S3 Standard Infrequent Access
  • S3 Glacier Instant Retrieval offers instant retrieval in milliseconds, which it also shares with S3 Standard Infrequent Access
  • S3 Glacier Instant Retrieval has a minimum storage duration of 90 days, which it shares with S3 Glacier Flexible Retrieval

What’s interesting in the cost estimates for me is how close S3 Glacier Instant Retrieval is to S3 Standard Infrequent Access. The major difference between the two classes that I can see is that, while S3 Glacier Instant Retrieval has a minimum storage duration of 90 days, the same period for S3 Standard Infrequent Access is only 30 days. If you delete an object before the end of a minimum storage duration period, you are charged for the full period specified. Depending on the size and number of the objects, this could get expensive if mismanaged. That said, AWS are offering S3 Glacier Instant Retrieval as being “For long-lived archive data accessed once a quarter with instant retrieval in milliseconds”, so there are no smoke and mirrors here.
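To put the minimum storage duration into perspective with a rough example of my own (using the storage prices above and treating the minimum durations as 30-day months), deleting 100GB a week after upload would still be billed as though it had been stored for the full minimum period:

S3 Glacier Instant Retrieval:   100 GB x $0.004 per GB x 3 months = $1.20
S3 Standard Infrequent Access:  100 GB x $0.0125 per GB x 1 month = $1.25

Scale that up across terabytes of frequently replaced objects and those early-deletion charges start to matter.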

Conclusions

Would I use S3 Glacier Instant Retrieval over S3 Glacier Flexible Retrieval or S3 Standard Infrequent Access? Definitely in my AtRest bucket. The S3 Storage Lens stats for that bucket show many objects in S3 Standard Infrequent Access, including all the old TV shows from the Internet Archive because let’s face it – if you want to watch old TV you want to watch it now, not in 3 hours’ time </Glacier>. In this scenario S3 Glacier Instant Retrieval keeps the millisecond access and, although the retrieval cost is higher (GIR $0.03 per GB vs IFA $0.01 per GB), the cost of data storage is lower (GIR $0.004 per GB vs IFA $0.0125 per GB). So S3 Glacier Instant Retrieval looks like a winner there.

My InMotion bucket is a different story though. The objects here aren’t being retained permanently, and most of them are in S3 so that they don’t bring my laptop’s hard drive to its knees. If I’m looking at uploading objects here, it’s usually with a question of “When will I deal with this?”, the answer to which will usually be:

  • The next few weeks, in which case I’ll keep the object in OneDrive instead (What a TWIST)
  • Next month, in which case I’d put the object in S3 Standard Infrequent Access because of its 30-day minimum storage duration
  • “I don’t know”, in which case I’d put the object in S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive because their storage costs are less than S3 Glacier Instant Retrieval

As a side note, most of the objects in my InMotion bucket are S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive already, so it looks like my estimates from the start of the year were half decent!

Thanks for reading! ~~^~~