
Using Athena To Query S3 Inventory Parquet Objects

In this post I’ll be using Amazon Athena to query data created by the S3 Inventory service.

When I wrote about my first impressions of S3 Glacier Instant Retrieval last month, I noticed some of my S3 Inventory graphs showed figures I didn’t expect. I couldn’t remember what many of the objects in the InMotion bucket were, and didn’t know that some were in Standard! I went through the bucket manually and found the Standard objects, but still had other questions that I wasn’t keen on solving by hand.

So while I was on-call over Christmas I decided to take a closer look at Athena – the AWS serverless query service designed to analyse data in S3. I’ve used existing setups at work, but this was my first time setting it up from scratch, so I used the AWS documentation about querying Amazon S3 Inventory with Amazon Athena and Andy Grimes’ blog “Manage and analyze your data at scale using Amazon S3 Inventory and Amazon Athena” to fill in the blanks.

We’ve Got a File On You

First I created an empty s3inventory Athena database. Then I created an s3inventorytable table using the script below, specifying the 2022-01-01 symlink.txt Hive object created by S3 Inventory as the data source:

CREATE EXTERNAL TABLE s3inventorytable(
         bucket string,
         key string,
         version_id string,
         is_latest boolean,
         is_delete_marker boolean,
         size bigint,
         last_modified_date bigint,
         e_tag string,
         storage_class string,
         is_multipart_uploaded boolean,
         replication_status string,
         encryption_status string,
         object_lock_retain_until_date bigint,
         object_lock_mode string,
         object_lock_legal_hold_status string,
         intelligent_tiering_access_tier string,
         bucket_key_status string
) 
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
  STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
  LOCATION 's3://[REDACTED]/hive/dt=2022-01-01-01-00/';
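
With the table in place, a quick smoke test along these lines confirms that Athena can read the inventory export before anything more ambitious is attempted:

SELECT *
FROM "s3inventory"."s3inventorytable"
LIMIT 10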

Then I ran a query to determine the storage classes in use in the InMotion bucket and the number of objects assigned to each:

SELECT storage_class, count(*) 
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

The results were as follows:

[Results screenshot: object counts per storage class]

41 Standard objects?! I wasn’t sure what they were and so added object size into the query:

SELECT storage_class, count(*), sum(size)
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class
[Results screenshot: counts and total sizes per storage class, with a total size of zero for the Standard objects]

The zero size and subsequent investigations confirmed that the Standard objects were prefixes, and so posed no problems.
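
To see those keys directly, something along these lines would list them (a sketch against the same table, assuming the inventory reports the class as STANDARD):

SELECT key, size
FROM "s3inventory"."s3inventorytable"
WHERE storage_class = 'STANDARD'
ORDER BY key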

Next, I wanted to check for unwanted previous versions of objects using the following query:

SELECT key, size 
FROM "s3inventory"."s3inventorytable" 
WHERE is_latest = FALSE

This query returned another prefix, so again no action was needed:

[Results screenshot: a single zero-size prefix key returned by the previous-versions query]

Further investigation found that this prefix also has no storage class assigned to it, as seen in the results above.
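
As a cross-check, filtering on the storage class itself would find the same prefixes (a sketch – the inventory may report these as empty strings or NULLs, so both are covered):

SELECT key, size
FROM "s3inventory"."s3inventorytable"
WHERE storage_class = '' OR storage_class IS NULL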

For Old Time’s Sake

I then wanted to see the youngest and oldest objects for each storage class, and ran the following query:

SELECT storage_class, 
MIN(last_modified_date), 
MAX(last_modified_date) 
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

What I got back was unexpected:

[Results screenshot: MIN and MAX last_modified_date returned as raw epoch numbers]

S3 Inventory stores dates as Unix Epoch Time, so I needed a function to transform the data into a human-legible format. Traditionally this would involve CAST or CONVERT, but as Athena uses Presto, additional functions such as from_unixtime are available:

from_unixtime(unixtime) → timestamp

Returns the UNIX timestamp unixtime as a timestamp.
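
A quick sanity check shows this in action, using 2022-01-01 00:00:00 UTC (epoch value 1640995200) as a test value:

SELECT from_unixtime(1640995200)

This returns 2022-01-01 00:00:00.000, as expected.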

I updated the query to include this function:

SELECT storage_class, 
MIN(from_unixtime(last_modified_date)),
MAX(from_unixtime(last_modified_date))
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

This time the dates were human-legible but completely inaccurate:

[Results screenshot: human-legible but wildly inaccurate timestamps]

I then found a solution on Stack Overflow, where a user suggested converting a Unix Epoch Time value from milliseconds to the seconds that from_unixtime expects. I applied this suggestion to my query by dividing the last modified dates by 1000:

SELECT storage_class, 
MIN(from_unixtime(last_modified_date/1000)),
MAX(from_unixtime(last_modified_date/1000))
FROM "s3inventory"."s3inventorytable"
GROUP BY storage_class
ORDER BY storage_class

The results after this looked far more reasonable:

[Results screenshot: plausible MIN and MAX timestamps for each storage class]

And EpochConverter confirmed the human time was correct for the Deep Archive MIN(last_modified_date) Unix value of 1620147401000, which works out as 4 May 2021.

So there we go! An introduction to Athena, and some practical use of the data from S3 Inventory!


Thanks for reading ~~^~~


Introducing amazonwebshark.com

It gives me great pleasure to introduce amazonwebshark.com! A project that I started in October 2021 is now seeing the light of day, and I thought I’d take the time to go through some of the decisions behind it and my ambitions for the site.

What Is amazonwebshark For?

At the time of writing I’ve been a Data Engineer for a few months, and there are several elements of the role that I’d like to get more familiar with – for example, my Python skills need sharpening and I’ve not used Git properly yet. By writing about my experiences I can check and confirm my understanding of new topics, give myself points of reference for future projects and exam revision, evidence my development where necessary, and help myself out in the moments when my imposter syndrome sees an opportunity to strike.

How Did You Set amazonwebshark Up?

The domain was purchased using Amazon Route 53. This keeps my setup and overall billing somewhat simplified and also means I can try out Route 53’s integrations with other AWS services.

My hosting is via Bluehost. For around £2.66 a month they’re sorting out my server, database, CDN and SSL. That price was a Black Friday special and goes up after a year, but so far I’ve been very impressed with their communication and customer service, so I’ll see how it goes.

Why Didn’t You Use AWS For Hosting?

Besides the bargain price, Bluehost brings with it a level of convenience. I could have used my own EC2/RDS setup. Or Lightsail. Or even a static S3 site. But my main focus was to get the ball rolling and get something online. I’ve wanted to start a blog for some time, but have run into problems like knowledge gaps, time pressures and running out of enthusiasm. Bluehost offered a quick and simple process that sorted out everything I needed, letting me get the actual blog started.

This is not to say I’m unwilling to roll my own in the future, of course. Stay tuned.

Why Start A Blog At All When LinkedIn And Medium Are Around?

I’m not keen on the Medium model. Many times I’ve seen an article that I fancy reading, only to be met with forced login requests or paywalls, both of which are deeply frustrating. And LinkedIn is great for articles, but this way I have all my content ringfenced together and under my control. Social media platforms change constantly and there is no guarantee that features, formats and content availability will stay the same or continue to exist from month to month. This site will hopefully let me sidestep any unpleasant surprises of that nature.

What’s The Plan For The Next Few Months?

I’m currently studying towards the AWS Certified Developer – Associate certification, and want to try using my work with the Strava API as the basis for a Lambda function and a CI/CD pipeline to get experience with some AWS services I’ve yet to use. Plus I have a Trello board full of ideas and an unused Raspberry Pi, so plenty to get on with!

Why amazonwebshark?

The name for the site comes from the original Amazon Web Shark – Terabyte:

[Image: Terabyte, the original amazonwebshark]

He hails from Lanzarote and his favourite topics are FinTech, Dy-Nom-oDB and S3 (Shark Storage Service).

Thanks for reading ~~^~~


Authenticating Strava API Calls Using OAuth 2.0 And Visual Studio Code

Picking up from last time, I continued with the Strava API Getting Started page and got myself very confused about how things were supposed to work. The Strava API requires authentication via OAuth 2.0, and in fairness Strava includes a diagram showing how the process works, which unfortunately went right over my head. By chance I found a YouTube video produced by InterSystems Learning Services that gave me a good entry-level understanding of the process, boiling down to this slide:

[Slide: OAuth 2.0 workflow]

As it turned out I was misunderstanding the purpose of the access token Strava was giving me on the My API Application page. That access token wasn’t provided by OAuth and so couldn’t be used to get any data – instead I needed to use Strava’s OAuth API to request different credentials and define a scope for the access needed.

To that end, Strava provides this link on their Getting Started page:

http://www.strava.com/oauth/authorize?client_id=[REPLACE_WITH_YOUR_CLIENT_ID]&response_type=code&redirect_uri=http://localhost/exchange_token&approval_prompt=force&scope=read

This was used in conjunction with my app’s Client ID to get an authorization code in the form of:

http://localhost/exchange_token?state=&code=MYAUTHCODE123456789&scope=read

Next, a cURL request must be made to exchange the authorization code and scope for a refresh token, access token, and access token expiration date. The page suggests https://www.strava.com/oauth/token, which turns out to be wrong – the URL used in the Authentication documentation for this is https://www.strava.com/api/v3/oauth/token. At this point I was having varying success with Postman, so I instead installed REST Client in Visual Studio Code on the advice of Vu Long Tran’s blog, which made things much simpler.

Sending a request of:

curl -X POST https://www.strava.com/api/v3/oauth/token \
  -d client_id=ReplaceWithClientID \
  -d client_secret=ReplaceWithClientSecret \
  -d code=ReplaceWithCode \
  -d grant_type=authorization_code

Produced a response containing my Strava profile data along with new tokens and details of their expiry:

"token_type": "Bearer"

"expires_at": 1640810729,

"expires_in": 4002,

"refresh_token": REFRESHTOKEN123456789

"access_token": "ACCESSTOKEN123456789"

Awesome! Now to put these to work…

Part of my confusion coming into this was that I’d been given a Python script in my travels that I was expecting to work with the access token from the My API Application screen – which, with the benefit of hindsight, was of course never going to be successful. The Python script is as follows:

import requests

activities_url = "https://www.strava.com/api/v3/athlete/activities"

# Paste the OAuth access token here
access_token = "ReplaceWithAccessToken"

header = {'Authorization': 'Bearer ' + access_token}
param = {'per_page': 200, 'page': 1}

# Request the athlete's activities, 200 per page, starting at page 1
my_dataset = requests.get(activities_url, headers=header, params=param).json()

print(my_dataset)

Time to test it out with the new access token! Straight into a problem:

{'message': 'Authorization Error', 'errors': [{'resource': 'Athlete', 'field': 'access_token', 'code': 'invalid'}]}

The Strava API and SDK Reference page was quick to indicate the source of the problem:

[Screenshot: Strava API reference showing the scope required for activity data]

The previous requests to Strava’s OAuth API were with scope=read. I changed this to scope=activity:read_all and received a new authorization code with the updated scope. This authorization code was then exchanged for a new access token which, when pasted into the Python script, outputted so much JSON that in the interests of sanity I’ve screenshotted a small section for illustration:

[Screenshot: a small section of the JSON returned by the Strava API]

To be fair it was what I asked for. And it all makes a bit more sense to me now!

Future plans for this involve making use of the refresh tokens to automate access token generation (assuming I’ve understood that part correctly – stay tuned!) and getting some working code into a Lambda function so I can start turning some cogs in AWS.
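
As a rough sketch of what that automation might look like – based on Strava’s Authentication documentation, which exchanges a refresh token for a new access token at the same /api/v3/oauth/token endpoint using grant_type=refresh_token (the function name and structure here are mine):

import requests

def refresh_access_token(client_id, client_secret, refresh_token):
    # Exchange a refresh token for a new access token via Strava's OAuth API
    response = requests.post(
        "https://www.strava.com/api/v3/oauth/token",
        data={
            "client_id": client_id,
            "client_secret": client_secret,
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
        },
    )
    response.raise_for_status()
    # The response includes a new access_token, a new refresh_token and
    # an expires_at value - store the refresh_token for next time
    return response.json()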

Thanks for reading ~~^~~