Earning My AWS Developer Associate Cert By Fighting A Cow

In this post I talk about my recent experience with the AWS Certified Developer – Associate certification, discuss why and how I studied for the exam and explain why part of the process was like an early 90s puzzle game.

Introduction

On 25 March 2022 I earned the AWS Certified Developer – Associate certification. This is my fourth AWS certification and I now hold all the associate AWS certifications. People wanting to know more are welcome to view my Credly badges.

Motivation For Earning The AWS Developer Associate

Firstly I’ll explain why I took the exam. I like to use certifications as evidence of my current knowledge and skillset, and as mechanisms to introduce me to new topics that I wouldn’t otherwise have interacted with.

There’s a gap of around 18 months between my last AWS certification and this one. There are a few reasons for that:

  • I wanted to give the knowledge from the Solutions Architect and SysOps Administrator certifications time to bed in.
  • I wanted to use my new skills for the AWS data migration project at work.
  • My role at the time didn’t involve many of the services covered in the Developer Associate exam.

After the AWS migration was completed and I became a Data Engineer, I felt that the time was right for the Developer Associate. My new role brought with it new responsibilities, and the AWS migration made new tooling available to the business. I incorporated the Developer Associate into the upskilling for my new role over a four month period.

The benefits of the various sections and modules of the Developer Associate can be split across:

  • Projects the Data Engineering team is currently working on.
  • Future projects the Data Engineering team is likely to receive.
  • Projects I can undertake in my own time to augment my skillset.

Current Work Projects

  • Our ETLs are built using Python on AWS Lambda. The various components of Lambda were a big part of the exam and helped me out when writing new ETLs and modernising legacy components.
  • Git repos are a big part of the Data Engineering workstream. I am a relative newcomer to Git, and the sections on CodeCommit helped me better understand the fundamentals.
  • Build tests and deployments are managed by the Data Engineering CICD pipelines. The CodeBuild, CodeDeploy and CodePipeline sections have shown me what these pipelines are capable of and how they function.
  • Some Data Engineering pipelines use Docker. The ECS and Fargate sections helped me understand containers conceptually and the benefits they offer.

Future Work Projects

  • Sections about CloudWatch and SNS will be useful for setting up new monitoring and alerting as the Data Engineering team’s use of AWS services increases.
  • The DynamoDB module will be helpful when new data sources are introduced that either don’t need a relational database or are prone to schema changes.
  • Sections about Kinesis will help me design streams for real-time data processing and analytics.

Future Personal Projects

  • The CloudFormation and SAM modules will help me build and deploy applications in my AWS account for developing my Python knowledge.
  • Sections on Cognito will help me secure these applications against unauthorized and malicious activity.
  • The API Gateway module will let me define how my applications can be interacted with and how incoming requests should be handled.
  • Sections on KMS will help me secure my data and resources when releasing homemade applications.

Resources For The AWS Developer Associate

Because AWS certifications are very popular, there are many resources to choose from. I used the following resources for my AWS Developer Associate preparation.

Stéphane Maarek Udemy Course

I’ve been a fan of Stéphane Maarek for some time, having used his courses for all of my AWS associate exams. His Ultimate AWS Certified Developer Associate course is exceptional, with 32 hours of well-presented and informative videos covering all the exam topics. His code and slides are included too.

Stéphane is big on passing on real-world skills as opposed to just teaching enough to pass exams, and his dedication to keeping his content updated is clearly visible in the course.

À votre santé Stéphane!

Tutorials Dojo Learning Portal

Tutorials Dojo, headed by Jon Bonso, is a site with plentiful resources for AWS, Microsoft Azure and Google Cloud. Their practice exams are known for being hard but fair and are comparable to the AWS exams. All questions include detailed explanations of both the correct and incorrect answers. These practice exams were an essential part of my preparation.

Their Certified Developer Associate practice exam package offers a number of learning choices:

  • Want to mimic the exam? Timed Mode poses 65 questions against the clock.
  • Prefer immediate feedback? Review Mode shows answers and explanations after every question.
  • Practising a weak area? Section-Based Mode limits questions to specific topics.

Tutorials Dojo also offers a variety of Cheat Sheets and Study Guides. These are free, comprehensive and regularly updated.

AWS Documentation & FAQs

AWS documentation is the origin of most exam questions, and Stéphane and Jon both reference it in their content. I refer to it when a topic isn’t making sense, or when it proves a regular stumbling block in the practice exams.

For example, I didn’t understand API Gateway integration types until I read the API Gateway Developer Guide page. I am a visual learner, but sometimes there’s no substitute for reading the instruction manual! The KMS FAQs cleared up a few problem areas for me as well.

AWS also have their own learning services, including the AWS Skill Builder. While I didn’t use it here, some of my AWS certifications will expire in 2023 so I’ll definitely be looking to test out Skill Builder then.

Anki

Anki is a free and open-source flashcard program. It has a great user guide that includes an explanation of how it aids learning. I find Anki works best for short pieces of information that I want regular exposure to via its mobile app.

For example, one of my Anki cards was:

CodeCommit: Migrate Git = CLONE Git; PUSH Git
PULL = NULL

This was explaining the process of migrating a Git repo to CodeCommit. PULL = NULL was a way for me to remember that pulling objects from the Git repo was incorrect in that scenario.

If an Anki card goes over two lines I use pen and paper for it instead. Previous experience has taught me that I can visualise small notes better on Anki and large notes better on paper.

Blogging

My best exam performance is with the AWS services I am most familiar with. Towards the end of my exam preparation, I wanted to fill some knowledge gaps by getting my hands dirty!

My posts about creating security alerts and enhanced S3 notifications let me get to grips with CloudTrail, CloudWatch, EventBridge and SNS. These all made an appearance in my exam so this was time well spent!

I also ran through an AWS guide about Building A Serverless Web Application to get some quick experience using API Gateway, CodeCommit and Cognito. This has given me some ideas for future blog projects, so stay tuned!

Approach To Studying The AWS Developer Associate

This section goes into detail about how I approached my studies. I didn’t realise it at the time but, on review, the whole process is basically a long ETL. With sword fighting.

Extract

I started by watching Stéphane’s course in its entirety, ‘extracting’ notes as I went. Since Stéphane provides his slides, and since I already knew some topics from previous experience, my notes mostly covered topics that I either didn’t know or was out of practice with.

Transform

Having finished Stéphane’s course, I started the Tutorials Dojo practice exams. The aim here was to ‘transform’ my knowledge from notes and slides into answers to exam questions.

I have a spreadsheet template in Google Sheets for this process. As I work through a practice exam, I record how I feel about each answer, choosing from:

  • Confident: I’m totally confident with my answer
  • 5050: I’m torn between two answers but have eliminated some
  • Guess: I have no idea what the answer is

When I get the results of the practice exam, I add the outcomes alongside my gut feelings. The Gut Feel and Outcome columns then populate tables elsewhere on the spreadsheet.

I use these tables for planning my next moves:

  • The top table quantifies overall confidence, and can answer questions like “Is my confidence improving between practice exams?”, “How often am I having to guess answers?” and “How confident am I about taking the real exam?”
  • I can get the middle table from Tutorials Dojo, but have it on the sheet for convenience.
  • The bottom table shows me an analysis of Gut Feel against Outcome: how many of my correct answers were down to knowledge, and how many were down to luck (see the sketch below).
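
If I ever move this analysis out of Google Sheets, a pandas crosstab could reproduce the bottom table. A minimal sketch, using made-up results purely for illustration:

import pandas as pd

# Hypothetical Gut Feel and Outcome values for a handful of questions
results = pd.DataFrame({
    'gut_feel': ['Confident', '5050', 'Guess', 'Confident', '5050'],
    'outcome': ['Correct', 'Correct', 'Incorrect', 'Incorrect', 'Correct']
})

# Cross-reference Gut Feel against Outcome, as in the bottom table
print(pd.crosstab(results['gut_feel'], results['outcome']))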

I then update the Question column of the spreadsheet depending on the results in the bottom table:

  • I assume that anything listed as Confident and Correct is well known. Nothing is changed.
  • All 5050s and Correct Guesses are coloured orange. Here some knowledge is apparent, but more revision is needed.
  • All Incorrect Guesses are coloured red, because there are clear knowledge gaps here.
  • Anything listed as Confident and Incorrect is also coloured red. These are the biggest red flags of all, as here something has either been misread or misunderstood.

Load

As the knowledge gaps and development areas became clear, I began to ‘load’ the topics that still didn’t make sense or were proving hard to remember.

Based on the Tutorials Dojo practice exam outcomes, I made a second set of notes that were more concise than the first. Where the first set was mostly “Things I Don’t Know”, the second set was mostly “Things I Can’t Remember”.

As you might imagine, this uses a fair amount of paper. I recycle this afterwards because I’m an environmentally-conscious shark.

Insult Sword Fighting

I’ve come to know part of the ‘load’ as Insult Sword Fighting. Some people will know exactly what I’m talking about here, while others will quite rightly need some explanation.

Insult Sword Fighting is part of the 1990 point-and-click adventure game The Secret of Monkey Island. In this section of the game, the player wins fights by knowing the correct responses to an opponent’s insults.

For example, during a fight the opponent might say:

“You fight like a dairy farmer.”

To which the player’s response should be:

“How appropriate. You fight like a cow!”

The player starts out with two insult-response pairs, and learns more during subsequent fights.

The aim of the section is to learn enough to defeat the Sword Master. However, her insults are different to the ones the player has previously seen. For the final challenge, the player must match their existing knowledge to the new insults.

So if the Sword Master says:

“I will milk every drop of blood from your body!”

The player should pick up on the word “milk” and respond with:

“How appropriate. You fight like a cow!”

OK But What Does This Have To Do With The Exam?

So let me explain. The first time with a practice exam is like the player’s first Insult Sword Fight. Most responses are unknown or unfamiliar, so things usually don’t go well.

The player gets better at Insult Sword Fighting by challenging new opponents. This time the player will know some responses, but will also encounter new insults to learn.

In the same way, the subsequent practice exams will pose some questions that are similar to those in the previous exam. Of course there will also be entirely new questions that need further investigation.

The player will decide they are ready to face the Sword Master when they are able to win the majority of their Insult Sword Fights because they know the logic behind the correct responses.

Like the insults, the logic behind the practice exam questions can also be learned. Knowing that logic well enough to answer these questions correctly on a regular basis is a good indicator that it’s time to book the real exam.

The Sword Master’s insults are different to the ones the player has trained with. To win, the player must look for key words and phrases in the new insults and match them to their existing responses during battle.

The real exam will use unfamiliar questions. However the key words and phrases in the questions will match the knowledge built up during the practice exams, revealing the logic to arrive at the correct answers!

Next Steps

Now that the Developer Associate exam is over, I have a number of ideas for blog posts and projects to try out:

  • Building an ETL for my own data
  • Creating an API to query that data
  • Deploying the solution using Git and CICD

Plus I have a bookmarks folder and Trello board full of ideas to consider. So plenty to keep me busy!

Thanks for reading ~~^~~


Next-Level S3 Notifications With EventBridge

In this post I will use AWS managed services to enhance my S3 user experience with custom EventBridge notifications that are low cost, quick to set up and perform well at scale.

Introduction

I’ve been restoring some S3 Glacier Flexible Retrieval objects lately. I use bulk retrievals to reduce costs – these finish within 5–12 hours. However, on a couple of occasions I’ve totally forgotten about them and almost missed the download deadline!

Having recently set up some alerting, I decided to make a similar setup that will trigger emails at key points in the retrieval process, using the following AWS services:

  • S3 for holding the objects and managing the retrieval process
  • EventBridge for receiving events from S3 and looking for patterns
  • SNS for sending notifications to me

The end result will be S3 sending restore events to EventBridge, which will match them against a rule and pass them to SNS to email me.

Let’s start with SNS.

SNS: The Notifier

I went into detail about Amazon Simple Notification Service (SNS) in my last post about making some security alerts, so feel free to read that if any SNS terms are unfamiliar.

Here I want SNS to send me emails, so I start by making a new standard topic called s3-object-restore. I then create a new subscription with an email endpoint and link it to my new topic.
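
I did these steps in the console, but for anyone who prefers code, a minimal boto3 sketch of the same setup might look like this (the email address is a placeholder, and AWS sends the endpoint a confirmation email before the subscription goes live):

import boto3

sns = boto3.client('sns')

# Create the standard topic
topic = sns.create_topic(Name='s3-object-restore')

# Link an email endpoint to the new topic
sns.subscribe(
    TopicArn=topic['TopicArn'],
    Protocol='email',
    Endpoint='me@example.com'
)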

This completes my SNS setup. Next I need to make some changes to one of my S3 buckets.

S3: The Storage

Amazon S3 stores objects in buckets. The properties of a bucket can be customised to complement its intended purpose. For example, the Default Encryption property forces encryption on buckets containing sensitive objects. The Bucket Versioning property protects objects from accidental changes and deletes.

Here I’m interested in the Event Notifications property. This property sends notifications when certain events occur in the bucket. Examples of S3 events include uploads, deletes and, importantly for this use case, restore requests.

S3 can send events to a number of AWS services including, helpfully, EventBridge! This isn’t on by default, but is easily enabled in the bucket’s properties.
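
The same setting can also be scripted. A hedged boto3 sketch, with a placeholder bucket name:

import boto3

s3 = boto3.client('s3')

# Turn on EventBridge delivery for the bucket's event notifications
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',
    NotificationConfiguration={'EventBridgeConfiguration': {}}
)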

My bucket will now send events to EventBridge. But what is EventBridge?

EventBridge: The Go-Between

Full disclosure: at first I wasn’t entirely sure what EventBridge was, and the AWS description did little to change that.

I tend to uncomplicate topics by abstracting them. Here I found it helpful to think of EventBridge as a bus:

  • Buses provide high-capacity transport between bus stops. The bus is EventBridge.
  • Passengers use the bus to get to where they need to go. The passengers are events.
  • Bus stops are where passengers join or depart the bus. The bus stops are event sources and targets.

In the same way that a bus picks up passengers at one bus stop and drops them off at another, EventBridge receives events from a source and directs them to a target.

Much has been written about EventBridge’s benefits, so rather than spending the next few paragraphs copy/pasting I’ll leave the wider reading to the AWS documentation.

In this use case, EventBridge’s main advantage is that it is decoupled from S3. This allows one EventBridge rule to serve many S3 buckets. S3 can send notifications to SNS without EventBridge, but each bucket then needs configuring separately, which quickly causes headaches with multiple buckets.

My S3 bucket is already sending events to EventBridge, so let’s create an EventBridge rule for those events.

EventBridge Rule: Setting A Pattern & Choosing A Source

Rules allow EventBridge to route events from a source to a target. After naming my new rule s3-object-restore, I need to choose what kind of rule I want:

  • Event Pattern: the rule will be triggered by an event.
  • Schedule: the rule will be triggered by a schedule.

I select Event Pattern. EventBridge then poses further questions to establish what events to look for:

  • Event Matching Pattern: Do I want to use EventBridge presets or write my own pattern?
  • Service Provider: Are the events coming from an AWS service or a third party?
  • Service Name: What service will be the source of events?

EventBridge will only present options relevant to the previous choices. For example, choosing AWS as Service Provider means that no third party services are available in Service Name.

My choices so far tell EventBridge that S3 is the event source.

Next up is Event Type. As EventBridge knows the events are coming from S3, the options here are very specific.

I choose Amazon S3 Event Notification.

EventBridge now knows enough to create a rule, and offers the following JSON as an Event Pattern:

{
  "source": ["aws.s3"],
  "detail-type": [
    "Object Access Tier Changed",
    "Object ACL Updated",
    "Object Created",
    "Object Deleted",
    "Object Restore Completed",
    "Object Restore Expired",
    "Object Restore Initiated",
    "Object Storage Class Changed",
    "Object Tags Added",
    "Object Tags Deleted"
  ]
}

I’m only interested in restores, so I open the Specific Event(s) list and choose the three Object Restore events.

EventBridge then amends the event pattern to:

{
  "source": ["aws.s3"],
  "detail-type": ["Object Restore Completed", "Object Restore Initiated", "Object Restore Expired"]
}

That’s it for the source. Now EventBridge needs to know what to do when it finds something!

EventBridge Rule: Choosing A Target & Configuring Inputs

One of EventBridge’s big selling points is how it interacts with targets. There are already numerous targets, and EventBridge rules can have more than one.

I select SNS Topic as a target, then choose my s3-object-restore SNS topic from the list.
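
As a sketch of what the console is doing here, the rule and target could also be created with boto3 along these lines (the topic ARN is a placeholder; note that, unlike the console, this would not update the topic’s access policy to let EventBridge publish to it):

import json
import boto3

events = boto3.client('events')

# Create the rule with the restore-only event pattern from above
events.put_rule(
    Name='s3-object-restore',
    EventPattern=json.dumps({
        'source': ['aws.s3'],
        'detail-type': [
            'Object Restore Completed',
            'Object Restore Initiated',
            'Object Restore Expired'
        ]
    })
)

# Point the rule at the SNS topic
events.put_targets(
    Rule='s3-object-restore',
    Targets=[{
        'Id': 'sns-topic',
        'Arn': 'arn:aws:sns:eu-west-1:123456789012:s3-object-restore'
    }]
)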

This alone is enough for EventBridge to interact with SNS. When I save this EventBridge rule and trigger it by running an S3 object restore, I receive an email containing the raw event JSON.

Although this is technically a success, some factors aren’t ideal:

  • The formatting of the email is hard to read.
  • There’s a lot of information here, most of which is irrelevant.
  • It’s not immediately clear what this email is telling me.

To address this I can use EventBridge’s Configure Input feature to change what is sent to the target. This feature offers four options:

  • Matched Events: EventBridge passes all of the event text to the target. This is the default.
  • Part Of The Matched Event: EventBridge only sends part of the event text to the target.
  • Constant (JSON text): None of the event text is sent to the target. EventBridge sends user-defined JSON instead.
  • Input Transformer: EventBridge assigns lines of event text as variables, then uses those variables in a template.

Let’s look at the input transformer.

The AWS EventBridge user guide goes into detail about the input transformer and includes a good tutorial. Having consulted these resources, I start by getting the desired JSON from the initial email:

{
  "detail-type": "Object Restore Initiated",
  "source": "aws.s3",
  "time": "2022-02-21T12:51:21Z",
  "detail": {
    "bucket": {"name": "redacted"},
    "object": {"key": "redacted"}
  }
}

Then I convert the JSON into an Input Path:

{
  "bucket": "$.detail.bucket.name",
  "detail-type": "$.detail-type",
  "object": "$.detail.object.key",
  "source": "$.source",
  "time": "$.time"
}

And finally specify an Input Template:

"<source> <detail-type> at <time>. Bucket: <bucket>. Object: <object>"

EventBridge checks input templates before accepting them, and will throw an error if the input template is invalid.

I update my EventBridge rule with the new Input Transformer configuration. Time to test it out!

Testing

When I trigger an S3 object restore, I receive an email moments later telling me the restore has been initiated.

I then receive a second email when the object is ready for download:

"aws.s3 Object Restore Completed at 2022-03-04T00:15:33Z. Bucket: REDACTED. Object: REDACTED"

And a final one when the object expires:

"aws.s3 Object Restore Expired at 2022-03-05T10:12:04Z. Bucket: REDACTED. Object: REDACTED"

Success!

Before moving on, let me share the results of an earlier test. My very first input path (not included here) contained some mistakes. The input template was valid but it couldn’t read the S3 event properly, so the resulting email was missing the values I wanted.

Something to bear in mind for future rules!

Cost Analysis

Before I wrap up, let’s run through the expected costs with this setup:

  • SNS: the first thousand SNS email notifications every month are included in the AWS Always Free tier, and I’m nowhere near that!
  • S3: there is no charge for S3 passing events to EventBridge. Charges for object storage and retrieval are out of scope for this post.
  • EventBridge: All events published by AWS services are free.

There is no expected cost rise for this setup based on my current use.

Summary

In this post I’ve used EventBridge and SNS to produce free bespoke notifications at key points in the S3 object retrieval process. This offers me the following benefits:

  • Reassurance: I can choose the longer S3 retrieval offerings knowing that AWS will keep me updated on progress.
  • Convenience: I will know the status of retrievals without accessing the AWS console or using the CLI.
  • Cost: I am less likely to forget to download retrieved objects before expiry, and therefore less likely to need to retrieve those objects again.

Thanks for reading ~~^~~


Re-Runnable Strava API Calls Using Python

In this post I make my existing Python code re-runnable by enabling it to replace expired access tokens when it sends requests to the Strava API.

A couple of posts ago I wrote about authenticating Strava API calls. I ended up successfully requesting data using this Python code:

import requests

# The Get Activities endpoint
activities_url = "https://www.strava.com/api/v3/athlete/activities"

# "access_token" stands in for a real (and short-lived) access token
header = {'Authorization': 'Bearer ' + "access_token"}
param = {'per_page': 200, 'page': 1}

my_dataset = requests.get(activities_url, headers=header, params=param).json()

print(my_dataset)

Although successful, the uses for this code are limited, as it stops working when the header’s access_token expires. Ideally the code should keep working once Strava grants initial authorisation, which is what I’m exploring here. Plus, the last post was unclear in places, so this one will hopefully tie up some loose ends.

Please note that I have altered or removed all sensitive codes and tokens in this post in the interests of security.

The Story So Far

First of all, some reminders. Strava uses OAuth 2.0 for authentication: an application is granted a single-use authorization code, exchanges it for tokens, and then uses those tokens to request data.

I am sending GET requests to Strava via Get Activity, as set out in Strava’s API documentation.

Finally, during my initial setup I created an API Application on the Strava site. Strava provided a Client ID and Client Secret upon completion, which appear in this post as APIAPP-CLIENTID and APIAPP-SECRET.

Authorizing My App To View Data

Strava’s Getting Started page explains that they require authentication via OAuth 2.0 for data requests and gives the following link for that process:

http://www.strava.com/oauth/authorize?client_id=[REPLACE_WITH_YOUR_CLIENT_ID]&response_type=code&redirect_uri=http://localhost/exchange_token&approval_prompt=force&scope=read

I must amend this URL, as scope=read is insufficient for Get Activity requests. The end of the URL becomes scope=activity:read_all, making the full amended URL:
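
http://www.strava.com/oauth/authorize?client_id=[REPLACE_WITH_YOUR_CLIENT_ID]&response_type=code&redirect_uri=http://localhost/exchange_token&approval_prompt=force&scope=activity:read_all

Loading the updated URL brings up a Strava authorization screen.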

Selecting Authorize gives the following response:

http://localhost/exchange_token?state=&code=CODE9fbb&scope=read,activity:read_all

Where code=CODE9fbb is a single-use authorization code that I will use to create access tokens.
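
As a side note, if I ever script this step then Python’s standard library can pull the code straight out of the redirect URL. A quick sketch:

from urllib.parse import urlparse, parse_qs

# The redirect URL returned after selecting Authorize
redirect = "http://localhost/exchange_token?state=&code=CODE9fbb&scope=read,activity:read_all"

# Extract the single-use authorization code from the query string
code = parse_qs(urlparse(redirect).query)['code'][0]
print(code)  # CODE9fbb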

Getting Tokens For API Requests

Next I will use CODE9fbb to request access tokens which Get Activity will accept. This is done via the following cURL request:

curl -X POST https://www.strava.com/api/v3/oauth/token \
  -d client_id=APIAPP-CLIENTID \
  -d client_secret=APIAPP-SECRET \
  -d code=CODE9fbb \
  -d grant_type=authorization_code

Here, client_id and client_secret are from my API application, code is the authorization code CODE9fbb, and grant_type is what I’m asking for – Strava’s Authentication documentation states this must always be authorization_code for initial authentication.
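
The same exchange can also be done from Python with the requests module. A quick sketch using this post’s placeholder values:

import requests

# Exchange the single-use authorization code for tokens
response = requests.post(
    'https://www.strava.com/api/v3/oauth/token',
    data={
        'client_id': 'APIAPP-CLIENTID',
        'client_secret': 'APIAPP-SECRET',
        'code': 'CODE9fbb',
        'grant_type': 'authorization_code'
    }
)
print(response.json())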

Strava then responds to my cURL request with a refresh token, access token, and access token expiration date in Unix Epoch format:

"token_type": "Bearer",
  "expires_at": 1642370007,
  "expires_in": 21600,
  "refresh_token": "REFRESHc8c4",
  "access_token": "ACCESS22e5",

Why two tokens? Access tokens expire six hours after they are created and must be refreshed to maintain access to the desired data. Strava requires the refresh token in order to issue a replacement access token.
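
The expires_at value makes it possible to check whether a refresh is even needed. A minimal sketch, using the expiry from the response above:

import time

expires_at = 1642370007  # from Strava's response above

# Only request a new access token once the current one has expired
if time.time() >= expires_at:
    print('Access token expired - time to refresh it.')
else:
    print('Access token still valid - no refresh needed.')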

Writing The API Request Code

With the tokens now available I can start assembling the Python code for my Strava API requests. I will again be using Visual Studio Code here. I make a new Python virtual environment called StravaAPI by running py -3 -m venv StravaAPI, activate it using StravaAPI\Scripts\activate and run pip install requests to install the module I need. Finally I create an empty StravaAPI.py file in the StravaAPI virtual environment folder for the Python code.

Onto the code. The first part imports the requests module, declares some variables and sets up a request to refresh an expired access token, as detailed in the Strava Authentication documentation:

# Import modules
import requests


# Set Variables
apiapp_clientid = "APIAPP-CLIENTID"
apiapp_secret = 'APIAPP-SECRET'
token_refresh = 'REFRESHc8c4'

# Requesting Access Token
url_oauth = "https://www.strava.com/oauth/token"
payload_oauth = {
	'client_id': apiapp_clientid,
	'client_secret': apiapp_secret,
	'refresh_token': token_refresh,
	'grant_type': "refresh_token",
	'f': 'json'
}

Note this time that the grant_type is refresh_token instead of authorization_code. These variables can then be used by the requests module to send a request to Strava’s API:

print("Requesting Token...\n")
req_access_token = requests.post(url_oauth, data=payload_oauth, verify=False)
print(req_access_token.json())

This request is successful and returns existing tokens ACCESS22e5 and REFRESHc8c4 as they have not yet expired:

Requesting Token...

{'token_type': 'Bearer', 'access_token': 'ACCESS22e5', 'expires_at': 1642370008, 'expires_in': 20208, 'refresh_token': 'REFRESHc8c4'}

A warning is also presented here as my request is not secure:

InsecureRequestWarning: Unverified HTTPS request is being made to host 'www.strava.com'. Adding certificate verification is strongly advised.

The warning includes a link to urllib3 documentation, which states:

Making unverified HTTPS requests is strongly discouraged, however, if you understand the risks and wish to disable these warnings, you can use disable_warnings()

As this code is currently in development, I import the urllib3 module and disable the warnings:

# Import modules
import requests
import urllib3

# Disable Insecure Request Warnings
urllib3.disable_warnings()

Next I extract the access token from Strava’s response into a new token_access variable and print that in the terminal as a process indicator:

print("Requesting Token...\n")
req_access_token = requests.post(url_oauth, data=payload_oauth, verify=False)

token_access = req_access_token.json()['access_token']
print("Access Token = {}\n".format(token_access))

So far the terminal’s output is:

Requesting Token...

Access Token = ACCESS22e5

Let’s get some data! I’m making a call to Get Activities now, so I declare three variables to compose the request, including the token_access variable from earlier:

# Requesting Athlete Activities
url_api_activities = "https://www.strava.com/api/v3/athlete/activities"
header_activities = {'Authorization': 'Bearer ' + token_access}
param_activities = {'per_page': 200, 'page' : 1}

Then I use the requests module to send the request to Strava’s API:

print("Requesting Athlete Activities...\n")
dataset_activities = requests.get(url_api_activities, headers=header_activities, params=param_activities).json()
print(dataset_activities)

And receive data about several recent activities as JSON in return. Success! The full Python code is as follows:

# Import modules
import requests
import urllib3

# Disable Insecure Request Warnings
urllib3.disable_warnings()

# Set Variables
apiapp_clientid = "APIAPP-CLIENTID"
apiapp_secret = 'APIAPP-SECRET'
token_refresh = 'REFRESHc8c4'

# Requesting Access Token
url_oauth = "https://www.strava.com/oauth/token"
payload_oauth = {
	'client_id': apiapp_clientid,
	'client_secret': apiapp_secret,
	'refresh_token': token_refresh,
	'grant_type': "refresh_token",
	'f': 'json'
}

print("Requesting Token...\n")
req_access_token = requests.post(url_oauth, data=payload_oauth, verify=False)

token_access = req_access_token.json()['access_token']
print("Access Token = {}\n".format(token_access))

# Requesting Athlete Activities
url_api_activities = "https://www.strava.com/api/v3/athlete/activities"
header_activities = {'Authorization': 'Bearer ' + token_access}
param_activities = {'per_page': 200, 'page' : 1}
print("Requesting Athlete Activities...\n")
dataset_activities = requests.get(url_api_activities, headers=header_activities, params=param_activities).json()
print(dataset_activities)

But Does It Work?

This only leaves the question of whether the code works when the access token expires. As a reminder, this was Strava’s original response:

Requesting Token...

{'token_type': 'Bearer', 'access_token': 'ACCESS22e5', 'expires_at': 1642370008, 'expires_in': 20208, 'refresh_token': 'REFRESHc8c4'}

Expiry 1642370008 is Sunday, 16 January 2022 21:53:28. I run the code at 22:05 and:

Requesting Token...

{'token_type': 'Bearer', 'access_token': 'ACCESSe0e7', 'expires_at': 1642392321, 'expires_in': 21559, 'refresh_token': 'REFRESHc8c4'}

A new access token! The new expiry 1642392321 is Monday, 17 January 2022 04:05:21. And when I run the code at 09:39:

{'token_type': 'Bearer', 'access_token': 'ACCESS74dd', 'expires_at': 1642433966, 'expires_in': 21600, 'refresh_token': 'REFRESHc8c4'}

A second new access token. All working fine! As long as my refresh token remains valid, I can continue to get new access tokens when the old ones expire.
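
For anyone wanting to check these expiry timestamps themselves, the conversion is quick in Python:

from datetime import datetime, timezone

# Convert Strava's Unix epoch expiry into a readable UTC timestamp
print(datetime.fromtimestamp(1642370008, tz=timezone.utc))
# 2022-01-16 21:53:28+00:00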

Thanks for reading ~~^~~