
Creating Security Alerts For AWS Console Access

In this post I will use AWS managed services to produce security alerts when attempts are made to access my AWS account’s console.

Introduction

I am currently studying towards the AWS Certified Developer – Associate certification using Stéphane Maarek’s video course and the Tutorials Dojo practice exams. As part of my studies I want to better understand the various AWS monitoring services, and setting up some security alerts is a great way to get some real-world experience of them.

My current security posture is already in line with AWS best practices. For example:

  • I use a password manager and autogenerate passwords so they’re not reused.
  • MFA is enabled on everything offering it.
  • I have created IAM users and roles for all my AWS requirements and never use my root account.

To strengthen this, I will create some security alerts that will notify me when attempts are made to access my AWS console, whether they succeed or fail.

I will be using the following AWS services:

  • IAM for handling authentication requests.
  • CloudTrail for creating log events.
  • CloudWatch for analysing log events and triggering alarms.
  • SNS for sending notifications when alarms are triggered.

The end result will look like this:

Let’s start with IAM.

IAM

AWS Identity and Access Management (IAM) is the AWS tool for managing permissions and access policies. I don’t need to do anything with IAM here, but I include it as IAM is the source of the events that my security alerts will use.

Next I’ll create the setup that AWS will use to send the security alerts.

SNS

Amazon Simple Notification Service (SNS) delivers notifications from publishers to subscribers over a range of protocols. Here I’m using it to send notifications to me in response to certain AWS events.

To do this, SNS uses Topics to define what the notifications are about and Subscriptions to define where they are sent.

SNS Topics

SNS Topics can be heavily customised but here I only need a simple setup. First I choose my topic’s type:

Standard is fine here as I don’t need to worry about message ordering or duplication. I also need a name for the topic. I will use the following naming pattern for the topics and for the security alerts themselves:

  • Action – this will always be signin.
  • Signin Method – this will always be console.
  • Outcome – this will be either failure or success.
  • User Type – this will be either iam or root.

Thus my first topic is named signin-console-failure-iam.
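For illustration, this is roughly how a topic like this could be created with boto3 – a sketch rather than my actual steps, as I used the console:

# Sketch: creating one of the SNS topics with boto3 (I used the console)
import boto3

sns = boto3.client("sns", region_name="eu-west-1")

# A Standard topic - FIFO ordering and deduplication are not needed here
topic = sns.create_topic(Name="signin-console-failure-iam")
print(topic["TopicArn"])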

I am creating four security alerts and want each to have a separate SNS topic generating different notifications. A short time later I have created all four successfully:

SNS Subscriptions

Now I need some SNS Subscriptions. These tell SNS where to send notifications.

An SNS Subscription needs the following details:

  • Topic ARN – the Amazon Resource Name of the desired SNS Topic.
  • Protocol – there are several choices including email and SMS.
  • Endpoint – this will depend on the choice of protocol. Here I have selected the Email protocol so SNS requests an email address.

Once an SNS Subscription is created it must be confirmed. Here my endpoint is an email address, so SNS sends this email:

When the owner of the email confirms the subscription, a new window opens displaying the Subscription ID:

I then create further subscriptions for each SNS Topic. The SNS Subscription dashboard updates to show the list of endpoints and the confirmation status of each:

Note that signin-console-success-root has two subscriptions – one email and one SMS. This is because I never use my root account and want the heightened awareness of an SMS!
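As a rough boto3 sketch of that setup – the topic ARN, email address and phone number below are placeholders:

import boto3

sns = boto3.client("sns", region_name="eu-west-1")

# Placeholder ARN - use the real one from the SNS console
topic_arn = "arn:aws:sns:eu-west-1:123456789012:signin-console-success-root"

# Email subscription - must be confirmed via the email SNS sends out
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="me@example.com")

# SMS subscription - charged per message beyond the free tier
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+447000000000")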

In terms of cost, the first thousand email notifications SNS sends every month are included in the AWS Always Free tier. Any costs will come from the infrequent SMS notifications.

With the alerts created, let’s start on the events they’ll alert against.

CloudTrail

AWS CloudTrail records user activity and API interactions. These include resource creation, service configuration and, crucially, sign-in activity.

By default, CloudTrail holds 90 days of events that can be viewed, searched and downloaded from the CloudTrail dashboard. A CloudTrail Trail is needed to store events for longer periods or to export them to other AWS services.
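This event history can also be queried programmatically. As a quick example, a boto3 sketch along these lines would pull recent ConsoleLogin events from the 90-day history:

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

# Query the default 90-day event history for console sign-in events
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)

for event in response["Events"]:
    print(event["EventTime"], event["EventName"])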

CloudTrail Trails

Here my CloudTrail Trail will be delivering events from CloudTrail to CloudWatch for analysis and storing them in an S3 bucket. I use the AWS CloudTrail documentation to create an events-management trail based in eu-west-1:

This trail is included in the AWS Always Free tier as it is my only one and it only records management events. There will be S3 charges for the objects in the bucket, but the trail only generates around 30 KB a day, so the cost here is trivial.
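For reference, a boto3 sketch of a similar trail might look like this – I used the console, and the bucket name below is a placeholder whose policy must already allow CloudTrail:

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

# Create a management-events trail delivering to an existing S3 bucket
cloudtrail.create_trail(
    Name="events-management",
    S3BucketName="my-cloudtrail-bucket",  # placeholder bucket
    IsMultiRegionTrail=True,
)

# Trails created via the API do not record events until logging is started
cloudtrail.start_logging(Name="events-management")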

CloudTrail Logs

While I’m talking about CloudTrail, let’s look at the log events themselves. The CloudTrail user guide has some examples, so let’s examine one.

This example log event shows that the IAM user Alice used the AWS CLI to create a new user named Bob.

{"Records": [{
    "eventVersion": "1.0",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn:aws:iam::123456789012:user/Alice",
        "accountId": "123456789012",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "userName": "Alice"
    },
    "eventTime": "2014-03-24T21:11:59Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "127.0.0.1",
    "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7",
    "requestParameters": {"userName": "Bob"},
    "responseElements": {"user": {
        "createDate": "Mar 24, 2014 9:11:59 PM",
        "userName": "Bob",
        "arn": "arn:aws:iam::123456789012:user/Bob",
        "path": "/",
        "userId": "EXAMPLEUSERID"
    }}
}]}

Let’s break this down. The first section of the log tells us about Alice:

    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn:aws:iam::123456789012:user/Alice",
        "accountId": "123456789012",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "userName": "Alice"

Her userName is Alice, she is an IAMUser and her accountId is 123456789012. The next part tells us what happened:

    "eventTime": "2014-03-24T21:11:59Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "127.0.0.1",
    "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7"

At 21:11 on 24/03/2014 Alice used the AWS CLI to call iam.amazonaws.com’s CreateUser action. Alice’s IP address was 127.0.0.1 and she was using us-east-2.

Finally the log shows the parameters Alice supplied:

    "requestParameters": {"userName": "Bob"},
    "responseElements": {"user": {
        "createDate": "Mar 24, 2014 9:11:59 PM",
        "userName": "Bob",
        "arn": "arn:aws:iam::123456789012:user/Bob"

Alice provided a userName of Bob, and AWS responded with a createDate and arn for the new user.
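As a quick illustration, these fields can be pulled out of a record with Python’s json module, using a trimmed copy of the example event:

import json

# A trimmed version of the example record above, as a JSON string
raw_event = '''{"Records": [{
    "userIdentity": {"type": "IAMUser", "userName": "Alice"},
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "requestParameters": {"userName": "Bob"}
}]}'''

record = json.loads(raw_event)["Records"][0]
who = record["userIdentity"]["userName"]           # Alice
what = record["eventName"]                         # CreateUser
target = record["requestParameters"]["userName"]   # Bob
print(f"{who} called {what} to create {target}")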

Armed with the knowledge of what an event looks like, let’s start analysing some of them!

CloudWatch

Amazon CloudWatch is a monitoring and observability service for AWS resources and events. CloudWatch has several features for various situations – I’ll be using the following features for my security alerts:

  • Log Groups – collections of streams of events coming from the same source.
  • Metric Filters – filter expressions that are applied to events to create data points for CloudWatch metrics.
  • Alarms – rules that watch CloudWatch metrics and perform actions based on their values.

Let’s start with a log group.

CloudWatch Log Group

As this is a new Log Group, I can configure it from the CloudTrail console by editing the existing events-management trail.

Nothing too complex here. The Log Group needs a name – aws-cloudtrail-logs-signin-console. It also needs an IAM role; this is essential, as without it CloudTrail can’t send the events to CloudWatch. In these situations the console usually offers to create a new role, which I call CloudTrail-CloudWatchLogs-Signin-Console.
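A rough boto3 sketch of this wiring, assuming the IAM role already exists – the ARNs and account ID are placeholders:

import boto3

logs = boto3.client("logs", region_name="eu-west-1")
cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

# Create the Log Group that will receive the CloudTrail events
logs.create_log_group(logGroupName="aws-cloudtrail-logs-signin-console")

# Point the existing trail at the Log Group, using a role CloudTrail can assume
cloudtrail.update_trail(
    Name="events-management",
    CloudWatchLogsLogGroupArn="arn:aws:logs:eu-west-1:123456789012:log-group:aws-cloudtrail-logs-signin-console:*",
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/CloudTrail-CloudWatchLogs-Signin-Console",
)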

That’s it! Now that CloudWatch is receiving the logs it needs to know what to look for.

CloudWatch Metric Filters

The Metric Filters are going to look at events in the Log Group and use them to create data points for my alarms. There are two steps to creating a metric filter:

  • A pattern must be defined for the filter.
  • A metric must be assigned to the filter.

Defining A Pattern

CloudWatch needs to know the terms and/or patterns to look for in the Log Group. This is done by specifying a filter pattern, for which there is an AWS user guide about filter and pattern syntax.

For the signin-console-failure-iam alert, the filter will be:

{ $.eventSource = "signin.amazonaws.com" && $.eventName = "ConsoleLogin" && $.responseElements.ConsoleLogin = "Failure" && $.userIdentity.type = "IAMUser" }

To break this down:

For limiting events to sign-ins I use $.eventSource = "signin.amazonaws.com". This will be the same for all filters.

I only want to know about AWS console logins so I add $.eventName = "ConsoleLogin". This will also be the same for all filters.

This filter only cares about failed events, so I need to add $.responseElements.ConsoleLogin = "Failure". This will change to "Success" for other filters.

Finally I want to limit this filter to IAM users, and so add $.userIdentity.type = "IAMUser". This will change to "Root" for other filters.

To aid understanding, this will be the filter for the signin-console-success-root alert:

{ $.eventSource = "signin.amazonaws.com" && $.eventName = "ConsoleLogin" && $.responseElements.ConsoleLogin = "Success" && $.userIdentity.type = "Root" }

Assigning A Metric

CloudWatch now needs to know about the metrics to create. First it needs a name for the new filter, one for the metric itself and one for the namespace that will contain the metric.

Then I need to make some decisions about the metric values, which I set as follows:

Nothing elaborate here – the metric will be one every time a matching event is found and zero otherwise. I originally left the default value blank but this led to some undesirable alarm behaviour, which I will go into later.
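Putting the pattern and the metric together, a boto3 sketch of this filter might look like the following – the metric namespace is a placeholder of my own:

import boto3

logs = boto3.client("logs", region_name="eu-west-1")

filter_pattern = (
    '{ $.eventSource = "signin.amazonaws.com" && $.eventName = "ConsoleLogin" '
    '&& $.responseElements.ConsoleLogin = "Failure" && $.userIdentity.type = "IAMUser" }'
)

logs.put_metric_filter(
    logGroupName="aws-cloudtrail-logs-signin-console",
    filterName="signin-console-failure-iam",
    filterPattern=filter_pattern,
    metricTransformations=[{
        "metricName": "signin-console-failure-iam",
        "metricNamespace": "SigninAlerts",  # placeholder namespace
        "metricValue": "1",                 # one every time a matching event is found
        "defaultValue": 0.0,                # zero otherwise, keeping the alarm out of the grey state
    }],
)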

Speaking of alarms…

CloudWatch Alarms

At this point CloudWatch understands what I’m looking for but has no way to tell me if it finds anything. The alarms will provide that missing link!

Creating an alarm starts by selecting one of the new metrics and choosing how often CloudWatch will check it. Then I set the conditions the alarm will use:

Here I want the alarm to trigger when the metric is greater than zero.

CloudWatch now needs to know what actions it must perform when an alarm triggers. CloudWatch alarms have three states:

  • OK (Green) – The metric or expression is within the defined threshold.
  • In Alarm (Red) – The metric or expression is outside of the defined threshold.
  • Insufficient Data (Grey) – The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.

Here I tell CloudWatch to notify the signin-console-failure-iam SNS Topic when the corresponding metric is outside of the defined threshold of zero and enters the In Alarm state.
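A boto3 sketch of an equivalent alarm might look like this – the period, statistic and ARNs are assumptions:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="signin-console-failure-iam",
    Namespace="SigninAlerts",                   # placeholder namespace from the metric filter
    MetricName="signin-console-failure-iam",
    Statistic="Sum",
    Period=300,                                 # how often CloudWatch checks the metric (assumed)
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",  # trigger when the metric is greater than zero
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:signin-console-failure-iam"],
)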

This is why I created the SNS resources first. It makes CloudWatch alarm creation a lot smoother.

Three additional alarms later I’m all done! As AWS provide ten custom metrics and ten alarms in the Always Free tier, my new CloudWatch setup will be free.

Insufficient Data (Grey) Alarms

Before moving on, let’s talk about grey alarms for a moment.

Grey alarms generally aren’t a problem in CloudWatch, but they don’t look great from a human perspective. Green suggests everything’s fine and red suggests problems, whereas grey is more vague: everything could be OK, or there might be problems. Not ideal.

This is why I set my default metric values to zero in the Metric Filters section. When no default value was set, CloudWatch considered the data to be missing and set the alarm state to Insufficient Data unless it was alerting. While this isn’t a problem, the DBA in me will always prefer green states to grey ones!

During an alarm’s configuration, it is possible to change the treatment of missing data:

I did try this out and it did work as expected – the grey alarm turned green when told to treat missing data as good. But this would mean that any missing data would be treated as good. That setting did not fill me with reassurance for this use case!

Does This All Work?

Let’s find out! I have four CloudWatch alarms, each partnered with a different SNS topic. This should mean I get a different notification for each type of event when the alarms trigger.

Here is my CloudWatch Alarms dashboard in its base state with no alarms triggered.

This is the same dashboard with all alarms triggered.

Failing to sign in as an IAM user triggered this email:

While a successful IAM sign-in triggered this one:

Failing to sign in as the root user triggered this email:

And a successful root sign in triggered this email:

And this SMS:

Cost Analysis

The eagle-eyed will have noticed that some of the dates in these screenshots are from early February. I was going to publish this post around that time too, but I wanted to finish my T-SQL Tuesday post first and then had a busy week.

This means I can demonstrate the actual costs of my security alerts though!

These AWS Billing screenshots are from 2022-02-13, so a week after the earlier screenshots. Be aware that, as IAM is always free, it has no entry on the bill.

First of all, CloudTrail and CloudWatch:

As expected, well within the AWS Always Free tier limits and no charges. Next is SNS:

The cost here is for SMS notifications. I’ve triggered two in testing so that’s averaging 0.04 USD each. This cost is acceptable considering the peace of mind it gives – these are the notifications for the successful root account sign-ins!

Summary

In this post I’ve demonstrated how a number of AWS managed services work together to turn a collection of events into meaningful security alerts that give me peace of mind when I’m signed out of my AWS account. I’ve also analysed the costs of my setup and have used various AWS Always Free tier offerings to minimise the impact on my monthly AWS bill.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~


Can SQL Upgrades Be Avoided In The Cloud?

In this post I consider the February 2022 T-SQL Tuesday #147 Invitation “Upgrade Strategies” and look at the importance of upgrades in the cloud.

For this month’s T-SQL Tuesday, VoiceOfTheDBA’s invitation was as follows:

This month I want you to write about how you look at SQL Server upgrades. A few things you might think about:

  • Why we wait to upgrade?
  • Strategies for testing an upgrade
  • Smoke tests or other ways to verify the upgrade worked
  • Moving to the cloud to avoid upgrades
  • Using compatibility levels to upgrade an instance but not a database
  • Checklists of things to use in planning
  • The time it takes to upgrade your environment
  • What you evaluate in making a decision to upgrade or not?
  • Anything else

Immediately I was drawn to “Moving to the cloud to avoid upgrades”. Some perceive the cloud as a ‘set it and forget it’ environment. The reality is that cloud services still require upgrades, and neglecting them can lead to security vulnerabilities and data issues.

What follows are some SQL based observations from my experience to date. While it’s true this list is AWS specific, it isn’t AWS exclusive as Azure and GCP operate similar services with similar considerations.

EC2 Upgrades

When I create a new EC2 instance I can generally expect it to be running the latest build of my chosen OS. However, an instance that has been running for a while will soon find itself needing system updates like any other computer. Some updates offer performance improvements or new features and are essentially optional. Others fix security vulnerabilities and bugs and are non-negotiable.

If that instance is running my relational database of choice, that too will need a range of updates from the desirable to the critical. AWS views this as a customer responsibility, with the AWS Shared Responsibility Model including the following:

Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

However the managed services are viewed differently:

For abstracted services, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

So what if I get AWS to run my database for me?

RDS Upgrades

Amazon Relational Database Service (RDS) offers managed relational databases including Microsoft SQL Server, MySQL and PostgreSQL. RDS still uses EC2 instances but here they are managed by AWS and are not accessible by the user. This means OS management is no longer a customer responsibility.

Upgrades to the database engine are still a factor though. AWS try to make this as painless as possible – upgrades can be done using the console, the AWS CLI or the RDS API. It is still a manual process, although some minor engine versions can be upgraded automatically.
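As an example, a boto3 sketch of requesting an engine upgrade might look like this – the instance identifier and target version are placeholders:

import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Request an engine version upgrade, deferred to the next maintenance window
rds.modify_db_instance(
    DBInstanceIdentifier="my-sql-instance",  # placeholder
    EngineVersion="15.00.4236.7.v1",         # placeholder target version
    ApplyImmediately=False,
)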

However, even on rails it’s still possible for an update to go wrong. AWS have a nine-point checklist for testing an upgrade that wouldn’t be out of place on-premises. AWS also encourage database snapshots and non-production testing. While RDS removes infrastructure complexity, the data is still the customer’s responsibility and needs the same care as ever.

Operational Upgrades

AWS constantly release new services intended to simplify workflows and reduce costs. Even when an organisation’s cloud setup is fully mature, it can still benefit from upgrading to these services.

When Athena debuted in 2016 it enabled the analysis of data in S3 using standard SQL. This removed the need for complex ETLs and data warehouses, and with Athena being serverless it was faster to set up and cheaper to operate than EC2, RDS or Redshift.

In 2020 Amazon announced new EBS GP3 volumes. GP3s have separate settings for performance and storage, and are recommended for applications like MySQL that need high performance at low costs. This meant organisations could save money by reducing their use of the more expensive IO1 volumes.

More recently, AWS announced a new S3 Glacier Instant Retrieval storage class in 2021. This made S3 less expensive for a range of use cases including SQL backup storage and data lake archival.

Conclusion

The Cloud offers numerous opportunities for individuals and organisations to develop, build and deploy quicker and easier. But upgrades are a fact of life in technology regardless of platform. The cloud is still a collection of computers, which still need to respond to changing requirements and threats.

A well maintained and fully upgraded cloud environment is reliable, scalable and secure. A poorly maintained one can, at best, be expensive, slow and unwieldy. At worst it can be unreliable, vulnerable and in breach of terms of service.

If you want to check the health of your AWS account, AWS offers their Trusted Advisor and Well-Architected Tool services. These give free architectural advice, security recommendations, cost optimisations and best practice guidance.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~


Re-Runnable Strava API Calls Using Python

In this post I make my existing Python code re-runnable by enabling it to replace expired access tokens when it sends requests to the Strava API.

A couple of posts ago I wrote about authenticating Strava API calls. I ended up successfully requesting data using this Python code:

import requests

activities_url = "https://www.strava.com/api/v3/athlete/activities" 

header = {'Authorization': 'Bearer ' + "access_token"}
param = {'per_page': 200, 'page': 1}

my_dataset = requests.get(activities_url, headers=header, params=param).json()

print(my_dataset)

Although successful, this code is of limited use as it stops working when the header’s access_token expires. Ideally the code should keep working once Strava grants the initial authorisation, which is what I’m exploring here. The last post was also unclear in places, so this one will hopefully tie up some loose ends.

Please note that I have altered or removed all sensitive codes and tokens in this post in the interests of security.

The Story So Far

First of all, some reminders. Strava uses OAuth 2.0 for authentication, and this is a typical OAuth 2.0 workflow:


I am sending GET requests to Strava via Get Activity. Strava’s documentation for this is as follows:


Finally, during my initial setup I created an API Application on the Strava site. Strava provided these details upon completion:


Authorizing My App To View Data

Strava’s Getting Started page explains that they require authentication via OAuth 2.0 for data requests and gives the following link for that process:

http://www.strava.com/oauth/authorize?client_id=[REPLACE_WITH_YOUR_CLIENT_ID]&response_type=code&redirect_uri=http://localhost/exchange_token&approval_prompt=force&scope=read

I must amend this URL as scope=read is insufficient for Get Activity requests. The end of the URL becomes scope=activity:read_all and the updated URL loads a Strava authorization screen:


Selecting Authorize gives the following response:

http://localhost/exchange_token?state=&code=CODE9fbb&scope=read,activity:read_all

Where code=CODE9fbb is a single-use authorization code that I will use to create access tokens.

Getting Tokens For API Requests

Next I will use CODE9fbb to request access tokens which Get Activity will accept. This is done via the following cURL request:

curl -X POST https://www.strava.com/api/v3/oauth/token \
  -d client_id=APIAPP-CLIENTID \
  -d client_secret=APIAPP-SECRET \
  -d code=CODE9fbb \
  -d grant_type=authorization_code

Here, client_id and client_secret are from my API application, code is the authorization code CODE9fbb and grant_type is what I’m asking for – Strava’s Authentication documentation states this must always be authorization_code for initial authentication.

Strava then responds to my cURL request with a refresh token, access token, and access token expiration date in Unix Epoch format:

"token_type": "Bearer",
  "expires_at": 1642370007,
  "expires_in": 21600,
  "refresh_token": "REFRESHc8c4",
  "access_token": "ACCESS22e5",

Why two tokens? Access tokens expire six hours after they are created and must be refreshed to maintain access to the desired data. The refresh token is what Strava accepts when requesting a replacement access token.
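The expires_at value is a Unix timestamp; a quick way to make it readable in Python:

from datetime import datetime, timezone

expires_at = 1642370007
print(datetime.fromtimestamp(expires_at, tz=timezone.utc))
# 2022-01-16 21:53:27+00:00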

Writing The API Request Code

With the tokens now available I can start assembling the Python code for my Strava API requests. I will again be using Visual Studio Code here. I make a new Python virtual environment called StravaAPI by running py -3 -m venv StravaAPI, activate it using StravaAPI\Scripts\activate and run pip install requests to install the module I need. Finally I create an empty StravaAPI.py file in the StravaAPI virtual environment folder for the Python code.

Onto the code. The first part imports the requests module, declares some variables and sets up a request to refresh an expired access token, as detailed in the Strava Authentication documentation:

# Import modules
import requests


# Set Variables
apiapp_clientid = "APIAPP-CLIENTID"
apiapp_secret = 'APIAPP-SECRET'
token_refresh = 'REFRESHc8c4'

# Requesting Access Token
url_oauth = "https://www.strava.com/oauth/token"
payload_oauth = {
	'client_id': apiapp_clientid,
	'client_secret': apiapp_secret,
	'refresh_token': token_refresh,
	'grant_type': "refresh_token",
	'f': 'json'
}

Note this time that the grant_type is refresh_token instead of authorization_code. These variables can then be used by the requests module to send a request to Strava’s API:

print("Requesting Token...\n")
req_access_token = requests.post(url_oauth, data=payload_oauth, verify=False)
print(req_access_token.json())

This request is successful and returns existing tokens ACCESS22e5 and REFRESHc8c4 as they have not yet expired:

Requesting Token...

{'token_type': 'Bearer', 'access_token': 'ACCESS22e5', 'expires_at': 1642370008, 'expires_in': 20208, 'refresh_token': 'REFRESHc8c4'}

A warning is also presented here as my request is not secure:

InsecureRequestWarning: Unverified HTTPS request is being made to host 'www.strava.com'. Adding certificate verification is strongly advised.

The warning includes a link to urllib3 documentation, which states:

Making unverified HTTPS requests is strongly discouraged, however, if you understand the risks and wish to disable these warnings, you can use disable_warnings()

As this code is currently in development, I import the urllib3 module and disable the warnings:

# Import modules
import requests
import urllib3

# Disable Insecure Request Warnings
urllib3.disable_warnings()

Next I extract the access token from Strava’s response into a new token_access variable and print that in the terminal as a process indicator:

print("Requesting Token...\n")
req_access_token = requests.post(url_oauth, data=payload_oauth, verify=False)

token_access = req_access_token.json()['access_token']
print("Access Token = {}\n".format(token_access))

So far the terminal’s output is:

Requesting Token...

Access Token = ACCESS22e5

Let’s get some data! I’m making a call to Get Activities now, so I declare three variables to compose the request and include the token_access variable from earlier:

# Requesting Athlete Activities
url_api_activities = "https://www.strava.com/api/v3/athlete/activities"
header_activities = {'Authorization': 'Bearer ' + token_access}
param_activities = {'per_page': 200, 'page' : 1}

Then I use the requests module to send the request to Strava’s API:

print("Requesting Athlete Activities...\n")
dataset_activities = requests.get(url_api_activities, headers=header_activities, params=param_activities).json()
print(dataset_activities)

And receive data about several recent activities as JSON in return. Success! The full Python code is as follows:

# Import modules
import requests
import urllib3

# Disable Insecure Request Warnings
urllib3.disable_warnings()

# Set Variables
apiapp_clientid = "APIAPP-CLIENTID"
apiapp_secret = 'APIAPP-SECRET'
token_refresh = 'REFRESHc8c4'

# Requesting Access Token
url_oauth = "https://www.strava.com/oauth/token"
payload_oauth = {
	'client_id': apiapp_clientid,
	'client_secret': apiapp_secret,
	'refresh_token': token_refresh,
	'grant_type': "refresh_token",
	'f': 'json'
}

print("Requesting Token...\n")
req_access_token = requests.post(url_oauth, data=payload_oauth, verify=False)

token_access = req_access_token.json()['access_token']
print("Access Token = {}\n".format(token_access))

# Requesting Athlete Activities
url_api_activities = "https://www.strava.com/api/v3/athlete/activities"
header_activities = {'Authorization': 'Bearer ' + token_access}
param_activities = {'per_page': 200, 'page' : 1}
print("Requesting Athlete Activities...\n")
dataset_activities = requests.get(url_api_activities, headers=header_activities, params=param_activities).json()
print(dataset_activities)

But Does It Work?

This only leaves the question of whether the code works when the access token expires. As a reminder this was Strava’s original response:

Requesting Token...

{'token_type': 'Bearer', 'access_token': 'ACCESS22e5', 'expires_at': 1642370008, 'expires_in': 20208, 'refresh_token': 'REFRESHc8c4'}

Expiry 1642370008 is Sunday, 16 January 2022 21:53:28. I run the code at 22:05 and:

Requesting Token...

{'token_type': 'Bearer', 'access_token': 'ACCESSe0e7', 'expires_at': 1642392321, 'expires_in': 21559, 'refresh_token': 'REFRESHc8c4'}

A new access token! The new expiry 1642392321 is Monday, 17 January 2022 04:05:21. And when I run the code at 09:39:

{'token_type': 'Bearer', 'access_token': 'ACCESS74dd', 'expires_at': 1642433966, 'expires_in': 21600, 'refresh_token': 'REFRESHc8c4'}

A second new access token. All working fine! As long as my refresh token remains valid I can continue to get new access tokens when the old ones expire.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~