Categories
Developing & Application Integration

Next-Level S3 Notifications With EventBridge

In this post I will use AWS managed services to enhance my S3 user experience with custom EventBridge notifications that are low cost, quick to set up and perform well at scale.

Introduction

I’ve been restoring some S3 Glacier Flexible Retrieval objects lately. I use bulk retrievals to reduce costs – these finish within 5–12 hours. However, on a couple of occasions I’ve totally forgotten about them and almost missed the download deadline!

Having recently set up some alerting, I decided to make a similar setup that will trigger emails at key points in the retrieval process, using the following AWS services:

  • S3 for holding the objects and managing the retrieval process
  • EventBridge for receiving events from S3 and looking for patterns
  • SNS for sending notifications to me

The end result will look like this:

Let’s start with SNS.

SNS: The Notifier

I went into detail about Amazon Simple Notification Service (SNS) in my last post about making some security alerts so feel free to read that if some SNS terms are unfamiliar.

Here I want SNS to send me emails, so I start by making a new standard topic called s3-object-restore. I then create a new subscription with an email endpoint and link it to my new topic.
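
For anyone who prefers scripting to the console, a minimal boto3 sketch of the same setup might look like this (the email address is a placeholder):

import boto3

sns = boto3.client("sns")

# Create a standard topic for the restore notifications
topic = sns.create_topic(Name="s3-object-restore")

# Subscribe an email endpoint to the topic; SNS will send a confirmation email
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="me@example.com",
)

The subscription stays in a pending state until the confirmation link in that email is clicked.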

This completes my SNS setup. Next I need to make some changes to one of my S3 buckets.

S3: The Storage

Amazon S3 stores objects in buckets. A bucket’s properties can be customised to complement its intended purpose. For example, the Default Encryption property enforces encryption of the objects in a bucket holding sensitive data, while the Bucket Versioning property protects objects from accidental changes and deletions.

Here I’m interested in the Event Notifications property. This property sends notifications when certain events occur in the bucket. Examples of S3 events include uploads, deletes and, importantly for this use case, restore requests.

S3 can send events to a number of AWS services including, helpfully, EventBridge! This isn’t on by default but is easily enabled in the bucket’s properties:
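
The console toggle is a single switch, but the same setting can also be applied with boto3, roughly as follows (the bucket name is a placeholder, and note that this call replaces any existing notification configuration on the bucket):

import boto3

s3 = boto3.client("s3")

# Enable delivery of all bucket events to Amazon EventBridge
s3.put_bucket_notification_configuration(
    Bucket="my-glacier-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)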

My bucket will now send events to EventBridge. But what is EventBridge?

EventBridge: The Go-Between

Full disclosure. At first I wasn’t entirely sure what EventBridge was. The AWS description did little to change that:

I tend to uncomplicate topics by abstracting them. Here I found it helpful to think of EventBridge as a bus:

  • Buses provide high-capacity transport between bus stops. The bus is EventBridge.
  • Passengers use the bus to get to where they need to go. The passengers are events.
  • Bus stops are where passengers join or depart the bus. The bus stops are event sources and targets.

In the same way that a bus picks up passengers at one bus stop and drops them off at another, EventBridge receives events from a source and directs them to a target.

Much has been written about EventBridge’s benefits. Rather than spending the next few paragraphs copy/pasting, I will instead suggest the following for further reading:

In this use case, EventBridge’s main advantage is that it is decoupled from S3. This allows one EventBridge Rule to serve many S3 buckets. S3 can send notifications to SNS without EventBridge, but each bucket needs configuring separately so this quickly causes headaches with multiple buckets.

My S3 bucket is already sending events to EventBridge, so let’s create an EventBridge rule for them.

EventBridge Rule: Setting A Pattern & Choosing A Source

Rules allow EventBridge to route events from a source to a target. After naming my new rule s3-object-restore, I need to choose what kind of rule I want:

  • Event Pattern: the rule will be triggered by an event.
  • Schedule: the rule will be triggered by a schedule.

I select Event Pattern. EventBridge then poses further questions to establish what events to look for:

  • Event Matching Pattern: Do I want to use EventBridge presets or write my own pattern?
  • Service Provider: Are the events coming from an AWS service or a third party?
  • Service Name: What service will be the source of events?

EventBridge will only present options relevant to the previous choices. For example, choosing AWS as Service Provider means that no third party services are available in Service Name.

My choices so far tell EventBridge that S3 is the event source:

Next up is Event Type. As EventBridge knows the events are coming from S3, the options here are very specific:

I choose Amazon S3 Event Notification.

EventBridge now knows enough to create a rule, and offers the following JSON as an Event Pattern:

{
  "source": ["aws.s3"],
  "detail-type": ["Object Access Tier Changed", "Object ACL Updated", "Object Created", "Object Deleted", "Object Restore Completed", "Object Restore Expired", "Object Restore Initiated", "Object Storage Class Changed", "Object Tags Added", "Object Tags Deleted"]
}

I’m only interested in restores, so I open the Specific Event(s) list and choose the three Object Restore events:

EventBridge then amends the event pattern to:

{
  "source": ["aws.s3"],
  "detail-type": ["Object Restore Completed", "Object Restore Initiated", "Object Restore Expired"]
}
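
If I were scripting this instead of clicking through the console, a rough boto3 equivalent of the rule so far would be something like:

import boto3
import json

events = boto3.client("events")

# Create the rule on the default event bus using the restore-only pattern
events.put_rule(
    Name="s3-object-restore",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": [
            "Object Restore Initiated",
            "Object Restore Completed",
            "Object Restore Expired",
        ],
    }),
    State="ENABLED",
)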

That’s it for the source. Now EventBridge needs to know what to do when it finds something!

EventBridge Rule: Choosing A Target & Configuring Inputs

One of EventBridge’s big selling points is how it interacts with targets. There are already numerous targets, and EventBridge rules can have more than one.

I select SNS Topic as a target then choose my s3-object-restore SNS topic from the list:

This alone is enough for EventBridge to interact with SNS. When I save this EventBridge rule and trigger it by running an S3 object restore, I receive this email:

Although this is technically a success, some factors aren’t ideal:

  • The formatting of the email is hard to read.
  • There’s a lot of information here, most of which is irrelevant.
  • It’s not immediately clear what this email is telling me.

To address this I can use EventBridge’s Configure Input feature to change what is sent to the target. This feature offers four options:

  • Matched Events: EventBridge passes all of the event text to the target. This is the default.
  • Part Of The Matched Event: EventBridge only sends part of the event text to the target.
  • Constant (JSON text): None of the event text is sent to the target. EventBridge sends user-defined JSON instead.
  • Input Transformer: EventBridge assigns lines of event text as variables, then uses those variables in a template.

Let’s look at the input transformer.

The AWS EventBridge user guide goes into detail about the input transformer and includes a good tutorial. Having consulted these resources, I start by getting the desired JSON from the initial email:

{
  "detail-type": "Object Restore Initiated",
  "source": "aws.s3",
  "time": "2022-02-21T12:51:21Z",
  "detail": {
    "bucket": {"name": "redacted"},
    "object": {"key": "redacted"}
  }
}

Then I convert the JSON into an Input Path:

{
  "bucket": "$.detail.bucket.name",
  "detail-type": "$.detail-type",
  "object": "$.detail.object.key",
  "source": "$.source",
  "time": "$.time"
}

And finally specify an Input Template:

"<source> <detail-type> at <time>. Bucket: <bucket>. Object: <object>"

EventBridge checks input templates before accepting them, and will throw an error if the input template is invalid:

I update my EventBridge rule with the new Input Transformer configuration. Time to test it out!

Testing

When I trigger an S3 object restore I receive this email moments later:

I then receive a second email when the object is ready for download:

"aws.s3 Object Restore Completed at 2022-03-04T00:15:33Z. Bucket: REDACTED. Object: REDACTED"

And a final one when the object expires:

"aws.s3 Object Restore Expired at 2022-03-05T10:12:04Z. Bucket: REDACTED. Object: REDACTED"

Success!
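
For reference, the restores used in these tests were bulk retrievals, which can also be requested with boto3 roughly like this (bucket, key and duration are placeholders):

import boto3

s3 = boto3.client("s3")

# Request a bulk restore of a Glacier Flexible Retrieval object for 7 days
s3.restore_object(
    Bucket="my-glacier-bucket",
    Key="archive/example-backup.bak",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)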

Before moving on, let me share the results of an earlier test. My very first input path (not included here) contained some mistakes. The input template was valid but it couldn’t read the S3 event properly, so I ended up with this:

Something to bear in mind for future rules!

Cost Analysis

Before I wrap up, let’s run through the expected costs with this setup:

  • SNS: the first thousand SNS email notifications every month are included in the AWS Always Free tier, and I’m nowhere near that!
  • S3: There is no charge for S3 passing events to EventBridge. Charges for object storage and retrieval are out of scope for this post.
  • EventBridge: All events published by AWS services are free.

Based on my current use, this setup shouldn’t increase my AWS bill at all.

Summary

In this post I’ve used EventBridge and SNS to produce free bespoke notifications at key points in the S3 object retrieval process. This offers me the following benefits:

  • Reassurance: I can choose the longer S3 retrieval offerings knowing that AWS will keep me updated on progress.
  • Convenience: I will know the status of retrievals without accessing the AWS console or using the CLI.
  • Cost: I am less likely to forget to download retrieved objects before expiry, and therefore less likely to need to retrieve those objects again.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~

Categories
Security & Monitoring

Creating Security Alerts For AWS Console Access

In this post I will use AWS managed services to produce security alerts when attempts are made to access my AWS account’s console.

Introduction

I am currently studying towards the AWS Certified Developer – Associate certification using Stéphane Maarek’s video course and the Tutorials Dojo practice exams. As part of my studies I want to better understand the various AWS monitoring services, and setting up some security alerts is a great way to get some real-world experience of them.

My current security posture is already in line with AWS best practices. For example:

  • I use a password manager and autogenerate passwords so they’re not reused.
  • MFA is enabled on everything offering it.
  • I have created IAM users and roles for all my AWS requirements and never use my root account.

To strengthen this, I will create some security alerts that will notify me when attempts are made to access my AWS console, whether they succeed or fail.

I will be using the following AWS services:

  • IAM for handling authentication requests.
  • CloudTrail for creating log events.
  • CloudWatch for analysing log events and triggering alarms.
  • SNS for sending notifications when alarms are triggered.

The end result will look like this:

Let’s start with IAM.

IAM

AWS Identity and Access Management (IAM) is the AWS tool for managing permissions and access policies. I don’t need to do anything with IAM here, but I include it as IAM is the source of the events that my security alerts will use.

Next I’ll create the setup that AWS will use to send the security alerts.

SNS

Amazon Simple Notification Service (SNS) focuses on delivering notifications from sources to subscribers. It supports many combinations of sources, protocols and endpoints, and here I’m using it to send notifications to me in response to certain AWS events.

To do this, SNS uses Topics for what the notifications are and Subscriptions for where to send them.

SNS Topics

SNS Topics can be heavily customised but here I only need a simple setup. First I choose my topic’s type:

Standard is fine here as I don’t need to worry about message ordering or duplication. I also need a name for the topic. I will use the following naming pattern for the topics and for the security alerts themselves:

  • Action – this will always be signin.
  • Signin Method – this will always be console.
  • Outcome – this will be either failure or success.
  • User Type – this will be either iam or root.

Thus my first topic is named signin-console-failure-iam.

I am creating four security alerts and want each to have a separate SNS topic generating different notifications. A short time later I have created all four successfully:

SNS Subscriptions

Now I need some SNS Subscriptions. These tell SNS where to send notifications.

An SNS Subscription needs the following details:

  • Topic ARN – the Amazon Resource Name of the desired SNS Topic.
  • Protocol – there are several choices including email and SMS.
  • Endpoint – this will depend on the choice of protocol. Here I have selected the Email protocol so SNS requests an email address.

Once an SNS Subscription is created it must be confirmed. Here my endpoint is an email address, so SNS sends this email:

When the owner of the email confirms the subscription, a new window opens displaying the Subscription ID:

I then create further subscriptions for each SNS Topic. The SNS Subscription dashboard updates to show the list of endpoints and the confirmation status of each:

Note that signin-console-success-root has two subscriptions – one email and one SMS. This is because I never use my root account and want the heightened awareness of an SMS!
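
As a rough sketch, the whole topic and subscription setup could also be scripted with boto3 (the email address and phone number are placeholders):

import boto3

sns = boto3.client("sns")

# One topic per alert, following the signin-console-<outcome>-<user type> pattern.
# Every topic gets an email subscription; root sign-in successes also get an SMS.
for outcome in ("failure", "success"):
    for user_type in ("iam", "root"):
        topic = sns.create_topic(Name=f"signin-console-{outcome}-{user_type}")
        sns.subscribe(
            TopicArn=topic["TopicArn"],
            Protocol="email",
            Endpoint="me@example.com",
        )
        if outcome == "success" and user_type == "root":
            sns.subscribe(
                TopicArn=topic["TopicArn"],
                Protocol="sms",
                Endpoint="+441234567890",
            )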

In terms of cost, the first thousand SNS email notifications every month are included in the AWS Always Free tier. Any costs will come from the infrequent SMS notifications.

With the alerts created, let’s start on the events they’ll alert against.

CloudTrail

AWS CloudTrail records user activity and API interactions. These include resource creation, service configuration and, crucially, sign-in activity.

By default, CloudTrail holds 90 days of events that can be viewed, searched and downloaded from the CloudTrail dashboard. A CloudTrail Trail is needed to store events for longer periods or to export them to other AWS services.

CloudTrail Trails

Here my CloudTrail Trail will deliver events from CloudTrail to CloudWatch for analysis and store them in an S3 bucket. I use the AWS CloudTrail documentation to create an events-management trail based in eu-west-1:

This trail is included in the AWS Always Free tier as it is my only one and it only records management events. There will be S3 charges for the objects in the bucket, but this is usually around 30 KB a day, so the cost here is trivial.
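
The documentation walks through the console steps, but for completeness a rough boto3 equivalent might look like this (the trail and bucket names are placeholders, and the bucket needs a policy that allows CloudTrail to write to it):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a single-region trail; a new trail records management events by default
cloudtrail.create_trail(
    Name="events-management",
    S3BucketName="my-cloudtrail-bucket",
    IsMultiRegionTrail=False,
)

# Trails don't record anything until logging is started
cloudtrail.start_logging(Name="events-management")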

CloudTrail Logs

While I’m talking about CloudTrail, let’s look at the log events themselves. The CloudTrail user guide has some examples, so let’s examine one.

This example log event shows that the IAM user Alice used the AWS CLI to create a new user named Bob.

{"Records": [{
    "eventVersion": "1.0",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn:aws:iam::123456789012:user/Alice",
        "accountId": "123456789012",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "userName": "Alice"
    },
    "eventTime": "2014-03-24T21:11:59Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "127.0.0.1",
    "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7",
    "requestParameters": {"userName": "Bob"},
    "responseElements": {"user": {
        "createDate": "Mar 24, 2014 9:11:59 PM",
        "userName": "Bob",
        "arn": "arn:aws:iam::123456789012:user/Bob",
        "path": "/",
        "userId": "EXAMPLEUSERID"
    }}
}]}

Let’s break this down. The first section of the log tells us about Alice:

    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn:aws:iam::123456789012:user/Alice",
        "accountId": "123456789012",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "userName": "Alice"

Her userName is Alice, she is an IAMUser and her accountId is 123456789012. The next part tells us what happened:

    "eventTime": "2014-03-24T21:11:59Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "127.0.0.1",
    "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7"

At 21:11 on 24/03/2014 Alice used the AWS CLI to call iam.amazonaws.com’s CreateUser action. Alice’s IP address was 127.0.0.1 and she was using us-east-2.

Finally the log shows the parameters Alice supplied:

    "requestParameters": {"userName": "Bob"},
    "responseElements": {"user": {
        "createDate": "Mar 24, 2014 9:11:59 PM",
        "userName": "Bob",
        "arn": "arn:aws:iam::123456789012:user/Bob"

Alice provided a userName of Bob, and AWS responded with a createDate and arn for the new user.

Armed with the knowledge of what an event looks like, let’s start analysing some of them!

CloudWatch

Amazon CloudWatch is a service for monitoring, analysing and observing events. CloudWatch has several features for various situations – I’ll be using the following features for my security alerts:

  • Log Groups – collections of streams of events coming from the same source.
  • Metric Filters – filter expressions that are applied to events to create data points for CloudWatch metrics.
  • Alarms – rules that watch CloudWatch metrics and perform actions based on their values.

Let’s start with a log group.

CloudWatch Log Group

As this is a new Log Group, I can configure it from the CloudTrail console by editing the existing events-management trail.

Nothing too complex here. The Log Group needs a name – aws-cloudtrail-logs-signin-console. It also needs an IAM role; this is essential, as without it CloudTrail can’t send the events to CloudWatch. In these situations the console usually offers to create a new role, which I call CloudTrail-CloudWatchLogs-Signin-Console.
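
Behind the scenes this roughly amounts to updating the trail with the log group and role ARNs, which could be sketched in boto3 like this (the account number and ARNs are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Point the existing trail at the new log group, using the role that
# CloudTrail assumes to write events into CloudWatch Logs
cloudtrail.update_trail(
    Name="events-management",
    CloudWatchLogsLogGroupArn="arn:aws:logs:eu-west-1:123456789012:log-group:aws-cloudtrail-logs-signin-console:*",
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/CloudTrail-CloudWatchLogs-Signin-Console",
)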

That’s it! Now that CloudWatch is receiving the logs it needs to know what to look for.

CloudWatch Metric Filters

The Metric Filters are going to look at events in the Log Group and use them to create data points for my alarms. There are two steps to creating a metric filter:

  • A pattern must be defined for the filter.
  • A metric must be assigned to the filter.

Defining A Pattern

CloudWatch needs to know the terms and/or patterns to look for in the Log Group. This is done by specifying a filter pattern, for which there is an AWS user guide about filter and pattern syntax.

For the signin-console-failure-iam alert, the filter will be:

{ $.eventSource = "signin.amazonaws.com" && $.eventName = "ConsoleLogin" && $.responseElements.ConsoleLogin = "Failure" && $.userIdentity.type = "IAMUser" }

To break this down:

For limiting events to sign-ins I use $.eventSource = "signin.amazonaws.com". This will be the same for all filters.

I only want to know about AWS console logins so I add $.eventName = "ConsoleLogin". This will also be the same for all filters.

This filter only cares about failed events, so I need to add $.responseElements.ConsoleLogin = "Failure". This will change to "Success" for other filters.

Finally I want to limit this filter to IAM users, and so add $.userIdentity.type = "IAMUser". This will change to "Root" for other filters.

To aid understanding, this will be the filter for the signin-console-success-root alert:

{ $.eventSource = "signin.amazonaws.com" && $.eventName = "ConsoleLogin" && $.responseElements.ConsoleLogin = "Success" && $.userIdentity.type = "Root" }

Assigning A Metric

CloudWatch now needs to know about the metrics to create. First it needs a name for the new filter, one for the metric itself and one for the namespace that will contain the metric.

Then I need to make some decisions about the metric values, which I set as follows:

Nothing elaborate here – the metric will be one every time a matching event is found and zero otherwise. I originally left the default value blank but this led to some undesirable alarm behaviour, which I will go into later.
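
As a sketch, the filter and its metric could be created together with boto3 along these lines (the filter name, metric name and namespace are my own illustrative choices; the log group name comes from earlier):

import boto3

logs = boto3.client("logs")

FILTER_PATTERN = (
    '{ $.eventSource = "signin.amazonaws.com" && $.eventName = "ConsoleLogin" '
    '&& $.responseElements.ConsoleLogin = "Failure" && $.userIdentity.type = "IAMUser" }'
)

# Emit 1 for every matching event and fall back to 0 when nothing matches,
# so the alarm never has to deal with missing data
logs.put_metric_filter(
    logGroupName="aws-cloudtrail-logs-signin-console",
    filterName="signin-console-failure-iam",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[{
        "metricName": "signin-console-failure-iam",
        "metricNamespace": "SigninAlerts",
        "metricValue": "1",
        "defaultValue": 0,
    }],
)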

Speaking of alarms…

CloudWatch Alarms

At this point CloudWatch understands what I’m looking for but has no way to tell me if it finds anything. The alarms will provide that missing link!

Creating an alarm starts by selecting one of the new metrics and choosing how often CloudWatch will check it. Then I set the conditions the alarm will use:

Here I want the alarm to trigger when the metric is greater than zero.

CloudWatch now needs to know what actions it must perform when an alarm triggers. CloudWatch alarms have three states:

  • OK (Green) – The metric or expression is within the defined threshold.
  • In Alarm (Red) – The metric or expression is outside of the defined threshold.
  • Insufficient Data (Grey) – The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.

Here I tell CloudWatch to notify the signin-console-failure-iam SNS Topic when the corresponding metric is outside of the defined threshold of zero and enters the In Alarm state.

This is why I created the SNS resources first. It makes CloudWatch alarm creation a lot smoother.
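
For anyone scripting this, a minimal boto3 sketch of one alarm might look like the following (the namespace, period and topic ARN are illustrative):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Enter the In Alarm state when the metric's sum rises above zero in a
# five-minute period, and notify the matching SNS topic when that happens
cloudwatch.put_metric_alarm(
    AlarmName="signin-console-failure-iam",
    Namespace="SigninAlerts",
    MetricName="signin-console-failure-iam",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:signin-console-failure-iam"],
)

The same call also accepts a TreatMissingData parameter, which is relevant to the grey alarm discussion below.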

Three additional alarms later I’m all done! As AWS provide ten custom metrics and ten alarms in the Always Free tier, my new CloudWatch setup will be free.

Insufficient Data (Grey) Alarms

Before moving on, let’s talk about grey alarms for a moment.

Grey alarms generally aren’t a problem as far as CloudWatch is concerned, but they don’t look great from a human perspective. Green suggests everything’s fine and red suggests problems, whereas grey is vague: everything could be OK, or there might be problems. Not ideal.

This is why I set my default metric values to zero in the Metric Filters section. When no default value was set, CloudWatch considered the data to be missing and set the alarm state to Insufficient Data unless it was alerting. While this isn’t a problem, the DBA in me will always prefer green states to grey ones!

During an alarm’s configuration, it is possible to change the treatment of missing data:

I did try this out and it did work as expected – the grey alarm turned green when told to treat missing data as good. But this would mean that any missing data would be treated as good. That setting did not fill me with reassurance for this use case!

Does This All Work?

Let’s find out! I have four CloudWatch alarms, each partnered with a different SNS topic. This should mean I get a different notification for each type of event when the alarms trigger.

Here is my CloudWatch Alarms dashboard in its base state with no alarms triggered.

This is the same dashboard with all alarms triggered.

Failing to sign in as an IAM user triggered this email:

While a successful IAM sign-in triggered this one:

Failing to sign in as the root user triggered this email:

And a successful root sign in triggered this email:

And this SMS:

Cost Analysis

The eagle-eyed will have noticed that some of the dates in these screenshots are from early February. I was going to publish this post around that time too, but I wanted to finish my T-SQL Tuesday post first and then had a busy week.

This means I can demonstrate the actual costs of my security alerts though!

These AWS Billing screenshots are from 2022-02-13, so a week after the earlier screenshots. Be aware that, as IAM is always free, it has no entry on the bill.

First of all, CloudTrail and CloudWatch:

As expected, well within the AWS Always Free tier limits and no charges. Next is SNS:

The cost here is for SMS notifications. I’ve triggered two in testing so that’s averaging 0.04 USD each. This cost is acceptable considering the peace of mind it gives – these are the notifications for the successful root account sign-ins!

Summary

In this post I’ve demonstrated how a number of AWS managed services work together to turn a collection of events into meaningful security alerts that give me peace of mind when I’m signed out of my AWS account. I’ve also analysed the costs of my setup and have used various AWS Always Free tier offerings to minimise the impact on my monthly AWS bill.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~

Categories
Architecture & Resilience

Can SQL Upgrades Be Avoided In The Cloud?

In this post I consider the February 2022 T-SQL Tuesday #147 Invitation “Upgrade Strategies” and look at the importance of upgrades in the cloud.

For this month’s T-SQL Tuesday, VoiceOfTheDBA’s invitation was as follows:

This month I want you to write about how you look at SQL Server upgrades. A few things you might think about:

  • Why we wait to upgrade?
  • Strategies for testing an upgrade
  • Smoke tests or other ways to verify the upgrade worked
  • Moving to the cloud to avoid upgrades
  • Using compatibility levels to upgrade an instance but not a database
  • Checklists of things to use in planning
  • The time it takes to upgrade your environment
  • What you evaluate in making a decision to upgrade or not?
  • Anything else

Immediately I was drawn to “Moving to the cloud to avoid upgrades”. Some perceive the cloud as a ‘set it and forget it’ environment. The reality is that cloud services still need upgrades, and leaving those unchecked can lead to security vulnerabilities and data issues.

What follows are some SQL-based observations from my experience to date. While this list is AWS specific, it isn’t AWS exclusive: Azure and GCP operate similar services with similar considerations.

EC2 Upgrades

When I create a new EC2 instance I can generally expect it to be running the latest build of my chosen OS. However, an instance that has been running for a while will soon find itself needing system updates like any other computer. Some updates offer performance improvements or new features and are essentially optional. Others fix security vulnerabilities and bugs and are non-negotiable.

If that instance is running my relational database of choice, that too will need a range of updates from the desirable to the critical. AWS views this as a customer responsibility, with the AWS Shared Responsibility Model including the following:

Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

However the managed services are viewed differently:

For abstracted services, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

So what if I get AWS to run my database for me?

RDS Upgrades

Amazon Relational Database Service (RDS) offers managed relational databases including Microsoft SQL Server, MySQL and PostgreSQL. RDS still uses EC2 instances but here they are managed by AWS and are not accessible by the user. This means OS management is no longer a customer responsibility.

Upgrades to the database engine are still a factor though. AWS try to make this as painless as possible – upgrades can be done using the console, the AWS CLI or the RDS API. It is still largely a manual process, although some minor engine versions can be upgraded automatically.
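
As an illustration, an engine version change via the RDS API could be requested with boto3 roughly like this (the instance identifier and target version are placeholders):

import boto3

rds = boto3.client("rds")

# Request an engine upgrade; with ApplyImmediately=False the change waits
# for the next maintenance window. AutoMinorVersionUpgrade=True would let
# RDS apply eligible minor versions automatically.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sql-server-instance",
    EngineVersion="15.00.4236.7.v1",
    AllowMajorVersionUpgrade=False,
    ApplyImmediately=False,
)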

However, even on rails it’s still possible for an update to go wrong. AWS have a nine-point checklist for testing an upgrade that wouldn’t be out of place on-premises. AWS also encourage database snapshots and non-production testing. While RDS removes infrastructure complexity, the data is still the customer’s responsibility and needs the same care as ever.

Operational Upgrades

AWS constantly release new services intended to simplify workflows and reduce costs. Even when an organisation’s cloud setup is fully mature, it can still benefit from upgrading to these services.

When Athena debuted in 2016 it enabled the analysis of data in S3 using standard SQL. This removed the need for complex ETLs and data warehouses, and with Athena being serverless it was faster to set up and cheaper to operate than EC2, RDS or Redshift.

In 2020 Amazon announced new EBS GP3 volumes. GP3s have separate settings for performance and storage, and are recommended for applications like MySQL that need high performance at low costs. This meant organisations could save money by reducing their use of the more expensive IO1 volumes.

More recently, AWS announced a new S3 Glacier Instant Retrieval storage class in 2021. This made S3 less expensive for a range of use cases including SQL backup storage and data lake archival.

Conclusion

The Cloud offers numerous opportunities for individuals and organisations to develop, build and deploy quicker and easier. But upgrades are a fact of life in technology regardless of platform. The cloud is still a collection of computers, which still need to respond to changing requirements and threats.

A well maintained and fully upgraded cloud environment is reliable, scalable and secure. A poorly maintained one can, at best, be expensive, slow and unwieldy. At worst it can be unreliable, vulnerable and in breach of terms of service.

If you want to check the health of your AWS account, AWS offers their Trusted Advisor and Well-Architected Tool services. These give free architectural advice, security recommendations, cost optimisations and best practice guidance.

If this post has been useful, please feel free to follow me on the following platforms for future updates:

Thanks for reading ~~^~~