
AWS API Gateway: Rate Limiting

Updated: June 5, 2023
Written by Rajesh Kanumuru

Table of contents:

  1. Introduction
  2. What to rate-limit?
  3. What is rate limiting?
  4. Why is rate limiting important?
  5. Useful rate limiting strategies
  6. The bottom line

Introduction

As cloud computing gains widespread adoption, businesses need to ensure the security of their cloud-based applications. According to a Radware survey, 70% of enterprises lack confidence in maintaining consistent security across on-premise and multi-cloud systems, even as they deploy multiple security products to protect their cloud apps.

One key area of concern is securing APIs, which allow applications to communicate with each other over the Internet. 

In this blog, we'll take a closer look at securing APIs in AWS using rate limiting, a common method for controlling network traffic and protecting web servers from resource exhaustion by malicious actors.

What to Rate-Limit?

Before implementing rate limiting in your applications, it's important to consider which resources to limit. Generally, it's best to follow the advice of experts like Maarten Balliauw, who suggests "rate limit everything." Your application operates under a "time-sharing model," much like a vacation property, and you don't want one user to hinder the experience of others. Thus, you should apply rate limiting to every endpoint that uses resources that could potentially slow down or even break your application when overwhelmed. 

Every request uses at least the CPU and memory of your server and potentially also involves disk I/O, database queries, external APIs, and more. Therefore, applying rate limiting to every endpoint ensures a fair and stable user experience.

What is Rate Limiting?

Rate limiting is a technique used to control the amount of traffic that can access a particular resource or service over a given period. Its purpose is to prevent malicious actors from overwhelming a web server or network with too many requests in a short period, causing it to slow down or even crash.

By restricting the number of requests made within a specific time frame, rate limiting helps to protect resources and maintain the stability and availability of the system.

Rate limiting can be applied at several levels: the network, the server, or the application.

At the network level, rate limiting is typically used to control the amount of traffic that can flow through a specific port or interface. 

At the server level, rate limiting controls the amount of traffic a server can handle, typically by limiting the number of connections or requests made to the server within a given period.

At an application level, rate limiting can be applied to specific actions or resources, like an API endpoint or user account. This can help prevent system abuse, such as spamming or brute force attacks. 

For example, rate limiting can cap the number of login attempts made from a single IP address within a specific time frame. This prevents automated bots from guessing passwords through repeated login attempts; a minimal sketch of this idea follows.
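Here is an illustrative sketch of that idea in Python: a fixed-window counter keyed by client IP that rejects further attempts once a limit is reached. The limits and function name are hypothetical, and a production version would keep this state in shared storage (such as Redis) rather than in process memory.

    import time
    from collections import defaultdict

    # Illustrative limits -- tune these for your own threat model.
    MAX_ATTEMPTS = 5        # failed logins allowed per window
    WINDOW_SECONDS = 300    # 5-minute fixed window

    # Maps client IP -> (window start time, attempt count).
    _attempts = defaultdict(lambda: (0.0, 0))

    def allow_login_attempt(ip: str) -> bool:
        """Return True if this IP may attempt a login right now."""
        now = time.time()
        window_start, count = _attempts[ip]
        if now - window_start > WINDOW_SECONDS:
            # The window expired: start a fresh one for this IP.
            _attempts[ip] = (now, 1)
            return True
        if count >= MAX_ATTEMPTS:
            return False  # over the limit: reject before touching auth
        _attempts[ip] = (window_start, count + 1)
        return True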

In short, rate limiting is essential for maintaining the stability and security of web servers and networks. By restricting the amount of traffic that can access a particular resource or service, rate limiting helps to prevent malicious actors from overwhelming the system and causing damage or disruption.

Why is Rate Limiting Important?

Rate limiting is a critical aspect of API security. Without rate limiting, an API can be overwhelmed by many requests from a single source or a group of sources. This can lead to downtime, unavailability of the API, or even complete server failure. 

By applying rate limiting, businesses can control the amount of traffic that can access an API, ensure all users have a good experience using the API, and protect their web servers from malicious attacks.

Useful Rate Limiting Strategies

Businesses can apply several practical rate-limiting strategies to their APIs in AWS. Let's take a closer look at three of these strategies:

API Throttling

API throttling is a technique used to control the rate at which individual users or applications can access an API. It limits the number of requests that can be made to an API in a given time period, ensuring that the network is not overwhelmed by too many requests. This strategy ensures all users have a good API experience and can access it without disruptions.
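In AWS API Gateway specifically, per-client throttling is commonly configured through a usage plan attached to an API key. The boto3 sketch below shows the general shape; the API ID, stage name, key ID, and limit values are placeholders, not values from this article.

    import boto3

    apigw = boto3.client("apigateway")

    # Create a usage plan that throttles each associated API key.
    plan = apigw.create_usage_plan(
        name="standard-tier",
        description="100 req/s steady state, bursts up to 200",
        apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder API/stage
        throttle={
            "rateLimit": 100.0,  # average requests per second
            "burstLimit": 200,   # short-term burst capacity
        },
    )

    # Attach an existing API key so the plan's limits apply to it.
    apigw.create_usage_plan_key(
        usagePlanId=plan["id"],
        keyId="yourApiKeyId",  # placeholder key ID
        keyType="API_KEY",
    )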

API Queues

API queues hold requests to an API that can't be processed immediately. This allows businesses to manage the flow of requests and ensure that the API can handle the incoming traffic. Incoming requests are added to a queue until the API is ready to process them. This strategy ensures that the API can handle spikes in traffic without crashing or experiencing downtime.
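On AWS, one way to realize this pattern is to buffer incoming work in a queue such as Amazon SQS and let backend workers drain it at a pace the system can sustain. A simplified sketch, assuming the queue already exists (the queue URL and the process callback are placeholders):

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/api-requests"  # placeholder

    def enqueue_request(payload: dict) -> None:
        """API layer: buffer the request instead of processing it inline."""
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))

    def worker_loop(process) -> None:
        """Backend worker: drain the queue at its own sustainable rate."""
        while True:
            resp = sqs.receive_message(
                QueueUrl=QUEUE_URL,
                MaxNumberOfMessages=10,
                WaitTimeSeconds=20,  # long polling cuts down empty receives
            )
            for msg in resp.get("Messages", []):
                process(json.loads(msg["Body"]))
                # Delete only after successful processing.
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])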

Algorithm-Based API Rate-Limiting

Algorithm-based API rate-limiting uses an algorithm to determine how many requests an API can handle. The algorithm considers factors like the current load on the API, overall capacity, and the rate at which requests are made. This strategy ensures that the API can handle traffic spikes and that all users have a good experience using the API.
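The token bucket is one widely used algorithm of this kind: tokens accumulate at a steady refill rate up to a fixed capacity, and each request spends one token, so short bursts are absorbed while the long-run rate stays bounded. A self-contained Python sketch with illustrative numbers:

    import time

    class TokenBucket:
        """Allow `rate` requests/second on average, with bursts up to `capacity`."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate            # tokens added per second
            self.capacity = capacity    # maximum tokens the bucket holds
            self.tokens = capacity      # start full
            self.last_refill = time.monotonic()

        def allow(self) -> bool:
            """Spend one token if available; otherwise reject the request."""
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at capacity.
            elapsed = now - self.last_refill
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Example: 10 requests/second steady state, bursts of up to 20.
    bucket = TokenBucket(rate=10, capacity=20)
    if bucket.allow():
        ...  # handle the request

More sophisticated variants keep a bucket per client or adjust the refill rate based on the API's current load, which is closer to the adaptive behavior described above.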

The Bottom Line

Rate limiting is a powerful technique that can be used to secure your APIs in AWS. It can help prevent DDoS attacks, control access to specific resources, and manage data flow effectively. By implementing rate limiting, you can ensure that your API is available to all users and prevent it from getting overwhelmed by too many requests.

If you're interested in learning all about AWS security, including rate limiting, custom JWT authorizers, input validation, and mTLS, then you should check out the Essential AWS API Gateway Security course from AppSecEngineer. 

This comprehensive course covers all aspects of API security in AWS. It includes hands-on labs that allow you to practice implementing rate limiting and other security measures in a real-world environment. 

Don't wait; start learning today and take your cloud security skills to the next level with our AWS Security Specialist Bundle!

Rajesh Kanumuru

Rajesh Kanumuru works at we45 as a Cloud Security Lead. Rajesh is a builder and breaker of Cloud applications. He has created some pioneering works in the area of Cloud Security. He is actively researching the effects of emerging technologies on cloud security. Since 2020, Rajesh has mostly been involved with research, development and building solutions around we45 and AppSecEngineer's training offerings. He consults with organizations to help them implement Cloud Security successfully. Rajesh has co-authored and trained a course on Purple Team AWS that was delivered by we45 at BlackHat USA. When AFK, he can be found on the cricket pitch.
