When operating a website or an application, you may have encountered bad bots or abusive clients trying to flood it with requests, often in the form of brute-force attacks, DDoS attacks, or aggressive web scraping.
To prevent these kinds of actions, I want to share a use case where a combination of Nginx rate limiting and Fail2Ban helped me block a flood of spam POST requests.
What are Rate Limiting and Fail2Ban?
Just a quick summary for those who aren’t familiar with these two terms.
Rate limiting is a technique that limits the number of requests or actions sent to a system, application, or website. It’s a common approach, a silent hero that quietly protects your system. In this blog, I have a website running on an Nginx server, and Nginx has its own rate limiting feature, which I have enabled for my website.
Fail2Ban is an open-source intrusion prevention system that scans log files (e.g. /var/log/nginx/access.log), identifies IP addresses making many failed access attempts, and bans them based on a set of rules we define.
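To make the idea concrete, here is a toy Python sketch of Fail2Ban's core loop (the function, inputs, and thresholds are all illustrative; real Fail2Ban parses log files with configurable regexes and manipulates actual firewall rules):

```python
from collections import defaultdict

def find_ips_to_ban(events, maxretry, findtime):
    """Ban any IP that produces at least `maxretry` failures within a
    `findtime`-second window. `events` is a list of (timestamp, ip)
    pairs already parsed out of a log file."""
    recent = defaultdict(list)   # ip -> timestamps of recent failures
    banned = set()
    for ts, ip in sorted(events):
        recent[ip].append(ts)
        # drop failures that fell out of the sliding findtime window
        recent[ip] = [t for t in recent[ip] if ts - t <= findtime]
        if len(recent[ip]) >= maxretry:
            banned.add(ip)
    return banned

# One IP fails 5 times within 50 seconds, another only twice
events = [(t, "123.123.123.123") for t in range(0, 50, 10)]
events += [(0, "10.0.0.7"), (30, "10.0.0.7")]
print(find_ips_to_ban(events, maxretry=5, findtime=60))  # → {'123.123.123.123'}
```

The maxretry/findtime pairing in this sketch maps directly to the jail parameters we will configure later.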
Sample Situation
An internal user of the company, while performing routine checks on our website, inadvertently submitted hundreds of duplicate POST requests to an internal URL because the Enter key on their keyboard got stuck. Each POST request runs a complex query against our database. As a consequence of this significant strain on our web server and database, the entire website crashed. Checking the Nginx access logs revealed many 499 status codes, which mean the client closed the connection before the server could complete the response.
Current Rate Limit setting:
- We already had a rate limiting rule in place, set at 15 requests per second with a burst of 10 requests, and it did block about 4,500 of the 19,000 requests made over 18 minutes. Still, a massive number of requests got through, and because each one triggered a complex SQL query, the database load became a separate problem of its own.
We know there are many ways to prevent the form submission issue itself, but due to other complexities of the system that I won’t go into here, we still preferred Nginx rate limiting + Fail2Ban in this case.
Solution
We decided to solve the problem with two approaches:
- Rate limiting: set a new Nginx rate limiting rule with a rate of 5r/s and a burst threshold of 10, applied to POST requests only. Although this reduced the spam POST requests significantly, it couldn’t completely address the problem. The rule lets other users perform normal operations without being limited, but a small number of POST requests within the threshold still hit the database with complex queries, and our database could not cope. That’s why we additionally needed Fail2Ban.
- Fail2Ban: for the remaining POST requests that get through the rate limit, if requests from an IP return the 499 error code more than 50 times, that IP should be banned. Combining the two approaches completely prevents the database and web server from crashing under spam POST requests.
Apply Nginx Rate Limiting
To configure Nginx rate limiting, we basically have two directives: limit_req_zone and limit_req.
- limit_req_zone: defines the parameters of a rate limiting rule so that we can apply it to a context using limit_req.
- limit_req: applies the rate limiting rule configured in limit_req_zone to a particular context, e.g. to all requests in location /update { }.
In our default Nginx configuration path, we have many *.conf files in /etc/nginx/conf.d/ that are included in the http { ... } block. For example:
http {
...
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*.conf;
}

So, I created /etc/nginx/conf.d/limit_req.conf and added this content:
# The map module lets us create a new variable named $rate_limit_post_req
# that stores the client's binary IP address when the request method is POST.
map $request_method $rate_limit_post_req {
POST $binary_remote_addr;
default "";
}
# Other map modules
...
# Create an 8-megabyte zone called "post_req" that stores the value of
# the $rate_limit_post_req variable. This "post_req" zone allows
# processing 5 requests per second on average.
limit_req_zone $rate_limit_post_req zone=post_req:8m rate=5r/s;
# By default, Nginx returns the 503 error code (which is too general) when
# requests start being rejected for exceeding the threshold, here "rate=5r/s".
# As a best practice, we may want to change it to error code 429
# (Too Many Requests) so that we can identify errors caused by
# rate limiting.
limit_req_status 429;

In my case, the incident produced error code 499, but we don’t set limit_req_status to 499 since we understand its meaning and origin; we prefer the standard 429 code for requests blocked by rate limiting.
To actually apply limit_req_zone to our context, we have to reference it in the server { } block defined in the /etc/nginx/sites-enabled/*.conf files. In my case, I need to apply it to a specific location { } block inside the server { } block, e.g. in /etc/nginx/sites-enabled/my-example-domain.com.conf. Open the file and edit it like this:
server {
listen 443 ssl;
server_name my-example-domain.com;
# SSL Configuration
ssl_certificate /path/to/certs/fullchain.pem;
ssl_certificate_key /path/to/certs/privkey.pem;
# Other configuration
....
# Apply rate limiting configured in "post_req" zone to the POST request
# submitting to e.g. /advanced_search path
location /advanced_search {
...
# We use "limit_req" directive to apply zone "post_req" for this location
limit_req zone=post_req burst=10 nodelay;
...
}
}

Where:
- burst: defines how many requests can be queued before Nginx starts returning the 503 error by default (in my setting, I changed it to status code 429).
- nodelay: forwards a request to the upstream immediately instead of queueing it, but still marks a slot in the queue as “taken” and only frees it after the appropriate time has passed.
I would suggest reading the official Nginx blog post on rate limiting to understand burst and nodelay in more depth.
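To make the burst + nodelay semantics concrete with numbers, here is a rough Python simulation I sketched (illustrative only, not Nginx's actual implementation; it models the commonly described behaviour that, from an idle state, burst + 1 simultaneous requests pass and the rest are rejected):

```python
def simulate_nodelay(request_times, rate, burst):
    """Toy model of limit_req with nodelay: an incoming request takes one
    of `burst + 1` slots (the in-flight request plus the burst queue);
    slots drain back at `rate` per second. A request that finds no free
    slot is rejected immediately (429 with limit_req_status 429)."""
    capacity = burst + 1
    used = 0.0
    last_t = 0.0
    statuses = []
    for t in request_times:  # request_times must be sorted ascending
        used = max(0.0, used - (t - last_t) * rate)  # slots freed since last request
        last_t = t
        if used + 1 <= capacity + 1e-9:
            used += 1
            statuses.append("ok")
        else:
            statuses.append("rejected")
    return statuses

# 20 requests arriving at the same instant against rate=5r/s, burst=10:
res = simulate_nodelay([0.0] * 20, rate=5, burst=10)
print(res.count("ok"), res.count("rejected"))  # → 11 9
```

With the same parameters, 20 requests evenly spaced at 0.2 s (exactly 5r/s) would all pass, which is why normal users are not affected by the rule.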
After that, we just need to test the Nginx config and reload the Nginx service. Simply run:
sudo nginx -t && sudo systemctl reload nginx.service
Apply Fail2Ban
First, install Fail2Ban on your Nginx server. Run:
sudo apt install fail2ban
Create a jail rule to monitor the nginx access log file. Run sudo nano /etc/fail2ban/jail.d/nginx-status-499.conf and add the following content:
[nginx-status-499]
enabled = true
action = %(action_mb)s
filter = nginx-status-499
logpath = /var/log/nginx/access.log
maxretry = 50
findtime = 60
bantime = 120
port = http,https

Press Ctrl + O and press Enter to save the content. Then press Ctrl + X to exit.
Where:
- enabled: makes this jail active.
- action: uses the default action_mb action defined in /etc/fail2ban/jail.conf. It simply bans the IP and sends an e-mail with a buffered report to the destemail address.
- filter: the name of the filter this jail should use. My custom filter is named “nginx-status-499”; I’ll create it shortly.
- logpath: specifies which log file(s) Fail2Ban should monitor. It could cover many files, like /var/log/nginx/*access.log; my config is just a basic example.
- maxretry: how many times requests can be rejected with the 499 error before a host gets banned. This parameter works together with findtime below.
- findtime: the time window in which the number of failures defined in maxretry must occur. Setting it to 60 means that if an IP hits the 499 error 50 times within 60 seconds, it gets banned.
- bantime: how long a host stays banned; 120 means the IP is banned for 120 seconds.
- port: the ports on which traffic is banned. My config bans 80 and 443 (http, https).
You could create a similar rule for error code 429 as well; just modify the configuration according to what you want to block.
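As an illustration, a hypothetical filter for rate-limited 429 responses could look like the snippet below (the file name /etc/fail2ban/filter.d/nginx-rate-limited.conf and the regex are my own placeholders; adapt them to your log format):

# sudo nano /etc/fail2ban/filter.d/nginx-rate-limited.conf
[Definition]
failregex = ^<HOST>.*"(POST).*" (429) .*$
ignoreregex =

A matching jail section with filter = nginx-rate-limited would then activate it, just like the nginx-status-499 jail above.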
Create a filter rule to identify the error format in the log. Run sudo nano /etc/fail2ban/filter.d/nginx-status-499.conf and add the following content.
[Definition]
failregex = ^<HOST>.*"(POST).*" (499) .*$
ignoreregex =

Press Ctrl + O and press Enter to save the content. Then press Ctrl + X to exit.
The failregex tries to match lines in /var/log/nginx/access.log against the defined pattern. This filter is referenced from /etc/fail2ban/jail.d/nginx-status-499.conf.
This filter rule is custom because it depends on how my Nginx log format is set up; you might need to change it to match your configuration. You can look at the default filter at /etc/fail2ban/filter.d/nginx-limit-req.conf to see a template failregex.
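Before going further, you can sanity-check a failregex. Fail2Ban ships the fail2ban-regex tool for exactly this (e.g. sudo fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-status-499.conf), but a quick Python check against a sample line also works. Note that the sample line below assumes Nginx's default "combined" log format, and the IPv4 pattern substituted for <HOST> is a simplification of what Fail2Ban really uses:

```python
import re

# Fail2Ban expands <HOST> into its own host-matching pattern; for a quick
# local check we substitute a simple IPv4 pattern (an approximation).
failregex = r'^<HOST>.*"(POST).*" (499) .*$'
pattern = failregex.replace("<HOST>", r"\d{1,3}(?:\.\d{1,3}){3}")

# Hypothetical access.log line in Nginx's default "combined" format (assumption)
line = ('123.123.123.123 - - [31/May/2025:21:14:48 +0000] '
        '"POST /advanced_search HTTP/1.1" 499 0 "-" "Mozilla/5.0"')

print(bool(re.match(pattern, line)))  # → True
```

If this prints False against a real line from your own access.log, your log_format differs and the failregex needs adjusting.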
Next, create a sendmail action for the nginx-status-499 jail to send an email notification when a request is banned. Run sudo nano /etc/fail2ban/action.d/sendmail.conf and add the following content:
[INCLUDES]
before = sendmail-common.conf
[Definition]
# bypass ban/unban for restored tickets
norestored = 1
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
# command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from <fq-hostname>
Date: `LC_ALL=C date +"%%a, %%d %%h %%Y %%T %%z"`
From: <sendername> <<sender>>
To: <dest>\n
Hi,\n
The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n
Regards,\n
Fail2Ban" | <mailcmd>
[Init]
# Default name of the chain
#
name = default

Press Ctrl + O and press Enter to save the content. Then press Ctrl + X to exit.
And then we configure the sender and destemail settings in /etc/fail2ban/jail.conf, run sudo nano /etc/fail2ban/jail.conf
Find the lines that contain:
destemail = ...
sender = ...

Change them to:
destemail = email-used-to-receive@example.com
sender = email-used-to-send@example.com

Press Ctrl + O and press Enter to save the content. Then press Ctrl + X to exit.
Restart the fail2ban service to make the changes take effect:
sudo systemctl restart fail2ban
Once fail2ban has started, you can view its log at /var/log/fail2ban.log. If you notice lines like the ones below:
2025-05-31 21:14:48,668 fail2ban.filter [3940735]: INFO [nginx-status-499] Found 123.123.123.123 - 2025-05-31 21:14:48
2025-05-31 21:14:58,834 fail2ban.filter [3940735]: INFO [nginx-status-499] Found 123.123.123.123 - 2025-05-31 21:14:58
...
2025-05-31 21:31:05,248 fail2ban.filter [3940735]: INFO [nginx-status-499] Found 123.123.123.123 - 2025-05-31 21:31:05
2025-05-31 21:31:10,209 fail2ban.filter [3940735]: INFO [nginx-status-499] Found 123.123.123.123 - 2025-05-31 21:31:09

it indicates that the example IP 123.123.123.123 has been caught with failed attempts matching the nginx-status-499 jail. Once it exceeds 50 hits within 60 seconds, requests from this IP are banned for 120 seconds.
That’s how we prevent users from spamming our web and database servers.
Conclusion
Rate limiting is a very powerful technique that we should consider applying to any website, API, or application. It can also be a cheaper alternative to AWS WAF if you run an Nginx server on AWS and cost is a concern.
As for Fail2Ban, I think we should be careful using it in production or on public, customer-facing platforms: if applied incorrectly, it will likely block legitimate customers from accessing the website. Please do your own research before applying this technique.
I hope the technique I share in this blog helps you protect your system, or at least gives you a grasp of how the method works so that you have a clue for further research. Thank you for reading.
References:
- https://www.cloudflare.com/learning/bots/what-is-rate-limiting/
- https://blog.nginx.org/blog/rate-limiting-nginx