The ability to send a large number of emails without maintaining your own email delivery server.
We've implemented the hooks delivery system with Amazon Simple Notification Service (SNS) and a proxy server that processes, subscribes, and redirects requests. Little coding was required, and responsibility for webhook delivery is shifted to an external service.
To achieve the project objectives, we've considered several implementation options:
We have the following components that are responsible for events in our system:
The simplest way to inform a 3rd-party API is to make an HTTP request to the hook URL directly from the consumer, as displayed in the following diagram:
Accordingly, we came to the conclusion that this approach would not give us the expected results or satisfy our needs in a comprehensive manner.
Therefore, we started to consider other options.
We use a lot of AWS services at the moment and already have a bunch of SQS message queues. The main idea of this solution is to use a separate queue for hooks (webhooks) and a consumer to process this queue. Take a look at the schema that shows how the SQS queue works:
In this manner, our main app flow does not depend on the working state of an external app, and we are able to implement retries via the SQS queue.
The flow of a retry is as follows:
The flow is displayed in the diagram below:
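The re-enqueue step of this retry flow might be sketched as below; the backoff schedule, attribute names, and attempt limit are assumptions for illustration, not our production values:

```javascript
const MAX_ATTEMPTS = 5;

// On a failed delivery, the consumer re-enqueues the message with an
// increasing DelaySeconds, so the next attempt happens later. The attempt
// count travels in a message attribute.
function nextRetryParams(queueUrl, hook, attempt) {
  if (attempt >= MAX_ATTEMPTS) return null; // give up, e.g. move to a dead-letter queue
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(hook),
    // SQS caps DelaySeconds at 900 (15 minutes)
    DelaySeconds: Math.min(60 * 2 ** attempt, 900),
    MessageAttributes: {
      attempt: { DataType: 'Number', StringValue: String(attempt + 1) },
    },
  };
}
```

In the consumer, the returned object would be passed to `sqs.sendMessage(...)`; a `null` result means the message should stop retrying.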
This option was way better; however, not all of our primary goals would be met by an implementation via an SQS queue.
Another way to implement webhooks delivery is by using AWS SNS. The service was developed for delivering notifications and can send them via HTTP, the same way webhooks do. Also, if notification delivery fails, SNS is responsible for retrying it, and we can use a linear or exponential backoff function to manage retry frequency.
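The retry behavior is configured through the subscription's DeliveryPolicy attribute. A sketch of such a policy follows; the field names match the SNS format, but the concrete numbers are illustrative, not our production settings:

```javascript
// Retry policy for an HTTP/S subscription: 10 retries, with the delay
// between attempts growing linearly from 5 to 300 seconds.
const deliveryPolicy = {
  healthyRetryPolicy: {
    numRetries: 10,
    minDelayTarget: 5,         // seconds before the first retry
    maxDelayTarget: 300,       // ceiling for the delay between retries
    backoffFunction: 'linear', // or 'arithmetic', 'geometric', 'exponential'
  },
};

// Params for sns.setSubscriptionAttributes(...) on the hook subscription.
function retryPolicyParams(subscriptionArn) {
  return {
    SubscriptionArn: subscriptionArn,
    AttributeName: 'DeliveryPolicy',
    AttributeValue: JSON.stringify(deliveryPolicy),
  };
}
```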
This solution would give us the desired outcomes; however, it cannot be applied directly for two reasons:
SNS was designed for communication between different components within one system, not for communication among components of different systems with different owners. Hence, we need a proxy component inside our system that SNS can communicate with.
Let's take a look at the schema of the proposed webhooks implementation using SNS:
Flow of hooks delivery using SNS:
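The subscription step of this flow might be sketched as below: the proxy's public endpoint is subscribed to the SNS topic, with the subscriber's real hook URL encoded in the query string. The topic ARN, proxy host, path, and parameter name are assumptions for illustration:

```javascript
// Build params for sns.subscribe(...): SNS will deliver notifications to
// the proxy, which reads the real destination out of the `target` parameter.
function subscribeParams(topicArn, proxyBase, hookUrl) {
  const endpoint = `${proxyBase}/relay?target=${encodeURIComponent(hookUrl)}`;
  return {
    TopicArn: topicArn,
    Protocol: endpoint.startsWith('https') ? 'https' : 'http',
    Endpoint: endpoint,
  };
}
```

Because the destination rides in the endpoint URL itself, the proxy needs no lookup table to know where to forward each notification.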
The schema of the proxy API is shown below:
After considering each implementation option in terms of the benefits it would bring and the resources required for its execution, we chose webhooks delivery via SNS notifications. With SNS and a simple proxy server, we've implemented the hooks delivery system with minimum coding involved and shifted the responsibility for webhook delivery to an external service. As a result, we are able to send about 20,000 events per minute without any scaling, which is even higher than the planned 10,000 events per minute. The major bottleneck is the proxy, but we designed it for scaling, so throughput can easily be increased.
We wrote our proxy on the Node.js platform. The proxy is hosted on an EC2 t2.small instance and can process about 4,000 concurrent requests with a throughput of about 400 req/sec. The proxy doesn't interact with the database at all, since all data for redirects is passed in URLs. This gives us the option of horizontal scaling: if we place our proxy behind a load balancer, we will be able to add more server instances and, as a result, obtain twice the throughput.
Vertical scaling won't give us the same effect. When we scale Node processes within one server instance, we get only about 30% higher throughput for each additional processor core. Therefore, horizontal scaling is a more suitable and effective approach.
We don't need to worry about scaling SNS because AWS does it automatically; we only have to send the proper number of messages.