Our main goal is to build a system for sending large volumes of email and then tracking users’ interactions with each received message (opens, link clicks, unsubscribes, etc.). Users with a large daily send volume prefer to schedule their sends.
Using scheduling instead of send-immediately flows gives us several advantages.
If a user needs to send emails after their workday is finished, they won’t be able to monitor the sending process and intervene if something goes wrong. With scheduling, users can prepare all emails before the actual send date, and our ESP (Email Service Provider) handles the rest.
If you already have the content for emails you need to send (for example, tomorrow), you can send them gradually. For example, if 5M emails need to be delivered by 8 PM, you can start sending at 10 AM and continue in small batches throughout the day. We collect the emails on our side and ensure on-time delivery.
If you have a notification email that needs to be sent after some delay (for example, a subscription reminder), just send it to our ESP and we handle the scheduling. You don’t need custom cron scripts for that case.
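To make the subscription-reminder case concrete, here is a minimal sketch of what a scheduling request might look like. The interface and field names (`to`, `templateId`, `sendAt`) are illustrative assumptions, not the actual ESP API contract.

```typescript
// Hypothetical request body for the scheduling API; the field names are
// assumptions for illustration, not the real ESP contract.
interface ScheduleEmailRequest {
  to: string;
  templateId: string;
  // Unix timestamp (seconds) after which the email is "ready to send".
  sendAt: number;
}

// Example: schedule a subscription reminder for 24 hours from now.
const reminder: ScheduleEmailRequest = {
  to: "user@example.com",
  templateId: "subscription-reminder",
  sendAt: Math.floor(Date.now() / 1000) + 24 * 60 * 60,
};
```

With a payload like this, the caller only states *when* the email should go out; picking it up at the right moment is the scheduler’s job.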
We provide a fairly simple API for scheduling, but orchestrating the simultaneous sending of even 1M emails is not easy. We had to think about scalability and the future growth of our system, and solve several problems.
We needed to filter “ready to send” emails as quickly as possible from a large set of records that are not due yet. At the time, we already used the ClickHouse database as storage for our analytics events (unsubscribes, clicks, opens, etc.).
ClickHouse is a column-oriented database used mostly for analytics, but the read/write throughput it provides allowed us to use it as scheduling storage as well.
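As a rough illustration of how scheduling storage on ClickHouse could look, here is a sketch of a table and a “ready to send” query. The table name, columns, and batch size are assumptions; the article does not describe the actual schema.

```typescript
// Sketch of a scheduling table on ClickHouse, assuming a MergeTree table
// ordered by send time so that filtering due records is a range scan.
// All names and columns are illustrative, not the real schema.
const createTable = `
  CREATE TABLE IF NOT EXISTS scheduled_emails (
    email_id UInt64,
    payload  String,
    send_at  DateTime,
    status   UInt8
  ) ENGINE = MergeTree()
  ORDER BY (send_at, email_id)
`;

// Fetch one batch of emails whose send time has passed.
const readyBatchQuery = `
  SELECT email_id, payload
  FROM scheduled_emails
  WHERE status = 0 AND send_at <= now()
  ORDER BY send_at
  LIMIT 5000
`;
```

Ordering the table by `send_at` keeps “due” rows physically adjacent, which is what makes this filter cheap even over millions of records.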
Our most important storage requirements were fast filtering by readiness date and high read throughput.
We ran performance tests and got results that fully satisfied our requirements:
As a result, accounting for network latency, we were able to fetch 3.5M emails per second. That was a very good result for us.
Having chosen our data storage, we faced another issue. We used Node.js as our main platform, but together with SQS queues it didn’t deliver the desired performance. We then wrote simple prototypes in Golang and Elixir (we hoped Erlang’s lightweight processes would help). We got some throughput increases, but still didn’t reach the desired results or a scalable solution.
Our benchmark results for inserting 50K messages into the queue were as follows:
Elixir was about 2 times slower than Node.js with 2 threads, and it hit the same kind of problem.
So we clearly needed to scale horizontally.
To make scheduling scalable, we had to find a way to split the scheduled emails among workers. Workers all have the same role and know nothing about each other, which lets us run more workers on more servers whenever we need to process a larger volume of emails.
To keep workers fully independent of each other, we decided to add one more component to the system.
This component, named Ranger, is responsible for splitting ranges among our workers; no range intersects any other.
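The article doesn’t detail Ranger’s partitioning scheme, so here is a minimal sketch of one way to split a key space into non-overlapping, contiguous ranges, one per worker. The function and its shape are assumptions for illustration.

```typescript
// A minimal sketch of range splitting, assuming workers claim contiguous,
// non-overlapping slices of a numeric key space [0, total).
interface Range {
  start: number; // inclusive
  end: number;   // exclusive
}

function splitRange(total: number, workers: number): Range[] {
  const size = Math.ceil(total / workers);
  const ranges: Range[] = [];
  for (let start = 0; start < total; start += size) {
    ranges.push({ start, end: Math.min(start + size, total) });
  }
  return ranges;
}

// Example: 5M scheduled emails split across 4 workers.
// Each worker gets a disjoint slice and can process it independently.
const ranges = splitRange(5_000_000, 4);
```

Because the slices never overlap, each worker can fetch and send its own range without coordinating with the others.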
With Ranger and Clickhouse working together, we created the following architecture for our scheduling feature:
The AWS SQS service promises at-least-once delivery for a Standard Queue. This means we can occasionally receive the same message on two workers and send an email twice. Since each message can contain multiple emails (our current batch size is 5K), we need duplicate handling.
For that, we use Redis and store each SQS messageID as a Redis key. The keys have an expiration time equal to the message visibility timeout in the queue. When a consumer receives a new message from SQS, it checks whether the messageID already exists in Redis; if it does, the consumer skips the message.
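The duplicate check described above can be sketched as follows. This uses an in-memory stand-in for Redis so the logic is self-contained; in the real system, the store would be a Redis `SET key value NX EX ttl` call, and the interface names here are assumptions.

```typescript
// Sketch of SQS duplicate handling, with an in-memory stand-in for Redis.
// The real system stores SQS messageIDs as Redis keys whose TTL equals
// the queue's visibility timeout.
interface DedupStore {
  // Returns true only if the key was newly set (first time seen).
  setIfAbsent(key: string, ttlSeconds: number): boolean;
}

class InMemoryDedupStore implements DedupStore {
  private seen = new Map<string, number>(); // key -> expiry timestamp (ms)

  setIfAbsent(key: string, ttlSeconds: number): boolean {
    const now = Date.now();
    const expiry = this.seen.get(key);
    if (expiry !== undefined && expiry > now) return false; // duplicate
    this.seen.set(key, now + ttlSeconds * 1000);
    return true;
  }
}

// Process an SQS message only if its messageId hasn't been seen yet.
function shouldProcess(
  store: DedupStore,
  messageId: string,
  visibilityTimeoutSeconds: number
): boolean {
  return store.setIfAbsent(messageId, visibilityTimeoutSeconds);
}

const store = new InMemoryDedupStore();
shouldProcess(store, "msg-1", 30); // first delivery: process
shouldProcess(store, "msg-1", 30); // redelivery: skip
```

Tying the key TTL to the visibility timeout keeps the dedup window exactly as long as SQS might redeliver the same message, so Redis doesn’t accumulate stale keys.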
Using ClickHouse and AWS Simple Queue Service, we built a scalable scheduling system for delivering emails. These flows and tools can be applied to any action that can be scheduled, not just emails.
At the moment, we can fetch 335K records per second, filtering by ready date takes 0.1 seconds on a 5M-record dataset (including network latency), and the workers that process the emails can easily scale horizontally to the desired throughput.