Recently we’ve been working with one of our clients to build an application for use with AppNexus. We were faced with a challenge that required a bunch of different technologies to come together and work as one. Below I’ll walk through how we approached it and the additional challenges we ran into.
First came the obvious challenge: how to handle at least 25,000 requests per second. Our usual language of choice is PHP, and we knew it was not a good candidate for this project. Instead we benchmarked a number of other languages and frameworks. We looked at Rusty/Nginx/Lua, Go, Scala, and Java. After some testing it appeared that Java was the best bet for us. We initially loaded up Jetty. We knew it had a bit more baked in than we needed, but it was also the quickest way to get up and running, and we could migrate away from it fairly easily. The overall idea was to keep the request-parsing logic separate from the business logic. In our initial tests we were able to get around 20,000 requests per second using Jetty, which was good, but we wanted better.
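To give a feel for the embedded-Jetty approach, here is a minimal sketch of that kind of setup, with request parsing kept apart from the business logic. The port, the `id` parameter, and the `handleBid` placeholder are illustrative assumptions, not our actual code.

```java
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BidServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws java.io.IOException {
                // Jetty has already parsed the HTTP request; hand it straight to the
                // business-logic layer, which knows nothing about HTTP.
                String result = handleBid(request.getParameter("id"));
                response.setStatus(HttpServletResponse.SC_OK);
                response.getWriter().print(result);
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }

    // Hypothetical stand-in for the business logic
    static String handleBid(String id) {
        return "{}";
    }
}
```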
Jetty was great at breaking incoming HTTP requests down into something easy to work with, and it even provided an out-of-the-box general statistics package. However, we didn’t need much heavy lifting on the HTTP side; what we were building required very little complexity with regards to the HTTP protocol. In the end, Jetty was spending too many CPU cycles for what we needed, so we looked to Netty next.
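For reference, the built-in statistics package mentioned above is exposed through Jetty’s StatisticsHandler. A rough wiring sketch, assuming a Jetty 9-era API and an arbitrary appHandler:

```java
import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.StatisticsHandler;

public class JettyStats {
    // Wrap whatever handler does the real work so Jetty tracks counts and latencies
    public static StatisticsHandler wrap(Server server, Handler appHandler) {
        StatisticsHandler stats = new StatisticsHandler();
        stats.setHandler(appHandler);
        server.setHandler(stats);
        return stats;
    }

    // Read the counters later, e.g. from a metrics endpoint or a scheduled reporter
    public static void report(StatisticsHandler stats) {
        System.out.printf("requests=%d meanMs=%.2f maxMs=%d%n",
                stats.getRequests(), stats.getRequestTimeMean(), stats.getRequestTimeMax());
    }
}
```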
Netty out of the box is not as friendly as Jetty, since it is much lower level. That said, it wasn’t too much work to get Netty up and running and responding to HTTP requests. We ported over most of the business logic from our Jetty code and were off to the races. We did have to add our own statistics layer, as Netty didn’t have anything built in for what we were looking for. After some fine-tuning with Netty we were able to handle over 40,000 requests per second. This part of the puzzle was solved.
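For comparison with the Jetty sketch above, here is roughly what an HTTP pipeline in Netty looks like, assuming Netty 4.x; the port and the `handleBid` placeholder are again illustrative, and our real handlers and statistics layer are not shown.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.*;
import io.netty.util.CharsetUtil;

public class NettyBidServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline()
                       .addLast(new HttpServerCodec())            // decode/encode HTTP
                       .addLast(new HttpObjectAggregator(65536))  // aggregate into FullHttpRequest
                       .addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
                           @Override
                           protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
                               // Business logic stays separate from the HTTP layer, as with Jetty
                               String body = handleBid(req.uri());
                               FullHttpResponse res = new DefaultFullHttpResponse(
                                       HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                                       Unpooled.copiedBuffer(body, CharsetUtil.UTF_8));
                               res.headers().set(HttpHeaderNames.CONTENT_LENGTH,
                                       res.content().readableBytes());
                               ctx.writeAndFlush(res);
                           }
                       });
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }

    // Hypothetical stand-in for the business logic
    static String handleBid(String uri) {
        return "{}";
    }
}
```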
On the database side we had heard great things about Aerospike in terms of performance and some of its features, so we ended up using it on the backend. When we query Aerospike we set the timeout at 3ms. We see around one or two request timeouts per second, or about 0.0025% of requests, which is not too shabby. One of the nice features of Aerospike is the XDR (cross-datacenter replication) function in the enterprise version. With it we can have multiple Aerospike clusters that all stay in sync with a master cluster. This lets us load our data onto one machine, which isn’t handling the request traffic, and have it replicated to the machines that are.
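A read with a tight timeout looks roughly like the following with the Aerospike Java client. The namespace, set, and key are made up for illustration, and the exact policy field names vary a little between client versions.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.AerospikeException;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.Policy;

public class AerospikeLookup {
    public static void main(String[] args) {
        // Connect to one node; the client discovers the rest of the cluster
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        // Cap the read at roughly 3 ms
        Policy readPolicy = new Policy();
        readPolicy.totalTimeout = 3;

        // Hypothetical namespace/set/key names
        Key key = new Key("test", "profiles", "user-123");
        try {
            Record record = client.get(readPolicy, key);
            System.out.println(record != null ? record.bins : "not found");
        } catch (AerospikeException.Timeout e) {
            // The rare (~0.0025%) timeout case: fail fast rather than block the request
            System.out.println("lookup timed out");
        } finally {
            client.close();
        }
    }
}
```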
All in all we’ve had a great experience with the Netty and Aerospike integration. We’re able to consistently handle around 40,000 requests per second with an average response time (including network time) of 4ms.