by Christopher Rogers on 2013.1.11
We are constantly striving to improve the user experience of Line. Given the nature of Line as a communication tool, one way to do this is to reduce the time it takes to send and receive messages. Making the connection to our servers more efficient is one way to accomplish this.
Until recently, Line had been using HTTP to transmit messages. HTTP, well known for its use in web browsers, has its strengths and is well understood. It has its downsides as well, however. Simply put, HTTP was not designed for the types of real-time applications we see nowadays. HTTP is based on a simple request/response model: you send a request over a TCP connection, and wait for its response. HTTP does not fit well with a messaging service for the following reasons:
- It is not possible to send multiple requests in parallel over a single connection and receive the responses out of order (i.e., in a different order than the requests were sent).
- In order to check for new messages the client must send a request to the server. The more frequently requests are sent, the larger the drain on the device’s battery. Line has been employing a technique called “long polling” in HTTP as a workaround to this problem. It requires its own TCP connection, however, since nothing can be sent while waiting for a response.
- When frequently sending small requests, the HTTP request and response headers can easily inflate the total size of the data relative to the payload. Sending multiple requests over the same connection also means repeatedly sending header fields, such as “User-Agent”, that usually do not change over the lifetime of a connection.
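As a rough back-of-the-envelope illustration of the header overhead mentioned above, consider how a small chat payload compares to the headers around it. The request line and header values below are generic examples for illustration, not Line's actual traffic:

```python
# Rough illustration of HTTP header overhead for a small message.
# The path, Host, and User-Agent values are made-up examples.
payload = b'{"msg":"hi"}'  # a small, 12-byte chat payload

headers = (
    b"POST /api/send HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"User-Agent: ExampleApp/1.0 (Android 4.1)\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: 12\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n"
)

overhead = len(headers) / len(payload)
print(f"headers: {len(headers)} bytes, payload: {len(payload)} bytes, "
      f"overhead: {overhead:.1f}x")
```

Even in this modest example the headers are more than ten times the size of the payload, and fields like User-Agent are resent unchanged on every single request.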
We then looked at designing and implementing a new, more efficient protocol that would overcome these issues. As we began to work out the details, we saw that our end goals and those of SPDY were very similar, and we decided to adopt SPDY instead. Besides the many benefits that come from not reinventing the wheel, this allowed us to benefit from possible future improvements to SPDY, as well as from future software developed with SPDY support, should SPDY continue to gain traction in the industry.
SPDY is a next-generation protocol being developed by Google for web browsers, and it aims to be adopted as HTTP 2.0. These are the main features of SPDY that Line takes advantage of:
- Multiple requests can be sent in parallel (multiplexing), and responses can be received out of order.
- Headers are compressed, using a compression scheme optimized with foreknowledge of the kind of data typically sent in HTTP headers.
- Using the ping feature of SPDY, we can check the health of the connection.
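The header compression mentioned above can be sketched with zlib and a preset dictionary, which is essentially how SPDY does it. The tiny dictionary below is illustrative only; the actual SPDY draft defines a much longer dictionary of common header names and values:

```python
import zlib

# SPDY-style header compression: zlib seeded with a preset dictionary of
# strings that commonly occur in HTTP headers. This short dictionary is
# an illustration, not the dictionary from the SPDY specification.
SPDY_LIKE_DICT = (b"optionsgetheadpostputdeletetrace"
                  b"acceptaccept-charsetaccept-encodinguser-agent"
                  b"hostcontent-lengthcontent-typeconnectionkeep-alive")

headers = (b"user-agent: ExampleApp/1.0\r\n"
           b"host: example.com\r\n"
           b"content-type: application/json\r\n")

plain = zlib.compressobj()
with_dict = zlib.compressobj(zdict=SPDY_LIKE_DICT)

plain_out = plain.compress(headers) + plain.flush()
dict_out = with_dict.compress(headers) + with_dict.flush()

# The decompressor must be constructed with the same dictionary.
decomp = zlib.decompressobj(zdict=SPDY_LIKE_DICT)
restored = decomp.decompress(dict_out) + decomp.flush()

print(f"raw: {len(headers)}, zlib: {len(plain_out)}, "
      f"zlib+dict: {len(dict_out)}")
```

Because common substrings like "user-agent" and "content-type" already exist in the shared dictionary, they compress to short back-references even on the very first request, before the compressor has seen any prior data on the connection.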
There are a couple of ways we use SPDY differently.
- We allow for non-encrypted connections. SPDY is usually used with TLS, but TLS slows down connection setup and transfers, especially over mobile connections. We therefore decided to allow non-encrypted connections over mobile networks.
- When using TLS, we decided not to use NPN. NPN is a method of discovering and negotiating an application-level protocol, such as SPDY, during the TLS handshake. Compared to performing a separate application-level handshake after the normal TLS handshake, NPN reduces the number of round trips needed to initiate a connection. NPN benefits browsers that do not know in advance whether a given host supports SPDY. In our case, however, we know ahead of time that the host supports SPDY, so the protocol discovery part of NPN was of no use to us. In addition, using NPN would require bundling a newer version of OpenSSL with the app. These factors led us to decide against adopting NPN in Line.
In addition to adopting SPDY, we also added port scanning. Even commonly used ports can be blocked on some networks. To ensure that all of our users can use Line, we surveyed major mobile carriers worldwide to determine which ports were accessible, and chose to scan a handful of ports that were accessible on the large majority of those carriers. We also provide an HTTP fallback over the standard port 80.
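A client-side port scan of this kind could be sketched as follows. The host name and candidate port numbers here are hypothetical placeholders, not Line's actual configuration:

```python
import socket

# Hypothetical sketch of a client-side port scan: try each candidate
# port in turn and fall back to plain HTTP on port 80 if none connects.
# The port list below is illustrative, not Line's actual port set.
CANDIDATE_PORTS = [443, 8443, 5228]
FALLBACK_PORT = 80  # standard HTTP port, used as the last resort

def find_open_port(host, timeout=3.0):
    """Return the first candidate port we can connect to, else the fallback."""
    for port in CANDIDATE_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return port
        except OSError:
            continue  # port blocked or unreachable; try the next one
    return FALLBACK_PORT
```

A production version would likely probe the candidates concurrently and cache the result per network, so the scan cost is paid only when the device changes networks.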
In order to actually use SPDY, we added support for SPDY to our custom-built API gateway server, written in Erlang. We lovingly call it “LEGY,” which is short for “Line Event-delivery GatewaY.”
We opened service with SPDY on October 16, 2012. The launch was not without its share of problems.
- As time passed, memory usage on LEGY would steadily grow. This was due to the unexpectedly large size of SPDY’s header compression dictionary and its associated state. We reduced memory usage by simply optimizing the code.
- Occasionally we would discover truncated data transmitted from the client. We narrowed the problem down to certain problematic ports used over mobile networks, and deduced that intermediary proxy servers were likely interfering. We eventually decommissioned these ports.
After all was said and done, we were successful in adopting SPDY, reducing the number of connections and increasing the speed of sending messages. We realized that networks outside of our control were exactly that: outside of our control. That said, we continue to deal with ever-changing network environments.
In our next post, look forward to more details on our adoption of SPDY!
Co-written by Namil Kim