The one with Einstein and web performance optimization

When Einstein developed the special theory of relativity, he put a hard limit on my page load time. Now, hold on, I am not blaming him for my website running like Slowpoke Rodriguez when it could be running like Speedy Gonzales. All I am saying is that he could have at least added flying pigs and network packets to his list of things that can travel faster than the speed of light, but he instead kept the list empty. Not cool, Einstein. Not cool.

Now, I have to live in a world where pigs can't fly and network packets can't travel faster than the speed of light, and all Einstein has to say about that is:

If your client and server are halfway across the world and that network latency is killing you, I feel bad for you, son. I got 99 problems but wishing pigs could fly ain't one.

Albert Einstein (or Shakespeare, one of them, for sure)

Jokes aside, the point I want to make here is that it would take about 106 ms for a network packet to make the round trip from New York to Sydney in a vacuum, but then no one would be left alive to send or receive the packet, not good. The next best propagation medium we have is optical fiber, which gives us a round-trip time (RTT) of about 160 ms, but even here I am being optimistic and assuming that the packet travels over a fiber-optic cable laid along the great-circle path (the shortest distance between two points on the globe) between the cities. In practice, no such cable exists and the packet would take a much longer route, and I haven't even factored in the additional routing, processing, queuing, and transmission delays introduced at each hop. Guys, this isn't looking good for our poor little network packet.
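If you want to check that arithmetic yourself, here is a minimal back-of-the-envelope sketch. The ~16,000 km great-circle distance and the ~1.5 refractive index of fiber (which is what slows light down to roughly two-thirds of its vacuum speed) are rounded assumptions on my part:

```python
# Back-of-the-envelope RTT estimate for New York <-> Sydney.
# The distance and refractive index are rounded assumptions.

GREAT_CIRCLE_KM = 15_990            # approx. great-circle distance between the cities
C_VACUUM_KM_S = 299_792             # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S / 1.5  # light travels ~1.5x slower inside optical fiber


def rtt_ms(distance_km: float, speed_km_s: float) -> float:
    """Round-trip time in milliseconds: there and back again."""
    return 2 * distance_km / speed_km_s * 1000


print(f"RTT in vacuum: {rtt_ms(GREAT_CIRCLE_KM, C_VACUUM_KM_S):.0f} ms")  # ~107 ms
print(f"RTT in fiber:  {rtt_ms(GREAT_CIRCLE_KM, C_FIBER_KM_S):.0f} ms")   # ~160 ms
```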

In practice, the RTT comes out to be in the range of 200-300 ms, and you might be thinking: that seems fast, why the fuss? But that's the problem right there, your mind, it's faster, even at jumping to conclusions.

Studies have shown that for most people a 100-200 ms delay is a perceptible lag, at 300 ms they start calling your system sluggish, and a 1000 ms (1 sec) delay is enough for them to perform a mental context switch: "waiting to load.. waiting to load.. I wonder if kitten gifs are still as cute as yesterday". And let's be honest here, it's really difficult to get anyone's attention back once they have kitten gifs on their mind. Crap, I shouldn't have mentioned kitten gifs. Stay with me please, I need your undivided attention.

The conclusion is simple: network latency is a big deal and has to be carefully managed. The basic principles of managing network latency are painfully obvious, yet often overlooked, and hence need to be reiterated:

1. No bit is faster than a bit not sent; send fewer bits

Eliminate unnecessary resources on your page, compress the data you transfer, minify your HTML, CSS and JS, and leverage browser caching. This is common sense, but sometimes convenience makes you throw common sense out of the window.
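To make the compression point concrete, here is a toy sketch using Python's standard gzip module. The repetitive HTML payload is made up purely for illustration; a real server would apply gzip or Brotli at the HTTP layer rather than by hand:

```python
# Toy illustration of "send fewer bits": gzip-compress a text asset before
# it goes over the wire. The sample payload below is made up.
import gzip

html = ("<html><body>"
        + "<p>kitten gifs are still as cute as yesterday</p>" * 500
        + "</body></html>").encode("utf-8")

compressed = gzip.compress(html)

print(f"original:   {len(html):>6} bytes")
print(f"compressed: {len(compressed):>6} bytes "
      f"({100 * len(compressed) / len(html):.1f}% of the original)")
```

Repetitive text like HTML compresses extremely well, which is exactly why turning on transfer compression is such a cheap win.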

2. We can't make the bits travel faster, but we can move the bits closer

Use a content delivery network (CDN). CDNs cache your static assets on geo-distributed servers, so users aren't forced to traverse oceans and continental links to the origin servers to fetch them. Take my example: my origin servers, i.e. the GitLab servers, are in the USA, and for delivering the content on abhirag.in I use the CloudFlare CDN. Setting it up was as easy as pie, though an issue with GitLab is preventing me from using Full (strict) SSL.
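If you're curious how much a nearby edge actually buys you, a crude way to eyeball it is to time a TCP handshake to your origin versus the CDN edge. The hostnames below are placeholders, not my real setup, so swap in your own:

```python
# Crude RTT check: time a TCP handshake to a host. The hostnames are
# placeholders; replace them with your actual origin and CDN edge.
import socket
import time


def tcp_rtt_ms(host: str, port: int = 443) -> float:
    """Time a single TCP connect, a rough stand-in for one network round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000


for host in ("origin.example.com", "cdn-edge.example.com"):
    try:
        print(f"{host}: {tcp_rtt_ms(host):.0f} ms")
    except OSError as err:
        print(f"{host}: could not connect ({err})")
```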

That's all for now, folks. One last thing though: if you liked this post, you'll love the book High Performance Browser Networking by Ilya Grigorik. I am not affiliated with the author in any way, just a fan of his work.