We recently talked about reducing HTTP requests. Here’s a quick recap:
- Slow web pages impede your website’s goals;
- 90% of a typical web page’s slowness happens after the HTML has been downloaded;
- Reducing the number of HTTP requests triggered by your page is usually the best first step to making your pages faster;
- We reviewed some specific techniques for reducing the number of HTTP requests in a given page;
- We noted that automation can ease or remove the maintenance burden of more invasive optimization techniques.
Next up on the list: taking advantage of the browser’s capabilities to make your web pages faster and more efficient.
But are they even “pages” any more?
Modern web pages have outgrown their humble origins and are not really recognizable as “pages” anymore. Except for the simplest and most old-fashioned brochure-ware sites, visiting a website means executing a complex application that is distributed — and executed — across the web.
Viewed as such, these web applications comprise many parts: a client (the browser); one or more origin servers (where the site is hosted); CDN nodes (where static assets are cached); reverse proxy nodes (e.g. for next-gen whole-site acceleration services); third-party assets (hosted on various servers); and the networks that connect them all. So it’s time to stop acting like the origin server has to do all the work and the browser can only present the page to the user. The server is just one part of the application, and it’s playing a shrinking role.
Performance-minded website architects are showing an increasing tendency to shift the burden of work from the (overloaded) server to the (powerful, underutilized) client, and with good reason. In this article I’ll review some of the ways you can make your website faster by easing the burden on your server and giving the browser more responsibility.
“Put Me In, Coach, I’m Ready To Play!”
Modern web browsers run on hardware that is staggeringly powerful by historical standards, and that is simply massive overkill for what most users ask of it. It is very common for a user to interact with a site without ever straining their computer’s RAM or CPU, all while waiting far longer than necessary for an overloaded server (often a poorly configured virtual server on shared hardware in a cheap hosting center) to allocate memory and keep up with the flow of requests without crashing under the load. Distributing more work to the client keeps the server from getting swamped, can save on bandwidth and hosting costs, makes the application faster and more responsive, and is generally a better architecture. It’s simply a more efficient allocation of available resources. (And even for less powerful clients, like some mobile devices, the high latency of HTTP round trips over mobile connections can still make it worthwhile to offload work from the server to the client.)
But too many web developers continue to treat the browser – the client side of the client-server interaction – as just a simple “view” of the application. It’s better understood as residing at the heart of the application that is the modern web page. The server has its place, but the browser is increasingly where the action is. It’s got tons of under-utilized processing and memory resources, and its capabilities should be respected and used to their fullest.
OK, if you’re ready to leverage the client, the first thing you’ll need to do is clean up your client-tier code. Seriously.
Use web standards.
Apply MVC in the page.
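To make the idea concrete, here is a minimal sketch of MVC separation inside the page itself. All of the names (`cartModel`, `cartView`, `addToCart`) are illustrative, not any framework’s API:

```javascript
// Model: plain data plus behavior, with no presentation logic.
const cartModel = {
  items: [],
  add(name, price) { this.items.push({ name, price }); },
  total() { return this.items.reduce((sum, i) => sum + i.price, 0); }
};

// View: turns model state into markup, and nothing else.
function cartView(model) {
  const rows = model.items
    .map(i => `<li>${i.name}: $${i.price.toFixed(2)}</li>`)
    .join('');
  return `<ul>${rows}</ul><p>Total: $${model.total().toFixed(2)}</p>`;
}

// Controller: responds to a user event, updates the model, re-renders.
function addToCart(model, name, price, render) {
  model.add(name, price);
  render(cartView(model)); // in the browser: el.innerHTML = html
}
```

The payoff is that each piece can change independently: restyle the view without touching the data, or swap the data source without touching the markup.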
Leverage Ajax techniques. Properly.
Don’t refresh the whole page if you don’t have to! Use Ajax. By only requiring small parts of the page to change in response to user actions, you make your site or web application much more responsive and efficient. But be aware, there are different Ajax approaches.
For example, fetching complete, styled HTML fragments via Ajax may be appropriate for implementing a sophisticated “single-page interface” (SPI) [https://en.wikipedia.org/wiki/Single-page_application]. That’s a powerful approach, but don’t take it lightly – serious SEO and usability gotchas abound. If you’re not doing SPI, retrieving chunks of styled HTML from the server is probably not the right thing to do.
For most common use cases, it’s better and faster to just pull pure data from the server. Client side templating libraries help solve the problem of turning that data into HTML that can be injected into the DOM and displayed. (Here’s a helpful template chooser.) But with or without client-side templates, fetching serialized data is usually the best Ajax approach for performance.
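A minimal sketch of the data-plus-template approach, with a hand-rolled render function standing in for a templating library (the endpoint URL and field names are hypothetical):

```javascript
// Escape untrusted values before injecting them into the DOM.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, c => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[c]));
}

// "Template": turns an array of user records into an HTML list.
// A templating library would do the same job with less string glue.
function renderUserList(users) {
  return '<ul>' +
    users.map(u => `<li>${escapeHtml(u.name)} (${escapeHtml(u.email)})</li>`).join('') +
    '</ul>';
}

// In the browser, the server ships only serialized data:
// fetch('/api/users')
//   .then(res => res.json())
//   .then(users => {
//     document.getElementById('users').innerHTML = renderUserList(users);
//   });
```

The response here is a few hundred bytes of JSON instead of a fully rendered, styled HTML fragment, and the server does no templating work at all.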
Validate in the client.
At the risk of insulting you smart readers, I have to mention the most obvious case for pushing work to the client, just because so many sites get it wrong: form validation. Picture a user taking the time to fill out your signup or order form. They painstakingly complete the form and submit it. And then they wait. They stare at a blank white screen while the form is posted to the server… and processed… and a new page is generated… and sent back… and rendered… until finally… yes, they see — an error? What a waste of time! That’s an unhappy user and a likely candidate to bail out, abandon your site and go to a competitor.
Note that, for security reasons, web applications must always validate on the server side as well. (Rule #1 of web app security is that user input cannot be trusted.) So, validate in the client for performance and UX, and validate on the server for security.
Let the browser do the data viz.
One last specific scenario I want to mention is the visual display of quantitative information. Generating charts and graphs — any sort of pretty-looking data visualization — used to be the sole province of the server. Those days are long gone.
Now, it makes much more sense to push just the raw data from the server to the browser, in the initial page request. If the data set is too large to include in the initial view, it can be updated via Ajax, in response to user interaction. With modern client libraries (like Processing, D3, and Flot), you can create all kinds of stunning interactive data visualizations right there in the browser. Their capabilities go way, way beyond sorting table columns or rendering a pie chart.
In this way, many user interactions avoid hitting the server at all. And when they do, it’s a small request and response, consuming the minimum amount of network bandwidth and requiring the least possible amount of work from the poor overworked server.
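The core primitive behind these libraries is a scale: a function that maps data values to pixel coordinates. Here is a hand-rolled linear scale to show the idea (illustrative only, not D3’s API — D3 provides this as `d3.scaleLinear` in current versions):

```javascript
// A linear scale maps a data domain onto a pixel range.
function linearScale([d0, d1], [r0, r1]) {
  return v => r0 + (v - d0) / (d1 - d0) * (r1 - r0);
}

// Turn raw data into bar heights for a 200px-tall chart:
const data = [5, 20, 35, 50];
const y = linearScale([0, 50], [0, 200]);
const heights = data.map(y); // [20, 80, 140, 200]

// In the browser, these numbers would size <rect> elements in an SVG,
// drawn entirely on the client -- the server only shipped four numbers.
```

Every zoom, filter, or re-sort just re-runs this arithmetic locally; no new image is requested from the server.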
- Web “pages” aren’t really pages anymore; they’re distributed applications
- Pushing work from the server to the client is a great way to make your site faster
- Use best practices (web standards and MVC separation in HTML, CSS and JS)
- Use the right Ajax approach for the job
- Powerful client-side templating libraries and dataviz libraries abound
That’s it for this second article. Next time I’ll dive into another area of web performance optimization. In the meantime I’m always interested in feedback and others’ thoughts on web performance.