GreenReaper (greenreaper) wrote in wikifur,
Squeezing every last bit - techniques behind WikiFur's scalability

WikiFur's new server is pretty fast by itself - but "pretty fast" just won't cut it. Here are some of the things we've been doing to make it faster, especially for international users, as we prepare for increased traffic and the migration of other languages. Hopefully, those of you who are web admins are already doing these things; perhaps you can even give us advice. If not - or if you're just curious about how it all works - you might take a look.

Code, data and web caches

WikiFur runs on MediaWiki, a web application written in PHP (a scripting language). PHP can be very slow, because the interpreter has to compile the PHP source into an executable format every time a script runs - and MediaWiki is a very expensive script. Fortunately, the compiled bytecode can be saved into a cache. Newer versions of PHP have made this much easier, and the included APC (Alternative PHP Cache) is well suited to the purpose. As a bonus, all instances of WikiFur on the server run from the same shared code files, so only one compiled version has to be cached.
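
As a rough illustration, here's the sort of quick check you can run to confirm the opcode cache is actually doing its job. This assumes the APC extension and its apc_cache_info()/apc_sma_info() functions; it's a sketch, not part of MediaWiki itself.

    <?php
    // Quick health check for the APC opcode cache.
    // Assumes the APC extension is loaded and enabled in php.ini.
    if ( !extension_loaded( 'apc' ) ) {
        die( "APC is not loaded - check php.ini (extension=apc.so, apc.enabled=1)\n" );
    }

    $cache = apc_cache_info();   // statistics for the compiled-script (system) cache
    $mem   = apc_sma_info();     // shared memory segments backing the cache

    printf( "Cached scripts: %d\n", count( $cache['cache_list'] ) );
    printf( "Hits: %d, misses: %d\n", $cache['num_hits'], $cache['num_misses'] );
    printf( "Free memory: %d of %d bytes\n",
        $mem['avail_mem'], $mem['num_seg'] * $mem['seg_size'] );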

There are many items of data that are useful to cache, including parsed pages, "messages" (all the UI text), user sessions and other items normally stored in the database or on disk. These can instead be stored in APC - or in a different type of network-based in-memory cache called memcached.
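
For MediaWiki this is mostly a matter of configuration. A sketch of the relevant LocalSettings.php lines looks something like the following - the setting names come from MediaWiki's documentation, so check them against your version before copying anything:

    <?php
    # Object, message and parser caches in LocalSettings.php.
    # CACHE_ACCEL uses an accelerator cache such as APC if one is available.
    $wgMainCacheType    = CACHE_ACCEL;   # general object cache
    $wgMessageCacheType = CACHE_ACCEL;   # interface messages (all the UI text)
    $wgParserCacheType  = CACHE_ACCEL;   # parsed page HTML

    # The memcached alternative - useful when several servers share one cache:
    # $wgMainCacheType    = CACHE_MEMCACHED;
    # $wgMemCachedServers = array( '127.0.0.1:11211' );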

Then there are the user login sessions. We initially used a local instance of memcached for this, but have since switched back to APC and disk-backed sessions. Memcached is most effective when you have multiple websites in a server farm and wish to dedicate one server to caching data for all of them, which isn't the case for WikiFur.
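
The session storage choice amounts to a one-line switch - roughly as below, though the setting names differ between MediaWiki versions, so treat this as a sketch:

    <?php
    # Where MediaWiki keeps login sessions.
    $wgSessionsInMemcached = false;   # false: normal PHP session handling
                                      # (disk-backed, with APC in front of the code)
    # $wgSessionsInMemcached = true;  # true: store sessions via $wgMemCachedServers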

Lastly, you can cache complete pages (and other files). Most readers aren't logged in, in which case you don't actually need to generate a fresh copy of a particular page for them - just hand over the one you gave out last time. This is the best of all possible worlds, as MediaWiki (and PHP) doesn't need to be called at all. In fact, this cache can live on a different machine, although it doesn't for WikiFur. We're using a program called Squid to act as the web cache, and it's very effective at serving both wiki pages and images. (Another option for the adventurous is Varnish.) MediaWiki can use HTCP to clear the cache when a page changes; this support should be compiled into Squid and enabled in MediaWiki, otherwise pages may not be updated properly.
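
The MediaWiki side of that setup is a handful of LocalSettings.php lines, roughly as below. The squid.conf side isn't shown, and the HTCP setting names vary between MediaWiki versions, so this is only a sketch:

    <?php
    # Tell MediaWiki there is a Squid cache in front of it.
    $wgUseSquid     = true;                  # emit cache headers and send purges
    $wgSquidServers = array( '127.0.0.1' );  # in our case the cache is on the same box
    $wgSquidMaxage  = 18000;                 # how long anonymous page views may be cached

    # Purge stale pages over HTCP (Squid must be built with --enable-htcp):
    $wgHTCPMulticastAddress = '127.0.0.1';   # a unicast address works for a single cache
    $wgHTCPPort             = 4827;          # the standard HTCP port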

HTTP compression

They say small is beautiful, and nowhere is this more true than on the Internet. The idea of HTTP compression is essentially to zip up web pages before they are sent. Using compression cuts the amount of data that needs to be transferred by about 70% in most cases. All web browsers nowadays support it - and servers only compress for clients that ask for it, so nothing breaks for those that don't - yet surprisingly many websites fail to use it.
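
Normally you'd switch this on in the web server (Apache's mod_deflate, or zlib.output_compression in php.ini), but even from plain PHP it's only a couple of lines - a rough sketch:

    <?php
    // Compress the output buffer with gzip when the browser says it can handle it.
    // ob_gzhandler does the Accept-Encoding negotiation itself, so clients that
    // don't support compression still get a plain, uncompressed response.
    ob_start( 'ob_gzhandler' );

    echo str_repeat( "<p>Repetitive HTML like this compresses extremely well.</p>\n", 200 );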

The magic of compression is that not only does it reduce the data transferred, it reduces the relative time taken to transfer that data - very significantly. Here's why:

TCP (the protocol web servers use to transfer files) will only send a certain amount of data before it gets an acknowledgment back. This limit is known as the window size. It's primarily intended to avoid flooding the network and causing dropped packets. The window grows gradually as packets continue to be acknowledged.

If the file is larger than the window size (normally ~16kb in Windows), the server has to wait for the client to say it got the first packet before it can send any more. This greatly increases the time to transfer the file. It's especially painful with medium-sized CSS and JS files - which often need to be loaded before displaying the rest of the website.

If you compress the file, it is more likely to fall within the window size; so the server can finish the file and the client can consider the transfer complete, use the file, and proceed to downloading the next one. On servers and clients which support pipelining, gzip can have an even greater effect, as several files may be sent down the pipeline before it "fills up" the window. If the file references other files, the client can get a start on downloading them too.
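
A back-of-the-envelope model shows why this matters. Assuming a 16kb starting window that doubles with each round of acknowledgments (and ignoring packet loss entirely - so treat the numbers as an illustration, not a benchmark), you can count the round trips a transfer needs:

    <?php
    // Rough model only: the window doubles each round trip from a 16kb start,
    // and packet loss is ignored.
    function round_trips( $bytes, $window = 16384 ) {
        $trips = 0;
        $sent  = 0;
        while ( $sent < $bytes ) {
            $sent   += $window;
            $window *= 2;       // the window grows as acknowledgments come back
            $trips++;
        }
        return $trips;
    }

    printf( "60kb page, uncompressed: %d round trips\n", round_trips( 60 * 1024 ) );   // 3
    printf( "Same page gzipped to 18kb: %d round trips\n", round_trips( 18 * 1024 ) ); // 2

On a long-haul connection with a round-trip time in the region of 150ms, each round trip saved is that much less time the reader spends staring at a blank page.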

Minifying CSS and Javascript

Minification tools aim to reduce the size of a file as much as possible - to render it into computer-readable form rather than human-readable form. Some simply try to remove as much space as possible; others rework the syntax of the code, perform variable renaming and constant concatenation, or even create miniature engines that manufacture Javascript from a more compact representation.
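
As a toy illustration of the simplest approach - stripping comments and whitespace - here's a naive CSS minifier in a few lines of PHP. Real tools such as the YUI Compressor do considerably more, and do it more safely; this is just to show the idea.

    <?php
    // Toy CSS minifier: strips comments and collapses whitespace.
    // Not a substitute for a proper minification tool.
    function naive_css_minify( $css ) {
        $css = preg_replace( '!/\*.*?\*/!s', '', $css );        // drop /* comments */
        $css = preg_replace( '/\s+/', ' ', $css );              // collapse whitespace
        $css = preg_replace( '/\s*([{}:;,])\s*/', '$1', $css ); // trim around punctuation
        $css = str_replace( ';}', '}', $css );                  // drop final semicolons
        return trim( $css );
    }

    echo naive_css_minify( "body {\n    color: #333;  /* default text */\n    margin: 0 ;\n}\n" );
    // prints: body{color:#333;margin:0}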

Why do this if compression saves so much? Well, compression can only work with what it's given. Even a compressed comment or variable name takes up some space. If you can reduce or eliminate it, so much the better. There are also certain optimizations that result in an equivalent but more compact representation; these can't be done by compression alone.

Some things depend on the actual uncompressed size. The browser can't simply skip comments or whitespace - it has to read them at least once. It also has to store objects in its cache - and, for example, the iPhone won't cache objects larger than 25kb uncompressed.

Using Yahoo's minification tool decreases the size of both the CSS and Javascript we are using by about a third. This relative decrease is maintained when the file is compressed: a 27kb file shrinks to about 17kb after minification, and to 5kb once compressed as well.

Serving files from multiple subdomains

Most web browsers only open a limited number of connections to a given site. Once those are all in use, the browser can't request any more files until one of the downloads finishes. This is a pain when you have several scripts you would like to fetch. What can be done is to spread the load across multiple domain names, which increases the number of simultaneous connections that can be made.
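
For a MediaWiki site, that mostly means pointing the static asset paths at another host - a sketch along these lines, with illustrative paths rather than our exact layout:

    <?php
    # Serve shared skin files and the logo from a separate host so browsers
    # will open additional parallel connections for them.
    $wgStylePath = 'http://pool.wikifur.com/skins';            # shared CSS and Javascript
    $wgLogo      = 'http://pool.wikifur.com/images/logo.png';  # skin logo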

We now have our default style sheets and javascript set to load from the pool.wikifur.com domain. This has a few additional advantages:
1. The files don't need to be reloaded for each language
2. When the browser loads the custom skin images (such as the logo), connections to the pool will already be open

As it happens, we've got the same logo image on the wikifur.com portal, which removes the delay for looking up pool.wikifur.com when clicking on one of the languages.

There's a downside to using this domain; people who've visited the shared image wiki have cookies (about 300 bytes) which will be sent along with image requests. To avoid this, we might move to using a separate subdomain for such images in the future, and just link the files.

Being picky about what's on the pages

To achieve maximum performance - a goal for the wikifur.com portal, and eventually for each wiki's key pages - you need to look over each page element with a critical eye. The end result is a page that has all the features that you really need, and nothing left to be taken away.

Think about it: Is that style attribute needed? Could a class name be shortened? Can you lift repeated styles to a class? Are external styles/scripts appropriate, or would inline placement work better? Can an image be replaced by text? How much do you really need that image/link/label? What if you replaced an automatic thumbnail with a manual one at a reduced colour depth? Do you need the "http:" or "www" at the front of that link? (Yes, //google.com works)

You have to measure. Sometimes a change you think will have a positive effect actually has the reverse. Other things don't make sense unless you're getting Google-level hits. Removing all of the linebreaks on a page you edit regularly is not really worth the hassle; gzip will compress them away almost entirely anyway. Similarly, the !DOCTYPE declaration is there for a reason. But usually there's some fat to trim.

All these techniques increase scalability, and many are complementary (e.g. compressed pages take up less space in a cache). They can probably be applied to your own websites; if you're finding load times aren't what you hoped for, there are ways to fix it. There's more advice here, and on countless other sites. Don't settle for less!