John Cleveley (@jcleveley) from Auntie BBC talking about the responsive development of BBC News. Usual caveats apply: (a) these are my notes on my thoughts as well as his words, and (b) I may have misheard or misunderstood.
Current site is static files and a lot of Apache includes! Responsive sites: cool… but hard! Moving away from an easily compartmentalised pair of sites, mobile and desktop; the mobile site was simpler, built for less capable devices.
Many many devices, but also many contexts (GPRS in rural, 3G/LTE in cities), Kindles, madness.
Many factors: resolution, input, speed, etc, etc. This shattered market is impossible to address through compartmentalised teams and sites.
Usage stats vary between sites, and also by country segment within a single site. UK usage on BBC site is 20/80 mobile to desktop (stats maybe misheard), but Nigerian usage of the site is 80/20 mobile to desktop.
Target: 10 sec on GPRS, 65KB–100KB page weight (did I hear that right?)
Problems:
- Shrinking large images
- Media queries load all the CSS files, possibly over separate HTTP requests
- Content is simply hidden, but still downloaded
- Simple things like a Facebook Like button calls in tens of KB of libraries and stuff.
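One common answer to that last problem is to defer the third-party script until it's actually wanted, rather than shipping it with the page. A minimal sketch of the idea (my own names, not BBC code; the `doc` parameter is passed in just to make the helper easy to test outside a browser):

```javascript
// Defer a heavy third-party widget (e.g. a Like button's script)
// until needed, instead of loading it with the initial page.
// In a browser you'd call deferThirdParty(document, 'https://…').
function deferThirdParty(doc, src) {
  var script = doc.createElement('script');
  script.async = true;          // don't block page rendering
  script.src = src;
  doc.body.appendChild(script); // browser fetches it from here on
  return script;
}
```

Hooked up to a click or scroll handler, this keeps those tens of KB off the critical path entirely.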
Client detection: server and client side
Server side: UA sniffing.
Client side: feature detection.
Mobile first and progressive enhancement. Allows you to focus on the really important stuff: the content.
Cuts the mustard: JavaScript testing for modern browsers. The good news? Most browsers in BBC stats do pass this.
Rather than buying every single device, split the devices into good (capable) and bad (basic) browser capabilities. Keeps the core experience fast. Progressively enhance the page based on the Mustard test. No way to keep metrics, as the bad browsers have no debugging tools to create accurate measures, so you’re left with “suck it and see”. BDD testing allows the tester to rely on the tests for features, and gives him time to manually test on the library of devices.
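As I understand it, the mustard test itself is just a handful of cheap feature checks that split browsers into the two camps. A sketch of the shape of it (the exact checks here are my guess at the sort of thing tested; globals are passed as parameters so the check runs outside a browser too):

```javascript
// "Cuts the mustard": a few feature checks dividing browsers into
// good (gets the enhanced JS experience) and bad (gets core HTML/CSS).
// In page code you'd call cutsTheMustard(window, document).
function cutsTheMustard(win, doc) {
  return 'querySelector' in doc &&
         'localStorage' in win &&
         'addEventListener' in win;
}

// Hypothetical bootstrap:
// if (cutsTheMustard(window, document)) { loadEnhancedJs(); }
```

Browsers that fail the test simply never download the enhancement layer, which is what keeps the core experience fast.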
Images: load the first image (the core, main story image) and resize it. Once that image is in, the JS requests appropriately sized images for the rest, based on the knowledge gained from loading the first one.
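A sketch of how that measure-then-request step might look (names and the `{width}` URL template are mine, purely illustrative): secondary images start as placeholders carrying a `data-src` template, and once we know the rendered width we pick a size and swap the real URL in.

```javascript
// Pick the smallest available image width that covers the rendered
// width; assumes availableWidths is sorted ascending.
function pickImageWidth(renderedWidth, availableWidths) {
  for (var i = 0; i < availableWidths.length; i++) {
    if (availableWidths[i] >= renderedWidth) return availableWidths[i];
  }
  return availableWidths[availableWidths.length - 1]; // largest we have
}

// Swap the placeholder for a correctly sized image, using a
// hypothetical data-src template like ".../img/{width}/story.jpg".
function upgradeImage(img, width) {
  img.src = img.getAttribute('data-src').replace('{width}', width);
}
```

Only one image is ever downloaded at the wrong size, and everything after it is fetched to fit.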
Similar to the core image above, load the stylesheets through JS so you can pick what is needed rather than loading everything and letting media queries filter the mess. Using SASS to conditionalise the creation of CSS and keep it DRY.
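The client-side half of that could look something like this (breakpoints and file names invented for illustration): decide which sheet applies, then inject only that one `<link>`, instead of shipping every sheet and hiding the excess behind media queries.

```javascript
// Choose a single stylesheet for the current viewport.
// Breakpoint values and file names here are made up.
function stylesheetFor(viewportWidth) {
  if (viewportWidth >= 1008) return 'desktop.css';
  if (viewportWidth >= 600) return 'tablet.css';
  return 'core.css';
}

// Inject the chosen stylesheet; `doc` is a parameter for testability,
// in a browser you'd pass document.
function loadStylesheet(doc, href) {
  var link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}
```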
Rendering on the server: keeps it simple. Templates in only one place. Faster for page render.
Protect the servers with Cache All The Things and Akamai. Cache contexts: country (ads for foreign visitors, using a Geo-IP service to annotate a request header), device, cookies (for personalisation). Always worth checking hit/miss ratios on your Varnish; more boxes mean more misses, for example, and more personalisation means more misses, etc. To handle massive load, switch on the Akamai CDN, but at this point context is lost and everyone gets the domestic version of the site.
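The trade-off is easiest to see if you think of the cache key as the URL plus whichever contexts the page varies on. A toy sketch (header names are hypothetical, not the BBC's): every dimension you add multiplies the number of distinct keys, which is exactly why more personalisation means more misses.

```javascript
// Build a cache key from URL plus the cache contexts mentioned in
// the talk: country (from a Geo-IP annotation), device class, and
// whether the request is personalised. Header names are invented.
function cacheKey(req) {
  return [
    req.url,
    req.headers['x-country'] || 'gb',      // Geo-IP annotation
    req.headers['x-device'] || 'desktop',  // good/bad device class
    req.headers['cookie'] ? 'personalised' : 'anon'
  ].join('|');
}
```

Dropping a dimension (as when Akamai takes over and context is lost) collapses many keys into one: better hit ratio, but everyone gets the same version.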
Q: is there a difference between IPv4 and IPv6 in terms of geo mapping? Perhaps there’s a problem there.
Q&amp;A
Library of 20 devices to test with (bought from eBay… better than virtual testing services). Cucumber and Selenium tests (Selenium handles device width changes for testing). Android in particular can be very fragmented, as upgrades are not as easy for users. Remote debugging, handy handy handy.
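For flavour, a Cucumber feature of the sort the tester would rely on looks like this (purely illustrative Gherkin, not one of the BBC's actual features):

```gherkin
Feature: Top story image
  Scenario: Enhanced browsers get a responsive image
    Given I am using a browser that cuts the mustard
    When I open the front page
    Then the main story image should match my screen width
```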
BBC using a tool called “Wiener” (presumably weinre, the remote web inspector).
Wally: progress reports generated from the tests and code; check it out on GitHub. A read-only view of Gherkin.
Thanks Simon. Good summary.
The stats look right. I wrote down 10s, 100k for GPRS. We also arrived at 10s when creating our Web Design Guidelines for Low Bandwidth. These were written with desktop users in developing countries in mind but the timing is based on HCI research.