State of the Browser

Lessons from Building for the Bottom of the Web

Most of us build for the web we see every day: zippy networks, evergreen browsers, and devices that feel dated after two years. But there’s another web living alongside it - a web with wobbly connections, aging devices, and browsers that treat your CSS as mere suggestion.

This talk is about what happens when you have to make the web work under conditions you never endure, the unexpected techniques that emerge, and why building for the bottom of the web was the most rewarding experience of my career.

Transcript

Hello folks. So I'm here today. I'm going to tell you a story. And we've had a few people telling stories today already, which is really good, because I think stories are important. Stories are how we make sense of the world. They're how we share our experiences and our values. And often when we come to events like this, what we get is prescriptive advice. We get told, do this, don't do this, programming considered harmful, that sort of stuff. And that's not what I'm going to do. I'm just going to tell you a story. My goal is not to tell you what you should be doing. My goal is just to tell you what I did. Because this is a story that really happened. This happened to me maybe 10 or 15 years ago. And we were a few years post iPhone at this point, and everyone in the world was talking about mobile. Everyone was really excited about mobile, designing for touch, designing for handheld devices, responsive design, mobile first responsive design. Everyone was very excited about this. And in amongst all of this hullabaloo, I was approached by a customer who said that they wanted a new mobile friendly website for their business. And customers being customers, there is an NDA in place with this customer, so I cannot tell you who they were. But what I can do, in the greatest tradition of storytelling, is change some names to protect the innocent. By which I mean innocent me from a spurious lawsuit.

So we're going to call them Zingg. And Zingg were an esports data website, so they produced match statistics and results for several competitive professional video games. So they did CS:GO, they did Dota 2, they did League of Legends, StarCraft II, Rocket League. This isn't true either, incidentally. This is not remotely what this business did. But it is a close enough analogue for the story that I'm going to tell. So they had heard the buzz about mobile, and they wanted in on that action. They wanted a piece of that action. Now the good news was that a significant fraction of their users were already using handheld devices, which was great. The bad news is that we were lucky if any of those devices were less than five years old. Barely anybody had any flavour of iPhone at all. And while Android was more popular, it was mostly really old versions like Gingerbread and Eclair. Because it turns out that Zingg were based in Sub-Saharan Africa, in a country that had very poor telecoms infrastructure. And while mobile penetration was already pretty high, that was because those devices were significantly cheaper to buy than laptops and desktops.

There was also this other category, and we said to the guy, we said, what's that? What's this other category? What's that about? And it turned out that was devices like this. This is a Nokia 2730 feature phone. I actually happen to have one in my pocket here. This device has got no touchscreen. You navigate using a D-pad. It has got no Wi-Fi. You've got cellular data, or you have got absolutely nothing at all. And while this particular device does support 3G, not all of them did. But even for the ones that did, the network coverage in the country was so spotty that it was much more likely that a customer was going to be on a GPRS or EDGE connection than on 3G. So we're talking 2G, 2.5G network speeds. It means realistically, you're probably looking at maybe 200 kilobits a second download on a good day. Probably half of that in practical, real-world conditions. The screen on this thing is tiny. It is 240 pixels wide by 320 pixels tall. We have 30 megabytes of RAM, not gigabytes, megabytes of RAM, a 200 megahertz ARM9 processor, and the browser that is installed on this thing is Opera Mini. And the customer said that whatever we did, we had to support devices like this, because this was 40% of their mobile users. We couldn't just leave those folks out in the cold. We had to make sure that whatever we did was still providing for those people.

So armed with this information, we came up with three rules, three principles, three guiding principles, I suppose, that dictated everything else that we did in this project. So the first rule was that we had to embrace responsive design. Mobile sites were still pretty common at the time. Remember we used to do m. subdomains with just a mobile-specific version of a website there? But my little team, we couldn't afford to maintain multiple versions of this application. We had to have one code base that would work for everybody. That was what we had the resources to be able to put together. And that meant we had to embrace responsive design. And happily, responsive design was the shiny new hotness, so the team was very up for working on that. But even within the responsive framework, typically when we're thinking about responsive design, we're thinking about mobile devices that are 320 pixels wide. That's the width of the original iPhone. It's the width of a lot of Android devices at the time. But we had to support these feature phones at 240 pixels wide, which is 25% narrower than what we were used to when designing for mobile devices. And at the same time, the same code base had to support a 4K desktop display. Because while we were designing mobile first, we couldn't make desktop a second-class citizen. So that was rule number one.
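A mobile-first stylesheet spanning that whole range might be sketched like this. The breakpoints, class names, and values here are illustrative assumptions, not Zingg's real CSS:

```css
/* Base styles target the narrowest screens first: 240px-wide
   feature phones. Everything wider is layered on as an enhancement. */
.results {
  display: block;          /* single column by default */
  font-size: 14px;
}

/* Classic smartphones: the 320px width of the original iPhone. */
@media (min-width: 320px) {
  .results { font-size: 16px; }
}

/* Tablets and small desktops: room for two columns. */
@media (min-width: 768px) {
  .results { column-count: 2; }
}

/* Large desktops, up to 4K: cap the line length and centre it. */
@media (min-width: 1200px) {
  .results { max-width: 1100px; margin: 0 auto; }
}
```

The key design choice is that the narrowest device gets the default styles with no media query at all, so the oldest, least capable browsers have the least work to do.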

Rule number two was that we had to embrace progressive enhancement. The core experience had to work on Opera Mini. I don't know if anyone's ever developed for Opera Mini in anger. Has anyone worked properly on Opera Mini? A few people, a few hands going up. Opera Mini works in a slightly weird way. So when you make a request in Opera Mini, it doesn't go straight to the website you're trying to request from. First it goes to a server at Opera. Opera's server then fetches the webpage that you're after. It then renders the webpage on the server and compresses it down to this proprietary Opera format called OBML, and that's what gets delivered down to your device, and that's what gets displayed and rendered on your device. During that render phase on the server, the server will wait for up to two seconds. It's like two seconds, two and a half seconds. I think it might even be a little bit longer these days. It waits for any onload hooks to run, stuff that might manipulate the DOM at load time. But then after that, the page is freeze-dried and sent down to your device. So your JavaScript never even touches the device. It never even gets there. There's a little bit, you know, Opera Mini will look for things like onclick handlers, but when you hit those, it goes back to the server. It doesn't actually run anything locally on your device at all. Which means we had to make sure that this application would work with full page replacement, with JavaScript off totally. We could add JavaScript later. We could add interactivity later, but we could not depend upon any interactive elements for the core functionality of this application. That was the second rule.

The third rule, which was the big rule, or arguably the small rule, was that we had to do this inside 128 kilobytes. We had a 128-kilobyte page budget. What does that mean? That means that any page loaded with a cold cache had to come in at under 128 kilobytes, and that included everything. That includes fonts, it includes images, CSS, JavaScript, HTML, everything. Under 128 kilobytes. And this was not an arbitrary number. This was what we worked out we could reasonably load on a GPRS connection within a few seconds. And this was also a limit, not a goal, because even 128K was a lot. So obviously we had some difficult decisions that we had to make if we wanted to be able to pull this application off. The first decision that we made actually was really easy, and it was this: no web fonts. These were bytes we decided we just didn't need to spend. We would use the system font natively installed on the device. And this had a few advantages for us. First it meant that we avoided the flash of unstyled text, where the browser loads the text, renders the text, then loads the font, and then renders the text over again. Or worse, some browsers, I think Zach touched on this this morning, some browsers block rendering of the text completely until the font is loaded. Now these delays are annoying on a fast connection, but on a slow connection they are absolutely brutal. And so being able to avoid this class of error completely was something that was a big benefit for us. It's something we were really happy about. The second advantage that we got from avoiding web fonts is it gave us a wide range of font weights to work with, and a wide range of glyphs to work with. Because if we were doing this with a downloadable font, we'd have to download one font to be the 400 weight, another font to be the 700 weight, maybe another one at a 900 weight, maybe another one at a 500 weight.
Each of those would probably be an independent download, which would have to come down, spending loads and loads of bytes to make this happen. And typically we would also look to subset those fonts, so we take out characters that are unlikely to be used. But esports has got a lot of non-English players who use non-English characters in their names, and it would be a massive embarrassment for the customer if they're trying to render the name of a particularly famous player, and we haven't included a character for their name in our subset font.

So by avoiding web fonts completely, just using the native fonts that were installed on the device, we got a huge range of glyphs to work with, and a huge range of weights to work with, and that cost us exactly zero bytes out of our page budget, which was fantastic. This was absolutely perfect for us. It did mean that customers on iPhone would see something slightly different to customers on Android, who would see something slightly different to customers on Windows, but we thought, who cares? Does that make a difference? The only person it pissed off was our designer. (audience laughing) So that was the first decision that we made: no web fonts.
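In CSS terms, the decision amounts to something like the following. The stack is my guess at the sort of declaration involved; the talk doesn't give the real one:

```css
/* No web fonts, no @font-face: just whatever the device ships with.
   (Today you'd probably reach for `system-ui` instead.) */
body {
  font-family: Helvetica, Arial, sans-serif;
}

/* Every weight and glyph the OS font supports comes for free:
   no extra font file per weight, zero bytes from the page budget. */
h1 { font-weight: 900; }
h2 { font-weight: 700; }
```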

The second decision, that was harder, because it flew in the face of some of the common advice that we were given about building for the web. So frameworks have dominated front-end development since at least React, if not a little bit before React. Frameworks have been a big deal. We were actually working on this pre-React, but there were still a few frameworks that were vying for people's attention. There was SproutCore, there was Angular 1, there was Ember, which I think was actually SproutCore 2, and there was one called Cappuccino, which was absolutely bananas. Come and ask me about Cappuccino later. It was a completely crazy framework. But all of these frameworks would have broken our 128-kilobyte page budget before we'd written a single line of application code, before we'd written anything at all. So we thought, okay, a framework, we can't do a framework. Could we do a library? And we looked at a few libraries, looked at jQuery, Knockout, Backbone. Could we do anything with those kinds of things? But we realised in that situation as well, we would necessarily be shipping unused bytes down to the customer. With the best will in the world, there will be parts of that library that we will never execute, that we're not going to call into. And so we would be sending code to the customer's device that would never get executed. And this was before we had tree shaking to do that dead code elimination, right? So, okay, we can't use a library. So we did the thing that you're not really meant to do, and we decided to roll our own. So we made our own library, our own front-end library, and we called it Whizz. And Whizz implemented just the stuff that we needed. So Whizz could do class manipulation, it could do DOM querying, it could do event handling, and it could do HTTP requests. And that was it. That was all Whizz could do. 'Cause that was all we needed.

And Whizz allowed us to implement what we referred to as the Whizz navigation pattern. And the Whizz navigation pattern was predicated on a particularly trivial observation, which was that the header and footer of our website never changed. So why would we download those bytes over again? Why would we go back to the server and fetch those same bytes over again and bring them back to the client? So what we did instead was that Whizz would intercept a click, and it would go to the server. From the server it would fetch a partial page render, just the HTML that existed between the header and the footer. No, it wasn't returning data, it wasn't returning, like, JSON, it was returning the actual rendered HTML that was rendered on the server. And then we just injected that into the document with .innerHTML. Really straightforward. We didn't try and reuse nodes, we didn't do anything fancy like that. And this actually worked really, really well for us. It was no good for Opera Mini, because we didn't get that real-time interactivity; they still had to do full page replacement. But that's okay, as long as we're under the 128K page budget, that's okay, that's fine. Where this really paid dividends was for the more capable devices like iPhone, like Android, where we did have JavaScript, but there was still a bad network connection. And so it hugely reduced the number of bytes that we were transferring on those kinds of devices.
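A minimal modern sketch of that navigation pattern might look like this. To be clear, this is not Whizz itself (which predates `fetch` and would have used XMLHttpRequest), and the `?partial=1` endpoint, the `<main>` element, and the helper name are my assumptions for illustration:

```javascript
// Pure helper: map a page URL to the URL of its server-rendered partial.
// The ?partial=1 convention is an assumption, not Zingg's real scheme.
function partialUrl(href) {
  return href + (href.includes('?') ? '&' : '?') + 'partial=1';
}

// DOM wiring: only runs in a browser. Without JavaScript (e.g. on
// Opera Mini) the links simply fall back to full page replacement.
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a[href]');
    if (!link) return;
    event.preventDefault();
    fetch(partialUrl(link.href))
      .then((response) => response.text())
      .then((html) => {
        // Inject the fragment between the unchanging header and footer.
        // No node reuse, no diffing: just .innerHTML, as in the talk.
        document.querySelector('main').innerHTML = html;
        history.pushState(null, '', link.href);
      });
  });
}
```

Because the anchors are real links to real pages, the same markup works for both paths: enhanced clients swap in the fragment, everyone else gets an ordinary navigation.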

Probably the biggest problem that we faced in trying to squeeze web pages into such a tiny package was images. 'Cause even a small raster image takes an enormous number of bytes compared to text content. And text content will gzip, right? You'll get a 5 to 10 times compression ratio from running text through gzip, but images typically don't benefit from gzip compression 'cause they're compressed already. So we knew that we had to use images sparingly on this project, but reducing the absolute number of images was never going to cut it. We had to make sure that the images we did send were as small as we could possibly make them. We discovered that Adobe's tooling produces absolutely abysmal PNG files. They are huge for what they are actually describing. So initially, early in the project, we started using this tool called OptiPNG, which we could feed the stuff that our designer dropped out of Photoshop, and it would really reduce the file size without changing the way the file rendered visually, which was great. But partway through development, we discovered this other tool, a tool called TinyPNG. And TinyPNG did an absurdly good job of squeezing additional compression out of these images without any visual change. TinyPNG is a really great project. It's something I still use today. Many of the optimisations in TinyPNG are now also in OptiPNG, so you can get this from a variety of tools, and there's another tool called pngquant which I think does the same thing. But this is a really good tool. And I still use this for everything today, because even 10, 15 years on from this project, I'm still in the mindset of, why do I wanna spend additional bytes on this? I could just make this smaller. These days, TinyPNG are known as Tinify, and they also support JPEG images. But back then they didn't support JPEG images, so for JPEGs we had to do something different.

And what we came up with was probably terrible. It was definitely a hack, and it was a little bit counterintuitive. So let's say we wanted to illustrate a news article with this JPEG image. The first thing that we did, as I say, counterintuitively, was we would double it in size. So if we knew we wanted to render it at 400 by 300, we would export it at 800 by 600. But we would turn the quality all the way down to potato. We would take that all the way down as low as we could possibly get. And this resulted in a much smaller image, albeit heavily artifacted. And if we zoom in on this, I hope you can see quite what a horrible, nasty, blocky mess that image is. But actually, when we rendered it at half size, those artifacts were barely noticeable. You couldn't really see them. And so by doubling the image in size and then turning the quality settings all the way down, we actually got a smaller JPEG than if we'd exported it at the correct size but with medium quality settings. And visually it worked really well. It worked fantastically. Obviously it's a massive hack, and I wouldn't recommend anybody do this. We've got much better tooling for doing this today. There's things like Squoosh that can do comparable file-size compression without silly hacks. But as I say, I'm telling you a story. This is what we did. I wouldn't do this today. It was mad. But this is actually what we did.
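On the page, the hack boils down to exporting at twice the display dimensions and letting the browser scale the image down; the filename here is made up for illustration:

```html
<!-- Exported at 800x600 with the JPEG quality near the floor, but
     rendered at 400x300, so the blocky artifacts shrink below
     visibility. The heavy quantisation is what makes the file small. -->
<img src="match-report-800x600-q10.jpg"
     width="400" height="300"
     alt="Photo from the match report">
```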

The biggest win in images, though, came from embracing SVGs, which happily were supported on Opera Mini. Now, SVG is XML-based, which means it benefits from gzip compression, which is really good. It's also vector-based, which had the advantage for us that we could use the same SVG as the small and large version of an icon, for example. But that isn't to say that it was all plain sailing, because again, it turns out that Adobe's tooling, and other tooling as well, like Inkscape, produces really bad, noisy SVGs which are full of embedded metadata and over-precise paths that go to four, five, six decimal places we don't care about. And then there's artifacts that result from the way graphic designers work in vector tools. So we would have deeply nested groups, or redundant transforms that do nothing at all, or hidden layers, or sometimes we would even get nested raster images. So you would get literally a PNG, base64ed, inline in the SVG to achieve some effect. And we would never have known it was there unless we opened it up in a code editor to see what it was doing. So there was a tool for this as well, SVGO. This tool is still around, it's great, and Jake did a fantastic project called SVGOMG, which is a graphical user interface on top of SVGO. I still use that all the time, brilliant tool. And that did a really good job of clearing up a lot of that Adobe cruft. But it wasn't quite enough. So we had to become experts in SVG optimisation very, very quickly. And we did this, we brute forced this, basically.

We opened this up in a code editor and started fooling with stuff to see what would break. So we would find, we can take that group out, we can remove that property because the default actually matches what the property was, and we would just try and whittle away, see how small we could get this file without changing how it looked visually. And when we ran out of space on that, we started working with our design team. We said, okay, how can we change your workflows so that what comes out at the other end is a smaller SVG for us? So one of the goals that we worked to was to try and aim to only ever have a single path of any given colour. And we weren't always able to do that, but what we found was that that generally produced smaller SVGs than having multiple paths of the same colour. So we would try and combine all of the paths of one colour into a single path to produce smaller SVGs. And the net effect of this was that these two SVGs that I've got on the screen here are visually identical when you render them in a browser, and one of them is, what, 289 bytes, and the other one is five times larger than that. SVGO is much better today than it was back then. It's got a load of additional features in it as well. I would probably just use SVGO today. But even now, I still wince when I see an SVG that's just been copied and pasted out of Figma and turned up in a pull request, and I'm like, mm, and I'll usually go in and just fool with it a little bit to try and improve it. But I'd probably be more relaxed about just using SVGO for minification these days.

On the topic of minification, we've been minifying JavaScript and CSS for a long time. Now, this has been table stakes for like 10 years or something, right, if not longer than that. But not everyone is on board with minification. I've spoken with developers who think minifying your code is a waste of time. They say gzip gets you bigger wins anyway, so why would you waste the time mangling your code into an unreadable mess when you're just gonna zip it anyway and get an order of magnitude better gains? But we weren't particularly convinced by this argument when we were working on Zingg, because gzip support was actually pretty spotty on mobile devices at the time. And even a browser that does not support gzip is still going to get wins from minification. And if you do support gzip, well, you just get a double win, right? And these are the genuine figures out of the Zingg website about the raw size of the CSS and how it behaved when minified and zipped. So we were always in this mindset of every byte counts. We have to make sure that every byte justifies itself. Every byte has got to earn its place on the wire. And so once we'd got our images as small as we could possibly get them, and we'd got our CSS and our JavaScript as small as we could get it, and we'd got rid of the fonts so that was as small as we could get it, literally zero, we thought, what else have we got? Could we minify our HTML? And I don't think very many people do this these days, because it's bonkers.

So, HTML doesn't lend itself terribly well to minification. It's not like you can take a template variable with a long name and rename it t. You can't rename variables and stuff like that to gain some space. But there were a few things that we could do, and even a few hundred bytes, we figured, was gonna be worth it. So the first thing we did is we thought, okay, any Windows new line, we'll just turn into a Unix new line. That's great. That saves us a few hundred bytes straight away. Any HTML comments? We could strip those out, get rid of them. No, we didn't need those. Users didn't need to see them. Unless it was an IE conditional comment, they were a thing back then, and we had to make sure that we kept those ones around. We were able to remove any white space around the block-level elements. This was where using semantic HTML really helped us, because we could be confident about what was a block-level element and what was an inline element. We were just sticking to the specs, rather than having to look at whether the CSS was changing the display type of the element. Using the correct semantic type, if we wanted it inline, it would be a <span>, otherwise it would be a <div>, or using <p>'s and <h1>'s, et cetera. That really paid dividends at this point. And we could also collapse the white space around any inline element. But we also had to be careful to leave preformatted text alone. Although, I'm given to understand, we didn't do this detection for the <plaintext> tag, so that would have just ruined everything, right? (audience laughing) But we had to make sure that if something was in a <pre> tag, in a <textarea>, or if it was in a <script> tag or a <style> tag, we don't wanna fool with that, so we would just leave those alone. The result of this was HTML which was much smaller, but was all on one line. And I was fine with that. I don't know, that doesn't make a difference to me, but it turned out this broke IE. IE did not like this at all. I think any line over 1K, IE just went (blowing raspberries) and fell to bits completely. So the way we worked around this is that we changed the space before the first attribute in any element to a new line. So that was a byte-for-byte replacement for us, but it broke up the HTML for IE, and IE was happy again.
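Those rules can be sketched as a handful of string transforms. This is my reconstruction, not the real build step, and for brevity it removes whitespace between all adjacent tags rather than only around block-level elements as the talk describes:

```javascript
// Sketch of the HTML minification rules from the talk: normalise line
// endings, strip comments (but keep IE conditional comments), collapse
// whitespace, then put a newline before each tag's first attribute so
// old IE never sees one enormous line. Protected regions (<pre>,
// <textarea>, <script>, <style>) are lifted out first and restored at
// the end. A sketch only; a real minifier needs a proper parser.
function minifyHtml(html) {
  const protectedBlocks = [];
  const out = html
    // 1. Lift out regions whose whitespace must survive untouched.
    .replace(/<(pre|textarea|script|style)\b[\s\S]*?<\/\1>/gi, (m) => {
      protectedBlocks.push(m);
      return `\u0000${protectedBlocks.length - 1}\u0000`;
    })
    // 2. Windows new lines become Unix new lines.
    .replace(/\r\n/g, '\n')
    // 3. Drop comments, except IE conditionals like <!--[if lt IE 9]>.
    .replace(/<!--(?!\[if)[\s\S]*?-->/g, '')
    // 4. Collapse every run of whitespace to a single space.
    .replace(/\s+/g, ' ')
    // 5. Remove whitespace between adjacent tags entirely.
    .replace(/> </g, '><')
    // 6. Newline before the first attribute: same byte count, but it
    //    breaks up the single long line that upset old IE.
    .replace(/<([a-zA-Z][\w-]*) /g, '<$1\n');
  // 7. Restore the protected regions.
  return out.replace(/\u0000(\d+)\u0000/g, (_, i) => protectedBlocks[i]).trim();
}
```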

And the results of this were actually really effective. Just the minification step alone managed to strip out an awful lot of noise from the HTML that we didn't need to ship down to the customer. And even in the gzip case, there was still a small win for us. So we demoed this to the client. First demo to the client, they were really happy about it. They were really excited, they loved what we'd done. But they had one very important piece of feedback, because the client had just spent a significant amount of time and money working with Saatchi & Saatchi on a new brand campaign, and a print advertising campaign that went along with it, that included this really specific typeface, which was this very specific font with these dark letters with a thick outline around the outside. And they said, "We want you to use this as the heading style across the site. The H1 style should look like this." is what they said. And we looked at this with our heads in our hands and just saw bytes flying out of the window. We thought, "How in the fuck are we gonna do this? This is ridiculous." Our first thought was, could we do it with web fonts? We could subset the fonts. We know what glyphs are in our headings, so we could do that. Maybe we could still use the system font for the body text, but just for the headers, we could still download a font. We kind of thought about that. But it turned out that the text-stroke property didn't exist yet, so we couldn't do this thick outline around the letters. So that very quickly was a non-starter. Okay, we can't do this. We realised that we were gonna have to do this with images. We were gonna have to do this with SVG. But there was a problem with that as well.

So, the brand styling was very clear that the outline had to go around the outside of the letters. And while SVG does support text stroke, it centres the stroke on the outline of the shape. And as you can see, you get a very different effect on something that is centred on the path versus something that is on the outside of the path. SVG only supports the centred stroke, and that's no good. That's ugly as hell, right? Our designer did come up with a workaround for this. He said, "What if we layer two shapes on top of each other? What if we put a shape on top to hide the bits of the stroke that we don't need?" And this works really well. Perfect, looks beautiful, looks exactly like what we wanted. It also doubled the size of our SVGs. (audience laughing) So we thought, that's no good. But we had learned enough about the structure of an SVG when we were fooling with it in the earlier phases of the project that we'd figured out a way to work around this. So SVG has got this <defs> element, and stuff that appears in <defs>, the definitions, doesn't get rendered by default. But what you can do is you can give it a label, and then you can reference it later on. So what we did is we took the path that described those letters, and we rendered it twice, once on top of itself. The background version had the thick stroke around it. The foreground version had a fill to hide the bits of the thick stroke we didn't want to see. This worked absolutely beautifully and provided exactly what we wanted, at the cost of only a few extra bytes to put those <defs> and <use> elements inline in those SVGs. So, hooray. We showed this, the client was ecstatic, the client was happy. This is what we did, this is what we shipped.
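The shape of that trick looks something like this. The path is a stand-in rectangle rather than the real lettering, and I've used the modern `href` attribute where the original-era SVG would have needed `xlink:href`:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 60 50">
  <defs>
    <!-- The lettering path, defined once and never rendered directly. -->
    <path id="letters" d="M10 10 H50 V40 H10 Z"/>
  </defs>
  <!-- Background copy: a fat stroke, centred on the path as SVG requires. -->
  <use href="#letters" fill="none" stroke="#000" stroke-width="8"/>
  <!-- Foreground copy: the fill covers the inner half of the stroke,
       leaving only the outside half visible as an outline. -->
  <use href="#letters" fill="#fff" stroke="none"/>
</svg>
```

The path data is only stored once, so the second rendering costs just the extra `<use>` element, not a second copy of the shape.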

And the final result of this was actually lightning fast. It was absolutely ridiculous. We were showing this off at trade shows, going around, and Zingg's competitors were sitting there looking at a loading spinner while our stuff was up and running and doing its thing. And the weird thing is, it turned out, we didn't design it to do this, this is just something that fell out by accident, that it worked on almost every device that we tried. Almost everything we tried, it would run on. We even tried running it on Lynx, the text-based, console-based browser, and it worked absolutely perfectly on Lynx straight away. Because the D-pad navigation that we had to use for the feature phones translates really well to the keyboard-based navigation that you have in Lynx. And our use of semantic HTML meant that Lynx just rendered stuff the right way. It just worked out correctly, and it was actually really usable in this text mode. Lynx doesn't even download your style sheet, right? It doesn't pay any attention to that at all. So you have to use the right semantic element to get Lynx to render it properly. And once we'd seen it running on Lynx, we wanted this to run on everything. We tried every device we could possibly get our hands on. So it ran on a Nintendo Wii, worked absolutely fine on that. It ran on the weird webOS television that we had in the office, worked on that absolutely fine. My favourite one was the PlayStations. It ran on the PSP and it ran on the PlayStation 3. But it turns out these were based on a browser called NetFront, and they had really weird font rendering rules, and it turned out that anything that you put in an h1 tag would render in the PlayStation font. (audience laughing) Didn't matter what you said, didn't matter what your style sheet said, it would render it in the PlayStation font no matter what.

It also ran in a variety of network conditions. So it would run on a speeding train as you were hopping from mast to mast. It would work on hotel WiFi, which is just about as bad as 2G. It also ran on dial-up, and we checked. We got a dial-up connection and we actually tried it out to see if it would run properly. And of course it ran well on Opera Mini. And this is me actually running it on Opera Mini. By coincidence, O2 switched off all of their 3G masts about three or four weeks ago, and so this is actually running on an EDGE connection. You can see the E up in the corner there. So this is really, really low-end stuff. And what we found was really interesting: a lot of the advice that we were hearing at the time about web performance, inline your styles for the first render, and chunking and splitting and lazy loading and things like that, was just so much less relevant when your entire application would fit inside a dozen TCP packets. Not irrelevant, still stuff that was worth doing, but it just didn't really register for us. The constraints on this project, we found, actually pushed us towards solutions that worked better for everybody, not just the people who had the latest and greatest devices, because we couldn't afford to solve problems just by spending bytes. And I think this is a really overlooked form of accessibility. We often think about accessibility in terms of supporting screen readers and good contrast ratios, and Chad's talk on that this morning was fantastic. But it's not just about those kinds of things. It can also be about one-handed use. It can also be about using clear and simple language in your application.
And it can be about designing for people who are not on the latest and greatest devices, people who can't afford those devices, people for whom all they can afford is a device that is five, six, seven, eight years old, people who are on metered data plans, people who've got 250MB of data to last them the month, and that's all they've got. And we should be spending that data responsibly. I think performance is a really overlooked form of accessibility.

Our big takeaway from this project was that when we built to this extreme, we ended up with a product that actually worked better for everybody. Of course, your mileage may vary. This might not apply to any of your particular products. This is not me saying this is what you should do. This is me just telling you what I did. But that's me. That was Zingg. I'm gonna give you a couple of plugs before I get going. So as Jake mentioned, I do a podcast called Skeptics with a K. We talk about science, reason, rationality, critical thinking, things like that. We don't talk about software often, but it does come up. A couple of weeks ago, I did an hour-long episode on the quoted-printable errors in the Epstein files. (audience laughing) So that was fun. I also write occasionally for The Skeptic, and you can find me on Bluesky @mikehall314, and I'm not on Twitter 'cause fuck that. Right, thanks very much, folks.

About Mike Hall

Mike Hall

Mike Hall is a web developer and Doctor Who fan, but not in that order. He has been building for the web since 1998, and in that time has been a frontend lead, a backend lead, a CTO, a devops engineer, a full-stack developer, a PHP guy, a JavaScript guy, a TypeScript guy, a CSS guy, and a pain in the neck. He is currently Lead Engineer at a Manchester-based startup named Zaptic, creating software for frontline workers in the manufacturing sector.

In his spare time, he is active in the fields of science communication and scientific skepticism, advocating for evidence-based thinking and reason. He produces and presents a weekly podcast called Skeptics with a K, is on the board of directors for The Skeptic magazine, and is co-founder of the Merseyside Skeptics Society. In his other spare time he likes to rest, but there isn’t much of that to go around.