Journal tags: build


Decision time

I’ve always associated good design with thoughtfulness. Like, I should be able to point to any element in an interface and the designer should be able to tell me the reasons it’s there. Those reasons may be rooted in user needs or aesthetics or some other consideration, but the point is that there’s a justification for it. Justify every pixel!

But I’ve come to realise that this is a bit reductionist. Now when I point at an interface element, I still expect the designer to be able to justify its inclusion, but I’d also like to know the trade-offs that were made.

Suppose there’s a large hero image. I’m sure the designer would have no problem justifying its inclusion on the basis of impact and the emotional heft it delivers. But did they also understand the potential downsides? Were they aware of the performance implications of including a large image?

I hope the answer to both questions is yes. They understood the costs, but they decided that, on balance, the positives outweighed the negatives.

When it comes to the positives, universal principles of design often apply. Colour theory, typography, proximity, and so on. But the downsides tend to be specific to the medium that the design is delivered in.

Let’s say you’re designing for print. You want to include an extra typeface just for footnotes. No problem. There isn’t really a downside. In print, you can use all the typefaces you want. But if this were for the web, then the calculation would be different. Every extra typeface comes with a performance penalty. A decision that might be justified in one medium might not work in another medium.

It works both ways; on the web you can use all the colours you want, without incurring any penalties, but in print—depending on the process you’re using—you might have to weigh up that decision very differently.

From this perspective, every design decision is like a balance sheet. A good web designer understands the benefits and the costs behind each decision they make.

It’s a similar story when it comes to web development. Heck, we even have the term “tech debt” to describe decisions that we know aren’t for the best in the long term.

In fact, I’d say that consideration of the long-term effects is something that should play a bigger part in technical decisions.

When we’re weighing up the pros and cons of using a particular tool, we have a tendency to think in the here and now. How might this help me right now? How might this hinder me right now?

But often a decision that delivers short-term gain may well end up delivering long-term pain.

Alexander Petros describes this succinctly:

Reopen a node repository after 3 months and you’ll find that your project is mired in a flurry of security warnings, backwards-incompatible library “upgrades,” and a frontend framework whose cultural peak was the exact moment you started the project and is now widely considered tech debt.

When I wrote about making the Patterns Day website I described my process as doing it “the long hard stupid way”—a term that Frank coined in a talk he gave a few years back. But perhaps my hands-on approach is only long, hard and stupid in the short term. With each passing year, the codebase will retain a degree of readability and accessibility that I would’ve sacrificed had I depended on automated build processes.

Robin Berjon puts this into the historical perspective of Taylorism and Luddism:

Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy.

Or as Marshall McLuhan put it:

Every extension is also an amputation.

…which is fine as long as the benefits of the extension outweigh the costs of the amputation. My worry is that, when it comes to evaluating technology for building on the web, we aren’t considering the longer-term costs.

Maintenance matters. With the passing of time, maintenance matters more and more.

Maybe we avoid thinking about the long-term costs because it would lead to decision paralysis. That’s understandable. But I take comfort from some words of wisdom on the web from the 1990s. Tim Berners-Lee’s style guide for hypertext:

Because hypertext is potentially unconstrained you are a little daunted. Do not be. You can write a document as simply as you like. In many ways, the simpler the better.

Culture and style

Ever get the urge to style a good document?

No? Just me, then.

Well, the urge came over me recently so I started styling this single-page site:

A Few Notes On The Culture by Iain M Banks

I’ve followed this document across multiple locations over the years. It started life as a newsgroup post on rec.arts.sf.written in 1994. Ken MacLeod published it there on Iain M Banks’s behalf.

The post complements the epic series of space opera books that Iain M Banks set in the anarcho-utopian society of The Culture. It’s a fascinating piece of world building, as well as an insight into the author’s mind.

I first became aware of it a few years later, after a copy had been posted to the web. That URL died, but Adrian Hon kept a copy on his site. Lots of copies keep stuff safe, so after contemplating linkrot, I made a copy on this site too.

But I recently thought that maybe it deserved a bit of art direction, so I rolled up my sleeves and started messing around, designing in the browser and following happy little accidents.

The finished result is still fairly sparse. It’s still entirely text, except for a background image that shows up if your screen is wide enough. That image of a planet originally started as an infra-red snapshot of Jupiter by the James Webb Space Telescope that I worked over until it was unrecognisable.

The text itself is the main focus of the design though. I knew I wanted to play around with a variable font. Mona Sans from GitHub was one of the first ones I tried and I found it instantly suitable. I had a lot of fun playing with different weights and widths.

After a bit of messing around, I realised that the heading styles were reminding me of some later reissues of The Culture novels, so I leant into that, deliberately styling the byline to resemble the treatment of the author’s name on those book covers.

There isn’t all that much CSS. I’ve embedded it in the head of the HTML rather than linking to a separate style sheet, so feel free to view source and poke around in there. You’ll see that I’m making liberal use of custom properties, the clamp function, and logical properties.
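
If you do poke around, here’s the flavour of how those pieces fit together. This is a minimal sketch with made-up names and values, not the actual CSS from the page; the weight and width numbers are just plausible settings for the variable font axes mentioned above:

:root {
  --ink: #e6e6e6;
  --gutter: clamp(1rem, 0.5rem + 2vw, 3rem);
}
body {
  color: var(--ink);
  padding-block: var(--gutter);  /* logical properties instead of padding-top and padding-bottom */
  padding-inline: var(--gutter); /* …and instead of padding-left and padding-right */
}
h1 {
  font-weight: 850;     /* somewhere along the variable font’s weight axis */
  font-stretch: 112.5%; /* …and along its width axis */
}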

Originally I had a light mode and dark mode but I found that the dark mode was much more effective so I ditched the lighter option.

I did make sure to include some judicious styles for print, so if you fancy reading on paper, it should print out nicely.
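
Those print styles don’t need to be elaborate. Here’s a sketch of the general idea (illustrative rules rather than the page’s actual ones):

@media print {
  body {
    background: none; /* drop the dark background and the planet image */
    color: black;
  }
  a[href^="http"]::after {
    content: " (" attr(href) ")"; /* spell out link destinations on paper */
  }
}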

Oh, and of course it’s a progressive web app that works offline.

I didn’t want to mess with the original document other than making some typographic tweaks to punctuation, but I wanted to break up the single wall of text. I wasn’t about to start using pull quotes on the web so in the end I decided to introduce some headings that weren’t in the original document:

  1. Government
  2. Economics
  3. Technology
  4. Philosophy
  5. Lifestyle
  6. Travel
  7. Habitat
  8. Legal System
  9. Politics
  10. Identity
  11. Nomenclature
  12. Cosmology

If your browser viewport is tall enough, the heading for the current section you’re reading will remain sticky as you scroll. No JavaScript required.
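
That’s position: sticky doing all the work. A minimal sketch, assuming each heading lives inside the section it introduces (the height threshold and colour here are placeholders, not the page’s real values):

@media (min-height: 30em) {
  section h2 {
    position: sticky;
    top: 0;
    background-color: #000; /* stops the text underneath showing through as it scrolls */
  }
}

Because each heading is sticky only within its own section, the next section’s heading pushes the previous one out of the way as you scroll.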

I’m pretty pleased with how this little project turned out. It was certainly fun to experiment with fluid type and a nice variable font.

I can add this to my little collection of single-page websites I’ve whittled over the years:

Chain of tools

I shared this link in Slack with my co-workers today:

Cultivating depth and stillness in research by Andy Matuschak.

I wasn’t sure whether it belonged in the #research or the #design channel. While it’s ostensibly about research, I think it applies to design more broadly. Heck, it probably applies to most fields. I should have put it in the Slack channel I created called #iiiiinteresting.

The article is all about that feeling of frustration when things aren’t progressing quickly, even when you know intellectually that not everything should always progress quickly.

The article is filled with advice for battling this feeling, including this observation on curiosity:

Curiosity can also totally change my relationship to setbacks. Say I’ve run an experiment, collected the data, done the analysis, and now I’m writing an essay about what I’ve found. Except, halfway through, I notice that one column of the data really doesn’t support the conclusion I’d drawn. Oops. It’s tempting to treat this development as a frustrating impediment—something to be overcome expediently. Of course, that’s exactly the wrong approach, both emotionally and epistemically. Everything becomes much better when I react from curiosity instead: “Oh, wait, wow! Fascinating! What is happening here? What can this teach me? How might this change what I try next?”

But what really resonated with me was this footnote attached to that paragraph:

I notice that I really struggle to generate curiosity about problems in programming. Maybe it’s because I’ve been doing it so long, but I think it’s because my problems are usually with ephemeral ideas, incidental to what I actually care about. When I’m fighting some godforsaken Javascript build system, I don’t feel even slightly curious to “really” understand those parochial machinations. I know they’re just going to be replaced by some new tool next year.

I feel seen.

I know I’m not alone. I know people who were driven out of front-end development because they felt the unspoken ultimatum was to either become a “full stack” developer or see yourself out.

Remember Chris’s excellent post, The Great Divide? Zach referenced it recently. He wrote:

The question I keep asking though: is the divide borne from a healthy specialization of skills or a symptom of unnecessary tooling complexity?

Mostly I feel sad about the talented people we’ve lost because they felt their front-of-the-front-end work wasn’t valued.

But wait! Can I turn my frown upside down? Can I take Andy Matuschak’s advice and say, “Oh, wait, wow! Fascinating! What is happening here? What can this teach me?”

Here’s one way of squinting at the situation…

There’s an opportunity here. If many people—myself included—feel disheartened and ground down by the amount of time they need to spend dealing with toolchains and build systems, what kind of system would allow us to get on with making websites without having to deal with that stuff?

I’m not proposing that we get rid of these complex toolchains, but I am wondering if there’s a way to make it someone else’s job.

I guess this job is DevOps. In theory it’s a specialised field. In practice everyone adding anything to a codebase partakes in continual partial DevOps because they must understand the toolchains and build processes in order to change one line of HTML.

I’m not saying “Don’t Make Me Think” when it comes to the tooling. I totally get that some working knowledge is probably required. But the ratio has gotten out of whack. You need a lot of working knowledge of the toolchains and build processes.

In fact, that’s mostly what companies hire for these days. If you’re well versed in HTML, CSS, and vanilla JavaScript, but you’re not up to speed on pipelines and frameworks, you’re going to have a hard time.

That doesn’t seem right. We should change it.

Sass and clamp

CSS got some pretty nifty features recently. There are the min() and max() functions. If you use them for, say, width, you can use one rule where previously you would’ve needed two (a width declaration followed by either min-width or max-width). But they can also be applied to font-size! That’s very nifty—we’ve never had min-font-size or max-font-size properties.
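
For example, where you previously needed two declarations, min() collapses them into one. And max() gives you a de-facto minimum font size. The values here are arbitrary, just to show the shape of it:

.element {
  width: 50vw;
  max-width: 30em;
}
/* …is equivalent to: */
.element {
  width: min(50vw, 30em);
}
/* and although there’s no min-font-size property: */
.element {
  font-size: max(1rem, 2.5vw); /* never smaller than 1rem */
}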

There’s also the clamp() function. That allows you to set a minimum size, a default size, and a maximum size. Again, it can be used for lengths, like width, or for font-size.

Over on thesession.org, I’ve had some media queries in place for a while now that would increase the font-size for larger screens. It’s nothing crucial, just a nice-to-have so that on wide screens, the font is bumped up accordingly. I realised I could replace all those media queries with one clamp() statement, thanks to the vw (viewport width) unit:

font-size: clamp(1rem, 1.333vw, 1.5rem);

By default, the font-size is 1.333vw (1.333% of the viewport width), but it will never get smaller than 1rem and it will never get larger than 1.5rem.

That works, but there’s a bit of an issue with using raw vw units like that. If someone is on a wide screen and they try to adjust the font size, nothing will happen. The viewport width doesn’t change when you bump the font size up or down.

The solution is to mix in some kind of unit that does respond to the font size being bumped up or down (like, say, the rem unit). Handily, clamp() allows you to combine units, just like calc(). So I can do this:

font-size: clamp(1rem, 0.5rem + 0.666vw, 1.5rem);

The result is much the same as my previous rule, but now—thanks to the presence of that 0.5rem value—the font size responds to being adjusted by the user.

You could use a full 1rem in that default value:

font-size: clamp(1rem, 1rem + 0.333vw, 1.5rem);

…but if you do that, the minimum size (1rem) will never be reached—the default value will always be larger. So in effect it’s no different than saying:

font-size: min(1rem + 0.333vw, 1.5rem);

I mentioned this to Chris just the other day.

Anyway, I got the result I wanted. I wanted the font size to stay at the browser default size (usually 16 pixels) until the screen was larger than around 1200 pixels. From there, the font size gets gradually bigger, until it hits one and a half times the browser default (which would be 24 pixels if the default size started at 16). I decided to apply it to the :root element (which is html) using percentages:

:root {
  font-size: clamp(100%, 50% + 0.666vw, 150%);
}

(My thinking goes like this: if we take a screen width of 1200 pixels, then 1vw would be 12 pixels: 1200 divided by 100. So for a font size of 16 pixels, that would be 1.333vw. But because I’m combining it with half of the default font size—50% of 16 pixels = 8 pixels—I need to cut the vw value in half as well: 50% of 1.333vw = 0.666vw.)
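
Running those numbers as a quick sanity check:

/* At a viewport width of 1200 pixels:
   0.666vw = 0.666% of 1200px ≈ 8px
   50% of the 16px default    = 8px
   8px + 8px = 16px, i.e. 100%, the browser default, as intended */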

So I’ve got the CSS rule I want. I dropped it in to the top of my file and…

I got an error.

There was nothing wrong with my CSS. The problem was that I was dropping it into a Sass file (.scss).

Perhaps I am showing my age. Do people even use Sass any more? I hear that post-processors usurped Sass’s dominance (although no-one’s ever been able to explain to me why they’re different to pre-processors like Sass; they both process something you’ve written into something else). Or maybe everyone’s just writing their CSS in JS now. I hear that’s a thing.

The Session is a looooong-term project so I’m very hesitant to use any technology that won’t stand the test of time. When I added Sass into the mix, back in—I think—2012 or so, I wasn’t sure whether it was the right thing to do, from a long-term perspective. But it did offer some useful functionality so I went ahead and used it.

Now, eight years later, it was having a hard time dealing with the new clamp() function. Specifically, it didn’t like the values being calculated through the addition of multiple units. I think it was clashing with Sass’s in-built ability to add units together.

I started to ask myself whether I should still be using Sass. I looked at which features I was using…

Variables. Well, now we’ve got CSS custom properties, which are even more powerful than Sass variables because they can be updated in real time. Sass variables are like const. CSS custom properties are like let.
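
To make that analogy concrete: a Sass variable is baked in at compile time, whereas a custom property can be redeclared while the page is live. A minimal sketch (the names and colours are made up):

:root {
  --accent: steelblue;
}
a {
  color: var(--accent);
}
@media (prefers-color-scheme: dark) {
  :root {
    --accent: powderblue; /* everything using var(--accent) updates in real time */
  }
}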

Mixins. These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() functions are handy though when it comes to colours.

Nesting. I’ve never been a fan. I know it can make the source files look tidier but I find it can sometimes obfuscate what your final selectors are going to look like. So this wasn’t something I was using much anyway.

Multiple files. Ah! This is the thing I would miss most. Having separate .scss files for separate interface elements is very handy!

But globbing a bunch of separate .scss files into one .css file isn’t really a Sass task. That’s what build tools are for. In fact, that’s what I was already doing with my JavaScript files; I write them as individual .js files that then get concatenated into one .js file using Grunt.

(Yes, this project uses Grunt. I told you I was showing my age. But, you know what? It works. Though seeing as I’m mostly using it for concatenation, I could probably replace it with a makefile. If I’m going to use old technology, I might as well go all the way.)

I swapped out Sass variables for CSS custom properties, mixins for calc(), and removed what little nesting I was doing. Then I stripped the Sass parts out of my Grunt file and replaced them with some concatenation and minification tasks. All of this makes no difference to the actual website, but it means I’ve got one less dependency …and I can use clamp()!

Remember a little while back when I was making a dark mode for my site? I made this observation:

Let’s just take a moment here to pause and reflect on the fact that we can now use CSS to create all sorts of effects that previously required a graphic design tool like Photoshop.

It feels like something similar has happened with tools like Sass. Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

It’s like when we used to need something like jQuery to do DOM Scripting succinctly using CSS selectors. Then we got things like querySelector() in JavaScript so we no longer needed the trailblazer.

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

You could argue that this is what happened with Flash. It certainly happened with jQuery and Sass. I’m pretty sure we’ll see the same cycle play out with frameworks like React.

Design sprint?

Our hack week at CERN to reproduce the WorldWideWeb browser was five days long. That’s also the length of a design sprint. So …was what we did a design sprint?

I’m going to say no.

On the surface, our project has all the hallmarks of a design sprint. A group of people who don’t normally work together were thrown into an intense week of problem-solving and building, culminating in a tangible testable output. But when you look closer, the journey itself was quite different. A design sprint is typically broken into five phases, each one mapped on to a day of work:

  1. Understand and Map
  2. Demos and Sketch
  3. Decide and Storyboard
  4. Prototype
  5. Test

Gathered at CERN, hunched over laptops.

There was certainly plenty of understanding, sketching, and prototyping involved in our hack week at CERN, but we knew going in what the output would be at the end of the week. That’s not the case with most design sprints: figuring out what you’re going to make is half the work. In our case, we knew what needed to be produced; we just had to figure out how. Our process looked more like this:

  1. Understand and Map
  2. Research and Sketch
  3. Build
  4. Build
  5. Build

Now you could say that it’s a kind of design sprint, but I think there’s value in reserving the term “design sprint” for the specific five-day process. As it is, there’s enough confusion between the term “sprint” in its agile sense and “design sprint”.

New Adventures 2019

My trip to Nottingham for the New Adventures conference went very well indeed.

First of all, I had an all-day workshop to run. I was nervous. Because I no longer prepare slides for workshops—and instead rely on exercises and discussions—I always feel like I’m winging it. I’m not winging it, but without the security blanket of a slide deck, I don’t have anything to fall back on.

As it turned out, I needn’t have worried. The workshop went great. Well, I thought it went great but you’d really have to ask the attendees to know for sure. One of the workshop participants, Westley Knight, wrote about his experience:

The workshop itself was fluid enough to cater to the topics that the attendees were interested in; from over-arching philosophy to technical detail around service workers and new APIs. It has helped me to understand that learning in this kind of environment doesn’t have to be rigorously structured, and can be shaped as the day progresses.

(By the way, if you’d like me to run this workshop at your company, get in touch.)

With the workshop done, it was time for me to freak out fully about my conference talk. I was set to open the show. No pressure.

Actually, I felt pretty damn good about what I had been preparing for the past few months (it takes me aaages to put a talk together), but I always get nervous about presenting new material—until I’ve actually given the talk in front of a real audience, I don’t actually know if it’s any good or not.

Clare was speaking right after me, but she was having some technical issues. It’s funny; as soon as she had a problem, I immediately switched modes from conference speaker to conference organiser. Instead of being nervous, I flipped into being calm and reassuring, getting Clare’s presentation—and fonts—onto my laptop, and making sure her talk would go as smoothly as possible (it did!).

My talk went down well. The audience was great. Everyone paid attention, laughed along with the jokes, and really listened to what I was trying to say. For a speaker, you can’t ask for better than that. And people said very nice things about the talk afterwards. Sam Goddard wrote about how it resonated with him.

Wearing my eye-watering loud paisley shirt on stage at New Adventures.

You can peruse the slides from my presentation, though they make very little sense out of context. Video of the talk is forthcoming.

The advantage to being on first was that I got my talk over with at the start of the day. Then I could relax and enjoy all the other talks. And enjoy them I did! I think all of the speakers were feeling the same pressure I was, and everybody brought their A-game. There were some recurring themes throughout the day: responsibility; hope; diversity; inclusion.

So New Adventures was already an excellent event by the time we got to Ethan, who was giving the closing talk. His talk elevated the day into something truly sublime.

Look, I could gush over how good Ethan’s talk was, or try to summarise it, but there’s really no point. I’ll just say that I felt the same sense of being present at something genuinely important that I felt when I was in the room for his original responsive web design talk at An Event Apart back in 2010. When the video is released, you really must watch it. In the meantime, you can read through the articles and books that Ethan cited in his presentation.

New Adventures 2019 was worth attending just for that one talk. I was very grateful I had the opportunity to attend, and I still can’t quite believe that I also had the opportunity to speak.

Building links

In just over a week, I’ll be giving the opening talk at the New Adventures conference in Nottingham. I’ll be giving a workshop the day before too. There are still tickets available for both.

I have to admit, I’m kind of nervous about this talk. It’s been quite a while since the last New Adventures, but it’s always had quite the cachet. I think I went to most of them. It’s quite strange—and quite an honour—to shift gears from attendee to speaker.

The talk I’ll be giving is called Building. That might be a noun. That might be a verb. You decide:

Every new medium looks to what has come before for guidance. Web design has taken cues from centuries of typography and graphic design. Web development has borrowed metaphors and ideas from the world of architecture. Let’s take a tour of some of the most influential ideas from architecture that have crossed over into the web, from pattern languages to responsive design. Together we’ll uncover how to build resilient, performant, accessible and beautiful structures that work with the grain of the materials of the web.

This talk builds upon the talk I gave at last year’s An Event Apart called The Way Of The Web. It also reflects many of the ideas in Resilient Web Design. When I gave a run-through of the talk at Clearleft last week, Andy called it a “greatest hits.” For a while there, I was feeling guilty about retreading some ground I’ve covered in previous talks and writings. Then I realised it was pretty arrogant of me to think that anyone in the audience would be familiar with any of it.

Besides, I’ve got a whole new avenue of exploration in this talk. It’s about language and metaphor—how we talk about what we do on the web. I’ve just finished giving another run-through at the Clearleft studio and I’m feeling pretty good about it. That’s good, because I find that giving a talk in a small room to a handful of colleagues is way more stressful than giving a talk to hundreds of people at a conference.

Just as I put together links related to last year’s talk, I figured I’d provide some hyperlinks for anyone interested in the topics raised in this new talk…

Books

Articles

Audio

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated who’s worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bare that in mind when helping others find their own path through the knotted weeds of JavaScript.

Empire State

I’m in New York. Again. This time it’s for Google’s AMP Conf, where I’ll be giving ‘em a piece of my mind on a panel.

The conference starts tomorrow so I’ve had a day or two to acclimatise and explore. Seeing as Google are footing the bill for travel and accommodation, I’m staying at a rather nice hotel close to the conference venue in Tribeca. There’s live jazz in the lounge most evenings, a cinema downstairs, and should I request it, I can even have a goldfish in my room.

Today I realised that my hotel sits at the apex of a triangle of interesting buildings: carrier hotels.

32 Avenue Of The Americas: “Telephone wires and radio unite to make neighbors of nations”

Looming above my hotel is 32 Avenue of the Americas. On the outside the building looks like your classic Gozer the Gozerian style of New York building. Inside, the lobby features a mosaic on the ceiling, and another on the wall extolling the connective power of radio and telephone.

The same architects also designed 60 Hudson Street, which has a similar Art Deco feel to it. Inside, there’s a cavernous hallway running through the ground floor but I can’t show you a picture of it. A security guard told me I couldn’t take any photos inside …which is a little strange seeing as it’s splashed across the website of the building.

60 Hudson: “HEADQUARTERS The Western Union Telegraph Co. and telegraph capitol of the world 1930-1973”

I walked around the outside of 60 Hudson, taking more pictures. Another security guard asked me what I was doing. I told her I was interested in the history of the building, which is true; it was the headquarters of Western Union. For much of the twentieth century, it was a world hub of telegraphic communication, in much the same way that a beach hut in Porthcurno was the nexus of the nineteenth century.

For a 21st century hub, there’s the third and final corner of the triangle at 33 Thomas Street. It’s a breathtaking building. It looks like a spaceship from a Chris Foss painting. It was probably designed more like a spacecraft than a traditional building—its primary purpose was to withstand an atomic blast. Gone are niceties like windows. Instead there’s an impenetrable monolith that looks like something straight out of a dystopian sci-fi film.

33 Thomas Street, New York

Brutalist on the outside, its interior is host to even more brutal acts of invasive surveillance. The Snowden papers revealed this AT&T building to be a centrepiece of the Titanpointe programme:

They called it Project X. It was an unusually audacious, highly sensitive assignment: to build a massive skyscraper, capable of withstanding an atomic blast, in the middle of New York City. It would have no windows, 29 floors with three basement levels, and enough food to last 1,500 people two weeks in the event of a catastrophe.

But the building’s primary purpose would not be to protect humans from toxic radiation amid nuclear war. Rather, the fortified skyscraper would safeguard powerful computers, cables, and switchboards. It would house one of the most important telecommunications hubs in the United States…

Looking at the building, it requires very little imagination to picture it as the lair of villainous activity. Laura Poitras’s short film Project X basically consists of a voiceover of someone reading an NSA manual, some ominous background music, and shots of 33 Thomas Street looming in its oh-so-loomy way.

A top-secret handbook takes viewers on an undercover journey to Titanpointe, the site of a hidden partnership. Narrated by Rami Malek and Michelle Williams, and based on classified NSA documents, Project X reveals the inner workings of a windowless skyscraper in downtown Manhattan.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask it at the right moment. It’s like Question Time for the web.

Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than HTML, JavaScript, or CSS.

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass before they’d first learned CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a shared solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

A map to build by

The fifth and final Build has just wrapped up in Belfast. As always, it delivered an excellent day of thought-provoking talks.

It felt like some themes emerged, not just from this year, but from the arc of the last five years. More than one speaker tapped into a feeling that I’ve had for a while that the web has changed. The web has grown up. Unfortunately, it has grown up to be kind of a dickhead.

There were many times during the day’s talks at Build that I was reminded of Anil Dash’s The Web We Lost. Both Jason and Frank pointed to the imbalance of power on the web, where the bottom line has become more important than the user. It’s a landscape dominated by The Stacks—Google, Facebook, et al.—and by fly-by-night companies who have no interest in being good web citizens, and even less interest in the data that they’re sucking from their users.

Don’t get me wrong: I’m not saying that companies shouldn’t be interested in making money—that’s what companies do. But prioritising profit above all else is not going to result in a stable society. And the web is very much part of the fabric of society now. Still, the web is young enough to have escaped the kind of regulation that “real world” companies would be subjected to. Again, don’t get me wrong: I don’t want top-down regulation. What I want is some common standards of decency amongst web companies. If the web ends up getting regulated because of repeated acts of abuse, it will be a tragedy of the commons on an unprecedented scale.

I realise that sounds very gloomy and doomy, and I don’t want to give the impression that Build was a downer—it really wasn’t. As the last ever speaker at Build, Frank ended on a note of optimism. Sure, the way we think about the web now is filled with negative connotations: it appears money-grabbing, shallow, and locked down. But that doesn’t mean that the web is inherently like that.

Harking back to Ethan’s fantastic talk at last year’s Build, Frank made the point that our map of the web makes it seem a grim place, but the territory of the web isn’t necessarily a lost cause. What we need is a better map. A map of openness, civility, and—something that’s gone missing from the web’s younger days—a touch of wildness.

I take comfort from that. I take comfort from that because we are the map makers. The worst thing that could happen would be for us to fatalistically accept the negative turn that the web has taken as inevitable, as “just the way things are.” If the web has grown up to be a dickhead, it’s because we shaped it that way, either through our own actions or inactions. But the web hasn’t finished growing. We can still shape it. We can make it less of a dickhead. At the very least, we can acknowledge that things can and should be better.

I’m not sure exactly how we go about making a better map for the web. I have a vague feeling that it involves tapping into the kind of spirit that informs places like CERN—the kind of spirit that motivated the creation of the web itself. I have a feeling that making a better map for the web doesn’t involve forming startups and taking venture capital. Neither do I think that a map for a better web will emerge from working at Google, Facebook, Twitter, or any of the current incumbents.

So where do we start? How do we begin to attempt to make a better web without getting overwhelmed by the enormity of the task?

Perhaps the answer comes from one of the other speakers at this year’s Build. In a beautifully-delivered presentation, Paul Soulellis spoke about resistance:

How do we, as an industry of creative professionals, reconcile the fact that so much of what we make is used to perpetuate the demands of a bloated marketplace? A monoculture?

He spoke about resisting the intangible nature of digital work with “thingness”, and resisting the breakneck speed of the network with slowness. Perhaps we need our own acts of resistance if we want to change the map of the web.

I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.

Like I said, I don’t know. But I really appreciate the way that this year’s Build has pushed me into asking these uncomfortable questions. Like the web, Build has grown up over the years. Unlike the web, Build turned out just fine.

Cool your eyes don’t change

At last November’s Build conference I gave a talk on digital preservation called All Our Yesterdays:

Our communication methods have improved over time, from stone tablets, papyrus, and vellum through to the printing press and the World Wide Web. But while the web has democratised publishing, allowing anyone to share ideas with a global audience, it doesn’t appear to be the best medium for preserving our cultural resources: websites and documents disappear down the digital memory hole every day. This presentation will look at the scale of the problem and propose methods for tackling our collective data loss.

The video is now on vimeo.

The audio has been huffduffed.


I’ve published a transcription over in the “articles” section.

I blogged a list of relevant links shortly after the presentation.

You can also download the slides or view them on speakerdeck but, as usual, they won’t make much sense out of context.

I hope you’ll enjoy watching or reading or listening to the talk as much as I enjoyed presenting it.

Play me off

One of the fun fringe events at Build in Belfast was The Standardistas’ Open Book Exam:

Unlike the typical quiz, the Open Book Exam demands the use of iPhones, iPads, Androids—even Zunes—to avail of the internet’s wealth of knowledge, required to answer many of the formidable questions.

Team Clearleft came joint third. Initially it was joint fourth but an obstreperous Andy Budd challenged the scoring.

Now one of the principles of this unusual pub quiz was that cheating was encouraged. Hence the encouragement to use internet-enabled devices to get to Google and Wikipedia as quickly as the network would allow. In that spirit, Andy suggested a strategy of “running interference.”

So while others on the team were taking information from the web, I created a Wikipedia account to add misinformation to the web.

Again, let me stress, this was entirely Andy’s idea.

The town of Clover, South Carolina ceased being twinned with Larne and became twinned with Belfast instead.

The world’s largest roller coaster became 465 feet tall instead of its previous 456 feet (requiring a corresponding change to a list page).

But the moment I changed the entry for Keyboard Cat to alter its real name from “Fatso” to “Freddy” …BAM! Instant revert.

You can mess with geography. You can mess with measurements. But you do. Not. Mess. With. Keyboard Cat.

For some good clean Wikipedia fun, you can always try wiki racing:

To Wikirace, first select a page off the top of your head. Using “Random page” works well, as well as the featured article of the day. This will be your beginning page. Next choose a destination page. Generally, this destination page is something very unrelated to the beginning page. For example, going from apple to orange would not be challenging, as you would simply start at the apple page, click a wikilink to fruit and then proceed to orange. A race from Jesus Christ to Subway (restaurant) would be more of a challenge, however. For a true test of skill, attempt Roman Colosseum to Orthographic projection.

Then there’s the simple pleasure of getting to Philosophy:

Some Wikipedia readers have observed that clicking on the first link in the main text of a Wikipedia article, and then repeating the process for subsequent articles, usually eventually gets you to the Philosophy article.

Seriously. Try it.

Speaking, not hacking

I spent last week in Belfast for the Build conference, so I did.

The fun kicked off with a workshop on responsive enhancement which was a lot of fun. Toby has written a report of the day outlining all of the elements that came together for a successful workshop.

The day of the conference itself was filled with inspiring, uplifting talks full of positive energy …except for mine. My talk—All Our Yesterdays—had an underlying sense of anger, especially when I spoke about the destruction of Geocities. If you heard the talk and you’d like to explore some of the resources I mentioned, here’s a grab-bag of links:

I thought I had delivered the talk reasonably well only to discover that my American friends in the audience misinterpreted my quote from Tim Berners-Lee as “Cool your eyes don’t change.”

Still, it was wonderfully surreal to be introduced by Jesse Thorn.


My appearance at Build was an eleventh hour affair. Ethan was originally set to speak but he had to cancel. Andy asked me to step in. At first I didn’t think it would be possible. Last Thursday—the day of the conference—was the day I was supposed to fly to San Francisco for Science Hack Day. Luckily I was able to change my flight.

That’s why I was up at the crack of dawn the day after Build to catch an early-morning flight to Heathrow where I would have to dash from the lowest to the highest numbered terminal to get on my transatlantic hackrocket.

So you can imagine how my heart sank as I sat in the departure lounge of Belfast International Airport listening to the announcement of a delay to the first flight. First it was one hour. Then two.

When I did finally make it to Heathrow, there was no chance of making the flight to San Francisco. I was hoping that perhaps it too had been delayed by the foggy weather conditions but no, it took off right on time. Without me.

As my flight from Belfast was a completely separate booking rather than a connecting flight, I couldn’t get on a later flight unless I paid the full fare. So I simply accepted my fate.

C’est la vie, c’est it is.

It looks like Science Hack Day San Francisco—to the surprise of absolutely no-one—was a superb event. There’s a write-up on the open.NASA blog outlining some of the amazing hacks, including the cute (and responsive) Space Ipsum and the freakishly brilliant synesthesia mask: syneseizure.


Building

I never made it to the Build conference in Belfast last year or the year before. I think it clashed with previous commitments every time.

This was going to be the third year in a row that I was going to miss Build. I had already slapped my money down for the excellent Full Frontal conference which is on the very same day as Build but takes place right here in Brighton in the excellent Duke Of York’s cinema.

But fate had other plans for me.

Ethan was going to be speaking at Build but he’s had to pull out for personal reasons …so Andy asked me if I’d like to speak. I may be a poor substitute for Ethan and it’s a shame that I’m going to miss Full Frontal but I jumped at the chance to join the stellar line-up.

As well as speaking at the conference itself on November 10th, I’ll be leading a workshop on responsive design and progressive enhancement on the preceding Tuesday. The conference is sold out but there are places available for the workshop so grab yourself a slot if you fancy spending a day working on a content-first approach to planning and building websites.

If you can’t make it to Belfast, I’ll be giving the same workshop at Beyond Tellerrand in Düsseldorf on Sunday, November 20th and there are still some tickets available.

If you can make it to Belfast, I look forward to seeing you there. I’ll be flying my future friendly flag high, just like I’m doing on the front page of the Build website.

That attire would also be suitable for my post-Build plans. The day after the conference I’ll be travelling to San Francisco for Science Hack Day on the weekend of November 12th. If the last one is anything to go by, it’s going to be an unmissably excellent weekend—I highly recommend that you put your name down if you’re going to be in the neighbourhood.

Looking forward to seeing you in Belfast or Düsseldorf or San Francisco …or wherever.