Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
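To make that concrete, here’s a hypothetical round trip. Say I ask the first chatbot for a regular expression that matches dates in YYYY-MM-DD format, and it hands back a pattern like the one in this sketch (the PHP wrapped around it is mine, purely for illustration):

<?php
// A pattern like one the first chatbot might hand back for
// “match a date in YYYY-MM-DD format”.
$pattern = '/^\d{4}-\d{2}-\d{2}$/';

// A quick sanity check of my own before involving the second chatbot.
if (preg_match($pattern, '2023-01-31')) {
    echo 'Looks like a date to me.';
}

If the second chatbot then explains the pattern as “four digits, a hyphen, two digits, a hyphen, and two digits, matching the whole string”, that lines up with what I asked for, and I’d feel reasonably confident using it.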

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda reminiscent of one of the techniques used in machine learning: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Push

Push notifications are finally arriving on iOS—hallelujah! Like I said last year, this is my number one wish for the iPhone, though not because I personally ever plan to use the feature:

When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.

With push notifications in mobile Safari, the arguments for making proprietary apps get weaker. That’s good.

The announcement post is a bit weird though. It never uses the phrase “progressive web apps”, even though the entire article is clearly about progressive web apps. I don’t know if this is down to Not-Invented-Here syndrome on the part of the Apple/WebKit team, or because of genuine legal concerns around using the phrase.

Instead, there are repeated references to “Home Screen apps”. This distinction makes some sense though. In order to use web push on iOS, your website needs to be added to the home screen.

I think that would be fair enough, if it weren’t for the fact that adding a website to the home screen remains such a hidden feature that even power users would be forgiven for not knowing about it. I described the steps here:

  1. Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
  2. A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
  3. Now you must find “add to home screen” in the list:
  • Copy
  • Add to Reading List
  • Add Bookmark
  • Add to Favourites
  • Find on Page
  • Add to Home Screen
  • Markup
  • Print

As long as this remains the case, we can expect usage of web push on iOS to be vanishingly low. Hardly anyone is going to add a website to their home screen when their web browser makes it so hard.

If you’d like people to install your progressive web app, you’ll almost certainly need to prompt them to do so. Here’s the page I made on thesession.org with instructions on how to add to home screen. I link to it from the home page of the site.

I wish that pages like that weren’t necessary. It’s not the best user experience. But as long as mobile Safari continues to bury the home screen option, we don’t have much choice but to tackle this ourselves.

2022 in numbers

I posted 1057 times on adactio in 2022.

That’s a bit more than in 2021.

November was the busiest month with 137 posts.

February was the quietest with 65 posts.

The yearly total included about 237 notes with photos and 214 replies.

I published one article, the transcript of my talk, In And Out Of Style.

I watched an awful lot of television but managed to read 25 books.

Elsewhere, I huffduffed 130 audio files and added 55 tune settings on The Session in 2022.

I spoke at ten events.

I travelled within Europe and the USA to a total of 18 destinations.

Syndicating to Mastodon

I’ve been contemplating a checkbox. The label for this checkbox reads:

This is a bot account

Let me back up…

In what seems like decades ago, but was in fact just a few weeks, Elon Musk bought Twitter and began burning it to the ground. His admirers insist he’s playing some form of four-dimensional chess, but to the rest of us, his actions are indistinguishable from a spoilt rich kid not understanding what a social network is.

It wasn’t giving me much cause for anguish personally. For the past eight years, I’ve only used Twitter as a syndication endpoint for my own notes. But I understand that’s a very privileged position to be in. Most people on Twitter don’t have the same luxury of independence. It’s genuinely maddening and saddening to see their years of sharing destroyed by one cruel idiot.

Lots of people started moving to Mastodon. I figured I should do the same for my syndicated notes.

At first, I signed up for an account on mastodon.cloud. No particular reason. But that’s where I saw this very insightful post from Anil Dash:

When it came time to reckon with social media’s failings, nobody ran to the “web3” platforms. Nobody asked “can I get paid per message”? Nobody asked about the blockchain. The community of people who’ve been quietly doing this work for years (decades!) ended up being the ones who welcomed everyone over, as always.

I was getting my account all set up and beginning to follow some other folks, when I realised that I actually already had an existing account over on mastodon.social. Doh! Turns out that I signed up back in 2017 to kick the tyres, but never did much else because there weren’t many other people around back then. Oh, how times have changed!

Anyway, I thought I had really screwed up by having two accounts but this turned out to be an opportunity to experience some of the thoughtfulness in Mastodon’s design. The process of migrating from one Mastodon account to another—on a completely different instance—was very smooth! It was clear that this wasn’t an afterthought. This is an essential part of the fediverse and the design of the migration flow reflects that.

This gives me enormous peace of mind. If I ever want to switch to a different instance and still keep my network intact, I know it won’t be a problem. Mastodon is like the opposite of the roach-motel mentality that permeates most VC-backed so-called social networks.

As I played around some more—reading, following, exploring—my feelings of fondness only grew stronger. I like this place a lot!

I definitely wanted to syndicate my notes to Mastodon. At first, I implemented a straightforward RSS-to-Mastodon syndication using IFTTT (IF This, Then That), thanks to Matthias’s excellent tutorial.

But that didn’t feel quite right. When I syndicate to Twitter, I make a conscious choice each time. There’s a “Twitter” toggle that I can enable or disable in my posting interface. Mastodon deserved the same level of thoughtfulness.

So I switched off the IFTTT recipe and started exploring the Mastodon API. It’s going to sound like a humblebrag when I tell you that I got cross-posting working in almost no time at all, but that’s not a testament to my coding prowess (I’m really not very good), but rather a testament to the Mastodon API, which was a joy to work with.

  1. On your Mastodon instance, go to /settings/applications.
  2. Click on New Application.
  3. Fill in the details about your website and select write:statuses (and probably write:media) from the Scopes list.
  4. Copy Your access token to use in API calls.
  5. Write some sloppy code (in my case, PHP that uses cURL)—see the sketch below.
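Here’s a minimal sketch of that last step: PHP using cURL to post a status, authenticated with the access token from step 4. The instance URL and token are placeholders, and error handling is left out:

<?php
// Placeholders—swap in your own instance and access token.
$instance = 'https://mastodon.social';
$token = 'YOUR_ACCESS_TOKEN';

// POST to the statuses endpoint, authenticating with the token.
$ch = curl_init($instance . '/api/v1/statuses');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, ['status' => 'Hello, fediverse!']);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Authorization: Bearer ' . $token]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

Passing an array to CURLOPT_POSTFIELDS makes cURL send the request as multipart form data, which the statuses endpoint happily accepts.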

I did hit a wall when it came to posting images. That took me a while to get working, and I couldn’t figure out why. Was it something at Mastodon’s end while it was struggling under the influx of new users? As it turns out, no. It was entirely down to me being an idiot. (You know that situation where you’re working on a problem for ages and you’ve become convinced it’s an extremely gnarly rocket-science problem, but then it turns out to be something stupid like a typo? Yeah. That.)

Then there’s the whole question of how to receive replies, likes, and reboosts from Mastodon here on my own site. Luckily, that was super easy, thanks to Brid.gy. One click and I was done. I love Brid.gy!

Take this note, for example. There’s a version on Twitter and a version on Mastodon. The original version on my own site gets responses from both places.

If I’m replying to a response on Twitter, I do not syndicate that to Mastodon.

Likewise, if I’m replying to a response on Mastodon, I do not syndicate that to Twitter.

Oh, one thing worth mentioning: if you’re sending a reply to something on Mastodon using the API, there’s an in_reply_to_id field for you to provide. But you should also include the full @username@instance of the person you’re replying to at the beginning of the message to ensure that it’s displayed as a reply rather than showing up as a regular post. Note the difference between this note on my site and its syndicated version on Mastodon.
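In terms of the sketch above, a reply is the same POST to /api/v1/statuses with those two extra details—the mention and the status ID here are hypothetical:

<?php
// Sketch of the form fields for a reply.
$fields = [
    // Lead with the full @-mention so it displays as a reply…
    'status' => '@username@instance Good point!',
    // …and tie it to the status being replied to (hypothetical ID).
    'in_reply_to_id' => '109382007113871269'
];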

Anyway, now I’m posting to Mastodon, but I’m doing it through the interface of my own website. Which brings me to that checkbox in Mastodon’s profile settings:

This is a bot account

The help text reads:

Signal to others that the account mainly performs automated actions and might not be monitored

If I were doing the automatic cross-posting from RSS, I’d definitely tick that box. But as I’m making a conscious decision whenever I syndicate to Mastodon, I think I’m going to leave that checkbox unticked.

My cross-posting is not automated and I’m very much monitoring my Mastodon account …because I’m enjoying my Mastodon experience more than I’ve enjoyed anything online for quite some time. Highly recommended!

Time for transitions

I am simultaneously very excited and very nervous about the View Transitions API.

You may know it by its former name—Shared Element Transitions. The name change is very recent.

I’ve been saying for years that some kind of API like this would be brilliant:

I honestly think if browsers implemented this, 80% of client-rendered Single Page Apps could be done as regular good ol’-fashioned websites.

Miriam Suzanne describes the theory of View Transitions succinctly:

Shared-element transitions are designed to work with standard web navigation across multiple page loads, as well as page transitions in ‘single-page’ apps (often called SPAs).

This all sounds brilliant. But the devil is in the implementation details. Right now, the API only works for single page apps. This is totally understandable. For purely pragmatic reasons, single page apps are a simple use case to solve for. It’s going to take a lot more work to get this API to work for multi-page apps (or as we used to call them, websites).

If we get a View Transitions API that works across page navigations, it could potentially turbo-charge the web. It will act as a disincentive to building single page apps—you’d be able to provide swish transitions without sacrificing performance or resilience at the altar of a heavy-handed JavaScript-only architecture.

But if the API only ever works for single page apps (which is the current situation), then it will act as an incentive to make any kind of website into a single page app, regardless of whether it’s actually the appropriate architecture.

That prospect has me very worried indeed.

I’m making my feelings on this known just in case any of the implementors out there are thinking, “Hey, maybe it’s fine that this API only works for single page apps—I’m sure most people would be happy with that.”

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Update: Jake says:

We’re currently landing code in Chrome for the MPA version.

Very happy to hear that! It’s already in the spec, but it’s good to hear that the implementation isn’t going to lag too much.

Also, read this follow-up.

The audio from dConstruct 2022

dConstruct 2022 was great fun. It was also the last ever dConstruct.

If you were there, and you’d like to re-live the magic, the audio from the talks is now available on the dConstruct Archive.

Thanks to some service worker magic, you can select any of those talks for offline listening later.

The audio is also available on the dConstruct Huffduffer account. Here’s the RSS feed that you can pop into your podcast software of choice.

If you’re more of a visual person, you can watch videos of the slides synced with the audio. They’ve all got captions too (good ones, not just automatically generated).

So have a listen in whichever way you prefer.

Now that I’ve added the audio from the last dConstruct to the dConstruct archive, it feels like the closing scene of Raiders Of The Lost Ark. Roll credits.

UX FOMO

Today is the first day of UX London 2022 …and I’m not there. Stoopid Covid.

I’m still testing positive although I’m almost certainly near the end of my infection. But I don’t want to take any chances. Much as I hate to miss out on UX London, I would hate passing this on even more. So my isolation continues.

Chris jumped in at the last minute to do the hosting duties—thanks, Chris!

From the buzz I’m seeing on Twitter, it sounds like everything is going just great without me, which is great to see. Still, I’m experiencing plenty of FOMO—even more than the usual levels of FOMO I’d have when there’s a great conference happening that I’m not at.

To be honest, nearly all of my work on UX London was completed before the event. My number one task was putting the line-up together, and I have to say, I think I nailed it.

If I were there to host the event, it would mostly be for selfish reasons. I’d get a real kick out of introducing each one of the superb speakers. I’d probably get very tedious, repeatedly saying “Oh, you’re going to love this next one!” followed by “Wasn’t that great‽”

But UX London isn’t about me. It’s about the inspiring talks and practical workshops.

I wish I were there to experience it in person but I can still bask in the glow of a job well done, hearing how much people are enjoying the event.

Backup

I’m standing on a huge stage in a giant hangar-like room already filled with at least a thousand people. More are arriving. I’m due to start speaking in a few minutes. But there’s a problem with my laptop. It connects to the external screen, then disconnects, then connects, then disconnects. The technicians are on the stage with me, quickly swapping out adaptors and cables as they try to figure out a fix.

This is a pretty standard stress dream for me. Except this wasn’t a dream. This was happening for real at the giant We Are Developers World Congress in Berlin last week.

In the run-up to the event, the organisers had sent out emails about providing my slide deck ahead of time so it could go on a shared machine. I understand why this makes life easier for the people running the event, but it can be a red flag for speakers. It’s never quite the same as presenting from your own laptop with its familiar layout of the presentation display in Keynote.

Fortunately the organisers also said that I could present from my own laptop if I wanted to so that’s what I opted for.

One week before the talk in Berlin I was in Amsterdam for CSS Day. During a break between talks I was catching up with Michelle. We ended up swapping conference horror stories around technical issues (prompted by some of our fellow speakers having issues with Keynote on the brand new M1 laptops).

Michelle told me about a situation where she was supposed to be presenting from her own laptop, but because of last-minute technical issues, all the talks were being transferred to a single computer via USB sticks.

“But the fonts!” I said. “Yes”, Michelle responded. Even though she had put the fonts on the USB stick, things got muddled in the rush. If you open the Keynote file before installing the fonts, Keynote will perform font substitution and then it’s too late. This is exactly what happened with Michelle’s code examples, messing them up.

“You know”, I said, “I was thinking about having a back-up version of my talks that’s made entirely out of images—export every slide as an image, then make a new deck by importing all those images.”

“I’ve done that”, said Michelle. “But there isn’t a quick way to do it.”

I was still thinking about our conversation when I was on the Eurostar train back to England. I had plenty of time to kill with spotty internet connectivity. And that huge Berlin event was less than a week away.

I opened up the Keynote file of the Berlin presentation. I selected File, Export to, Images.

Then I created a new blank deck ready for the painstaking work that Michelle had warned me about. I figured I’d have to drag in each image individually. The presentation had 89 slides.

But I thought it was worth trying a shortcut first. I selected all of the images in Finder. Then I dragged them over to the far left column in Keynote, the one that shows the thumbnails of all the slides.

It worked!

Each image was now its own slide. I selected all 89 slides and applied my standard transition: a one second dissolve.

That was pretty much it. I now had a version of my talk that had no fonts whatsoever.

If you’re going to try this, it works best if you don’t have too many transitions within slides. Like, let’s say you’ve got three words that you introduce—by clicking—one by one. You could have one slide with all three words, each one with its own build effect. But the other option is to have three slides: each one like the previous slide but with one more word added. If you use that second technique, then the exporting and importing will work smoothly.

Oh, and if you have lots and lots of notes, you’ll have to manually copy them over. My notes tend to be fairly minimal—a few prompts and the occasional time check (notes that say “5 minutes” or “10 minutes” so I can gauge how my pacing is going).

Back to that stage in Berlin. The clock is ticking. My laptop is misbehaving.

One of the other speakers who will be on later in the day was hoping to test his laptop too. It’s Håkon. His presentation includes in-browser demos that won’t work on a shared machine. But he doesn’t get a chance to test his laptop just yet—my little emergency has taken precedence.

“Luckily”, I tell him, “I’ve got a backup of my presentation that’s just images to avoid any font issues.” He points out the irony: we spent years battling against the practice of text-as-images on the web and now here we are using that technique once again.

My laptop continues to misbehave. It connects, it disconnects, connects, disconnects. We’re going to have to run the presentation from the house machine. I’m handed a USB stick. I put my images-only version of the talk on there. I’m handed a clicker (I can’t use my own clicker with the house machine). I’m quickly ushered backstage while the MC announces my talk, a few minutes behind schedule.

It works. It feels a little strange not being able to look at my own laptop, but the on-stage monitors have the presentation display including my notes. The unfamiliar clicker feels awkward but hopefully nobody notices. I deliver my talk and it seems to go over well.

I think I’ll be making image-only versions of all my talks from now on. Hopefully I won’t ever need them, but just knowing that the backup is there is reassuring.

Mind you, if you’re the kind of person who likes to fiddle with your slides right up until the moment of presenting, then this technique won’t be very useful for you. But for me, not being able to fiddle with my slides after a certain point is a feature, not a bug.

69420

This is going to make me sound like an old man in his rocking chair on the front porch, but let me tell you about the early days of Twitter…

The first time I mentioned Twitter on here was back in November 2006:

I’ve been playing around with Twitter, a neat little service from the people who brought you Odeo. You send it little text updates via SMS, the website, or Jabber.

A few weeks later, I wrote about some of its emergent properties:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

That’s right; back then we didn’t have the verb “tweeting” yet.

In those early days, some of the now-ubiquitous interactions had yet to emerge. Chris hadn’t yet proposed hashtags. And if you wanted to address a message to a specific person—or reply to a tweet of theirs—the @ symbol hadn’t been repurposed for that. There were still few enough people on Twitter that you could just address someone by name and they’d probably see your message.

That’s what I was doing when I posted:

It takes years off you, Simon.

I’m assuming Simon Willison got a haircut or something.

In any case, it’s an innocuous and fairly pointless tweet. And yet, in the intervening years, that tweet has received many replies. Weirdly, most of the replies consisted of one word:

nice

Very puzzling.

Then a little while back, I realised what was happening. This is the URL for my tweet:

twitter.com/adactio/status/69420

69420.

69.

420.

Pesky kids with their stoner sexual-innuendo numerology!

Both plagues on your one house

February is a tough month at the best of times. A February during The Situation is particularly grim.

At least in December you get Christmas, whose vibes can even carry you through most of January. But by the time February rolls around, it’s all grim winteriness with no respite in sight.

In the middle of February, Jessica caught the ’rona. On the bright side, this wasn’t the worst timing: if this had happened in December, our Christmas travel plans to visit family would’ve been ruined. On the not-so-bright side, catching a novel coronavirus is no fun.

Still, the vaccines did their job. Jessica felt pretty crap for a couple of days but was on the road to recovery before too long.

Amazingly, I did not catch the ’rona. We slept in separate rooms, but still, we were spending most of our days together in the same small flat. Given the virulence of The Omicron Variant, I’m counting my blessings.

But just in case I got any ideas about having some kind of superhuman immune system, right after Jessica had COVID-19, I proceeded to get gastroenteritis. I’ll spare you the details, but let me just say it was not pretty.

Amazingly, Jessica did not catch it. I guess two years of practising intense hand-washing pays off when a stomach bug comes a-calling.

So all in all, not a great February, even by February’s already low standards.

The one bright spot that I get to enjoy every February is my birthday, just as the month is finishing up. Last year I spent my birthday—the big five oh—in lockdown. But two years ago, right before the world shut down, I had a lovely birthday weekend in Galway. This year, as The Situation began to unwind and de-escalate, I thought it would be good to reprise that birthday trip.

We went to Galway. We ate wonderful food at Aniar. We listened to some great trad music. We drank some pints. It was good.

But it was hard to enjoy the trip knowing what was happening elsewhere in Europe. I’d blame February for being a bastard again, but in this case the bastard is clearly Vladimir Putin. Fucker.

Just as it’s hard to switch off for a birthday break, it’s equally challenging to go back to work and continue as usual. It feels very strange to be spending the days working on stuff that clearly, in the grand scheme of things, is utterly trivial.

I take some consolation in the fact that everyone else feels this way too, and everyone is united in solidarity with Ukraine. (There are some people in my social media timelines who also feel the need to point out that other countries have been invaded and bombed too. I know it’s not their intention but there’s a strong “all lives matter” vibe to that kind of whataboutism. Hush.)

Anyway. February’s gone. It’s March. Things still feel very grim indeed. But perhaps, just perhaps, there’s a hint of Spring in the air. Winter will not last forever.

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice-versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Hosting online events

Back in 2014 Vitaly asked me if I’d be the host for Smashing Conference in Freiburg. I jumped at the chance. I thought it would be an easy gig. All of the advantages of speaking at a conference without the troublesome need to actually give a talk.

As it turned out, it was quite a bit of work:

It wasn’t just a matter of introducing each speaker—there was also a little chat with each speaker after their talk, so I had to make sure I was paying close attention to each and every talk, thinking of potential questions and conversation points. After two days of that, I was a bit knackered.

Last month, I hosted another event, but this time it was online: UX Fest. Doing the post-talk interviews was definitely a little weirder online. It’s not quite the same as literally sitting down with someone. But the online nature of the event did provide one big advantage…

To minimise technical hitches on the day, and to ensure that the talks were properly captioned, all the speakers recorded their talks ahead of time. That meant I had an opportunity to get a sneak peek at the talks and prepare questions accordingly.

UX Fest had a day of talks every Thursday in June. There were four talks per Thursday. I started prepping on the Monday.

First of all, I just watched all the talks and let them wash over me. At this point, I’d often think “I’m not sure if I can come up with any questions for this one!” but I’d let the talks sit there in my subconscious for a while. This was also a time to let connections between talks bubble up.

Then on the Tuesday and Wednesday, I went through the talks more methodically, pausing the video every time I thought of a possible question. After a few rounds of this, I inevitably ended up with plenty of questions, some better than others. So I then re-ordered them in descending levels of quality. That way if I didn’t get to the questions at the bottom of the list, it was no great loss.

In theory, I might not get to any of my questions. That’s because attendees could also ask questions on the day via a chat window. I prioritised those questions over my own. Because it’s not about me.

On some days there was a good mix of audience questions and my own pre-prepared questions. On other days it was mostly my own questions.

Either way, it was important that I didn’t treat the interview like a laundry list of questions to get through. It was meant to be a conversation. So the answer to one question might touch on something that I had made a note of further down the list, in which case I’d run with that. Or the conversation might go in a really interesting direction completely unrelated to the questions or indeed the talk.

Above all, these segments needed to be engaging and entertaining in a personable way, more like a chat show than a post-game press conference. So even though I had done lots of prep for interviewing each speaker, I didn’t want to show my homework. I wanted each interview to feel like a natural flow.

To quote the old saw, this kind of spontaneity takes years of practice.

There was an added complication when two speakers shared an interview slot for a joint Q&A. Not only did I have to think of questions for each speaker, I also had to think of questions that would work for both speakers. And I had to keep track of how much time each person was speaking so that the chat wasn’t dominated by one person more than the other. This was very much like moderating a panel, something that I enjoy very much.

In the end, all of the prep paid off. The conversations flowed smoothly and I was happy with some of the more thought-provoking questions that I had researched ahead of time. The speakers seemed happy too.

Y’know, there are not many things I’m really good at. I’m a mediocre developer, and an even worse designer. I’m okay at writing. But I’m really good at public speaking. And I think I’m pretty darn good at this hosting lark too.

The moment after eclipse

I’m almost finished reading a collection of short stories by Brian Aldiss. He was such a prolific writer that he produced loads of these collections, readily available from second-hand bookshops, published on cheap pulpy paper.

This collection is called The Moment Of Eclipse. It has some truly weird stories in there, as well as an undisputed classic, Super-Toys Last All Summer Long. I always find it almost unbearably sad.

Only recently, towards the end of the book, did the coincidence of the book’s title strike me: The Moment Of Eclipse.

See, the last time I had the privilege of experiencing a total solar eclipse was on August 21st, 2017. Jessica and I were in Sun Valley, Idaho, right in the path of totality. We found a hill to climb up so we could see the surrounding landscape as the shadow of the moon raced across the Earth.

Checked in at Valley View Trail. Hiked up a hill for the eclipse — with Jessica

When it was over, we climbed down the hill and went online. That’s when I found out. Brian Aldiss had passed away.

aria-live

I wrote a little something recently about using ARIA attributes as selectors in CSS. For me, one of the advantages is that because ARIA attributes are generally added via JavaScript, the corresponding CSS rules won’t kick in if something goes wrong with the JavaScript:

Generally, ARIA attributes—like aria-hidden—are added by JavaScript at runtime (rather than being hard-coded in the HTML).

But there’s one instance where I actually put the ARIA attribute directly in the HTML that gets sent from the server: aria-live.

If you’re not familiar with it, aria-live is extremely useful if you’ve got any dynamic updates on your page—via Ajax, for example. Let’s say you’ve got a bit of your site where filtered results will show up. Slap an aria-live attribute on there with a value of “polite”:

<div aria-live="polite">
...dynamic content gets inserted here
</div>

You could instead provide a value of “assertive”, but you almost certainly don’t want to do that—it can be quite rude.

Anyway, on the face of it, this looks like exactly the kind of ARIA attribute that should be added with JavaScript. After all, if there’s no JavaScript, there’ll be no dynamic updates.

But I picked up a handy lesson from Ire’s excellent post on using aria-live:

Assistive technology will initially scan the document for instances of the aria-live attribute and keep track of elements that include it. This means that, if we want to notify users of a change within an element, we need to include the attribute in the original markup.
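In practice, the pattern looks something like this—a minimal sketch with a hypothetical results container:

<div id="results" aria-live="polite">
...initial content is rendered here by the server
</div>
<script>
// Because aria-live was in the markup from the start, assistive
// technology is already keeping track of this element, so this
// dynamic update will be announced.
document.getElementById('results').textContent = '12 results found';
</script>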

Good to know!

Design systems on the Clearleft podcast

If you’ve already subscribed to the Clearleft podcast, thank you! The first episode is sliding into your podcast player of choice.

This episode is all about …design systems!

I’m pretty happy with how this one turned out, although as it’s the first one, I’m sure I’ll learn how to do this better. I may end up looking back at this first foray with embarrassment. Still, it’s fairly representative of what you can expect from the rest of the season.

This episode is fairly short. Just under eighteen minutes. That doesn’t mean that other episodes will be the same length. Each episode will be as long (or as short) as it needs to be. Form follows function, or in this case, episode length follows content. Other episodes will be longer. Some might be shorter. It all depends on the narrative.

This flies in the face of accepted wisdom when it comes to podcasting. The watchword that’s repeated again and again for aspiring podcasters is consistency. Release on a consistent schedule and have a consistent length for episodes. I kind of want to go against that advice just out of sheer obstinacy. If I end up releasing episodes on a regular schedule, treat it as coincidence rather than consistency.

There’s not much of me in this episode. And there won’t be much of me in most episodes. I’m just there to thread together the smart soundbites coming from other people. In this episode, the talking heads are my colleagues Jon and James, along with my friends and peers Charlotte, Paul, and Amy (although there’s a Clearleft connection with all of them: Charlotte and Paul used to be Clearlefties, and Amy spoke at Patterns Day and Sofa Conf).

I spoke to each of them for about an hour, but like I said, the entire episode is less than eighteen minutes long. The majority of our conversations ended up on the cutting room floor (possibly to be used in future episodes).

Most of my time was spent on editing. It was painstaking, but rewarding. There’s a real pleasure to be had in juxtaposing two snippets of audio, either because they echo one another or because they completely contradict one another. This episode has a few examples of contradictions, and I think those are my favourite moments.

Needless to say, eighteen minutes was not enough time to cover everything about design systems. Quite the opposite. It’s barely an introduction. This is definitely a topic that I’ll be returning to. Maybe there could even be a whole season on design systems. Let me know what you think.

Oh, and you’ll notice that there’s a transcript for the episode. That’s a no-brainer. I’m a big fan of the spoken word, but it really comes alive when it’s combined with searchable, linkable, accessible text.

Anyway, have a listen and if you’re not already subscribed, pop the RSS feed into your podcast player.

Overlay gap

I think a lot about Danielle’s talk at Patterns Day last year.

Around about the six minute mark she starts talking about gaps and overlaps.

Gaps are where hidden complexity lives. If we don’t have a category to cover it, in effect it becomes invisible. But that doesn’t mean it’s not there. Unidentified gaps cause inconsistency and confusion.

Overlaps occur when two separate categories encompass some of the same areas of responsibility. They cause conflict, duplication of effort, and unnecessary friction.

This is the bit I keep thinking about. It’s such an insightful lens to view things through. On just about any project, tensions are almost always due to either gaps (“I thought someone else was doing that”) or overlaps (“Oh, you’re doing that? I thought we were doing that”).

When I was talking to Gerry on his new podcast recently, we were trying to figure out why web performance is in such a woeful state. I mused that there may be a gap. Perhaps designers think it’s a technical problem and developers think it’s a design problem. I guess you could try to bridge this gap by having someone whose job is to focus entirely on performance. But I suspect the better—but harder—solution is to create a shared culture of performance, of the kind Lara wrote about in her book:

Performance is truly everyone’s responsibility. Anyone who affects the user experience of a site has a relationship to how it performs. While it’s possible for you to single-handedly build and maintain an incredibly fast experience, you’d be constantly fighting an uphill battle when other contributors touch the site and make changes, or as the Web continues to evolve.

I suspect there’s a similar ownership gap at play when it comes to the ubiquitous obtrusive overlays that are plastered on so many websites these days.

Kirill Grouchnikov recently published a gallery of screenshots showcasing the beauty of modern mobile websites:

There are two things common between the websites in these screenshots that I took yesterday.

  1. They are beautifully designed, with great typography, clear branding, all optimized for readability.
  2. I had to install Firefox, Adblock Plus and uBlock Origin, as well as manually select and remove additional elements such as subscription overlays.

The web can be beautiful. Except it’s not right now.

How is this dissonance possible? How can designers and developers who clearly care about the user experience be responsible for unleashing such user-hostile interfaces?

PM/Legal/Marketing made me do it

I get that. But surely the solution can’t be to shrug our shoulders, pass the buck, and say “not my job.” Somebody designed each one of those obtrusive overlays. Somebody coded up each one and pushed them into production.

It’s clear that this is a problem of communication and understanding, rather than a technical problem. As always. We like to talk about how hard and complex our technical work is, but frankly, it’s a lot easier to get a computer to do what you want than to convince a human. Not least because you also need to understand what that other human wants. As Danielle says:

Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.

Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.

So let’s say it is someone in the marketing department who is pushing to have an obtrusive newsletter sign-up form get shoved in the user’s face. Talk to them. Figure out what their goals are—what outcome they’re hoping to get. If they don’t seem to understand the user-experience implications, talk to them about that. But it needs to be a two-way conversation. You need to understand what they need before you start telling them what you want.

I realise that makes it sound patronisingly simple, and I know that in actuality it’s a Sisyphean task. It may be that genuine understanding between people is the wickedest of design problems. But even if this problem seems insurmountable, at least you’d be tackling the right problem.

Because the web can’t survive like this.

Near miss

When I was travelling across the Atlantic ocean on the Queen Mary 2 back in August, I had the pleasure of attending a series of on-board lectures by Charles Barclay from the Royal Astronomical Society.

One of those presentations was on the threat of asteroid impacts—always a fun topic! Charles mentioned Spaceguard, the group that tracks near-Earth objects.

Spaceguard is a pretty cool-sounding name for any organisation. The name comes from a work of (science) fiction. In Arthur C. Clarke’s 1973 book Rendezvous with Rama, Spaceguard is the name of a fictional organisation formed after a devastating asteroid impact on northern Italy—an event which is coincidentally depicted as happening on September 11th. That’s not a spoiler, by the way. The impact happens on the first page of the book.

At 0946 GMT on the morning of September 11 in the exceptionally beautiful summer of the year 2077, most of the inhabitants of Europe saw a dazzling fireball appear in the eastern sky.  Within seconds it was brighter than the Sun, and as it moved across the heavens—at first in utter silence—it left behind it a churning column of dust and smoke.

Somewhere above Austria it began to disintegrate, producing a series of concussions so violent that more than a million people had their hearing permanently damaged.  They were the lucky ones.

Moving at fifty kilometers a second, a thousand tons of rock and metal impacted on the plains of northern Italy, destroying in a few flaming moments the labor of centuries.

Later in the same lecture, Charles talked about the Torino scale, which is used to classify the likelihood and severity of impacts. Number 10 on the Torino scale means an impact is certain and that it will be an extinction level event.

Torino—Turin—is in northern Italy. “Wait a minute!”, I thought to myself. “Is this something that’s also named for that opening chapter of Rendezvous with Rama?”

I spoke to Charles about it afterwards, hoping that he might know. But he said, “Oh, I just assumed that a group of scientists got together in Turin when they came up with the scale.”

Being at sea, there was no way to easily verify or disprove the origin story of the Torino scale. Looking something up on the internet would have been prohibitively slow and expensive. So I had to wait until we docked in New York.

On our first morning in the city, Jessica and I popped into a bookstore. I picked up a copy of Rendezvous with Rama and re-read the details of that opening impact on northern Italy. Padua, Venice and Verona are named, but there’s no mention of Turin.

Sure enough, when I checked Wikipedia, the history and naming of the Torino scale was exactly what Charles Barclay surmised:

A revised version of the “Hazard Index” was presented at a June 1999 international conference on NEOs held in Torino (Turin), Italy. The conference participants voted to adopt the revised version, where the bestowed name “Torino Scale” recognizes the spirit of international cooperation displayed at that conference toward research efforts to understand the hazards posed by NEOs.

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted the talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

Discrete replies

Earlier this year, at Indie Web Camp Düsseldorf, I got replies working on my own site. That is to say, I can host a reply on my site to something on another site.

The classic example is Twitter. In fact, if you look at all my replies, most of them are responding to tweets (I also syndicate these replies to Twitter so they show up there just like regular tweet replies).

I’m really, really glad I got replies working. I’ve been using this functionality quite a bit, and it feels really good to own my content this way.

At the time, I wrote:

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

I decided not to include them on my home page feed after all. You’ll still see them if you go to the notes section of my site, but I decided that they were overwhelming my home page a bit. They also don’t show up in my RSS feed.

I’m really happy that I’m hosting my replies, and that I’ve got URLs for all of them, but I don’t think I want to give them the same priority as blog posts, links, and regular notes.

Replies

Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!

Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.

I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.

From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:

<details>
    <summary>
        <label for="replyto">Reply to</label>
    </summary>
    <input type="url" id="replyto" name="replyto">
</details>

I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.

Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.

I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.

I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:

@AndyBudd: Who are your current “Design Heroes”?

adactio.com: I would say Falcor from Neverending Story, the big flying dog.