My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.
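
That pattern looks something like this (the selector here is hypothetical):

document.addEventListener('click', (event) => {
  // Works even for buttons added to the page after this listener was attached
  const button = event.target.closest('button.copy');
  if (button) {
    // ...do something with the button
  }
});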

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.
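
As a sketch (button-confirm is one of my custom element names from later in this post, but the implementation here is illustrative rather than the actual code from The Session):

<button-confirm>
  <button>Delete</button>
</button-confirm>

<script>
customElements.define('button-confirm', class extends HTMLElement {
  connectedCallback() {
    // Fires whether the element arrived in the initial page load
    // or in a later Ajax update
    this.querySelector('button').addEventListener('click', (event) => {
      if (!confirm('Are you sure?')) {
        event.preventDefault();
      }
    });
  }
});
</script>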

So my client-side scripting style has updated over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.
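
For example, a hypothetical input-counter element (not one of mine, just an illustration of the convention):

<input-counter data-maximum="140">
  <textarea></textarea>
</input-counter>

<script>
customElements.define('input-counter', class extends HTMLElement {
  connectedCallback() {
    // data-maximum is automatically reflected in the dataset property
    const maximum = parseInt(this.dataset.maximum, 10);
    console.log(maximum); // 140
  }
});
</script>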

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.
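
The markup is just one custom element nested inside the other:

<button-confirm>
  <button-clipboard>
    <button>Copy to clipboard</button>
  </button-clipboard>
</button-confirm>

Each wrapper handles its own single behaviour, oblivious to the other.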

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.
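
In code, the arrangement might look something like this (the element names are made up; the newLocations event is the one described above):

customElements.define('location-list', class extends HTMLElement {
  connectedCallback() {
    // bubbles: true lets the event travel all the way up to the document
    this.dispatchEvent(new CustomEvent('newLocations', {
      bubbles: true
    }));
  }
});

customElements.define('location-map', class extends HTMLElement {
  connectedCallback() {
    document.addEventListener('newLocations', () => {
      // Update the map here
    });
  }
});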

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Pickin’ dates on iOS

This is a little follow-up to my post about web components for date inputs.

If you try the demo on iOS it doesn’t work. There’s nothing stopping you selecting any date.

That’s nothing to do with the web components. It turns out that Safari on iOS doesn’t support min and max on date inputs. This is also true of any other browser on iOS because they’re all just Safari in a trenchcoat …for now.
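
That is, Safari on iOS ignores attributes like these:

<input type="date" min="2024-06-01" max="2024-06-30">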

I was surprised — input type="date" has been around for a long time now. I mean, it’s not the end of the world. You’d have to do validation on inputted dates on the server anyway, but it sure would be nice for the user experience of filling in forms.

Alas, it doesn’t look like this is something on the interop radar.

What really surprised me was looking at Can I Use. That shows Safari on iOS as fully supporting date inputs.

Maybe it’s just semantic nitpickery on my part but I would consider that the lack of support for the min and max attributes means that date inputs are partially supported.

Can I Use gets its data from here. I guess I need to study the governance rules and try to figure out how to submit a pull request to update the currently incorrect information.

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Brandolini’s blockchain

I’ve already written about how much I enjoyed hosting Leading Design San Francisco last week.

All the speakers were terrific. Lola’s talk was particularly …um, interesting:

In this talk, Lola will share her adventures in the world of blockchain, the hostility she experienced in her first go-round in 2018, and why she’s chosen to head back to a technology that is going through its largest reputational and social crisis to date.

Wait …I was supposed to stand on stage and introduce a talk that was (at least partly) about blockchain? I have opinions.

As it turned out, Lola warned me that I’d be making an appearance in her talk. She was going to quote that blog post. Before the talk, I asked her how obnoxious I could be about blockchain in her intro. She told me to bring it.

So in the introduction, I deployed all the sarcasm I had in me and said:

Listen, we designers have a tendency to be over-critical of things sometimes. There are all these ideas that we dismiss: phrenology, homeopathy, flat-earthism …blockchain. Haters gonna hate.

I remember somebody asking online a while back, “Why the hate for web3?” And someone I know responded by saying “We hate it because we understand it.” I think there’s a lot of truth to that.

But look, just because blockchains are powering crypto ponzi schemes and N F fucking Ts, it’s worth remembering that it’s also simply a technology. It’s a technological solution in search of a problem.

To be fair, it’s still early days. After all, it’s only been over a decade now.

It’s like the law of the instrument says: when all you have is a hammer, everything looks like a nail. Blockchain is like that. Except the hammer is also made of glass.

Anyway, Lola is going to defend the indefensible and talk about blockchain. One thing to keep in mind is this: remember when everyone was talking about “The Cloud”? And then it turned out that you could substitute the phrase “someone else’s server” for “The Cloud?” Well, every time you hear Lola say the word “blockchain”, I’d like you to mentally substitute the phrase “multiple copies of a spreadsheet.”

Please give an open mind and a warm welcome to Lola Oyelayo Pearson!

I got some laughs. I also got lots of gasps and pearl-clutching, as though I were saying something taboo. Welcome to San Francisco.

Lola gave as good as she got. I got a roasting in her talk.

And just to clarify, Lola and I are friends—this was a consensual smackdown.

There was a very serious point to Lola’s talk. Cryptobollocks and other blockchain-powered schemes have historically been very bro-y, and exploitative of non-bro communities. Lola wants to fight that trend.

I get it. But it reminds me a bit of the justifications you hear from people who go to work at Facebook claiming that they can do more good from the inside. Whatever helps you sleep at night.

The crux of Lola’s belief is this: blockchain technology is inevitable, therefore it is incumbent on us as ethical designers to ensure that the technology is deployed in a way that empowers people instead of exploiting them.

But I take issue with the premise. Blockchain technology is not inevitable. That’s the worst kind of technological determinism. It’s defeatist. It’s a depressing view of “progress” driven not by people, but by technological forces beyond our control.

I refuse to accept that anti-humanist deterministic view.

In any case, for technological determinism to have any validity, there needs to be something to it. At least virtual reality and machine learning are based on some actual technologies. In the case of cryptobollocks, there is no there there. There is nothing except the hype, which is why you’ll see blockchain enthusiasts trying to ride the coattails of trending technologies in a logical fallacy that goes something like this:

  1. There are technologies that will be really big in the future,
  2. blockchain is a technology, therefore
  3. blockchain will be really big in the future.

Blockchain is bullshit. It isn’t even very clever bullshit. And it certainly isn’t inevitable.

You can call me AI

I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.

It bothers me doubly when everyone is talking about AI.

First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.

Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.

In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.

I think that abbreviation might work better for the kind of things currently described as using AI.

We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:

Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.

That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:

Midjourney lets me quickly be wrong in an interesting direction.

To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.

So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable, and the naming of the URLs matches what you’d expect to find: tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no different than server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left pad numbers or something.

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Work ethics

If you’re travelling around Ireland, you may come across some odd pieces of 19th century architecture—walls, bridges, buildings and roads that serve no purpose. They date back to The Great Hunger of the 1840s. These “famine follies” were the result of a public works scheme.

The thinking went something like this: people are starving so we should feed them but we can’t just give people food for nothing so let’s make people do pointless work in exchange for feeding them (kind of like an early iteration of proof of work for cryptobollocks on blockchains …except with a blockchain, you don’t even get a wall or a road, just ridiculous amounts of wasted energy).

This kind of thinking seems reprehensible from today’s perspective. But I still see its echo in the work ethic espoused by otherwise smart people.

Here’s the thing: there’s good work and there’s working hard. What matters is doing good work. Often, to do good work you need to work hard. And so people naturally conflate the two, thinking that what matters is working hard. But whether you work hard or not isn’t actually what’s important. What’s important is that you do good work.

If you can do good work without working hard, that’s not a bad thing. In fact, it’s great—you’ve managed to do good work and do it efficiently! But often this very efficiency is treated as laziness.

Sensible managers are rightly appalled by so-called productivity tracking because it measures exactly the wrong thing. Those instruments of workplace surveillance measure inputs, not outputs (and even measuring outputs is misguided when what really matters are outcomes).

They can attempt to measure how hard someone is working, but they don’t even attempt to measure whether someone is producing good work. If anything, they actively discourage good work; there’s plenty of evidence to show that more hours equates to less quality.

I used to think that there must be some validity to the belief that hard work has intrinsic value. It was a position that was espoused so often by those around me that it seemed a truism.

But after a few decades of experience, I see no evidence for hard work as an intrinsically valuable activity, much less a useful measurement. If anything, I’ve seen the real harm that can be caused by tying your self-worth to how much you’re working. That way lies burnout.

We no longer make people build famine walls or famine roads. But I wonder how many of us are constructing little monuments in our inboxes and calendars, filling those spaces with work to be done in an attempt to chase the rewards we’ve been told will result from hard graft.

I’d rather spend my time pursuing the opposite: the least work for the most people.

Control

In two of my recent talks—In And Out Of Style and Design Principles For The Web—I finish by looking at three different components:

  1. a button,
  2. a dropdown, and
  3. a datepicker.

In each case you could use native HTML elements:

  1. button,
  2. select, and
  3. input type="date".

Or you could use divs with a whole bunch of JavaScript and ARIA.

In the case of a datepicker, I totally understand why you’d go for writing your own JavaScript and ARIA. The native HTML element is quite restricted, especially when it comes to styling.

In the case of a dropdown, it’s less clear-cut. Personally, I’d use a select element. While it’s currently impossible to style the open state of a select element, you can style the closed state with relative ease. That’s good enough for me.

Still, I can understand why that wouldn’t be good enough for some cases. If pixel-perfect consistency across platforms is a priority, then you’re going to have to break out the JavaScript and ARIA.

Personally, I think chasing pixel-perfect consistency across platforms isn’t even desirable, but I get it. I too would like to have more control over styling select elements. That’s one of the reasons why the work being done by the Open UI group is so important.

But there’s one more component: a button.

Again, you could use the native button element, or you could use a div or a span and add your own JavaScript and ARIA.

Now, in this case, I must admit that I just don’t get it. Why wouldn’t you just use the native button element? It has no styling issues and the browser gives you all the interactivity and accessibility out of the box.
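
For comparison, here’s a rough sketch of what it takes to get a div back to something approaching parity (and this still doesn’t cover everything a real button does, like participating in forms):

<!-- The native element: focusable, keyboard-operable, announced as a button -->
<button type="button">Save</button>

<!-- The DIY version needs ARIA and JavaScript just to catch up -->
<div role="button" tabindex="0" id="save">Save</div>
<script>
const save = document.getElementById('save');
save.addEventListener('click', doSave);
save.addEventListener('keydown', (event) => {
  // Native buttons respond to both Enter and Space
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    doSave();
  }
});
function doSave() {
  // Whatever the button is supposed to do
}
</script>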

I’ve been trying to understand the mindset of a developer who wouldn’t use a native button element. The easy answer would be to say that they’re just bad people, and dismiss them. But that would probably be lazy and inaccurate. Nobody sets out to make a website with poor performance or poor accessibility. And yet, by choosing not to use the native HTML element, that’s what’s likely to happen.

I think I might have finally figured out what might be going on in the mind of such a developer. I think the issue is one of control.

When I hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, I think “Great! That’s less work for me. I can just let the browser deal with it.” In other words, I relinquish control to the browser (though not entirely—I still want the styling to be under my control as much as possible).

But I now understand that someone else might hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, and think “Uh-oh! What if there are unexpected side-effects of these built-in behaviours that might bite me on the ass?” In other words, they don’t trust the browsers enough to relinquish control.

I get it. I don’t agree. But I get it.

If your background is in computer science, then the ability to precisely predict how a programme will behave is a virtue. Any potential side-effects that aren’t within your control are undesirable. The only way to ensure that an interface will behave exactly as you want is to write it entirely from scratch, even if that means using more JavaScript and ARIA than is necessary.

But I don’t think it’s a great mindset for the web. The web is filled with uncertainties—browsers, devices, networks. You can’t possibly account for all of the possible variations. On the web, you have to relinquish some control.

Still, I’m glad that I now have a bit more insight into why someone would choose to attempt to retain control by using div, JavaScript and ARIA. It’s not what I would do, but I think I understand the motivation a bit better now.

More writing on web.dev

Last month I wrote about writing on web.dev. At that time, the first five parts of a fourteen-part course on responsive design had been published. I’m pleased to say that the next five parts are now available. They are:

  1. Typography
  2. Responsive images
  3. The picture element
  4. Icons
  5. Theming

It wasn’t planned, but these five modules feel like they belong together. The first five modules were concerned with layout tools—media queries, flexbox, grid, and even container queries. The latest five modules are about the individual elements of design—type, colour, and images. But those elements are examined through the lens of responsiveness; responsive typography with clamp, responsive colour with prefers-color-scheme, and responsive images with picture and srcset.
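
For example, fluid type and a responsive colour scheme might look like this (the values are mine, not necessarily the course’s):

html {
  /* Never smaller than 1rem, never larger than 1.5rem */
  font-size: clamp(1rem, 0.75rem + 1vw, 1.5rem);
}

@media (prefers-color-scheme: dark) {
  body {
    background-color: #111;
    color: #eee;
  }
}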

The final five modules should be available later this month. In the mean time, I hope you like the first ten modules.

Safari 15

If you download Safari Technology Preview you can test drive features that are on their way in Safari 15. One of those features, announced at Apple’s World Wide Developer Conference, is coloured browser chrome via support for the meta value of “theme-color.” Chrome on Android has supported this for a while but I believe Safari is the first desktop browser to add support. They’ve also added support for the media attribute on that meta element to handle “prefers-color-scheme.”
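
The markup looks like this, with whatever colours suit your site:

<meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#1a1a2e" media="(prefers-color-scheme: dark)">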

This is all very welcome, although it does remind me a bit of when Internet Explorer came out with the ability to make coloured scrollbars. I mean, they’re nice features’n’all, but maybe not the most pressing? Safari is still refusing to acknowledge progressive web apps.

That’s not quite true. In her WWDC video Jen demonstrates how you can add a progressive web app like Resilient Web Design to your home screen. I’m chuffed that my little web book made an appearance, but when you see how you add a site to your home screen in iOS, it’s somewhat depressing.

The steps to add a website to your home screen are:

  1. Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
  2. A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
  3. Now you must find “add to home screen” in the list:
  • Copy
  • Add to Reading List
  • Add Bookmark
  • Add to Favourites
  • Find on Page
  • Add to Home Screen
  • Markup
  • Print

It reminds me of this exchange in The Hitchhiker’s Guide To The Galaxy:

“You hadn’t exactly gone out of your way to call attention to them had you? I mean like actually telling anyone or anything.”

“But the plans were on display…”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a torch.”

“Ah, well the lights had probably gone.”

“So had the stairs.”

“But look you found the notice didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of The Leopard.’”

Safari’s current “support” for adding progressive web apps to the home screen feels like the minimum possible …just enough to use it as a legal argument if you happen to be litigated against for having a monopoly on app distribution. “Hey, you can always make a web app!” It’s true in theory. In practice it’s …suboptimal, to put it mildly.

Still, those coloured tab bars are very nice.

It’s a little bit weird that this stylistic information is handled by HTML rather than CSS. It’s similar to the meta viewport value in that sense. I always thought that the plan was to migrate that to CSS at some point, but here we are a decade later and it’s still very much part of our boilerplate markup.

Some people have remarked that the coloured browser chrome can make the URL bar look like part of the site so people might expect it to operate like a site-specific search.

I also wonder if it might blur “the line of death”: that point in the UI where the browser chrome ends and the website begins. Does the unified colour make it easier to spoof browser UI?

Probably not. You can already kind of spoof browser UI by using the right shade of grey. Although the removal of any kind of actual line in Safari does give me pause for thought.

I tend not to think of security implications like this by default. My first thought tends to be more about how I can use the feature. It’s only after a while that I think about how bad actors might abuse the same feature. I should probably try to narrow the gap between those thoughts.

ReCoil

On the Coil developers site there’s a page proudly answering the question who is web monetized?

You’ll see some familiar sites in there: CSS Tricks, A List Apart, and even this humble website, adactio.com.

But lest you think that this social proof is in any way an endorsement, I should probably clarify what my experience with Coil has been like.

Coil itself is grand. You get an identifier and you add it to your website in a meta element, much like you would do with indie web endpoints for webmentions or micropub.
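
It looks something like this (the payment pointer here is made up):

<meta name="monetization" content="$wallet.example.com/adactio">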

The problem is with how you then actually get hold of any money that is owed to you from micropayments. Coil doesn’t handle this directly. You have to set up a “wallet” with a third-party service and therein lies the problem.

They are all terrible.

I’m not talking about the hoops you have to jump through to set up an account. I get it. This is scary financial stuff so of course I’ll need to scan my passport and hand over loads of information (more than is needed to open an actual bank account with, say, Monzo).

No, the problem is the stench of crypto.

I tried Stronghold for a while. They really, really don’t want you to use boring old-fashioned currencies like the euro or the pound. There’s also Gatehub. Same. And there’s Uphold. Also a shell game.

I’ve been using Coil and Uphold for a while now, and I’ve amassed a grand total of £6.06 — woo-hoo! So I log into my account and attempt to transfer that sweet, sweet monetisation and …I can’t.

The amount needs to be greater than or equal to £11.53 GBP

But I can still exchange that £6.06 for magic beans like Bitcoin, XRP, and Ether.

The whole thing smells of grift and it feels icky to be in any way associated with it. I understand why Coil needs to partner with existing payment providers, but it would be nice if just one of them weren’t propping up ponzi schemes. If anyone has found a way to get web monetisation to work without feeling like you need to take a shower afterwards, I’d love to hear about it.

Deceptive dark patterns

When I was braindumping my thoughts prompted by last week’s UX Fest conference, I wrote about dark patterns.

Well, actually I wrote about deceptive dark patterns. That was a deliberate choice.

The phrase “dark pattern” is …problematic. We really don’t need to be associating darkness with negativity any more than we already do in our language and culture.

This is something I discussed with Melissa Smith after her talk on this topic. The consensus in general seems to be that the terminology is far from ideal, but it’s a bit late to change it now (I’m sure if Harry were coining the term today, he would choose a different phrase).

The defining characteristic of a “dark” pattern is that it’s intentionally deceptive. How about we shift the terminology to talk about deceptive patterns?

Now, I get that inertia is a powerful force and it would be confusing to try to do a find-and-replace on all the resources that already exist documenting “dark” patterns. So here’s a compromise:

From here on out, let’s start using the adjective “deceptive” in addition to the existing adjective “dark.” That’s what I did in my blog post. I only used the phrase “deceptive dark patterns.”

If we do that consistently, then after a while we’ll be able to drop one of those adjectives—“dark”—and refer to “deceptive patterns.”

Personally I’d love it if we could change the terminology overnight—and I’m quite heartened by the speed at which we changed our GitHub branches from “master” to “main”—but being pragmatic, I think this approach stands a greater chance of success.

Who’s with me?

Of the web

I’m subscribed to a lot of blogs in my RSS reader. I follow some people because what they write about is very different to what I know about. But I also follow lots of people who have similar interests and ideas to me. So I’m not exactly in an echo chamber, but I do have the reverb turned up pretty high.

Sometimes these people post thoughts that are eerily similar to what I’ve been thinking about. Ethan has been known to do this. Get out of my head, Marcotte!

But even if Ethan wasn’t some sort of telepath, he’d still be in my RSS reader. We’re friends. Lots of the people in my RSS reader are my friends. When I read their words, I can hear their voices.

Then there are the people I’ve never met. Like Desirée García, Piper Haywood, or Jim Nielsen. Never met them, don’t know them, but damn, do I enjoy reading their blogs. Last year alone, I ended up linking to Jim’s posts ten different times.

Or Baldur Bjarnason. I can’t remember when I first came across his writing, but it really, really resonates with me. I probably owe him royalties for the amount of times I’ve cited his post Over-engineering is under-engineering.

His latest post is positively Marcottian in how it exposes what’s been fermenting in my own mind. But because he writes clearly, it really helps clarify my own thinking. It’s often been said that you should write to figure out what you think, and I can absolutely relate to that. But here’s a case where somebody else’s writing really helps to solidify my own thoughts.

Which type of novelty-seeking web developer are you?

It starts with some existentialist stock-taking. I can relate, what with the whole five decades thing. But then it turns the existential questioning to the World Wide Web itself, or rather, the people building the web.

In a way, it’s like taking the question of the great divide (front of the front end and back of the front end), and then turning it 45 degrees to reveal an entirely hidden dimension.

In examining the nature of the web, he hits on the litmus test of how you view encapsulation:

I mention this first as it’s the aspect of the web that modern web developers hate the most without even giving it a label. Single-Page-Apps and GraphQL are both efforts to eradicate the encapsulation that’s baked into the foundation of every layer of the web.

Most modern devs are trying to get rid of it but it’s one of the web’s most strategic advantages.

I hadn’t thought of this before.

By default, if you don’t go against the grain of the web, each HTTP endpoint is encapsulated from each other.

Moreover, all of this can happen really fast if you aren’t going overboard with your CSS and JS.

He finishes with a look at another of the web’s most powerful features: distribution. In between are the things that make the web webby: hypertext and flexibility (The Dao of the Web).

It’s the idea that the web isn’t a single fixed thing but a fluid multitude whose shape is dictated by its surroundings.

This resonates with me because it highlights two different ways of viewing the web.

On the one hand, you can see the web purely as a distribution channel. In the past you might have been distributing a Flash movie. These days you might be distributing a single page app. Either way, the web is there as a low-friction way of getting your creation in front of other people.

The other way of building for the web is to go with the web’s grain, embracing flexibility and playing to the strengths of the medium through progressive enhancement. This is the distinction I was getting at when I talked about something being not just on the web, but of the web.

With that mindset, Baldur then takes us through some of the technologies that he’s excited about, like SvelteKit and Hotwire. I think it’s the same mindset that got me excited about service workers. As Baldur says:

They are helping the web become better at being its own thing.

That’s my tagline right there.

Cascading Style Sheets

There are three ways—that I know of—to associate styles with markup.

External CSS

This is probably the most common. Using a link element with a rel value of “stylesheet”, you point to a URL using the href attribute. That URL is a style sheet that is applied to the current document (the relationship of the linked resource is that it’s a “stylesheet” for the current document).

<link rel="stylesheet" href="/path/to/styles.css">

In theory you could associate a style sheet with a document using an HTTP header, but I don’t think many browsers support this in practice.
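
For the record, such a header would look something like this:

Link: </path/to/styles.css>; rel="stylesheet"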

You can also pull in external style sheets using the @import declaration in CSS itself, as long as the @import rule is declared at the start, before any other styles.

@import url('/path/to/more-styles.css');

When you use link rel="stylesheet" to apply styles, it’s a blocking request: the browser will fetch the style sheet before rendering the HTML. It needs to know how the HTML elements will be painted to the screen so there’s no point rendering the HTML until the CSS is parsed.

Embedded CSS

You can also place CSS rules inside a style element directly in the document. This is usually in the head of the document.

<style>
element {
    property: value;
}
</style>

When you embed CSS in the head of a document like this, there is no network request like there would be with external style sheets so there’s no render-blocking behaviour.

You can put any CSS inside the style element, which means that you could use embedded CSS to load external CSS using an @import statement (as long as that @import statement appears right at the start).

<style>
@import url('/path/to/more-styles.css');
element {
    property: value;
}
</style>

But then you’re back to having a network request.

Inline CSS

Using the style attribute you can apply CSS rules directly to an element. This is a universal attribute. It can be used on any HTML element. That doesn’t necessarily mean that the styles will work, but your markup is never invalidated by the presence of the style attribute.

<element style="property: value">
</element>

Whereas external CSS and embedded CSS don’t have any effect on specificity, inline styles are positively radioactive with specificity. Any styles applied this way are almost certain to over-ride any external or embedded styles.

You can also apply styles using JavaScript and the Document Object Model.

element.style.property = 'value';

Using the DOM style object this way is equivalent to inline styles. The radioactive specificity applies here too.

Style declarations specified in external style sheets or embedded CSS follow the rules of the cascade. Values can be over-ridden depending on the order they appear in. Combined with the separate-but-related rules for specificity, this can be very powerful. But if you don’t understand how the cascade and specificity work then the results can be unexpected, leading to frustration. In that situation, inline styles look very appealing—there’s no cascade and everything has equal specificity. But using inline styles means foregoing a lot of power—you’d be ditching the C in CSS.

A common technique for web performance is to favour embedded CSS over external CSS in order to avoid the extra network request (at least for the first visit—there are clever techniques for caching an external style sheet once the HTML has already loaded). This is commonly referred to as inlining your CSS. But really it should be called embedding your CSS.

This language mix-up is not a hill I’m going to die on (that hill would be referring to blog posts as blogs) but I thought it was worth pointing out.

Unobtrusive feedback

Ten years ago I gave a talk at An Event Apart all about interaction design. It was called Paranormal Interactivity. You can watch the video, listen to the audio or read the transcript if you like.

I think it holds up pretty well. There’s one interaction pattern in particular that I think has stood the test of time. In the talk, I introduce this pattern as something you can see in action on Huffduffer:

I was thinking about how to tell the user that something’s happened without distracting them from their task, and I thought beyond the web. I thought about places that provide feedback mechanisms on screens, and I thought of video games.

So we all know Super Mario, right? And if you think about when you’re collecting coins in Super Mario, it doesn’t stop the game and pop up an alert dialogue and say, “You have just collected ten points, OK, Cancel”, right? It just does it. It does it in the background, but it does provide you with a feedback mechanism.

The feedback you get in Super Mario is about the number of points you’ve just gained. When you collect an item that gives you more points, the number of points you’ve gained appears where the item was …and then drifts upwards as it disappears. It’s unobtrusive enough that it won’t distract you from the gameplay you’re concentrating on but it gives you the reassurance that, yes, you have just gained points.

I think this a neat little feedback mechanism that we can borrow for subtle Ajax interactions on the web. These are actions that don’t change much of the content. The user needs to be able to potentially do lots of these actions on a single page without waiting for feedback every time.

On Huffduffer, for example, you might be looking at a listing of people that you can choose to follow or unfollow. The mechanism for doing that is a button per person. You might potentially be clicking lots of those buttons in quick succession. You want to know that each action has taken effect but you don’t want to be interrupted from your following/unfollowing spree.

You get some feedback in any case: the button changes. Maybe the text updates from “follow” to “unfollow” accompanied by a change in colour (this is what you’ll see on Twitter). The Super Mario style feedback is in addition to that, rather than instead of.

I’ve made a Codepen so you can see a reduced test case of the Super Mario feedback in action.

See the Pen Unobtrusive feedback by Jeremy Keith (@adactio) on CodePen.

Here’s the code available as a gist.

It’s a function that takes two arguments: the element that the feedback originates from (pass in a DOM node reference for this), and the contents of the feedback (this can be a string of text or it can be HTML …or SVG). When you call the function with those two arguments, this is what happens:

  1. The JavaScript generates a span element and puts the feedback contents inside it.
  2. Then it positions that element right over the element that the feedback originates from.
  3. Then there’s a CSS transform. The feedback gets a translateY applied so it drifts upward. At the same time it gets its opacity reduced from 1 to 0 so it’s fading away.
  4. Finally there’s a transitionend event that fires when the animation is over. Once that event fires, the generated span is destroyed.
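
If you don’t want to dig through the Codepen or the gist, here’s a rough reconstruction of those steps (not the actual code, and the timing values are arbitrary):

function feedback(origin, contents) {
  // 1. Generate a span and fill it with the feedback contents
  const span = document.createElement('span');
  span.innerHTML = contents;
  // 2. Position it right over the originating element
  const rect = origin.getBoundingClientRect();
  span.style.position = 'absolute';
  span.style.left = (rect.left + window.scrollX) + 'px';
  span.style.top = (rect.top + window.scrollY) + 'px';
  span.style.opacity = '1';
  document.body.appendChild(span);
  // 4. Destroy the generated span once the animation is over
  span.addEventListener('transitionend', () => span.remove());
  // Force a style flush so the following changes actually transition
  span.getBoundingClientRect();
  // 3. Drift upward and fade away
  span.style.transition = 'transform 1s ease-out, opacity 1s ease-out';
  span.style.transform = 'translateY(-3em)';
  span.style.opacity = '0';
}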

When I first used this pattern on Huffduffer, I’m pretty sure I was using jQuery. A few years later I rewrote it in vanilla JavaScript. That was four years ago so I wonder if the code could be improved. Have a go if you fancy it.

Still, even if the code could benefit from an update, I’m pleased that the underlying pattern still holds true. I used it recently on The Session and it’s working a treat for a new Ajax interaction there (bookmarking or unbookmarking an item).

If you end up using this unobtrusive feedback pattern anyway, please let me know—I’d love to see more examples of it in the wild.

Kinopio

Cennydd asked for recommendations on Twitter a little while back:

Can anyone recommend an outlining app for macOS? I’m falling out with OmniOutliner. Not Notion, please.

This was my response:

The only outlining tool that makes sense for my brain is https://kinopio.club/

It’s more like a virtual crazy wall than a virtual Dewey decimal system.

I’ve written before about how I prepare a conference talk. The first step involves a sheet of A3 paper:

I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).

Kinopio is like a digital version of that A3 sheet of paper. It doesn’t force any kind of hierarchy on your raw ingredients. You can clump things together, join them up, break them apart, or just dump everything down in one go. That very much suits my approach to preparing something like a talk (or a book). The act of organising all the parts into a single narrative timeline is an important challenge, but it’s one that I like to defer to later. The first task is braindumping.

When I was preparing my talk for An Event Apart Online, I used Kinopio.club to get stuff out of my head. Here’s the initial brain dump. Here are the final slides. You can kind of see the general gist of the slidedeck in the initial brain dump, but I really like that I didn’t have to put anything into a sequential outline.

In some ways, Kinopio is like an anti-outlining tool. It’s scrappy and messy—which is exactly why it works so well for the early part of the process. If I use a tool that feels too high-fidelity too early on, I get a kind of impedance mismatch between the state of the project and the polish of the artifact.

I like that Kinopio feels quite personal. Unlike Google Docs or other more polished tools, the documents you make with this aren’t really for sharing. Still, I thought I’d share my scribblings anyway.

Connections

Fourteen years ago, I gave a talk at the Reboot conference in Copenhagen. It was called In Praise of the Hyperlink. For the most part, it was a gushing love letter to hypertext, but it also included this observation:

For a conspiracy theorist, there can be no better tool than a piece of technology that allows you to arbitrarily connect information. That tool now exists. It’s called the World Wide Web and it was invented by Sir Tim Berners-Lee.

You know those “crazy walls” that are such a common trope in TV shows and movies? The detectives enter the lair of the unhinged villain and discover an overwhelming wall that’s like looking at the inside of that person’s head. It’s not the stuff itself that’s unnerving; it’s the red thread that connects the stuff.

Red thread. Blue hyperlinks.

When I spoke about the World Wide Web, hypertext, apophenia, and conspiracy theorists back in 2006, conspiracy theories could still be considered mostly harmless. It was the domain of Dan Brown potboilers and UFO enthusiasts with posters on their walls declaring “I Want To Believe”. But even back then, 9/11 truthers were demonstrating a darker side to the fun and games.

There’s always been a gamification angle to conspiracy theories. Players are rewarded with the same dopamine hits for “doing the research” and connecting unrelated topics. Now that’s been weaponised into QAnon.

In his newsletter, Dan Hon wrote QAnon looks like an alternate reality game. You remember ARGs? The kind of designed experience where people had to cooperate in order to solve the puzzle.

Being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart.

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like Crossfit. 

Dan’s brother Adrian also wrote about this connection: What ARGs Can Teach Us About QAnon:

There is a vast amount of information online, and sometimes it is possible to solve “mysteries”, which makes it hard to criticise people for trying, especially when it comes to stopping perceived injustices. But it’s the sheer volume of information online that makes it so easy and so tempting and so fun to draw spurious connections.

This is something that Molly Sauter has been studying for years now, like in her essay The Apophenic Machine:

Humans are storytellers, pattern-spotters, metaphor-makers. When these instincts run away with us, when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet, the web circling back to itself again and again. The internet is an apophenic machine.

I remember interviewing Lauren Beukes back in 2012 about her forthcoming book about a time-travelling serial killer:

Me: And you’ve written a time-travel book that’s set entirely in the past.

Lauren: Yes. The book ends in 1993 and that’s because I did not want to have to deal with Kirby the heroine getting some access to CCTV cameras and uploading the footage to 4chan and having them solve the mystery in four minutes flat.

By the way, I noticed something interesting about the methodology behind conspiracy theories—particularly the open-ended never-ending miasma of something like QAnon. It’s no surprise that the methodology is basically an inversion of the scientific method. It’s the Texas sharpshooter fallacy writ large. Well, you know the way that I’m always going on about design principles and the way that good design principles should be reversible? Conspiracy theories take universal principles and invert them. Take Occam’s razor:

Do not multiply entities without necessity.

That’s what they want you to think! Wake up, sheeple! The success of something like QAnon—or a well-designed ARG—depends on a mindset that rigorously inverts Occam’s razor:

Multiply entities without necessity!

That’s always been the logic of conspiracy theories from faked moon landings to crop circles. I remember well when the circlemakers came clean and showed exactly how they had been making their beautiful art. Conspiracy theorists—just like cultists—don’t pack up and go home in the face of evidence. They double down. There was something almost pitiable about the way the crop circle UFO crowd were bending over backwards to reject proof and instead apply the inversion of Occam’s razor to come up with even more outlandish explanations to encompass the circlemakers’ confession.

Anyway, I recommend reading what Dan and Adrian have written about the shared psychology of QAnon and Alternate Reality Games, not least because they also suggest some potential course corrections.

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting peoples’ needs in a collectively, societally destructive way.

I want to add one more red thread to this crazy wall. There’s a book about conspiracy theories that has become more and more relevant over time. It’s also wonderfully entertaining. Here’s my recommendation from that Reboot presentation in 2006:

For a real hot-tub of conspiracy theory pleasure, nothing beats Foucault’s Pendulum by Umberto Eco.

…luck rewarded us, because, wanting connections, we found connections — always, everywhere, and between everything. The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Server Timing

Harry wrote a really good article all about the performance measurement Time To First Byte. Time To First Byte: What It Is and Why It Matters:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.

Harry’s proposed solution is to set up some Server Timing headers:

With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.

I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. Then I was able to put start/stop points in my code for measuring how long some operations were taking. Then I could output the results of these measurements into Server Timing headers that I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good for displaying Server Timing, so I used that while I was conducting this experiment).
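
For reference, the resulting header lists one or more named timings with durations in milliseconds, something like this (the metric names and numbers here are invented):

Server-Timing: db;dur=53.2;desc="Database", templating;dur=12.1;desc="Templating"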

I started with overall database requests. Sure enough, that was where most of the time in time-to-first-byte was being spent.

Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.

Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.

I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.

And now thesession.org is positively zipping along!

Summer of Apollo

It’s July, 2019. You know what that means? The 50th anniversary of the Apollo 11 mission is this month.

I’ve already got serious moon fever, and if you’d like to join me, I have some recommendations…

Watch the Apollo 11 documentary in a cinema. The 70mm footage is stunning, the sound design is immersive, the music is superb, and there’s some neat data visualisation too. Watching a preview screening in the Duke of York’s last week was pure joy from start to finish.

Listen to 13 Minutes To The Moon, the terrific ongoing BBC podcast by Kevin Fong. It’s got all my favourite titans of NASA: Michael Collins, Margaret Hamilton, and Charlie Duke, amongst others. And it’s got music by Hans Zimmer.

Experience the website Apollo 11 In Real Time on the biggest monitor you can. It’s absolutely wonderful! From July 16th, you can experience the mission timeshifted by exactly 50 years, but if you don’t want to wait, you can dive in right now. It genuinely feels like being in Mission Control!