Journal tags: enhancement

Browser support

There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.

This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.

You can find it at browsersupport.clearleft.com

We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long as you include a credit to Clearleft, you’re all set.

I think this browser-support policy makes a lot of sense. It certainly beats trying to tie browser support to specific browsers or version numbers:

We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.

The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.

So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com

Applying the four principles of accessibility

Web Content Accessibility Guidelines—or WCAG—looks very daunting. It’s a lot to take in. It’s kind of overwhelming. It’s hard to know where to start.

I recommend taking a deep breath and focusing on the four principles of accessibility. Together they spell out the cutesy acronym POUR:

  1. Perceivable
  2. Operable
  3. Understandable
  4. Robust

A lot of work has gone into distilling WCAG down to these four guidelines. Here’s how I apply them in my work…

Perceivable

I interpret this as:

Content will be legible, regardless of how it is accessed.

For example:

  • The contrast between background and foreground colours will meet the ratios defined in WCAG 2.
  • Content will be grouped into semantically-sensible HTML regions such as navigation, main, footer, etc.

Operable

I interpret this as:

Core functionality will be available, regardless of how it is accessed.

For example:

  • I will ensure that interactive controls such as links and form inputs will be navigable with a keyboard.
  • Every form control will be labelled, ideally with a visible label.

Understandable

I interpret this as:

Content will make sense, regardless of how it is accessed.

For example:

  • Images will have meaningful alternative text.
  • I will make sensible use of heading levels.

This is where it starts to get quite collaborative. Working at an agency, there will be some parts of website creation and maintenance that will require ongoing accessibility knowledge even when our work is finished.

For example:

  • Images uploaded through a content management system will need sensible alternative text.
  • Articles uploaded through a content management system will need sensible heading levels.

Robust

I interpret this as:

Content and core functionality will still work, regardless of how it is accessed.

For example:

  • Drop-down controls will use the HTML select element rather than a more fragile imitation.
  • I will only use JavaScript to provide functionality that isn’t possible with HTML and CSS alone.

If you’re applying a mindset of progressive enhancement, this part comes for free. If you take a different approach, you’re going to have a bad time.

Taken together, these four guidelines will get you very far without having to dive too deeply into the rest of WCAG.

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a mime-type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.

Baseline progressive enhancement

Support for view transitions for regular websites (as opposed to single-page apps) will ship in Chrome 126. As someone who’s a big fan—to put it mildly—I am very happy about this!

Hopefully Firefox and Safari won’t be too far behind. But it’s still worth adding view transitions to your website even if not every browser supports them. They’re the perfect example of a progressive enhancement.

The browsers that don’t yet support view transitions won’t be harmed in any way if you give them the CSS for view transitions. They’ll just ignore it. For users of those browsers, nothing changes.

Then when those browsers do ship support for view transitions, your website automatically gets an upgrade for those users. Code you’ve already written starts working from one day to the next.

Don’t wait, is what I’m saying.

I really like the Baseline initiative as a way to track browser support. It’s great to see it in use on MDN and Can I Use. It’s very handy having a glanceable indication of which browser features are newly available and which are widely available.

But…

Not all browser features work the same way. For features that work as progressive enhancements you don’t need to wait for them to be widely available.

Service workers. Preference queries. View transitions.

If a browser doesn’t support one of those features, that’s fine. Your website won’t break in that browser.
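
Take service workers as an example. The registration sits behind a feature detection, so a browser without support simply skips it. A minimal sketch (the file name is just for illustration):

if ('serviceWorker' in navigator) {
  // Only browsers that understand service workers will ever run this line
  navigator.serviceWorker.register('/serviceworker.js');
}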

Now that’s not true of all browser features, particularly some JavaScript APIs. If a feature is critical for your site to function then you definitely want to wait until it’s widely supported.

Baseline won’t tell you the difference between those two different kinds of features.

I don’t want Baseline to get too complicated. Like I said, I really like how it’s nice and glanceable right now. But it would be nice if there were some indication that a newly-available feature is a progressive enhancement.

For now it’s up to us to make that distinction. So don’t fall into the trap of thinking that just because a feature isn’t listed as widely-available you can’t use it yet.

Really you want to ask two questions:

  1. How widely available is this feature?
  2. Can this feature be used as a progressive enhancement?

If Baseline tells you that the answer to the first question is “newly-available”, move on to the second question. If the answer to that is “no, it can’t be used as a progressive enhancement”, don’t ship that feature in production just yet.

But if the answer to that second question is “hell yeah, it’s a progressive enhancement!” then go for it, regardless of the answer to the first question.

Y’know, there’s a real irony in a common misunderstanding around progressive enhancement: some people seem to think it’s about not being able to use advanced browser features. In reality it’s the opposite. Progressive enhancement allows you to use advanced browser features even before they’re widely supported.

My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.

So my client-side scripting style has updated over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.
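
To make that progression concrete, here’s a rough sketch of steps two and three doing the same job: copying a bit of text to the clipboard. The class name, element name, and data- attribute are all made up for illustration; this isn’t code from The Session.

// Step two: one listener on the document, delegating with event.target.closest
document.addEventListener('click', (event) => {
  const button = event.target.closest('.copy-button');
  if (!button || !navigator.clipboard) return;
  navigator.clipboard.writeText(button.dataset.text);
});

// Step three: a wrapping custom element that handles its own event listening
customElements.define('button-copy', class extends HTMLElement {
  connectedCallback() {
    const button = this.querySelector('button');
    if (!button || !navigator.clipboard) return;
    button.addEventListener('click', () => {
      navigator.clipboard.writeText(this.dataset.text);
    });
  }
});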

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And because I’m using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.
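
So a hypothetical custom element with a data-maximum attribute (the element name and behaviour here are invented purely for illustration) might read its configuration like this:

customElements.define('form-limit', class extends HTMLElement {
  connectedCallback() {
    // data-maximum="140" shows up here as the string "140"
    const maximum = parseInt(this.dataset.maximum, 10);
    const field = this.querySelector('textarea');
    if (!field || !maximum) return;
    field.setAttribute('maxlength', maximum);
  }
});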

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.
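
Here’s roughly what that looks like on both sides. These aren’t the actual elements from The Session, just simplified stand-ins to show the shape of the pattern:

// The component listing locations announces its arrival…
customElements.define('location-list', class extends HTMLElement {
  connectedCallback() {
    this.dispatchEvent(new CustomEvent('newLocations', {
      bubbles: true
    }));
  }
});

// …and the map component listens for that event on the document
customElements.define('location-map', class extends HTMLElement {
  connectedCallback() {
    document.addEventListener('newLocations', () => {
      // Update the map here
    });
  }
});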

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Displaying HTML web components

Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing input elements.

This would be the ideal use-case for the is attribute:

<input is="input-date-future" type="date">

Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.

So instead we have to make HTML web components by wrapping existing elements in new custom elements:

<input-date-future>
  <input type="date">
</input-date-future>

The end result is the same. Mostly.

Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.

So something I’ve started doing with HTML web components like these is adding something like this inside the connectedCallback method:

connectedCallback() {
    this.style.display = 'contents';
  …
}

This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.

Or you could (and probably should) do it in your stylesheet instead:

input-date-future {
  display: contents;
}

Just to be clear, you should only use display: contents if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.

It’s a bit of a hack to work around the lack of universal support for the is attribute, but it’ll do.

Pickin’ dates

I had the opportunity to trim some code from The Session recently. That’s always a good feeling.

In this case, it was a progressive enhancement pattern that was no longer needed. Kind of like removing a polyfill.

There are a couple of places on the site where you can input a date. This is exactly what input type="date" is for. But when I was making the interface, the support for this type of input was patchy.

So instead the interface used three select dropdowns: one for days, one for months, and one for years. Then I did a bit of feature detection and if the browser supported input type="date", I replaced the three selects with one date input.

It was a little fiddly but it worked.

Fast forward to today and input type="date" is supported across the board. So I threw away the JavaScript and updated the HTML to use date inputs by default. Nice!

I was discussing date inputs recently when I was talking to students in Amsterdam:

They’re given a PDF inheritance-tax form and told to convert it for the web.

That form included dates. The dates were all in the past so the students wanted to be able to set a max value on the datepicker. Ideally that should be done on the server, but it would be nice if you could easily do it in the browser too.

Wouldn’t it be nice if you could specify past dates like this?

<input type="date" max="today">

Or for future dates:

<input type="date" min="today">

Alas, no such syntactic sugar exists in HTML so we need to use JavaScript.

This seems like an ideal use-case for HTML web components:

Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

In this case, it would be nice to augment an existing input type="date" element. Something like this:

 <input-date-past>
   <input type="date">
 </input-date-past>

Here’s the JavaScript that does the augmentation:

 customElements.define('input-date-past', class extends HTMLElement {
     constructor() {
         super();
     }
     connectedCallback() {
         this.querySelector('input[type="date"]').setAttribute('max', new Date().toISOString().substring(0,10));
     }
 });

That’s it.

Here’s a CodePen where you can see it in action along with another HTML web component for future dates called, you guessed it, input-date-future.

See the Pen Date input HTML web components by Jeremy Keith (@adactio) on CodePen.
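
For completeness, the future-dates version is the mirror image, setting min instead of max. Something along these lines:

customElements.define('input-date-future', class extends HTMLElement {
  connectedCallback() {
    // Only dates from today onwards are allowed
    this.querySelector('input[type="date"]').setAttribute('min', new Date().toISOString().substring(0,10));
  }
});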

Hanging punctuation in CSS

There’s a lovely CSS property called hanging-punctuation. You can use it to do exactly what the name suggests and exdent punctuation marks such as opening quotes.

Here’s one way to apply it:

html {
  hanging-punctuation: first last;
}

Any punctuation marks at the beginning or end of a line will now hang over the edge, leaving you with nice clean blocks of text; no ragged edges.

Right now it’s only supported in Safari but there’s no reason not to use it. It’s a perfect example of progressive enhancement. One line of CSS to tidy things up for the browsers that support it and leave things exactly as they are for the browsers that don’t.

But when I used this over on The Session I noticed an unintended side-effect. Because I’m applying the property globally, it’s also acting on form fields. If the text inside a form field starts with a quotation mark or some other piece of punctuation, it’s shunted off to the side and hidden.

Here’s the fix I used:

input, textarea {
  hanging-punctuation: none;
}

It’s a small little gotcha but I figured I’d share it in case it helps someone else out.

Progressive disclosure defaults

When I wrote about my time in Amsterdam last week, I mentioned the task that the students were given:

They’re given a PDF inheritance-tax form and told to convert it for the web.

Rich had a question about that:

I’m curious to know if they had the opportunity to optimise the user experience of the form for an online environment, eg. splitting it up into a sequence of questions, using progressive disclosure, branching based on inputs, etc?

The answer is yes, very much so. Progressive disclosure was a very clear opportunity for enhancement.

You know the kind of paper form where it says “If you answered no to this, then skip ahead to that”? On the web, we can do the skipping automatically. Or to put it another way, we can display a section of the form only when the user has ticked the appropriate box.

This is a classic example of progressive disclosure:

information is revealed when it becomes relevant to the current task.

But what should the mechanism be?

This is an interaction design pattern so JavaScript seems the best choice. JavaScript is for behaviour.

On the other hand, you can do this in CSS using the :checked pseudo-class. And the principle of least power suggests using the least powerful language suitable for a given task.

I’m torn on this. I’m not sure if there’s a correct answer. I’d probably lean towards JavaScript just because it’s then possible to dynamically update ARIA attributes like aria-expanded—very handy in combination with aria-controls. But using CSS also seems perfectly reasonable to me.
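
A rough sketch of that JavaScript route, assuming the checkbox carries an aria-controls attribute pointing at the content it reveals (the class name is made up, as before):

document.addEventListener('change', (event) => {
  const toggle = event.target.closest('.disclosure-switch');
  if (!toggle) return;
  const content = document.getElementById(toggle.getAttribute('aria-controls'));
  if (!content) return;
  // Keep the ARIA state and the visibility in sync with the checkbox
  toggle.setAttribute('aria-expanded', toggle.checked);
  content.hidden = !toggle.checked;
});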

It was interesting to see which students went down the JavaScript route and which ones used CSS.

It used to be that using the :checked pseudo-class involved a sibling combinator, like this:

input.disclosure-switch:checked ~ .disclosure-content {
  display: block;
}

That meant your markup had to follow a specific pattern where the elements needed to be siblings:

<div class="disclosure-container">
  <input type="checkbox" class="disclosure-switch">
  <div class="disclosure-content">
  ...
  </div>
</div>

But none of the students were doing that. They were all using :has(). That meant that their selector could be much more robust. Even if the nesting of their markup changes, the CSS will still work. Something like this:

.disclosure-container:has(.disclosure-switch:checked) .disclosure-content

That will target the .disclosure-content element anywhere inside the same .disclosure-container that has the .disclosure-switch. Much better! (Ignore these class names by the way—I’m just making them up to illustrate the idea.)

But just about every student ended up with something like this in their style sheets:

.disclosure-content {
  display: none;
}
.disclosure-container:has(.disclosure-switch:checked) .disclosure-content {
  display: block;
}

That gets my spidey-senses tingling. It doesn’t smell right to me. Here’s why…

The simpler selector is doing the more destructive action: hiding content. There’s a reliance on the more complex selector to display content.

If a browser understands the first ruleset but not the second, that content will be hidden by default.

I know that :has() is very well supported now, but this still makes me nervous. I feel that the more risky action (hiding content) should belong to the more complex selector.

Thanks to the :not() selector, you can reverse the logic of the progressive disclosure:

.disclosure-content {
  display: block;
}
.disclosure-container:not(:has(.disclosure-switch:checked)) .disclosure-content {
  display: none;
}

Now if a browser understands the first ruleset, but not the second, it’s not so bad. The content remains visible.

When I was explaining this way of thinking to the students, I used an analogy.

Suppose you’re building a physical product that uses electricity. What should happen if there’s a power cut? Like, if you’ve got a building with electric doors, what should happen when the power is cut off? Should the doors be locked by default? Or is it safer to default to unlocked doors?

It’s a bit of a tortured analogy, but it’s one I’ve used in the past when talking about JavaScript on the web. I like to think about JavaScript as being like electricity…

Take an existing product, like say, a toothbrush. Now imagine what you can do when you turbo-charge it with electricity: an electric toothbrush!

But also consider what happens when the electricity fails. Instead of the product becoming useless you want it to revert back to being a regular old toothbrush.

That’s the same mindset I’m encouraging for the progressive disclosure pattern. Make sure that the default state is safe. Then enhance.

Schooltijd

I was in Amsterdam last week. Usually I’m in that city for an event like the excellent CSS Day. Not this time. I was there as a guest of Vasilis. He invited me over to bother his students at the CMD (Communications and Multimedia Design) school.

There’s a specific module his students are partaking in that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.

Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?

Got your styles all looking good on the screen? Great! Now what about print styles?

Got form validation working? Great! Now how might you use local storage to save data locally?

As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.

Having to switch gears between modules reminded me of switching between prototypes and production:

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), they’ve got it front of mind. I’m jealous!

In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?

If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.

This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.

I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.

Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.

It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.
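
A contrived little example of what that ends up looking like (the scenario is made up, but the shape is the point):

// Whenever the text in the textarea changes
document.querySelector('textarea').addEventListener('input', (event) => {
  // Save the current contents to local storage under the key "draft"
  localStorage.setItem('draft', event.target.value);
});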

You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.

So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that around how to approach HTML, CSS, or JavaScript.

I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.

I’d love to do it again sometime.

HTML web components

Web components have been around for quite a while, but it feels like they’re having a bit of a moment right now.

It turns out that the best selling point for web components was “wait and see.” For everyone who didn’t see the benefit of web components over being locked into a specific framework, time is proving to be a great teacher.

It’s not just that web components are portable. They’re also web standards, which means they’ll be around as long as web browsers. No framework can make that claim. As Jake Lazaroff puts it, web components will outlive your JavaScript framework.

At this point React is legacy technology, like Angular. Lots of people are still using it, but nobody can quite remember why. The decision-makers in organisations who chose to build everything with React have long since left. People starting new projects who still decide to build on React are doing it largely out of habit.

Others are making more sensible judgements and, having been bitten by lock-in in the past, are now giving web components a go.

If you’re one of those people making the move from React to web components, there’ll certainly be a bit of a learning curve, but that would be true of any technology change.

I have a suggestion for you if you find yourself in this position. Try not to bring React’s mindset with you.

I’m talking about the way React components are composed. There’s often lots of props doing heavy lifting. The actual component element itself might be empty.

If you want to apply that model to web components, you can. Lots of people do. It’s not unusual to see web components in the wild that look like this:

<my-component></my-component>

The custom element is just a shell. All the actual power is elsewhere. It’s in the JavaScript that does all kinds of clever things with the shadow DOM, templates, and slots.

There is another way. Ask, as Robin does, “what would HTML do?”

Think about composability with existing materials. Do you really need to invent an entirely new component from scratch? Or can you use HTML up until it reaches its limit and then enhance the markup?

Robin writes:

I don’t think we should see web components like the ones you might find in a huge monolithic React app: your Button or Table or Input components. Instead, I’ve started to come around and see Web Components as filling in the blanks of what we can do with hypertext: they’re really just small, reusable chunks of code that extends the language of HTML.

Dave talks about how web components can be HTML with superpowers. I think that’s a good attitude to have. Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

Where does the shadow DOM come into all of this? It doesn’t. And that’s okay. I’m not saying it should be avoided completely, but it should be a last resort. See how far you can get with the composability of regular HTML first.

Eric described his recent epiphany with web components. He created a super-slider custom element that wraps around an existing label and input type="range":

You just take some normal HTML markup, wrap it with a custom element, and then write some JS to add capabilities which you can then style with regular CSS!  Everything’s of the Light Side of the Web.  No need to pierce the Vale of Shadows or whatever.

When you wrap some existing markup in a custom element and then apply some new behaviour with JavaScript, technically you’re not doing anything you couldn’t have done before with some DOM traversal and event handling. But it’s less fragile to do it with a web component. It’s portable. It obeys the single responsibility principle. It only does one thing but it does it well.
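
To make that tangible, here’s the general shape of the pattern: wrap the markup, find the bits you care about, add behaviour. This is my own simplified sketch, not Eric’s actual code:

customElements.define('super-slider', class extends HTMLElement {
  connectedCallback() {
    const input = this.querySelector('input[type="range"]');
    if (!input) return;
    // Add an output element that mirrors the slider's current value
    const output = document.createElement('output');
    output.textContent = input.value;
    this.appendChild(output);
    input.addEventListener('input', () => {
      output.textContent = input.value;
    });
  }
});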

Jim created an icon-list custom element that wraps around a regular ul populated with li elements. But he feels almost bashful about even calling it a web component:

Maybe I shouldn’t be using the term “web component” for what I’ve done here. I’m not using shadow DOM. I’m not using the templates or slots. I’m really only using custom elements to attach functionality to a specific kind of component.

I think what Eric and Jim are doing is exemplary. See also Zach’s web components.

At the end of his post, Eric says he’d like a nice catchy term for these kinds of web components. In Dave’s catalogue of web components, they’re called “element extensions.” I like that. It’s pretty catchy.

Or we could call them “HTML web components.” If your custom element is empty, it’s not an HTML web component. But if you’re using a custom element to extend existing markup, that’s an HTML web component.

React encouraged a mindset of replacement: “forget what browsers can do; do everything in a React component instead, even if you’re reinventing the wheel.”

HTML web components encourage a mindset of augmentation instead.

Add view transitions to your website

I must admit, when Jake told me he was leaving Google, I got very worried about the future of the View Transitions API.

To recap: Chrome shipped support for the API, but only for single page apps. That had me worried:

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Well, the multi-page version still hasn’t yet shipped in Chrome stable, but it is available in Chrome Canary behind a flag, so it looks like it’s almost here!

Robin took the words out of my mouth:

Anyway, even this cynical jerk is excited about this thing.

Are you the kind of person who flips feature flags on in nightly builds to test new APIs?

Me neither.

But I made an exception for the View Transitions API. So did Dave:

I think the most telling predictor for the success of the multi-page View Transitions API – compared to all other proposals and solutions that have come before it – is that I actually implemented this one. Despite animations being my bread and butter for many years, I couldn’t be arsed to even try any of the previous generation of tools.

Dave’s post is an excellent step-by-step introduction to using view transitions on your website. To recap:

Enable these two flags in Chrome Canary:

chrome://flags#view-transition
chrome://flags#view-transition-on-navigation

Then add this meta element to the head of your website:

<meta name="view-transition" content="same-origin">

You could stop there. If you navigate around your site, you’ll see that the navigations now fade in and out nicely from one page to another.

But the real power comes with transitioning page elements. Basically, you want to say “this element on this page should morph into that element on that page.” And when I say morph, I mean morph. As Dave puts it:

Behind the scenes the browser is rasterizing (read: making an image of) the before and after states of the DOM elements you’re transitioning. The browser figures out the differences between those two snapshots and tweens between them similar to Apple Keynote’s “Magic Morph” feature, the liquid metal T-1000 from Terminator 2: Judgement Day, or the 1980s cartoon series Turbo Teen.

If those references are lost on you, how about the popular kids book series Animorphs?

Some classic examples would be:

  • A thumbnail of a video on one page morphs into the full-size video on the next page.
  • A headline and snippet of an article on one page morphs into the full article on the next page.

I’ve added view transitions to The Session. Where I’ve got index pages with lists of titles, each title morphs into the heading on the next page.

Again, Dave’s post was really useful here. Each transition needs a unique name, so I used Dave’s trick of naming each transition with the ID of the individual item being linked to.

In the recordings section, for example, there might be a link like this on the index page:

<a href="/recordings/7812" style="view-transition-name: recording-7812">The Banks Of The Moy</a>

Which, if you click on it, takes you to the page with this heading:

<h1><span style="view-transition-name: recording-7812">The Banks Of The Moy</span></h1>

Why the span? Well, like Dave, I noticed some weird tweening happening between block and inline elements. Dave solved the problem with width: fit-content on the block-level element. I just stuck in an extra inline element.

Anyway, the important thing is that the name of the view transition matches: recording-7812.

I also added a view transition to pages that have maps. The position of the map might change from page to page. Now there’s a nice little animation as you move from one page with a map to another page with a map.

thesession.org View Transitions

That’s all good, but I found myself wishing that I could just have those enhancements. Every single navigation on the site was triggering a fade in and out—the default animation. I wondered if there was a way to switch off the default fading.

There is! That default animation is happening on a view transition named root. You can get rid of it with this snippet of CSS:

::view-transition-image-pair(root) {
  isolation: auto;
}
::view-transition-old(root),
::view-transition-new(root) {
  animation: none;
  mix-blend-mode: normal;
  display: block;
}

Voila! Now only the view transitions that you name yourself will get applied.

You can adjust the timing, the easing, and the animation properties of your view transitions. Personally, I was happy with the default morph.

In fact, that’s one of the things I like about this API. It’s another good example of declarative design. I say what I want to happen, but I don’t need to specify the details. I’ll let the browser figure all that out.

That’s what’s got me so excited about this API. Yes, it’s powerful. But just as important, it’s got a very low barrier to entry.

Chris has gathered a bunch of examples together in his post Early Days Examples of View Transitions. Have a look around to get some ideas.

If you like what you see, I highly encourage you to add view transitions to your website now.

“But wait,” I hear you cry, “this isn’t supported in any public-facing browser yet!”

To which, I respond “So what?” It’s a perfect example of progressive enhancement. Adding one meta element and a smidgen of CSS will do absolutely no harm to your website. And while no-one will see your lovely view transitions yet, once browsers do start shipping with support for the API, your site will automatically get better.

Your website will be enhanced. Progressively.

Update: Simon Pieters quite rightly warns against adding view transitions to live sites before the API is done:

in general, using features before they ship in a browser isn’t a great idea since it can poison the feature with legacy content that might break when the feature is enabled. This has happened several times and renames or so were needed.

Good point. I must temper my excitement with pragmatism. Let me amend my advice:

I highly encourage you to experiment with view transitions on your website now.

Read-only web apps

The most cartoonish misrepresentation of progressive enhancement is that it means making everything work without JavaScript.

No. Progressive enhancement means making sure your core functionality works without JavaScript.

In my book Resilient Web Design, I quoted Wilto:

Lots of cool features on the Boston Globe don’t work when JS breaks; “reading the news” is not one of them.

That’s an example where the core functionality is readily identifiable. It’s a newspaper. The core functionality is reading the news.

It isn’t always so straightforward though. A lot of services that self-identify as “apps” will claim that even their core functionality requires JavaScript.

Surely I don’t expect Gmail or Google Docs to provide core functionality without JavaScript?

In those particular cases, I actually do. I believe that a textarea in a form would do the job nicely. But I get it. That might take a lot of re-engineering.

So how about this compromise…

Your app should work in a read-only mode without JavaScript.

Without JavaScript I should still be able to read my email in Gmail, even if you don’t let me compose, reply, or organise my messages.

Without JavaScript I should still be able to view a document in Google Docs, even if you don’t let me comment or edit the document.

Even with something as interactive as Figma or Photoshop, I think I should still be able to view a design file without JavaScript.

Making this distinction between read-only mode and read/write mode could be very useful, especially at the start of a project.

Begin by creating the read-only mode that doesn’t require JavaScript. That alone will make for a solid foundation to build upon. Now you’ve built a fallback for any unexpected failures.

Now start adding the read/write functionality. You’re enhancing what’s already there. Progressively.

Heck, you might even find some opportunities to provide some read/write functionality that doesn’t require JavaScript. But if JavaScript is needed, that’s absolutely fine.

So if you’re about to build a web app and you’re pretty sure it requires JavaScript, why not pause and consider whether you can provide a read-only version?

Of the web

I’m subscribed to a lot of blogs in my RSS reader. I follow some people because what they write about is very different to what I know about. But I also follow lots of people who have similar interests and ideas to me. So I’m not exactly in an echo chamber, but I do have the reverb turned up pretty high.

Sometimes these people post thoughts that are eerily similar to what I’ve been thinking about. Ethan has been known to do this. Get out of my head, Marcotte!

But even if Ethan wasn’t some sort of telepath, he’d still be in my RSS reader. We’re friends. Lots of the people in my RSS reader are my friends. When I read their words, I can hear their voices.

Then there are the people I’ve never met. Like Desirée García, Piper Haywood, or Jim Nielsen. Never met them, don’t know them, but damn, do I enjoy reading their blogs. Last year alone, I ended up linking to Jim’s posts ten different times.

Or Baldur Bjarnason. I can’t remember when I first came across his writing, but it really, really resonates with me. I probably owe him royalties for the amount of times I’ve cited his post Over-engineering is under-engineering.

His latest post is positively Marcottian in how it exposes what’s been fermenting in my own mind. But because he writes clearly, it really helps clarify my own thinking. It’s often been said that you should write to figure out what you think, and I can absolutely relate to that. But here’s a case where somebody else’s writing really helps to solidify my own thoughts.

Which type of novelty-seeking web developer are you?

It starts with some existentialist stock-taking. I can relate, what with the whole five decades thing. But then it turns the existential questioning to the World Wide Web itself, or rather, the people building the web.

In a way, it’s like taking the question of the great divide (front of the front end and back of the front end), and then turning it 45 degrees to reveal an entirely hidden dimension.

In examining the nature of the web, he hits on the litmus of how you view encapsulation:

I mention this first as it’s the aspect of the web that modern web developers hate the most without even giving it a label. Single-Page-Apps and GraphQL are both efforts to eradicate the encapsulation that’s baked into the foundation of every layer of the web.

Most modern devs are trying to get rid of it but it’s one of the web’s most strategic advantages.

I hadn’t thought of this before.

By default, if you don’t go against the grain of the web, each HTTP endpoint is encapsulated from each other.

Moreover, all of this can happen really fast if you aren’t going overboard with your CSS and JS.

He finishes with a look at another of the web’s most powerful features: distribution. In between are the things that make the web webby: hypertext and flexibility (The Dao of the Web).

It’s the idea that the web isn’t a single fixed thing but a fluid multitude whose shape is dictated by its surroundings.

This resonates with me because it highlights two different ways of viewing the web.

On the one hand, you can see the web purely as a distribution channel. In the past you might have been distributing a Flash movie. These days you might be distributing a single page app. Either way, the web is there as a low-friction way of getting your creation in front of other people.

The other way of building for the web is to go with the web’s grain, embracing flexibility and playing to the strengths of the medium through progressive enhancement. This is the distinction I was getting at when I talked about something being not just on the web, but of the web.

With that mindset, Baldur then takes us through some of the technologies that he’s excited about, like SvelteKit and Hotwire. I think it’s the same mindset that got me excited about service workers. As Baldur says:

They are helping the web become better at being its own thing.

That’s my tagline right there.

The principle of most availability

I’ve been thinking some more about the technical experience of booking a vaccination appointment and how much joy it brought me.

I’ve written before about how I’ve got a blind spot for the web so it’s no surprise that I was praising the use of a well marked-up form, styled clearly, and unencumbered by unnecessary JavaScript. But other technologies were in play too: Short Message Service (SMS) and email.

All of those technologies are platform-agnostic.

No matter what operating system I’m using, or what email software I’ve chosen, email works. It gets more complicated when you introduce HTML email. My response to that is the same as the old joke; you know the one: “Doctor, it hurts when I do this.” (“Well, don’t do that.”)

No matter what operating system my phone is using, SMS works. It gets more complicated when you introduce read receipts, memoji, or other additions. See my response to HTML email.

Then there’s the web. No matter what operating system I’m using on a device that could be a phone or a tablet or a laptop or desktop tower, and no matter what browser I’ve chosen to use, the World Wide Web works.

I originally said:

It feels like the principle of least power in action.

But another way of rephrasing “least power” is “most availability.” Technologies that are old, simple, and boring tend to be more widely available.

I remember when software used to come packaged in boxes and displayed on shelves. The packaging always had a list on the side. It looked like the nutritional information on a food product, but this was a list of “system requirements”: operating system, graphics card, sound card, CPU. I never liked the idea of system requirements. It felt so …exclusionary. And for me, the promise of technology was liberation and freedom to act on my own terms.

Hence my soft spot for the boring and basic technologies like email, SMS, and yes, web pages. The difference with web pages is that you can choose to layer added extras on top. As long as the fundamental functionality is using universally-supported technology, you’re free to enhance with all the latest CSS and JavaScript. If any of it fails, that’s okay: it falls back to a nice solid base.

Alas, many developers don’t build with this mindset. I mean, I understand why: it means thinking about users with the most boring, least powerful technology. It’s simpler and more exciting to assume that everyone’s got a shared baseline of newer technology. But by doing that, you’re missing out on one of the web’s superpowers: that something served up at the same URL with the same underlying code can simultaneously serve people with older technology and also provide a whizz-bang experience to people with the latest and greatest technology.

Anyway, I’ve been thinking about the kind of communication technologies that are as universal as email, SMS, and the web.

QR codes are kind of heading in that direction, although I still have qualms because of their proprietary history. But there’s something nice and lo-fi about them. They’re like print stylesheets in reverse (and I love print stylesheets). A funky little bridge between the physical and the digital. I just wish they weren’t so opaque: you never know if scanning that QR code will actually take you to the promised resource, or if you’re about to rickroll yourself.

Telephone numbers kind of fall into the same category as SMS, but with the added option of voice. I’ve always found the prospect of doing something with, say, Twilio’s API more interesting than building something inside a walled garden like Facebook Messenger or Alexa.

I know very little about chat apps or voice apps, but I don’t think there’s a cross-platform format that works with different products, right? I imagine it’s like the situation with native apps which require a different codebase for each app store and operating system. And so there’s a constant stream of technologies that try to fulfil the dream of writing once and running everywhere: React Native, Flutter.

They’re trying to solve a very clear and obvious problem: writing the same app more than once is really wasteful. But that’s the nature of the game when it comes to runtime-specific apps. The only alternative is to either deliberately limit your audience …or apply the principle of least power/most availability.

The wastefulness of having to write the same app for multiple platforms isn’t the only thing that puts me off making native apps. The exclusivity works in two directions. There’s the exclusive nature of the runtime that requires a bespoke codebase. There’s also the exclusive nature of the app store. It feels like a return to shelves of packaged software with strict system requirements. You can’t just walk in and put your software on the shelf. That’s the shopkeeper’s job.

There is no shopkeeper for the World Wide Web.

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
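
For reference, a NodeList forEach polyfill generally looks something like this (a sketch of the usual pattern, not necessarily the exact code in the polyfill I’ve linked to):

// Sketch: give NodeList a forEach method in browsers that lack one.
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = function (callback, thisArg) {
    thisArg = thisArg || window;
    for (var i = 0; i < this.length; i++) {
      callback.call(thisArg, this[i], i, this);
    }
  };
}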

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ‘90s.

Continuous partial browser support

Vendor prefixes didn’t work. The theory was sound. It was a way of marking CSS and JavaScript features as being experimental. Developers could use the prefixed properties as long as they understood that those features weren’t to be relied upon.

That’s not what happened though. Developers used vendor-prefixed properties as though they were stable. Tutorials were published that basically said “Go ahead and use these vendor-prefixed properties and ship it!” There were even tools that would add the prefixes for you so you didn’t have to type them out for yourself.

Browsers weren’t completely blameless either. Long after features were standardised, they would only be supported in their prefixed form. Apple was and is the worst for this. To this day, if you want to use the clip-path property in your CSS, you’ll need to duplicate your declaration with -webkit-clip-path if you want to support Safari. It’s been like that for seven years and counting.

Like capitalism, vendor prefixes were one of those ideas that sounded great in theory but ended up being unworkable in practice.

Still, developers need some way to get their hands on experimental features. But we don’t want browsers to ship experimental features without some kind of safety mechanism.

The current thinking involves something called origin trials. Here’s the explainer from Microsoft Edge and here’s Google Chrome’s explainer:

  • Developers are able to register for an experimental feature to be enabled on their origin for a fixed period of time measured in months. In exchange, they provide us their email address and agree to give feedback once the experiment ends.
  • Usage of these experiments is constrained to remain below Chrome’s deprecation threshold (< 0.5% of all Chrome page loads) by a system which automatically disables the experiment on all origins if this threshold is exceeded.

I think it works pretty well. If you’re really interested in kicking the tyres on an experimental feature, you can opt in to the origin trial. But it’s very clear that you wouldn’t want to ship it to production.

That said…

You could ship something that’s behind an origin trial, but you’d have to make sure you’re putting safeguards in place. At the very least, you’d need to do feature detection. You certainly couldn’t use an experimental feature for anything mission critical …but you could use it as an enhancement.

And that is a pretty great way to think about all web features, experimental or otherwise. Don’t assume the feature will be supported. Use feature detection (or @supports in the case of CSS). Try to use the feature as an enhancement rather than a dependency.
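
As a rough sketch of what I mean, here’s the JavaScript equivalent of @supports, using CSS.supports() to only opt in to a feature when it’s available (the class name is just a hypothetical hook):

// Only opt in to a grid-based layout if the browser supports it.
if (window.CSS && CSS.supports('display: grid')) {
  document.documentElement.classList.add('has-grid');
}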

If you treat all browser features as though they’re behind an origin trial, then suddenly the landscape of browser support becomes more navigable. Instead of looking at the support table for something on caniuse.com and thinking, “I wish more browsers supported this feature so that I could use it!”, you can instead think “I’m going to use this feature today, but treat it as an experimental feature.”

You can also do it for well-established features like querySelector, addEventListener, and geolocation. Instead of assuming that browser support is universal, it doesn’t hurt to take a more defensive approach. Assume nothing. Acknowledge and embrace unpredictability.
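
Here’s a sketch of what that defensive approach might look like for geolocation (the button and its selector are hypothetical; the page works fine without them):

// Only reveal the "find nearby" enhancement if geolocation is available.
var nearbyButton = document.querySelector('.find-nearby');
if (nearbyButton && 'geolocation' in navigator) {
  nearbyButton.hidden = false;
  nearbyButton.addEventListener('click', function () {
    navigator.geolocation.getCurrentPosition(function (position) {
      console.log(position.coords.latitude, position.coords.longitude);
    });
  }, false);
}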

The debacle with vendor prefixes shows what happens if we treat experimental features as though they’re stable. So let’s flip that around. Let’s treat stable features as though they’re experimental. If you cultivate that mindset, your websites will be more robust and resilient.

Saving forms

I added a long-overdue enhancement to The Session recently. Here’s the scenario…

You’re on a web page with a comment form. You type your well-considered thoughts into a textarea field. But then something happens. Maybe you accidentally navigate away from the page or maybe your network connection goes down right when you try to submit the form.

This is a textbook case for storing data locally on the user’s device …at least until it has safely been transmitted to the server. So that’s what I set about doing.

My first decision was choosing how to store the data locally. There are multiple APIs available: sessionStorage, IndexedDB, localStorage. It was clear that sessionStorage wasn’t right for this particular use case: I needed the data to be saved across browser sessions. So it was down to IndexedDB or localStorage. IndexedDB is the more versatile and powerful—because it’s asynchronous—but localStorage is nice and straightforward so I decided on that. I’m not sure if that was the right decision though.

Alright, so I’m going to store the contents of a form in localStorage. It accepts key/value pairs. I’ll make the key the current URL. The value will be the contents of that textarea. I can store other form fields too. Even though localStorage technically only stores a single string per key, that string can be a JSON-encoded object, so in reality you can store multiple values with one key (just remember to stringify the JSON when you store it and parse it when you retrieve it).
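
In practice, that looks something like this minimal sketch (the selector and property names are just for illustration):

// Store one or more form values under a single key.
var textarea = document.querySelector('textarea');
var key = window.location.href;
localStorage.setItem(key, JSON.stringify({comment: textarea.value}));
// Later, retrieve and parse it:
var saved = JSON.parse(localStorage.getItem(key) || '{}');
console.log(saved.comment);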

Now I know what I’m going to store (the textarea contents) and how I’m going to store it (localStorage). The next question is when should I do it?

I could play it safe and store the comment whenever the user presses a key within the textarea. But that seems like overkill. It would be more efficient to only save when the user leaves the current page for any reason.

Alright then, I’ll use the unload event. No! Bad Jeremy! If I use that then the browser can’t reliably add the current page to the cache it uses for faster back-forwards navigations. The page life cycle is complicated.

So beforeunload then? Well, maybe. But modern browsers also support a pagehide event that looks like a better option.

In either case, just adding a listener for the event could screw up the caching of the page for back-forwards navigations. I should only listen for the event if I know that I need to store the contents of the textarea. And in order to know if the user has interacted with the textarea, I’m back to listening for key presses again.

But wait a minute! I don’t have to listen for every key press. If the user has typed anything, that’s enough for me. I only need to listen for the first key press in the textarea.

Handily, addEventListener accepts an object of options. One of those options is called “once”. If I set that to true, then the event listener is only fired once.

So I set up a cascade of event listeners. If the user types anything into the textarea, that fires an event listener (just once) that then adds the event listener for when the page is unloaded—and that’s when the textarea contents are put into localStorage.

I’ve abstracted my code into a gist. Here’s what it does:

  1. Cut the mustard. If this browser doesn’t support localStorage, bail out.
  2. Set the localStorage key to be the current URL.
  3. If there’s already an entry for the current URL, update the textarea with the value in localStorage.
  4. Write a function to store the contents of the textarea in localStorage but don’t call the function yet.
  5. The first time that a key is pressed inside the textarea, start listening for the page being unloaded.
  6. When the page is being unloaded, invoke that function that stores the contents of the textarea in localStorage.
  7. When the form is submitted, remove the entry in localStorage for the current URL.

That last step isn’t something I’m doing on The Session. Instead I’m relying on getting something back from the server to indicate that the form was successfully submitted. If you can do something like that, I’d recommend that instead of listening to the form submission event. After all, something could still go wrong between the form being submitted and the data being received by the server.

Still, this bit of code is better than nothing. Remember, it’s intended as an enhancement. You should be able to drop it into any project and improve the user experience a little bit. Ideally, no one will ever notice it’s there—it’s the kind of enhancement that only kicks in when something goes wrong. A little smidgen of resilient web design. A defensive enhancement.
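
For illustration, here’s a minimal sketch of those steps. It isn’t the actual gist (the selectors and variable names are assumptions) but it shows the overall shape:

(function () {
  // 1. Cut the mustard: bail out if localStorage isn't available.
  if (!window.localStorage || !document.querySelector) return;
  var textarea = document.querySelector('textarea'); // assumed hook
  if (!textarea || !textarea.form) return;
  // 2. Use the current URL as the storage key.
  var key = window.location.href;
  // 3. If there's already an entry for this URL, restore it.
  var saved = localStorage.getItem(key);
  if (saved) {
    textarea.value = saved;
  }
  // 4. A function to store the textarea contents (not called yet).
  var storeContents = function () {
    localStorage.setItem(key, textarea.value);
  };
  // 5. On the first key press, start listening for the page being hidden…
  textarea.addEventListener('keydown', function () {
    // 6. …and store the contents when that happens.
    window.addEventListener('pagehide', storeContents, false);
  }, {once: true});
  // 7. On submission, remove the entry for this URL.
  textarea.form.addEventListener('submit', function () {
    localStorage.removeItem(key);
  }, false);
})();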

Submitting a form with datalist

I’m a big fan of HTML5’s datalist element and its elegant design. It’s a way to progressively enhance any input element into a combobox.

You use the list attribute on the input element to point to the ID of the associated datalist element.

<label for="homeworld">Your home planet</label>
<input type="text" name="homeworld" id="homeworld" list="planets">
<datalist id="planets">
 <option value="Mercury">
 <option value="Venus">
 <option value="Earth">
 <option value="Mars">
 <option value="Jupiter">
 <option value="Saturn">
 <option value="Uranus">
 <option value="Neptune">
</datalist>

It even works on input type="color", which is pretty cool!

The most common use case is as an autocomplete widget. That’s how I’m using it over on The Session, where the datalist is updated via Ajax every time the input is updated.
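
That dynamic updating might look something like this rough sketch (the endpoint URL and the shape of the JSON response are assumptions, not how The Session actually does it):

// Refresh the datalist options from the server as the user types.
var input = document.querySelector('input[list]');
var datalist = document.getElementById(input.getAttribute('list'));
input.addEventListener('input', function () {
  fetch('/autocomplete?q=' + encodeURIComponent(input.value))
    .then(function (response) { return response.json(); })
    .then(function (results) {
      datalist.innerHTML = '';
      results.forEach(function (result) {
        var option = document.createElement('option');
        option.value = result;
        datalist.appendChild(option);
      });
    });
}, false);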

But let’s stick with a simple example, like the list of planets above. Suppose the user types “jup” …the datalist will show “Jupiter” as an option. The user can click on that option to automatically complete their input.

It would be handy if you could automatically submit the form when the user chooses a datalist option like this.

Well, tough luck.

The datalist element emits no events. There’s no way of telling if it has been clicked. This is something I’ve been trying to find a workaround for.

I got my hopes up when I read Amber’s excellent article about document.activeElement. But no, the focus stays on the input when the user clicks on an option in a datalist.

So if I can’t detect whether a datalist has been used, the best I can do is try to infer it. I know it’s not exactly the same thing, and it won’t be as reliable as true detection, but here’s my logic:

  • Keep track of the character count in the input element.
  • Every time the input is updated in any way, check the current character count against the last character count.
  • If the difference is greater than one, something interesting happened! Maybe the user pasted a value in …or maybe they used the datalist.
  • Loop through each of the options in the datalist.
  • If there’s an exact match with the current value of the input element, chances are the user chose that option from the datalist.
  • So submit the form!

Here’s how that translates into DOM scripting code:

document.querySelectorAll('input[list]').forEach( function (formfield) {
  var datalist = document.getElementById(formfield.getAttribute('list'));
  var lastlength = formfield.value.length;
  var checkInputValue = function (inputValue) {
    if (inputValue.length - lastlength > 1) {
      datalist.querySelectorAll('option').forEach( function (item) {
        if (item.value === inputValue) {
          formfield.form.submit();
        }
      });
    }
    lastlength = inputValue.length;
  };
  formfield.addEventListener('input', function () {
    checkInputValue(this.value);
  }, false);
});

I’ve made a gist with some added feature detection and mustard-cutting at the start. You should be able to drop it into just about any page that’s using datalist. It works even if the options in the datalist are dynamically updated, like the example on The Session.

It’s not foolproof. The inference relies on the difference between what was previously typed and what’s autocompleted being more than one character. So in the planets example, if someone has typed “Jupite” and then they choose “Jupiter” from the datalist, the form won’t automatically submit.

But still, I reckon it covers most common use cases. And like the datalist element itself, you can consider this functionality a progressive enhancement.

Hey now

Progressive enhancement is at the heart of everything I do on the web. It’s the bedrock of my speaking and writing too. Whether I’m writing about JavaScript, Ajax, HTML, or service workers, it’s always through the lens of progressive enhancement. Sometimes I explicitly bang the drum, like with Resilient Web Design. Other times I don’t mention it by name at all, and instead talk only about its benefits.

I sometimes get asked to name some examples of sites that still offer their core functionality even when JavaScript fails. I usually mention Amazon.com, although that has other issues. But quite often I find that a lot of the examples I might mention are dismissed as not being “web apps” (whatever that means).

The pushback I get usually takes the form of “Well, that approach is fine for websites, but it wouldn’t work for something like Gmail.”

It’s always Gmail. Which is odd. Because if you really wanted to flummox me with a product or service that defies progressive enhancement, I’d have a hard time with something like, say, a game (although it would be pretty cool to build a text adventure that’s progressively enhanced into a first-person shooter). But an email client? That would work.

Identify core functionality.

Read emails. Write emails.

Make that functionality available using the simplest possible technology.

HTML for showing a list of emails, HTML for displaying the contents of the emails, HTML for the form you write the response in.

Enhance!

Now add all the enhancements that improve the experience—keyboard shortcuts; Ajax instead of full-page refreshes; local storage, all that stuff.

Can you build something that works just like Gmail without using any JavaScript? No. But that’s not what progressive enhancement is about. It’s about providing the core functionality (reading and writing emails) with the simplest possible technology (HTML) and then enhancing using more powerful technologies (like JavaScript).

Progressive enhancement isn’t about making a choice between using simpler more robust technologies or using more advanced features; it’s about using simpler more robust technologies and then using more advanced features. Have your cake and eat it.

Fortunately I no longer need to run this thought experiment to imagine what it would be like if something like Gmail were built with a progressive enhancement approach. That’s what HEY is.

Sam Stephenson describes the approach they took:

HEY’s UI is 100% HTML over the wire. We render plain-old HTML pages on the server and send them to your browser encoded as text/html. No JSON APIs, no GraphQL, no React—just form submissions and links.

If you think that sounds like the web of 25 years ago, you’re right! Except the HEY front-end stack progressively enhances the “classic web” to work like the “2020 web,” with all the fidelity you’d expect from a well-built SPA.

See? It’s not either resilient or modern—it’s resilient and modern. Have your cake and eat it.

And yet this supremely sensible approach is not considered “modern” web development:

The architecture astronauts who, for the past decade, have been selling us on the necessity of React, Redux, and megabytes of JS, cannot comprehend the possibility of building an email app in 2020 with server-rendered HTML.

HEY isn’t perfect by any means—they’ve got a lot of work to do on their accessibility. But it’s good to have a nice short answer to the question “But what about something like Gmail?”

It reminds me of responsive web design:

When Ethan Marcotte demonstrated the power of responsive design, it was met with resistance. “Sure, a responsive design might work for a simple personal site but there’s no way it could scale to a large complex project.”

Then the Boston Globe launched its responsive site. Microsoft made their homepage responsive. The floodgates opened again.

It’s a similar story today. “Sure, progressive enhancement might work for a simple personal site, but there’s no way it could scale to a large complex project.”

The floodgates are ready to open. We just need you to create the poster child for resilient web design.

It looks like HEY might be that poster child.

I have to wonder whether it’s coincidence or connection that this is a service that’s also tackling ethical issues like tracking. Their focus is very much on people above technology. They’ve taken a human-centric approach to their product and a human-centric approach to web development …because ultimately, that’s what progressive enhancement is.