Journal tags: javascript

Responsibility

My colleague Chris has written a terrific post over on the Clearleft blog: Is the planet the missing member of your project team?

Rather than hand-wringing and finger-wagging, it gets down to some practical steps that you—we—can take on every project.

Chris finishes by asking:

Let me know how you design with the environment in mind. What practical advice would you suggest?

Well, here’s something that I keep coming up against…

Chris shows that the environment can be part of project management, specifically the RACI methodology:

We list who is responsible, accountable, consulted, and informed within the project. It’s a simple exercise but the clarity is useful for identifying what expertise and input we should seek from the named individuals.

Having the planet be a proactive partner in your project ensures its needs are considered.

Whenever responsibilities are being assigned there are some things that inevitably fall through the cracks. One I’ve seen over and over again is responsibility for third-party scripts.

On the face of it this seems like another responsibility for developers. We’re talking about code here, right?

But in my experience it is never the developers adding “beacons” and other third-party embedded scripts.

Chris rightly points out:

Development decisions, visual design choices, content approach, and product strategy all contribute to the environmental impact of your website.

But what about sales and marketing? Often they’re the ones who’ll drop in a third-party script to track user journeys. That’s understandable. That’s kind of their job.

Dropping in one line of JavaScript seems like a victimless crime. It’s just one small script, right? But JavaScript can import more JavaScript. Tools like Request Map Generator can show just how much destruction third-party JavaScript can wreak:

You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.

Just to be clear, the people adding third-party scripts to websites usually aren’t doing so maliciously. They often don’t realise the negative effect the scripts will have on performance and the environment.

As is so often the case, this isn’t a technical problem. At root it’s about understanding people’s needs (like “I need a way to see what pages are converting!”) and finding a way to meet those needs without negatively impacting the planet. A good open-minded discussion can go a long way.

So I echo Chris’s call to think about environmental impacts from the very start of a project. Establish early on who will have the ability to add third-party scripts to the site. Do all of those people understand the responsibility that gives them?

I saw this lack of foresight in action on a project recently. The front-end development was going really well and the site was going to be exceptionally performant: green Lighthouse scores across the board. But when the site went live it had tracking scripts. That meant that users needed to consent to being tracked. That meant adding another third-party script to generate a consent banner. It completely tanked the Lighthouse scores.

I’m sure the people who added the tracking scripts and consent banners thought they had no choice. But there are alternatives. There are ways to get the data you need without the intrusive surveillance and performance-wrecking JavaScript.

The problem is that it’s not the norm. “Everyone else is doing it” was the justification for Flash intros two decades ago and it’s the justification for enshittification via third-party scripts now.

It doesn’t have to be this way.

Securing client-side JavaScript

I mentioned that I overhauled the JavaScript on The Session recently. That wasn’t just so that I could mess about with HTML web components. I’d been meaning to consolidate some scripts for a while.

Some of the pages on the site had inline scripts. These were usually one-off bits of functionality. But their presence meant that my content security policy wasn’t as tight as it could’ve been.

Being a community website, The Session accepts input from its users. Literally. I do everything I can to sanitise that input. It would be ideal if I could make sure that any JavaScript that slipped by wouldn’t execute. But as long as I had my own inline scripts, my content security policy had to allow them with 'unsafe-inline' in the script-src directive.

That’s why I wanted to refactor the JavaScript on my site and move everything to external JavaScript files.

In the end I got close, but there are still one or two pages with inline scripts. But that’s okay. I found a way to have my content security policy cake and eat it.

In my content security policy header I can specify that inline scripts are allowed, but only if they include a one-time token.

This one-time token is called a nonce. No, really. Stop sniggering. Naming things is hard. And occasionally unintentionally hilarious.

On the server, every time a page is requested it gets sent back with a header like this:

content-security-policy: script-src 'self' 'nonce-Cbb4kxOXIChJ45yXBeaq/w=='

That gobbledegook string is generated randomly every time. I’m using PHP to do this:

base64_encode(openssl_random_pseudo_bytes(16))
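If your server isn’t running PHP, the same idea translates to any stack that lets you set a response header. Here’s a minimal Node-flavoured sketch of the same idea; the sendPage function and the __NONCE__ placeholder are made up for illustration:

const crypto = require('crypto');

function sendPage(response, html) {
  // Generate a fresh random token for every response
  const nonce = crypto.randomBytes(16).toString('base64');
  response.setHeader(
    'Content-Security-Policy',
    `script-src 'self' 'nonce-${nonce}'`
  );
  // Inject the same token into the page before sending it
  response.end(html.replaceAll('__NONCE__', nonce));
}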

Then in the HTML I use the same string in any inline scripts on the page:

<script nonce="Cbb4kxOXIChJ45yXBeaq/w==">
…
</script>

Yes, HTML officially has an attribute called nonce.

It’s working a treat. The security headers for The Session are looking good. I have some more stuff in my content security policy—check out the details if you’re interested.

I initially thought I’d have to make an exception for the custom offline page on The Session. After all, that’s only going to be accessed when there is no server involved so I wouldn’t be able to generate a one-time token. And I definitely needed an inline script on that page in order to generate a list of previously-visited pages stored in a cache.

But then I realised that everything would be okay. When the offline page is cached, its headers are cached too. So the one-time token in the content security policy header still matches the one-time token used in the page.
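Here’s roughly what that looks like in a service worker. This is a hedged sketch; the cache name and offline URL are invented, but the key point is that cache.add stores the complete response, headers and all:

addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('static').then((cache) =>
      // The cached response keeps its content-security-policy
      // header, so the nonce in the header still matches the
      // nonce in the cached HTML
      cache.add('/offline')
    )
  );
});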

Most pages on The Session don’t have any inline scripts. For a while, every page had an inline script in the head of the document like this:

<script nonce="Cbb4kxOXIChJ45yXBeaq/w==">
document.documentElement.classList.add('hasJS');
</script>

This is something I’ve been doing for years: using JavaScript to add a class to the HTML. Then I can use the presence or absence of that class to show or hide elements that require JavaScript. I have another class called requiresJS that I put on any elements that need JavaScript to work (like buttons for copying to the clipboard, for example).

Then in my CSS I’d write:

html:not(.hasJS) .requiresJS {
  display: none;
}

If the hasJS class isn’t set on the root element, hide any elements with the requiresJS class.

I decided to switch over to using a scripting media query:

@media (scripting: none) {
  .requiresJS {
    display: none;
  }
}

This isn’t bulletproof by any means. It doesn’t account for browser extensions that disable JavaScript and it won’t be applied at all in older browsers. But I’m okay with that. I’ve put the destructive action in the more modern CSS:

I feel that the more risky action (hiding content) should belong to the more complex selector.

This means that there are situations where elements that require JavaScript will be visible, even if JavaScript isn’t available. But I’d rather that than the other way around: if those elements were hidden from browsers that could execute JavaScript, that would be worse.

My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.

So my client-side scripting style has evolved over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.
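To illustrate that third pattern, here’s a minimal sketch. The element name and its behaviour are invented for the example:

customElements.define('button-hello', class extends HTMLElement {
  connectedCallback() {
    // This runs on the initial page load and whenever an instance
    // is added to the document later, via Ajax or otherwise
    this.addEventListener('click', () => {
      alert('Hello!');
    });
  }
});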

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.
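For example, given a hypothetical element with a data-maximum attribute:

customElements.define('button-slideshow', class extends HTMLElement {
  connectedCallback() {
    // Given <button-slideshow data-maximum="10">…</button-slideshow>
    // the attribute is automatically reflected in the dataset property
    const maximum = Number(this.dataset.maximum); // 10
    console.log(maximum);
  }
});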

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components, in that they extend your existing markup, rather than JavaScript web components that replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.
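Here’s a hedged sketch of both halves of that conversation. The element names are invented and the real components on The Session do more, but the wiring is the same:

customElements.define('location-list', class extends HTMLElement {
  connectedCallback() {
    // Announce our arrival; bubbles: true lets the event
    // rise all the way up to the document
    this.dispatchEvent(new CustomEvent('newLocations', { bubbles: true }));
  }
});

customElements.define('location-map', class extends HTMLElement {
  connectedCallback() {
    document.addEventListener('newLocations', () => {
      // Update the map here
    });
  }
});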

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Displaying HTML web components

Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing input elements.

This would be the ideal use-case for the is attribute:

<input is="input-date-future" type="date">

Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.

So instead we have to make HTML web components by wrapping existing elements in new custom elements:

<input-date-future>
  <input type="date">
</input-date-future>

The end result is the same. Mostly.

Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.

So something I’ve started doing with HTML web components like these is adding something like this inside the connectedCallback method:

connectedCallback() {
  this.style.display = 'contents';
  …
}

This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.

Or you could (and probably should) do it in your stylesheet instead:

input-date-future {
  display: contents;
}

Just to be clear, you should only use display: contents if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.

It’s a bit of a hack to work around the lack of universal support for the is attribute, but it’ll do.

Pickin’ dates

I had the opportunity to trim some code from The Session recently. That’s always a good feeling.

In this case, it was a progressive enhancement pattern that was no longer needed. Kind of like removing a polyfill.

There are a couple of places on the site where you can input a date. This is exactly what input type="date" is for. But when I was making the interface, the support for this type of input was patchy.

So instead the interface used three select dropdowns: one for days, one for months, and one for years. Then I did a bit of feature detection and if the browser supported input type="date", I replaced the three selects with one date input.

It was a little fiddly but it worked.

Fast forward to today and input type="date" is supported across the board. So I threw away the JavaScript and updated the HTML to use date inputs by default. Nice!

I was discussing date inputs recently when I was talking to students in Amsterdam:

They’re given a PDF inheritance-tax form and told to convert it for the web.

That form included dates. The dates were all in the past so the students wanted to be able to set a max value on the datepicker. Ideally that should be done on the server, but it would be nice if you could easily do it in the browser too.

Wouldn’t it be nice if you could specify past dates like this?

<input type="date" max="today">

Or for future dates:

<input type="date" min="today">

Alas, no such syntactic sugar exists in HTML so we need to use JavaScript.

This seems like an ideal use-case for HTML web components:

Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

In this case, it would be nice to augment an existing input type="date" element. Something like this:

<input-date-past>
  <input type="date">
</input-date-past>

Here’s the JavaScript that does the augmentation:

customElements.define('input-date-past', class extends HTMLElement {
  constructor() {
    super();
  }
  connectedCallback() {
    // Set the max attribute to today's date in YYYY-MM-DD format
    this.querySelector('input[type="date"]').setAttribute('max', new Date().toISOString().substring(0,10));
  }
});

That’s it.

Here’s a CodePen where you can see it in action along with another HTML web component for future dates called, you guessed it, input-date-future.

Progressive disclosure defaults

When I wrote about my time in Amsterdam last week, I mentioned the task that the students were given:

They’re given a PDF inheritance-tax form and told to convert it for the web.

Rich had a question about that:

I’m curious to know if they had the opportunity to optimise the user experience of the form for an online environment, eg. splitting it up into a sequence of questions, using progressive disclosure, branching based on inputs, etc?

The answer is yes, very much so. Progressive disclosure was a very clear opportunity for enhancement.

You know the kind of paper form where it says “If you answered no to this, then skip ahead to that”? On the web, we can do the skipping automatically. Or to put it another way, we can display a section of the form only when the user has ticked the appropriate box.

This is a classic example of progressive disclosure:

information is revealed when it becomes relevant to the current task.

But what should the mechanism be?

This is an interaction design pattern so JavaScript seems the best choice. JavaScript is for behaviour.

On the other hand, you can do this in CSS using the :checked pseudo-class. And the principle of least power suggests using the least powerful language suitable for a given task.

I’m torn on this. I’m not sure if there’s a correct answer. I’d probably lean towards JavaScript just because it’s then possible to dynamically update ARIA attributes like aria-expanded—very handy in combination with aria-controls. But using CSS also seems perfectly reasonable to me.

It was interesting to see which students went down the JavaScript route and which ones used CSS.

It used to be that using the :checked pseudo-class involved a general sibling combinator, like this:

input.disclosure-switch:checked ~ .disclosure-content {
  display: block;
}

That meant your markup had to follow a specific pattern where the elements needed to be siblings:

<div class="disclosure-container">
  <input type="checkbox" class="disclosure-switch">
  <div class="disclosure-content">
  ...
  </div>
</div>

But none of the students were doing that. They were all using :has(). That meant that their selector could be much more robust. Even if the nesting of their markup changes, the CSS will still work. Something like this:

.disclosure-container:has(.disclosure-switch:checked) .disclosure-content

That will target the .disclosure-content element anywhere inside a .disclosure-container that contains a checked .disclosure-switch. Much better! (Ignore these class names by the way—I’m just making them up to illustrate the idea.)

But just about every student ended up with something like this in their style sheets:

.disclosure-content {
  display: none;
}
.disclosure-container:has(.disclosure-switch:checked) .disclosure-content {
  display: block;
}

That gets my spidey-senses tingling. It doesn’t smell right to me. Here’s why…

The simpler selector is doing the more destructive action: hiding content. There’s a reliance on the more complex selector to display content.

If a browser understands the first ruleset but not the second, that content will be hidden by default.

I know that :has() is very well supported now, but this still makes me nervous. I feel that the more risky action (hiding content) should belong to the more complex selector.

Thanks to the :not() selector, you can reverse the logic of the progressive disclosure:

.disclosure-content {
  display: block;
}
.disclosure-container:not(:has(.disclosure-switch:checked)) .disclosure-content {
  display: none;
}

Now if a browser understands the first ruleset, but not the second, it’s not so bad. The content remains visible.

When I was explaining this way of thinking to the students, I used an analogy.

Suppose you’re building a physical product that uses electricity. What should happen if there’s a power cut? Like, if you’ve got a building with electric doors, what should happen when the power is cut off? Should the doors be locked by default? Or is it safer to default to unlocked doors?

It’s a bit of a tortured analogy, but it’s one I’ve used in the past when talking about JavaScript on the web. I like to think about JavaScript as being like electricity…

Take an existing product, like say, a toothbrush. Now imagine what you can do when you turbo-charge it with electricity: an electric toothbrush!

But also consider what happens when the electricity fails. Instead of the product becoming useless you want it to revert back to being a regular old toothbrush.

That’s the same mindset I’m encouraging for the progressive disclosure pattern. Make sure that the default state is safe. Then enhance.

Schooltijd

I was in Amsterdam last week. Usually I’m in that city for an event like the excellent CSS Day. Not this time. I was there as a guest of Vasilis. He invited me over to bother his students at the CMD (Communications and Multimedia Design) school.

There’s a specific module his students are working through that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.

Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?

Got your styles all looking good on the screen? Great! Now what about print styles?

Got form validation working? Great! Now how might you use local storage to save data locally?

As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.

Having to switch gears between modules reminded me of switching between prototypes and production:

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), they’ve got it front of mind. I’m jealous!

In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?

If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.

This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.

I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.

Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.

It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.
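For example, a clipboard enhancement might start life looking like this (the selectors here are hypothetical):

// Find the button for copying
const button = document.querySelector('button.copy');
// When the user clicks the button…
button.addEventListener('click', () => {
  // …copy the contents of the text field to the clipboard
  const text = document.querySelector('#shareable').value;
  navigator.clipboard.writeText(text);
});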

You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.

So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that about how to approach HTML, CSS, and JavaScript.

I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.

I’d love to do it again sometime.

Switching costs

Cory has published the transcript of his talk at the Transmediale festival in Berlin. It’s all about enshittification, and what we can collectively do to reverse it.

He succinctly describes the process of enshittification like this:

First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

More importantly, he describes the checks and balances that keep enshittification from happening, all of which have been dismantled over time: competition, regulation, self-help, and workers.

One of the factors that allows enshittification to proceed is a high switching cost:

Switching costs are everything you have to give up when you leave a product or service. In Facebook’s case, it was all the friends there that you followed and who followed you. In theory, you could have all just left for somewhere else; in practice, you were hamstrung by the collective action problem.

It’s hard to get lots of people to do the same thing at the same time.

We’ve seen this play out over at Twitter, where people I used to respect are still posting there as if it hasn’t become a cesspool of far-right racist misogyny reflecting its new owner’s values. But for a significant amount of people—including myself and anyone with a modicum of decency—the switching cost wasn’t enough to stop us getting the hell out of there. Echoing Robin’s observation, Cory says:

…the difference between “I hate this service but I can’t bring myself to quit it,” and “Jesus Christ, why did I wait so long to quit? Get me the hell out of here!” is razor thin.

If users can’t leave because everyone else is staying, then when everyone starts to leave, there’s no reason not to go too.

That’s terminal enshittification, the phase when a platform becomes a pile of shit. This phase is usually accompanied by panic, which tech bros euphemistically call ‘pivoting.’

Anyway, I bring this up because I recently read something else about switching costs, but in a very different context. Jake Lazaroff was talking about JavaScript frameworks:

I want to talk about one specific weakness of JavaScript frameworks: interoperability, or the lack thereof. Almost without exception, each framework can only render components written for that framework specifically.

As a result, the JavaScript community tends to fragment itself along framework lines. Switching frameworks has a high cost, especially when moving to a less popular one; it means leaving most of the third-party ecosystem behind.

That switching cost stunts framework innovation by heavily favoring incumbents with large ecosystems.

Sounds a lot like what Cory was describing with incumbents like Google, Facebook, Twitter, and Amazon.

And let’s not kid ourselves, when we’re talking about incumbent client-side JavaScript frameworks, we might mention Vue or some other contender, but really we’re talking about React.

React has massive switching costs. For over a decade now, companies have been hiring developers based on one criterion: do they know React?

“An expert in CSS you say? No thanks.”

“Proficient in vanilla JavaScript? Don’t call us, we’ll call you.”

Heck, if I were advising someone who was looking for a job in front-end development (as opposed to actually being good at front-end development; two different things), I’d tell them to learn React.

Just as everyone ended up on Facebook because everyone was on Facebook, everyone ended up using React because everyone was using React.

You can probably see where I’m going with this: the inevitable enshittification of React.

Just to be clear, I’m not talking about React getting shittier in terms of what it does. It’s always been a shitty technology for end users:

React is legacy tech from 2013 when browsers didn’t have template strings or a BFCache.

No, I’m talking about the enshittification of the developer experience …the developer experience being the thing that React supposedly has going for it, though as Simon points out, the developer experience has always been pretty crap:

Whether on purpose or not, React took advantage of this situation by continuously delivering or promising to deliver changes to the library, with a brand new API being released every 12 to 18 months. Those new APIs and the breaking changes they introduce are the new shiny objects you can’t help but chase. You spend multiple cycles learning the new API and upgrading your application. It sure feels like you are doing something, but in reality, you are only treading water.

Well, it seems like the enshittification of the React ecosystem is well underway. Cassidy is kind of annoyed at React. Tom is increasingly miffed about the state of React releases, and Matteo asks React, where are you going?

Personally, I would love it if more people were complaining about the dreadful user experience inflicted by client-side React. Instead the complaints are universally about the developer experience.

I guess doing the right thing for the wrong reasons is fine. It’s just a little dispiriting.

I sometimes feel like I’m living that old joke, where I’m the one in the restaurant saying “the food here is terrible!” and most of my peers are saying “I know! And such small portions!”

HTML web components

Web components have been around for quite a while, but it feels like they’re having a bit of a moment right now.

It turns out that the best selling point for web components was “wait and see.” For everyone who didn’t see the benefit of web components over being locked into a specific framework, time is proving to be a great teacher.

It’s not just that web components are portable. They’re also web standards, which means they’ll be around as long as web browsers. No framework can make that claim. As Jake Lazaroff puts it, web components will outlive your JavaScript framework.

At this point React is legacy technology, like Angular. Lots of people are still using it, but nobody can quite remember why. The decision-makers in organisations who chose to build everything with React have long since left. People starting new projects who still decide to build on React are doing it largely out of habit.

Others are making more sensible judgements and, having been bitten by lock-in in the past, are now giving web components a go.

If you’re one of those people making the move from React to web components, there’ll certainly be a bit of a learning curve, but that would be true of any technology change.

I have a suggestion for you if you find yourself in this position. Try not to bring React’s mindset with you.

I’m talking about the way React components are composed. There’s often lots of props doing heavy lifting. The actual component element itself might be empty.

If you want to apply that model to web components, you can. Lots of people do. It’s not unusual to see web components in the wild that look like this:

<my-component></my-component>

The custom element is just a shell. All the actual power is elsewhere. It’s in the JavaScript that does all kinds of clever things with the shadow DOM, templates, and slots.

There is another way. Ask, as Robin does, “what would HTML do?”

Think about composability with existing materials. Do you really need to invent an entirely new component from scratch? Or can you use HTML up until it reaches its limit and then enhance the markup?

Robin writes:

I don’t think we should see web components like the ones you might find in a huge monolithic React app: your Button or Table or Input components. Instead, I’ve started to come around and see Web Components as filling in the blanks of what we can do with hypertext: they’re really just small, reusable chunks of code that extends the language of HTML.

Dave talks about how web components can be HTML with superpowers. I think that’s a good attitude to have. Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

Where does the shadow DOM come into all of this? It doesn’t. And that’s okay. I’m not saying it should be avoided completely, but it should be a last resort. See how far you can get with the composability of regular HTML first.

Eric described his recent epiphany with web components. He created a super-slider custom element that wraps around an existing label and input type="range":

You just take some normal HTML markup, wrap it with a custom element, and then write some JS to add capabilities which you can then style with regular CSS!  Everything’s of the Light Side of the Web.  No need to pierce the Vale of Shadows or whatever.

When you wrap some existing markup in a custom element and then apply some new behaviour with JavaScript, technically you’re not doing anything you couldn’t have done before with some DOM traversal and event handling. But it’s less fragile to do it with a web component. It’s portable. It obeys the single responsibility principle. It only does one thing but it does it well.

Jim created an icon-list custom element that wraps around a regular ul populated with li elements. But he feels almost bashful about even calling it a web component:

Maybe I shouldn’t be using the term “web component” for what I’ve done here. I’m not using shadow DOM. I’m not using the templates or slots. I’m really only using custom elements to attach functionality to a specific kind of component.

I think what Eric and Jim are doing is exemplary. See also Zach’s web components.

At the end of his post, Eric says he’d like a nice catchy term for these kinds of web components. In Dave’s catalogue of web components, they’re called “element extensions.” I like that. It’s pretty catchy.

Or we could call them “HTML web components.” If your custom element is empty, it’s not an HTML web component. But if you’re using a custom element to extend existing markup, that’s an HTML web component.

React encouraged a mindset of replacement: “forget what browsers can do; do everything in a React component instead, even if you’re reinventing the wheel.”

HTML web components encourage a mindset of augmentation instead.

event.target.closest

Eric mentioned the JavaScript closest method. I use it all the time.

When I wrote the book DOM Scripting back in 2005, I’d estimate that 90% of the JavaScript I was writing boiled down to:

  1. Find these particular elements in the DOM and
  2. When the user clicks on one of them, do something.

It wasn’t just me either. I reckon that was 90% of most JavaScript on the web: progressive disclosure widgets, accordions, carousels, and so on.

That’s one of the reasons why jQuery became so popular. That first step (“find these particular elements in the DOM”) used to be a finicky affair involving getElementsByTagName, getElementById, and other long-winded DOM methods. jQuery came along and allowed us to use CSS selectors.

These days, we don’t need jQuery for that because we’ve got querySelector and querySelectorAll (and we can thank jQuery for their existence).

Let’s say you want to add some behaviour to every button element with a class of special. Or maybe you use a data- attribute instead of the class attribute; the same principle applies. You want something special to happen when the user clicks on one of those buttons.

  1. Use querySelectorAll('button.special') to get a list of all the right elements,
  2. Loop through the list, and
  3. Attach addEventListener('click') to each element.
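
In code, those three steps look something like this (sticking with the button.special example):

function doSomethingSpecial() {
  // do something
}
document.querySelectorAll('button.special').forEach((button) => {
  button.addEventListener('click', doSomethingSpecial, false);
});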

That’s fine for a while. But if you’ve got a lot of special buttons, you’ve now got a lot of event listeners. You might be asking the browser to do a lot of work.

There’s another complication. The code you’ve written runs once, when the page loads. Suppose the contents of the page have changed in the meantime. Maybe elements are swapped in and out using Ajax. If a new special button shows up on the page, it won’t have an event handler attached to it.

You can switch things around. Instead of adding lots of event handlers to lots of elements, you can add one event handler to the root element. Then figure out whether the element that just got clicked is special or not.

That’s where closest comes in. It makes this kind of event handling pretty straightforward.

To start with, attach the event listener to the document:

document.addEventListener('click', doSomethingSpecial, false);

That function doSomethingSpecial will be executed whenever the user clicks on anything. Meanwhile, if the contents of the document are updated via Ajax, no problem!

Use the closest method in combination with the target property of the event to figure out whether that click was something you’re interested in:

function doSomethingSpecial(event) {
  if (event.target.closest('button.special')) {
    // do something
  }
}

There you go. Like querySelectorAll, the closest method takes a CSS selector—thanks again, jQuery!

Oh, and if you want to reduce the nesting inside that function, you can reverse the logic and return early like this:

function doSomethingSpecial(event) {
  if (!event.target.closest('button.special')) return;
  // do something
}

There’s a similar method to closest called matches. But that will only work if the user clicks directly on the element you’re interested in. If the element is nested within other elements, matches might not work, but closest will.
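You can see the difference with a quick sketch. Assuming some hypothetical markup where the special button contains an icon, like <button class="special"><svg>…</svg></button>:

document.addEventListener('click', (event) => {
  // A click usually lands on the svg inside the button,
  // so event.target is the svg, not the button
  console.log(event.target.matches('button.special'));   // false
  console.log(!!event.target.closest('button.special')); // true
}, false);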

Like Eric said:

Very nice.

Speedy tunes

Performance is a high priority for me with The Session. It needs to work for people all over the world using all kinds of devices.

My main strategy for ensuring good performance is to diligently apply progressive enhancement. The core content is available to any device that can render HTML.

To keep things performant, I’ve avoided as many assets (or, more accurately, liabilities) as possible. No unnecessary images. No superfluous JavaScript libraries. Not even any web fonts (gasp!). And definitely no third-party resources.

The pay-off is a speedy site. If you want to see lab data, run a page from The Session through Lighthouse. To see field data, take a look at the Chrome UX Report (CrUX).

But the devil is in the details. Even though most pages on The Session are speedy, the outliers have bothered me for a while.

Take a typical tune page on the site. The data is delivered from the server as HTML, which loads nice and quick. That data includes the notes for the tune settings, written in ABC notation, a nice lightweight text format.

Then the enhancement happens. Using Paul Rosen’s brilliant abcjs JavaScript library, those ABCs are converted into SVG sheetmusic.

So on tune pages there’s an additional download for that JavaScript library. That’s not so bad though—I’m using a service worker to cache that file so there’ll only ever be one initial network request.

If a tune has just a few settings, the page remains nice and zippy. But if a tune has lots of settings, the computation starts to add up. Converting all those settings from ABC to SVG starts to take a cumulative toll on the main thread.

I pondered ways to avoid that conversion step. Was there some way of pre-generating the SVGs on the server rather than doing it all on the client?

In theory, yes. I could spin up a headless browser, run the JavaScript and take a snapshot. But that’s a bit beyond my backend programming skills, so I’ve taken a slightly different approach.

The first time anyone hits a tune page, the ABCs get converted to SVGs as usual. But now there’s one additional step. I grab the generated markup and send it as an Ajax payload to an endpoint on my server. That endpoint stores the sheetmusic as a file in a cache.
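On the client, that extra step is little more than a fetch request. This is a hedged sketch; the endpoint, the selector, and the payload shape are all invented:

// Grab the SVG markup that abcjs just generated
const sheetmusic = document.querySelector('.notation').innerHTML;
// Post it to the server so it can be cached as a file
fetch('/cache-sheetmusic', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ page: location.pathname, sheetmusic: sheetmusic })
});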

Next time someone hits that page, there’s a server-side check to see if the sheetmusic has been cached. If it has, send that down the wire embedded directly in the HTML.

The idea is that over time, most of the sheetmusic on the site will transition from being generated in the browser to being stored on the server.

So far it’s working out well.

Take a really popular tune like The Lark In The Morning. There are twenty settings, and each one has four parts. Previously that would have resulted in a few seconds of parsing and rendering time on the main thread. Now everything is delivered up-front.

I’m not out of the woods. A page like that with lots of sheetmusic and plenty of comments is going to have a hefty page weight and a large DOM size. I’ve still got a fair bit of main-thread work happening, but now the bulk of it is style and layout, whereas previously I had the JavaScript overhead on top of that.

I’ll keep working on it. But overall, the speed improvement is great. A typical tune page is now very speedy indeed.

It’s like a microcosm of web performance in general: respect your users’ time, network connection and battery life. If that means shifting the work from the browser to the server, do it!

Multi-page web apps

I received this email recently:

Subject: multi-page web apps

Hi Jeremy,

lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.

Perhaps you can refer me to some sources where your position and reasoning is evident?

Here’s the response I sent…

Hi,

You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:

https://resilientwebdesign.com/

The short answer to your question is this: user experience.

The slightly longer answer…

For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the most number of users. That’s because a user’s web browser takes care of most of the heavy lifting.

Navigating from one page to another? That’s taken care of with links.

Gathering information from a user to process on a server? That’s taken care of with forms.

This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.

These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.

There are some cases where you’d want a state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.

But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.

Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.

So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.

Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.

That’s why I find the framing of your question a little concerning. It should be inverted. The default approach should be to assume a multi-page approach (which is the way the web works by default). Deciding to take a JavaScript-driven single-page approach should be the exception.

It’s kind of like when people ask, “Why don’t you have children?” Surely it’s the decision to have a child that should require deliberation and commitment, not the other way around.

When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.

I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.

I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.

Anyway. That’s my answer. User experience.

Cheers,

Jeremy

Relative times

Last week Phil posted a little update about his excellent site, ooh.directory:

If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.

Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.

I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.

I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.

“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.

Sure enough, we’ve now got the Intl.RelativeTimeFormat object. It’s got browser support across the board.

Here’s the code I wrote.

I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method and output as innerText.

You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.

It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.

Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.
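Boiled right down, the idea looks something like this. It’s a condensed sketch rather than my actual code; the real version in the gist handles more units:

const formatter = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });

function updateTimes() {
  document.querySelectorAll('time[datetime]').forEach((time) => {
    // Negative for the past, positive for the future
    const elapsed = (new Date(time.dateTime) - Date.now()) / 1000;
    // Pick the largest unit appropriate to the elapsed time
    if (Math.abs(elapsed) < 60) {
      time.innerText = formatter.format(Math.round(elapsed), 'second');
    } else if (Math.abs(elapsed) < 3600) {
      time.innerText = formatter.format(Math.round(elapsed / 60), 'minute');
    } else {
      time.innerText = formatter.format(Math.round(elapsed / 3600), 'hour');
    }
  });
}

setInterval(updateTimes, 30000); // every thirty seconds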

You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)

I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, you’ll need to make sure that your datetime attributes include timezone information; if there’s no timezone info, UTC is assumed.

This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.

Read-only web apps

The most cartoonish misrepresentation of progressive enhancement is that it means making everything work without JavaScript.

No. Progressive enhancement means making sure your core functionality works without JavaScript.

In my book Resilient Web Design, I quoted Wilto:

Lots of cool features on the Boston Globe don’t work when JS breaks; “reading the news” is not one of them.

That’s an example where the core functionality is readily identifiable. It’s a newspaper. The core functionality is reading the news.

It isn’t always so straightforward though. A lot of services that self-identify as “apps” will claim that even their core functionality requires JavaScript.

Surely I don’t expect Gmail or Google Docs to provide core functionality without JavaScript?

In those particular cases, I actually do. I believe that a textarea in a form would do the job nicely. But I get it. That might take a lot of re-engineering.

So how about this compromise…

Your app should work in a read-only mode without JavaScript.

Without JavaScript I should still be able to read my email in Gmail, even if you don’t let me compose, reply, or organise my messages.

Without JavaScript I should still be able to view a document in Google Docs, even if you don’t let me comment or edit the document.

Even with something as interactive as Figma or Photoshop, I think I should still be able to view a design file without JavaScript.

Making this distinction between read-only mode and read/write mode could be very useful, especially at the start of a project.

Begin by creating the read-only mode that doesn’t require JavaScript. That alone will make for a solid foundation to build upon. Now you’ve built a fallback for any unexpected failures.

Now start adding the read/write functionality. You’re enhancing what’s already there. Progressively.

Heck, you might even find some opportunities to provide some read/write functionality that doesn’t require JavaScript. But if JavaScript is needed, that’s absolutely fine.

So if you’re about to build a web app and you’re pretty sure it requires JavaScript, why not pause and consider whether you can provide a read-only version?

Web Audio API update on iOS

I documented a weird bug with web audio on iOS a while back:

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

I found a workaround but it was really hacky. By playing a one-second long silent mp3 file using audio, you could “kick” the sound into behaving. Then you can use the Web Audio API and it would play consistently.
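For the record, the hack was as simple as it sounds. Something like this, with an invented file path:

// Play one second of silence to "kick" audio into behaving
const kick = new Audio('/sounds/silence.mp3');
kick.play();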

Well, that’s all changed with the latest release of Mobile Safari. Now what happens is that the Web Audio stuff plays …for one second. And then stops.

I removed the hacky workaround and the Web Audio API started behaving itself again …as long as your device isn’t set to silent.

The good news is that the Web Audio behaviour seems to be consistent now. It only plays if the device isn’t muted. This restriction doesn’t apply to video and audio elements; they will still play even if your device is set to silent.

This discrepancy between the two different ways of playing audio is kind of odd, but at least now the Web Audio behaviour is predictable.

You can hear the Web Audio API in action by going to any tune on The Session and pressing the “play audio” button.

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.
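To illustrate where the work happens, here’s a minimal sketch of an Eleventy config (not from any real project). All of this JavaScript runs at build time; none of it ships to the browser:

    // .eleventy.js: this code runs once, at build time.
    // End users only ever receive the HTML it generates.
    module.exports = function (eleventyConfig) {
      // copy static assets straight through to the output folder
      eleventyConfig.addPassthroughCopy('images');
      return {
        dir: { input: 'src', output: '_site' },
      };
    };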

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no differently from server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left-pad numbers or something.
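For what it’s worth, padding numbers is now a one-liner in the browser, no dependency required:

    // String.prototype.padStart has been built into browsers for years.
    const padded = String(7).padStart(3, '0'); // "007"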

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Control

In two of my recent talks—In And Out Of Style and Design Principles For The Web—I finish by looking at three different components:

  1. a button,
  2. a dropdown, and
  3. a datepicker.

In each case you could use native HTML elements:

  1. button,
  2. select, and
  3. input type="date".

Or you could use divs with a whole bunch of JavaScript and ARIA.

In the case of a datepicker, I totally understand why you’d go for writing your own JavaScript and ARIA. The native HTML element is quite restricted, especially when it comes to styling.

In the case of a dropdown, it’s less clear-cut. Personally, I’d use a select element. While it’s currently impossible to style the open state of a select element, you can style the closed state with relative ease. That’s good enough for me.
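For the record, that closed state takes ordinary CSS quite happily. A sketch:

    /* Styling the closed state of a select element */
    select {
      font: inherit;
      padding: 0.5em 1em;
      border: 1px solid #999;
      border-radius: 0.5em;
      background-color: #fff;
      color: #333;
    }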

Still, I can understand why that wouldn’t be good enough for some cases. If pixel-perfect consistency across platforms is a priority, then you’re going to have to break out the JavaScript and ARIA.

Personally, I think chasing pixel-perfect consistency across platforms isn’t even desirable, but I get it. I too would like to have more control over styling select elements. That’s one of the reasons why the work being done by the Open UI group is so important.

But there’s one more component: a button.

Again, you could use the native button element, or you could use a div or a span and add your own JavaScript and ARIA.

Now, in this case, I must admit that I just don’t get it. Why wouldn’t you just use the native button element? It has no styling issues and the browser gives you all the interactivity and accessibility out of the box.
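To make the trade-off concrete, here’s a rough sketch of what recreating a button from a div actually entails (the save function is hypothetical):

    <!-- The native element: focusable, keyboard-operable, and announced
         as a button by assistive technology, all for free -->
    <button type="button">Save</button>

    <!-- The DIY version needs a role, a tabindex, and scripting -->
    <div role="button" tabindex="0" id="save">Save</div>

    const fakeButton = document.getElementById('save');
    fakeButton.addEventListener('click', save);
    fakeButton.addEventListener('keydown', (event) => {
      // Native buttons respond to both Enter and Space; a div doesn't.
      if (event.key === 'Enter' || event.key === ' ') {
        event.preventDefault();
        save();
      }
    });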

I’ve been trying to understand the mindset of a developer who wouldn’t use a native button element. The easy answer would be to dismiss them as just bad people. But that would probably be lazy and inaccurate. Nobody sets out to make a website with poor performance or poor accessibility. And yet, by choosing not to use the native HTML element, that’s what’s likely to happen.

I think I might have finally figured out what might be going on in the mind of such a developer. I think the issue is one of control.

When I hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, I think “Great! That’s less work for me. I can just let the browser deal with it.” In other words, I relinquish control to the browser (though not entirely—I still want the styling to be under my control as much as possible).

But I now understand that someone else might hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, and think “Uh-oh! What if there are unexpected side-effects of these built-in behaviours that might bite me on the ass?” In other words, they don’t trust the browsers enough to relinquish control.

I get it. I don’t agree. But I get it.

If your background is in computer science, then the ability to precisely predict how a programme will behave is a virtue. Any potential side-effects that aren’t within your control are undesirable. The only way to ensure that an interface will behave exactly as you want is to write it entirely from scratch, even if that means using more JavaScript and ARIA than is necessary.

But I don’t think it’s a great mindset for the web. The web is filled with uncertainties—browsers, devices, networks. You can’t possibly account for all of the possible variations. On the web, you have to relinquish some control.

Still, I’m glad that I now have a bit more insight into why someone would choose to attempt to retain control by using div, JavaScript and ARIA. It’s not what I would do, but I think I understand the motivation a bit better now.

Image previews with the FileReader API

I added a “notes” section to this website eight years ago. I set it up so that notes could be syndicated to Twitter. Ever since then, that’s the only way I post to Twitter.

A few months later I added photos to my notes. Again, this would get syndicated to Twitter.

Something’s bothered me for a long time though. I initially thought that if I posted a photo, then the accompanying text would serve as a description of the image. It could effectively act as the alt text for the image, I thought. But in practice it didn’t work out that way. The text was often a commentary on the image, which isn’t the same as a description of its contents.

I needed a way to store alt text for images. To make it more complicated, one note can have multiple images. So even though a note is a single entry in my database, I somehow needed to store a separate string of descriptive text for each image in that note.

I eventually settled on using the file system instead of the database. The images themselves are stored in separate folders, so I figured I could have an accompanying alt.txt file in each folder.

Take this note from yesterday as an example. Different sizes of the image are stored in the folder /images/uploaded/19077. Here’s a small version of the image and here’s the original. In that same folder is the alt text.

This means I’m reading a file every time I need the alt text instead of reading from a database, which probably isn’t the most performant way of doing it, but it seems to be working okay.


In order to add the alt text to the image, I needed to update my posting interface. By default it’s a little textarea, followed by a file upload input, followed by a toggle (a checkbox under the hood) to choose whether or not to syndicate the note to Twitter.

The interface now updates automatically as soon as I use that input type="file" to choose any images for the note. Using the FileReader API, I show a preview of the selected images right after the file input.

Here’s the code if you ever need to do something similar. I’ve abstracted it somewhat in that gist—you should be able to drop it into any page that includes input type="file" accept="image/*" and it will automatically generate the previews.
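The gist boils down to something like this (a simplified sketch, not the gist verbatim):

    // For every file input that accepts images, show a preview of each
    // selected image immediately after the input.
    document.querySelectorAll('input[type="file"][accept^="image"]').forEach((input) => {
      input.addEventListener('change', () => {
        for (const file of input.files) {
          const reader = new FileReader();
          reader.addEventListener('load', () => {
            const preview = document.createElement('img');
            preview.src = reader.result; // a data: URL of the image
            input.insertAdjacentElement('afterend', preview);
          });
          reader.readAsDataURL(file);
        }
      });
    });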

I was pleasantly surprised at how easy this was. The FileReader API worked just as expected without any gotchas. I think I always assumed that this would be quite complex to do because once upon a time, it was quite complex (or impossible) to do. But now it’s wonderfully straightforward. Story of the web.

My own version of the script does a little bit more; it also generates another little textarea right after each image preview, which is where I write the accompanying alt text.

I’ve also updated my server-side script that handles the syndication to Twitter. I’m using the /media/metadata/create method to provide the alt text. But for some reason it’s not working. I can’t figure out why. I’ll keep working on it.

In the meantime, if you’re looking at an image I’ve posted on Twitter and you’re judging me for its lack of alt text, my apologies. But each tweet of mine includes a link back to the original note on this site and you will most definitely find the alt text for the image there.

Bugblogging

A while back I wrote a blog post called Web Audio API weirdness on iOS. I described a bug in Mobile Safari along with a hacky fix. I finished by saying:

If you ever find yourself getting weird but inconsistent behaviour on iOS using the Web Audio API, this nasty little hack could help.

Recently Jonathan Aldrich posted a thread about the same bug. He included a link to my blog post. He also said:

Thanks so much for your post, this was a truly pernicious problem!

That warms the cockles of my heart. It’s very gratifying to know that documenting the bug (and the fix) helped someone out. Or, as I put it:

Yay for bugblogging!

Forgive the Germanic compound word, but in this case I think it fits.

Bugblogging doesn’t need to involve a solution. Just documenting a bug is a good thing to do. Recently I documented a bug with progressive web apps on iOS. Before that I documented a bug in Facebook Container for Firefox. When I documented some weird behaviour with the Web Share API in Safari on iOS, I wasn’t even sure it was a bug but Tess was pretty sure it was and filed a proper bug report.

I’ve benefited from other people bugblogging. Phil Nash wrote Service workers: beware Safari’s range request. That was exactly what I needed to solve a problem I’d been having. And then that post about Phil solving my problem helped Peter Rukavina solve a similar issue so he wrote Phil Nash and Jeremy Keith Save the Safari Video Playback Day.

Again, this warmed the cockles of my heart. Bugblogging is worth doing just for the reward of that feeling.

There’s a similar kind of blog post where, instead of writing about a bug, you write about a particular technique. In one way, this is the opposite of bugblogging because you’re writing about things working exactly as they should. But these posts have a similar feeling to bugblogging because they also result in a warm glow when someone finds them useful.

Here are some recent examples of these kinds of posts—tipblogging?—that I’ve found useful:

All three are very handy tips. Thanks, Eric! Thanks, Rich! Thanks, Stephanie!

Suspicion

I’ve already had some thoughtful responses to yesterday’s post about trust. I wrapped up my thoughts with a request:

I would love it if someone could explain why they avoid native browser features but use third-party code.

Chris obliged:

I can’t speak for the industry, but I have a guess. Third-party code (like the referenced Bootstrap and React) have a history of smoothing over significant cross-browser issues and providing better-than-browser ergonomic APIs. jQuery was created to smooth over cross-browser JavaScript problems. That’s trust.

Very true! jQuery is the canonical example of a library smoothing over the bumpy landscape of browser compatibilities. But jQuery is also the canonical example of a library we no longer need because the browsers have caught up …and those browsers support standards directly influenced by jQuery. That’s a library success story!
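The querySelector family of methods is the clearest example: a standard modelled directly on jQuery’s selector engine. Compare (class names illustrative):

    // jQuery, then:
    $('.note').addClass('highlight');

    // Web standards, now:
    document.querySelectorAll('.note')
      .forEach((element) => element.classList.add('highlight'));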

Charles Harries takes on my question in his post Libraries over browser features:

I think this perspective of trust has been hammered into developers over the past maybe like 5 years of JavaScript development based almost exclusively on inequality of browser feature support. Things are looking good in 2022; but as recently as 2019, 4 of the 5 top web developer needs had to do with browser compatibility.

Browser compatibility is one of the underlying promises that libraries—especially the big ones that Jeremy references, like React and Bootstrap—make to developers.

So again, it’s browser incompatibilities that made libraries attractive.

Jim Nielsen responds with the same message in his post Trusting Browsers:

We distrust the browser because we’ve been trained to. Years of fighting browser deficiencies where libraries filled the gaps. Browser enemy; library friend.

For example: jQuery did wonders to normalize working across browsers. Write code once, run it in any browser — confidently.

Three for three. My question has been answered: people gravitated towards libraries because browsers had inconsistent implementations.

I’m deliberately using the past tense there. I think Jim is onto something when he says that we’ve been trained not to trust browsers to have parity when it comes to supporting standards. But that has changed.

Charles again:

This approach isn’t a sustainable practice, and I’m trying to do as little of it as I can. Jeremy is right to be suspicious of third-party code. Cross-browser compatibility has gotten a lot better, and campaigns like Interop 2022 are doing a lot to reduce the burden. It’s getting better, but the exasperated I-just-want-it-to-work mindset is tough to uninstall.

I agree. Inertia is a powerful force. No matter how good cross-browser compatibility gets, it’s going to take a long time for developers to shed their suspicion.

Jim is a glass-half-full kind of guy:

I’m optimistic that trust in browser-native features and APIs is being restored.

He also points to a very sensible mindset when it comes to third-party libraries and frameworks:

In this sense, third-party code and abstractions can be wonderful polyfills for the web platform. The idea being that the default posture should be: leverage as much of the web platform as possible, then where there are gaps to creating great user experiences, fill them in with exploratory library or framework features (features which, conceivably, could one day become native in browsers).

Yes! A kind of progressive enhancement approach to using third-party code makes a lot of sense. I’ve always maintained that you should treat libraries and frameworks like cattle, not pets. Don’t get too attached. If the library is solving a genuine need, it will be replaced by stable web standards in browsers (again, see jQuery).

I think that third-party libraries and frameworks work best as polyfills. But the whole point of polyfills is that you only use them when the browsers don’t supply features natively (and you also go back and remove the polyfill later when browsers do support the feature). But that’s not how people are using libraries and frameworks today. Developers are reaching for them by default instead of treating them as a last resort.
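In practice, that posture amounts to feature detection first, with third-party code as the fallback. A sketch (the init function and polyfill path are placeholders):

    // Default to the platform; only reach for third-party code when
    // the browser doesn't provide the feature natively.
    if ('IntersectionObserver' in window) {
      init();
    } else {
      // the dynamic import only runs (and costs anything) when needed
      import('/js/intersection-observer-polyfill.js').then(init);
    }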

I like Jim’s proposed design principle:

Where available, default to browser-native features over third party code, abstractions, or idioms.

(P.S. It’s kind of lovely to see this kind of thoughtful blog-to-blog conversation happening. Right at a time when Twitter is about to go down the tubes, this is a demonstration of an actual public square with more nuanced discussion. Make your own website and join the conversation!)