Journal tags: priorities


Creativity

It’s like a little mini conference season here in Brighton. Tomorrow is ffconf, which I’m really looking forward to. Last week was UX Brighton, which was thoroughly enjoyable.

Maybe it’s because the theme this year was all around creativity, but all of the UX Brighton speakers gave entertaining presentations. The topics of innovation and creativity were tackled from all kinds of different angles. I was having flashbacks to the Clearleft podcast episode on innovation—have a listen if you haven’t already.

As the day went on though, something was tickling at the back of my brain. Yes, it’s great to hear about ways to be more creative and unlock more innovation. But maybe there was something being left unsaid: finding novel ways of solving problems and meeting user needs should absolutely be done …once you’ve got your basics sorted out.

If your current offering is slow, hard to use, or inaccessible, that’s the place to prioritise time and investment. It doesn’t have to be at the expense of new initiatives: this can happen in parallel. But there’s no point spending all your efforts coming up with the most innovative lipstick for a pig.

On that note, I see that more and more companies are issuing breathless announcements about their new “innovative” “AI” offerings. All the veneer of creativity without any of the substance.

Agile design principles

I may have mentioned this before, but I’m a bit of a nerd for design principles. Have I shown you my equivalent of an interesting rock collection lately?

If you think about design principles for any period of time, it inevitably gets very meta very quickly. You start thinking about what makes for good design principles. In other words, you start wondering if there are design principles for design principles.

I’ve written before about how I think good design principles should encode some level of prioritisation. The classic example is the HTML design principle called the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s wonderfully practical!

I realised recently that there’s another set of design principles that put prioritisation front and centre—the Agile manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

And there’s this excellent explanation which could just as well apply to the priority of constituencies:

That is, while there is value in the items on the right, we value the items on the left more.

Yes! That’s the spirit!

Ironically, the Agile manifesto also contains a section called principles behind the Agile manifesto which are …less good (at least they’re less good as design principles—they’re fine as hypotheses to be tested).

Agile is far from perfect. See, for example, Miriam Posner’s piece Agile and the Long Crisis of Software. But where Agile isn’t fulfilling its promise, I’d say it’s not because of its four design principles. If anything, I think the problems arise from organisations attempting to implement Agile without truly internalising the four principles.

Oh, and that’s another thing I like about the Agile manifesto as a set of design principles—the list of prioritised principles is mercifully short. Just four lines.

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Updating Safari

Safari has been subjected to a lot of ire recently. Most of that ire has been aimed at the proposed changes to the navigation bar in Safari on iOS—moving it from a fixed top position to a floaty bottom position right over the content you’re trying to interact with.

Courage.

It remains to be seen whether this change will actually ship. That’s why it’s in beta—to gather all the web’s hot takes first.

But while this very visible change is dominating the discussion, invisible changes can be even more important. Or in the case of Safari, the lack of changes.

Compared to other browsers, Safari lags far behind when it comes to shipping features. I’m not necessarily talking about cutting-edge features either. These are often standards that have been out for years. This creates a gap—albeit an invisible one—between Safari and other browsers.

Jorge Arango has noticed this gap:

I use Safari as my primary browser on all my devices. I like how Safari integrates with the rest of the OS, its speed, and privacy features. But, alas, I increasingly have issues rendering websites and applications on Safari.

That’s the perspective of an end-user. Developers who have to deal with the gap in features are more, um, strident in their opinions. Perry Sun wrote For developers, Apple’s Safari is crap and outdated:

Don’t get me wrong, Safari is a very good web browser, delivering fast performance and solid privacy features.

But at the same time, the lack of support for key web technologies and APIs has been both perplexing and annoying at the same time.

Alas, that post also indulges in speculation about Apple’s motives which always feels a bit too much like a conspiracy theory to me. Baldur Bjarnason has more to say on that topic in his post Kremlinology and the motivational fallacy when blogging about Apple. He also points to a good example of critiquing Safari without speculating about motives: Dave’s post One-offs and low-expectations with Safari, which documents all the annoying paper cuts inflicted by Safari’s “quirks.”

Another deep dive that avoids speculating about motives comes from Tim Perry: Safari isn’t protecting the web, it’s killing it. I don’t agree with everything in it. I think that Apple’s—and Mozilla’s—objections to some device APIs are informed by a real concern about privacy and security. But I agree with his point that it’s not enough to just object; you’ve got to offer an alternative vision too.

That same post has a litany of uncontroversial features that shipped in Safari looong after they shipped in other browsers:

Again: these are not contentious features shipping by only Chrome, they’re features with wide support and no clear objections, but Safari is still not shipping them until years later. They’re also not shiny irrelevant features that “bloat the web” in any sense: each example I’ve included above primarily improves core webpage UX and performance. Safari is slowing down progress here.

But perhaps most damning of all is how Safari deals with bugs.

A recent release of Safari shipped with a really bad Local Storage bug. The bug was fixed within a day. Yay! But the fix won’t ship until …who knows?

This is because browser updates are tied to operating system updates. Yes, this is just like the 90s when Microsoft claimed that Internet Explorer was intrinsically linked to Windows (a tactic that didn’t work out too well for them in the subsequent court case).

I don’t get it. I’m pretty sure that other Apple products ship updates and fixes independently of OS releases. I’m sure I’ve received software updates for Keynote, GarageBand, and other pieces of software made by Apple.

And yet, of all the applications that need a speedy update cycle—a user agent for the World Wide Web—Apple’s version is needlessly delayed by the release cycle of the entire operating system.

I don’t want to speculate on why this might be. I don’t know the technical details. But I suspect that the root cause might not be technical in nature. Apple have always tied their browser updates to OS releases. If Google’s cardinal sin is avoiding anything “Not Invented Here”, Apple’s downfall is “We’ve always done it this way.”

Evergreen browsers update in the background, usually at regular intervals. Firefox is an evergreen browser. Chrome is an evergreen browser. Edge is an evergreen browser.

Safari is not an evergreen browser.

That’s frustrating when it comes to new features. It’s unforgivable when it comes to bugs.

At least on Apple’s desktop computers, users have the choice to switch to a different browser. But on Apple’s mobile devices, users have no choice but to use Safari’s rendering engine, bugs and all.

As I wrote when I had to deal with one of Safari’s bugs:

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Get the FLoC out

I’ve always liked the way that web browsers are called “user agents” in the world of web standards. It’s such a succinct summation of what browsers are for, or more accurately who browsers are for. Users.

The term makes sense when you consider that the internet is for end users. That’s not to be taken for granted. This assertion is now enshrined in the Internet Engineering Task Force’s RFC 8890—like Magna Carta for the network age. It’s also a great example of prioritisation in a design principle:

When there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users.

So when a web browser—ostensibly an agent for the user—prioritises user-hostile third parties, we get upset.

Google Chrome—ostensibly an agent for the user—is running an origin trial for Federated Learning of Cohorts (FLoC). This is not a technology that serves the end user. It is a technology that serves third parties who want to target end users. The most common use case is behavioural advertising, but targeting could be applied for more nefarious purposes.

The Electronic Frontier Foundation wrote an explainer last month: Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.

Let’s back up a minute and look at why this is happening. End users are routinely targeted today (for behavioural advertising and other use cases) through third-party cookies. Some user agents like Apple’s Safari and Mozilla’s Firefox are stamping down on this, disabling third party cookies by default.

Seeing which way the wind is blowing, Google’s Chrome browser will also disable third-party cookies at some time in the future (they’re waiting to shut that barn door until the fire is good’n’raging). But Google isn’t just in the browser business. Google is also in the ad tech business. So they still want advertisers to be able to target end users.

Yes, this is quite the cognitive dissonance: one part of the business is building a user agent while a different part of the company is working on ways of tracking end users. It’s almost as if one company shouldn’t simultaneously be the market leader in three separate industries: search, advertising, and web browsing. (Seriously though, I honestly think Google’s search engine would get better if it were split off from the parent company, and I think that Google’s web browser would also get better if it were a separate enterprise.)

Anyway, one possible way of tracking users without technically tracking individual users is to assign them to buckets, or cohorts of interest based on their browsing habits. Does that make you feel safer? Me neither.

That’s what Google is testing with the origin trial of FLoC.

If you, as an end user, don’t wish to be experimented on like this, there are a few things you can do:

  • Don’t use Chrome. No other web browser is participating in this experiment. I recommend Firefox.
  • If you want to continue to use Chrome, install the Duck Duck Go Chrome extension.
  • Alternatively, if you manually disable third-party cookies, your Chrome browser won’t be included in the experiment.
  • Or you could move to Europe. The origin trial won’t be enabled for users in the European Union, which is coincidentally where GDPR applies.

That last decision is interesting. On the one hand, the origin trial is supposed to be on a small scale, hence the lack of European countries. On the other hand, the origin trial is “opt out” instead of “opt in” so that they can gather a big enough data set. Weird.

The plan is that if and when FLoC launches, websites would have to opt in to it. And when I say “plan”, I mean “best guess.”

I, for one, am filled with confidence that Google would never pull a bait-and-switch with their technologies.

In the meantime, if you’re a website owner, you have to opt your website out of the origin trial. You can do this by sending a server header. A meta element won’t do the trick, I’m afraid.

I’ve done it for my sites, which are served using Apache. I’ve got this in my .conf file:

<IfModule mod_headers.c>
Header always set Permissions-Policy "interest-cohort=()"
</IfModule>
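
If your site isn’t served by Apache, the same header can be set wherever you control the response. Here’s a minimal sketch for a site with a Node and Express back end (the Express set-up is purely illustrative on my part; the header name and value are exactly the same as in the Apache example above):

const express = require('express');
const app = express();

// Send the FLoC opt-out header with every response.
app.use((request, response, next) => {
  response.setHeader('Permissions-Policy', 'interest-cohort=()');
  next();
});

app.get('/', (request, response) => {
  response.send('Hello');
});

app.listen(3000);

Any framework or server that lets you set response headers can do the same.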

If you don’t have access to your server, tough luck. But if your site runs on WordPress, there’s a proposal to opt out of FLoC by default.

Interestingly, none of the Chrome devs that I follow are saying anything about FLoC. They’re usually quite chatty about proposals for potential standards, but I suspect that this one might be embarrassing for them. It was a similar situation with AMP. In that case, Google abused its monopoly position in search to blackmail publishers into using Google’s format. Now Google’s monopoly in advertising is compromising the integrity of its browser. In both cases, it makes it hard for Chrome devs to claim that they have the web’s best interests at heart.

But one of the advantages of having a huge share of the browser market is that Chrome can just plough ahead and unilaterally implement whatever it wants even if there’s no consensus from other browser makers. So that’s what Google is doing with FLoC. But their justification for doing this doesn’t really work unless other browsers play along.

Here’s Google’s logic:

  1. Third-party cookies are on their way out so advertisers will no longer be able to use that technology to target users.
  2. If we don’t provide an alternative, advertisers and other third parties will use fingerprinting, which we all agree is very bad.
  3. So let’s implement Federated Learning of Cohorts so that advertisers won’t use fingerprinting.

The problem is with step three. The theory is that if FLoC gives third parties what they need, then they won’t reach for fingerprinting. Even if there were any validity to that hypothesis, the only chance it has of working is if every browser joins in with FLoC. Otherwise ad tech companies are leaving money on the table. Can you seriously imagine third parties deciding that they just won’t target iPhone or iPad users any more? Remember that Safari is the only real browser on iOS so unless FLoC is implemented by Apple, third parties can’t reach those people …unless those third parties use fingerprinting instead.

Google have set up a situation where it looks like FLoC is going head-to-head with fingerprinting. But if FLoC becomes a reality, it won’t be instead of fingerprinting, it will be in addition to fingerprinting.

Google is quite right to point out that fingerprinting is A Very Bad Thing. But their concerns about fingerprinting sound very hollow when you see that Chrome is pushing ahead and implementing a raft of browser APIs that other browser makers quite rightly point out enable more fingerprinting: Battery Status, Proximity Sensor, Ambient Light Sensor and so on.

When it comes to those APIs, the message from Google is that fingerprinting is a solvable problem.

But when it comes to third party tracking, the message from Google is that fingerprinting is inevitable and so we must provide an alternative.

Which one is it?

Google’s flimsy logic for why FLoC is supposedly good for end users just doesn’t hold up. If they were honest and said that it’s to maintain the status quo of the ad tech industry, it would make much more sense.

The flaw in Google’s reasoning is the fundamental idea that tracking is necessary for advertising. That’s simply not true. Sacrificing user privacy is fundamental to behavioural advertising …but behavioural advertising is not the only kind of advertising. It isn’t even a very good kind of advertising.

Marko Saric sums it up:

FLoC seems to be Google’s way of saving a dying business. They are trying to keep targeted ads going by making them more “privacy-friendly” and “anonymous”. But behavioral profiling and targeted advertisement is not compatible with a privacy-respecting web.

What’s striking is that the very monopolies that make Google and Facebook the leaders in behavioural advertising would also make them the leaders in contextual advertising. Almost everyone uses Google’s search engine. Almost everyone uses Facebook’s social network. An advertising model based on what you’re currently looking at would keep Google and Facebook in their dominant positions.

Google made their first many billions exclusively on contextual advertising. Google now prefers to push the message that behavioral advertising based on personal data collection is superior but there is simply no trustworthy evidence to that.

I sincerely hope that Chrome will align with Safari, Firefox, Vivaldi, Brave, Edge and every other web browser. Everyone already agrees that fingerprinting is the real enemy. Imagine the combined brainpower that could be brought to bear on that problem if all browsers made user privacy a priority.

Until that day, I’m not sure that Google Chrome can be considered a user agent.

Mind the gap

In May 2012, Brian LeRoux, the creator of PhoneGap, wrote a post setting out the beliefs, goals and philosophy of the project.

The beliefs are the assumptions that inform everything else. Brian stated two core tenets:

  1. The web solved cross platform.
  2. All technology deprecates with time.

That second belief then informed one of the goals of the PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

Last week, PhoneGap succeeded in its goal:

Since the project’s beginning in 2008, the market has evolved and Progressive Web Apps (PWAs) now bring the power of native apps to web applications.

Today, we are announcing the end of development for PhoneGap.

I think Brian was spot-on with his belief that all technology deprecates with time. I also think it was very astute of him to tie the goals of PhoneGap to that belief. Heck, it’s even in the project name: PhoneGap!

I recently wrote this about Sass and clamp:

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

jQuery is the perfect example of this. jQuery is no longer needed because cross-browser DOM Scripting is now much easier …thanks to jQuery.
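
To make that concrete, here’s a small illustrative before-and-after (the .notice selector and class name are made up): the kind of DOM Scripting that once needed jQuery, alongside the native equivalent that every modern browser now supports.

// What used to need jQuery:
// $('.notice').addClass('is-visible');

// …can now be done with the native DOM APIs that jQuery paved the way for:
document.querySelectorAll('.notice').forEach((notice) => {
  notice.classList.add('is-visible');
});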

Successful libraries and frameworks point the way. They show what developers are yearning for, and that’s where web standards efforts can then focus. When a library or framework is no longer needed, that’s not something to mourn; it’s something to celebrate.

That’s particularly true if the library of code needs to be run by a web browser. The user pays a tax with that extra download so that the developer gets the benefit of the library. When web browsers no longer need the library in order to provide the same functionality, it’s a win for users.

In fact, if you’re providing a front-end library or framework, I believe you should be actively working towards making it obsolete. Think of your project as a polyfill. If it’s solving a genuine need, then you should be looking forward to the day when your code is made redundant by web browsers.

One more thing…

I think it was great that Brian documented PhoneGap’s beliefs, goals and philosophy. This is exactly why design principles can be so useful—to clearly set out the priorities of a project, so that there’s no misunderstanding or mixed signals.

If you’re working on a project, take the time to ask yourself what assumptions and beliefs are underpinning the work. Then figure out how those beliefs influence what you prioritise.

Ultimately, the code you produce is the output generated by your priorities. And your priorities are driven by your purpose.

You can make those priorities tangible in the form of design principles.

You can make those design principles visible by publishing them.

Principles and priorities

I think about design principles a lot. I’m such a nerd for design principles, I even have a collection. I’m not saying all of the design principles in the collection are good—far from it! I collect them without judgement.

As for what makes a good design principle, I’ve written about that before. One aspect that everyone seems to agree on is that a design principle shouldn’t be an obvious truism. Take this as an example:

Make it usable.

Who’s going to disagree with that? It’s so agreeable that it’s practically worthless as a design principle. But now take this statement:

Usability is more important than profitability.

Ooh, now we’re talking! That’s controversial. That’s bound to surface some disagreement, which is a good thing. It’s now passing the reversibility test—it’s not hard to imagine an endeavour driven by the opposite:

Profitability is more important than usability.

In either formulation, what makes these statements better than the bland toothless agreeable statements—“Usability is good!”, “Profitability is good!”—is that they introduce the element of prioritisation.

I like design principles that can be formulated as:

X, even over Y.

It’s not saying that Y is unimportant, just that X is more important:

Usability, even over profitability.

Or:

Profitability, even over usability.

Design principles formulated this way help to crystallise priorities. Chris has written about the importance of establishing—and revisiting—priorities on any project:

Prioritisation isn’t and shouldn’t be a one-off exercise. The changing needs of your customers, the business environment and new opportunities from technology mean prioritisation is best done as a regular activity.

I’ve said it many times, but one of my favourite design principles comes from the HTML design principles. The priority of constituencies (it’s got “priorities” right there in the name!):

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

Or put another way:

  • Users, even over authors.
  • Authors, even over implementors.
  • Implementors, even over specifiers.
  • Specifiers, even over theoretical purity.

When it comes to evaluating technology for the web, I think there are a number of factors at play.

First and foremost, there’s the end user. If a technology choice harms the end user, avoid it. I’m thinking here of the kind of performance tax that a user has to pay when developers choose to use megabytes of JavaScript.

Mind you, some technologies have no direct effect on the end user: build tools, version control, toolchains …all the stuff that sits on your computer and never directly interacts with users. In that situation, the wants and needs of developers can absolutely take priority.

But as a general principle, I think this works:

User experience, even over developer experience.

Sadly, I think the current state of “modern” web development reverses that principle. Developer efficiency is prized above all else. Like I said, that would be absolutely fine if we’re talking about technologies that only developers are exposed to, but as soon as we’re talking about shipping those technologies over the network to end users, it’s negligent to continue to prioritise the developer experience.

I feel like personal websites are an exception here. What you do on your own website is completely up to you. But once you’re taking a paycheck to make websites that will be used by other people, it’s incumbent on you to realise that it’s not about you.

I’ve been talking about developers here, but this is something that applies just as much to designers. But I feel like designers go through that priority shift fairly early in their career. At the outset, they’re eager to make their mark and prove themselves. As they grow and realise that it’s not about them, they understand that the most appropriate solution for the user is what matters, even if that’s a “boring” tried-and-tested pattern that isn’t going to wow any fellow designers.

I’d like to think that developers would follow a similar progression, and I’m sure that some do. But I’ve seen many senior developers who have grown more enamoured with technologies instead of homing in on the most appropriate technology for end users. Maybe that’s because in many organisations, developers are positioned further away from the end users (whereas designers are ideally being confronted with their creations being used by actual people). If a lead developer is focused on the productivity, efficiency, and happiness of the dev team, it’s no wonder that their priorities end up overtaking the user experience.

I realise I’m talking in very binary terms here: developer experience versus user experience. I know it’s not always that simple. Other priorities also come into play, like business needs. Sometimes business needs are in direct conflict with user needs. If an online business makes its money through invasive tracking and surveillance, then there’s no point in having a design principle that claims to prioritise user needs above all else. That would be a hollow claim, and the design principle would become worthless.

Because that’s the point with design principles. They’re there to be used. They’re not a nice fluffy exercise in feeling good about your work. The priority of constituencies begins, “in case of conflict” and that’s exactly when a design principle matters—when it’s tested.

Suppose someone with a lot of clout in your organisation makes a decision, but that decision conflicts with your organisation’s design principles. Instead of having an opinion-based argument about who’s right or wrong, the previously agreed-upon design principles allow you to take ego out of the equation.

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Priorities

I recently wrote about a web-specific example of a general principle for choosing the right tool for the job:

JavaScript should only do what only JavaScript can do.

I was—yet again—talking about appropriateness. Use the right technology for the task at hand. Here’s the example I gave:

It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering.

Surprisingly, I got some pushback on this. Šime Vidas wrote:

Based on my experience, this is not necessarily the case.

Going from server-side rendering and progressive enhancement via JS to a single-page app powered by a JS framework was an enormous reduction in complexity for me (so the opposite of over-engineering).

(Emphasis mine.) He goes on to say:

My main concerns are ease of use & maintainability. If you get those things right, you’re good and it’s not over-engineering.

There’s no doubt that maintainability is a desirable goal. And ease of use for the developer is also important …but I think they pale in comparison to ease of use for the end user.

To be fair, the specific use-case I mentioned was making a blog. And a blog is a personal thing. You can do whatever the heck you like on your own website and don’t let anyone tell you otherwise.

So I probably chose a poor example to illustrate my point. I was thinking more about when you’re making websites for a living. You’re being paid money to make something available on the web. In that situation, I strongly believe that user needs should win out over developer convenience.

I wrote about this recently:

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time.

That’s why I responded to Šime, saying:

Your main concern should be user needs—not your own.

When I talk about over-engineering, I’m speaking from the perspective of end users, not developers.

Before considering your ease of use, and maintainability, consider users first.

In fairness to Šime, he’s being very open and honest about his priorities. I admire that. I’ve seen too many developers try to provide user experience justifications for decisions made for developer convenience. Once again I recommend Alex’s excellent article, The “Developer Experience” Bait-and-Switch:

The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.

Now I worry I wasn’t specific enough when I talked about choosing appropriate technology:

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand.

I should have made it clear that I was talking about what is appropriate or inappropriate for users. I think I made the mistake of assuming that this was obvious, and didn’t need saying. I’ll try not to make that mistake again.

There’s a whole group of tools where this point isn’t even relevant—build tools, task runners, version control …anything that never directly touches the end user; use whatever works for you. But if you’re making decisions around HTML, ARIA, CSS, or JavaScript, then appropriateness for the end user should take precedence.
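
As a rough sketch of what that looks like in practice (the form, its class name, and the endpoint are all hypothetical), here’s JavaScript layered on top of a form that already works without it, so the enhancement benefits users who can get it without penalising those who can’t:

const form = document.querySelector('form.contact');

// Only enhance if the browser is capable; otherwise the form still
// submits the old-fashioned way and the user loses nothing.
if (form && 'fetch' in window) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    try {
      await fetch(form.action, {
        method: 'POST',
        body: new FormData(form)
      });
      form.insertAdjacentHTML('afterend', '<p>Thanks! Your message has been sent.</p>');
    } catch (error) {
      // If the enhancement fails, fall back to the regular submission.
      form.submit();
    }
  });
}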

If you’re in that situation—you are being paid money to make websites, and you are making technology decisions—I urge you to remember Charlie’s words: it isn’t about you.

Prototypes and production

When we do front-end development at Clearleft, we’re usually delivering production code, often in the form of a component library. That means our priorities are performance, accessibility, robustness, and other markers of quality when it comes to web development.

But every so often, we use the materials of front-end development—HTML, CSS, and JavaScript—to produce something that isn’t intended for production. I’m talking about prototyping.

There are plenty of non-code prototyping tools out there, and our designers often reach for them to communicate subtleties like motion design. But when it comes to testing a prototype with real users, it’s hard to beat the flexibility of HTML, CSS, and JavaScript. Load it up in a browser and away you go.

We do a lot of design sprints, where time is of the essence. The prototype we produce on the penultimate day of the sprint definitely won’t be production quality, but it will be good enough to test.

What’s interesting is that—when it comes to prototyping—our usual front-end priorities can and should go out the window. The priority now is speed. If that means sacrificing semantics or performance, then so be it. If I’m building a prototype and I find myself thinking “now, what’s the right class name for this component?”, then I know I’m in the wrong mindset. That question might be valid for production code, but it’s a waste of time for prototypes.

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Whereas I would think long and hard about the performance impacts of third-party libraries and frameworks on a public project, I won’t give it a second thought when it comes to a prototype. Throw all the JavaScript frameworks and CSS libraries you want at it (although I would argue that in-browser technologies like CSS Grid have made CSS libraries like Bootstrap less necessary, even for prototyping).

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

When a prototype is successful, works great, and tests well, there’s a real temptation to use the prototype code as the basis for the final product. Don’t do this! I’ve made that mistake in the past and it always ends badly. I ended up spending far more time trying to wrangle prototype code to a production level than if I had just started from a clean slate.

Build prototypes to test ideas, designs, interactions, and interfaces …and then throw the code away. The value of a prototype is in answering questions and testing hypotheses. Don’t fall for the sunk cost fallacy when it’s time to switch over into production mode.

Of course it should go without saying that you should never, ever release prototype code into production.

And yet…

More and more live sites seem to be built with a prototyping mindset. Weighty JavaScript frameworks are used regardless of appropriateness. Accessibility, if it’s even considered at all, is relegated to an afterthought. Fragile architectures are employed that rely on first loading and then executing JavaScript in order to render basic content. Developer experience is prioritised over user experience.

Heydon recently highlighted an article that offered this tip for aspiring web developers:

As for HTML, there’s not much to learn right away and you can kind of learn as you go, but before making your first templates, know the difference between in-line elements like span and how they differ from block ones like div.

That’s perfectly reasonable advice …if you’re building a prototype. But if you’re building something for public consumption, you have a duty of care to the end users.