Journal tags: principles


Applying the four principles of accessibility

The Web Content Accessibility Guidelines—or WCAG—look very daunting. It’s a lot to take in. It’s kind of overwhelming. It’s hard to know where to start.

I recommend taking a deep breath and focusing on the four principles of accessibility. Together they spell out the cutesy acronym POUR:

  1. Perceivable
  2. Operable
  3. Understandable
  4. Robust

A lot of work has gone into distilling WCAG down to these four principles. Here’s how I apply them in my work…

Perceivable

I interpret this as:

Content will be legible, regardless of how it is accessed.

For example:

  • The contrast between background and foreground colours will meet the ratios defined in WCAG 2.
  • Content will be grouped into semantically-sensible HTML regions such as navigation, main, footer, etc.
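
To make those two examples concrete, here’s a minimal sketch. The colour values are purely illustrative (WCAG 2 level AA asks for a contrast ratio of at least 4.5:1 for body text):

    <style>
      /* Dark grey on white is roughly 7:1, comfortably past the 4.5:1 AA ratio */
      body { color: #595959; background-color: #ffffff; }
    </style>
    <nav>…site navigation…</nav>
    <main>…the content itself…</main>
    <footer>…site information…</footer>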

Operable

I interpret this as:

Core functionality will be available, regardless of how it is accessed.

For example:

  • I will ensure that interactive controls such as links and form inputs will be navigable with a keyboard.
  • Every form control will be labelled, ideally with a visible label.
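
A sketch of that second point, with invented names, plus a reminder that native controls are keyboard-operable for free:

    <label for="email">Email address</label>
    <input type="email" id="email" name="email">

    <!-- A real button can be focused and activated with a keyboard
         by default, unlike a click-handling div -->
    <button type="submit">Subscribe</button>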

Understandable

I interpret this as:

Content will make sense, regardless of how it is accessed.

For example:

  • Images will have meaningful alternative text.
  • I will make sensible use of heading levels.
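
A sketch of both, with invented content:

    <h1>Festival line-up</h1>
    <h2>Saturday</h2>
    <!-- The alternative text describes what the image means,
         not what the file is called -->
    <img src="main-stage.jpg" alt="The main stage at sunset, lit up in purple">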

This is where it starts to get quite collaborative. Working at an agency, some parts of website creation and maintenance will require ongoing accessibility knowledge even after our work is finished.

For example:

  • Images uploaded through a content management system will need sensible alternative text.
  • Articles uploaded through a content management system will need sensible heading levels.

Robust

I interpret this as:

Content and core functionality will still work, regardless of how it is accessed.

For example:

  • Drop-down controls will use the HTML select element rather than a more fragile imitation.
  • I will only use JavaScript to provide functionality that isn’t possible with HTML and CSS alone.
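
That first point might look like this:

    <!-- A native select gets keyboard support, focus handling, and
         assistive-technology announcements for free -->
    <select name="country">
      <option>Ireland</option>
      <option>United Kingdom</option>
    </select>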

If you’re applying a mindset of progressive enhancement, this part comes for free. If you take a different approach, you’re going to have a bad time.

Taken together, these four principles will get you very far without having to dive too deeply into the rest of WCAG.

My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.
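
To illustrate, here’s a hedged sketch rather than actual code from The Session; the element name and behaviour are invented:

    <spotted-element>
      <button type="button">Do something</button>
    </spotted-element>

    <script>
    customElements.define('spotted-element', class extends HTMLElement {
      connectedCallback () {
        // Fires when the element lands in the document, whether
        // that happens during the initial page load or in a later Ajax update.
        this.addEventListener('click', () => {
          console.log('Enhanced and listening!');
        });
      }
    });
    </script>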

So my client-side scripting style has updated over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.
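
In the markup, the pattern looks something like this (a sketch rather than the actual code from The Session):

    <button-clipboard>
      <button type="button">Copy to clipboard</button>
    </button-clipboard>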

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.
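
So a hypothetical button-counter element with a data-maximum attribute might look like this:

    <button-counter data-maximum="5">
      <button type="button">Add</button>
    </button-counter>

    <script>
    customElements.define('button-counter', class extends HTMLElement {
      connectedCallback () {
        // data-maximum is reflected automatically; note that
        // dataset values always arrive as strings.
        console.log(this.dataset.maximum); // "5"
      }
    });
    </script>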

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.
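
The markup for that contrived example would just be one custom element wrapped in another, something like:

    <button-confirm>
      <button-clipboard>
        <button type="button">Copy to clipboard</button>
      </button-clipboard>
    </button-confirm>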

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.
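
Here’s a rough sketch of both halves. The element names are invented; the actual code on The Session will differ:

    <script>
    // The map listens for newLocations events at the document level.
    // It’s defined first so its listener is in place before any
    // list of locations announces itself.
    customElements.define('location-map', class extends HTMLElement {
      connectedCallback () {
        document.addEventListener('newLocations', () => {
          // Update the map here.
        });
      }
    });

    // The list of locations announces its arrival with a bubbling event.
    customElements.define('location-list', class extends HTMLElement {
      connectedCallback () {
        this.dispatchEvent(new CustomEvent('newLocations', {
          bubbles: true
        }));
      }
    });
    </script>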

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Pace layers and design principles

I think it was Jason who once told me that if you want to make someone’s life a misery, teach them about typography. After that they’ll be doomed to notice all the terrible type choices and kerning out there in the world. They won’t be able to unsee it. It’s like trying to unsee the arrow in the FedEx logo.

I think that Stewart Brand’s pace layers model is a similar kind of mind virus, albeit milder. Once you’ve been exposed to it, you start seeing it in all kinds of systems.

Each layer is functionally different from the others and operates somewhat independently, but each layer influences and responds to the layers closest to it in a way that makes the whole system resilient.

Last month I sent out an edition of the Clearleft newsletter that was all about pace layers. I gathered together examples of people who have been infected with the pace-layer mindworm and who were applying the same layered thinking to other areas.

My own little mash-up is applying pace layers to the World Wide Web. Tom even brought it to life as an animation.

See the Pen Web Layers Of Pace by Tom (@webrocker) on CodePen.

Recently I had another flare-up of the pace-layer pattern-matching infection.

I was talking to some visiting Austrian students on the weekend about design principles. I explained that my mild obsession with design principles stems from the fact that they sit between “purpose” (or values) and “patterns” (the actual outputs):

Purpose » Principles » Patterns

Your purpose is “why?”

That then influences your principles, “how?”

Those principles inform your patterns, “what?”

Hey, wait a minute! If you put that list in reverse order it looks an awful lot like the pace-layers model with the slowest moving layer at the bottom and the fastest moving layer at the top. Perhaps there’s even room for an additional layer when patterns go into production:

  • Production
  • Patterns
  • Principles
  • Purpose

Your purpose should rarely—if ever—change. Your principles can change, but not too frequently. Your patterns need to change quite often. And what you’re actually putting out into production should be constantly updated.

As you travel from the most abstract layer—“purpose”—to the most concrete layer—“production”—the pace of change increases.

I can’t tell if I’m onto something here or if I’m just being an apopheniac. Again.

Agile design principles

I may have mentioned this before, but I’m a bit of a nerd for design principles. Have I shown you my equivalent of an interesting rock collection lately?

If you think about design principles for any period of time, it inevitably gets very meta very quickly. You start thinking about what makes for good design principles. In other words, you start wondering if there are design principles for design principles.

I’ve written before about how I think good design principles should encode some level of prioritisation. The classic example is the HTML design principle called the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s wonderfully practical!

I realised recently that there’s another set of design principles that puts prioritisation front and centre—the Agile manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

And there’s this excellent explanation which could just as well apply to the priority of constituencies:

That is, while there is value in the items on the right, we value the items on the left more.

Yes! That’s the spirit!

Ironically, the Agile manifesto also contains a section called principles behind the Agile manifesto which are …less good (at least they’re less good as design principles—they’re fine as hypotheses to be tested).

Agile is far from perfect. See, for example, Miriam Posner’s piece Agile and the Long Crisis of Software. But where Agile isn’t fulfilling its promise, I’d say it’s not because of its four design principles. If anything, I think the problems arise from organisations attempting to implement Agile without truly internalising the four principles.

Oh, and that’s another thing I like about the Agile manifesto as a set of design principles—the list of prioritised principles is mercifully short. Just four lines.

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice-versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Design principles on the Clearleft podcast

The final episode of season three of the Clearleft podcast is out. Ah, what a bittersweet feeling! On the one hand it’s sad that the season has come to an end. But it feels good to look back at six great episodes all gathered together.

Episode six is all about design principles. That’s a topic close to my heart. I collect design principles.

But for this podcast episode the focus is on one specific project. Clearleft worked with Citizens Advice on a recent project that ended up having design principles at the heart of it. It worked as a great focus for the episode, and a way of exploring design principles in general. As Katie put it, this is about searching for principles for design principles.

Katie and Maite worked hard on nailing the design principles for the Citizens Advice project. I was able to get some of Maite’s time for her to talk me through it. I’ve also got some thoughts from my fellow Clearlefties Andy and Chris on the topic of design principles in general.

It’s nineteen minutes long and well worth a listen.

And with that, season three of the Clearleft podcast is a wrap!

Principles and the English language

I work with words. Sometimes they’re my words. Sometimes they’re words that my colleagues have written:

One of my roles at Clearleft is “content buddy.” If anyone is writing a talk, or a blog post, or a proposal and they want an extra pair of eyes on it, I’m there to help.

I also work with web technologies, usually front-of-the-front-end stuff. HTML, CSS, and JavaScript. The technologies that users experience directly in web browsers.

I think a lot about design principles for the web. The two principles I keep coming back to are the robustness principle and the principle of least power.

When it comes to words, the guide that I return to again and again is George Orwell, specifically his short essay, Politics and the English Language.

Towards the end, he offers some rules for writing.

  1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

These look a lot like design principles. Not only that, but some of them look like specific design principles. Take the robustness principle:

Be conservative in what you send, be liberal in what you accept.

That first part applies to Orwell’s third rule:

If it is possible to cut a word out, always cut it out.

Be conservative in what words you send.

Then there’s the principle of least power:

Choose the least powerful language suitable for a given purpose.

Compare that to Orwell’s second rule:

Never use a long word where a short one will do.

That could be rephrased as:

Choose the shortest word suitable for a given purpose.

Or, going in the other direction, the principle of least power could be rephrased in Orwell’s terms as:

Never use a powerful language where a simple language will do.

Oh, I like that! I like that a lot.

Mind the gap

In May 2012, Brian LeRoux, the creator of PhoneGap, wrote a post setting out the beliefs, goals and philosophy of the project.

The beliefs are the assumptions that inform everything else. Brian stated two core tenets:

  1. The web solved cross platform.
  2. All technology deprecates with time.

That second belief then informed one of the goals of the PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

Last week, PhoneGap succeeded in its goal:

Since the project’s beginning in 2008, the market has evolved and Progressive Web Apps (PWAs) now bring the power of native apps to web applications.

Today, we are announcing the end of development for PhoneGap.

I think Brian was spot-on with his belief that all technology deprecates with time. I also think it was very astute of him to tie the goals of PhoneGap to that belief. Heck, it’s even in the project name: PhoneGap!

I recently wrote this about Sass and clamp:

I’ve said it before and I’ll say it again, the goal of any good library should be to get so successful as to make itself redundant. That is, the ideas and functionality provided by the tool are so useful and widely adopted that the native technologies—HTML, CSS, and JavaScript—take their cue from those tools.

jQuery is the perfect example of this. jQuery is no longer needed because cross-browser DOM Scripting is now much easier …thanks to jQuery.

Successful libraries and frameworks point the way. They show what developers are yearning for, and that’s where web standards efforts can then focus. When a library or framework is no longer needed, that’s not something to mourn; it’s something to celebrate.

That’s particularly true if the library of code needs to be run by a web browser. The user pays a tax with that extra download so that the developer gets the benefit of the library. When web browsers no longer need the library in order to provide the same functionality, it’s a win for users.

In fact, if you’re providing a front-end library or framework, I believe you should be actively working towards making it obsolete. Think of your project as a polyfill. If it’s solving a genuine need, then you should be looking forward to the day when your code is made redundant by web browsers.

One more thing…

I think it was great that Brian documented PhoneGap’s beliefs, goals and philosophy. This is exactly why design principles can be so useful—to clearly set out the priorities of a project, so that there’s no misunderstanding or mixed signals.

If you’re working on a project, take the time to ask yourself what assumptions and beliefs are underpinning the work. Then figure out how those beliefs influence what you prioritise.

Ultimately, the code you produce is the output generated by your priorities. And your priorities are driven by your purpose.

You can make those priorities tangible in the form of design principles.

You can make those design principles visible by publishing them.

Putting design principles into action

I was really looking forward to speaking at An Event Apart this year. I was going to be on the line-up for Seattle, Boston, and Minneapolis; three cities I really like.

At the start of the year, I decided to get a head-start on my new talk so I wouldn’t be too stressed out when the first event approached. I spent most of January and February going through the chaotic process of assembling a semi-coherent presentation out of a katamari of vague thoughts.

I was making good progress. Then The Situation happened. One by one, the in-person editions of An Event Apart were cancelled (quite rightly). But my talk preparation hasn’t been in vain. I’ll be presenting my talk at an online edition of An Event Apart on Monday, August 17th.

You should attend. Not for my talk, but for Ire’s talk on Future-Proof CSS which sounds like it was made for me:

In this talk, we’ll cover how to write CSS that stands the test of time. From progressive enhancement techniques to accessibility considerations, we’ll learn how to write CSS for 100 years in the future (and, of course, today).

My talk will be about design principles …kinda. As usual, it will be quite a rambling affair. At this point I almost take pride in evoking a reaction of “where’s he going with this?” during the first ten minutes of a talk.

When I do actually get around to the point of the talk—design principles—I ask whether it’s possible to have such a thing as universal principles. After all, the whole point of design principles is that they’re specific to an endeavour, whether that’s a company, an organisation, or a product.

I think that some principles are, if not universal, then at least very widely applicable. I’ve written before about two of my favourites: the robustness principle and the principle of least power:

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

What’s interesting about both of those principles is that they are imperative. They tell you how to act:

Be conservative in what you send, be liberal in what you accept.

Choose the least powerful language suitable for a given purpose.

Other principles are imperative, but they tell you what not to do. Take the razors of Occam and Hanlon, for example:

Entities are not to be multiplied without necessity.

Never attribute to malice that which is adequately explained by stupidity.

But these imperative principles are exceptions. The vast majority of “universal” principles take the form of laws that are observations. They describe the state of the world without providing any actions to take.

There’s Hofstadter’s Law, for example:

It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Or Clarke’s third law:

Any sufficiently advanced technology is indistinguishable from magic.

By themselves, these observational laws are interesting but they leave it up to you to decide on a course of action. On the other hand, imperative principles tell you what to do but don’t tell you why.

It strikes me that it could be fun (and useful) to pair up observational and imperative principles:

Because of observation A, apply action B.

For example:

Because of Murphy’s Law, apply the principle of least power.

Or in its full form:

Because anything that can go wrong will go wrong, choose the least powerful language suitable for a given purpose.

I feel like the Jevons paradox is another observational principle that should inform our work on the web:

The Jevons paradox occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises because of increasing demand.

For example, even though devices, browsers, and networks are much, much better now than they were, say, ten years ago, that doesn’t mean that websites have become better or faster. Instead, it’s precisely because there’s more power available that people think nothing of throwing megabytes of JavaScript at users. See Scott’s theory that 5G Will Definitely Make the Web Slower, Maybe:

JavaScript size has ballooned as networks have improved.

This problem would be addressed if web developers were more conservative in what they sent. The robustness principle in action.

Because of the Jevons paradox, apply the robustness principle.

Admittedly, the expanded version of that is far too verbose:

Because technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises because of increasing demand, be conservative in what you send, be liberal in what you accept.

I’m sure there are more and better pairings to be made: an observational principle to tell you why you should take action, and an imperative principle to tell you what action you should take.

Principles and priorities

I think about design principles a lot. I’m such a nerd for design principles, I even have a collection. I’m not saying all of the design principles in the collection are good—far from it! I collect them without judgement.

As for what makes a good design principle, I’ve written about that before. One aspect that everyone seems to agree on is that a design principle shouldn’t be an obvious truism. Take this as an example:

Make it usable.

Who’s going to disagree with that? It’s so agreeable that it’s practically worthless as a design principle. But now take this statement:

Usability is more important than profitability.

Ooh, now we’re talking! That’s controversial. That’s bound to surface some disagreement, which is a good thing. It’s now passing the reversibility test—it’s not hard to imagine an endeavour driven by the opposite:

Profitability is more important than usability.

In either formulation, what makes these statements better than the bland toothless agreeable statements—“Usability is good!”, “Profitability is good!”—is that they introduce the element of prioritisation.

I like design principles that can be formulated as:

X, even over Y.

It’s not saying that Y is unimportant, just that X is more important:

Usability, even over profitability.

Or:

Profitability, even over usability.

Design principles formulated this way help to crystallise priorities. Chris has written about the importance of establishing—and revisiting—priorities on any project:

Prioritisation isn’t and shouldn’t be a one-off exercise. The changing needs of your customers, the business environment and new opportunities from technology mean prioritisation is best done as a regular activity.

I’ve said it many times, but one of my favourite design principles comes from the HTML design principles. The priority of constituencies (it’s got “priorities” right there in the name!):

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

Or put another way:

  • Users, even over authors.
  • Authors, even over implementors.
  • Implementors, even over specifiers.
  • Specifiers, even over theoretical purity.

When it comes to evaluating technology for the web, I think there are a number of factors at play.

First and foremost, there’s the end user. If a technology choice harms the end user, avoid it. I’m thinking here of the kind of performance tax that a user has to pay when developers choose to use megabytes of JavaScript.

Mind you, some technologies have no direct effect on the end user: build tools, version control, toolchains …all the stuff that sits on your computer and never directly interacts with users. In that situation, the wants and needs of developers can absolutely take priority.

But as a general principle, I think this works:

User experience, even over developer experience.

Sadly, I think the current state of “modern” web development reverses that principle. Developer efficiency is prized above all else. Like I said, that would be absolutely fine if we’re talking about technologies that only developers are exposed to, but as soon as we’re talking about shipping those technologies over the network to end users, it’s negligent to continue to prioritise the developer experience.

I feel like personal websites are an exception here. What you do on your own website is completely up to you. But once you’re taking a paycheck to make websites that will be used by other people, it’s incumbent on you to realise that it’s not about you.

I’ve been talking about developers here, but this is something that applies just as much to designers. But I feel like designers go through that priority shift fairly early in their career. At the outset, they’re eager to make their mark and prove themselves. As they grow and realise that it’s not about them, they understand that the most appropriate solution for the user is what matters, even if that’s a “boring” tried-and-tested pattern that isn’t going to wow any fellow designers.

I’d like to think that developers would follow a similar progression, and I’m sure that some do. But I’ve seen many senior developers who have grown more enamoured with technologies instead of homing in on the most appropriate technology for end users. Maybe that’s because in many organisations, developers are positioned further away from the end users (whereas designers are ideally being confronted with their creations being used by actual people). If a lead developer is focused on the productivity, efficiency, and happiness of the dev team, it’s no wonder that their priorities end up overtaking the user experience.

I realise I’m talking in very binary terms here: developer experience versus user experience. I know it’s not always that simple. Other priorities also come into play, like business needs. Sometimes business needs are in direct conflict with user needs. If an online business makes its money through invasive tracking and surveillance, then there’s no point in having a design principle that claims to prioritise user needs above all else. That would be a hollow claim, and the design principle would become worthless.

Because that’s the point with design principles. They’re there to be used. They’re not a nice fluffy exercise in feeling good about your work. The priority of constituencies begins, “in case of conflict” and that’s exactly when a design principle matters—when it’s tested.

Suppose someone with a lot of clout in your organisation makes a decision, but that decision conflicts with your organisation’s design principles. Instead of having an opinion-based argument about who’s right or wrong, the previously agreed-upon design principles allow you to take ego out of the equation.

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Principle

I like good design principles. I collect design principles—of varying quality—at principles.adactio.com. Ben Brignell also has a (much larger) collection at principles.design.

You can spot the less useful design principles after a while. They tend to be wishy-washy; more like empty aspirational exhortations than genuinely useful guidelines for alignment. I’ve written about what makes for good design principles before. Matthew Ström also asked—and answered—What makes a good design principle?

  • Good design principles are memorable.
  • Good design principles help you say no.
  • Good design principles aren’t truisms.
  • Good design principles are applicable.

I like those. They’re like design principles for design principles.

One set of design principles that I’ve included in my collection is from gov.uk: government design principles. I think they’re very well thought-through (although I’m always suspicious when I see a nice even number like 10 for the number of items in the list). There’s a great line in design principle number two—Do less:

Government should only do what only government can do.

This wasn’t a theoretical issue. The multiple departmental websites that preceded gov.uk were notorious for having too much irrelevant content—content that was readily available elsewhere. It was downright wasteful to duplicate that content on a government site. It wasn’t appropriate.

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand. Whether it’s task runners or JavaScript frameworks, appropriateness feels like it should be the deciding factor.

I think that the design principle from GDS could be abstracted into a general technology principle:

Any particular technology should only do what only that particular technology can do.

Take JavaScript, for example. It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering. It violates this principle:

JavaScript should only do what only JavaScript can do.

Need to manage state or immediately update the interface in response to user action? Only JavaScript can do that. But if you need to present the user with some static content, JavaScript can do that …but it’s not the only technology that can do that. HTML would be more appropriate.

I realise that this is basically a reformulation of one of my favourite design principles, the rule of least power:

Choose the least powerful language suitable for a given purpose.

Or, as Derek put it:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

ARIA should only do what only ARIA can do.

JavaScript should only do what only JavaScript can do.

CSS should only do what only CSS can do.

HTML should only do what only HTML can do.

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.
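
A minimal sketch of that kind of liberal acceptance (the lookup table is truncated for illustration):

    <script>
    // Accept either an abbreviation or a full name,
    // and normalise everything to the full name.
    const states = {
      ca: 'California',
      ny: 'New York'
      // …and so on
    };

    function normaliseState(input) {
      const trimmed = input.trim();
      return states[trimmed.toLowerCase()] || trimmed;
    }

    console.log(normaliseState(' CA '));       // "California"
    console.log(normaliseState('California')); // "California"
    </script>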

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript is also more brittle: browsers recover gracefully from errors in HTML and CSS, whereas a single JavaScript error can stop a script dead.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.
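
Sketches of each of those swaps:

    <style>
      /* CSS handles the animation; no script required */
      a { transition: background-color 0.3s ease; }
    </style>

    <!-- HTML’s built-in validation, lower in the stack than a script -->
    <input type="email" name="email" required>

    <!-- A real button, rather than a div with role="button" bolted on -->
    <button type="submit">Send</button>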

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs, but which might be suitable in the future: premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to recount the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standard and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

eLife goes live

The World Wide Web was forged in the crucible of science. Tim Berners-Lee was working at CERN, the European Organization for Nuclear Research, a remarkable place where the pursuit of knowledge—rather than the pursuit of profit—is the driving force.

I often wonder whether the web as we know it—an open, decentralised system—could’ve been born anywhere else. These days it’s easy to focus on the success stories of the web in the worlds of commerce and social networking, but I still find there’s something that really “clicks” between the web and science (Zooniverse being a classic example).

At Clearleft we’ve been lucky enough to work on science-driven projects like the Wellcome Library and the Wellcome Trust. It’s incredibly rewarding to work on projects where the bottom line is measured in knowledge-sharing rather than moolah. So when we were approached by eLife to help them with an upcoming redesign, we jumped at the chance.

We usually help organisations through our expertise in user-centred design, but in this case the design and UX were already in hand. The challenge was in the implementation. The team at eLife knew that they wanted a modular pattern library to keep their front-end components documented and easily reusable. Given Clearleft’s extensive experience with building pattern libraries, this was a match made in heaven (or whatever the scientific non-theistic equivalent of heaven is).

A group of us travelled up from Brighton to Cambridge to kick things off with a workshop. Before diving into code, it was important to set out the aims for the redesign, and figure out how a pattern library could best support those aims.

Right away, I was struck by the great working relationship between design and front-end development within eLife—there was a great collaborative spirit to the endeavour.

Some goals for the redesign soon emerged:

  • Promote the HTML reading experience as the first choice for readers.
  • Align the online experience with the eLife visual identity.

That led to some design principles:

  • Focus on content not site furniture.
  • Remove visual clutter and provide no more than the user needs at any stage of the experience.
  • Aid discovery of value-added content beyond the manuscript.

Those design principles then informed the front-end development process. Together we came up with a priority of concerns:

  1. Access
  2. Maintainability
  3. Performance
  4. Taking advantage of browser capabilities
  5. Visual appeal

It’s interesting that maintainability was such a high priority that it took precedence over even performance, but we also proposed a hypothesis at the same time:

Maintainability doesn’t negatively impact performance.

The combination of the design principles and priorities led us to formulate approaches that could be used throughout the project:

  • Progressive enhancement.
  • Small-screen first responsive images.
  • Only add libraries as needed.

Then we dived into the tech stack: build tools, version control approaches, and naming methodologies. BEM was the winner there.
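
For anyone unfamiliar with BEM, it’s a naming convention that encodes a component’s structure into its class names using a block__element--modifier pattern. A small sketch (the class names are invented for illustration):

.teaser {
    /* The block: a standalone component. */
}
.teaser__title {
    /* An element: a part that only makes sense inside its block. */
}
.teaser--featured {
    /* A modifier: a variation of the block. */
}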

None of those decisions were set in stone, but they really helped to build a solid foundation for the work ahead. Graham camped out in Cambridge for a while, embedding himself in the team there as they began the process of identifying, naming, and building the components.

The work continued after Clearleft’s involvement wrapped up, and I’m happy to say that it all paid off. The new eLife site has just gone live. It’s looking—and performing—beautifully.

What a great combination: the best of the web and the best of science!

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.

Design principles

Andrew Travers wrote about designing design principles at Co-op Digital. I’m somewhat obsessed with design principles—hence my collection—so I’m also obsessed with figuring out what makes for “good” design principles.

One of my favourite design principles (yes, I have favourites) is from the HTML Design Principles. It’s the priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

The emphasis is my own. It demonstrates how the design principle can be put to use (“in case of conflict”). Andrew also describes uses for the design principles they’re putting together:

What we’re building towards is a set of principles, few enough to be memorable, short enough to be repeatable, relevant enough to be usable. When we’re running a design crit, it’s these principles that we want to lean on. When a sole designer in an agile delivery team is talking about a design approach, it’s these principles that back her up.

Those sound like good use cases to me. Those are situations when design principles can help people reach agreement on priorities, without it having to be about ego or who shouts loudest.

I think it was from Cennydd that I heard about a really good test of a design principle: is it reversible? In other words, could you imagine the exact opposite of the design principle being perfectly valid in a different organisation or on a different project? If not, then the principle may be too weak to be effective. (Cennydd points out that he heard this from Jared who has written a lot about evaluating design principles.)

“Make it easy to use” would be an example of a weak design principle. It’s hard to imagine a situation where “make it hard to use” would be a reasonable guiding principle.

Frankly, there are plenty of “bad” examples in my collection of design principles. Many of them wouldn’t pass the reversibility test. Just recently though, I spotted some that would pass the test with flying colours. They weren’t even labelled as design principles—they’re the tips that Heydon includes at the end of his excellent 24 Ways article on inclusive design:

  • Involve code early
  • Respect conventions
  • Don’t be exact
  • Enforce simplicity

I could easily imagine endeavours where the complete opposite of those tips would be valued. Personally, I think they’re really great design principles.

I should add them to the list.

Extending

Contrary to popular belief, web standards aren’t created by a shadowy cabal and then handed down to browser makers to implement. Quite the opposite. Browser makers come together in standards bodies and try to come to an agreement about how to collectively create and implement standards. That keeps them very busy. They don’t tend to get out very often, but when they do, the browser/standards makers have one message for developers: “We want to make your life better, so tell us what you want and that’s what we’ll work on!”

In practice, this turns out not to be the case.

Case in point: responsive images. For years, this was the number one feature that developers were crying out for. And yet, the standards bodies—and, therefore, browser makers—dragged their heels. First they denied that it was even a problem worth solving. Then they said it was simply too hard. Eventually, thanks to the herculean efforts of the Responsive Images Community Group, the browser makers finally began to work on what developers had been begging for.

Now that same community group is representing the majority of developers once again. Element queries—or container queries—have been top of the wish list of working devs for quite a while now. The response from browser makers is the same as it was for responsive images. They say it’s simply too hard.

Here’s a third example: web components. There are many moving parts to web components, but one of the most exciting to developers who care about accessibility and backwards-compatibility is the idea of extending existing elements:

It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.

So instead of having to create a whole new element from scratch like this:

<taco-button>Click me!</taco-button>

…you could piggy-back on an existing element like this:

<button is="taco-button">Click me!</button>

That way, you get the best of both worlds: the native semantics of button topped with all the enhancements you want to add with your taco-button custom element. Brilliant! GitHub is using this to extend the time element, for example.
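
To make that concrete, here’s a minimal sketch of how such an extension might be registered using the customised built-in elements API (the TacoButton class and its click behaviour are invented for illustration):

class TacoButton extends HTMLButtonElement {
    connectedCallback() {
        // Focusability, keyboard activation, and accessibility
        // semantics all come for free from the parent class.
        this.addEventListener('click', function () {
            console.log('Taco time!');
        });
    }
}
// The third argument is what ties this definition to the is attribute.
customElements.define('taco-button', TacoButton, {extends: 'button'});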

I’m not wedded to the is syntax, but I do think it’s vital that there is some declarative mechanism to extend existing elements instead of creating every custom element from scratch each time.

Now it looks like that’s the bit of web components that isn’t going to make the cut. Why? Because browser makers say it’s simply too hard.

As Bruce points out, this is in direct conflict with the design principles that are supposed to be driving the creation and implementation of web standards.

It probably wouldn’t bother me so much except that browser makers still trot out the party line, “We want to hear what developers want!” Their actions demonstrate that this claim is somewhat hollow.

I don’t hold out much hope that we’ll get the ability to extend existing elements for web components. I think we can still find ways to piggy-back on existing semantics, but it’s going to take more work:

<taco-button><button>Click me!</button></taco-button>

That isn’t very elegant and I can foresee a lot of trickiness trying to sift the fallback content (the button tags) from the actual content (the “Click me!” text).
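
Here’s a rough sketch of how that sifting might work, assuming the basic custom elements API is available (again, the click behaviour is invented for illustration):

class TacoButton extends HTMLElement {
    connectedCallback() {
        // Sift out the fallback control from the content…
        const button = this.querySelector('button');
        if (!button) return;
        // …then layer the enhancements on top. If JavaScript never
        // runs, the plain button inside still works.
        button.addEventListener('click', function () {
            console.log('Taco time!');
        });
    }
}
customElements.define('taco-button', TacoButton);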

But I guess that’s what we’ll be stuck with. The alternative is simply too hard.

Angular momentum

I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.

The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.

That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).

In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:

Angular is aimed at large enterprise IT back-enders and managers who are confused by JavaScript’s insane proliferation of tools.

My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.

Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.

Ah, but there’s the rub! It’s only weird if you think of Angular as a front-end technology. The idea of choosing a back-end technology (PHP, Ruby, Python, whatever) before knowing what your website needs to do doesn’t seem nearly as weird to me—it shouldn’t matter in the least what programming language is running on the server. But Angular is a front-end technology, right? I mean, it’s written in JavaScript and it’s executed inside web browsers. (By the way, when I say “Angular”, I’m using it as shorthand for “Angular and its ilk”—this applies to pretty much all the monolithic JavaScript MVC frameworks out there.)

Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes the exact same point that PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.

On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds so I’d better explain what I mean by that.

The way I see it, the web is more than just a set of protocols and agreements—HTTP, URLs, HTML. It’s also built with a set of principles that—much like the principles underlying the internet itself—are founded on ideas of universality and accessibility. “Universal access” is a pretty good rallying cry for the web. Now, the great thing about the technologies we use to build websites—HTML, CSS, and JavaScript—is that universal access doesn’t have to mean that everyone gets the same experience.

Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the greatest number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.

So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.

There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.

That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.

This isn’t anything new. If you think about it, sites that used the Flash plug-in to deliver their experience were on the web, but not of the web. They were using the web as a delivery mechanism, but they weren’t making use of the capabilities of the web for universal access. As long as you have the Flash plug-in, you get 100% of the intended experience. If you don’t have the plug-in, you get 0% of the intended experience. The modern equivalent is using a monolithic JavaScript library like Angular. As long as your browser (and network) fulfils the minimum requirements, you should get 100% of the experience. But if your browser falls short, you get nothing. In other words, Angular and its ilk treat the web as a platform, not a continuum.

If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.

It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”, there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!

The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.

The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.

I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.

The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”

Monolithic JavaScript frameworks like Angular assume a perfectly spherical browser.

If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.

Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.

We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).

Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.

—Jamais Cascio

When it comes to choosing software that’s supposed to help you work faster—a JavaScript framework, for example—there are many questions you can ask: Is the code well-written? How big is the file size? What’s the browser support? Is there an active community maintaining it? But all of those questions are secondary to the most important question of all, which is “Do the beliefs and assumptions of this software match my own beliefs and assumptions?”

If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.

That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.

Now when it comes to a big MVC JavaScript framework like Angular, this issue is hugely magnified because the software is based on such a huge assumption: a perfectly spherical browser. This is exemplified by the architectural decision to do client-side rendering with client-side templates (as opposed to doing server-side rendering with server-side templates, also known as serving websites). You could try to debate the finer points of which is faster or more efficient, but it’s kind of like trying to have a debate between an atheist and a creationist about the finer points of biology—the fundamental assumptions of both parties are so far apart that it makes a rational discussion nigh-on impossible.

(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)

So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.

But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.

Celebrating CSS

Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!

Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s 1994 thesis comparing competing approaches to style sheets.

CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.

Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks even the most basic features of the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?

But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.

The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.

I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.
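
To make the contrast concrete, here’s a quick sketch (the file name is invented). The declarative route hands all the complexity to the browser; the imperative route is the only way in:

<!-- Declarative: the browser handles display and playback. -->
<video src="example.webm" controls></video>

<script>
// Imperative: there is no markup equivalent for geolocation.
navigator.geolocation.getCurrentPosition(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
});
</script>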

I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:

selector {
    property: value;
}

That’s it!

How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?

Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.

Mind you, that modularity was compromised when we got things like named animations: a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.

Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.
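
For the record, this is the kind of preprocessor variable I mean; a small Sass sketch (the $brand-colour name is invented for illustration):

// In the Sass source, the value is defined once…
$brand-colour: #336699;

a {
    // …and referenced wherever it's needed.
    color: $brand-colour;
}

The compiled CSS that actually reaches the end user contains only the resolved value (a { color: #336699; }), with no trace of the variable.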

Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.

There’s one important phrase I want to highlight in the abstract of the 20-year-old CSS proposal:

The proposed scheme provides a simple mapping between HTML elements and presentation hints.

Hints.

Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.

My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.

Sasstraction

Emil has been playing around with CSS variables (or “custom properties” as they should more correctly be known), which have started landing in some browsers. It’s well worth a read. He does a great job of explaining the potential of this new CSS feature.

For now though, most of us will be using preprocessors like Sass to do our variabling for us. Sass was the subject of Chris’s talk at An Event Apart in San Francisco last week—an excellent event as always.

At one point, Chris briefly mentioned that he’s quite happy for variables (or constants, really) to remain in Sass and not to be part of the CSS spec. Alas, I didn’t get a chance to chat with Chris about that some more, but I wonder if his thinking aligns with mine. Because I too believe that CSS variables should remain firmly in the realm of preprocessors rather than browsers.

Hear me out…

There are a lot of really powerful programmatic concepts that we could add to CSS, all of which would certainly make it a more powerful language. But I think that power would come at a cost.

Right now, CSS is a relatively-straightforward language:

CSS isn’t voodoo, it’s a simple and straightforward language where you declare an element has a style and it happens.

That’s a somewhat-simplistic summation, and there’s definitely some complexity to certain aspects of CSS—like specificity or margin collapsing—but on the whole, it has a straightforward declarative syntax:

selector {
    property: value;
}

That’s it. I think that this simplicity is quite beautiful and surprisingly powerful.

Over at my collection of design principles, I’ve got a section on Bert Bos’s essay What is a good standard? In theory, it’s about designing standards in general, but it matches very closely to CSS in particular. Some of the watchwords are maintainability, modularity, extensibility, simplicity, and learnability. A lot of those principles are clearly connected. I think CSS does a pretty good job of balancing all of those principles, while still providing authors with quite a bit of power.

Going back to that fundamental pattern of CSS, you’ll notice that it’s completely modular:

selector {
    property: value;
}

None of those pieces (selector, property, value) reference anything elsewhere in the style sheet. But as soon as you introduce variables, that modularity is snapped apart. Now you’ve got a value that refers to something defined elsewhere in the style sheet (or even in a completely different style sheet).
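
With the proposed custom properties syntax, that abstract pattern becomes something like this (the --mainvalue name is made up for illustration):

:root {
    --mainvalue: value;
}
selector {
    property: var(--mainvalue);
}

That second rule can no longer be read in isolation: its meaning depends on a declaration made somewhere else entirely.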

But variables aren’t the first addition to CSS that sacrifices modularity. CSS animations already do that. If you want to invoke a keyframe animation, you have to define it. The declaration and the invocation happen in separate blocks:

selector {
    animation-name: myanimation;
}
@keyframes myanimation {
    from {
        property: value;
    }
    to {
        property: value;
    }
}

I’m not sure that there’s any better way to provide powerful animations in CSS, but this feature does sacrifice modularity …and I believe that has a knock-on effect for learnability and readability.

So CSS variables (or custom properties) aren’t the first crack in the wall of the design principles behind CSS. To mix my metaphors, the slippery slope began with @keyframes (and maybe @font-face too).

But there’s no denying that variables and constants in CSS would provide a lot of power. There are plenty of other programming ideas (like loops and functions) that would also bring power to CSS. I still don’t think it’s a good idea to mix up the declarative and the programmatic. That way lies XSLT—a strange hybrid beast that’s sort of a markup language and sort of a programming language.

I feel very strongly that HTML and CSS should remain learnable languages. I don’t just mean for professionals. I believe it’s really important that anybody should be able to write and style a web page.

Now does that mean that CSS must therefore remain hobbled? No, I don’t think so. Thanks to preprocessors like Sass, we can have our cake and eat it too. As professionals, we can use tools like Sass to wield the power of variables, functions (mixins) and other powerful concepts from the programming world.

Preprocessors cut the Gordian knot that’s formed from the tension in CSS between providing powerful features and remaining relatively easy to learn. That’s why I’m quite happy for variables, mixins, nesting and the like to remain firmly in the realm of Sass.
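
As a rough sketch of having that cake and eating it, here’s what a Sass mixin looks like (the names are invented for illustration); authors get to write at a higher level, but the output is plain, learnable CSS:

// Define a reusable bundle of declarations once…
@mixin rounded($radius) {
    border-radius: $radius;
}

button {
    // …and invoke it wherever it's needed.
    @include rounded(4px);
}

The compiled result is just button { border-radius: 4px; }, exactly what you would have written by hand.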

Incidentally, at An Event Apart, Chris was making the case that Sass’s power comes from the fact that it’s an abstraction. I don’t think that’s necessarily true—I think the fact that it provides a layer of abstraction might be a red herring.

Chris made the case for abstractions being inherently A Good Thing. Certainly if you go far enough down the stack (to Assembly Language), that’s true. But not all abstractions are good abstractions, and I’m not just talking about Spolsky’s law of leaky abstractions.

Let’s take two different abstractions that share a common origin story:

  • Sass is an abstraction layer for CSS.
  • Haml is an abstraction layer for HTML.

If abstractions were inherently A Good Thing, then they would both provide value to some extent. But whereas Sass is a well-designed tool that allows CSS-savvy authors to write their CSS more easily, Haml is a steaming pile of poo.

Here’s the crucial difference: Sass doesn’t force you to write all your CSS in a completely new way. In fact, every .css file is automatically a valid .scss file. You are then free to use—or ignore—the features of Sass at your own pace.

Haml, on the other hand, forces you to use a completely new whitespace-significant syntax that maps on to HTML. There are no half-measures. It is an abstraction that is not only opinionated, it refuses to be reasoned with.

So I don’t think that Sass is good because it’s an abstraction; I think that Sass is good because it’s a well-designed abstraction. Crucially, it’s also easy to learn …just like CSS.