Journal tags: users


Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.
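
Something like this, perhaps (a sketch: prefers-reduced-data is still only a draft in Media Queries Level 5, and the .hero selector is made up for illustration):

@media (prefers-reduced-data: reduce) {
  /* skip the expensive stuff for users who have asked for less data */
  .hero {
    background-image: none;
  }
}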

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?
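
The pattern looks something like this (a simplified sketch; the cache name and the list of URLs are placeholders rather than the actual code from Resilient Web Design):

// precache a set of pages when the service worker installs
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('site-v1').then(function (cache) {
      return cache.addAll([
        '/',
        '/chapter1/',
        '/styles.css'
      ]);
    })
  );
});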

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.
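
For reference, here’s roughly what a speculation rules block looks like. This is a sketch using the list syntax from the current proposal, with a made-up URL; the details may well change as the proposal evolves:

<script type="speculationrules">
{
  "prerender": [{
    "source": "list",
    "urls": ["/next-article/"]
  }]
}
</script>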

Stakeholders of styling

When I wrote about the new accent-color property in CSS, I pondered how much control a web developer should have over styling form controls:

Who are we to make that decision? Shouldn’t the user’s choice take primacy over our choices?

But then again, where do we draw the line? We’re allowed to over-ride link colours. We’re allowed to over-ride font choices.

Ultimately, I came down on the side of granting authors more control:

If developers don’t get a standardised way to customise native form controls, they’ll just recreate their own over-engineered versions.
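
And accent-color does make the native route pretty painless. A minimal sketch (the colour value is arbitrary):

:root {
  accent-color: #990000;
}

Because the property is inherited, that one declaration tints checkboxes, radio buttons, range sliders, and progress bars throughout the page.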

This question of “who gets to decide?” used to be much more prevalent in the early days of the web. One way to think about this is that there are three stakeholders involved in the presentation of a web page:

  1. The author of the page. “Author” is spec-speak for designer or developer.
  2. The user.
  3. The browser, or user agent: a piece of software that tries to balance the needs of both author and user. But, as the name implies, the user takes precedence.

These days we tend to think of web design as a single-stakeholder undertaking. The author decides how something should be presented and then executes that decision using CSS.

But as Eric once said, every line of CSS you write is a suggestion to the browser. That’s not how we think about CSS though. We think of CSS as a series of instructions rather than suggestions. Never mind respecting the user’s preferences; one of the first things we do is reset all the user agent’s styles.

In the early days of the web, more consideration was given to the idea of style suggestions rather than instructions. Heck, users could always over-ride any of your suggestions with their own user stylesheet. These days, users would need to install a browser extension to do the same thing.
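
If you’ve never seen one, a user stylesheet could be as simple as this (a sketch; in the cascade, !important declarations in a user stylesheet win out over author styles):

body {
  font-size: 120% !important;
  font-family: sans-serif !important;
}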

The first proposal for CSS had a concept called “influence”:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.

I think the only remnant of “influence” left in CSS is accidental. It’s in the specificity of selectors …and the !important declaration.

I think it’s a shame that user stylesheets are no longer a thing. But I get why they were dropped from browsers. They date from a time when it was mostly nerds using the web, before “regular folks” came on board. I understand why it became a little-used feature, suitable for being dropped. But the principle of it still rankles slightly.

But in recent years there has been a slight return to the multi-stakeholder concept of styling websites. Thanks to prefers-reduced-motion and prefers-color-scheme, a responsible author can choose to bow to the wishes of the user.
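
Respecting those preferences can be as straightforward as this (a sketch; the colour values are arbitrary):

@media (prefers-reduced-motion: reduce) {
  * {
    animation: none !important;
    transition: none !important;
  }
}

@media (prefers-color-scheme: dark) {
  body {
    background-color: #222;
    color: #eee;
  }
}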

I was reminded of this when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

Get the FLoC out

I’ve always liked the way that web browsers are called “user agents” in the world of web standards. It’s such a succinct summation of what browsers are for, or more accurately who browsers are for. Users.

The term makes sense when you consider that the internet is for end users. That’s not to be taken for granted. This assertion is now enshrined in the Internet Engineering Task Force’s RFC 8890—like Magna Carta for the network age. It’s also a great example of prioritisation in a design principle:

When there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users.

So when a web browser—ostensibly an agent for the user—prioritises user-hostile third parties, we get upset.

Google Chrome—ostensibly an agent for the user—is running an origin trial for Federated Learning of Cohorts (FLoC). This is not a technology that serves the end user. It is a technology that serves third parties who want to target end users. The most common use case is behavioural advertising, but targeting could be applied for more nefarious purposes.

The Electronic Frontier Foundation wrote an explainer last month: Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.

Let’s back up a minute and look at why this is happening. End users are routinely targeted today (for behavioural advertising and other use cases) through third-party cookies. Some user agents like Apple’s Safari and Mozilla’s Firefox are clamping down on this, disabling third-party cookies by default.

Seeing which way the wind is blowing, Google’s Chrome browser will also disable third-party cookies at some time in the future (they’re waiting to shut that barn door until the fire is good’n’raging). But Google isn’t just in the browser business. Google is also in the ad tech business. So they still want advertisers to be able to target end users.

Yes, this is quite the cognitive dissonance: one part of the business is building a user agent while a different part of the company is working on ways of tracking end users. It’s almost as if one company shouldn’t simultaneously be the market leader in three separate industries: search, advertising, and web browsing. (Seriously though, I honestly think Google’s search engine would get better if it were split off from the parent company, and I think that Google’s web browser would also get better if it were a separate enterprise.)

Anyway, one possible way of tracking users without technically tracking individual users is to assign them to buckets, or cohorts of interest, based on their browsing habits. Does that make you feel safer? Me neither.

That’s what Google is testing with the origin trial of FLoC.

If you, as an end user, don’t wish to be experimented on like this, there are a few things you can do:

  • Don’t use Chrome. No other web browser is participating in this experiment. I recommend Firefox.
  • If you want to continue to use Chrome, install the DuckDuckGo Chrome extension.
  • Alternatively, if you manually disable third-party cookies, your Chrome browser won’t be included in the experiment.
  • Or you could move to Europe. The origin trial won’t be enabled for users in the European Union, which is coincidentally where GDPR applies.

That last decision is interesting. On the one hand, the origin trial is supposed to be on a small scale, hence the lack of European countries. On the other hand, the origin trial is “opt out” instead of “opt in” so that they can gather a big enough data set. Weird.

The plan is that if and when FLoC launches, websites would have to opt in to it. And when I say “plan”, I mean “best guess.”

I, for one, am filled with confidence that Google would never pull a bait-and-switch with their technologies.

In the meantime, if you’re a website owner, you have to opt your website out of the origin trial. You can do this by sending a server header. A meta element won’t do the trick, I’m afraid.

I’ve done it for my sites, which are served using Apache. I’ve got this in my .conf file:

<IfModule mod_headers.c>
Header always set Permissions-Policy "interest-cohort=()"
</IfModule>
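
If your site is served by nginx instead, I believe the equivalent is a single line like this (a sketch; I haven’t tested it):

add_header Permissions-Policy "interest-cohort=()" always;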

If you don’t have access to your server, tough luck. But if your site runs on WordPress, there’s a proposal to opt out of FLoC by default.

Interestingly, none of the Chrome devs that I follow are saying anything about FLoC. They’re usually quite chatty about proposals for potential standards, but I suspect that this one might be embarrassing for them. It was a similar situation with AMP. In that case, Google abused its monopoly position in search to blackmail publishers into using Google’s format. Now Google’s monopoly in advertising is compromising the integrity of its browser. In both cases, it makes it hard for Chrome devs to credibly claim to have the web’s best interests at heart.

But one of the advantages of having a huge share of the browser market is that Chrome can just plough ahead and unilaterally implement whatever it wants even if there’s no consensus from other browser makers. So that’s what Google is doing with FLoC. But their justification for doing this doesn’t really work unless other browsers play along.

Here’s Google’s logic:

  1. Third-party cookies are on their way out so advertisers will no longer be able to use that technology to target users.
  2. If we don’t provide an alternative, advertisers and other third parties will use fingerprinting, which we all agree is very bad.
  3. So let’s implement Federated Learning of Cohorts so that advertisers won’t use fingerprinting.

The problem is with step three. The theory is that if FLoC gives third parties what they need, then they won’t reach for fingerprinting. Even if there were any validity to that hypothesis, the only chance it has of working is if every browser joins in with FLoC. Otherwise ad tech companies are leaving money on the table. Can you seriously imagine third parties deciding that they just won’t target iPhone or iPad users any more? Remember that Safari is the only real browser on iOS so unless FLoC is implemented by Apple, third parties can’t reach those people …unless those third parties use fingerprinting instead.

Google have set up a situation where it looks like FLoC is going head-to-head with fingerprinting. But if FLoC becomes a reality, it won’t be instead of fingerprinting, it will be in addition to fingerprinting.

Google is quite right to point out that fingerprinting is A Very Bad Thing. But their concerns about fingerprinting sound very hollow when you see that Chrome is pushing ahead and implementing a raft of browser APIs that other browser makers quite rightly point out enable more fingerprinting: Battery Status, Proximity Sensor, Ambient Light Sensor and so on.

When it comes to those APIs, the message from Google is that fingerprinting is a solvable problem.

But when it comes to third party tracking, the message from Google is that fingerprinting is inevitable and so we must provide an alternative.

Which one is it?

Google’s flimsy logic for why FLoC is supposedly good for end users just doesn’t hold up. If they were honest and said that it’s to maintain the status quo of the ad tech industry, it would make much more sense.

The flaw in Google’s reasoning is the fundamental idea that tracking is necessary for advertising. That’s simply not true. Sacrificing user privacy is fundamental to behavioural advertising …but behavioural advertising is not the only kind of advertising. It isn’t even a very good kind of advertising.

Marko Saric sums it up:

FLoC seems to be Google’s way of saving a dying business. They are trying to keep targeted ads going by making them more “privacy-friendly” and “anonymous”. But behavioral profiling and targeted advertisement is not compatible with a privacy-respecting web.

What’s striking is that the very monopolies that make Google and Facebook the leaders in behavioural advertising would also make them the leaders in contextual advertising. Almost everyone uses Google’s search engine. Almost everyone uses Facebook’s social network. An advertising model based on what you’re currently looking at would keep Google and Facebook in their dominant positions.

Google made their first many billions exclusively on contextual advertising. Google now prefers to push the message that behavioral advertising based on personal data collection is superior but there is simply no trustworthy evidence to that.

I sincerely hope that Chrome will align with Safari, Firefox, Vivaldi, Brave, Edge and every other web browser. Everyone already agrees that fingerprinting is the real enemy. Imagine the combined brainpower that could be brought to bear on that problem if all browsers made user privacy a priority.

Until that day, I’m not sure that Google Chrome can be considered a user agent.

Get safe

The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH but in practice POST often does those jobs.

I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.

Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.

That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.

<a href="/items/id">

Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.

<form method="post" action="/items/id/edit">

But not all forms should use POST. A search form should use GET.

<form method="get" action="/search">
<input type="search" name="term">

When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.

/search?term=value

I sometimes see the GET method used incorrectly:

  • “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want (there’s a sketch after this list).
  • “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
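
Here’s the kind of markup I mean for that first case (a sketch; the /logout endpoint is made up):

<form method="post" action="/logout">
<button type="submit">Log out</button>
</form>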

When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.

That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.

The very first of the Web Platform Design Principles is Put user needs first:

If a trade-off needs to be made, always put user needs above all.

The current abuse of GET requests is damage that the web needs to route around.

Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.

Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.

Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.

But we needn’t wait for the browsers to make the web a safer place for users.

Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.

What if we refused?

Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.

And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.

Influence

Hidde gave a great talk recently called On the origin of cascades (by means of natural selectors):

It’s been 25 years since the first people proposed a language to style the web. Since the late nineties, CSS lived through years of platform evolution.

It’s a lovely history lesson that reminded me of that great post by Zach Bloom a while back called The Languages Which Almost Became CSS.

The TL;DR timeline of CSS goes something like this: after a number of competing proposals for styling languages, Håkon and Bert joined forces, and that’s what led to the Cascading Style Sheets language we use today.

Hidde looks at how the concept of the cascade evolved from those early days. But there’s another idea in Håkon’s proposal that fascinates me:

While the author (or publisher) often wants to give the documents a distinct look and feel, the user will set preferences to make all documents appear more similar. Designing a style sheet notation that fill both groups’ needs is a challenge.

The proposed solution is referred to as “influence”.

The user supplies the initial sheet which may request total control of the presentation, but — more likely — hands most of the influence over to the style sheets referenced in the incoming document.

So an author could try demanding that their lovely styles are to be implemented without question by specifying an influence of 100%. The proposed syntax looked like this:

h1.font.size = 24pt 100%

More reasonably, the author could specify, say, 40% influence:

h2.font.size = 20pt 40%

Here, the requested influence is reduced to 40%. If a style sheet later in the cascade also requests influence over h2.font.size, up to 60% can be granted. When the document is rendered, a weighted average of the two requests is calculated, and the final font size is determined.
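
To make that concrete, here’s my own worked example (not from the proposal itself): if the author’s style sheet requests 20pt with 40% influence and the user’s style sheet requests 16pt with the remaining 60%, the rendered font size would be 0.4 × 20pt + 0.6 × 16pt = 17.6pt.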

Okay, that sounds pretty convoluted but then again, so is specificity.

This idea of influence in CSS reminds me of Cap’s post about The Sliding Scale of Giving a Fuck:

Hold on a second. I’m like a two-out-of-ten on this. How strongly do you feel?

I’m probably a six-out-of-ten, I replied after a couple moments of consideration.

Cool, then let’s do it your way.

In the end, the concept of influence in CSS died out, but user style sheets survived …for a while. Now they too are as dead as a dodo. Most people today aren’t aware that browsers used to provide a mechanism for applying your own visual preferences for browsing the web (kind of like Neopets or MySpace but for literally every single web page …just think of how empowering that was!).

Even if you don’t mourn the death of user style sheets—you can dismiss them as a power-user feature—I think it’s such a shame that the concept of shared influence has fallen by the wayside. Web design today is dictatorial. Designers and developers issue their ultimata in the form of CSS, even though technically every line of CSS you write is a suggestion to a web browser—not a demand.

I wish that web design were more of a two-way street, more of a conversation between designer and end user.

There are occasional glimpses of this mindset. Like I said when I added a dark mode to my website:

Y’know, when I first heard about Apple adding dark mode to their OS—and also to CSS—I thought, “Oh, great, Apple are making shit up again!” But then I realised that, like user style sheets, this is one more reminder to designers and developers that they don’t get the last word—users do.

User agents

I was on the podcast A Question Of Code recently. It was fun! The podcast is aimed at people who are making a career change into web development, so it’s right up my alley.

I sometimes get asked about what a new starter should learn. On the podcast, I mentioned a post I wrote a while back with links to some great resources and tutorials. As I said then:

For web development, start with HTML, then CSS, then JavaScript (and don’t move on to JavaScript too quickly—really get to grips with HTML and CSS first).

That’s assuming you want to be a good well-rounded web developer. But it might be that you need to get a job as quickly as possible. In that case, my advice would be very different. I would advise you to learn React.

Believe me, I take no pleasure in giving that advice. But given the reality of what recruiters are looking for, knowing React is going to increase your chances of getting a job (something that’s reflected in the curricula of coding schools). And it’s always possible to work backwards from React to the more fundamental web technologies of HTML, CSS, and JavaScript. I hope.

Regardless of your initial route, what’s the next step? How do you go from starting out in web development to being a top-notch web developer?

I don’t consider myself to be a top-notch web developer (far from it), but I am very fortunate in that I’ve had the opportunity to work alongside some tippety-top-notch developers at Clearleft: Trys, Cassie, Danielle, Mark, Graham, Charlotte, Andy, and Natalie.

They—and other top-notch developers I’m fortunate to know—have something in common. They prioritise users. Sure, they’ll all have their favourite technologies and specialised areas, but they don’t lose sight of who they’re building for.

When you think about it, there’s quite a power imbalance between users and developers on the web. Users can—ideally—choose which web browser to use, and maybe make some preference changes if they know where to look, but that’s about it. Developers dictate everything else—the technology that a website will use, the sheer amount of code shipped over the network to the user, whether the site will be built in a fragile or a resilient way. Users are dependent on developers, but developers don’t always act in the best interests of users. It’s a classic example of the principal-agent problem:

The principal–agent problem, in political science and economics (also known as agency dilemma or the agency problem) occurs when one person or entity (the “agent”), is able to make decisions and/or take actions on behalf of, or that impact, another person or entity: the “principal”. This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard.

A top-notch developer never forgets that they are an agent, and that the user is the principal.

But is it realistic to expect web developers to be so focused on user needs? After all, there’s a whole separate field of user experience design that specialises in this focus. It hardly seems practical to suggest that a top-notch developer needs to first become a good UX designer. There’s already plenty to focus on when it comes to just the technology side of front-end development.

So maybe this is too simplistic a way of defining the principal-agent relationship between users and developers:

user :: developer

There’s something that sits in between, mediating that relationship. It’s a piece of software that in the world of web standards is even referred to as a “user agent”: the web browser.

user :: web browser :: developer

So if making the leap to understanding users seems too much of a stretch, there’s an intermediate step. Get to know how web browsers work. As a web developer, if you know what web browsers “like” and “dislike”, you’re well on the way to making great user experiences. If you understand the pain points for browsers when they’re parsing and rendering your code, you’ve got a pretty good proxy for understanding the pain points that your users are experiencing.

Priorities

I recently wrote about a web-specific example of a general principle for choosing the right tool for the job:

JavaScript should only do what only JavaScript can do.

I was—yet again—talking about appropriateness. Use the right technology for the task at hand. Here’s the example I gave:

It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering.

Surprisingly, I got some pushback on this. Šime Vidas wrote:

Based on my experience, this is not necessarily the case.

Going from server-side rendering and progressive enhancement via JS to a single-page app powered by a JS framework was a enormous reduction in complexity for me (so the opposite of over-engineering).

(Emphasis mine.) He goes on to say:

My main concerns are ease of use & maintainability. If you get those things right, you’re good and it’s not over-engineering.

There’s no doubt that maintainability is a desirable goal. And ease of use for the developer is also important …but I think they pale in comparison to ease of use for the end user.

To be fair, the specific use-case I mentioned was making a blog. And a blog is a personal thing. You can do whatever the heck you like on your own website and don’t let anyone tell you otherwise.

So I probably chose a poor example to illustrate my point. I was thinking more about when you’re making websites for a living. You’re being paid money to make something available on the web. In that situation, I strongly believe that user needs should win out over developer convenience.

I wrote about this recently:

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time.

That’s why I responded to Šime, saying:

Your main concern should be user needs—not your own.

When I talk about over-engineering, I’m speaking from the perspective of end users, not developers.

Before considering your ease of use, and maintainability, consider users first.

In fairness to Šime, he’s being very open and honest about his priorities. I admire that. I’ve seen too many developers try to provide user experience justifications for decisions made for developer convenience. Once again I recommend Alex’s excellent article, The “Developer Experience” Bait-and-Switch:

The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.

Now I worry I wasn’t specific enough when I talked about choosing appropriate technology:

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand.

I should have made it clear that I was talking about what is appropriate or inappropriate for users. I think I made the mistake of assuming that this was obvious, and didn’t need saying. I’ll try not to make that mistake again.

There’s a whole group of tools where this point isn’t even relevant—build tools, task runners, version control …anything that never directly touches the end user; use whatever works for you. But if you’re making decisions around HTML, ARIA, CSS, or JavaScript, then appropriateness for the end user should take precedence.

If you’re in that situation—you are being paid money to make websites, and you are making technology decisions—I urge you to remember Charlie’s words: it isn’t about you.

Handing back control

An Event Apart Seattle was most excellent. This year, the AEA team are trying something different and making each event three days long. That’s a lot of mindblowing content!

What always fascinates me at events like these is the way that some themes seem to emerge, without any prior collusion between the speakers. This time, I felt that there was a strong thread of giving control directly to users:

Sarah and Margot both touched on this when talking about authenticity in brand messaging.

Margot described this in terms of vulnerability for the brand, but the kind of vulnerability that leads to trust.

Sarah talked about it in terms of respect—respecting the privacy of users, and respecting the way that they want to use your services. Call it compassion, call it empathy, or call it just good business sense, but providing these kinds of controls in an interface is an excellent long-term strategy.

In Val’s animation talk, she did a deep dive into prefers-reduced-motion, a media query that deliberately hands control back to the user.

Even in a CSS-heavy talk like Jen’s, she took the time to explain why starting with meaningful markup is so important—it’s because you can’t control how the user will access your content. They may use tools like reader modes, or Pocket, or have web pages read aloud to them. The user has the final say, and rightly so.

In his CSS talk, Eric reminded us that a style sheet is a list of strong suggestions, not instructions.

Beth’s talk was probably the most explicit on the theme of returning control to users. She drew on examples from beyond the world of the web—from architecture, urban planning, and more—to show that the most successful systems are not imposed from the top down, but involve everyone, especially those most marginalised.

And even in my own talk on service workers, I raved about the design pattern of allowing users to save pages offline to read later. Instead of trying to guess what the user wants, give them the means to take control.

I was really encouraged to see this theme emerge. Mind you, when I look at the reality of most web products, it’s easy to get discouraged. Far from providing their users with controls over their own content, Instagram won’t even let their customers have a chronological feed. And Matt recently wrote about how both Twitter and Quora are heading further and further away from giving control to their users in his piece called Optimizing for outrage.

Still, I came away from An Event Apart Seattle with a renewed determination to do my part in giving people more control over the products and services we design and develop.

I spent the first two days of the conference trying to liveblog as much as I could. I find it really focuses my attention, although it’s also quite knackering. I didn’t do too badly; I managed to cover eleven of the talks (out of the conference’s total of seventeen):

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

100 words 057

It’s UX London week. That’s always a crazy busy time at Clearleft. But it’s also an opportunity. We have this sneaky tactic of kidnapping a speaker from UX London and making them give a workshop just for us. We did it a few years ago with Dave Gray and we got a fantastic few days of sketching out of it.

This time we grabbed Jeff Patton. He spent this afternoon locked in the auditorium at 68 Middle Street teaching us all about user story mapping. ‘Twas most enlightening and really helped validate some of the stuff we’ve been doing lately.