Tags: interface

Monday, February 5th, 2024

drab

This looks like a handy collection of HTML web components for common interface patterns.

drab does not use the shadow DOM, so you can style content within these elements as usual with CSS.

Friday, January 26th, 2024

In Praise of Buttons – Part One | Nuberodesign

I concur:

Just because a user interface uses 3D-buttons and some shading doesn’t mean that it has to look tacky. In fact, if you have to make the choice between tacky-but-usable and minimalistic-but-hard-to-use, tacky is the way to go. You don’t have to make that choice though: It’s perfectly possible to create something that is both good-looking and easy to use.

Tuesday, December 5th, 2023

Invokers (Explainer) | Open UI

This is a really interesting proposal, and I have thoughts.

Friday, November 24th, 2023

Your Website’s URLs Can and Should Be Beautiful - Opus

The key to making a beautiful URL is finding a balance between brevity and clarity. In other words, a good URL is short but not so short as to obscure what it’s pointing to. Put another way, a good URL contains enough information about its related resource to be useful, but not so much information that it drags on and becomes unwieldy.

Thursday, November 2nd, 2023

Personalization

A look at how personalisation works in digital interfaces and real-world objects.

Tuesday, October 24th, 2023

Ship Faster by Building Design Systems Slower | Big Medium

Josh mashes up design systems and pace layers, like Mark did a few years back. With this mindset, if your product interface and your design system are in sync, that’s not good—either your product is moving too slow or your design system is moving too fast.

The job of the design system team is not to innovate, but to curate. The system should provide answers only for settled solutions: the components and patterns that don’t require innovation because they’ve been solved and now standardized. Answers go into the design system when the questions are no longer interesting—proven out in product. The most exciting design systems are boring.

Wednesday, October 11th, 2023

CSS { In Real Life } | Greenwashing and the COP28 Website

Remember when I wrote about performative performance? Michelle has a prime example:

The low carbon toggle does absolutely nothing.

In fact, worse than nothing. It doesn’t prevent images being downloaded. It doesn’t switch the site to dark mode, or prevent autoplaying animations (e.g. the hero carousel), or reduce resources transferred in any other way. All it does is overlay an extra element with a background gradient on top of the large images on the site to give the appearance that those images are being prevented from loading.

Monday, October 9th, 2023

Kinopio’s Design Principles

Pirijan talks us through the design principles underpinning Kinopio, a tool I like very much:

  1. Embrace Smallness by Embracing Code as a Living Design System
  2. Building for Fidget-Ability
  3. Embrace Plain Text
  4. A Single Interface for Mobile and Desktop
  5. Refine by Pruning

Monday, September 11th, 2023

Performative performance

Web Summer Camp in Croatia finished with an interesting discussion. It was labelled a town-hall meeting, but it was more like an Oxford debating club.

Two speakers had two minutes each to speak for or against a particular statement. Their stances were assigned to them so they didn’t necessarily believe what they said.

One of the propositions was something like:

In the future, sustainable design will be as important as UX or performance.

That’s a tough one to argue against! But that’s what Sophia had to do.

She actually made a fairly compelling argument. She said that real impact isn’t going to come from individual websites changing their colour schemes. Real impact is going to come from making server farms run on renewable energy. She advocated for political action to change the system rather than having the responsibility heaped on the shoulders of the individuals making websites.

It’s a fair point. Much like the concept of a personal carbon footprint started life at BP to distract from corporate responsibility, perhaps we’re going to end up navel-gazing into our individual websites when we should be collectively lobbying for real change.

It’s akin to clicktivism—thinking you’re taking action by sharing something on social media, when real action requires hassling your political representative.

I’ve definitely seen some examples of performative sustainability on websites.

For example, at the start of this particular debate at Web Summer Camp we were shown a screenshot of a municipal website that has a toggle. The toggle supposedly enables a low-carbon mode. High resolution images are removed and for some reason the colour scheme goes grayscale. But even if those measures genuinely reduced energy consumption, it’s a bit late to enact them only after the toggle has been activated. Those hi-res images have already been downloaded by then.

Defaults matter. To be truly effective, the toggle needs to work the other way. Start in low-carbon mode, and only download the hi-res images when someone specifically requests them. (Hopefully browsers will implement prefers-reduced-data soon so that we can have our sustainable cake and eat it.)
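In the meantime, here’s a minimal sketch of what low-carbon-by-default could look like. The data-hires attribute is my own hypothetical convention, and prefers-reduced-data is still a draft media feature, so the matchMedia check is speculative too:

```javascript
// Low-carbon by default: the page ships lightweight placeholder
// images, and the heavy versions are only swapped in when the
// visitor hasn't asked for reduced data. The data-hires attribute
// is hypothetical, and prefers-reduced-data is still a draft media
// feature — browsers that don't support it will report false here,
// so a real implementation might want an explicit opt-in control too.
const reducedData = window.matchMedia('(prefers-reduced-data: reduce)').matches;

if (!reducedData) {
  document.querySelectorAll('img[data-hires]').forEach((img) => {
    img.src = img.dataset.hires;
  });
}
```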

Likewise I’ve seen statistics bandied about regarding the energy savings that could be made if we used dark colour schemes. I’m sure the statistics are correct, but I’d like to see them presented side-by-side with, say, the energy impact of Google Tag Manager or React or any other wasteful dependencies that impact performance invisibly.

That’s the crux. Most of the important work around energy usage on websites is invisible. It’s the work done to not add more images, more JavaScript or more web fonts.

And it’s not just performance. I feel like the more important the work, the more likely it is to be invisible: privacy, security, accessibility …those matter enormously but you can’t see when a website is secure, or accessible, or not tracking you.

I suspect this is why those areas are all frustratingly under-resourced. Why pour time and effort into something you can’t point at?

Now that I think about it, this could explain the rise of web accessibility overlays. If you do the real work of actually making a website accessible, your work will be invisible. But if you slap an overlay on your website, it looks like you’re making a statement about how much you care about accessibility (even though the overlay is total shit and does more harm than good).

I suspect there might be a similar mindset at work when it comes to interface toggles for low-carbon mode. It might make you feel good. It might make you look good. But it’s a poor substitute for making your website carbon-neutral by default.

Sunday, September 10th, 2023

Squish Meets Structure: Designing with Language Models

The slides and transcript from a great talk by Maggie Appleton, including this perfect description of the vibes we get from large language models:

It feels like they’re either geniuses playing dumb or dumb machines playing genius, but we don’t know which.

Saturday, August 5th, 2023

Just normal web things.

A plea to let users do web things on websites. In other words, stop over-complicating everything with buckets of JavaScript.

Honestly, this wishlist isn’t asking for much, and it’s a damning indictment of “modern” frontend development that we’ve come to this:

  • Let me copy text so I can paste it.
  • If something navigates like a link, let me do link things.

Monday, May 15th, 2023

AI isn’t the app, it’s the UI - Stack Overflow Blog

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

This is a good level-headed overview of how generative language model tools work.

If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.

There’s very practical advice on deciding where and when these tools make sense:

The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.

Tuesday, May 2nd, 2023

Why Chatbots Are Not the Future by Amelia Wattenberger

Of course, users can learn over time what prompts work well and which don’t, but the burden to learn what works still lies with every single user. When it could instead be baked into the interface.

Monday, April 17th, 2023

LukeW | Ask LukeW: New Ways into Web Content

I like how Luke is using a large language model to make a chat interface for his own content.

This is the exact opposite of how grifters are selling the benefits of machine learning (“Generate copious amounts of new content instantly!”) and instead builds on over twenty years of thoughtful human-made writing.

Thursday, March 23rd, 2023

The Component Gallery

Here’s an aggregator of components from multiple design systems.

Sunday, March 19th, 2023

Design notes on the 2023 Wikipedia redesign

So then the question becomes: how do you most effectively communicate designs, to facilitate the best discussions about those designs? My answer is: lots of little prototypes built with HTML, CSS, and JavaScript.

ongoing by Tim Bray · The LLM Problem

It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.

Yes! Like I said:

Expose the wires. Show the workings-out.

Tuesday, March 14th, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
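As a sketch of that content-design approach, imagine mapping a confidence signal onto hedging phrases. The single confidence number and the thresholds are both assumptions—current chatbots expose nothing so tidy:

```javascript
// Purely illustrative: prefix a generated statement with a hedge
// that matches its confidence. The confidence score and thresholds
// are hypothetical — real models don't expose one trustworthy number.
function hedge(statement, confidence) {
  if (confidence > 0.9) return statement;
  if (confidence > 0.6) return `Perhaps ${statement}`;
  if (confidence > 0.3) return `Maybe ${statement}`;
  return `It's possible but unlikely that ${statement}`;
}

console.log(hedge('this tune was composed in 1850', 0.4));
// "Maybe this tune was composed in 1850"
```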

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Sunday, December 4th, 2022

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
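In practice it looks something like this—assuming the alternative format is requested with a format query parameter, and with a tune ID picked purely for illustration:

```javascript
// Same URL as the human-readable page, alternative representation.
// (The format=json query parameter and the tune ID are illustrative.)
fetch('https://thesession.org/tunes/1?format=json')
  .then((response) => response.json())
  .then((tune) => console.log(tune));
```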

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:

Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID
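Not that the redirects themselves would be the hard part if I did switch. Here’s a sketch using Express-style routing—purely an assumption for illustration, not a claim about what The Session actually runs on:

```javascript
const express = require('express');
const app = express();

// A sketch of permanent redirects from the old URLs to the new ones,
// preserving any trailing segment (new, search, or an ID).
app.get(['/recordings', '/recordings/:rest'], (request, response) => {
  const rest = request.params.rest ? `/${request.params.rest}` : '';
  response.redirect(301, `/discography${rest}`);
});

app.listen(8080);
```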

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.