Jim Nielsen’s Blog

You found my HTML feed — I also have an XML feed and a JSON feed.


Subscribe to my blog by copy-pasting this URL into your RSS reader.

(Learn more about RSS and subscribing to content on the web at aboutfeeds.)

Recent posts

Digital Trees


Trees have many functions:

  • they provide shade,
  • they purify air,
  • they store carbon,
  • they grow fruit,
  • and they’re aesthetically pleasing.

What’s intriguing to me about trees is their return on investment (ROI).

It takes years, even decades, to grow a tree to the point where you feel like you get to reap its benefits.

Because of this, many trees end up being cultivated more for others than for ourselves. They can be a living embodiment of giving over extracting.

With the web going the way it is — what with AI and its extractive penchant, poisoning the well from which it sprang — it makes me wonder: what are the “trees” of the web? Undoubtedly many (metaphorical) trees on the web were planted by others, yet we enjoy their fruits.

For me personally, one example is the free and open blogs of folks whose advice and education have gifted me the know-how necessary to be employed as an interdisciplinary website maker.

Which makes me wonder: what trees am I planting? Trees I will gain little from in my lifetime, but whose fruits others may revel in far into the future?

Pay it forward. Plant a digital tree.



Cool URIs Don’t Change — But Humans Do


Here are two ideas at odds with each other:

  1. You should have human-friendly URIs
  2. Cool URIs don’t change

If a slug is going to be human-friendly, i.e. human-readable, then it’s going to contain information that is subject to change because humans make errors.

If “to err is human” then our errors will be forever cemented into our URIs at publish time.

For example, if I write:

/the-earth-is-flat

But later realize I was wrong, I can change the content at that URI but am forever stuck with the erroneous idea expressed in my slug (if my URI is to remain cool).

Whereas if I’d had a non-human-readable URI like this:

/19382

Then I can hide from my errors by merely updating the content at that URI anytime I want.

How do you get around this problem?

In my post about great URI designs I note how StackOverflow addresses this via a URI design that puts the machine-readable identifier first, then the human-readable slug second.

/:id/:slug

This allows the slug to change over time without breaking links. For example, you could publish:

/19382/the-earth-is-flat

And later change it to:

/19382/the-earth-is-round

And both will resolve to the same resource. It doesn’t matter what you put in the position of :slug; it’ll always be as if you merely typed:

/19382

Granted you can’t protect from people putting misleading information in your URIs. For example, this would resolve to the same resource as the others:

/19382/the-earth-is-a-triangle
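
If you do control the routing layer, a minimal sketch of this id-first design might look like the following (Express-style routing; getPostById and renderPost are hypothetical stand-ins for your own lookup and rendering):

// Resolve by the machine-readable id; treat the slug as decoration.
// `getPostById` and `renderPost` are hypothetical helpers.
app.get("/:id/:slug?", (req, res) => {
  const post = getPostById(req.params.id);
  if (!post) return res.sendStatus(404);

  // Optional nicety: redirect stale or misleading slugs to the current one.
  if (req.params.slug !== post.slug) {
    return res.redirect(301, `/${post.id}/${post.slug}`);
  }

  res.send(renderPost(post));
});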

That said, there is one problem with the StackOverflow example: it doesn’t work with simple static file hosts where you don’t have control over routing logic.

The MacGyver, jerry-rigged version of this URL would be to use a search param that doesn’t do anything other than provide human-readable context. For example:

/19382?the-earth-is-flat

That would work with a static file host without special routing logic (though it’s still subject to abuse same as the StackOverflow example).
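
And generating those links at publish time is about as simple as it gets. A rough sketch (the post shape and slug logic here are invented for illustration):

// Build a permalink where the id does the resolving and the search
// param is purely human-readable context (illustrative helper).
function permalink(post) {
  const slug = post.title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
  return `/${post.id}?${slug}`;
}

permalink({ id: 19382, title: "The Earth Is Flat" });
// → "/19382?the-earth-is-flat"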

So, to my original example:

/19382?the-earth-is-flat

Could later be changed to:

/19382?the-earth-is-round

And it remains cool 🕶️

Not saying you should, but you could.



A Local-first Codebase Opens the Door to More Collaborators


I thought this was interesting: Dax Raad on the local-first podcast observes how a local-first model drastically simplifies the experience of building an app, both as an individual and as a team.

He talks about how his wife is not an engineer, but she learned to be more hands-on in the codebase of the project they work on together.

For them, one of the things that’s been “crazy helpful” about a local-first approach is that all the data for the app is “just there” locally. For Dax’s wife, as a beginning coder, it’s such a simple model to work with. She’s not trying to figure out how to round trip to the server and keep data in sync. Dax handles all that upfront. The result?

There's not all this weird like, loading states, or like fetching it, or like just a whole bunch of complexity around getting data back and forth. It's solved in one part of your app, and then you never have to think about it anywhere else.

So from a team productivity point of view, she can build any feature she wants, even if I didn't explicitly think about it from the backend point of view, because she has all the data locally.

She's like, “I want to create a view that searches through this set of data.” She can just go do that. All the data is there. [It’s] very, very straightforward.

And it's actually wild how much of a productivity boost that has on your team, because…with every new feature you’re not rebuilding [yet] another way to sync that data back and forth.

When every single feature you build has to scaffold the lifecycle around fetching, updating, and revalidating the data that’s being changed, you alienate people who could otherwise collaborate on the front-end because they don’t know how to build the show spinner -> fetch -> render -> update -> show spinner -> revalidate loop (we spend a lot of time and effort on the coordination problem).

I’ve been in this position. As someone who started writing mostly HTML & CSS, then later moved to writing view logic with languages like JSX, I could only take my design work so far. Then I’d have to leave it for someone else to “wire things up”, which often resulted in them having to re-write a lot of what I did because it didn’t take into account the architecture of the network layer.

But that problem — how do I get (and update) the data required to build and style a functioning UI as a front-of-the-front-end engineer — can be solved up-front by a local-first architecture, allowing more people to collaborate on building UIs.
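
To make that concrete, here’s a loose sketch of the difference (the store and helper names are made up for illustration, not taken from any particular local-first library):

// Client/server: every new view scaffolds its own fetch lifecycle.
async function searchView(query) {
  setLoading(true);
  const res = await fetch(`/api/items?q=${encodeURIComponent(query)}`);
  const items = await res.json();
  setLoading(false);
  render(items);
}

// Local-first: sync is solved once, elsewhere. A new view is just a
// query over data that’s already on the device.
function localSearchView(query) {
  const items = localStore.items.filter((item) =>
    item.title.toLowerCase().includes(query.toLowerCase())
  );
  render(items);
}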



Custom Elements Don’t Require a Hyphen as a Separator


Scott Jehl reached out to help me resolve a conundrum in my post about what constitutes a valid custom element tag.

The spec says you can have custom elements with emojis in them. For example:

<emotion-😍></emotion-😍>

But for some reason the CodePen where I tested this wasn’t working.

Turns out, I’m not very good at JavaScript and simply failed to wrap everything in a try/catch.

What’s funny about this is that <my-$0.02> isn’t a valid custom element but <my-💲0.02> is!

Anyhow, I’ve since updated that post and now things work as the spec says. All is good with the world.
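
For the curious, the fix boils down to wrapping the registration in a try/catch, since customElements.define throws on an invalid name. Roughly:

// Probe whether a tag name is a valid custom element name.
// An invalid name makes customElements.define throw a SyntaxError.
// (Defining the same valid name twice also throws, so this is a one-shot check.)
function isValidCustomElementName(name) {
  try {
    customElements.define(name, class extends HTMLElement {});
    return true;
  } catch (error) {
    return false;
  }
}

isValidCustomElementName("my-$0.02"); // false
isValidCustomElementName("my-💲0.02"); // true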

But that’s not all.

In my convo with Scott, he pointed out that custom element tag names don’t need a hyphen as a separator of characters; they just need a hyphen somewhere in the name.

This kinda blew my mind when I realized it. All this time I’d been thinking about the rules for custom elements wrong.

You aren’t required to have the hyphen as a separator:

<my-tag></my-tag>

You’re just required to have it:

<mytag-></mytag->

Those are both valid custom element tag names!

Which means, if you have a really simple element and can’t think of a better name than an existing HTML element, you can do this:

<h1->My custom heading</h1->

Or this:

<p->My custom paragraph</p->

Or, I suppose, even this:

<ul->
  <li>My custom unordered list</li>
  <li>That still uses normal li’s</li>
  <li>Because why not?</li>
</ul->
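
These only gain behavior once you register them, which works the same as for any other custom element name. A tiny sketch (the styling is invented purely to show the hook):

// Trailing-hyphen names register like any other custom element.
customElements.define(
  "h1-",
  class extends HTMLElement {
    connectedCallback() {
      // Illustrative only: make it look vaguely heading-like.
      this.style.display = "block";
      this.style.fontSize = "2em";
      this.style.fontWeight = "bold";
    }
  }
);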

I’m not saying you should do this, but I am saying you could — you know, nothing ever went wrong doing something before stopping to think about whether you should.



Organic Intelligence


Jeremy wrote about how the greatest asset of a company like Google is the trust people put in them:

If I use a [knowledge tool] I need to be able to trust [it] is good...I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

That question — “Was this generated, in some part, by an LLM and how can I assess its accuracy?” — is becoming a larger and larger part of my life. It’s taxing.

Jeremy’s post made me think[1] about the parallels between the rise of industrial farming and AI (or, might I say, industrial knowledge work).

Artificial food is to organic food, as artificial intelligence is to natural (i.e. organic) intelligence.

At one point in time, we said “eggs” and generally agreed on what that meant. With the rise of industrial farming, we began to understand that not all eggs are created equal, nor do they match our mental model of where eggs come from. So terms like “organic” and “free-range” and “cage-free” began to surface in our vernacular to help us suss out which eggs match our mental model for the term “eggs” that’s printed on the label.

It’s like that ice cream that can’t be called ice cream but rather a frozen dairy dessert. Or chocolate that can’t be called chocolate, so it’s labeled “chocolate-flavored” or “chocolatey”.

Now, with LLMs, a search result isn’t a search result. An image isn’t an image. A video isn’t a video.

We’re going to need a lot more qualifiers.


Footnotes
  1. I swear someone already wrote at length about this parallel between food/“organic food” and knowledge/“organic knowledge” but I can’t find it. If you know it, reach out. Update: Found it, from an iA article: “Organic food only became organic once we ate enough frozen pizzas to realize the difference and importance of healthy, organic food.”

Notes From “You Are Not A Gadget”


Jaron Lanier’s book You Are Not a Gadget was written in 2010, but its preface is a prescient banger for 2024, the year of our AI overlord:

It's early in the 21st century, and that means that these words will mostly be read by nonpersons...[they] will be minced...within industrial cloud computing facilities...They will be scanned, rehashed, and misrepresented...Ultimately these words will contribute to the fortunes of those few who have been able to position themselves as lords of the computing clouds.

Today he might call the book, “You Are Not an Input to Artificial Intelligence”.

Lanier concludes the preface to his book by saying the words in it are intended for people, not computers.

Same for my blog! The words in it are meant for people, not computers. And I would hope any computerized representation of these words is solely for facilitating humans finding them and reading them in context.

Anyhow, here’s a few of my notes from the book.

So Long to The Individual Point of View

Authorship—the very idea of the individual point of view—is not a priority of the new technology...Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites present anonymized fragments of creativity…obscuring the true sources.

Again, this was 2010, way before “AI”.

Who cares for sources anymore? The perspective of the individual is obsolete. Everyone is flattened into a global mush. A word smoothie. We care more for the abstractions we can create on top of individual expression rather than the individuals and their expressions.

The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only people were ever meaningful

While Lanier was talking about “the hive mind” of social networks as we understood it then, AI has a similar problem: we begin to care more about the training data than the individual humans whose outputs constitute the training data, even though the training data by itself is meaningless. Only people are meaningful.[1] As Lanier says in the book:

The bits don't mean anything without a cultured person to interpret them.

Information is alienated experience.

Emphasizing Artificial or Natural Intelligence

Emphasizing the crowd means deemphasizing individual humans.

I like that.

Here’s a corollary: emphasizing artificial intelligence means de-emphasizing natural intelligence.

Therein lies the tradeoff.

In Web 2.0, we emphasized the crowd over the individual and people behaved like a crowd instead of individuals, like a mob rather than a person. The design encouraged, even solicited, that kind of behavior.

Now with artificial intelligence enshrined, is it possible we begin to act like it? Hallucinating reality and making baseless claims in complete confidence will be normal, as that’s what the robots we interact with all day do.

What is communicated between people eventually becomes their truth. Relationships take on the troubles of software engineering.

What Even is “Intelligence”?

Before MIDI, a musical note was a bottomless idea that transcended absolute definition

But the digitization of music required removing options and possibilities based on what was easiest to represent and process with a computer. We remove “the unfathomable penumbra of meaning that distinguishes” a musical note in the flesh to make a musical note in the computer.

Why? Because computers require abstractions. But abstractions are just that: models that roughly fit the real thing. But too often we let the abstractions become our reality:

Each layer of digital abstraction, no matter how well it is crafted, contributes some degree of error and obfuscation. No abstraction corresponds to reality perfectly. A lot of such layers become a system unto themselves, one that functions apart from the reality that is obscured far below.

Lanier argues it happened with MIDI and it happened with social networks, where people became rows in a database and began living up to that abstraction.

people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer...We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate.

Perhaps similarly, intelligence (dare I say consciousness) was a bottomless idea that transcended definition. But we soon narrowed it down to fit our abstractions in the computer.

We are happy to enshrine into engineering designs mere hypotheses—and vague ones at that—about the hardest and most profound questions faced by science, as if we already possess perfect knowledge.

So we enshrine the idea of intelligence into our computing paradigm when we don’t even know what it means for ourselves. Are we making computers smarter or ourselves dumber?

You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart.

Prescient.


Footnotes
  1. This reminds me of Paul Ford’s questioning why we’re so anxious to automate the hell out of everything and remove humans from the process when the whole point of human existence is to interact with other humans.

Hedge Words Affirm Creative, Imaginative Thinking


Mandy’s note piqued my interest so much, I started reading Being Wrong by Kathryn Schulz. So far, I love it! (I hope to write more about it once I’ve finished, but I’m afraid I won’t because the whole book is underlined in red pencil and I wouldn’t know where to start.)

As someone who has been told they self-sabotage by using hedge words, I like this excerpt from Schulz that Mandy quotes in her post:

disarming, self-deprecating comments, (“this could be wrong, but…” “maybe I’m off the mark here…”)…are often criticized [as] overly timid and self-sabotaging. But I’m not sure that’s the whole story. Awareness of one’s own qualms, attention to contradiction, acceptance of the possibility of error: these strike me as signs of sophisticated thinking, far preferable in many contexts to the confident bulldozer of unmodified assertions.

It’s kind of strange when you think about it.

Why do I feel this need to qualify what I’m about to say with a phrase like “Maybe I’m wrong here, but…”? As if being wrong is, in the words of Kathryn Schulz, a rare, bizarre, and “inexplicable aberration in the natural state of things”.

And yet, as much as we all say “to err is human”, we don’t always act like we believe it. As Schulz says in the book:

A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything.

Which is why I appreciate a good hedge word now and then.

In fact, I don’t think it’s hedging. It’s an open affirmation, as Mandy notes, of one’s desire to learn and evolve (as opposed to a desire to affirm and validate one’s own beliefs).

I would love to see less certainty and more openness. Less “it is this” and more “perhaps it could be this, or that, or maybe even both!”

Give me somebody who is willing to say “Maybe I’m wrong”. Somebody who can creatively imagine new possibilities, rather than be stuck with zero imagination and say, “I know all there is, and there’s no way this can be.”



The Night Time Sky


This post is a secret to everyone! Read more about RSS Club.

When I was a kid, my Dad used to take us outside to look for what he called “UFOs”. It’d take a moment, but after enough searching we’d eventually spot one.

One night, all of us kids were outside with our uncle. We saw a star-like light moving in a slow, linear fashion across the night sky. One of us said, “Look, a UFO!” My uncle, a bit confused, said “That’s not a UFO, that’s a satellite.”

Dad, you sneaky customer.

Fast forward to 2024. I was recently in the mountains in Colorado where the night sky was crisp and clear. I squinted and started looking for “UFOs”.

They were everywhere!

It seemed as though any patch of sky I looked at, I could spot four to six satellites whose paths were criss-crossing at any given moment. It made me think of Coruscant from Star Wars.

[Animated GIF: the planet Coruscant from Star Wars, with lots of spaceship traffic traversing the sky.]

It also reminded me of those times as a kid, scouring the night sky for “UFOs”. Spotting a satellite wasn’t easy. We had to look and look for a good chunk of time before anyone would get a lock on one traversing the sky.

But that night in Colorado I didn’t have to work at all. Point my eyes at any spot in the sky and I’d see not just one but many.

Knowing vaguely about the phenomenon of night-sky and space pollution, I went inside and looked up how many satellites are up there nowadays vs. when I was a kid.

I found this site showing trends in satellite launch and use by Akhil Rao, which links to data from The Union of Concerned Scientists. Turns out we’ve ~10x’d the number of satellites in the sky over the last ~30 years!

That’s a long way of saying: I’ve heard about this phenomenon of sky pollution and space junk and the like, but it became much more real to me that night in Colorado.



Novels as Prototypes of the Future

View

Via Robin Rendle’s blog, I found this quote from Jack Cheng (emphasis mine):

A novel…is a prototype of the future. And if the ideas that the tech industry is pursuing feel stagnant…maybe it points to a shortage of compelling fictions for what the world could be.

I love that phrasing: novels as prototypes of the future.

Last summer I read Richard Rhodes’ book The Making of the Atomic Bomb (great book btw) and I remember reading about how influential some novels were on the physicists who worked on the science which led to the splitting of the atom, the idea of a chain reaction, and the development of a bomb.

For example, H.G. Wells read books on atomic physics from scientists like William Ramsay, Ernest Rutherford, and Frederick Soddy, which cued him in to the idea of harnessing the power of the atom. In 1914, thirty-one years before the end of WWII, Wells coined the term “atomic bomb” in his book The World Set Free, which was read by physicist Leó Szilárd in 1932, the same year the neutron was discovered. Some believe Szilárd was inspired by Wells’ book in envisioning how to tap into the power of the atom via neutron bombardment to trigger a chain reaction.

Perhaps it did, perhaps it didn’t. Or perhaps it was a little bit of fact, a little bit of fiction, and a little bit of contemporary news that all led to Szilárd’s inspiration.

In this way, it’s fascinating to think of someone without extensive, specialized scientific training being able to influence scientific discovery nonetheless — all through the power of imagination. Perhaps this is, in part, what Einstein meant about the power of imagination:

Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution. It is, strictly speaking, a real factor in scientific research.

For me personally, maybe my own work could benefit from more novels. Maybe a little less “latest APIs in ES2024” and a little more fiction. A little less facts, a little more fancy.



“Just” One Line


From Jeremy Keith’s piece “Responsibility”:

Dropping in one line of JavaScript seems like a victimless crime. It’s just one small script, right? But JavaScript can import more JavaScript.

“It’s just one line of code” is a pitch you hear all the time. It might also be the biggest lie we tell ourselves — and one another.

“Add styles with just one line”:

<link href="styles.css" rel="stylesheet">

“Add our widget, it’s just one line”:

<script src="script.js"></script>

“Install our framework in just one line”:

npm i framework

But “just one line” is a facade. It comes with hundreds, thousands, even millions of lines of code. You don’t know how many and it’s not usually disclosed.

There’s a big difference between the interface to a thing being one line of code, and the cost of a thing being one line of code.

A more acute rendering of this sales pitch is probably: “It’s just one line of code to add many more lines of code.”

The connotation of the phrase is ease, e.g. “This big complicated problem can be solved with just one line of code on your part.”

But, intentional or not, another subtle connotation sneaks in with that phrase, one relating to size, e.g. “It’s not big, it’s just one line.”

But “one line” does not necessarily equate to small size. It can be big. Very big. Very, very big. One line of code that creates, imports, or installs many more lines of code is “just one line” to interface with, but many lines in cost (conceptual overhead, project size and complexity, etc.).
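
Jeremy’s point about JavaScript importing more JavaScript is easy to sketch (the module names here are invented for illustration):

// script.js — the “one line” you added to the page (loaded as a module).
// Each import pulls in still more code you never see up front.
import "./analytics.js";
import "./session-replay.js";
import { initWidget } from "./widget/index.js";

initWidget();

// And code is free to keep loading more code at runtime.
import("./charts/heavy-chart.js").then(({ drawCharts }) => drawCharts());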

The next time you hear “it’s just one line” pause for a moment. Just one line for who? You the developer to write? Future you (or your teammates) to debug and maintain? Or the end-user to download, parse, and execute?

