Tags: artificial


Friday, October 6th, 2023

Fair Warning — Real Life

Abeba Birhane has written an excellent historical overview of the original Artificial Intelligence movement, including Weizenbaum’s about-face, and the current continuation of technological determinism.

Friday, May 5th, 2023

Will A.I. Become the New McKinsey? | The New Yorker

Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

Once again, absolutely spot-on analysis from Ted Chiang.

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

Monday, February 13th, 2023

You can call me AI

I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.

It bothers me doubly when everyone is talking about AI.

First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.
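That last one isn’t an exaggeration. As a minimal, entirely hypothetical sketch (the function name and canned replies are my own invention), this is the sort of hard-coded branching that sometimes gets marketed as AI:

    # A hypothetical "smart assistant" that is really just
    # a series of if/else statements.
    def artificially_intelligent_reply(message: str) -> str:
        text = message.lower()
        if "hello" in text:
            return "Hi there! How can I help?"
        elif "refund" in text:
            return "I've flagged your order for a refund."
        else:
            return "Sorry, I didn't quite catch that."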

Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.

In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.

I think that abbreviation might work better for the kind of things currently described as using AI.

We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:

Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.

That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:

Midjourney lets me quickly be wrong in an interesting direction.

To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.

So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”

Monday, December 19th, 2022

Empty Pointers and Constellations of AI

AI becomes a stand-in term for whatever technologies and techniques are new, shiny, and just beyond the grasp of our understanding. We use it to gesture at a future we cannot fully comprehend or currently realise. As soon as we do, it will no longer be “AI.”

Monday, September 26th, 2022

Fermented Code: Modelling the Microbial Through Miso - Serpentine Galleries

Y’know, I started reading this great piece by Claire L. Evans thinking about its connections to systems thinking, but I ended up thinking more about prototyping. And microbes.

Monday, April 4th, 2022

Why Computers Won’t Make Themselves Smarter | The New Yorker

In this piece published a year ago, Ted Chiang pours cold water on the idea of a bootstrapping singularity.

How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.

Sunday, March 27th, 2022

Artifice and Intelligence

Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize.

Do “cloud” next!

Sunday, March 14th, 2021

The Digital Jungle

A fascinating look at artificial life …for some definition of “life”.

Tuesday, June 18th, 2019

A song of AIs and fire

The televisual adaptation of Game of Thrones wrapped up a few weeks ago, so I hope I can safely share some thoughts without spoilering. That said, if you haven’t seen the final season, and you plan to, please read no further!

There has been much wailing and gnashing of teeth about the style of the final series or two. To many people, it felt weirdly …off. Zeynep’s superb article absolutely nails why the storytelling diverged from its previous style:

For Benioff and Weiss, trying to continue what Game of Thrones had set out to do, tell a compelling sociological story, would be like trying to eat melting ice cream with a fork. Hollywood mostly knows how to tell psychological, individualized stories. They do not have the right tools for sociological stories, nor do they even seem to understand the job.

Let’s leave aside the clumsiness of the execution for now and focus on the outcomes.

The story finishes with Bran as the “winner”, in that he now rules the seve— six kingdoms. I have to admit, I quite like the optics of replacing an iron throne with a wheelchair. Swords into ploughshares, and all that.

By this point, Bran is effectively a non-human character. He’s the Dr. Manhattan of the story. As the three-eyed raven, he has taken on the role of being an emotionless database of historical events. He is Big Data personified. Or, if you squint just right, he’s an Artificial Intelligence.

There’s another AI in the world of Game of Thrones. The commonly accepted reading of the Night King is that he represents climate change: an unstoppable force that’s going to dramatically impact human affairs, but everyone is too busy squabbling in their own politics to pay attention to it. I buy that. But there’s another interpretation. The Night King is rogue AI. He’s a paperclip maximiser.

Clearly, a world ruled by an Artificial Intelligence like that would be a nightmare scenario. But we’re also shown that a world ruled purely by human emotion would be just as bad. That would be the tyrannical reign of the mad queen Daenerys. Both extremes are undesirable.

So why is Bran any better? Well, technically, he isn’t ruling alone. He has a board of (very human) advisors. The emotionless logic of a pure AI is kept in check by a council of people. And the extremes of human nature are kept in check by the impartial AI. To put it another way, humanity is augmented by Artificial Intelligence: Man-computer symbiosis.

Whether it’s the game of chess or the game of thrones, a centaur is your best bet.

Sunday, April 28th, 2019

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

Wednesday, April 24th, 2019

Untold History of AI - IEEE Spectrum

A terrific six-part series of short articles looking at the people behind the history of Artificial Intelligence, from Babbage to Turing to JCR Licklider.

  1. When Charles Babbage Played Chess With the Original Mechanical Turk
  2. Invisible Women Programmed America’s First Electronic Computer
  3. Why Alan Turing Wanted AI Agents to Make Mistakes
  4. The DARPA Dreamer Who Aimed for Cyborg Intelligence
  5. Algorithmic Bias Was Born in the 1980s
  6. How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine

The history of AI is often told as the story of machines getting smarter over time. What’s lost is the human element in the narrative, how intelligent machines are designed, trained, and powered by human minds and bodies.

Friday, March 9th, 2018

How To Become A Centaur

We hoped for a bicycle for the mind; we got a Lazy Boy recliner for the mind.

Nicky Case on how Douglas Engelbart’s vision for human-computer augmentation has taken a turn from creation to consumption.

When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.

It’s the “+”.

Monday, December 18th, 2017

The Real Danger To Civilization Isn’t AI. It’s Runaway Capitalism.

Spot-on take by Ted Chiang:

I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations.

Related: if you want to see the paperclip maximiser in action, just look at the humans destroying the planet by mining bitcoin.

Tuesday, November 7th, 2017

The Juvet Agenda

Questions prompted by the Clearleft gathering in Norway to discuss AI.

Sunday, October 29th, 2017

Five thoughts on design and AI by Richard Pope - IF

I like Richard’s five reminders:

  1. Just because the technology feels magic, it doesn’t mean making it understandable requires magic.
  2. Designers are going to need to get familiar with new materials to make things make sense to people.
  3. We need to make sure people have an option to object when something isn’t right.
  4. We should not fall into the trap of assuming the way to make machine learning understandable should be purely individualistic.
  5. We also need to think about how we design regulators too.

Sunday, May 7th, 2017

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. Those discussions had the same tone as debates about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was rooted in a fantastical conceit: the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Saturday, August 26th, 2006

Skeptic: The Magazine: Featured Article

A good, if somewhat dispiriting, overview of Artificial Intelligence. (There’s some nice typesetting on this page.)