The Intelligence Illusion

A practical guide to the business risks of Generative AI

Buy the ebook for $35 USD

“I’ve been hearing a lot about ChatGPT. Sounds like it could help us a lot. Can you look into that?”

“We don’t want to fall behind.”

Your boss has questions about AI, and you need to have answers

Most of the hype is bullshit. AI is already full of grifters, cons, and snake oil salesmen, and it’s only going to get worse.

But, you can’t just say that to your boss.

All they’re seeing are the promises.

In a credibility contest between you and the CEOs who get prima donna interviews on 60 Minutes promoting the greatest invention since the steam engine, the prima donnas will win.

Generative AI: The Bad Parts

  • If you want to steer your work away from the bad parts of generative AI, you will need to understand generative AI.
  • If your company is feeling the unrelenting urge to get that AI stock price “bump”, you have to be able to spot the few instances where generative AI would be safe and genuinely useful.
  • You have to prevent them from alienating customers with random AI features nobody asked for, or scuttling customer service with untested chatbots.
  • The web and social media are full of either doom or hype. Neither looks useful. A lot of it doesn’t even make sense.
To prevent your boss from stepping into a bear trap, you need to be able to spot the trap.

Become the local generative AI expert

Your work can’t ignore generative AI forever. Somebody needs to step up and figure this out, without spending a decade studying mathematics and programming.

Imagine being able to answer these questions and more, with concrete reasons why:

  • What do these systems actually do well?
  • How are they currently broken?
  • How likely are those flaws to be fixed?
  • Are there ways of benefiting from what they do well and avoiding what they do badly?
  • Are these AI chatbot things as clever as they say they are?
  • Can we use them productively and safely?
  • Or, should we avoid them until they improve?

Provide answers, explanations, and recommendations with references.

All without resorting to “just because” or “it feels iffy”.

You can understand generative AI—how it works and how it doesn't work—but where to start?

You start here, with this book


If you want philosophical musings, there are plenty of books and articles out there debating the social, cultural, or even existential risks of AI.

It’s easy to find breathless articles promising miracles or catastrophe.

This book is different

It details, in depth, the risks that might come from using Generative AI at work, with approachable high-level explanations of how the technology works and of the flaws inherent in its design.

The Intelligence Illusion: A practical guide to the business risks of Generative AI is specifically written with you in mind, attempting to answer the question:

How does this affect your work?

Buy the ebook for $35 USD

How the book is structured

It has four main sections:

  1. The basics of the technology and the three main strategies for implementing generative AI at work.

  2. Specific recommendations for how to use and not use generative AI.

  3. The many risks that are inherent to generative AI.

  4. How the development of generative AI is affected by the interplay between the modern software industry and the needs of modern software users.

The book will help you see the many ways in which this technology is flawed—even broken—how many of its defects are integral to how it works, and why they won’t be fixed any time soon.

  • Many of the promises being made by AI system vendors are unsubstantiated and unrealistic.
  • The claims are very often misleading.
  • Many of the ways the AIs are being used today are irresponsible, if not outright dangerous.
  • Even where it’s useful, it comes with risks.
Only you can decide for yourself whether the flawed potential of Generative AI outweighs the risks.

Written for non-experts, but with deeper research than most AI expert writing

The book is 256 pages, or thirty thousand words. The list of references alone is 38 pages. It’s written in non-technical English to make sure it’s accessible to those of us who aren’t fluent in AI techno-babble.

I went so far as to make my parents read the book and quiz them afterwards to make sure that it was relatively free of jargon.

I read hundreds of studies and papers for the book and countless articles. The only other time I’ve gone this deep into researching a single topic was for the PhD I did twenty years ago.

Every risk, recommendation, and analysis in the book is backed by references.

No hype; no bullshit

At the end of the book, you will be fully equipped to keep up with Generative AI. You will have formed your own idea of how effective it’s going to be for you, at your job, in your business.

And you’ll be able to talk your boss out of making an AI mistake if you need to.

Included with the ebook is a zip file with fifteen PDF information cards, each summarising a specific topic from the book along with the references relevant to that topic. It also contains a BibTeX file with all the references I used for the book, as a bonus.

Buy the ebook for $35 USD

Praise for The Intelligence Illusion

Amid the current AI tsunami, I knew it was time to do my share of learning this stuff by filling the gaps in my fragmented knowledge and organizing my thoughts around it. This book served that purpose very well.

Generative AI, ChatGPT and the like, have been released prematurely, with too many downsides for the possible benefits. That’s especially so when it comes to commercial use. This book walks you through the risks your business might encounter if you casually incorporate it.

Many of his arguments opened my eyes. I’m glad I found his book at this time. It’s hype, at least for now and the foreseeable future. Use cases will likely be very limited. And to protect ourselves from bad actors, we need solid regulations, just like in the case of crypto.

yasuhiro yoshida 吉田康浩

The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Jeremy Keith

Should we build xGPT into our product?

Before you answer, make sure to take advantage of all the homework Baldur Bjarnason has done for you.

Brandon Rohrer

Back when I worked in publishing, I employed Baldur Bjarnason as a consultant on complex digital publishing projects, as I appreciated his ability to grasp technical detail and translate it into terms that a general manager could understand - and act upon. He has applied that same skill to a superb new ebook on the business risks of generative AI, which I was lucky enough to read in advance of publication. It combines deep research, logical analysis and clear business recommendations.

George Walkley

I bought it, and I read most of it (skimming the middle part), and it is brilliant. Thank you!

Matthias Büchse

I just bought the book this morning and it’s exactly what I needed. I have not seen a clearer description of how generative AI works, what it might be good for, and what the risks are. The references alone are worth the price of the book.

Dave Cramer

When it comes to the current hype surrounding AGI and LLMs, whether you’re a true skeptic (like me) or a true believer, The Intelligence Illusion is a splash of lemon juice in the greasy pool of incredulous media coverage. Accessible for anyone who’s spent more than 15 minutes with a clueless executive or myopic developer (or, frankly, engaged with any of the technological “disruptions” of the past two decades), Bjarnason rigorously unpacks the many risks involved with the most popular use cases being promoted by unscrupulous executives. He brings plenty of receipts to support his observations, too, while also spotlighting areas where this technology might have legitimate potential for good. Highly recommended!

PS: The images throughout do an amazing job of subtly reinforcing the book’s title and premise and would be worthy of a print edition.

Guy LeCharles Gonzalez

Buy The Intelligence Illusion

Get the four-book bundle of all my ebooks, The Intelligence Illusion, Out of the Software Crisis, Yellow, and Bad Writing, in PDF and EPUB, for $49 USD together, a 45% discount off the combined price.


Or, buy The Intelligence Illusion: a practical guide to the business risks of Generative AI in PDF and EPUB for $9.99 USD (regular price $35).


Excerpts from the book

From Artificial General Intelligence and the bird brains of Silicon Valley

Because text and language are the primary ways we experience other people’s reasoning, it’ll be next to impossible to dislodge the notion that these are genuine intelligences. No amount of examples, scientific research, or analysis will convince those who want to maintain a pseudo-religious belief in alien peer intelligences. After all, if you want to believe in aliens, an artificial one made out of supercomputers and wishful thinking feels much more plausible than little grey men from outer space. But that’s what it is: a belief in aliens.

(Read the full chapter online)

From Beware of AI pseudoscience and snake oil

Even the latest and greatest, the absolute best that the AI industry has to offer today, the aforementioned GPT-4, appears to suffer from this issue: its unbelievable performance in exams and benchmarks seems to be mostly down to training data contamination.

When its predecessor, ChatGPT using GPT-3.5, was compared to less advanced but more specialised language models, it performed worse on most, if not all, natural language tasks.

There’s even reason to be sceptical of much of the criticism of AI coming out from the AI industry.

Much of it consists of hand-wringing that their product might be too good to be safe—akin to a manufacturer promoting a car as so powerful it might not be safe on the streets. Many of the AI ‘doomsday’-style critics are performing what others in the field have been calling “criti-hype”. They assume that the products are at least as good as vendors claim, or even better, and extrapolate science-fiction disasters from a marketing fantasy.

The harms that come from these systems don’t require any science fiction—they don’t even require any further advancement in AI. They are risky enough as they are, with the capabilities they have today. Some of those risks come from abuse—the systems lend themselves to both legal and illegal abuses. Some of the risks come from using them in contexts that are well beyond their capabilities—where they don’t work as promised.

(Read the full chapter online)

From AI code copilots are backwards-facing tools in a novelty-seeking industry

These two biases combined mean that users of code assistants are extremely likely to accept the first suggestion the tool makes that doesn’t cause errors.

That means autocompletes need to be substantially better than an average coder to avoid having a detrimental effect on overall code quality. Obviously, if programmers are going to be picking the first suggestion that works, that suggestion needs to be at least as good as what they would have written themselves unaided. What’s less obvious is that the lack of familiarity—not having written the generated code by hand themselves—is likely to lead them to miss bugs and integration issues that would have been trivially obvious if they had typed it out themselves. To balance that out, the generated code needs to be better than average, which is a tricky thing to ask of a system that’s specifically trained on mountains of average code.

Unfortunately, that mediocrity seems to be reflected in the output. GitHub Copilot, for example, seems to regularly generate vulnerable code with security issues.

From The Elegiac Hindsight of Intelligent Machines, the book’s finale

The intelligence illusion, the conviction that these are artificial minds capable of powerful reasoning, supercharges our automation bias when combined with anthropomorphism. Our first response to even the most inane pablum from a language model chatbot is awe and wonder. It sounds like a real person at your beck and call! The drive to treat it as not just a person but an expert is irresistible. For most people, the incoherence, mediocrity, hallucinations, plagiarism, and biases won’t register over their sense of wonder.

This anthropomorphism-induced delusion is the fatal flaw of all AI assistant and copilot systems. It all but guarantees that—even though the outcome you get from using them is likely to be worse than if you’d done it yourself, because of the flaws inherent in these models—you will feel more confident in it, not less.

From the Afterword

Diffusion and language models look like powerful tools. The technology presents us with so many opportunities—such power—that I have to go all the way back to the early days of the web to find its equal.

Machine learning has already delivered many improvements to our work and lives. It’s the driving force behind many of the innovations we’ve come to rely on in our photography. It’s transformed special effects work. The opportunities it creates for working with audio are amazing. Language and diffusion models are intriguing.

But they are being implemented in the worst possible ways.

Buy the ebook for $35 USD

Table of Contents

INTRODUCTION

  1. What this book is and isn’t
  2. What is Generative AI?
  3. Strategies for using Generative AI

THE RECOMMENDATIONS

  1. Be aware of the abundance of bullshit and snake oil
  2. Only use specialised Generative AI tools that you’ve vetted
  3. Strengthen your defences against fraud and abuse
  4. Avoid externally-facing chatbots—prefer internal tools
  5. Prefer transparent and open generative AI
  6. Use it primarily for derivation, conversion, and modification

THE RISKS

  1. We don’t know if they’re breaking laws or regulations
  2. Copyright protections are diminished
  3. Much of the training data is biased, harmful, or unsafe
  4. Stochastic plagiarism
  5. Hallucinations, or AI don’t do facts
  6. Much of the output is biased, harmful, or unsafe
  7. Code quality
  8. Shortcut reasoning
  9. Fear of Missing Out

WHAT NEXT?

  1. The Elegiac Hindsight of Intelligent Machines
  2. Afterword: a straw to grasp
  3. Further reading
  4. References

Buy the ebook for $35 USD

Refund Policy

If you aren’t happy with the purchase, I’ll give you a refund, no questions asked, even beyond the EU’s 14-day statutory requirement, up to a year from purchase.

The Author

Baldur Bjarnason

My name is Baldur Bjarnason.

I’m an independent scholar and journalist who writes about technology and software. I’ve been a web developer for over twenty-five years and continue to take on projects as a software development consultant and researcher. My clients in the past have included companies small and large, startups and enterprises, and both not-for-profits and for-profits.

