Joi Ito's Web

Joi Ito's conversation with the living web.

Somewhere between 2 and 3 billion years ago, what scientists call the Great Oxidation Event, or GOE, took place, causing the mass extinction of anaerobic bacteria, the dominant life form at the time. A new type of bacteria, cyanobacteria, had emerged, and it had the photosynthetic ability to produce glucose and oxygen out of carbon dioxide and water using the power of the sun. Oxygen was toxic to many of their anaerobic cousins, and most of them died off. In addition to being a massive extinction event, the oxygenation of the planet kicked off the evolution of multicellular organisms (620 to 550 million years ago), the Cambrian explosion of new species (540 million years ago), and an ice age that triggered the end of the dinosaurs and many cold-blooded species, leading to the emergence of the mammals as the apex group (66 million years ago) and eventually resulting in the appearance of Homo sapiens, with all of their social sophistication and complexity (315,000 years ago).

I’ve been thinking about the GOE, the Cambrian Explosion, and the emergence of the mammals a lot lately, because I’m pretty sure we’re in the midst of a similarly disruptive and pivotal moment in history that I’m calling the Great Digitization Event, or GDE. And right now we’re in that period where the oxygen, or in this case the internet as used today, is rapidly and indifferently killing off many systems while allowing new types of organizations to emerge.

As WIRED celebrates its 25th anniversary, the Whole Earth Catalog its 50th anniversary, and the Bauhaus its 100th anniversary, we’re in a modern Cambrian era, sorting through an explosion of technologies enabled by the internet that are the equivalent of the stunning evolutionary diversity that emerged some 500 million years ago. Just as in the Great Oxidation Event, in which early organisms that created the conditions for the explosion of diversity had to die out or find a new home in the mud on the ocean floor, the early cohort that set off the digital explosion is giving way to a new, more robust form of life. As Fred Turner describes in From Counterculture to Cyberculture, we can trace all of this back to the hippies in the 1960s and 1970s in San Francisco. They were the evolutionary precursor to the advanced life forms observable in the aftermath of the 2018 shooting at Marjory Stoneman Douglas High School. Let me give you a first-hand account of how the hippies set off the Great Digitization Event.

From the outset, members of that movement embraced nascent technological change. Stewart Brand, one of the Merry Pranksters, began publishing the Whole Earth Catalog in 1968, which spawned a collection of other publications that promoted a vision of society that was ecologically sound and socially just. The Whole Earth Catalog gave birth to one of the first online communities, the Whole Earth ‘Lectronic Link, or WELL, in 1985.

Around that time, R.U. Sirius and Mark Frost1 started the magazine High Frontiers, which was later relaunched with Queen Mu and others as Mondo 2000. The magazine helped legitimize the burgeoning cyberpunk movement, which imbued the growing community of personal computer users and participants in online communities with an ‘80s version of hippie sensibilities and values. A new wave of science fiction, represented by William Gibson’s Neuromancer, added the punk rock dystopian edge.

Timothy Leary, a “high priest” of the hippie movement and New Age spirituality, adopted me as his godson when we met during his visit to Japan in 1990, and he connected me to the Mondo 2000 community that became my tribe. Mondo 2000 was at the hub of cultural and technological innovation at the time, and I have wonderful memories of raves advertising “free VR” and artist groups like Survival Research Labs that connected the hackers from the emerging Silicon Valley scene with Haight-Ashbury hippies.

I became one of the bridges between the Japanese techno scene and the San Francisco rave scene. Many raves in San Francisco happened in the then-gritty area south of Market Street, near Townsend and South Park. ToonTown, a rave producer, set up its offices (and living quarters) there, which attracted designers and others who worked in the rave business, such as Nick Philip, a British BMX'er and designer. Nick, who started out designing flyers for raves using photocopy machines and collages, created a clothing brand called Anarchic Adjustment, which I distributed in Japan and which William Gibson, Deee-Lite, and Timothy Leary wore. He began using computer graphics tools from companies like Silicon Graphics to create the artwork for T-shirts and posters.

In August 1992, Jane Metcalfe and Louis Rossetto rented a loft in the South Park area because they wanted to start a magazine to chronicle what had evolved from a counterculture into a powerful new culture built around hippie values, technology, and the new Libertarian movement. (In 1971, Louis had appeared on the cover of The New York Times Magazine as coauthor, with Stan Lehr, of “Libertarianism, The New Right Credo.”) When I met them, they had a desk and a 120-page laminated prototype for what would become WIRED. Nicholas Negroponte, who had cofounded the MIT Media Lab in 1985, was backing Jane and Louis financially. The founding executive editor of WIRED was Kevin Kelly, who was formerly one of the editors of the Whole Earth Catalog. I got involved as a contributing editor. I didn’t write articles at the time, but I made my debut in the media in the third issue2 of WIRED, mentioned as a kid addicted to MMORPGs in an article by Howard Rheingold. Brian Behlendorf, who ran the SFRaves mailing list for announcing and discussing the SF rave scene, became the webmaster of HotWired, a groundbreaking exploration of the new medium of the Web.

WIRED came along just as the internet and the technology around it really began to morph into something much bigger than a science fiction fantasy, in other words, on the cusp of the GDE. The magazine tapped into the design talent around South Park, literally connecting to the design and development shop Cyborganic with ethernet cables strung inside the building where they shared a T1 line. It embraced the post-psychedelic design and computer graphics that distinguished the rave community and established its own distinct look, shaped most of all by people such as Barbara Kuhr and Erik Adigard, a look that bled over into the advertisements in the magazine, like one Nick Philip designed for Absolut.

Structured learning didn't serve me particularly well. I was kicked out of kindergarten for running away too many times, and I have the dubious distinction of having dropped out of two undergraduate programs and a doctoral program in business administration. I haven't been tested, but I have come to think of myself as "neuroatypical" in some way.

"Neurotypical" is a term used by the autism community to describe what society refers to as "normal." According to the Centers for Disease Control, one in 59 children, and one in 34 boys, are on the autism spectrum--in other words, neuroatypical. That's 3 percent of the male population. If you add ADHD--attention deficit hyperactivity disorder--and dyslexia, roughly one out of four people are not "neurotypicals."

In NeuroTribes, Steve Silberman chronicles the history of such non-neurotypical conditions, including autism, which was described by the Viennese doctor Hans Asperger and by Leo Kanner in Baltimore in the 1930s and 1940s. Asperger worked in Nazi-occupied Vienna, where institutionalized children were actively euthanized, and he defined a broad spectrum of children who were socially awkward, some of whom also had extraordinary abilities and a "fascination with rules, laws and schedules," to use Silberman's words. Leo Kanner, on the other hand, described children who were more disabled. Kanner's suggestion that the condition was activated by bad parenting made autism a source of stigma for parents and led to decades of work attempting to "cure" autism rather than developing ways for families, the educational system, and society to adapt to it.

Our schools in particular have failed such neurodiverse students, in part because they've been designed to prepare our children for typical jobs in a mass-production-based white- and blue-collar environment created by the Industrial Revolution. Students acquire a standardized skillset and an obedient, organized, and reliable nature that served society well in the past--but not so much today. I suspect that the quarter of the population who are diagnosed as somehow non-neurotypical struggle with the structure and the method of modern education, and many others probably do as well.

I often say that education is what others do to you and learning is what you do for yourself. But I think that even the broad notion of education may be outdated, and we need a completely new approach to empower learning: We need to revamp our notion of "education" and shake loose the ordered and linear metrics of the society of the past, when we were focused on scale and the mass production of stuff. Accepting and respecting neurodiversity is the key to surviving the transformation driven by the internet and AI, which is shattering the Newtonian predictability of the past and replacing it with a Heisenbergian world of complexity and uncertainty.

In Life, Animated, Ron Suskind tells the story of his autistic son Owen, who lost his ability to speak around his third birthday. Owen had loved the Disney animated movies before his regression began, and a few years into his silence it became clear he'd memorized dozens of Disney classics in their entirety. He eventually developed an ability to communicate with his family by playing the role, and speaking in the voices, of the animated characters he so loved, and he learned to read by reading the film credits. Working with his family, Owen recently helped design a new kind of screen-sharing app, called Sidekicks, so other families can try the same technique.

Owen's story tells us how autism can manifest in different ways and how, if caregivers can adapt rather than force kids to "be normal," many autistic children survive and thrive. Our institutions, however, are poorly designed to deliver individualized, adaptive programs to educate such kids.

In addition to schools poorly designed for non-neurotypicals, our society traditionally has had scant tolerance or compassion for anyone lacking social skills or perceived as not "normal." Temple Grandin, the animal welfare advocate who is herself somewhere on the spectrum, contends that Albert Einstein, Wolfgang Mozart, and Nikola Tesla would have been diagnosed on the "autistic spectrum" if they were alive today. She also believes that autism has long contributed to human development and that "without autism traits we might still be living in caves." She is a prominent spokesperson for the neurodiversity movement, which argues that neurological differences must be respected in the same way that diversity of gender, ethnicity or sexual orientation is.

Despite challenges with some of the things that neurotypicals find easy, people with Asperger's and other forms of autism often have unusual abilities. For example, the Israeli Defense Force's Special Intelligence Unit 9900, which focuses on analyzing aerial and satellite imagery, is partially staffed with people on the autism spectrum who have a preternatural ability to spot patterns. I believe at least some of Silicon Valley's phenomenal success comes from the fact that its culture places little value on the conventional social and corporate norms, prizing age-based experience and conformity, that dominate most institutions on the East Coast and most of society as a whole. It celebrates nerdy, awkward youth and has turned their super-human, "abnormal" powers into a money-making machine that is the envy of the world. (This new culture is wonderfully inclusive from a neurodiversity perspective but white-dude centric and problematic from a gender and race perspective.)

This sort of pattern recognition and many other unusual traits associated with autism are extremely well suited for science and engineering, often enabling a super-human ability to write computer code, understand complex ideas and elegantly solve difficult mathematical problems.

Unfortunately, most schools struggle to integrate atypical learners, even though it's increasingly clear that interest-driven learning, project-based learning, and undirected learning seem better suited for the greater diversity of neural types we now know exist.

Ben Draper, who runs the Macomber Center for Self Directed Learning, says that while the center is designed for all types of children, kids whose parents identify them as on the autism spectrum often thrive at the center when they've had difficulty in conventional schools. Ben is part of the so-called unschooling movement, which holds not only that learning should be self-directed but that we shouldn't even focus on guiding it. Children will learn in the process of pursuing their passions, the reasoning goes, and so we just need to get out of their way, providing support as needed.

Many, of course, argue that such an approach is much too unstructured and verges on irresponsibility. In retrospect, though, I feel I certainly would have thrived on "unschooling." In a recent paper, Ben and my colleague Andre Uhl, who first introduced me to unschooling, argue that it not only works for everyone, but that the current educational system, in addition to providing poor learning outcomes, impinges on the rights of children as individuals.

MIT is among a small number of institutions that, in the pre-internet era, provided a place for non-neurotypical types with extraordinary skills to gather and form community and culture. Even MIT, however, is still working out how to give these students the diversity and flexibility they need, especially in our undergraduate program.

I'm not sure how I'd be diagnosed, but I was completely incapable of being traditionally educated. I love to learn, but I go about it almost exclusively through conversations and while working on projects. I somehow kludged together a world view and life with plenty of struggle, but also with many rewards. I recently wrote a PhD dissertation about my theory of the world and how I developed it. Not that anyone should generalize from my experience--one reader of my dissertation said that I'm so unusual, I should be considered a "human sub-species." While I take that as a compliment, I think there are others like me who weren't as lucky and ended up going through the traditional system, mostly suffering rather than flourishing. In fact, most kids probably aren't as lucky as I was, and while some types are better suited for success in the current configuration of society, a huge percentage of kids who fail in the current system have a tremendous amount to contribute that we aren't tapping into.

In addition to equipping kids for basic literacy and civic engagement, industrial age schools were primarily focused on preparing kids to work in factories or perform repetitive white-collar jobs. It may have made sense to try to convert kids into (smart) robotlike individuals who could solve problems on standardized tests alone, with no smartphone or internet and just a No. 2 pencil. Sifting out non-neurotypical types or trying to remediate them with drugs or institutionalization may have seemed important for our industrial competitiveness. The tools for instruction were also limited by the technology of the times. In a world where real robots are taking over many of those tasks, perhaps we need to embrace neurodiversity and encourage collaborative learning through passion, play, and projects--in other words, to start teaching kids to learn in ways that machines can't. We can also use modern technology for connected learning that supports diverse interests and abilities and is integrated into our lives and communities of interest.

At the Media Lab, we have a research group called Lifelong Kindergarten, and the head of the group, Mitchel Resnick, recently wrote a book by the same name. The book is about the group's research on creative learning and the four Ps--Passion, Peers, Projects, and Play. The group believes, as I do, that we learn best when we are pursuing our passion and working with others in a project-based environment with a playful approach. My memory of school was "no cheating," "do your own work," "focus on the textbook, not on your hobbies or your projects," and "there's time to play at recess, be serious and study or you'll be shamed"--exactly the opposite of the four Ps.

Many mental health issues, I believe, are caused by trying to "fix" some types of neurodiversity, or simply by treating people who have them insensitively or inappropriately. Many mental "illnesses" can be "cured" by providing the appropriate interface to learning, living, or interacting for that person, with a focus on the four Ps. My experience with the educational system, both as its subject and, now, as part of it, is not so unique. I believe, in fact, that at least the one-quarter of people who are diagnosed as somehow non-neurotypical struggle with the structure and the method of modern education. People who are wired differently should be able to think of themselves as the rule, not as an exception.

Credits

Edits by Iyasu Nagata on July 8, 2021

As a Japanese, I grew up watching anime like Neon Genesis Evangelion, which depicts a future in which machines and humans merge into cyborg ecstasy. Such programs caused many of us kids to become giddy with dreams of becoming bionic superheroes. Robots have always been part of the Japanese psyche—our hero, Astro Boy, was officially entered into the legal registry as a resident of the city of Niiza, just north of Tokyo, which, as any non-Japanese can tell you, is no easy feat. Not only do we Japanese have no fear of our new robot overlords, we’re kind of looking forward to them.

It’s not that Westerners haven’t had their fair share of friendly robots like R2-D2 and Rosie, the Jetsons’ robot maid. But compared to the Japanese, the Western world is warier of robots. I think the difference has something to do with our different religious contexts, as well as historical differences with respect to industrial-scale slavery.

The Western concept of “humanity” is limited, and I think it’s time to seriously question whether we have the right to exploit the environment, animals, tools, or robots simply because we’re human and they are not.

Sometime in the late 1980s, I participated in a meeting organized by the Honda Foundation in which a Japanese professor—I can’t remember his name—made the case that the Japanese had more success integrating robots into society because of their country’s indigenous Shinto religion, which remains the official national religion of Japan.

Followers of Shinto, unlike Judeo-Christian monotheists and the Greeks before them, do not believe that humans are particularly “special.” Instead, there are spirits in everything, rather like the Force in Star Wars. Nature doesn’t belong to us, we belong to Nature, and spirits live in everything, including rocks, tools, homes, and even empty spaces.

The West, the professor contended, has a problem with the idea of things having spirits and feels that anthropomorphism, the attribution of human-like attributes to things or animals, is childish, primitive, or even bad. He argued that the Luddites who smashed the automated looms that were eliminating their jobs in the 19th century were an example of that, and for contrast he showed an image of a Japanese robot in a factory wearing a cap, having a name and being treated like a colleague rather than a creepy enemy.

The general idea that Japanese accept robots far more easily than Westerners is fairly common these days. Osamu Tezuka, the Japanese cartoonist and the creator of Astro Boy, noted the relationship between Buddhism and robots, saying, "Japanese don't make a distinction between man, the superior creature, and the world about him. Everything is fused together, and we accept robots easily along with the wide world about us, the insects, the rocks—it's all one. We have none of the doubting attitude toward robots, as pseudohumans, that you find in the West. So here you find no resistance, simply quiet acceptance." And while the Japanese did of course become agrarian and then industrial, Shinto and Buddhist influences have caused Japan to retain many of the rituals and sensibilities of a more pre-humanist period.

In Sapiens, Yuval Noah Harari, an Israeli historian, describes the notion of “humanity” as something that evolved in our belief system as we morphed from hunter-gatherers to shepherds to farmers to capitalists. When we were early hunter-gatherers, nature did not belong to us—we were simply part of nature—and many indigenous people today still live with belief systems that reflect this point of view. Native Americans listen to and talk to the wind. Indigenous hunters often use elaborate rituals to communicate with their prey and the predators in the forest. Many hunter-gatherer cultures, for example, are deeply connected to the land but have no tradition of land ownership, which has been a source of misunderstandings and clashes with Western colonists that continues even today.

It wasn’t until humans began engaging in animal husbandry and farming that we began to have the notion that we own and have dominion over other things, over nature. The notion that anything—a rock, a sheep, a dog, a car, or a person—can belong to a human being or a corporation is a relatively new idea. In many ways, it’s at the core of an idea of “humanity” that makes humans a special, protected class and, in the process, dehumanizes and oppresses anything that’s not human, living or non-living. Dehumanization and the notion of ownership and economics gave birth to slavery at scale.

In Stamped from the Beginning, the historian Ibram X. Kendi describes the colonial era debate in America about whether slaves should be exposed to Christianity. British common law stated that a Christian could not be enslaved, and many plantation owners feared that they would lose their slaves if they were Christianized. They therefore argued that blacks were too barbaric to become Christian. Others argued that Christianity would make slaves more docile and easier to control. Fundamentally, this debate was about whether Christianity—giving slaves a spiritual existence—increased or decreased the ability to control them. (The idea of permitting spirituality is fundamentally foreign to the Japanese because everything has a spirit and therefore it can’t be denied or permitted.)

This fear of being overthrown by the oppressed, or somehow becoming the oppressed, has weighed heavily on the minds of those in power since the beginning of mass slavery and the slave trade. I wonder if this fear is almost uniquely Judeo-Christian and might be feeding the Western fear of robots. (While Japan had what could be called slavery, it was never at an industrial scale.)

Lots of powerful people (in other words, mostly white men) in the West are publicly expressing their fears about the potential power of robots to rule humans, driving the public narrative. Yet many of the same people wringing their hands are also racing to build robots powerful enough to do that—and, of course, underwriting research to try to keep control of the machines they’re inventing, although this time it doesn’t involve Christianizing robots … yet.

Douglas Rushkoff, whose book, Team Human, is due out early next year, recently wrote about a meeting in which one of the attendees’ primary concerns was how rich people could control the security personnel protecting them in their armored bunkers after the money/climate/society armageddon. The financial titans at the meeting apparently brainstormed ideas like using neck control collars, securing food lockers, and replacing human security personnel with robots. Douglas suggested perhaps simply starting to be nicer to their security people now, before the revolution, but they thought it was already too late for that.

Friends express concern that, when I make a connection between slaves and robots, I may have the effect of dehumanizing slaves or the descendants of slaves, thus exacerbating an already tense and advanced war of words and symbols. While fighting the dehumanization of minorities and underprivileged people is important and something I spend a great deal of effort on, focusing strictly on the rights of humans, and not the rights of the environment, the animals, and even of things like robots, is one of the things that got us into this awful mess with the environment in the first place. In the long run, maybe it’s not so much about humanizing or dehumanizing, but rather about the problem of creating a privileged class—humans—whose status we use to arbitrarily justify ignoring, oppressing, and exploiting everything else.

Technology is now at a point where we need to start thinking about what, if any, rights robots deserve and how to codify and enforce those rights. Simply imagining that our relationships with robots will be like those of the human characters in Star Wars with C-3PO, R2-D2 and BB-8 is naive.

As Kate Darling, a researcher at the MIT Media Lab, notes in a paper on extending legal rights to robots, there is a great deal of evidence that human beings are sympathetic to and respond emotionally to social robots—even non-sentient ones. I don’t think this is some gimmick; rather, it’s something we must take seriously. We have a strong negative emotional response when someone kicks or abuses a robot—in one of the many gripping examples Darling cites in her paper, a US military officer called off a test using a leggy robot to detonate and clear minefields because he thought it was inhumane. This is a kind of anthropomorphization, and, conversely, we should think about what effect abusing a robot has on the abusing human.

My view is that merely replacing oppressed humans with oppressed machines will not fix the fundamentally dysfunctional order that has evolved over centuries. As a Shinto, I’m obviously biased, but I think that taking a look at “primitive” belief systems might be a good place to start. Thinking about the development and evolution of machine-based intelligence as an integrated “Extended Intelligence” rather than artificial intelligence that threatens humanity will also help.

As we make rules for robots and their rights, we will likely need to make policy before we know what their societal impact will be. Just as the Golden Rule teaches us to treat others the way we would like to be treated, abusing and “dehumanizing” robots prepares children and structures society to continue reinforcing the hierarchical class system that has been in place since the beginning of civilization.

It’s easy to see how the shepherds and farmers of yore could easily come up with the idea that humans were special, but I think AI and robots may help us begin to imagine that perhaps humans are just one instance of consciousness and that “humanity” is a bit overrated. Rather than just being human-centric, we must develop a respect for, and emotional and spiritual dialogue with, all things.

As part of my work in developing the Knowledge Futures Group collaboration with the MIT Press, I'm doing a deep dive into trying to understand the world of academic publishing. One of the interesting things that I discovered as I navigated the different protocols and platforms was the Digital Object Identifier (DOI). There is a foundation that manages DOIs and coordinates a federation of registration agencies. DOIs are used for many things, but the general idea is to create a persistent identifier for some digital object, like a dataset or a publication, and manage it at a level above the URL, which might change over the lifetime of the drafting and publication of an academic journal article or the movement of a movie through a supply chain.
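To make the "level above the URL" point concrete, here is a minimal sketch of how any DOI resolves. It is only an illustration: the public doi.org proxy redirects a DOI to whatever URL is currently registered for it, which is what keeps a citation working even if the underlying page moves. (The DOI used here is the one later assigned to this very post; any registered DOI behaves the same way.)

# A minimal sketch of DOI resolution via the public doi.org proxy.
import urllib.request

doi = "10.31859/20180822.2140"  # the DOI assigned to this post

# doi.org answers with a redirect to whatever URL is currently registered
# for the DOI; following the redirect lands on the current location.
req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
with urllib.request.urlopen(req) as resp:
    print(resp.geturl())  # the URL this DOI currently resolves to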

One registration agency, Crossref, focuses on DOIs for academic publications and citations across those publications, and its service has driven the widespread use of DOIs as a convenient and effective way of rigorously managing and tracking citations. Many services, like ORCID, which manages affiliations and publications for academics, use DOIs as one way to import and manage publications.

Although DOIs can be used for many things, because they are somewhat non-trivial to obtain and set up, and because of the success of Crossref, which serves academic publishers, they have become somewhat synonymous with authority, trustworthiness and formal publishing. Although Geoffrey Bilder from Crossref warns us that this is not true and that DOIs shouldn't signal that, I think that in fact they do, for now.

Something I noted as I started playing with all of the various tools available to academics to manage their profiles and their citations, and having only one peer reviewed paper to my name so far (thanks Karthik, Chelsea and Madars for that!), was that my blog posts weren't getting indexed. Also, as I was doing research while working on my dissertation, I noticed that blogs generally weren't very heavily cited. Using my privilege and in the name of research, I started bugging Amy Brand, director of the MIT Press, who worked on the adoption of DOIs when she was at Crossref. I asked whether I could get DOIs for my blog posts.

It wasn't as easy as it sounds. First of all, you need a DOI prefix--sort of like a domain--registered through one of the registration providers. Amy helped me get one, under the MIT Press, via Crossref. Boris defined the DOI suffix format, set up a submission generator and integrated everything into my blog. Alexa from MIT Press worked on getting the DOIs from my blog to Crossref. The next problem is that "blogs" are not a category of "thing" in the DOI world, so the closest category according to the experts was "dataset." So, this thing, formerly known as a blog post, that I'm writing is now a dataset contribution to the scholarly world. I do believe that it meets the standard of something that someone might possibly want to cite, so I don't feel guilty having a DOI assigned to it. I hope that Crossref will consider adding a blog post "creationType" or extending the schema more broadly for other citable web resources.

Also, I wish APA would update their blog citation format so that the name of the blog is part of the citation and not just the URL. In a rare act of disobedience, I've gone rogue and added the name of this blog in the APA citation template on this blog against their official guidelines. Strictly speaking, the APA citation for this post would be "Ito, J. (2018, August 22). Blog DOI enabled. [Blog post]. https://doi.org/10.31859/20180822.2140" but the citation tool here gives you: "Ito, J. (2018, August 22). Blog DOI enabled. Joi Ito's Web [Blog post]. https://doi.org/10.31859/20180822.2140". Sorry not sorry if you get dinged on your paper for using the modified format.
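For the curious, here is roughly what those pieces amount to in code. This is only a sketch under assumptions: the date-and-time suffix scheme is inferred from the DOI in the example above, and the citation wording follows the modified template described here; Boris's actual generator and the Crossref submission step are not shown.

from datetime import datetime, timezone

PREFIX = "10.31859"          # the DOI prefix registered via Crossref, under the MIT Press
BLOG_NAME = "Joi Ito's Web"  # added to the citation against the official APA guidelines

def blog_post_doi(published: datetime) -> str:
    # Assumed suffix scheme: the post's publication timestamp as YYYYMMDD.HHMM.
    return f"{PREFIX}/{published:%Y%m%d.%H%M}"

def apa_citation(author: str, published: datetime, title: str) -> str:
    # APA-style blog citation with the blog name included ("gone rogue").
    return (f"{author} ({published:%Y, %B} {published.day}). {title}. "
            f"{BLOG_NAME} [Blog post]. https://doi.org/{blog_post_doi(published)}")

post_date = datetime(2018, 8, 22, 21, 40, tzinfo=timezone.utc)
print(apa_citation("Ito, J.", post_date, "Blog DOI enabled"))
# Ito, J. (2018, August 22). Blog DOI enabled. Joi Ito's Web [Blog post].
# https://doi.org/10.31859/20180822.2140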

When I tweeted about the issue of blog posts not being cited, one of the concerns from the Twittersphere was lack of peer review for blogs. I think this is a valid request and concern, but not all things that are worthy of being cited need to be peer reviewed. On the other hand, clearly citing others, noting any contributors and their contribution to a blog post, and having some sort of peer review when it makes sense, is probably a good idea.

I'm not stuck on the use of the word "blog," although that's what I think this is. I just think that having the ability to rapidly publish, as blogs enable us to do, and have it connect to the world of academic literature is something worth considering.

Recently, academic preprint servers have become very popular, and a growing number of academics are skipping journal publishing altogether, posting their papers on preprint servers and presenting them at conferences instead of submitting them to journals.

My sense is that blogs can play a role in this ecosystem if we can tweak the academic publishing side, the culture on both sides, and some of the practices on the blogging side. Geoffrey suggests that DOIs should be assigned to anything that is citation-worthy, and I agree, but I think that blogs are, or could be, more like informal publications than merely citation-worthy blobs of data.

Boris Anthony, who has been my partner in thinking about this stuff and has been designing and maintaining my blog for the last 15 years or so, has been thinking deeply about the semantic web and the creation of knowledge and was critical in getting it sorted out on this blog. He was also the one who convinced me not to convert all of my blog posts into DOI'ed objects, but to pick the ones that might have some scholarly value. :-)

PS There appears to be a DOI plugin for WordPress using a prefix registered by the developer.

Credits

Boris Anthony for doing the actual technical and design work to get the DOIs deployed on this site and for help with the ideas and the editing of the post.

Amy Brand for her guidance in getting, understanding and writing about DOIs.

Alexa Masi for helping us sort out how to get the DOIs properly formatted and sent over to Crossref.

Around the time I turned 40, I decided to address the trifecta of concerns I had about climate change, animal rights, and my health: I went hard vegan. My doctor had been warning me to cut down on red meat, and I had also moved to a rural Japanese farming village populated by farmers growing a wide variety of veggies. They were delicious.

After a while, the euphoria wore off and the culinary limitations of vegan food, especially when traveling, became challenging. I joined the legions of ex-vegans to become a cheating pescatarian. (I wonder if this article will get me bumped off of the Wikipedia Notable Vegans list.) Five years later, the great Tohoku earthquake of 2011 hit Japan, dumping a pile of radioactive cesium-137 on top of our organic garden and shattering the wonderful organic loop we had created. I took my job at the Media Lab and moved to the US the same year, thus starting my slow but steady reentry into the community of animal eaters.

Ten years after I proclaimed myself vegan, I met Isha Datar1, the executive director of New Harvest, an organization devoted to advancing the science of what she calls “cellular agriculture.” Isha is trying to figure out how to grow any agricultural product—milk, eggs, flavors, fragrances, fish, fruit—from cells instead of animals.

Art fans will remember Oron Catts and Ionat Zurr, who in 2003 served “semi-living steak” grown from the skeletal muscles of frogs as an art project called Disembodied Cuisine. Five years later, they presented “Victimless Leather” at MoMA in New York, an installation that involved tissue growing inside a glass container in the shape of a leather jacket. Protests broke out when the museum had to disconnect the life support system because the jacket grew too big.

Isha wasn’t trying to make provocative art. She was and is trying to solve our food problem, and New Harvest is supporting and coordinating research efforts at numerous labs and research groups. Thanks to technology, we now have an explosion of meat-like products that run the ethical gamut in their production processes, and far more challenging choices to make than simply whether to be vegan, pescatarian or carnivore.

Civilians often clump the alternative meat companies and labs together in some kind of big meatless meatball, but, just like different kinds of self-driving car systems, they’re quite distinct. The Society of Automotive Engineers identifies five levels of autonomy; similarly, I see six levels of cellular agriculture. Just as “driver assist” is nice, having a car pick me up and drive me home is a completely different deal, and the latter might not evolve from the former—they might have separate development paths. I think the different branches of cellular agriculture are developing the same way.

Level 0: Just Be Vegan

Some plants are very high in protein, like beans, and they taste great just the way they are.

Level 1: Go Alternative

As a vegan, I ate a lot of processed plant-based proteins like tofu that feel fleshy and taste savory. I call these Level 1 meat alternatives. Many vegan Chinese restaurants serve “fake meat,” which is usually some sort of seitan, a wheat gluten, or textured vegetable protein like textured soy. It’s flavored and has a texture similar to some sort of animal protein, say, shrimp. This kind of protein substitute is a meat alternative—a plant-based protein that starts to mimic the experience of eating meat. Veggie burgers fall into this category.

Level 2: Get Cultured

These meat alternatives are also plant-based, but they contain some “cultured” proteins that are produced using a new scientific process. Yeast or bacteria are engineered to ferment some plant substances and output products that mimic or even replicate the proteins that make a plant-based recipe taste, smell, look or feel more like meat. Impossible Foods’ Impossible Burger falls into this category because its key ingredient is a protein called heme that is produced by genetically engineered yeast. Heme imparts “bloodiness” and “meatiness” to the plant-based burger-like base. This process relies on the industrial biotechnology and large-scale fermentation systems that are already used in the food industry. JUST’s Just Scramble “scrambled eggs” uses a proprietary process to create a plant-based protein as well, combining processes used in the pharmaceutical business, food R&D labs, and chemistry labs.

Level 3: Post-Vegan

Foods at this level are made of plant-based ingredients combined with cultured animal cells (as opposed to the products of bacterial fermentation). In other words, cells as ingredient, plants for mass. The animal cells provide the color, smell, or taste of meat, but not the substance. This relies on industrial biotech and large-scale cell-culture production methods already used in the pharma industry. Level 3 is the first level that requires going beyond the tools and the science already available in the food business.

Level 4: That's a Spicy Meatball

Level 4 alternatives are pure cultured animal cells like the products Memphis Meats and others are working on. The texture and shape of a real steak comes from the muscle cells that grow around the bones and otherwise self-organize into bundles of tissue. At Level 4, we aren’t really dealing with sophisticated texture yet, so we’re pretty much turning the cells we’ve grown into meatballs. (The difference between this and Level 3 is that most of the mass of the food here is animal cells, whereas Level 3 is mostly plant-based with cells sprinkled on top.)

Right now, the primary “media” for cell cultures is fetal serum (the most common type is harvested from cow fetuses), and it currently takes roughly 50 liters of serum and costs about $6,000 to produce a single beef burger. A key breakthrough needed to get us to Level 4 at a reasonable cost is figuring out a viable way of feeding cells using non-animal sources of energy. This will involve new science on the cell side and on the media side. And we need to better understand and reproduce nutrients and flavor molecules in addition to producing pure calories.

Level 5: Tastes Like Chicken

Now we get to something actually like a chicken thigh or T-bone steak. This is the Jetsons’ version that people imagine when they hear the phrase “lab-grown meat.” It is very much the goal of the alternative meat effort, and no one has achieved it yet. Scientifically, this requires the kind of advanced tissue science that is currently being developed to allow us to swap failed organs in our bodies with replacements grown outside of our bodies.

A beaker full of animal cells doesn’t give you the texture of a steak; with this technology, scientists can use 3-D scaffolding to encourage 3-D growth, and they can grow blood vessels in these tissues as well. We can even use plant-based materials as the scaffolding, but what we really want is for that scaffolding to also grow, which is how organs in our body grow. It turns out that research in regenerative medicine and tissue science is giving us a better understanding of how we might create the texture and scaffolding required to grow an actual kidney instead of just a petri dish full of kidney cells. Scientists have not really focused, however, on the idea of deploying tissue science for food ... yet.

Level 6: ZOMG What Is This?

Tasty fake meat is exciting, but not nearly as exciting as the idea of a completely new food system with a diversity of inputs and completely new outputs—a completely new food science. Imagine augmented meat tissue with novel nutritional profiles, texture, flavor and other characteristics—in other words, instead of just trying to recreate meat, scientists develop completely new ingredients that are actually “post-meat.”

Let me explain what investors and I find so exciting about all this activity. My dream, and Isha’s dream, is that we figure out a way to make use of extremely efficient “energy harvesters” like algae, kelp, fungi, or anything else that can take a renewable energy source like the sun and convert it into calories. The idea is to figure out a mechanism to convert these organic stores of energy into inputs for bioreactors, which would then transform these calories into anything we want.

Scientists have made so many advances in terms of using microbes as factories (including fermentation) as well as in genomics, tissue engineering, and stem cells, that it’s feasible to imagine a system that unleashes a culinary bonanza of nutritional, flavor and texture options for future chefs while also lowering the environmental impact of belching cows, concentrated animal-feeding operations, and expensive and energy-inefficient refrigerated supply chains. (The livestock industry uses 70 percent of all land suitable for agriculture, and livestock accounts for as much as 51 percent of greenhouse gas emissions.) Eating meat is one of the most environmentally negative things humans do. I can imagine a food supply system that is even more efficient than eating fresh plants, which still requires refrigeration: Move the materials and calories around in shelf-stable forms, and simply “just add water” at the end in the way that adding water magically spawned sea monkeys when I was a kid.

Such a food industry would also need to develop bioreactors—think bread machines with cell cartridges or breweries that make meat, not beer—that would intake the raw materials and spit out lamb chops. That feels like an engineering task to be undertaken once the cellular biology gets worked out.

So far, most of the investment in the companies trying to rethink meat has come from venture capitalists, and they are impatient. This puts the startups they underwrite under pressure to get products to market quickly and generate financial returns, and makes it highly unlikely that we’ll get to Level 4 or 5 with VC-backed science alone. Basic research funding from philanthropy and government needs to be increased, and biomedical researchers need to be convinced to apply their expertise and knowledge to cellular agriculture.

And, indeed, many labs that Isha is working with are working on the basic research. Some are focused on establishing cell cultures from agricultural animals; others are working to grow animal cells on plants by removing cells from the plants and replacing them with living muscle cells, effectively using the plant as a scaffolding.

The work of Isha's small network of scientists reminds me of the early days of neuroscience, when there was almost no federal funding for brain research. Then, suddenly, it became “a thing.” I think we’re reaching that same moment for meat, as climate change becomes an ever more pressing concern; the health impact of eating meat becomes more clear; and our population approaches 10 billion people, threatening our food supply.

Most of the people currently supporting the cellular agriculture movement are animal rights advocates. That’s a fine motivation, but figuring out a completely new design for the creation of food is going to take some real science, and we need to start now. Not only might it save us from future starvation, make a major contribution to reversing climate change, help avert the antibiotic resistance armageddon, and help restore fish populations in the oceans, it might also unlock a culinary creative explosion.

1 Disclosure: After meeting Isha, I recruited her to be a Director’s Fellow at the Media Lab where she is inspiring us with her work and her vision.

In 2011, when we announced that I would join the Media Lab as the new Director, many people thought it was an unusual choice partially because I had never earned a higher degree - not even an undergraduate degree. I had dropped out of Tufts as well as the University of Chicago and had spent most of my life doing all sorts of weird jobs and building and running companies and nonprofits.

I think it took quite a bit of courage on the part of the Media Lab and MIT to hire a Director with no college degree, but once we got over the hump, some felt it was a kind of "badge of honor." (I'm also sure not everyone felt this way.)

Jun Murai, father of the Japanese Internet and my mentor in Japan, who is the Dean of the Graduate School of Media and Governance at Keio University, had been encouraging me to complete a PhD in his program. We had been discussing this in earnest since June 2010, when they confirmed that Keio would be OK with awarding a PhD to someone without a Bachelor's or a Master's degree. When I joined the Media Lab, I asked the co-founder and first Director of the Lab, Nicholas Negroponte, whether it would help me if I completed the PhD. He recommended (at the time) that I not complete the PhD because it was more interesting that I didn't have a degree.

Eight years later, I am often referred to as "the academic" when I'm on panels, and I advise and work with many students, including PhD students. It felt like it was time to finish the PhD. In other words, one product of my profession is degrees, and I felt like I needed to try the product. Even Nicholas agreed when I asked him.

The degree that I earned is a "Thesis PhD," a less common type of PhD that you don't see very much in the US. It involves writing about and defending the academic value and contribution of your work, rather than doing new work in residence at an institution. The sequencing and ordering are different from those of a typical PhD.

The process involved writing a dissertation and putting together a package that was accepted by the university. After that, a committee was formally constituted with Jun Murai as the lead advisor and Rod Van Meter, Keiko Okawa, Hiroya Tanaka, and Jonathan Zittrain as committee members and thesis readers. They provided feedback and detailed critique on the thesis, which I rewrote based on this feedback. On June 6, I defended the thesis publicly at Keio University and, based on the questions and feedback from the defense, I rewrote the dissertation again.

On June 21 I had a final exam, which involved a presentation to the committee of all of the changes and responses to the criticisms and suggestions. The committee had a closed-door discussion and formally accepted the dissertation. I rewrote, formatted, and polished the dissertation some more and submitted the final version in printed form on July 20.

Finally, on behalf of the committee, Jun Murai prepared and presented the case at a faculty meeting on July 30, 2018, where the faculty voted and awarded the PhD.

Although by definition and according to the rules the dissertation is entirely my own work, I couldn't have done it without the help of my advisors, collaborators, and all of the people I've worked with over the years.

While I started this project mostly to understand the process and "see what it was like" to work on a degree, I learned a lot during the process of researching, reading, and talking to people about my dissertation. The dissertation, titled "The Practice of Change", is available online both in PDF and in LaTeX as a GitHub repo. It's a summary of a lot of the work that I've done so far, a question about how we understand, design solutions for, and try to address the current challenges to our society, and how the work going on at the Media Lab might be applied to or provide inspiration for people trying to work on addressing these challenges.

In some ways, the dissertation feels like I've gone around and kicked a dozen hornets' nests. I've mostly stayed out of extremely academic discourse in the past, but the process of trying to understand a number of different disciplines in order to describe the context of my work has caused me to wade into many old and new arguments. I'm sure that many of my forays into various disciplines will annoy those well versed in them, but the constructive criticism I've received about my treatment of those disciplines has surfaced an exciting array of future work for me.

So while I do not believe that I have yet become a "serious academic" or that I will be focused primarily on research and academic output, I feel like I've discovered a new lens through which to look at things -- a new world to explore. It reminds me of entering a new zone in a game like World of Warcraft where there are new quests, new skills, new reps to grind, and lots of new things to learn. So fun.

Credits

To my late godfather Timothy Leary for “Question Authority and Think For Yourself.”

To Jun Murai for pushing me to do this dissertation.

To my thesis advisors: Hiroya Tanaka, Rodney D. Van Meter, Keiko Okawa and Jonathan L. Zittrain for their extensive feedback, guidance and encouragement.

To Nicholas Negroponte for the Media Lab and his mentorship.

To the late Kenichi Fukui for encouraging me to think about complex systems and the limits of reduction.

To the late John Perry Barlow for the “Declaration of Independence of Cyberspace.”

To Hashim Sarkis for sending me in the direction of Foucault.

To Martin Nowak for his guidance on Evolutionary Dynamics.

To my colleagues at MIT and particularly at the Media Lab for continuous inspiration and my raison d’être.

To my research colleagues Karthik Dinakar, Chia Evers, Natalie Saltiel, Pratik Shah, and Andre Uhl for helping me with everything, including this thesis.

To Yuka Sasaki, Stephanie Strom, and Mika Tanaka for helping me pull this dissertation together.

To David Weinberger for “The final edit.”

To Sean Bonner, Danese Cooper, Ariel Ekblaw, Pieter Franken, Mizuko Ito, Mike Linksvayer, Pip Mothersill, Diane Peters, Deb Roy and Jeffrey Shapard for their feedback on various parts of the dissertation.

Finally, thanks to Kio and Mizuka for making room in our family life to work on this and for supporting me through the process.

In the summer of 1990, I was running a pretty weird nightclub in the Roppongi neighborhood of Tokyo. I was deeply immersed in the global cyberpunk scene and working to bring the Tokyo node of this fast-expanding, posthuman, science-fiction-and-psychedelic-drug-fueled movement online. The Japanese scene was more centered around videogames and multimedia than around acid and other psychedelics, and Timothy Leary, a dean of ’60s counterculture and proponent of psychedelia who was always fascinated with anything mind-expanding, was interested in learning more about it. Tim anointed the Japanese youth, including the 24-year-old me, “The New Breed.” He adopted me as a godson, and we started writing a book about The New Breed together, starting with “tune in, turn on, take over,” as a riff off Tim’s original and very famous “turn on, tune in, drop out.” We never finished the book, but we did end up spending a lot of time together. (I should dig out my old notes and finish the book.)

Tim introduced me to his friends in Los Angeles and San Francisco. They were a living menagerie of the counterculture in the United States since the ’60s. There were the traditional New Age types: hippies, cyberpunks, and transhumanists, too. In my early twenties, I was an eager and budding techno-utopian, dreaming of the day when I would become immortal and ascend to the stars into cryogenic slumber to awake on a distant planet. Or perhaps I would have my brain uploaded into a computer network, to become part of some intergalactic superbrain.

Good times. Those were the days and, for some, still are.

We’ve been yearning for immortality at least since the Epic of Gilgamesh. In Greek mythology, Zeus grants Eos’s mortal lover Tithonus immortality—but the goddess forgets to ask for eternal youth as well. Tithonus grows old and decrepit, begging for death. When I hear about life extension today, I am often perplexed, even frustrated. Are we talking about eternal youth, eternal old age, or having our cryogenically frozen brains thawed out 2,000 years from now to perform tricks in a future alien zoo?

The latest enthusiasm for eternal life largely stems not from any acid-soaked, tie-dyed counterculture but from the belief that technology will enhance humans and make them immortal. Today’s transhumanist movement, sometimes called H+, encompasses a broad range of issues and diversity of belief, but the notion of immortality—or, more correctly, amortality—is the central tenet. Transhumanists believe that technology will inevitably eliminate aging or disease as causes of death and instead turn death into the result of an accidental or voluntary physical intervention.

As science marches forward and age reversal and the elimination of disease become real possibilities, what once seemed like a science fiction dream is becoming more real, transforming the transhumanist movement and its role in society from a crazy subculture into a Silicon Valley money- and technology-fueled “shot on goal,” more of a practical “hedge” than the sci-fi dream of its progenitors.

Transhumanism can be traced back to futurists in the ’60s, most notably FM-2030. As the development of new, computer-based technologies began to turn into a revolution to rival the Industrial Revolution, Max More defined transhumanism as the effort to become “posthuman” through scientific advances like mind “uploading.” He developed his own variant of transhumanism and named it Extropy, and together with Tom Morrow, founded the Extropy Institute, whose email list created a community of Extropians in the internet’s cyberpunk era. Its members discussed AI, cryonics, nanotech and cryptoanarchy, among other things, and some reverted to transhumanism, creating an organization now known as Humanity+. As the Tech Revolution continued, Extropians and transhumanists began actively experimenting with technology’s ability to deliver amortality.

In fact, Timothy Leary planned to have his head frozen by Alcor, preserving his brain and, presumably, his sense of humor and unique intelligence. But as he approached his death—I happened to visit him the night before he died in 1996—the vibe of the Alcor team moving weird cryo-gear into his house creeped Tim out, and he ended up opting for the “shoot my ashes into space” path, which seemed more appropriate to me as well. All of his friends got a bit of his ashes, too, and having Timothy Leary ashes became “a thing” for a while. It left me wondering, every time I spoke to groups of transhumanists shaking their fists in the air and rattling their Alcor “freeze me when I die” bracelets: How many would actually go through with the freezing?

That was 20 years ago. The transhumanist and Extropian movements (and even the Media Lab) have gotten more sober since those techno-utopian days, when even I was giddy with optimism. Nonetheless, as science fiction gives way to real science, many of the ZOMG if only conversations are becoming arguments about when and how, and the shift from Haight-Ashbury to Silicon Valley has stripped the movement of its tie-dye and beads and replaced them with Pied Piper shirts. Just as the road to hell is paved with good intentions, the road that brought us Cambridge Analytica and the Pizzagate conspiracy was paved with optimism and oaths to not be evil.

Renowned Harvard geneticist George Church once told me that breakthroughs in biological engineering are coming so fast we can’t predict how they will develop going forward. Crispr, a low-cost gene editing technology that is transforming our ability to design and edit the genome, was completely unanticipated; experts thought it was impossible ... until it wasn’t. Next-generation gene sequencing is decreasing in price, far faster than Moore’s Law for processors. In many ways, bioengineering is moving faster than computing. Church believes that amortality and age reversal will seem difficult and fraught with issues ... until they aren’t. He is currently experimenting with age reversal in dogs using gene therapy that has been successful in mice, a technique he believes is the most promising of nine broad approaches to mortality and aging—genome stability, telomere extension, epigenetics, proteostasis, caloric restriction, mitochondrial research, cell senescence, stem cell exhaustion, and intercellular communication.

Church’s research is but one of the key efforts giving us hope that we may someday understand aging and possibly reverse it. My bet is that we will significantly lengthen, if not eliminate, the notion of a “natural lifespan,” although it’s impossible to predict exactly when.

But what does this mean? Making things technically possible doesn’t always make them societally possible or even desirable, and just because we can do something doesn’t mean we should (as we’re increasingly realizing, watching the technologies we have developed transform into dark zombies instead of the wonderful utopian tools their designers imagined).

Human beings are tremendously adaptable and resilient, and we seem to quickly adjust to almost any technological change. Unfortunately, not all of our problems are technical and we are really bad at fixing social problems. Even the ones that we like to think we’ve fixed, like racism, keep morphing and getting stronger, like drug-resistant pathogens.

Don’t get me wrong—I think it’s important to be optimistic and passionate and push the boundaries of understanding to improve the human condition. But there is a religious tone in some of the arguments, and even a Way of the Future Church, which believes that “the creation of ‘super intelligence’ is inevitable.” As Yuval Harari writes in Homo Deus, “new technologies kill old gods and give birth to new gods.” Martin Rees, now Lord Rees but then still Sir Martin, once told a group of us a story (which has been retold in various forms in various places) about how he was interviewed by what he called “the society for the abolition of involuntary death” in California. The members offered to put him in cryonic storage when he died, and when he politely told them he’d rather be dead than in a deep freeze, they called him a “deathist.”

Transhumanists correctly argue that every time you take a baby aspirin (or have open heart surgery), you’re intervening to make your life better and longer. They contend that there is no categorical difference between many modern medical procedures and the quest to beat death; it’s just a matter of degree. I tend to agree.

Yet we can clearly imagine the perils of amortality. Would dictators hold onto power endlessly? How would universities work if faculty never retired? Would the population explode? Would endless life be only for the wealthy, or would the poor be forced to toil forever? Clearly many of our social and philosophical systems would break. Back in 2003, Francis Fukuyama, in Our Posthuman Future: Consequences of the Biotechnology Revolution, warned us of the perils of life extension and explained how biotech was taking us into a posthuman future with catastrophic consequences to civilization even with the best intentions.

I think it’s unlikely that we’ll be uploading our minds to computers any time soon, but I do believe changes that challenge what it means to be “human” are coming. Philosopher Nikola Danaylov says in his Transhumanist Manifesto, “We must all respect autonomy and individual rights of all sentience throughout the universe, including humans, non-human animals, and any future AI, modified life forms, or other intelligences.” That sounds progressive and good.

Still, in his manifesto Nikola also writes, “Transhumanists of the world unite—we have immortality to gain and only biology to lose.” That sounds a little scary to me. I poked Nikola about this, and he pointed out that he wrote the manifesto a while ago and that his position has become more nuanced. But many of his peers are as radical as ever. I think transhumanism, especially its strong, passionate base in exuberant Silicon Valley, could use an overhaul that makes it more attentive to and integrated with our complex societal systems. At the same time, we need to help the “left-behind” parts of society catch up and participate in, rather than just become subjected to, the technological transformations that are looming. Now that the dog has caught the car, transhumanism has to transform our fantasy into a responsible reality.

I, for one, still dream of flourishing in the future through advances in science and technology, but hopefully a future that addresses societal inequities and retains the richness and diversity of our natural systems and indigenous cultures, rather than the somewhat simple and sterile futures depicted by many science fiction writers and futurists. Timothy Leary liked to remind us to remember our hippie roots, with their celebration of diversity and nature, and I hear him calling us again.

Everyone from the ACLU to the Koch brothers wants to reduce the number of people in prison and in jail. Liberals view mass incarceration as an unjust result of a racist system. Conservatives view the criminal justice system as an inefficient system in dire need of reform. But both sides agree: Reducing the number of people behind bars is an all-around good idea.

To that end, AI—in particular, so-called predictive technologies—has been deployed to support various parts of our criminal justice system. For instance, predictive policing uses data about previous arrests and neighborhoods to direct police to where they might find more crime, and similar systems are used to assess the risk of recidivism for bail, parole, and even sentencing decisions. Reformers across the political spectrum have touted risk assessment by algorithm as more objective than decision-making by an individual. Take the decision of whether to release someone from jail before their trial. Proponents of risk assessment argue that many more individuals could be released if only judges had access to more reliable and efficient tools to evaluate their risk.

Yet a 2016 ProPublica investigation revealed that not only were these assessments often inaccurate, but the cost of that inaccuracy was borne disproportionately by African American defendants, whom the algorithms were almost twice as likely as white defendants to label as a high risk for committing subsequent crimes or violating the terms of their parole.
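To make that kind of audit concrete, here is a minimal sketch in Python of the disparity check at issue: comparing false positive rates across two groups of defendants. The records, group labels, and numbers are invented for illustration; this is not ProPublica's data or methodology, only the general shape of the calculation.

    # Hypothetical audit: compare false positive rates across two groups.
    # Each record is (group, predicted_high_risk, actually_reoffended).
    records = [
        ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
        ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
    ]

    def false_positive_rate(rows):
        # Share of people who did NOT reoffend but were still labeled high risk.
        negatives = [r for r in rows if not r[2]]
        return sum(1 for r in negatives if r[1]) / len(negatives) if negatives else 0.0

    for group in sorted({g for g, _, _ in records}):
        rows = [r for r in records if r[0] == group]
        print(group, round(false_positive_rate(rows), 2))  # A: 0.67, B: 0.33

A gap like the one printed here, with group A incorrectly flagged twice as often as group B, is exactly the kind of disparity the investigation surfaced.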

We’re using algorithms as crystal balls to make predictions on behalf of society, when we should be using them as a mirror to examine ourselves and our social systems more critically. Machine learning and data science can help us better understand and address the underlying causes of poverty and crime, as long as we stop using these tools to automate decision-making and reinscribe historical injustice.

Training Troubles

Most modern AI requires massive amounts of data to train a machine to more accurately predict the future. When systems are trained to help doctors spot, say, skin cancer, the benefits are clear. But, in a creepy illustration of the importance of the data used to train algorithms, a team at the Media Lab created what is probably the world’s first artificial intelligence psychopath and trained it on a notorious subreddit that documents disturbing, violent death. They named the algorithm Norman and began showing it Rorschach inkblots. They also trained a standard algorithm on more benign inputs. Where the standard algorithm saw birds perched on a tree branch, Norman saw a man electrocuted to death.

So when machine-based prediction is used to make decisions affecting the lives of vulnerable people, we run the risk of hurting people who are already disadvantaged—moving more power from the governed to the governing. This is at odds with the fundamental premise of democracy.

States like New Jersey have adopted pretrial risk assessment in an effort to minimize or eliminate the use of cash-based bail, which multiple studies have shown is not only ineffective but also often deeply punitive for those who cannot pay. In many cases, the cash bail requirement is effectively a means of detaining defendants and denying them one of their most basic rights: the right to liberty under the presumption of innocence.

While cash bail reform is an admirable goal, critics of risk assessment are concerned that such efforts might lead to an expansion of punitive nonmonetary conditions, such as electronic monitoring and mandatory drug testing. Right now, assessments provide little to no insight into how a defendant’s risk is connected to the various conditions a judge might set for release. As a result, judges are ill-equipped to ask important questions about how release with conditions such as drug testing or GPS-equipped ankle bracelets actually affects outcomes for the defendants and society. Will, for instance, an ankle bracelet interfere with a defendant’s ability to work while awaiting trial? In light of these concerns, risk assessments may end up simply legitimizing new types of harmful practices. In this, we miss an opportunity: Data scientists should focus more deeply on understanding the underlying causes of crime and poverty, rather than simply using regression models and machine learning to punish people in high-risk situations.

Such issues are not limited to the criminal justice system. In her latest book, Automating Inequality, Virginia Eubanks describes several compelling examples of failed attempts by state and local governments to use algorithms to help make decisions. One heartbreaking example Eubanks offers is the use of data by the Office of Children, Youth, and Families in Allegheny County, Pennsylvania, to screen calls and assign risk scores to families that help decide whether case workers should intervene to ensure the welfare of a child.

To assess a child’s particular risk, the algorithm primarily “learns” from data that comes from public agencies, where a record is created every time someone applies for low-cost or free public services, such as the Supplemental Nutrition Assistance Program. This means that the system essentially judges poor children to be at higher risk than wealthier children who do not access social services. As a result, the symptoms of a high-risk child look a lot like the symptoms of poverty, the result of merely living in a household that has trouble making ends meet. Based on such data, a child could be removed from her home and placed into the custody of the state, where her outcomes look quite bleak, simply because her mother couldn’t afford to buy diapers.
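As a purely hypothetical sketch of that mechanism: when the only inputs available are records generated by public agencies, a count of those records ends up standing in for poverty, and any score built on it rises with need rather than with actual danger to the child. The feature names and weights below are invented; this is not the Allegheny screening tool.

    def toy_risk_score(public_agency_records, prior_abuse_reports):
        # Invented weights for illustration only. The first feature exists
        # only for families who use public services, so it tracks poverty.
        return 0.6 * public_agency_records + 0.4 * prior_abuse_reports

    # Two families with no history of harm, distinguished only by whether
    # they show up in public-agency data at all.
    family_using_snap = toy_risk_score(public_agency_records=7, prior_abuse_reports=0)
    family_with_private_safety_net = toy_risk_score(public_agency_records=0, prior_abuse_reports=0)
    print(family_using_snap, family_with_private_safety_net)  # 4.2 vs 0.0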

Look for Causes

Rather than using predictive algorithms to punish low-income families by removing their children, Eubanks argues we should be using data and algorithms to assess the underlying drivers of poverty that exist in a child’s life and then ask better questions about which interventions will be most effective in stabilizing the home.

This is a topic that my colleague Chelsea Barabas discussed at length at the recent Conference on Fairness, Accountability, and Transparency, where she presented our paper, “Interventions Over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” In the paper, we argue that the technical community has used the wrong yardstick to measure the ethical stakes of AI-enabled technologies. By narrowly framing the risks and benefits of artificial intelligence in terms of bias and accuracy, we’ve overlooked more fundamental questions about how the introduction of automation, profiling software, and predictive models connects to outcomes that benefit society.

To reframe the debate, we must stop striving for “unbiased” prediction and start understanding causal relationships. What caused someone to miss a court date? Why did a mother keep a diaper on her child for so long without changing it? The use of algorithms to help administer public services presents an amazing opportunity to design effective social interventions—and a tremendous risk of locking in existing social inequity. This is the focus of the Humanizing AI in Law (HAL) work that we are doing at the Media Lab, along with a small but growing number of efforts involving the combined efforts of social scientists and computer scientists.

This is not to say that prediction isn’t useful, nor is it to say that understanding causal relationships in itself will fix everything. Addressing our societal problems is hard. My point is that we must use the massive amounts of data available to us to better understand what’s actually going on. This refocus could make the future one of greater equality and opportunity, and less a Minority Report–type nightmare.

On May 13, 2018, I innocently asked on Twitter about citing blog posts in academic papers.

240 replies later, it is clear that blogs don't make it into the academic journalsphere. People cited two main reasons: the lack of longevity of links and the lack of peer review. I would like to point out that my blog URLs have been solid and permanent since I launched this version of my website in 2002, but it's still a fairly valid point. There are a number of ideas about how to solve this, and several people pointed out that The Internet Archive does a pretty good job of keeping an archive of many sites.

There was quite a bit of discussion about peer review. Karim Lakhani posted a link about a study he did on peer review:

In the study, he says that "we find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel."

Many people on Twitter mentioned pre-prints, an emerging trend of publishing drafts before peer review, since formal review can take so long. Many fields are skipping formal peer review and just focusing on pre-prints. In some fields, ad hoc and informal peer groups are reviewing pre-prints, and some journals are even referring to these informal review groups.

This sounds an awful lot like how we review each other's work on blogs. We cite, discuss and share links -- the best blog posts getting the most links. In the early days of Google, this would guarantee being on the first page of search results. Some great blog posts like Tim O'Reilly's "What Is Web 2.0" have ended up becoming canonical. So when people tell me that their professors don't want them to cite blogs in their academic papers, I'm not feelin' it.
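As a rough sketch of what that meant for early search, here is a toy version of link-based ranking in the spirit of PageRank; the real Google algorithm was, of course, far more elaborate, and the link graph below is invented. The point is simply that the post everyone links to floats to the top.

    # Toy link-based ranking in the spirit of early PageRank.
    # Nodes are blog posts; edges are outgoing links. Graph is made up.
    links = {
        "post_a": ["canonical_post"],
        "post_b": ["post_a", "canonical_post"],
        "post_c": ["post_b", "canonical_post"],
        "canonical_post": ["post_a"],
    }

    damping = 0.85
    pages = list(links)
    ranks = {page: 1.0 / len(pages) for page in pages}

    for _ in range(50):  # simple power iteration
        new_ranks = {}
        for page in pages:
            inbound = sum(ranks[src] / len(outs)
                          for src, outs in links.items() if page in outs)
            new_ranks[page] = (1 - damping) / len(pages) + damping * inbound
        ranks = new_ranks

    for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))  # canonical_post ranks first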

It may be true that peer review is better than the alternatives, but it definitely could be improved. SCIgen, invented in 2005 by MIT researchers, creates meaningless papers that have been successfully submitted to conferences. In 2014, Springer and IEEE removed more than 120 papers when a French researcher discovered that they were computer-generated fakes. Even peer review itself has been successfully imitated by machines.

At the Media Lab and the MIT Press, we are exploring new ways to publish, with experiments like PubPub, and there are ongoing discussions about the future of peer review. People like Jess Polka at ASAPbio are working on these issues as well. I'm very excited about the progress, but there's a long way to go.

One thing we can do is make blogs more citation-friendly. Some people on Twitter mentioned that it's clearer who did what in an academic paper than in a blog post. At the urging of Jeremy Rubin, I started to put credits at the bottom of blog posts when I received a lot of help -- for example, my post on the FinTech Bubble. Also, Boris just added a "cite" button at the bottom of each of my blog posts. Try it! I suppose the next thing is to consider DOI numbers for each post, although it's not obvious how independent bloggers would get them without paying a bunch of money.

One annoying thing is that the citation format for blogs sucks. When you Google "cite blog post," you end up at... a blog post about "How to Cite a Blog Post in MLA, APA, or Chicago." According to that blog post, the APA citation for this post would be, "Ito, J. (2018, May). Citing Blogs. [Blog post]. https://joi.ito.com/weblog/2018/05/28/citing-blogs.html" That's annoying. Isn't the name of my blog relevant? If you look at the Citing Electronic Sources section of the MIT Academic Integrity website, they link to the Purdue OWL page. Purdue gives a slightly more cryptic example using a blog comment in the square brackets, but it's roughly similar. I don't see why the name of my blog is less important than some random journal, so I'm going to put it in italics -- APA guidelines be damned. Who do we lobby to change the APA guidelines to lift blog names out of the URL and into the body of the citation?
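In the meantime, here is a small, hypothetical helper that builds the citation the way I'd prefer it, with the blog name promoted into the body of the citation. The asterisks stand in for italics, and the format is my own variant, not an official APA style.

    def cite_blog_post(author, year, month, title, blog_name, url):
        # APA-ish blog citation that keeps the blog name;
        # asterisks mark the italicized blog title.
        return (f"{author} ({year}, {month}). {title}. [Blog post]. "
                f"*{blog_name}*. {url}")

    print(cite_blog_post(
        author="Ito, J.",
        year=2018,
        month="May",
        title="Citing Blogs",
        blog_name="Joi Ito's Web",
        url="https://joi.ito.com/weblog/2018/05/28/citing-blogs.html",
    ))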

Credits

Boris Anthony and Travis Rich for the work on citations for this blog and the discussion about the citation format.

Amy Brand for the link to the Peer Review Transparency site and the introduction to Jess Polka.

I received a lot of excited feedback from people who saw the 60 Minutes segment on the Media Lab. I also got a few less congratulatory messages questioning the "gee-whiz-isn't-this-all-great" depiction of the Lab and asking why we seemed so relentlessly upbeat at a time when so many of the negative consequences of technology are coming to light. Juxtaposed with the first segment in the program about Aleksandr Kogan, the academic who created the Cambridge Analytica app that mined Facebook, the Media Lab segment appeared, to some, blithely upbeat. And perhaps it reinforced the sometimes unfair image of the Media Lab as a techno-Utopian hype machine.

Of course, the piece clocked in at about 12 minutes and focused on a small handful of projects; it's to be expected that it didn't represent the full range of research or the full spectrum of ideas and questions that this community brings to its endeavors. In my interview, most of my comments focused on how we need more reflection on where we have come in science and technology over the 30-plus years that the Media Lab has been around. I also stressed how at the Lab we're thinking a lot more about the impact technology is having on society, climate, and other systems. But in such a short piece--and one that was intended to showcase technological achievements, not to question the ethical rigor applied to those achievements--it's no surprise that not much of what I said made it into the final cut.

What was particularly interesting about the 60 Minutes segment was the producers' choice of "Future Factory" for the title. I got a letter from one Randall G. Nichols, of Missouri, pointing out that "No one in the segment seems to be studying the fact that technology is creating harmful conditions for the Earth, worse learning conditions for a substantial number of kids, decreasing judgment and attention in many of us, and so on." If we're manufacturing the future here, shouldn't we be at least a little concerned about the far-reaching and unforeseen impact of what we create? I think most of us agree that, yes, absolutely, we should be! And what I'd say to Randall is, we are.

In fact, the lack of critical reflection in science and technology has been on my mind -- I wrote about it in Resisting Reduction. Much of our work at the Lab helps us better understand and intervene responsibly in societal issues, including Deb Roy's Depolarization by Design class and almost all of the work in the Center for Civic Media. There's Kevin Esvelt's work involving communities in the deployment of the CRISPR gene drive, and Danielle Wood's work generally and, more specifically, her interest in science and racial issues. And Pattie Maes is making her students watch Black Mirror to imagine how the work we do in the Lab might unintentionally go wrong. I'm also teaching a class on the ethics and governance of AI with Jonathan Zittrain from Harvard Law School, which aims to ensure that the generation now rising is more thoughtful about the societal impact of AI as it is deployed. I could go on.

It's not that I'm apologetic about the institutional optimism that the 60 Minutes piece captured. Optimism is a necessary part of our work at the Lab. Passion and optimism drive us to push the boundaries of science and technology. It's healthy to have a mix of viewpoints -- critical, contemplative, and optimistic -- in our ecosystem. Not all aspects of that can necessarily be captured in 12 minutes, though. I'm sure that our balance of caution and optimism isn't satisfactory for quite a few critical social scientists, but I think that a quick look at some of the projects I mention will show a more balanced approach than would appear to be the case from the 60 Minutes segment.

Having said that, I believe that we need to continue to integrate social sciences and reflection even more deeply into our science and technology work. While I have a big voice at the Lab, the Lab operates on a "permissionless innovation" model where I don't tell researchers what to do (and neither do our funders). On the other hand, we have safety and other codes that we have to follow--is there an equivalent ethical or social code that we or other institutions should have? Harrison Eiteljorg, II thinks so. He wrote, "I would like to encourage you to consider adding to your staff at least one scholar whose job is to examine projects for the ethical implications for the work and its potential final outcome." I wonder, what would such a process look like?

Socially integrated technology work continues to increase, both at the Lab and in the rest of society. One of my questions is whether the Lab is changing fast enough, and whether the somewhat emergent way that this work is infusing itself into the Lab is the appropriate one. Doing my own ethical and critical work and having these conversations is the easiest way for me to contribute, but I wonder if there is more that we as a Lab should be doing.

One of the main arcs of the 60 Minutes piece was showing how technologies built in the Lab's early days -- touch screens, voice commands, things that were so far ahead of their time in the '80s and '90s as to seem magical -- have gone out into the world and become part of the fabric of our everyday lives. The idea of highlighting the Lab as a "future factory" was to suggest that the loftiest and "craziest" ideas we're working on now might one day be just as commonplace. But I'd like to challenge myself, and everyone at the Media Lab, to demonstrate our evolution in thoughtful critique, as well.