
The Era of Killer Robots Is Here

Ukraine has become a Silicon Valley for autonomous weapons.

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

natalie kitroeff

From The New York Times, I’m Natalie Kitroeff. And this is “The Daily.”

[MUSIC PLAYING]

Outmanned and outgunned in what has become a war of attrition against Russia, Ukraine has looked for any way to overcome its vulnerabilities on the battlefield. That search has led to the emergence of killer robots. Today, my colleague Paul Mozur on how Ukraine has become a Silicon Valley for autonomous weapons and how artificial intelligence is reshaping modern warfare.

[MUSIC PLAYING]

It’s Tuesday, July 9th.

Paul, when we’ve talked on the show about the applications of advanced artificial intelligence, one of the scarier ideas has been that militaries around the world could use it to make autonomous killing machines, i.e., killer robots. Your reporting shows that this may already be happening. Tell me about it.

paul mozur

Yeah, so, when I first got started on my reporting, I thought this was the stuff of sci-fi. You think about AI hunting and killing somebody, and you think of “Terminator” and Arnold Schwarzenegger hunting people as a robot, or you think of HAL in “2001: A Space Odyssey,” an all-knowing robot that can kill people on a spaceship. But the thing is, the early versions of this technology that will get us there are already being developed.

And they’re being developed in Ukraine. And in some ways, Ukraine has become a nexus for the development of this type of autonomous military technology, writ large. They’re taking, basically, artificial intelligence, and finding all kinds of new military applications for it.

natalie kitroeff

Why has Ukraine become that nexus?

paul mozur

So, perhaps most importantly, they’re outgunned. Weapons don’t necessarily come quickly or predictably from the United States or Europe. They have to reach for anything they can use to fight this war. And so they turn to consumer technology and emerging technologies like AI to build new, effective weapons. The second point is that they’re outmanned. And so, as you’re facing the prospect of defending all these trenches and you just don’t have as many people as the Russians have, you need to come up with things that solve that problem.

And what better than something like an automated machine gun or a drone? And then, perhaps something people don’t realize is that Ukraine has been a bit of a back office for the global technology industry for a long time. Many of the apps you use every day were probably, in some part, coded by engineers in Ukraine. And so you have a lot of coders and a lot of skilled experts taking their abilities and saying, well, now, we need to turn from building a dating app to figuring out how to stop the Russians. And that means building these new weapons.

And then, finally, extremely importantly, this is a war of attrition. And so, every day, there’s fighting going on. And that means you have the ability to test these weapons each day, and to use a Silicon Valley term, iterate on them, tweak them and make them better. And so, having what is effectively a sort of laboratory to experiment and find out ways to make AI ever more deadly really helps.

natalie kitroeff

Yeah, it sounds like you kind of have all of these conditions that line up to make Ukraine a perfect incubator to build this type of technology. I’m wondering, Paul, what this actually looks like on the ground? What kind of weapons are we talking about here?

paul mozur

Yeah, so I went to Ukraine in May and met with all kinds of different tech startups and developers, and troops who use this technology. And perhaps the most startling moment wasn’t near the front lines or anything. It was actually in a park just outside of Kyiv.

[MOTORCYCLES REVVING]

And a couple of guys in their 20s and one in their teens, who started this company that makes autonomous drones, pulled up on motorcycles.

They take me to a field. And they unbox a tiny little drone, four rotors on it, kind of a smaller version of a typical drone you’d use to take pictures of your vacation or something like that.

speaker 1

Maybe we can put the screen, like, somewhere where it would be not so much —

paul mozur

And then they flip open this briefcase with a screen on it. And what they explained they’d done is, they took a tiny little mini circuit board, a little mini hobbyist computer.

speaker 1

Here we have a Raspberry Pi, plus thermal camera. But —

paul mozur

And put software on it that allows that tiny drone to follow a tank, a piece of artillery, or even a human, and eventually smash into it. And so, the idea is, if you have a shell on the bottom of it, it becomes something of a missile. I had heard of this before. But I hadn’t seen it. And so they said, well, we’re going to show you. And so, the CEO then flips on his motorcycle helmet —

[MOTORCYCLE REVVING]

— revs the engine a few times, and rips off down this dirt road as a target. And one of the teammates launches the drone. And it’s hovering above. And then, what you see is that he centers the cross-hairs on the motorcycle.

[POPPING]

And at that point, the machine takes over from the human. And so the drone starts stalking the motorcycle. And it’s getting closer and closer. And on the screen, you can see it. It’s lining up to swoop in. And his friends are crying out —

speaker 2

[NON-ENGLISH SPEECH] — go faster, go faster! You’re screwed. Oh, my god, you’re done! It’s basically not more than a couple of feet away from him.

[POPPING]

speaker 1

OK, that was really close.

paul mozur

And they hit a button and turn off the autonomy. And the drone flies back up into the air. And they’re laughing. And it’s a funny moment because they were able to run down their CEO. But the darker reality of it is that, if this was an armed drone with a shell and they hadn’t hit that button at the end, their CEO would be a goner. And this is the technology that is already being used on the Ukrainian front lines to hit Russian targets.

natalie kitroeff

It’s kind of amazing. What you’re seeing is the computer take over for the young startup guy in tracking down and targeting his CEO as he’s swerving on his motorcycle. And that’s why this technology is so powerful, right? This is not a remote-controlled drone. This is a drone powered by artificial intelligence that’s making judgments on its own.

paul mozur

Yeah, and I think what’s important to realize about these guys is, they’re not doing something that miraculous. What they’re doing is taking basic code that’s already out there, combining it with some new data from the war, and making it into something entirely different, which is a weapon. And what this automation gets around is one of the great protections against these tiny types of drones, which is radio jamming.

And so, if you can break the signal between pilot and drone, you can stop that drone from swooping in on your expensive weapon system. But what this does is, it doesn’t matter what the pilot sees. Once they hit that lock, with the help of this AI software, it will keep going. And so, you all of a sudden are completely helpless to stop it unless you shoot it out of the sky.

natalie kitroeff

It’s kind of insane. I mean, we’re talking about basically autonomous kamikaze drones. Tell me about the other forms of AI that you saw.

paul mozur

Yeah, so if you imagine that single kamikaze drone, the next step is to make a swarm of those kamikaze drones. And so, there are a few companies who are building swarms of drones. And so, what they’re testing now is, you have a single four-rotor drone that watches over the battlefield. But then, it has its own little pack of kamikaze drones.

And it can choose a target and identify a target. And it’s constantly searching with its camera. And it sees a tank. And it can then dispatch one of those drones to go in for a hit. And one of the companies I spoke to called Swarmer recently tested that technology and hit a target 60 kilometers away.

natalie kitroeff

Wow.

paul mozur

So, this is, again, not a very expensive thing. This is hardware that costs thousands of dollars. And they’re hitting weapon systems 60 kilometers away with incredible precision. Another thing that is emerging is a sort of autonomous machine gun turret. And so, what this uses is computer vision that you would have on a lot of surveillance cameras, or even on your iPhone, your smartphone. It will sort of circle a human and identify it.

So you take similar technology, and you put it on a machine gun. And that machine gun can then automatically see targets as they move. And then, all it takes is a human to press the trigger. And it’s already sort of being tested right now. Some of it’s already achieving kills and taking out targets.

natalie kitroeff

Paul, did you see the automated machine gun in action?

paul mozur

Yeah, I did. And the story of it is actually fascinating. And it kind of indicates why Ukraine is a place where these weapons systems are emerging. So we went to a range and met a commander of a battalion called Da Vinci Wolves. And Da Vinci Wolves are very well-known in Ukraine for their experimentation with weapon systems. And the commander we met, Oleksandr Yabchanka, really looks the part, with this Cossack haircut and a bushy mustache.

And he was very excited about the gun. He actually named his dog after the gun. And the reason he’s so excited is because he helped create parts of the gun. He’s helped innovate it. And unlike the engineers that we were seeing in Kyiv, he’s a soldier. So he’s using this technology on the front. And he’s sort of the eyes for a lot of these companies. And he gives feedback to them about how it’s working, and what they need, and so on. And so, he told me this amazing story.

He was fighting in Bakhmut, a city in Eastern Ukraine that the Russians were trying very hard to take over. And his unit was tasked with defending the only road in and out of the city. And they kept having this problem, which was that their machine gunners were just constantly a target. A machine gun is a big, heavy gun that can’t easily be moved. You need to man it at all times. So he did what I think a lot of us do these days. He went online and tried to find a solution. And he ended up asking on Facebook if anybody had an idea.

And in just a few months, he had a working prototype in his hands from a company called Roboneers. And what the gun does is, it uses cameras and what is effectively a video game console, a portable thing that looks like a Nintendo Switch. And it can automatically identify targets as they come over the horizon or appear. And then, it automatically aims. And all the soldier has to do is press the button and shoot.

natalie kitroeff

It sounds, honestly, quite terrifying, an automated gun crowd-sourced on Facebook. The whole thing sounds really outlandish, if I’m honest.

paul mozur

Yeah, but it solved his unit’s problems. He said, it was great. We could sit back in the trench, drink coffee, smoke cigarettes, and shoot Russians. And it solved this problem using very basic technology that’s very powerful, but that lives in your smartphone and on your video game systems, and that can pretty easily be turned into these weapons with artificial intelligence.

natalie kitroeff

Paul, you’ve said this technology has been quite effective in Ukraine. But just to express a dose of my skepticism here, I think many of us interact with AI through ChatGPT or Gemini. And we’ve seen the hilariously bad answers those systems can produce. Like, we can’t even depend on AI to solve a crossword puzzle. I’m just wondering, how can Ukrainians rely on this technology for much higher stakes stuff? I mean, hitting the right targets in war, are they finding that their AI is making mistakes?

paul mozur

We don’t really know the answer to that. We do know that the systems work pretty well. And part of the point is that they’re supposed to be cheap. So they don’t always have to work. If they work 80 percent of the time and they’re cheap, that’s OK. I will say that another reason why they want the human in the loop who can turn off the AI is that they’re afraid of friendly fire and hitting the wrong target. There’s an ethical consideration.

But there’s also just a very practical one, that this tech could go wrong and go at the wrong person, or identify the wrong thing. And they need to be able to turn it off. So there are still humans in the mix. But even with this imperfect technology, and perhaps this is even scarier, it’s super simple to just take the human out of the mix.

natalie kitroeff

And how close are we to that? I mean, is Ukraine’s military contemplating a scenario where the humans really go away? And the machines and their judgments are entirely responsible for killing?

paul mozur

So that was a question I posed to everyone working on this stuff. And the general answer is a no, that the human will stay in the loop for the foreseeable future. But people had different takes on it. And one guy who had a particularly interesting answer was an executive at the firm that made that automated machine gun, Roboneers. His name is Anton Skrypnyk. And I met him at his offices in Ukraine.

paul mozur

So, realistically, the pace that we’re seeing, when do you think we’re going to start seeing the first automated killing? Is it —

paul mozur

And when I asked him how, or when, the first automated killing on the front lines might occur —

anton skrypnyk

Maybe it was already done. Most likely, it was already done.

paul mozur

He said it probably, honestly, has already happened. He had no way of knowing. He wasn’t sure.

anton skrypnyk

People very often just do something to survive, to complete a mission, without sharing information. And this is not bad.

paul mozur

In a fast-paced, high-stress environment on the front, where life and death are oftentimes a matter of an instant decision, it’s very possible that a soldier flipped a switch that allowed something to just go fully automated, or autonomous, in this way.

paul mozur

Is there any thought of changing that, or is that something that you guys sort of stand by?

paul mozur

And so, I asked him, OK, but you guys aren’t doing this?

anton skrypnyk

There is no — there is not a single request about having that.

anton skrypnyk

No, and for us, you have to hit this trigger every time the gun sees a target so that it shoots.

paul mozur

How long would it take you to do it if you wanted to?

paul mozur

And I said, so, OK, but if you wanted to make it fully autonomous, how long would that take?

anton skrypnyk

Tomorrow.

paul mozur

Tomorrow?

anton skrypnyk

Because today, it’s already half —

paul mozur

Yeah, I’ve taken two hours of your time, so —

paul mozur

So, basically, no time at all. It’s a matter of a few lines of code. Because these things are already effectively doing the auto-targeting; it’s just the human pulling the trigger. So to make the computer pull the trigger is so easy, it’s almost trivial.

natalie kitroeff

Wow, so what Anton is saying, essentially, is that he already has the technology to create a robot that makes the decision to kill on its own. There’s a human operator for now. But that’s not a necessity.

paul mozur

Exactly. And his answer really hit home for me because I still, I think, even sitting there, had thought that this was the stuff of science fiction. But what I realized at that moment is that the era of the killer robot is already upon us. We’re already here. And that raises just a huge number of ethical and moral questions about the future of warfare, the future of accessibility to these kinds of weapons, and what it requires to kill a human being in the future.

[MUSIC PLAYING]

natalie kitroeff

We’ll be right back.

Paul, you started to get into the potential moral questions raised by a world where robots need no human input at all to make decisions about killing people. Let’s talk about those questions.

paul mozur

So I guess the first big consideration is who has access to this and where it will spread. And the first group that does is powerful countries, right? I mean, the United States is developing swarms of drones that can accompany its fighter jets. Other major military powers, in Europe and China, for instance, also are developing this kind of thing. But it’s also not just those guys. Ukraine, for instance, has channels where it’s been sharing tips on drone warfare.

We did a story earlier in the year out of Myanmar where we found Burmese drone pilots were training on Ukrainian software that taught them to use kamikaze drones. And so the question becomes, how long until this sort of automated targeting software is shared? And what’s maybe scarier is that Russia also is developing solutions very similar to the Ukrainians’. And so, who will they share it with? Will they share it with North Korea, Iran, certain fighters in Sudan?

So the point is, it’s very easy to spread software. I mean, this isn’t even a piece of hardware. This is just something that plugs into a piece of hardware. You can send it over an email. And for the guys that we talked to in the field who were flying the drones, one of the problems they had with their technology when they showed it to the Ukrainian military is that it wasn’t encrypted.

So the Ukrainians were afraid that if their drone crashed behind enemy lines without blowing up, the Russians could take the little mini computer, download the code, and use it to build their own system and hit the Ukrainians. So software spreads incredibly fast and incredibly easily. And it’s going to be extremely hard, once these solutions are developed, to stop them from going almost anywhere. It’s not hard to imagine a dark-web site selling all manner of autonomous drone attack systems.

natalie kitroeff

Right.

paul mozur

There was one US official I was speaking with who has huge concerns about the terrorism implications of this. So, I mean, it’s not hard to imagine. I mean, take a drone, for example. You could fly something in from 20, 30 miles away. And it becomes extremely difficult to defend against.

natalie kitroeff

That raises a lot of questions, obviously. And I have to assume that ethicists and human rights officials are asking some of them. For example, is there any way to regulate AI weapons? Can we put limits on their use?

paul mozur

Yeah, so this is something that’s been debated in the UN by panels of experts for years. But we never really get to anything particularly concrete, in part because countries are already in an arms race to develop these things. And every time anybody proposes some kind of a rule, it’s vetoed, if not by the United States, by China, by Russia, by other countries in Europe.

But there are some basic principles that ethicists rally behind, things like keeping a human in the loop so that the human makes the ultimate decision, even if there’s automated targeting going on. And that’s the line the Ukrainians are standing behind. But again, there’s really not much out there to stop any of this from going wherever it wants to go. And honestly, it feels like we’re already heading in that direction.

natalie kitroeff

Paul, in listening to you this whole time, I’ve been wondering how we should think about this idea of software making the decision about who lives and dies, because on the one hand, I have to say, the idea of robots hunting down humans is truly frightening. But on the other hand, it’s not as if humans are known for their restraint in war. I mean, right now, we’re witnessing two wars in Ukraine and in Gaza, where human-led military campaigns have killed tens of thousands of people, many of them civilians. And so, I’m wondering, in your reporting, have you come to think of this technology as in any way better or a more precise form of warfare?

paul mozur

Yeah, so I think what was interesting is, some of the technologists building this that I spoke with did make this case. And their logic goes something like, if we have robots fighting robots, humans aren’t dying. If we can put rules inside the software that, say, no children will be killed by this weapon, we can prevent it from doing certain things that maybe a really bad human would do. And you could maybe even create spaces like the front line, where just nobody can set foot for four kilometers on either side because the weapons are so deadly. And so neither side can move forward, and you just create a perfect stalemate.

But I guess history shows us, or my understanding of history does, at least, that that’s not the way this will probably go, and that every time in the past we’ve seen a breakthrough in weaponry, oftentimes, it’s just meant more devastating weapons get created. I thought back to Alfred Nobel, who famously thought dynamite would end war. And, of course, it simply made more powerful, deadly bombs. And it feels like that is the kind of future that we are treading into.

But I just think that it’s very hard to sit where we are, in a place that’s not at war, and tell people who are building these weapons to defend their families, and their friends who are going off to war against an invader, that they should stop because it could make us unsafe in the future. And even some of the ethicists I spoke with who are very opposed, who have dedicated their careers to fighting against autonomous weapons, they would throw up their hands and say, well, I can’t really argue with the Ukrainians.

One of the Ukrainians I spoke with who’s making autonomous drone systems said, you show me a hypothetical victim. And I’ll show you a real dead soldier and a family that now has to live without him.

And that leaves you in a very hard place, because you have what seems like a runaway train.

You can’t morally argue for people to stop building things to defend themselves, yet what they’re building basically secures a future that will be far more dangerous than the present that we live in. And as long as this war in Ukraine goes on, we are going to see more advanced systems get developed. And I just don’t know how we avoid a future in which we have ever more powerful, ever more autonomous weapons. And that’s pretty scary.

[MUSIC PLAYING]

natalie kitroeff

Paul, thank you so much.

paul mozur

Thank you.

[MUSIC PLAYING]

natalie kitroeff

On Monday, Russia launched one of the deadliest assaults on Kyiv since the first months of the war, striking Ukraine’s largest children’s hospital as part of a barrage of bombings across the country. At least 38 people were killed in the attacks. And more than 100 were injured.

[MUSIC PLAYING]

We’ll be right back.

Here’s what else you should know today.

archived recording (joe biden)

The bottom line here is that we’re not going anywhere. I am not going anywhere. I wouldn’t be —

natalie kitroeff

On Monday, in a move to save his candidacy, President Biden told congressional Democrats in a letter and on MSNBC’s “Morning Joe” that he would not withdraw from the race and accused those asking him to step aside of being routinely wrong about politics.

archived recording (joe biden)

I don’t care what those big names think. They were wrong in 2020. They were wrong in 2022 about the red wave. They’re wrong in 2024. And go with — come out with me. Watch people react. You make a judgment.

natalie kitroeff

Biden faces what could be the most crucial week of his candidacy, as he contends with growing concern among Democratic lawmakers about his age and ability to win re-election. He also spoke directly to some of his biggest fundraisers and donors in a private call, telling them Democrats needed to shift the focus away from him and back to Trump.

And as Tropical Storm Beryl battered Houston and its suburbs on Monday, at least two people were killed by fallen trees. And nearly three million homes and businesses lost power in Texas. The storm is expected to move across the eastern half of the United States over the next several days.

Today’s episode was produced by Will Reid, Clare Toeniskoetter, and Stella Tan. It was edited by Lisa Chow, contains original music by Dan Powell, Elisheba Ittoop, and Sophia Lanman, and was engineered by Alyssa Moxley. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.

[MUSIC PLAYING]

That’s it for “The Daily.” I’m Natalie Kitroeff. See you tomorrow.

[MUSIC PLAYING]



[IMAGE: A man works on a small drone with a rifle mounted on top, in a sandy landscape. Roboneers, a Ukrainian company, developed an automated weapon with a gun turret mounted on a rolling drone. Credit: Sasha Maslov for The New York Times]
