By Matt Asay, Contributor

The biggest barrier to AI productivity is people

analysis
Jul 17, 2023 · 5 mins
Artificial Intelligence, Emerging Technology, Generative AI

Generative AI is helping us churn out vastly more content at remarkable speed, when what we really need is better content. It’s up to humans to put the focus on quality and value.

The more experience I have with generative AI (GenAI), the more I become simultaneously optimistic and cautious. The optimism is perhaps obvious: GenAI has the potential to significantly improve productivity whether you’re a developer or a white-collar office worker. In fact, I knew the GenAI moment had truly arrived when my social worker sister-in-law told me that she’s been actively using ChatGPT to help craft emails, proposals, and more, and sang its praises for how it helped her to do more and better work, faster. It’s like having a supercharged assistant.

Among the downsides, however, is that we don’t always know when GenAI’s output is poor quality or simply wrong. As I’ve written, unless you’re capable of creating code or content at a certain level, you won’t know when your GenAI tool has hit that level. Getting it wrong can range from embarrassing to disastrous. And then there’s the problem identified by Gartner analyst Craig Roth: As it becomes easier for everyone to create content, it also becomes harder for each of us to find the content we need, leading to “bad decisions that can have large impacts.”

Clearly, figuring out how to manage all this “more” of GenAI is a major problem, one that won’t be magically solved overnight. But the key to solving it begins and ends with people.

AI won’t make you Shakespeare

It’s not too hard to find studies that purport to show white-collar productivity and quality gains from GenAI. Take this one published in Science. The study asked 453 college-educated professionals, including marketers and data analysts, to do standard work tasks, such as writing a press release, with and without GenAI tools. The researchers found that good writers were able to work faster (up to 40% faster) and that weaker writers could generate better content (rated 18% better by industry professionals).

This is great, if inconclusive. For one thing, the kind of writing they were asked to do (e.g., press releases) is generally not high-quality writing, anyway. (When was the last time you wanted to read a press release? Creating even more garbage press releases isn’t necessarily a good thing.) But more importantly, the weaker writers don’t actually know that their final work product was better. It might seem better to them, but how would they know? After all, by definition, they aren’t good writers. I’ve had friends use ChatGPT to create “content marketing” articles, and they gushed with enthusiasm over the results. When I read the articles, however, they sounded tinny and dull (which, to be fair, is precisely how most content marketing sounds).

Of course, not all content needs to be well-written to be useful. As Patrick Collinson writes, “Consumers are being hit with industrial levels of fakery in an attempt to obtain their business.” Companies like Google and TripAdvisor must constantly find and remove millions of fake reviews of hotels and restaurants. It has become so easy to mass-produce plausible-sounding but fake reviews that soon enough, much of what we read may truly be “fake news.” Worse still, as Collinson found, GenAI isn’t trying to speak truth, but rather to say things that sound reasonable, and it will often descend into stereotypes.

Sadly, GenAI makes up for these deficiencies with volume.

The problem with ‘more’

Most people already struggle to find the information they need, which is what led to Google’s massive search business. Within the enterprise, Roth says, roughly one-third of respondents to the 2022 Gartner Digital Worker Survey reported that they frequently struggle to find the information they need to do their jobs well. Perhaps worse, 22% have missed important updates because of the sheer volume of applications and information thrown at them. This is the state of workers in the pre-GenAI world.

“Now throw in more content being produced at a quicker pace,” Roth says. “Emails that used to be short and to the point may now be inflated to full, polite corporatespeak by the AI.” A bad problem becomes dramatically worse as more people create more content of middling quality, trusting the AI to get the facts correct. And it often won’t: tools like ChatGPT aren’t interested in truth; that’s not what they’re for or how they’re engineered.

The solution to this machine-generated problem is to reinsert people into the mix. People are still needed to fact-check and do quality control. So long as we use GenAI tools to augment rather than replace people, we’ll derive tremendous benefits without stumbling into egregious errors. For the most part, we don’t need more; we need better. In my job, I encourage my team to do fewer things but at a higher level. Used properly, GenAI tools can help us do precisely this, eliminating some of the boilerplate of our day-to-day routines and allowing us to focus on higher-value, thoughtful work.


Matt Asay runs developer relations at MongoDB. Previously, Asay was a Principal at Amazon Web Services and Head of Developer Ecosystem for Adobe. Prior to Adobe, Asay held a range of roles at open source companies: VP of business development, marketing, and community at MongoDB; VP of business development at real-time analytics company Nodeable (acquired by Appcelerator); VP of business development and interim CEO at mobile HTML5 start-up Strobe (acquired by Facebook); COO at Canonical, the Ubuntu Linux company; and head of the Americas at Alfresco, a content management startup. Asay is an emeritus board member of the Open Source Initiative (OSI) and holds a J.D. from Stanford, where he focused on open source and other IP licensing issues.
