
Why You Need People To Manage the Meaning of AI Content

What’s your data strategy?

Your first thought might be about first-party data collection, performance metrics, or metadata for targeted content. But that’s not what I’m getting at.

Instead, I’m asking: What’s your strategy for shaping thought leadership, crafting stories, building your brand message, and delivering sales materials that engage and persuade?

“Hold on,” you might say. “Isn’t that content strategy?”

Well, yes. But also, no.

Is it content, or is it data?

Generative AI is blurring the lines between content and data.

When you think of your articles, podcasts, and videos, you likely don’t see them as “data.” But AI providers do.

AI providers don’t talk about their models learning from “engaging content” or “well-crafted stories.” Instead, they talk about accessing and processing “data” (text, images, audio, and video). They commonly use the term “training data” as the de rigueur way to refer to the datasets they rely on for model development and learning.

This perspective isn’t wrong — it’s rooted in the history of search engines, where patterns and frequency determined relevance, and the “indexes” of search engines were just big buckets of unstructured files and text (i.e., data).

No one ever pretended that search engines understood the meaning in their giant bucket of every content type imaginable. Reducing it to “data” seemed appropriate.

But AI companies now attribute understanding and intuition to this data. They claim their models hold all that information and can rearrange it to intuit the best answer.

But let’s be clear: AI doesn’t understand. It predicts.

It generates the most probable next word or image — structured information devoid of intent or meaning. Meaning is — and always will be — a human construct resulting from the intentionality behind communication.

Fighting for meaning

This difference underpins the growing tension between content creators and AI providers.

AI vendors argue that the internet is a vast pool of publicly accessible data — as available to machines as it is to humans — and that their tools help draw deeper meaning from it.

Content creators argue that people learn from content imbued with intent, but AI merely steals the products of that intent and rearranges them without care for the original meaning.

Interestingly, the conflict arises over something both agree on — that the machine determines the meaning.  

But it doesn’t.

The internet makes data (content) available to AI, but only humans can create meaning from it.

And that makes the distinction between content and data more critical than ever.

What’s the difference?

A recent study found that consumers show less positive word-of-mouth and loyalty when they believe emotional content was created by AI rather than by a human.

Interestingly, this study didn’t test whether participants could detect AI-generated content. Instead, the same content was presented to two groups: One was told it was created by a human (the control group), while the other was told it was generated by AI.

The study’s conclusion: “Companies must carefully consider whether and how to disclose AI-authored communications.”

Spoiler alert: No one will.

In another study, however, researchers tested whether people could distinguish between AI-generated and human-generated content. Participants identified AI-generated text correctly only 53% of the time — barely better than random guessing, which would achieve 50% accuracy.

Spoiler alert: No, we can’t.

We are hard-wired to get this wrong

In 2008, science historian Michael Shermer coined the word “patternicity.” In his book, The Believing Brain, he defines the term as “the tendency to find meaningful patterns in both meaningful and meaningless noise.”

He said humans tend to “infuse these patterns with meaning, intention, and agency,” calling this phenomenon “agenticity.”

So, as humans, we’re wired to make two types of errors:

  • Type 1 errors (false positives): We see a pattern that doesn’t exist.
  • Type 2 errors (false negatives): We miss a pattern that does exist.

When it comes to generative AI, people are at risk of making both types of errors.  

People’s tendency to anthropomorphize the technology primes them for Type 1 errors, and AI providers encourage it. That’s why the solutions are marketed as a “co-pilot,” “assistant,” “researcher,” or “creative partner.”

A data-driven-content mindset leads marketers to chase patterns of success that may not exist. They risk mistaking quick first drafts for agile content without questioning whether the drafts offer any real value or differentiation.

AI-generated “strategies” and “research” feel credible simply because they’re written clearly (and the vendors claim the technology taps into deeper knowledge than people possess).

Many people equate these fast answers with accuracy, overlooking that the system only regurgitates what it has absorbed — truthful or not.

And here’s the irony: Our awareness of these risks could lead us to Type 2 errors and hold us back from realizing the benefits of generative AI tools. We could fail to see patterns that really are there. For example, if we settle into believing that AI always produces average or “not quite true” content, we’ll fail to see the pattern that shows how good AI is at solving complex processing challenges.

As the technology improves, the risk is settling for “good enough” — from both ourselves and the tools we use.

CMI’s recent research highlights this trend. In the 2025 Career Outlook for Content and Marketing study, the most commonly cited use for AI among marketers is “brainstorm new topics.” The next five most common responses — each cited by over 30% of respondents — focus on production tasks: summarizing content, writing drafts, optimizing posts, crafting email copy, and creating social media content.

But CMI’s B2B Content Marketing Benchmarks, Budgets, and Trends research reveals growing AI hesitation. Thirty-five percent of marketers cite accuracy as their top generative AI concern.  

While most respondents report only a “medium” level of trust in the technology, 61% still rate the quality of AI-generated content as excellent (3%), very good (14%), or good (44%). Another 35% rate it as fair, and 4% as poor.

So, we’re using these tools to produce content we consider satisfactory, but we’re uncertain about its accuracy and only moderately trust the results.

This approach to generative AI shows that marketers tend to use it to produce transactional content at scale. Instead of living up to the promise that AI will “unlock our creativity,” we marketers risk living down to the possibility of locking ourselves out of it.

Seek better questions instead of faster answers

The essence of modern marketing is part data, part content — and a lot of understanding our customers and creating meaning for them. It’s about uncovering their dreams, fears, aspirations, and desires — the invisible threads that guide them forward.

To paraphrase my marketing hero, Philip Kotler, modern marketing isn’t just about mind share or heart share. It’s about spirit share, something that transcends narrow self-interest.

So, how can we modern marketers balance all those things and deepen the meaning of our communications?

First, recognize that the content we create today becomes the dataset that defines us tomorrow. No matter how it’s generated, our content will carry inherent biases and varying degrees of value.

For AI-generated content to provide value beyond the data you already have, move past the idea of using the technology merely to increase the speed or scale of creating words, pictures, audio, and video.

Instead, embrace it as a tool to enhance the ongoing process of extracting meaningful insights and fostering deeper relationships with your customers.

If generative AI is to become more effective over time, it requires more than just technological refinement — it requires people to grow. People need to become more creative, empathetic, and wise to ensure that both the technology and the people who use it don’t devolve into something meaningless.

Our teams will need more, not fewer, roles that can extract valuable insights from AI-generated content and transform them into meaningful ideas.

The people who fill these roles won’t necessarily be journalists or designers. But they’ll have the skill to ask thoughtful questions, engage with customers and influencers, and transform raw information into meaningful insights through listening, conversation, and synthesis.

The qualities required resemble those of artists, journalists, talented researchers, or subject matter experts. Perhaps this could even be the next evolution of the influencer’s role.

The road ahead is still unfolding.

One thing is clear: If generative AI is to be more than a distracting novelty, businesses need a new role — a manager of meaning — to guide the way AI-driven ideas are shaped into actual value.

It’s your story. Tell it well.

Subscribe to workday or weekly CMI emails to get Rose-Colored Glasses in your inbox each week. 

Cover image by Joseph Kalinowski/Content Marketing Institute