How To Get Unstuck with Generative AI in Your Content and Marketing

Feeling stuck with generative AI?

Here’s a way to get unstuck.

I’d like to ask you to think about two questions.

First, did you see the new movie Ghostbusters: Frozen Empire? If you didn’t, just think about the last movie you saw (for me, it was the remake of Road House — ugh). Did you like it?

I reference Ghostbusters only because critics scored it poorly (44% on Rotten Tomatoes) while audiences scored it well (84%). It also did well at the box office, hitting $45 million in revenue in its first weeks of release.

But here’s the thing: Whether you thought the movie was wonderful or awful — you’re right, and you’re wrong. Even the data can’t tell you if you’re correct. If you loved the recent action spy movie Argylle, the data — box office, critics, and audience reviews — would say you’re wrong. But if you respond with, “But Henry Cavill,” you’re not wrong.

Now, the second question: If you’re experimenting with generative AI to create your marketing content, do you believe you get consistently valuable results, decent but not great results, or poor results?

I’ve asked this question of a few audiences lately, and most people pick the middle — consistently decent but not great. However, independent of your answer, I know one thing. You’re all wrong. And you’re all right.

It’s all a matter of perspective. All marketing content is like a movie. What moves you may not move me. The data may say your content is successful, but whether an individual values or is motivated by it is subjective.   

Stuck between uncertainty and doubt with generative AI

A recent talk included a conversation about a bank that approached an AI company with 500 use cases to which it wanted to apply large language models.

Yep, they’re stuck.

I find the trend pervasive. Companies of all sizes search for the right uses of generative AI. Leaders place extreme pressure on their teams to “find efficiency” and “usefulness” in generative AI. It’s so new and so innovative; there MUST be something you can do with it.

Every swipe of a social media feed, every podcast episode, every webinar, and every industry event turns up uses of generative AI. It’s hard to keep up with all the possibilities because every day someone comes up with something you’re not doing.

But is generative AI serving marketers?

It’s not. That is why you feel stuck. It’s a classic paradox of choice. You think having so many use cases to choose from makes it easier to apply generative AI to your content and marketing. But it really makes it more difficult to decide which applications to use.

The most insidious part? You can’t know if the AI-generated content is better until you commit to one.

Here’s what I mean.

Generative AI’s response to a prompt is nondeterministic by design. If you ask the tool to rewrite, edit, or create something, it never responds the same way twice. If you press and ask if that’s the best it can do, it typically responds with another variation. It doesn’t stop rewriting until you stop the process. It will never say, “Well, the third iteration was the best version, so stop asking.”

Generative AI doesn’t always give you the right content, nor does it give you the best content. It simply gives the most probable content. If you think that’s good enough, you’re right. And you’re wrong.

Newness and efficiency bring the necessary perspective

I’ve begun to help clients get unstuck by taking a more structured approach to assembling their use cases. In marketing, two spectrums can be applied to generative AI. The first involves new vs. existing capability. Is the use case a job already being done that generative AI could make more valuable? Or is it something that wasn’t possible before, or was so difficult it wasn’t worth the human effort?

A real-time translation of customer service calls is an example of an existing capability made easier with AI. Rewriting a research paper into friendlier versions for different personas using AI is an example of a new capability.

The second spectrum centers on efficiency. Will this use of generative AI make you more efficient, saving time and resources? Or will it be less efficient, needing more time and resources?

Using a generative AI tool to generate SEO keywords or fix grammar is an example of being more efficient. An AI tool that scans your CRM data and LinkedIn to assemble a content gap report is an example of being less efficient: you would add that task to someone’s to-do list because the result presents a valuable new use of their time.

With these spectrums in mind, you can assemble a four-quadrant chart to assess generative AI use cases. The vertical line goes from new capability at the top to existing capability at the bottom. It is intersected in the middle by the efficiency line, going from less efficient on the left to more efficient on the right.

A four-quadrant chart to assess the generative AI uses.

The four quadrants fit into these categories:

  1. Enhancement — a new capability that makes you more efficient. For example, a generative AI tool learns your brand guidelines, tone, and editorial jargon (new capability). It automatically flags deviations from them (more efficiency) to help you create consistently well-branded content.
  2. Refinement — an existing capability that makes you more efficient. For example, a generative AI tool can produce a real-time translation (more efficient) of content for customer service requests (existing capability).
  3. Supplement — an existing capability that will be less efficient but more valuable. A great example is competitive research. By adding a bit more time and resources and using AI, you can do comprehensive competitive analysis on an ongoing basis.
  4. Complement — a new capability that makes you less efficient. These uses are true innovation. For example, you build a new chatbot using a custom learning model that scans all training documentation to provide an interactive helper application for customers. The amazing new experience will require greater attention to the quality and structure of your training manuals.
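
If it helps to see the framework laid out explicitly, here is a minimal sketch (in Python, and not part of the original framework materials) of how the two spectrums map to the four categories. The example use cases and their ratings are hypothetical, borrowed from the illustrations above.

```python
def quadrant(new_capability: bool, more_efficient: bool) -> str:
    """Map the two spectrums (capability, efficiency) to the four categories."""
    if new_capability and more_efficient:
        return "Enhancement"   # new capability, more efficient
    if not new_capability and more_efficient:
        return "Refinement"    # existing capability, more efficient
    if not new_capability and not more_efficient:
        return "Supplement"    # existing capability, less efficient but more valuable
    return "Complement"        # new capability, less efficient (true innovation)


# Hypothetical ratings for the illustrative use cases mentioned above
use_cases = [
    ("Brand/tone compliance checker", True, True),
    ("Real-time translation of customer service calls", False, True),
    ("Ongoing competitive research", False, False),
    ("Custom chatbot over training documentation", True, False),
]

for name, new_cap, efficient in use_cases:
    print(f"{name}: {quadrant(new_cap, efficient)}")
```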

These categories may feel esoteric. As I noted, the cases can fall on a spectrum, so one use might be at the top of the upper right quadrant (very new capability and highly efficient), while another might be closer to the chart’s center point in that quadrant (somewhat new capability and generally efficient).

However, this categorizing chart is practical.

Use-case categories get you unstuck

One of the biggest tensions in generative AI planning arises when what you see as the priority use cases misaligns with what senior leaders see as important.

Let me explain.

I’ve collected over 230 use cases for generative AI in content and marketing. Here’s how they break down into the four categories:

  • Enhancement (new capability, more efficient): 6%
  • Refinement (existing capability, more efficient): 31%
  • Supplement (existing capability, less efficient): 45%
  • Complement (new capability, less efficient): 18%

About a third of use cases fall into what you might expect to be the most common category: the jobs done in everyday work made more efficient. But, interestingly, it’s only a third.

By far the most popular use cases (45%) are jobs once deprioritized because they took too much effort and are now worth doing because of generative AI. They actually add the need for more resources. This finding matches the early anecdotal evidence I gathered working with clients. Most generative AI integrations in marketing add new requirements for budget and resources, supplementing existing capabilities.   

Also unsurprising, but nice to see, is how few of the use cases fall into the enhancement category — things you couldn’t do before that also make you more efficient. Chalk this up to “We don’t know what we don’t know.” This generative AI adventure is still in its early days, and new capabilities are just starting to be discovered.

However, the most important takeaway is not about forcing some balance in the use cases in your work. Rather, it’s to understand where to prioritize so that you align with the leadership’s expectations. If you prioritize generative AI use in the supplement category but management expects AI to deliver a refinement use, for example, conflicts and tensions arise.

When you don’t pitch the use of generative AI correctly, you set yourself up for failure. I know of a company that recently proposed a generative AI solution to automatically create targeted, personalized content on its website. It was a true enhancement use case, but they pitched it as a refinement case, a way to save money. Of course, those two things didn’t align, and the pitch failed.

Only you can tell what’s good

As you assemble your teams and develop the use cases for generative AI in your marketing and content plan, remember to truly understand what value they will provide.

They will all look fantastic and produce good results. They will also all look awful, like big money and time pits. Only you and the team can determine which is which. But if you align on what challenge each will solve, at least you’ll know what’s most important: the critics’ score, the audience’s score, or the box office.

It’s your story. Tell it well.

Subscribe to workday or weekly CMI emails to get Rose-Colored Glasses in your inbox each week. 

Cover image by Joseph Kalinowski/Content Marketing Institute