4 common pitfalls of Gen AI strategy and how to avoid them

Author

Sr. Business Designer


Virtually everything about the way we innovate and design has changed with AI. We’ve used AI to power innovation processes, exploring consumer problems, ideating and validating new solutions.

We’ve worked with major consumer goods companies in AI-powered innovation programs to generate hundreds of ideas with imagery we could test in-market.  We’ve developed product and sales strategies with our B2B clients using AI for clear value propositions that account for many stakeholders.  We’ve built tools that synthesize client data to generate and refine hundreds of ideas in minutes – not months.

As we’ve worked with AI in leading businesses, we’ve identified fundamental mistakes people make while setting their AI strategy – and the simple, practical steps leaders can take to save time, money and heartache.

We’ve identified four common pitfalls of Gen AI strategy that leaders fall into:

  1. Building custom before assessing strategy or understanding off-the-shelf capabilities
  2. Failing to connect the big picture to the boots on the ground
  3. Limiting your strategy to “chatbot”
  4. Underestimating what it takes to transform

1. Building custom before assessing strategy or understanding off-the-shelf capabilities

In the ever-changing “jagged frontier” of AI capabilities, what the technology can do for you is less certain than you may think.  Your job is to find out if Generative AI can work for you, and the fastest way to get smart is testing with ChatGPT or Claude.

The first pitfall we see companies rushing into is building a tool without understanding the capabilities of off-the-shelf foundational models for their specific use case and their specific business context.

Generative AI varies in effectiveness in different use cases (what you are asking AI to do) and contexts (e.g. industry, team or related data).  Even the most advanced researchers explore AI with trial-and-error. 

Working with out-of-the-box LLMs, we found cornerstones of our current approach – like an LLM’s ability to role-play customers or industry experts to critique ideas.

A 20 min exercise to test and optimize prompts

We found that testing and optimizing prompts quickly leads to better results and a better understanding of LLM capabilities.  For many use cases, 20 minutes is enough! Skipping this step is inexcusable, but shockingly common.

Specifically, as a leader you should require that everyone on your team do some level of experimentation with out-of-the-box LLMs, like Claude, Bing, or ChatGPT before moving to implement AI into a new tool, process, or system.  

This exercise is not necessarily straightforward.  For example, knowing what is “good enough” for generated answers will ultimately be a judgment call based on end user feedback.  In most cases you’ll need a sequence of prompts to get an acceptable answer. In cases involving synthesizing data you’ll need dummy data for user acceptance testing.
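As a sketch of what such a prompt sequence might look like in a test harness, here is a minimal example. Everything here is illustrative: `call_llm` is a stand-in for whichever chat interface or API you actually test with, and the prompt templates are hypothetical.

```python
# Minimal sketch of a prompt-sequence test harness.
# call_llm() is a stub – swap in your real chat API of choice.

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"[model response to: {prompt[:40]}...]"

def run_prompt_sequence(steps, context: str) -> list[dict]:
    """Feed each prompt the output of the previous step and log every
    prompt/output pair so a reviewer can judge what is 'good enough'."""
    results = []
    previous = context
    for name, template in steps:
        prompt = template.format(previous=previous)
        output = call_llm(prompt)
        results.append({"step": name, "prompt": prompt, "output": output})
        previous = output
    return results

steps = [
    ("ideate",   "Generate three product ideas for: {previous}"),
    ("critique", "Role-play a skeptical customer and critique: {previous}"),
    ("refine",   "Rewrite the strongest idea to address: {previous}"),
]
log = run_prompt_sequence(steps, "reusable packaging for groceries")
for entry in log:
    print(entry["step"])
```

Even a throwaway harness like this makes the judgment call concrete: every intermediate prompt and answer is captured, so end users can review the full chain rather than just the final output.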

If your out-of-the-box results are way too far off the mark and you are saying to yourself, “my special data will solve this” – we’ve found customizing models with data (e.g. RAG) has its own set of complications.  Frankly, your use case may not be a good fit for AI yet. Finding that out without having to train a model just saved you a lot of heartache.

Finally, if your use case is suitable and you decide to implement a custom model, the out-of-the-box results create a benchmark, so you won’t have wasted time.
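One way to picture that benchmark is a tiny side-by-side evaluation. This is a hedged sketch: `baseline_model`, `custom_model`, `evaluate`, and the rubric are all placeholder assumptions – in practice scoring is end-user judgment, not a keyword check.

```python
# Sketch: comparing out-of-the-box results against a custom model
# on the same prompts. All functions here are illustrative stubs.

def baseline_model(prompt: str) -> str:
    """Stub for an off-the-shelf model's answer."""
    return "baseline answer"

def custom_model(prompt: str) -> str:
    """Stub for a customized model's answer."""
    return "custom answer"

def evaluate(answer: str, rubric: list[str]) -> float:
    """Toy scorer: fraction of rubric terms the answer mentions.
    A real version is human review or an LLM-as-judge."""
    hits = sum(term in answer.lower() for term in rubric)
    return hits / len(rubric)

prompts = ["Summarize our returns policy for a customer"]
rubric = ["answer", "customer"]  # illustrative keywords only

for prompt in prompts:
    base = evaluate(baseline_model(prompt), rubric)
    custom = evaluate(custom_model(prompt), rubric)
    print(f"baseline={base:.2f} custom={custom:.2f}")
```

The point is less the scoring method than the habit: keep the baseline answers around so any custom build has to beat a number, not a feeling.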

Regardless, it’s always worth at least 20 minutes of experimentation.

The role of generative AI in ideation and innovation

2. Failing to connect the big picture to the boots on the ground

On the Lex Fridman podcast, Fridman asks OpenAI co-founder and CEO Sam Altman what jobs AI will do in the future. Altman responds that a better question is to ask what tasks AI will do.

Managers of teams tend to skip connecting high level AI strategy (jobs for AI) with lower level AI use cases (tasks for AI). This disconnection manifests as a tension between people executing and those setting strategy, because AI isn’t good at everything.

But before you drill down to the task level, a helpful shortcut is to think through the humans and the parts they play in the process you are considering powering with AI. Stated differently, consider the roles for AI to role-play.

 

Think through the roles of humans and the roles of AI

Our experience is that the critical value unlock of custom tools came from LLMs playing roles sequentially in a process to create a better outcome faster. Those roles mirror the roles real people play in human processes.

Our depth in creating and leading traditional “human-powered” innovation processes made this mindset natural for us when applying AI to innovation.  

For example, we know that we’d typically have a cross-functional group brainstorm concepts followed by consumer feedback, so once we had concepts we knew what we needed from AI next. In a custom tool, adding a critique / refinement step allows us to refine hundreds of ideas for better results in under a minute. 

To get even more out of AI, you can include roles in a “perfect world” process. Our latest tools include roles that are “nice-to-haves” or roles that make for more responsible processes (e.g. combating bias, accounting for sustainability).

In a specific regulatory environment, we employed an AI agent to play an expert critiquing a proposed concept. A subject matter expert’s time is valuable, and an early critique is otherwise a “nice-to-have” that typically gets skipped. By including that review, our system avoids wasted time on an idea that violates regulation.  If we want to align to company sustainability initiatives, we can have an LLM score our concepts while playing a sustainability expert to ensure we prioritize responsible initiatives.

These extra steps cost time, attention, and organizational swirl in a human system. With AI their cost is effectively zero.
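A role-based review step like the ones described above can be sketched in a few lines. This is a hypothetical illustration, assuming a stub `score_concept` in place of a real LLM call that would prompt the model to play each role and return a structured score.

```python
# Hypothetical sketch of role-playing critique steps in a concept
# pipeline. score_concept() is a stub for a real LLM-backed scorer.

from dataclasses import dataclass, field

@dataclass
class Concept:
    idea: str
    scores: dict = field(default_factory=dict)

def score_concept(role: str, concept: Concept) -> int:
    """Stub: a real version would have an LLM play `role` and
    return a 1-5 score for the concept."""
    return len(concept.idea) % 5 + 1  # placeholder score

ROLES = ["regulatory expert", "sustainability expert", "skeptical customer"]

def review(concepts: list[Concept], min_score: int = 2) -> list[Concept]:
    """Have each role score every concept; drop ideas any role flags
    below the threshold, before they waste anyone's time downstream."""
    surviving = []
    for concept in concepts:
        for role in ROLES:
            concept.scores[role] = score_concept(role, concept)
        if min(concept.scores.values()) >= min_score:
            surviving.append(concept)
    return surviving

shortlist = review([Concept("reusable cooler"), Concept("app")])
for concept in shortlist:
    print(concept.idea, concept.scores)
```

Adding another persona – a sustainability scorer, a regulatory critic – is just another entry in `ROLES`, which is what makes the marginal cost of these “nice-to-have” reviews effectively zero.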

Whatever your context, think through the roles agents will play in your future toolset and what the people in those roles would need to know. The LLM will need the same data that person would need to play the role.

The reality of LLMs right now is that if a human isn’t great at a specific role, then an LLM probably won’t be either.

3. Limiting your strategy to “chatbot”

“If I have to build another chatbot I’ll blow my lid.”  – my development agency friend.

It’s fair to say that a lot of generative AI strategies in the past year could be summarized by “chatbot”.

People limit themselves to today when they should be thinking about what tomorrow will look like.  As a result, we see many companies focusing on use cases that don’t drive enough value for end users to be worth the investment.

The real challenge is to balance what you learn from your current system and pain points with the value you deliver your end users.  Avoiding this pitfall simply requires the empathy that seems to get lost with any new “shiny-object” technology – rooting yourself in what internal and external stakeholders actually need.

 

Root yourself in what internal and external stakeholders actually need

Specifically consider the value you deliver to your customer and how you deliver it today. Can you skip parts of your current delivery model? Are you building something people will actually use? Is there adjacent value in your end user’s day-to-day you can unlock?

An example of how we rethought value for our clients was by addressing their key challenge: “the innovator’s dilemma” – getting organizational buy-in from managers incentivized by short-term growth. Our clients were asking for great ideas, but what they really needed was a great idea and a great internal pitch.

In an AI-demonstration project for a B2B company, we used a panel of AI agents mimicking internal stakeholder perspectives to critique our pitch to the business in order to anticipate objections and hone a more compelling (and ultimately accepted) product strategy.

Today we are further unleashing AI by experimenting with ways to integrate with downstream systems like capacity planning or supply chain to anticipate barriers to innovation, as well as unlock always-on opportunity identification.

4. Underestimating what it takes to transform

Part of Sam Altman’s point in painting the distinction between AI jobs and AI tasks is that the jobs at your company will look different depending on what tasks AI executes.

Ironically, we see leaders limiting their AI vision while simultaneously underestimating what has to change organizationally for AI to add the most value.

 

Consider your entire operating model

To get the most out of any AI tool, you must consider your entire operating model: the right data must be available, the right talent and skills must be in place, processes need to complement the value of the tool, and strategies need to align new ways of working with unlocked capabilities.

When you take the steps to avoid the first three pitfalls, your AI strategy becomes clear and focused. That is, if you’ve tested your potential use cases and have a deep understanding of the people and data in your processes, you end up with a better-informed strategy that your organization can get excited about.

Today, we start with strategy sessions to help align leaders to a practical yet ambitious unified vision. We bring our learnings about AI innovation capabilities across clients, and call out barriers we recognize from the traditional innovation and business design systems we’ve built for years.  

By bringing clarity to long term strategy, we are able to chart a path through the short term wins made possible by the power of generative AI while remaining ambitious in the ways we’ll transform to grow.

Bringing this together in a clear strategy to win

Avoiding these pitfalls doesn’t take much effort. We’re not asserting you need an in-depth process analysis or a deep dive into your available data fields – just quick testing and getting one click down from that 10,000-foot view.

By testing off-the-shelf solutions, considering roles, and not limiting yourself to “chatbot” you will quickly educate yourself on what the “jagged frontier” means for your team and your business.

A good AI strategy defines how you will win with AI, and what you do and don’t do. Given the rapidly changing frontier, a great AI strategy includes what you don’t do right now, but may use AI for later when the technology gets there.

Our vision for our clients is always-on Autonomous Innovation that doesn’t miss market opportunities. We’re exploring how to integrate the strategies that originate new revenue with the systems used to execute them.  We plan to win with AI by having it imagine, make, and sell – while validating continuously.

 

How will you transform?

Dive deeper into our learnings from running AI-powered innovation programs with global business leaders on our AI-powered innovation and design hub, or get in touch if you have a specific ask.

Join us at our virtual Autonomous Innovation Summit  to discover how AI is changing the way we innovate, operate and design – and how businesses can transform to thrive in this autonomous world.

VIRTUAL SUMMIT: Autonomous Innovation – June 5 & 6