Experimentation and validation are necessary to de-risk your innovation process – but the methods are not bulletproof. After collaborating with and running validation for multiple global business leaders, we’ve put together a list of the most common experimentation mistakes and how to avoid them.

Around 80% of startups and corporate innovation projects fail, and what they very often have in common is the lack of a market need. Organizations pour time, money, and people into new products, only to see many of them fail shortly after launch. Before launch, it’s possible to find out which projects are worth pursuing by validating the right decisions along the way.

How to know you're making the right call

Running validation experiments is a powerful way to navigate the customer jungle when introducing products or services to the market. Targeted experimentation is meant to validate informed assumptions and provide you with clear next steps. Validating solutions through experimentation isn’t only about finding out whether something works; it also minimizes risk and investment costs.

These kinds of experiments fit your solutions to specific market needs. However, there’s a right and a wrong way to make them happen. Lack of engagement, poor sales, and failure to get the desired responses can all be consequences of poor validation. The worst part? You might not be able to pinpoint what went wrong.

Preventing unsuccessful experiments

1. Think measurables

When outlining an experiment, it’s good to focus on the ‘what’. But it’s just as vital to be able to measure the outcome. With a North Star Metric, you set a single metric at the beginning of your experiment. It becomes your guide when gauging the outcomes against what you wish to achieve.

Why we do this

You want to know where you’re going before you start walking. A descriptive set of milestones can help you sketch what this metric will look like. You can use our Experiment Card to outline a North Star Metric based on your own validation goals.
While working on one of our most recent projects, we wanted to determine whether buyers understood the message of the product. To analyze consumer understanding, we developed a 5-second test. In this case, the North Star Metric was the comprehension rate: we showed the product for five seconds, asked participants what they thought it did, and then measured how many gave a correct answer.
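To make that metric concrete, here’s a minimal sketch of how a comprehension rate could be scored against a target. The response data and the 60% threshold are purely illustrative, not figures from the project.

```python
# Illustrative sketch: scoring a 5-second test against a North Star Metric.
# The responses and the 60% target below are hypothetical examples.

responses = [
    {"participant": 1, "answered_correctly": True},
    {"participant": 2, "answered_correctly": False},
    {"participant": 3, "answered_correctly": True},
    # ... one entry per test participant
]

target_comprehension_rate = 0.60  # assumed success threshold

correct = sum(1 for r in responses if r["answered_correctly"])
comprehension_rate = correct / len(responses)

print(f"Comprehension rate: {comprehension_rate:.0%} (target: {target_comprehension_rate:.0%})")
print("North Star Metric met" if comprehension_rate >= target_comprehension_rate else "Below target")
```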

2. Define clear success criteria

Setting clear success criteria is yet another vital step that some innovation teams leave out. Essentially, this step is about agreeing on the values your North Star Metric needs to hit for the experiment to count as a success or a failure. To make it work, you also need to be prepared to kill or pivot your validation methods based on those criteria.

Why we do this

Setting up the right criteria throughout all phases of experimentation will help you determine if, and why, an experiment is considered a success. Make sure to design your North Star Metric with all team members to bring clarity to your process.

Bonus tip! Most information can be reused. Data from past experiments often serves as a benchmark for future validation processes. Don’t despair if you’re working towards your first experiment: a Google search can usually surface existing benchmarks you can adopt as your own.
One of our B2B clients used LinkedIn messaging as a validation technique, treating it as a tool to check whether their offer raised any interest. After running an experiment with them, we noticed that out of 63 messages sent, they had received only five positive replies. In the end, the collected data wasn’t useful, because they hadn’t defined how many responses would count as a success.
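As a sketch of what a pre-registered success criterion could look like, the snippet below reuses the message and reply counts from this example; the 15% reply-rate threshold is a hypothetical benchmark the team would have needed to agree on up front.

```python
# Sketch of a pre-registered success criterion for an outreach experiment.
# 63 messages and 5 positive replies come from the example above;
# the 15% threshold is an assumed benchmark, set before the experiment runs.

messages_sent = 63
positive_replies = 5
success_threshold = 0.15  # assumed minimum positive-reply rate

reply_rate = positive_replies / messages_sent  # roughly 7.9%

if reply_rate >= success_threshold:
    verdict = "success - keep investing in this channel"
else:
    verdict = "below threshold - pivot or kill this validation method"

print(f"Positive reply rate: {reply_rate:.1%} -> {verdict}")
```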

3. Don’t rush into it

Control is one of the many advantages of digital validation experiments. They can be conducted quickly, enabling teams to gather consumer or end-user data in a matter of days. But like all good things, there’s a downside. More often than not, teams hurry towards the experimentation phase believing speed is the only factor that matters. Although digital experiments usually offer faster outcomes, rushing into them mostly results in inconclusive data. You might have already noticed the connection between the North Star Metric, clear success criteria, and a thinking-before-doing approach: these first three steps to avoid experimentation malfunction usually go hand in hand.

Why we do this

To avoid putting effort into results we can’t use, we always keep in mind a basic principle of data modelling: garbage in = garbage out. You should too. To avoid hasty mistakes and irrelevant information, our mindset is to always choose data quality over fast execution. Reverse engineering from the desired results is a good starting point for setting up these types of experiments.
While trying to harness the speed these experiments provide, we’ve had clients come up with ideas they wanted to test within less than a day of work. Without diving into the nitty-gritty of the experiment, setting up tests in such a narrow time frame would only produce ineffective outcomes.

4. Avoid short-term learning memory

Experiments provide consumer insights based on real market data, showing which route to take next in your project. But information saturation is never helpful. Conducting more experiments than needed can tangle the results up with other quantitative and qualitative research sources, diluting the value of the insights. Having clear takeaways and revisiting compiled data helps you avoid the common pitfall of not making the most of the information that’s already available.

Why we do this

We avoid wasting assets and make the most of the data we already have; it is rarely single-use.
While conducting an experiment for a maternity product, we discovered that only 3 out of 12 features had a high positive response. Based on further research, a new feature was proposed. However, revisiting the previous experiment’s outcomes led to killing this proposal before putting it to the test.
Tool tip! For a project, we used Miro to create an experimentation war room with easy access for the team. Everyone participated in outlining experiment setups, goals, results overviews, learnings, and next steps. This gave all team members quick access to past experiment results that would come in handy later.
Experimentation war room for a project

5. Don’t fall in love with your idea

Successful validation experiments depend on unbiased teams – what’s the point of running a test otherwise? When conducting trials, it’s important to watch out for confirmation bias: the human tendency to look for information that confirms one’s beliefs. A good way to keep track of assumptions is to list them and design an experiment around them. However, you have to be willing to be proved wrong! Experiments are not there to confirm preferences or reflect your team’s presumptions; they’re meant to give you a glimpse of what end-users and consumers actually want.

Why we do this

Avoiding biases creates a learning-prone mindset, much needed in innovation teams. Everyone involved in these processes should be prepared to have their ideas debunked, to learn from unexpected results, to pivot around testing outcomes, and to redefine their success criteria as often as it takes.
One of our clients in the food and beverage industry had conducted groundbreaking scientific research on a product, and our job was to validate the data. We ran a testing phase intended to analyze the product’s relevance amongst a target audience. We led an experiment prioritizing one of the product’s features, and actually discovered a more relevant one. The team was caught off guard when they realized their initial assumption wasn’t as prominent as they had thought. Because of those initial expectations, they ran the experiment more than once, but the results stayed the same.
Sample output of feature experimentation

6. Draw a line between business rationale and desirability metrics

You might’ve heard of soft and hard key performance indicators (KPIs). When setting up validation experiments, we like to split them up. Soft metrics measure values such as impressions, reach, and engagement, and sometimes even click-through rate. Even though they’re easy to measure, these are often called vanity metrics. Why? They make you feel good, but they don’t translate into conversion. Although these metrics do offer desirability insights, they’re not where the business value lies. Conversion rates, cost per sale, cost per qualified lead, and acquired customers: these are the hard metrics that reflect direct value, so they need to take the spotlight.

Why we do this

It’s not one or the other. You can pay attention to all kinds of metrics while focusing on the ones that reflect conversion. Map out the metrics that show real end-user value for each stage of the project: pay attention to hard metrics in the viability stage, and focus on soft metrics when analyzing desirability. The goal is a robust analytics structure that uses all these values the right way.
Different audiences behave differently across digital channels. Target users aged 50+ have significantly higher click rates on Facebook compared to a younger audience. When we use landing pages to contrast each audience’s conversion rate, we find the older users show similar (sometimes even higher) engagement than younger generations. The soft metrics favor the older users, but ultimately, the conversion rates lean towards the younger ones.
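Here’s a small sketch of how that split can show up in the numbers. All figures are made up for illustration, but the pattern mirrors the one described above: one audience wins on the soft metric, the other on the hard one.

```python
# Illustrative comparison of soft (vanity) and hard (conversion) metrics per audience.
# Every figure below is invented for the sake of the example.

audiences = {
    "50+":   {"impressions": 10_000, "clicks": 420, "signups": 8},
    "18-35": {"impressions": 10_000, "clicks": 260, "signups": 19},
}

for name, a in audiences.items():
    ctr = a["clicks"] / a["impressions"]      # soft metric: click-through rate
    conversion = a["signups"] / a["clicks"]   # hard metric: landing-page conversion
    print(f"{name}: CTR {ctr:.1%} | conversion {conversion:.1%}")

# The older audience can win on CTR while the younger one wins on conversion,
# which is why the hard metrics should carry the business decision.
```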

7. Avoid force-fitting tools

Tools aren’t just hype. An elaborate toolbox facilitates effective validation experiments. UsabilityHub, Phantombuster, and Umso are a few examples of tools with pretty awesome functionality for experimentation. Each tool comes with its own strengths and weaknesses, which means each has an ideal scenario in which to use it. This is why it’s important to choose the tool that best fits your specific learning goals, and not the other way around.

Why we do this

We want our capital and time investments to be as efficient as possible, and avoiding tools we don’t specifically need helps us get there. By creating an overview of your toolbox, your team will always know the right time to bring each tool in. You can take a look at our experiment picker flowchart: a nifty tool that helps you set up the right experiments for the validation you need, and makes it easier to pick your tools accordingly.
Here’s a good example of tool usage. Recently, a client of ours received a message from their legal department: a new tool had been approved for their use. This turned into a recurring topic, with the client keen to work the tool into their experimentation processes. But we were still in the discovery phase of the project, and at that point the new tool wasn’t suitable; it would only come in handy at a later testing stage. While it’s always important to be aware of potential tools, you shouldn’t aim to use them just for the sake of it.
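As a rough sketch of what a toolbox overview could look like, here is a hypothetical mapping of learning goals to tools and project phases. The pairings are assumptions based on how these tools are commonly used (and on the examples earlier in this article), not an official capability list or any client’s actual setup.

```python
# Minimal sketch of a toolbox overview: map each learning goal to a tool and the
# project phase where it fits. The pairings below are illustrative assumptions.

toolbox = {
    "message comprehension (5-second test)": {"tool": "UsabilityHub", "phase": "discovery"},
    "interest in the offer (LinkedIn outreach)": {"tool": "Phantombuster", "phase": "validation"},
    "desirability and conversion (landing page)": {"tool": "Umso", "phase": "validation"},
}

def pick_tool(learning_goal: str, current_phase: str) -> str:
    entry = toolbox.get(learning_goal)
    if entry is None:
        return "No tool mapped - define the learning goal first."
    if entry["phase"] != current_phase:
        return f"{entry['tool']} fits the goal, but save it for the {entry['phase']} phase."
    return f"Use {entry['tool']}."

# A tool can be approved and still not be the right one for the current phase:
print(pick_tool("interest in the offer (LinkedIn outreach)", "discovery"))
```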

It's go time

We experiment because validating ideas and assumptions through experimentation is how you improve your product’s market fit. Take these tips for avoiding an experiment malfunction into account, and you’ll have a safety net around your validation processes – and a much better chance of success.