The Step-by-Step Process to Get Started (& Why You Must)



“Hey there 👋

I’m a Bottybot!

How can I help you?”

I don’t know which websites you’re going to
visit today … but you’ll end up visiting at least one where you’ll hear a *Pop*
sound and a bot will start “talking” to you.

… Offering you pre-sales support.

Or assisting you with your post-sales
questions.

Or simply offering support.

Every day, chatbots have millions of such conversations with users, bringing real, tangible business results such as more leads, more sales, and higher customer loyalty. And they’re pretty mainstream, with a whopping 80% of businesses predicted to use them by 2020.

Because chatbots drive revenue, they can — just like any other revenue channel — be optimized for better results.

Optimizing Chatbots with A/B Testing (and other experiments)

Depending on how you use chatbots in your
marketing, sales, and support strategy, running experiments on them can offer
many benefits.

For example, chatbot experiments can help you
identify:

  • Pre-sales sequences that generate
    more and better leads
  • Trial messaging that converts more
    leads into customers
  • Onboarding experiences that
    convert better
  • Customer success sequences that
    result in higher customer satisfaction (and loyalty)
  • … and support sequences that result in fewer tickets

In short: If you’re a business using chatbots,
you can improve your ROI from the channel with A/B testing.

Quite a few chatbot solutions even come with native A/B testing functionality that allows businesses to run experiments to find the best-performing messaging, sequences, triggers, and more.
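And if your chatbot solution lacks native split testing, the underlying mechanic is easy to reproduce yourself. Here’s a minimal Python sketch of deterministic variant assignment, assuming a hash-based 50/50 split; the assign_variant helper and its inputs are illustrative, not any particular vendor’s API.

    import hashlib

    VARIANTS = ["control", "treatment"]  # e.g., current greeting vs. new greeting

    def assign_variant(user_id: str, experiment: str) -> str:
        """Deterministically bucket a user so they always see the same variant."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100  # stable number in the range 0-99
        return VARIANTS[0] if bucket < 50 else VARIANTS[1]  # 50/50 split

    print(assign_variant("visitor-42", "greeting-test"))  # same answer every session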

But in order to run meaningful CRO experiments for chatbots, you must use the right optimization process.

A/B Testing Chatbots: The Process

Before you begin creating your chatbot
experiments, first choose the metrics you want to improve.

For example, if you’re using a chatbot for marketing, your metric could be the number of leads who opt in after a successful chatbot interaction.

Alternatively, if you’re using a chatbot for boosting sales, your metric could be the number of trial leads whose engagement score improves because of interacting with the chatbot.

Finally, if you’re using a chatbot for offering support, your metric could be the percentage decrease in the number of inbound tickets.

Whatever it is, once you’ve identified the
metric (or metrics) to optimize, you’re ready to begin working on your chatbot
experiment.
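To make the first of those metrics concrete, here’s a rough sketch of how it might be computed from raw event logs. The event names (chat_completed, opted_in) are hypothetical placeholders for whatever your chatbot platform actually records, and the “after” ordering is simplified away.

    # Hypothetical event log: one record per user action.
    events = [
        {"user": "a", "event": "chat_completed"},
        {"user": "a", "event": "opted_in"},
        {"user": "b", "event": "chat_completed"},
        {"user": "c", "event": "opted_in"},  # opted in without chatting
    ]

    chatted = {e["user"] for e in events if e["event"] == "chat_completed"}
    opted_in = {e["user"] for e in events if e["event"] == "opted_in"}

    # Metric: leads who opted in and had a successful chatbot interaction.
    print(len(chatted & opted_in))  # -> 1 (only user "a")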

Here are three simple steps for setting up and running winning chatbot A/B tests:

Step #1: Hypothesis Crafting

Just like regular website or app experiments,
chatbot experiments, too, begin with a clear hypothesis.

For example, when Magoosh, an online test
preparation company, decided to run an onboarding experiment, it started off with a clear
hypothesis:

“If we send trial customers a welcome onboarding message when they first log into a Magoosh product, they will be more likely to purchase premium accounts in the future.”

While Magoosh didn’t exactly test a chatbot, it did test whether sending an automated welcome onboarding chat message could drive more conversions.

In your chatbot testing strategy, your hypothesis could become: “Offering automated chatbot assistance to new trial signups would result in …”
You get the idea, right?

Helpful resources:

Tools for Writing Hypotheses for Your Experiments: These are five really cool CRO tools that will help you write a winning hypothesis for A/B testing your chatbots.

How to Create a Winning A/B Test Hypothesis: This webinar breaks down the process of writing a winning hypothesis into five simple steps. A must-watch if you’re only starting out with experiments.

Complex A/B Testing Hypothesis Generation: This is another excellent tutorial on writing hypotheses for your experiments. These hypothesizing tactics apply seamlessly to chatbot experiments.

Step #2: Designing the Experiments

Just as you would in a regular A/B test or CRO experiment, in your second step, you need to “create” your chatbot experiments.

In this step, you need to translate your hypothesis into a “change” (or a set of changes) to test.

For example, if you hypothesized that a “more branded” chatbot will get better results for your marketing team, in this step you’ll have to see which elements of your chatbot could be branded better. It could be your chatbot’s voice or tone, or simply the visual interface.

While you’re at this step, do check out this guide from the folks at Alma; it will be very helpful for designing your experiments. For instance, for this branding experiment, visit the personality section of their chatbot testing guide and you’ll find questions that surface the branding items you could actually experiment with. See the screenshot below for inspiration:

[Screenshot: the personality section of Alma’s chatbot testing guide, showing branding elements to experiment with]

Once you know which element (or elements) you’ll test (based on your hypothesis), determine the length of your chatbot experiment and the sample size.
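If you’d rather see the math than lean only on the calculators linked below, here’s a sketch of the standard two-proportion sample-size formula (normal approximation, two-sided test). The 10% baseline conversion rate, the 12% target, and the 400-chats-per-day traffic figure are made-up inputs for illustration.

    import math
    from scipy.stats import norm

    def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
        """Visitors needed per variant to detect a shift from p1 to p2."""
        z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
        z_beta = norm.ppf(power)           # critical value for the desired power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

    n = sample_size_per_variant(0.10, 0.12)  # detect a lift from 10% to 12%
    daily_chats = 400                        # assumed chatbot sessions per day
    print(n, "visitors per variant;", math.ceil(2 * n / daily_chats), "days to run")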

Helpful resources:

Tools for Calculating the Duration and Sample Size of Your Experiments: Here are some of the best CRO tools to calculate the ideal sample size and duration for your chatbot experiments.

Convert’s A/B Test Duration Calculator: Just input your data into this calculator, and you’ll know how long your chatbot test or experiment should run.

Step #3: Learning from Experiments

Once your experiment is over and the data is
in, it’s time to analyze your findings.

Usually, there are just three outcomes to any
optimization experiment, including the ones you’ll run for your chatbots. These
are:

  • The control loses. Here, your hypothesis is
    validated and your change brings a positive impact on the numbers. An example
    of such a result would be getting 1000 opt-ins instead of 890 by changing your
    chatbot’s profile image from a cartoon to a mascot.
  • The control wins. Here, your hypothesis needs to be rejected, as your change brings a negative impact on the numbers. For example, the new mascot profile picture getting far fewer signups than the regular cartoon picture.
  • The test is inconclusive. These are usually the most common and often the most frustrating outcomes, because you don’t reach statistical significance and so have no clear winner. (One way to check significance is sketched right after this list.)
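How do you know which outcome you got? A two-proportion z-test is one common way to check. Here’s a sketch using the opt-in example from the list above; the 10,000-visitors-per-variant figure is an added assumption for illustration, since the example only gives the opt-in counts.

    from statsmodels.stats.proportion import proportions_ztest

    # 1000 opt-ins (mascot) vs. 890 (cartoon), each variant assumed to have
    # been shown to 10,000 visitors.
    stat, p_value = proportions_ztest(count=[1000, 890], nobs=[10000, 10000])

    if p_value < 0.05:
        print(f"Significant (p = {p_value:.4f}): the mascot variant wins.")
    else:
        print(f"Inconclusive (p = {p_value:.4f}): iterate and re-test.")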

So once you have your test’s results, you need to go back to step #1 of your experimenting: the hypothesis step.

You can either start a new experiment to test a new hypothesis, or go with iterative testing, which means going back to a hypothesis that didn’t get validated (either because of a losing or an inconclusive test), improving it, and then re-running the test.

When doing iterative testing, make sure you spend time understanding why your test failed the first time around.

Think:

Was it the wrong test segment?

Was it a bad hypothesis all along?

Were your test logistics bad?

The idea here is to learn all that you can from your winning, losing, and even your inconclusive chatbot experiments, because that’s how you optimize: with continuous learning.

Wrapping it up …

If you’re more tech-savvy, you can take your chatbot experiments to a whole new level by testing the content you feed your chatbot (or its “knowledge base”).

Or you can try a different learning algorithm.

Chatbots are here to stay, and as machine learning matures, they will be front and center, acting as the first touchpoint for large segments of your prospects.

It just makes sense to get on board with A/B testing their performance.



Originally published April 12, 2019 – Updated November 23, 2023



Author

Disha Sharma

Content crafter at Convert. Passionate about CRO and marketing.