A/B Testing inside a Dialogue Tree
If you want to keep the initial steps of a bot workflow unchanged but later branch into two or more alternate workflows for comparison, you can run this A/B test directly within a dialogue tree. This approach lets you iteratively optimize bot performance by comparing workflow variations while keeping the initial user experience consistent.
For instance, a customer service bot might start with a standard greeting and FAQ retrieval before branching into different paths for resolving specific issues or inquiries, enabling a comparative analysis of alternative resolution or engagement strategies.
Implementing A/B Testing in a Dialogue Tree
Begin by creating a dialogue tree with the initial steps you want to keep consistent across all variations of the workflow. These could include standard greetings, FAQs, or any other introductory interactions.
After the initial steps, insert an Update Properties node to capture a value that can serve as a random assignment key, for example, the current time in epoch format. This value will determine which path the user follows.
Next, apply the modulus operation to the captured value. For example, epoch_time % 2 divides the value by 2 and returns the remainder: 1 if epoch_time is odd, 0 if it is even. Since the parity of the epoch time is effectively random at the moment a user arrives, this remainder splits traffic roughly evenly between the two paths.
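Expressed outside the dialogue tree, the assignment logic looks like this minimal Python sketch. The names epoch_time and variant_selector are illustrative, not platform-defined properties:

```python
import time

# Capture the current time in epoch (Unix) format. Its parity changes
# every second, so it acts as a cheap coin flip for assignment.
epoch_time = int(time.time())

# Modulus by 2 returns the remainder of the division:
# 0 if epoch_time is even, 1 if it is odd.
variant_selector = epoch_time % 2
```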
Then add a Decision Box node to the dialogue tree. This node branches on the result of the modulus operation, directing users down different paths according to the remainder.
Create two paths from the Decision Box node, each representing a different variation of the workflow: for example, Path 1 for Workflow A and Path 2 for Workflow B. Implement the variations being tested in each path, such as different response messages, alternative actions, or other modifications tied to your A/B testing objectives.
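The routing behavior of the Decision Box can be sketched as a small function. Here route_user is a hypothetical helper, assuming Workflow A handles even remainders and Workflow B handles odd ones:

```python
def route_user(epoch_time: int) -> str:
    """Mimic the Decision Box: even remainders go to Workflow A,
    odd remainders go to Workflow B."""
    if epoch_time % 2 == 0:
        return "workflow_a"  # Path 1: e.g., the existing resolution flow
    return "workflow_b"      # Path 2: e.g., the candidate alternative
```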
Deploy the dialogue tree and conduct A/B testing by gathering data on user interactions with each workflow variation. Analyze the results to determine which variation performs better based on predefined metrics such as user engagement, completion rates, or task success.
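As a rough picture of the analysis step, the sketch below computes a completion rate per variant from hypothetical interaction logs. The record structure and variant names are assumptions for illustration, not the platform's reporting schema:

```python
# Hypothetical logs: one record per conversation, tagged with the
# variant the user saw and whether they completed the task.
interactions = [
    {"variant": "workflow_a", "completed": True},
    {"variant": "workflow_a", "completed": False},
    {"variant": "workflow_b", "completed": True},
    {"variant": "workflow_b", "completed": True},
]

for variant in ("workflow_a", "workflow_b"):
    records = [r for r in interactions if r["variant"] == variant]
    rate = sum(r["completed"] for r in records) / len(records)
    print(f"{variant}: completion rate {rate:.0%} over {len(records)} cases")
```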
Points to Remember
To filter A/B test results accurately in reporting, update case-level custom fields within each individual workflow. These custom fields act as identifiers that distinguish the bot versions, ensuring every case is tagged with the variation it was exposed to during the test. With cases tagged correctly, you can track and analyze performance metrics, such as engagement rates, conversion rates, or user satisfaction, for each variation of the bot.
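Conceptually, the tagging step amounts to stamping each case with its variant. In this sketch, tag_case is a hypothetical helper and ab_test_variant an illustrative field name, not a platform-defined custom field:

```python
def tag_case(case: dict, variant: str) -> dict:
    """Attach an A/B variant identifier to a case record so reports
    can be filtered by the version the user was exposed to."""
    case["custom_fields"] = {**case.get("custom_fields", {}),
                             "ab_test_variant": variant}
    return case

case = tag_case({"id": "case-123"}, "workflow_b")
# Downstream reporting can now group or filter cases on ab_test_variant.
```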