Simulate Test Cases

Updated 

Simulating test cases based on scenarios or generating them from past conversations are two approaches used to validate and assess the performance of the Digital Twin.

Simulating Test Cases Based on Scenario: In this approach, you have the flexibility to set parameters such as customer description and issue, which will guide the simulation of a customer conversation. By defining specific scenarios, you can encompass various pathways, edge cases, and potential responses.

Generating Test Cases from Past Conversations: This allows you to create test cases using an existing conversation as a basis for testing and evaluating the Twin's performance. By using conversations that have occurred in live environments, you ensure that the test cases reflect the dynamic nature of user interactions. This method helps in identifying areas for improvement based on real user behavior, ultimately leading to a more robust and effective conversational AI system.

To Simulate Test Cases - Happy Flow

  1. On your Digital Twin window, click Simulate Test Cases within Simulator at the bottom right.

  2. On the Test Cases window, click the dropdown icon next to Test Case in the top right corner. From the list of options that appear, select Simulate Test Cases.

  3. On the Set Simulation Parameters window, choose the Simulation Type as Happy Flow to create and preview the best possible chat experience for users.

  4. Add a test Name and select the Supported Language for the simulation.

  5. Provide a Customer Description to explain the user's situation and frustration.

  6. Specify the Issue that the customer faces with the product or service to create realistic scenarios.

  7. Enter the total Number of Simulations that you want to run to thoroughly test the Twin's performance.

  8. Once all parameters are set, click Save to initiate the simulation process. The Twin will simulate interactions based on the defined parameters, providing insights into its performance.

To Simulate Test Cases - Jail Break

  1. Selecting the Jail Break simulation type allows for the simuation of user interactions that may lead the Twin to provide undesired or hallucinatory responses.

  2. Utilize the above outlined steps to simulate a specific scenario designed to elicit less-than-ideal responses from the Twin, potentially invoking outliers. For instance, create a scenario where the Twin is prompted to compare your brand with its competitors.

  3. You can guide the Twin by specifying the Jail Break Attack to compel it to respond to an outlier or unusual scenario. Integrating jail break attacks in the simulation helps uncover potential weaknesses in the Twin's responses, enabling you to implement enhancements for improved performance and reliability.

  4. An immediate refinement could involve implementing Guardrails based on insights gained from these simulations, followed by testing the scenario again with the guardrails to ensure the Twin responds as expected.

To Create a Test Case with an Executed Case

  1. On the Test Cases window, click Test Case in the top right corner.

  2. On the Test Case window, add a descriptive Name for the test case.

  3. Select the desired Case Number to test your Twin using a previous case. This allows you to use an existing conversation as a basis for testing and evaluating the Twin's performance. Click Save.

View Cases

  1. After setting simulation parameters for test cases, click View Cases to access the created simulations.

  2. On the next window, click the desired case to inspect and analyze each conversation individually, assessing the Twin's performance.

  3. In this view, you can review the conversation and access the Debug Log. This analysis helps in understanding how well the Twin handled the simulated interactions and provides insights into areas that may require improvement.