Deploying a Smart FAQ Model in a Dialogue Tree
Before You Begin
Create a Smart FAQ Model, add relevant Content Sources, and ensure the Content Sources are active.
Overview
This article explains how to use the Smart FAQ model you've created to deploy FAQ bots powered by generative AI.
Steps to Deploy Model
Navigate to the Dialogue Tree where you intend to integrate the model you've created.
Enablement note:
Work with your Success Manager to ensure the BotSmartReply API is enabled in your environment.
Insert an API node along the path where you want to get responses from the GPT Model.
Choose BotSmartReply/BotSmartReply from the API selection dropdown menu.
Configure the input parameters for the API node as follows:
A. engineID:
i. The engineID value should match the ModelID of the Smart FAQ model you intend to utilize for generating responses.
ii. You can find this value on the Smart FAQ Model page; it appears as the last segment of the page URL.
B. text: Set to USER_SAYS_TEXT
C. assetId: Set to MESSAGE.ASSET_ID
D. language: Set to 'en'. Note: Keep this parameter as 'en' even if you want the GPT response in a different language.
E. readTimeout: Keep the recommended value of 60,000 ms (60 seconds).
F. prompt: Use this parameter to pass a string specifying the persona, response-generation rules, and any other instructions the LLM should follow. See the best practices for writing prompts for more detail.
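Assembled together, the input parameters above can be sketched as follows. This is an illustrative sketch only: the helper names are hypothetical, the exact request shape is handled by the API node itself, and the values written as platform variables (USER_SAYS_TEXT, MESSAGE.ASSET_ID) are resolved by the bot platform at runtime.

```python
# Hypothetical sketch of the BotSmartReply input parameters described above.
# Helper names are illustrative, not platform functions.

def model_id_from_url(model_page_url: str) -> str:
    """Extract the ModelID from a Smart FAQ Model page URL (its last path segment)."""
    return model_page_url.rstrip("/").rsplit("/", 1)[-1]

def build_bot_smart_reply_params(engine_id: str, prompt: str) -> dict:
    """Assemble the input parameters for the BotSmartReply API node."""
    return {
        "engineID": engine_id,           # ModelID of the Smart FAQ model
        "text": "USER_SAYS_TEXT",        # platform variable: the visitor's message
        "assetId": "MESSAGE.ASSET_ID",   # platform variable: the current asset
        "language": "en",                # must stay "en" regardless of reply language
        "readTimeout": 60000,            # recommended timeout, in milliseconds
        "prompt": prompt,                # persona and rules for the LLM to follow
    }

# Example URL is a placeholder; use your own Smart FAQ Model page URL.
engine_id = model_id_from_url("https://example.com/smart-faq/models/abc123")
params = build_bot_smart_reply_params(engine_id, "You are a concise support agent.")
```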
Configure the output parameters of the API node so the generated response is stored for later use.
After setting up the API node, display the stored output parameter in a bot reply node.
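The display step can be sketched as below. This is a minimal sketch under assumptions: the output field name "reply" is hypothetical, so check the actual output parameter names exposed by BotSmartReply in your environment.

```python
# Hedged sketch: surface the API node's stored output in a bot reply,
# with a fallback in case the model returns nothing usable.
# The field name "reply" is an assumption for illustration.

def extract_reply(api_output: dict, fallback: str) -> str:
    """Return the generated answer, or a fallback if the output is empty."""
    reply = (api_output.get("reply") or "").strip()
    return reply if reply else fallback

bot_message = extract_reply(
    {"reply": "  Our store opens at 9 AM. "},
    "Sorry, I couldn't find an answer to that.",
)
```

A fallback like this keeps the bot from sending an empty reply node when the model produces no answer.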
What's Next?
Test your responses using a golden test set.
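A golden-test-set check can be sketched as below. The question/keyword format is an assumption for illustration, not a platform feature; in practice you would send each question through your deployed bot and check the reply it produces.

```python
# Hedged sketch of checking generated replies against a golden test set.
# The golden set entries below are illustrative placeholders.

golden_set = [
    {"question": "What are your opening hours?",
     "expected_keywords": ["9 AM", "5 PM"]},
    {"question": "Do you ship internationally?",
     "expected_keywords": ["ship"]},
]

def passes(reply: str, expected_keywords: list) -> bool:
    """A reply passes if it mentions every expected keyword (case-insensitive)."""
    return all(kw.lower() in reply.lower() for kw in expected_keywords)

result = passes("We are open from 9 AM to 5 PM on weekdays.", ["9 AM", "5 PM"])
```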