FAQs on Smart Comprehend


Below are some frequently asked questions regarding Smart Comprehend:

Smart Comprehend is available for all channels supported by Sprinklr. Currently, the feature supports five languages: English, German, Portuguese, French, and Spanish.

For an AI-based solution to be feasible, there should be at least 10 knowledge base articles. To achieve good accuracy and coverage, these articles should also cover the majority of call drivers; fewer articles will likely result in reduced accuracy and recall.

Smart Comprehend may generate fewer predictions than expected for the following reasons:

  1. Lack of specific call drivers: The majority of cases may consist of spam or non-engageable messages that do not contain identifiable call drivers.

  2. Insufficient coverage in articles: The existing articles might not cover all the major call drivers for a particular partner, leading to fewer predictions.

  3. Need for fine-tuning: If neither of the above applies, fine-tuning the model may be necessary to increase the frequency of accurate predictions.

There are three steps involved in the prediction of knowledge base articles (illustrated in the sketch after this list):

  1. Case-to-query generation: Sprinklr AI analyzes the current case conversation with Sprinklr's in-house LLM to generate a relevant search query.

  2. Article retrieval through generated query: The generated query is then used with similarity embeddings to retrieve the most relevant article.

  3. Extraction of relevant article content: The similarity embeddings are further employed to extract the relevant portion of the article, providing an appropriate answer to the user's query along with the article.
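For illustration only, the sketch below shows how a retrieval pipeline of this shape can be put together in general. It is not Sprinklr's implementation: the open-source embedding model, the `generate_query` stub (standing in for the in-house LLM), and the newline-based passage splitting are all assumptions.

```python
# Illustrative sketch of embedding-based article retrieval and passage
# extraction. NOT Sprinklr's implementation: the model name, query stub,
# and passage splitting are assumptions for demonstration purposes.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def generate_query(conversation: str) -> str:
    # Step 1 (stub): in production an LLM condenses the case conversation
    # into a focused search query; here the text is passed through unchanged.
    return conversation


def cosine(query_vec: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    # Cosine similarity between one query vector and a matrix of vectors.
    return (matrix @ query_vec) / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec)
    )


def retrieve_and_extract(conversation: str, articles: list[dict]) -> dict:
    query = generate_query(conversation)
    q_emb = model.encode([query])[0]

    # Step 2: retrieve the article whose embedding is closest to the query.
    art_embs = model.encode([a["body"] for a in articles])
    best = articles[int(np.argmax(cosine(q_emb, art_embs)))]

    # Step 3: rank passages of that article to surface the most relevant
    # snippet alongside the article itself.
    passages = [p.strip() for p in best["body"].split("\n") if p.strip()]
    p_embs = model.encode(passages)
    snippet = passages[int(np.argmax(cosine(q_emb, p_embs)))]
    return {"article": best["title"], "snippet": snippet}
```

Given a list of article dicts with `title` and `body` keys, `retrieve_and_extract(conversation, articles)` returns the best-matching article and the passage most similar to the generated query.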

Yes, Smart Comprehend can be enabled for selected users.

Yes, the AI model can be trained on both public and private articles; however, the Share URL option won't be available for private articles.

No, it is not necessary to use Sprinklr's Knowledge Base to utilize Smart Comprehend.

You can continue to maintain your articles in your existing content management system and still use Smart Comprehend through an API integration. With this integration, a copy of your KB articles is maintained on Sprinklr, and the AI model is trained on those articles.
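As a purely hypothetical illustration of such a sync, the snippet below pushes articles from a CMS to an integration endpoint. The URL, payload fields, and authentication header are placeholders, not Sprinklr's actual API; the real contract is defined with the Sprinklr Care team during onboarding.

```python
# Hypothetical KB sync job. Endpoint, payload schema, and auth scheme are
# placeholders only; they do not reflect Sprinklr's actual API.
import requests

SYNC_ENDPOINT = "https://api.example.com/kb/articles/sync"  # placeholder URL
API_TOKEN = "replace-me"  # placeholder credential


def push_articles(articles: list[dict]) -> None:
    """Push a batch of CMS articles so a copy is kept for model training."""
    for article in articles:
        payload = {
            "externalId": article["id"],      # ID from your CMS
            "title": article["title"],
            "body": article["body"],          # plain-text article body
            "visibility": article.get("visibility", "public"),
        }
        resp = requests.post(
            SYNC_ENDPOINT,
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
```

A job like this would be scheduled at whatever sync frequency you agree with the Sprinklr team.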

The following best practices help the model generate accurate recommendations:

  1. Format: Articles should be in a FAQ or issue-specific, SOP-based format, directly addressing customer pain points with clear steps or answers.

  2. Feedback: Agents and users should provide feedback on recommendations whenever possible. Once enough feedback is collected, it can be used to gauge the model's performance and for offline re-training of the model.

  3. Text-based content: Important information should not be presented in images, as text-based LLMs cannot process image content.

  4. Specificity: Each article should focus on a particular issue and be concise to avoid creating unnecessary noise for the model.

  5. Relevance: Articles containing generic guidelines for agents should not be used for recommendations, as they are not specific to individual scenarios.

For the Smart Comprehend model to be deployed effectively, there should be at least 1,000 cases within the applicable filters. This is a major requirement, though not an absolute blocker.

Approximately two weeks are required for model training and internal validation, once the required number of KB articles has been shared with or integrated by the Sprinklr Care team. This estimate does not include integration timelines.

The frequency of the KB article sync can be decided by mutual agreement between you and the Sprinklr team (Sprinklr Service).

Yes, you can give feedback on AI recommendations via the Thumbs Up/Down buttons on each article card.

Yes, there is a standard reporting dashboard available for Smart Comprehend.

Currently, the feedback mechanism is not used to automatically train or fine-tune the deployed model.