AI+ Studio - Provider and Models Settings

Within the Provider and Model Settings in AI+ Studio, you can manage all your providers and the models associated with them. You can either use Sprinklr-provided LLMs or bring your own key for a provider; currently, this is limited to OpenAI, though additional providers and the ability to bring your own models will be supported in an upcoming quarter.

Enablement note:

To learn more about getting this capability enabled in your environment, please work with your Success Manager.

Permission Governance

Once enabled, your access to the module is governed by the following permissions for Providers and Models:

Permission: View

  • Yes: Can only view the providers configured for the partner and the corresponding models. The tile becomes visible once this permission is granted.

  • No: Cannot access AI+ Studio at all.

Permission: Edit

  • Yes: Can view, add, or edit your own provider. The global Add Provider CTA and the Edit action on created providers become accessible once this permission is granted.

  • No: Can only view the providers and corresponding models; cannot add or edit your own key or fine-tune a model.

Permission: Fine-tune

  • Yes: Can fine-tune a model. The Fine-tune CTA becomes visible once this permission is granted.

  • No: Can only view the models, but cannot fine-tune any.

To Configure Provider and Model Settings

  1. Click the New Tab icon. Under Platform Modules, click All Settings within Listen.

  2. Within Manage Customer, search and select AI+ Studio.

  3. Click the Provider and Models Settings card on the landing page.

  4. If you've opted for Sprinklr-provided LLMs, they will automatically appear in the Providers window.

  5. To add a provider using your own key, click Add Provider in the top right corner. Currently, only OpenAI is available. An optional way to verify your key before adding it is sketched after these steps.

  6. After saving the details, you’ll be redirected to the Providers window, where the newly added provider will be displayed.

  7. Hover over the provider card to see options to View Models and Edit (to change the provider's name).

    Note: The Edit option is only available for providers added via your own key, not for Sprinklr-provided LLMs.

  8. Click View Models from the action bar to see available base models and any fine-tuned models.

    Note: For Sprinklr-provided LLMs, contact the support team at tickets@sprinklr.com if you need access to additional base models.
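
If you want to confirm that an OpenAI key is valid before pasting it into Add Provider, a minimal sanity check against OpenAI's public models endpoint is sketched below. This is not a Sprinklr requirement, only an optional check; the environment variable name and the Python approach are illustrative assumptions.

    import os
    import urllib.request

    # Optional sanity check (illustrative): list models with the key you plan to add.
    # A successful (HTTP 200) response means OpenAI accepts the key; an invalid key
    # raises an HTTPError with status 401.
    key = os.environ["OPENAI_API_KEY"]  # assumes your key is stored in this variable
    request = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {key}"},
    )
    with urllib.request.urlopen(request) as response:
        print("Key accepted, HTTP status:", response.status)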

To Fine-Tune a Model

Fine-tuning optimizes a model’s performance, enabling it to deliver higher quality results than standard prompting. This process allows for training on more examples, saving tokens with shorter prompts, and reducing request latency.

  1. To fine-tune a model, click the Tune Model button in the top right corner of the AI Models window.

  2. On the Basic Details window, select the model you wish to fine-tune from the dropdown list. This list includes both fine-tunable base models and existing fine-tuned models.

  3. Next, enter a meaningful name and description for your fine-tuned model.

  4. On the Tuning Dataset window, upload your training dataset in either JSON or JSONL format. Ensure the dataset includes a diverse set of demonstration conversations that closely resemble the types of interactions the model will handle in production. A sample dataset sketch is provided after these steps.

  5. Additionally, you can upload a validation input file to evaluate the accuracy and performance of your fine-tuned model.

  6. On the Advanced Settings window, you can adjust additional parameters to further customize the training process (a brief sketch of what these parameters control appears after these steps):

    • Batch Size: Controls the number of samples processed before the model's internal parameters are updated. Enable the setting to specify it manually, or leave it to be assigned automatically.

    • Number of Epochs: Specifies how many times the entire dataset is used to train the model. Enable the setting to specify it manually, or leave it to be assigned automatically.

    • Learning Rate Multiplier: Adjusts the speed at which the model learns during training. Enable the setting to specify it manually, or leave it to be assigned automatically.

  7. Click Tune to start the fine-tuning process. This may take a few minutes to complete. You will receive a notification once the process is finished. After that, you can use your fine-tuned model across any of your AI+ driven use cases.

    For additional support or customization, please contact the support team.
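
For reference, a minimal sketch of a training dataset in OpenAI's chat fine-tuning format is shown below. The file name, the example conversations, and the use of the chat "messages" schema are illustrative assumptions; confirm the exact schema expected for your use case before uploading.

    import json

    # Illustrative training examples in OpenAI's chat fine-tuning format.
    # Each line of the JSONL file is one demonstration conversation.
    examples = [
        {
            "messages": [
                {"role": "system", "content": "You are a concise support assistant."},
                {"role": "user", "content": "How do I reset my password?"},
                {"role": "assistant", "content": "Open Settings, select Security, and click Reset Password."},
            ]
        },
        {
            "messages": [
                {"role": "system", "content": "You are a concise support assistant."},
                {"role": "user", "content": "Where can I download my invoice?"},
                {"role": "assistant", "content": "Go to Billing and click Download next to the invoice you need."},
            ]
        },
    ]

    # Write one JSON object per line (JSONL); a validation file follows the same format.
    with open("training_data.jsonl", "w", encoding="utf-8") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")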
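
For readers familiar with OpenAI's fine-tuning API, the Advanced Settings above correspond roughly to the hyperparameters of a fine-tuning job. The sketch below assumes the openai Python package and an illustrative base model name; it is only meant to show what Batch Size, Number of Epochs, and Learning Rate Multiplier control, since within AI+ Studio the job itself is created for you.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Upload the training file prepared above (illustrative).
    training_file = client.files.create(
        file=open("training_data.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Create the fine-tuning job. Omitting a hyperparameter lets the service
    # choose it, mirroring the auto-assignment toggles described above.
    job = client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",  # illustrative base model
        training_file=training_file.id,
        hyperparameters={
            "batch_size": 8,                  # samples per parameter update
            "n_epochs": 3,                    # passes over the full dataset
            "learning_rate_multiplier": 0.1,  # scales the learning speed
        },
    )
    print("Fine-tuning job started:", job.id)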