Calibration FAQs

Here are some common questions regarding calibration and their answers:

How are cases selected for calibration?

For ATA calibration, you filter cases based on conditions specific to your use case, much as you assign cases for basic evaluation via sampling. For P2P calibration, cases are selected manually according to your needs: you choose cases that meet certain criteria, such as very low or very high scores, by applying the relevant filters to focus on particular areas of interest or concern.
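As a rough illustration, score-based case selection for P2P calibration might look like the sketch below. The `cases` data, the `score` field, and the thresholds are all hypothetical, not the product's actual API:

```python
# Hypothetical sketch: selecting calibration cases by score thresholds.
cases = [
    {"id": 101, "score": 32},
    {"id": 102, "score": 78},
    {"id": 103, "score": 97},
]

LOW, HIGH = 40, 90  # flag very low or very high scores for calibration

# Keep only the cases whose scores fall in the areas of concern.
selected = [c for c in cases if c["score"] <= LOW or c["score"] >= HIGH]
print([c["id"] for c in selected])  # → [101, 103]
```

The same idea applies whatever the actual filter criteria are: define the thresholds of interest, then pull only the matching cases into the calibration run.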

What if we have multiple primary evaluators?

If you have multiple primary evaluators, you have two options: either clone the rule and designate a different Primary Evaluator in each copy, or use a single rule and update the Primary Evaluator field as needed.

Is any one variance parameter better than the others?

No variance parameter is inherently better than the others; the right choice depends on the client's specific use case. Each method provides different insights without requiring additional manual work. They are simply different variance-calculation scenarios that serve the same calibration purpose, differing only in the base score against which variance is checked.

Primary Evaluator Variance: Variance is calculated with respect to the primary evaluator/supervisor.

Used when insights are needed into individual auditor deviations from a primary reference. This is useful for assessing how each auditor compares directly to a designated primary evaluator.

Mean Variance: Variance is calculated with respect to the mean, i.e., the average of all auditors' scores.

Used to gauge how each auditor's evaluation differs from the collective average. This approach provides an overall sense of how auditors' scores compare to the group as a whole.

Median Variance: Variance is calculated with respect to the median, i.e., the middle value when all auditors' scores are arranged in ascending or descending order.

Sets the middle value in the dataset as the reference point, offering a robust measure, particularly in scenarios where extreme values might skew the mean. This is helpful for understanding deviations in a more balanced way.
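The three variance parameters above can be sketched in a few lines of Python. The function name, score values, and data shapes are illustrative assumptions, not the platform's actual implementation:

```python
from statistics import mean, median

def variances(scores, primary_score):
    """Deviation of each auditor's score from a chosen base score.

    scores: the P2P auditors' scores for one case.
    primary_score: the Primary Evaluator's score for that case.
    """
    bases = {
        "primary": primary_score,  # Primary Evaluator Variance
        "mean": mean(scores),      # Mean Variance
        "median": median(scores),  # Median Variance (robust to outliers)
    }
    return {name: [s - base for s in scores] for name, base in bases.items()}

# Example: three auditors scored 70, 80, and 90; the Primary Evaluator scored 85.
print(variances([70, 80, 90], 85))
```

With these example scores the mean and median bases coincide (both 80), so the mean and median deviations match, while the primary-evaluator deviations are shifted by the gap between the primary's score and the group's center.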

How are cases assigned for calibration?

For ATA calibration, you first use a sampling rule to filter cases and then assign them accordingly. For P2P calibration, you apply a case rule runner macro and manually assign the case to the users specified in the rule.

Can we build a custom dashboard for calibration?

Yes. You can create a custom dashboard in the sandbox and modify it based on the checklist setup there.

What is the Primary Evaluator?

The Primary Evaluator is a field you set in the rule for a particular P2P calibration. It identifies the auditor whose score serves as the base score for checking variance among the auditors. P2P evaluations are conducted on new cases (generally with no existing evaluations) to calculate the variance of the different P2P auditors relative to the Primary Evaluator.