Dataster Documentation

Dataster helps you build Generative AI applications with better accuracy and lower latency.

Run an Automated Evaluation Test

Dataster provides a robust automated evaluation framework that empowers builders to rigorously assess the quality of their GenAI applications' outputs across their entire use case. This framework can handle hundreds of prompts, sending them to various Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems.

Prerequisites

  1. A Dataster account.
  2. One or more user prompts grouped into a use case.
  3. A ground truth for each user prompt; prompts without a ground truth cannot be included in an automated evaluation (see the sketch after this list).
  4. One or more system prompts that belong to the same use case as the user prompts.
  5. One or more LLMs. Dataster provides off-the-shelf LLMs that can be used for testing.
  6. Optionally, one or more RAGs.
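
The exact schema is managed inside Dataster, but conceptually an evaluation-ready use case pairs every user prompt with its ground truth and carries at least one system prompt. The sketch below is purely illustrative; the field names are assumptions for this example, not Dataster's data model.

    # Hypothetical illustration only: these field names are not Dataster's schema,
    # they just show what an automated evaluation needs as input.
    use_case = {
        "name": "customer-support-faq",
        "user_prompts": [
            {
                "prompt": "How do I reset my password?",
                # A ground truth is required for the prompt to be included
                # in an automated evaluation.
                "ground_truth": "Go to Settings > Security and choose Reset password.",
            },
            {
                "prompt": "What is the refund policy?",
                "ground_truth": "Refunds are available within 30 days of purchase.",
            },
        ],
        # System prompts must belong to the same use case as the user prompts.
        "system_prompts": [
            "You are a concise, friendly support assistant.",
        ],
    }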

Step 1: Navigate to the Automated Evaluation Page

  1. Navigate to the Automated Evaluation page by clicking "Auto Evaluation" in the left navigation pane.

Step 2: Select User Prompts

  1. Select the use case to use for testing.
  2. The interface indicates how many user prompts have been created for this use case.
  3. One use case must be selected.


Step 3: Select LLMs and RAGs

  1. Select the LLMs to use for testing.
  2. Select the RAGs to use for testing.
  3. At least one RAG or one LLM must be selected.
  4. LLMs and RAGs are indicated by different icons.


Step 4: Select System Prompts

  1. Select one or more system prompts for the use case.
  2. At least one system prompt must be selected.


Step 5: Run the Automated Evaluation Job

  1. The user interface indicates how many tests will be run (see the breakdown after this list).
  2. Click Run.
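
The displayed count is consistent with the job testing every combination of user prompt, selected target (LLM or RAG), and system prompt. Assuming that is how the job is assembled, the arithmetic works out as below (the figures are illustrative):

    # Sketch of how the displayed test count is likely derived, assuming every
    # user prompt is run against every selected LLM or RAG under every selected
    # system prompt. The numbers are illustrative.
    num_user_prompts = 100   # prompts with a ground truth in the use case
    num_llms = 2             # selected LLMs
    num_rags = 1             # selected RAGs
    num_system_prompts = 3   # selected system prompts

    total_tests = num_user_prompts * (num_llms + num_rags) * num_system_prompts
    print(total_tests)  # 900 tests in this example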


Step 6: Automated Evaluation Job Execution

  1. The user interface displays each test execution.
  2. Upon completion of each test, its output quality verdict is displayed as a thumbs up or thumbs down (see the sketch after this list).
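
Dataster performs this quality judgement itself; the sketch below is only a stand-in that shows the shape of a single test record and a placeholder verdict function. The judge shown here is hypothetical and is not the Dataster evaluator.

    from dataclasses import dataclass

    @dataclass
    class TestResult:
        """One test execution: a user prompt sent to one LLM or RAG under one system prompt."""
        target: str          # the LLM or RAG that produced the output
        system_prompt: str
        user_prompt: str
        output: str
        ground_truth: str
        thumbs_up: bool      # the binary verdict displayed in the UI

    def judge(output: str, ground_truth: str) -> bool:
        """Hypothetical stand-in: grade an output against the prompt's ground truth."""
        # A real evaluator is far more nuanced; simple containment keeps the sketch runnable.
        return ground_truth.lower() in output.lower()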


Step 7: Observe the Results

  1. After all the tests are complete, the consolidated results are displayed.
  2. For each model and RAG, the average score is displayed.
  3. For each system prompt, the average score is displayed.
  4. For each combination of model, RAG, and system prompt, the average score is displayed (see the roll-up sketch after this list).
  5. Optionally, save the job results.
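
The consolidated view rolls the per-test thumbs up/down verdicts into average scores along each axis. A minimal sketch of that roll-up, counting a thumbs up as 1 and a thumbs down as 0 (the record layout is an assumption for this example, not Dataster's export format):

    from collections import defaultdict

    # Each record is one completed test; the field names are assumptions for this sketch.
    results = [
        {"target": "llm-a",       "system_prompt": "concise",  "thumbs_up": True},
        {"target": "llm-a",       "system_prompt": "detailed", "thumbs_up": False},
        {"target": "support-rag", "system_prompt": "concise",  "thumbs_up": True},
        {"target": "support-rag", "system_prompt": "concise",  "thumbs_up": True},
    ]

    def average_scores(results, key_fn):
        """Average the binary verdicts (thumbs up = 1, down = 0) per group."""
        totals = defaultdict(lambda: [0, 0])  # group -> [passes, runs]
        for record in results:
            bucket = totals[key_fn(record)]
            bucket[0] += int(record["thumbs_up"])
            bucket[1] += 1
        return {group: passes / runs for group, (passes, runs) in totals.items()}

    # Average score per model or RAG, per system prompt, and per combination.
    print(average_scores(results, lambda r: r["target"]))
    print(average_scores(results, lambda r: r["system_prompt"]))
    print(average_scores(results, lambda r: (r["target"], r["system_prompt"])))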


Conclusion

You have successfully run an automated evaluation test in Dataster. This allows you to measure the output quality of your use case and make informed decisions about which models, RAGs, and system prompts to use.


If you encounter any issues or need further assistance, please contact our support team at support@dataster.com.