A non-exhaustive list of sources used in this course, plus eval tools that can help you.
Sources
We relied on several sources to write this series, including:
- AI Engineering: Building Applications with Foundation Models, Chip Huyen
- De-risking QA for LLM-powered applications, Michael Hablich, Chrome DevTools
- Using LLM-as-a-Judge For Evaluation: A Complete Guide, Hamel Husain
Eval tools
Examples of eval solutions and tools include:
- AlignEval
- Arize
- Braintrust
- Datadog
- DeepEval
- Gen AI evaluation service and API by Vertex AI
- Inspect Evals
- JudgeLM
- LangSmith
- LM Evaluation Harness
- OpenEvals
This list is not exhaustive. If you use other eval tools, share them with us.