© 2026 Arize University

Browse products


AI Agents Mastery: From Architecture To Optimization - 2026 Update

Dive into modern AI agent engineering in this updated, hands-on bootcamp. You’ll go from spinning up your first LLM-powered agent to designing robust architectures with tools, MCP, and agentic RAG, all grounded in real-world use cases. Along the way, you’ll work with popular agent frameworks, add observability using traces and spans, and build evaluations that keep your agents reliable as prompts, models, and requirements change. By the end, you’ll have an end-to-end agent you can deploy, monitor in production, and iteratively improve using live feedback.

Course • By Sri Chavali


LLM Evaluation Basics

This course, led by Laurie Voss (Head of Developer Relations at Arize AI and co-founder of npm), dives into why evals matter in AI systems and how they serve as a testing mechanism for outputs that are inherently variable. It covers two types of evals: code evals for deterministic checks and LLM-as-a-Judge evals for more nuanced assessments. It then walks you through setting up tracing, writing code evals, and configuring LLM judges, emphasizing clear criteria and structured prompts. You’ll start small, with one code eval and one LLM eval, to identify patterns and improve your outputs, then explore the Arize-Phoenix documentation and evaluation tutorials to apply these strategies in your own projects.
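The two eval types the course describes can be sketched in plain Python. The function names and grading criteria below are illustrative assumptions, not taken from the course materials; the LLM judge call itself is omitted since any chat-completion API would work:

```python
import json

def code_eval_is_valid_json(output: str) -> bool:
    """Deterministic code eval: pass only if the model output parses as JSON."""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

def build_judge_prompt(question: str, answer: str) -> str:
    """LLM-as-a-Judge eval: a structured prompt with explicit criteria
    and a constrained output format, to be sent to a judge model."""
    return (
        "You are grading an answer for factual correctness.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Respond with exactly one word: 'correct' or 'incorrect'."
    )

# Usage: run the code eval on two sample outputs.
print(code_eval_is_valid_json('{"city": "Paris"}'))  # True
print(code_eval_is_valid_json('not json'))           # False
```

The code eval gives a fast, repeatable pass/fail signal; the judge prompt handles the nuanced cases a deterministic check cannot.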

Course • By Laurie Voss
