Tau-bench

About

TAU-bench (τ-bench) is a tool-agent-user interaction benchmark that tests language agents' ability to use domain-specific APIs while following policy guidelines in dynamic conversations. It evaluates whether AI agents can interact with simulated users, use tools effectively, and behave consistently across repeated trials, highlighting open challenges in function calling and real-world agent deployment.
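Consistency across repeated trials is typically reported with a pass^k-style metric: the probability that an agent solves a task in all k of k independent attempts. As a rough sketch (not the benchmark's official implementation; the function name and the unbiased-estimator form are assumptions here), it can be estimated per task from n trials with c successes:

```python
from math import comb

def pass_hat_k(num_trials: int, num_successes: int, k: int) -> float:
    """Estimate the probability that k i.i.d. trials of a task all succeed,
    given num_successes out of num_trials observed runs.

    Uses the combinatorial estimator C(c, k) / C(n, k); math.comb returns 0
    when k > c, so tasks with fewer than k successes contribute 0.
    """
    if num_trials < k:
        raise ValueError("need at least k trials to estimate pass^k")
    return comb(num_successes, k) / comb(num_trials, k)

# Example: a task solved in 3 of 4 runs, evaluated at k=2
print(pass_hat_k(4, 3, 2))  # C(3,2)/C(4,2) = 3/6 = 0.5
```

A benchmark-level score would average this estimate over all tasks; note that pass^k falls quickly as k grows unless the agent is genuinely reliable, which is why single-trial scores can overstate real-world robustness.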

Evaluation Stats
Total Models: 1
Organizations: 1
Verified Results: 0
Self-Reported: 1
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers

Score Distribution (1 model)
Top Score: 63.0%
Average Score: 63.0%
High Performers (80%+): 0

Top Organizations

#1 OpenAI (1 model): 63.0%
Leaderboard
1 model ranked by performance on Tau-bench

Date: Apr 16, 2025 · License: Proprietary · Score: 63.0%
Resources