ARC-AGI-3 is the first interactive reasoning benchmark designed to evaluate human-like intelligence in AI agents. It requires agents to explore novel environments, acquire goals dynamically, build adaptable world models, and learn continuously, with a perfect score indicating performance that matches or exceeds human efficiency in every game.
arcprize.org
1 min
4d ago
SkillsBench is a benchmarking framework designed to evaluate the effectiveness of agent skills across 86 tasks in 11 domains. It pairs curated skills with deterministic verifiers to measure how those skills affect large language model (LLM) agents at inference time.
arxiv.org
2 min
2/16/2026
A new benchmark evaluates outcome-driven constraint violations in autonomous AI agents, aiming to improve safety and alignment with human values. It addresses a limitation of existing safety assessments, which focus mainly on overtly harmful actions.
arxiv.org
2 min
2/10/2026