According to Beating, AI evaluation firm Vals AI released its second-generation Finance Agent v2 benchmark on May 14, which tests financial-analysis workflows with 927 expert-reviewed questions. GPT-5.5 topped the rankings at 51.76% accuracy, closely followed by Claude Opus 4.7 (51.51%) and Claude Sonnet 4.6 (51.03%). The benchmark requires models to independently locate relevant sections across hundreds of pages of 10-K and 10-Q filings and complete multi-step calculations with precise intermediate figures.
Under the strictest grading standard, which requires every part of an answer to be correct, all leading models fell below 40% accuracy, and the hardest categories, financial modeling and precedent analysis, topped out at only 23%. Among the other models, Kimi K2.6 ranked fifth at 44.87%, followed by GLM 5.1 (44.79%) and DeepSeek V4 (44.08%). Compared with the previous version of the benchmark, on which Opus 4.7 scored 64.4%, the sharp decline underscores that while AI handles simple retrieval well, it remains far from replacing human analysts in complex financial work that demands strict numerical precision.