Official evaluation code?

#18
by xbkj - opened

Hi All --

I'm wondering if there's interest in releasing an "official" or "semi-official" evaluation script for FRAMES. It's a good benchmark that, AFAIK, is not yet saturated by current SOTA LLMs / deep research systems.

However, outputs are complex and need to be scored by a "judge" LLM. The original paper shares the judge prompt but uses Gemini-Pro-1.5-0514 as the judge model, which (to my knowledge) is no longer available. This means people are going to pick their own judge model + write their own grading scripts, which substantially reduces the usefulness of the benchmark. (See here for an example - they report numbers on FRAMES but use a different judge model + a different prompt(?))

The code is easy to write; the open question is just reaching a consensus on what the right judge model + prompt should be. Ideally, something that won't be deprecated too soon.
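
For concreteness, here's roughly the kind of judge loop I have in mind (just an illustrative sketch, not a proposal for the official script). It assumes the OpenAI Python client; the judge model name and the judge prompt below are placeholders, and the real prompt would presumably be the one from the paper's appendix:

```python
# Illustrative sketch only, not an official FRAMES evaluator.
# Assumes the OpenAI Python client; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

JUDGE_MODEL = "gpt-4.1-2025-04-14"  # placeholder; which model to standardize on is the open question

JUDGE_PROMPT = """You are grading an answer to a question against a ground-truth answer.
Question: {question}
Ground truth: {ground_truth}
Model answer: {prediction}
Reply with exactly one word: TRUE if the model answer is correct, FALSE otherwise."""


def judge(question: str, ground_truth: str, prediction: str) -> bool:
    """Ask the judge model for a binary correctness verdict on one example."""
    response = client.chat.completions.create(
        model=JUDGE_MODEL,
        temperature=0,  # pin sampling so grading is as repeatable as possible
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, ground_truth=ground_truth, prediction=prediction)}],
    )
    return response.choices[0].message.content.strip().upper().startswith("TRUE")


def score(examples: list[dict]) -> float:
    """examples: [{"question": ..., "answer": ..., "prediction": ...}, ...]"""
    correct = sum(judge(ex["question"], ex["answer"], ex["prediction"]) for ex in examples)
    return correct / len(examples)
```

The code itself is trivial; what matters is that everyone runs the same judge model, prompt, and sampling settings.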

Also -- it would be great to have an official / semi-official FRAMES leaderboard ... I haven't been able to find one yet :)

Thanks!

Thanks. It looks like you're using an OpenAI model as the grader? That's different from the original paper, and probably different from what other people have done.

To illustrate the problem, I was just testing gpt-4o-mini vs gpt-4.1-2025-04-14 as the auto-grader model on a private dataset, and they gave 368/400 = 92% vs 376/400 = 94%, which is a reasonably large difference. Without an "official" evaluation model / prompt / parameter setting, we can't really compare systems in a rigorous way.
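
This is the kind of comparison that produced those numbers, in sketch form: feed the same set of predictions to two grader models (any judge function that returns a True/False verdict per example, like the sketch above) and look at both the accuracy gap and how often the two judges actually agree. The model names below are just the ones from my test:

```python
# Rough sketch: compare two auto-graders run over the same predictions.
# verdicts_a and verdicts_b are per-example True/False verdicts from the two judges.
def compare_judges(verdicts_a: list[bool], verdicts_b: list[bool],
                   name_a: str = "gpt-4o-mini",
                   name_b: str = "gpt-4.1-2025-04-14") -> None:
    n = len(verdicts_a)
    assert n == len(verdicts_b), "both judges must grade the same examples"
    agree = sum(a == b for a, b in zip(verdicts_a, verdicts_b))
    print(f"{name_a}: {sum(verdicts_a)}/{n} = {sum(verdicts_a) / n:.1%}")
    print(f"{name_b}: {sum(verdicts_b)}/{n} = {sum(verdicts_b) / n:.1%}")
    print(f"judge agreement: {agree}/{n} = {agree / n:.1%}")
```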

You can choose whatever model you want for judging by changing it here: https://github.com/codelion/optillm/blob/2e4c0dac87d0d0e65dacf371a2cdb1ea0e17a409/scripts/eval_frames_benchmark.py#L72. We choose the same model for evaluation as the one under test. The prompts used in the script are the same as in the paper; you can refer to the appendix of the paper for them. The script is able to reproduce the results in the paper for the Gemini and Gemma models, so I believe it is quite good for a comparison.
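
As a rough illustration of what changing it would look like (hypothetical sketch, not the actual script), the judge could also be exposed as its own parameter and pinned to one fixed grader across runs:

```python
# Hypothetical sketch, not the actual eval_frames_benchmark.py:
# decouple the grader from the model under test via command-line flags.
import argparse

parser = argparse.ArgumentParser(description="Run FRAMES with a pinned judge model")
parser.add_argument("--model", required=True, help="model under test")
parser.add_argument("--judge-model", default="gpt-4.1-2025-04-14",
                    help="grader model; keep fixed across runs you want to compare")
parser.add_argument("--judge-temperature", type=float, default=0.0)
args = parser.parse_args()

# ... generate answers with args.model, then grade each answer with
# args.judge_model at args.judge_temperature, as in the sketches above.
```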
