GPT-5.4 Accuracy Drops from 100% to 54% on ARC-AGI After Repeated Memory Summarization

According to Beating, a recent agent-memory study by Dylan Zhang, a PhD student at the University of Illinois, found that repeatedly summarizing a model's experiences can degrade performance rather than improve it. On ARC-AGI tasks, GPT-5.4 achieved 100% accuracy on a set of 19 problems when run without memory, but after multiple rounds of memory compression over correct solution trajectories, accuracy fell to 54%. Similarly, on WebShop shopping tasks, the AWM memory method scored 0.64 with 8 expert trajectories but dropped to 0.20 with 128 trajectories, falling back to roughly the no-memory baseline. The research attributes the problem to over-summarization: each abstraction step discards specific details and merges task-specific rules into generic guidance, which ultimately degrades performance.
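The failure mode is easy to see in outline. The sketch below is a minimal, hypothetical illustration of an iterative memory-compression loop; the names and the `summarize_with_llm` stub are assumptions for illustration, not code from the study. Each round folds the stored solution trajectories and the previous summary into a shorter abstract and discards the raw entries, so task-specific rules are progressively collapsed into generic advice.

```python
# Minimal sketch of repeated memory summarization in an LLM agent.
# Hypothetical illustration only; names and the summarizer stub are
# assumptions, not the code used in the study reported above.

from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    # Raw, task-specific experiences (e.g. correct solution trajectories).
    trajectories: list[str] = field(default_factory=list)
    # Compressed summary produced by earlier rounds.
    summary: str = ""


def summarize_with_llm(texts: list[str]) -> str:
    """Stand-in for an LLM summarization call.

    A real system would prompt a model; here we crudely keep only the
    first few words of each item to mimic the loss of specifics that
    comes with every abstraction step.
    """
    return " | ".join(" ".join(t.split()[:5]) for t in texts)


def compress_round(memory: AgentMemory) -> AgentMemory:
    """One round of memory compression.

    The trajectories plus the previous summary are folded into a new,
    shorter summary and the raw trajectories are dropped -- the step
    where task-specific rules degrade into generic guidance.
    """
    inputs = ([memory.summary] if memory.summary else []) + memory.trajectories
    return AgentMemory(trajectories=[], summary=summarize_with_llm(inputs))


if __name__ == "__main__":
    memory = AgentMemory(trajectories=[
        "rotate grid 90 degrees, then recolor the largest shape blue",
        "mirror grid horizontally, then keep only cells touching the border",
    ])
    for round_idx in range(3):
        memory = compress_round(memory)
        print(f"after round {round_idx + 1}: {memory.summary!r}")
    # Each pass strips the "then ..." specifics, so later rounds retain
    # only vague instructions -- the pattern the study associates with
    # the accuracy drop described above.
```

Running the sketch shows the first round already truncating each trajectory and later rounds compounding the loss, which is the qualitative behavior the study describes, not a reproduction of its results.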
