Microsoft Open-Sources Phi-Ground 4B Model, Outperforms OpenAI Operator and Claude in Screen Clicking Accuracy

According to Beating, Microsoft recently open-sourced the Phi-Ground model family, designed to solve the problem of where an AI agent should click on a computer screen. The 4-billion-parameter version, paired with larger language models for instruction planning, exceeded the clicking accuracy of OpenAI Operator and Claude Computer Use on the Showdown benchmark and ranked first among all models under 100 billion parameters across five evaluations, including ScreenSpot-Pro.

The team trained on more than 40 million data samples and found that three training techniques common in academic papers stopped working at that scale. The key approach proved simple: output coordinates as plain numbers, such as "523, 417." Earlier research had introduced specialized position vocabularies for coordinates, but these failed to scale. The team also found that placing the text instruction before the image improved performance, since the model already knows what to look for while processing the pixels. Finally, preference-optimization methods such as DPO improved accuracy even after supervised fine-tuning.
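The two findings above, plain-number coordinate output and instruction-before-image ordering, can be sketched as follows. This is a minimal illustration, not Phi-Ground's actual API: the message layout and the `parse_click_coordinates` helper are assumptions based on the article's description.

```python
import re


def parse_click_coordinates(model_output: str) -> tuple[int, int]:
    """Extract an (x, y) click point from plain-number output like '523, 417'.

    Phi-Ground emits coordinates as ordinary numbers rather than tokens from
    a specialized position vocabulary, so a simple regex suffices to parse them.
    """
    match = re.search(r"(\d+)\s*,\s*(\d+)", model_output)
    if match is None:
        raise ValueError(f"no coordinate pair found in: {model_output!r}")
    return int(match.group(1)), int(match.group(2))


def build_grounding_prompt(instruction: str, image_ref: str) -> list[dict]:
    """Hypothetical multimodal message layout: the text instruction is placed
    BEFORE the screenshot, mirroring the ordering the team found helpful."""
    return [
        {"type": "text", "text": f"Click target: {instruction}"},
        {"type": "image", "image": image_ref},
    ]


print(parse_click_coordinates("The target is at 523, 417."))  # (523, 417)
print(build_grounding_prompt("Save button", "screenshot.png"))
```

The ordering matters because a decoder-only model processes tokens sequentially: if the instruction arrives first, the model can attend to the relevant region while encoding the image patches, rather than reconstructing the target afterward.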

