OpenAI Research VP Criticizes Anthropic's Perceived Stance That Only They Can Build AI

In a recent post on X, Aidan Clark, OpenAI's vice president of research for training, criticized a view he said he had heard from Anthropic colleagues: that only Anthropic can be trusted to build AI. Clark argued that having multiple organizations develop AGI (artificial general intelligence) is beneficial, and that a single organization going it alone makes it nearly impossible to find the right path.

Anthropic employees responded to clarify their positions. One denied the characterization, stating "that's not our view," while another distinguished between Anthropic not trusting OpenAI specifically and believing that no one else should build AI. Clark acknowledged the distinction, admitting he may have misheard the original claim, but noted that distrust of OpenAI alone is "not much better."

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.

Related Articles

Cursor Discloses Autoinstall Training Method, Boosts Composer 2 Performance by 14 Percentage Points

Cursor recently unveiled a training technique called autoinstall for its Composer model series: using a prior-generation model to automatically set up executable environments for the next generation's reinforcement learning. When training Composer 2, Cursor applied autoinstall and reported a 14-percentage-point performance gain.

GateNews · 1m ago

OpenAI DevDay 2026 will be held in San Francisco on 9/29

OpenAI announced that DevDay 2026 will be held in person in San Francisco on 9/29, along with a submission contest for work created with GPT-5.5 and Image Gen. Codex will automatically select 2–3 entries each week, and the winners will receive free tickets plus cross-city flights and hotel accommodation. The conference will focus on the GPT-5.5 ecosystem and human–machine collaboration. Participants must be over 18 and must not be immediate family members of OpenAI employees. Areas to watch include the new model, agent integrations, and multi-cloud strategies.

ChainNews Abmedia · 3m ago

NVIDIA invests in Swedish AI legal-tech startup Legora, with Jude Law as the global brand ambassador

NVIDIA made a $50 million Series D extension investment in Legora, bringing Legora's total fundraising to $600 million and its valuation to $5.6 billion. Atlassian, Adams Street Partners, and Insight Partners also participated. Legora focuses on AI legal technology, providing tools such as automated review, contract analysis, and legal research; its ARR exceeds $100 million, and headcount has grown from 40 to 400. Jude Law became the global brand ambassador, with the advertising slogan "Law just got more attractive."

ChainNews Abmedia · 9m ago

AI 2027 Predictions 65% Complete, Software Development Acceleration Lags at 17%, Says Google Docs Co-Founder

According to Steve Newman, Google Docs co-founder and chairman of the Golden Gate Institute, AI has completed approximately 65% of the quantified predictions outlined in the AI 2027 scenario forecast released last year. However, the most critical metric, AI's acceleration of its own software development, lags behind at only 17%.

GateNews · 41m ago

Do Claude/GPT aim to please too much? A Claude.md prompt lets AI deliver tough, accurate answers

This article introduces a prompt that can be placed in Claude.md / Agents.md. It turns the AI from a tactful assistant into a blunt consultant by adjusting four layers: identity setup, fact-checking, freeing the tone, and a political-correctness exemption. It demands full coverage, step-by-step verification, and absolutely no hallucinations, and permits provocative phrasing when necessary. The article also explains when to load the prompt, its risks, and the scenarios it suits (research, writing, technical judgment, academic discussion), and notes that it is not suitable for customer service, education, or medical advice. The original source is ABMedia.

ChainNews Abmedia · 1h ago

OpenAI launches ChatGPT Futures: 26 inaugural students receive $10k in funding, spanning more than 20 universities

OpenAI announced the inaugural ChatGPT Futures Class of 2026: 26 students from over 20 top universities, each receiving a $10k grant and access to cutting-edge models. These students began their studies in fall 2022 and grew up alongside ChatGPT. Their research areas include mapping space objects, detecting disaster survivors, preserving endangered languages, and health care. The program aims to use AI to address real human needs and to build infrastructure for a new generation of creators.

ChainNews Abmedia · 1h ago