How much is the title of “former OpenAI employee” really worth in the market?
On February 25 local time, according to Business Insider, Mira Murati, former CTO of OpenAI, announced her new company, Thinking Machines Lab, which is raising $1 billion at a $9 billion valuation.
Thinking Machines Lab has not yet disclosed any timeline or specifics about its products or technology. The only public information is its team of more than 20 former OpenAI employees and its vision: to build a future where “everyone can access knowledge and tools, enabling AI to serve people’s unique needs and goals”.
Mira Murati and Thinking Machines Lab
The capital appeal of OpenAI alumni has created a “snowball effect”. Before Murati, SSI, founded by former OpenAI chief scientist Ilya Sutskever, had already reached a $30 billion valuation on little more than its OpenAI pedigree and a concept.
Since Musk left OpenAI in 2018, former OpenAI employees have founded more than 30 new companies, raising over $9 billion in total. These companies form a fairly complete ecosystem spanning AI safety (Anthropic), infrastructure (xAI), and vertical applications (Perplexity).
This brings to mind the wave of Silicon Valley startups that followed eBay’s acquisition of PayPal in 2002, when founders like Musk and Peter Thiel left to form the “PayPal Mafia”, which gave rise to legendary companies such as Tesla, LinkedIn, and YouTube. Former OpenAI employees are now forming their own “OpenAI Mafia”.
The “OpenAI Mafia” script is even more aggressive: the “PayPal Mafia” took 10 years to produce two ten-billion-dollar companies, while the “OpenAI Mafia” has spawned five in just the two years since ChatGPT’s launch, including Anthropic at $61.5 billion, Sutskever’s SSI at $30 billion, and Musk’s xAI at $24 billion. A hundred-billion-dollar company may well emerge from the “OpenAI Mafia” within the next three years.
This new round of “talent fission” triggered by the “OpenAI Mafia” is rippling through all of Silicon Valley and even reshaping the global AI power map.
OpenAI’s divergence path
Among OpenAI’s 11 co-founders, only Sam Altman and Wojciech Zaremba, head of the Language and Code Generation team, remain at the company.
2024 was the peak year for departures at OpenAI. Ilya Sutskever (May 2024), John Schulman (August 2024), and others left in succession. The safety team shrank from 30 to 16 people, a 47% reduction. Among executives, key figures such as CTO Mira Murati and Chief Research Officer Bob McGrew departed one after another. On the technical side, core talents left as well: Alec Radford, chief designer of the GPT series, and Tim Brooks, head of Sora (who joined Google); deep learning expert Ian Goodfellow joined Google, while Andrej Karpathy left for the second time to start an education company.
“Gathered together, it’s a fire; scattered apart, it’s a sky full of stars.”
Of the core technical staff who joined OpenAI before 2018, more than 45% have left to build their own ventures, effectively disassembling and reorganizing OpenAI’s technical gene pool into three strategic camps.
First is the “direct lineage” camp that carries on OpenAI’s genes; its members can fairly be described as ambitious builders of an OpenAI 2.0.
Mira Murati’s Thinking Machines Lab has ported over much of OpenAI’s research architecture: John Schulman is in charge of the reinforcement learning framework, Lilian Weng leads AI safety, and the neural architecture diagram of GPT-4 is reportedly used directly as the technical blueprint for the new project.
Their “Open Science Manifesto” takes direct aim at OpenAI’s recent turn toward secrecy, pledging a “more transparent AGI development path” through a steady stream of technical blogs, papers, and code. This has already triggered chain reactions in the industry: three top researchers from Google DeepMind joined, bringing experience with the Transformer-XL architecture.
Ilya Sutskever’s Safe Superintelligence Inc. (SSI) chose a different path. Co-founded with researchers Daniel Gross and Daniel Levy, SSI abandons all short-term commercial goals to focus on building an “irreversibly safe superintelligence”, an almost philosophical technical framework. Though barely established, the company has already drawn $1 billion in committed investment from institutions such as a16z and Sequoia Capital to back Sutskever’s ideal.
Ilya Sutskever and SSI
Another faction is the “subverters”, those who had already left before ChatGPT.
Anthropic, founded by Dario Amodei, has evolved from “the OpenAI opposition” into its most dangerous competitor. Its Claude 3 models run neck and neck with GPT-4 across multiple benchmarks. Anthropic has also established a deep partnership with Amazon AWS, gradually eroding OpenAI’s footing in computing power; the chip technology Anthropic is developing jointly with AWS may further weaken OpenAI’s bargaining power when purchasing NVIDIA GPUs.
Another representative of this faction is Musk. Although he left OpenAI in 2018, several of xAI’s founding members also have OpenAI backgrounds, including Igor Babuschkin and Kyle Kosic (who has since returned to OpenAI). Backed by Musk’s resources, xAI threatens OpenAI on talent, data, computing power, and more. By integrating real-time social data streams from Musk’s X platform, xAI’s Grok-3 can instantly pick up trending events on X and generate answers, while ChatGPT’s training data runs only up to 2023: a timeliness gap that OpenAI, tied to the Microsoft ecosystem, finds difficult to close.
However, Musk positions xAI not as a subverter of OpenAI but as a return to “OpenAI’s” original intent. xAI pursues a “maximum open source” strategy; for example, the Grok-1 model is open-sourced under the Apache 2.0 license, drawing global developers into its ecosystem. This contrasts sharply with OpenAI’s recent drift toward closed source (such as offering GPT-4 only through an API).
The third group is the “game changers” who are reconstructing industry logic.
Perplexity, founded by former OpenAI research scientist Aravind Srinivas, is among the first companies to reinvent search with large AI models. Instead of returning a list of links, Perplexity generates answers directly. It now handles over 20 million searches daily and has raised more than $500 million at a $9 billion valuation.
Adept was founded by David Luan, former VP of Engineering at OpenAI, who worked on language, supercomputing, and reinforcement learning research, as well as safety and policy for projects such as GPT-2, GPT-3, CLIP, and DALL-E. Adept focuses on AI agents, aiming to automate complex tasks (such as generating compliance reports or design drawings) by combining large models with tool invocation. Its ACT-1 model can directly operate office software, Photoshop, and more. The core founding team, including Luan, has since joined Amazon’s AGI team.
Covariant is an embodied-intelligence startup valued at $1 billion. Its founding team came out of OpenAI’s disbanded robotics team, and its technical DNA traces back to GPT-model R&D. The company focuses on robot foundation models, aiming for autonomous robot operation through multimodal AI, particularly in warehousing and logistics automation. However, three core founders, Pieter Abbeel, Peter Chen, and Rocky Duan, have all since joined Amazon.
Some “OpenAI Mafia” startups
Source of information: public information, compiled by: flagship
The transition of AI from “tool” to “factor of production” has opened three types of industrial opportunity: substitution scenarios (disrupting traditional search engines), incremental scenarios (intelligent transformation of manufacturing), and restructuring scenarios (foundational breakthroughs in life sciences). These scenarios share three traits: the potential to build a data flywheel (user interaction data feeding back into the model), deep interaction with the physical world (robot action data, biological experiment data), and regulatory gray areas around ethics.
The spillover of OpenAI’s technology is supplying the underlying power for this industrial shift. Its early open-source strategy (such as the partial open-sourcing of GPT-2) created a “dandelion effect” of technological diffusion, but once breakthroughs entered deep water, closed-source commercialization became the inevitable choice.
This contradiction has produced two phenomena: on one side, departed talent transplants technologies such as the Transformer architecture and reinforcement learning into vertical scenarios (manufacturing, biotechnology), building moats from scenario data; on the other, giants achieve technological lock-in through talent acqui-hires, closing a “technology harvesting” loop.
When the moat becomes a watershed
The “OpenAI Mafia” is making great strides, while its alma mater OpenAI is struggling.
On technology and product, GPT-5’s release has been repeatedly delayed, and the flagship ChatGPT is widely seen as lagging the industry’s pace of innovation.
In the market, latecomer DeepSeek has begun to close in on OpenAI: its model performance approaches ChatGPT’s while its training cost is reportedly only about 5% of GPT-4’s. This low-cost replication path is eroding OpenAI’s technological moat.
That said, the rapid growth of the “OpenAI Mafia” owes much to conflicts inside OpenAI itself.
The core research team of OpenAI has by now largely come apart: only Sam Altman and Wojciech Zaremba remain of the 11 co-founders, and 45% of core researchers have already left.
Wojciech Zaremba
Co-founder Ilya Sutskever left to start SSI; founding member Andrej Karpathy now shares Transformer optimization know-how publicly; Sora lead Tim Brooks joined Google DeepMind. More than half of the early GPT authors have left the technical team, most joining OpenAI’s competitors.
Meanwhile, recruitment data compiled by Lightcast suggests OpenAI’s hiring focus has shifted: in 2021, 23% of its job postings were for general research positions; by 2024, that figure was just 4.4%, an indirect sign of research talent’s changing status at the company.
The organizational and cultural conflicts brought on by commercialization are increasingly visible. Headcount has grown 225% in three years, and the early hacker spirit is giving way to a KPI system. Some researchers say bluntly that they have been “forced to shift from exploratory research to product iteration”.
This strategic swing puts OpenAI in a double bind: it must keep producing breakthrough technology to sustain its valuation while fending off former employees who are rapidly replicating its methodology.
The key to victory in the AI industry lies not in parameter breakthroughs in the laboratory but in who can inject technical genes into the industry’s capillaries: reconstructing the underlying logic of the commercial world in search engines’ answer streams, robot arms’ motion trajectories, and the molecular dynamics of living cells.
Is Silicon Valley carving up OpenAI?
The rapid rise of the “OpenAI Mafia”, like the “PayPal Mafia” before it, owes much to favorable California law.
California has prohibited non-compete agreements by statute since 1872, and this unique legal environment has been a catalyst for Silicon Valley innovation. Under Section 16600 of the California Business and Professions Code, any provision restricting a person’s freedom to practice a profession is void, a rule that directly promotes the free movement of tech talent.
The average tenure of Silicon Valley programmers is only 3-5 years, far shorter than at other tech hubs. This high-frequency mobility creates a “knowledge spillover” effect: former employees of Fairchild Semiconductor, for example, founded 12 semiconductor giants including Intel and AMD, laying Silicon Valley’s industrial foundation.
Banning non-competes might seem to leave innovative companies underprotected, but in practice it has promoted innovation: the mobility of technical personnel accelerates the diffusion of technology and lowers the barrier to entry.
The U.S. Federal Trade Commission (FTC) expected its comprehensive ban on non-compete agreements in April 2024 to further unleash American innovation: roughly 8,500 new businesses in the first year of the policy, 17,000-29,000 additional patents annually, and annual patent growth of 11-19% over the next 10 years.
Capital is the other important driver of the “OpenAI Mafia’s” rise.
Silicon Valley accounts for over 30% of total U.S. venture capital, with institutions such as Sequoia Capital and Kleiner Perkins building a complete financing chain from seed round to IPO. This capital-intensive model cuts both ways.
First, capital drives innovation, and angel investors supply not just money but industry-resource integration. When Uber was founded, it had only $200,000 in seed funding from its two founders and just three registered taxis; after a $1.25 million angel round it began raising rapidly, and by 2015 it was valued at $40 billion.
Venture capital’s long-term focus on technology has also driven industrial upgrading. Sequoia Capital invested in Apple in 1978 and Oracle in 1984, establishing its influence in semiconductors and computing; around 2020 it began placing deep bets on artificial intelligence, participating in frontier projects such as OpenAI. Strategic capital such as Microsoft has poured billions of dollars into AI, shortening generative AI’s commercialization cycle from years to months.
Capital also gives innovative companies a higher tolerance for failure. How fast accelerators screen out failing projects matters as much as how they pick winners. According to the startup-analysis site StartupTalky, about 90% of startups fail globally, and 83% in Silicon Valley. Success is rare, but within a venture portfolio, failed experiments are quickly converted into nutrients for new projects.
Image Source: startuptalky.com
However, capital has also to some extent changed the development path of these innovative companies.
Top AI projects now command valuations above $1 billion before releasing a product, making it exponentially harder for small and mid-sized teams to obtain resources. The imbalance is even starker geographically: according to data firm Dealroom, the U.S. Bay Area attracts as much venture capital in a single quarter ($24.7 billion) as the world’s No. 2-5 venture capital centers (London, Beijing, Bangalore, Berlin) combined. And while emerging markets such as India saw fundraising grow 133%, 97% of it went to “unicorns” valued above $1 billion.
Capital also exhibits strong path dependence, preferring areas with quantifiable returns, which leaves many emerging basic-science innovations short of funding. In quantum computing, for example, Guo Guoping, founder of the Chinese startup Origin Quantum, sold his house to keep the company going in its early, underfunded days. He first raised outside funding in 2015; Ministry of Science and Technology data from that year showed China’s total R&D spending was under 2.2% of GDP, with basic research accounting for only 4.7% of R&D investment.
Beyond the lack of support, big capital also uses money to lock in top talent: CTO-level pay at startups is now essentially locked into seven figures (U.S. dollars at American companies, RMB at Chinese ones), creating a cycle in which giants monopolize talent and capital chases the giants.
However, the sky-high advance valuations of these “OpenAI Mafia” companies carry real risks.
Murati’s and Sutskever’s companies each reached multibillion-dollar valuations on little more than an idea. That reflects a trust premium on top OpenAI technical talent, but the trust rests on two bets: that AI capability stays on an exponential curve, and that vertical-scene data can form monopolistic moats. If either bet runs into trouble (a slowdown in multimodal breakthroughs, sharply rising data-acquisition costs), overheated capital could trigger an industry reshuffle.
Silicon Valley’s Rising “OpenAI Mafia”
Author: flagship
Image source: Generated by Unbounded AI
How much is the title of “former OpenAI employee” really worth in the market?
On February 25th local time, according to Business Insider, Mira Murati, former CTO of OpenAI, just announced the new company Thinking Machines Lab, which is launching a $1 billion financing at a valuation of $9 billion.
Currently, Thinking Machines Lab has not disclosed any schedule or specific details of products and technologies. The only public information about this company is the former team of over 20 OpenAI employees and their vision: to build a future where “everyone can access knowledge and tools, enabling AI to serve people’s unique needs and goals”.
Mira Murati and Thinking Machines Lab
The capital appeal of OpenAI’s founders has created a “snowball effect”. Prior to Murati, SSI, founded by former OpenAI chief scientist Ilya Sutskever, had already achieved a valuation of $30 billion based solely on the OpenAI gene and a concept.
Since Musk left OpenAI in 2018, former OpenAI employees have founded over 30 new companies with a total funding of over 9 billion U.S. dollars. These companies have formed a complete ecosystem covering AI safety (Anthropic), infrastructure (xAI), and vertical applications (Perplexity).
This brings to mind the wave of Silicon Valley startups that emerged after PayPal was acquired by eBay in 2002, with founders like Musk and Peter Thiel leaving to form the ‘PayPal Mafia’, which gave rise to legendary companies such as Tesla, LinkedIn, YouTube. Former employees of OpenAI are also forming their own ‘OpenAI Mafia’.
The script of “OpenAI Gang” is even more radical: “PayPal Gang” took 10 years to create two hundred-billion-dollar companies, while “OpenAI Gang” has spawned five hundred-billion-dollar companies in just two years after the launch of ChatGPT, including Anthropic valued at $61.5 billion, Ilya Sutskever’s SSI valued at $30 billion, Musk’s xAI valued at $24 billion, and it is very likely that a hundred-billion-dollar unicorn will emerge within the next three years in the “OpenAI Gang”.
The new round of “talent fission” triggered by “OpenAI Helper” is affecting the entire Silicon Valley, even reshaping the global power map of AI.
OpenAI’s divergence path
Among OpenA’s 11 co-founders, only Sam Altman and Wojciech Zaremba, the head of the Language and Code Generation team, are still in office.
2024 is the peak of departures for OpenAI. This year, Ilya Sutskever (resigned in May 2024), John Schulman (resigned in August 2024), and others successively left. The OpenAI security team was reduced from 30 to 16, a 47% reduction; key figures among executives such as Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, departed one after another; in the technical team, core technical talents such as Alec Radford, the chief designer of the GPT series, and Tim Brooks, the head of Sora (who joined Google), left; deep learning expert Ian Goodfellow joined Google, while Andrej Karpathy left for the second time to start an education company.
“Gathered together, it’s a fire; scattered apart, it’s a sky full of stars.”
Before 2018, more than 45% of the core technical backbone members who joined OpenAI chose to set up their own separate entities, which also disassembled and reorganized OpenAI’s technical gene pool into three major strategic groups.
First is the “direct lineage army” that continues the genes of OpenAI, they can be said to be a group of ambitious individuals of OpenAI 2.0.
Mira Murati’s Thinking Machines Lab has almost completely ported OpenAI’s research architecture: John Schulman is in charge of the reinforcement learning framework, Lilian Weng leads the AI security system, and even the neural architecture diagram of GPT-4 is directly used as the technical blueprint for the new project.
Their “Open Science Manifesto” directly points to OpenAI’s recent trend of closure, planning to create a “more transparent AGI development path” through continuous openness of technical blogs, papers, and code. This has also triggered some chain reactions in the AI industry: three top researchers from Google DeepMind joined with the Transformer-XL architecture.
Ilya Sutskever’s Safe Superintelligence Inc. (SSI) chose a different path. Sutskever, along with two other researchers Daniel Gross and Daniel Levy, co-founded the company, abandoning all short-term commercial goals and focusing on building an ‘irreversible secure superintelligence’ - a nearly philosophical technical framework. The company has just been established, and institutions such as a16z and Sequoia Capital have decided to invest $1 billion to support Sutskever’s ideal.
Ilya Sutskever and SSI
Another faction is the “subverter” that had already left before ChatGPT.
Anthropic, founded by Dario Amodei, has evolved from the ‘OpenAI opposition’ to the most dangerous competitor. Its Claude 3 series models are neck and neck with GPT-4 in multiple tests. In addition, Anthropic has established an exclusive partnership with Amazon AWS, which means that Anthropic is gradually eroding the foundation of OpenAI in terms of computing power. The chip technology developed jointly by Anthropic and AWS may further weaken OpenAI’s bargaining power in purchasing NVIDIA GPUs.
Another representative figure in this faction is Musk, although Musk left OpenAI in 2018, some of the founding members of xAI also have a background at OpenAI, including Igor Babuschkin and Kyle Kosic, who later returned to OpenAI. With Musk’s strong resources, xAI poses a threat to OpenAI in terms of talent, data, computing power, and more. By integrating real-time social data streams from Musk’s X platform, xAI’s Grok-3 can instantly capture hot events on the X platform and generate answers, while ChatGPT’s training data is only up to 2023, showing a significant timeliness gap that OpenAI, relying on the Microsoft ecosystem, finds difficult to replicate.
However, Musk’s positioning of xAI is not to subvert OpenAI, but to rediscover the original intention of “OpenAI”. xAI adheres to the “maximum open source” strategy, for example, the Grok-1 model is open sourced under the Apache 2.0 license, attracting global developers to participate in ecosystem construction. This is in stark contrast to OpenAI’s recent tendency towards closed source (such as providing API services only for GPT-4).
The third group is some “game changers” who reconstruct the industry logic.
Perplexity, founded by former OpenAI research scientist Aravind Srinivas, is one of the first companies to transform search engines with large AI models. Instead of a list of search results, Perplexity directly generates answers through AI. It now handles over 20 million searches daily and has raised over $500 million in funding (valued at $9 billion).
Adept’s founder is David Luan, former Vice President of Engineering at OpenAI. He has been involved in technical research in language, supercomputing, and reinforcement learning, as well as in the security and policy-making of projects such as GPT-2, GPT-3, CLIP, and DALL-E. Adept focuses on developing AI Agents with the goal of automating complex tasks (such as generating compliant reports, designing drawings) through the combination of large models and tool invocation capabilities. The ACT-1 model developed by them can directly operate office software, Photoshop, and more. Currently, the core founding team of this company, including David Luan, has joined Amazon’s AGI team.
Covariant is an embodied intelligence startup valued at $1 billion. Its founding team is from the disbanded robot team of OpenAI, and the technical genes are derived from the experience of GPT model research and development. Focus on the development of robot basic models, with the goal of realizing autonomous operation of robots through multimodal AI, especially focusing on warehousing and logistics automation. However, three members of Covariant’s core founding team, Pieter Abbeel, Peter Chen, and Rocky Duan, have all joined Amazon.
Some “OpenAI Help” startup companies
Source of information: public information, compiled by: flagship
The transition of AI technology from “tool attribute” to “factor of production” has given rise to three types of industrial opportunities: substitution scenarios (such as disrupting traditional search engines), incremental scenarios (such as intelligent transformation of manufacturing), and restructuring scenarios (such as breakthroughs in life sciences at the underlying level). The common characteristics of these scenarios are: the potential to build a data flywheel (user interaction data feeds back to the model), deep interaction with the physical world (robot action data/biological experiment data), and a gray area of ethical supervision.
The spillover of OpenAI’s technology is providing underlying power for this industrial revolution. Its early open source strategy (such as partial open source of GPT-2) has formed a “dandelion effect” of technological diffusion, but when technology breakthroughs enter deep water, closed-source commercialization becomes an inevitable choice.
This contradiction has given rise to two phenomena: on the one hand, the resigned talents transfer technologies such as Transformer architecture and reinforcement learning to vertical scenarios (such as manufacturing, biotechnology), building barriers through scenario data; on the other hand, giants achieve technological lock-in through talent mergers and acquisitions, forming a “technology harvesting” closed loop.
When the moat becomes a watershed
“OpenAI Bang” is making great progress, while its old home OpenAI is struggling.
In terms of technology and products, the release date of GPT-5 has been repeatedly delayed, while the mainstream ChatGPT product is generally considered to be lagging behind the industry in terms of innovation speed.
In the market, the latecomer DeepSeek has started to gradually surpass OpenAI, with its model performance approaching ChatGPT but training costs only 5% of GPT-4, this low-cost replication path is eroding OpenAI’s technological barriers.
However, the rapid growth of “OpenAI help” is largely due to internal conflicts within the OpenAI company.
Currently, the core research team of OpenAI can be said to have fallen apart, with only Sam Altman and Wojciech Zaremba remaining among the 11 co-founders, and 45% of the core researchers have already left.
Wojciech Zaremba
Co-founder Ilya Sutskever left to start SSI, Chief Scientist Andrej Karpathy shared Transformer optimization experience publicly, Sora video generation project lead Tim Brooks joined Google DeepMind. In the technical team, more than half of the early GPT authors have left, with most of them joining the ranks of OpenAI competitors.
At the same time, according to data compiled by Lightcast tracking recruitment information, OpenAI’s own recruitment focus seems to have changed. In 2021, 23% of the company’s recruitment information was for general research positions. By 2024, general research accounted for only 4.4% of its recruitment information, which also indirectly reflects the changing status of scientific research talent at OpenAI.
The organizational cultural conflicts brought about by commercial transformation are becoming increasingly apparent. While the number of employees has expanded by 225% in three years, the early hacker spirit is gradually being replaced by the KPI system. Some researchers bluntly stated, “forced to shift from exploratory research to product iteration”.
This strategic swing has put OpenAI in a double bind: it needs to continue to produce breakthrough technologies to maintain its valuation, but it also has to face the competitive pressure from former employees quickly replicating its methodology.
The key to victory in the AI industry lies not in the breakthrough of parameters in the laboratory, but in who can inject technical genes into the capillaries of the industry - reconstructing the underlying logic of the commercial world in the answer flow of search engines, the motion trajectory of mechanical arms, and the molecular dynamics of biological cells.
Is Silicon Valley going to split OpenAI?
The rapid rise of ‘OpenAI gang’ and ‘PayPal gang’ is largely thanks to the favorable California law.
Since the prohibition of non-compete agreements by legislation in 1872, California’s unique legal environment has become a catalyst for innovation in Silicon Valley. Pursuant to Section 16600 of the California Business and Professions Code, any provision that restricts the freedom to engage in a profession is void, a system that directly promotes the free movement of tech talent.
The average tenure of Silicon Valley programmers is only 3-5 years, much lower than other tech centers. This high-frequency mobility has created a “knowledge spillover” effect. For example, former employees of Fairchild Semiconductor founded 12 semiconductor giants such as Intel and AMD, laying the industrial foundation of Silicon Valley.
The legal prohibition of non-compete agreements may seem insufficient to protect innovative companies, but in fact it has further promoted innovation. The mobility of technical personnel has accelerated the diffusion of technology and lowered the threshold of innovation.
The Federal Trade Commission (FTC) of the United States expects that after the comprehensive ban on non-compete agreements in April 2024, the innovation vitality of the United States will be further unleashed. In the first year of policy implementation, 8,500 new companies may be added, with a sharp increase of 17,000-29,000 patents and an additional 3,000-5,000 patents. In the next 10 years, the annual patent growth rate will be 11-19%.
Capital has been another important driver of the "OpenAI Mafia's" rise.
Silicon Valley attracts over 30% of all US venture capital, and institutions such as Sequoia Capital and Kleiner Perkins have built a complete financing chain from seed rounds to IPOs. This capital-intensive model cuts both ways.
First, capital fuels innovation, and angel investors provide not just money but also industry resources. When Uber was founded, it had only $200,000 in seed funding from its two founders and just three registered taxis; after raising $1.25 million in angel investment it entered a cycle of rapid financing, and by 2015 its valuation had reached $40 billion.
Venture capital's long-term focus on the technology industry has also driven its upgrading. Sequoia Capital invested in Apple in 1978 and Oracle in 1984, establishing its influence in semiconductors and computing; in 2020 it began making deep bets on artificial intelligence, participating in cutting-edge projects such as OpenAI. Corporate capital, such as Microsoft's multibillion-dollar investment in AI, has shortened the commercialization cycle of generative AI from years to months.
Capital also gives innovative companies a higher tolerance for failure. The speed at which accelerators screen out failing projects matters as much as how they identify successful ones. According to the startup analysis site StartupTalky, 90% of startups worldwide fail, versus 83% in Silicon Valley. Though success is rare, within a venture capital portfolio the lessons of failure are quickly recycled as nutrients for new projects.
Image Source: startuptalky.com
However, capital has also, to some extent, reshaped the development paths of these innovative companies.
Top AI projects now command valuations above US$1 billion before a product is even released, making it exponentially harder for small and medium-sized teams to obtain resources. The structural imbalance is even starker geographically: according to the database firm Dealroom, the US Bay Area received as much venture capital in a single quarter ($24.7 billion) as the world's venture capital centers ranked second through fifth (London, Beijing, Bangalore, Berlin) combined. Meanwhile, although emerging markets such as India saw fundraising grow 133%, 97% of it went to "unicorns" valued at more than US$1 billion.
Capital also exhibits strong path dependence, preferring areas with quantifiable returns, which leaves many innovations in emerging basic sciences struggling for funding. In quantum computing, for example, Guo Guoping, founder of the Chinese quantum computing startup Origin Quantum, had to sell his house in the early days of the business for lack of funds, and did not raise outside money until 2015. Data released that year by China's Ministry of Science and Technology showed that the country's total investment in scientific research was below 2.2% of GDP, with basic research accounting for only 4.7% of R&D spending.
Beyond this lack of support, big capital uses the lure of money to lock up top talent, pushing CTO-level salaries at startups into seven figures (in US dollars for American companies, in RMB for Chinese ones) and creating a cycle in which giants monopolize talent and capital chases the giants.
However, the sharply front-loaded valuations of these "OpenAI Mafia" companies carry real risks.
The companies of Mira Murati and Ilya Sutskever each reached valuations in the tens of billions of dollars on little more than an idea. This reflects a trust premium on their teams' technical track record at OpenAI, but that trust rests on two bets: that AI technology can sustain exponential growth over the long term, and that vertical-domain data can form monopolistic barriers. If either bet meets real challenges (such as slowing breakthroughs in multimodal models, or sharply rising costs of acquiring industry data), overheated capital could trigger a reshuffling of the industry.