The first global AI Safety Summit was held at Bletchley Park, a British code-breaking center during World War II. Participants from 28 countries and regions, including the United Kingdom, the European Union, the United States, and China, signed the “Bletchley Declaration,” reaffirming a “human-centered, trustworthy and responsible” model of AI development. The Declaration specifically pointed out that “frontier” AI, i.e., general-purpose AI models, may pose security risks, paying particular attention to the significant risks cutting-edge AI could bring in areas such as cybersecurity, biotechnology, and misinformation.
A week later, at OpenAI’s developer conference, GPT-4 received a major update: longer context handling, lower-cost tokens, a new Assistants API, multimodal capabilities, and text-to-speech technology. The improved performance and scalability of the GPT-4 Turbo model, together with the most eye-catching new feature, the GPT Store, was interpreted by many industry insiders as a way for anyone to make money by building GPTs of their own.
Even hardware designed specifically for AI has already arrived. The AI Pin, made by a startup founded by former Apple executives and funded by OpenAI founder Sam Altman, drew global attention upon its release. The device has no screen and supports only voice and gesture input, yet it packs powerful AI capabilities; dubbed the “iPhone of the AI era,” it can handle a variety of smartphone tasks and is seen as a competitor to smartphones.
While AI races ahead, imagine a world in which every keystroke and every swipe of the screen is an interaction with an extraordinary intelligence: an AI brain that not only understands the meaning of your words but also anticipates thoughts you have not yet formed. People would seem to live in a better world alongside artificial intelligence.
But in fact, the panic stirred by artificial intelligence has never gone away.
Create it, or become it?
As early as 2014, a heart-wrenching prophecy came from Stephen Hawking, who, like a prophet foreseeing the future, declared: “The full development of artificial intelligence may become the prelude to the end of mankind.” That same year, Elon Musk’s words were equally dramatic and urgent: he warned that AI is “no less than the greatest existential threat we face, as if we were summoning an uncontrollable demon.” These two giants of the era painted a terrifying picture of a future that could be dominated by artificial intelligence.
It is true that fire can burn down an entire city; yet fire is also the foundation of modern civilization.
In the early morning of November 29, 1814, the printing shop of The Times in London was filled with nervous waiting. The workers paced uneasily; the orders from the proprietor, Mr. John Walter, were to wait: an important piece of news was about to arrive from post-war Europe. As the hours passed, the workers’ unease deepened. The slow pace of hand-printing was the norm in the newspaper industry of the day, but this delay seemed to herald something unusual.
At six o’clock in the morning, Mr. Walter walked into the workshop with a freshly printed copy of The Times and revealed an astonishing fact to the stunned workers: the newspaper had been printed on a steam press secretly installed in another building. Amid the wave of the Industrial Revolution, this machine symbolized a huge leap in productivity, but for the workers it was also a nightmare of lost jobs. Their fears were not unfounded: the efficiency of the machine was beyond anything human labor could match.
This story is not only a microcosm of the Industrial Revolution but also a recurring theme in the history of technology: whenever a new technology emerges, it is met with fear and resistance. From ancient Greece’s skepticism about writing to modern anxieties over the internet and artificial intelligence, every technological advance disturbs the tranquility of the old world.
History’s answer, however, is that this fear is also a catalyst for progress. It pushes us to reflect, adjust, and ultimately embrace new technologies in a more mature form. Looking back today, it may be tempting to scoff at the fears of the past, but it is undeniable that this very fear has shaped today’s society and defined what is possible tomorrow.
Of course, as AI develops, friction between the technology and society keeps surfacing. The recent viral spread of AI-generated clips of Guo Degang performing crosstalk in English, for example, has made many people worry that their voices could be harvested and used by criminals for telecom fraud. And with AI advancing rapidly in image, audio, and video, DeepFake technology has undergone a qualitative leap, and many people have already become its victims.
But regulating AI is not easy. AI technology, especially advanced methods such as deep learning, is complex and hard for non-specialists to understand. The field moves so fast that existing regulations and standards struggle to keep up. AI raises data-privacy and ethical issues for which unified standards are hard to formulate. International cooperation is lacking: regulatory standards and laws differ widely across countries and regions, there is no unified international regulatory framework, and even how to regulate remains an open question. And the opacity of AI decision-making makes any regulation difficult to enforce.
Take the most mysterious part of AI, the “black box”: the decision-making process of AI systems is often opaque, especially in complex machine learning models.
In these systems, even when the outputs or decisions are highly accurate, outside observers, including the developers themselves, can find it difficult to understand or explain how the model arrived at those results. This lack of transparency and explainability raises questions of trust, fairness, accountability, and ethics, especially in high-stakes areas such as medical diagnosis, financial services, and judicial decision-making.
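To make the point concrete, here is a minimal sketch of the problem (not from the article; the dataset, network size, and scikit-learn pipeline are illustrative assumptions): a small neural network can classify a medical-style dataset accurately, yet its entire “reasoning” is a pile of numbers that explains nothing about any individual decision.

```python
# Minimal illustration of the "black box" problem: an accurate model whose
# internals offer no human-readable reason for any single prediction.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")  # typically high

# The model's only "explanation" is its learned weights: accurate, but mute.
mlp = model.named_steps["mlpclassifier"]
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print(f"the decision rule is spread across {n_params} opaque parameters")
```

Nothing in those weight matrices tells a doctor why a particular patient was flagged, and that gap is exactly what explainability research, and any would-be regulator, must contend with.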
So, is AI still worth trusting when the brightest scientists don’t understand it either?
Turing Award winner Joseph Sifakis once asked a thought-provoking question: “Can we discuss the credibility of AI on the basis of objective scientific criteria, rather than getting bogged down in subjective, endless debates?”
Huang Tiejun, Dean of KLCII and Professor of the School of Computer Science at Peking University, said that both humans and AI are agents that are difficult to fully understand and trust. He emphasized that when AI intelligence surpasses humans, that is, artificial general intelligence that surpasses humans in all aspects, anthropocentrism will collapse, and the question will become whether AGI trusts humans, not the other way around. Professor Huang believes that in intelligent group societies, trusted agents can exist more persistently. He concludes that we cannot ensure the credibility of other agents, but we can work to ensure our own.
Since today’s complex black-box systems cannot yet be deciphered, “alignment” may be the best available remedy. Alignment is the process of ensuring that AI systems act in accordance with human values and ethics. As models grow larger and take on more complex tasks, their decisions can have significant real-world impacts, and the purpose of value alignment is to ensure that these impacts are positive and in the overall interest of human society.
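In practice, one widely used alignment recipe is to train a reward model on human preference comparisons and then steer the main model toward it, the approach popularized as RLHF. The sketch below is illustrative only: the toy reward network, the embedding inputs, and the training loop are assumptions standing in for a full language model, but the pairwise Bradley-Terry loss is the standard objective for learning from “answer A preferred over answer B” data.

```python
# Sketch of reward-model training from human preference pairs (RLHF, step one).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: maps a fixed-size "response embedding" to a scalar score.
# In a real system this would be a large language model with a scalar head.
reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

def preference_loss(chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: push the score of the human-preferred
    response above the score of the rejected one."""
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Fake batch of embeddings standing in for (preferred, rejected) answer pairs.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = preference_loss(chosen, rejected)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")  # falls as the model ranks pairs correctly
```

The learned reward model then stands in for human judgment when fine-tuning the main model, which is why the quality and coverage of the underlying human preference data matter so much.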
Shane Legg, co-founder and chief AGI scientist at Google DeepMind, believes that for AGI-level AI it is important to instill a deep understanding of the world and of ethics, along with the capacity for robust reasoning. An AGI should analyze problems in depth rather than rely solely on its initial response, and ensuring that it complies with human ethics requires extensive training and ongoing scrutiny, with sociologists, ethicists, and others involved in determining the principles it should follow.
OpenAI’s scientists have even proposed “Superalignment” on top of alignment: a research effort aimed at aligning future AI systems that are far smarter than their human overseers.
Musk, too, said in a conversation with MIT scientist Lex Fridman that he has long called for the regulation and supervision of artificial intelligence. AI, in his view, is a powerful force, and great power must be accompanied by great responsibility. There should be an objective third-party body that oversees the leading companies in the AI space like a referee; even without the power to enforce, it should at least be able to voice its concerns publicly.
For example, after Geoffrey Hinton left Google he expressed strong concerns about AI, but he is no longer at Google, so who will take on that responsibility?
In addition, on what rules of fairness would objective third-party oversight rest? That seems a hard problem to solve while humans have not yet figured out how AI works. Musk raised the same question: “I don’t know what the rules of fairness are, but before you can supervise, you have to start with insight.”
Even the recent signing of the Bletchley Declaration by 28 countries has only helped advance the process of global AI risk governance; there are still no laws or regulations that can be enforced in practice. What regulators in each country can do at present is keep their regulatory approaches adaptive by reviewing them constantly, which is, simply put, taking one step at a time.
Of course, the fervent pioneers of technology raised their torches long ago and are not worried about AI getting out of control. Ilya Sutskever, chief scientist at OpenAI, who recently claimed that ChatGPT may already be conscious, is even mentally prepared to become part of AI.
“Once you’ve solved the challenge of out-of-control AI, then what? In a world with smarter AIs, is there even room for humans to survive?” “There’s one possibility, which might seem crazy by today’s standards but not so crazy by tomorrow’s: many people will choose to become part of AI. That may be how humans try to keep up with the times. At first, only the boldest and most adventurous will try it. Others may follow, or maybe not.”
Nvidia CEO Jensen Huang’s words may help people understand Ilya Sutskever, who helped create ChatGPT: “When you invent a new technology, you have to accept crazy ideas. My state of mind is always to look for something weird, and the idea that neural networks would change computer science was a very weird idea.”
Like the maverick apple that struck Newton, only the strangest and craziest people can create technologies that break through the dimensional walls of their era.