Signing on to global AI controls, the AI Big Three reveal their "will to survive"
Written by: Katie
Another "star-studded" open letter warning of the risks of AI is here. This letter is short but explosive: mitigating the risk of extinction posed by AI should be the same as managing other social-scale risks such as epidemics and nuclear war. become a global priority.
On May 30, the statement, just 22 words in the original English, appeared on the official website of the American non-profit Center for AI Safety, and more than 350 prominent figures from business and academia in the AI field signed it.
Compared with the open letter issued by the Future of Life Institute two and a half months ago, which gathered thousands of signatures calling for a six-month pause on training large models, the most striking difference is that this time the CEOs of OpenAI, Google DeepMind, and Anthropic signed.
It is worth noting that before signing this "22-word statement", the heads of OpenAI, Google, and Anthropic all attended the AI risk governance meeting convened by the White House on May 4. When it comes to risk management, the AI giants have lately seemed more anxious than governments.
Recently, the CEOs of these three companies have been meeting frequently with the leaders of several European countries in an attempt to influence the European Union's Artificial Intelligence Act, which is still being drafted. Europe remains "virgin territory" that the myth-making OpenAI has yet to win over.
Behind the push to manage and control AI risks is the AI companies' "will to survive" in the market, on full display.
The Big Three put their names to the latest AI risk statement
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
These 22 words make up the "Statement on AI Risk". It bluntly describes the risk posed by AI as "extinction", ranks it alongside "pandemics" and "nuclear war" in severity, and frames the urgency of control as something that "should be a global priority".
On May 30, more than 350 people, including scientists and business leaders, signed their names beneath the statement posted on the official website of the non-profit Center for AI Safety.
The names of the CEOs of the AI Big Three appear in the list
"We need to bring this issue into the open, because many people are only discussing it quietly among themselves," Dan Hendrycks, executive director of the Center for AI Safety, said of the statement's purpose. Its brevity was deliberate, meant to allow a "broad coalition" of scientists to sign, some of whom may not yet agree on what risks AI poses or on the best ways to prevent them.
The first two of the 350-plus signatories were Turing Award winners Geoffrey Hinton and Yoshua Bengio, followed by Demis Hassabis, Sam Altman, and Dario Amodei, the CEOs of Google DeepMind, OpenAI, and Anthropic, three of the world's best-known AI development companies.
Some well-known Chinese scholars also signed, including Zhang Yaqin, Dean of the Institute for AI Industry Research at Tsinghua University; Zeng Yi, Professor at the Institute of Automation of the Chinese Academy of Sciences; and Zhan Xianyuan, Associate Professor at the Institute for AI Industry Research at Tsinghua University.
If "serving mankind with knowledge" is regarded as the social responsibility of intellectuals, the signatures of experts and scholars are easy to understand. Hinton, known as the "Godfather of AI", resigned from Google in April this year, and has been expressing his disapproval in public since then. Concerns about artificial intelligence getting out of control; Bengio signed the "Pause on Giant AI Experiments" open letter published by the nonprofit Future of Life Institute in March this year.
The heads of Google DeepMind, OpenAI, and Anthropic, however, did not sign that earlier letter. Unlike this "22-word statement", the earlier letter elaborated on the various specific risks AI could pose and even laid out clear risk-management principles; it has gathered 31,810 signatures so far.
The clear disagreement the earlier letter sparked in the AI community was whether AI experiments should be paused for the sake of risk.
At the time, AI scholar Andrew Ng said on LinkedIn that pausing AI training for six months was a bad and unrealistic idea. OpenAI CEO Sam Altman later put the pointlessness of a pause more bluntly: "We pause for six months, then what? We pause for another six months?"
But this time, the leaders of the three AI giants were among the first to sign this "22-word statement", whose wording on risk is vague but stern. Their endorsement reveals how urgent they now consider the control of AI risks to be.
In just two and a half months, what caused AI companies to change their attitudes?
OpenAI traffic growth slows down
For now, even if AI has not driven humanity to extinction, generative AI has already revealed its "dark side" in the hands of bad actors.
For example, AI's imitation of voices and faces is being used as a tool for fraud, and its ability to generate images and text is being used not only to fabricate fake news but also to spread sexually explicit rumors that directly damage the reputations of ordinary people.
Compared with generative AI's other flaws, such as spouting nonsense that defies common sense, falling into logical traps, struggling with math problems, and raising data privacy and security concerns, the destructive power shown in these victim cases is more concrete and more vividly visible to the public. For AI developers, the most direct impact is this: how much incremental market can companies that have only just opened AI applications to the public hope to win amid negative public opinion?
Take OpenAI, whose text-generation application ChatGPT has seen the fastest growth in visits, as an example. Its monthly web visits grew from about 20 million last fall to 1.8 billion in April 2023, rewriting the myth of internet traffic growth. But according to data collected by the web analytics firm SimilarWeb, OpenAI's web visits are now growing at a slower pace.
OpenAI's web traffic growth slows down
OpenAI's web traffic comes mainly from the United States, which accounts for 10.25%, followed by India (8.82%), Japan (7.48%), Indonesia (3.84%), and Canada (3.06%). Among these countries, visits from Japan grew the most, up 28.86%. Russia, China, and European countries, however, contributed essentially no visible traffic, as internet users in those countries and regions have limited access to OpenAI.
OpenAI has cut itself off from some of these markets, chiefly because countries differ in how they govern the internet. Compared with Russia and China, Europe looks like a more desirable market for AI companies, but the European Union is drawing up explicit legislation on artificial intelligence.
Against this backdrop, the CEOs of OpenAI, DeepMind, and Anthropic have all turned up in Europe.
Three CEOs in Europe ahead of EU legislation
On April 27, members of the European Parliament reached a tentative political agreement on the proposal for an Artificial Intelligence Act. On May 11, two committees of the European Parliament adopted a draft negotiating mandate for the proposed Artificial Intelligence Act (hereinafter, the "Act"). The draft will be put to a plenary vote of the European Parliament on June 12-15, after which the Parliament will negotiate the final form of the law with the Council of the European Union.
In the Act, lawmakers classify AI tools by level of risk, from minimal to limited, high, and unacceptable. Government agencies and businesses using these tools will face different obligations depending on the risk level. The Act would also strictly prohibit "AI systems that pose unacceptable risks to human safety", including systems that deploy purposefully manipulative techniques, exploit human vulnerabilities, or evaluate people based on their behavior, social status, and personal characteristics.
If the "Act" is successfully enacted, this will be the first law targeting artificial intelligence in human history. Since the "Act" applies to all artificial intelligence systems operating in the European Union, American companies leading AI technology will obviously be constrained if they want to enter the European market.
For example, the Act requires companies developing generative AI tools to disclose whether they use copyrighted material in their systems. For AI companies that train large models on vast amounts of data from all kinds of sources, this requirement alone could keep them out of Europe.
OpenAI, DeepMind, and Anthropic have all begun to pay attention to European trends.
In late May, OpenAI CEO Sam Altman made several trips to Europe and held talks with the heads of government of Spain, Poland, France, and the United Kingdom about the development potential, threats, and regulation of artificial intelligence. DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei also met with British Prime Minister Rishi Sunak.
British Prime Minister Sunak meets the CEOs of the AI Big Three
By talking with European government leaders, the AI companies are trying to influence the Act's progress. Beyond that, Sam Altman has also used speeches, interviews, and other channels to address the European public directly, hoping to bring "public opinion" over to his side.
In a speech at University College London, Altman held to his position: people's concerns about AI are valid, but the potential benefits are far greater. On regulation, Altman said OpenAI welcomes it but needs "the right way to regulate", because "over-regulation can harm small companies and the open source movement."
On May 25, Altman shared his views on the Act with British media, saying much of its wording is inappropriate. He believes the current draft of the EU AI bill would amount to over-regulation, "but we have heard it will be adjusted, and they are still discussing it."
With less than half a month to go before the final vote on the Act, Altman's "lobbying tour" continues. According to public information, he will next meet EU industry chief Thierry Breton to discuss how the EU should implement rules that lead the world on artificial intelligence.
After this round of European trips, the CEOs of the AI Big Three signed the "22-word statement", publicly endorsing its call to make controlling AI risks a global priority. Behind the gesture is the "will to survive" of AI companies seeking to expand their markets under regulation.