When OpenAI’s ChatGPT took the world by storm last year, it caught many power brokers in both Silicon Valley and Washington, DC, by surprise. The US government should now get advance warning of future AI breakthroughs involving large language models, the technology behind ChatGPT.
New Government Requirements
The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government when they train an AI model using a significant amount of computing power. The rule could take effect as soon as next week.
The new requirement will give the US government access to key information about some of the most sensitive projects inside OpenAI, Google, Amazon, and other tech companies competing in AI. Companies will also have to provide information on safety testing being done on their new AI creations.
OpenAI has been coy about how much work has been done on a successor to its current top offering, GPT-4. The US government may be the first to know when work or safety testing really begins on GPT-5. OpenAI did not immediately respond to a request for comment.
Secretary of Commerce’s Statement
“We’re using the Defense Production Act, which is authority that we have because of the president, to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results—the safety data—so we can review it,” Gina Raimondo, US secretary of commerce, said Friday at an event held at Stanford University’s Hoover Institution. She did not say when the requirement will take effect or what action the government might take on the information it receives about AI projects. More details are expected to be announced next week.
White House Executive Order
The new rules are being implemented as part of a sweeping White House executive order issued last October. The executive order gave the Commerce Department a deadline of January 28 to come up with a scheme whereby companies would be required to inform US officials of details about powerful new AI models in development. The order said those details should include the amount of computing power being used, information on the ownership of data being fed to the model, and details of safety testing.
The October order calls for work to begin on defining when AI models should require reporting to the Commerce Department but sets an initial bar of 100 septillion (10²⁶) floating-point operations, or flops, and a level 1,000 times lower for large language models working on DNA sequencing data. Neither OpenAI nor Google has disclosed how much computing power it used to train its most powerful model, GPT-4 and Gemini, respectively, but a congressional research service report on the executive order suggests that 10²⁶ flops is slightly beyond what was used to train GPT-4.
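To put the threshold in concrete terms, here is a rough back-of-the-envelope check a lab might run. It uses the common 6 × N × D approximation for total training flops (N parameters, D training tokens), which is an industry rule of thumb, not part of the executive order, and the model sizes below are purely hypothetical:

```python
# Reporting thresholds from the October executive order (total floating-point operations).
REPORTING_THRESHOLD_FLOPS = 1e26            # general-purpose models
BIO_SEQUENCE_THRESHOLD_FLOPS = 1e26 / 1000  # 1,000x lower for biological sequence data


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute via the common 6 * N * D rule of thumb."""
    return 6 * num_parameters * num_tokens


# Hypothetical run: a 1-trillion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 10e12)    # 6e25 total operations
crosses = flops >= REPORTING_THRESHOLD_FLOPS     # just under the 1e26 bar
```

Under this approximation, even a trillion-parameter training run on 10 trillion tokens would sit just below the reporting line, which is consistent with the suggestion that 10²⁶ flops lands slightly beyond GPT-4-scale training.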
Additional Requirements for Cloud Computing Providers
Raimondo also confirmed that the Commerce Department will soon implement another requirement of the October executive order requiring cloud computing providers such as Amazon, Microsoft, and Google to inform the government when a foreign company uses their resources to train a large language model. Foreign projects must be reported when they cross the same initial threshold of 100 septillion flops.
It’s no surprise that OpenAI and other tech giants will have to notify the US government when they embark on new AI projects. Given the growing significance and potential impact of artificial intelligence, the requirement could bring more transparency and accountability to the industry and help officials identify and address risks associated with these advanced technologies.

As AI continues to evolve, open lines of communication between industry leaders and regulators can help the government understand the capabilities and limitations of new models, and ensure that safeguards are in place to protect the public’s interests as these technologies are developed.