Several tech juggernauts, including Microsoft, OpenAI, Amazon, and Meta, signed an agreement last week that would set standards for artificial intelligence safety. At the Seoul AI Safety Summit, AI developers from more than seven countries committed to several safety measures in future AI model development, including publishing safety frameworks for their “frontier” AI technologies.
At the summit, the Korean and U.K. governments announced that the involved companies had agreed “not to develop or deploy a model at all” if doing so carried unavoidable risks. If risk mitigation proved impossible, the companies agreed to implement a “kill switch” that would halt development of those models.
“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” U.K. Prime Minister Rishi Sunak said in a statement. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI.”
The companies will determine “thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable.” In setting these thresholds, they will take input from “trusted actors,” such as their home governments, before publicly releasing their findings ahead of next year’s AI Action Summit in France.