PARIS (AN) — The world's first international, non-binding standards for artificial intelligence were approved by 42 nations as part of an effort to guide the ethical development of our increasingly automated world.
The standards, adopted on Wednesday as a policy recommendation at a ministerial-level meeting of the Paris-based Organization for Economic Cooperation and Development, were approved by all 36 of its mostly wealthy and industrialized member nations.
Six non-member nations — Argentina, Brazil, Colombia, Costa Rica, Peru and Romania — also approved the standards intended to "promote artificial intelligence that is innovative and trustworthy and that respects human rights and democratic values," the international organization said.
"The OECD AI principles are the first such principles signed up to by governments," it said. "The OECD AI principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct."
They were developed under the guidance of an OECD group of 50 experts representing governments, academia, businesses, civil society, international bodies, the tech community and trade unions. The policy recommendation has the backing of the European Commission and is slated for discussion at the G-20 leaders’ summit scheduled for the end of June in Osaka, Japan.
OECD's Secretary-General Ángel Gurría, a Mexican economist and diplomat, said AI is energizing economies, helping everyone from shop-floor managers to operating-room physicians make better predictions and decisions.
"It is facilitating our everyday lives — your smartphone can use AI to detect your fatigue levels while driving or provide you with personal health data. At the same time, AI technologies are still in their infancy. Much potential remains to be realized," Gurría said.
"And while AI is driving optimism, it is also fueling anxieties and ethical concerns," he said. "There are questions around the trustworthiness and robustness of AI systems, including the dangers of codifying and reinforcing existing biases — such as those related to gender and race — or of infringing on human rights and important values such as privacy."
IBM, which helped develop the OECD standards, said they provide sound policy guidance for governments and stakeholders worldwide working to advance responsible, human-centered artificial intelligence.
"In the 1980s, OECD guidelines on data protection and privacy provided the essential, international foundation for privacy legislation enacted by many countries," said Christopher A. Padilla, an IBM vice president for government and regulatory affairs.
"The organization is well-positioned to provide a similar global basis for balanced and consistent approaches to AI policies that prioritize trust and maximize the benefits to society while mitigating risks," he said.
Five "values-based principles"
Though the OECD recommendations have no legal teeth, they carry weight among national and international policymakers.
For example, privacy laws and frameworks in Asia, Europe and the United States were shaped by OECD privacy guidelines on the collection and use of personal data. Corporate governance regulations, too, were influenced by G-20-endorsed OECD principles.
And in a rare occurrence, the OECD's AI standards are an international agreement that U.S. President Donald Trump's administration has endorsed.
"Together, we call on every nation that shares our values to join with us to develop AI and make our countries stronger, the world safer, and our people more prosperous and free," Michael Kratsios, Trump's deputy assistant for tech policy, said in welcoming the OECD standards.
The AI standards were built around what OECD called "five complementary values-based principles for the responsible stewardship" of future AI development:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards — for example, enabling human intervention where necessary — to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Last year, international organizations, tech giants and academics announced they would examine AI's potential for driving sustainable development. The United Nations Development Program, which operates in 177 of the U.N.’s 193 member nations, announced it was joining the Partnership on AI, founded two years earlier by Amazon, DeepMind, Facebook, Google, IBM and Microsoft.
The partnership aims to ensure that AI — including machine learning, which enables computers to improve at tasks without being explicitly programmed — will be used for safe, ethical and transparent purposes. It also hopes to advance public understanding of AI, create best practices for its use and serve as an open platform for discussion and engagement about AI and its influence on people and society.
UNDP said it has already adopted some uses of artificial intelligence, such as drones and remote sensing to collect data. Some of that data has been used, for example, to help the Maldives better prepare for disasters and to help Uganda create better living quarters for refugees.