AI ethics: Codes and principles

September 19, 2019
Despite the region’s prodigious road-mapping, articulated positions on ethics lag in Asia, both relative to other regions and relative to other AI development priorities.

This is an extract from “The ethics of AI”, the fourth report of the four-part series, “Asia’s AI agenda”, by MIT Technology Review Insights.

Goals and broad ethical principles are beginning to take shape. Policy documents include Japan’s AI Technology Strategy (March 2017), China’s Next Generation Plan (July 2017), and India’s #AIforAll national strategy (June 2018), each with key principles that include ensuring AI delivers broad benefits for national development. The Australian government recently earmarked AU$29.9m (US$20.6m) for a four-year AI and machine learning program, including the development of an “AI Ethics Roadmap”. Malaysia’s government has announced a national framework for AI to be completed by the end of this year. Most of these frameworks address ethical issues only in broad terms, and are largely informed by pragmatic concerns around job loss and reskilling requirements. The Malaysia Digital Economy Corporation (MDEC), the country’s technology promotion and investment coordination body, is leading the development of the AI framework and using it in large part to coordinate knowledge-sharing and best-practice development among Malaysia’s AI ecosystem participants (academia, government bodies, and established and startup enterprises), building coordinated responses to the implications of AI for skills, livelihoods, and economic competitiveness.

Other ethical framework implementation efforts are more incentives-driven: Australia’s chief scientist, Alan Finkel, recently proposed that his government implement a certification process for firms that demonstrate they have adopted AI responsibly, in terms of job-loss management and sustainable manufacturing processes. Finkel calls it the Turing Certificate, after the computer scientist (and recognized “founding father” of AI) Alan Turing, and believes that making it mandatory for firms seeking government contracts would send a strong signal to both the Australian and global economies.

Despite the region’s prodigious road-mapping, however, articulated positions on ethics lag in Asia, both relative to other regions and relative to other AI development priorities. In an analysis of 18 national and regional AI strategic plans, the Canadian Institute for Advanced Research found that the strategies coming from Asia placed relatively lower priority on ethics than those developed in other regions, although in one related area, the use of AI to promote social and economic inclusion, India scored highest globally. The heat map shows that Asian economies are still largely focused on pointing AI at industrial development strategies.

Along with the ethics chapters in national strategies, governments are also issuing codes and charters, although most lack legal status. In 2012, South Korea issued a “Robot Ethics Charter” covering standards, illegal use, and data protection; it is written as guidelines rather than law. Japan followed three years later, in 2015, with a “Robot Strategy” covering policy, ethics, and safety standards.

China’s “Next Generation AI Development Plan”, published in July 2017, envisions AI “to improve social management capacity” and pledges research on civil and criminal liability, privacy and IP, information safety, accountability, design ethics, risk assessment, and emergency responses, and commits to participate in AI global governance.

In 2018, Singapore’s monetary authority introduced principles to promote fairness, ethics, accountability, and transparency (FEAT) in the use of AI and data analytics in finance. The FEAT Principles require firms to demonstrate that their use of artificial intelligence and data analytics (AIDA) does not result in systematic disadvantage to individuals; that analytic models and frameworks are regularly reviewed to reduce or remove biases; and that any decisions based on AIDA are held to the same codes of conduct and the same level of ethical scrutiny as human-driven decisions.