[Beijing Forum 2019] Taking control of the wheel in artificial intelligence
Nov 06, 2019
Peking University, Nov. 6, 2019: The Beijing Forum 2019, themed “The Changing World and the Future of Humankind”, featured a panel session dedicated to artificial intelligence.


At the panel session

These days, a forum on the future of humanity could not be complete without a discussion of this rapidly developing technology. The world’s richest companies, from Apple to Tencent, are all invested in the technology, and yet the full potential and impacts of artificial intelligence are still unknown. On November 3, philosophers, scientists and lawyers from the US, Europe and China gathered to continue the discussion on this crucial issue.

Scott Shackelford and Rachel Dockery, cyber experts from Indiana University, gave an overview of the AI landscape before presenting their research on AI regulation. They focused on the design principles of Elinor Ostrom, whose ground-breaking research won her the Nobel Prize in 2009. Drawing on lessons such as implementing nested decision-making and establishing baseline norms, the speakers showed how experience from other areas, such as climate policy, can offer a practical way forward for governing artificial intelligence. Finally, they discussed autonomous vehicles as a literal example of AI “moving into the driver's seat as a primary determinant of human destiny”, quoting Wendell Wallach, Chair of Yale University’s Interdisciplinary Center for Bioethics and one of the keynote speakers at the Opening Ceremony of Beijing Forum 2019.

Yi Zeng from the Chinese Academy of Sciences discussed his research on building a robot “self-model”. The intention was not to create self-consciousness, but rather to instil human values in artificial intelligence, enabling better AI decision-making in cases such as self-driving cars. Yi also touched upon Chinese perspectives on AI, from the principle of harmony (和) to the principles of AI governance released earlier this year.

Mark P. McKenna from the University of Notre Dame spoke on the legal ramifications of artificial intelligence. His key idea was that, in a legal sense, “it’s rare for technology to raise new ethical problems… the ethical problems are not mysterious or unexamined”. The question of what it means to be human, for instance, was explored long before the emergence of robots. What new technologies do reveal is how previously drawn lines “become unstable in light of new technological capability”. McKenna raised the example of data-driven advertising, which may appear qualitatively different from the advertising practices of the past.

“The question of when technology should be regarded as qualitatively different due to its pervasiveness is the hard problem for regulatory design.”

He warned against framing AI ethics as a unique issue open only to technology experts, encouraging greater focus on “current and tractable problems” such as the use of AI in policing. He believed academics have an obligation to translate their work “in a way that contributes to the conversation”.

Similarly, Mark Coeckelbergh from the University of Vienna warned against AI “alarmism”. As shown by the story of Frankenstein, concerns about technology as a kind of monster are not new. He suggested differentiating between moral and legal responsibility. Even if the question of moral responsibility for AI is not resolved, attaching legal responsibility can still lead to workable outcomes and incentives for proper regulation.

Finally, Huw Price from Cambridge University spoke about the role of academics in navigating the future of AI. He believed that academia presents four advantages: breadth of expertise, the luxury of long time horizons, global collaboration and resilience as a source of authority.

Price then discussed principles for designing a global AI community. Firstly, he encouraged humility. With most of the AI revolution still beyond the horizon, it is important to acknowledge how much remains unknown.

“We don’t yet have all the answers, and we don’t even have all the questions.”

Secondly, he stressed the importance of interdisciplinarity. Since AI touches all aspects of society, AI research can draw upon everything from biology to gender studies to international relations: the disciplinary connections are “not predictable in advance”. He pointed to the example of the Leverhulme Centre for the Future of Intelligence, where he is director. At the same time, he believed that research on AI impacts is yet to form a “disciplinary identity”.

Following the theme of connectedness, Price encouraged interactions across and beyond academia to the policy, corporate and technical worlds. Acknowledging cultural and national boundaries and differing interests, he stressed the importance of building trust.

Finally, Price quoted the UK’s Astronomer Royal Martin Rees: “Our Earth has existed for 45 million centuries, but this century is special: it's the first when one species, ours, has the planet's future in its hands.”

For Price, this underscored the importance of taking the long view.

“Not building for the long term would be a disastrous mistake.”

Beijing itself is a hub for AI development. The city became China’s first pilot zone for artificial intelligence in February, as part of China’s plan to become a leader in AI over the next ten years. In May, Peking University was among the institutions that jointly unveiled the Beijing AI Principles, a set of ethical standards for AI research.

The event, moderated by Huw Price and Peking University’s Liu Zhe, concluded with the shared recognition that regulating a global technology demands the cooperation of the global community. The guests showed remarkable commitment to facing the challenges and opportunities together, embodying not only the idea of trust as “the most important resource” in AI governance, but also the theme of harmony and prosperity that underpins the Beijing Forum.

Reported by: Cherry Zheng
Edited by: Huang Weijian
