Editorial : AI as a force of good

【明報專訊】THERE has been increasing concern over the ethics of the different applications of Artificial Intelligence (AI). With their algorithms, internet technology companies such as Facebook control the kind of information accessible to their users, which has naturally aroused considerable controversy. Also eyebrow-raising are Amazon's smart home products, which secretly collect data about users' homes. The European Union was the first to draw up a set of ethics guidelines for AI. Tencent, the Chinese internet technology giant, has also proposed the developmental philosophy of "AI — a force of good" in response to public concern. If put to good use, AI brings convenience and raises productivity, but the risks of its abuse cannot be underestimated. More communication and dialogue are needed between governments and technology sectors around the world to handle this new issue, while the ethical debate over AI should not be politicised, lest it become a never-ending ideological argument.

Technological development is not restricted by existing policies, nor does it stand still waiting for anyone to catch up. AI develops in leaps and bounds, and China and the US are at the forefront of the field. Many discussions are about how AI can enhance people's lives and make society more efficient. Last month in New York, the United Nations Human Settlements Programme held a seminar with representatives from companies including Tencent to discuss how to make use of new technologies such as AI to realise the UN's sustainable development goals in an innovative and highly effective manner. But AI is indeed a double-edged sword: if abused, it can harm the public interest. The ethical issues of AI are complex, manifesting themselves in different problems from one social and political system to another. If the discussion becomes politicised, one is likely to look at the speck of sawdust in another's eye while ignoring the plank in one's own.

When it comes to AI development in China, what the West is most concerned about is whether the government will abuse the technology to put its people under political surveillance. For example, public opinion in the West is often critical of the Chinese government's use of facial recognition technology to conduct mass surveillance in Xinjiang. The mainland authorities emphasise the need to maintain stability, and their political censorship of the Internet has often been criticised. To be sure, whether AI will become a new tool of surveillance warrants concern. But it also has to be said that, apart from China, countries such as the US and the UK are actively studying how to use AI for law enforcement and surveillance. Too much focus on abstract ideological differences makes it easy to forget that the crux of the matter is how far the technology is taken in practice.

If the risks of abusing AI technology in China are mostly political, the biggest risk currently facing Western societies is the exploitation and infringement of privacy by surveillance capitalism. The goal of the former is to maintain stability, while that of the latter is to pursue bigger profits.

Whatever the circumstances, the West will always harbour much scepticism about Chinese enterprises. However, as Danit Gal, an academic at Keio University in Japan, has pointed out, Tencent does not seem to have done less than Western internet technology giants in tackling the ethical issues of AI; it might even have done more. To start with, Tencent has invited a wide spectrum of stakeholders, including public interest groups, universities and even monks, to participate in discussions of AI ethics. China, the US and Europe are all major forces in AI development. Only when they have set aside their political prejudices can they begin a meaningful dialogue.

Ming Pao Editorial, 6 May 2019: "AI as a force of good" responds to doubts; avoid politicising the ethics controversy


■Glossary

eyebrow-raising : shocking

spectrum : complete range of opinions, people, situations etc, going from one extreme to its opposite

prejudice : an unreasonable dislike of or preference for a person, group, custom, etc, especially when it is based on their race, religion, sex, etc.
