【明報專訊】THERE has been increasing concern over the ethics of the different applications of Artificial Intelligence (AI). With their algorithms, internet technology companies such as Facebook control the kind of information accessible to their users, which has naturally aroused considerable controversy. What is also eyebrow-raising is Amazon's smart home products, which secretly collect data about users' homes. The European Union was the first to draw up a set of ethics guidelines for AI. Tencent, the Chinese internet technology giant, has also proposed the developmental philosophy of "AI — a force of good" in response to public concern. If put to good use, AI brings convenience and raises productivity. But we cannot underestimate the risks of its abuse. More communication and dialogue are needed between governments and technology sectors around the world to handle this new issue, while the politicisation of the ethical issues of AI should be avoided so that they do not become a never-ending ideological argument.
Technological development is not restricted by existing policies, still less does it stand still waiting for anyone to catch up. AI develops in leaps and bounds, and China and the US are at the forefront of the field. Many discussions are about how AI can enhance people's lives and make society more efficient. Last month in New York, the United Nations Human Settlements Programme held a seminar with representatives from companies including Tencent to discuss how to make use of new technologies such as AI to realise the UN's sustainable development goals in an innovative and highly effective manner. But AI is indeed a double-edged sword. If abused, it can harm the public interest. The ethical issues of AI are complex, manifesting themselves in different problems from one social and political system to another. If the discussion becomes politicised, one is likely to look at the speck of sawdust in someone else's eye without paying attention to the plank in one's own.
When it comes to AI development in China, what the West is most concerned about is whether the government will abuse the technology and put its people under political surveillance. For example, public opinion in the West is often critical of the Chinese government's use of facial recognition technology to conduct mass surveillance in Xinjiang. The mainland authorities emphasise the need to maintain stability, and their political censorship of the Internet is often criticised. To be sure, whether AI will become a new tool of surveillance warrants concern. But it also has to be said that apart from China, countries such as the US and the UK are also actively studying how to make use of AI for law-enforcement and surveillance purposes. Too much focus on abstract ideological differences is likely to make us forget that the crux of the matter is the extent to which the technology is employed in practice.
If the risks of abusing AI technology in China are mostly political, the biggest risk facing Western societies currently is the exploitation and infringement of privacy by surveillance capitalism. The goal of the former is to maintain stability, while that of the latter is to pursue bigger profits.
No matter what the circumstances are, the West will always harbour a lot of scepticism about Chinese enterprises. However, as pointed out by Danit Gal, an academic at Keio University, Japan, it does not seem that Tencent has done less than Western internet technology giants in tackling the ethical issues of AI — it might even have done more. To start with, Tencent has invited a wide spectrum of stakeholders — including public interest groups, universities and even monks — to participate in AI ethics discussions. China, the US and Europe are all major forces in AI development. They can start a meaningful discussion only after they have done away with their political prejudices.
eyebrow-raising : shocking
spectrum : complete range of opinions, people, situations etc, going from one extreme to its opposite
prejudice : an unreasonable dislike of or preference for a person, group, custom, etc, especially when it is based on their race, religion, sex, etc.