
Editorial : Guiding AI Use to Protect Privacy and Public Interest

【明報專訊】THE RAPID ASCENT of generative artificial intelligence (GenAI) presents humanity with boundless possibilities, yet it also spawns a host of potential perils. Beyond copyright disputes, the spectre of privacy infringement looms large. As corporations increasingly integrate GenAI tools into their operations, the imperative to regulate and guide their proper use becomes paramount.

The latest "Checklist on Guidelines for the Use of Generative AI by Employees" published by the Office of the Privacy Commissioner for Personal Data (PCPD) represents a crucial step forward. The authorities must step up publicity to raise awareness among businesses and the public, while proactively drawing on the experience of mainland China and other jurisdictions to lay the groundwork for future legislative oversight.

The PCPD's guidelines advise enterprises to formulate comprehensive GenAI usage policies, outlining who within the company is authorised to use AI tools, which platforms are approved, and the permissible scope of their application—be it drafting, summarising information, or content creation. Enterprises should also stipulate restrictions on data input (e.g. personal data), the permissible use and storage of AI-generated outputs, and the relevant data retention policy.

GenAI exhibits an insatiable appetite for training data. One concern is that users' content on social media platforms could be harvested and repurposed without their knowledge or consent. Furthermore, anything uploaded to a GenAI tool may become training data for AI models: should users inadvertently furnish sensitive information, such as their personal CVs, to these systems, the data may be misused or even absorbed into large language models.

Another scenario for sensitive data breaches is improper corporate data input. Two years ago, South Korea's Samsung Electronics suffered a leak when employees entered sensitive company information into ChatGPT. While GenAI tools offer businesses a wide array of applications, corporations bear the responsibility of ensuring that their use does not compromise customer privacy: chatbots should not be permitted to solicit clients' names and contact details, nor should client personal data be fed indiscriminately into systems for AI-driven data analysis or model training.

Globally, approaches to GenAI regulation diverge significantly. The European Union has taken a proactive stance with its Artificial Intelligence Act, which imposes tiered oversight based on the perceived societal risks of AI applications: the greater the risk, the stricter the regulatory scrutiny. However, there are worries that excessive regulation may stifle innovation. The United States, by contrast, tends towards a laissez-faire approach, where lenient regulation fosters innovation but potentially at the expense of privacy and security.

Mainland China seeks a middle path: while comprehensive AI-specific legislation has yet to be introduced, interim measures have been implemented to govern AI applications and ethical considerations. For Hong Kong, immediate legislation at this juncture may be premature, but the authorities must engage in proactive planning and closely monitor international developments to prepare for future regulatory needs.

Ming Pao Editorial 2025.04.01 : Guiding AI Use to Protect Privacy and Public Interest


■ Glossary 生字 /

insatiable : always wanting more of sth; not able to be satisfied

inadvertently : by accident; without intending to

stifle : to make sb unable to breathe; to prevent sth from happening
