Global Artificial Intelligence Technology Conference (2021): Liu Zhiyi Discussed Industrial Ethical Practices
2021-06-06

The Global Artificial Intelligence Technology Conference (2021), themed "Communication, Integration and Mutual Benefits", was held in Hangzhou Future Sci-Tech City from June 5 to 6, 2021.


During the Conference, the 3rd "Forum on the Ethics of Artificial Intelligence from a Global Perspective", co-organized by the AI Ethics Committee of the Chinese Association for Artificial Intelligence (in preparation) and the Peking University Berggruen Research Center, kicked off on June 6. The Forum was chaired and moderated by Professor Chen Xiaoping, Director of the Robotics Laboratory at the University of Science and Technology of China, and researcher Duan Weiwen, Director of the Research Office for Philosophy of Science and Technology at the Institute of Philosophy, Chinese Academy of Social Sciences. Professor Yao Xin, Dean of the Department of Computer Science and Engineering at the Southern University of Science and Technology and an IEEE Fellow, and other Chinese and international experts and scholars delivered keynote speeches at the Forum.



The 3rd "Forum on the Ethics of Artificial Intelligence from a Global Perspective"


Focusing on hot topics such as digital governance, privacy protection, and industry regulation, the Forum consisted of two sessions, "Ethics and data governance should be implemented! Smart Technology and Digital Governance" and "The deeper issues behind ethics should be made clear! Artificial Intelligence and Human Nature". The sessions aimed to address the ethical and data-privacy problems raised by the application of artificial intelligence in social scenarios, and to find solutions for the long-term challenges that AI technology poses to mankind and for the prospects of global digital governance.

Focusing on AI technology and digital governance, Professor Yao Xin delivered the keynote speech "How can we make machine learning fairer?". Professor Yao first explained what AI ethics means and proposed that technical measures can be used to make machine learning fairer, so that AI ethics can be better put into practice. Using recruitment data as an example, he proposed a multi-objective machine learning method to address algorithmic fairness issues such as discrimination in hiring.
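
As a rough illustration of the kind of multi-objective approach described above, the hypothetical Python sketch below scores a classifier on both accuracy and a demographic-parity gap and then combines the two objectives with a weight. The data, models, and weighting are invented for illustration and are not drawn from Professor Yao's talk.

```python
# Illustrative sketch only (not Professor Yao's actual method): score a
# classifier on two objectives -- accuracy and a demographic-parity gap --
# and scalarize them. All data and numbers below are hypothetical.

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / max(1, len(preds))
    return abs(rate(0) - rate(1))

def score(y_true, y_pred, group, fairness_weight=0.5):
    """Higher is better: reward accuracy, penalize the parity gap."""
    return accuracy(y_true, y_pred) - fairness_weight * demographic_parity_gap(y_pred, group)

# Hypothetical hiring-style data: prediction 1 = "recommend for interview".
y_true  = [1, 0, 1, 1, 0, 1, 0, 0]
group   = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute (two demographic groups)
model_a = [1, 0, 1, 1, 0, 0, 0, 0]   # more accurate, but favors group 0
model_b = [1, 0, 0, 1, 0, 1, 1, 0]   # slightly less accurate, more balanced

for name, pred in [("model_a", model_a), ("model_b", model_b)]:
    print(name, round(score(y_true, pred, group), 3))
```

Running the sketch shows the more balanced model scoring higher once the parity gap is penalized, which is the trade-off a multi-objective formulation is meant to surface.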


Professor Chen Xiaoping then delivered a keynote speech titled "Can artificial intelligence deliver the changes that we need?" Professor Chen pointed out that AI research and application require classified governance: AI applications alone cannot determine the changes brought about by artificial intelligence, and support from innovation systems is also needed. Although Schumpeter's innovation model meets the demands of commercial interests, government taxation, user needs, and production efficiency, it cannot serve social needs that lack sufficient commercial returns, which leads to the digital divide, class stratification, and other social problems. On this basis, Professor Chen proposed the theory of "innovation for public interests", which aims to achieve a balanced improvement of economic and social benefits and can be embodied in open-source software and content entrepreneurship.


Focusing on AI ethics and human society, researcher Duan Weiwen spoke on "Anxiety under the Data Gaze: The Socio-technical Imaginaries and Ethical Strategies of AI". With the accelerated progress of digital intelligence, digital platforms not only act as media but also take on multiple functions of production and consumption. Mr. Duan pointed out that, under the data gaze, technological apprehensions such as privacy apathy and algorithm anxiety are growing among the general public. These apprehensions should therefore be made perceptible, so as to reconstruct the socio-technical imaginaries of artificial intelligence and lay a foundation of shared values for multiple stakeholders to participate in the co-development of AI ethics.


After the keynote speeches, Liu Zhiyi, Director of the SenseTime Intelligent Industry Research Institute, and other guests joined the roundtable discussion, where Mr. Liu shared SenseTime's ethical practices in the AI industry.


On August 6, 2020, SenseTime was selected, as the representative of computer vision enterprises, as a member of the first Artificial Intelligence Technical Subcommittee of the China National Standardization Technical Committee, the national standardization organization for artificial intelligence. The subcommittee is mainly responsible for formulating and revising national standards in AI fields such as foundations, technology, risk management, reliability, governance, products, and applications.

As one of the first companies to propose the concept of AI sustainable development, SenseTime has established an internal ethics committee and, in June 2020, jointly released the whitepaper "Code of Ethics for AI Sustainable Development", focusing on building an AI sustainable development system, addressing emerging social issues such as ethics, morality, safety, and credibility, and integrating these concepts into its AI products and applications.


SenseTime has been actively cooperating with universities across China to help build an AI governance ecosystem. On March 26, 2021, SenseTime and Shanghai Jiaotong University jointly founded the Center for Computational Law and AI Ethics Study, which focuses on in-depth research into privacy protection, algorithmic fairness, network security, urban and social digital governance, smart courts, cognitive science and neuromorphic computing, technology, law and policy, and other major subjects, and serves as a platform for exchange and cooperation in computational law. SenseTime is also working with the Institute for AI International Governance of Tsinghua University (I-AIIG) to study AI sustainable development and contribute to AI governance.


As AI technologies gain popularity, topics beyond the technology itself, such as "fairness", "safety", and "privacy", have received increasing public attention. Going forward, how to implement AI governance mechanisms from a technical perspective, how to better integrate governance principles with industrial practice, and how AI affects the evolution of ethical issues in human society are questions we need to consider as the technology continues to iterate.
