
AI ethics: key to making AI safe, trusted and embraced

2021-08-20

By Dr Xu Li, co-founder and CEO of SenseTime


Even with AI in its relative infancy, we are already in awe of what it can do for humanity. Diagnosing very early-stage cancer, controlling pandemics, easing congestion through autonomous vehicles and creating astounding works of art are just a few of the many applications now possible, with AI powering the fourth industrial revolution and impacting society, businesses and the environment. But as the saying goes: “With great power comes great responsibility.”


AI’s full range of possibilities stretches far beyond what we can conceive today. But the potential for abuse, as well as the threat and consequences of bias, is intertwined with the good AI can do. Hence the pursuit of ethical AI occupies many of the world’s brightest minds in business, academia and policymaking.


There is broad agreement that AI must be developed and applied within a framework that has ethics and a will to do good at its core. The challenge is that AI will likely never be entirely free of bias, because algorithms are honed on training data that can be skewed by human prejudice or by historical and social inequities.




At SenseTime, the objective is not to remove all bias from AI (which is unrealistic and can become a game of cat and mouse), but to make AI safer, fairer and more sustainable, so that people and our planet are truly factored into every AI decision and action. We believe AI should be built upon a balanced AI ethics and governance framework that emphasises controlled safety, people and sustainability.


Our Code of Ethics for AI Sustainable Development, developed with the Qing Yuan Research Institute of Shanghai Jiao Tong University and jointly published in 2020, provides a focal point for AI governance, as well as guiding principles for our development of new AI technologies. Going forward, we envisage ethical AI governance evolving around three core principles:

 

1. Ethical controls for safety and trust

 

With any emerging technology, trust and safety must be attained before major adoption occurs. Key attributes such as verifiability, certification, credibility and responsibility must all be in place to ensure adequate levels of safety and build trust, in the same way that a bicycle’s structural integrity, brakes and lights give a rider the confidence to jump on, or an airplane’s core systems and controls let a pilot take off with trust in its ability to fly. Today, the AI industry is still formulating the controls needed to deliver trusted autonomous driving. Like any new technology, AI is striving for the inflexion point at which a critical mass of usage drives widespread adoption.

 

2. Create value for humans through ethical AI

For AI to advance and fulfil its potential, all endeavours must benefit humanity. But defining what is human-centric, so that it can be coded into algorithms and software, remains a challenge. Institutions and authorities advocate a broad range of factors that must be considered: human rights, privacy protection, inclusiveness and openness, to name a few. We firmly believe that whichever human-centric elements are ultimately chosen to guide AI’s development, the most basic tenet must be that AI is for everyone and must not be limited to select communities or groups.




3. AI ethics to uphold sustainable development

 

As well as for the good of humanity, we should be advancing AI technology for the good of our planet. When it comes to our environment, AI can play an important role in augmenting our efforts to address climate change and other sustainability challenges. The important word here is augment, because as humans we have a responsibility to halt, or even reverse, the environmental damage we have done. Technology can assist, but it cannot replace our commitment. The consultancy Boston Consulting Group has found that AI could reduce carbon dioxide emissions by between 2.6bn and 5.3bn tonnes by 2030 (https://www.bcg.com/publications/2021/ai-to-reduce-carbon-emissions).

 

We see these three principles as essential pillars if AI technology is to be trusted, to grow and to be widely embraced, with a key objective being to make AI more inclusive as the technology develops. To put these principles into practice, SenseTime has teamed up with governments, industry and academic institutions to explore a model for the sustainable and responsible development of the AI industry.

 

As part of this, the SenseTime Code of Ethics for AI Sustainable Development has been included in the Resource Guide on Artificial Intelligence Strategies released by the United Nations. We are collaborating with China’s Tsinghua School of Public Management on a study of sustainable AI, with the aim of contributing to more comprehensive AI governance. We have also teamed up with Shanghai Jiao Tong University on a cross-disciplinary effort centring on computational law, AI governance and data ethics, in the hope that reference cases on the use of AI can be established as guidelines for future issues.

 

In many ways, the nature and impact of AI technology require a holistic approach to the debates surrounding its development. As Professor Ji Weidong, president of the China Institute for Socio-Legal Studies at Shanghai Jiao Tong University, puts it: “The trajectory of AI technology is not linear, but rather a leap. It is part of the fourth industrial revolution, with the ability to achieve otherwise unimaginable feats for humanity. AI’s development, risk management and ethical considerations are not mutually exclusive discussions and must be addressed in parallel. On this front, successful companies will be those that take a progressive, practical and principle-based approach, and pay attention to the protection of data rights.”

 

Meanwhile, our views on AI development and ethics have been featured in the latest World AI Industry Development Blue Book. Our leadership team has also established an internal ethics committee, a safety committee and a product committee to set and enforce strict ethical standards for all uses of AI technology. Our pursuit of sustainable AI has no boundaries, and we are committed to driving and supporting any universal framework of ethical AI governance.

 

Everyone at SenseTime will stay true to our mission of “Innovation for a better AI-empowered tomorrow” and consistently push the possibilities of AI to positively impact people and planet.