SenseTime and Public Safety

Cities and countries have already begun to deploy AI and machine learning technologies for public safety and security. On one hand, machine learning applications in image and video recognition can help law enforcement officials detect criminal activity and efficiently prevent acts that endanger public safety. On the other hand, without thoughtful safeguards, the misuse of these technologies by law enforcement poses sobering human rights risks. Using China's most valuable AI company, SenseTime, as a case study, this article discusses how the company has partnered with government authorities to accelerate product development, why its partnership with the Chinese state is potentially problematic, and how the company can safeguard itself against the misuse of its products.

On April 9, 2018, the Beijing-based company SenseTime announced that it had raised $600 million from Alibaba Group and other investors at a valuation of more than $4 billion [1]. With the announcement, the company became the world's most valuable artificial intelligence startup and further underscored the gravity of the Chinese government's national policy, announced just a year earlier, to become the world's leader in the research, development, and commercialization of artificial intelligence technologies by 2030.

Founded in October 2014 by Dr. Xiao'ou Tang, a Professor of Information Engineering at the Chinese University of Hong Kong, SenseTime has developed commercial computer-vision products that use deep learning to replicate tasks typically performed by trained human eyes. These tasks include facial, image, and text recognition; video analysis; and image and video editing. SenseTime's core platform is currently used by more than 400 companies across a wide range of industries and verticals, in applications that range from the playful to the mission-critical [2]. One of SenseTime's customers, Meitu, a Chinese selfie app, lets users modify their appearance and take funnier or more attractive-looking selfies using SenseTime's image- and video-editing capabilities [3]. At the other end of the spectrum, China's fintech companies rely on SenseTime's platform as a mission-critical identity-verification system for opening accounts. For China's 4,000 peer-to-peer lenders, SenseTime's identity-verification product SenseFace 3.0 phased out the days-long manual verification process that had bottlenecked loan disbursement in online lending [4].

Exhibit 1: SenseTime provides Rong360 with face identification (including liveness detection), identity verification, and ID card and credit card recognition functions. [4]
However, perhaps SenseTime's most visible and controversial client is the Chinese government itself. Since the company's early days, its platform and core technologies have been used by Chinese law enforcement as part of a national surveillance program. SenseTime processes video captured by China's nearly 170 million CCTV cameras to help police officers identify suspects and root out potential criminals by matching faces detected in video with corresponding faces on government-issued IDs [5]. In 2016, SenseTime Head of Product Development Yang Fan claimed that police in Chongqing, using the company's technology, identified 69 suspects and caught 14 fugitives within 40 days: "What we've accomplished happens only in movies" [6].
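In principle, the matching step described here — comparing a face detected in video against a gallery of ID photos — reduces to a nearest-neighbor search over face embeddings. The sketch below is purely illustrative: the function name, the 0.6 threshold, and the use of cosine similarity are my assumptions, not a description of SenseTime's actual pipeline.

```python
import numpy as np

def match_face(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    """Return the index of the best-matching gallery face, or None.

    probe:   (d,) embedding of a face detected in a video frame
    gallery: (n, d) embeddings of faces from ID photos
    """
    # Normalize so that a dot product equals cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe          # cosine similarity per gallery face
    best = int(np.argmax(scores))
    # Only report a match above the decision threshold; otherwise abstain.
    return best if scores[best] >= threshold else None
```

In a real deployment the embeddings would come from a trained face-recognition network, and the threshold choice directly trades off false matches against missed matches — the crux of the civil-liberties concerns discussed below.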

Exhibit 2: SenseTime actual-case comparison results [7]
For SenseTime, China’s police forces and their surveillance footage serve as an important source of training data for the company’s platform and product development. While SenseTime’s efforts thus far have centered on identity verification and image recognition, the company is eager to develop more products that improve public safety by automatically recognizing incidents as they happen and alerting the police: tracking the license plates of stolen vehicles, detecting traffic accidents in real time, and locating lost children and senior citizens in public spaces [8]. Products created for public safety could then be adapted for commercial use cases, such as cashier-less shopping and consumer big-data analysis in physical retail [9].

Questions of privacy and the potential for government and corporate misuse have dogged the company since its early days of product development and commercialization. Civil libertarians within China and internationally argue that SenseTime’s technologies have been used to track minorities in places like the Uighur region of Xinjiang and religious worshipers attending church in the coastal city of Wenzhou [10]. In response, the company has often attempted to publicly absolve itself of responsibility for how its clients use its products. As SenseTime PR manager Franky Chan put it: “SenseTime mainly provides customers with algorithms and technology to process their data. We do not obtain, and have no control of, the data from customers. By nature, AI is only a tool; it depends on whether the user uses it for good or bad causes.” [11]

SenseTime’s response to these criticisms indicates that companies developing machine-learning products in the image and video space need to strike a balance between efficiency and privacy. Instead of evading conversations about the potential misuse of its products, the company could proactively work with activists, policymakers, and industry actors to incorporate these concerns into product development by explicitly building safeguards into its products that prevent misuse. Furthermore, the company’s public perception could be strengthened with additional transparency and disclosure about where, when, and how its products have been used in law enforcement actions within China and globally. Lastly, given the broad social ramifications of its technology, SenseTime could work with civil society and corporate actors to help define and enforce laws governing acceptable use, both within China and globally.

As AI and machine learning technologies transform every profession, industry, and society, we will continue to confront the ethical implications and consequences of these innovations. This raises the question: are companies responsible for the misuse of their products? And how, if at all, should companies safeguard themselves from such misuse?

(787 words)

[1] Bloomberg, “China Now Has the Most Valuable AI Startup in the World”, accessed November 2018.

[2] Quartz, “The billion-dollar, Alibaba-backed AI company that’s quietly watching people in China”, accessed November 2018.

[3] Jiayang Fan, “China’s Selfie Obsession”, The New Yorker, December 25, 2017, accessed November 2018.

[4] SenseTime, “Customer Cases”, accessed November 2018.

[5] Josh Chin and Liza Lin, “China’s All-Seeing Surveillance State Is Reading Its Citizens’ Faces”, Wall Street Journal, accessed November 2018.

[6] Shu-Ching Jean Chen, “The Faces Behind China’s Artificial Intelligence Unicorn”, Forbes, accessed November 2018.

[7] “‘Searching for pictures’ to judge intelligence: AI to help criminal investigation”, accessed November 2018.

[8] Sebastian Moss, “China’s SenseTime, the world’s most valuable AI startup, plans five supercomputers”, accessed November 2018.

[9] Suning, “Suning Announces Investment in SenseTime to Further Deploy Smart Retail Strategy With AI Innovation”, accessed November 2018.

[10] Josh Chin and Liza Lin, “China’s All-Seeing Surveillance State Is Reading Its Citizens’ Faces”, Wall Street Journal, accessed November 2018.

[11] New Statesman, “SenseTime: How the world’s most valuable AI startup is changing China”, accessed November 2018.

Student comments on SenseTime and Public Safety

  1. Thanks for the article! This is a hard dilemma, and its impact only grows stronger as the days go by.
    I believe companies should be held liable for the use of their products. On one end of the spectrum, there are regulations imposed on arms dealers especially for that purpose. Since the law always lags technology, I’m unaware of such regulations yet, but they are sure to come. Such products, if they are used for public defense purposes, should be regulated. The root problem is that we can’t trust the Chinese government to have any checks and balances on the use of such technology.
    I call and raise your question: next year, when a criminal group hacks the platform to spy on their targets – who will be held liable? The government or SenseTime?

  2. Even though we are still far from a sci-fi scene from a Spielberg movie (the film “Minority Report” reflects this dilemma), AI developments are evolving at a faster pace than regulation, widening the gap between law and reality. As you mentioned, companies have an excellent opportunity to promote the debate in society and involve many stakeholders even before launching solutions to the market. However, to safeguard their position and sustainably develop the market for AI applications, I believe that both suppliers of AI solutions and users of these products must act in a coordinated way. Otherwise, the action of a single company won’t be enough to mobilize society and policymakers. The private sector, acting in coordination, can fuel the ethical and social debate and push people, governments, and international organizations to react. That said, I think policymakers bear the core responsibility for preventing AI misuse and for guaranteeing society that its voice will be reflected in the corresponding regulations.

  3. Interesting topic, and a great essay about AI and SenseTime.
    I believe AI can be used in good or bad ways, which could benefit or hurt our society and human beings, so companies have a responsibility to use AI in a positive way. As you mentioned in the article, “By nature, AI is only a tool, it depends on whether the user uses it for good or bad causes.”
    As for using AI for public security, I personally think it should be used, because public security is crucial for every single person, though I agree that this might hurt personal privacy. It should be applied very carefully in public.

  4. It’s somewhat eerie how closely this mimics the “God’s Eye” in the Fast and Furious movies. I think society always gets nervous when we find out how much information companies have (i.e., Google and Facebook). Sometimes I wonder if we need to just accept that this is the state we are in and, if so, whether we can ease the legal hurdles so that companies like this can go out and do tremendous good in the world. Privacy, I think, is almost a myth now. Even when we think that we have privacy – do we? Very thought-provoking article.

    1. Life imitates art perhaps 🙂

  5. Very interesting article, thanks! While the ethical topics might be controversial and the technology poses a substantial threat, I think it is not primarily the role of the company to determine what is “good” and what is “bad”. While we as a society hope that companies will operate with corporate responsibility, it would be foolish for the government not to regulate the potential ramifications of high-tech innovation. Since the government should represent, and work for, society, I think the government is the body most likely to do a reasonable job of mitigating these threats.
