Affectiva: building AI that reads human emotions

As the amount of data we collect increases, companies are leveraging advanced analytics to extract insights from even the most unusual data sources, including people’s facial expressions and speech. In fact, an interesting application of artificial intelligence (AI) and machine learning (ML) lies in the space of emotions. Emotion AI is an emotion detection technology that “combines analysis of both face and speech as complementary signals to provide richer insight into the human expression of emotion”.[1] Affectiva, a Boston-based company founded in 2009 as a spin-off from the MIT Media Lab, is a pioneer in this space.[2]

Using a standard webcam, Affectiva can identify a face and its key landmarks (e.g. mouth corners, nose tip) and classify facial expressions into seven main emotions (anger, contempt, disgust, fear, joy, sadness and surprise). With pre-recorded audio, speech analysis can be integrated as well, classifying “how” something is said at a frequency of a few hundred milliseconds. With about six million faces analyzed in 87 countries, accuracy is in the high 90s (percent).
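To make the pipeline concrete, here is a minimal sketch of the landmarks-to-emotion step described above. All names and thresholds are illustrative assumptions, not Affectiva’s actual SDK API: real systems use deep networks trained on millions of labeled face frames, while this toy version maps three hand-picked landmark features to one of the seven emotion classes with simple rules.

```python
from dataclasses import dataclass

# The seven emotion classes Affectiva reports.
EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

@dataclass
class Landmarks:
    """Hypothetical features derived from facial landmarks (all in [0, 1])."""
    mouth_corner_lift: float   # rises when mouth corners lift (smiling)
    brow_raise: float          # rises when eyebrows lift (surprise/fear)
    brow_furrow: float         # rises when brows draw together (anger)

def classify_emotion(lm: Landmarks) -> str:
    """Toy rule-based classifier standing in for a trained model."""
    if lm.mouth_corner_lift > 0.5:
        return "joy"
    if lm.brow_furrow > 0.5:
        return "anger"
    if lm.brow_raise > 0.5:
        return "surprise"
    return "sadness"  # arbitrary fallback for this sketch

print(classify_emotion(Landmarks(0.8, 0.1, 0.0)))  # joy
```

In a production system each webcam frame would first pass through face detection and landmark localization before a step like this, and the output would be a probability per emotion rather than a single label.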

Current business model and future opportunities

The two core applications of emotion AI for Affectiva are media analytics and automotive. The media analytics application is targeted at advertisers that want to test consumers’ reactions to videos, ads and TV shows.[3] With such information, it is possible to improve ad story flow, guide trailer creation or analyze the perception of ad characters. Results at this stage are extremely promising, especially considering the partnerships with Coca-Cola and Mars,[4] as well as the strategic investment made by Kantar (a subsidiary of WPP, the largest advertising conglomerate).[5] In automotive, Affectiva developed an In-Cabin Sensing (ICS) solution that understands emotions within a vehicle in real time.[6] This application can support passenger safety, for example for ridesharing providers or fleet management companies, by detecting driver impairment. In this vertical, too, partnerships are strong, as in the case of Accenture and Faurecia, as well as other software and product safety companies.[7][8]

Overall, the total addressable market for affective computing, across these two industries as well as all the others where the company could expand, is potentially huge. Tractica estimated that the AI software market may reach $118.6 billion by 2025.[9] While this projection covers all AI use cases, emotion AI can capture a significant share of that growth by adding an emotion component to many AI use cases and making them more human-like.

Affectiva uses a clear monetization strategy to leverage both existing and new opportunities: it offers SaaS services to current customer groups (advertisers and OEMs) and SDKs (software development kits) that allow developers to build new applications on top of Affectiva’s core technology (now through iMotions).[10]

Key risks and mitigation strategies

Going forward, there are two main challenges to a full global scale-up of Affectiva.

One challenge is ‘technical’: cultural norms and the ways people express emotions vary widely across geographies, cultures and product categories. To obtain robust results, the company will need to work across multiple product/geography/culture combinations (plus any other relevant variables) to define the right descriptive benchmarks. To scale up the platform in a structured and sustainable way, a modular expansion may be needed, focusing first on customer groups with more data available rather than expanding immediately to mass users.[11]

The second challenge is ‘ethical’, given the lack of regulation and the fear that emotion AI can drive discrimination, as in the case of predictive sentencing or housing algorithms. Many activists are trying to boycott facial recognition technologies, and some countries are already assessing whether to ban them, considering the impact they can have on people’s lives as well as the unwanted consequences. To mitigate such risks, Affectiva will need to work with authorities to define the right security and privacy protocols at the industry level.[12]

While the first challenge can be overcome by collecting more data and expanding research in partnership with multiple brands or customers, the second is more crucial to the survival and expansion of the company. More specifically, assuming effective lobbying with institutions can ensure survival, Affectiva will still need to tackle its main growth challenge. While re-opening SDKs to developers may unlock new use cases and revenue sources, and potentially trigger network effects, this solution would diminish Affectiva’s control over how the technology is used. It may accelerate growth, but it also brings potential Black Mirror-style scenarios closer.

[1] https://www.affectiva.com/emotion-ai-overview/

[2] https://www.crunchbase.com/organization/affectiva#section-overview

[3] https://www.affectiva.com/product/affdex-for-market-research/

[4] https://www.forbes.com/sites/samarmarwan/2018/11/29/affectiva-emotion-ai-ceo-rana-el-kaliouby/#5d6441821572

[5] https://www.wpp.com/news/2011/07/kantar-makes-strategic-investment-in-affectiva

[6] http://go.affectiva.com/auto

[7] https://newsroom.accenture.com/news/accenture-faurecia-and-affectiva-team-to-develop-the-car-cabin-of-the-future.htm

[8] https://xconomy.com/boston/2018/03/21/affectiva-launches-a-i-tech-to-help-cars-sense-your-emotions/

[9] https://tractica.omdia.com/newsroom/press-releases/artificial-intelligence-software-market-to-reach-118-6-billion-in-annual-worldwide-revenue-by-2025/

[10] https://imotions.com/contact-us/

[11] https://www.affectiva.com/wp-content/uploads/2017/03/Does_Facial_Coding_Generalize_Across_Cultures_ASIA.pdf

[12] https://www.technologyreview.com/2020/02/14/844765/ai-emotion-recognition-affective-computing-hirevue-regulation-ethics/


Student comments on Affectiva: building AI that reads human emotions

  1. Thank you for sharing! I learned a lot from your post. I have always thought of emotions as a very distinctly human feature, so it’s interesting that ML algorithms are becoming quite good at identifying them.

    I wonder in the future if there might be more of a role for emotion AI in mental health care as well. Being able to detect emotions — particularly changes in them — might be able to help physicians triage patients and determine when someone might need a higher level of care. I agree that different cultural norms and manners of expressing emotion will have to be considered.

  2. Great article! I am a huge fan of Affectiva and I see many applications for their service, such as assessing emotions in an interview, lie detection, analyzing speeches, and even dating! That’s why I am not really worried about their growth potential, but I do see the risk in the other challenges you mentioned.

  3. Great article! This technology has the potential to have applications with tremendous effects. Given that, I’m curious whether there needs to be a specific rollout plan which starts with the less risky applications first before moving into Black Mirror territory – thinking how this could be used in the legal profession to determine the validity of testimonies or even verdicts. If that is the case, then the company will need to retain a lot of control, as you previously mentioned.

  4. Great article – thank you for sharing!

    I agree with the challenges that you’ve mentioned.
    On the data perspective, I agree that the expression of emotions varies between cultures, gender, etc., and it also varies based on the situation the person is in (stressful, personal, professional, etc.). In addition, research conducted by the Association for Psychological Science states that “the relationship between facial expression and emotion is nebulous, convoluted and far from universal”. Another perspective lies with the ability of humans to understand their own emotions – if most of the time we don’t know the output (e.g. am I happy? am I sad?), how can we train a model even if we have all the needed inputs? (I’m not sure unsupervised learning works in this context.)

    I also completely agree with the ethical perspective. Facial recognition technologies are said to be used in certain situations to further racial segregation and exploitation. Focusing on emotional AI, the worry would also be on the undesirable use cases that can arise from its use (e.g. retailers exploiting sadness or happiness to push products).

  5. Thank you for sharing! I was curious about Affectiva’s technology from an academic standpoint because it’s a fascinating next step towards machines emulating emotion, once they can perceive or understand it. I wonder, however, whether there is a robust pathway to profitability for a technology like this. As of now, it seems more like an interesting thought experiment than a killer application for AI.
