Analyzing facial expressions to increase emotional resonance of video ads
Affectiva is winning by using digital technology to create and capture value in the market research industry.
Companies like Mars and Unilever spend billions per year on video advertisements. Traditionally, they have relied on focus groups and surveys to test and refine their campaigns. While focus groups allow market researchers to accurately assess consumers’ reactions and involvement, they are expensive to scale in terms of participants and geographic reach. Surveys are more scalable but often lack accuracy as they rely on participants’ self-reported emotional reactions.
Affectiva applies sophisticated digital technology to achieve both high accuracy and almost unlimited scale. Market research participants simply log in on Affectiva’s website using their own devices and watch an ad – or any other video content that is being tested – from anywhere in the world. Through each device’s webcam, Affectiva’s software records and analyzes viewers’ facial expressions using computer vision algorithms and a database of more than one billion facial frames. It measures discrete metrics such as smile, surprise, dislike, attention and confusion as well as continuous metrics such as valence and expressiveness. Advertisers can use a cloud-based user interface to analyze emotion metrics moment by moment and see which specific parts of an ad might need to be tweaked. They can also compare reactions of different groups of viewers to gauge the impact of an ad across demographic and regional differences. These insights allow companies to increase the emotional resonance of their marketing content, resulting in more effective advertisements.
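To make the moment-by-moment analysis concrete, here is a minimal sketch of the kind of aggregation an advertiser’s dashboard might perform: averaging per-frame valence scores into per-second scores and flagging the seconds where the audience’s reaction dips. The frame data and the threshold are invented for illustration; Affectiva’s actual output format is not shown in this post.

```python
from statistics import mean

# Hypothetical per-frame classifier output: (timestamp in seconds, valence).
# Real scores would come from a computer-vision model; these are made up.
frames = [
    (0.0, 10), (0.5, 15), (1.0, 40), (1.5, 55),
    (2.0, -30), (2.5, -45), (3.0, 5), (3.5, 20),
]

def per_second_valence(frames):
    """Average frame-level valence into one score per second of the ad."""
    buckets = {}
    for t, v in frames:
        buckets.setdefault(int(t), []).append(v)
    return {sec: mean(vals) for sec, vals in sorted(buckets.items())}

def flag_weak_moments(scores, threshold=0):
    """Return the seconds where average valence dips below the threshold --
    candidate spots for an editor to tweak."""
    return [sec for sec, v in scores.items() if v < threshold]

scores = per_second_valence(frames)
print(scores)                      # {0: 12.5, 1: 47.5, 2: -37.5, 3: 12.5}
print(flag_weak_moments(scores))   # [2]
```

In this toy run, second 2 of the ad would be flagged as the moment where viewers reacted negatively.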
Affectiva captures the value it creates for customers using a software-as-a-service model. Instead of buying the software, customers are charged per usage. Affectiva has partnered with market research companies such as Millward Brown and InsightsExpress to acquire new customers for its innovative technology.
Like other digital winners, Affectiva has benefited from extremely low marginal costs compared to traditional industry players, a first-mover advantage, and the use of application programming interfaces (APIs) and software development kits (SDKs) to spur innovation by third-party developers. Affectiva’s marginal cost of analyzing the emotional reactions of an additional viewer consists of nothing more than the negligible required computing power. While the same is true for online surveys, for Affectiva each additional user brings not only costs but also benefits: a machine learning algorithm improves its software with every facial expression it analyzes. This continuous improvement allows Affectiva to realize a significant first-mover advantage. Since its launch in 2011, the company has tested 11,000 media units and gathered facial expressions from over 2 million face videos in over 75 countries, building the world’s largest emotion analytics database. To gather even more data and explore use cases beyond its market research business, Affectiva offers third-party developers APIs and SDKs to include its emotion analytics technology in their applications. In the future, this may allow Affectiva to win in other markets as well. For example, smart televisions might leverage Affectiva’s software to understand people’s movie-watching preferences and refine their recommendation engines accordingly.
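The smart-TV example above can be sketched in a few lines. Suppose an emotion-analytics SDK reported an average “joy” score for each title a viewer watched; a recommendation engine could then re-rank candidate titles by the viewer’s measured affinity for each genre. All titles, genres, and scores below are invented, and this is not Affectiva’s actual SDK interface.

```python
# Hypothetical per-title emotion summaries: (genre, average joy while
# watching, on a 0-100 scale). Invented for illustration.
watch_history = [
    ("documentary", 82),
    ("drama", 35),
    ("documentary", 74),
]

candidates = [
    ("ocean documentary", "documentary"),
    ("legal thriller", "drama"),
]

def genre_affinity(history):
    """Average the joy expressed per genre across the viewing history."""
    totals, counts = {}, {}
    for genre, joy in history:
        totals[genre] = totals.get(genre, 0) + joy
        counts[genre] = counts.get(genre, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def rank(candidates, affinity):
    """Order candidate titles by the viewer's affinity for their genre."""
    return sorted(candidates, key=lambda c: affinity.get(c[1], 0), reverse=True)

affinity = genre_affinity(watch_history)
print(rank(candidates, affinity))
# [('ocean documentary', 'documentary'), ('legal thriller', 'drama')]
```

Here the viewer’s measured enjoyment of documentaries (average joy 78 vs. 35 for drama) pushes the documentary to the top of the list, rather than relying on self-reported ratings.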
Student comments on Analyzing facial expressions to increase emotional resonance of video ads
This is brilliant and scary at the same time. I think the implications of this can be much greater than smart TV recommendations, for example, it could influence results of elections if political candidates adapt their speeches based on the audience’s emotional reactions.
Since we’ve been talking about Google and its strategy a lot in the class, it will be interesting to see whether Google enters this market and if it does, whether it will be through acquisition or organically. I think this technology can really improve Google’s ads targeting and customization making it even more powerful. But, in the wrong hands this can become the Thought Police Orwell envisioned.
Hi Erika, thanks for your post!
Affectiva already analyzed viewers’ emotional reactions to the election debate between Obama and Romney in 2012. Here is the paper Affectiva published about their findings:
Hi hawkeye, really like this post! I wonder if you’ve ever heard of Cogito? It’s doing something related but with voice instead of video. Cogito analyzes voice for behavioral data and then looks for usable information about the speaker. I actually think it could be interesting if the two companies worked together.