AI as a safeguard against cyberbullying

Children face day-to-day risks on the internet. AI could be a solution to protect them from such risks.

When I was a child, I didn’t have a mobile phone or computer with which to explore the internet. Now, many children have the means to enter the digital world. It is a wonderful thing that they can access vast amounts of information and interact with thousands of people online, but it also poses a huge challenge for parents to protect their children from the risks that exist in the digital world.

Children between the ages of 8 and 15 reportedly use their mobile phones for an average of 40 hours per week.[1] This means they spend a great deal of time online without any adult supervision. Because it is difficult to see exactly what is happening on their screens, it is estimated that more than 43% of kids have been bullied online, that only 1 in 10 told an adult about the abuse, and that more than 68% have received a distressing private message.[2]

About the company

Keepers, a Series A start-up based in Israel, is trying to solve this issue using AI. The company offers a mobile app that detects (1) bullying and abusive language, (2) sexual or unsolicited language, (3) unsolicited contact from strangers, (4) mental health concerns, and (5) harmful and adult-content websites; it also (6) tracks app time usage and filters banned websites, and (7) reports location. Currently, the app can analyze more than 10 languages and can monitor all popular chat apps and social media platforms, such as Instagram, Snapchat, TikTok, and WhatsApp.

Using it is very simple for parents. A parent downloads the app, signs up, and shares a link with her child’s device. The child then downloads the app using that link. That’s it.

Technology

The company uses multiple technologies to provide its service. First, its algorithm can detect dangerous situations by drawing on a vast database. It also uses natural language processing to analyze the content children are viewing and to detect the meaning behind words. Machine learning also helps detect newly trending words and texting terminology. Their database must be improving daily, as they detect more than 20,000 dangerous situations every day.
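
Keepers does not publish its models, but to make the idea concrete, here is a minimal sketch of how abusive-language detection of this kind might work, assuming a simple TF-IDF text classifier built with scikit-learn. The training messages, the threshold, and the flag_message helper are all invented for illustration and are not the company’s actual approach.

```python
# Hypothetical sketch: flagging potentially abusive chat messages with a
# TF-IDF + logistic-regression classifier. Training data and threshold are
# invented for illustration; Keepers' real models and data are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = abusive/bullying, 0 = benign.
messages = [
    "nobody likes you, just disappear",
    "you are so stupid and ugly",
    "want to play football after school?",
    "good luck on your exam tomorrow!",
    "everyone at school hates you",
    "thanks for helping me with homework",
]
labels = [1, 1, 0, 0, 1, 0]

# Character n-grams help cope with misspellings and texting slang ("u r dum").
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(messages, labels)

def flag_message(text: str, threshold: float = 0.7) -> bool:
    """Return True if the message should be surfaced to a parent."""
    prob_abusive = model.predict_proba([text])[0][1]
    return prob_abusive >= threshold

print(flag_message("ur so dumb nobody wants u here"))  # likely True
print(flag_message("see you at practice tonight"))     # likely False
```

A production system would of course need far more data, multilingual models, and continual retraining to keep up with new slang, which is where the 20,000 daily detections mentioned above would feed back in.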

The company is also conscious of privacy concerns. It says that it encrypts all content, never misuses personal information, and never shares information with any third party. In addition, only content flagged as offensive is shown to parents, so that the rest of the child’s personal digital space stays private.
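
The real architecture is not public, but the stated policy of encrypting everything and surfacing only flagged content to parents could look roughly like the sketch below. It assumes symmetric encryption via the cryptography library’s Fernet; store_message and parent_view are hypothetical helpers, not Keepers’ API.

```python
# Hypothetical sketch of the stated privacy policy: every message is encrypted
# at rest, and only messages flagged as concerning are ever decrypted and
# shown to the parent.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, a key managed per family/device
cipher = Fernet(key)

def store_message(text: str, flagged: bool) -> dict:
    """Encrypt every message; keep only the flag in the clear."""
    return {"ciphertext": cipher.encrypt(text.encode()), "flagged": flagged}

def parent_view(records: list[dict]) -> list[str]:
    """Parents see only the messages that were flagged as concerning."""
    return [
        cipher.decrypt(r["ciphertext"]).decode()
        for r in records
        if r["flagged"]
    ]

records = [
    store_message("see you at practice tonight", flagged=False),
    store_message("nobody at school likes you", flagged=True),
]
print(parent_view(records))  # only the flagged message is shown
```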

Challenges and Opportunities

In my opinion, there is a huge opportunity here, as there is strong demand from parents for a better way to protect their children in the digital space. The company also seems to be pursuing a clever way to make its business model viable by going the B2B2C route and collaborating with big companies such as Vodafone.

One of the challenges would be competition. There are other players in the space, such as Jiminy, and these companies seem to offer parents fairly similar solutions, i.e., an app to monitor children’s online activity. It is an open question how Keepers can differentiate its product from the others.

The other challenge is how parents can use the app constructively. As noted above, the app is designed to show parents only inappropriate content, in order to protect children’s privacy. However, having such an app on their phones may still make children feel that they have no privacy and are not trusted by their parents. This could create tension between children and parents, and it is not clear whether the company provides any support on how to handle that conversation, or on what to do when the app does detect something concerning.

Potential path?

Considering both the competition and the potential tension between children and parents, it might be interesting for the company to pursue a more integrated path, in which it also provides parents and children with resources on how to respond when bad things happen online, or offers a counselling service to children who are suffering from cyberbullying. If that is the direction the company takes, it will definitely need to shift its resources, processes, and priorities to deliver these new services.

Alternatively, if it wants to remain a modular player providing the specific service of detecting dangerous activity, then the path to take is to scale quickly and acquire more end customers, aiming for an exit through acquisition by a large telecom or tech company.

In addition, if the competition comes down to having a better algorithm and a bigger database, then the company should double down on broadening its partnerships so that it can gather data at a faster pace than its competitors.

All of these options have pros and cons, so let’s see what happens in the next few years.

 

[1] “Q&A: How To Use AI To Keep Children Safe Online (Includes Interview).” Digital Journal, 2019. http://www.digitaljournal.com/tech-and-science/technology/q-a-how-to-use-ai-to-keep-children-safe-online/article/562892.
[2] Keepers company website. http://www.keeperschildsafety.net/.

Student comments on AI as a safeguard against cyberbullying

  1. Dear Kanako, thank you very much for this interesting blog! As you noted, I’d be very interested in seeing which levers Keepers could pull to pursue a more integrated approach in the future. Do you think its current and potential competitors will pursue an integrated approach as well?
