http://www.digitaljournal.com/internet/q-a-the-startup-that-s-using-ai-to-protect-children-online/article/569069

Q&A: The startup that's using AI to protect children online Special

Posted Mar 21, 2020 by Tim Sandle
Many parents worry about what their children might stumble across online. L1ght is an anti-toxicity startup using AI to detect and filter harmful online content to protect children. We talk with the company's CEO, Zohar Levkovitz.
An example of online education
Helgi Halldórsson (CC BY-SA 2.0)
L1ght uses proprietary algorithms to provide a solution for social networks, communication apps, gaming platforms, and hosting providers to identify and eradicate many of the dangers for children associated with digital technology, such as cyberbullying, harmful content, hate speech, and predatory behavior. The company has built technology designed to detect and predict toxic content in online text, audio, videos, or images.
To learn more, Digital Journal spoke with L1ght’s CEO, Zohar Levkovitz (who also happens to be a judge on the Israeli version of the TV show Shark Tank).
Digital Journal: How dangerous is some online content for children?
Zohar Levkovitz: The more time children spend on online platforms, the more they are exposed to toxicity, abuse, and predatory behavior. The number of young people who encounter dangerous content on a regular basis is constantly rising: more than 1 in 3 teens have received threats online, and more than half have experienced an incident of cyberbullying. 83 percent of young people want to see social networks act against online bullying. Clearly, this has consequences offline too: 8 percent of 9-10 year-olds have thought about or attempted suicide.
These statistics are shocking and are only getting worse, and both big tech firms and legislators have been unable to reverse this deeply worrying trend. That’s why we created an AI-based platform capable of identifying cyberbullying, harmful content, hate speech, and predatory behavior, and giving social networks and hosting providers the tools to remove toxicity from their platforms.
DJ: Why did you develop L1ght?
Levkovitz: I was personally exposed to the terrifying numbers I just mentioned after living in California, where teenage self-harm is at an all-time high. Many attribute these troubling statistics to social networks. My cofounder’s son was even approached online by a predator while playing a popular game.
We realized then that existing measures, which react to toxic behavior only after it has been seen and flagged by users or moderators, weren't tackling the problem at its source. We want networks and platforms to ensure a safe environment and are developing the most advanced technology to do just that.
DJ: How does L1ght work?
Levkovitz: L1ght provides social networks, hosting providers, multiplayer games, and other platforms with an API. Our product plugs into the customer's back end and works directly with them to identify and prevent toxic content on their platforms. Our proprietary algorithms' ability to learn the context of a conversation or online interaction is a central part of what makes our AI more reliable, complex, and sophisticated than other technologies currently available. The platform analyzes text alongside images, videos, and voice recordings to detect toxic content. We can even monitor actions, such as whether someone is being repeatedly kicked out of a group chat.
The precise nature of the products we deploy differs depending on the client's needs. For instance, if we are working with a messaging service, we can analyze communications to identify predatory behaviors, whether that be cyberbullying or shaming. If we are working with a hosting company, we can analyze millions of websites to identify harmful types of content.
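The back-end integration described above can be pictured with a short Python sketch. Everything here is a hypothetical illustration, not L1ght's actual API: the `ModerationResult` type, the `moderate_message` function, and the trivial keyword check standing in for the real classifier are all invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a back-end moderation hook. The names and the
# classification logic are illustrative only; L1ght's real service would
# use learned models, not a word list.

@dataclass
class ModerationResult:
    toxic: bool
    categories: list = field(default_factory=list)  # e.g. ["bullying"]
    confidence: float = 0.0

def moderate_message(text: str, context: list) -> ModerationResult:
    """Classify one chat message before it is delivered.

    `context` (the preceding messages) is accepted but unused here;
    a real context-aware model would condition on it.
    """
    flagged = {"stupid", "loser"}  # placeholder word list, not a real model
    hits = [w for w in text.lower().split() if w in flagged]
    return ModerationResult(
        toxic=bool(hits),
        categories=["bullying"] if hits else [],
        confidence=0.9 if hits else 0.1,
    )

# A platform's back end would gate outgoing messages on the verdict:
result = moderate_message("you are such a loser", context=[])
if result.toxic:
    print("blocked:", result.categories)
```

In a real deployment the classification would happen server-side via the vendor's API, with the platform deciding whether to block, warn, or escalate based on the returned categories and confidence.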
DJ: How did you test out the technology?
Levkovitz: We are at a commercial phase, so it's actually been running live with an immense amount of real data, and with real results, for enterprise clients.
But before that came research. After two years of extensive work with a team of world-class PhDs, data scientists, and cyber experts, we were able to develop algorithms that effectively think like kids and their potential attackers as they analyze text, images, video, voice, and sound to identify toxic online content.
DJ: How do you ensure L1ght stays up-to-date? Does it have the ability to ‘learn’?
Levkovitz: L1ght's algorithms are constantly learning: they leverage machine intelligence (big data, deep learning) and human knowledge to analyze and predict online toxicity in near real time.
Having proven that deep learning can be a much more effective method of identifying toxicity than today's common practice of dictionary-based analysis, we plan to use our newly acquired seed funding to expand and enhance our predictive capabilities.
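The weakness of dictionary-based analysis that Levkovitz refers to is easy to demonstrate. The sketch below (an illustration written for this article, not L1ght's code) shows a word-list filter producing both a false positive and a false negative on the same tiny blocklist, because it cannot see context:

```python
# A dictionary-based filter flags any message containing a blocklisted
# word, regardless of how the word is used.
BLOCKLIST = {"kill", "hate"}

def dictionary_filter(text: str) -> bool:
    """Flag a message if any blocklisted word appears in it."""
    return any(w in BLOCKLIST for w in text.lower().split())

dictionary_filter("i will kill you")           # True  (correctly flagged)
dictionary_filter("this game will kill time")  # True  (false positive)
dictionary_filter("ur completely worthless")   # False (false negative)
```

A learned model that conditions on the surrounding conversation can, in principle, separate the threat from the idiom and catch abusive phrasing that uses no blocklisted word at all, which is the gap context-aware deep learning aims to close.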
We want to be able to stop harmful acts in their tracks, before any actual harm can be done to children. We plan to move into a new phase of R&D in order to improve our platform’s voice recognition capabilities. Being able to accurately detect shaming and bullying inside game chat channels is critical to protecting kids.
Last but not least, we are in the process of creating a “full cycle” platform of detection and prediction, with an extremely agile moderation system integrated into it. That way, our customers will experience both proactive detection and manageable moderation. The result will be an end-to-end solution for detecting and mitigating online toxicity.
DJ: What is your marketing strategy?
Levkovitz: We have many discrete strategies, but one of the main channels revolves around creating massive brand awareness, as well as establishing thought leadership in the media.
Online toxicity is a major societal issue that has been largely created by tech companies, and we believe that tech companies must clean up the mess and help solve the problem taking place on their platforms.
Much like the auto industry builds cars with safety belts, we as an industry need to build safety precautions into our platforms.
Tech companies need to be aware that in order to solve this issue, they need a holistic response which incorporates parental awareness and thorough education for children about potential dangers on the Internet. Parental interest and involvement in their children’s online activity is a key component of that. Spreading this message is the best way we can protect children from online toxicity.