Parents download the app to the child’s device and pair it with their own device. The app never shows a parent what the child is sending or receiving, because we believe that children too have a right to privacy. Parents can, however, manage the risks. For instance, the software might report that the child is being bullied on Instagram, and then guide and advise the parent on how to talk to the child about that risk. Part of the recommendation might be to disable Instagram for a while, but content filtering is only one part; the rest is about giving parents appropriate advice on how to respond to such events. Research shows that when it comes to social media, children know a lot more than their parents, and that is one of parents’ biggest worries. The software teaches them about the threat as it occurs, in real time, and guides them through to a solution.
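As a rough illustration of that privacy-preserving design, an alert delivered to the paired parent device might look something like the minimal sketch below. All names here (RiskAlert, the fields, the pairing identifier) are hypothetical, not SafeToNet’s actual API; the point is simply that the event carries a risk category and guidance, never the message content.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of a privacy-preserving alert: the parent's paired
# device receives the risk category and advice, never the child's messages.

@dataclass(frozen=True)
class RiskAlert:
    child_device_id: str   # identifier of the paired child device
    app: str               # e.g. "Instagram"
    risk_category: str     # e.g. "bullying" -- no message text is included
    advice: str            # guidance for the parent on how to respond
    detected_at: datetime

alert = RiskAlert(
    child_device_id="paired-device-001",
    app="Instagram",
    risk_category="bullying",
    advice="Consider a calm conversation; temporarily disabling the app is one option.",
    detected_at=datetime.now(),
)
```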
Can this solution be used in an organizational environment?
Yes. Our brand is also registered as SafeToWork, which is all about behavioral analytics, a technology that could easily be adapted to the workplace. We’re working with a global brand that had an incident: a member of staff was using Facebook on his phone and opened a pornographic message just as somebody was walking behind him. He was reported and lost his job. You could rightly argue that a business has the right to defend its brand, but employees also have rights, so whom would you favor? People spend so much time on social networks that they could be placing their company in danger. In this particular case, the company had given phones to all its employees, so it felt it had a right to know what they were doing with them. Still, saying that such a product would be unpopular is an understatement.
How is it different when discussing child protection?
Child suicide rates are going up year by year, largely due to cyberbullying and other online risks that kids and their parents are not aware of. We read a lot about abuse and aggression online. In the UK, we’re reaching a point where people have had enough. It’s like we’re living in a social experiment; nobody knows where it will end up, but there are so many problems, and they go far beyond ransomware. The internet isn’t regulated; people say what they like because they get a feeling of anonymity. We all have a duty to do something about it, but how do you know what your child is doing? Most reasonable people would say something has to be done.
What can you tell us about apps like the Blue Whale?
Sadly, Blue Whale didn’t shock or surprise me, because it’s only one of many similar apps that endanger children’s lives and mess with their minds. Children download these apps because they think it’s cool to take risks. There are a number of apps that can get you into trouble. Getting drunk and then being seen online is now a trendy thing, but that’s only the tip of the iceberg. People can hurt themselves and others, or even die, just to get some attention online, and there are more and more apps that encourage such behavior.
With over 5 million apps out there, how could we possibly know which ones are safe for our children and which ones aren’t?
Software like SafeToNet is vital. We can advise parents when these trends emerge, to keep them aware and alert. There’s a Peppa Pig video on YouTube where Peppa takes a knife and cuts her own head off. Similar videos with Blaze and the Monster Machines and other popular cartoons are seen millions of times before YouTube removes them. We can’t know every risk ourselves, so we rely on the community: by building communities of collaborative safeguarding, parents can warn each other and keep their children safer. Our software can filter URLs and apps fairly quickly after they are reported, and inform large volumes of parents. We do this much quicker than Google, because we gain no commercial benefit from displaying them.
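The report-then-filter loop described here could be sketched roughly as follows. This is a hedged illustration under assumed names (Blocklist, REPORT_THRESHOLD); the source doesn’t specify how many reports trigger a block or how the filtering is actually implemented.

```python
from dataclasses import dataclass, field

# Illustrative sketch of community-reported URL/app blocklisting.
# The threshold and data structures are assumptions, not SafeToNet's design.

REPORT_THRESHOLD = 5  # assumed number of independent parent reports before blocking

@dataclass
class Blocklist:
    reports: dict = field(default_factory=dict)  # url -> set of reporting parent ids
    blocked: set = field(default_factory=set)

    def report(self, url: str, parent_id: str) -> None:
        """Record one parent's report; block once enough independent reports arrive."""
        self.reports.setdefault(url, set()).add(parent_id)
        if len(self.reports[url]) >= REPORT_THRESHOLD:
            self.blocked.add(url)

    def is_blocked(self, url: str) -> bool:
        return url in self.blocked

# Usage: five parents report the same harmful video, so it gets filtered.
bl = Blocklist()
for parent in ["p1", "p2", "p3", "p4", "p5"]:
    bl.report("https://example.com/harmful-video", parent)
print(bl.is_blocked("https://example.com/harmful-video"))  # True
```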
Is there any regulation around safeguarding children online?
No, it’s mainly self-imposed regulation. Certain apps set age limits, but those are easy to bypass. Also, many apps endanger kids not because of the content they deliver, but because of the people who use them; what children do on those apps and whom they interact with can vary greatly from one child to another. Facebook openly admits it has over 270 million “undesirable users” on its network, which in its terminology, not clearly defined, means either fake or duplicate accounts. That’s roughly 1 in 10 users, so if kids have an average of 300 people on their friends list, 30 of them could be fake identities. Sextortion is a huge global issue: kids are sending images of themselves to people they have never met face to face.

Our software is designed to identify that, using the multi-faceted tools we deploy. For instance, you can install our own keyboard on your child’s phone to detect changes in his or her behavior. You’d be surprised how much can be determined just from the speed of typing. Typically, in a close relationship, children type quickly without giving it a second thought, while with other people they may be more careful about the words they choose. By modeling the language and emojis being used, and the position of those emojis, you can start to detect changes in behavioral patterns. If I’m normally online at certain hours, and I suddenly use more aggressive language, respond more quickly, or use more or fewer words, there may be a strong likelihood of aggression. It seems to be acceptable to be abusive online. In particular, if I started calling you names, you’d be likely to change your behavior pattern: you might go quiet or raise your voice, and those are the kinds of changes we look for. Typically, people become punchy and use fewer words. That’s how we trained our software to detect differences in mood and block content before a child is hurt.
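As a rough illustration of the baseline-and-deviation logic described above, here is a minimal sketch, not SafeToNet’s actual model: the feature names, the z-score approach, and the threshold are all assumptions. It flags a message when its typing features drift sharply from that child’s own history.

```python
import statistics

# Hedged sketch of behavioral-change detection from keyboard signals.
# Features and threshold are illustrative, not SafeToNet's real design.

FEATURES = ["chars_per_second", "word_count", "response_delay_s"]
Z_THRESHOLD = 2.5   # assumed cutoff; a real system would tune this per child
MIN_HISTORY = 10    # need enough of the child's own messages for a baseline

def is_anomalous(history: list[dict], current: dict) -> bool:
    """Flag the current message if any feature deviates sharply from the child's baseline."""
    for feature in FEATURES:
        baseline = [msg[feature] for msg in history]
        if len(baseline) < MIN_HISTORY:
            continue                      # not enough data to judge this child yet
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev == 0:
            continue
        z = abs(current[feature] - mean) / stdev
        if z > Z_THRESHOLD:               # e.g. suddenly punchier, faster replies
            return True
    return False
```

The key design point, per the interview, is that each child is compared against his or her own pattern, not a global norm, since “normal” typing varies from relationship to relationship.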
What languages does SafeToNet support?
At the moment, SafeToNet is only available in English. Although translating the software is possible, teaching it to identify semantics in different languages could be very tricky. What you might find offensive may not be offensive to me, because we come from different cultures; I might swear a lot, you might not. With our software, parents can allow a certain level of profanity or filter it out completely. So to translate our software, we would need to teach it to read between the lines, identify the subtle differences in how people communicate, and determine what’s acceptable and what’s not.
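The parent-configurable profanity level mentioned here might work something like the sketch below. The lexicon, severity scale, and function names are hypothetical; the interview only confirms that parents can choose how much profanity to allow.

```python
# Hedged sketch of a parent-configurable profanity filter.
# The word list and 0-3 severity scale are assumptions for illustration.

PROFANITY_SEVERITY = {"damn": 1, "hell": 1, "crap": 2}  # word -> assumed severity

def filter_message(text: str, allowed_severity: int) -> str:
    """Mask any word whose severity exceeds what the parent allows."""
    cleaned = []
    for word in text.split():
        severity = PROFANITY_SEVERITY.get(word.lower().strip(".,!?"), 0)
        cleaned.append("*" * len(word) if severity > allowed_severity else word)
    return " ".join(cleaned)

print(filter_message("Well damn that hurt", allowed_severity=0))  # Well **** that hurt
print(filter_message("Well damn that hurt", allowed_severity=1))  # unchanged
```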
What changes can we expect to see in the near future with regard to safeguarding children online?
I could give you lots of different opinions, but one surpasses them all: something has to be done. Many parents talk about how the large corporations should act, but the corporations aren’t doing anything about it. The way I see it, responsibility will slowly shift toward the parents. If I get into my car, I put a seat belt on because it’s the law, but also because it’s safer. Similarly, the whole landscape of online safeguarding will move away from blaming “the system” and toward taking personal responsibility. It is beyond me that newborns are given iPads; I think there will be much greater recognition of how unsafe that is and of the damage it can do to such young children. In the future, parents will never give their child a phone unless it is safeguarded; if that doesn’t happen, the social experiment will end badly. Food packaging carries warnings about health risks, but there are no warnings for mobile devices and apps. Child data privacy will become a standard part of life in the future. If not, who knows where we’ll end up?