SafeHome.org may receive compensation from some providers listed on this page. Learn More
Ever heard of five finger fillet? I’ll give you a hint: it involves your fingers and a large knife moving at high speed between them. Desperados play it in the video game Red Dead Redemption. (Liam Hemsworth also played it on a Hunger Games press junket … and stabbed himself in the pinky.)
Well, here’s some news for you. Our kids may also be playing it — with a knife or a pen — at school or in their rooms, blindfolded or not. That all depends on which particular version of the #fivefingerfilletchallenge they saw on TikTok.
Let me be clear. TikTok didn’t create five finger fillet, a.k.a. the knife game. It’s been around for centuries.
But the knife game is a fairly typical example of the kind of alarming trends making the rounds on the web these days. What else should be on our radars heading into 2023? A lot more than we can fit into one article! But these four trends should put every parent with web-aged kids on high alert.
Did You Know? Then-Governor of Illinois Pat Quinn accepted the potentially lung-popping “cinnamon challenge” in 2012, which highlights one important fact about the culture of dangerous online stunts: Sheer ignorance is a key driver in participants’ decisions to play along.
The statistics are devastating but also head-scratching. TikTok, Snapchat, Google, Dropbox, and Microsoft collectively flagged nearly 1 million pieces of child pornography circulating on their servers in late 2020.1
Think that’s bad? It’s peanuts compared to Facebook’s 5.4 million.
These aren’t unique pieces of content. The worst of the web tends to get shared over and over again. How, in 2020, was this possible?
Part of the answer is that companies like Facebook and TikTok aren’t doing their jobs. If a billion dollars is pocket money to your company, you’ve got to be investing a lot more in keeping young users safe.
But there’s something else parents absolutely need to understand.
Kids who fall victim to cyber sex predators are almost always duped into the “friendship” under false pretenses, in full light of day, on their favorite social media (or gaming) platforms. One DM leads to another leads to another — until the sleazeball gets what he wants: a sex pic or video that gets passed around the web a million times.
What you can do:
FYI: According to a national poll conducted by C.S. Mott Children’s Hospital, 32 percent of 7- to 9-year-olds and 49 percent of 10- to 12-year-olds in the U.S. use social media today.2
Think back to December 2021, when Meta launched its VR social media platform, Horizon Worlds.
Not picturing it? Imagine normal Facebook but with an added feature where you could be virtually confronted by the raving troll who told you just yesterday that the moon was a huge server built by Google to enslave Earth.
Before launch, Meta needed to test its new product. In one of those test sessions, some goon in an Oculus headset (presumably a beta tester) groped another beta tester.3 Meta’s response? She should have activated the “Safety Zone” tool, which would have placed an invisible “anti-pervert” force field around her avatar.
As a parent, I can’t imagine the ramifications of a virtual Facebook city populated, at least in part, by real predators hiding behind avatars. Do I let my kids take a stroll there? Do I take a stroll there?
What you can do:
Great But Not Really: In March this year, Meta rebranded Safety Zone (see above) as the “Personal Boundary” tool.6 When you switch it on, only “friends” can get to within four feet of your avatar. It’s kind of sad that it’s come to this. But also, we know exactly how predators operate online. They befriend our kids before they inflict the damage.
We’ve written extensively about the dangers the dark web poses for our children. Typically, the conversation revolves around bad actors and their bad deeds: a hacker busting into a hospital database and stealing thousands of health records, for example, or a sicko paying for infant pornography.
What often gets left out of the discussion is all the infrastructure that’s enabling these criminals to peddle their illegal wares. You need a lot of it — a platform and host like WordPress, a domain, and an internet service provider, bare minimum. If any one of those channels gets bottlenecked, down goes your soapbox.
Here’s where our kids come into the equation.
Legally, service providers aren’t liable for the user content that ends up on their platforms.7 (This is a huge conversation, but that’s the gist.) More worrisome for parents: They don’t particularly care who owns it as long as they’re paying rent. How many of those users are exploiting children? Last year, the National Center for Missing and Exploited Children logged 29 million reports of possible online child sexual abuse.8
This doesn’t have to be the case. Just ask the neo-Nazi Daily Stormer, which got ousted from the internet in 2017.9 If you ask me, it’s about time the companies in charge show a little muscle against the human cockroaches lurking in the web’s unlit cracks.
Until then, parents, beware.
What you can do:
Did You Know? Anti-human trafficking nonprofit Thorn reports that 1 in 7 children ages 9 to 12 admitted to sharing a nude picture online in 2020. Half of those children said they’d sent a photo to someone they didn’t know in real life, and 41 percent of them knew the recipient was an adult.10
When our kids get harassed online, the culprit is usually a bully, right? Not necessarily, according to a massive (41,000 people across 24 countries) new study by the Boston Consulting Group (BCG) and the Global Cybersecurity Forum.11
Cyberbullying is still a massive problem. Twenty percent of all the threats BCG recorded involved bullying on Instagram. But this year, ads and popups topped the list of children’s internet gripes (47 percent), followed by exposure to inappropriate content (36 percent), hacking and phishing (17 percent), and unwanted flirting (17 percent).
That’s the nitty-gritty. But BCG’s study also paints a broader picture of a web that’s evolved from a community of sharers into a wildly unpredictable house of mirrors with dangers — large and small — lurking around every corner. To us parents, the motley, uncontrollable nature of these threats can feel a little like a relentless army of zombies out to destroy our children. (Or maybe that’s just me.)
If there’s any light at the end of this tunnel, it’s a pretty big one. Eighty-three percent of the children BCG interviewed said they would go to their parents for help if they felt threatened.
Good job, little ones! (If only more of you actually did. According to the folks with the pads and pens, only 39 percent of parents said their kids followed through.)
What you can do:
FYI: We all hate popup ads. They’re annoying, lame, and, well, popping up everywhere. But they can also be dangerous for kids. Early this year, we reported on the GriftHorse scam, in which innocent-looking popups in harmless-looking apps duped 10 million users of all ages into unknowingly paying for $100 million worth of premium subscriptions.
Kids have been pulling reckless stunts forever. The difference between whatever dumb games we used to play in the past and what’s happening now is that today’s trends tear like wildfire over the internet. The allure of fame and a global audience is the lighter fluid.
And that’s just TikTok challenges. The metaverse is coming, too, and the only thing we can count on for sure is an invasion of privacy.
Sometimes it seems like our kids are savvier about the internet than we parents are. (That’s because they are. They grew up on it.) But that doesn’t mean they can deal with everything they experience online. They’re kids, after all.
If there’s anything that gives me hope, it’s that 83 percent of kids in the BCG’s far-ranging study said they trusted their parents enough to talk to them if they felt threatened online. Let’s take that as a positive sign that if we do take the time to listen — and to talk, instruct, and act — we can lead our kids into adulthood (virtually) unscathed.
Fast Company. (2021, Jul 14). On social media, child sexual abuse material spreads faster than it can be taken down.
https://www.fastcompany.com/90654692/on-social-media-child-sexual-abuse-material-spreads-faster-than-it-can-be-taken-down
National Poll on Children's Health. (2021, Oct 18). Sharing too soon? Children and social media apps.
https://mottpoll.org/reports/sharing-too-soon-children-and-social-media-apps
MIT Technology Review. (2021, Dec 16). The metaverse has a groping problem already.
https://www.technologyreview.com/2021/12/16/1042516/the-metaverse-has-a-groping-problem
Scientific American. (2016, Oct 4). Are Virtual Reality Headsets Safe for Children?
https://www.scientificamerican.com/article/are-virtual-reality-headsets-safe-for-children/
Meta. (2022). New and Private Sessions.
https://www.oculus.com/horizon-worlds/learn/tutorial/new-private-sessions/
Meta. (2022, Feb 4). Introducing a Personal Boundary for Horizon Worlds and Venues.
https://about.fb.com/news/2022/02/personal-boundary-horizon/
The Washington Post. (2022, Sept 30). Section 230: The little law that defined how the Internet works.
https://www.washingtonpost.com/technology/2020/05/28/what-is-section-230/
RAINN. (2022, Aug 25). What is Child Sexual Abuse Material (CSAM)?
https://www.rainn.org/news/what-child-sexual-abuse-material-csam
The Washington Post. (2022, Aug 25). Gatekeepers: These tech firms control what’s allowed online.
https://www.washingtonpost.com/technology/2021/03/24/online-moderation-tech-stack/
Thorn. (2021, Nov 12). Thorn research: Trends confirm need for parents to talk about online safety with kids earlier, more often.
Boston Consulting Group. (2022, Sept 21). Why Children Are Unsafe in Cyberspace.
https://www.bcg.com/publications/2022/why-children-are-unsafe-in-cyberspace