Fake moustache trick exposes gaps in UK online age checks


A 12-year-old boy reportedly fooled an online age verification system by drawing a moustache on his face with an eyebrow pencil. The system then verified him as 15, raising fresh questions about how reliable facial age estimation tools are under the UK’s Online Safety Act.

The case comes from new research by Internet Matters, which looked at how families are experiencing online safety changes since the Online Safety Act started reshaping platform rules in the UK. The findings show clear progress in some areas, but also reveal that children can still bypass age checks with simple tricks.

The Online Safety Act requires many platforms to assess child safety risks and use stronger protections, including age assurance where harmful content may reach children. However, the research suggests age checks alone cannot solve the problem if children can beat them with fake birthdays, borrowed accounts, VPNs, or basic appearance changes.

Why the moustache case matters

The fake moustache example has become a symbol of a wider problem. Age estimation systems can help platforms make better safety decisions, but they can also make mistakes when children alter how they look on camera.

Internet Matters found that 53% of children were recently asked to verify their age on platforms. At the same time, 46% of children said age checks are easy to bypass.

That gap matters because age checks now sit at the centre of the UK’s child online safety framework. If platforms rely on weak verification methods, harmful content can still reach younger users while families gain a false sense of protection.

At a glance

  • Fake moustache case: a 12-year-old was reportedly verified as 15 after drawing facial hair
  • Children asked to verify age: 53% said they were recently asked for age checks
  • Children who see more safety features: 68% reported more tools such as reporting and filtering
  • Children who bypassed age checks: 32% admitted bypassing them
  • Children who think checks are easy to bypass: 46%
  • Children still seeing harm: 49% said they experienced online harm in the past month
  • Parents who allowed bypassing: 26%

Children are noticing some improvements

The report does not show total failure. Many children and parents have noticed visible changes since the Online Safety Act began affecting platforms and services.

Internet Matters found that 68% of children and 67% of parents reported seeing more safety features, including ways to report or filter content. In addition, 54% of children said the content they recently saw online had become more child friendly.

Some children also noticed limits on risky features, such as messaging, sharing, or contact from strangers. These changes suggest platforms are taking action, even if the results remain uneven.

Age checks remain easy to bypass

The biggest concern is that age assurance systems still rely on methods children know how to manipulate. Some children enter a false birthdate, use a parent’s login, borrow another device, or use facial spoofing tricks.

The moustache example shows the weakness of systems that judge age from appearance alone. A child who looks slightly older, changes lighting, uses makeup, or copies an older user’s image may still pass a check that should block them.

This does not mean every facial age estimation system works the same way. It does mean platforms need stronger testing, better fallback checks, and ongoing monitoring to confirm whether their systems work in real homes, not only in lab settings.
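To make the "fallback checks" point concrete, here is a minimal sketch of that idea. Everything in it is invented for illustration (the `AgeEstimate` type, the thresholds, the `gate_access` function); it does not reflect any real vendor's API. The key design choice is that a borderline or low-confidence facial estimate is treated as inconclusive and escalated to a stronger check, rather than granting access:

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Hypothetical output of a facial age estimation model."""
    estimated_age: float
    confidence: float  # 0.0 to 1.0

def gate_access(estimate: AgeEstimate, required_age: int = 16,
                margin: float = 3.0, min_confidence: float = 0.9) -> str:
    """Decide an outcome from a single facial age estimate.

    Returns "allow", "deny", or "escalate" (fall back to a stronger
    method such as document verification). The margin means a borderline
    estimate is never trusted on its own: a 12-year-old who scrapes past
    the threshold with makeup should land in "escalate", not "allow".
    """
    if estimate.confidence < min_confidence:
        return "escalate"          # model is unsure: never auto-allow
    if estimate.estimated_age >= required_age + margin:
        return "allow"             # clearly above the required age
    if estimate.estimated_age < required_age - margin:
        return "deny"              # clearly below the required age
    return "escalate"              # borderline: require a second signal
```

Under this pattern, a spoofed estimate of 15 against a 16+ requirement would trigger escalation rather than access, which is the layered behaviour the research suggests is currently missing.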

What the Online Safety Act requires

The UK’s child safety regime under the Online Safety Act requires in-scope services to assess whether children can access them, identify risks, and put protections in place. Ofcom’s guidance says providers may need highly effective age verification, age estimation, or both when harmful content could reach children.

Ofcom opened an enforcement programme on 24 July 2025 to monitor whether relevant services are using highly effective age assurance to prevent children from encountering harmful content. The protection of children duties came into force the following day.

The law focuses on outcomes, not just visible safety prompts. A platform cannot simply add a check and assume the job is done if children can still bypass it at scale.

Privacy concerns are also growing

Families are not only asking whether age checks work. They also want to know what happens to the data collected during verification.

Age assurance can involve facial scans, identity documents, app-based checks, payment checks, or third-party verification providers. Each method creates different privacy risks, especially when children or parents do not understand who stores the data and for how long.

This creates a difficult trade-off. Families want stronger protection from harmful content, but many do not want every platform collecting sensitive identity or biometric data.

What platforms should improve

  • Test age verification systems against real bypass methods used by children.
  • Use layered safety controls instead of relying on one age check.
  • Limit risky features by default for younger users.
  • Improve reporting, blocking, and filtering tools.
  • Reduce harmful recommendations for children and teens.
  • Explain clearly what age data is collected and how long it is kept.
  • Audit third-party age assurance providers regularly.
  • Measure whether children still encounter harmful content after new controls go live.
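The last point can be sketched in code. The snippet below is illustrative only; the session format and field names are invented stand-ins, not any platform's real telemetry. It measures outcomes (did under-18 sessions still encounter flagged content?) rather than outputs (was a prompt shown?):

```python
def harm_exposure_rate(sessions):
    """Fraction of under-18 sessions that encountered flagged content.

    `sessions` is an iterable of (is_minor, saw_flagged_content) pairs,
    a deliberately simplified stand-in for real session telemetry.
    """
    minor_sessions = [saw for is_minor, saw in sessions if is_minor]
    if not minor_sessions:
        return 0.0
    return sum(minor_sessions) / len(minor_sessions)

# Hypothetical samples from before and after a new control ships:
before = [(True, True), (True, True), (True, False), (False, True)]
after = [(True, False), (True, True), (True, False), (False, True)]
# Comparing the two rates shows whether the control changed outcomes,
# not merely whether a visible check was added.
```

A platform tracking this kind of metric over time could tell whether an age check is working at scale or simply being bypassed.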

What parents can do now

Parents should not treat age verification as a complete safety system. It can help, but it cannot replace device settings, platform controls, family rules, and regular conversations.

Families should review account ages, check privacy settings, restrict unknown contacts, and use parental controls where available. Parents should also ask children how they get around restrictions, because many workarounds spread quickly between friends.

The most useful approach combines technical controls with trust. Children are more likely to talk about harmful content if they know they will not lose all access immediately after reporting a problem.

Practical steps for families

  • Check the real date of birth on your child’s main accounts.
  • Turn on built-in teen or child safety settings on major platforms.
  • Review who can message, follow, or add your child.
  • Use device-level controls on iOS, Android, Windows, PlayStation, Xbox, and Nintendo Switch.
  • Talk about fake age checks, borrowed accounts, and VPN use.
  • Ask your child what harmful content they still see online.
  • Report weak or unsafe platform behavior when you see it.
  • Avoid uploading government ID unless you understand who handles the data.

The bigger problem

The fake moustache case shows that online child safety cannot depend on one technical checkpoint. Children are creative, and platforms need systems that account for that reality.

The Internet Matters findings suggest the Online Safety Act has pushed platforms to add more visible safeguards. Still, with 49% of children reporting online harm in the past month, harmful content remains common.

Stronger enforcement, better platform design, safer recommendation systems, and clearer privacy rules may matter more than simply adding more age prompts. Age checks need to work as part of a wider safety system, not as a box-ticking exercise.

Summary

  • A 12-year-old reportedly passed an online age check by drawing a moustache with an eyebrow pencil.
  • Internet Matters found that 53% of children were recently asked to verify their age online.
  • Nearly a third of children said they had bypassed age checks.
  • Almost half of children still reported online harm in the past month.
  • The Online Safety Act has increased visible safety features, but age assurance still has reliability and privacy challenges.
  • Platforms need layered protections, stronger testing, and clearer data handling rules.

FAQ

Did a child really bypass age verification with a fake moustache?

Yes. Internet Matters included an account from a parent describing how a 12-year-old used an eyebrow pencil to draw a moustache and was verified as 15 by an age estimation system.

What does the UK Online Safety Act require?

The Act requires regulated services to assess child safety risks and put protections in place. In some cases, services must use highly effective age verification or age estimation to stop children from encountering harmful content.

How are children bypassing age checks?

Children reported using fake birthdays, borrowed accounts, another person’s device, facial appearance tricks, and other workarounds. Some parents also admitted helping children bypass checks.

Does this mean all age verification systems are useless?

No. Age verification can reduce risk, but it works better when platforms combine it with safer defaults, content moderation, reporting tools, recommender controls, and privacy protections.
