Thursday, 2 October 2025

Instagram Raises the Bar on Teen Safety



Instagram’s not just tightening the rules for teen safety – it’s charging ahead with tools that might just become the gold standard for every platform out there.

 



If you've been keeping an eye on the rising concerns around young users online, you'll know things are heating up globally. But Instagram is playing offence instead of defence – and this new update makes that clearer than ever.

Here’s what’s changing, what’s ahead, and what the rest of the industry better start preparing for.
Smarter, Sneakier Age Detection

Forget date of birth – Instagram’s age-detection tech has levelled up, and lying about your age just got a lot harder.

Using AI, Instagram doesn’t rely on what you say your age is, but on what your activity suggests. This includes looking at who you interact with, what type of content you're scrolling through, and who’s following you. Yes, it even keeps tabs on those cringey birthday shout-outs your mates post every year. Because apparently, those help confirm your real age too.
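
For the technically curious, here’s a rough sketch of how signals like these could be combined into an age estimate. To be clear, this isn’t Meta’s actual system – the signal names, weights and example values below are entirely hypothetical, purely to illustrate the general approach.

```python
# Illustrative only: a toy model of behaviour-based age estimation.
# None of these signals, weights or thresholds reflect Meta's real system.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    median_follower_age: float            # estimated ages of accounts following this user
    median_following_age: float           # estimated ages of accounts this user follows
    teen_content_ratio: float             # share of viewed content classed as teen-oriented (0-1)
    birthday_post_age_mentions: list[int] # ages pulled from "Happy 15th!" style shout-outs


def estimate_minor_likelihood(signals: AccountSignals) -> float:
    """Combine behavioural signals into a rough 0-1 likelihood the user is under 18."""
    score = 0.0

    # Social graph: people mostly connect with peers of a similar age.
    if signals.median_follower_age < 18:
        score += 0.35
    if signals.median_following_age < 18:
        score += 0.25

    # Content consumption: heavy teen-oriented viewing nudges the score up.
    score += 0.25 * signals.teen_content_ratio

    # Explicit age mentions in birthday posts are a strong signal.
    if signals.birthday_post_age_mentions and max(signals.birthday_post_age_mentions) < 18:
        score += 0.15

    return min(score, 1.0)


if __name__ == "__main__":
    example = AccountSignals(
        median_follower_age=16.0,
        median_following_age=17.5,
        teen_content_ratio=0.8,
        birthday_post_age_mentions=[15, 16],
    )
    # Accounts scoring above some threshold would get teen settings, pending appeal.
    print(f"Estimated likelihood user is under 18: {estimate_minor_likelihood(example):.2f}")
```

The real models will be far more sophisticated (and far less transparent), but the principle is the same: your behaviour, not the birthday you typed in, drives the decision.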

It’s still not flawless. Mistakes will happen (Meta says they expect some), and teens who get wrongly flagged will be able to appeal. But the tech is evolving – fast – and it’s learning as it goes.

The main win – teen users will get safeguards even when they've tried to game the system. Certain features and types of account interaction will be limited unless Instagram believes a user is truly 18+.
Canadian Teens Targeted

Instagram is now bringing its strict teen protection features to users in Canada. Anyone under 16 will be automatically placed into “advanced security mode,” where certain controls kick in to limit who can message them, what content they’re recommended, and who sees their posts.

U.S. users have been under these tightened settings since late 2023, and now it’s Canada’s turn. Teens can't disable them without a parent. Tougher, yes – but definitely protective.
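
As a rough illustration of what “can’t disable them without a parent” means in practice, here’s a toy model of teen-default settings gated behind parental approval. The field names and defaults are hypothetical, not Instagram’s actual configuration.

```python
# Not Instagram's real configuration — just an illustrative model of protective
# defaults that only a linked parent or guardian can relax.

from dataclasses import dataclass


@dataclass
class TeenAccountSettings:
    account_private: bool = True             # posts visible to approved followers only
    messages_from_strangers: bool = False    # DMs limited to accounts the teen follows
    sensitive_content_filtered: bool = True  # stricter recommendation filtering


def relax_setting(settings: TeenAccountSettings, field: str, parent_approved: bool) -> None:
    """Loosen a protection only when a linked parent has approved the change."""
    if not parent_approved:
        raise PermissionError("A parent or guardian must approve this change.")
    setattr(settings, field, not getattr(settings, field))


if __name__ == "__main__":
    settings = TeenAccountSettings()
    try:
        relax_setting(settings, "account_private", parent_approved=False)
    except PermissionError as err:
        print(err)  # the teen can't switch the protection off on their own
```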

And as governments across the globe start pushing hard for under-18s to have extra layers of safety, this timing doesn’t feel like a coincidence.
The Global Trend Is Crystal Clear

Several countries – France, Greece, Denmark, Spain, Australia, New Zealand, Norway – are all eyeing new restrictions for young users. We're not talking voluntary measures, but actual laws.

Think: cutting younger teens off from social apps altogether, or enforcing 16+ minimum ages.

It’s no longer a matter of if stricter age rules come in, it’s how soon and who gets hit first. Meta clearly sees where the wind's blowing and is now sprinting ahead before governments can force its hand.
What This Means for the Whole Industry

As you’d expect, other platforms are watching this very closely.

TikTok, YouTube, Snapchat – all of them. They can't hide behind vague policies much longer. Why? Because eventually there will be enforceable laws in place, probably with detailed benchmarks and official age-safety standards.

If you're a digital platform relying on a large teen user base (and that's pretty much all of them), your AI and protection systems can’t just “exist” – they’ll need to actually work. Which is why Instagram doubling down now gives it a big advantage, and also cranks up the pressure on everyone else.
Bottom Line

Instagram’s latest updates aren’t just another security patch. They're a calculated, forward-thinking move. Not perfect – no tool is – but a solid leap toward accountability and leadership when it comes to keeping teens safe.

This isn’t a bonus feature anymore. It’s fast becoming the standard – and others had better catch up. Fast.



Tuesday, 1 October 2024

FTC Reaches Final Ruling on Fake Reviews and Spam Social Proof


After what at least feels like an eternity of wondering why it hadn’t already happened, the Federal Trade Commission (FTC) has finally clamped down on false feedback. Specifically, new rules are set to be introduced later this year that will make fake customer reviews and testimonials illegal.

This includes (but isn’t limited to) common practices such as:

· Buying Reviews – Something most self-respecting businesses wouldn’t even consider, but one that research suggests is rife.

· Misrepresenting Company-Controlled Review Websites as Independent – An all-too-common practice that misleads people into buying into highly biased, one-sided and ultimately false information.

· Fake Indicators of Social Media Influence – This includes the sale, purchase or distribution of anything that could mislead the public as to your status on social media.

According to the FTC, civil penalties can be imposed on anyone found to be in breach of any of the above.

But given the scale of the issue as it exists today, it seems almost impossible that every brand that has bought into practices like these will have implemented compliance policies within a matter of weeks.

What Does ‘Fake Indicators of Social Media Influence’ Mean?

This is perhaps going to be the most challenging issue to police properly. While paying for positive reviews is fairly commonplace, the number of social media accounts using artificial inflation to boost their profiles is incalculable.

From the smallest brands to the biggest businesses to influencers and even politicians, it’s no secret that much (if not most) of their influence is often attributed to bots.

And it’s not difficult to understand why. Key in a quick online search and you’ll be met with hundreds (if not thousands) of sketchy sellers from around the world, offering everything from Facebook followers to TikTok views to custom-written reviews.

According to the FTC, the official rules will apply to “any metrics used by the public to make assessments of an individual’s or entity’s social media influence, such as followers, friends, connections, subscribers, views, plays, likes, reposts and comments” which are not genuine reflections of the opinions and experiences of real people.

Influencers Under Increased Scrutiny

All of which represents yet another attempt to crack down on fake engagement and the artificial inflation of key social media metrics. The message for brands and businesses is clear – don’t attempt to illegally ‘buy’ your way to social media fame and fortune.

But it’s not quite as simple as this – at least not for brands that work closely with high-profile influencers. As a general rule of thumb, the larger an influencer’s audience, the higher the likelihood that a proportion (potentially a large one) of their follower base is made up of bots. And by associating yourself with them (and perhaps having their followers directly or indirectly endorse you), any dubious dealings on their part could reflect badly on you.
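
If you do work with influencers, a bit of basic due diligence goes a long way. Below is a deliberately crude, illustrative check of the kind a brand might run before signing a partnership – the thresholds are hypothetical, and a low engagement rate is a prompt for closer scrutiny, not proof of bought followers.

```python
# Illustrative only: a crude sanity check a brand might run before partnering
# with an influencer. Thresholds are hypothetical, not an industry standard.

def engagement_rate(followers: int, avg_likes: float, avg_comments: float) -> float:
    """Average interactions per post as a share of total followers."""
    return (avg_likes + avg_comments) / followers


def looks_artificially_inflated(followers: int, avg_likes: float, avg_comments: float) -> bool:
    """Flag accounts whose engagement is implausibly low for their audience size.

    Huge followings with near-zero engagement are a common symptom of purchased
    followers, though a low rate alone is never proof.
    """
    rate = engagement_rate(followers, avg_likes, avg_comments)
    if followers > 500_000:
        return rate < 0.005   # under 0.5% on a very large account is suspicious
    return rate < 0.01        # smaller accounts typically engage well above 1%


if __name__ == "__main__":
    # Hypothetical influencer: 1.2M followers, ~3,000 likes and ~40 comments per post.
    print(looks_artificially_inflated(1_200_000, 3_000, 40))  # True — worth a closer look
```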

For the time being, no such rules or regulations exist in most other major markets – shy of the policies of the platforms themselves. Either way, it should be seen as an important wake-up call for any business still relying on purchased social proof.

You might be getting away with it for now, but you’ll eventually find yourself in the regulatory crosshairs.