Instagram has launched a new artificial intelligence (AI) system to find users who lie about their age. The move is part of the platform’s efforts to better protect teenagers and offer a safer experience online. The AI looks at suspicious accounts and checks if the age entered matches the user’s behavior and profile data.
This change comes at a time when more people are concerned about how social media affects teen mental health. Instagram, owned by Meta, is taking steps to make sure teens are treated differently on the platform, with more privacy and less risky content.
The new AI doesn’t just rely on the birthdate a user enters when signing up. Instead, it watches how users behave on the app. It looks at things like:
- What kind of posts they like or share
- Who they follow and interact with
- What they put on their profile
If something looks off, the AI flags the account. Instagram will then review it to see if the person might be underage. If the user is found to be a teen, Instagram will change the account type. That triggers stronger privacy rules.
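The flag-then-review flow described above can be sketched as a simple rule-based check. Everything here is an illustrative assumption — the signal names, thresholds, and scoring are hypothetical stand-ins, not Meta's actual model, which is almost certainly a learned classifier rather than hand-written rules:

```python
# Hypothetical sketch of an age-consistency check.
# Signals and thresholds are illustrative, not Meta's real system.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int
    teen_creator_follow_ratio: float  # share of followed accounts popular with teens
    teen_patterns_in_bio: bool        # profile text that reads as teen-typical
    school_related_posts: int         # posts tagged with school-related topics

def flag_for_review(s: AccountSignals) -> bool:
    """Return True when the stated age looks inconsistent with behavior."""
    if s.stated_age < 18:
        return False  # already treated as a teen; nothing to reclassify
    score = 0
    if s.teen_creator_follow_ratio > 0.6:
        score += 1
    if s.teen_patterns_in_bio:
        score += 1
    if s.school_related_posts >= 3:
        score += 1
    # Two or more teen-typical signals sends the account to human review.
    return score >= 2

print(flag_for_review(AccountSignals(21, 0.8, True, 5)))   # True: flagged
print(flag_for_review(AccountSignals(21, 0.1, False, 0)))  # False: consistent
```

Note that in the real system, a flag only triggers review; reclassification (and the stronger privacy rules) happens after Instagram confirms the user is likely a teen.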
Meta says this approach makes it harder for people to pretend they are older than they really are.
What Happens If a User Is Found to Be a Teen?
If Instagram reclassifies an account as a teen account, several safety tools turn on:
- Private by Default: Teen accounts are automatically set to private. That means only approved followers can see their posts.
- Blocked Messages: Unknown people can't send direct messages to teen accounts.
- Content Limits: Teens won't see posts with violence, plastic surgery ads, or extreme beauty content.
- Time Reminders: After one hour on the app, teens get a reminder to take a break.
- Sleep Mode: Between 10 p.m. and 7 a.m., the app turns on sleep mode. It mutes notifications and sends auto-replies to messages.
These settings aim to reduce screen time and protect young users from harmful content and online strangers.
Parents Will Be Notified Too
Instagram is also working to keep parents in the loop. If a child’s account is flagged and reclassified, parents will get a notification. This notice will suggest they talk to their child about being honest online.
Meta hopes this will lead to better family conversations around social media use. The company also offers a Family Center where parents can manage tools, track usage, and set time limits.
Why It Matters Now
Instagram’s new safety system comes during a time of increased pressure on tech companies. Lawmakers across the U.S. are pushing for strict age checks on social media platforms. Some states are even trying to pass laws that make age verification a must.
But there’s a legal debate about who should handle it. Meta and others say it’s the job of app stores, not platforms. Still, platforms like Instagram are acting on their own to improve safety.
A report from the U.S. Surgeon General in 2023 warned that social media can harm teen mental health. It linked overuse of apps like Instagram to anxiety, depression, and low self-esteem in young users. That’s one reason why stronger protections are being rolled out now.
A Step Toward Safer Social Media
Instagram’s AI tool won’t stop every false age claim, but it is a step forward. The system adds another layer of defense to make the platform safer for teens.
Experts say it also sets a new standard for how tech companies should handle user safety. AI is now being used not just to recommend content but to monitor for risk and protect young users.
Other platforms like TikTok and Snapchat may soon follow suit. With new tools like this, Instagram hopes to strike a balance between user freedom and safety, especially for those most at risk.
The internet is always changing, and keeping young users safe is a growing concern. Instagram's new AI system for detecting false age claims is just one piece of a bigger puzzle. But it's a strong signal that the company is listening to parents, experts, and lawmakers.
As social media continues to evolve, protecting teens online will remain a top priority. And AI, if used well, might just help lead the way.