In a move to enhance safety for younger users, Instagram has begun testing artificial intelligence tools to detect whether teens are falsely claiming to be adults on the platform, The Associated Press reported on Monday, citing a statement by parent company Meta Platforms.
While Meta has been using AI to estimate users’ ages for some time, the company announced that Instagram will now “proactively” identify teen accounts suspected of entering incorrect birth dates during sign-up, the report said.
According to the report, if it is determined that a user is misrepresenting their age, the account will automatically become a teen account, which comes with stricter privacy and content limitations.
Teen accounts are private by default, and direct messaging is restricted: teens can receive messages only from users they follow or are already connected with, the report said. Additionally, “sensitive content” such as fight videos or posts promoting cosmetic procedures will be limited, Meta said, according to AP.
Instagram will also send notifications when teens exceed 60 minutes of screen time and offer a “sleep mode” feature, the report said, adding that this mode turns off notifications and enables auto-replies to direct messages between 10 p.m. and 7 a.m.
Meta says its AI relies on signals like the type of content an account interacts with, profile information and the account’s creation date to estimate the user’s age more accurately.
These new safety features come amid increased concerns over the mental health impact of social media on young users. Several U.S. states are pursuing age verification laws, although some of these efforts have faced legal obstacles, the report said.
According to the report, Instagram also said it would provide parents with “information about how they can have conversations with their teens on the importance of providing the correct age online.”