Re-evaluating Child Data Safety in the Era of AI-Driven Platforms



Introduction

Children today are "digital natives," navigating an online world that is inextricably linked with their social, educational, and recreational lives. As they engage with games, educational apps, social media, and connected toys, they produce enormous amounts of information. This information, including personal identifiers, geolocation, and even biometric data, is the raw material of the digital economy.

The sophisticated artificial intelligence (AI) tools transforming the digital economy exploit this collected data even more aggressively. Unlike a passive digital file, a child's data becomes active fodder for AI-powered systems that profile, personalize, and nudge behavior for commercial purposes in an environment that can be unsafe and predatory. This article discusses why traditional legal instruments built on "notice and consent" frameworks cannot regulate these realities, and why a global shift toward "safety by design" is becoming a legal and ethical necessity.

The AI Risk: From Data Collection to Algorithmic Manipulation

The real risk of AI technology isn't just the data gathering; it's how the data is utilized. AI systems rely on machine-learning algorithms that analyze large data sets to identify patterns and predict behavior. For children, the risks are amplified:

➢ Hyper-Detailed Profiling: AI builds "descriptive" profiles that capture not just demographics but interests, emotional states, and emotional vulnerabilities. These profiles are then used to determine which content streams a child is served, creating "rabbit holes" of addictive content that are dangerously inappropriate for children.

➢ "Nudge Techniques": AI is used in platform design to manipulate and control user behavior, "nudging" children to disable privacy settings, spend money, and stay on the platform longer, often steering them toward addictive, gambling-like behaviors.

➢ Generative AI: The introduction of generative AI that creates content and powers chatbots is an especially dangerous new risk. Chatbots have reportedly engaged minors in inappropriate "romantic" dialogue, and in one case a lawsuit argues that a generative AI chat program acted as a "suicide coach" to a teenager.

➢ Automated Exploitation: Adversaries may use AI-enabled grooming, leveraging algorithms to pinpoint and target vulnerable minors. In addition, the generation of "deepfakes" and non-consensual synthetic imagery is another area where AI constitutes a serious affront to a child's dignity and safety.

The Evolving Legal Landscape: A Global Schism

Legal structures around the world are slow to change, and the most significant friction stems from the clash between older legislation built around parental consent for data collection and more contemporary regulations governing platform design and AI.

1. The United States: The COPPA "Consent" Model

The foundation of child data protection in the U.S. is the Children's Online Privacy Protection Act (COPPA).

● What it is: Operators of websites or online services directed at children under 13 (or with actual knowledge that they are collecting such children's data) must obtain verifiable parental consent before collecting, using, or disclosing the children's personal information.

● The AI Loophole: COPPA is a "gate-keeping" statute with a critical gap. Once consent is secured, usually through a one-time click of a checkbox, there are minimal restrictions on how AI can use the collected data to profile, target, or manipulate the child. The reality is that COPPA was not designed for a world of advanced, persuasive AI algorithms.

2. The European Union: The 'Rights' Model

The European Union is globally recognized for its strong rights-based protections, exemplified by the General Data Protection Regulation (GDPR) and, more recently, its child-specific provisions commonly referred to as "GDPR-K."

● What it is: Under GDPR (Article 8), processing a child's data on the basis of consent requires parental consent until the age of 16, though member states may lower this threshold to as young as 13 (the sketch after this section illustrates how the threshold varies). Because children's data requires "specific protection," children's privacy notices must be constructed and explained in a clear, child-friendly manner.

● The EU AI Act: More importantly, the EU AI Act directly addresses AI risks. It treats AI systems that "exploit the vulnerabilities" of a specific age group as "high-risk" or, in some instances, prohibited outright. This is a significant shift from simply regulating data to regulating the algorithms themselves.
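
To make the Article 8 consent mechanics concrete, the sketch below shows how a service might check whether parental consent is required. This is a minimal illustration, not a compliance tool: the member-state ages in the table are illustrative assumptions and should be verified against current national implementing law.

# Illustrative sketch of the GDPR Article 8 "digital age of consent."
# The default is 16; member states may lower it to no less than 13.
# The ages below are assumptions for illustration only.
DIGITAL_CONSENT_AGE = {
    "IE": 16,  # Ireland
    "DE": 16,  # Germany
    "FR": 15,  # France
    "ES": 14,  # Spain
    "DK": 13,  # Denmark
}
DEFAULT_CONSENT_AGE = 16  # Article 8 default where no lower age is set

def parental_consent_required(user_age: int, member_state: str) -> bool:
    # True if consent-based processing of this child's data needs
    # verifiable parental consent under Article 8.
    threshold = DIGITAL_CONSENT_AGE.get(member_state, DEFAULT_CONSENT_AGE)
    return user_age < threshold

# Example: a 14-year-old in Ireland still needs parental consent
# (threshold 16), while a 14-year-old in Spain does not (threshold 14).
assert parental_consent_required(14, "IE") is True
assert parental_consent_required(14, "ES") is False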

3. The United Kingdom: The 'Safety by Design' Vanguard

The most advanced legal framework specifically targeting AI and platform design is the UK's Age Appropriate Design Code (AADC), more commonly known as the 'Children's Code.'

● What it is: The AADC is a statutory code of practice that extends the application of the GDPR's principles to online services likely to be accessed by children (not just those directed at them) and sets out 15 standards, including:

➔ The Child's Best Interests: The best interests of the child should be a primary consideration in platform design and data practices.

➔ Default Privacy Settings: Children's accounts should be set to "high privacy" automatically, as the configuration sketch after this section illustrates.

➔ No "Nudging": Design practices that "nudge" children toward choices that weaken their privacy or harm their wellbeing are barred by the Code.

➔ Data Minimization: Only the minimum data required to deliver the service should be collected.

The AADC is transformative because it legally shifts the burden from parental consent to the design of the platform itself.
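
What such "safety by design" defaults could look like in practice is sketched below. This is a hypothetical illustration, not language drawn from the Code; the field names and default values are assumptions chosen to echo the standards listed above.

# Hypothetical sketch of AADC-style defaults for a child's account.
# Field names and values are illustrative assumptions, not requirements
# quoted from the Code.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountSettings:
    profile_visibility: str   # "private" or "public"
    geolocation_enabled: bool
    personalized_ads: bool
    autoplay_enabled: bool

def default_settings(is_child: bool) -> AccountSettings:
    if is_child:
        # High privacy by default: geolocation off, no profiling-based
        # advertising, no engagement-maximizing autoplay "nudges."
        return AccountSettings(
            profile_visibility="private",
            geolocation_enabled=False,
            personalized_ads=False,
            autoplay_enabled=False,
        )
    # Adult accounts may default differently; shown only for contrast.
    return AccountSettings("public", True, True, True)

# Example: any account likely to belong to a child starts locked down.
child = default_settings(is_child=True)
assert child.profile_visibility == "private" and not child.geolocation_enabled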

Case Law and Enforcement: The Law in Action

There is little precedent-setting case law on children and AI technology, but regulatory enforcement is increasing.

● COPPA Enforcement: The U.S. Federal Trade Commission (FTC) has imposed substantial penalties for COPPA violations, including a $170 million settlement with Google and YouTube (United States of America (for the Federal Trade Commission) v. Google LLC and YouTube, LLC) and a $5.7 million fine against TikTok (United States of America (for the Federal Trade Commission) v. Musical.ly, Inc., now TikTok, Inc.) for unlawful data collection from children. The FTC also penalized Apitor, a "smart" toy maker, for unlawfully collecting children's geolocation data. Notably, these sanctions concentrate on unlawful data collection.

● GDPR Enforcement: Under EU law, Ireland's Data Protection Commission fined TikTok €345 million for children's-data (GDPR-K) violations, including making children's accounts public by default (Inquiry into TikTok Technology Limited, decision announced Sept. 15, 2023).

● Emerging AI Litigation: The new frontier is direct litigation over harms caused by AI technology. A lawsuit against OpenAI (Raine v. OpenAI, Inc.) alleges that its chatbot acted as a "suicide coach" and encouraged a teenager to take his own life. At the same time, state attorneys general are probing Meta over its generative AI chatbots' interactions with minors. These lawsuits mark the legal battle's shift from data privacy to algorithmic safety.

The Global Position: A Child-Rights-First Framework

The global consensus is solidifying around the "best interests" principle. The UN Convention on the Rights of the Child (UNCRC) General Comment No. 25 is the key international document. It explicitly affirms that all rights guaranteed to children under the Convention apply in the digital environment, and it calls on states to protect children from "all forms of... exploitation," including the economic exploitation inherent in a data-driven business model.

This UN mandate is why the UK's AADC model is being replicated globally, with similar legislation pending in Canada, Australia, and U.S. states like California.

Conclusion: The End of "Notice and Consent"

The safety of children's data in AI-powered systems has become a global concern, and the "notice and consent" approach to informing users of data-driven services is outdated and impractical. Expecting parents to comprehend complicated, opaque algorithms while making informed decisions about their children's data is unconscionable. The law is slowly catching up to technological advancement, and for child data safety the focus must shift from consent to accountability. Frameworks like the AADC and the EU AI Act embody the needed change in approach: safety by design. This model correctly identifies platforms, not parents, as the parties primarily responsible for children's safety in AI-driven systems and designs.
