Let’s take a look at Age Verification:
First: anyone can create a website. There is no license. No mandatory security certification. No universal encryption standard. Under age-verification mandates, any site can demand a government ID, a selfie, or biometric data in order to grant access. That includes incompetent operators. That includes data resellers. That includes sexual predators. If someone wants to harvest minors’ identity documents, these laws hand them legitimacy. “Upload your ID to continue.” That's not protection. That's structured data collection.
Second: this creates a data-retention epidemic. Every ID upload becomes another stored record. Every stored record becomes another breach target. Birthdates, driver’s licenses, and facial scans cannot be reset. When that data leaks, and it will, it persists. Expanding age verification multiplies permanent identity databases across thousands of private entities. The attack surface grows. The exposure grows. The risk grows.
Third: the blame shifting needs to stop. Large language models are not sentient. They are probabilistic systems: neural networks trained via gradient descent to optimize next-token prediction across weighted parameter matrices. At their core, they function like a highly advanced Markov chain: they generate each next word from statistical patterns and conditional probabilities derived from prior text, just with far more dimensions. They can capture context, intent, meaning, and long-range relationships and dependencies in language. They do not "understand". They do not intend harm. There is no thought, no awareness, no comprehension. Only mathematical computation over learned parameter weights producing algorithmic output. The belief that there is "thinking" happening is a byproduct of anthropomorphizing a pattern-prediction machine. When people claim an AI company is responsible for "leading" a child to harm, they are assigning agency to a system that simply calculates probabilities and emits text.

This misattribution of agency mirrors the logic behind what became known as the "Twinkie defense." In 1979, Dan White was tried for the murders of Harvey Milk and George Moscone. His legal team argued diminished capacity due to severe depression, and junk food consumption was cited as one symptom of his mental decline. The media reduced that complex mental-health argument to a simplistic narrative that snack cakes caused murder, turning it into shorthand for blaming an external product instead of personal responsibility. Blaming social media companies for individual overuse repeats the same flawed logic. Platforms may influence behavior, but they do not compel it. Individuals still choose how long they scroll, what they consume, and how they act. Just as a snack company is not legally responsible for someone's violent actions, a tech platform is not solely responsible for how a person uses it. Personal accountability and parental oversight, especially for minors, remain central. Tools shape environments; they do not eliminate choice. I do not think the Twinkie defense should become an excuse to blame big tech for poor parenting or lack of supervision.
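To make the pattern-prediction point concrete, here is a toy Markov-chain word generator, a minimal sketch in Python. The corpus and names are illustrative, not from any real model; a large language model does something analogous with vastly more parameters and context:

```python
import random
from collections import defaultdict

# Tiny training "corpus"; a real model trains on trillions of tokens.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog run").split()

# Count which words follow which: the conditional distribution P(next | prev).
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    # Each step samples the next word from observed frequencies.
    # There is no understanding here, only conditional probability.
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output can look fluent, but nothing in that loop knows what a cat or a rug is. It only samples from counted frequencies, which is exactly the point about agency.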
Guns don't kill people, forks don't make people fat, and pens didn't write the Declaration of Independence.
Fourth: We already know how accountability works. School attendance is mandatory. If a child repeatedly skips school, responsibility falls on the guardian. If I remember correctly, some jurisdictions escalate to jail time after around six truancy strikes. There are escalating consequences. Phones belong to parents. Access is granted by parents. A strike-based enforcement model, similar in structure to truancy escalation, keeps responsibility where it belongs. Not on OpenAI. Not on Facebook. On the adult who handed over the device.
Fifth: devices should ship locked down by default. Child-protected mode out of the box. The parent explicitly unlocks higher tiers. The phone remains inspectable. Handing a child unrestricted internet access without structure is like handing over car keys without instruction. The answer is not forcing every website to collect ID. The answer is enforcing supervision at the device level.
Sixth: In the early 1990s, the video game industry faced intense public backlash over violent and suggestive content, particularly from games like Mortal Kombat and Night Trap. Mortal Kombat drew criticism for its realistic digitized violence and "fatalities," while Night Trap sparked controversy over its live-action scenes of young women being attacked, which some critics compared to exploitation media. These concerns led to U.S. congressional hearings in 1993, where lawmakers, most notably Senators Joe Lieberman and Herb Kohl, pressed the video game industry to regulate itself or face government intervention. In response, the industry created the Entertainment Software Rating Board (ESRB) in 1994. The ESRB introduced standardized content ratings (like "E," "T," and "M") to inform consumers about game content, similar in concept to the film rating system already used by the MPAA. While there were public fears that violent video games would lead to real-world violence, attempts to hold developers legally responsible for players' actions have generally failed in court. U.S. courts have consistently held that individuals, not media creators, are responsible for their actions, and that video games are protected under the First Amendment as a form of expression (reinforced later in Brown v. Entertainment Merchants Association).

A key issue that keeps getting overlooked is pre-existing conditions. Mental health problems don't appear out of nowhere from a single app; clinical psychology recognizes them as complex and multifactorial. If someone already has underlying issues and turns to social media instead of seeking professional help, that raises a separate question about personal decision-making, not just platform design. The same goes for parents. Expecting social media companies, tech platforms, or even companies like Apple to act as a full-time safeguard or "babysitter" doesn't align with how these systems are built or how responsibility is defined. Supervision, guidance, and making sure a child understands what they're seeing online (what's real, what isn't, and how to process it) falls on the parent or guardian.

Take films like Natural Born Killers. There have been cases where individuals claimed such media influenced them (and then went on a killing spree), but courts have consistently rejected the idea that filmmakers or studios are responsible for crimes committed by viewers. The same principle applies: consuming content doesn't remove personal responsibility for actions. A TV is not a babysitter, and neither is social media. They are tools and platforms designed to deliver content, not to supervise behavior, replace parenting, or provide mental health care. Using them as a substitute is a misuse of the product, not a failure of the platform itself.

Parental controls on a child account already work on platforms like Netflix to keep kids away from harmful content. Similarly, websites can publish standardized content ratings, similar in concept to ESRB categories, in machine-readable metadata or an extended robots.txt declaration. The device, operating in parental-control mode, reads that rating and enforces it locally. If a publisher lies, parents can see which sites their children visited and file a complaint that can be investigated; sites with false or misleading ESRB- or MPAA-style ratings can be held accountable. The new "WebESRB" would be maintained as a crowd-sourced, statistical, reputation-based scorecard system monitored by parents.
Unreliable websites can be removed or blocked based on their ratings, including sites that publish no rating at all. This creates a "web of trust" in which a child never uploads a photo or driver's license just to browse the web.
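As a minimal sketch of how that device-side enforcement could work, assume a hypothetical well-known path and JSON shape for the published rating; none of these names are an existing standard, they just illustrate the proposal above:

```python
import json
from urllib.request import urlopen

# Everything here is hypothetical: the well-known path, the JSON shape,
# and the "WebESRB" labels are assumptions sketching the proposal above.
RATING_ORDER = ["E", "E10+", "T", "M", "AO"]   # ESRB-style, mildest first
PARENT_POLICY = "E10+"                         # ceiling set on the device by a parent

def declared_rating(site: str) -> str:
    # A publisher could expose a machine-readable rating at a well-known
    # path, analogous in spirit to robots.txt.
    try:
        with urlopen(f"https://{site}/.well-known/content-rating.json") as resp:
            return json.load(resp).get("rating", "UNRATED")
    except OSError:
        return "UNRATED"

def device_allows(rating: str, ceiling: str = PARENT_POLICY) -> bool:
    # Unrated or unknown sites are blocked by default in child mode.
    if rating not in RATING_ORDER:
        return False
    return RATING_ORDER.index(rating) <= RATING_ORDER.index(ceiling)

# e.g. device_allows(declared_rating("example.com")) -> False (no rating published)
```

A real implementation would live in the operating system, cache ratings, cross-check them against the crowd-sourced scorecard, and log visits for parental review. The point is that enforcement happens on the device, not through an ID upload.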
Seventh: where age thresholds truly matter, verification can happen on the device using cryptography instead of document storage. The operating system verifies age once. It then derives a salted hash representing an age condition, such as "age ≥ 13." When a service requests proof, the device returns a signed attestation derived from that hash. The service verifies the signature against the OS's public key. It never receives the birthdate. In more advanced designs, zero-knowledge proofs allow the device to mathematically prove the age condition without revealing any underlying data at all.

The same family of techniques covers advertising. Advertisers can take a list of customer identifiers tied to a specific interest, such as emails or phone numbers, and run them through a one-way cryptographic hash function (for example, SHA-256). That process converts each identifier into a fixed-length hash value that cannot be reversed under normal conditions. The social media platform does the same thing independently with its own user database. Instead of exchanging raw personal data, both sides compare hashed values. Where the hashes match, they've identified the same person without either party disclosing the underlying identifier. (Strictly speaking, this is deterministic matching rather than a hash "collision"; a collision is when two different inputs produce the same hash.) The advertiser never sees the platform's user data, and the platform never sees the advertiser's original list. Ads are served and everyone is happy. An API can automate this matching process by accepting only hashed inputs and returning aggregated match results or audience segments. No plaintext identifiers are exposed, and no direct transfer of customer records occurs. The system relies on deterministic hashing (same input, same output), so matches are exact without revealing identity. When properly implemented with a salt agreed upon by both parties, secure transport, and strict access controls, this approach limits data exposure to mathematical fingerprints, not personal information, preserving privacy while enabling targeted ad delivery. Together, these two mechanisms verify age on the device and serve ads anonymously. No third-party database. No identity upload pipeline. No retention problem.
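A minimal sketch of the attestation flow, using the Python `cryptography` package for Ed25519 signatures. The payload format, nonce handling, and key names are illustrative assumptions, not a real OS API:

```python
import hashlib, json, os, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device/OS side: age was verified once at setup; the birthdate never
# leaves this scope. Only a derived condition is ever attested.
os_key = Ed25519PrivateKey.generate()      # the OS attestation keypair
os_pub = os_key.public_key()               # published for services to verify against

salt = os.urandom(16)
claim = "age>=13"                          # the age condition, not the birthdate
claim_hash = hashlib.sha256(salt + claim.encode()).hexdigest()

def make_attestation(service_nonce: bytes) -> dict:
    # Sign the condition, its salted derivation, and a service nonce
    # (the nonce prevents a captured attestation from being replayed).
    payload = json.dumps({
        "claim": claim,
        "claim_hash": claim_hash,
        "nonce": service_nonce.hex(),
        "ts": int(time.time()),
    }).encode()
    return {"payload": payload, "sig": os_key.sign(payload)}

# Service side: receives only the signed condition, never an ID document.
nonce = os.urandom(16)
att = make_attestation(nonce)
try:
    os_pub.verify(att["sig"], att["payload"])
    print("age attestation accepted:", json.loads(att["payload"])["claim"])
except InvalidSignature:
    print("attestation rejected")
```

And a sketch of the hashed audience-matching flow, assuming both parties have agreed on a shared salt out of band (the salt value and emails below are placeholders):

```python
import hashlib

SHARED_SALT = b"campaign-2025-salt"  # agreed out of band; illustrative value

def fingerprint(identifier: str) -> str:
    # Deterministic one-way hash: same input always yields the same digest.
    normalized = identifier.strip().lower().encode()
    return hashlib.sha256(SHARED_SALT + normalized).hexdigest()

# Advertiser side: only hashes ever leave this list.
advertiser_hashes = {fingerprint(e) for e in
                     ["alice@example.com", "bob@example.com", "carol@example.com"]}

# Platform side: hashes its own users independently.
platform_hashes = {fingerprint(e) for e in
                   ["bob@example.com", "dave@example.com"]}

# The overlap identifies shared users without exposing raw identifiers.
matches = advertiser_hashes & platform_hashes
print(f"{len(matches)} matched audience member(s)")
```

In both sketches the sensitive value (a birthdate, a customer list) stays where it started; only a derived, non-reversible artifact crosses the wire, which is the whole argument against centralized ID upload.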
This model keeps personally identifiable information (PII) proofs local, uses on-device parental controls and legislation to assign accountability, and keeps child data out of the hands of anyone outside the home. Age-verification mandates do not solve a parenting problem. They expand data collection, expand attack surfaces, and take control away from guardians. If child protection is the objective, then the solution is device-level defaults, ESRB-style content signaling, cryptographic on-device verification, and clear parental accountability with real consequences for repeated neglect.