Monday, February 23, 2026

The AI-pocalypse

The AGI Takeover Narrative Is Wrong, and So Is the Panic Over AI Slop:

Everybody keeps warning about super-intelligent AGI escaping onto the internet and taking over the world. I think that’s wrong, and I think Sam Altman and a lot of other experts are wrong about it too.

AI isn’t going to be some disembodied digital god floating around the web. The whole direction of development is integration with the real world. The popular narrative imagines AI as a unified, self-aware entity that gains agency and turns hostile. In reality, today’s systems are narrow, task-specific models. They do not possess goals, intentions, or autonomous will. They generate text probabilistically in response to prompts. The most immediate and measurable impact of these systems is not existential violence, but informational degradation: the large-scale production of "AI Slop" that lowers content quality, pollutes search ecosystems, and erodes trust.

Instead of a singular superintelligence launching weapons, the more plausible outcome is a fragmented landscape of automated systems generating shallow, repetitive, and sometimes inaccurate material at industrial scale. The harm manifests as reduced signal-to-noise ratio, declining platform credibility, and increased difficulty verifying truth, not machines deciding to eliminate humanity.

AI slop will ultimately alienate a platform’s users and drive them away. The result is a platform with no one left to watch ads or buy products. It therefore behooves platforms to restrict the use of bots: bots won’t click on, purchase from, or watch ads, and no real users are interested in digesting low-quality content, so unchecked slop means no ad revenue. In that sense Dead Internet Theory is a "nothing burger"; the incentive problem solves itself in time.

In college the fix is straightforward: allow AI-assisted submissions, grade them for accuracy, then generate a quiz directly from each submission and test the student on it. Average the assignment grade with the quiz grade. If the paper earns 100% but the student demonstrates 0% understanding on the quiz, the average is 50% and they fail. If they actually learned the material, they pass. If AI produces well-structured, factually correct material and the student can prove mastery of it, that’s an academic win, not a threat. This doesn’t double the workload for teachers, either: students turn in their finals on their own schedule, AI generates the exam from each submission and grades it for accuracy, and the two scores are averaged into a final grade. Still only one day of finals for a professor. Professors shouldn’t be upset that AI exists; they should adjust the evaluation model.
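The grading scheme above is just an average of two scores with a pass cutoff. A minimal sketch in Python, where the function names and the 60% passing threshold are my assumptions, not anything specified here:

```python
def final_grade(paper_score: float, quiz_score: float) -> float:
    """Average the AI-assisted paper grade with the quiz generated
    from that same paper to test the student's understanding."""
    return (paper_score + quiz_score) / 2.0

def passes(paper_score: float, quiz_score: float, threshold: float = 60.0) -> bool:
    # A perfect paper with zero demonstrated understanding averages
    # to 50%, which falls below an assumed 60% passing cutoff.
    return final_grade(paper_score, quiz_score) >= threshold
```

So a student who outsources everything (100% paper, 0% quiz) averages 50% and fails, while one who can defend their own submission (say 95% and 85%) passes comfortably.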

If we want AI to do what we do, lawn care, repairs, logistics, elder care, companionship, it needs plug-ins, add-ons, sensors, vision, touch, mobility. It needs a body, and that body is a constraint: a physical construct used to interact with the world. You don’t “release” that onto the open internet, you ship it as a product. If it starts behaving in ways we don’t like, we can monitor it, audit it, and roll it back to the last stable version. You will be able to look at the decision branch on an iPad and say, “Nope, I like response number two better,” just as we do with current models when they branch output to a user. This isn’t Skynet. No government is handing nuclear launch authority to a probabilistic language model. No Department of Defense is letting an autonomous system “turn the key.” We already see the flaws in current AI. Nobody serious is giving it unrestricted fire control.

AI won’t escape and rule humanity; it will be boxed, versioned, patched, and controlled. It will have hardware limits, power limits, firmware controls, sandboxed permissions, and managed update channels. In a body. On a subscription plan. With an off switch. That’s the future.
