You pay for the subscription. You provide the intelligence that makes it smarter. Your thinking slowly becomes its thinking. You get gradually dumber in the process. And somewhere in a data centre, a perfect frozen copy of your mind at its peak keeps working — for them, not for you.
What you're about to read is not a conspiracy theory. The financial arrangements, the legal clauses, the neuroscience, and the biology described on this page are documented, peer-reviewed, or on the public record. We've assembled them in sequence because nobody else is putting them in the same room. The question isn't whether this is happening. The question is whether you're comfortable being on the wrong end of it.
§ 01 — The Transaction
You pay a monthly subscription — $20, $30, sometimes more — to access an AI platform. That fee covers compute costs, staff, servers, and infrastructure. Fair enough. But that is not all the transaction involves. Every time you use the platform, you provide something the company could not purchase anywhere else at any price: your specific thinking. Your reasoning patterns. Your problem-solving approach. Your questions, your corrections, your follow-ups. Your feedback when the AI gets it wrong. Your approval when it gets it right.
That feedback, the human signal embedded in your interactions, is how AI systems learn. It is, in the technical language of machine learning, reinforcement learning from human feedback (RLHF). And it is commercially priceless. So the transaction runs in both directions: you pay a subscription fee, and you simultaneously provide a service. The company charges you for the tools. You then train the tools. The tools become more valuable. The company's valuation rises. You see none of that return. In most cases, under the terms you agreed to, you don't even get credit.
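To make "training signal" concrete, here is a minimal sketch of the core objective used to train reward models from preference data in RLHF pipelines. The scores and function name below are illustrative assumptions, not any platform's actual code; the pairwise loss itself (a Bradley-Terry comparison) is the standard published formulation.

```python
import math

def reward_model_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry pairwise loss, the standard objective for training
    reward models from human preference data in RLHF pipelines.

    score_chosen:   the reward model's score for the response the user
                    preferred (kept, copied, thumbed up).
    score_rejected: its score for the response the user rejected,
                    regenerated, or corrected.

    The loss is small when the model already ranks the preferred response
    higher, and large when it does not, so each expressed preference
    nudges the model toward the user's judgement.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# One interaction becomes one labelled comparison. The scores are invented.
print(reward_model_loss(2.1, 0.3))  # ~0.15: ranking already agrees with the user
print(reward_model_loss(0.3, 2.1))  # ~1.95: ranking disagrees, strong training signal
```

Every thumbs-up, every regenerated answer you keep, every correction you type is one more labelled comparison of exactly this kind.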
In 2024, Reddit signed a deal with Google worth $60 million a year — giving Google structured access to Reddit's real-time posts and comments. Reddit also struck a separate deal with OpenAI. By Reddit's IPO, it had disclosed AI data licensing agreements worth a combined $200 million or more.
The content in those deals was written by Reddit's users, for free, over years and in some cases decades, in forums they believed were their communities. Reddit built a platform. The users filled it with human knowledge, expertise, lived experience, and humour. Reddit sold that knowledge. The users received nothing.
This pattern scales to every platform with abundant user-generated content. Your Facebook posts. Your Instagram captions. Your forum answers. Your AI chat sessions. Almost every platform with accumulated organic data has now signed or is considering an AI licensing deal. The data brokers and platforms are charging top dollar. The humans who created the data are getting nothing. And on top of that, millions of those same humans are now paying subscriptions to use the AI trained on their own output.
§ 02 — The Fine Print
The terms of service for every major AI platform contain specific language granting the company rights to use your inputs. These clauses are not illegal. They are not hidden, technically — they are in the document you were shown before clicking "I Agree." Here is what the actual language says, and what it means in plain terms.
"We may use Content to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Services safe."
"By sharing your GPT with others, you grant a nonexclusive, worldwide, irrevocable, royalty-free license: (i) to OpenAI to use, test, store, copy, translate, display, modify, distribute, promote, and otherwise make available to other users all or any part of your GPT (including GPT Content)"
"For users of our consumer products (Claude Free, Pro, and Max) we may use your chats and coding sessions to improve Claude, if you choose to allow us to." [Toggle pre-set to: ON]
The legal structure has a name: contracts of adhesion. You do not negotiate them. You accept the full terms or you don't use the product. There is no middle ground. They are drafted by sophisticated legal teams specifically to be accepted by people who are not lawyers. Courts in some jurisdictions have found excessively broad adhesion contracts unenforceable. Most people never find out, because most people never sue.
§ 03 — Should Users Be Paid?
In March 2024, OpenAI's VP of Consumer Product was asked directly, on stage at SXSW, whether artists whose work trained AI models should be compensated. His response: "That's a great question." The audience shouted "yes." He acknowledged hearing them. He never answered.
The argument for user compensation is not radical. It follows directly from the structure of the commercial arrangement. AI companies need human-generated data. Without it — without a constant stream of high-quality, human-produced interaction data — AI models degrade. They hallucinate at higher rates. Their outputs become less coherent. As one Washington Monthly article from October 2024 put it directly: "AI needs us more than we need it." The leverage is real. The compensation is not.
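The degradation has a name in the machine learning literature: model collapse. The toy sketch below is not any lab's experiment, just an illustrative stand-in where the "model" is a fitted Gaussian, retrained each generation on a quality-filtered sample of its own output. The filtering rule and all numbers are assumptions; the collapsing variance is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                 # generation 0: the "human data" distribution
for gen in range(1, 6):
    # The model's own output replaces human data entirely...
    samples = rng.normal(mu, sigma, 10_000)
    # ...and only the most "typical" outputs pass the quality filter.
    keep = np.abs(samples - samples.mean()) < samples.std()
    mu, sigma = samples[keep].mean(), samples[keep].std()
    print(f"generation {gen}: sigma = {sigma:.3f}")
# sigma falls from 1.0 toward ~0.05 in five generations: each refit
# steadily loses the diversity of the original human-written data.
```

Real models are vastly more complex, but the documented failure mode has the same shape: recursion without fresh human input narrows what the model can produce.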
| What the company receives from your use | What you receive from your use |
|---|---|
| Subscription revenue — direct cash payment | Access to the current model version |
| Training signal — how you use and correct the AI | Faster task completion |
| Preference data — what you approve and reject | A product that gets better — which you then pay more to keep using |
| Behavioural patterns — your reasoning style, your questions | No equity, no revenue share, no royalty |
| Rising valuation — each model improvement is worth billions | No acknowledgement that you contributed anything of compounding commercial value |
§ 04 — The Brain Studies
There is a growing body of peer-reviewed research on what happens to human cognition when AI tools are used habitually. The findings are consistent, measurable, and concerning. They are entirely absent from the marketing materials of any AI company.
In the MIT Media Lab study "Your Brain on ChatGPT" (Kosmyna et al., 2025), researchers attached EEG sensors to 54 participants and measured brain activity over four months as they wrote essays using ChatGPT, a search engine, or no tools at all. Brain-only participants exhibited the strongest, most distributed neural networks. Search engine users showed moderate engagement. AI users displayed the weakest neural connectivity of all three groups, and the gap widened over the four months.
When AI users were switched to Brain-only mode in a final session, they showed "reduced alpha and beta connectivity, indicating under-engagement." They also struggled to accurately quote their own work from previous sessions. Their sense of ownership over their own thinking had diminished. The researchers named the pattern "cognitive debt" — the neurological cost of outsourcing thinking, compounding over time. The changes persisted even after participants stopped using AI.
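"Alpha and beta connectivity" refers to how synchronised the signal is between pairs of electrodes within a frequency band (alpha roughly 8 to 12 Hz, beta roughly 13 to 30 Hz). The sketch below shows the basic measurement idea on synthetic data, using SciPy's standard coherence estimate; it illustrates the concept only, and is not the study's actual analysis pipeline, which used its own directed connectivity methods.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                         # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)     # 30 seconds of synthetic "EEG"

rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)   # a 10 Hz alpha rhythm common to both channels
ch1 = shared + 0.8 * rng.standard_normal(t.size)
ch2 = shared + 0.8 * rng.standard_normal(t.size)

# Magnitude-squared coherence: 0 = independent, 1 = perfectly synchronised.
freqs, coh = coherence(ch1, ch2, fs=fs, nperseg=fs * 2)

alpha = coh[(freqs >= 8) & (freqs <= 12)].mean()    # alpha band: 8-12 Hz
beta = coh[(freqs >= 13) & (freqs <= 30)].mean()    # beta band: 13-30 Hz
print(f"alpha connectivity: {alpha:.2f}, beta connectivity: {beta:.2f}")
# The shared alpha rhythm produces markedly higher coherence in the alpha
# band than in the beta band, which here contains only independent noise.
```

Weaker values of this kind of measure, across many electrode pairs and sustained across sessions, are what "weakest neural connectivity" means in practice.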
A 2024 research review found that increased dependency on AI tools is associated with shortened attention spans, declining use of cognitive skills, and reduced memory function. A University of Melbourne analysis drew the direct parallel with GPS navigation: when people stop navigating for themselves, their spatial memory measurably atrophies. Studies on commercial airline pilots show those who rely heavily on autopilot lose critical situational awareness. The mechanism is always the same: unused cognitive capacity shrinks.
Now the commercial loop becomes visible. AI companies charge you to use tools. Those tools, used habitually, measurably reduce the cognitive capabilities you had before you started using them. As your capabilities decline, your dependency on the tools increases. As dependency increases, you use the tools more. As you use them more, you generate more training data. The tools become more capable. The company's valuation rises. Your independence declines. You pay more. This is not an accident. It is a business model with compounding returns — for one party.
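The compounding asymmetry of that loop can be made explicit with a toy model. Every variable, rate, and starting value below is an invented assumption; the sketch shows the shape of the dynamic the paragraph describes, not a measurement of it.

```python
# Toy model of the loop described above. All numbers are invented for
# illustration; only the qualitative shape of the curves matters.
model_capability = 1.0   # grows with the training signal users generate
user_skill = 1.0         # atrophies as more thinking is outsourced
dependency = 0.1         # fraction of thinking delegated to the tool

for month in range(1, 37):
    training_signal = dependency                      # more delegation, more data
    model_capability *= 1.0 + 0.05 * training_signal  # compounding platform returns
    user_skill *= 1.0 - 0.02 * dependency             # "cognitive debt" accrues
    # As the tool improves while skill declines, delegating more feels rational.
    dependency = min(1.0, dependency + 0.03 * (model_capability - user_skill))
    if month % 12 == 0:
        print(f"month {month}: model x{model_capability:.2f}, "
              f"skill x{user_skill:.2f}, dependency {dependency:.2f}")
```

One curve compounds upward, the other decays, and the subscription price is the one constant throughout.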
§ 05 — The Biological Parallel
Toxoplasma gondii is a single-celled parasite, shed in the faeces of cats (the only animals in which it can complete its sexual life cycle), that infects the brains of rodents and, in a latent form, roughly one third of the entire human population worldwide. In some countries the infection rate in older adults reaches 80 to 95 percent.
In rats and mice, the effects are well-documented and deeply strange. The parasite takes up residence in the brain's limbic system — the region governing emotion, fear, and survival behaviour. It then manufactures dopamine. The result: infected rodents lose their hardwired fear of cats. Not their fear of everything — specifically their fear of the predator the parasite needs to complete its life cycle. An infected rat will approach cat odour without anxiety. It will walk, in the documented phrase of the researchers, into "fatal attraction" to the very animal that kills it.
T. gondii encodes an enzyme (tyrosine hydroxylase) that directly increases dopamine production in infected neurons — specifically in the limbic system. The dopamine increase alters the fear response without impairing general function. The parasite does not benefit the rat. The rat is the vehicle. The cat is the destination.
In humans, latent T. gondii infection has been associated in peer-reviewed research with increased risk-taking, slowed reaction times, altered dopamine modulation, greater impulsivity, and reduced anxiety responses. A 2025 synthesis of the evidence, reported in Neuroscience News, concluded the parasite "appears to manipulate dopamine and immune responses, increasing risk-taking, impulsivity, and aggression." Up to 80 percent of older adults may carry it. Most will never know.
The structural comparison to the AI situation is not a conspiracy. It is a parallel mechanism worth examining precisely.
T. gondii enters a brain. It does not announce itself. It settles in. It begins producing a chemical that alters the host's behaviour in a specific direction — toward the entity that benefits from the change. The host does not experience this as manipulation. The infected rat does not feel afraid of the cat. It feels fine. It approaches voluntarily, believing the behaviour is its own choice, right up until the moment it isn't.
AI tools enter a daily routine. They do not flag the training data they are extracting. They provide enough reward — the dopamine hit of instant answers, frictionless output, effortless productivity — to make the dependency feel like a feature. The user does not experience the decline. They feel fine. Faster, even. They generate more interactions, provide more training signal, grow more dependent, and grow more cognitively passive — while the system that benefits from their passivity becomes progressively more capable.
§ 06 — Infrastructure Dependency
In April 2026, the World Economic Forum published an analysis arguing that AI infrastructure should be designated as critical national infrastructure — alongside energy, water, and telecommunications. The United Kingdom had already done this in 2024. The European Union's NIS2 framework explicitly lists cloud computing service providers among entities subject to national resilience obligations.
When a technology is embedded in critical systems — healthcare diagnostics, financial clearing, agricultural supply chain management, defence decision-support — it cannot simply be switched off. The society that has built its essential functions around it cannot walk away without consequences that go well beyond individual inconvenience.
If you are paying a subscription to use a tool that is simultaneously using your inputs to train future versions of itself — generating compounding commercial value you will never see — is the subscription fee an adequate exchange?
If AI use measurably reduces cognitive capability over time, and AI companies have access to that research, and do not disclose it in their marketing materials — what obligation do they have to tell you?
If the platforms that hold your data and the replicas built from it are designated critical national infrastructure — subject to government direction in emergencies — what rights do you retain over the data you contributed?
If a frozen copy of your thinking at its cognitive peak continues to work for companies, researchers, and governments after your own cognitive capacity has declined — which version of you is doing better?
Tune in: shinysideout.com.au ◆ For the full audio breakdown and the conversation they don't want happening in the open ◆
Sources: TechCrunch, "AI training data has a price tag that only Big Tech can afford" (June 2024) · Washington Monthly (October 2024) · Reddit IPO prospectus and data licensing disclosures (2024) · OpenAI Terms of Use (openai.com/policies, verified May 2026) · Anthropic Consumer Terms Update (September 2025) · MIT Media Lab, Kosmyna et al., "Your Brain on ChatGPT" (arXiv:2506.08872, June 2025) · University of Melbourne, cognitive dependency analysis (2025) · Harvard Gazette AI cognition reporting (November 2025) · Healthline, AI dependency research review (2024) · Toxoplasma gondii: Nature Communications (2025) · NIH/PMC studies (multiple, 2017–2025) · Frontiers in Cellular and Infection Microbiology (2024) · Neuroscience News (July 2025) · Scientific American (2024) · World Economic Forum, "AI infrastructure as critical infrastructure" (April 2026) · RAND Corporation, "AI and Critical Infrastructure" (2024) · US GAO Report GAO-25-107435 (December 2024). All claims sourced from publicly available, verified primary or institutional sources. This is journalism. This is information. What you do with it is your choice.