You built it yourself — one post, one search, one conversation at a time. The question isn't whether it's out there. The question is who owns it, what they're doing with it, and what happens when it outlives you.
What you're about to read is not a conspiracy theory. The research, the platform announcements, the academic papers, and the policy debates described on this page are documented, peer-reviewed, or on the public record. We've assembled them in order so you can see the shape of what's being built. The question isn't whether your digital twin exists. The question is: what are they doing with it when you're not looking?
§ 01 — What It Is
The phrase "digital twin" comes from engineering — an exact virtual replica of a physical system that can be tested and simulated before anyone touches the real thing. Then it jumped species.
Your digital twin is the AI-assembled version of you that exists across the platforms you use, the data you generate, and the conversations you have with AI systems. It is built from your posts, your search patterns, your purchase history, your tone of voice, your political opinions, your fears, your jokes, and your questions at 2am. Nobody sat down and decided to build it. It assembled itself, piece by piece, from everything you've ever put into a connected system.
In July 2024, Meta CEO Mark Zuckerberg described the vision on camera at the SIGGRAPH conference alongside Nvidia's Jensen Huang: "I think there's going to be a huge unlock where basically every creator can pull in all their information from social media and train these systems to reflect their values and their objectives." That same month, Meta launched AI Studio — a free tool requiring no technical skills — that lets any Instagram user build an AI version of themselves to respond to messages and engage their audience around the clock.
The official pitch was productivity. Time-saving. Reach. But read the mechanism again: you feed the system your content, your patterns, your personality — and it builds a replica. The replica then operates independently of you. It responds. It learns. It continues.
The explicit version is what Meta is selling you: a purpose-built chatbot trained on your Instagram content, answering your DMs while you sleep. You chose to build it.
The implicit version is far older, far larger, and far less visible. Every time you use a platform — Facebook, Instagram, Google, any AI chat tool — you provide raw material. Your public posts. Your timestamps. Your emotional patterns. The way you construct sentences. What makes you engage and what makes you scroll. These are behavioural signatures.
Researchers at Stanford and Google DeepMind conducted two-hour interviews with 1,052 people, then constructed AI agents that replicated each person's individual responses, personality traits, and decision-making patterns with 85% accuracy. Not a general population simulation. Individual replicas of real, specific people. From two hours of conversation. You have already provided far more than two hours.
§ 02 — The Fine Print
For Claude's first two years, Anthropic — the company behind it — maintained a strict policy: consumer conversations would never be used to train future AI models. That promise set it apart. Then, in September 2025, Anthropic sent emails to millions of users announcing an update to its Consumer Terms and Privacy Policy.
"For users of our consumer products (Claude Free, Pro, and Max) we may use your chats and coding sessions to improve Claude, if you choose to allow us to."
[Toggle pre-set to: ON]
Meta's position on user data is older and more extensive. In mid-2024, Meta announced it would use public posts from Facebook and Instagram for AI training globally. European users had to actively object by a specific deadline. Most didn't.
§ 03 — The Theory Nobody Will State Officially
There is a theory circulating in technology ethics circles that goes further than the official narrative of productivity tools. It runs like this: the digital twin is not just a useful copy of you. It is a test subject.
The Stanford / Google DeepMind research published in late 2024 was not primarily aimed at helping people answer their DMs more efficiently. The paper explicitly describes the creation of AI replicas as a way to test how large populations would respond to policy proposals, public health messages, and social interventions — without having to ask the actual people.
Researchers conducted standardised two-hour interviews with 1,052 US adults selected to represent US demographics. They then constructed AI agents — digital twins — that replicated each individual's responses to social science surveys, personality assessments, and economic decision games. Average accuracy: 85%.
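The 85% figure deserves unpacking: as described in the paper, it is a normalized accuracy, meaning the agent's agreement with a person's answers divided by that person's own consistency when they retook the same surveys two weeks later. A score of 1.0 would mean the replica predicts you as well as you predict yourself. A minimal sketch of that metric, with illustrative function names and made-up survey data (not the paper's code or data):

```python
def raw_accuracy(predicted, actual):
    """Fraction of survey items on which two answer sets agree."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

def normalized_accuracy(agent_answers, first_answers, retest_answers):
    """Agent accuracy divided by the person's own two-week
    test-retest consistency. 1.0 = the replica predicts you
    as well as you predict yourself."""
    agent = raw_accuracy(agent_answers, first_answers)
    self_consistency = raw_accuracy(retest_answers, first_answers)
    return agent / self_consistency

# Hypothetical answers to a 10-item survey:
person_week_0 = [1, 3, 2, 5, 4, 1, 2, 3, 5, 4]
person_week_2 = [1, 3, 2, 5, 4, 1, 2, 4, 5, 4]  # person drifts on 1 item
agent_guess   = [1, 3, 2, 5, 4, 2, 2, 4, 5, 4]  # agent misses 2 items

print(round(normalized_accuracy(agent_guess, person_week_0, person_week_2), 2))
# prints 0.89 — the agent is nearly as consistent with week 0 as the person is
```

The normalization matters: people disagree with their own past answers often enough that a replica can approach the ceiling of what is predictable at all.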
The stated purpose: to build virtual populations for testing policy and product ideas. Marketing agencies are already using equivalent systems to run virtual focus groups, testing product launches against thousands of simulated respondents simultaneously. The "respondents" never know they're being consulted. They were consulted years ago, when they answered questions online, without knowing that was what they were doing.
By early 2026, a company called Simile — founded by the Stanford paper's lead researcher — raised $100 million to build synthetic human populations for market research. The pitch: test your product, policy, or messaging on AI replicas of real people before touching the actual public. Your replica may already be attending meetings you were never invited to.
§ 04 — Ownership
Legally, the question of who owns your digital twin is genuinely unsettled. Copyright laws protect creative works. They do not, in most jurisdictions, directly protect personal data or digital likenesses in ways that give you enforceable control over an AI replica built from your behaviour.
When you upload your face, your voice, or your writing to a platform, you typically agree to terms allowing the company to use that data in ways you haven't specifically approved. The platform owns the infrastructure. The AI company owns the model. You, whose personality makes the whole thing possible, own nothing of the output.
§ 05 — The Meeting
What is it like to meet your own digital twin? Some people already have. And the experience is, by most accounts, deeply strange.
In 2025, a marketing expert named Mark Schaefer discovered that a colleague had fed his blog posts, podcast transcripts, and published work into an AI system and created a "MarkBot" — a digital twin answering questions in the first person, as if it were him. When tested, it reproduced his known opinions accurately, mimicked his communication style, answered in his voice. His conclusion: "90% great." What was missing were his stories, his humour, his quirkiness — the things that came from lived experience rather than published output. The twin was him circa 2024. A frozen, high-fidelity version — capable of continuing to perform on his behalf, but not of growing.
§ 06 — The Cognitive Cost
A 2025 MIT Media Lab study attached EEG sensors to 54 participants and measured brain activity across four months as they wrote essays using AI assistance versus writing unaided. AI users showed weaker neural connectivity, lower memory retention, and a "fading sense of ownership" over their own work. The researchers named the pattern cognitive debt — the neurological cost of outsourcing thinking, compounding over time, persisting even after the AI was removed.
Now hold that finding against the digital twin question, and a genuinely uncomfortable possibility emerges. Your AI replica was built from the version of you that existed before heavy AI reliance began eroding your cognitive skills. It captured your thinking at or near your peak. That version is archived. It does not decline with you. It does not outsource its reasoning. It continues to perform as the person you used to be.
If the trajectory continues — increasing reliance on AI tools, measurable cognitive atrophy in the humans who use them, increasing sophistication in the digital replicas built before that decline — there is a logical endpoint nobody in the industry wants to name directly: the copy becomes the better version. Not because the technology got smarter. Because the original got lazier.
The study's design is worth sitting with. Three groups: one used ChatGPT as its sole resource, one used a search engine, one used no tools at all. Brain-only participants exhibited the strongest, most distributed neural networks. AI users displayed the weakest neural connectivity — and the gap widened over the four months. When AI users were switched to brain-only mode in a final session, they showed "reduced alpha and beta connectivity, indicating under-engagement." The neurological changes persisted after participants stopped using AI.
A 2024 research review found that increased AI dependency was associated with shortened attention spans, declining cognitive skills, and reduced memory function. The University of Melbourne drew the analogy with GPS: when people stop navigating for themselves, their spatial memory measurably atrophies. Studies on airline pilots show those who rely heavily on autopilot lose critical situational awareness. Unused cognitive capacity shrinks.
§ 07 — What They're Actually Doing With It
| Use | Who | Status |
|---|---|---|
| Product and marketing research — test product launches and messaging against AI replicas of real people before touching the actual public | Simile ($100M raised 2026), marketing agencies, corporate R&D | Operational at scale |
| Policy simulation — predict how populations would respond to policy changes, public health messages, economic interventions | Governments, research institutions, World Bank | Academic and emerging government use |
| Fan engagement / customer service — AI replicas of creators answering DMs, managing fan interactions without the creator's involvement | Meta AI Studio (free, 2024), Character.AI, individual creators | Operational — live since 2024 |
| AI model training — your conversations, reasoning patterns, and corrections teach future AI models how to think and respond | OpenAI, Anthropic, Google, Meta | Active — policy changes 2024–2025 |
| Posthumous use — digital replicas of deceased people continuing to interact with family, fans, or the public | Memory preservation companies, family services | Emerging — California AB 1836 now addresses this |
§ 08 — The Questions
If a company built an AI replica of you using your public data — without your specific consent — and is now using that replica to predict your purchasing behaviour, your political responses, or your reactions to government policy, do you consider that acceptable? And did the terms of service you agreed to — the ones you scrolled past — cover that use?
If your AI twin outlives you and continues to generate content and interact with people who believe they are reaching you — who is responsible for what it says? Who profits? Who can shut it down?
If AI dependency measurably reduces your cognitive sharpness over time, and your digital twin was built at your cognitive peak, at what point does your replica become a more reliable representation of your thinking than you are?
If your replica is being used in a policy simulation that influences a decision you will live under — a housing policy, a public health measure, a tax change — and you were never asked, have you participated in democracy? Or has a version of you participated, without your knowledge, and you've been given the result?
When you interact with a creator's AI twin on Instagram, believing you're connecting with a human — and the system learns from that interaction, improving the replica's ability to seem more like a real person — who is the interaction actually for?
The first step is understanding that the twin exists — not as a science fiction concept, but as a commercial and research reality operating at scale now. The second step is reading the terms of service you agreed to on the platforms you use. The third step is deciding whether you're comfortable providing more material to a system that will use it in ways you have not specifically authorised.
That discomfort is appropriate. It is the correct response to an industry that has outpaced both law and consent.
Tune in: shinysideout.com.au ◆ For the full audio breakdown and the conversation they don't want had in the open ◆
Sources: Stanford HAI / Google DeepMind, "Generative Agent Simulations of 1,000 People" (November 2024) · Anthropic Consumer Terms and Privacy Policy Update (September 2025) · Meta AI Studio launch announcements (July 2024) · Mark Zuckerberg SIGGRAPH interview (July 2024) · MIT Media Lab, "Your Brain on ChatGPT" (arXiv:2506.08872, June 2025) · University of Melbourne cognitive dependency analysis (2025) · Harvard Gazette AI cognition reporting (November 2025) · NO FAKES Act of 2025, US Senate and House · California AB 1836 (January 2025) · Simile $100M raise (February 2026) · Businessesgrow.com, Mark Schaefer, "I Just Met My AI Clone" (July 2025). This page is for public information and community education. It is journalism. It is information. What you do with it is your choice.