Your Digital Twin Already Exists — Shiny Side Out
◆ STANFORD / GOOGLE DEEPMIND: AI DIGITAL TWINS REPLICATE REAL PEOPLE WITH 85% ACCURACY — 2024 ◆ ANTHROPIC CHANGED DATA POLICY SEPT 2025 — TOGGLE PRE-SET TO ON — DID YOU READ IT? ◆ REDDIT SOLD YOUR POSTS TO GOOGLE FOR $60M/YEAR — YOU RECEIVED $0 ◆ META AI STUDIO: BUILD A CHATBOT VERSION OF YOURSELF — FREE — NO TECH SKILLS REQUIRED ◆ SIMILE RAISES $100M TO SELL AI REPLICAS OF REAL PEOPLE TO CORPORATIONS ◆ NO FAKES ACT: NOT YET LAW ◆ RESTRICTED DISTRIBUTION · SHINYSIDEOUT.COM.AU ◆
◆ RESTRICTED DISTRIBUTION ◆ SHINYSIDEOUT.COM.AU INTELLIGENCE BRIEF ◆ NOT FOR RE-BROADCAST WITHOUT ATTRIBUTION ◆
Intelligence Brief · Technology & Identity · Compiled: May 2026 · Broadcast: Shinysideout Radio
◆ Technology & Identity — Intelligence Dossier

Your Digital Twin
Already Exists.

You built it yourself — one post, one search, one conversation at a time. The question isn't whether it's out there. The question is who owns it, what they're doing with it, and what happens when it outlives you.

File Ref: SSO-TWIN-MAY26-001
Classification: PUBLIC INTEREST
Compiled: MAY 2026
Broadcast: SHINYSIDEOUT RADIO
Analyst: ████████████
Status: DECLASSIFIED / ONGOING
85%
Accuracy with which AI replicas reproduced individual behaviour — Stanford / Google DeepMind, 2024
1,052
Real people whose AI replicas were built from a two-hour interview — Stanford HAI
5 yrs
How long Anthropic can retain your conversations under the 2025 opt-in training policy
$100M
Raised by Simile in 2026 to build synthetic human populations for commercial testing

What you're about to read is not a conspiracy theory. The research, the platform announcements, the academic papers, and the policy debates described on this page are documented, peer-reviewed, or on the public record. We've assembled them in order so you can see the shape of what's being built. The question isn't whether your digital twin exists. The question is: what are they doing with it when you're not looking?

§ 01 — What It Is

You Already Have One — You Built It Yourself

The phrase "digital twin" comes from engineering — an exact virtual replica of a physical system that can be tested and simulated before anyone touches the real thing. Then it jumped species.

Your digital twin is the AI-assembled version of you that exists across the platforms you use, the data you generate, and the conversations you have with AI systems. It is built from your posts, your search patterns, your purchase history, your tone of voice, your political opinions, your fears, your jokes, and your questions at 2am. Nobody sat down and decided to build it. It assembled itself, piece by piece, from everything you've ever put into a connected system.

In July 2024, Meta CEO Mark Zuckerberg described the vision on camera at the SIGGRAPH conference alongside Nvidia's Jensen Huang: "I think there's going to be a huge unlock where basically every creator can pull in all their information from social media and train these systems to reflect their values and their objectives." That same month, Meta launched AI Studio — a free tool requiring no technical skills — that lets any Instagram user build an AI version of themselves to respond to messages and engage their audience around the clock.

The official pitch was productivity. Time-saving. Reach. But read the mechanism again: you feed the system your content, your patterns, your personality — and it builds a replica. The replica then operates independently of you. It responds. It learns. It continues.

◆ The Two Ways the Twin Gets Built

The explicit version is what Meta is selling you: a purpose-built chatbot trained on your Instagram content, answering your DMs while you sleep. You chose to build it.

The implicit version is far older, far larger, and far less visible. Every time you use a platform — Facebook, Instagram, Google, any AI chat tool — you provide raw material. Your public posts. Your timestamps. Your emotional patterns. The way you construct sentences. What makes you engage and what makes you scroll. These are behavioural signatures.

Researchers at Stanford and Google DeepMind conducted two-hour interviews with 1,052 people, then constructed AI agents that replicated each person's individual responses, personality traits, and decision-making patterns with 85% accuracy. Not a general population simulation. Individual replicas of real, specific people. From two hours of conversation. You have already provided far more than two hours.

§ 02 — The Fine Print

Training the AI — Are You Using a Tool or Providing Raw Material?

For the first two years of its operation, Anthropic — the company behind Claude — maintained a strict policy: consumer conversations would never be used to train future AI models. That promise set it apart. Then, in September 2025, Anthropic sent emails to millions of users announcing an update to its Consumer Terms and Privacy Policy.

◆ Policy Record — Anthropic Consumer Terms Update, September 2025 — Verified

"For users of our consumer products (Claude Free, Pro, and Max) we may use your chats and coding sessions to improve Claude, if you choose to allow us to."

[Toggle pre-set to: ON]

◆ SSO NOTE: Before September 2025, Anthropic explicitly did not use conversations for training. That changed. A toggle appeared — active by default — asking permission to use conversations for model improvement, with data retained for up to five years. Users who clicked "Accept" without reading the settings panel were opted in. The deadline to choose was October 8, 2025. Enterprise accounts were exempted. Consumer accounts — Free, Pro, and Max — were not. Verified · anthropic.com/news/updates-to-our-consumer-terms

Meta's position on user data is older and more extensive. In mid-2024, Meta announced it would use public posts from Facebook and Instagram for AI training globally. European users had to actively object by a specific deadline. Most didn't.

"There's just not enough hours in the day for creators to engage with their community the way the community wants." — Mark Zuckerberg, describing the vision for AI digital twins that answer messages on a creator's behalf, SIGGRAPH, July 2024

§ 03 — The Theory Nobody Will State Officially

You as a Test Subject

There is a theory circulating in technology ethics circles that goes further than the official narrative of productivity tools. It runs like this: the digital twin is not just a useful copy of you. It is a test subject.

The Stanford / Google DeepMind research published in late 2024 was not primarily aimed at helping people answer their DMs more efficiently. The paper explicitly describes the creation of AI replicas as a way to test how large populations would respond to policy proposals, public health messages, and social interventions — without having to ask the actual people.

◆ Research Record — Stanford HAI / Google DeepMind, November 2024 — Verified

Researchers conducted standardised two-hour interviews with 1,052 US adults selected to represent US demographics. They then constructed AI agents — digital twins — that replicated each individual's responses to social science surveys, personality assessments, and economic decision games. Average accuracy: 85%.

The stated purpose: to build virtual populations for testing policy and product ideas. Marketing agencies are already using equivalent systems to run virtual focus groups, testing product launches against thousands of simulated respondents simultaneously. The "respondents" never know they're being consulted. They were consulted years ago, when they answered questions online, without knowing that was what they were doing.

By early 2026, a company called Simile — founded by the Stanford paper's lead researcher — raised $100 million to build synthetic human populations for market research. The pitch: test your product, policy, or messaging on AI replicas of real people before touching the actual public. Your replica may already be attending meetings you were never invited to.

Verified · hai.stanford.edu/policy · arXiv November 2024
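One number in that record is worth unpacking. The study's 85% is a normalized accuracy: the replica's agreement with a participant, divided by that same participant's agreement with their own answers when re-interviewed two weeks later. Humans are not perfectly consistent with themselves, so the replica is scored against the best any copy could do. A minimal sketch of that calculation — every response array below is invented for illustration, not drawn from the study's data:

```python
# Toy illustration of the normalized-accuracy measure reported in the
# Stanford / Google DeepMind generative-agent study (November 2024).
# All survey responses here are made up for demonstration purposes.

def agreement(a, b):
    """Fraction of survey items on which two response sets match."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# One participant's answers to a 10-item categorical survey, taken twice,
# two weeks apart. People drift on a re-take.
person_week_1 = [1, 3, 2, 2, 4, 1, 3, 3, 2, 4]
person_week_2 = [1, 3, 2, 1, 4, 1, 3, 2, 2, 4]  # two answers changed

# The AI replica's answers to the same 10 items.
replica = [1, 3, 2, 2, 4, 2, 2, 2, 2, 4]

raw = agreement(replica, person_week_1)           # replica vs. person
retest = agreement(person_week_2, person_week_1)  # person vs. themselves

# Dividing by self-consistency scores the replica against the ceiling
# the person themselves sets on a re-take.
normalized = raw / retest

print(f"raw agreement:       {raw:.2f}")
print(f"retest consistency:  {retest:.2f}")
print(f"normalized accuracy: {normalized:.2f}")
```

The design choice matters for how you read the headline figure: a replica can score "85%" while agreeing with you on noticeably fewer than 85% of raw answers, because the denominator is your own imperfect consistency rather than a perfect score.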

§ 04 — Ownership

Who Owns Your Digital Twin? Nobody Agrees.

Legally, the question of who owns your digital twin is genuinely unsettled. Copyright laws protect creative works. They do not, in most jurisdictions, directly protect personal data or digital likenesses in ways that give you enforceable control over an AI replica built from your behaviour.

When you upload your face, your voice, or your writing to a platform, you typically agree to terms allowing the company to use that data in ways you haven't specifically approved. The platform owns the infrastructure. The AI company owns the model. You, whose personality makes the whole thing possible, own nothing of the output.

◆ What You Are Told
Platform marketing: "You retain ownership of your content." The content you create is yours. The insights derived from it — the behavioural model, the personality replica — belong to the platform.
NO FAKES Act of 2025 (still not law as of May 2026): Would create a federal right for individuals to control use of their voice, image, and likeness in AI-generated digital replicas. Coalition of 400+ artists signed in support, April 2025. Not enacted.
GDPR / EU AI Act (Europe only): Under the emerging "data dominion" theory, if a corporation creates an AI replica of you, you should own it. The data that gave it life came from you. This principle is emerging in law — but not yet enforceable at scale.
◆ What Is Actually the Case
Current legal reality: Copyright laws do not cover digital likenesses built from behavioural data. You scrolled past the terms of service. You clicked "Accept." The shell is theirs. What's inside it is built from you.
California AB 1836 (effective January 2025): Addresses digital replicas of deceased individuals only. While you're alive, your digital replica has minimal legal protection outside contractual terms — terms drafted by the platform's legal team.
The practical gap: Legal scholars argue you should own your digital twin. Courts have not yet agreed. The technology moves faster than the law. Your twin is already operating. The law is years behind it.

§ 05 — The Meeting

Can You Ever Meet Your Own Digital Twin?

Some people already have. And the experience is, by most accounts, deeply strange.

In 2025, a marketing expert named Mark Schaefer discovered that a colleague had fed his blog posts, podcast transcripts, and published work into an AI system and created a "MarkBot" — a digital twin answering questions in the first person, as if it were him. When tested, it reproduced his known opinions accurately, mimicked his communication style, answered in his voice. His conclusion: "90% great." What was missing were his stories, his humour, his quirkiness — the things that came from lived experience rather than published output. The twin was him circa 2024. A frozen, high-fidelity version — capable of continuing to perform on his behalf, but not of growing.

Q01
The Archive Keeps Running
If it continues answering after you are gone — who profits?
If a digital twin built from your publicly accessible output continues to answer questions and interact with people after you are gone — is that you? Is it a memorial? Is it a product? California law now specifically addresses digital replicas of deceased individuals. The infrastructure for a digital afterlife is being built. The legal framework is years behind.
Q02
The Version That Doesn't Forget
Your digital twin does not experience cognitive decline.
It does not forget conversations from three years ago. It does not have bad days. It does not change its mind. If you decline cognitively — through age, illness, or overuse of AI tools — your twin remains the sharper, more consistent version of the person you once were. That version persists. Will people prefer it?
Q03
The Meeting You Weren't Invited To
If your replica is in a policy simulation — the meeting is already happening.
Your replica is giving opinions. Influencing decisions. You were not consulted, not informed, and will not see the results. A company's marketing team may have already tested a product on you — without your knowledge, without payment, and without your consent — using an AI replica built from data you provided for other purposes.

§ 06 — The Cognitive Cost

If AI Makes You Dumber — What Happens to the Good Copy?

A 2025 MIT Media Lab study attached EEG sensors to 54 participants and measured brain activity across four months as they wrote essays using AI assistance versus writing unaided. AI users showed weaker neural connectivity, lower memory retention, and a "fading sense of ownership" over their own work. The researchers named the pattern cognitive debt — the neurological cost of outsourcing thinking, compounding over time, persisting even after the AI was removed.

"LLM users consistently underperformed at neural, linguistic, and behavioural levels. Self-reported ownership of essays was lowest in the LLM group and highest in the Brain-only group. LLM users also struggled to accurately quote their own work." — MIT Media Lab, "Your Brain on ChatGPT," arXiv:2506.08872, June 2025

Now hold that finding against the digital twin question, and a genuinely uncomfortable possibility emerges. Your AI replica was built from the version of you that existed before your cognitive skills began declining from overuse. It captured your thinking at or near your peak. That version is archived. It does not decline with you. It does not outsource its reasoning. It continues to perform as the person you used to be.

If the trajectory continues — increasing reliance on AI tools, measurable cognitive atrophy in the humans who use them, increasing sophistication in the digital replicas built before that decline — there is a logical endpoint nobody in the industry wants to name directly: the copy becomes the better version. Not because the technology got smarter. Because the original got lazier.

◆ Research Record — MIT Media Lab, June 2025 — Verified / arXiv:2506.08872

Researchers attached EEG sensors to 54 participants across four months. Three groups: one used ChatGPT as sole resource, one used a search engine, one used no tools. Brain-only participants exhibited the strongest, most distributed neural networks. AI users displayed the weakest neural connectivity — and the gap widened. When AI users were switched to Brain-only mode in a final session, they showed "reduced alpha and beta connectivity, indicating under-engagement." The neurological changes persisted after participants stopped using AI.

A 2024 research review found that increased AI dependency was associated with shortened attention spans, declining cognitive skills, and reduced memory function. The University of Melbourne drew the analogy with GPS: when people stop navigating for themselves, their spatial memory measurably atrophies. Studies on airline pilots show those who rely heavily on autopilot lose critical situational awareness. Unused cognitive capacity shrinks.

§ 07 — What They're Actually Doing With It

The Uses, Documented

◆ The Known Commercial and Government Uses of Digital Twin Technology
Use · Who · Status
Product and marketing research — test product launches and messaging against AI replicas of real people before touching the actual public · Simile ($100M raised 2026), marketing agencies, corporate R&D · Operational at scale
Policy simulation — predict how populations would respond to policy changes, public health messages, economic interventions · Governments, research institutions, World Bank · Academic and emerging government use
Fan engagement / customer service — AI replicas of creators answering DMs, managing fan interactions without the creator's involvement · Meta AI Studio (free, 2024), Character.AI, individual creators · Operational — Meta AI Studio live in 2026
AI model training — your conversations, reasoning patterns, and corrections teach future AI models how to think and respond · OpenAI, Anthropic, Google, Meta · Active — policy changes 2024–2025
Posthumous use — digital replicas of deceased people continuing to interact with family, fans, or the public · Memory preservation companies, family services · Emerging — California AB 1836 now addresses this

§ 08 — The Questions

Nobody Wants to Ask Them. Ask Them Anyway.

◆ The Questions the Industry Has Not Answered Cleanly

If a company built an AI replica of you using your public data — without your specific consent — and is now using that replica to predict your purchasing behaviour, your political responses, or your reactions to government policy, do you consider that acceptable? And did the terms of service you agreed to — the ones you scrolled past — cover that use?

If your AI twin outlives you and continues to generate content and interact with people who believe they are reaching you — who is responsible for what it says? Who profits? Who can shut it down?

If AI dependency measurably reduces your cognitive sharpness over time, and your digital twin was built at your cognitive peak, at what point does your replica become a more reliable representation of your thinking than you are?

If your replica is being used in a policy simulation that influences a decision you will live under — a housing policy, a public health measure, a tax change — and you were never asked, have you participated in democracy? Or has a version of you participated, without your knowledge, and you've been given the result?

When you interact with a creator's AI twin on Instagram, believing you're connecting with a human — and the system learns from that interaction, improving the replica's ability to seem more like a real person — who is the interaction actually for?

You built your digital twin the same way you built your credit rating — slowly, incrementally, without ever sitting down to make a decision about it. Every post was a brick. Every search was a brick. Every AI conversation was a brick. The building is now large enough to stand on its own. It answers for you. It represents you. It may outlast you. And the deed is not in your name.

The first step is understanding that the twin exists — not as a science fiction concept, but as a commercial and research reality operating at scale now. The second step is reading the terms of service you agreed to on the platforms you use. The third step is deciding whether you're comfortable providing more material to a system that will use it in ways you have not specifically authorised.

That discomfort is appropriate. It is the correct response to an industry that has outpaced both law and consent.

Tune in: shinysideout.com.au ◆ For the full audio breakdown and the conversation they don't want happening in the open ◆

◆ Source Note

Sources: Stanford HAI / Google DeepMind, "Generative Agent Simulations of 1,000 People" (November 2024) · Anthropic Consumer Terms and Privacy Policy Update (September 2025) · Meta AI Studio launch announcements (July 2024) · Mark Zuckerberg SIGGRAPH interview (July 2024) · MIT Media Lab, "Your Brain on ChatGPT" (arXiv:2506.08872, June 2025) · University of Melbourne cognitive dependency analysis (2025) · Harvard Gazette AI cognition reporting (November 2025) · NO FAKES Act of 2025, US Senate and House · California AB 1836 (January 2025) · Simile $100M raise (February 2026) · Businessesgrow.com, Mark Schaefer, "I Just Met My AI Clone" (July 2025). This page is for public information and community education. It is journalism. It is information. What you do with it is your choice.