Meta will begin using AI chat as data for ad targeting and content personalization across the Meta ecosystem. Here’s what we can do.
Recently, I got an email from Meta Privacy saying that they’ll start using my interactions with Meta’s AI tools to “personalize” my experience by suggesting content I may find interesting and showing ads that are more relevant to me. I’m generally positive about technology use (when accompanied by digital literacy training), but this struck me as beyond the pale. First, having “better” ads isn’t my idea of personalization. Second, yes, the email made this intention transparent, but in practice, it will be completely invisible to everyone who hasn’t read the user agreement. This is my take. What do you think?
Key Points
- Meta’s AI chats now join our likes, follows, and clicks as inputs driving content and ad algorithms.
- Our tendency to bond with AI leads to self-disclosure that can be recorded and repurposed.
- AI-driven targeting reinforces our existing thoughts, narrowing exposure to new ideas and increasing perceived risks or threats.
- Teens naturally use chat tools to explore, inadvertently seeding algorithms that reinforce tentative ideas and misinformation.
On December 16, 2025, Meta will begin adding AI chats to the behavioral data it gathers to further personalize its experiences and ad targeting. This move blurs the line between private communication and behavioral data, raising questions about how AI interactions can be used to shape what we see and how we see ourselves online.
Meta currently provides generative AI to users across its ecosystem of apps, including Facebook, Instagram, WhatsApp, and Messenger, and as a standalone app. Even though we don’t pay in cash, Meta’s apps aren’t free. We pay with our data. AI chats take this to a whole new level. Conversations that feel private will now be part of the price you pay to use Meta’s tools.
Behavior as Data
Every click, swipe, and chat tells a story. AI conversations are no different, but they’re more revealing. Technology of all kinds gathers data about what you do. Online, it tracks your activity using cookies, device fingerprinting, IP addresses, and tracking codes. Offline, it follows you through GPS, Bluetooth beacons, and Wi-Fi scanning.
But you are not powerless. Understanding these two things is the superpower that lets you take control.
- Learn how any technology you use works, such as how it handles your data and privacy, and what settings you can control.
- Understand the built-in vulnerabilities that keep us engaged: notifications, the allure of social connection, and FOMO.
What Meta Will and Won’t Use
When Meta uses your AI chat interactions as additional input, your thoughts about hiking, cooking, parenting challenges, or mental health can shape the Reels, posts, suggested groups, and ads you see. Meta says it will not use chats about “sensitive topics” such as religion, sexual orientation, politics, and health for ad targeting. But to make that distinction, they still must collect and process your data to decide if it’s fair game.
Why AI Feels Personal
Humans are wired to connect through conversation. Chatting, whether with a person or a bot, activates social cognition networks that foster trust and self-disclosure. We anthropomorphize technology, projecting intention and empathy onto digital assistants.
That makes AI feel safe. When users treat AI as a confidant, they often share details, doubts, and ideas they’d never post publicly. This psychological illusion of intimacy makes AI chats a goldmine for platforms that trade on behavioral insights.
A New Kind of Targeting: Your Thoughts
Previously, algorithms could only infer your interests based on behavior. With AI chat, they can go straight to the source: your thoughts.
Algorithms will be able to match our thoughts with tracked behaviors. If the algorithm knows we asked about teen stress, it might serve us more posts about adolescent behavior. If it links that AI query with visiting a site for parental-control apps or teen mental health resources, it now has a whole new level of specificity from which to infer and shape our reality.
This can sometimes feel helpful, but it creates a tighter “feedback loop” that can narrow your worldview in ways you may not even realize. The more you engage with a topic, the more the system reinforces it. Over time, this can magnify anxieties or amplify perceived threats, fueled by our natural attentional bias toward emotionally charged content.
The Illusion of Privacy
AI chat can be helpful and entertaining, and, because it feels private, it gives us a place to try things out. We can role-play social situations, explore identities, or ponder our wildest dreams. AI chats feel like a safe space to try on ideas, experiment with self-presentation, and ask questions that may reflect our vulnerabilities or curiosities. This kind of exploration is particularly common (and healthy) during adolescence.
However, those signals feed the recommendation system, where the algorithm can strengthen aspects of the self that were tentative or experimental but not yet internalized. Narrow feeds can bias our views of what’s good and bad in ourselves and others. They can be reflected in the stories we see, the communities we join, the ads we’re served, and our sense of self. This algorithmic reinforcement has been linked to polarization, anxiety, and distorted perceptions of social norms (e.g., Ito et al., 2023; Pariser, 2011).
What You Can Do
- Be mindful of what you ask your AI. When you ask Meta’s AI a question, it may influence what you see in your feed, so use it with awareness.
- Check your ad preferences. Navigate to Meta’s Settings to check what personal data the system is using and limit personalization.
- Educate teens and kids. Explain that AI chats in the Meta ecosystem are not isolated, private spaces: what feels like a private conversation can feed algorithms.
- Consider alternative tools. If you want to use AI chat for personal reflection or sensitive topics, look for tools outside the Meta ecosystem. Meta does not have access to conversations with other AI chatbots, such as ChatGPT or Claude. Each has its own user agreement, so check the privacy and security policies to confirm that your conversations will not be used for targeting or AI training.
Final Thoughts
Meta’s policy shift marks another step toward algorithmic intimacy, the blending of private cognition and public data. What we once considered internal, our curiosities, fears, and reflections, is now measurable and marketable.
The good news: Awareness brings choice. You might not be able to stop all personalization. You can choose where and how you engage, what data you feed into these systems, and how you guide others, including your kids, to use AI thoughtfully.
References
Ito, M., Cross, R., Dinakar, K., & Odgers, C. L. (Eds.). (2023). Algorithmic rights and protections for children. MIT Press. https://doi.org/10.7551/mitpress/13654.001.0001
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.
Dr. Pamela Rutledge is available to reporters for comments on the psychological and social impact of media and technology on individuals, society, organizations and brands.