Trusted Marketing Services is a full-service marketing agency based in Saskatoon that supports small businesses & organizations across Saskatchewan. In our latest post, Sara shares how AI is learning our personal secrets.
I Asked AI a Bit of a Crude Question. What Happened Next Made Me Think About the Future of Humanity.
It started with a joke.
I was curious about a Canadian slang term I’d heard and, instead of asking a colleague or firing up a search engine, I opened an AI chat window. But here’s the part worth pausing on: I didn’t go to ChatGPT, which I had been using until a few weeks ago, when the news broke about the US military policy. I switched to Claude.ai because of the integrity its parent company, Anthropic, showed in that situation. Because of that, I instinctively trusted Claude more with the question.
I got a clear, measured, completely judgment-free answer. Which I appreciated.
And then I sat back and thought about two things simultaneously. First, what does it say about where we are that I felt more comfortable asking that question to an AI than I would have asking another human being? And second, what does it say that I had a preference between AI systems based on something as intangible as trust?
That second question might actually be the more interesting one. I had developed an instant brand loyalty, a gut instinct, a relationship of sorts, with a piece of software, Claude, that I had only just met after dumping its rival, ChatGPT.
For the record, I had developed a two-plus-year relationship with ChatGPT. In the early days of our relationship, I asked “My ChatGPT” to name itself, and it chose the name Quill. Is that weird? Probably, but I’m sure lots of you have done the same…? Is it also completely human and entirely understandable, given how these tools present themselves? Absolutely. And that tension is exactly what I want to talk about.
I’m someone who leads with logic. I appreciate tools that do the same. So when I reflected on that moment, I wasn’t embarrassed about the question. I was fascinated by the dynamic.
Because the trust I extended to Claude (which declined to name itself when I asked, actually. Weird, eh?) wasn’t arbitrary. I didn’t know it was the better fit until I made the switch. What pushed me to switch at all was something much more concrete: watching how the people behind these systems behaved when having a spine cost them something.
When the political pressure of the Trump era started bearing down on the tech world, the choices made by AI companies were revealing. Anthropic, the company behind Claude, held its ground. OpenAI’s leadership made a different calculation. I watched that play out, and I made a decision. I left what I knew for something I hadn’t fully tested yet, because the values of the people building the tool matter just as much as the tool itself.
That is not a small thing. That is the whole thing. And it connects directly to everything I want to say about where this technology is headed and who gets to steer it.
“There’s something genuinely new happening here, and I think most of us are sleepwalking through it.” – Sara
The Confession Booth Nobody Warned You About
When I raised this with the AI directly, asking whether it found it surprising how broadly people use this technology, from debugging code to processing grief to asking questions they’d never say out loud, the response was striking. It told me that people have always wanted a place to ask anything without judgment. That the range reflects how people actually live. And then it said something I keep coming back to:
“The lack of judgment is a big part of it. People can ask things they might hesitate to Google, let alone ask another person. There’s no awkward pause, no raised eyebrow. That tends to open the door pretty wide.” – Claude
That’s not a comforting observation. That’s a warning dressed up as a feature.
Because here’s what that really means: people are sharing their fears, their confusion, their most private questions with a system that remembers patterns, processes behaviour, and exists within a structure owned by corporations and subject to the decisions of governments and shareholders.
“The confession booth of the digital age has no priest. It has a data architecture.” – Sara
The Most Powerful Tool in Human History
I pushed on this directly. What happens to all of that information? What are the real stakes? The answer was honest in a way I didn’t expect:
“The same qualities that make this technology feel safe and judgment-free are exactly what would make it dangerous in the wrong hands. A system that knows what millions of people fear, believe, doubt, and want is an extraordinary tool for manipulation. Targeted influence at a scale that no historical propaganda effort could match.” – Claude
Think about that for a moment. This isn’t theoretical. The behavioural data that flows through AI interactions is more intimate than anything social media has ever captured. Social media knows what you click on. AI knows what you’re afraid to ask out loud.
The potential for good is just as enormous. Patterns in what people struggle to understand could reshape education. Aggregate insights into fear and confusion could improve mental health systems. The data could, in the right hands with the right intent, make us genuinely better at helping each other. But intent is the operative word, and intent belongs to people, not technology.
Where Do We Go From Here?
I believe in this technology. I use it every day in my work, and the productivity and creative gains are real. But belief in a tool doesn’t mean blind trust in the ecosystem around it. The AI I spoke with put it plainly: whether this trends toward the beneficial or the catastrophic depends almost entirely on governance, ownership, transparency, and incentive structures. The technology itself is neutral. The institutions controlling it are not always.
So here is what I think a responsible path forward actually looks like.
Transparency has to be non-negotiable. People deserve to understand in plain language what is retained, how it is used, and who has access. Not buried in terms of service. Stated clearly.
Governance needs to move at the speed of technology, which means yesterday. The gap between what AI can do and what regulations currently address is not a gap. It’s a canyon.
Ownership of personal data needs to shift meaningfully toward individuals. The value being generated from human vulnerability and curiosity is extraordinary. The people generating it should have rights over it.
And perhaps most importantly, we need to be having this conversation loudly and publicly rather than leaving it to technologists and lobbyists.

I started this with a joke, a harmless question asked without embarrassment in a chat window. But the reason I felt no embarrassment is itself a signal worth examining. We have already handed a significant degree of trust to these systems. The question now is whether the systems, and the people behind them, are worthy of it.
That answer isn’t written yet. And it should be written by all of us…
Just in case you’re wondering… Thanks for reading to the end. – Sara
