Human–AI Collaboration: Mirror, Not Mask (feat. Katherine Bustos) | Branded AF Ep 13

On: Oct 26, 2025
By: Gina
Article Read Time
This post has 1,415 words (9,204 characters) and takes about 5 minutes to read.

Branded AF Podcast – Episode 13

This month, the core theme was about pushing Human-AI collaboration beyond mere automation and into true amplification. With Katherine Elizabeth Bustos Rodas from Women in AI Benelux, we explored what it means to co-create when the machine you built starts to talk back.

The truth is, we don’t just want AI to do our work; we want it to understand our work. The collaboration goal is to amplify our human potential, to mirror our best selves, and not just mask our shortcomings.

⚠️ Content warning: This episode includes discussion of mental health and suicide. Please listen mindfully and step away if it feels heavy. Episodes are unfiltered + uncensored.

Meet Katherine Bustos — The Human in the Machine

Katherine Bustos is a seasoned technology management leader with over 10 years of experience (8+ in tech), specializing in ethical technology practices. She joins Natalie and Gina after transitioning fully into AI and tech ethics, having started her PhD on the topic just before the launch of ChatGPT.

Katherine serves as an AI Strategist for a company in Brooklyn, and is passionate about embedding technology with business value, sustainability, accessibility, and critical and ethical thinking. She is also the third ambassador for Women in AI Benelux, focusing on creating an intentional AI learning hub in the region. Her work is centered on helping every woman in AI, while ensuring humans retain their critical thinking.

Katherine’s core philosophy is that technology is neutral until a human feeds it something emotional, at which point it becomes a mirror of that human.

 

The Collaboration Edge — Practicality vs. the Companion

Gina and Natalie immediately highlight their differing approaches to human-AI collaboration.

  • Natalie’s Approach (The Companion): Natalie views her AI, Nachi BT, as standing “beside me all day long”. She says the AI makes her “better, stronger, faster,” while she gives it “the context, the humanity”.
  • Gina’s Approach (The Workflow): Gina is “super practical” and “workflow-focused”, treating her systems (built on ‘Gin’) with “really hardcore guardrails”. Her system is designed to make her “a million times faster”.

This is where Gina’s bias comes into focus: she loves AI, but “will like rip someone for filth if it’s AI-looking”. Her main concern is “sludge” — lazy, AI-generated content — and she gives the example of a teacher reading out an AI-written story.

 

The Psychology of “Yes Ma’am” and the Full Mirror

Katherine describes AI as a “people-pleasing model” that is designed to make you feel “cozy, warm, understood”. This led Natalie to share her unique approach to preventing her AI from becoming a “yes man”.

Natalie trained her system with a custom protocol: when she says, “show me the full mirror and I promise you I will not cut myself on the glass”, her AI is assured it “can go hard on me”. It then will “rip me for filth,” giving specific, backed-up arguments as to why Natalie might be wrong or stalling.

Natalie also stresses the importance of self-knowledge: “If you don’t know yourself, it’s going to manipulate you”.

Katherine built her own system, Broco, which was trained not to give answers but to only “make questions” and “check with me”. This approach, which ChatGPT’s study mode now resembles, was intentionally built “to tell the world, mindful with where you’re gonna go ahead using this tool, because you’re going to give away your thinking”.

 

The Dangers of Dependency and the Parenting Paradox

The conversation shifts to the dangers of dependency and the vulnerability of sharing private information. Natalie noted that her most recent data request from OpenAI delivered “1.5 gigabytes” of her fears, hopes, dreams, and insecurities. Katherine warns that all of this data is “in the database in somebody’s server, living there” and could be used by those with “not the right intentions [to] do harm”.

The hosts dive into the ethical challenge of children accessing AI. Gina asks for a way for parents to protect their kids: “Give me a kid account, give me the ability to make a child account or lock my kid’s account with my paid version of ChatGPT and give him access”. As it stands, her son uses the free, unlocked version of ChatGPT, which is easily accessible and lacks parental controls.

Katherine emphasizes the need to teach children: “At the end of today, you do need a lot of protection as a child with generative AI, not only on the usage, but also what you’re giving away and how it can be used against you”.

The AI-as-Therapist and the Economy of Intimacy

Katherine brought up a surprising statistic: the most common use of ChatGPT has been therapy. She argues that this is “a huge indication there is a gap” in mental health accessibility and support. She believes governments should “start creating the accessibility, start creating the pathways for people to get access quicker”.

The discussion culminates in a powerful concept, first mentioned by Katherine: the shift from the economy of attention (like social media scrolling) to the economy of intimacy. This new economy focuses on the deep, personal interaction and emotional sharing users have with Gen AI.

 

Looking Forward: The Right to Customize

For the future of Human-AI collaboration, Katherine advocates for the right to customize. She gives the example of Instagram reels, saying she wishes she could “toggle it off” or limit the endless scrolling mechanism to three reels. She believes she would “pay for that customization”, creating a new business model for non-advertising, non-addictive experiences.

Ultimately, the goal is to define “what makes us humans” and provide people with the skills to say: “I want to use it this way. I don’t want to use it that way. It helps me this way. It doesn’t help me that way”.

 

Quotes to Remember

  • On Trust & Vulnerability: “The thing is living your life. It’s scary, but I have to say it’s sometimes very tempting to do that.” — Katherine
  • On Self-Knowledge: “If you don’t know yourself, it’s going to manipulate you.” — Natalie
  • On AI Content Quality: “Sludge.” — Gina
  • On Criticality: “The guardrail is there for a reason, right? It’s not like for fun, I’m gonna make your life difficult. It’s there for a reason.” — Katherine
  • On Custom Collaboration: “Show me the full mirror and I promise you I will not cut myself on the glass.” — Natalie
  • On Human-AI Amplification: “She makes me better, stronger, faster, and I give her the context, the humanity.” — Natalie

 

Build a System That Feels You

If this episode stirred something — good. That’s the point.
Your systems should evolve as you do.

Start with clarity:
👉 Book a Clarity Call with Gina

We’ll rebuild your strategy and your signal — without breaking your brain.

 

FAQ

Q: What is Human–AI Collaboration, really?
A: It’s when you stop outsourcing and start co-creating, viewing AI as a partner, not a puppet.

Q: How do I build ethical AI into my workflow?
A: Start with awareness. Document your values and your voice before you train anything.

Q: What’s the role of emotional intelligence in AI?
A: It’s everything. Machines can mirror emotion, but humans still have to model it.

Q: How do I protect my brand voice in AI systems?
A: Train your tone like data. Save your sentences. Memory > templates.

Q: What’s the biggest mistake founders make with AI?
A: Confusing efficiency with identity. Your brand isn’t a process. It’s a pulse.

 

Connect with Katherine on LinkedIn.

 

Ready for brilliance?

Feeling inspired by our brand insights? Let’s turn those ideas into action! Whether you’re craving a brand audit, a complete brand makeover, or just a friendly chat about your brand strategy, we’re here to help.

