What you need to know from this episode
Why AI trust has become conditional - and what leaders must do about it
Symphone'e Lindsey has spent over a decade navigating large-scale organizational change at companies like Amazon Web Services, PepsiCo, and now Twilio - and the pattern she sees repeating across AI rollouts is not hostility. It is conditionality. Employees have not stopped trusting their organizations. They have started requiring a reason to extend that trust to AI. The distinction matters enormously for how leaders frame their communication strategy.
The shift happens, Symphone'e argues, at the point of introduction. When AI feels imposed - announced from above, tied to productivity targets, and associated in the news cycle with layoffs - employees default to self-protection. They ask whether their skills will still be relevant, whether their career trajectory still exists, and whether the organization sees them as a person or a workflow to be optimized. These are not irrational fears. They are the natural response to a change that has not been adequately explained. Research on AI in HR consistently shows that transparency about purpose and scope is the single most reliable predictor of employee willingness to engage with new technology.
The more it feels imposed rather than introduced, that's when trust erodes. Employees are willing to engage with AI when they understand the why - and when they can see how it helps them, not just the company.
The WIIFM - what's in it for me - question is not a selfish one. Symphone'e is direct about this: in a moment of significant change, where multiple pressures are hitting employees simultaneously, it is entirely reasonable for people to ask how this particular change serves their own growth, not just the organization's efficiency goals. Leaders who dismiss or sidestep that question leave a vacuum that speculation fills. Leaders who answer it directly and honestly create the conditions for genuine adoption.
Integration pauses: the practice most AI rollouts are missing
Change fatigue is real - and AI rollouts land on top of it, not instead of it. Symphone'e's most actionable contribution to the playbook is the concept of integration pauses: deliberate, structured breaks built into an implementation timeline that give employees space to process, experiment, and ask questions before the next wave of change arrives.
The pause is not passive. It is an active learning environment. Employees are given room to explore how AI works in their specific context, above and beyond their job requirements - how they might use it personally, what use cases they can identify, and what questions they still have. The goal is to prevent the cognitive overload that comes from continuous roll-forward, where new tools arrive before employees have internalized the last ones.
When employees teach each other how to use tools, it normalizes AI as a resource rather than a threat. The recovery doesn't become passive - it's more structured, to allow and give permission to adapt at a human pace.
Peer learning is a critical component of this approach. In environments where employees teach each other - sharing prompts, use cases, and honest accounts of what did not work - AI adoption loses its top-down, compliance-driven character and becomes genuinely collaborative. Symphone'e describes the organizations where she has seen this work best: employees coming to shared sessions with things they found on their own, techniques that surprised them, and questions that led to collective problem-solving. The result is not just faster adoption - it is adoption that sticks, because employees feel ownership over the process. Effective change management depends precisely on this kind of participation being built in, not bolted on.
The AI Reassurance Playbook: Five Moves That Build Trust Through Change
Symphone'e Lindsey's practical framework for people leaders navigating AI rollouts - covering trust architecture, recovery time, manager enablement, honest communication, and listening discipline.
Introduce, Don't Impose
Lead with the why - not just the what. Employees engage with AI when they understand how it augments their role, not just how it improves company output. Answer the WIIFM question directly and early.
Build Integration Pauses Into Implementation
Create structured breathing room between waves of AI change. Give employees space to experiment, ask questions, and process what is shifting - before the next tool or policy arrives. Recovery time is not lost time.
Activate Managers as the First Line of Reassurance
Equip managers to normalize AI conversations in one-on-ones, acknowledge uncertainty honestly, and model the learning journey. They do not need all the answers - they need to be visibly on the same journey as their teams.
Have the Conversation You Have Been Avoiding
Silence does not create calm - it creates speculation. Host dedicated listening sessions where AI is the only agenda item. Employees are already forming opinions from external headlines; leaders who stay silent cede the narrative entirely.
Counter Skepticism With Curiosity, Not Defensiveness
Skeptical employees are paying attention - that is an asset. Treat their concerns as feedback to inform your enablement strategy, not resistance to overcome. Transparency about where AI has not delivered builds more credibility than overpromising.
The manager's role: normalizing uncertainty without pretending to have answers
Symphone'e is unambiguous on this point: managers are the most under-leveraged asset in any AI change program. They sit at the exact intersection where organizational strategy meets the lived experience of employees - which makes them uniquely positioned to either accelerate trust or quietly undermine it, depending on how they show up.
The instinct to wait until you have all the answers before communicating is precisely the wrong one. Employees do not expect their managers to have a complete AI roadmap. They expect their managers to be honest, present, and consistent. A manager who acknowledges that they are also learning - and who creates space for their team to surface concerns, share experiments, and ask questions without judgment - is already doing the reassurance work that most AI rollouts leave entirely to an all-hands memo.
Managers are probably the most under-leveraged asset in any change program, and AI is no exception. Their job isn't to have all the answers - it is to be consistent and provide an honest, present perspective.
Symphone'e also highlights the generational dimension of AI fluency within teams. Some employees are arriving with deep familiarity and high comfort; others are encountering significant new terminology every week and feeling quietly behind. The manager's role includes recognizing who in their team holds AI expertise and creating opportunities for that knowledge to flow laterally - not just top-down. This peer teaching model serves a dual purpose: it accelerates capability development across the team while giving individuals recognition for what they already know. Employee well-being during organizational change is directly tied to people feeling capable and seen - and peer learning, well-facilitated, does both.
Have the conversation you have been avoiding - this month, not next quarter
When asked for the single most immediate step people leaders can take to neutralize fear of obsolescence, Symphone'e's answer is disarmingly direct: have the conversation you have been putting off. The worst thing a leader can do in a moment of high AI anxiety is stay silent. Silence does not read as calm - it reads as confirmation of the worst-case scenario employees are already building in their heads from the headlines they see every day.
The format she recommends is a dedicated listening session where AI is the only agenda item - not a ten-minute slot in an all-hands. The goal is not to deliver a presentation. It is to create space for employees to write vision statements, identify workflow opportunities, ask the questions they have been afraid to ask, and collectively develop a language for talking about AI change that feels human rather than corporate. The absence of that language is itself a barrier: when employees do not have the vocabulary to articulate their concerns, those concerns do not disappear - they go underground.
Turning pulse survey data into a real-time AI risk detector
Symphone'e's perspective on pulse surveys is both enthusiastic and precise. She loves a good survey - and she is equally clear about everything that can go wrong with one. The gap between leader assumptions and employee reality is exactly what pulse surveys exist to close, but closing it requires more than fielding the questions. It requires acting visibly on what the data surfaces, communicating about that action, and being honest even when the news is not good.
During AI implementations specifically, she recommends looking beyond traditional engagement scores to psychological safety indexes, manager-to-employee trust indicators, voluntary attrition trends, and - critically - AI tool utilization rates. If tools that are available are not being used, that underutilization is its own signal. The question is not just whether adoption is happening, but why it is or is not. Pulse surveys timed to implementation milestones, rather than fixed annual or quarterly cadences, are far better positioned to capture the actual sentiment trajectory of an AI rollout.
Survey fatigue is a real constraint - especially when employees are already absorbing significant change. Symphone'e's guidance is to calibrate cadence deliberately, ensure questions are specific enough to yield actionable data, and make the feedback loop visible. When employees see that their survey responses have produced a real change - or at minimum an honest explanation of why a change could not be made - participation quality goes up. When they see nothing happen, responses become checkbox exercises. The discipline is not just in collecting data. It is in being accountable for what you do with it.
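The discipline described here — milestone-timed pulses, attention to trajectory rather than snapshots, and treating under-utilization as its own signal — can be operationalized with very little tooling. The sketch below is purely illustrative: the field names, team names, scores, and thresholds are hypothetical assumptions, not drawn from any specific survey platform.

```python
# Hypothetical pulse records: one score set per team per implementation milestone.
# The schema and threshold values are illustrative assumptions only.
pulses = [
    {"team": "Support", "milestone": "pilot",   "psych_safety": 4.1, "tool_utilization": 0.62},
    {"team": "Support", "milestone": "rollout", "psych_safety": 3.4, "tool_utilization": 0.41},
    {"team": "Sales",   "milestone": "pilot",   "psych_safety": 3.9, "tool_utilization": 0.55},
    {"team": "Sales",   "milestone": "rollout", "psych_safety": 3.8, "tool_utilization": 0.58},
]

SAFETY_DROP = 0.5      # flag if psychological safety falls this much between milestones
MIN_UTILIZATION = 0.5  # flag if fewer than half the team uses the available AI tools

def flag_at_risk_teams(pulses):
    """Return (team, reason) pairs whose milestone-to-milestone trajectory
    suggests hidden AI anxiety worth a follow-up conversation."""
    flags = []
    for team in sorted({p["team"] for p in pulses}):
        history = [p for p in pulses if p["team"] == team]
        first, last = history[0], history[-1]
        # Trajectory matters more than any single snapshot score.
        if first["psych_safety"] - last["psych_safety"] >= SAFETY_DROP:
            flags.append((team, "psychological safety declining"))
        # Available-but-unused tools are a signal in their own right.
        if last["tool_utilization"] < MIN_UTILIZATION:
            flags.append((team, "tools available but under-used"))
    return flags

print(flag_at_risk_teams(pulses))
```

The point of a sketch like this is the segmentation: aggregate scores would average the two teams together and hide the Support team's decline, which is exactly the gap between leader assumptions and employee reality that milestone-timed pulses exist to close.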
What you'll learn from this episode
| # | Topic | What you will learn | Applicable to |
|---|---|---|---|
| 1 | Conditional Trust | Why employee trust in AI has not disappeared but become conditional - and the specific conditions leaders must meet to earn engagement rather than compliance | CHROs, People Leaders |
| 2 | Integration Pauses | How to build deliberate recovery time into AI implementation timelines so employees can process, experiment, and ask questions before the next change wave hits | HRBPs, Change Leads |
| 3 | Well-being Metrics | Which well-being signals to track during AI rollouts - beyond engagement scores - including psychological safety indexes, attrition spikes, and tool utilization rates | CHROs, People Ops |
| 4 | Manager Enablement | How to activate managers as the primary trust-building layer in AI change - equipping them to normalize uncertainty, model learning, and facilitate peer knowledge-sharing | People Managers, L&D Leads |
| 5 | Fear of Obsolescence | Why silence creates the fear vacuum and how a single dedicated AI listening session - this month - can do more to neutralize obsolescence anxiety than months of top-down communication | People Leaders, CHROs |
| 6 | Skepticism as Signal | How to reframe employee AI skepticism from resistance to strategic feedback - and why transparency about where AI has not delivered is more trust-building than overselling its potential | CHROs, HR VPs |
| 7 | Pulse Survey Discipline | How to deploy pulse surveys as real-time AI risk detectors - including cadence, question specificity, the importance of visible follow-through, and how to avoid the checkbox response trap | HRBPs, People Ops |
Symphone'e Lindsey is a Global HR Executive and Head of Human Resources for Twilio's Go-to-Market organization, a proven problem solver who aligns people strategy with business growth across complex global transformations. She has held senior HR leadership roles across Twilio's Data and Applications Business Unit (2,200+ employees across NA, EMEA, APAC, and LATAM), Global Services, Marketing, and Twilio.org, and previously at Amazon Web Services and PepsiCo.
Her track record includes large-scale restructures affecting 28% of the workforce, 15-point engagement gains through a new Global HRBP Operating Model, a 25% boost in leadership representation for Black, Latinx, and Women, and EVP strategies that strengthened employer branding and talent pipelines, all while developing team members into senior leaders.
Frequently asked questions
Why do employees resist AI rollouts even when the tools are useful?
Resistance usually signals that AI was imposed rather than introduced. Employees engage with new technology when they understand the why, can see how it benefits their own growth, and are given space to experiment without pressure. When AI lands alongside layoff headlines and productivity mandates, even genuinely useful tools get caught in the crossfire of fear and distrust. The technology is rarely the problem - the communication and rollout approach usually is.
What are integration pauses, and how do they work?
Integration pauses are deliberate breaks in your AI implementation schedule - structured periods where no new tools or mandates are introduced and employees have time to process, experiment, and ask questions about what has already changed. They are most effective when paired with peer learning environments: shared sessions where employees can teach each other what they have discovered, discuss what has not worked, and build collective vocabulary around AI. The goal is to let adoption happen at a human pace rather than forcing continuous roll-forward that produces surface compliance and hidden disengagement.
Which well-being metrics should leaders track during AI implementations?
Traditional engagement scores are a starting point, not the answer. During AI rollouts, look additionally at psychological safety indexes, manager-to-employee trust indicators, voluntary attrition rates, escalation spikes, and - often overlooked - AI tool utilization data. If tools that have been made available are not being used, that gap is itself a well-being and adoption signal worth investigating. Pulse surveys timed to implementation milestones, rather than fixed quarterly cadences, are far better calibrated to capture how sentiment is actually moving in real time.
How should managers talk about AI when they do not have all the answers?
Honestly - which is precisely the point. Employees do not expect managers to have a complete AI roadmap. They expect consistency, presence, and honesty. A manager who says "I am on this learning journey too, here is what I know and here is what I do not" is modeling exactly the psychological safety that makes AI adoption less threatening. Managers should normalize AI conversations in one-on-ones, create space for questions without judgment, and actively identify team members who hold AI expertise and give them a platform to teach others.
How can pulse surveys help during AI rollouts?
Pulse surveys close the gap between what leaders assume is happening and what employees are actually experiencing - but only when deployed with the right frequency, specificity, and follow-through. Segment sentiment by team to identify pockets of high anxiety that aggregate scores obscure. Time surveys to implementation milestones, not the calendar. And critically: make the feedback loop visible. When employees see their responses produce real change - or at minimum an honest explanation of why change was not possible - participation quality rises and the data becomes genuinely actionable. A survey culture where nothing visibly shifts produces checkbox responses, not signal.
Full Episode Transcript
S06 E07: Reassure Teams in AI Rollouts: Playbook for Impact in 2026 — Symphone'e Lindsey with Darcy Mehta · 35 min
Hello everyone and welcome to season six of CultureClub X powered by CultureMonkey. I'm your host Darcy Mehta. CultureMonkey is an AI-powered enterprise employee engagement platform that helps people leaders listen to their employees and enhance workplace cultures.
CultureClub X is our global thought leadership forum where global CHROs and people leaders share insights, discuss trends, and exchange proven strategies for building thriving, future-ready cultures.
Today, we're truly honored to host Symphone'e Lindsey, Head of Human Resources - GTM at Twilio, one of the world's leading cloud communications platforms. Symphone'e, welcome, it's fantastic to have you here with us.
Thank you, Darcy. Thank you for having me. It's nice to be with you all on this week's spring break for my children. So hopefully you don't hear them in the background, but I've told them to be very quiet, so yes.
Sounds amazing and thanks for taking the time during this break time too. And to give a little background about you, with over a decade of experience leading global HR strategy across companies like Amazon Web Services and PepsiCo, Symphone'e has built a reputation for turning complex organizational change into human centered progress. At Twilio, she serves as a strategic people partner to senior executives supporting a globally dispersed workforce across North America, EMEA, APJ, and Latin America.
Beyond her corporate work, Symphone'e is a certified professional coach and founder of Ili Ori Coaching LLC, where she works with leaders navigating transformation, growth, and purpose. She's someone who has sat at the table for some of the most consequential people decisions in tech and consumer goods. And today she's here to share her personal perspective and playbook on one of the most pressing challenges people leaders are facing right now, helping teams not just survive AI rollouts, but thrive through them.
And just please note that the views she shares today are entirely her own and do not reflect the position of her employer. Symphone'e, your hands-on experience leading global teams through massive change and AI-driven shifts is exactly the playbook leaders are looking for right now. And we're so excited to have you here to discuss our latest topic, Reassure Teams in AI Rollouts, Playbook for Impact in 2026. So before we dive in, can you just share a little bit about your own leadership journey?
I think I started with my career journey, not really sure who I wanted to be when I grew up. Went through a wonderful program called En-ROADS, which is still happening, where they basically place interns with various companies and teach you how to convert. And so did that with UTC, United Technologies, a very well known company, doing a lot of their different subsidiaries, and then joined an HR LDP program where I did get to cut my teeth in the defense industry and different COEs - from talent acquisition to compensation to labor-employee relations. Did comp for quite some time and then took off into the HRBP world.
And so a lot of the things you shared - working with AWS, working with PepsiCo, now working with Twilio, and also having some other projects and initiatives that I'm pretty passionate about - really focusing on having the right people in the right place at the right time. And then also really just thinking about when you say those things, it's not just about skill set, but it's all the other things that coincide to help lead to what I would call a very agile world that we're living in that's very fluid. Everybody's talking about AI, AI, AI.
And so it's funny as you talk about the 2026 playbook - I bet if we look back in 2027, it'd be very curious to see what it is, because change is massive and real and happening in real time. And so how we navigate that change, how we provide the necessary agility for that change, is going to be needed and necessary. And what it looks like today may not be what it looks like tomorrow, because we don't know what we don't know about AI, despite it being a massive vortex of information. We're still all building a playbook, which is actually what makes it so much fun to be in the people's space right now. It's not traditional. It can't be that copy-paste of what we did years ago - we literally have to build it from scratch and keep iterating on it to see what works and what we want to keep and get rid of. So it's an exciting time, but also an interesting time, depending on what seat you sit in - either as a people leader or as an employee, which I'm both.
Thank you so much. Your journey sounds really interesting and such a fantastic program that you were part of. I love what you said - agility definitely is the key word, right? Because we don't know what we don't know. And it's likely we'll have to watch this back a year from now and see what's changed. Good - we're getting this in early in the year so we can work on your playbook this year, and then we'll just have to get your new playbook for next year. So let's dive right into our first question.
How has AI shifted trust levels in your organization, and which AI risks worry employees most right now?
You know, it's such a great question and such an interesting one, because I think from where I sit, and also from a lot of my peers who I talk to in other industries, sectors and businesses, trust hasn't necessarily disappeared, but it has become more conditional.
So what that means is that employees are willing to engage with AI when they understand the why - like most things. If you tell me why I need to do something and it makes sense and it clicks for me, I may be more inclined to do it. I think the more it feels imposed rather than introduced, that's when trust erodes - especially in a world where we're constantly seeing layoffs, or that's the message or the headline that's being tagged.
And so I think it's natural to say, okay, is my role at risk? How can this change? How can this impact me? And I think when companies are pushing it in such a way where there is not what I call the push-pull - we're pushing it, but they're also pulling for it, seeking the understanding of it, seeking the why of it. And most importantly, I know a lot of people don't like the "what's in it for me" concept, but that is a real thing in this day and age where a lot is happening all at once - how does this help me, not just for the sake of the company, but how does this help me in real time?
And so I think when we consider the fears of real redundancy - do the skills I have today remain applicable tomorrow? Does the growth path I'm on exist in three years, or do I have to completely shift and pivot in a way that I may not feel capable of being as agile in? I think that's uncertainty that leaders really have to truly address, and that's where the trust builds. But I think the more transparency about how AI doesn't necessarily change our roles - it may augment them. AI may make you more strategic and productive versus spending non-value-added time in certain places and spaces that you may not even enjoy in your day to day.
I think that is the first move to allow for trust building and also giving people the space to fail. Because I think we're in a world where people feel like I have to know this. And you're also dealing with multiple generations navigating technology - folks for whom this is what they do in school, this is all they know, and then folks who are like, "What? How do you build that?" And so I think, again, giving people the place and space, even in a grassroots way, to experiment with it, to pilot it, to provide acumen, to provide clarity of what is in scope in terms of governance - because that's also a big thing: how you use it ethically, how you use it in a way that won't get you fired. I think those are all the things that are new and true that I've seen be the most effective in bringing people along on their AI adoption journey, with trust still being at the forefront.
So interesting. I think you covered a lot of very good points there. And I like what you said about "what's in it for me" - no one wants to say that's what they're thinking, but let's be real, it's a very real thing. And it's not a bad thing - you're wondering how you fit in, whether this aligns with your career goals. And it was interesting what you said about the push and the pull - the employer is pushing it, but is the employee also interested? This is happening whether we like it or not. So how can we get ahead of it? How can we be part of it? Let's try to understand it and work with it. So very interesting points, and this definitely leads us into our next question.
What practices have you introduced to give employees recovery time during AI change?
You know, I love this question so much because I think change fatigue is real, right? And you have AI and everything that comes with it - it's massive. There's a lot of information, a lot of books, a lot of text. And quite frankly, Darcy, it's not new. It feels newer because of the place and space that we're in, but we've been playing in this AI space in some form or fashion for quite some time. I think it's now just coming to a full head.
And so I think with change fatigue and AI rollouts on top of organizational change, one practice I think needs to be deliberate - and why I like this question - is the concept of integration pauses into implementation. What I mean by that: you're giving employees a space to ask questions, you're allowing them to experiment. How do they use AI? How do they use it at home? What are the use cases that they can provide? And giving them a space to process, to understand what's shifting between the next wave.
I also think what I've seen be the most effective is creating an environment of peer learning - where it's not just the company saying, "Here are all the playbooks on how to activate AI." Really taking a step back and saying, what are you utilizing and how do we teach each other? And in the spaces where I've seen folks do that really well, they come up with all types of use cases that make it fun. They're providing their materials, their prompts, things they've pulled not just from the company but from their own research. "I tried this, it didn't work - what are you using?"
I think that level of partnership and participation gives folks both recovery time and also a vantage point that it doesn't have to be perfect. It's not always going to look exactly right in real time. And here's how I iterate, and here are my allies in the journey of AI utilization. Here are people I can go to, people who will geek out on a thing. And here are the people who just want to use it for communications - how can they get better and iterate on it?
And so I think when employees teach each other how to use tools, it normalizes AI as a resource rather than a threat. The recovery doesn't become really passive - it's more structured, to allow and give permission to adapt at a human pace. And I think that creates a psychological space that counters the narrative that a lot of people are telling themselves. You see these companies saying, "We said AI is going to do this," but we're also seeing research that it's not doing that. How do we move at a pace that is real rather than one that is artificial? And I think the pauses in between give people the space to see that and understand the why behind it, so they're also not discounting the capabilities and the value that AI can have for them over time.
So true - it makes me think of that phrase about being scared of what you don't know, which is true and normal. I think it's the giving of the pause, like you said, and then the active learning - making it outside of just "this is what you need to know for your role," and instead playing around with it, making it more fun. Having that safe space, like you said, having that pause and making that a normal part of it - it helps normalize it. I think that's such great advice and could be the real difference.
What well-being metrics do you track during AI implementations?
Yes, you know, this is such a good question because it had me actually really thinking - how intentional are people with that? I think looking beyond the traditional engagement scores, like with CultureMonkey - the surveys and all - but if you look beyond it, during periods of significant change, really looking at some of the psychological safety indexes. Because when you think of AI, there is a level of psychological safety that is very core to how people decide to use it, decide not to use it, decide to engage with it - both personally and professionally.
And so as you look at how people feel about it, it helps you understand what you can do to mitigate some of those concerns. Whether it's manager-to-employee trust indicators - because let's be real, we ask managers to do a lot, but they're also at the forefront of how this looks and shows up within their respective organizations. When you're looking at this type of data - whether it's manager trust, whether it's psychological safety, whether it's very specific questions - you also get a sense of where sentiment is today, where it's going, and what's missing in between to bridge the gap to where you want it to go.
And so I do think pulse surveys timed to implementation or milestones are critical. And with surveys, it's always interesting - depending on what happened that day, depending on how I feel, I may or may not answer those questions with the level of intentionality we'd like. I think we also have to pay close attention to other signals like voluntary attrition, spikes in escalation, or decline in participation. And the other piece is: what is the utilization of what you have today? Most companies, as they're implementing AI, have checks and balances on what you can use, what you cannot use. For what is available, are people using it? If they're not using it, why? I think that all helps feed into the wellbeing piece.
And I also think there is a place for more specificity around what the actual metrics can be that are very AI-focused. Thinking through this question really made me think - I don't know that I've seen folks become that specific above and beyond a typical AI survey, and we have to be mindful of how honest people will actually answer those questions with the level of certainty that we can take that data and move it. So above and beyond that: what are the focus groups? What are the different individuals we can partner with to get real data in addition to the qualitative and quantitative data that we have?
It's so true - the data is important, of course, but it's what you do with it, it's the why behind it all. And there are so many other things too. It sounds like it's a space that definitely needs to be explored more. I like that phrase - psychological safety - and having checks and balances, like you said.
What role should managers play in day-to-day reassurance during AI adoption?
A hundred percent. Managers are probably the most under-leveraged asset in any change program, and AI is no exception. They essentially sit - if you think about a manager - that's your first ambassador, that's your representative. They sit at the intersection of strategy and the lived experience of most employees. So their job isn't to have all the answers, but it is to be consistent and provide an honest, present perspective. In most cases they're seen as the representative of the company - this is what the company is pushing down. So I do think managers' role is to really normalize the conversation about AI, whether in one-on-ones, and acknowledging the uncertainty.
Like, I think in a world of AI where there's a lot of things that are artificial and you see these taglines - I'm starting to see the sentiment around AI is good to use because of all the things it provides. But there's also starting to be a stigma of: what's real versus not? And so when utilizing AI, as managers and as employees, we also have to be real and transparent about where it is. In some cases, you see a lot of articles about the amount of productivity that this company found. And so this is why they've gone in the direction they've gone with how they think about their employees and their employee experience. But then there's tons of other use cases where we're nowhere near close to getting there, or we still need to train it.
And so as people are using it and having these moments of doubt - "Why should I use this? It's not even real." - we have to be honest about where we are in the journey as leaders and managers. We also have to be honest about what the expectation is. And even in introducing it: how do we use it in a setting where our employees feel safe? Are you having AI hackathons? Are you having sessions to discuss how to use it - without boiling the ocean, because there are so many terms that people do not understand?
One of my favorite prompts for AI is "talk to me like a five-year-old." Because we assume that people know every single thing at this point - but there are new words, new terminology, things changing and iterating as it happens. So how are we holding space for it? How are we sharing our happy moments with AI, our challenging moments, our upsets with AI? Again, that normalizes it. You don't have to be perfect. A lot of the fear is: I've got to know this, and if I don't know it, I'm out the door. We all have to be honest about the journey we're on with AI and ensure that we, as leaders, are giving that level of clarity and transparency - creating comfort for people to get on the journey with us.
Exactly. And what's interesting about this moment is that everyone is learning - managers are also on this journey while trying to guide their employees. It's such a unique space. Like you said, having that clarity, being transparent, and managers sharing their own journey is so important, because this is new and changing for everyone.
And the one thing about that is being mindful of where the manager is, right? Not every manager is AI-fluent, and that is okay. So make sure they at least know the basics and are playing around with the tools in some form or fashion, so they get comfortable being uncomfortable talking to their employees about what's working and what's not. Also, if you look at the different demographics in the workforce, how are you pulling those folks forward to learn from them? Sometimes it's not me as the manager pushing down - I may be saying, hey, who really knows this stuff well? Give them a platform to teach, a platform to push out. Those are ripe opportunities for developmental learning moments that scale beyond the function where some of our leaders sit - if a leader doesn't know something, they can build a bridge and give their employees the space to show what they know. And that in itself creates a really good employee experience as well.
Exactly. And it's the sign of a good manager to be able to recognize what they don't know, to recognize talent in someone else, and to pull them up and give those opportunities and work on that development. That's the sign of a great manager too.
What's one immediate step people leaders can take this month to neutralize fear of obsolescence?
Man, I would say: have the conversation you've been avoiding. The worst thing a leader can do right now is stay silent. Employees are already speculating. We live in a world of endless information, and we see headlines every day about AI - what it's doing, what it's not doing. The absence of communication doesn't create calm. It creates rumors and it creates angst.
So my immediate recommendation is to be deliberate and intentional: hold dedicated listening sessions where AI is the only agenda item. Not "we're going to do this thing and here's 10 minutes on AI" - really focus. What is it? What does it look like? I've seen leaders at all levels, from the C-suite down into the organization, empower people to write white papers or vision statements for how they want to leverage it. And then thinking: all right, how do we pilot? How do we experiment? What does good look like? What are we trying? What are the ROIs? Because sometimes we pilot things with no idea what problems we're trying to solve or what the success metrics are, so we can't know whether what we did actually delivered the ROI that was intended.
But really start to do this - and don't do it in a vacuum. Don't feel as though you as a leader need to have all the answers. Do it as a mind share: think about your role, or think about the workflow - because when we think about AI and jobs, it's really about workflows. What are we doing on a day-to-day basis? What could AI do for us? Where do we see an opportunity? Where do we think, hey, there's probably an opportunity, but investment is required to make it real? How do we start to cultivate those conversations?
And some leaders are afraid to have these conversations because they're afraid of the real question - "how does this affect my role over time, and what investment will be made in my development to support this?" If they don't feel they know the answers, they don't feel ready to have the conversation. But avoiding it creates a vacuum and negative self-talk among employees about what they think may happen. Even stating "we're not there yet" helps - if you as a manager do not know, you can say that. And if you're not sure who the AI experts in your organization are, a lot of companies have built AI ethics committees, AI governance, or AI frameworks. So who are the folks you can bring in? You can bring in speakers, you can bring in peer groups to share their journey. Customer stories are always great for getting your employees excited about what they can do. You can make it really fun and engaging and remove the fear - but managers have to do the work. We cannot have our heads in the sand.
Absolutely. You can't hide from it. So even if you don't know it as a manager, that's great advice and a perfect starting point: have the conversation you've been putting off. I like that - it's something you can do immediately, right now. Being honest, being transparent is where it's at.
SHRM warns AI hasn't lived up to the hype - how are you addressing employee skepticism while protecting well-being?
I mean, SHRM ain't the only one, okay? I think we're seeing an interesting shift, which I think is good. I think skepticism is healthy. And I actually think employees who are skeptical are paying attention. So that's an asset. Those are probably folks that are seeing a lot of things and have a lot of questions.
I think the mistake is treating skepticism as resistance to overcome rather than feedback to be heard. If there is skepticism, why is it there? And how do you use that information for your enablement? Because enablement is going to have to happen. What are the things they're worried about? Where AI hasn't delivered, we have to be honest about that. And where AI is still on a journey - by definition, we're constantly training AI, so it will always be in a state of learning and curiosity.
What I advocate for is pilot transparency - what worked, what didn't, what changed because of what we've seen over time. That really helps build credibility where skepticism exists. And on the well-being side, I see this invisible pressure - from companies and employees alike - to keep up with AI. Every week there's something new, and every week you feel like, my God, I'm still trying to master what I heard a month ago, and now I have to figure out what's happening over here. Employees are quietly trying to show that they are not replaceable. And if we as a company are on a hamster wheel with AI, we signal an environment that is counter to what we're actually trying to optimize for.
So again, let them be skeptical. Tell them what's real and what's true. And don't counter with the counter - counter with curiosity. Because that skepticism is information and data in itself that will help the organization move the way it's looking to move rather than hold it back. When you counter with negativity, you deflate the conversation, and you deflate the energy you're hoping to build in the organization. You have to allow people the space to say what they want to say, even if it's not pro-AI. At a minimum, you don't need to agree with it, but you should understand it. And you should have some sense of why they feel the way they feel. If you don't, you can say "I don't know," get curious, and figure out the answer over time. Counter with curiosity.
I love that phrase - counter with curiosity. I'm going to use that from now on too. And like you said, it's about creating a healthy environment where skepticism doesn't feel scary or like resistance. Skepticism means you're thinking, you're asking questions.
How do pulse surveys and platforms like CultureMonkey uncover hidden AI risks and protect employee well-being?
So I love a good survey. And pulse surveys like CultureMonkey and others can really help close the gap between leader assumptions and what employees are actually experiencing. The power isn't just in the data. To me, it's also the frequency, right? Because something can happen, and what was true today can change very quickly. It's also - and we talked about this a little earlier, Darcy - how honest are people being? We've done surveys where I thought, the results are great, it's going great. And then you realize, wait a minute, there's still something else happening here.
Platforms like CultureMonkey allow you to segment sentiment by team. But you still need the context clues: when you get the data, what contextual things have happened that may explain why you're seeing what you're seeing? And frequency matters - if you survey annually, remember that AI is evolving day to day. If you're doing it quarterly, you almost have to pick your cadence deliberately, because survey fatigue, on top of everything else, is real as well.
And if we're creating an environment of transparency and open honesty, will the content you're getting actually yield real information you can act on in real time? For people leaders, the discipline is acting on what the data surfaces. It's also questioning the data and understanding the context: who are you partnering with to understand it? Who in your people team - HR, people like myself - are you partnering with to really get into the nitty-gritty? How are you also using round tables, ask-me-anything forums, or fireside chats to get more information behind what you're seeing in the surveys - so employees know their voice actually changes things?
When I see people run surveys and nothing happens - and no one even explains why nothing happened - I'm not as inclined to take the next survey. And certain companies say, "We've got to get the participation rate up to get real results," so people just treat it as a checkbox - let me go through it. If you want the level of honesty this space requires, you have to act with the same honesty, integrity, consistency, and discipline in how you handle information from a pulse survey. At that point, you are accountable and responsible for what you do with it and what you share about it - even if no change will be enacted - so that surveys remain a true methodology and a true resource for where we are today, where we are tomorrow, and what changes we need to act on.
Absolutely - a survey is only as good as what you do with it, right? That you actually listen and act on it. Symphone'e, I wish I could keep talking to you all day, because I've already learned so many amazing things - that's why I love hosting this show. Thank you so much. Your real-world playbook for reassuring teams during AI rollouts is clear, practical, and most of all, deeply human.
So it's clear successful AI adoption in 2026 hinges on rebuilding trust, giving people breathing room and keeping well-being front and center. That's where CultureMonkey excels - with pulse surveys and real-time listening that help people leaders spot hidden risks early and protect engagement through every stage of change.
Before we end, Symphone'e, how can our listeners connect with you to keep this conversation going or to share their own perspectives?
That would be great. I always love geeking out on this. Please - I would love to amass more of a board of directors on this topic. You can find me on LinkedIn. I think you guys have that information, but Symphone'e with two E's, you know why - last name Lindsey, not first name Lindsey. But happy to engage and connect. And thank you for the time, Darcy. I've also enjoyed you being my partner in crime today.
Yes, absolutely. And to all our listeners, thank you so much for being here. Don't forget to follow, share and subscribe. And that's a wrap for this episode of CultureClub X powered by CultureMonkey. Until next time, I'm your host, Darcy, signing off.