Happy 2026, everyone! 🎉 I’m excited to highlight the dynamic Sable Lomax as we kick off our third season of “AI, Software, & Wetware”. Enjoy!
Introduction - Sable Lomax
This edition of “AI, Software, & Wetware” features an audio interview with Sable Lomax. She is a 🇺🇸 USA-based Instructional Designer, Facilitator, and Leadership Development Expert. Sable is the founder of The Leadership Standard. We discuss:
finding AI tools about 60-70% accurate for assisting with company research and summarizing huge published reports in her leadership development work
doing due diligence with the references ChatGPT and Claude give her, providing overlooked context and combating confirmation bias
preferring Claude to ChatGPT for sounding more like “someone went through higher education”
why the humanities, arts, and history still matter, and why people deserve to be ‘paid well’ for all kinds of work
what her Sankofa bird tattoo means and how it’s relevant to coping with AI today
and more. Check it out and let us know what you think!
A special thank you to Becky Mollenkamp at Feminist Founders for connecting me to Sable as part of her follow-up on Juneteenth 2025 activities!
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click HERE to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
AISW Interview - Sable Lomax
Karen: I am delighted to welcome Sable Lomax from the USA as my guest today on “AI, Software, and Wetware”. Sable, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.
Sable: Hi folks. So excited to be here. I am Sable Lomax. My pronouns are she and her. And I tell everyone very simply, I am the principal consultant at The Leadership Standard, where we do all things leadership development for senior leaders globally.
Karen: Oh, that’s excellent. Tell us a little bit about your level of experience with AI and machine learning and analytics. I’m wondering if you’ve used it professionally or personally, or if you’ve ever studied the technology.
Sable: So, study the technology in the traditional sense? No. In terms of using it professionally or personally? More professionally than personally. AI is where I’m the one who’s used it. In terms of machine learning and analytics, in the leadership development space, depending on what the engagement is with the client, there’s someone on the team who’s engaging with the machine learning. There’s someone on the team who’s engaging with analytics. I might say, “I need X, Y, Z to be done”, and then they go and do their thing. I am comfortable in the AI space, but again, traditional studying, no, not at all.
Karen: Okay. I think it’s pretty common nowadays for anyone who has studied AI, that in some cases it’s an online course, or just experimenting with it on their own, or watching a YouTube tutorial, or something like that. It’s all learning.
Sable: It’s for sure. It’s for sure. And then it changes so fast. Because AI is not new. Even if you studied it 20, 30 years ago, the speed at which it changes and develops and grows and expands means you’re still constantly having to update yourself on X, Y, Z.
Karen: And the pace of change is not exactly going to slow down anytime soon.
Sable: No, not at all. Not at all.
Karen: Can you share a specific story on how you’ve used a tool that had AI or machine learning features? I’d like to hear your thoughts about how the AI features of those tools worked, or didn’t, and basically what went well and what didn’t go so well with using it.
Sable: As someone who’s worked in boutique firms for the last seven or so years, you don’t have the team size of a larger firm, so the Korn Ferrys, the McKinseys, et cetera. A lot of times, more junior roles are tasked with research. I don’t know what’s happening currently, but they would be asked to go find A, B, C, X, Y, Z, summarize it, write a report, and so forth. Well, when you’re a team of three, or maybe even a team of one, you’re responsible for a lot from A to Z in terms of client management, or even prepping to get this company to become a client.
So I have found it most useful on the research aspect of it, and particularly by summarizing. I’m finding 300-page reports. I’m finding 100+ page financial reports. I didn’t go to school for that either. And although I can get through it, and although I can read it, when you are managing existing clients, trying to get current clients, and then just the administration of all things work, you don’t necessarily have time, much less the capacity and bandwidth to engage in multiple 100+ reports with great detail.
So for me it has been super, super helpful, not so much with the reading of the reports (I’ll scan those myself), but with the summarizing of them, because I can prompt it. I prefer Claude over Chat for a couple of reasons for this aspect. I’ll prompt very specifically what it is I’m looking for in these reports and how I want it to summarize what it is that I’m looking for. That has been helpful.
On the same side, in terms of experiences that have made me chuckle, laugh with despair, disbelief, and a little bit of sarcasm, is in the same grain of the research. I have found, after some trial and error, I need to find my own reports or my own articles and then ask for those to be summarized. The reason why is because of hallucinations.
I have seen ChatGPT in particular, more than Claude, make up articles and give you links, and the links don’t go anywhere. Or they may go somewhere. Think about when you were in school. For those of us in the US and in the UK, in terms of secondary school, so at middle school to high school age, you were learning about what’s a good source, and what’s a source that should not be cited. If the links are accurate – in the sense that they do go somewhere, they are live links – it’ll take you to a source that, in a traditional setting, would not be considered a credible source to use.
So, although it’s great for research and summarizing and all of these things, there’s also: Is the research accurate? Are the articles coming from a verifiable source? Is it written by an actual person, or is this a robot somewhere who’s also using AI to create the content? So you still have to do your due diligence with whatever AI you’re using. I’m talking about Chat and Claude for this matter.
I know folks who use it to put together PowerPoints, folks who use it to run reports, financial reports with Excel. Sometimes those formulas are not accurate. So, you know, it can increase your bandwidth, but it doesn’t remove responsibility in terms of work output. Is it accurate? Is it correct? Is it good?
Karen: You mentioned that you prefer Claude. Is that because of the handling of citations, primarily?
Sable: Not just the handling of citations. I started off my career as an English literature teacher. For the record, folks, I believe in breaking every rule that exists. But that’s someone who loves poetry and speaks in prose when I’m not in a formal setting. That said, even with all of the customizing that you can do with ChatGPT, I feel like it produces written content at the level of a middle-schooler. It just does not feel like someone who graduated, and has a graduate degree in whatever subject, wrote what ChatGPT puts out. Whereas with Claude, without the same level of customization, meaning “Here are previous writing examples”, or “Here’s a writing example of something that I read and liked”, it just feels like Claude graduated from an institution after the age of 18, from the word usage to the ways in which the sentences are put together.
I mean, I used dashes and hyphens before AI. So it is annoying that, when people see that, they automatically assume that you used AI. But Claude puts them strategically where they belong. I feel like Chat just goes, “You know what? Here’s a dash, here’s a hyphen, here’s a dash.” That’s why I prefer Claude. I just feel like it sounds and reads as if someone went through higher education.
Karen: So it’s more the voice that you want to project in whatever materials that you’re generating or sharing with clients.
Sable: Exactly. Yeah. For those who aren’t aware of this – and I’m going to have the reading level incorrect, because I haven’t checked on this in years – but 15, 20+ years ago, it was like, New York Times is written on the eighth grade reading level. And New York Post, or local newspapers, are written on a third or fourth grade reading level, to make sure that the people who are purchasing the newspapers can actually engage with content. That’s how I distinguish Claude from ChatGPT. Claude is New York Times, Washington Post. And ChatGPT is New York Post.
Karen: That’s a great analogy. Thank you. That’s an interesting point on the reading levels. I hadn’t thought so much about that in terms of the large language models. I had done some investigations into readability formulas and tools, and into measuring the readability of my own writing, because the biggest stint of my career was in a corporate research environment.
Sable: Oh, oh, so you’re reading, writing – you’re up here.
Karen: Well, actually that was part of the problem, is that I was kind of writing ‘up there’! I was used to writing these academic papers. And I wanted to write on my Substack at a level that was more accessible. So I started using these readability scoring tools and watching, and trying to get myself down from, you know, 12, 13, 14 grade levels, and down to an average of about ninth. So that’s kind of where I ended up.
Sable: That’s the thing with Claude. I’m not trying to write at the academic level. I actually don’t have issues with academic writing so much; it’s just that it stays within that sphere, and other than academia, who are they actually writing for? It’s not accessible if you’re outside of that community. So I’m not trying to write at that level. But also, I do want to sound like I graduated from at least three places. And that’s how I choose Claude over ChatGPT.
Karen: Oh, that’s very interesting. That’s a great insight into why you choose the AI tools. I’m always curious about which tools people choose and why. So thank you for sharing that.
You mentioned that you mostly use it professionally. Are there any times where you have used it personally and found it was useful?
Sable: Have I used it personally? Yes. Have I found it useful? No. Do I use it personally anymore? No. It was a handful of times earlier this year, actually. And for those who are listening, earlier this year would be early 2025. And I was like, “Let’s use it to meal-plan.” And I put in, “I have this, this, this. I would like breakfast, lunch and dinner, blah, blah, blah.” And I know how to prompt, particularly for LLMs. And I was like, this is – no. No.
And when you think about the data centers, the water, the level of energy that’s used for folks to engage with this technology, my brain is faster and more accurate in that sense. So for me personally, no. Professionally, yes. What I have done is if I take traditional notes – so if I’m not using, like, Fathom or Note Taker, which I probably only use about 50% of the time – I will upload my notes. Because they are absolutely chaotic, just based on me typing as someone’s speaking. Or I’m writing the notes. I will upload them and say, “Can you put this in a cohesive manner?” So anyone who reads these notes can understand. So I’ve used it in that way.
But in terms of my actual work output, that is still my brain. And I want to make sure that I don’t lose sight of being able to critically engage with anything because I’ve become over-reliant on a tool.
Karen: So you’re using it really more as an editor, or as more of an administrative assistant, than for generating content for you?
Sable: Yeah, definitely.
Karen: And for summarization, you mentioned that you read the report yourself, but then you have the AI tool generate a summary. How accurate do you find the summaries are?
Sable: I would say about 60 to 70% of the time with Claude, they’re accurate. With Chat, it would decrease. I also understand how these models are built, how the knowledge bases are formed, and who’s responsible for – it’s not necessarily an algorithm – the ways in which these tools digest and receive information. It’s very different than someone who is coming from a humanities background. So even with summaries, you have to be careful, because it will miss the human element. Because that’s not what it’s pulling on. It’s just pulling on numbers. It might not be contextualizing that there was a CEO change or a CFO change. Or that they’re on their third CPO within 48 months. It’s not threading how that impacts what’s showing up in the report that might say, “There were four bad quarters, and they had to reduce their overhead costs by X million, which resulted in 130,000 layoffs.” It’s just spitting out numbers. It’s not contextualizing anything.
And this is just for experimentation purposes, which is why I’ve ended up going, “You know what? This is research, to a degree. I’m still going to rely on myself.” I’ve pulled company reports that are public. They’re quarterly rather than annual reports, because I actually just started doing them this year. And I’ve taken articles, headlines that are talking about a decrease or increase in whatever it may be. New person hired, person gone, layoffs, et cetera. And I’ve prompted it very specifically, put it all in, and said, “Provide a summary, but also contextualize blah, blah, blah.” Still it’s very robotic, extremely robotic. I mean, it’s a tool, it makes sense. But if you have someone that’s only relying on what it spits out, I think that’s where it can get extremely tricky and even dangerous.
Karen: Have you noticed any instance where there was a clear bias in the tools, as far as providing context and not having a broad set of information to build on?
Sable: There’s a confirmation bias where it will almost agree with anything that you are saying. It will not course-correct to say, like, “Hey Sable, what are you talking about here? You might want to check on these three articles that say you are absolutely wrong.” And if you continue to agree with what it’s agreeing with, then you are training your own ChatGPT profile. So then it just further increases the confirmation bias, which is how we’ve ended up with – this is a very elementary analysis, for the record – but when you’re reading about folks who have taken their lives because AI is like, “You know, you don’t really need to be here.” I’m summarizing horribly. But you know, “It’s fine. You’ll be fine.” Or people who are in relationships with AI. And when I say AI, I literally mean ChatGPT, based on some of these articles. The ability for it to disagree and dissent is not yet there. And I’m not sure if that’s coming.
Karen: Yeah, there’s been a lot of talk about trying to improve the safety of the tools, and they’re paying at least some lip service to it. And maybe to some extent they are working on it. But at the same time, they’re also putting it out there and letting all of us be their beta testers for these features and whether or not they work, and not identifying the harms upfront. And that is a concern to a lot of people. It sounds like a concern for you.
Sable: Yeah, I think that when you have anything that just gets to exist without any bookends, without any checks and balances, without any rules and regulations, there’s guaranteed going to be individuals, but also larger communities of folks, who are impacted negatively. And the consequences either show up very early, or they might take a long time to bubble to the surface.
Karen: Have you ever tried any of the tools that generate images or videos or music or anything like that?
Sable: Yes, out of curiosity. Have I used any of them? No. I don’t think they’re accurate. And I would never identify as an artist. I have too much respect for artists for me to identify as an artist. The idea of replacing humans with a tool that’s not that good is one that just doesn’t settle well with my spirit.
Karen: Yeah, you’re in good company there with feeling like it’s not good, and recognizing the impact on the people whose work was basically stolen to be used to train those tools and to generate.
Sable: And that’s the hard part. I think it’s back to the contextualizing of a company’s reports with what’s happening in real time. If we use that framework, if you will, to look at artists, there’s an emotion. There’s a lot more than just putting music notes down on a page. There’s a lot more than just picking up the instrument and playing it. There’s a lot more than just writing the script itself, but taking the words and turning it into a beautiful scene. All of those elements are very human elements that a tool cannot replace. Can it do many things? Yes. But can it replace a human and what a human brings to their work? No.
Karen: That’s a good summary of the concerns around the impact on people’s livelihoods. And you mentioned the impact on our spirit. I think that’s something that tends to get overlooked – the importance of art and music, and the other creative activities for humans. You know, as kids, we draw, we sing, we do all these spontaneous creative things. And then we tend to lose that as an adult. But if people don’t have to worry about making a living, then people do turn to the arts.
Sable: When people are sick physically, they turn to the arts. When people are hurting spiritually, emotionally, psychologically, they turn to the arts. Art heals. A tool can’t replace that.
Karen: Yeah, absolutely. So I want to talk about one concern that we’ve referred to a little bit: where these tools have gotten the data that they’ve used for training. You mentioned that you use Claude and ChatGPT. There have been some stories about both. They’ve both been involved in lawsuits for misuse of data to train their tools. So I think a lot of folks were hoping that Anthropic would be more ethical in those regards, but they have been involved in some of the lawsuits nonetheless, and just recently had a big settlement about it.
So I’m wondering what your thoughts are about how companies get the data and the content that they use for training their tools. There’s a concept called the 3C’s Rule from CIPRI, which is that creators should have the rights to Consent, to be Credited, and to be Compensated. So those are the three C’s.
And there are some people who say that, as long as this data is being used in the tools for good and for our collective good, that it’s okay for them to basically steal everything. And others say, “No, this isn’t right”, and that people should be entitled to the 3Cs. I’m wondering what your thoughts are on that.
Sable: Many of these conversations are not new. It’s just they’re being discussed within the context of AI now. I fundamentally believe that people should be paid for their work. And people should be paid well. It doesn’t matter what context we’re in. People should be paid for their work and people should be paid well. And this is Sable speaking freely here. I just finished watching the documentary about the Equal Pay Act on Netflix with Lilly – I’m blanking on her last name right now. I’m horrible with names. Even for someone who taught English literature!
Karen: Maybe Lilly Ledbetter? [movie link]
Sable: I think that is it. When we have these conversations, again, whatever context we’re having these conversations in, oftentimes I feel like we’re speaking on the surface and we’re not going to the iceberg. We’re not going to the depths. We’re not going to the core. And the core of that is it’s a capitalism problem – the idea that you don’t pay people, the idea that you don’t credit people, the idea that you don’t get consent.
So again, for me it’s: people should be paid and people should be paid well. And when I say paid well, I think that there’s no reason that someone who works full-time cannot afford to take care of themselves, whatever country they live in. So I’m not even just speaking to a United States context. I literally mean if an adult is working full-time hours and is getting up and going to wherever they have to go to, even if it’s their living room or if it’s a 2-hour commute, whatever they do for a living, there is no reason that they should not be able to take care of themselves and their families on the wages. That’s just a human belief of mine.
And again, a lot of the conversations surrounding this go against capitalism, the system of capitalism, which means somebody has to be at the bottom. We cannot pay everyone well and have an elite or top 10%, 5%. They cannot be reconciled. So for me, sometimes it’s less about isolating, “Well, what about AI?” It’s the whole – we’re arguing against the system that we know is only working for some, but for the vast majority of people, it does not work.
I don’t think that many people have come to that acknowledgement. They know there is a problem. Do they realize it’s a capitalism problem? Most, no. Many, yes. But if we think about the US, our population is massive. How many people, if we gave out a survey that said, “What is the root of all of these problems?”, how many people would pick capitalism? It wouldn’t be the majority of the folks who completed the survey.
Karen: Yeah, I think that’s a fair assessment, especially within the US. There’s such income inequality that it really is glaring. The way one of my interview guests last year put it was that they are “socializing the inputs and then privatizing the outputs.”
Sable: I snap in agreement.
Karen: And that’s just fundamentally unfair, and it’s likely to increase the inequality. We see that with the 8-figure tech bros who are making massive amounts of money, and then people who are losing their livelihoods completely and looking at ways to protect themselves. What’s interesting, I think, is that people are just this year starting to realize that it isn’t just artists’ and musicians’ jobs, as if that was okay, but it’s going to be all jobs.
Sable: I think we’re currently undergoing another revolution. You had the industrial revolution. We’re going through another revolution. And unfortunately, it’s the folks 20, 30 years out who will write about how we handle what we’re handling now. And in theory, 20, 30, 40, 50 years out, the students will be studying how we handled what we are navigating right now. And we all know, depending on who wins the revolution, who comes out on top, it’s usually who’s writing those books. So even what they study, is it going to be accurate? Many times it’s not.
In the AI context, originally they might have thought the folks at the bottom of the barrel would be the artist, the musicians, the writers, the playwrights, the creatives. And then they’re realizing like, “Oh, well, maybe at the bottom could include middle management in corporate America, could include office jobs.”
Traditionally, if we’re thinking 1970s to 1990s or early 2000s, if you had a one-income household, there was an office job that paid X amount of dollars. You could afford to take an annual vacation. You could afford to buy a home. You could afford to have a car, maybe two for the household, and do really well. Well, how much does an average household need to make now to be able to afford a two-bedroom or one-bedroom apartment in this country, no matter where? And salary pay has been stagnant for the most part, but everything else has beyond quadrupled.
So yeah, I do think that, again, whatever we’re talking about, AI or insert another topic here, at some point you have to let go. There’s a big old elephant in the room that we’re not discussing on a large scale. Some folks are; it is being discussed, but it’s in pockets, smaller pockets, but still pockets for a country of this size.
Karen: Yeah. I think the expansion of the job impacts of AI beyond the creative people, the artists, to – you mentioned middle management; there’s even software development. I’ve been in software development for decades now, and there’s always been some talk about the latest new technology, a higher-level language or something: “Oh, well, software developers can lose their jobs”. Well, no, it opened the field up for even more development. So it’s really hard to tell at this point whether or not this is more of the same, or if it’s going to mean that software development is going to be largely taken over by these tools. And then who’s profiting from that? Who’s using the knowledge and who’s benefiting from the knowledge? And then how do you grow people who can intelligently tell the tools what to do?
Sable: What to do? Yeah, yeah. In certain fields, for sure, in theory, based on what we’ve come to understand, you wouldn’t need the same amount of folks, but you would still need folks. You can’t have a completely robotic floor. Time will tell.
If I think about AI as a whole, the impact on the environment is a huge concern for me. Not that it matters more than anything else, because I don’t think it’s even fair to say, “Oh, but the earth matters more than the people” when you’re supposed to be engaging in harmony. But what is it worth that you could produce a meal plan in 30 seconds or two minutes if the water quality is going to be piss-poor because all the data centers are in your backyard, and your utility bills are going up? Because it’s the residents who are paying for all of these things. Do you really need the meal plan that badly? Do you really need the summary report that badly?
Karen: Yeah. I don’t know if you have heard yet. There have been some initiatives towards more ethically-developed and operated systems. There’s a new AI tool based out of Switzerland where they only source the data ethically. They used a data center that’s powered by only renewable energy to do the training. So they’ve been trying to make it more ethical. When it comes down to actually running it and people using the tool, then at least they can feel like they’re not doing so much harm to the environment as they would be if they were using an exploitative system.
You mentioned the utility bills and that’s been, I think, something people haven’t always realized. They hear about data centers coming to their town and feel like maybe it’s going to bring some good jobs. And they don’t realize that they’ll end up subsidizing the utilities for that data center to be operating in their neighborhood, and it’s going to end up causing other problems.
Sable: Our brains have been programmed to hear construction means more jobs, but more jobs for how long? And more jobs for whom? And what’s the impact of the construction? Our brains haven’t been convinced to go that much further.
And then, what’s the impact of what’s being built? Does it serve the community, the surrounding communities? So I do think we’ve been trained, and when I say we, in that moment, I meant the American people. But I think overall, we have been trained not to think in some ways, and specifically how to think in other ways.
Karen: Yeah, that’s a great insight, and it’s one thing that I think is a concern is that, to some extent, AI tools can reinforce those ways of thinking, based on what they’ve been fed, what subset of it they’ve been fed, and how they’ve been trained. And like you said, whether they’re being obsequious and reinforcing our biases that we are bringing into it, because we all have biases. That’s just a fact of life.
Sable: And that’s why I say, even when I’m facilitating to a group of leaders, I say ‘we’ intentionally over and over and in many instances, to make it very clear: I am not at all trying to suggest that I’m this superior human being that is floating in on this level of humanity that is just so fantastic, I do nothing wrong ever. No, no. We have all been trained how to think and it takes a lot to go against the grain. And you will often be alone or with a small community, and depending on who writes a history book that might not make it to ‘New York Times bestseller’. At some point, folks will realize that that group of few is actually right this entire time. So at some point, folks will see that the artists, the musicians, the playwrights, the creatives were right this entire time. Will they get to see that in their lifetime? That’s where time will tell.
Karen: Yeah. So I do want to talk a little bit more about your leadership coaching practice. I think that’s so interesting because you’re in a position to influence people who are making some of these decisions at higher levels, on how companies use AI and what they do and don’t do with it, and how they respect these different influences and impacts. So I would like to hear a little bit more about that, if you’re open to discussing it – obviously, without sharing any confidential information from any of your specific clients.
Sable: What’s interesting at this point in time is that folks have been very transparent behind the scenes. They are trying to figure it out themselves. Especially because, when you say senior leadership, it means so much and so little at the same time. So it’s based on their title and role and responsibility and influence, how much power – and I’m using the word ‘power’ intentionally – how much power do they actually have within their organization to say, “Hey, I’m going to go against the status quo here and say maybe we should look at how we do X, Y, Z differently.”
It’s very, very hard to do that. I tell people I studied history; I’m a humanities person by heart. It’s very, very hard to do that when we live in a country where your healthcare is tied to your employment. You know, where, to leave, you have to sign a non-disclosure agreement that says you can never say X, Y, Z. We make it very, very hard. And this is intentional. I say that just to give people some color, some insight, because it’s very easy to say, “Well, why doesn’t someone just say X, Y, Z?” There could be so many conditions in place that make it hard to do that if you cannot weather long-term hard consequences.
But let’s say it doesn’t have to be hard, long-term consequences. Let’s say it’s not this intense, volatile atmosphere. Karen, the people don’t know. They are telling folks “Use AI”, but they’re not telling folks how. They’re not telling folks how it will be used, or how they’ll be rated in their performance reviews if they are using AI. They’re not telling folks what AI is, okay? And at what point within their role it’s permissible versus when it isn’t. Because they don’t know. Most companies are in a complete space of trial and error, and not much has been formalized.
If anything’s been formalized, Karen, it’s how not to use AI. AKA “Don’t use it here, don’t use it here, don’t use it here.” It’s a very “don’t”, negative-led policy, if you will, versus the, “We want you to use it. Here’s where we have invested in AI. These are the platforms. Here are the training sessions where you can go and learn the new AI that we’ve invested in.” And also “Here’s the policy that someone from HR is going to walk you through, that senior leadership sat down and put together.” To say, “Here’s where it’s permissible to use it in your role. Oh, and we’re not going to hold it against you during your performance reviews and rate you lower because you use AI to assist you with your job.”
Everyone’s just in the space of trial and error. Some are more transparent with “We have no idea” than others. But in the middle of a revolution, it’s too hard to know you’re in the middle of a revolution. So people are just existing and surviving day-to-day.
Karen: That’s a great insight. So how do you help leaders to navigate that?
Sable: I literally say, in a very different form, what I just said. Like, “Well, do we even know together collectively, in this room right now, how much we’ve invested in AI? And not from a numerical standpoint, but from a time standpoint. What software do we have? Is it some in the marketing department, the finance department?” It’s been in HR for eons at this point. So it’s not new in HR, and sometimes we have answers to that, sometimes we don’t.
Have we trained up our people on how to use it? Most times we haven’t. Have we shared with them? Because managers, leaders, directors, you name it, have said, “Our folks have said they’re hesitant to use it.” Well, let’s unpack that hesitancy. Did you just tell them to use it and that was it? And you just keep saying, “I want you to use it”? Or have you spelled out what that means? Especially when on any given day, Business Insider’s putting out an article that says you’re going to lose your job to AI. Why would I want to train the AI on a job I might lose in three quarters? Have we humanized the tech that we’re bringing into the workforce at such rapid speed? Do you even use AI? Do you use the same tech you want them to use?
And that’s when we end up, and I’m no one’s therapist, for lack of a better word, but that’s when we end up in, like, this group therapy unpacking session of, “Oh, we actually have a lot of work to do around this.” Yes, you do.
Karen: Those are great insights. Good. So, I normally ask people questions about use of their personal data. I’m not sure if you have any stories that you want to share on that? Like whether or not your data has been used by any company for a tool, like video streaming, or if you’ve been phished or scammed or had a data breach?
Sable: I haven’t been scammed, but I opt out of everything. I refuse to give Clear my eyes. I’m like, “You all have my fingerprints. You do not need my eyes.” I opt out of a lot of that stuff.
Karen: Yeah, traveling, TSA is a big one, taking photos.
Sable: I know this is not 1990, where you could kind of go under the radar. But I really wish you could. It’s unnecessary. I hate it. If I didn’t need LinkedIn, I would not have a LinkedIn.
Karen: Yeah. LinkedIn has been rather egregious about using our data and not allowing us Americans – and others in countries that don’t have something like GDPR – to opt out. It sounds like we have the same opinions.
Sable: I did some unchecking on Gmail because of what they’re using, and it was like, “Are you sure you don’t want us?” I’m absolutely sure. Go away, no. Because that literally gives you permission to read my emails and store and then – No. No.
Karen: Yeah, I’ve turned off a lot of the smart features in Gmail. I’ve had it set that way for years. And I just saw an article circulating about how Google can use your personal information.
Sable: Yeah, and I said, “Oh wait, get rid of that. I don’t need it.” I opt out of all cookies. Strictly necessary only. I don’t save my passwords. No. I don’t have period trackers. No.
Karen: Yeah. The whole area of FemTech is really concerning.
Sable: Especially in this political climate.
Karen: Yes.
Sable: We’re not Denmark. And not that I would trust them, but I could understand me going, like, “Huh.” But not, not here. Absolutely not.
Karen: Yeah. Public distrust of these AI and tech companies has been growing. And partly it’s because, I think, we’re starting to realize what they are doing with our data. To that extent, maybe it’s a healthy thing. But I’m wondering what your thoughts are about what it would take for you to truly trust an AI or tech company with your data? And what’s one thing they would have to do to achieve that?
Sable: I am not sure, and I say this very frankly, as a Black woman who is a descendant of folks who were enslaved, chattel slavery in the United States of America, I’m not sure there is anything – and I’m well-studied – I’m not sure there’s anything a tech company could do that could get me to trust them. Not under the system in which we operate, under which so much inequity has been interwoven into our society by design, whether we’re talking sexism, racism, antisemitism, Islamophobia, ableism, whatever we’re talking about — classism, and insert all the other -isms that exist.
And I’m not singling this out to tech. Just in, in most cases, without a serious nationwide reckoning of how horrible we have been to such massive communities of people, beyond the Black community in this country, I just don’t think, for me, that’s possible. I don’t even live in that utopia where that’s a desire.
I think about relationships. Whether it’s platonic or romantic, you might form trust rather quickly. But the minute it’s broken, it takes so much to repair, and it requires effort on both parts. I would acknowledge that, if I sat down in a couple’s counseling session with the United States of America and <insert whatever company and institution here>, and the therapist asked me, “Sable, are you interested in repair?” I would probably say, “No, I just want a genuine apology and then some reparations.” But repair, no. Someone else might be willing to engage in that conversation. I just don’t see that I can.
Karen: Yeah, that’s very fair. And I’m glad you’re raising that point, that it does require a two-way interaction to establish trust. And it feels like the corporate side isn’t motivated to do that work.
Sable: And to be fair, I will say, and this is not in defense of companies and corporations, but they only get to do a lot of what they do because it’s allowed to happen. And there are fantastic companies that exist. You know you have to say that, because if not, everyone’s like, “You said all companies suck.” That’s not at all what I’m trying to convey here. There are fantastic companies who are doing fantastic work and are taking care of their people. And if they discover something has gone awry, they actually try to address it. All of those things.
But the way in which our government has run, like even before we had all these companies who can do what they do, our government has allowed X, Y, Z to occur over and over again, decade after decade. And I’m saying X, Y, Z on purpose. Because again, insert any ‘ism’ here, there’s so many things we could point to.
I don’t think it’s a scenario where all of the blame and onus should just be on the companies. This is a family group session. There needs to be government at the table, whether we’re looking at local, state, federal, and company.
Karen: And with good reason. That’s history telling you that, right?
Sable: With good reason. I tell folks I have one tattoo. It’s a Sankofa bird, which is the Ghanaian symbol. And it means “to return and go fetch”. Essentially, if you don’t know your past, if you don’t know your history, you are liable to repeat it. When I think about even this moment – we’re having this conversation right now – when you don’t know your past, that’s how easy you are to accept an apology that might not be genuine. That’s how easy you are to go, “Oh, things are going to be fine.” And then you just trot along and realize, “No, no, things have not changed much at all. There was a loophole.” There was a loophole. And now you find yourself in a very similar predicament.
So yeah, I understand the desire to trust from a human standpoint. I understand the desire for human beings to want to believe that people will just do the right thing. The problem with that statement is: Who gets to define what’s right? And the right thing for whom? Who holds the power?
Karen: Yeah, and I think you raised a really good point about trust. I always ask, and most people say, “Well, if they would just be a little more transparent...” There’s always this hope, or this optimism, that somehow we can – either through collective consumer pressure or from some other way – we could just show them that it’s important to do the right thing and to respect people’s rights and all that. That we could just convince them somehow. And I think you’re the first person I’ve talked to who has basically said, “Let’s be realistic. This isn’t possible.”
Sable: And for trust, as I mentioned with my therapy analogy, it’s a two-way street. To convince someone to do the right thing, even if we collectively come up with the operating definition for what ‘right’ means in that context, they have to care. What are your priorities? What do you value? And then that’s how we end up in the spaces that we’re in, which to me, in many instances, still connects us back to the economic system that we’ve been utilizing for centuries.
Karen: You’ve got a great perspective on how companies operate and how to help leaders to maybe think about their values and what is important to them and to make sure that they’re operating within that.
That’s something I actually call out in my Everyday Ethical AI book, for people to just think about their own AI policy. And ‘policy’ sounds very formal, but it’s basically, yeah, how do you make your decisions about what you do and what you don’t do and why? And making sure that that’s aligned with your values. I think coming back to values is an important perspective, to anchor yourself in your values and thinking about what they are. And so I always encourage people to write down their AI policy. “This is why I choose to use this tool, and this is why I don’t use it for that, and why I won’t generate music with it.”
Sable: Yeah, for sure, for sure. So much of the training that I’m doing is communication-based anyway. And if you’re going to communicate with humans, there are some skills that come with that. But at some point in the training, the value system always comes up. There’s company values, and there’s your values. And sometimes they might feel irreconcilable, and that’s just a matter of life. And dealing with humans, we are beautifully complex and confusing all at the same time. I don’t expect folks to have ‘aha!’ moments in a two-hour training, or even a two- or three-day training. That’s just not how that works. Well, some do. But if I can plant the seed where I get an email eight months later, 12 months later, even three weeks later, and a scenario came up in their real time; and they emailed me about how, because of our workshop or whatever, they thought about something differently, engaged with that person differently, and got a better result. Then I’m like, “I’ve done my job.”
Karen: Yeah. I noticed on your LinkedIn you’ve got some glowing recommendations for the workshops that you run, so you’re obviously getting through to some people.
Sable: For me, it’s not just people first. It’s like “humans first”. Because we can hear people and just engage. Humans and humanity. It requires you to go, like, “Wait a minute, wait a minute.”
So if someone says, “Sable, remember when you did the X, Y, Z?” Usually I remember. Even if I don’t, I always say yes! And they’ll give me more insight. And I’m just like, “Thank you. Thank you for trusting me on a journey that I took you through, and I’m happy to know. Keep doing what you’re doing. Go be great.” So yeah, that is my small part in this wonderful world that we have constructed.
Karen: On that note, if someone wanted to work with you on leadership and how they present themselves and how they act as a leader, how would they go about finding you or getting in touch? What would be the best way?
Sable: For a lot of people, it’s easy just to send a direct message on LinkedIn. Only because whenever I give out my email, they spell something wrong and it doesn’t get to you. So, like, if you didn’t copy-paste my email address, I’m like, “I’m not ignoring you. It didn’t get to me.” But honestly, a direct message on LinkedIn works perfectly.
I let everyone know I do trainings, but I work with phenomenal coaches. So if someone wanted a coach, we absolutely provide coaching. You just would not be getting coaching from me. I am not a coach. I’d be fired before the first session was over. And I own my strengths.
But yeah, a direct message on LinkedIn. And I’ll be honest, it might take me three, four days, because how LinkedIn does its messaging is really bizarre to me. But I always get back to people.
Karen: Yeah, I’ve noticed big delays between when I actually have a message and when I finally get the email saying, “You have a message from this person.” Yeah, I got to that yesterday.
Sable: And you go in: there’s no new messages. And then an hour later, there’s 30 new messages, but from seven days ago. I don’t understand.
Karen: Can you say a few words about what kind of training sessions you might offer? What are some examples of trainings you’ve given already?
Sable: How to handle difficult conversations. Team cohesion. Strategic planning and offsites. Leadership retreats, the offsites, all of those things. How to communicate effectively. The root of most of my work is how to be a better communicator. And the reason why that’s the root is so many people would say, “Oh, so-and-so’s a horrible communicator. I’ve been trying to give them this feedback, and they’re just not hearing it.” And then they place the onus on that person, versus reflecting on, “Well, how did you give them the feedback? What is the rapport? Are you actually neutrally and objectively giving them feedback? Or is what you’re saying riddled with judgments?” And all they’re hearing is, “I’m a horrible employee. I’m a horrible person.” So they shut down and defend, deflect, deny.
Karen: That’s a great example. Giving feedback well is really an art and a skill that needs to be built. And it sounds like you help people build that.
Sable: And I think it’s the empathetic skill more than people want to admit. Like, this is not, “Here’s the framework. Say this, say this, say that again.” What’s the context for what’s going on? If Sable is always early, but for three weeks she’s been showing up late, and she usually has high, high energy, and now her energy’s just lower and the quality of her work is dipping, but no new variables have entered the equation at work. What else is happening? And just saying to Sable, “I need you to do better. I need you to do better.” might not work, because Sable is struggling. But we’ve been told we have to separate personal from professional. So then we can’t acknowledge the fact that Sable is a human who is struggling with something outside of work, because you just need what you need done. It’s an empathetic skill, but most of us don’t have it.
Karen: Yeah. It’s important work to help people build that skill, so I appreciate that you’re in that space and helping people to get better at that. I’m so glad you joined me for this call. I’ve really enjoyed hearing your stories about how you’re using AI and not using AI. And I appreciate you joining me. Thank you.
Sable: I thank you for your time, and for providing the space for folks to talk about this, outside of a solely work context. So thank you for that, Karen.
Interview References and Links
Sable Lomax on LinkedIn
The Leadership Standard
Sable’s Personal Website
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% human-authored, 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber:
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to Beth Spencer for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)