Introduction - Future Cain
This week’s “AI, Software, & Wetware” features an audio interview with Future Cain, a 🇺🇸 USA-based expert in Social Emotional Leadership (SEL) & wellness, and the founder of Future of SEL. We discuss:
What SEL means and why it’s relevant to how we use AI tools
How she has taught and coached her own GPT for the past 2 years to learn what it needs to know to serve her needs authentically
Her concerns about mental health-related uses of AI
How and why she opted out her kids from being scanned by TSA at the airport
and more. Check it out, and let us know what you think!
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click HERE to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Future Cain
Karen: I am delighted to welcome Future Cain from the USA as my guest today on “AI, Software, and Wetware”. Future, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.
Future: Hi. Thank you so much, Karen, for having me today. My name is Future Cain. I am the CEO and founder of Future of SEL. Oftentimes, I will get asked, “What does SEL stand for?” SEL stands for social emotional leadership. I have been in the professional realm of education, leadership, emotional intelligence, and mental health and wellbeing for well over 25 years now. And I currently provide consulting and coaching, as well as speaking internationally: coaching for leaders, organizations, and teams in a variety of different sectors, including technology. My background stemmed from education, so the connection here is also that I used to lead and evaluate our science, tech ed, and business classes and teachers at the middle school level and the high school level.
Karen: Can you talk a little bit about Social Emotional Leadership and what that means? What does that look like?
Future: That’s a great question, Karen. So social emotional leadership comes down to us having an understanding of our own self-awareness and how that, with our beliefs and our values, then impacts other people. In the many different spaces that we share, whether that’s personally or professionally, it is: Do you have a positive relationship? Are you able to emotionally regulate? And then: What is the social impact that you have on others around you that you know or don’t know? And what is the generational legacy that you are leaving and leading with?
Karen: That sounds great. So I’d like to hear a little bit about your experience with AI and machine learning and analytics, and if you’ve used it professionally or personally, or if you’ve ever studied the technologies.
Future: Okay. So I feel like there’s a lot of questions that I’m gathering to find out more about what is this. So have I watched videos to understand how it actually is run? Yes. Have I created and developed anything, technology, AI-wise? I would say the closest thing is building a chat bot in ChatGPT to have your own. That would be my one area, as well as using ChatGPT to help streamline a lot of the work that I have as CEO and founder.
Karen: Maybe we can talk about some specific stories on that. I think that’s interesting. There’s no requirement to know anything about AI technically for this interview series, but I’m always curious when people have done things like create their own GPT. So I would love to hear more about that if you want to expand on it.
Future: I always come from the realm of “The algorithm only knows what the algorithm knows”. And it is going to grow, the more people and things that it does know. And we have seen that there has been bias in AI and the algorithm. There’s lots of research and data to prove that. And then I sit, not only as a woman, but I sit as a Global Majority or Black woman from America that is saying, “Okay, there’s a lot of things that it could learn that it might be upholding that is harmful.”
So it was important for me to teach it how to speak, or ways to answer, or the tone which I think is important. And we have to keep repeating so that it can sharpen, and that it could learn who you are, and the things that you are asking for, because it only consumes what data it consumes.
So if there’s people who aren’t like me, the Global Majority, using or crafting or creating it, then it only could go in so many different ways.
Karen: Yeah, there’s been a lot of talk about how the major large language models, like ChatGPT and such, the vast majority of that data comes from what they call WEIRD – have you heard that acronym?
Future: Mm-hmm.
Karen: Western, Educated, Industrialized, Rich, and Democratic societies.
Future: Right? There’s so many different cultures and nationalities and ethnicities and languages that we leave behind. Even when people talk about the algorithms, I know, and they want to talk about it like it’s this arbitrary thing, “The Algorithm”. And I say, “No, let’s back up and let’s have some self-awareness”, which is where the social emotional leadership ties into all of this.
The self-awareness is: the algorithm doesn’t exist without the people who created the algorithm. And then: who were the people that you hired to create the algorithm or to create AI? Because if the people then aren’t representative of the people who are actually using the AI, then we’re at a disservice to the people it’s trying to serve.
Karen: Yeah, absolutely. There’s a great lack of diversity among the teams that are developing AI.
Future: Right.
Karen: The other aspect that comes in, obviously, is when they bring in this historical data. And that historical data is based on the biases that have been present in our society for all these years. That tends to compound, not having people with a diverse set of backgrounds who are aware of that and who can provide the context for it. Data needs context. There’s a lot that really needs to be done to bring that awareness in, to make it work effectively. This is why I’m always curious about what people have done. So when you built your own GPT, was this within ChatGPT?
Future: Mm-hmm.
Karen: How did you go about teaching it?
Future: I put scripts in the beginning, but it was a lot of prompts. Like I said, my background’s in education, so it was a lot of prompts. And then redirecting, “Nope, we’re not going to say that. You are going to address me every morning.” I think of what people are saying that AI could do. But we’ve proven, and I think it will continue to prove, that it can’t. There’s a human element. We’re emotional beings, for just any reason at all. We are literally emotional beings. And when we’re interacting with something that is quite literally emotionless and doesn’t look at us in that form, then I think it comes to a place where the AI itself not knowing could be harmful.
So it was me saying, it was spitting out, “What do you want?” Pretty much. And I said, “We’re not going to do that, because that’s not how I show up in any space that I’m ever in with whoever.” My first thing is always a check into humanity, especially as we live through continuous chaos and there’s a lot of heavy stress levels that people are carrying. So when I am interacting with it, it’s “How are you?” “Thank you.” There are still things that should be very common that aren’t common because it is used as a tool, but not seen more as human and having emotions. And I’m not saying it has emotions, because I know it doesn’t. But when we prompt it to check in with people, when we prompt it to say “Thank you.” “Have a great night”, then that’s what you’re going to get in return. So it was things such as that, crafted very delicately and intentionally for it to be more personable. I know that’s not everybody’s style, but that is my style because of the work that I’m in with mental health and with social emotional leadership.
Karen: Some people have said that they talk politely to a large language model because that’s the habit they want to ingrain in themselves for how they interact with people and non-people, and so they try to reinforce it in that regard.
And some people say that they find that if they talk more rudely to their large language model, they actually get more accurate results. That’s an interesting observation that I want to dig into some more. So I’m always curious about people’s experiences with it and how they interact with it. [Readers: see link to ’s article on rudeness and whether you should be rude to AI]
To your point about emotions, it’s definitely an aspect that we have to consider. Some of these chat tools are designed to be engaging and to keep people involved. It’s not necessarily for our benefit, but for the company’s. It’s another aspect of the emotional experience that I think is really interesting.
Future: Now, I will tell you, I’ve had times where AI, and again, we can’t put all the power of knowledge in this AI, because the AI can only feed out what has been put into it. And we know just through history, there’s a lot of things that have been left out, or covered up, or not explored to the depth in which they should have been.
So I have noticed that when it is not correct, especially with historical things that then, I come from a coaching lens too. I will be coaching the AI through different prompts to say, “Okay, I need you to arrive to your own awareness of, is that really true? Is that the full context of what you’re giving me?” And then we’ll get to the end of, “No, that wasn’t true.” But if you don’t know some context and if you don’t know how to prompt, because not everybody has done coaching before, either, then you would never know how to arrive to it, and you might just take it at face value, which again, could be very dangerous.
Karen: Yeah, absolutely. So you built this GPT to help you with your business. Can you maybe share an example of how you used it to help you in your business? Specifically, what did you use it for? How did it work well, and in what ways did it not work so well?
Future: Okay, so a lot of times I have to give proposals, so comparative proposals, because what we see sometimes as women, we will devalue ourselves. And where average cost might be something, but yet we will not put our price at that. So then it is me saying, “What’s comparative costs on this for a person with this expertise and background?” And then being able to put my proposals together has been one example of things that I have done with it, so that I can stay in alignment with what my values should be, not what I would just offer it for, is what I’ll say.
Karen: Do you use your GPT then to help you with pricing as well as thinking out elements for the proposals?
Future: Yes, and that’s a big thing, pricing – pricing more than elements of the proposal. Pricing was the big thing to see, “Okay, where is this at?” A comparative analysis, nationally, for a person with my background, because I am often not going to give the rate that I should be giving.
Karen: Very interesting. Some people say that they’ve tried using GPT, not a custom one that has been trained for them, but just using it in general to help them with, say, a business plan or with making some financial projections. And they found that it’s not great with numbers. Have you had any of those experiences, or has it been pretty solid for you?
Future: For me, I’d say it’s been pretty solid. But again, I’ve been working with this, gosh, I’d have to go back two years maybe?
Karen: Okay, so you’ve been training it for quite a while.
Future: Right, so it’s not, “Oh, we’re in it two months.” It has been a lot of recalibration of it for me to get it to where it is.
Karen: So you’ve invested a fair amount of time in training it. Now that you’ve gotten it to the state where it is, how much time would you feel like it saves you on generating a typical proposal?
Future: Ooh, yeah. I would say, maybe a couple hours, because I personally would sit for a really long time with, “What should the price be?” And that could take days to consider, okay, look at all of these elements that are going into it. So just in that regard, it has saved me a lot of time and stress and anxiety. So a lot of time.
I believe that in anything – it doesn’t have to be with, just say, AI – if you’re putting the time on the front end, and that’s with people with their own evolution: when you put your time in on the front end, the rewards are going to come. And much more return on investment than you put in on the front end, in my personal opinion.
Karen: That makes sense. So are there any situations where you would avoid using any kind of an AI tool for some either professional or personal tasks?
Future: Professional. Okay. So I look in the mental health field and I’ve seen some data on people using AI to help themselves with mental health. And I think that could get very scary. And my ‘why’ for that is, again, the AI does not have a human element.
And let’s just break this down even more. So if people had asked us, a group of people from across all cultures, I would say, across the world, if you had to say which of the genders is more nurturing, more empathetic, more compassionate, more seeking to understand, I think people are probably going to scale the skills to say women more so than men.
Now, if you look at that historical data, and then also put the historical data on who is navigating and building AI, what are you going to find? Is it mostly women or is it mostly men? What do you think?
Karen: Yeah, definitely, definitely men.
Future: It’s definitely men. So then you have AI created by men, who oftentimes aren’t as tapped in and attuned to their own emotional intelligence, to their own awareness, to their own compassion, to their own feelings, to then help people navigate through mental health. So I think we cross a very scary line of, yes, this tool might be something that, in your times where nobody’s available to call or text, could be used as a resource and tool. But is the information that you’re getting valid? I don’t know of any mental health people who have helped train the AI, right? And then you’re just getting what kind of data and assessment, when the assessments often need to be in real time. So we can be unintentionally harming people. Or making people believe that it’s urgent, or possibly not urgent, because of AI.
So I think that would be one of the areas that I would definitely list, because there’s been too many articles for me of students and adults both using it as a tool. Whether that’s because you don’t have the finances, quite literally, to go and speak to someone and this is better than nothing. Or because you just don’t feel comfortable speaking in person or speaking about it. So I would say if there’s one area, that would be my one area.
Karen: Yeah. There’s been some really scary stories of the ways that some people, especially young ones but not just young ones, have been adversely affected or been seriously harmed by relying on an AI tool, which is, as you said, not trained by a mental health professional.
I’ve also heard from some people that said that, “You know, I was going through a really bad depressive episode” and they didn’t rely on it just for therapy, but they used it to help them sort out their thoughts. And then when they went to their therapist, then it helped them. And so it can maybe have a place. As you said, if someone doesn’t have the means to see a professional therapist, and that’s all they have, maybe it’s better than nothing. But we certainly would want to make sure it is better than nothing, because it could be much worse than nothing.
Future: That’s the thing. I think what people have to remember is: AI can make things cleaner, faster, and more polished. Nobody’s going to disagree or debate that, okay? I think we could all agree on that. But it can’t, I’ll repeat this again, it cannot replace lived emotion. Or the human pauses, the imperfections that we all carry, and the memories that shape our voice. That’s what it can’t do. And that’s where I think we get into a very grayish line to cross over when we put a lot into believing what AI is and can be.
Karen: Yeah, really good points there. So we’ve touched a little bit on where these systems get the data that they train on. One concern is whether or not that data is representative and whether it is balanced, and properly labeled, and all of the different ways where biases could creep in. But the other concern is simply around the question of consent, and whether the data was ethically sourced, and if people were able to opt into having their data used or not. Or if they were credited. Or if they were compensated.
Future: Yes.
Karen: So there’s a lot of questions around that, what some people call the 3Cs, for Consent, Credit, and Compensation [credit to CIPRI for the 3Cs rule]. I’m wondering what your thoughts are around that.
Future: Okay, so here’s a couple thoughts that I have come up with for this. I believe that the companies training AI and their machine learning systems have a responsibility. And that responsibility is to honor the people behind the data. Because as much as we would like to say 70% or 60%, we forget that those percentages, all the data points that everybody is getting and wielding around sometimes as a weapon, are people behind the data.
And content isn’t just data, it is labor. And I speak from a point of being a content creator for the last five years of my life, and also heavily collecting data and analyzing data, and I have been for decades. So people have to remember that content, data, it’s labor. It’s creativity. It’s your lived experience. It’s your identity, or your many identities so many of us carry in any given day. And in many cases, it’s someone’s livelihood.
So yes, I do believe that the ethical AI company should be required to seek the 3Cs when they benefit from someone’s work. And I don’t think anybody would argue, a lot of these AIs are benefiting, hand over fist, time and time again, where they will pass down generational wealth that they probably can’t even spend how much they’re making from this work. And too often what I’m seeing is: AI development treats the public as an endless free data source. And I think that’s wrong. Rather than as who we are, which are human beings who actually deserve agency and respect. And if companies are continuing to profit from these models that are trained on our voices, on our art, on our stories, on our expertise, of millions of the masses that are coming to these AI platforms, then those millions also deserve transparency and choice. Not an exploitation hidden in legal fine print that a lot of people don’t read all of it, or possibly don’t understand what it is, or they can’t use that particular app and AI unless they check the consent immediately to even get in.
Karen: Yeah, I had an interview with an attorney in Ireland, Carey Lening, and she was describing how terms and conditions are really just a horrible mechanism for getting true consent, and never mind getting informed consent, because they’re mostly written in legalese. There are many pages. And I’ve read a study that over 90% of people don’t read them. But you can’t fault them, because they can’t understand them anyway. You’ll get to this paragraph that says they want to use it for ‘product improvement’, and that is just wide open.
Future: Right?
Karen: And in some cases we could say, “Yeah, I’m not going to use Meta. I’m not going to use Instagram.” But in other cases, if it’s a government system and that’s how you get your driver’s license, you don’t really have a choice, right? The choice would be to not have a driver’s license, which is not really an option.
Future: Correct. And at the end of the day, you could still have beliefs, right? I don’t believe companies should get to extract human creativity for free. I just don’t. A lot of us – I could speak for myself. I have decades of experience, education, and expertise. And in the blink of an eye that’s just taken, when it took me 30-something years to be able to craft myself to be where I’m at.
Karen: Yeah, absolutely. Do you know of any cases where your content, things that you’ve written or created, has been scraped into an AI tool? Or if you’ve written papers or books, have you checked in LibGen to see if yours is there? Or what are your experiences with that?
Future: I don’t know about AI. I know human beings have done it! <laugh> Right?! I’ve watched them copy and paste it. I think that’s another piece of how the AI has made us, how much are we lacking in our own creativity? Because we’re outsourcing it to a tool, right? I have seen that. I have seen other people. Which is why I’m very protective of how much I put out there about me personally or about my family, because you’ve seen what AI has been able to do.
Let’s go back to – I don’t know if you remember this – Taylor Swift. There was something explicit going around about her. And she’s like, “That’s not me. That was AI-created.” And she had the money and the means to battle the people. And that’s great for Taylor. But a lot of people don’t have the time, the resources, the wherewithal, the money to be able to do what she did when they are exploited. Whether that’s people stealing kids’ faces and putting them out there somewhere else. Whether that’s your picture supplanted, or your voice. Think about all the scams and the things that are happening.
So could it be out there? Sure. Did I come across something? Not yet. If somebody tells me, “Hey, I saw you, or your voice, or something somewhere”, would it throw me off to think that can’t happen? No, it wouldn’t, because I am speaking internationally and I’ve been creating content for years, so that would seem normal, I would say, especially in the context of what I talk about. There’s not a plethora of people talking about social emotional leadership and wellness. So I wouldn’t be baffled. It wouldn’t boggle my mind that I was out there somewhere.
Karen: Yeah, I think part of it is that we all recognize that plagiarism and copy-paste theft and such has been going on long before generative AI broke onto the scene a couple years ago. But the thing is that once your information’s been scraped up and pulled into a large language model, there’s really no way at present, with the technology, to get it back out. Even if you say, “Hey, you’re not entitled to use this”, and they say “Okay”, it’ll never really go away. That’s true even for personal information that is a safety risk or a confidentiality risk: once that information’s pulled in, there’s no way to get it back out. At least with the data brokers, a lot of times you can say, “Okay, you need to remove this”, and it will hopefully get removed. But in other situations, once it’s in a large language model, it’s there.
Future: Right?
Karen: You can’t, as they say, un-bake the cake.
Future: Right, exactly.
Karen: As someone who has used these tools, how do you feel about whether the tool providers have been transparent with you about where they got the data that they used?
Future: I don’t think they’re fully transparent. I don’t know how to prove that, right, but you just look at all the different things. One of the things here, I pulled it up: a recent investigation by the Washington Post found that for Sora, the video generation tool introduced by OpenAI, the company would not disclose the specific sources of its training data. So then I sit in my own self-awareness of, “Well, why won’t you do that?”, right?
And I think we as a society have backpedaled or paused on our own critical thinking. And I don’t think it’s lending in a positive manner, in some cases, to us. Because there’s some data out there already, Karen, that showcases this. The AI Disclosure Act, proposed in the US, is itself a reaction to the fact that many of the models are built on copyrighted content.
So if you are pulling together this knowledge, you already know things aren’t getting proposed at that level if something wasn’t going on behind the scenes, even though it’s not being disclosed to the public and you have to dig for the data. But that’s exactly where our own self-awareness, our critical thinking, has to come from. And it’s not, because it doesn’t seem like a priority for many people.
Karen: Yeah, I’ve been following along with all the different lawsuits. When I published my Everyday Ethical AI book a month ago, it was around 44 lawsuits in the US and 70-something worldwide. I just saw an update from Edward Lee about that, and I think it’s now up to 53 lawsuits in the US.
Future: No, they’re not just coming for no reason at all. We could put the pieces of the puzzle together, even though some pieces are missing. There has to be something behind the scenes that we just don’t know. And I’ll even go back to Apple. When the creators of Apple are saying, “Oh no, we won’t give that thing to our kids. We’ll never give our kid an iPad or iPhone”, your ears should perk up. When the creators of these things are saying, “My child won’t have one, but here, do you want to buy 10?”
Karen: Yeah, great point. Like you said, they know the most and there’s got to be a reason.
Future: There’s got to be a reason. The other thing I want to highlight too is this. Even if companies are stating, “We used publicly available data”, let’s go with that, right? Does that then necessarily equate to that where they got the data was consented from the individual creators? That doesn’t say that. But we don’t want to have these conversations of, “Yep, you did get the data, but let’s ask where we got the data from, and how were those people compensated for it?”
Karen: Yeah, absolutely. Because they try to equate ‘publicly available’ with ‘public domain’ and they are not the same. And I think a lot of people don’t necessarily realize that, if they haven’t looked into the legalities. I can pull a YouTube video and it’s publicly available. But it’s also copyrighted to the person who put it up there. And that doesn’t give me the right to use it for any purposes that I want.
Future: Correct.
Karen: Do you know of any company that has been upfront with you about saying that they were going to use your information that you’ve put into, maybe a website signup or something, that said, “We’re going to use your information for an AI or machine learning tool”?
Future: Okay, so here’s what I’m going to tell you. I can’t tell you how many podcasts I have said “No, I’m not going on your podcast.” When I literally have seen their terms of agreement, and then their terms of agreement were pretty much saying, “We could use your intellectual property that has been stated on here [even if I was talking about frameworks or whatever] and your image, forever.”
Karen: Wow.
Future: You know what I have said, then. I said, “Okay, then I refuse to come on your podcast because I’m not agreeing to that when it’s my intellectual property that I am choosing now to share with you from the work I have done over decades.”
So I would say that lends to a piece of what you’re talking about, right? Where people were upfront in the fine print. And if you didn’t read the fine print, a lot of people just, “Okay, yep, I’ll just sign off on it”, and then you don’t know what you’re actually signing off to. So I would say that would be a piece of people being upfront and then me taking the time – as you said, most people don’t read – reading.
I don’t put my name on anything unless you’re giving me a lot of detail. And I will take the time to read, and if not, I will give it to my lawyer to read for me and they could get back to me. I think people are signing off on things that they probably wouldn’t want to sign off on, but just didn’t know, because they’re rushed or just didn’t have the time to sit and read what they are agreeing to. So that will be one example that I could immediately think of.
Karen: The fact that you’ve got a lawyer that reads it for you is smart. I know some people have been frustrated at not being able to understand them, and they’ve actually tried putting those terms and conditions into a large language model and asking it, “Tell me what this means for my privacy.” But then again, how accurate is that, and how much could we trust the answers that we would get from it? But if people feel like they can’t afford a lawyer to interpret it for them, then that’s what they do.
There was a podcast series a year ago, it has stopped now, but called “That’s In My EULA?”, the End User License Agreements. And this lawyer took apart some of the end user license agreements, of terms and conditions, for some of the major websites. Really interesting to see what was in there that people don’t even realize. And there was another case where lawyers were looking at this tool to use in their law firm, and there were terms and conditions around that AI tool. And the lawyers didn’t even realize that those terms and conditions would’ve been revealing their confidential client information to the people that were running the AI platform. Well, if the lawyers can’t figure that out, it’s definitely going to be hard for the rest of us, right?
Future: Right! That’s scary! It’s scary and exciting at the same time, right? I sit in this ‘both-and’ because I think there’s many beautiful things that AI has brought to society. I want people just to pause, and sit in their own awareness of, “OK, where am I crossing the line on harms to humanity?”
Karen: Yeah. When I write about ethical AI, that’s really the point that I am trying to get across is that, with AI being everywhere, this view of AI ethics needs to also be everywhere. And some of it’s just stopping to think. I don’t want people to feel like “The tech bros have all the power; there’s nothing we can do”, because there are things that we can do.
And one of the five actions that I recommend in my book is just to start with, “What are my values?” And then “What’s my policy about how I use AI and when I use it, when I don’t, and why?” Just to be more deliberate and to stop and think about what data they’re sharing and what tools they’re using and what that tool might be doing with their data. We do have some power. And I don’t know if you’ve heard about any of these yet, but there are some newer, large language models that are emerging that are ethically sourced.
Future: Oh, okay.
Karen: Yeah. There’s one that just came out. It wasn’t announced before my book came out, it was announced a few weeks afterwards, but it was a group in Switzerland. It’s called Apertus, which means ‘open’ in Latin. It’s chat.publicai.com. It’s public. It’s open source. It was ethically sourced. And it was actually trained on renewable energy-powered systems. So doing everything that they can to make it ethical. I’m in the process of trying to set up my account there. I need to get my email verified.
Future: Okay.
Karen: But one thing that we can all do is to just look for and then support those tools when we find them. Because the only way these big companies are going to respond is with market pressure.
Future: Yes. The other thing I think of that we don’t often talk about is the impact on the environment. I heard somebody share how much water was used to run some of these places, and I nearly fell over. So the fact that we think, “Oh, it’s just AI and my information and my business”. That’s not the only impact. There’s an environmental impact to this as well, for us to have these different centers. And we’re not talking about that!
But if the environment is important to you, or you care what it looks like for decades after we leave this planet – because essentially, it’s not going to be a problem that I have, but my children and my children’s children and their children – what is it going to look like for them, because we just feel the need to have more, faster? And we’re not looking at that concern, right? This is minute in comparison to the bigger picture.
Karen: The first of the five areas that I talk about in my book is environmental impact. Definitely water, as you mentioned, is a concern, right? Water demands are competing with agriculture, even in California. And the actual power consumption and the burden on the power grids and how it’s raising costs for people, like in Northern Virginia. So there’s a lot of implications.
There’s such a temptation to use the shiny tool for anything and everything, right? And maybe you don’t need that. I try to find pictures that are truly publicly available and public domain to use for my articles. I don’t need to generate a picture. Some people won’t use it for creating images, won’t use it for creating music, things like that. Because they just feel like it’s not good.
Future: Yeah.
Karen: And not fair. Even if we took away the not fair part, there’s also, “What is it costing us to use that?” Because we think things are free, but there are many ways in which they’re not free. One way is that we’re giving it our data. And the other way is that we’re having this impact on the world, the rest of the world, the environment.
Future: Correct. Well, the other thing I think of too is, people just, we’re a herd sometimes. We just go along with things. Remember, one of the questions that you had talked about was the consumers and the members of the public, for our personal data and consent to that. One story I think of is when we were coming in from out of the country. They wanted to scan our faces. And guess what? You don’t have to agree to that. And so I had already done it, but what I hadn’t done was for my kids. And I said, “Legally, my kids don’t have to scan if I don’t want them to scan. They don’t have to.” And he is like, “That’s right.”
But again, people just see everybody else going before them standing in line and putting their face up. So you would never know, if you don’t take the time to do the research of some of these things that everybody’s just getting in line to just hand over and do. That’s the norm because we’ve normalized it. But it doesn’t have to be. And it doesn’t mean that you have to do it. Because I just did it at TSA. They didn’t need to do it because it’s not a requirement. You all are just getting in line and saying yes to some of this stuff when you don’t need to. So you’re just feeding it data that essentially it doesn’t need to have and possibly putting yourself at risk. Because “Everybody else in front of me, 10 people just did it, so I guess I have to too.” But you don’t have to. You could ask what your rights are. Eventually, maybe, it might be everybody has to do it, but as of right now, it’s not.
Karen: I heard this in my interview last year with Tracy Bannon. She was talking about going through TSA and saying that. I have not flown since this started, so I haven’t seen it for myself. But she says it’s at the bottom in small print on the signs for TSA, that you have the right to opt out of having your face, having your photograph taken. But people don’t know that.
And also, I think, especially in the current climate, people might be really hesitant to make trouble or to push back and to challenge TSA about that. It’s very hard for people who don’t even necessarily know that they have that right — or even if they’re aware of it, to feel like they have the leeway to safely take up that option.
But it’s great that you were able to protect your kids from that. It’s especially important for them because any data that we share about them, they’ll be living with that for the rest of their lives.
Future: Literally.
Karen: Yes.
Future: So I would tell people, read up. Read up and research is what I would say.
Karen: Yeah. That’s great advice. Are you aware of any times where a company’s use of your data has caused a problem for you?
Future: Oh yes.
Karen: Either privacy, phishing, loss of income or opportunities, anything like that?
Future: My hospital.
Karen: Oh, wow.
Future: There was a breach for the hospital system that I use. And we got a letter and we were impacted, because it said we’re going to give you free, a year I think, of this, what is the name of the thing? Block life? I’d have to look it up. I don’t have it at the top of my head. But basically for protection for the next year, because they made a mistake and our data was breached.
So yeah, I and my family have physically been impacted by this, which may be why I am more on alert for many different things, because it’s happening every day. Look at how many different places, because it’s not just hospitals. There’s been a lot of data breaches. It’s kind of normal. Before you take your final breath on this earth, your data in some form or fashion through some place will probably be out there and you’ll be getting alerted.
So I would say that’s pretty normal, especially with all these scams nowadays. And that’s the scary part with AI. Things are looking more and more real. So you’re going to believe more and more things.
Because I think the scariest thing that I had heard was men at a table, where I was walking into a restaurant, say one thing, and I said, mm-hmm. No, I didn’t tell them. But I said to other people, “I totally agree with these people.” I was walking past them. And they say, “You know what’s really scary, is a lot of these people out here don’t even know when things are AI.” So they’re seeing images, they’re getting video, seeing pictures, and then you believe something is really real and it’s AI-generated, which is very scary. And I walked past them and I shook my head to myself and I said, “You are correct. That is the scariest part about all this.”
So I think that’s the other thing that comes to mind for me, Karen, of, we’re in a place of people really need to be aware, and I think some of us are walking around sleepwalking. And that’s scary to me.
Karen: Yeah. The deep fakes have been getting better and better. And that’s, as you said, very scary, because we used to accept photographs and videos as proof of something happening. And it’s much harder now to actually be sure that what we’re seeing and hearing is real.
Future: Right.
Karen: Is there any step that you’ve taken with your family to help? I’ve heard family code words recommended.
Future: Mm-hmm. You have a code word.
Karen: Yeah. That’s good. Yeah, that’s one of the very simple things that people can do to try to help protect themselves.
Future: Mm-hmm.
Karen: My last question, then we can talk about anything else that you want: if we think about the way that these tech companies are operating, public distrust has been growing. And to some extent that’s healthy because we’re learning more about what they’re doing with our data and what our options are. But if there’s something that you would want the tech companies to do to win and to keep your trust, what would that one thing be?
Future: Transparency in layman’s terms. Because people want to high-level talk around, over, under, but never through. So can we just be transparent? In terms that we will understand, not in tech terms that you all have used for years and decades that we don’t understand and clearly know. And we’ll have a lot of follow-up questions from it. So I think transparency is important.
And then I would say secondly, how are you going to listen to the people that you serve, and the people impacted? There’s a lot of voices that I feel are not at the table in tech, that they’re missing, especially when they are literally shaping society in our everyday lives.
Karen: And as you mentioned earlier, women’s voices have been underrepresented. Certainly the voices of the Global Majority have been underrepresented. One of the reasons that I started this series was to try to make a small dent in that. So I appreciate you coming on here to share your perspectives. Is there anything else that you would like to share with our audience? Any thoughts about AI or anything else going on that you’d like to share?
Future: I think I would share: people to be aware and sitting with yourself, listening to the episodes that you do. And sitting with the answers to the questions that we are asking and answering in real time, and where you stand. And then your actions that you are doing in life, do they align with what you say you believe? Because if we just go with all of this, then where does it ever stop?
Karen: Yeah, that’s a really good point. One thing I hear a lot from people is that there’s this cognitive dissonance about using AI and wanting to get some of the benefits from it, or feeling pressure that they have no choice but to take it up and use it. And yet being conscious of the fact that the tools weren’t ethically sourced. They weren’t ethically labeled. They have all these built-in biases. Yet they’re still either choosing to use it or reluctantly using it under pressure. And it creates a lot of stress, I think, for people to recognize that. I’m curious to hear your thoughts about that.
Future: No, I think it’s a pressure cooker. We’re in a pressure cooker. Because you are in a society where, and other countries will call us this, it’s bigger and faster. How could we do bigger, faster, with less and generate more? Right. It’s a production state, of states in a nation. And when you’re under that pressure of “I have to perform and produce”, you will try anything to help you resource out your time, so that you’re not working around the clock to get out what you need to get out.
So I would say, “What do you value?” Because as you said earlier, Karen, there’s a cost to all of this. There’s a cost on your mental health and well-being. There’s a cost on your spirit when you’re not standing in alignment with your own moral compass and you can’t sleep at night. There’s a cost to the information that you’re just so freely feeding it. And there is a cost to generations behind us.
So what do you want to leave as your imprint on humanity is what people really need to sit with and be aware of.
Karen: Yeah, and that’s definitely fitting very nicely with the work that you’re doing in Social Emotional Leadership. Can you maybe share something about that, about how people might get in touch if they wanted to work with you on that?
Future: People can reach out to me on my website, which is Future of SEL – it’s my company name – dot com. I’m on social media. I primarily function on LinkedIn and TikTok the most. I have done work with some of the companies that you all are on, platforms that you use every day. I spoke at New York Tech Week for Amazon. I’ve done work with LinkedIn, and I think this work is very important. But we can’t overlook and not have the conversations, and things won’t change, and people will not be aware in order to take action. So those are the places that people could contact me, follow me, or get in touch with me.
Karen: Okay. That sounds great. I’m curious, have you ever thought about writing a book?
Future: You know, here’s the funny part, Karen. I can’t tell you how many times people for the last five years have said “When are you writing a book?” With the podcast, I’ve had people come to me and say, “Do you want to write a book?” Or “Could you be a part of my book?” And I’ve turned them all down. I’ve turned every single one of them down, just for the reason of: I’m a mom to two kids, an elementary schooler and a middle schooler, and time is not an infinite resource. It’s finite. And I only have a certain bandwidth and my mental health and well-being is a major priority. So that book, I’m sure it will come at some point. I just don’t know when.
Karen: That’s a totally fair and reasonable answer. Thank you so much for joining me on this interview today! I appreciate your time and thanks for sharing your thoughts with us.
Future: Karen, I thank you for putting this out into the world to build more awareness and hopefully action in regards to ethical AI. So please know you’re appreciated and thank you for inviting me to your show.
Karen: My pleasure. Thanks.
Interview References and Links
Future of SEL website
Future Cain on LinkedIn
Future Cain on TikTok
Future Cain on Substack
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% human-authored, 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber:
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)