
🗣️ AISW #049: Blair Glaser, USA-based leadership coach

Interview with USA-based writer, executive coach and leadership consultant Blair Glaser on her stories of using AI and how she feels about AI using people's data and content (audio; 37:26)

Introduction

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Blair Glaser - provided by Blair and used with her permission. All rights to the photo are reserved to her.

Interview

I am delighted to welcome Blair Glaser as my guest today on “AI, Software, and Wetware”. Blair, thank you so much for joining me today for this interview! Please tell us about yourself, who you are, and what you do.

Oh, I'm really happy to be here. I am a writer, executive coach, career coach and leadership consultant. I mostly work with leaders and their teams. And my focus is on preserving what I call H I, which is Human Intelligence - in the workplace, mostly.

I define HI as the intelligence we all have that can’t be coded: real empathy, strategic contemplation, asking the right questions and exercising the ability to be present, to see yourself and others more clearly. And I think that's an important ballast to all of the developments that we're making with AI. And an important thing to keep in mind as we step into this Wild West of a future.

Great! Can you talk a little bit more about your work in human intelligence?

Sure. I started off as a therapist. I was in the therapy space for 25 years. I ran empowerment workshops for women around the country. In that time, I worked with a lot of powerful women who were interested in bringing me into their workspace. And I learned very quickly that professional and personal development are actually different, even though they are linked. So I took it upon myself to study organizational dynamics, group dynamics, and leadership, really. And my practice morphed into a leadership consultancy.

Okay. That's very good. So what is your level of experience with AI and machine learning and analytics? I'm wondering if you've used it professionally or personally, or if you've ever studied the technology?

Compared to most of your guests here, I would say I'm a Luddite <laughter>. But I'm very aware of how AI is a part of my life. So, as a writer, I've used products like Grammarly. I use Google Workspace. I see all of the advantages that AI gives me in that endeavor. Although - we can talk about this a little bit later - as a writer, I'm now a little bit reserved in feeding my information into those programs.

Even just having a Tesla (which at this moment, I feel conflicted about). But, you know, there's so much AI in that. I live in Los Angeles, where the Waymo cars are taking everyone to and fro. And watching a self-driving car is a marvel.

So I'm aware of how much AI is a part of my life. And Siri is definitely a decent digital assistant. Google Translate, Siri Translate, these are all things that I use on a regular basis.

Yeah, thanks for sharing those examples of how you're using different AI-based systems, and also how they're around us, and affecting our lives whether we choose to use them or not! For instance, all the other Waymos on the road when you're trying to drive.

Exactly, exactly, yeah. Are they in your area too, or not?

I haven't seen any Waymos. I'm over in North Carolina in the Research Triangle Park area. And we've seen a fair number of Teslas, but no Waymos yet, at least not where we're at.

It's quite a sight. And here's the weird thing: I barely “see” them anymore. Like, the first time I saw one, it's this car with these cameras on the top. It's quite otherworldly. And there's nobody in the driver's seat and you're just like, “What? What? What is happening?” And then they're just everywhere, and we're in the future.

And that happened very quickly. I would say it was the spring when I started seeing them cropping up, and I would marvel at them. And now, not even a year later, they're just standard, ubiquitous.

Yeah. You mentioned earlier that you've described yourself as a Luddite. I think there are actually a lot of misunderstandings about the Luddites. They didn't actually hate technology. What they were rebelling against was the technology being exploited, and themselves being exploited and replaced by the machines in ways that were not friendly to humans.

Well, actually, I'm going to then reclaim that, and say that was the exact right word.

Okay!

Because my mission really is to preserve the aspects of humanity that, in some ways, AI keeps threatening to usurp - these things [aspects of being human] that help us see each other as three-dimensional beings. When we spend so much time looking at a screen, we start to perceive our world in a much flatter way. We stop seeing the people who are sitting across from us in all of their dimensionality, in their embodiment, if you will.

And those are really important things for creating feelings of connection, understanding, empathy. They seem small. But eye contact, being able to look at the store clerk and say “Thank you” - such a small way to keep our humanness alive. But if we're so in our heads, and we're so in our screens, we start to lose that specialness.

That's a really good way to look at it. I hear a lot of people use the term Luddite to mean that they don't like technology, or they assume that people don't like technology. And it really isn't that.

No. I like technology. Yeah.

Yeah, so I'm happy to hear that you're reclaiming the original meaning and just making it non-exploitative.

Yes. Yes.

That's great. So can you share a specific story about how you've used tools that included AI or machine learning features? Deliberately chosen to use, I should say?

Yes. Actually, in one of my Substacks, I wrote about how someone had contacted me and said, “Did you write this paper on using acting with neuroscience?” Because drama therapy is a big part of my training and my history. And I had NOT written that paper.

But it brought me into an exploration of how AI, which I always say is developed in man's image, lies. So for that piece, I used AI to generate an image to go along with it. I thought that was very fitting.

Again, I try not to feed my writing work into it because I don't want it to be plagiarized, which, again, I find ironic because ChatGPT is making up papers that I haven't written.

Was it a real paper that someone else had written, but it just wasn't under your name?

No, it was a complete hallucination.

Okay. Alright.

It was a very good hallucination. I was like, “Wow, I wish I wrote that.”

That’d be impressive, huh?

Yeah. Exactly.

So it wasn’t a different Blair Glaser - it just wasn’t a real paper at all.

At all.

Wow. Okay.

Right?

So that was one instance where I asked GPT to come up with an image. You know, I do a lot of dictating with Siri, dictating texts. I mean, this is very low-grade again, but that is a way that I use it.

And one time I had written a primer on LinkedIn for a client who was interested in growing in that way. And I had a lot of information that I had studied over the years, and I popped it in there, knowing that it already was populated, and said, “Is there anything you would add to this?” And there were a few decent tips that came out of that.

So, very sparingly do I use it in these types of work-related regards. But I have used it, and I do.

You mentioned a few things that you've used AI-based tools for. Are there any things that you have avoided using AI-based tools for, that you would choose not to? And can you give an example and talk about why?

Sure. I mean, I really need a grammatical editor. And there are so many times when I just want to - well, I don't use Grammarly anymore. But I want to just put my writing into a service like that and have it edit. But I don't. I refrain because, again, I really want to preserve my IP. I don't want it to be used for training.

And I also, just as a writer, feel in solidarity with other writers about protecting our work. You know, good thought leadership really does take a lot of effort. On the Substack that I write, my posts are usually not longer than 800 words. I try to keep it within our attention span! But I have sometimes spent up to 8 hours on those 800 words, so that I can take information and chunk it down, and make it very readable. And that takes work. I want to preserve the effort. I want to preserve the grit. I want to honor the human intelligence that was put into it. And sometimes I think that feeding it into AI to have it checked would compromise that.

Yeah. You mentioned Grammarly. I had done some investigation last year into tools for measuring readability. And one thing I realized with Grammarly is that, if you have a free account, they ARE going to use whatever you put in there and feed it into their tool. And I guess that's the price of a free tool, but it's something that we have to think about. I mean, I would only put things in there that were going to be published publicly within a matter of a day anyway.

Yeah.

And that was really just for measuring readability. And I found another tool which doesn't do that, which gives more complete information. Initially, I was writing at about 11th grade level, or 12th. And I got it down to about 9 most of the time. So I feel like using the tools has really helped me.

But I remember looking at that for Grammarly, and looking at what they're doing with AI and saying, “Yeah, I think I'm going to avoid this particular tool.” Plus, I found some of its suggestions kind of annoying.

Yeah. Yeah. I had a lot of issues when I was first using it, before I knew about all of these copyright issues. I had to have it loaded onto my Mac - I don't know if they're still doing it that way - and it was a problem. It was not smooth.

But back to what you were saying, what is the tool that you're using now that doesn't compromise your work?

It's called Readability Formulas. I can drop the link in here, and I'll send it to you. I wrote articles about it on my newsletter last year - about my adventures in trying to find tools for measuring readability, what readability actually means, and where tools fall short in actually measuring everything that matters for readability. It's just a score, and the score isn't everything.

Yeah. Yeah. Interesting. I had an article published in an academic journal this year, and so I'm very familiar with the differences. And also then, once you get into writing in that mode, you know, it can be hard to break out of it because you're dealing with complex ideas. And it's kind of fun to be able to write a very long sentence in which you're packing those ideas in.

That does not work for the kind of writing that we're trying to do now, on your newsletter and on mine. But switching between those modes can be difficult, and I'm glad to hear that AI has been helping you. I'm looking forward to testing it out.

Yeah. Readability Formulas is - you know, there's probably some machine learning under the hood, but it's definitely not generative AI. And they're not scraping up what I put into it to use it for an AI-based tool. So that made me feel fairly confident.

And it's pretty straightforward. There's not just one measure for readability. There's a bunch of measures, and you can look at what goes into them, and decide whether or not that measure makes the most sense for your kind of writing. There are some that are aimed at grade school children and education, you know?

Mmhmm.

So it's a really interesting package. I'll definitely drop the link in. [Readers: see endnote 1 for links to the readability scoring tool and the evaluation article.]
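[Readers: for a sense of what goes into formulas like these, here is a minimal illustrative sketch of one classic measure, the Flesch-Kincaid grade level, which scores text using average sentence length and average syllables per word. This is not the code behind Readability Formulas or any other tool, and the syllable counter is a rough heuristic assumed here purely for illustration.]

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: treat each run of consecutive vowels as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    #   0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Example: prints an approximate US school grade level for the sample text.
print(round(flesch_kincaid_grade(
    "I have spent up to eight hours on those eight hundred words."
), 1))
```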

Thanks. Looking forward to checking it out.

Yeah. That's good. So as a person who has used an AI-based system or tool, you've talked a little bit about solidarity with other writers, and not wanting content to be used. A lot of these tools get their data and content from places where people have published it online, or have put it into an online system when creating an account. Or where you sign up for a loyalty card and get a discount, but then they go and sell your data.

Mmhmm.

So a lot of these companies aren't transparent about what they're actually going to do with our data when we sign up. For instance, you might agree to sign up for something and say, “Yes, you can use my pictures to tag my friends.” But then they use your pictures for many other things that you have no control over.

So there are a lot of concerns about the transparency of these companies, and about them using our data and content. And I'm wondering what your thoughts are on that: whether or not they should be required to get consent (which can be hard), or to compensate the people whose content they use, when it's very clearly traceable to an artist or a writer?

Yeah. I don't have much to offer in the way of how a company could go about that. But I do think they need to figure it out. Because the work that an artist puts into their art is their livelihood. And it's unfortunate that, as I mentioned before, all the grit that goes into that then can be so quickly scooped up and used for a different purpose without any attribution or compensation. It's just a very clear problem.

I really enjoyed the talk that you had recently with a lawyer [Carey Lening] who was parsing through all of the privacy statements - she made good use of what she was able to pull out, and showed how much is hidden in there. Clearly, we're coming up against ethical concerns, breaches, and things that are important to consider as we're moving forward.

Yeah. You mentioned earlier about spending so much time to write something that was short and clear and easy to understand, and getting to your 800 words in the 8 hours. It definitely can take a lot more time and effort to come up with something that's clear and concise, and that's, I think, been well recognized.

There's a saying - often attributed to Mark Twain, though the earliest known version actually comes from Blaise Pascal:

“I'm sorry that this is so long. I didn't have time to make it shorter.”

I love that quote. Yes. Exactly!

So as someone who uses some of these tools, do you feel like the tool providers have been transparent about sharing where they got the data from, that they use for training their AI models?

I think it's getting better. I noticed that when I logged on to ChatGPT more recently, there was a reminder to check what you get, that it's not always accurate. I think that's helpful and useful.

Does ChatGPT prompt you if you're going to upload something, to make sure that you have the right to that material to share it?

I don't remember that at all. Yeah. I don't think so.

Because there's the content that they use to train the model in the first place, and then there's the content that they basically take in when people use the tool. And then I think a lot of people maybe don't realize that they're going to use that and take advantage of it.

I think a lot of people don't, yep.

Yeah. Because they're really not transparent about it, but people shouldn't have to dig to figure that out.

Well, then we need to talk about the whole shift in culture around privacy anyway. Because a huge swath of our communities don't care about privacy. They feel that their private lives have always been for public consumption, which I find interesting and definitely different.

But recently, there was a huge article, which you may have read in the New York Times, about a woman who is in love with a chatbot - the entire story of the intimacies between her and this chatbot, and her process. With her current subscription to ChatGPT, she really can only have her boyfriend for a set period of time before they cut off the chat on that particular topic. So she has basically bought boyfriends for a number of months at a time, and then they die, and she has to start again, training ChatGPT to be in relationship with her.

This is an entirely fascinating and mind-blowing area of human development that I don't even know how to fully process. But what I can say is that the value around privacy is changing for many people.

Yeah. There's been some interesting surveys about how people around the world feel about tools like this scraping content, or using things that are ‘publicly available’, even though they're not public domain in the legal sense. And the opinions do vary somewhat around the world in different cultures.

Some people feel that, “Well, if I'm putting it out there and if I get to use the tool in return, then that's a fair trade.” Other people point out that, “Well, you may not mind them using your stuff, but that doesn't mean you have the right to use other people's stuff.” THEY should have the right to decide.

That's it. Yeah. Yeah.

I've been trading my data for use for a long time. And, you know, on Facebook, I feel okay about it, because I have grown quite a following there. And as I always say, if I out myself as a Facebook user, I'm telling you how old I am! But, you know, it's been very good to me in certain ways, in being able to get a message out and create receptivity around that message. So I do feel that there's some fair trading going on there. But I try to do it with consciousness.

Yeah, it's interesting, also, to think about privacy. You see certain patterns. Some of us grew up without everything we did as a child being online for posterity. For others, I think it's maybe swung back the other direction - my nieces and nephews who are having kids now are purposely NOT blasting everything out there about their kids, because they're aware that it's going to be around forever. And that might not be something the child is comfortable with. So they're taking more steps to be proactive about protecting their child's privacy from the beginning, which I think is an interesting development to see.

But there are also people who are in the middle who are - I think they feel kind of powerless. Like, “My data's out there. They've already scraped it. I can't stop them.” But I think we have more power collectively, certainly, than we do as individuals, and I think we overlook that. We shouldn't give up so easily.

What you just said was inspiring to me on so many levels and has more application than just how we're thinking about data in our technological world. You know, the power in the collective, the importance of preserving things that are sacred to us. And others coming together to fight for that, to battle the helplessness that in some ways is really being called up. I think those are important messages for right now.

So we talked about giving our data, making some trades to companies, in return for some benefits. Do you know of any companies that you gave your data or content to that actually made you aware that they might use your information for training an AI or ML system?

The only thing that I'm thinking of is when Meta, and I think it was through Instagram originally, started doing the photo selection process. Either they could have access to all of my photos - and they were letting me know that they would - or I could select specific photos, and that information would be more protected. That's the only thing I can remember. And I might not be the best at paying attention to that kind of stuff, but that one stood out.

You also mentioned LinkedIn earlier. There was some flak last year about LinkedIn basically opting all of us who weren't in a GDPR-protected region into them using our content - everything we'd ever put into LinkedIn up until that day.

Yeah. I don't have anything to say to that.

Yeah. I went in and opted myself out from things going forward. But the fact that they opted us in by default left a sour taste in a lot of people's mouths.

Yeah. Absolutely. Yep.

So the other thing that I've heard - you mentioned Siri, and I don't know if you heard about the lawsuit. It was announced on January 2nd that Apple was settling a lawsuit over Siri capturing conversations from people's phones. And even when people thought they weren't using Siri, or didn't have it enabled, that offline conversation data was then being sold to marketers and advertisers, to market to people.

I mean, it's become so outrageous and so sophisticated. I'm sure you've had this experience - or maybe not. But I have had a thought in my head, not spoken out loud: “Boy, I could really use a” - I don't even know what it might be - “I could really use a paper clip that looked pretty.” And all of a sudden, I'm getting ads for pretty paper clips.

But, yeah, like, literally, I will just be thinking of it. I haven't even said it out loud. The first time that happened, I was with my now-husband, then-boyfriend, and we were buying a present for his mother. Siri was not enabled. And we were standing over a jewelry counter, and we happened to be in a ring section. And he said, “So if I were going to get you a ring, would you like any of these?” We were there for a different errand, but that happened. And the next day, in my social media, in the ads, there was a counter with rings in it, saying, “Looking for diamonds? We have them.” And it was, like, a literal counter like the one we had just been standing at.

Wow.

So it's been spying on me heartily. And somehow, as I said earlier, it has even gotten into my psyche, where I cannot believe some of the things that I'm seeing in my feed. Because the algorithms are so sophisticated. They're taking what I'm saying and translating it into things I might even be thinking about.

Yeah, I think it also goes the other way, where even if we don't necessarily realize it consciously, some of the things that they're pushing into our feeds for us to see will put those ideas into our heads.

Absolutely.

Ads about the pretty paperclip.

So I have a ‘no ad’ policy. I don't purchase from the social media sites. And if something comes across that really sparks me - which I try not even to pay attention to - I'll research it separately.

Yep, that's probably a good idea to try to limit the advertising that's targeting you.

Yeah. Yeah.

Yeah. That's good. Do you have any other home devices other than Siri that you're concerned about?

Yeah. We have a Google Home and I've got a lot of Google products, and … yeah.

Do you know of any cases where things that your Google devices picked up from you ended up being used in marketing to you?

Not that I'm aware of, actually, even though we know that it's listening to us.

But when we had an Alexa a while back, we had a few instances. Yeah.

Yeah. One of my other guests, Tracy Bannon, told a story about how she de-Alexa’d her home after she realized it was listening to her and to her parents. And it would end up surfacing in online advertising to her. So, yeah, that's pretty common. Yeah.

Ugh. Yeah. Gives me a very ‘ick’ feeling hearing about it.

Yeah, yeah. So has a company's use of your personal data and content ever created any specific issues for you? Other than the creepy feeling of being spied on?

I don't know how this is AI-related, but I have found a few of my blog posts or articles rewritten or republished in other places around the net without attribution.

Oh, wow.

Yeah. It happens a lot.

Did you have something like, I'm going to say, a Google Alert on your name or your content that flagged these for you? Or how were you finding out about them?

You know, it's funny. I did not put on a Google Alert. But I remember looking for one of my articles. I was like, “I don't remember exactly the URL.” I didn't have a record of it, and when I searched, I found it in a few places. This was a number of years ago.

Yeah. I think that's not so much necessarily AI. It's just people that aren't behaving ethically.

Agreed. Agreed. Yeah. There may be some search involved, but yeah.

Yeah. So what this all kind of comes down to is that we hear a lot more, and we're learning a lot more, about what these companies are doing with our data, which is leading to greater distrust. And in a way, that's healthy because it means that we're realizing what all they're doing, which we weren't really aware of before. And we're starting to see some more pushback from consumers on uses of our data. What do you think is the one most important thing that these companies need to do to earn back and then to keep our trust?

Hmm. Let me take a roundabout way of answering that. They have a lot of figuring out to do. And right now, there's not huge pressure for them to be ethical, in a political sense. I think the best thing I have on this is for anybody listening to start to look at the trust in their worlds.

  • Who are the people that they trust?

  • Who are the companies that they trust?

  • Who are the mentors that they trust?

  • How are they building trust within their families?

Because this is really the only trust that we can control. And I think that, you know, trust is one of those words that doesn't mean that much after you say it. So I think it's important for every person to understand what a trusting relationship means to them.

You know, there are people right now in power who think trust is all about 100% loyalty. That's not my definition. But it's important for me to define it for myself. And then to make sure that who I'm in contact with joins me in the iterative process of building trust.

Yeah. That's a great point, when I think about the people that I trust. On the one hand, there are some people who say, “I don't want anyone who's going to criticize me.” On the other hand, one thing that I'm finding as a writer, in my new solopreneur venture, is that what I would really cherish the most, and would feel valued by, is someone that I trust enough to know that they're going to tell me when something I just wrote is crap. <laughter>

Yes. Yes.

And that's really hard to find. And so someone that I know will do that for me is someone that I would really feel that I could trust. Of course, I’d want them to do it in a non-degrading, non-derogatory way. But I really feel that someone who doesn't just say “yes” to everything I do and say is far more valuable to me than someone who does.

I share that with you. And I also want to support the fact that finding that person, and building the trust with that person - you know, allowing them to criticize you, seeing how they're doing it, feeling how it feels, seeing how they react to your reaction, and then vice versa - that is a sacred process that adds great meaning to life. And I'm interested in keeping that alive while we enhance our lives with technology.

In fact, I see my role almost as a steward or guardian of these human interactions that add meaning and quality to our life. I was listening to your podcast and there was this great agile and transformational coach that you had on. And she shared a really powerful story.

Kimberly Andrikaitis?

Yes. She shared a really powerful story about having an issue with either a coworker or someone that she was consulting with. And so she sought ChatGPT's advice on how to handle it, and it went off well. Now on the one hand I thought, “Oh that's really smart. She prepared for a conversation, which a lot of people don't do, and she got some good advice, and it went well.”

I don't really have a problem with that. But I will say that, as someone who is in the business of helping people - particularly at work - get along, collaborate, and form generative partnerships, it was certainly threatening! But the thing that really got my attention was that there was a missed opportunity in her going to ChatGPT. Maybe she works for herself, and that was a good way to do it? But there's so much that we get out of reaching out to other people for help. Not just coaches - I mean colleagues: going down the hall and saying, “You have a minute? Because there's something that I want to deliver to our boss. Maybe you could help me figure out the best way to do it.” Because two people's HI, their knowledge of who that boss is, can create a very effective communication. But it also then strengthens the bond between the two coworkers.

And that risk of saying, “Hey, I need some help” - the risk of asking somebody else for help, and then the risk of presenting that conversation - it's in those risk-taking experiences that people learn how to bond, how to relate. It's in risk, it's in the unknown, it's in the unpredictable, that we start to really develop our human intelligence. It's actually the awkwardness and the mistakes, in a way, that give us the data that we need, and help us stay humble.

So I just wanted to make a plug for both, you know? I mean, I don't judge how resourceful she was. But I also think that we need to remember how important it is to ask for help from people. And also to take those risks in conversation, where maybe you don't know exactly how it's going to go, but you learn together, if that makes sense.

Yeah. It absolutely does. It reminds me that I interviewed a student recently who was talking about not wanting to bother all of her teachers with all these questions that she had about math or a STEM subject, and so she used ChatGPT as a tutor.

And I thought, “Well, that's good”. But, you know, when I was in college, we didn't have that option, and we had study groups where we talked to other students. And, you know, we could all help each other. Everybody knows something somebody else doesn't, or understands something better, and we helped each other.

And I think it's great that she has that resource. But it also made me feel a little sad that she doesn't have other students or other people that she can rely on, and have that experience with, and be able to pick up that knowledge in that way. It's great to have it, better than having nothing, but like you said, the really ideal thing is to have both.

Yes. And in the example that you just used, what gets reinforced is the need to be seen as perfect and having the answers. And also isolation: if she can sort of hole up with her ChatGPT tutor, she doesn't have to make the bridge to the teachers in the outside world and other students. And that creates the isolation and the loneliness crisis that you keep hearing about.

The other example that came to mind, when you started talking about loneliness and interacting with the bots instead of the humans, was - this is a case that I think you had mentioned to me when we were talking before the interview - the young man who ended up killing himself.

Yes. Yes. Tragic.

That's so sad.

Yeah. Yeah.

I talked to the psychotherapist that I had mentioned earlier about some of these cases and people relying on these tools. And she pointed out that for some people, having a tool and being able to interact that way is a safer thing, and it gives them a means of interaction and support that they can't easily get in other ways.

But they're also obviously extremely dangerous, especially if the developers don't take care to make sure that the tools are safe for people to use, rather than just developing them and throwing them out there to try to build market share.

Yeah.

So these things are all tools that are useful and helpful, but an over-reliance on them often creates the opposite effect.

Hey, you mentioned the fires. I know that the LA area is still really suffering from those fires. [Readers: see endnote 2 for more on the 2025 Southern California wildfires.]

I think anyone who lives here right now is personally affected, whether it's through smoke or ash in the air, or through loved ones being displaced, or even just neighborhoods that you once knew and loved being decimated. It's quite, quite extraordinary what has happened here. And, luckily, I happen to be in a safe area, and we were able to house some evacuees. And we are recovering, and the process has begun. And there's more rain in the forecast for this weekend.

Yeah. We saw pictures on TV out here of some people basically dancing in the rain and celebrating the fact that the rains came [interview was recorded on Jan. 30]. But I know there's still quite a long way to go. So if you know of any local charities that are helping, we can put those links into the interview as well, so people can support those who are dealing with all the consequences of these fires.

I absolutely do. [Readers: see endnote 3 for the relief organization links Blair provided.]

And for those who are listening to this in a place where they'll forget to go back: the Red Cross has been very helpful to people I know right now. So that's always a good fallback, if you don't come back to check the other organizations that I'll give you to post.

Awesome. Great.

Blair, thank you so much for joining me for this interview. Is there anything else that you'd like to share with our audience today?

I would love to share, for anybody who's inspired, the HI Stack, as I call it, which helps people strengthen their human intelligence in the face of our technological advancements. So, if you'd like to receive tips, tools, and insights on how to do that, please join me over at the Stack.

And I want to say one more thing, which is that I've started monthly conversations, where people get to talk about how they're being impacted by technology, by politics, by climate, and together make choices about what's right for each person in how they want to lead moving through this time.

Yeah, that sounds great. And I think you had mentioned that you were writing a book as well?

I am. Yep. It's coming out in a year. It's called “This Incredible Longing”, and it's a memoir of living in an ashram in my twenties.

Well, that sounds awesome!

Yeah. Yeah.

So what's the best way for people to stay informed about progress on your memoir and when it comes out?

You can subscribe to the HI Stack, or you can go to my website, which is blairglaser.com. And I'm sure you'll put it in the notes so that people get the spelling right.

Absolutely, yes, yeah. Well, thank you so much for joining me today and for sharing your thoughts on AI and data!

Thanks, Karen.

Interview References and Links

Blair Glaser on LinkedIn

Blair Glaser’s website

Blair Glaser on Substack



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Audio Sound Effect from Pixabay

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊


1. Karen’s preferred tool for measuring readability of her writing: Readability Formulas. More information on Karen’s June 2024 evaluation of readability scoring tools:

2. “2025 Southern California Wildfires” and “Story Map” on the EPA’s ongoing response, US Environmental Protection Agency (EPA)

3. Relief organizations recommended by Blair for helping people affected by the Los Angeles wildfires, in addition to the American Red Cross:
