AISW #027: Dr. Julie Rennecker, US-based founder and 'Chief Catalyst' 🗣️ (AI, Software, & Wetware interview)

An interview with US-based Syzygy Teams founder and 'Chief Catalyst' Dr. Julie Rennecker on her stories of using AI and how she feels about how AI is using people's data and content (audio; 51:53)

Introduction - Dr. Julie Rennecker

This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available in text and as an audio recording (embedded here in the post, and later in our 6P external podcasts).

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Dr. Julie Rennecker, provided by Julie and used with her permission.

Interview - Dr. Julie Rennecker

I’m delighted to welcome Dr. Julie Rennecker [from the USA] as our next guest for “AI, Software, and Wetware”. Julie, thank you so much for joining me! Please tell us about yourself, who you are, and what you do.

Okay. Great. Well, thank you for having me. I'm happy to have this opportunity. I am the owner and Chief Catalyst of Syzygy Teams, which is a leadership and team development consulting company focused on small business leaders and funded startup founders who all need to figure out how to do more with less, but how to have more fun doing that rather than burning out.

Basically, I use a lot of strengths-based approaches to help them leverage their existing capabilities and amplify those. Working mostly right now in the healthcare innovation space with medical device companies, sometimes internal healthcare innovation groups, but also some of the small businesses that also support startup companies.

Very good. Chief Catalyst - I love that title.

🙂 Yeah, I don't remember how it actually came about! But I think I was in another startup at the time. And it seemed that that was often my role, to get things going. And so I thought, every group needs a spark.

Very cool. So I'm curious. Do you have any clients who are asking questions about how to do more with less with AI-based tools? And if so, how do you handle those questions?

I don't know if it's something that's obvious from my face, but I think they know not to ask me technical questions.🤣 I couldn't code my way out of a bag, as someone used to say.

No one's asking me specifically for guidance about how to use AI for that. But AI is coming up a lot in conversations, mostly around just people feeling uncertain, right? It's sort of in the ether, so people will say, “Well, and with AI …”, and “Who knows what's going to happen with AI …”.

Some colleagues have started using it more, and so that's brought it more to my attention, right? How they're using it in their day-to-day work. But clients aren't specifically asking me for guidance around that.

I think it may become more a part of my advising to them, right? That will probably become a more standard part of how I engage with them: asking them if they're using it and how, and starting them on the road if they're not already.

Tell me a little bit about your level of experience then with AI and machine learning.

I'm a bit of a generative AI enthusiast, like, I would encourage others, right? And I recently upgraded to the ChatGPT paid version, which gets me a few more features.

And I took a course about a year ago. I think it was something offered on LinkedIn, how to write prompts. And just before that, I had seen something that said, “You're going to have to know how to write prompts”, and then this class came up. 🤣

So I took the class and learned some basic things. And that got me to establish an account. But it wasn't a habit for me. I didn't think about it until a professional group that I'm a part of met. They had a special session on using AI. And some of my colleagues started talking about different ways that they were using it.

And so now I keep open tabs, even if I'm not logged in. I keep it up there so that when I'm in the middle of work and I get stumped, I need a little bit of research, right, or I can't remember the name of a concept, or something like that, then I'll go there. I've just stuck my toe in, I think, enough to know that it can be powerful, but I'm also a little bit cautious.

You mentioned that some of your colleagues have been using it for work-related purposes. Can you give some examples you can share?

Sure. Let's see. One person said she uses it for almost everything. So she will use it to generate 10 social media post titles or topics, right, around whatever her main thing is. And she said she doesn't just use the text directly, but she finds it helpful instead of sitting there, trying to think, “What should I write about today”, right? That it gets her brain going. And maybe, out of the 10, she'll end up using 5 or 6 of them, and then she'll write her own post. But just having the title or something like that is helpful.

Another colleague who's more advanced than the rest of us, she wrote a children's book. And it's not her main line of business, but she just happened to write a children's book. And she used, I think it was Midjourney, to do the graphics. And she didn't tell me that at first. I just saw a preview of the book and I said, “These graphics are fabulous!” Because it was a real conceptual kind of book, and yet somehow accessible to kids. And she had done it herself using Midjourney. 

And someone else, she used it a lot for research. And I'm finding that I actually like to write. And most of the time, I enjoy the writing process. 🤣 Sometimes it's frustrating. But I found it really helpful for research.

So I'm currently working on a book. There was a concept, and I knew it and I could describe it, but for the life of me, I couldn't remember the name of it. So I went to ChatGPT and asked it to source peer-reviewed journal articles, or trade publications with good journalistic practices, blah blah blah blah. And it brought back exactly what I needed, as well as some references.

So, the consulting world, I think - there are some people in many professions who might use it to take shortcuts. I think they will regret that if they don't double-check things. But there's a lot of experimentation going on.

Yeah. So on the professional side, I think there are a lot of different opportunities, and I hear from a lot of people using it in very different ways, or even deciding not to use it, which we'll talk more about later.

The other side of it is in our personal lives. If you think about an iceberg, the whole generative AI thing is the part of the iceberg that's above the water that people can see and point at. 

Right.

Or the AGI, the part where it's going to be the robots that take over the world and run everything.

But there's all this under the waterline of AI and machine learning that just pervades our day-to-day life, and we don't always realize it.1

Right.

Do you have any examples of using AI and machine learning tools, for instance, without even necessarily realizing it?

Well, I think once I became aware of what it was, I had technical friends who I knew wrote algorithms for big companies, right? And I'd hear about AI that way. And I knew about it in the medical innovation field. But I really wasn't thinking about it until I started playing with it more myself. I started noticing and realized all of us who've been doing online shopping have been using it for years, whether we think we have or not. 

Amazon, of course, is probably the most ubiquitous, familiar example - that once you've chosen a book or a product, how they serve up 5 other things that you might be interested in, or people who bought what you bought also bought these other products. And I don't think I had appreciated that. I thought of it kind of like a Google search, which probably has more AI in it! 🤣 And of course, social media, what we get served up.

I think that's what I've come to appreciate more is that, you're right, it's below the waterline, invisible to us. We interact with what is presented to us, whether it's on Google or shopping or in social media - not appreciating how what we see is being shaped by these algorithms, by forces beyond our control, or beyond our conscious choosing.

With Netflix, right, another example, right? We've inadvertently chosen it by the movies that we've chosen - that affects what we see. But I don't remember there being a step in there where I told Netflix, please show me these things.

Right. Yeah, I actually wrote a quick post back in mid-September about 8 different areas of life. It's not that there's only 8, it was just -

Yeah, right!

the first things that came to my mind about different areas where AI and machine learning are under the hood.2

Speaking of under the hood: in our cars and the amount of data that our cars capture from us and what they're doing now with computers and cars, it's things that we don't necessarily think about, but it's definitely affecting all of us.

Yeah, I read that post. That was an eye-opening post for me. Thank you.

I'm probably going to do another, with another 8 🤣, because there's still so many more. I just dashed off the first 8 that came to my mind, just looking at my own life and what I was doing. Like, well, I have my phone, and we have Netflix, and everything else.

Yes! Does it scare you, how much is under the hood?

I guess I'm cautious. So I've been involved with AI and machine learning for over 10 years. And this is actually one of the reasons that I want to interview people who are NOT software developers. Because I know that I come into this with a certain frame of reference based on, I know what's going on with the data. I mean, on the one hand, I know some of the traps. I know some of the things that can happen with data behind the scenes. I know the ways that machine learning models can be biased.

Right.

But at the same time, people who don't come in with that same perspective are going to see different things in the tools or in the way it affects them that I won't necessarily see. So I think it's important to get a large range of voices and people from different backgrounds and different experiences, because your experiences are very different than mine.

I know how to build a machine learning model. But you shouldn't have to know how to build one to use an AI-based tool safely, any more than you should have to be an auto mechanic to drive a car safely.

Right. Thank you. That makes me feel a little less puny. 🤣

Well, everyone knows something that somebody else doesn't, right? 🙂

I want to go back to when we were talking about using ChatGPT to do some research, I think, on your book. Can you talk a little bit more about that?

The concept was “threat rigidity”3, this idea that when we're going through change, we feel challenged, right? We tend to regress to our routines. And I could remember that, regressing to the routine; I just couldn't remember the name. And so I went to ChatGPT and just basically wrote that out, like, as if I were talking to it. “Here’s what it is, but I can't remember the name. Could you please find the name?”, and gave it the instructions around only accessing peer-reviewed journals and trade publications with high journalistic standards. And it brought up 2 lovely paragraphs. And apparently that's a term that is used in both the psychological literature and the cognitive science literature, so they had slightly different names for it.

And it was in seconds, right? And it would have taken me - that particular thing digging around, I've been out of academe long enough that I had forgotten the specific researcher, so I wouldn't even have known the names to go to to jog my memory. Maybe I could have had at least the name in 30 minutes. ChatGPT, of course, took a minute, maybe 90 seconds at the most 🤣, but probably a minute. 

And the additional information that was available was also just so much more helpful, right, without reading 10 research papers, to have all that distilled and some additional points. It validated that what I had been thinking was correct, but also gave me some additional content to weave into what I was writing.

So when you got this list of citations, were you able to validate that all of them were genuine? Because I've heard from other guests that they've gotten imaginary references from the tools - maybe earlier versions of the tools.

Maybe so. I didn't do all 10. But when the list came up, I did recognize the names of the researchers, and I checked 3 or 4 of them. They were valid, and I was able to download the papers. I've also started asking it to provide access to full text versions of the papers, so hopefully the actual papers. I hope the papers that I get back are real. 🤣 I don't know if it could manufacture papers that quickly.

That's funny! 🤣 Since you mentioned writing a book - a friend of mine was looking to market her book, and so she tried using ChatGPT to help her build up her marketing plan. And she was asking it for a list of podcasts she should go on to promote her book. But most of the podcasters it told her to go to didn't exist.4

Oh.

She got a little disillusioned with that tool pretty quickly. Again, that was maybe a while ago, but if it gives you a list of 10, and the first 8 aren't real, that obviously made her suspicious.

I would love to hear more about your book. Maybe you can talk about that a little bit toward the end?

Sure. Be careful asking, right? Because once I get started, yeah, it's dangerous 🙂

Okay, well, great :)

So these are some examples you've given about when you've used AI-based tools. Are there any cases where you AVOIDED using AI-based tools for some things? And do you have any examples of when, and why you chose not to use AI for that?

Yeah. I think when we had talked before, you mentioned Grammarly AI. And I had not been aware of Grammarly as an AI tool. But I had tried to use Grammarly before and didn't care for it. It's been at least a couple of years, maybe 3. So I don't know if that was the AI version, if Grammarly's always been AI and they just renamed it “AI” to make it more popular. 🤣 You never know when the marketing gets involved, right?

I just found it intrusive. It was like, as I was writing, it was too quick onto things, rather than just a subtle underline that I'd made a boo-boo, like you see in Google Docs or Word. My memory is that there were popups and recommendations or something happening too much, and it interrupted my train of thought, and so I didn't find that helpful. So that may just be a user experience issue that could be corrected, that maybe there's no problem with the tool itself.

And some things I ended up overriding. I do a lot of writing, and so sometimes I use incomplete sentences and things like that on purpose, right? So it was a little intrusive.

Oh, I know - the other thing is AI note-taking. Zoom has it, and another friend uses Fathom. And I think a virtual assistant that worked with me for a while used Otter AI. I'm sure there's a slew of them, right? I don't know all of them. I found that appealing. And when I would receive the summary of the meeting afterwards, I found that it was compelling. It was, wow! 🤣 I can't believe that it summarized and had the action items for each of us, as well as a transcript of the meeting. I found that very helpful.

I have not used it yet myself, because I coach a lot of people 1-on-1. They might be revealing proprietary information to me about their company, or personal information about themselves. And I have just been cautious, right? I promised them absolute confidentiality. And I've been cautious. Once the machine eats that information, where does it go? Could it be identifiable? That kind of thing.

So I have not used it myself in conversations with collaborators. I guess there could be some confidential business information in there. But as a general rule, we're discussing the color of something for a slide, right 🤣, or something completely inconsequential. In those cases, or working with a virtual assistant, I don't mind if they use it to make sure they get good notes. But I have been cautious.

Yeah. And I think that's pretty wise, to be aware of the fact that confidentiality is one of the biggest risks of using AI tools. Because even if a company says they don't SELL the data, once they've USED the data for training an AI model, it could potentially leak out.

You mentioned Grammarly earlier. I had done an evaluation of Grammarly a few months ago.5 And it had some nice features. But I found its recommendations were more annoying than helpful a lot of the time.

Right. Right. Same.

I also found some ethical concerns about the way that they trained their models and used our content for AI, especially for free accounts. If you had an enterprise account, they would say, “Well, we only use that data for training within that enterprise.” So that at least provided some protection.

But if someone had a free account, like me, your information would get mixed in with anybody's. So I've only used it on things that I know I'm going to publish publicly within a matter of a day or two anyway. 

Oh.

Well, if it's going to be out there in the public anyway, then I can't object too much to it being used for that. But it made me not choose Grammarly as my primary tool for analyzing readability in my writing. That was one of the main deciding factors.

You are wise, I think, to be very cautious about confidentiality, given the leaks we see. Even if companies intend to protect it, there are so many issues with data security and leakage that, if I were one of your clients, I would be relieved.

🤣 Good. Good. Yeah. So I'm - I scribble away. I'm kind of a compulsive writer anyway. And so probably even if I had the tool going, I would still be writing something. I was trained as an ethnographer, right, and so you just always try to get down as much information as you can.

You read how they would be using the data. I think that's key. So I used to teach the human aspects of cybersecurity. I'm not a cybersecurity expert, but I've been in a cyber risk management startup. And routinely, right, the humans, the human aspect is the challenge.

And all these apps that we use, the privacy statement, we don't read it. I don't either. That would be one of my exercises: the students had to choose an app and read all the confidentiality terms, and be able to map how their data was processed, where it would live, who could access it, that kind of thing. So good on you for reading it, because I know most of us don't.

I read a study not too long ago that said 91% of people don't ever read the terms and conditions. And I think for people 18 to 24, it's 97%.6

🤣 Right. Right.

And mostly I read them because, like I said, this is my area of professional work, and I was specifically trying to evaluate these tools for ethics and features. Of course, I had to read it to try to understand that. But it's very difficult, though, to really understand when they just say, "We're going to use your data to help improve the product". What does that mean? Does that mean they're going to use AI? Or are they going to make it available to OpenAI, and then it's gone? We don't know.

Or to their vendors, their additional vendors or subcontractors, or whatever. You just don't know where it goes.

Exactly. ‘Affiliates’, I think, was a term LinkedIn used when they were talking about sharing our data.

[18:08] This brings me to the next question, which is: I'd like to hear how you feel about companies that use our data and content for training their AI & machine learning system and tools.

There's a lot of talk about whether or not ethical tool companies should be required to get what they call the 3C's: get CONSENT, give CREDIT, and COMPENSATE people whose data they want to use for training. And by the way, the 3C's comes from a group called “Cultural Intellectual Property Rights Initiative®” (CIPRI). And I can share a link for that.7 They've got some good information about it.

Basically, there's a lot of discussion around whether or not AI tool companies should be required to address the 3C's.

I definitely think so. I had not heard of the 3C's, but I think that's brilliant. Certainly, at a minimum, consent, and then credit and compensation. I certainly have sympathy for artists, authors, whose works have been appropriated. That's their livelihood, right? And to have those works be digested, repurposed without any kind of credit or compensation. And so I think the 3C's are brilliant. Definitely thinking it should be required.

Somebody had a much better quote. I'm not going to remember it, but I had been thinking along the same lines, that these companies will, and already are probably, making money hand over fist. Or they'll do well, they'll get acquired, you know, whatever, they'll IPO.

And if they are using data that they have acquired without consent and without compensation, just the economic model - if you don't have to pay for labor, you don't have to pay for supplies, you don't have to pay for your inputs - it's a lot easier to be profitable, right, on the back end. Certainly exploitive.

And there are times, like in healthcare - I think about how hospitals are thinking about using patient data for clinical decision support systems, and some are using it in research. Some of these models go through reams of cancer data, or rare diseases where there are 3 people here, and 5 people in Zimbabwe, and 9 in Norway, and it’s spread out. And if that data can be pooled, and they can learn from it and gain insights about what treatments work and don't, that can be really beneficial.

And so I can imagine that some of my colleagues might be hesitant about sharing their information. Or myself, if I were the person that might be hesitant about sharing my information for purely their economic gain, versus it going to some greater good, right? I differentiate between those two use cases.

So you're saying you do feel that people should have the right, the opportunity, to opt in to the use of their data. To your point about how it's so easy to make money if you don't have to pay for labor: one of my recent interview guests, Dr. Mary Marcel, was talking about this - and she teaches business ethics, by the way. And she characterized the use of stolen content for AI as "privatizing the gains and socializing the costs", which I thought was a great way to sum it up.8

Yeah. I do too. That's brilliant. It's very clear that that's what's happening. And it's interesting. We see headlines about ethics. "Someone should be doing something about ethics." But it's not clear to me that, within these companies, there's someone being really conscious and intentional about ethics and paying attention.

Yeah. That's actually one of the things that I noticed earlier this year, when I first started really getting into writing full-time and looking into ethics for AI. And some of the companies have what they call “Responsible AI” teams.

One of the big news stories over the summer was that OpenAI's responsible AI team has pretty much disintegrated. The people that were leading it left.9 And there were some stories about Microsoft laying off and reorganizing their team that was responsible for this. So companies like to talk about it.

But whether those people are (1) hired, (2) resourced, and (3) able to have an effect on the actual business? There's a lot of questions around that. And so it's good that ethics is being talked about, because this is super important.

One of my other friends, Ravit Dotan, has a new ethics game that she just introduced. I need to point you to that.10 It's really interesting. But one of the questions was with a startup. And you talked about working with funded startup founders.

Right.

So one of the questions is, “Well, how much attention are you paying to ethics right now?” "Well, no. We're thinking about it, but, you know, we're doing other stuff. We have priorities." If you don't start with a groundwork of solid ethics in the first place, you're NOT off to a good start, really.

As an organizational behaviorist, I know that it's much easier to - they actually call it “imprinting”. So whatever happens at the beginning, with the founder or the founding team, you can see evidence of that in the company 20, 40 years later, even though the company goes through lots and lots of changes. Some of those early things are there.

So if you want the company to be ethical, that's got to be part of the bedrock, really. Because if it's not, it's hard to “get the horse back in the barn”. 😉

Right? That's one of those things that, once it becomes a part of the operating principle that, “Eh, we're kind of on the edge of being ethical” or ”Ethics aren't all that important here”, it's really hard to reel that back in. It's not impossible, but it's hard, and there's going to be a lot of damage along the way.

So you were talking about working with healthcare innovation and you mentioned medical data. I want to come back to that, because that feels like an important topic. 

Because it's not just the person's medical data, but in some cases, there's genetic data. My genetic data isn't just mine. It belongs to my family members who, maybe even if I consent to something, THEY haven't consented. So it seems like a really complicated area, and I'd love to hear your thoughts on that.

I think that's very thoughtful of you to recognize that. That it's not just your data, but it is these other people's. I'm not sure I would have been thinking that. I would have been thinking much more selfishly.

An academic colleague has done research in this area and been critical of people not wanting to give their genetic information, because she sees all the potential benefits, right, in medicine. She is from Finland, a relatively small country, where there's an initiative like that, and she's gone around the globe to these different countries that are trying to do collective measures like that - to get as big a pool of genetic data as they can.

But she also comes out of the technology world, where there's a positive bias toward the technology, right? You see the good that can come, and you're really reluctant to acknowledge the dangers, or to get around to that.

Actually, I did get some genetic testing, and it didn't cross my mind at the time to ask them where that was stored. It's been a few years. I haven't thought about that.

Again, I think if it were to be used - you mentioned, like, maybe with enterprise accounts, where they say it will only be used within that enterprise. If I could contribute it, knowing that there were some sort of boundaries on it, for it to be used for a particular purpose, I might not have a problem with that. And I would assume that the benefits could extend to my family, as well.

But just to have it generally available and accessible to law enforcement, politicians, whatever, I can't see the service of that, and would be really hesitant. I might even lobby against that. I might even write a letter to my congressman 🙂 if I thought that kind of thing were at risk.

Something that people often don't realize about medical data is that they're not always - often not at all - interested in your diagnosis. It's all the other information that's in a medical record. It's one of the most complete identity profiles that exists outside a home mortgage application. It has birth date, and Social Security numbers, and names and contact information for family members, and all those kinds of things.

So, yeah, there's a lot of information in the medical record that deserves being extremely careful with. I'm not sure the care is always taken to the level that it should be.

Yeah. I remember when HIPAA first came into effect and thinking, "Oh, this is good - you know, we're safe now". And then realizing, well, it's better than it was, but it's still far from perfect. So you probably have more insights on that than I do.

HIPAA made us aware that it needed to be protected. But most organizations weren't aware of all the regulations. They sort of allocated it to the IT department and assumed IT took care of that. Or if they had a HIPAA-compliant electronic health record, that that took care of everything, and didn't understand all the behavioral components associated with HIPAA for the staff just on a day-to-day basis. And often, they just trusted that the EHR, the IT, was taking care of things.

And when I was working in that space a little more, going into organizations and doing audits, there was often information just freely available or cached on systems that somebody with a little bit more technical expertise than I could have easily stolen. And often, the administrators, it wasn't ill-intended - it was just ignorance.

And so, yeah, the HIPAA law at least put some teeth into that. But there's still a lot of vagueness in there. "Necessary and sufficient" is a term that is used - put that in air quotes - saying, you know, you’ve put the "necessary and sufficient" measures in place. And they do that because a single doctor's office is a lot different than a multi-site hospital system, for instance - in the resources that are available and the protections that might be needed. But it still leaves a lot of wiggle room for organizations, right, to play with this.

And so it's interesting. I wonder how HIPAA is being applied to AI, or to people working with AI. I think Congress is working on this some. The law came from them, and the Department of HHS does some advising around this. And they're trying to get ahead of it, but those rule-making processes are very slow. So they're almost always behind the actual capabilities of the technology. So things have often already happened, right, before the law is in place.

And that might be an argument for - this will sound very socialistic, very anti-capitalistic, and I don't really have it worked out. But I'm thinking: products like this that could have such far-reaching implications, why isn't there some sort of national review board, for lack of a better term, that you have to submit to? The same way medical devices have to submit to the FDA. You have to prove safety and efficacy. They continue to collect data for years afterward and can pull the product at any time.

So for something like this, with such far-reaching effects and implications, why shouldn't they have to submit it to some sort of technical review board that pushes back, and asks some of these ethical questions, and tries to put some protections in place?

You talked earlier about not reading the terms and conditions and consenting. I've been in doctor's offices, and they've given me a form for signed consent for treatment. And the question that always comes to my mind is, “Is this really informed consent, and do I really have a choice? Is there any medical office that I could even go to that wouldn't require me to sign something like this to get treated?” Because if not, then it's not really a choice.

Exactly.

The choice, I guess, is not to get treatment, but that's not really a viable choice.

So this whole idea of informed consent in a situation like that, it feels very much compelled. It doesn't feel like it's a true choice.

So this was one direction I was thinking about when you brought this up. And the other is the data and having a board. The example that comes up is 23andMe. Have you heard about this? That the company is now looking at being up for sale.

No.

And so ALL of that genetic data that they've captured over the years, which is probably more than just the ancestry bits?

Right.

It’s now going to be sold to somebody. And that is scary. And I didn't do that, and I don't think any of my immediate family members have. But that is scary.

And, again, it's who owns that data, right? It's your genetic information. They provided the service to you, and analyzing it and giving you results, but now they have the data. Do they own that?

The current standing is that they do. And what rights do they have to use the data and to distribute it, and to whom?

Yeah. The reason that one particularly sticks in my mind is that I was just reading something else about terms and conditions where they said, “These are the sorts of things that we won't share with others.” And then it said, “But in the event of a merger or acquisition, then the data would be transferred.” And that's a wide-open loophole that anybody could drive through. Once it gets sold, whatever protection you thought you had from the original terms and conditions, even if they were good, is potentially gone. So that's a little scary.

And that looks like an opportunity for new law, doesn't it? That, because you've given particular permissions, somehow those permissions should travel with the data. If I'm a company wanting to acquire another company, I get their data, but I get the data with the conditions.

Yeah. You've made some really good points about commercial manipulation and exploitation of our data. Most people that I know don't want that, even if they're willing to share their data for the good of humanity. They don't want it to be exploited.

The other question is: as someone who has used these AI-based tools, do you feel like the tool providers have been transparent about sharing where the data that they use for the models came from, and whether the original creators of the data consented to its use?

No, I think that's a NO in all capital letters. 🤣 Yeah, I can't say - again, I've only used ChatGPT. But then, the other things we talked about, with the way it's used in social media? No.

Yeah. ChatGPT is one where we've seen a lot of publicity about OpenAI and the fact that they were not transparent about where they got their data. And the CTO being interviewed, and asked if they scraped public data. ‘Publicly available’11 is not the same as ‘public domain’. They were very slippery about that. And now it's come out: "Oh, yes. Of course we scraped it, because otherwise we couldn't afford to do all this."12 Then maybe you shouldn't do it, you know?

My friend Mary used this phrase that "behind every great fortune is a great crime".

Ooh, I love that. 

The crime being the stealing of all of the data that was used to feed it.

Right.

And referring to the size of the company and the profits that they're making. If someone takes all the data, pays the creators nothing, and then makes billions of dollars from it, something's wrong.

Yeah. Agreed. Very much agreed. It's the wild, wild west out there, right? And I do write some, but it's not my livelihood enough to worry about, yet. But artists, musicians, and authors of all sorts, I would be very careful about where I was publishing and who I was giving access. And very angry, if I saw it exploited.

As members of the public, there are cases where our personal data or content may have been used or has already been used (to your point) by an AI-based tool or system. Do you know of any cases that you could share?

We've already talked about social media, right? And retail. And I know that they use it to affect what they show to me, so I assume that my profile is being used to affect what they show to other people that are similar. The other thing is the Echo and Alexa, those kinds of intelligent speakers.

Yeah. The home automation devices.

Right. Actually, even bigger than that, but those are the ones I think of, just because they interact. Even if you think you've turned them off, they can be in background listening mode. You have to be really knowledgeable about the settings, and really intentional to make sure that they've been turned off, right?

So they're just basically scraping information from our home life all the time, in order to be available so you can ask it to play this music, or give the weather report, or whatever. I don't have one in my home for that reason. But I know people that do, and they're like, “Oh, yeah, I use it all the time for this or that.” But that means it's in your home.

Yes. And it's always listening, because it has to be on to listen for the ‘wake word’. We were actually gifted an Alexa some years ago. And we plugged it in, and looked at the terms and conditions, and said, “Nope, we're not going to use this”, and we just unplugged it, and it's in a box somewhere, I think.

What you're describing, it listening all the time, this is exactly what one of my guests, Tracy Bannon, discovered was happening in her home. And in her interview13, she went through how she went about “de-Alexa'ing” her house afterwards, because of things that were said within its earshot showing up in other places.

And Quentin has stories also, not from Alexa, but just on saying things offline that are kind of offbeat, and then it shows up in an Amazon recommendation.14

Exactly. An ad or something, right?

Yes.

Actually, one friend told me - and she didn't have Alexa; I think it was just her phone and computer - but her husband was going to run errands. And she went to the door and said, “Hey, check on this”, right? The price of this, or these things at a certain store.

And she came back and the ad popped up. Like, within minutes, an ad popped up. And it was really creepy, right? Because she wasn't in a Zoom meeting. She didn't have Alexa. She didn't have anything like that that she would have thought would have made that happen. She hadn't been searching.

Yeah. Creepy is a good word for it. I actually wrote an article about that recently too.15 There was a disclosure about Cox Media Group and this feature they call “Active Listening”, which is that your cell phone basically is always listening to you using your microphone. I wrote the article about it because I wanted to say, “Okay, here's how you can prevent that”. And it basically means going through your apps and looking at which ones have microphone permissions at all. And do they really need it only when they're using the app, or do they REALLY need it all the time? And trying to scale that down. That is creepy. 

I can see now that I'm going to need to just block a day on my calendar and listen to all of your episodes so far and read all your articles and fix all my technology! Yeah. Scary. 

It is scary, yeah.

So we talked a bit about terms and conditions, and finding out about data being used for AI. In a lot of cases, we don't really get much of a chance to opt out. Or we're not aware of having chances, even if they're offered. And sometimes they're not.

Right.

Do you know of any situations like that? 

Situations where you can opt out? 

Right. For instance, one example that you mentioned was about a website and how it's more ubiquitous now for them to offer to let you reject the nonessential cookies.

Oh - right. So if they allow me to manage the cookies, I opt out of everything, except there's one category that is somehow required. And then something that I learned from a cyber guy, several years ago, was to then clean my history and clean cookies. He recommended every day. Of course, I never remember to do it every day, but probably at least once a week. But if there is a site that doesn't let me manage them, lots of times I'll go ahead and use it, but then I'll clear cookies right afterwards - my whole history, if it doesn't give me the option.

Yeah. So some browsers actually let you specify in your preferences for the browser to get rid of cookies periodically. That might save you some work - it might be worth checking out.

Thank you. Yeah. Just kind of on a schedule or something, huh?

Or just every time you exit the browser, which may be too often - depends on how you use it.

I'm never out of my browser! 🤣 I always have, like, 27 tabs open, right? They're taxing my computer!

Oh, you and I are tab people, it sounds like! 😏

Yeah.

So there was this recent flap about LinkedIn using our data and content. And the part that really felt creepy was possibly including our direct messages for training their AI models. It's them and their ‘affiliates’, which is the other question.

Right.

There were 2 opt-outs. People who were protected by GDPR16 didn't even have to bother with this, because they were automatically opted out. But the rest of us were opted IN by default. And there are 2 different steps that you have to take to opt yourself out of LinkedIn using your information for training. So I'll put the links into the article just so that people have that for reference, but that was a big flap.17

I feel like I've been living in a cave! 🤣 You know, I'm not searching on that kind of thing now, right? So it's - it doesn't come up in my information feeds in the same way. So I'm really so glad we're doing this. And, yes, I'll definitely check the notes and take those steps.

As you spoke, I realized it makes me angry that the GDPR people were automatically opted out. Why aren't all of us opted out and given an opportunity to opt in?

Because GDPR would make it illegal for LinkedIn to do this in the EU countries that are governed by GDPR. But we don't have GDPR or something like that - yet, I'll say YET.

Right. Their bias is not to choose the ethical road to begin with, right? So they're forced to by GDPR. And it seems like that would have provided a bit of a wake-up call to say, “Ah, perhaps we should provide this to all of our users.”

Right. They have a financial incentive to not do the right thing in cases like this. One of my other interview guests18, he was saying that, by doing it the way that they did it, the nanosecond that they turned that setting on (I think was his phrase), they immediately gave themselves cover for all the data that they've used up till now. And at most, when we turn it off, that says, “Okay, don't use anything further.” 

Okay. 

But by doing it that way, they basically forced all of us to - I won't even say give consent - they forced all of us to accept that they were taking and using everything prior to that nanosecond. And between when they turned the setting on and when people found out about it and said, "What are you doing?" - I think it was almost a week.

But that's why they did it - because they have a financial incentive for it, and the ethical incentives and motivations didn't override that.

Yeah. 

And we can have opinions about whether it should have. My opinion is it should, but everyone has different opinions.

Yeah. It's funny - when ethics and profit come into conflict, ethics often takes a beating, right, until there's a law in place.

We've talked a little bit about use of personal data and content, and having it be stolen. There are some consequences, such as privacy loss and phishing - some people have lost income from it. Has this ever happened to you, or do you have any examples that you know of?

No. There have been enough breaches over time that I have been automatically signed up, right, for a handful of security monitoring services. And one of them notified me that my Social Security number has been found out on the dark web, but I'm beyond that.

There have been so many breaches, right? Of banks, of healthcare organizations, of insurance companies. I don't even know where that might have come from. I haven't been a victim of a specific phishing attack myself, right? I haven't experienced ransomware or anything like that.

Yeah. And you're right. Data breaches are just so common nowadays. It's one thing for the data brokers that go out of their way to collect our data. But also there are some very big, reputable vendors and companies that we thought could be trusted to handle the data well. And it turns out they had vulnerabilities.

Right.

Partly as a result, I think, of all of this - the public distrust of AI and tech companies has been growing.19 And in a way, I feel like it's a good thing, because what it reflects is that awareness of what they're doing is growing, and we're saying, "Wait a minute. This is not okay."

What is the one thing that you think is the most important thing that AI companies would need to do to earn back and keep your trust? 

I think transparency is just so critical, right? So: informing me upfront, right, that there's intent to use my information; asking consent; and then back to your 3C's, which I had not heard of, but I wrote down here, right. So the "Consent, Credit, Compensation" - I think following that model, using that as a framework for how they engage, would give me increased confidence that they intend to operate ethically, and that I could then trust the results that I get.

I know that, like OpenAI, a lot of those things came out of the open source community. And a lot of really good quality work and programming happens through that community that benefits all of us in ways that we don't even know. At the same time - and again, I'm not a programmer, so I don't know how participants in open source communities are vetted - it seems to me that there would be an opportunity for nefarious characters to participate, right, in open source communities, because they’d have the technical skills. And if that's what they're being evaluated on or vetted on, it's like, “Yeah, they have the skills”. And no one maybe does a background check on them before allowing them access to the tools and the data that the community is using.

So I guess that's a little bit of skepticism that, even if the commercial companies were following the 3C's, if what they are drawing on and if they continue to engage with the open source community, is that another one of those sort of fuzzy boundary kinds of things, where you don't know where the information's going or who's using it or who has access?

Yeah. Open source is interesting because a lot of the big AI companies have claimed that their systems are open source, when really they aren't. Some don't even share source code. Some claim that they share data, but they're really only sharing the weights that they got from the data.20

And I’ve also seen concerns about research paper authors being pressured nowadays to share their data, so that other researchers can verify that their results are reproducible. Which in principle seems like a good thing, but if that data was collected under any considerations about confidentiality, that's obviously problematic.

One thing that comes up with OpenAI and those types of tools: a lot of people are advocating that they have to be more transparent about sharing their data. But, on the other hand, keeping the data private reduces the risks of that data being exploited, even if it was stolen data. And so some people feel that we shouldn't press them to disclose their data because of this. It's a bit of a conundrum.

I was going to ask you, as a technical person, knowing more about what safeguards are and are not available - because, as you say, some say they're open and then they actually are not: do you have a bias or a preference on whether or not they should be open?

I guess my sense is - you were referring earlier to horses. The phrase that I grew up with was "closing the barn door after the horse is out". Once that data has escaped and gone out, there's no clawing it back. The privacy is - once it's gone, it's gone, and you really can't undo it. Or as some people have said, “you can't take the sugar out of the cake after it's baked”.

Right. Oh, that's a good one. 

And so then the companies who refuse to share data - that's disingenuous, because the data, especially if they initially acquired it through open source or whatever, they're just trying to exploit.

I mean, I definitely feel like they should be sharing the SOURCES of their data. And that's the transparency and the credit aspects, and making sure that they have the actual consent for using that source of data.

Disclosing the actual data itself: in principle, it sounds like the right thing to do. But putting it out there for just anybody to also exploit doesn't seem right either. So there's probably some middle ground.

I've heard people talk about having secure repositories and having the data there. But I think the sourcing and the - what's called the ‘data provenance’21, knowing where it came from and being able to trace where it came from and where it went -

Right.

is probably more important. That's a significant technical challenge, but it's not impossible. It's hard, but AI is already hard. This is not hard-ER.

Yeah. I like that idea.

So this is my thought.

So is there anything else that you'd like to share with our audience today? Maybe about your book?

My book is called “The Strengths Paradox”. I have written all of it, but as I mentioned, I have used ChatGPT to help me with the research, which has sped me up. The premise of the book is strengths-based approaches to - whether it's career choice and career development or management leadership - are really powerful and valuable. Lots of people have taken StrengthsFinder [now CliftonStrengths]. I use a different tool called StrengthScope. Patrick Lencioni's group came out with a thing called Work and Knowledge [“The 6 Types of Working Genius”], right?

So there are lots of different tools out there, and the benefits come from applying that way of thinking to task assignments and how you compose teams. There's a lot to be gained, and it's brilliant, and I teach people how to do that. At the same time - and particularly with young people who maybe don't have a sense of their strengths - it helps them find their way.

As people become more seasoned, like I am, our bigger challenge is that our strengths can become our liabilities. And so the strengths can go into overdrive in a variety of ways. I talk about “too much of a good thing”, and if we were doing this visually, I would show you a picture of 10 different kinds of women and they're all dressed up as Wonder Woman.

Or you think of an analogy with an orchestra, where everyone is playing their instrument as loud as they can, as fast as they can, all at the same time, right? We end up with noise, not music.

And I see that happening in organizations all the time. And it's because whatever we're strong in, that's kind of our default, right, if we're presented with a challenge.

This book is about both the benefits of the strengths-based approach and these potential liabilities and how to rein those in, and also about how any complex project today needs people with very different strengths to come together. But the complementary strengths that we need to be effective as a team also set us up for conflict, right, just because we think differently.

Once people get to be mid-career, they've made career choices that have steered them away from things that are their areas of weakness, right? They've steered them into career fields, for the most part, that take advantage of their strengths. So the danger at that point for professionals from mid-career on is this risk of relying too much on our strengths or using them in inappropriate ways. So it's work that I do with my clients, and I'm hoping that people find it interesting and helpful.

So when will the book be out?

It's forthcoming, and I keep thinking it will be done at the end of the month, and then the next month. It's about 80% done at this point. I've got a couple more chapters. Probably by end of year would be the eBook version, with on-demand print for the print version as well, in English.

And then, you know, I always think it's going to take a month, but it'll probably take a quarter, right, to get the audiobook version released.

And then I would also go into Spanish. So Spanish eBook, Spanish print, and a Spanish audiobook. So that would be over the course of 2025.

Very nice!

Yeah. Thank you.

I know some folks who speak Spanish that would probably really welcome that book. So maybe someone in my network can help you connect with a translator.

That would be lovely. So I've got the designer for the cover. The editor's lined up, bugging me every once in a while, ready for me to send her pages. And finding a translator would be the next step.

Looking forward to it - good luck with the book launch, Julie! And thank you again for being my guest on this interview series about how real people are and aren't using AI. I really had fun with this conversation, so thank you.

Really, thank you for the opportunity. And I've learned so many things myself. I just appreciate this opportunity to talk with someone knowledgeable about what's going on. And AI is going to just continue to shape our lives. And so I think what you're doing to help raise awareness and conversations between people who might not otherwise talk about it, or have the opportunity to have their voices heard, is really valuable. So thank you. It was a lot of fun, and I learned a lot myself. Thanks for having me.

Interview References and Links

Dr. Julie Rennecker on LinkedIn

Syzygy Teams



About this interview series and newsletter

This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being a featured interview guest (anonymous or with credit), please get in touch!

6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (it’s free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)


Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool; one-time tips are deeply appreciated; and shares, hearts, comments, and restacks are awesome 😊



Series Credits and References

Audio Sound Effect from Pixabay


6. Study on percentage of people reading terms and conditions is in this article:

7. Credit for the original 3Cs (consent, credit, and compensation) belongs to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

8. Interview and comments on "privatizing the gains and socializing the costs" of AI:

17. LinkedIn AI training opt-out instructions, from Anonymous4 interview, via Ravit Dotan:

  • Opt-out 1 (Settings | Data for Generative AI Improvement)

  • Opt-out 2 (Help | LinkedIn Data Processing Objection Form)

  • Info

21. Data Provenance, in the Data Glossary of the US HHS Network of the National Library of Medicine (NNLM)
