6 'P's in AI Pods (AI6P)

🗣️ AISW #075: Sue Cunningham, Australia-based founder and creator

Audio interview with Australia-based founder and creator Sue Cunningham on her stories of using AI and how she feels about AI using people's data and content (audio; 49:40)

Introduction - Sue Cunningham

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click here to read the whole post online.)

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Interview - Sue Cunningham

Karen: I’m delighted to welcome Sue Cunningham from Australia as my guest today on “AI, Software, and Wetware”. Sue, thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.

Sue: Good morning, Karen and everyone. My name's Sue Cunningham. I'm the founder of the Uncertainty Lab, and I'm based here in Melbourne, Australia. I've spent about the last 25 years working in strategy and transformation in the for-purpose sector. So I've worked across health, aged care, emergency services, international development. And earlier this year I decided to step out of that career and go all in on AI. I'm now really focused on helping leaders and organizations make sense of AI, and just navigate uncertainty, build confidence to be able to lead what's coming next.

Part of what I do is run programs like the AI Leadership Circle, which is a small peer-based learning experience where we focus not only on exploring AI's practical uses, but also the strategic and emotional impacts of AI. My work is essentially informed a lot by lived experience. I have led enterprise-wide transformation in organizations ranging from 50 million to 3 billion. But I've also led through crises, like bushfires and pandemics. So whether disruption is planned or unplanned, it's really important to be able to navigate uncertainty. And that's why I've reinvented my career and am now really focusing on AI, which of course is our next big disruption.

Karen: It certainly is. Yes. So tell us a little bit about how you got started with your Uncertainty Lab.

Sue: I think the real moment happened a couple of years ago actually. I was in the middle of doing a strategy analysis, a Porter's Five Forces Analysis, and I thought I'd throw the prompt into ChatGPT. And to be honest, I was really amazed at how quickly it did it and also the quality of it. So within five or 10 minutes, I had a really great sector analysis and that would normally have taken me days, possibly even a week, to get it to the form that I would've been happy to submit through to the board. That's when I stopped and went, "Wow, this ChatGPT, this AI, this is really powerful."

And so I started, I guess, on a learning journey at that point. Picked up a course at MIT Sloan on AI implications for business strategy. Started to explore safety and risks through some UK institutes. Really started to play with it a lot and learn with it.

And yeah, earlier this year I made the decision to step away from my executive career and focus full time on it. I think it's going to be fully transformative. Coming from someone who's had a career in transformation, where that word is used a lot and doesn't always mean truly transformative, I do believe AI is going to be truly transformative and one of the biggest changes for all of us. And I want to lean into that and hopefully be part of the movement to lead forward in an ethical and safe way.

Karen: Sounds great. So you mentioned taking a few courses and training sessions. I'd like to hear a little bit more about your level of experience with AI and machine learning and analytics and if you've used it professionally. It sounds like you have. And it sounds like you've studied the technology.

Sue: Yeah, so I guess I'm certainly not someone who works behind the scenes. I would describe myself as a practitioner, a strategist. I'm really interested in the impact that AI has on how we lead and how to lead ourselves and others through AI.

I use a whole range of different tools. ChatGPT is my tool of choice partly because I think it's been great, and partly because I've started to notice already a tool lock-in, in that it's got so much memory about me now, and I guess I've developed such familiarity with it that it is my core tool. And I confess, I also love the voice to text capability, so I'm constantly talking to it and capturing things in ChatGPT.

But I'll also use Claude, Fathom, Perplexity, Gamma, so really depending on what I'm trying to do. I'm trying to build out workflows for which I'm mainly using Relay and Zapier. And yeah, just trying to transition across from the basic level of using AI, from the perspective of, it's human initiated, it's done in one tool, to being able to build to that more competent level of being able to have AI actually initiate tasks and automate and work in the background across multiple tools. So I guess I'm still on that learning journey.

Karen: I have to ask, since it just came out right at the time we're recording this interview. The new ChatGPT version that came out, how did you feel about that switchover from GPT-4o to GPT-5?

Sue: Yeah, look, I'm still struggling with it. I know a lot of people have been. I am a little underwhelmed. I also feel that I haven't explored its capabilities fully. I've had a few teething issues with it. Within the first day or two I couldn't even attach files to it, and I thought, what's going on? It's got less functionality than previously. I haven't really had the chance to explore the ability to generate artifacts and do some of the things that I think it hasn't been able to do in the past.

As well as Claude. So I think I'm still on the learning journey. I was bemused to find out about its four different personality types. So I'm looking forward to playing further with that and testing that out. Yeah, jury's out. I suspect it's got extraordinary power. Have I quite worked out how to tap into that in ways that are more useful to me? Because as we all know, the tools can do things, but it's learning how to actually use them and apply them in ways that are meaningful. I haven't really leveraged ChatGPT in that way yet.

Karen: So you've got quite a toolbox there with ChatGPT and Claude and Perplexity and Gamma. How do you decide which tool to use when?

Sue: It's really just about trying to work out why you're using AI in the first place, and therefore which tool is going to be fit for purpose, if you like. Everyone would appreciate that Perplexity is stronger and better for research. I do love Gamma. I had my first website up through Gamma. I use it a lot for presentations. And Fathom, I do love as a note taker. I must admit that's my favorite tool for that. So I try and explore different ones. But I'm also a huge fan of encouraging people to curate their tech stack. So I think you really have to learn, experiment, and play. But then you have to make a conscious decision. Is it actually serving you? Is it actually providing value for what you're trying to achieve in your life? And set aside those that aren't actually helping you.

Karen: You made a good point about the potential for lock-in. As the tool gets to know you more, then you're more likely to stay there. I was talking with a friend not long ago who was looking at a way to make context portable from one tool to another. And I thought that was a good idea that would help us to be a little more platform-independent, being able to switch from one tool to another if something got jacked up in price, or broke when you needed it most, or something like that.

Sue: I must admit that really troubles me, which is probably why I referenced it. It bothers me. It's extraordinarily useful, the memory feature, the fact that it has that learned pattern about you. So it does at times mean, rather than move through to Claude or Gemini, I will use ChatGPT. So yeah, I don't have an answer for that.

And I think that lock-in is exactly what's going to happen. I think the business models around AI are changing significantly. And I'm expecting them to get rapidly more expensive quite quickly and we're all going to become so dependent and so used to them that, yeah, I think the business model wars are only just beginning.

Karen: Yeah, that's fair. I think you mentioned that you've built some GPTs? Could you talk a little bit about that?

Sue: Yes. In ChatGPT I've built some custom GPTs. My favorite one's really the prompt optimizer. I think that's the one that I've got to a state which I love and use all the time.

I also run the AI Leadership Circle, which is a conversational-based program, and I do recordings there. I've built a tool to help me actually revisit the conversation. So I'll load the transcripts into my AI Leadership Circle "insights distiller" (got to work on a catchier name). And it will help me go through the conversation and identify what went well in the conversation: positive moments, negative moments, opportunities to improve, key insights. So it really just helps me drill down into what happened. When you're facilitating conversations you're in the moment. So it's useful to have it as a reflective tool as well, and to deep dive into it after the session.

Karen: When you look at the insights that it gives you, how often do you feel that they're accurate? Is it pretty much on the nose, or there's some times where it just interprets something completely wrong?

Sue: Look, that's an interesting question actually. Because I actually think the Fathom tool is excellent, its ability to capture information. And if you're on the paid product, which has got advanced functionality, you can absolutely play with different forms of transcript and get some really good information from it. I think my custom GPT isn't fully optimized. It doesn't always identify the right speaker. So I might say "Summarize some of the comments I've made", just to revisit to see where I was taking the conversation. In the insights distilling custom GPT, it may not always pick up who's speaking. And sometimes there's some stunning quotes. Part of me wonders if that's just my prompting and coding in the GPT, and I need to perhaps finesse it. Yeah, still learning how to get those custom GPTs to do exactly what I want.

Karen: You had also mentioned that you've had some training courses. I'd like to hear a little bit more about those, and which one you think was the most beneficial for you, for the type of work that you're doing now.

Sue: I think the first material course I did was that MIT Sloan course on AI Implications for Business Strategy. Obviously I was attracted to that because of my strategy background, but also because a lot of people, particularly in the early days, were talking about AI strategy. And I'm a huge fan of the view that it's a business strategy. It's about what you're trying to do, and then AI may or may not play into that. I think that was perhaps a reasonable foundations course. It introduced me to NLP. It introduced me to large language models. It helped at the beginning in terms of setting the scene and providing some basic context and how that might apply to my work environment.

But I have to say, the course that I found most interesting was actually the BlueDot Impact course on AI safety, from a not-for-profit institute in the UK. And that was really great. That was a five-day intensive course that involved not only significant readings, but also a curated discussion every day based on the readings, and live exercises.

So that was really useful because it helped me understand all of the risks and ethics and implications of AI and, if you like, the existential risk. Now, that was actually somewhat alarming in many respects, because it came earlier on in my AI journey. It was like, oh my goodness, I was so excited about the power, and then look what it's potentially able to do. But I think it was really helpful that it did give me a little bit of a deep dive into what's under the hood. I certainly don't understand a lot of the technical detail, but with concepts like alignment and gradient descent and transformers and neural networks, I started to just get a bit of an appreciation of some of the things that sit behind AI and some of the potential risks with it.

So it opened my eyes. And I made a conscious decision at that time that I don't want to work in AI safety. Although I may revisit that, to be honest. But it's good when you're starting in this new space, and it is so transformative, to actually understand the potential good of it and the potential evil, if you like.

Karen: Yeah. One analogy that I've been toying with has been this idea of AI as a chainsaw. It's extremely powerful — you can do things with it that you couldn't possibly do with hand tools — but also very dangerous. And even if you're properly trained, and you keep your equipment sharp, and do everything else, you could still have accidents that can hurt people. And not just necessarily the person using it, but bystanders. And I feel like, yeah that definitely can happen in AI. Over here I tend to hear more about ethical AI or responsible AI, but AI safety is a good way to look at it, I think.

Sue: Yeah, I've noticed that we still don't have a common language to talk about AI. I signed up to do a bootcamp on AI agents, and actually they were teaching me how to do what I would call workflows in Relay. So the naming is yet to settle across different sectors and different continents.

And the same with safety, and the same with risk and ethics: we haven't quite landed, I don't think, on a common language. But yeah, no, this was a course about AI safety. And I think it's an emerging discipline and is very much different from risk and ethics. But they're all interrelated, and we'll get there in terms of the language and the commonality. I know the term AI was first coined in 1956, I think. I know it's been around for many years. I actually did electrical and electronic engineering as my first university course. And I did neural networks as a subject in the '80s. So I know it's been around for a long while. But I think, yeah, where we're going to land with it is going to be very interesting.

Karen: Yeah. And I think it's really interesting for those of us that were around in the earlier days and heard about AI and said, "Yeah, this is cool, but how practical is it really for solving real problems?" and ignored it for a while. And then it had a resurgence in the '90s. And then it kind of went away again. And now it's coming back, and this time it seems like it's here to stay. Third time's the charm.

Sue: Yeah, no, I think and that's the thing, isn't it? It is here to stay. And like it or hate it or anywhere in the middle, you've got to get your head around what it is, what it means for you individually, professionally, societally, and actually make a decision about what role are you going to play? What posture are you going to take about AI?

Karen: Yeah. So my next question, I think you've already somewhat answered this, but we'll see if you want to add anything to it. Can you share a specific story on how you've used a tool that included AI and ML features? I'd like to hear your thoughts about how it worked for you. What went well, what didn't go so well?

You talked a little bit earlier about Fathom. Anything else that you want to add on that?

Sue: Probably the other point, it's not necessarily a tool per se, and I'll go back to ChatGPT in the first instance. But I love the conversational element to AI. I would use ChatGPT not just from a voice to text perspective, but also conversationally. So I set up 30 minutes at the start of every week on a Monday, and I have a strategic conversation with ChatGPT at the start of the week.

And I'm really vexed by this. I love having a conversation because I'm a solopreneur. It's at times lonely. So it's nice to be interacting with tools. And it can be useful because I am someone who verbalizes and thinks out loud. So I get to say things and test ideas and get challenged back. I do love that element of it. The challenge for me is also that I start to notice that you feel like you're forming a relationship. You can start to think of them as a team buddy. I've started to select pronouns for them. I know someone who's named theirs. I do love the conversational elements and I think that's really useful. It helps me think out loud. It helps me feel a bit more connected, but then I get into this interesting space of what on earth is my relationship with AI? It's actually a piece of technology. It's not a person, but yet I am forming a relationship with it. I am interacting and working with it. And how to understand and define that remains a kind of unresolved but ongoing query for me.

Karen: Yeah, and there's been such interesting conversation about the emotional relationships and whether the companies are doing that deliberately as a business model to try to get people basically locked in. And then we go through the whole standard process of enshittification later. I don't know if this came up in your AI safety course, but it is risky for people that aren't as aware. It's easy to slip away from remembering that it is a computer and it's not a human being.

Sue: Yeah, that's actually one of the key points I make in my leadership circles too. It really bothers me because we've got this quite dramatic uptake of AI, and I'm trying to be part of that. I'm trying to help people get access to AI and move from feeling overwhelmed or scared about it to using it. So I do want people to engage with it. But at the same time, people need to be aware of the risks with it. And how many people have learned or know about hallucinations and bias and some of those issues? So as people are forming relationships with it, you've got to wonder: Where is the training? Where is the education? Where is the awareness of what is actually happening? And how do you look after yourself?

Yeah, I do think it is really challenging. And yes, we did actually do a case study in the safety course about that, and Replika in particular was the study I was looking at in terms of, the impact that has on people. And that was the first time I got exposed to people becoming so emotionally dependent on AI that they were suicidal. Replika did an update and people lost their companions. And for some people it was devastating.

But I come from the for-purpose sector and I work in sectors like aged care, where it provides such enormous potential benefit. People are lonely, and there's an opportunity there. So you can see devastation, you can see danger, but you can see really positive applications as well. What an ethical quagmire to work through! How it should be used. How to educate people to be safer about it. The decisions the organizations themselves are making about, as you say, whether they're leveraging a business opportunity or actually caring about their level of ethical responsibility versus profit gain.

So yeah, fascinating area. I feel that area is going to be one of the most material topics that's really going to become more dominant in the next year or two: the harm that's happening there and how people navigate through that.

Karen: Yeah. This is all really good insight on professional use of it. And the leadership circles sound like a really effective activity, and really useful for your clients, so I'm happy to hear about that.

I'd like to hear some other stories you may have about other ways that you've used AI tools, whether or not it's obvious that they're AI, and whether or not you have any control over them, but ways that AI has shown up in your life. Do you have any examples on that?

Sue: Yep. I use AI all the time as help desk now, my IT help desk. So I'm constantly using it. I snip little parts of whatever I'm stuck on, if I'm trying to learn a new tool. If I've got a hardware problem, I'll use it for absolutely anything. So I do that a lot.

I used it actually from a cybersecurity perspective recently. I made a booking on Ryanair and had actually got an email saying, "You've booked third party. You need to validate yourself." And I thought, "That's weird. I've booked directly on Ryanair. This must be a scam." But actually I popped it into ChatGPT to ask its opinion on that. And it instantly gave me multiple reviews from others who'd had similar experiences and, in fact, talked me through why Ryanair asks for it. It actually gave me the European case law that was under consideration and suggested it was ethically troubled, but nonetheless told me which parts of the website I needed to go to, to respond. I wouldn't ever fully rely on AI for cybersecurity detection. But I think it's a good first pass if you use it, yeah, for something like that.

Karen: That's a great example, yeah. Any other areas?

Sue: I guess just on a personal basis, I'm taking a trip to Scotland, the Scottish Highlands, shortly, and I needed to buy a raincoat. I mostly use AI strategically, I like to think, in a deliberate, considered fashion. But I have noticed, every now and then, if I'm uncertain or stuck in indecision, I can start over-relying on AI and using it all the time. And I must admit I was a bit tired and frazzled on the weekend, and I was stuck in the store, overwhelmed with analysis paralysis about which raincoat to buy and which one would keep me dry in the Scottish Highlands. I whipped out ChatGPT voice mode in the fitting rooms and started to ask it questions like, "Can you do a quick review of brands and consumer ratings of the brands that have been recommended to me?", just to help me make the decision so I could walk out the door. Yeah.

Karen: Oh, neat. So you're using it right there on the spot in the store.

Sue: Yeah, that's right.

Karen: Oh, awesome. That's convenient. And are you happy with the raincoat you ended up with?

Sue: I'm actually really worried it won't be okay! I think the advice was fine. I think it's a good example: as someone who processes information, I constantly scan the environment for data points. In the old days, if I wanted to find something out, I'd talk to five different people to get everyone's different perspectives. So now with AI, of course, I can always get a perspective. It's another data point for me. I can go in, I can get an opinion. Playing with prompts and different tools, I can get different data points. It helped me make the decision and get through my data gathering. Time will tell as to whether I stay dry in the Scottish Highlands.

Karen: That sounds like an amazing trip. I will look forward to hearing how it goes for you.

Sue: Thanks!

Karen: We've talked a lot about the ways that you've used AI-based tools. I'd like to hear if there's any times where you avoid using AI, anything specific when you choose not to use it, and why you chose to avoid it for that?

Sue: Yeah, I mentioned before that I use ChatGPT quite frequently as my IT help desk. That's such an entrenched habit for me that when I was stuck the other day in one of my bank accounts, I actually took a snip of my screen and had literally hit Control-V to paste it into ChatGPT before I went, "What on earth am I doing? This contains private financial information!" Thankfully I hadn't hit the button to run the prompt, and I instantly wiped it out. I want to say it's obvious that I don't use it for financial information, by way of example, but I would also say you've got to be aware, don't you? You're so busy using it, it's become so ingrained as part of your workflow and your way of working, that it can be easy to forget sometimes.

So my answer to that is, in theory, I know not to share too much personal information, obviously client information that's inappropriate, or just sensitive information, like financial information. But I share that story just because it's a really good example that even when you know you shouldn't share certain sorts of information, you've got to stay ever vigilant about not doing that.

Just the other day, my son, who is traveling in China, had a rash on his ankle and sent me a photo. So I have used it for health purposes. I know some people are very concerned about that. I guess I'm choosing to be opportunistic about when I can use it to help navigate health issues. I'd say I'm aware of the risks and I try not to use it for things that are sensitive, but I'm also interested in trying to leverage the power of it as well, so it's a little bit case by case at times.

Karen: Were you able to get a good answer about your son's rash?

Sue: In some respects, I think so, and it was great actually. I hadn't really used it much for health information. It did come up with an opening line about "Please don't rely on this. It's indicative only." I was really pleased to see that it had that. It was actually really helpful about identifying possible causes. And it emphasized that I'd told it certain things, like that he didn't have any other symptoms, et cetera. And it actually helped identify that if certain things happened, then he should seek emergency medical advice. I sent him a couple of screenshots of it and he actually found it really helpful. So it gave him some guidance in different contexts about, if certain things happen, then that was a good indication for him to seek help. Yeah.

Karen: Okay. That's good to know. One concern that comes up a lot is looking at where these AI and ML systems get the data and the content that they use for training. They will often use, for instance, things people put into the chats, or that we put in online systems, or we published online. And they're not always transparent about where they got the data that they used in a tool that we're using, or how they intend to use our data when we sign up for a service. So I'm wondering how you feel about companies using data for training their AI and ML systems and tools, and specifically what your thoughts are about something called the three Cs, whether creatives have the rights to active consent and to get credit and be compensated for their data when companies want to use it for training.

Sue: Yeah. Look, again, I think this is a really complex area. The simplistic answer is that, philosophically, I do feel it's important to think about consent and credit. And I think you're a little bit generous to say that they're not always transparent, because maybe if you go looking, you can find it sometimes. But I think in principle, they're good principles to aspire to. I think in practice it's a really vexed issue because there's so much data out there. The horse has bolted is how I think about it. And I don't know that two wrongs make a right. So perhaps the fact is that all this data out there has already been consolidated, used, and massaged; those tools have been built and trained and created, and they've already drawn from a whole lot of content. I struggle with this because I can see that what's happened isn't necessarily fair or appropriate, but the pragmatist in me says it's already happened.

I think I'd love to see some stronger and clearer rules, and more effort and focus on trying to make sure there was, as a minimum, appropriate consent, and ideally that kind of credit and compensation. And I think that's possible in certain circumstances. But I think you have to draw the lines somewhere. I don't actually think that can be implemented now in full. So I think we need to evolve a system or a framework to at least establish some ability to do it in some way.

Yeah, I'm sure you can see I don't have clear ideas about that. Part of my work with strategy is that there's no point building a strategy unless you can execute it. I'm really focused on the idea that if you can't implement it, then there's no point writing it down on a piece of paper. And that's why I struggle with this concept. I can see what should happen, but I don't quite know how it can happen given where we're at.

Karen: Yeah. I hear from a lot of people that they feel this sense of it's a done deal. The cake is already baked, can't take the eggs out, that sort of thing. I could certainly empathize with that. At the same time though, it feels we shouldn't just throw up our hands and say, "Oh, we're living with an unethical system, let's just move on." Because it seems like there are probably some things we can do to move it towards a more ethical ecosystem.

Sue: Yeah, and I couldn't agree with you more. Yeah, absolutely. The cake has been baked, but the future cakes aren't baked, right? So we should be able to think about how we approach the kitchen next time. So I absolutely think that's fair game. But I'm also conscious, so yeah, I don't want to throw the baby out with the bath water here. I agree with you. I'm trying to be pragmatic about it, but it's not okay. And that doesn't mean we shouldn't be trying to substantially improve what's in place at the minute.

Karen: Yeah. Do you have any specific thoughts about what we could do in this brownfield space that we're in that could make it more ethical, something that would be feasible to implement?

Sue: I did see something that came out of Dubai recently. I think it was the Dubai Future Foundation, although I might be misremembering the name. They have published a new framework to help identify the different levels of human-AI collaboration. It's a little bit like a kind of nutrition label, starting with purely human at one end and purely AI at the other, with human-led, AI-augmented, and AI-led in between. So it's a lovely kind of grading system.

When I first saw that a little while ago, I went, "Oh, that's great." And then I really wasn't sure about it, but I've been thinking a lot about it recently. And again, it's: how do we apply it? Because emails are AI generated, but who really wants to label an email to tell you whether it was fully AI automated? What's the value in that? Having said that, an email is very simple, a transactional moment in time. Obviously there are many documents and products that are created where it absolutely would be very valuable to have that on.

So I think it would be great to see that type of approach being taken. And I think it's about trying to identify which contexts that applies to. What are the factors that you take into consideration? And when is it appropriate to be labeling things? What's the value of labeling it? I think there is lots of value in certain contexts. But I think we need to somehow build out a methodology and approach, a protocol that people can adopt, even if it's on a voluntary basis in the first instance, just to try and get it explored and perhaps used in places.

Karen: Yeah, I think the analogy to an information label, nutrition label, is interesting. I think the question is, whenever people say, "Here's a specific technical proposal", I say "What problem are you trying to solve, or what kind of decision are you trying to make?"

So when we come to these labels, I would say, "What decision would someone be trying to make about this tool, or about this piece of information, or about this email, based on knowing that?" From there, that's where I think you go to, this is a way that you can help that person make that decision.

And one thing I hear a lot from people is about transparency. And they want to know, "Was this AI generated? And where can I read more about whether the tool was trained only on US English speaking people's information, or only images of white people or men?" Just to have a way that they can learn more and say, "Okay, how much can I trust this information?" If it was AI-generated, but it was built on a tool that was exclusionary, then maybe I feel like I need to be aware of that. So looking at it as a decision support system or mechanism, what would I want to do based on seeing this email, this image, this video was AI-generated? What does that tell me about whether or not I can trust it and believe it and rely on it, or if I should discount it or skip it or block it or whatever?

Sue: Yeah, look, it is a really interesting one, because I think sometimes that's really helpful. But it's interesting that you've made the connection between it being AI generated and whether you can trust it. Because to me, the fundamental question is, can I trust this information? Do I want to rely on it? I'm not actually sure that it being AI generated is, to me, the sole determinant of trust. There's some tech studies out there, but there's also plenty of compelling evidence to show that AI is more likely to give a correct health diagnosis than a human might, for example.

So there are instances, this comes to the balance we're talking about before, that it's good and evil. And sometimes it's more powerful and sometimes it's more dangerous than humans. If it's AI generated and it's a medical diagnosis and it comes out of an AI tool that's tuned and trained on health data, then actually I might trust it because it's probably potentially more accurate than a human.

So I think it's a really interesting question about whether I can trust it. But I'm not a hundred percent sure about AI generated versus human generated, and how the importance of that will differ depending on the context that you are considering. Someone made a great comment to me the other week, because we were discussing ethics, safety, and risk in the meeting. It was funny because the whole session was focused on whether you can trust AI, and the ethics and safety. Someone made a great comment, which is, "Can you trust what humans say? Look at history." Yeah. It's a fascinating point, isn't it? His story, her story, history as captured by the people in power. And if you actually look at human generated content, with all of the unconscious bias and conscious bias, like in the context of a novel or something perhaps, but also in different contexts: who says human content is better than AI content? I don't know that, in and of itself, human content is better than AI content, nor worse, is it?

It's like Kranzberg's principle: "Technology is neither good nor bad, nor is it neutral." It's about how people are using it. And people can generate AI content. It's about all the context you just mentioned: can you trust it, and what's the basis of how it's been used and generated?

Karen: Yeah, that was one of the things I was thinking about when you were talking about AI content and whether it's trustworthy. It's certainly not the only factor whether something should be trusted, and certainly a lot of distinctions about whether or not human content can be trusted. There's always been this phrase, "consider the source". With humans, we can look up the person, look up their reputation, or see if they're a scammer. But if we don't know if something was AI assisted or AI generated, we can't really consider the source unless we know the source.

And then with an AI tool, if it's providing references, where did it get its references? If it's giving medical advice, was it trained on women's health, or does it judge women's heart attacks based on men's heart attack symptoms, things like that? So trying to consider the source is, I think, where this transparency comes in. And it's certainly part of trustworthiness, but definitely not all of it.

Sue: Yeah, and I think the other thing I'd say about the source: when I was doing my study in AI safety, I was trying to look under the hood, so to speak, to understand what AI is. It still blows my mind. And I know it's known by some, but not everyone, that we don't know how AI works. That kind of just blows my mind.

Karen: So when we talk about transparency, as someone who's used a lot of different AI-based tools, do you think any of them have been transparent with you about sharing where they got their data and whether the original creators consented to it being used?

Sue: We talk about AI generated and knowing the source. At one level, I'm all for working out what system we can create for transparency statements. Maybe people go in and deep dive and audit it and give it some kind of rating. Or there's some ability or framework to help identify the training data. There's so much training data.

And in the health context and the question about, what if it's not been trained on women's data? Well, most GPs haven't been trained on women's data because all the medical research has been done on men. And again the AI is perpetuating the bias and challenge that exists in the human world.

But given you train so many models on such large amounts of data, and then tune them, and given the mystery that sits beneath it, yeah, it's challenging, isn't it? And I can't agree with you more that we need to understand it better. But yeah, I guess I remain unclear about how we're ever going to achieve that level of transparency. I'm sure there's things we can do like frameworks and ratings and different ways of making it clearer. But I think we shouldn't forget, we don't know how AI works, and that's important to consider.

So yeah, anything we can do to be more transparent, more ethical, and give people — it's about information, isn't it? To your point, can you trust something? It depends on how much information you have. In the human context, you can check them out, you can look at their profile. You can maybe get a sense of who they are and what position they hold and whether you do want to trust a person. And you should be able to have multiple data points to go in and explore and understand. Can you trust an AI? Which company has produced it? What is their transparency position? What is the context that they're sharing or information that they're sharing about their tool? And what is your ability to interrogate, check it, et cetera.

So I think it's about trying to build up multiple ways of exploring this; the nutrition label or the human-AI collaboration level could be a part of that system. But it's really, you know, what are good data points and systems to build out so that you can feel that you are making an informed choice?

And I think that certainly has a long way to go, but it'll be interesting. We are becoming more dependent on AI. And you can see that shift happening right now. I was reading that in 2026 they're expecting AI to start being fully embedded. We'll stop talking about AI in the same way we stopped talking about email; it was interesting for a while. It's not quite the same comparison, but it's an interesting point that in another 12 or 18 months it could be that embedded.

And that's invidious, isn't it? Because we don't trust it. We don't understand it, and yet it's becoming embedded in our lifestyle. But the reality is it is going to become embedded in our professional and personal lives. And so I think it's, yeah, what can we do to stay alert and understand the risks and encourage any transparency we can?

Karen: So for the tools that you have used, do you feel like any of those tool providers have been transparent about showing where the data for the tools came from and whether the original creators of that data consented to it being used?

Sue: I don't think I've ever noticed the second part of your question about original creators having consented because if you look at the large language models, for example, they're trained on such large amounts of data, it would've been impossible. And I'm sure they didn't try to achieve consent.

I guess Claude is often considered one of the most ethical tools. And I like Anthropic's approach; I feel it's an organization that's at least trying, or more committed to an ethical position. And I must admit, when I used Claude, I was delighted when, unprovoked, when I asked it a question, it came back with a response about — I was actually looking for a company name and I was trying different company names out. I'd been searching in ChatGPT and I popped it into Claude, and it instantly came back with a view about how gendered my proposed name at the time might be, and how it might appeal more to men than women. And I was really interested that it was able to identify a gendered nuance like that. So I do feel that Claude has a degree more of an ethical basis.

From a consent perspective, no, I'm not aware of any LLM that actually comes close to what I think would be good practice in the way you've described it.

Karen: Yep. Yeah, that's fair. So as consumers and members of the public, our personal data and content is being used by AI-based tools and systems all the time. Do you know of any specific cases that you could share? Obviously without disclosing any sensitive personal information.

Sue: No, I don't think I do, to be honest. Yeah, no, nothing springs to mind.

Karen: Okay. You mentioned that you use paid versions of some of the tools. Do you find that gives you better protection for your privacy? Or is it more a matter of the features that it gives you?

Sue: Oh yes. No, that is a good point. Absolutely. I'm well aware, and I encourage other people to be aware, that the paid tools do offer stronger protection. There is greater access to privacy settings. I don't honestly know how fabulous those privacy settings are. But I do encourage people, when they're exploring different AI tools, to actually think about trying the paid versions, even if just temporarily. One, because I think you get a better sense of the functionality when you go onto a paid version. But two, because I do feel that they are a slightly more secure and safer way to engage. And again, I guess I'm largely talking about the main LLM tools there. Yeah.

Karen: Do you know of any companies that you gave your data content to, or that you used, that made you aware that they might use your information for training an AI system? Or did you ever get surprised by finding out that a company was using it? Sometimes it's buried in the terms and conditions, or sometimes they change those even after the fact.

Sue: Look, I must admit, when ChatGPT put its agent function on very recently, I'm always one for diving in and trying new things, and I'd heard the agent function had been announced, and I also didn't think it was coming to Australia or to my license very quickly.

And yet it prompted and came up. It actually popped up a little square that said, "Read the terms and conditions", or something like that, and I thought, "I'm just going to hit this button first and see what it says." And actually, when I went into the terms and conditions, it made it clear that with the agent function, all of my data was basically quite insecure. And I went, "Oh, I can't deal with that," and I closed it down. I didn't use the ChatGPT agent function, because I'm quite concerned about it; it seems to require such enormous access to your entire personal life.

Even with the Relay app and Zapier and the workflows, you're giving away your rights in Google Calendar, not just to create events and cancel events; you're already giving it access to a whole lot of different things. Relay, for example, will have all those consent buttons. And I guess ChatGPT, to their credit, was obviously worried enough about the exposure that it actually prompted, "Do you want ChatGPT, and do you want to read the terms first?" They obviously understand the risk, I think, with that, for them to have bothered to put that in. But again, it's a question of time. We're not going to stay away from that function forever. So I really don't know how to navigate there. I'm just procrastinating on it for now. Yeah.

Karen: Yeah. There's an attorney in Ireland, Carey Lening, that I had talked to. And one thing she pointed out was that these dialogs prompting "Do you agree with these terms and conditions?" are really a bad way of trying to get informed consent, because you're right on the spot. You're trying to move forward to get something done. You're on a deadline. And most people, I think it's something like 80 or 90%, never read the terms and conditions, but you can't really blame them, because they're so opaque. I went onto a banking website earlier this week and their terms and conditions were 76 pages! So I'm supposed to sit here, 76 pages, read it all, understand it, and then go back to my signup dialog before it times me out?

Sue: But they're also non-negotiable, right?

Karen: It's just ridiculous.

Sue: They're non-negotiable. So you're either going to use the tool or walk away from it. That's, I think, where you've got the disproportionate amount of power. Even if you did read your 76 pages, if there's one line you don't like on the 76 pages, you still need to hit the accept box, or you can't use the tool. So that's where I think it's very challenging.

Karen: Yeah. And that was Carey's point about it not really being informed consent. It's bad in the consent. It's not like we really have very many options. And in some cases, like for instance, for government systems, it's not like we even have a choice. You have to use it.

Sue: Well, interestingly, I was reading about a very experienced AI expert, if you can put it that way. And she made an appointment for her child to see a specialist. Before she turned up to have that appointment, she was required to consent to a clinician note taking tool in the appointment. And because she had her expertise — I think had actually even looked at the tool that was in use at the practice — she didn't give consent. She didn't want to do that. She was actually told that the clinician was very in demand and had a very high hourly rate and wouldn't be able to take the time to manually write the notes. Basically, she was redirected to another clinician. What was good was that the consent was sought prior to the appointment. And she chose not to give her consent. But the answer was, "Go find another specialist. We won't service you."

So it's got good and bad points to the story. Everyone's got to be informed enough. And you know, with how concerned you might be about your child and access to specialists, that starts to get very interesting, doesn't it? About how much pressure you might be under to consent even though you're not comfortable and don't want to expose your child's health data.

Karen: Yeah, and in some rural areas, there's really no choice. There may only be one doctor of a certain kind in an area. So if you don't agree to that doctor's terms, you don't get to see a doctor. So it's really very limited in terms of having real choice. But yeah, that's definitely a concern.

I've been hearing a lot here about medical scribes as well, is what they call them. And at my last GP appointment, she asked if I was okay with her using it. And so we talked about the fact that I work in AI ethics and I looked at some of these tools. But she did give me a choice about whether or not to use it. She wasn't going to kick me out of the appointment if I said no. So at least I felt like I had a choice there.

Sue: That's great. In Europe they're looking at introducing regulation and actually classing the note-taking tools as medical devices. But I think they're not bringing it in until 2027. And after two years of putting everyone's personal health information into unregulated devices, the horse will have bolted again. And in 2027 people will have to prove what they've embedded for the last three years.

But, there's huge clinical uptake on it. It's an area under huge pressure and they need the efficiency. And it's also one of those parts of the workflow that I'm sure people love. You can see all the reasons why it's had such rapid adoption. And I can see all the reasons why we are right to be concerned about the safety of the data that's being captured.

So anyway, regulation is coming, but this is the challenge, isn't it? Where the pace of adoption is outpacing the regulation and the debate and public awareness about everything that perhaps would be good for people to know about.

Karen: Yeah. Yeah. And I think most people are okay with the idea of a medical practice using their data. They want them to know about their health so they can give them better care. I think where people get concerned, at least here in the US, is that medical practices have all this information. They know where you work, what kind of job you have, or if you're working on your own. They know your emergency contacts, and this is your sister and this is your mother. And one of my guests from last year, Julie Rennecker, was saying that data is not covered by health privacy protections in the US. Most people assume it is, but it's actually not.

Sue: Wow.

Karen: And that data is actually of more value to data brokers than your specific health information. Because they can sell that and combine it with all kinds of other information to now really pin down, this is you, and use it for marketing to you. Again, it's a matter of awareness. People think that it's health information, so it must be protected, but it's not.

Sue: Wow. I'm certainly not a lawyer and I'll defer to the experts, but from my time in the health sector, I'd feel confident in Australia that that would be considered health information. But again, there's so much, SO much complexity these days isn't there? In terms of all the different entities that might be using the tool, the procurement processes where all the data gets stored. So who knows who's ultimately accountable and making sure that it is being protected, even if it does have that status here.

Karen: So with all these companies using our personal data and our content, has it ever created any specific issues for you, such as privacy invasion or phishing or loss of income, or anything like that?

Sue: No, thankfully, as far as I'm aware I haven't had any loss yet. But I would say 'yet', because it's hard to envisage. It's such a common occurrence, isn't it, that you assume that is going to happen at some stage. And you just do what you can to try and minimize it and be aware of it. So yeah, at the moment I do what I can to look after my cyber safety.

Karen: Yeah that's definitely important. We talked a little bit about trust earlier. I think as people are becoming more aware of what these companies are doing with our data that the public trust of the companies has been degrading. And in a way that's probably healthy. More people are learning to ask more questions. On the flip side, what would be one thing, what would be the most important thing, that these companies could do to earn and then keep your trust? Is there anything that they could do? And do you have any specific ideas on how they could maybe do that?

Sue: We've touched on transparency a few times already in this conversation, and despite my confusion, if you like, about how to recover from the past, I do consider that to be really important. I do like to be able to go and look up Anthropic's transparency and safety information on its site. I think they invested about $6.5 billion at one point in their safety institute, which is a drop in the ocean compared to the trillions, you know. But nonetheless, when you see signs and signals like that, it makes me feel more likely to trust an organization.

So I think the more organizations openly acknowledge the challenges, share information on their websites, and show it in how they behave, the more you can see whether people are making some degree of attempt to be more transparent and more ethical. Any company can have a vision and mission and values. It's how people speak and how they act. I don't think it's really anything different to any other question of whether you would trust a company. It is about how they engage and what they're trying to do.

And I think ultimately their business model is a huge indicator as well. How are they trying to make money? And how is that set up? If you look at the Facebook business model, it's fundamentally set up in a way that doesn't promote transparency and trust, for example. Unfortunately, I'm not completely convinced that a lot of AI companies will be motivated to act in a transparent and trustworthy manner. Not because they're deliberately trying to be untrustworthy necessarily, but just because I think business models are very powerful forces, and I think there's huge amounts of money to be made. Yeah, I don't have any specific ideas. I think it's more about watching their ongoing behavior, looking at the multiple signals, looking at how the leaders behave, whether they make choices about how they develop their tools, and whether they look for business models that are more ethical and equitable. Yeah.

Karen: Yeah, and I think a lot of people feel like we don't really have a choice. The company is so big and so powerful. But at the same time, even things like nutrition labels on food didn't come about because the companies thought it was a good idea. It came about because of consumer pressure.

And so it feels like the more that we can do to have awareness and for people to ask questions and to look for more ethical alternatives, the more likely the companies will respond. Because they basically do what they're rewarded for doing. It makes sense. So we have to find ways to reward them for doing the right thing collectively as consumers.

Sue: Absolutely. And competitive pressure comes in. So the more that there is competitive pressure, the more people can still choose; we don't have full model lock-in just yet. Although ChatGPT does remain the world leader, with two and a half billion prompts a day or something, or a week, whatever it is. It's a staggering amount.

I think as long as there's choice out there, there is some commercial imperative to appear trustworthy. I think that could be a competitive advantage. I remain hopeful about that, because I think that would be a stimulus for the companies if it was a competitive advantage.

And I a hundred percent agree with you, Karen, that as consumers, if we talk about it, if we care about it, if we choose models that are more ethical, then that will have some ability to influence the landscape.

Karen: And it sounds like you're moving in that direction with your leadership circle. That's all my standard questions. Is there anything else that you would like to share with our audience about the circles or anything else?

Sue: No, just if anyone is interested in joining the AI Leadership Circle, then by all means look me up on LinkedIn or jump onto my website www.uncertaintylab.com.au.

I am kicking off some more programs in October, and if anyone is feeling a bit overwhelmed or uncertain about AI and wants to build their confidence and capability, then yeah, please reach out. I'd love to connect and hopefully can help.

Karen: All right. Yeah, that sounds like a great session and amazing program, and we'll put the direct link into the article as well. But I'm glad that you said it so that our listeners can also pick up on it.

Sue, thank you so much. This has been a really fun conversation. I enjoyed talking with you and thank you for starting your morning with me today!

Sue: Yeah, thanks Karen. Really appreciate the chance. Love talking about AI! Been a great conversation.

Karen: Great. Thanks, Sue.

Interview References and Links

Sue Cunningham on LinkedIn

The Uncertainty Lab



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

