6 'P's in AI Pods (AI6P)
🗣️ AISW #088: Dr. Anna Katharina Meyer, Germany-based sustainability activist and entrepreneur

Audio interview with Germany-based sustainability activist & entrepreneur Dr. Anna Katharina Meyer on her stories of using AI and how she feels about AI using people's data and content (audio; 48:42)

Introduction - Dr. Anna Katharina Meyer

This article features an audio interview with Dr. Anna Katharina Meyer, a 🇩🇪 Germany-based sustainability entrepreneur and the co-founder of FindingSustainia. We discuss:

  • how she and her ‘sister in crime’, Santa Meyer-Nandi, use AI for their work in FindingSustainia

  • what “AI Tool Times” are, why they started running the sessions in Skool, and why the sessions last 30 days

  • setting up her custom GPTs for “Anna’s Expertise” interview requests and for building funding applications more efficiently

  • why she sees LLMs as dangerous for academic research work, for reinforcing societal biases, and for critical thinking

  • sending her kids to a Steiner school to focus on core needs of human beings

  • using LLMs to help her analyze annual sustainability reports year over year

and more. Check it out, and let us know what you think!


🎁Bonus: I joined Anna and Santa recently on their podcast. We had a great conversation about Everyday Ethical AI. Listen to it here on Spotify!

Karen, Anna, & Santa chat about AI

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click HERE to read the whole post online.)

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Interview - Dr. Anna Katharina Meyer

Karen: I’m delighted to welcome Dr. Anna Katharina Meyer from Germany as my guest today on “AI, Software, and Wetware”. Anna, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Anna: Yeah, thank you so much. Maybe this is one of the most difficult questions to answer! I would consider myself as, I think, an activist, but I also like academia, and I am also an entrepreneur. And in everything I do, I try to support the sustainability transformation of our society. And with my company, founded together with my co-founder, Santa Meyer-Nandi, we support change agents in their organizations when they decide to drive social or ecological change.

Karen: Very nice. Thank you for sharing that. I understand that even though you and Santa have a similar name, you aren’t actually related. Is that right?

Anna: Yeah, that’s right. Santa has parents with an Indian Bengali background, and we do not look alike. But I don’t know, because we have the same mission, people sometimes believe we are sisters. Definitely we are sisters in crime. And for more than 10 years with FindingSustainia, we have been supporting those who are changing the world for the better.

Karen: That’s an amazing track record. I’m so impressed that you’ve built this and sustained it for over 10 years. That’s really cool. So tell us a little bit about your level of experience with AI and machine learning and analytics. I’m wondering if you’ve used it professionally or personally, or how much you studied the technology.

Anna: Yeah, I would love to tell you a little bit of our story, because Santa and I never envisioned having anything to do with machine learning. This was very far away from our zone of genius, or our sphere of experiences. But beginning last year, we started using all these easily accessible tools to help us in becoming more effective in our change making, in growing our business for good, and in using all these tools to drive positive change.

And when we talked with our community members about our heavy usage of AI tools, they were quite impressed. Then there were several different reactions. Some said, “Oh, but is this ethical?” Because of course, people who have this systemic view, who want to change the world for the better, they are also aware of all the ethical shortcomings of AI usage. But then others were also interested and said, “Huh, are you really using it for good? Couldn’t you teach us how you use it?” Eventually Santa and I shared the idea of teaching AI tools in our community, together with a guy who is using AI tools from morning to evening. And the three of us are now hosting cohorts – we call them AI Tool Times – where within 30 days we try to give our experiences, let them share their experiences, and of course reflect on ethical issues as well, because that’s how we are.

Karen: Your AI Tool Times sound like a really cool initiative. Is that primarily in German, or how do you run those sessions?

Anna: We have cohorts in German and we have cohorts in English. The third person is American Scott Denton, the one who is also facilitating these 30 days. And it’s always 30 days because we believe 30 days are good for a start, good for experimenting. And it’s just a great way to concentrate on something, knowing that there is an end. And also to get to know others because after these 30 days, we want the people to stay connected and share their experiences and help each other out. We’re happy if international guests join us, and send us a note! And I’m sure you will put our link somewhere so people can find me.

Karen: Yep, definitely. We can put them right into the interview here and at the end as well in the references section. Definitely, we want people to find you.

Anna: That’s great.

Karen: So that’s a good overview of how you’re using AI professionally and you’re helping others to use it, and we’ll talk more about that later in the call. I’m wondering about how you use AI personally.

Anna: What I really find very helpful is to create my own custom GPT. I don’t know if they’re called this way in all the different LLMs, but at least in ChatGPT, they’re called custom GPTs. This for me is a way to train – I don’t know how to explain, but – my own assistant with some specific information.

To give you an example, I have a custom GPT which is called Anna’s Expertise. And I trained it, giving some rules to it, telling it about my tone of voice. And then feeding it with all the publications I have, because I have a PhD which has several hundred pages. I recently published articles on sustainability management, sustainability controls, accounting issues, on transformational leadership issues.

Oftentimes I’m invited for a talk or something. And then these organizers, on their website, they want to have a little interview with Dr. Meyer. And before, it was always like this, they asked me for scheduling an interview, or they sent me questions, and then I needed half an hour or an hour to answer. With this custom GPT, I can just give it to my Anna’s Expertise GPT and say, “What would Anna say?” And it’s really astonishing how well the AI helps me answer very quickly, in the style I would actually answer questions like this, with the expertise which is well researched and took a lot of time to gather, and which is oftentimes even peer reviewed; but then it’s there in the interview within seconds.

Karen: Very nice. So you use that to help you respond to these interview requests that you’re getting?

Anna: Yeah, exactly. And another example is we oftentimes apply for funds. I always feed the custom GPT with all the information concerning the funding requirements and what is needed to receive the funds. This is also a great source for answering all the different questions for this specific funding instrument. When the custom GPT already knows what FindingSustainia is doing, it just has to be applied to all the requirements from this specific funding instrument. And this also happens in seconds then. And I can really use my time better than answering questions, just repeating my expertise, or repeating what FindingSustainia does, to meet the requirements of a certain funding instrument.

Karen: That sounds like a good efficiency tool for you.

Anna: Yeah, definitely.

Karen: It sounds like it’s worked fairly well for you, in both of those examples that you gave. Are there any aspects of it where it doesn’t work so well, or certain questions where you’ve seen that it doesn’t respond the way you would? Or it doesn’t reflect FindingSustainia’s mission accurately when you’re using it for these purposes?

Anna: With these custom GPTs, which are trained by me, so all the information given to the GPT is coming from me, I really see no flaws. I really see that it’s on point.

But then there are other experiences I have which are ridiculous. For example, when I ask just, in a general question to ChatGPT, “Tell me in which academic papers you see similar findings to the findings I have”, for example. And then I would expect ChatGPT to find academic papers where the researchers came up with similar results. And this really does not work. ChatGPT is literally, how can I explain it? Literally making up sources. Because it’s only a large language model.

A large language model works in a way – I mean, I don’t have to tell you about it, but maybe some people in the audience might be interested to just have an explanation by someone like me who is not an IT-trained person. But the LLMs, they’re good with language, and they know that certain words follow certain words, and that in the long run they make sense in sentences.

And ChatGPT then just makes up article headlines which sound very promising. And then it adds a journal which actually exists, a journal from the field, so it seems to be kind of fitting, with an author who also is oftentimes known by me. So it all seems to make sense. But if I then go into my academic databases and I want to double check if the researchers really found these results, then I find out that the article doesn’t even exist.

So it’s really dangerous to just believe facts when randomly working with ChatGPT. There are other AI tools which are better at being precise. ChatGPT is not meant to be very precise, but merely to be very logical. This is what I believe. Maybe you have an opinion on that too, Karen. But definitely I can tell you that the articles didn’t even exist, even though it sounded too good to be true. “Oh, now I have five papers from these renowned authors published in these great journals where the title of the article just seems to be perfect.” And it’s just hallucination, or I don’t know the exact English word for it.

Karen: Yeah, yeah. Hallucination is the word that’s most commonly used, although some people say confabulation is a better word. Most people don’t use that word, but I think linguistically it’s probably more accurate. But yeah, a lot of people have talked about hallucinations. One of my early guests was trying to find podcasts that she could go on to market her book, and it gave her a list of 10, and eight of them simply didn’t exist.

Anna: And really, ChatGPT is not another Google or search engine. It’s really something different. And for the first use case, it’s wonderful, but for this second one, it’s just not suitable.

Karen: Some of the other LLMs have been announcing features that will only use a list of verified sources. For instance, Perplexity has done that and Claude came out with that most recently. ChatGPT has not. Have you ever tried any of the other large language models other than ChatGPT? Or is that the main one that you’ve worked with?

Anna: Perplexity is definitely the alternative which we are suggesting for use cases like the one I just talked about. And we are always saying that all of the other large language models are good as well. Gemini is even free. And Claude is better in terms of ethical issues. It’s also supposed to be better for creative writing. But I have to admit that I still mainly use ChatGPT, but only in the paid version. I believe that only the paid version really delivers the potential needed for serious work.

Karen: Have you looked yet into the new Swiss public chat?

Anna: Not yet. How was your impression?

Karen: You know, I haven’t been able to get myself authenticated! I’m not sure if it’s because I’m not in Europe, or not, but it won’t authenticate my email. I need to go try a different email address or something to get into it. But I know someone who has tried it and she was able to get authenticated from the US and she really likes it.

Anna: Okay. Huh!

Karen: So I liked that it was ethically developed with fairly-sourced material, and they trained it on the Alps servers, which run on renewable energy. So from a sustainability perspective, it’s got some really nice advantages. I haven’t heard about how they did their data labeling. That’s one thing I haven’t really heard yet. But they did everything else well. So I’m hoping that they also did that well.

Anna: Yeah, probably. Maybe I can tell your listeners that you were a guest of our podcast as well, of the FindingSustainia podcast. And it was very insightful for us to learn from you about the hidden costs of AI training or data segregation and identifying which data should be used in the training material, and who is doing the very crappy work to really separate the data which should not end up in these learning materials. And that these are actual real human beings who need to make these decisions, which are not easy to make, or where the content is nothing nice to look at.

Karen: Yeah, there’s a whole section in my book about the data labeling business and how they mistreat the workers and everything around that aspect of bias. But I don’t want to preempt the interview! I want people to go find it when it comes out and listen to this.

Anna: Yes, please!

Karen: If it’s out before we publish this, we’ll put the link into this interview so people can go find it there. But yeah, I’m looking forward to that interview coming out. That was a lot of fun to talk with you and Santa.

Anna: Yeah. And you will have Santa in your podcast as well! I heard about it and yeah, so double FindingSustainia power on your podcast!

Karen: Yes! That’s excellent, yes. So we’ve talked a lot about how you have used AI and the pros and cons. I’m wondering if there are things that you avoid using AI-based tools for? And if so, can you give an example of when you avoid it, and why you choose not to use AI for that purpose?

Anna: We always say that in two ways, we believe that it’s dangerous. One is that it oftentimes increases biases. In staffing processes, for example, of big companies, there’s a lot of academic literature on how the AI learns that an ideal IT professional looks like this and that. And the algorithm and the machine learning part does not understand that oftentimes an IT professional looks like a woman. And when we only receive the 15 applications that came out of an AI staffing process screening 500 IT professionals – “Oh, well, they are all male” – maybe we do not even notice the bias.

And we definitely have to be very careful with these biases which are part of our society, of our culture. They have been identified. We are aware of some of them. But then with AI, they are somehow forgotten, because no one really looks into the decision making within AI. So this is one very difficult aspect about AI usage, I would say.

And the other one is that we risk our own creative thinking and our ability to structure things ourselves or to write things ourselves if we rely too much on AI. And I’m really scared what this development will do to our younger generation. First of all, they will have the impression that speaking is sufficient. No one needs to write anything anymore because the AI can write. And I only have to talk to tell what I want. So we somehow unlearn our writing skills, our skills to structure topics and yeah, maybe even our skills of creative thinking, because it’s so easy to ask “What are four possible creative ways to answer this or that question?” And if we do that on a daily basis, and we do not see the need for our own creative thinking anymore, I believe that we can lose it over the next years. And this is quite dangerous for our society, I would say.

Karen: Do you have any family members who are in this younger generation, and have you had any observations about how they’re using AI or not using AI?

Anna: My kids are still too young. They’re 6 and 8 years old and they go to a Steiner school. I don’t know if this is very well known in the US. I know that there are some of these schools, but it’s a very special school, I would say, where they very much concentrate on the core needs of the human being. One is the need of achieving something with your hands, and of course the need for creative thinking, for problem solving. And I love this school because they are knitting and they are doing many things with wood, and so really doing things with their hands and with being very creative. And I like it because I believe we don’t know what these young people need to know in 15 years when they start to work. This would probably be very different from what we do need today. But it’s definitely good to know that you can do things with your hands and that your brain is a creative instrument and that your individual thinking is important. But I know that these schools are rare and they also have some disadvantages and so on. But therefore they have no experiences yet with AI.

But I know from some neighbors that they are using ChatGPT as if it would be a search engine. And there I see a very big shift because all of the marketing efforts in the digital space are based on SEO, search engine optimization, for example. And I think that all of these companies need to shift to an LLM-based optimization. And I do not think that this makes a lot of sense actually, but I believe that this is taking place, so I do not have a solution yet for that. But I see that this is happening and that every marketeer needs to change his or her processes.

Karen: I’ve heard people refer to that as GEO, generative engine optimization.

Anna: Definitely this is what is probably the next big thing. But from your point of view, I mean, you understand that technology behind AI a lot better than I do. Does it make sense to use LLMs like search engines?

Karen: That’s a good question. I think it’s happening, whether it’s a good idea or not, like so many other things in the world of technology. I think it’s definitely having an impact from what I’ve been hearing about. The people aren’t getting the click-throughs that they used to, because the generative AI overviews are trying to answer people’s questions right there, and so then people don’t need to click through to the articles.

So anyone who has a site that’s based on getting ad revenue or anything like that, their business is being significantly affected. So it’s definitely something to consider. The other aspect is these AI overviews can hallucinate, right? And so some people, including me, have adopted a shortcut you can actually put into the Google search to turn off the AI overview so that it won’t show you those. And it acts like it used to, where it’s really just a search engine that points you to articles. It’s a UDM-14 shortcut, they call it.
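For readers who want to try the shortcut Karen describes: it appears to correspond to Google’s `udm=14` URL parameter, which requests the plain “Web” results view without the AI Overview. A minimal sketch of building such a search URL (the parameter name is assumed from the “UDM-14” nickname in the conversation):

```shell
# Sketch: append udm=14 to a Google search URL to get plain web results
# (no AI Overview). The space-to-plus substitution is a naive URL encoding,
# sufficient for simple keyword queries.
query="python requests documentation"
encoded=$(printf '%s' "$query" | sed 's/ /+/g')
echo "https://www.google.com/search?q=${encoded}&udm=14"
# prints https://www.google.com/search?q=python+requests+documentation&udm=14
```

Opening that URL in a browser behaves like a classic link-list search; some users also set it as their browser’s default search-engine template.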

Anna: Okay, so the real experts turn it off!

Karen: I generally do. There’s a couple of times that I’ve launched the Edge browser and I just want to do a search. I’m looking up a Python library call and I just want to see what the official parameters are. So I’m really just trying to get it to take me to the Python documentation. And I’ll put it in, and you’ll see the Copilot search is just running and running and running. And meanwhile the link I want is three links down, and I just scroll past that nonsense and look. But then, well, it’s just wasting cycles and computational power and my time to try to run it.

Anna: Yes, yes.

Karen: So I really would rather just go turn it off.

Anna: Yes. Wonderful insight. I mean, dear listeners, did you hear that? This was a hack by Karen Smiley!

Karen: I mean, it just doesn’t make sense to waste that time and the cycles. And it takes longer for me to get to the link that I just want to go to. And it’s not helping anybody else to have that. I’m not even going to read what they’re trying to generate, and they’re not even done generating it by the time I find the link I want. It’s just wasteful and I hate waste.

Anna: Yeah. You know, where it really helps, and where it also changes the way of working is when you consider all of these big annual business reports and the annual sustainability reports. Oftentimes, analysts or stakeholders, instead of reading everything, now only ask their questions and use AI tools, which are really functioning for this task, to give them the information from the report. This works quite well.

And it’s really helpful because oftentimes more than half of the report is pretty much the same as last year. So as a human, it’s very difficult to read through and remember, “Oh yeah, they already said this last year”, and so on. So it’s wonderful to go into a conversation with a specialized AI bot, and ask “What is new compared to last year?” And “What is written about this, this, and this, the three things I am interested in?” And this really helps on the side of the consumer, but on the side of the producer of these reports, it’s also an interesting shift, which has happened because on that side, people now understand that no one really reads it anymore, but that they have to write it in an AI-optimized way so that the answers are still accurate, or that the insights which are identified by the AI are the insights the company really wants to display. So they need to change their writing and make it more obvious what is important and what is not.

Karen: That’s really an interesting insight because we hear a lot of people talking about summarization. I wonder if the people who are writing these reports now would put the report into an AI tool and ask it to tell them “This is what someone’s takeaways would be if they read this” and see if that’s what they want. Do you know of anyone who’s actually doing that?

Anna: In our cohort we have all these sustainability professionals. And it was not my discovery, but the discovery of a lady. She sits on several supervisory boards. And in our cohort, she identified this as an important change. And she now uses her knowledge to inform people in this supervisory board, where again, the executive committee is influenced. And yeah, I think, through us, this gets into the business sphere. Maybe someone else identified it earlier, but we did too.

Karen: Yes, yes. And you’re helping now to share that knowledge, so that’s a good thing. So we’ve talked a lot about AI tools and what they’re good at, what they’re not good at. We’ve also talked a little bit about where AI and ML systems get the data that they use to train on. A lot of times that data is not a good representation of society as a whole. It reflects the biases in our society. And they’re also oftentimes getting data from online systems or things that people have published online, which may be subject to copyright, but not respecting those copyrights.

But companies are not always transparent about how they plan to use our data when we sign up for a service – music streaming, video, anything like that. I’m wondering how you feel about companies that are using this data for training their systems and tools, and specifically what your thoughts are about what some people call the three Cs, which is consent, credit, and compensation, which many people argue that creative people are entitled to for the use of their content.

Anna: Yeah, I’m definitely not that much into these questions, even though I know very well that they’re very important, and I’m happy people like you are talking about it publicly. I can only tell you from a consumer point of view that I really find it very scary to see that all this creative potential, for example in, let’s pick an easy example. People who are speaking in their native language to make film versions available in German, for example. We have all these great speakers with a lot of character who are dubbing the American superstars. And this is done by AI now oftentimes. I think here again, this creative potential is somehow killed. There is no business case for it anymore.

But I do not yet have a nice or valid position how to avoid the consumption of data which is retrieved from sources which I believe are not ethically correct. I definitely am on the very beginner’s standpoint on that topic.

Karen: So you’re referring to dubbing movies? Dubbing, basically replacing the soundtrack, and where they used to use these talented voice actors to do it, now they’re using AI instead, and so those people basically don’t have work anymore?

Anna: Yeah. Exactly. Yeah, yeah. And this is taking place in many creative areas where somehow these creative resources are not needed anymore. Or even worse, as you said, their creative work is stolen and used for training, and then it becomes not needed anymore afterwards.

Karen: Yeah. We see that happening in voice cloning, and in music, and in art, and in writing obviously is another area. But we see it happening in a lot of areas now, with video generation even.

Anna: Yeah.

Karen: I think people look at it one way if the tool was trained by, say, musicians who recorded clips specifically to give to that company to use for training, and then they generate something from it. Okay, that’s a form of competition. But it’s another thing if they stole somebody’s art, like the Studio Ghibli art videos from Japan, and then they generate videos in that style.

Anna: Exactly.

Karen: And they’re blatantly stealing. And that I think feels different to a lot of people.

Anna: Yeah. Yeah. And this is taking place, and this is definitely a very severe downside of AI usage.

Karen: So in an ideal world, what, how do you think it should work?

Anna: Hmm. I think copyright should still be something very important, very valid. I also like the European AI Act where we at least try to secure certain data rights. I believe that the European countries are always handling these issues with a little more care than other countries. That’s also a reason why we’re slower. But I see some sense in it, definitely.

In my ideal world, we would have the chance to use AI without feeling bad about it. Now we did not even talk about all these energetic resources which are needed and which are also causing problems. We are all here to fight climate change and then we are using these large servers and yeah, it all feels a little contradictory. And I would love to somehow resolve it, even though I know life is complicated and it’s not possible to always resolve everything.

But yeah, asking me for my ideal world, it would also be not harmful to enter an airplane. I would love to visit my American friends more often, because I spent a high school year in San Diego. I have many friends in the US. And I’m always very shy to tell them, “Oh, I’m not coming because I feel bad flying because of climate change.” And yeah, AI is not making it better.

There the ideal world is, yeah, based of course a lot on renewable energies. I have an MBA in renewable energies because I believe this is such an important topic. But it still doesn’t save us from using resources. We have other problems again. The ideal world, does it exist?

Karen: Not yet.

Anna: Not yet. But maybe. To close, I don’t know if you have more questions, but what I always like to highlight is that I believe that the efficiency gains we can have with AI can help us spend our time more on the human activities. Because it’s really not very human to sit in front of a computer all day long. And it would be so much more human to go for walks and play with kids and play the guitar and so on.

For example, in sustainability management, the sustainability manager oftentimes has to collect data. So they call their colleagues from another department, “Hey dude, I need this data and actually I need it tomorrow.” So they’re always not the most-liked people in the company, because they’re always asking for data, they’re always having these deadlines. And wouldn’t it be nice if AI would just take care of this data collection and the person could just go over to the other department, have a coffee with this colleague, and then they could really talk about sustainability gains of their work and not of how to collect the data in the most efficient way? So there, I believe human beings are more needed and I would hope for AI to take over and do the boring work.

Karen: Yeah. I think we all want AI to do the boring work and leave us free to do the fun work and the artistic work and the human-to-human connections!

Anna: Right.

Karen: I did have a few questions, just very quickly: as far as the way that our data is being used by these companies, and do you know if any of your data has been used by these AI companies? Either personal data that is being used by say, a movie streaming system or music streaming, or maybe if any of the articles that you’ve published have been scraped up and used to train a large language model? There was that LibGen database that came out that let us look and see if our articles got used, and I’m curious if you’ve tried that.

Anna: No, I didn’t. I’m not aware of it. I’m very certain that it is used, but I did not double check.

Karen: Do you know of any cases where you’ve given someone – a company, a government organization, or anything like that – given them some of your information, and then they told you upfront, “We’re going to train an AI or model on this data”? And just making sure you’re okay with that? Knowing, with the EU AI Act, there’s more privacy protection, I’m just wondering what your experiences have been.

Anna: I know it only from the American companies, which of course are our friends too, like Meta and so on. We are using WhatsApp and Instagram, and there we are giving many consents, which we do not want to give, but where we somehow are forced to and we still want to use it. But for governmental issues, I’m not aware that I was asked very specifically, no.

Karen: Okay. I was just curious. I know when people do fly internationally and they go through the TSA screenings, there’s some different things there, where they now are trying to take people’s pictures, and then use AI to compare those pictures to your ID and to other records and such. So sometimes it’s unavoidable. And so I was curious what the experiences were for you there.

Have you ever had your data stolen, like, had a data breach where your information was leaked out?

Anna: No, not that I know of.

Karen: Ah, you’re lucky.

Anna: I always believe that there is so much I do not know of, but yeah.

Karen: Yeah. So probably a good number of your American friends have had at least one data breach where their information’s been leaked. It seems to be really common here. The data brokers are much more active, it seems.

Anna: Yeah, yeah, yeah.

Karen: Final question. It seems like public distrust of these AI companies has been growing lately, partly because we’re realizing what they’re doing with our data, like you said, being kind of forced to do it, and so trust has become an issue. I’m wondering what you think is the most important thing that these companies would need to do for you to feel like you could trust them, if that’s possible, and if you have any specific ideas on what they could do to earn your trust?

Anna: Yeah. I have the feeling that right now, we have no clue how they’re training their models and how the data labeling takes place, and I would feel very good if a company would just display it. I’m very used to reading sustainability reports. And at least in the ones from Europe, companies handling chemicals really have to explain how they are handling these chemicals.

Companies who are handling data, they have to explain how they’re handling data. And of course not every person would read these reports, but then media coverage and journalists, they take a look, and then they spread the word. And I would really like to see transparency reports by these companies. Then I would feed them into my system, ask the question, and they have definitely programmed or written them in a way that, retrieved with AI, I’m getting the information I’m interested in.

Karen: All right. That sounds like a good idea. Almost everybody wants more transparency from these companies. It’s almost universal in all of these 80+ interviews that I’ve done so far. So you’re in good company there.

Anna: Yeah. And I believe that there is a role of the regulatory bodies. No company would report in a way which is really helpful for consumers if they are not forced to do so, or if there is no comparison with others. If everyone does it in the way they want to believe that it’s correct or right or sufficient, then it’s definitely not sufficient.

Karen: All right. So that is all my standard questions. Is there anything else that you’d like to share with our audience? Or anything you can share about how people would get in touch with you to find out about your AI Tool Times, or anything else that FindingSustainia is doing?

Anna: Yeah. What really is our mission is that people who care, so all the Karen Smileys and Anna Meyers in this world team up or get together and have a say in the discussions, in the dialogue on future AI usage. We should not leave it to some tech enthusiasts who just don’t care or who have no systemic thinking knowledge.

And we really want the people who care to be aware of what AI is and what AI can help with and what the shortcomings are. And I believe that your podcast is wonderful and that your listeners would also like the FindingSustainia community. We are on the platform Skool, S-K-O-O-L, and we have a community there and it’s called FindingSustainia for Impact. I welcome everyone to join us there. It’s a free community and from there, you can get all the information on the AI Tool Time and on everything else we’re doing. We’d love to have more international community members. And it’s a wonderful opportunity, Karen, for me to share this information here on your podcast.

Karen: I’ve really enjoyed reconnecting with you and having this additional discussion about AI, and hearing about your experiences with it, with your custom ChatGPT and with your other adventures into using AI and setting up the Tool Times. I think it’s great that you’re helping people learn how to use it effectively for good causes. That’s wonderful to hear.

Anna: Yeah. Thank you so much. I will join the Christmas party downstairs now! And I hope that we meet again. There will be new opportunities, I’m sure. And to all my American friends – because I will share the podcast with them as well, so you might get new listeners, Karen – I miss you!

Karen: That’s a good note. I’m sure they miss you too! At least we have these technologies to help us stay in touch remotely, so that’s great. I obviously would never have met you otherwise. So I’m happy that we have these technologies. It’s been great talking with you, Anna! Thanks so much for joining me.

Anna: Thank you so much. Bye-bye.

Interview References and Links

Anna Katharina Meyer on LinkedIn

FindingSustainia website

FindingSustainia podcast (in German and English)

FindingSustainia for Impact on Skool | Join our AI Tool Time

Leave a comment


About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”:

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% human-authored, 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber:


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to Beth Spencer for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

Share
