🗣️ AISW #063: Noemi Apetri, Denmark-based business coach

Audio interview with Denmark-based IT lawyer and business coach Noemi Apetri on her stories of using AI and how she feels about AI using people's data and content (audio; 56:26)

Introduction -

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Noemi Apetri, provided by Noemi and used with her permission. All rights reserved to her.

Interview -

Karen: I am delighted to welcome Noemi Apetri today from Denmark as my guest on AI, Software, and Wetware. Noemi, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Noemi: Thank you for inviting me, Karen. It's a pleasure to be here and I'm pretty sure it's my first podcast. I've done other interviews, but not podcasts. So I'm very excited. My name is Noemi Apetri, as Karen perfectly pronounced. I am a lawyer, a tech lawyer. I've worked in the IT industry for the past decade. Now I'm not in the industry, but supporting the industry as an external counsel. I have my own consultancy and I'm supporting one client that is an IT company through their privacy and compliance audit.

What I do also is that I write and I have a publication on Substack called "Me Time", which is a publication that started from this theory that our time is taken from us by apps, by social media, by all kinds of distractions. And that time, if we reclaim it and put it towards what we need in terms of career development, in terms of rest, in terms of investing in our relationships, it would be much better served and we would be happier, less burned out and more fulfilled with our careers, with our lives in general.

So those are the two big things that I do. And now, as a more recent addition to the Substack, I'm adding legal and compliance, and I'm going to help Substackers with having Terms and Conditions for their website, for their publication, with their Privacy Policy if they have any questions. I'm looking into the privacy policy of Substack at the moment, trying to dig some dirt on them, but I think they're a fair company. I haven't uncovered any skeletons so far.

I think that's good enough for an intro. I have a Corgi. I have a daughter. I have a husband. I live in Denmark, but I'm from Romania. That's it.

Karen: All right. That's a great introduction. Thank you. I think I first found you because of what you wrote about the terms and conditions and how we as writers really should have them on our sites. And that was like, wow. It's one of those things that's obvious when you called it out, but it certainly wasn't obvious before that. And I think that was super important to realize that we were all at risk because their terms and conditions protect Substack. They do not protect us as writers.

Noemi: Exactly. I'm happy to add, just to put a bit of meat on that bone, to kinda wrap it up. But basically yes, when you are writing on Substack, if you're writing about a trip you made, something that you wrote like a poem, things like that, you're okay. They have a policy around copyright to protect you as a creator, because the platform at the beginning was a writer's platform. And then they expanded it to creators, which means people that provide advice.

Once you start providing advice, once you start being a business advisor, investment advisor, psychologist, coach, all these things, when you're kind of influencing people's actions through what you write, then there's a liability there — that people follow your advice, basically. And there might be that something goes wrong and then they think you are liable for it and try to go after you.

Of course, there are questions here in the context of a real case. Will it stick or not? And so on. But you can remove all of that uncertainty and stress, and sleep well at night, if you put Terms and Conditions on your Substack publication that clearly govern what's okay and what's not okay, and what kind or amount of risk you accept from one single subscriber. So Substack is not doing anything wrong. And they do allow you, in the settings, to add your own terms and conditions and your own privacy policy.

Karen: It's great to hear about that. One of the other things that we have started looking at is the influence of AI. So I'd like to hear a little bit about your experience with AI and machine learning and analytics, and if you've used it professionally or personally, or if you’ve studied the technology. I know about one experiment you did, but I'll let you talk about that.

Noemi: Now I'm trying to remember what I told you, Karen!

Karen: The ChatGPT terms and conditions generator, remember that?

Noemi: Okay. Where should I start? I found out about AI in terms of LLMs from my husband. My husband came home one day, in 2020, I think. And he is like, "Have you heard of this thing that's happening, that there's this app, and on this app it just answers your questions and it's crazy" and blah, blah, blah.

And I thought nothing of it for like six months or something, until it became bigger and bigger, and I became more and more curious. And the thing is, at that point I was head of legal of an IT company, a privacy solution. And I was running a legal community. And that legal community needs the help of AI to deal with the amount of work. It's quite normal that the ratio of lawyers to other employees in a company is one to a hundred, which is a disproportionate number of people to support with legal advice. It's very, very difficult, very demanding. So then any type of AI help to transcribe, to fine-tune some things, to do research, to go through lengthy documents and things like that, it's a godsend for lawyers.

So as the community started being curious, we would have round tables where we would discuss these topics. I, of course, started to also experiment with it. And at first it was very small things. And then I tried, for the sake of experimentation, more complex topics, a lot about contracting and drafting policies and things like that. And I have to say, ChatGPT is an excellent assistant, even if it's sometimes terrible and doesn't understand everything.

For lawyers, starting with an empty page is so painful. No lawyer drafts terms and conditions from scratch. It just doesn't exist. You start with a template and then you adapt it to the context of the business you support or the person you're helping.

I am a cautious user of AI, professionally and privately. I think it's great. I only use ChatGPT as my main AI source. Not for any preference reason. I just started a subscription to them, and I taught ChatGPT a lot about how I think and about my context. And because it remembers it, I'm too lazy to switch to Claude, but I kind of want to also.

Karen: I've heard some people that really love what they call the vibe of Claude. And some people really love the vibe of ChatGPT and the way that it talks to you. And I think there's some personal preference that comes into play there. But yeah, it'd be great if you could take all your context and shift it from one tool to another.

Noemi: Data portability. And it's actually interesting, because in ChatGPT, and maybe to a similar extent in others, you can choose the level of things that it remembers. So you can say, "Remember everything". They just launched this in Europe right now; I got a notification on ChatGPT that it can remember everything you do there. So it knows everything from chat to chat, and you don't need to kinda give it any context.

I'm not letting it learn about everything I use, but sometimes I say things that it should remember, like, "Please remember not to compliment me for every question I ask you, wasting my time" kind of a thing. Or, "I'm a lawyer. I want things to be clear and precise and this and that. I don't want it salesy. I don't want it sugarcoated.” Things like that. And it remembers.

And I also love one feature that it has. Basically I really love what ChatGPT does in terms of its project functionality, and I can dig deeper into it later on if you want. But I've noticed some really cool things 'cause I've been doing some fun projects for Substack.

Karen: Okay. So that's a good overview of what you used it for. How do you use it in your personal life?

Noemi: For my personal life, it's been really silly things, sometimes like how to take care of a certain plant that I'm struggling with, or where to put it, things like that. And I've also used it to create funny poems, to translate an Italian text that I needed to send to my in-laws (it's very good at translating), and much more. Much better than Google Translate. Yeah, all kinds of small things.

And then one thing that was private, and then it kind of spilled into my professional life, was that I challenged it to create a coloring book featuring me and my daughter and my husband. And it did, and it's similar enough to us for my daughter to recognize the characters and our dog. And then I took it and I created a coaching workbook around boundaries that features all of us. And there are prompts; you can write your thoughts at the bottom. Your kids or yourself can color and kind of relax, and then have a quiet time with yourself and think about boundaries. And, yeah, it was lovely. And that experiment actually taught me a lot about Mr. ChatGPT.

Karen: And I think you had written at least one article about that experience with the coloring book, am I right, on your Substack?

Noemi: I wrote, I think it was a Note. So one thing about Substack, that maybe you can also relate to, is that there's very little engagement for the amount of work you put into a piece. So you need to super-promote it in order to get people's eyeballs on it. And I was really proud of the coloring book because it came out so well. And I think it's so innovative and amazing to be able to work on yourself, together with your kids, coloring a book. And at one point I was sitting around the table with my husband and my daughter. We were all coloring and smiling and bonding over this thing that I created with AI. And I thought it was so magical. And there's definitely not just this kind of fear that "AI is going to destroy us"; there are nice things also. But let's see what side wins.

Karen: Yeah, that's a really good point. The coloring book is really a fun story on how you used it in your personal life. I know you've also used AI in your professional life and for things like the terms and conditions. Could you maybe say a few words about that?

Noemi: Yeah, so I use AI quite a lot. Again, because I'm a lawyer and lawyers don't invent contracts or policies. You need to go off of known templates and then adapt. For clients, for example, with their permission (in a paid account where the learning functionality of AI is turned off), I put in certain documents, certain templates that I've liked and used in the past, and I request it to create a draft. And I give it very clear guidance as to "use this verbatim and adapt it to all the things that I've taught you about this project".

So I create projects. There's a way to create projects within ChatGPT, at least the paid version, I'm not sure about the free one. And then they know who the client is in terms of, it's a company normally. Rest assured, my client is very excited about it. So excited that he says jokingly that he's going to replace me with ChatGPT. And I also jokingly tell him, "Please do, because it's not like my biggest passion to write policies". But I don't think it's a smart move.

I mean, even before AI, you had founders, leaders in the IT field, that kind of just steal a template from an old company they had, or whatever. And they just put it in for the new thing they're doing, and they're like, "It's good enough". You always need to adapt. Things develop so much in terms of context, society, functionality, all these things. You need to really always adapt everything to the context of the company.

And that kind of brought me to this live session that I had previously about the terms and conditions generator, and I'll look back at how I made it and so on. But basically, I was interviewed about it, and Kristina God from, I think it's called Writer's Club, asked me, "So can we just use the terms and conditions you are using, and just slap it on ours, and then call it a day?" And I'm like, "Well, first, it would breach my terms and conditions." But the terms that I created on my own website really reflect what I'm doing. When I created the template of terms and conditions that other people can get as paid subscribers of my "Me Time" publication, you have the shell of, more or less, all the different functionality in Substack, which you can then fill with your things. If you try to take mine and reuse it, it might not be the same, 'cause it might speak about coaching that you maybe don't do, for example.

So that's the thing about ChatGPT, that unless you understand what you're doing there, if you are giving prompts about law and you don't know anything about law, you don't know the quality of the output.

And I've had this kind of situation in my private life where I was trying to get ChatGPT to interpret some test results I had, medical test results. And it basically told me that I might have cancer. But then I realized, trying to figure out how it came to that conclusion, that at one point it asked me, "Do you want the first degree something, something, or the second degree something, something?" I didn't know what they were, so I just picked one. And I picked the wrong thing. And then I went to the doctor, and the doctor's like, "You are low on calcium", which was a much better diagnosis. Thank God.

And then I went back to ChatGPT and I'm like, "What the fuck, ChatGPT? Why did you say I had cancer?" and he's like, "Yeah, that's definitely not". And then I said, "The doctor said it's low calcium." And he's like, "Yeah, that makes sense, da da da." And then I realized that I made that choice at one point that I didn't know what I was making a choice about. And I told him about it. And I say, "Instead of you asking me weird questions I don't know, because I'm not a medical professional -- please remember that -- just give me the most likely, least scary possibilities first. And then go to the worst-case scenarios." And he's like, "Okay, I remember."

Should I go back to the terms and conditions generator and how I created that?

Karen: Sure, sure.

Noemi: So basically, my thought is the way you work with terms and conditions classically is that there's a Word document with little blank spots where you put some information, like the name of the person or the name of the company, the address, blah, blah, blah. You might need to fill in a little bit of information and then you use it. However, because of the hype of AI chatbots and whatever, I wanted to try and see if I can do it with a chatbot. Because then people might want to add additional stuff on top of what I train it to do, because they do some other things that maybe I didn't think of in the moment. So it would be a more complete offering.

So what I did is I created a chatbot. I drafted terms and then I fed the terms to it. Then I worked with ChatGPT, creating a persona of an experienced commercial counsel who knows a lot about US and European jurisdictions, to suggest sections that I might have missed, that I might want to add. I've done a lot of iterations on that, just to understand if I'm missing anything, using it as a sparring partner.

And at one point I realized that it was in a loop. ChatGPT is created to always give you more work to do, to always ask the next question, to use it constantly. So it doesn't know when to stop. There's no end. If I would've continued the prompting of suggestions for the terms and conditions, we would've gone on forever, I'm sure. But that's why lawyers should use AI for lawyer stuff. And doctors should use AI for doctor stuff. And us laymen should use it, but maybe with a lot of caution and a lot of humor, and knowing that it might come up with really crazy stuff and always check it by a professional.

Yeah, so then I created the generator. I tested it. I think also you, Karen, if I remember correctly, tried it, with a few other people. And basically what I had to do at the end was to tell it to just use it as a template. Because it was just erasing all the hard work that I was doing and drafting these really shitty terms and pissing me off.

Karen: Wow.

Noemi: So at the end, my ChatGPT generator just fills out the template. But I think that's also nice, because people maybe sometimes are not really used to working with a template, and they mess up the formatting and they're not really sure. So what I did is, I trained it to ask six questions and to check the quality of the answers, so it can fill out the template correctly, and not change the template.

And that worked the best for me, at least in the first iteration of this terms and conditions generator. Because I trust myself for the terms, and I trust people to know their name and address and where they live and things like that. But I don't trust ChatGPT to keep the level of quality and detail about the project. Because the problem with AI bots is they don't know the context. So they default to their normal kind of general knowledge. So if you let them run free, it's basically just ChatGPT with my name on it, which I don't want. Because that's not really what I want to put my name on, and I have no control over it and over what's going to shoot out.

So yeah, I had to restrict it to very strict prompts: keep to the strict script, and then say they can be invited to go to another chat and add things to the terms, because that's how I can control the quality of the output.
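[Editor's note: for readers curious what that "fill the slots, don't redraft" pattern looks like, here is a minimal sketch in Python. It is purely illustrative, not Noemi's actual generator, and every field name and question in it is hypothetical.]

    # Minimal sketch of a constrained terms-and-conditions filler:
    # six fixed questions gather validated answers, and only the named
    # placeholder slots are filled; the template text is never rewritten.

    TEMPLATE = """TERMS AND CONDITIONS

    Publisher: {publisher_name}, {publisher_address}
    Services covered: {services}
    Liability cap per single subscriber: {liability_cap}
    Governing law: {jurisdiction}
    Contact: {contact_email}
    """

    QUESTIONS = {
        "publisher_name": "What is your (or your company's) legal name?",
        "publisher_address": "What address should appear in the terms?",
        "services": "What services does your publication provide?",
        "liability_cap": "What liability cap do you accept per subscriber?",
        "jurisdiction": "Which country's law should govern these terms?",
        "contact_email": "What email can subscribers use to reach you?",
    }

    def collect_answers() -> dict:
        """Ask the six fixed questions and reject empty answers."""
        answers = {}
        for field, question in QUESTIONS.items():
            answer = ""
            while not answer:  # re-ask until a non-empty answer is given
                answer = input(question + " ").strip()
            answers[field] = answer
        return answers

    if __name__ == "__main__":
        # Only the named slots change; the legal text itself stays intact.
        print(TEMPLATE.format(**collect_answers()))

The design choice mirrors what Noemi describes: trust the human-drafted template for the legal substance, and limit the interactive part to slot-filling, so the model (or script) cannot quietly redraft the terms.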

But then I realized that maybe classic terms are nice also. So I'm also offering just a normal template that people can use, without going through all this learning experience. It's time-consuming, but it really gives you a lot of insight into how it works, and what it could be good to be used for, and not so good.

And yeah, when you push chatbots to solve problems, come up with creative solutions, or think of other language to add in a contract, they can lose control quite easily and delete the whole thing, and just put in some random language. And then you want to pull out all your hair. So, yeah, it's been an interesting experience with the generator. But I'm happy, and it's stable and it works and it's really good.

And then there are templates too, and my clients use the templates more than they use the generator. Then as a fail-safe, I also offer what I call Office Hours, where my paid subscribers can book me. So we can look through the terms together and see if there's anything else that they might want to add that maybe could benefit them.

So, I'm trying to satisfy the perfectionist in me and also make sure that even if I'm helping people that don't have any terms there, they get really good ones. Because they deserve it, and their livelihood is sometimes on the line. Because when you provide services or advice without any terms, you're basically responsible, with all you have, for the advice you give and the repercussions it has on other people.

So you need terms. You need decent ones. They don't need to be excellent. They don't need to cover everything under the stars. But there needs to be something that creates some guardrails, so it's okay, so you feel safe to produce, to help people, to create a lot of output and content that gives people value, depending on what type of advice you're giving. A lot of the crypto and investment guys that write about stuff like that, they can get into so much trouble. And I really hope they all have terms and conditions, but from looking a little bit through Substack, very few do.

Karen: Yeah, I do think Substack could do a better job of surfacing our terms and conditions in the various places where people have to agree to theirs. But at least it's good that there are some of those there.

So one thing that you mentioned is about knowing when to stop. And I think that's really important too: you need to understand the limits. I had run into that when I was first using an LLM for generating Python code. It would get through the first few iterations. It would add in a function. It would add in some error handling. And then it just got to a point where, the more I asked it to do, the more it started breaking other stuff. Like, "Okay, I'm just going to switch over to my code editor, because this is no longer productive. This is no longer saving me time." But it was good for as far as it got me.

And I heard something similar from a guest, Jing Hu. I just interviewed her. It hasn't been published yet. But that was one of her main points, that you have to know when to use it, and you have to know when to stop using it. Sometimes knowing when to stop is harder.

Noemi: Yeah, because you don't know when you've taken it too far. And also sometimes it might be okay, it might be producing good stuff, and then it kind of freezes or crashes, and that's so painful. So painful. But yeah, it's funny and I'm glad that you told me that other people are experiencing the same. It's funny that it happens with Python and it happens with legal terms, and maybe other things too.

Karen: Yeah, definitely. I'm wondering if there are times when you avoid using AI tools for some things or for anything. And if you can give some examples of when you don't use it and why you choose not to?

Noemi: So I don't use it for anything that involves having personal data in there, just because that's something I don't really understand about ChatGPT, so I prefer not to put anyone's data in danger. I think it might know my data, from maybe documents that I've shared with it, but I'm fine with that 'cause it's my own name. But I don't want others to be affected. I have the settings turned off in terms of learning from my data and so on, but you never know.

I used to use it for generating photos for my publication. And now I'm doing the thing with the cartoons. So I guess I still use it, to a certain degree, but the types of images that it generates look like AI. I cannot explain the cartoons well enough to make them look like very simple line drawings. It took so much prompting that I wanted to pull all my hair out.

So yeah, I avoid the images, because it's just very frustrating, and you always have three legs, or six fingers or three fingers. On the first iteration of the cartoon, the coloring book, I had to stop. ChatGPT told me, "It is better if we stop and you calm down." Because I said, "You're getting me really frustrated. You are wasting my time. You're not doing it right, and da, da da da." And he is like, "Maybe it's better if we stop", and I'm like, "Yeah, maybe it's better if we stop."

Karen: Oh, that's funny. I've never heard of it telling someone that they need to take a break! I mean, they designed those tools to be addictive.

Noemi: Interesting scenario. I tend to break ChatGPT, and I just push it to all its limits, but also to the limits of my patience sometimes. I'm very demanding.

Karen: Did you see there was something that came out recently where they said people should stop saying ‘please’ and ‘thank you’? Because it just wastes extra tokens and it doesn't change the way it behaves. So we're just creating all this extra processing burden on these tools for no good reason. Did you see that?

Noemi: I think Mr. Altman can go fuck himself, because he is not making ChatGPT shut up and stop when it needs to stop. So that "PLS" that I put at the end, or "please", is not really making his company go bankrupt; OpenAI is still in the news.

The way I treat AI is not because I think AI is sentient and is going to feel bad about what I say. It's because of how I want to feel, about how I treat and how I communicate. I don't want ChatGPT to be an outlet for anger or anything like that. I want the way I behave to be the same for everyone and everything.

Karen: Yeah, part of it too is just our habits. If we spend so much time talking with the machine, it might be easy for those patterns of talking to slip into how we talk to people too. So I definitely don't want that crossover.

So thanks for sharing all that, and those were great examples of how you've been using AI and when you choose not to use it.

One of the big concerns is where these AI and machine learning systems get the data and the content that they train on. You mentioned being cautious about not putting in personal information so that they don't use it. A lot of times they will use data that people put into online systems or that they publish online, and they're not always very transparent about how they intend to use our data when we sign up and use their services.

So I'm wondering how you feel about companies that use data and content for training their AI and machine learning systems and tools. There's a company called CIPRI, Cultural Intellectual Property Rights Initiative, and actually the leader's from your home country, Romania. They have this “3Cs rule” saying that companies should be required to get Consent and give Credit and Compensate people whose content they use for training. I wonder about your thoughts?

Noemi: I think I agree. I also have supported companies that have incorporated machine learning or AI within their technology. And I was the one drafting those sneaky terms where you say “for improving the functionality of the platform”. And you know what that means, Karen?

Karen: Yes.

Noemi: “We're going to do whatever we want with it.” Because we're training our models or we're learning from what you're using the platform for and we're going to use it for that. So that's what I think. Short and honest.

Karen: Yes, those terms and conditions are always so vague. Maybe I only want them to use my pictures to tag me when I'm in a picture, but then they use my pictures for some other purposes, and we don't really get any control over that. And like you said, a lot of times it's covered under “product improvement”, which is just wide open, and I think intentionally so. At least that sounds like what you're saying. It's intentional.

Noemi: I mean, I have to say that's how I advised companies to draft it if they wanted to use the data. I was advising more pre-AI, actually. Then during the AI era, I moved more into, like, community building and privacy. So I was actually the good person. I moved over to the good guys. But I was advising them, "Do not give a lot of details in the contracts, because the contracts won't get signed as easily. If you say a lot of stuff there, it's going to raise more questions, and the commercial goals and the sales teams will not make their quotas, because you're giving too much detail."

Karen: There was an interesting report that the people who know more about how AI works under the hood tend to trust it less than people who don't know very much about it. Like, "Okay, well, it's magic. We assume the companies are honest and well-intentioned." And they just don't worry about it. So there's a difference in how much people trust it versus how much they know. Some companies say, "We just won't talk about it. And then people won't worry about it."

Noemi: Yeah, I mean, in an ideal world, companies would act ethically and think of the best interests of humanity, and provide something that helps people instead of enslaving them. But I think that's never the case in reality. From whenever, and whoever, invented the VC funds onward, the way companies behave has been directly driven by just revenue, by incremental increases in profits. And you cannot sustain that and also care about ethics or DEI or sustainability.

A lot of the IT companies that I know of care about it just for the stamp of approval, kind of, "Let's do something about it, as little as possible. It's not our focus. We're an IT company. We don't have a big footprint." Well, now AI has a huge footprint. So whenever you're incorporating AI in your functionality, then you really need to look at your footprint.

So there's a lot of ethical things there. I think ethics is beautiful. But I don't know if companies really care about it to really make a difference. It's a checkbox exercise from my point of view.

Karen: I know with regard to sustainability, a lot of companies have been accused of what they call “greenwashing”. In other words, just pretending to really care about sustainability when they really don't. And they're just, like you said, checking the boxes. And there's also some talk about “AI washing” in the same way: that we talk about AI, but we don't really care about the ethics of it.

I was just reading something over the weekend about something called “ethics dumping”, where companies take the less ethical parts of what they do and outsource them to other countries or other areas that don't have regulations. So they're still doing all the unethical things, but they try to shield themselves from being blamed for it. I found a course about ethics dumping that I am going to try to go through, to see if there's anything there that I wasn't already aware of; I'm sure there is. [link]

Noemi: Oh, definitely let me know how it goes. It sounds super interesting. That sounds like what a company would do, yes. If we look at how Facebook acts, how all of the big players act: they outsource anything to a place that's cheaper, that has fewer regulations around employee rights, so people can work all the time, or not get paid overtime, or whatever. They all do it. That's kind of how businesses are run, unfortunately. Even if there is an ethics officer or someone involved in that, I don't see how that person would have so much influence over a company really, unless it's, like, something really at the core of the values of the company and they really stick to it.

And I hope more companies will become like that, at least in the future with the generations ahead of us. But I think right now it's still kind of a greenwash fest, washing everything. Some of it is the extreme cost of the software to do sustainability calculations, for example. It's crazy expensive and complex and difficult to do. So when you embark on such a journey, you realize you need to hire a team. You need to understand what you're doing. You need to give them good data. Then you need to also remediate, take some actions to improve your footprint, and so on and so forth.

So once you start playing that game, you really need to commit and do things the right way. And some companies are forced to do that, because they're in regulated industries where it's required, like in production. Compliance people can advise all kinds of creative ways to reduce costs and not be liable for any type of fines or reputational losses. But if you dig deeper and look under the hood, or you speak to someone in the company, I'm sure you can always find examples of people kind of outsourcing work to a less regulated country, or things like that, unfortunately.

Karen: Yeah. One of the areas of ethics where I think it's been under the hood and they've done some ethics dumping, which I had written about back in March in a pair of articles that I'm actually turning into an ebook, has to do with labeling data so that it can be used by a machine learning system: what they call data labelers or annotators. And a lot of that work has been pushed out to countries that are still developing, where jobs are scarce and not many people can work remotely. And they take terrible advantage of the people and mistreat them, and make them deal with looking at horrible things for hours on end with no mental health support, and it's just really appalling that they do that. Then a lot of them do it by going through, "Oh, we outsourced it to this other company."

Noemi: It happens in the airline industry, for example. They always charter planes from other companies, and then if your luggage is lost or if your plane is late or whatever, they're like, "Well, it's not our problem. It's this other company that we outsourced it to. So talk to them." And you're like, "What are you talking about? You're the one that sold me the ticket." And you're lost in the middle of it. It's the same thing. It's a different industry.

Karen: Yeah. So for the tools that you've used: you mentioned ChatGPT, and you've experimented with a few others, it sounds like, for images. As someone who has used those tools, do you feel like the people who created those tools were transparent with you about where they got the data that they used for training them?

Noemi: I mean, if you just think of the speed of it coming up and its development, it's impossible to think that they had the time to really source it ethically. I mean, the data scraping is no myth. That's how it happened. Everyone knows about it.

But then again, governments pillaged Egypt to take all their statues and stuff, whatever wasn't attached to the ground, and bring it to museums all over the world. So why are we surprised about how we act? That's how humanity has started new fields. Let's say it's a bit like no man's land at first. And then regulations come into place and people take stock of what's happening. And then things settle down. And there's a place where there's more regulation, there's structure, and there's more control over it. But it's always, first, kind of this no man's land. And then slowly the regulation comes in and starts to create some sort of fairness and respect for human rights, and other people's copyright, and so on and so forth.

Karen: Do you know of any cases where your personal data or content or something that you've written has been used by one of these AI tools without your consent?

Noemi: I haven't been so concerned with myself as to Google me, so no, I haven't come across anything. But I did hear of cases where people had researched a topic for a really long time, a topic that for sure was very, very niche, because they used books and there was nothing anywhere, 'cause they looked. And then all of a sudden they asked ChatGPT about that topic, or maybe it wasn't ChatGPT -- one of these AI tools. And it kind of spit out a version of their article. So I've heard those types of stories, but it hasn't happened to me. But I also haven't checked, so I can't really give you an example.

A big debate in the legal and compliance field -- the privacy field, which is trying to be, and to a certain extent is, very involved in AI governance -- is whether it's really likely that privacy will be infringed if you put personal data into these systems. Questions like, "Give me the names of podcast hosts starting with Karen", and it's going to give me your name, because of scraping or because of searching the internet or whatever. And at the end of the day, if your name is out there on the internet, and AI finds it through scraping or through searching the internet, what's really the difference?

Karen: You had mentioned earlier when you were talking about the ways that you use ChatGPT and having it find, for instance, the way you take care of a plant. There's been some interesting articles about ChatGPT and other LLM tools taking search away from Google, which I think is an interesting shift.

Noemi: Yeah, I actually, by mistake, turned on ChatGPT as the default search in my Chrome browser. And I'm always so busy, I haven't found time to turn it back to normal Google search. It's so annoying. Because it uses Bing or something to Google things. Or it's like, "Let me ask my friend". It makes no sense. And we know that AI is very expensive from a sustainability point of view. So every question you ask has this kind of environmental impact. I should really put it on my schedule for tomorrow to switch back to normal Google search.

But then I was listening to one of your newest interviews, published on your podcast, with this question about search. And my question is, if you ask ChatGPT a question that's more like, "How do you take care of a plant?", and it goes to the internet and finds some answers and compiles them and drafts something, or you go and do the same, what really is the difference? And you can also ask for the sources: "Where did you get that?" So to a certain degree, I think it's a fun functionality.

I think the most fun functionality you can use ChatGPT for is to turn on the voice and have a conversation with ChatGPT to improve your language skills. So I sometimes talk to ChatGPT in Danish so I can improve my Danish pronunciation, because I'm too embarrassed to speak it to real humans, so I just use ChatGPT for it.

Karen: Oh, that's a very interesting use of it. I've heard from a lot of people that are concerned about the use of LLMs for search. Instead of the usual way that people get search results and then click through to the website, it changes the whole web: the SEO paradigms, and the way that sites get credit for usage and for the generation of their content. Now, with the LLM summarizing what's there and just giving you the answer, people aren't clicking through to those sites anymore. And so the website owners, the people that put all the effort into building them, aren't getting any business from having done the work to provide that content.

Noemi: Yeah. That's a really good point. And I've seen something of the sort, I think on Substack actually, where there was this graph of the traffic on websites looking way low, in terms of ad revenue and so on. And it's like, "Is Google losing its grip?" And it's like, no, there are more players on the market now, because, for example, Noemi turned on ChatGPT search as her default browser setting, and is too lazy to move it back to the other one. And now I'm not going to websites as much, or a bit less, because ChatGPT is going there. And it's not leaving a mark, because it's probably remembering a picture of that website from, I don't know when, and just recalling that.

AI functionality is going to develop so much, and it does all the time; probably every time you speak to a new person, there are new things happening, and so on. What I really want people to experiment with, with AI, is trying to do projects where you teach ChatGPT who you are and what your goals for the project are, and give it documents, give it things, and see how good it is. It's so good. And then if you try to do the same with an agent, it's quite terrible.

And I have an example. I was trying to make an agent to do cartoons for other people, so they can make their own cartoons of their family, just for fun, to see if it would work. And it wasn't working, because it couldn't remember the style. It would do grayed-out, black-and-white kind of photos, cartoonish styles, but it wouldn't remember anything of what I taught it, because it defaulted to this kind of base LLM behavior.

Because once you access it as a new user, a lot of the training that you give it is lost; it's more just the prompt, like "Create a lady in a field and it's a coloring book", and whatever happens, happens. When you use it in a project where you say, "This is the style, these are the characters, these are the things", I was able to create a book, and then create a second book with the same style and the same quality, where I wasn't losing my mind and trying to pull my hair out. Because it remembered, and I love it.

It's such good functionality, and now I'm using it in my legal work, where I'm putting in information that's not private (no names or privacy-related things, but templates, stuff like that) when I'm trying to consolidate a checklist, and it really helps. It's not perfect. It still messes up. So don't rely on it, but it does help. It helps.

But then my biggest struggle is that I'm a perfectionist, and a control freak maybe also. I gave it my terms and conditions before we talked, and I said, "Can you summarize: what are the top 10 things that I deal with in these terms?" And it spit out some really salesy thing that, I don't know, was very superficial. It didn't really relate to what the terms are about. And the funny thing is that my terms start with 10 bullet points about what the terms are about! So it's not accurate. And sometimes if we rely on it too much, it's to our detriment, unfortunately, so far.

Karen: Yep. It sounds like you've been very cautious with your personal data. So I'm guessing that you hopefully haven't had any problems with your privacy being violated or having a phishing attack or losing money because of someone stealing any of your content. And again, this whole data theft really predates AI. It's been going on for decades now.

Noemi: Yeah, knock on wood. So far, so good. I am concerned, especially as we open up more on social media, on Substack, creating a platform, creating a following, having people that I don't know, or accounts that might not be real, following me. It does put me more front and center to bad actors out there. I think my biggest concern with AI is that I'm creating content that's also video content. And they could create a video of me saying things that I didn't say, in my voice and the way I speak. And maybe it's a video like this, and there are no hands, and it could easily be AI. How can I prove that it's not me?

Karen: Yeah, deep fakes are definitely a problem. I'm curious: there's a setting in Substack that says you can turn off AI training on your newsletter content. Do you know if you've set that setting to disable training? 'Cause they warn you that it may hurt your search discoverability.

Noemi: I think I never turned it on, or if I did, it's off now. I don't care about being found on the internet. My intention with Substack is that the right people find me. Not that everyone finds me. I'm not trying to get a million followers or anything like that. I'm never going to be famous. And I want to find a balance between being open enough to help other people, and opening up so much that people can find out where I live. So yeah, it's a tough balance, right?

Karen: Mm-hmm. Privacy is one of the bigger concerns.

Noemi: More like security than privacy, to be honest. Like actual scenarios that can become very scary, right? So that's why a lot of creators don't show their kids, don't show their houses, don't show a lot of different details because it can so easily be triangulated into knowing a lot about you.

Karen: Last question, and then we can talk about anything else that you want! So a lot of the things that we've been talking about, they're all reasons why we are learning not to trust the different AI and tech companies with our data, with what we do with them, and being very cautious, as you are.

What is the one thing that these companies could do that would help you to feel like you could actually trust them?

Or IS there anything they could do to help you feel like you could actually trust them?

Noemi: Having worked in the IT field for more than a decade, I think the people in companies are normally great, nice people. But you get caught up in the goals, the lofty goals, of these companies. And you end up doing almost anything in order to achieve those goals and climb the corporate ladder and get the accolades or the bonuses or whatever type of incentives they provide you. You're in a company that you see is rising and is going to go really far. And the way to stay on and succeed is to go with the plan, and with whatever the leadership thinks is a good direction. So I don't think there's something companies could do to earn back people's trust. Because when it's about profit, it's about profit, and that's that.

I think what I advise people to do is take calculated risks. There are ways to use tools without giving your data, or too much of it. You can use documents that speak about you but redact your name, for example, and then put them in, or use a template that has nobody's name on it. But get information, get AI to help you.

For example, in my Google Suite, I turned off the AI functionality. The only AI functionality that I have, apart from ChatGPT, is Canva, which has an image generator thing and also some text generator, I think. I don't even know if you can turn it off, and I don't really use it. So it's kind of there; it just appeared one day.

That's one of the problems that the privacy community really has with these tools: every type of software tool started putting AI in their product. It's a real headache for privacy professionals within companies, because you need to report that. There are requirements under the AI Act that you need to keep track of all the tools having AI. Now it seems like every other tool has AI, and then you need to, like, track through the organization: Is it turned on? Is it not? Is it training on our data? Is it not? It's a huge headache. And a lot of companies just add AI because it's trendy. It might not even make sense; automation might have been easier and cheaper and whatever, but it was nice to have AI in it, and now they do. And there's a lot that comes with that.

So, yeah, I don't think companies can do something to earn my trust. It's just from a practical point of view, it's not going to happen.

Karen: Yeah, that's a very fair perspective to take when we look at the way the companies are. One of my earlier interview guests, Julie Rennecker, was talking about how she's worked with a lot of startups, and startups tend to be imprinted with the values of their founders. So if they don't value ethics and treating people honestly and being trustworthy from the beginning, it's very difficult for the company to adopt that perspective later. Because the initial views and values of the founders tend to persist for decades, if the company lives that long.

Noemi: I think to begin with, yes. But whenever new investors come in, and there's influence over the founders, things start to change. But yeah, it's always good when you have founders that are responsible, that care about privacy. Because, especially when you're small, you almost know that the chances of a regulatory data protection authority, for example, coming to knock at your door and audit you are very low. So you feel like you're flying under the radar and you can be a bit more reckless, and you don't need to invest that much time and effort into privacy.

But some companies really do care about it for, if not anything else, because their customers care about it. And then you need to show that you're SOC certified, and that you do the privacy audits, and what's your tech stack, and where's the data going and being hosted, and so on and so forth.

So yeah, there are many incentives to comply with privacy, and one is the cultural imprint and values of the founders. But I think the biggest influence is the clients. If the clients care about privacy, then the founders and everyone in the company also do. And they do, deeply, and they spend a lot of money on it.

Karen: So those are all my standard questions. Is there anything else that you would like to share with our audience today?

Noemi: To be very, very honest, I remember reading the questions when I first prepared for the interview a while back. And now I basically took it in quite a relaxed and unplanned manner, so I hope it came out well. But I think a topic that we bonded a little bit over last time was AI's influence on what careers will look like in the near future and in the more distant future. That's a topic that fascinates me, but unfortunately I don't have the answer. I keep seeing different interviews with Bill Gates and other people, talking about which jobs are going to be wiped out first, and how it's going to affect different industries. And I think that's fascinating. It speaks to our security, our need for certainty, and our need to have our future secured.

So it's an important topic and I feel like that's where governments, international organizations, anyone could do more work to ensure that AI is not going to be a cost-saving mechanism that will derail an economy, where people will not have salaries to spend. That would create incredible instability in an economy. I think those are topics that need to be discussed. Maybe even more than just ethics in kind of a general way, but more like securing people's futures. Because the economy is not going to be run by robots. People need to earn money in order to spend it and have the economy spin around. That's a topic that I want more to hear about in the press, in politics, and in discussions because I think it's a really important topic.

Karen: Yeah, I actually consider these societal impacts to be part of AI ethics. And the impact on people's jobs and livelihoods and their lives, that's one of the five key areas of ethics that I called out in my articles in March, because I do feel like that is easy to overlook. If you're just focused on, "I have this new tool, I have this new technology, what can I do with it?" versus “How is it going to affect the world?” or what Jing called the “second order effects” or even sometimes third order effects. So: we change this, but then something changes, and then that changes something else. And we should be playing chess, not checkers.

Noemi: Exactly. Exactly. And the fact that you cannot say, "Oh, but it's going to adjust in the end. You know, when the TV was invented, the radio lost out, da da, and then it just adapted", and whatever. And I'm like, this is a shift in how work is produced, of unprecedented importance. And what it can do to the livelihoods of people is so uncertain that we really cannot afford to just let it happen uncontrolled and be like, "Yeah, let's see", and then allow companies to outsource people to AI tools more and more and more, without any type of limits, any type of regulations.

Oh, just to win against China, basically. And I was like, "Oh no, this is not going to be good for the future of people." I still believe that's such an important part. And of course it's part of ethics, but it's the part of ethics that is the scariest for me, because I'm trying to teach my daughter and educate her, and I think, "Okay, I want to know a little bit about the future that I'm preparing her for." But I don't know what the world will look like two years from now anymore. There's so much uncertainty.

So what I'm banking on, and why I'm writing on Substack, and why I'm letting my human side be more visible than the words and the templates and the legal work that I do, is that the words and the templates will sooner or later be done by AI, whilst community building, talking, connecting to people, responding to people's fears -- that human element is what will not be able to be outsourced [to a] mechanical entity for a really long time. And that's why I moved away from commercial contracting and the things in the legal field that are the most affected and in danger of disappearing with AI, towards more, like, people management and communication and strategic thinking and processes and training and things like that, to hopefully avoid becoming obsolete with the help of these tools.

Karen: Yeah, that's a great summary and I'm glad you brought it back around to your Substack. So I want to make sure that people know that they can get help from you there in your publication, in your community that you're building there. And I think that's great 'cause it's important that we keep that human element.

Noemi: Exactly, because you can ask ChatGPT today, "Create terms and conditions for my Substack", and it's better than nothing. But I always say there's like a gradient of:

  • You can have nothing.

  • You can have a disclaimer.

  • You can have ChatGPT create terms and conditions and hope for the best.

  • You can have my terms and conditions created for Substack.

  • Or you can have a lawyer sitting down with you side by side and drafting something specifically for you, which might cost thousands of euros or dollars.

There are options, and everyone can choose what they do. But I do really urge people that are providing advice, that are teaching people things, that are providing things to download, that are doing one-on-one meetings, coaching, mentoring, anything like that -- psychologists putting out advice about how to deal with different things like stress or burnout or setting boundaries, which is what I started writing about in the month of May -- they really should consider having terms and conditions.

It doesn't have to be anything that I provide, but definitely have something there, because it's so important. It's going to protect your livelihood. If something goes wrong, there's a limit. You put a limit on how much you stand to lose from one single claim, from one single person, who maybe is not paying you a dime to get that advice, and then something happens, and most likely it's not even your fault, you know? It's one thing what you communicate, and another thing what the person understands within their own context and their own kind of situation that they're in, right? So I have this disclaimer that I'm going to share as a template with all my subscribers. That's going to be this blanket thing that you can add to your Substack; it's just the beginning of "What I say here is for information purposes only." [See the Series References of this post below for Noemi's post with example disclaimers]

Karen: Noemi, thank you so much for joining me today for this interview. It's been great fun chatting with you, and we've got a lot of good information here for folks, and good luck with your community and with your volleyball!

Noemi: Thanks.

Interview References and Links

Noemi Apetri on LinkedIn

Noemi Apetri on Substack

Me Time Coaching: You're the CEO of your own life!
Protect your Substack, build on a solid foundation.

Get Noemi’s Coaching Booklet to color your Me Time!



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Reference for the disclaimer above is Noemi’s May 14, 2025 article:

Me Time Coaching: You're the CEO of your own life!
Protect yourself with a DISCLAIMER TEMPLATE for your Substack! (First Edition)

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

