Introduction - Marina Vytovtova 🇷🇺 🇺🇸
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.

Interview -
Karen: I am delighted to welcome Marina Vytovtova from the USA as my guest today on “AI, Software, and Wetware”. Marina, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.
Marina: Thanks so much for having me, Karen. It's really my pleasure. I am a product manager, and I've been managing product for over 15 years now. At this point, I sometimes lead teams, and sometimes I actually get very hands-on and into the weeds, because I am currently consulting as a fractional PM. So whatever I need to do for the size of the client, I do. And that means that I identify for teams what they should be focusing on in order to deliver the most value for their customers and their business in the shortest amount of time. And then I work with them to actually go and do that.
Yes, obviously AI is one of the main tech innovations out there. I look at it from a philosophical perspective, but we also actually implement features that utilize AI to deliver value to customers.
Karen: Great. So as a fractional PM it sounds like you're a combination of an entrepreneur and someone who also works with industry or enterprise clients.
Marina: Yeah. I love how you look at it. Yes, I guess I am. I'm at the office of one of my clients right now and they are a larger company. And I also work with a very small startup. They're pre-revenue. And it's all very scrappy and a lot less sophisticated perhaps.
Outside of work, I am a big fan of wellness yoga, a particular type of yoga that I love and have been practicing for a while now. I'm just a big fan of mindfulness and trying to understand myself as a person. I have a son who is 15, so obviously he's my world – that's a big part of my outside-of-work life. And I also have a husband who is a musician. My son is a musician too, so all of that is a big part of my life as well.
Karen: Oh, that's very neat. Music is being affected quite a bit by AI nowadays. And maybe we'll talk a little bit more about that later.
Marina: Sure, yeah.
Karen: Okay, awesome. Tell us about your level of experience with AI and machine learning and analytics, and if you use it professionally or personally, or if you've studied the technology. It sounds like you are using it somewhat?
Marina: Somewhat, yes. I started looking at AI and machine learning when I was working in my last full-time role. I was VP of product at Create Me, a software startup where we were building an automated manufacturing line for t-shirts, and also customization on t-shirts, so direct-to-garment printing. I was leading software development for that. And because we were building an autonomous production line, we did need to start thinking, “How are we going to deliver the autonomy on an ongoing basis?”
So that's where I took my first ML and AI class. It was at MIT. I first learned about smart factory manufacturing, which does use machine learning. And we were looking at, “How does machine learning compare to AI, and what are the various levels of autonomy?” And then I also took a class there on AI product design and management.
This was all before ChatGPT became mainstream. I've known about ChatGPT for quite some time, because even five years ago, I believe, you could already access it and see what it produces. But then ChatGPT became mainstream and became available to all of us as a consumer-facing app. And I think from that point on, the evolution and development became a lot faster, or at least a lot more visible to us general users and innovators. So I started looking at how I can integrate AI into my processes now that I'm consulting. Or, as a product manager, how do my processes change now that I can actually build? I don't have to always go to engineering and ask them to mock something up. It's interesting.
A product manager is essentially, to an extent, a producer – I don't mean to get into any philosophical disagreements there. So yes, they need to have a very clear business vision: how does this product impact the business and the customer? But then once they have the vision, they have to bring it all together. They work with designers, they work with engineers, and most of the time you don't actually have time to build anything yourself, but you probably are a builder at heart. And that's what brings you to this job of product management.
So now we actually can build some things as well. You could create some first iterations using all of these rapid prototyping, AI-focused prototyping and design tools. So I started doing that, and it was very interesting and sometimes frustrating.
But I also have seen opportunities with clients I work with to change some of the flows and integrate AI into their process flows for customer benefit. For example, I'm working with one client whose customers had to do a load of repeat data entry in the application they use. They don't actually have to do it now – we could have agentic AI doing that for them, and that's a huge value this client could deliver to their consumers. So that's the extent to which I use and work with AI.
Karen: You mentioned that you're using it in your own consulting practice as well, and that you may be building some small tools to help with streamlining your own work. Can you maybe share a specific story about how you're using that?
Marina: Yes. I have tried to build some automations. For example, one automation aggregates content from various sources and then identifies the content that would most likely resonate with my ideal customer persona. It then provides that content to me, so that I can decide whether to share it with my potential customers on social media, and so on.
I have to say this automation is sometimes working, and it's delightful when it's working. And sometimes it doesn't, and then it's very annoying and it costs me time.
And I originally wanted to take it a lot further than what I currently use it for. But what I'm realizing with all of these promises of AI automation tools is that the first iteration is incredible and very inspiring. And then the further iterations, once you need to actually bring it to exactly what you need – that's where it falls flat.
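Note: for readers curious what an automation like the one Marina describes can look like under the hood, here's a minimal sketch in Python. This is not her actual Zapier setup – the feed URLs, persona, model name, and prompt below are all illustrative assumptions.

```python
# Minimal sketch of a content-triage automation: pull posts from a few
# feeds, ask an LLM which ones would resonate with an ideal customer
# persona (ICP), and print the candidates worth sharing.
# Assumes the `feedparser` and `openai` packages; feeds, persona, and
# model name are placeholders, not Marina's real configuration.
import feedparser
from openai import OpenAI

FEEDS = ["https://example.com/blog/rss"]  # hypothetical sources
PERSONA = "Head of Product at a mid-size B2B SaaS company"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for url in FEEDS:
    for entry in feedparser.parse(url).entries[:10]:
        prompt = (
            f"Ideal customer persona: {PERSONA}\n"
            f"Article title: {entry.title}\n"
            f"Summary: {entry.get('summary', '')}\n"
            "Would this article resonate with the persona? "
            "Answer YES or NO, then give one short reason."
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        if reply.strip().upper().startswith("YES"):
            print(f"Share candidate: {entry.title} -> {entry.link}")
```

A no-code tool like Zapier chains the same steps – trigger, fetch, LLM call, filter – without the code.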
Karen: Yeah, I've seen that when I've used it for writing code. The initial drafts are great and then the more you ask it to do, then it starts to fall apart, and breaks the code that you wrote the first time.
I'm curious which tool you're using. When you say it doesn't work, does that mean it's giving you made-up articles, or things that you can't actually share? Or what's it doing?
Marina: No, in this particular case, it's actually just in the Zapier settings. From the get-go, sometimes I exceed the maximum number of lines, and that's what breaks my automation. I'm not receiving anything – why is it not working? And then I would be receiving error notifications that something is wrong: "You are exceeding the maximum lines on something." So I then have to go and figure out why it's not working. And at some point you're just thinking, "I could have just read the stuff myself", because I cannot troubleshoot and figure it out. So that's the type of challenge with this particular kind of automation.
Then, like you said, some of the tools that I've used – one was Replit, which I decided to use for a client's need at the time. We needed to build a quick prototype – actually, we just needed to capture requirements. And I was so inspired by what I saw from Replit at first: “Rather than writing requirements, let's just build this little prototype, and then give it a prompt to document requirements. This will be so much more visual. There will be no room for error.” Then I made a ton of progress in one day. And then I made no progress in two weeks – I just couldn't get it to do what it needed to do. And that was very embarrassing, because the client lost faith in me. And that's not where you want to be as a consultant.
Karen: That's for sure. In our earlier conversation you were referring to that as "accelerating the zero to one phase", but then "going from one to done" is where you needed to put in a lot of – "manual muscle" was your term.
Marina: Right? Yes. Manual muscle, if you have it. But many of us who are not coders actually don't have that muscle. We can't really troubleshoot and say where things need to be corrected. So I think these tools are really great for engineers who can see very well what the tool is doing, and can then direct the tool through exact steps, or just manually step in and correct something.
But for those of us who are not engineers – the good thing is that, as you use them, you become much more familiar with what is required to code. All of a sudden you go, "Oh, this is the file structure. This is actually how it all works. This is the file that it added." Say the challenge is in that file – maybe you can then take the content of the file elsewhere, and try to work with other tools to help you troubleshoot.
Karen: So do you mostly use ChatGPT, or have you tried other AI tools?
Marina: No, I use a lot of different things. I do use ChatGPT a lot for writing, and I have my custom chatbot there that's trained on what I write and how I write. I use Perplexity for research – I would never ask ChatGPT a factual question, for example, but Perplexity is really great for that. I also use Claude. So those are the three that I use the most for content creation. I also use other AI-powered tools. A note-taker that I really like is Granola.
What else do I use that utilizes AI? I use Gamma for deck creation, although I think Canva just had a release that might actually do a lot of what Gamma is doing too. I do use Canva as well. I use so many tools, it's a little scary. And like I said, I did have a paid subscription to Replit. Cursor. I tried v0; I wasn't that impressed with it. I also tried Bolt, but I felt it was so similar to Replit that I didn't know why I would go with one over the other, so I just stuck with Replit back then.
Karen: Okay. Awesome. So it sounds like you've used a good variety of tools. Are there any times when you have avoided using AI-based tools for something, or for anything? And can you share an example of a time when you chose not to use AI, and why?
Marina: I think at this point, for us as end users, if we are interacting with something digitally, it's hard to even know whether what we're using involves AI or not, right? I always tell my husband, “No, don't ask ChatGPT that, don't ask it factual questions.” I would put them into Google search. But guess what? Google uses a lot of AI too. It's still search, it's still going to match things, and now Google also has that summary at the top. So I try to go to sources for research. Again, I wouldn't do research in ChatGPT, but I would use a robust search tool that probably also uses AI, and I do go to the sources that it provides to me. And now ChatGPT actually does that too, and gives you sources.
Now, let me think. See, I'm not a native speaker, and for me, writing in English was always a challenge. It would take me much longer than somebody who is a very verbal English speaker, a very confident communicator. So now, I wouldn't necessarily start with ideas from ChatGPT or any tool that I'm using for writing. I would put in my ideas, but then I would talk to it and interact with it. And I would get the final result, review it, and probably edit it somewhat to match my tone of voice. But most of the time, at this point, I actually use an AI tool to help me write, just because I do it so much faster this way than if I did it myself. It's probably never going to happen that I will know how to use articles, like 'the' and 'a', or the absence of them. I've lived in the States now for over 20 years, and to this day, when I write myself, I will omit an article and then notice, "Oh, it actually calls for an article". Most people who are native speakers probably wouldn't think about that the way I think about it.
Karen: Yeah, I think it's very common nowadays for all the basic word processors – and other specialized tools like Grammarly, of course – but they all use some sort of intelligence. Even before large language models came along, there were spelling checkers and grammar checkers and such.
Marina: Right.
Karen: There are some that will also check for biases and things like that in what you write. But I think a lot of people, even if they were born here, end up using those tools to help them with some of their English. English is weird sometimes!
Marina: Right? You know what I never use AI for? I know that some people look for recipes, or try to come up with recipes, using AI. I wouldn't do that. I actually need a human-tested recipe to cook. I wouldn't want to risk making something that got made up and has not actually been tested by a human being who knows how it tastes.
Karen: Interesting! I've talked with people who have used it to look for recipes, who say, "I have mushrooms and spinach and whatever. What can I make?" But I actually thought they were just using it more like a search engine, to find recipes someone had already published on the web – which had, perhaps, been scraped without permission. I didn't think about the possibility of it actually inventing a recipe. Baking would be hard to get away with. Cooking, maybe.
Marina: Yes.
Karen: Yeah. You get the wrong proportions of some of the ingredients when you're baking and it just doesn't work at all. That's funny.
You had mentioned in our previous offline conversation about the amount of computing power and such that it takes to use AI. Did you want to share some thoughts about that?
Marina: Yeah, I remember that I went to an event maybe half a year ago here in New York, and it was the first time I was introduced to the compute needs and what goes on behind hosting the large language models. It was a presentation from a company that essentially created a water cooling system for server rooms. They are a contractor that goes and works on these implementations. And they were basically sharing how great their tech was – that with water, they could cool so much faster than with air.
But I'm like, what? Water? And I started just learning about all of that. And I was shocked, actually, by how much water is needed, and how much air is needed – air is even worse. And then where do you put all of those things? So I chatted with them after the presentation, and they were scared themselves by how resource-intensive all of this tech is. That led me to research to try and understand, “Okay, so then what happens? How do you power all of that?” It requires energy. It requires water, right now. And the energy is so prohibitively expensive that all of the companies that require a lot of this compute are now looking into nuclear energy.
And our regulation in the United States is a lot more geared towards businesses. I'm learning that Google was able to essentially buy several – like three – small nuclear reactors. But where are they going to put them? Are people going to have any say if a nuclear reactor is being put in their backyard tomorrow? And the way I see this country, it's probably all going to go somewhere in the South, somewhere where people don't have a lot of say. And that is very scary to me.
Now, with DeepSeek's model, there was big progress made on how much compute power is needed. So that was a good development, and there has been a lot of improvement in the resource needs of large language models since then. So I think we are going in the right direction with AI there. But to me it is still completely mind-boggling that a private – or maybe publicly traded, but still for-profit – company can have access to tech that has such public implications, like nuclear power, for example. I think it's just wrong that this can happen, but yet it does happen.
Karen: Yeah. Data centers certainly use up a lot of power. You probably heard about Microsoft negotiating to reopen Three Mile Island?
Marina: Yes.
Karen: There are the environmental concerns you mentioned about water and power usage, and how data centers require all these rare minerals and resources just to build the chips. This was a problem, actually, even before large language models came along, but they have really amplified the acceleration in the use and expansion of data centers. And it's caused a lot of companies to drift away from their commitments to sustainability, which is sad to see.
Marina: Yeah, very sad to see indeed. We all will have to reckon with it at some point, unless we really want to not care and be okay living without sunshine – or on other planets, where we don't yet know what is possible.
But there is a more fundamental question behind all of that, and that's: what are we all working towards, right? I think there is this underlying assumption that growth can be infinite. And guess what? Our planet is not infinite. Our planet is our planet. It has a finite amount of resources – incredibly abundant, wonderful resources. But if we just keep on growing the way we assume we can, we are going to exceed what this planet can offer or support.
Karen: Yeah, there's definitely some concerns around that, so I appreciate you bringing that up.
Another concern I wanted to talk about today is where the AI and machine learning systems get the data and the content that they train on. Sometimes they'll use data that people put into online systems or publish online. It could be writing, it could be songs, or images, or other content. And companies aren't always transparent about how they intend to use our data when we sign up for their services. I'm wondering how you feel about companies that use data and content for training their AI and ML systems and tools, and what you think about the ethics – whether the companies should be required to get Consent from and Credit and Compensate people whose data they want to use for training their tools, or what some people call the “3Cs rule.”
Marina: I do. So it's a bit of a challenging situation, right? I do feel that creators need to be compensated, but compensation also has not really happened fairly before, and not just with the use of AI. There is the free 'public domain', and a lot of the classics are part of that open domain – that has always existed, not just for AI training but for other purposes as well. So I do think that, now that this realization and this developing awareness are here, we need to figure it out.
As for how it was originally built and trained, I think it's a moot point. We need to move on. I do think it's just something that happened, this technology and this development. It already happened. The light bulb was invented. Now we need to figure out how to live with it.
That's my take on this. And it's a take of a builder, somebody who gets excited about new opportunities. So I don't know if it's right, because guess what? People were also really excited about discovering the Americas. And then what happened?
Karen: "Discovering" in quotes, yeah.
Marina: Yes. Yeah.
Karen: You mentioned public domain, and there was a lot of noise last year when the people at OpenAI were talking about 'publicly available' content and conflating that with things that are truly public domain and don't have copyright restrictions. You mentioned that you've been in the US for 20 years. Do you have any insights on your home country and what their regulations are regarding the use of public domain, or concepts of fair use, or anything like that?
Marina: No, that has not really been something that I've been watching closely. I'm sure there are some rules and regs, and I'm sure my husband is familiar with them, because he has published both in the United States and in Russia. But what those regulations are and how they're being applied, I'm not familiar with.
Karen: So when you say ‘published’, was that writing or was it music? Did he publish music also?
Marina: He published writing. He's a music theorist. So he writes texts about music.
Karen: Do you know, has he or your son ever experimented with any of the generative AI tools for creating music, or for composing lyrics, or anything like that?
Marina: Not yet. No. They have not. But I'm sure that they probably will at some point.
Karen: Do you know, does either of them record music that they publish or put out on YouTube or any of the social media sites, anything like that?
Marina: We had one recording, but it's just a YouTube recording of a piece that they didn't compose. They played it as a duet, and they recorded it when my son was eight or nine years old.
Karen: Oh, neat.
Marina: But none of that would've been copyrighted, because they didn't create that piece.
Karen: Okay. One of the things you mentioned was about the legal use of people's content and things that came from the internet. There are a lot of lawsuits – the last count I saw was 41 in the US alone. Some of them are from the big music companies, on whether or not the music of the artists in their catalogs was misused by one of the video or music generation tools. And it's very controversial that in a lot of cases, people's books and written content have been stolen and used in some of these datasets that were originally intended for research. Okay, it's fine, you can have it for research. But then they were used for commercial purposes, and that's outside of what was originally intended or allowed by the people who contributed the content. So there's a lot of controversy around that.
There's a phrase from where I grew up: "the horse is out of the barn, so there's no point in closing the doors". But the thing is that these systems are still being refined and improved, and there are actually some companies that are trying to develop and release ethically-trained tools. And I think we certainly need to encourage that.
Marina: Yeah, I agree with you. And they also need to be on the same ground, right? The same playing field as everyone else, so that doing that can actually put them at an advantage.
Karen: Yeah, that's true. So as someone who has used different AI-based tools, do you feel like the tool providers for ChatGPT and Perplexity and Claude have been transparent with you about sharing where they got the data that they use for training their AI models?
Marina: Perplexity is more of a search tool, and I think it's pretty transparent. The rest of the tools, maybe not as transparent – but I have to say, it's not that I've been that inquisitive either. It's not that I've really been caring about that. I do think we should know about all of these concerns, be curious, ask those questions, and make sure that we are not contributing to this problem. But to date, I'll be honest, I've been more interested in what I can do with it than in how it came to be. I think that's part of human nature, and that's part of the problem, and that's part of why the horse is out. I do think there is something in us that makes us want to go for this kind of infinite growth. Or there is this underlying drive in us that is always curious and wants more – new, interesting, what else is possible – to expand our possibilities.
Karen: Yeah. You mentioned Perplexity being transparent – do you mean because they cite sources and give you the links to them? Is that why you're thinking of them?
Marina: Yeah. They're also not the LLM builders, right? They're just a search tool on top of existing LLMs. You understand which LLMs you are using. But then it's on those LLMs to tell you what they have been trained on. Perplexity is in an easier position; the onus is on the LLMs to show you what they know. And I knew that, for example, ChatGPT is trained on the internet – the ‘publicly available’ internet, whatever that means. But I didn't really dig further into what that means.
Karen: Yes. You mentioned working on building an AI-based tool or system – the quality assurance system for manufacturing. Can you share anything about where the data came from and how it was obtained?
Marina: In that quality assurance system, the data would have come from us. Basically, we would take pictures of what a defect looked like, and of non-defects, and that's how we would train the model to understand what's defective and what's not. Again, we wouldn't start the learning model from scratch. You still would need to use something that had some training on something else, to understand how to recognize what's in the image. And the transparency there, of course, is just marketing.
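Note: here's a minimal sketch of the transfer-learning approach Marina describes – starting from a model that "had some training on something else" and teaching only a new final layer to separate defect photos from non-defects. It assumes PyTorch and torchvision with an ImageNet-pretrained ResNet-18; the folder layout and training settings are hypothetical, not the actual production system.

```python
# Minimal sketch: fine-tune a pretrained image model to classify
# "defect" vs. "no_defect" photos. All names and paths are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for ImageNet-pretrained backbones
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects photos sorted into shirts/defect/ and shirts/no_defect/
dataset = datasets.ImageFolder("shirts", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a model that was already trained "on something else"
# (ImageNet), then replace and train only the 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                # freeze pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new defect/no-defect head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```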
Karen: Yeah. So your data was basically private data from your manufacturing processes from your production run. That makes sense.
You've also mentioned doing some work with training your own custom GPT on your writing and your voice, so I assume that's your own content there as well, right?
Marina: That's right, yes.
Karen: Okay. So as consumers and members of the public, our personal data and content has probably been used by some AI-based tools or systems. Do you know of any cases that you could share, obviously without disclosing any personal information?
Marina: When the data is being used without the public being notified that this data is being used?
Karen: It could be that we haven't been notified. Or just, for instance, if we go to the airport and we go through TSA screening and they tell you that they're now going to use a machine learning system to take your photo and compare it to your ID instead of a human comparing it to your ID, for instance. So there we would be informed. And you actually can opt out from that. I think it's very hard to do, but it's theoretically possible.
In other cases, for instance, if we're just walking down the street and there are cameras, we're not really having a chance to opt in. By being out in public, we're basically opting in, saying, yeah, you're accepting that you're probably going to be recorded.
But there are other cases too, which are maybe more specific. You sign up for a social media website and they ask for your birthday. They don't really need to know your whole birthday, but they ask for it anyway. And if you give them a truthful answer, then they have your real birthday – and that's an identity theft risk, things like that, right?
Marina: Yes, it's true. Even using Gmail – it's going to make some assumptions about what's primary email for you and what's not, and it's going to sort things into folders. I think at this point, we are trained to see if there is value in it. For me, it's super valuable that I don't get all my social notifications in my primary inbox – I basically say, "Okay, yes, you're watching what I'm doing, but this is helpful to me."
So I feel like we are at this point, looking at, “Am I getting some value out of it?” And if yes, then maybe we're okay with losing some of the privacy. In some instances, we are not getting potentially any value out of it, but we cannot actually avoid losing that privacy.
Now, there are some regulations. Even with cameras, I think, they cannot retain facial recognition images beyond a certain time. And then, say I'm using security cameras because I'm at risk of theft – I'm just this small store owner, and how else can I protect myself? Some store owners would put up a sign that "we use cameras" and some wouldn't. But there is an unspoken contract, right, that we all are entering at this point: yes, it's not all that private anymore. I don't know if I answered your question, Karen.
Karen: You did. That was a good answer. What you said made me think about – I don't know if you remember when Google Glass came out, and people who wore them were termed 'Glassholes', because others basically felt they were invading people's privacy by wearing them in personal situations. And the person they were talking to and looking at, and using their Glass with, wasn't really able to consent or say, "Hey, would you take those things off?"
But now they're talking about even newer, less obtrusive glasses that someone could be wearing. I was reading an article the other day where someone was saying, "If someone comes into my home wearing one of those, I'm going to ask them to take it back and leave it in their car, because I don't want them constantly capturing everything about my home and my family photos on the wall, and having it all get fed into OpenAI. I don't want them to have that." So there's definitely a trade-off there between privacy and whether or not people can consent. But if you're walking down the street, and you're walking past someone who has some of those on …
Marina: Or just has a camera on. I live in New York – almost every other step, you will be in someone's shot or in someone's video recording. Like right now: I'm at the World Trade Center building, where the Oculus is, and there are so many tourists. I am definitely captured in many of those videos and images.
Karen: We probably need to have some new social mores around what's considered acceptable and what's not. And it's probably going to take some time for that to shake out. But if you look back at the Google Glass experience, it suggests that it's going to be a bit bumpy.
Marina: Yeah, true.
Karen: All right. Do you know of any company that you gave your data to, other than Google, that made you aware that they might use your information for training AI or machine learning? For instance, any of the different tools that you've mentioned using?
Marina: The challenge is that it's entirely possible that they all have told us that they were going to use this data for training purposes – but who reads that whole terms and conditions statement, right? Or who reads the privacy policy that you are given? It's mostly a formality, and if we treat it as a formality, then we are a bit reckless. I guess we are okay being reckless about our privacy in that case. Am I aware? Basically, yes. And maybe this is a reckless statement. My son is now telling me that I am not being as mindful of his and my privacy, having shared some of the pictures – he's been looking at my settings on Facebook and Instagram.
And he's right, because I have basically just accepted that. I have not been posting anything damaging – or so I thought. Obviously my son thinks otherwise about some of the stuff I've shared about him. But I've just accepted that whatever I post online, even if I say that it's for friends only, at this point it's public. And that does actually make me a lot less active on some of the social networks that I use. And I do see how our perception is changing.
So to answer your question, am I aware that I have given consent to other companies? I am pretty certain I have given my consent to use a lot of my data to a lot of companies, because I have not been actively reading those privacy statements and terms and conditions.
Karen: You mentioned Facebook and Instagram. There was a big uproar last summer about Meta: they were going to start using our content for training their AI and ML systems and tools, but those of us in the US did not really have a way to opt out. I put in my opt-out request, and they basically said, "Yeah, we don't have to do that, so we're not going to", and blew me off. That, I think, was pretty common for anyone who wasn't covered by something like GDPR – they basically said, "No, we don't have to respect your opt-out request." So anyone who's still using Facebook or Instagram – and some people use it for business, and it would be very hard for them to stop – basically has to accept that Meta is using our family photos and our travel pictures and what we write in our comments on the sites. They're using it all.
And LinkedIn actually does the same thing now. I don't know if you remember hearing about that last year. We were all opted in retroactively by default and didn't really get a chance to say anything about it, except for what we wanted them to use or not use going forward.
Marina: Working in technology, I understand both sides, right? I also see that AI can be super valuable in the medical field, for example. All the data does have to be anonymized – even when you are using it, it's anonymized, so at this point they cannot really trace you. I don't know if the public knows that it has to be anonymized. And I feel like, if it is anonymized and it can serve somebody well in the future, there is now a recognition that yes, it's valuable. If my image gets attributed and, I don't know, I get an automated tag on where a picture took place, that's helpful to me. So if a feature gets developed using data that can't be traced back to me, I am okay with that. That's my position now. It sounds like my son probably will not be okay with that!
And I understand that, yes, people should be able to sign in or sign up, opt in or opt out. Now, a lot of people will opt out just to be on the cautious side, but then we won't be able to get some of those benefits. If you asked, for example, "Can your medical data be used?", a lot of people would say no. But that data could actually make the difference in detecting cancer in someone. If it's anonymized data, why not? I guess maybe it's education on all sides: What can you do with that? What is this for? And is the benefit worth this sharing of the information?
And I think that goes to the whole question of what you would use AI for, versus not. Now, knowing how much computing power is needed for it, is it worth it for writing a perfect email? Couldn't the email have been imperfect, without that article, or without something? Because we've lived with imperfect cover letters and imperfect emails for many years, and it's not the end of the world. We've also raised the bar now: “Can you just put this into ChatGPT so that it looks okay?” Maybe it goes back to that whole perfectionism and indefinite, infinite growth, and all of those things that I actually disagree with. It would be worthwhile to think, "Is this hack that we're using really worth the end result, and all of the consequences that go into using this tech – the resources, privacy, and all of that?" And I imagine that it's worth it for some things, and it's not worth it for some other things.
Karen: Yeah. It's so interesting that your son is more cautious about privacy than you are. Some people say that it's the other way around, that youngsters are more open and older folks are not as savvy. But you're certainly very savvy and he has those concerns.
There's been a lot of talk in education. Some schools, even for very young kids, are having them use online systems, or giving them tablets, that capture the children's personal information. And the parents aren't always even aware of it, or given a chance to say, "Hey, why are you using this tool that's going to steal my child's privacy, which I will never be able to get back?"
And I'm curious, do you know if in your son's school, do they encourage, support, forbid use of AI in schoolwork? How are they approaching it?
Marina: I think they generally forbid it. They don't want kids to use AI. But they are also, like everybody, grappling with the fact that the horses are out, and they understand that this is somewhat inevitable. So I think it's shifting now, maybe sometimes allowing for it. But they certainly don't want anybody to use it for writing. Asking Siri to calculate x times y, though, is pretty much the same as using a calculator, so that is probably okay.
Karen: Yeah. Your point about anonymization is interesting too, because there's some discussion that even when you think the data has been anonymized, it's really more just giving people a pseudonym. But if you look at the patterns – a person is here in the morning, then they travel to this location and spend most of the day there, and then they return back to their home location – even if you don't have the person's name, with a surprisingly small amount of data you can actually identify them, GPS mapping being just one example. Just because the identity information has been removed, that doesn't mean a person can't be identified from it. So the whole idea of it being anonymous – we're maybe not as anonymous as we might hope we are.
Marina: Yeah. No, that's true. Yes.
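Note: here's a toy illustration of Karen's re-identification point – even with names replaced by pseudonyms, a coarse (home, work) location pair is often unique to one person in a dataset, so an "anonymized" trace can be linked back to them. All data below is made up.

```python
# Toy re-identification example: the (home, work) pair in "anonymized"
# GPS traces is often unique to one person. All data here is invented.
from collections import Counter

# (pseudonym, home grid cell, work grid cell), coordinates coarsened
pings = [
    ("user_01", (40.71, -74.01), (40.75, -73.99)),
    ("user_02", (40.71, -74.01), (40.69, -73.98)),
    ("user_03", (40.80, -73.96), (40.75, -73.99)),
]

pair_counts = Counter((home, work) for _, home, work in pings)
for pseudonym, home, work in pings:
    if pair_counts[(home, work)] == 1:
        # Anyone who knows where this person lives and works can now
        # link the pseudonym's entire movement trace back to them.
        print(f"{pseudonym} is uniquely identified by their home+work pair")
```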
Karen: So do you know of any times when a company's use of your personal data and content has created any specific issues for you, like loss of privacy, or a phishing attack, or maybe loss of income from it?
Marina: I have been lucky, I think, to never have had consequences from phishing attacks – other than just annoyances, like my phone number getting leaked and receiving spam calls. It's annoying. I've been pretty aware of what can go wrong, and I have not fallen prey to any of those. So I have been spared, but I know of many who have not been.
Karen: Yeah. Let's hope you continue to be spared! It's not fun, from what I understand. Last question and then we can talk about anything else that you want. So we're seeing that public distrust of AI and tech companies has been growing. I'm wondering what you think is the most important thing that they could do or should do to earn and keep your trust. And if you have any specific ideas on how they could do that.
Marina: I think to me it would be valuable to have that transparency, right? "This is what we're collecting, and this is what we're using it for." That would be enough for me to make a decision about sharing data or not sharing data, for example.
Or also being much more specific about what they have: "This is how it's been built, this is what it's been trained on, and this is what we are training it on going forward." I tend to be a lot more forgiving there, because I'm interested in the end result and the capabilities that come out of it. But yes, it certainly would be great to have this transparency.
Karen: Yeah. Okay. Awesome. So that was my last question. Is there anything else that you would like to share with our audience?
Marina: This was great. First of all, Karen, thank you. I really appreciate you raising questions and making me think about areas that I wasn't really thinking about before. I think it's important to consider all sides, right? And it's important to consider the costs, and what everything and anything is for. So I feel like I'm coming out of this conversation with a slightly different perspective. We don't necessarily all have to accept all of this.
Karen: Yes.
Marina: It's like everything – an exploration. It's good to think through what the price and cost are for whatever explorations and advantages we are pursuing.
Karen: Thank you so much for joining me for the interview, Marina. It's been a lot of fun getting to know you and getting to meet you and have this conversation. So thank you. I appreciate your time.
Marina: Likewise. Thank you so much, Karen. Good luck to you.
Karen: Thank you. You too!
Interview References and Links
Marina Vytovtova on LinkedIn
Marina’s consultancy Product in Action
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)