Introduction
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.

Interview - Evgeniy Panchovski
I am delighted to welcome Evgeniy Panchovski as my guest today on “AI, Software, and Wetware”. Evgeniy, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
Hi. Pleasure to be here, and thank you for the invite. Well, what do I do? This is usually one of those questions where I don't know how to answer because I do a lot of things! But to make things simpler here, I'll just say my profession first, and then my hobbies, because I think that's the two most important things.
So in my work life, I'm a marketer. I work in digital marketing, and I cover pretty much everything except media buying and except SEO. So I pretty much do everything under the umbrella of digital marketing that is not paid ads and SEO optimization.
And outside of work, I draw, I write, and I try to learn as many new things as I can. And both in my work life and in my so-called casual life, AI actually has a place.
Thanks for sharing that background. It's actually pretty hard to avoid AI in our daily lives nowadays.
Yeah. That's true. I am trying to cut it back as much as I can, and we will talk about this if you want.
Yeah. Absolutely. That'd be great. So first, tell us about your level of experience with AI and machine learning and analytics, and if you've used it professionally or personally or if you've studied the technology.
In terms of studying the technology, I know a very limited amount. I know the basics of how a large language model operates and how it functions, but not anything advanced.
As to using it, yes, well, in work, we use ChatGPT, and we've also used Microsoft's model. I mean, we've tried using it, but it ended up being just not that good of a model, basically.
And we've also dabbled with some voice generation tools, with image generation tools because in marketing, especially, I've noticed this. Some of these tools are very, very helpful because they help you do a lot of things faster.
Now having said that, they don't make it better, oftentimes. Because I've seen a lot of ads and a lot of videos where both the image and the video are generated by AI, and you can spot it. AI is not that advanced yet, so it can't really replace human labor or human creativity. And it shows; you can see it and you can hear it, and it's just a little off-putting.
That's basically my experience. I haven't really dabbled in it for anything else.
Sounds like you've tried a bunch of different tools. You mentioned ChatGPT and Microsoft. Was that Copilot or something different?
Yeah, I think it's Copilot, yeah. Because I remember when they launched it, it was just Bing AI. Now it's Copilot, yeah.
And you mentioned voice generation and image generation. Which of those tools have you tried?
For voice, we used ElevenLabs. And for image generation, we used Midjourney and Leonardo.Ai. And we've also used ChatGPT itself to do some images. But, I don't know, compared to Midjourney, at least when I was using it three or four months ago, it was very subpar. But now I saw that Midjourney is getting worse and worse. I mean, worse in the sense that people are just starting to realize that all of these AI models are actually trained on stolen data, especially with visual media. So it's just - there's a question there. Would you rather use it? Would you not use it? Where's the ethical question?
Yeah, those are exactly the kinds of questions that we want to explore in these interviews. So it's great that you're already aware of that.
Yeah.
Yeah, I had looked into ElevenLabs. I did an evaluation last year of voice cloning tools and looked at which ones were ethical, and which ones were something that I might consider using. At the time, Substack didn't have read-aloud enabled for my newsletters, and I wanted to have that for accessibility.
So rather than manually recording all of my episodes that were not interviews, I wanted to use that, and so that's why I looked into it then. So ElevenLabs, they have some interesting things. But when they made some additional moves later in the year, it sounded like they weren't being very forthcoming about the ethics of where they got the data that they use for that.
Yeah.
And I pulled away from using them.
See, that's the problem. I should have done my research when using these tools because I really like knowing, where did this come from? Because when it's voice generation, the last thought I had was, “Okay, but where are these voices coming from? Like, how is this actually generated?”
I know that there's a lot of tools to generate music also. And you have to ask, “Okay, where does that come from? Is it just resampling like every other model? And then whose voices does it sample?” It's kinda wack, honestly.
Yeah, no, those are exactly some of the questions that I had started looking into last year when I started writing full time about ethics and music and everything going on in the world of AI. And, yeah, a lot of the music sourcing isn't legitimate, and certainly not for the images; Midjourney is known for having scraped artists' work. And a lot of the video generation tools have, likewise, scraped YouTube videos and violated copyrights and such. So it's actually hard to find tools that are ethically trained, and sourced, and built.
Yeah.
All right, well, thank you for sharing that background. Do you have any specific stories you can share on how you've used one of these tools that had AI and ML features? You mentioned that ChatGPT was subpar to what you were seeing from Midjourney. I'd like to hear about one thing that worked well for you, and something that maybe didn't work so well.
Okay. Well, I'm going to start with the first story I have using AI in general, and that was when Midjourney launched, and everybody had, like, 30 free generations on Discord. So me and my friends were just generating a bunch of stuff, just trying out different video game characters or maybe anime characters and just trying to see how it works.
And I noticed, on the second or third generation of an image, that the images that it gave me – I think it was an image of a samurai – on the lower right hand corner, you could see something scrawled. You could see something, like, blurry, distorted. And when I zoomed in, it turned out to be a signature. And then I saw many different images that had the same thing.
And I was like, okay. This is obviously somebody's signature. It probably took somebody's art piece, combined it with somebody else's art piece, and then the signature just kind of got washed away. And then I realized, “Okay, this is what's going on here.”
I was taken aback, and I was kind of skeptical about using it. Because I draw myself. I mean, I don't draw professionally. I occasionally sell a painting or two. But thinking about it, I definitely wouldn't want my work to be used for training these models, unless there is some sort of artist's fee or an artist's cut or something.
Or maybe if they contact several artists and like, “Okay, look, we have this idea here. You give us your art. We train this model, but you get something back, or we do something for you.” That would be, you know, that would be fair.
But the way it is now and the way it was back then, it's a business-first approach, which I very much don't like. Everything is primarily profit-driven, which just hurts everybody in the long run. Even if you launch a successful business, you know, even if you make your project successful, is it really successful for a wider group of people, or is it not? And I think it's not.
Mmhmm.
But I've also had very good experiences with AI. One of the best things I've done with AI is just shorten my workload quite a lot. Because there's a lot of things, especially in marketing - let's say, if you're developing a strategy, you want to write everything down very meticulously, very verbose. You want every detail to be scheduled, you know, everything to be written down.
So what AI does is if you give it like, let's say, if you have a brainstorm session and you write a bunch of things down, you can then just give those things to the model and say, “Look. I need this, this, this, and this. I need you to structure this in a way that is presentable to a client. I need you to structure all of these steps that we need to take, and I need you to expand on them saying x, y, and z.”
And then it just gives you this beautiful table-format thing that you can just present to somebody without having to do the manual work of writing all of this down. So that's very helpful. That's actually, I think, one of the main reasons why it's so widely used, especially in our field: it just cuts out the so-called ‘monkey work’. You don't have to deal with it.
Or if you want to write a short summary, you have an article, you could just paste that - you know, “Give me the most important things” or “Give me the things that I need to know in my specific profession to do x task”. And it just gives it to you. So that's great. That's actually the primary thing I use it for.
I used to write articles as well with ChatGPT. But then I realized that they all sound the same. And I'm sure you can agree with me here, because it's just, no matter how you prompt it, it has this weird bias towards certain phrases and words, and it just always uses them. And it just sounds AI. It reads as AI, and it's just annoying to do. And that's annoying to read.
Yeah, I'm sure as the tools get better that we'll find that it's harder to casually detect whether something's written by AI. Some people say that it's already not possible. I think, mostly, I can still tell.
Yeah.
Yeah. So I'll read something and go, “Uh, that doesn't sound like it was human-written”.
Yeah. It's very visible, especially if you've used AI and you use AI on a daily basis. You start to spot things. You start to see the pattern. For example, one of its most widely-used phrases, in my experience, is “not only this, but this”.
So for example, if you want to talk about social media marketing, it always starts with “Social media marketing is not only a viable way to reach your clients, but it's also blah blah blah blah blah.”
And it uses that all the time.
Very interesting. So you mentioned that you use it for article summaries. How often do you find that those summaries are accurate, or how often have you found that there's something that was put into the summary that wasn't actually in the article?
Not as often as you’d think, by the way.
Okay.
I mean, sometimes it does hallucinate, and it does tell you something that was not in the article. But more often than not, it actually summarizes it properly, especially if I'm using the latest model.
And how about when you're generating documents for your customers? Doing the formatting and automating that sounds like definitely a time saver. But as far as it generating the documents and having it expand on concepts from the brainstorming, do those mostly come out the way that you want? Or do you find on review that you're always having to go through and tweak them anyway? It probably still saves you time, but I'm wondering how much post-editing you may have to do.
Well, it depends on the quantity of information, I would say. Because if you give it a somewhat shorter task, a somewhat simpler task, it does it quite well, and it doesn't hallucinate that often. But if you give it a very complex task or a very information-heavy task, it usually tends to kind of get lost in all these concepts.
And, also, I've noticed that, sometimes when I was in university – this is a totally new topic, but still, I just remembered – when I was in university, we had this class where the professor was like, “You can use any sources you want.” And somebody asked, “Can we use AI?” And he was like, “Sure. Use AI. You can use it to generate your writing. We will go through an AI checker just to make sure. So if you use AI, you know, use it wisely because if you fail the task, then we're going to give you an F.”
And what I used it for, though, was for research purposes, which turned out to be the biggest mistake I made. Because I wrote an essay, and then I was like, “Okay, here's the essay. These are my key points. This is the topic. Give me a list of sources.” Because I didn't have any sources at hand. And it just generated a bunch of sources, like seven or eight sources, and I wrote them down. And then when I got home, I was like, “Are these sources actually real?” And I looked at what it had generated for me, and it had made up sources. It had made up scientific papers. It had made up books. It had even made up a video that didn't even exist. So I was like, “Okay, well …” And, yeah, I ended up getting an F for that, of course.
Wow. Okay. That's really interesting to hear. Can I ask, which tool was it, and how long ago was that?
It was ChatGPT, but I think before 4.0 came out.
Okay, but still, one of the fairly recent versions.
Yeah. Yeah.
A friend, someone I interviewed earlier, she's a book author, and she was trying to make a marketing plan for how to market her book, and wanted to find a list of podcasts to go on. And she did something similar. She asked for a list of recommended podcasts. And out of 10 of them, I think 8 of them didn't exist.
Woah. That's whopping as a number, actually.
Yeah. That was pretty much a waste of her time.
Yeah. That's what I've noticed. When you ask it to research things, it just totally screws up. I think Gemini is slightly better at that because it has access to Google's API. But still, I would never trust it.
Yeah. I know a lot of people tend to use large language models as search engines, but they're really not designed for accurate search results. That's not what they're for. Yes, you could use the end of a screwdriver as a hammer, but that's really not going to be the right way to hammer in a nail!
Yep, I think the reason why people do that is because it's very easy, and it saves them a lot of computing power. What I mean is, let's say, for example, if you want to know something about ice cream and you type in, “How do I make this ice cream?” ChatGPT gives you a very straightforward answer. You don't have to look through Google. You don't have to look through articles. You don't have to scroll. You don't have to know. You don't have to use your brain. Which is a very bad thing, actually, if you think about it. Because once you get used to this, it's always going to be your default option.
And then over time – this is, I think, the main problem with AI – you're just going to trust AI to give you the exact answer you need. And the less you use your brain, the more it just, you know, withers away. But, yeah, that's probably, I think, one of the reasons why people prefer searching with AI tools, especially with large language models in specific.
So you’ve given some good examples about how you've used or tried AI tools for schoolwork and for your professional work. How about in your personal life? You mentioned drawing and painting. I assume you aren't using AI tools for drawing or painting?
No. I don't.
How about for writing? You've got a Substack newsletter, obviously, that's how I found you! Have you used it at all in conjunction with your writing, either for generating images or as an aid for, maybe not researching, but for the writing?
I used to have an AI-generated image on my Substack as a placeholder when I launched it, because it was like, you know what? This doesn't matter. I'm going to change it anyway, but I just want to have something there. So I used Leonardo to generate me a little wizard.
But for writing, no. I tend to avoid it. And I avoid it purposefully because what I'm trying to do with my writing is to be as honest as possible and as raw as possible. So I don't like perfectly-curated language. I don't like when people write too well, especially on Substack because I think it's a platform where you should, you know, have a more honest voice, let's say. At least that's what I'm trying to do. That's my so-called niche, although I don't really think I have a niche. But, yeah, that's my goal. And I don't think AI can help with that. I can try to make it sound like me, or make it sound like someone who's not trying that hard and who's brutally honest. But then it ends up just being naive. It sounds childish. It sounds like a teenager wrote something.
But for research, I don't remember, actually, having used it for research. So like I said before, when I try to research things, it usually gives me either results that don't exist, or it gives me something very vague and just very generic. So it's not as helpful.
Okay.
The only thing I have used it for on Substack specifically is proofreading, and I think that's fair game.
Yeah. That's really more of a form of automation, almost using it as a checker, and I think most people don't have a concern with that. There's actually something that came out from the Authors Guild just recently, where they are giving people who are members the opportunity to self-certify that they don't use AI. But they do allow exceptions for grammar checking and things that are not generative, in terms of trying to create the content for you.
Right.
But just using them as a tool or as an assistant. And so they're already acknowledging that that's something they consider to be an acceptable use of AI, and it doesn't dilute someone's voice.
Yeah. But hasn't that been around for a long time? I think we’ve had Grammarly for the past ten years, and that's pretty much the same thing. It's pretty much an algorithm. This is a very interesting point because we started calling it AI when all the hype came around. But technically, we've had tools that are based on algorithms for ages.
Oh, yeah. Decades.
Yeah, there has been AI as we know it in factories since the eighties. So this whole thing, I think, is a little bit of a marketing trend right now.
Speaking as a marketer 🙂
Yeah. Of course. Of course.
Yes. I think AI has become sort of an umbrella term. It encapsulates a lot of things. Most people consider that machine learning is a subset of it. And generative AI, which is the product that most people hear about and are aware of, is another part of AI, but it's not the whole thing.
When I talk about using AI, I'm talking about some of the more statistical methods as well as some of these generative methods. And so there's a lot of areas that fall in there. But the ones that people hear the most about, they see the robots that walk around, and they hear about the cars that are driving themselves, and the generative AI.
The analogy I use is an iceberg. People see those, and they notice them, and they talk about them. But then there's all the other parts that are under the waterline. For instance, the Netflix recommenders, or the music recommendation tools, or optimizations, or credit scoring, or email spam filtering. Those are all machine learning, and they've been around for ages.
And, like you said, the grammar checkers, a lot of those have been around before there was generative AI in them. And so it feels like a more comfortable thing to say, “Okay, this is not writing it for you. It's providing feedback for you about what you wrote.” And then I think most people are pretty comfortable with that.
Yeah. I think that's the way it should be. If you need support to finish your piece if you're writing, then are you really a writer? I mean, if you need someone to generate little words for you – I don't know. There's a very thin line there. I mean, I'm very hard to one side. Like, I'm very strict, and I think that if you're going to write something and you're human, then just write it as a human would, because there's no need to be perfect. And if you're perfect, then you're not human. So that's where I draw the line.
Yeah. There are stories about some of the people that used to make quilts and textiles, and they would always purposely leave in one very small mistake when they did it. And it was more a matter of saying, you know, “Only God is perfect, and we humans are not perfect.” And so, even if they made the entire quilt perfectly, at the very end, they would put in a small mistake.
That's cool. That's cool.
Yeah, I thought that was neat.
You've talked about drawing and painting and writing. Is there any other purpose that you would avoid using AI for?
I remember we had this client that wanted to incorporate video ads in their strategy. And because we're a small agency, basically a startup, we don't have access to all of these tools that a bigger agency would have. So for us to make a video ad, we would either have to hire someone very cheap to do the voiceover, or we could use AI, or one of us would do it. And I was like, “Well, the easiest thing here is to just use ElevenLabs to generate the voice or clone my own voice and make it sound a little bit more human.” But then I thought, “No. This is not good. As a consumer, as a buyer, I will never fall for this. This is just so crap. I'm hearing right now generated ads in my head, and it's so not interesting to listen to.”
I've avoided that. I said, no. I'm just not going to use ElevenLabs. I know we have access to it, but I'd much prefer taking my time and actually using myself as the voiceover and making it sound human. Because I think that would be like cheating the client in a way. Because they want something. They want it to work, obviously. And in my opinion as a marketer, using AI would just make it work less effectively. So I've avoided using that for voiceovers pretty much ever since then, and I've never actually used it for voiceovers.
Also, there was this trend of people launching AI-based YouTube channels. I'm sure you've seen them. And it's usually an AI avatar with an AI voice that just tells you how to make money with AI, which is a big topic. Like, holy crap. Like, everybody wants to make money with AI. That's something I've also avoided. Because a lot of people are like, “Why don't you try this? You know, you can just make a title. You know English, you can just record your voice, just clone it.”
And I was like, “Umm, that's kind of giving people false hopes, because not everybody can make money from AI just like that.” That's the big lie that people buy into a lot of times.
Yeah. So one thing that you had mentioned earlier, and I definitely wanted to come back to that, is the concern about where these AI and machine learning systems get the data and the content that they use for training. A lot of times, they are using data that people have put into online systems like YouTube, or people have published it online, like in WordPress, in a blog, or something like that. And companies are not always transparent about how they plan to use our data when we sign up for these services. I wonder if you can talk a little bit more about how you feel about companies using this data and content. You mentioned that as an artist, you wouldn't want your artwork used without compensation or credit, or having the option even to consent to whether or not your work is used. And I'd like to hear a little bit more about how you feel about that.
Well, this is probably something that shouldn't be said out loud. That's how I feel. But no. In tech, like, let's go back to reality. I think it's just expected, in a way. A lot of people are shocked by it and they're disgusted by it, and so am I, but you kind of expect it in a way. Because how else are they going to get this data? If they don't use public domain data or if they don't outright steal data, how else are they going to get it? I'm not a US citizen, but as far as I know, if somebody wanted to go through the hassle of legally obtaining some information, they would have to jump through a lot of hoops. And if you're trying to launch a product and if you're trying to conquer the world with that product, and if you're trying to launch a company so your investors are happy, then you would find a way to work around that. And the easiest workaround is to just take publicly available data without asking.
Now I'm not fully aware. I might be, you know, speaking about something I don't fully know. Let's take Facebook, for example. I'm not aware how they operate within their content selling and distribution departments, because I haven't read the terms and conditions. As we all know, it's, like, 300 pages. So I'm not aware whether Facebook can use the data of their users and sell it, or if these tools are just scraping it off of Facebook without Facebook even realizing. Or any other site for that matter.
Yeah. There was a big uproar last summer about Facebook. They came out and said that they were going to basically start using all of our content, our personal photos, and everything for training their AI. And the people in Europe who were protected by GDPR were able to opt out of that. And in the US, they said, “Well, yeah, you can put in your opt out request”. But I put in mine, and, basically, they ignored us. Like, “Yeah, we don't have to do that, so we're not going to.”
Yeah.
So I just deleted my content. And that doesn't mean they didn't already save it and scrape it and aren't going to use it. There's really no way of knowing.
LinkedIn also, I think we're both there, and they had the same thing, where they just went out and said, “We are retroactively opting everybody in for everything that you've ever put into the system up till now, if you're not under GDPR or something similar. You could opt out for future, but we're already taking everything you’ve ever put in here before.”
Yeah.
That was not cool.
Yeah. People have to talk about, you know, capitalism and how it's ruining the world. And you can see it. Basically, the success of these platforms is entirely based on their users. And then you say this thing, and you just basically wave a giant middle finger to all their users. And I don't know. People still stick around, though. I saw this argument online, actually today, and somebody said, “Well, when you're just that big and when you're that powerful, you can pretty much do whatever you want at some point. Like, if a bunch of users quit, that's nothing.”
Yeah. Although, I think that we as consumers do have some power. Like, yeah, if one person quits, not a big deal. If millions of people quit, eh, it's going to start to have an impact.
Yeah. Definitely. The more people do that, the better, honestly. That's why I like Substack, by the way. It seems like a place where you are not under the hateful gaze of some corporate overlord.
Yeah, I've been there - actually, it's just a little over one year since I joined there and started writing. And I really like the vibe there. Feels much more like, you know, very early days of Twitter before things got so polarized. And people there seem to genuinely want to help each other, and that's just really cool. It's a great place for writers.
Yeah. Exactly. I was about to say that. It's like the old blogging days, but with brand new and social media elements, which is great.
Yeah, and it's also a great reading experience. I still love that there aren't any ads when I'm trying to read people's articles.
Yep.
It's amazing.
That's a very big deal.
Yeah. That just stands out. So that's just a really awesome experience. Plus, I'm meeting people there that I never would have run into on, say, LinkedIn. Or that LinkedIn would never have shown me. So that's really fun.
Yep.
So I think we've covered most of the standard questions. Oh, yeah. So it's one thing for us to seek out AI tools and use them intentionally. It's also somewhat unavoidable. Like, as consumers or as members of the public, there is probably data or content about us that has been used, with or without our consent. Cases like if you take online tests and the tools are monitoring you, or if you travel and you go through the airports and they use screening and they take your picture and use machine learning to compare it to your ID, which they're doing here now in the US. I'm not sure about in Bulgaria.
I actually don't know. I should check that out. That's a good point.
There's a lot of situations where, just being out and, you know, having a driver's license or many other things, just shopping at a grocery store or using a credit card. There are so many ways that our data is being used that it's kind of unavoidable. And I'm wondering if you are aware of any cases like that where your data has been used, or where you know that it's been used, or if it's ever caused any issues for you.
Not really. I pretty much know that my data has been used somewhere, but I don't know where exactly and how exactly. But I think we've kinda gotten accustomed to it. I'm basically in an ex-Communist country, and there are heaps and heaps of documents still laying around in some dusty office downtown from the old days. And that's pretty much data that's still there. They know everything about some people. So I'm not surprised that with the advent of technology and with the rise of these algorithms and these models, there's going to be more of that.
And I think a lot of people are concerned globally about their data. I know people that are basically, like, they don't want to be filmed. They don't want to be photographed. They don't want any social media presence because they're afraid that their data is being used. But I think it's not that malicious. That's what I'm trying to say. It's not that bad. I mean, so what? They have access to data. What are they going to do? Pretty much nothing. They're just going to target you with ads. Well, okay. I mean, it's not as dystopian as some people think it is. So that's why I'm kind of okay with that.
Yeah, I think for marketing, or being marketed to, people say, well, I can ignore the ads, or I can choose not to buy from the ads. They can sort of live with that, because at least it feels like they have some control over that part of it.
There's some dangers with, for instance, the voice cloning tools. And especially there are dangers around the way it affects children. So I think there are a lot of concerns about that, about misuse of images or voices to conduct scams, things like that. So I think there are some legitimate concerns about that.
The other side of it, I think, and you alluded to this earlier, is the inherent unfairness of basically stealing the work of creative people who are drawers and painters and writers, and different professions where their life's work is now being consumed without their consent, without compensating them, and it's harming or even taking away their livelihoods. Musicians, you mentioned, they're also being affected already.
Yeah. Yeah. I think, though, it's one thing to steal somebody's work to train your model. The artist can still make money, technically, but it's just kind of a disrespectful move. Because I don't know if people are willing to buy AI art, because it looks bad. But still, I mean, there is generally a concern for creatives. And for the elderly also, because there's been a lot of phone scams lately, especially in Bulgaria. And if you can make a voice so believable and you can perfect it and you can send it to a bunch of elderly people, they could believe you. And you could, like, scam en masse without having to even recruit people to do the calls for you. And that's a concern. That's a concern for sure.
Yeah. So, last question: A lot of people distrust these AI and tech companies, because of things that we're learning about. “So you're doing what with my data?” I don't know if you have that sense of distrust. Like, how much do you feel like you trust these companies? And is there something that you think they could do that would help to increase or to build up your trust?
Well, I would say I have an inherent distrust in every company, actually. That might sound wild, but when a business grows this large and it has this amount of capital and this amount of power, I think it's normal to distrust them. Because it's not normal to be this huge and this impactful without having done anything bad, or anything concerning, or anything malicious. So there's definitely something going on in there, definitely to somebody's detriment. And that's one of the reasons why I distrust these companies, baseline, by default.
If they wanted to build up my trust, I think they would have to be a lot more honest and a lot more transparent with their customers. And to tell them exactly what they're getting into, especially in the tech sector, and all these social media companies, and all these Internet companies. Exactly what you're getting into by subscribing to them or using them, in a very user-friendly way.
Because, basically, with all these long terms and conditions, they're just washing their hands, because they know that you're not going to read it. They know they're going to overwhelm you with this huge amount of information until you just go, “Ah, whatever, you know? Just press subscribe. I just want to watch videos.”
And that's the problem! Because when you give them that power over you, then you've lost half the battle. But that's the widespread practice with these companies. They just want to make the barrier to entry as low as possible. And they do that by making it as hard as possible to understand what exactly goes on behind the scenes. So there's just enough friction that you go, “Oh, whatever”, you know?
I don't know. What do you think about this? Because I think you've used a lot of these companies a lot more. And after talking with guests and other AI experts, what is the general consensus here?
Yeah. I think the one thing I hear from almost everybody is this same interest you've expressed in more transparency. That they should say what they're going to do and then do what they say they will do, and don't do things they say they won't do. And it's not just to be more transparent, but in a very proactive way. In other words, like you said, no 300-page terms and conditions on tiny screens. Some of them won't even let you send them to yourself so you can read them on a real screen.
Yeah.
Which I personally find really annoying. Just having them in plain terms, and also just to have more granularity, in a way. Like, okay, if I want to opt in to let you use my pictures so that you can tag them for me and show me which ones have my husband in them, that's one thing. But I should have a choice of saying, “I only want to use it for that. I don't want you taking these photos and throwing them into your model and using them to train and to generate things for other people. I don't want his or my pictures used for those purposes.” But we don't get that choice. Like, you opt in for pictures, they will use them for whatever they want.
And some of them won't even let you say, “I only want you to have access to these 10 pictures. I don't want you to have access to all the thousands that are on my phone.”
Yeah. And so that's the other thing, yeah - choice. I think we lack choice. You can't even block ads. And this is absurd. I think Meta launched something like MetaVerify where you have to pay a subscription fee per month so that you see no ads, which is like, “What?!” It's just absurd. It's absurd. But, well, you know, that's what you get.
And I think actually, now that you've said this, what these companies can do is just continue the way they are. Because that's going to alienate people over time, and they're going to realize that, well, maybe I don't need this. Maybe this doesn't really enrich my life. Even if they don't steal my data, even if data theft did not exist. Still, these companies are basically feeding you an agenda. They're basically kind of subverting your thought process.
Even on Instagram, you don't see the people you follow. You see the things that the algorithm has curated for you, which is like, “Well, that's not why I'm using the platform. I want to see what my friends are posting. I want to see this artist and that artist. Why am I getting this thing that got arbitrarily popular? Like, I don't want to see this.” And, sure, you can go and manually, you know, say I'm not interested, block this, mute this, but that's just a hassle. I'm there for, like, ten minutes a day. I want to be able to use the app as intended, as I want to, whenever I want to, but that's not possible. So do I really need it? Or I can just go outside and, you know, read a book. Maybe join a dancing class, you know. Or I can maybe talk to my friends over the phone like we used to do before. Whoa. Who would have thought about that? You know?
I keep seeing this trend, especially in my generation and the younger people. Like, we don't really like technology anymore because we grew up with it, and we got promised all of these really wonderful things. And as kids, we were like, “Wow. This is a phone, and you can talk to people, and you can, you know, play music on it.” And then in the span of fifteen years, we went from, “Oh my God, a phone” to “What the hell is going on?” so fast, so quick. And I think we're just, you know, we're kinda fed up with it. And we kind of prefer going back to analog media and analog devices even. So, yeah, I think that's going to keep happening, that a lot of people are going to be quitting social media. And quitting the Internet in general. There will come a point where a lot of the content on the Internet will be just AI slop, which would just make no sense. And you just go like, “Well, that's dead. Why would I spend time there?” Which I think is good.
Yeah. In terms of moving away from technology, I've been seeing more posts, a lot of them on Substack, about how to protect the privacy of your data, and how to take it back. But also just, you know, services to use which will protect your privacy instead of the ones that don't. Or using an old-fashioned flip phone.
Yep. Yep.
Instead of a smartphone, because, hey, guess what? You just want to make phone calls and send texts and have them private? Do that.
Yeah.
You don't want someone snooping on your texts? Use Signal.
And there's a new photo app I just heard about last week called Foto, F O T O, and they are saying that they will not use any of your photo content for training on AI. I messaged the creator, and asked them to confirm – because it wasn't stated in their AI policy on their website – “What is your policy on AI?” And he wrote back and said no, we have to get that page up. They're just starting up. But, yeah, they will not use it for any AI training. That's cool. I've got it saved on my phone. I'm going to sign up for an account and check it out.
So I'm not sure yet if they're going after the Instagram market, for people that just want to see the photos that their friends are sharing. But that seemed like at least they're starting out on a good ethical foot, which is what I really love to see, and I like to talk those people up whenever I can find them.
Interesting. Interesting. Yeah, I've noticed a lot of people don't like how popular video has gotten, especially short-form video. So they just want to see pictures. They just want to see good old pictures, you know, without video, without retention editing, without a bunch of shit flying around the screen. And I think that's a good thing. I think apps like that could capture a lot of market share. But then comes the question, what's the business model? Where does the money come from?
Yeah, I looked at that, and they do have that on their website for Foto, describing what's going to be free and where they see having premium offerings. I'll drop you the link, and actually, I'll put it into the interview as well so people can find it there [Foto link]. But it looks like a cool company.
Alright. I'm definitely going to check them out.
Yeah. Great. Well, this has been a great discussion, Evgeniy, so thank you for joining me! Is there anything else that you'd like to share with our audience today?
Well, stay safe, folks. That's something I'm going to say. Use technology wisely and rethink your usage of social media. Because I think we could use a lot less of that. I actually like to joke around and say that it's become an ‘antisocial media’ because nobody's social there. You know, everybody is just putting a facade. Everybody's trying to be someone else.
Ah, so, yeah, I think good old face-to-face connections are what works best. If you need a sense of community, you can find smaller, lesser-known places like Substack, for now. Although it's getting bigger and bigger, which is great.
So if you need a sense of community, just go there. And if you want something to read while you're there, and you're just interested in some ramblings and some general thoughts about life and what's actually worth talking about, you can visit my Substack called The Page Mage, and I would very much appreciate it if you subscribed.
Great. Yes. Your Substack is interesting, and I'm glad to have found you. So thank you very much.
Thank you for having me.
Interview References and Links
Evgeniy Panchovski on LinkedIn
on Substack (The Page Mage)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)
Share this post