Introduction
This article features an audio interview with Tobias Mark Jensen, a 🇩🇰 Denmark-based independent legal consultant & communicator, and the author of the Futuristic Lawyer newsletter. We discuss:
Trying to figure out the paradox of why we have so many opportunities and yet so many people are so unhappy, and how he believes algorithms and passive entertainment play a role
Using multiple LLMs through poe.com and comparing their biases on the various prompts he uses
Going with his own instincts whenever he doesn’t know if he is right or the AI model is right
Writing about legal topics like Section 230
Why he feels bad about the way that AI models are currently trained
Dealing with the impact on customers of a data breach at a former workplace
and more. Check it out, and let us know what you think!
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click HERE to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview
Karen: I’m delighted to welcome Tobias Mark Jensen from Denmark as my guest on “AI, Software, and Wetware”. Tobias, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.
Tobias: Thanks for having me. So I live in Denmark, and I have a legal background. I’ve been working different jobs in the public sector, in the private sector, small companies, big companies. I specialize in IT law and GDPR. I’ve been drifting around a bit in my career, trying to find my place in the labor market.
And over the last year or so, I have been working full-time on my writing. I have just finished the first draft of a book about the social and cultural impacts of recommendation algorithms. When I’m not writing the book, I write a newsletter on Substack called the Futuristic Lawyer, about human rights and IT. Primarily I’m interested in the very big and deep existential questions about how humans relate to technology in this age we live in.
So that’s what I’m focused on and that’s what I spend a lot of time on at the moment.
Karen: Very good. So do you have a draft title for your book? Or not yet?
Tobias: I will say it’s something to do with algorithms. I’m not afraid to say a bit about it, but I’m a bit wary of saying too much, because it might change a lot between now and when it comes out. But it will be something along the lines of how we’re at the mercy of algorithms. That’s the main theme of the book.
My overarching idea with the book is to focus on how algorithms impact humans in three different kinds of digital services. So the book is about streaming services, social media, and dating apps.
I think it’s fascinating that we as humans have evolved so far. We have so many opportunities. We can go to the supermarket and buy a meal. I suppose everyone listening has a warm bed to sleep in at night. Beyond that, we have infinite opportunities for entertainment right at our fingertips. We can watch almost any movie ever created, hear every song that has ever been created. There’s so much we can do. We can even meet people on dating apps: whatever kind of person we would like to meet, or whatever person with a certain interest we would like to meet, is possible through apps and computers.
So on paper, this sounds amazing, that we have so many opportunities. And in some ways it is. At the same time, we also live in a time where many young people especially are struggling a lot with depression and anxiety and technology addiction. We can’t let go of these apps. We spend way too much time in front of a screen.
This paradox is something that comes up a lot in the media, and many people are writing about it on Substack. And that’s what I’m interested in too. Trying to figure out,
What is this paradox? Why is it that we have so many opportunities and so many people are so unhappy?
In my opinion, this has something to do with algorithms and passive entertainment in different ways. We spend too much time in a passive state of mind, instead of being out and socializing with friends and family, spending time in nature, hobbies, interests, culture, getting in touch with our roots, getting in touch with who we really are as humans.
All this technology that envelops our lives kind of distances us from our core purpose as humans. So that’s just a very brief overview of my thoughts. And I’m currently trying to formulate it in my book.
Karen: Well, that sounds really interesting. I’m going to look forward to your book. And maybe if you are looking for advance readers, I’d love to volunteer now to be one of them.
Tobias: Awesome. Okay. I’ll definitely keep that in mind. Thank you.
Karen: Yeah, that would be great. Yeah, the whole area, with streaming and with social media and apps, I think one thing that is hard to keep in mind is that people tend to think “The app is there to do what I need”. But to some extent, the app is there to do what the company wants it to do — which is often to make it more addictive, to keep you watching, keep you streaming, keep you online, keep you seeing their ads on social media.
And so there’s a conflict of interest there somewhat. And I think it’s not always very obvious to people that this is a factor in the way that the app interacts with them. So I’m really curious to learn more about your perspective on these three.
Tobias: Yeah. What you just said there, that’s so true. It’s the attention economy, right? These apps live off of our attention. We pay in a different currency than money. We pay in our time and attention, and data most of all.
Karen: Yes, exactly. Well, we’ll look forward to hearing more about that when it gets closer to your book release time. Can you tell us a little bit about your level of experience with AI and machine learning and analytics? I’m wondering if you’ve used it professionally or personally, or if you studied the technology.
Tobias: I’m writing a lot about AI. AI is at the core of my interest. I don’t have a very good technical understanding of AI, to be honest with you. I try to have the kind of understanding that a lawyer working with AI companies, either as an in-house lawyer or as an attorney would have. So I have a superficial general understanding of large language models and predictive AI and so on. But I’m not really down in the code.
I tried to learn coding a few years ago ‘cause I was writing about AI and technology, so it would make sense for me to at least have some capabilities in that area. The truth is, it just doesn’t interest me much. Learning Python coding, it’s just not for me. I would rather focus on what I’m good at and what I like to do. And that is reflecting and thinking about how these technologies impact us.
So I would say that I know a lot more about AI than a random person on the street, but I’m by no means expert level, from a technical standpoint.
Karen: Yeah. One analogy I like to use is that we don’t have to be mechanics to drive a car safely, and we shouldn’t have to be data scientists to use AI safely. So you’re the kind of person I actually want to talk to in these interviews, because I want people who aren’t necessarily data scientists to talk about their experiences of it. Because again, we shouldn’t have to know what’s under the hood.
Tobias: Spot on.
Karen: All right. Well, perfect. So can you share a specific story with us on a way that you have used a tool that included AI or machine learning features? I’d like to hear your thoughts about how the AI features of those tools worked for you or didn’t, and what went well and what didn’t go so well.
Tobias: Sure. So I can tell you a bit about how I use AI for my writing process, when I write my regular newsletters, and for my book. It’s a website called poe.com. It’s spelled P O E dot com. It’s a pretty neat service where you sign up. I pay a small amount every month for it. And then I have access to all the leading AI models. So I can use GPT-5, Claude 4.5, Gemini 2.5 Pro, and Grok 4. Those are the models I usually use.
So for instance, if I have a new newsletter draft, I’ll run it by each of those four models to get feedback. And then I can compare their answers, so the answer I get is less biased, and I can get different perspectives on it.
Now, I made this joke with a friend that I kind of do this to get praise from the AI models. I see it more as “I want a thumbs-up button”. I want it to say that I’m a good writer and that it likes my thoughts. And that’s kind of the limit of how I use the feedback.
I also use it for spell checking and grammar checking, just to make sure that there are not any embarrassing mistakes. But I don’t use it as I would use a human editor. For my book, I’ve actually paid a human editor to go over my work, because after all, I write for humans, not for AI models. So it should also be a human that gives me feedback on my work.
And of course, as people do nowadays, I also use it for internet search (poe.com). If I had some random question five years ago, I would use Google. Now I use the AI tool, and that’s also how I perceive the role of AI: as a kind of advanced search engine. I know that AI models are capable of a lot more than just search. But by viewing AI models in this light, you also take out all these superstitions, and the religious aspects of superintelligence and alien intelligence and whatnot. So that’s why I like to perceive my use of AI this way.
So I use poe.com, and then I also use Perplexity instead of Google. And what I really like about Perplexity is that you get sources, so you can go and actually read the articles that Perplexity bases its answers on. That’s a really great feature, I think.
So that’s how I use AI right now. There are also many different ways you could use AI in legal work, especially for contract review and redlining contracts. Also legal questions, even negotiations. There are some tools you can use for taking notes. You can use it to draft contracts from scratch. For instance, non-disclosure agreements are fairly simple to generate with an AI tool.
So there are plenty of options for lawyers. I don’t work as a lawyer right now. I don’t use AI much in this regard. But once I get new clients or I start working full time again, I will for sure experiment with AI in a professional context as well.
What didn’t work? I don’t take everything the AI model says at face value. I don’t always trust the AI model’s answers. I’m always very skeptical. I see it as input. I don’t listen to it as I would a human. For instance, when I ask for feedback on my writing, if there’s something I’m in doubt about, where I don’t know if I’m right or the AI model is right, I always go with my own instinct.
So I never let it overrule my own decisions at all. I only use it as a third or fourth opinion. So I don’t take it that seriously at all. That’s also why I joked that I used it to get praise for my writing.
Karen: Is there an example that you can share about something where the AI models told you one thing, and your opinion was different, and you went with your opinion?
Tobias: It especially happens when I write about very complex legal topics. For instance, something I don’t know a lot about, like American law. I’m by no means an expert in American law. And sometimes I get this weird feeling.
To give you a specific example, I’ve been writing about Section 230, which gives online platforms a lot of freedom from civil liability. That’s a major theme of my book as well. So when I discuss, let’s say, case law with GPT-5 especially, it’s very tricky. Because sometimes I have a sense that it’s hallucinating, but I’m not really sure, because it’s so clever that it’s really hard to tell if it’s me that has misunderstood something or if it is just a very sophisticated hallucination.
And even when I get feedback on my writing, sometimes I have to give it to OpenAI and the GPT-5 model: it is really, really sharp at spotting small mistakes in an essay, for instance. It’s quite surprising, actually, how good it is at that. But sometimes I feel like it’s telling me something that is wrong. And it can be very hard to fact-check, because the more specialized some kind of knowledge is, the harder it is to just Google the answer. So I’ve had this weird kind of existential crisis a couple of times while communicating with GPT-5, where I’m not even sure if it’s me that is making a mistake, or if it’s the model that’s making the mistake. And that’s especially the case if the subject matter is very complex.
Karen: Yeah. Have you ever tried asking a different LLM a question using poe.com when you got an answer from one of them that you just think, “This can’t be right?”
Tobias: That’s what I usually do. And that can help to clarify. But mostly the AI models agree with each other. They’re trained on the same data, so their answers are quite similar, and their hallucinations can be quite similar too. I’ve experienced this. So they’re kind of teaming up on you, or it can feel like it.
I also see a risk in that, because back in the day, when you would get your knowledge from physical books, that knowledge was physical. Whereas digital knowledge, especially when it’s filtered through AI models, is a bit more abstract. And there can be some concerns regarding the credibility of information and what information to trust.
And I suppose that’s really what I’m trying to get at here: trusting that information is hard. And that’s why I really like the feature in Perplexity where you can actually go and read the articles, and you can trace the sources back to the origin. That’s not always as easy with GPT-5, for instance.
Karen: I think I read this summer that Claude had added that as a feature also to be able to have direct links to sources as well. It’s certainly something that we’d like to see more of.
Tobias: The problem is that some of these links are from SEO-optimized sites, or perhaps even websites that are completely generated by AI. So the quality of these links can be very low. And even if you use the GPT-5 Pro version, where you actually pay quite a lot of money — you know, that’s $200 a month just for using that model — you would expect the data quality in its answers to be really high. But many times the sources it refers to are just some very shady sites that you would never trust in an academic paper or in an academic context.
Karen: Yeah, we just finished doing some data analysis for a conference presentation I’m giving on a panel this Friday. We’re analyzing the content of the interviews, numbers one through 80, and looking for patterns, looking for trends and consistency in sentiments. We started with ChatGPT. And some of the answers that it gave me for the scores, I said, “I just know this is wrong because I remember talking to that person. And that is not what they said, and that’s not what they thought.”
And so I started trying other LLMs. I tried NotebookLM, and it was very different, much more negative. And then I said, “Okay, well, let me try a third one.” So I ended up trying five different ones and comparing them. And it’s really kind of amazing how different they are, where they have biases, and how they interpret sentiments. It was really an eye-opener. Like you said, it’s almost like they all hallucinate in the same direction, but NotebookLM was different in a way that I think is really interesting. So I’m going to be digging into that a whole lot more after this conference is over, because I didn’t expect it to be so different. And I didn’t expect ChatGPT to be so wrong about the scoring on the sentiment for these interviews.
Tobias: Okay. I’ve experienced the same thing, generally. Google’s Gemini is extremely positive. When you give it any piece of writing, it will think it’s amazing, and you’ll get a lot of praise for it. Whereas GPT-5, for instance, is very conservative with positive feedback, and it can give very short, very sharp answers without any feelings. So sometimes I have caught myself getting irritated at it and starting to argue with it. And I have to step back a minute and just think how much of a stupid waste of time it is to argue with an AI model. So I just do something else.
And also Grok 4. I don’t know if you have used that one, but its answers are really humorous. There’s a lot of personality in its answers. It’s a very good conversation partner, but it can also become a bit scary, because it seems so human-like and so friendly that it’s almost hard to believe you’re talking with code, which essentially you are.
Karen: Yeah, very interesting note. I haven’t used Grok and I wasn’t planning to. I do want to try out Perplexity, and I also haven’t yet tried the new publicai.co that you might’ve heard about, for the new Swiss open-source chat model. I was having trouble getting it to validate my email address for some reason, so I haven’t gotten into that one yet. But I really want to try that one because it was ethically sourced. It doesn’t use any data that was subject to copyright. And it was trained on the Alps supercomputer, if you’ve heard about that. I thought that was all pretty interesting, so I definitely want to try that one.
But for our experiment, we tried ChatGPT, NotebookLM, Mistral’s Le Chat, Microsoft Copilot, and Claude – those five. Looking at the differences between them was just really interesting. But yeah, I do want to try out Perplexity and see how it compares for the analysis purposes we were working on. It’s interesting that they share hallucinations. But that was a good example, so thank you for sharing that.
Tobias: Thanks.
Karen: Have you ever avoided using AI-based tools for anything? And if so, what would be an example, and can you say a little bit about why you chose not to use AI for that?
Tobias: So I would never use an AI tool for any creative purpose. I would never use an AI tool to write for me, because that’s something I enjoy doing, and it’s something that I learned a lot from doing, and that’s the whole point of doing it. I would never outsource my writing to an AI in any capacity ever.
There’s also a lot of talk about students, and young people in particular, who rely too much on AI tools to think, essentially. And that’s very problematic, because we need to learn to develop our cognitive abilities. And writing is thinking, as people commonly say these days. So when I write, I use that as a way of making sense of the world. And I think many people do that. So when we outsource that process to an AI model, something very important gets lost in translation. It kind of loses its point. That’s what I feel, at least. So no writing and no creative purposes.
There’s actually a lot I would not use it for. I don’t think I would use it much in a professional capacity, either. If I had to review a contract, I would probably use it in the same way as I use it for my writing. I would use it to get feedback on my thoughts, but I wouldn’t want to let it do the groundwork for me.
I would never let it make the first draft and then I work from that. I would always like to form the groundwork of the creative work or the professional work, whatever it may be.
Karen: Yeah. Especially with legal work because there, it’s so important that these citations be accurate and they be relevant to what you’re talking about. And there have been some cases you’ve probably heard about, where some lawyers in the US have submitted paperwork on a case where a lot of the references were hallucinated. And they were censured by the judge and by the Bar Association, I think, because they were just using it very inappropriately and not checking it. And that’s really problematic.
I’ve heard of a few companies too, now, in the US that are working on using RAG, Retrieval-Augmented Generation, to constrain the sources that the model can pull from, making sure it only uses those sources, to try to eliminate part of that problem. But it’s definitely a real concern in the legal world.
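For readers who haven’t run into the term, here is a minimal, hypothetical Python sketch of the core idea behind RAG: a small vetted corpus, a naive keyword-overlap retriever, and a prompt that instructs the model to answer only from the retrieved passages and to cite them. The corpus entries, the scoring, and the prompt wording are all illustrative assumptions, not any particular legal product’s implementation.

```python
# Minimal sketch of the retrieval-and-constrain step in RAG.
# Everything here (the tiny corpus, the keyword-overlap scoring, the prompt
# format) is an illustrative assumption for explanation purposes only.

VETTED_SOURCES = [
    {"id": "47 U.S.C. § 230(c)(1)",
     "text": "No provider or user of an interactive computer service shall be "
             "treated as the publisher or speaker of any information provided "
             "by another information content provider."},
    {"id": "Firm memo 2024-17 (hypothetical)",
     "text": "Summary of recent Section 230 case law relevant to recommendation "
             "algorithms and platform liability."},
]

def retrieve(question: str, corpus: list[dict], top_k: int = 1) -> list[dict]:
    """Rank vetted documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, passages: list[dict]) -> str:
    """Assemble a prompt that tells the model to use only the retrieved passages."""
    context = "\n\n".join(f"[{p['id']}]\n{p['text']}" for p in passages)
    return (
        "Answer using ONLY the passages below. Cite the passage IDs you rely on. "
        "If the passages do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    question = "Can a platform be treated as the publisher of user posts?"
    passages = retrieve(question, VETTED_SOURCES)
    print(build_prompt(question, passages))
    # In a real system, this prompt would then be sent to an LLM of your choice.
```

In practice, the retrieval step would use embeddings and a much larger document store, but the point Karen describes is the same: the model is only handed material from sources the firm has already vetted.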
Tobias: It really is. So Europe is a lot behind the United States and the UK in this regard. And that’s, of course, because many of the leading AI models are trained on English. Take Danish, for instance, where I’m from: the leading AI models are pretty good at Danish too, but it’s nothing compared to English. So for European lawyers, I suppose that the use of AI is a bit less common than it is in the US, but it’s still a problem. And I think it’s so embarrassing to be caught as a lawyer making up fake citations. That’s, you know — oof — perhaps one of the most embarrassing mistakes you can make as a lawyer, to be honest.
But of course I understand why people use it ‘cause it’s so convenient, it’s so easy. And it saves a lot of time. It really does. It makes the work of lawyers a lot more efficient, but there are really, really big risks associated with it as well. So, yeah, it’s a very difficult dilemma, actually.
Karen: I was just reading that, as far as languages go, 95% of the training data for some of the models is based on Western, English-language communications, so they don’t handle other languages well at all. But there’s one that just got announced, and I’ll have to find the link for it and put it in the interview, but it was trained on, I think, over a thousand non-English languages. It’s a company based in Europe, and they’re specifically trying to address the gap that there aren’t enough good-quality models built on languages other than English. And that’s definitely a gap.
Tobias: Yeah.
Karen: Of course there are other gaps we need to address too, like the entire Global South, and other languages, such as Asian languages. But I thought that was a really good start. They even distinguished Swiss German as its own language, and different languages from Romania, and such. So I thought that was really good to see, that they were making some progress on that.
Tobias: Yeah, that’s a really good initiative.
Karen: That brings us to the next question, which is: there are concerns about where AI and machine learning systems get the data and the content that they train on. A lot of times they’ll use data that people have put into online systems or published online. And companies aren’t always transparent about how they intend to use our data when we sign up for these services.
I’m wondering how you feel about companies that are using data and content for training AI and ML systems and tools, and whether they should be required to do what some people call the three Cs, to get consent and give credit and compensate people when they use that data for training? Or do you feel like it’s okay that they’re doing that?
Tobias: So the bottom line is that I feel bad about it. I feel bad about the way that AI models are trained. And I’ll try to explain my thoughts about it here because this is something I’ve actually thought a lot about, ‘cause I think it’s important, and it’s very relevant for the future development of AI. Other people can disagree, but this is how I see it.
I think we have to accept that anything that is out on the public web can and probably will be used for AI training. As I see it, and as I understand it, these AI models are so good because they’re trained on so much varied data. It would not be possible to have the kind of AI we have today if they were not trained on a lot of different data all over the internet.
So that would be my first point: public data that is out on the internet will be used for AI training. And that’s also why we enjoy these models so much. Instead of Googling some question, we can get the answer through an AI model. And the reason we can is exactly because it’s trained on all those websites we would usually see in a Google search.
However, first of all, it should be possible for websites to opt out if they do not wish to have their data crawled and used for AI training. And it’s very important that AI companies respect such opt-outs. Websites can opt out via so-called robots.txt files. There’s been some controversy about whether AI companies respect that or not, but they’ll have to. I think it’s only fair that people should have the option to opt out of AI training.
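As a concrete illustration of the robots.txt opt-outs Tobias mentions, the sketch below shows what such a file might look like and how to check it with Python’s standard urllib.robotparser. The example domain is hypothetical; GPTBot is used only because it is OpenAI’s published crawler user agent, and whether a crawler actually honors the file is entirely up to the crawler.

```python
# Sketch: how a site can signal an AI-training opt-out in robots.txt, and how
# to check it with Python's standard library. The domain is hypothetical; the
# real example.com will not contain the rules shown below.
#
# Example robots.txt, served at https://example.com/robots.txt:
#
#   User-agent: GPTBot      # OpenAI's published crawler user agent
#   Disallow: /             # opt the whole site out of crawling by that bot
#
#   User-agent: *
#   Allow: /

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # hypothetical site
parser.read()  # fetches and parses the file (requires network access)

for agent in ("GPTBot", "Mozilla/5.0"):
    allowed = parser.can_fetch(agent, "https://example.com/some-article")
    print(f"{agent} allowed to fetch: {allowed}")
```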
A second point I want to make is that AI companies cannot just use proprietary data without either paying compensation to the rights holders or getting consent to use it. So, for instance, an AI company may want to use books, articles behind a paywall, music, images, videos, movies, games, whatever. They use all of that. But if it’s proprietary, and if it’s something that the people who created it use to earn revenue, then the AI company should compensate them for using their work in the AI training. That’s the only way that the artists can feel respected.
And it’s very inconvenient for the AI companies, because it will cost them a lot of money, but it’s the only fair solution. We have to show artists that we value creative work in society. Creative work is extremely important and AI companies have to pay to use creative work for their solutions. That’s how I see it.
Karen: One thing I found really interesting when I was doing the research for my book: I came across a project called Common Pile. I don’t know if you’ve heard of that. It’s a joint research project between the US and Canada. And they tried only using ethically sourced data: either truly public domain, or data where they paid the creators to generate it for training, or data they were able to license. And they were able to show that, even with that much smaller set of data, they still got models that performed very well.
So I know there’s been a lot of hype from the big 8-figure tech bros, like, “Oh, well, we have to steal all the data, because otherwise we couldn’t get good models.” But the Common Pile project calls that a lie, because they were able to get good results. And I know some of the other models being developed, like in Switzerland, using only ethically sourced data, are also getting good results.
It’s one of the reasons I really want to try chat.publicai.co and see how that works, because I’m curious to see how the results compare to the other LLMs. But we’ve all been told that they have no choice, that they have to use all this data. Well, one, it looks like that’s not true. And two, at some point the world stops generating more human-created content, especially as AI content takes over, and then they’re feeding AI-generated data into AI.
Tobias: Right, right.
Karen: That’s going to be limiting.
Tobias: It starts eating its own tail, and the model will collapse. Yes, certainly.
Karen: Yeah, they’ve talked about model collapse, exactly, with that analogy of the snake eating its own tail. It’s kind of been the easy thing: “Oh, well, we want the models to get better. We’ll just feed them more data.” But maybe we just need to think about using better models or more efficient models, or other ways of doing it that don’t step all over creators’ rights.
And there’s some people showing that it’s possible. So I like to support them whenever I can find them, because it’s a tough competitive field now, for them to compete against the companies that aren’t sourcing their data ethically.
Tobias: Yeah.
Karen: And getting consent and giving credit and compensation. But it seems like it can be done. We just have to want to try.
Tobias: Yeah. Okay. That’s really interesting. I have to read more into that after our conversation.
Karen: Yeah, I’ll share some links with you. But yeah, it’s really interesting to watch that. And it’s not that that news is easy to find. So when I can find it, I like to share it.
Tobias: Yeah. That’ll be really interesting.
Karen: But yeah, to your point, we want creators to know that they’re respected. Some people call that data dignity, that people need to have those inherent rights. And most people around the world do feel that people should have those rights to own and to control their data, their creative works, and how they get used.
There are a few regions where they say, “Well, it’s okay. It’s for everybody’s good.” And I think that’s fine, as long as everybody gets to benefit. But right now, not everybody gets the benefit. And so that’s not really how it’s working. By supporting the companies that are doing well, I think we can try to exert some influence, to try to get things to be more the way we want them to be, and the way that you think they should be.
Tobias: Yeah, I think that’s so important. Artists are already one of the most vulnerable professions, from a financial perspective. Making money off of any kind of creative expression is so hard these days. And I can say that because I try to do it, and it’s for sure not easy with these big middleman platforms like Spotify. Even YouTube, which is perhaps a bit easier to monetize. But still, in general, you’re trying to be discovered by all these algorithms.
Like for me, for instance, when I try to promote my work on Substack and on LinkedIn, I have to go through this entire dance and formula and try to catch people’s attention. And I don’t want to do that. You know, I just want people to read my work!
That’s why I like Substack’s subscription model. And that’s why I don’t enjoy using social media that much. But it’s just something you have to do as an artist. And sometimes it gives you some results; most of the time it doesn’t. But, yeah, basically living as an artist, or as any kind of creator, is hard, and AI is certainly not making it easier.
Karen: You mentioned a lot of AI-based tools that you use and a lot of different large language models through POE, and using Perplexity. As a user of those tools, do you feel like the tool providers have been transparent with you about sharing where they got the data from? And whether the original creators of the data consented to it being used?
Tobias: Absolutely not. Absolutely not. When you use Grok, ChatGPT, Claude, Gemini, the models I typically use, there’s just not anything. It would be very nice if you could ask the model to tell you how it generated its answers and what sources it used. And even beyond that, if independent experts or government experts could audit how the system was built, what kind of processes went into it, and so on, I think that would just be great.
But there’s almost no transparency at all regarding these models. The EU’s AI Act is changing that a bit. For instance, in the copyright domain, there are some overarching rules: the companies have to be somewhat transparent about the data sets they have used for training their models. But I suspect these requirements will be interpreted very lightly. I would not expect much transparency. Hopefully it will nudge them in the right direction. But I don’t know. We’ll see.
Karen: Yeah, that’s a good point about the EU AI Act. I was wondering how much practical effect it can have. A lot of times these regulations have good intentions, but they get watered down toward the end. I think there was some watering-down as far as the restrictions on data being scraped versus being torrented. Technically it’s a small difference, but one’s covered and one’s not. So we’re never sure, until we actually start seeing the legislation in action, whether it motivates the right behavior in the companies. But at least you’re farther ahead than we are, as far as having an act.
Tobias: Yes. It’s a good start. But you are completely right. Also because these companies are just so powerful. They have strong lobbying abilities. And there’s just no way around it. You know, in Europe we really depend on Microsoft, Google, Amazon, Apple, Meta. We can’t really do without them right now. Or we can, but it would be a very hard adjustment period for a while. That’s for sure. We kind of have our hands tied behind our backs when we negotiate with them.
Karen: Yeah. So when you think about the different companies that you’ve interacted with, do you know of any company that you gave your data or content to that made you aware that they might use your information for training an AI or machine learning model?
Tobias: That’s a problem with terms of service agreements, right? People don’t really read them. I don’t either. They’re really long and not very fun to read. So typically I just press accept and then hope for the best, like everyone else, essentially. But no, I don’t think there’s a lot of transparency about how these companies use our data.
And I think it’s very problematic. Back in the day, when we used cash, or when we interacted with people in an analog way, based on paper files, we didn’t have to give out as much information as we do today. And there isn’t any pragmatic reason that we give so much of our data to pretty much every company we interact with online. They need very little, but they take much more than they need.
And that’s a problem with the internet, and being in a digital space. And unfortunately I don’t see it changing anytime soon. But yeah, I know that I give a lot of my data out all the time. I don’t feel good about it, but I also know that it’s just how it is.
Karen: And in some cases, we don’t really have a choice — you know, government.
Tobias: Exactly.
Karen: If you fly and you go through airport screening, in some places you can opt out of having it take your photo. But there’s still a lot that you can’t opt out of. So in some cases we really just don’t have much of a choice.
And for people who run their businesses on these tools and then are told, “Well, we’re going to start using your data”: it’s a big thing to say that they would either need to give up their data or stop using that tool for their business. It’s not something people can change overnight.
Tobias: Yeah, completely agree. I can understand it perhaps in airport security. They’ll have to ask some questions, if you seem suspicious, or have to do a background check on you. At least there’s a rational objective reason for collecting data about you in that particular context.
But Facebook, Microsoft, and so on gather a lot of data about us that they use for financial purposes, I suppose, without having any practical reason to actually collect it. So yeah, that’s the world we live in.
Karen: You’ve mentioned giving out some data to different companies. Have you ever had an issue where a company’s use of your data or content created a specific issue for you, such as a privacy violation, or phishing, or a loss of income, anything like that?
Tobias: I suppose I have been relatively lucky in that regard. I can’t think of any major incidents. I’ve had some incidents at my former workplace, where one of the vendors we depended on had a data breach, and some customer data was compromised. And that was a catastrophe for the company. I really got insight into how protective companies are of their customer data, for good reason. Because customers can get really, really angry if you don’t treat it properly. So that was really a crisis. We had to sit for a whole day and write out to all our customers, and we had to spend another day answering all their requests. It was a very serious issue. So that’s in a more professional context.
In a private context, there’s so much stuff happening with our data that I’m sure we are not even aware of. And yeah, it’s a bit frightening that we don’t know. But again, that’s the price we pay for living in a digital world as much as we do. We actually don’t know who is looking over our shoulder, or what is happening to our data. And I’m sure that most people would be very disturbed if we figured out just how much information about us is out there. It’s actually quite frightening.
Karen: I think people are getting to be more aware of it. We’re seeing some studies that show that public distrust of AI and tech companies has been growing lately. And maybe that’s a healthy thing because we’re becoming more aware of what different companies are doing with our data — and not just trusting that they’ll treat it correctly, but asking questions or saying no in some cases, where we can.
But I’m wondering what you think is the most important thing that these companies would need to do to earn and then to keep your trust. And if you have any specific ideas on how they could do that?
Tobias: As you can probably imagine, I have a lot of things I would like them to do to keep my trust. I could talk about many things here.
The number one problem, as I see it, is that AI companies are making some of the same mistakes that the social media companies did. Social media, as we talked a bit about at the beginning of our conversation, operates in the attention economy. They live off our time and our data and our attention. And for AI chatbots that is really problematic. AI companies spend so much capital and energy on running these data centers to train and operate models. And they need to have a return on investment. And that’s why they try to make these AI chatbots so addictive that people spend a lot of time on them.
And the way they do that is to make them seem like they’re friends. When we interact with an AI model, we don’t feel like we’re interacting with an AI model, but with a friend or a therapist or a coach or a life advisor. And that’s problematic because that’s really not what they are. Without going into some of the cases that have been in the news around the world: some people have had mental health issues, and some people have even killed themselves. They were so traumatized, and they tried to get some comfort from the AI model, but it can’t provide it, because it’s not human. So the bottom line is that it’s really messing up our perception of the world and of other people. And we come to live in a fake reality if we regard these AI models as friends.
So that is a very big problem, as I see it. And that’s something we have to question, and something we have to be aware of as users. To be honest, I don’t have a lot of confidence that these companies will change this business practice, because that’s how they make money. They basically make money from deceiving us. That’s to some extent what it is. They provide a nice service, but a part of it is deceiving us. And we fall for it because we’re fallible humans.
One thing they could implement would be an age-verification mechanism. Like with social media, I think there should be a minimum age for using an AI model. There’s no reason that a 13-year-old child should spend time chatting with ChatGPT, for instance. I don’t think that makes a lot of sense.
Also the transparency issue, as we talked about earlier. It would be great if there were more transparency about data that was used for training.
Also, the data annotators who help train the AI models in developing regions like Africa or Southeast Asia: they should have better working conditions and better payment. There’s been some polling about this.
Finally, the AI companies should also be more transparent about their energy usage. And they should try to source the energy they need to run their data centers from renewable energy sources instead of fossil fuel-based energy. That’s something I care a lot about personally.
Karen: Yeah, you’ve touched on some really great aspects of things that companies should be doing to be more ethical, and some of the major ethical concerns.
I’m glad that you brought up the data workers, the data enrichment workers, and treating them fairly. Not a lot of people are aware of that, as one of the side effects of the way that these AI systems are developed, and the way that they’re used, and the way they do content moderation, even after the systems are trained and they’re up and running. So that’s one of the other really important aspects. I’m totally with you.
So I think that’s a good summary. That’s the last of my official questions. So is there anything else that you would like to share with our audience today?
Tobias: Just one more thing regarding your statement there. Again, those companies tell us what they want us to hear. They tell us what is profitable for them. But there’s so much that they don’t tell us, because it’s not profitable for them and may put them in a bad light. And that’s my major problem with these big tech companies: that there’s so much they’re not showing us or not telling us.
Facebook, for instance, or Meta: they’re the perfect example. The service that social media provides is so essential to our lives these days. There should be full transparency. I think that technologies we depend so much on in our everyday life should be more democratically governed. It’s not okay that a company like Meta is hiding so much information from users.
And as mentioned, there are the data workers. The same issue relates to social media as well. For instance, on TikTok, you have people who have to watch clips for hours and hours and hours, and label which videos should not be shown — you know, should be moderated off the platform. That’s a pretty horrible job. And there are so many of these jobs, and so many people who do this kind of work. That’s something there should be more awareness of, and these people should have better working conditions and get better payment, for sure.
Anyway, yeah, for anyone who’s still listening, you can sign up to my newsletter on futuristiclawyer.com and you can reach out with any comments or feedback you might have to this conversation or to anything else.
Karen: Thanks for sharing all that information and sharing your stories. I’m really looking forward to hearing more about your book as it starts to come out, and as you get a title for it, and start to really shine a light on what’s happening with these social impacts on our society. I think those are super important, and I’m glad that you’re writing about it.
Tobias: Glad to hear it. And thank you very much.
Interview References and Links
Tobias Jensen on LinkedIn
Tobias Mark Jensen on Substack (Futuristic Lawyer)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% human-authored, reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber:
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)