Introduction - Kris Holland interview
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and later in our 6P external podcasts).
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Kris Holland
I’m delighted to welcome Kris Holland as our next guest for “AI, Software, and Wetware”. Kris, thank you so much for joining me today. Please tell us about yourself, who you are, and what you do.
As you said, my name is Kris Holland. I am a polymath. I've spent most of my career in computer animation and graphics. A lot of that has been spent doing illustration for technology companies, for engineering companies, for aerospace and industry, generally speaking around trying to convey new ideas. So very much the bleeding edge.
And along that path, I've learned a lot about a lot of different industries. And in the process, that's given me a perspective on how technology works in relation to people and other technologies, in a way that oftentimes even the experts don't see, because I've had that broad perspective.
In today's world, obviously, that means a lot about AI. And so I spend a lot of time thinking about what AI means for myself and others. And especially being an artist, it hits especially close to home.
Very good. Tell us a little bit more about your experience with AI, machine learning, or analytics - whether you've used it professionally or personally, or whether you've studied the technology.
I haven't used it professionally. As with much of the technology that I've become an expert with over time, I play with it at first and see whether or not it suits my needs. I came into AI around ChatGPT 3.5. And so I spent a lot of time trying to figure out, what was it actually capable of doing? And it did surprise me at first, simply because I hadn't seen a chatbot that was that capable.
And as the artistic systems came out, the generative AI on the art side of things, I played a lot with it there as well. And at first, it was disappointing, and obviously, it got better with time. So I spent a lot of time fiddling with it. And as I did, I started to realize, on both the text side and on the artistic side, how much these systems were really averaging out what could be, given the feedstock that they had.
And a lot of my time was spent trying to get these systems to do what I wanted. I spent a lot of time trying to get the result that I wanted and ultimately settling for something that was less than what my vision was. So in the case of text, and even now, it's able to generate a lot of words. But generally speaking, those words are not in the voice that I want them to be in, or necessarily framed the way that I want. So I end up rewriting the whole thing.
In the case of imagery, I mean, a lot of what I've done as an artist has been, I wouldn't say excessively beautiful. It's been very practical and technical.
But when it comes to the AI images, they never actually convey what I want them to, in the way that I want. So I try and try and try rolling the dice many times, changing my prompt, adjusting how I'm asking the question, trying different ways of framing it. And eventually, I come to the conclusion, yeah, I need to do it myself. And so I spend a lot of time running around in circles and not actually getting what I want.
Or if it's a really low value result, it's like, “okay, I'll just settle here, this is close enough”. It tells a story that I want, and I move on. And that sense of sort of giving up and moving on really made me think about what was the value of what I was creating.
I should also say that I know the art that I have done has ended up in the mix of the generative AI systems. I've done a lot of work that's been published in magazines online, and it's all in there somewhere. And so I'm realizing that the art that I do, which I often, as I said before, don't necessarily think is the best art in the world, is still in there.
And now I'm competing against my own art, creating something that doesn't actually convey the vision that I'm looking to create, spending all this time, all this energy, and still not getting there. And so now I'm at the point where I realize that AI could have value. But where we are right now, I don't feel that it does - both from the perspective of the fact that it's not advanced enough to handle what I'm asking of it, but also the fact that the underlying technology relies so heavily on the ethically questionable datasets that the companies have collected over time.
And so I find myself sort of [in] this predicament of wanting to explore the technology, but at the same time, not wanting to feel dirty having done so.
That's a pretty common sentiment. I've heard that quite a lot. So that's a good overview of, professionally, what you look at for when you've tried to use it and when you don't.
On the personal side, AI has become rather insidious. It's everywhere. It's in our email spam filters. It's in our cell phones and the way they handle our pictures and everything. So do you have any experiences with AI as far as your personal life or outside of work that you could share?
Yeah, I think that, personally, AI has become one of my biggest annoyances. Because now, every time I want to do something, I end up against this AI interface when what I need is a human. If it's a problem that's easy to solve, I'll just solve it myself. So if I can't solve it myself, I need a human, because the AI is not going to be given that kind of latitude.
Like, just today, I ordered a new battery for a laptop for a friend. And the Dell website said it'll be here by Friday last week, and it’s Tuesday - still don't have it. So I went online to get an answer, and I got their chatbot that literally told me it could do nothing. And now because they have this AI front end, Dell no longer has humans that you can chat with. Hard stop.
Oh, wow. I did not realize that. So there isn't any “oh, sorry I wasn't able to help you, would you like to talk to a person?” That's not even an option any more?
Here's an email form for you to fill out, and then you have to wait x number of hours to get a response. And so I got a response, which was, “oh, well, we shipped it, but we don't have a tracking number”. How do you ship it without a tracking number? But then you want to go back to it, so then you send an email back, and then you get a digital response that tells you nothing.
It's like amplifying the amount of garbage information that's out in the world. I mean, before AI, there was already a lot of garbage out there. Now almost everything that's online is garbage. And, you know, it's an attempt, in my impression, for companies to lower their costs, to increase their revenue by having lowered their costs, but not actually offering a better service. So my experience as a customer is worse. The products that I'm getting are worse.
It's just that that whole enshittification of the Internet is now coming to a point where it's fully automated and industrialized. And it makes me feel like where we were once, “look, it could go one way or it could go the other way”. Well, now we're fully going downhill. And the question is, is it a soft landing or is it a wall?
Every time I go to get information from a supplier or find out what's the situation of something, I'm dealing with an AI that either has incomplete information, doesn't have the access that it needs, or just hallucinates itself into giving me non-answers that are just wasting everybody's time. And so it's just frustrating.
Yeah. And when you think about the corporate balance sheets and the financial incentives that are driving some of them to replace human support agents and customer service people with chatbots, what they're basically doing is they're shifting that cost to us as the consumers. And they're also not showing on their balance sheet the cost to the business from the decrease in customer satisfaction. So the beans don't get counted correctly, and that's partly why they're making decisions that are counterproductive.
Well, that's exactly it. Balance sheets are often manipulated to cast things in the light that makes the CEO and their management team look the best. “Oh, so this cost went down. That's good for us. Yay. We're making more money.” Even though you're not necessarily seeing that the trajectory of new sales is starting to level off because customers aren't happy. They're not coming back.
And one of the things that I have the most fear around is when government starts to get on the bandwagon. Because I've been a business owner all my life, and so I've had to deal with the government for more complex business-related tax issues. And over the years, the only reason why I haven't completely imploded - because I hate accounting, I hate bookkeeping - is because there's been a human on the other end that, in spite of what everybody says about government and taxes, they've been, “Oh, well, I understand what your problem is. It'll be fine. We've got the solutions.” They figure out from my end what the problem is. Get me to the solution. The problem is solved. Always costs me money. But the thing is, it wasn't a stressful situation.
Whereas, when you come into an AI interface, AI doesn't care how you feel. AI is not going to understand what's in between the lines. And I can see that government interface is going to become far more nightmarish than it already is.
You had mentioned something earlier about the systems being trained on content that you published over the course of your career, and that you end up competing with yourself. This is one of the questions that I always bring up with my interview candidates, because it's a very common and growing concern - where are these AI and machine learning systems getting the data and the content that they train on? Because they will use data that we put into online systems or that's been published online, like in your case. And the companies aren't always even transparent about how they intend to use our data when we sign up.
You've had a very long career. You probably published stuff years ago that you never dreamed would end up in a computer system being used to compete against you.
How do you feel about companies using data and content? And do you feel like they should be getting consent, compensating, and giving credit to the people whose data they want to use for training?
That's a fairly straightforward answer, because copyright is very clear on that. There's been a lot of obfuscation from the companies talking about “fair use”. But fair use is, generally speaking, a non-commercial application or an illustration of a point. So when you're using data to train a commercial thing, it's covered under copyright - they should not be allowed to use it.
They're going out and using it and asking for forgiveness afterward. But even now, where the headwinds are starting to get stronger against the companies, they're talking about giving and getting permission on a go-forward basis, and just conveniently forgetting the decades of data that they've already used and have on their servers.
And I think that the reality is that the only way that we don't go into a dystopian abyss here is that the companies realize the only way forward is to do this ethically. And I've done a lot of thinking in that polymathic perspective of, what would that look like? Both from a text perspective and from an image perspective?
And I think there are solutions here. I don't think that these companies need to be breaking all the rules in order to get where they're going. They're just doing it because it's fast and they want to be the first, and it's cheap because they're just skirting the law. But given the hundreds of billions of dollars that are being spent, I think it would be relatively trivial to actually go out and collect the images that they need, generate the text that they need, and also find ways of being able to positively attribute the people that they're collecting from.
Because, you know, I have a master of engineering. And when I did that master of engineering, I did a literature review, and a lot of that information ended up in my master's thesis. I credited, and that's okay. I didn't have to pay the original author. I just had to show where those ideas came from. And Wikipedia is, if nothing else, a large system of attributions.
So I see no reason why the LLMs couldn't do it, other than a lack of will on the part of the companies building them, because it would take time and money to do. From my perspective, it's going to have to happen. It's just how much money and misdirection are we going to have to suffer before we get there?
Yeah. And I don't know if you're already familiar with the Fairly Trained movement?
Yeah.
So I think there's one Fairly Trained foundational model. The challenge that all of these companies have is they're trying to do the right thing and do it in the right way, but they have to compete against all of these much better-funded and less ethically-behaving competitors. And it's a tough hill for them to climb.
It's an impossible hill for them to climb, because the reality is that it is going to take tens of millions of dollars to generate datasets that are going to be able to replace what is easily available online. Because it's functionally impossible to get permission for everything. Because whoever created that image may not necessarily be included in the metadata. And the person may no longer exist - people have died over the course of the Internet, and the estates may not be easily reached. On and on and on. It just becomes an untenable amount of tracking that would have to be done.
Therefore, the imagery has to be created anew. And how it's created is relative to what system you're training it for. Because I think that LLMs are sort of reaching the end of their viability, simply because you can't create that much more information. We have to find new ways of accomplishing the same thing that can take less, but higher quality, data and have a better result.
But it's going to take money. And for any organization doing it without that - how do you compete with OpenAI, which can go out and raise 5 or 10 billion dollars and run on entire data centers' worth of servers? You can't.
I think the only way to manage that problem is legally. Because governments have to step in and say, “Okay, we have to find ways to go forward”. It's extra difficult because you have other jurisdictions that don't have rules or are willing to bend those rules.
So do you allow the existing AI companies to continue doing what they're doing, but they're also paying for the replacement that is ethical? Like, it's unethical, but it's becoming more ethical with time, and eventually, it becomes completely ethical.
How do you find that balance? Is that acceptable? Because not everybody's going to get on board with that. I personally would, because I think it's the only way it can happen. But somehow, we have to be able to come to terms with the fact that a lot of that data is not owned by the people who are using it. And even though it is highly abstracted in these systems, at the end of the day, every single data point is necessary in order to create the large dataset.
And therefore, in order for it to be ethical, all of the data points need to be ethical. You can't have your cake and eat it too in that situation. It's just, how do you transition from one to the other?
Yeah. That makes sense. I've been reading up a lot on what is happening around the world in different regions, and I think we're seeing some good traction. We're seeing that people are, I think, becoming more aware of what's happening with their data and with their creations. We see these 30+ AI lawsuits in the US. And at the same time, those same companies are negotiating with the big publishers to try to get licenses. And so I think there's some recognition that, yeah, we do have to get consent, we do have to give credit and compensation.
Because what's going to happen otherwise, as you said, is that we're hitting a wall. They've consumed all the data that's available to be scraped, and the next thing that happens is it starts feeding on its own AI-generated content, and the whole thing just disintegrates.
Exactly.
The ecosystem, we have to make it financially viable for creators to continue to create new original works.
That's right. While there is a lot of that licensing going on, a lot of it is ethically questionable. So Shutterstock, fairly early on, entered into an agreement, and their entire dataset went over [1]. Well, I have a bunch of images in the Shutterstock database that I didn't consent to being used for AI training, and I didn't get paid anything when they did that. They just took all of the images that I have and sold them to someone else without compensating me at all, and said, “Well, you know, that was part of our agreement before AI was even a glimmer on the horizon”.
And Lionsgate recently had a big agreement with Runway where they're licensing their entire library [2]. Well, are the producers and the directors and the actors getting compensated in that? That's not clear. And, of course, they're not necessarily releasing the terms of their arrangements.
And so these big companies are doing what big companies do, trying to figure out how to maximize their profit, but are being completely opaque on the actual right holders that exist further down the ownership sheet. And it's trying to almost whitewash the ethical side of things.
Adobe did the same thing by saying, oh, well, Firefly is completely ethically trained. Well, I know a lot of people, myself included, who have stuff in the Firefly training database, and we weren't compensated or even notified that it had happened. So how is that ethical? [3] It's just you saying it's ethical because you quote, unquote own this dataset, but what you own is a dataset of other IP owners' work. So you're just putting a label on it and making people feel better when you're not actually doing anything ethical.
Yeah. Adobe is a good example, because I think it was right at the end of February or very early in March, when I was just starting to get on Substack and starting to write about ethics and AI. And at first, it looked very promising: oh, it looks like they're trying to do the right thing, and working with licensed data. But then, as you said, you start peeling the onion, and part of it is that it turned out they used some percentage of Midjourney-generated images [4], which poisoned the dataset.
Yeah.
And then the other part of it was changing the terms and conditions with their own users after that. Users who were paying them a good amount of money for using their tools.
Retroactively and without choice. Yeah.
Exactly. And that's a whole other aspect of unethical behavior. Terms and conditions in general are problematic because they tend to be 10, 20 pages of legalese in very tiny print. You have to scroll through this tiny box. And it makes it very hard for people to give informed consent. And how do you even know, when you get down there, that "oh, and we might use your data"? Because often, they don't even say they're going to use it for training AI. They'll say something generic like, “We're going to use it for improving our product.”
Yeah.
What does that mean? We don't even know.
And, I mean, in the case of Adobe, it's pretty obvious because one of their products is an AI, right? You still don't know. Are they selling it down the river? Because if Adobe decided to send their datasets that they've collected to Midjourney, well, who's going to know, right? Because there's enough abstraction that happens in that transition. They could sell it under the table, and nobody's really going to know that it's happened, but it did. And Adobe was made richer for it, but not anyone else. And at the end of the day, then that data then comes back, and you end up with a tidal wave. Some of it looks good, but is unethical.
And, you know, my experience is that the issue is less of whether it's AI or not. It's just that there's so much of it that it's diluting the value of the market. Because somebody's not going to be willing to pay what used to be market value for a piece of art when they can go and they can get 80% of it from Midjourney and pay basically nothing. It depresses the market, so that way, people who are just getting by as an artist no longer can make a living. And so they're not improving their craft. And over time, the consequence of that is you get less and less and less and less actual new art. And eventually, you get to the point where it's all just an average that's coming out of a prompt, and there's nothing that's really new coming out.
And for me, that is the most upsetting thing: what makes us human, what has lasted the longest out of human effort, is the art we've created. And if we're no longer creating art, if there's no path that makes sense for an artist to create other than doing crafts on weekends out of a need to create just for themselves, you're no longer going to get masterpieces. You're no longer going to get artistic works that make people think and change their point of view. It's just going to be the same garbage flooding our feeds all of the time. And that's one of the most dystopian things that I feel is coming out of this.
Yeah. One of my previous interviews is coming out on Thursday, with Roberto Becchini - he's a software architect, but he's also an independent artist. And one thing he talked about was not wanting to put his images online until he could somehow protect them. And so he started using Nightshade and Glaze to protect the images of his art. So there's one picture in this interview article, but it's been protected. It's been poisoned. How do you feel about those types of countermeasures?
It becomes an arms race really quickly. So, yes, right now, those tools work. At least we think they work, but the reality is that the companies and the researchers are, out of necessity, going to have to figure out how to reverse engineer those poisons. I don't know enough about it to say whether or not it's feasible, but I suspect that there are ways to maybe not completely undo the poison, but enough that they can get what they need out of it. You have the art on a monitor, and you have another camera pointed at it, and that's enough to abstract the poison, because it's not purely digital anymore. And then they can use that as training. That might be one means to get there.
And when that starts happening, okay, now what do artists do? All the artists today who know better have deliberately poisoned their artwork, so that way it can't be used in AI. But then the AI companies or researchers or somebody with ill intent circumvents that. Now you've added another layer of unethical behavior on top of the previous one. And then it sort of becomes this layer of unethical action and countermeasure.
And, eventually, the only way to make it work is you're only showing your art in private venues - no cameras allowed. And that really then decreases the access to art, which has the same net effect as there being no art.
So, you know, it's a short-term solution to hopefully buy us time until governments say, “Yeah, okay, you can't do that anymore - here's a new definition within copyright law” that allows artists to do more than just say, “Hey, don't do that anymore, cease and desist”, or try to take you to court. Because what's the point of taking a large mega corporation to court? You're just going to lose, even if you're right, because they have the best lawyers money can buy. So there has to be some sort of a means by which you basically flip a switch, and consequences happen.
Or class action is handled in a different way. Because right now, a lot of class action lawsuits seem to have to prove that the output is infringing, versus the input. If you can change that - okay, well, you can't use the input in an infringing way, and if you do, it doesn't matter what the output is anymore. You've already used it, and therefore, that's where the legal burden is applied.
And so laws have to adapt. And once that happens, it may not matter anymore. Obviously, if somebody is bent on breaking the law, they're going to break the law. But if you have mechanisms that allow artists and other creatives to be able to at least claw back some of their rights, then it might become manageable at that point.
In your business, are you seeing an impact, either in the number of requests for artwork that you get, or a change in the pricing that you're able to command for your work?
I would say that my business has basically ceased to exist.
Oh wow.
I have talked to some of my customers. I've done some work over the last year. But generally speaking, I've had a lot of customers say, “Oh, well, we can just get an AI to do that”. Whereas in my case, I'm working on the bleeding edge - AI cannot do what I do. But again, they're getting something - in some cases, it might only be 50%, but it's close enough to sort of sell the idea that they're trying to convey.
I've always had trouble conveying the value of my work, even though over the years, my customers have said, “Oh, well, you know, we weren't getting the sale, then we showed your piece, and then they suddenly understood, and then we got the deal - to the tune of 4 billion dollars of funds raised”. Unequivocal value. But I've always had trouble, because I'm working sort of on the beginning side of the equation, convincing them that my fee is worth it. But now it's just like, “Yeah, no. We're not paying for it at all. We're not interested in doing this anymore. We'll just get the AI to do it.”
There are some exceptions to that, but they're becoming fewer and farther between. And, I mean, it's kind of hard to separate it, because some of it is just the fact that the economy is depressed right now. So some customers are just not doing it because they're not doing the other jobs that require it. And in other cases, I've been flat out told that “We're not paying artists anymore, so thanks, but no thanks”.
So right now, it's a race to figure out: okay, how do I make food appear on the table with something that isn't what I've done for the last 25 years? It's a real challenge that hits close to home.
But at the same time, technology has changed and advanced over the years. And on the face of it, I'm not upset about being made obsolete because technology changed. It's because I wasn't compensated for my art being used to take away my customer, right? That's the part that I struggle with. But, yeah, I've seen an absolutely devastating effect as a result of it.
Has your customer base primarily been local to the region where you live, or has it been global?
No, it's global. You know, I've been an illustrator for Popular Science Magazine. My art's been on CNN, in Time Magazine, The Economist, on and on and on. So I've typically been working for aerospace and heavy industry, and those are all global companies.
And generally speaking, it's because I have a technical background. I have a master of engineering even though I'm not an engineer. I've been able to find a niche in understanding what engineers want and need, and then converting that into an image that helps them sell their idea, whether it's within management or otherwise.
But now, those same engineers being technical people, they're like, you know, “I can write a prompt”. Just as they would tell me what they needed, they'll just tell the AI what they need. And that'll fill their perception of “I need to have an image”, versus necessarily understanding that the image that the AI creates is not actually the image that you need to sell the idea. But for them, the image is there. Good enough. Off we go.
It's kind of like using the chatbot instead of humans and not realizing the impact on your business from losing your customer satisfaction, right?
Exactly. There is a consequence. But while, like I said, there have been huge benefits for customers who have used my art in the past, a lot of times, you have the engineers, you have the middle management, you have the upper management, you have the CEOs. And somewhere in there, not every level sees the value of it at the same time. Generally speaking, all of these corporations realize they need visuals to go with it, but not all of them understand the variety and quality in those images and how that impacts the result.
So they think, “Okay, I have an image, and that's good enough. That's all I needed.” And it may take a year or 2 years, 3 years before they really start to realize that the impact of that is such that we actually did need that other kind of art.
But by 3 years from now, I'm probably going to, by necessity, have moved on somewhere else. And I'm not necessarily going to want to go back, because of fear that this might be a short-term change. Because as the AIs improve, you know, at some point, they might be able to actually handle what I do. Two years ago, I never imagined that it would be as bad as it is now. So there's a real concern that I might end up in the position that I'm in now, again. So if I find something else, I'm probably just going to stay there.
I'm really curious about the change - the AI-driven, unethical creation of images - what it's done to your business, and how you're adapting to that.
On the one hand, it feels like I can't adapt, right? Because every time I learn a new skill, almost immediately thereafter, the AI is - right? So I'm trying to compete against something that can learn faster than a human can.
I've always been a fast learner. Everything that I've ever done, I've taught myself how to do it. But I still have the limits of wetware. And so it's becoming really difficult. I don't know that my value is going to be in creating images anymore. There are different kinds of creativity I excel at, whether it's strategic thinking or just understanding how to bridge different disciplines. I'm really good at seeing, well, there's one thing over here, and there's one thing over there, and I can connect the 2 of them in a way that the experts can't because I've got this broad knowledge. I mentioned that before.
And AIs really aren't good at that. I've actually been testing Claude and the various flavors of GPT and trying to get them to do that. And sometimes they can, when you basically tell them, “Step, step, step, step. Here are the 3 steps that you need to bridge this.” “Oh, yeah, I can see that”, and it will go off and carry on. But if you ask it from a fresh prompt, it cannot bridge them. Even though it has the information, it can't make those intuitive leaps.
So at least for now, that might be a place where I might be able to use my skill set because, like I said, I have that engineering background. I have a technical background. All my artistic skills are an expression of a skill set that can be applied in other places. I'm looking at doing that.
Part of that is also just learning more AI. I use GPT and Claude every day, trying to test it in different ways and seeing how it's changing. One of my biggest annoyances with AI companies is how their flagship products are constantly evolving in unexpected ways.
Like, I had done one prompt 2 weeks ago with Claude, and I got an exceptional result. Not that it was as good as a human. It was just better than anything I'd seen before. I gave it the same prompt this morning. It failed completely. And it was not like, oh, well, it was a little less detailed, or it was a little less clear. It was like before, I asked it to give me a birthday cake, and I got the makings for a birthday cake. This time, I asked it to make a birthday cake, and it threw an egg against the wall and said, “Here you go”. It was appallingly terrible!
How can anyone build a product based on something that changes that much, in such a short period of time, with no notice that anything is different? I'm playing with them and trying to understand them and see where the flaws are and the holes are.
I talked earlier about trying to understand how you can build out ethical systems and what's involved in that, and thinking through my knowledge base and what I could do to actually build an ethical dataset. So I've been looking a little bit at that. But at the end of the day - and I alluded to this before - it's expensive to do.
I have a lot of camera equipment. In one of my earlier lives, I developed a system where I could go into an interior environment and turn it into a photoreal VR environment. And that same knowledge set can then be applied to create enormous datasets of everything you could ever imagine, which could then be training data for generative AI. But it's time-consuming. It's processor-intensive. I've got a lot of computers, but it still takes a lot of time and effort that I can't necessarily be throwing to the wind and hoping I have a product at the end of it.
I'm chasing a whole pile of threads that lead to a bunch of balls of yarn that may or may not exist. That's the only thing that I can do because it's so dynamic. There's so much money being spent, and I'm just doing my best to be knowledgeable about as much of it as I can and try to adapt. If one of my customers came to me tomorrow and said, “Hey, we want to commission this piece of work”, I'm there, I'm 100%.
My assumption is that my business is dead.
Now I'm trying to figure out how to correlate my knowledge base with a job, you know, a 9-to-5 kind of job. But that doesn't work very well because I've never fit in a pigeonhole. But, also, I'm being very vocal in saying how what's happening right now isn't fair or equitable.
Social media is not very good at allowing room for nuance. I actually had one of my former clients and employers tell me that I might be damaging my brand, making myself unemployable to a corporation because: “Here's this guy, he knows what he's talking about, but he's always making drama. He's making a stink. He's not the kind of guy that's actually going to solve problems.” Because that nuance gets lost in my discussions.
You know, the only way that I'm going to get noticed for who I am is by being as loud as I possibly can. But at the same time, the act of doing that might make me unemployable.
At this point, the answer is I just don't know. I'm struggling, and it's frustrating, and it's upsetting, but I'm just doing the best that I can, to use whatever intelligence and capability that I have to try and find some path up the middle that might make it so that way I can survive till the next thing.
I think your point about being a polymath, and having that ability to see the problem and how to break it down - it seems like that's probably one of the last capabilities that an AI will ever acquire. And I actually saw a joke the other day about software development, saying that “In order for an AI to take over this area of the industry, customers would have to actually be able to describe what they want, so we're safe”.
I would have said that of art, originally. The online dataset for the artistic solution was much more available than it is for the creative problem solving. From what I've heard, you know, OpenAI, they're hiring PhDs and experts to actually generate good solutions, working with the AI and training it in that way. So I'm not comfortable saying that it's not going to happen in the near future.
I don't believe that OpenAI is as close to AGI as they say they are, because I'm generally unimpressed by o1-preview. Yes, it's better without being prompted many times. But that just means that it's really good at making a bunch of bad assumptions and going off in the wrong direction. And you can get any of the other flagship models to be as competent as o1. It just takes a little more human interaction and planning ahead of time. So I don't see how o1 is going to lead to AGI.
But at the same time, there's so much that's going on in the background. They're working on many different things, that maybe they do have the secret sauce, and they just haven't released it yet. So I'm really not willing to say that it can't be done, because I've been wrong too many times in short succession. So it's a difficult place to be.
Yeah. You mentioned social media earlier and LinkedIn, and one of the things that happened recently with them - I don't know if you may have seen this announcement, it was maybe 1 or 2 weeks ago? It leaked out that they were quietly opting all of us in to have our LinkedIn personal data and content used for their AI training. And it turns out there's not just 1, but 2 different places that you have to go to opt yourself out of this, but it was opt-in by default.
And one of the things that someone pointed out, you alluded to this earlier, is that some of the people who had posted on LinkedIn years ago have passed away. And so anyone who's dead, they're just going to take their content, and there's no recourse?
Yeah.
The whole idea of having that as opt-in by default, and then the way that it leaked out, really rubbed a lot of people (including me) the wrong way.
Yeah. For sure. Thing is that - I posted about it at the time - for me, the reason they did it as opt-in by default is that, in the nanosecond after they put that switch there, they gave themselves permission for everything that happened before.
And so it was retroactively giving them permission to everybody's data. Hard stop. Nobody was able to opt out of that. And that is just disgusting. Fact is that LinkedIn is owned by Microsoft, that owns a large portion of OpenAI. Where is that data going, right?
It says partners and affiliates, or something like that as well, so yeah.
It's 100% OpenAI has all that data. I honestly think they had it a long time ago, but one of their lawyers realized, “Oh, shit. We really should draw a line under this, and here's how we can do it and not have to ask for forgiveness.”
And they did it. And Meta did the same thing. And I'm sure every other platform has done the same thing the same way. Regardless of how many switches they bury, I guarantee you the data's just going out, and they're just pretending that that switch matters to make people think that that's the case.
Because, I mean, really, how can you know? There's no possible way that any of us could actually dig in deep enough to find out that our data was used in that way. And therefore, they're just going to use it in that way, because there's no consequence for them having done so.
One of the few exceptions was when Google licensed all of Reddit's data and, like, the whole glue on pizza debacle - something that was that obvious and clear, that somebody could easily do a search for (ironically, a Google search) and find the offending piece of information, so that you can draw the line between the two. That is incredibly rare - that there's going to be something that is that unfiltered and that easily accessed that somebody can point to it. And, yeah, Google's stock dropped, and then immediately recovered. There was no consequence to them at all for having done so.
Yeah. And to your point about the enshittification, when that incident with the pizza glue came out and the connection was made to the Reddit post, people started piling on to the Reddit post with additional garbage input.
And I think it's hilarious. Like, on LinkedIn, they have the ability to be a contributor. And the whole point is to, again, drive quality data. A lot of my friends are actively putting absolute crap in there, just to stab at it. I don't do it, just because I don't have that much energy. But now we're getting to the age that now that people do recognize it, the data is going to become useless.
Like, even in my serious posts, I purposefully use wording that is ambiguous and complex and oftentimes has errors in it on purpose. And now that's all just going to propagate back into these things, and they're just going to get worse and worse, even as people rely on them more and more. It's getting worse. We're making it worse by doing these intentional countermeasures, however you want to define them.
And so it's just going to reach this point where, yeah, it does sort of collapse in on itself and fall down because it can't be made any better. But so many people are going to be so dependent on it that it's going to have a real economic consequence and a real productivity consequence. And it's something that is going to be a bit of an “I told you so” moment for a lot of people, I think. But it's going to be cold comfort when it causes a recession that makes us all suffer, right?
Depending on how it's used, it could cause full-on blackouts of accessibility. Like, if all the chatbots - whether it's Anthropic or OpenAI or Google on the back end - all implode at roughly the same time, suddenly there's no infrastructure for any kind of customer relations. What do you do then?
I personally cannot believe the degree to which corporations are wholesale giving away their ability to interface with the people that make them the money to a third party they have no control over, and no apparent means to solve that implosion problem. In my mind, one way or another, it's inevitable. Almost all of the AI companies have had outages of various kinds. It's going to happen.
So when you condense your entire customer-facing ecosystem into those suppliers, you're putting yourself at huge risk. Some are building out their own internal infrastructure to use the open source models, which is better. But at the same time, those are going to become outdated fairly quickly. So how do you keep up with that?
Yeah, and one interesting aspect of the LinkedIn debacle was that people who are in the EU were protected by GDPR. And so LinkedIn didn't even give them the sliders - the choice - because they were automatically opted out.
And so this is where you see that there's one set of regulations providing some protection to a fairly large number of people. And this is where I think my country and your country need to get on board and catch up and protect our citizens as well.
Yeah. Like I suggested before, there's no way to really know whether or not all that EU data was excluded or not, right? There's a certain amount of “Trust me, bro” that happens there. And, obviously, the EU has non-trivial consequences.
But at the end of the day, they still have to prove that it happened. And I don't know that they can. Short of having an email from the executive saying, “Yeah, here's all our data, and we're including all the stuff from the EU too, just don't tell anyone.” So if that exists, okay, well, they might get caught. But if it doesn't, they're not going to get caught.
Yeah. So this is a really good set of perspectives that you've been sharing. I appreciate that.
My normal last question has to do with the public distrust of AI and these technologies, and how it's been growing. And I think the fact that it's growing reflects this increased awareness of what we're all seeing - the fact that the companies aren't behaving ethically in most cases.
If I were to give you a magic wand today and say there's one thing that you could do to change how the AI companies work - something that would help them establish and keep trust - what would that one thing be?
I think it would be - and I alluded to it before - that every large corporation above some minimum size, let's say 100 million dollars, had to put a percentage of their total spending on AI into ethical AI datasets. So let's say there's 100 billion being spent on AI this year - then 5 billion of that goes into developing ethical datasets. And those ethical datasets should be communal, so that everybody is contributing into the same set of data that's collecting in this large ball, which can then be looked at by anyone to see that it is indeed ethical.
And then from that dataset is how the training goes forward. And whatever uniqueness comes out of the resulting LLMs and GenAI systems comes from the individual company's research on how to take that dataset and turn it into the new thing. In addition, they can generate their own ethical data and have their own ball of data that is not shared. So they can say, “Okay, well, you can take a look at it, but this is ours”. And because we have this sort of ethical framework, other companies can't grab it, but they can still inspect it for its ethical quality. And that can also then go toward making an individual company's end result more unique. But that's what I would do.
And what does that look like? I don't know because, you know, on the one hand, oh, well, you could have a set of companies that are created that go out and do that, or you could have a government organization that does that, or the companies could each be doing that and contributing to it in different ways. I don't know what the right answer is, but I know if they don't start, they're not going to start.
So I would create some sort of law or regulation that forced them to start doing it, with some back-end oversight that was proportional to the consequence. So if you should have spent 5 billion dollars and you didn't spend 5 billion dollars, the penalty is 10 billion dollars. That makes it so the company's going to be like, “Well, we have to do this”. Shareholders, everybody's happy that you're doing it.
Because if you don't, the consequence is greater than the spend. You know? Then within 5 years, we don't have to have these ethical conversations any more, because it's solved.
That's a fair answer. Thank you. Is there anything else that you'd like to share with our audience?
I think the only thing that I would say is that there's a lot of people out there that are being very staunchly pro-AI and very staunchly counter-AI. And I would say to anyone listening that it's important to take a moment and think about both sides and realize that, no matter what happens going forward, the actual answer is somewhere in the middle. And that if we spend all of our time decrying how terrible or utopic things are, it's going to take us longer to find that middle ground. It's going to be more divisive. It's going to be more uncomfortable to get there. And odds are that all of us will suffer more as a result.
Whereas if we all agree that there is a middle place, we can have that conversation, and not remove all of the suffering, but certainly decrease the amount that we have. And end up in a place that, like all negotiations, nobody's completely happy, but we can at least be satisfied that we reach that middle ground without causing pain to everyone else.
Yeah. That's a good point. I've seen a lot of people speaking up who get asked, "Why do you hate AI?" And it's: I don't hate it. I love the potential of what it can do. I just want it to be done without causing all of this other pain and discomfort. We have to find, as you said, the middle ground for doing that.
It's not pure love or pure hate. It needs to be: it has great potential value, but we can't do it at such a high cost. There's the environmental cost, and there's the cost to all the people whose content and life's work is being stolen. And that's too high of a price to pay. We have to find a way to put it in the middle so that the actual cost is acknowledged as part of the cost of doing that business. As one of my previous interviewees said, they are “privatizing the gains and socializing the costs”.
Yeah. Exactly.
Of creating an AI system.
And that's very much the main thing is that, if you're benefiting from it, you're paying for it. And if you're dealing with the consequences in a good way, all power to you. Off you go.
Yep. Sounds good. Thanks, Kris! I really appreciate you joining me today for this interview, and sharing with me and with our audience how you feel about AI and looking for that middle ground.
Well, thank you. It was a lot of fun.
Kris Holland on Substack:
Kris’ Substack
The Last Human Empire
Frank Fell, Unholy Consultant
Other Interview Guest Links:
Kris Holland on LinkedIn
Mafic Studios
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I don’t use AI”.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being a featured interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool; one-time tips are deeply appreciated; and shares, hearts, comments, and restacks are awesome 😊
Credits and References
Audio Sound Effect from Pixabay
[2] “Runway Partners with Lionsgate in First-of-its-Kind AI Collaboration”, BusinessWire, 2024-09-18.
[4] Adobe and use of Midjourney images for Firefly: https://cyber.harvard.edu/story/2024-04/adobes-ethical-firefly-ai-was-trained-midjourney-images