Introduction - Evan Miller
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and later in our AI6P external podcasts).
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Evan Miller
I’m delighted to welcome Evan Miller from the USA as our next guest for “AI, Software, and Wetware”.
Evan, thank you so much for joining me today on this interview. Please tell us about yourself, who you are, and what you do.
Hey, Karen. Thanks for having me. I know you and I kind of connected a couple of years back when I was actually working in a recruiting role at Wayfair, and then we’ve stayed in touch. But I'm based in the US, and I've been in HR and recruiting since 2015.
A couple of months ago, because the job market has been so volatile, I got into an organizing role for the Harris-Walz campaign, which then led to a podcast that I started just a couple of weeks ago, called “Strange Hills to Die On”. And, basically, my co-host and I talk about issues that we think tend to be very polarized, in terms of how people in our country discuss them, to the point where it can feel pretty toxic. So we're trying to have those conversations in a more moderate way, and pull what we think are objective and worthwhile opinions from both sides of the aisle. So, yeah, that's what I'm up to.
Yeah. That's great. The polarization that you're pointing out is definitely a problem.¹ So it's really great that you and your co-host are trying to do something about that. So, if you can share a link to your podcast for our audience, we can put it into this interview when we publish it.
Yes. Sure. I'd be happy to do that. We're on Spotify, Apple Podcasts, and YouTube right now. YouTube is sort of the most fun way to listen, because you can also watch us talking live.
All right. Sounds good. So what is your level of experience with AI and machine learning and analytics? Have you used it professionally or personally or studied the technology?
I've done some recruiting for people who have studied that extensively. Especially at Wayfair, that was the focus of my whole team. They were the advanced analytics recruiting team. So I know a little bit about what they do and what the field is working towards, from that.
But as far as using AI tools directly, I have used them in some past HR roles, specifically for trying to automate screening of candidates. But to tell you the truth, I don't think the technology is quite there yet. I mean, we'll see if it gets to the point where it can replicate the human brain. But I'm finding, especially with HR AI tools, I haven't seen them be super revolutionary.
Other tools I've used: I've used ChatGPT and Claude, which is Anthropic's answer to ChatGPT. But, yeah, I've used tools like that just for helping me write faster. And that's really probably the only use case where I've thought, “Wow, this is really helping”.
So it sounds like that's mostly in your professional life. Do you have any cases in your personal life where you use AI-based tools for anything?
We do a little bit for the podcast just to sort of test episode titles and try to get a gauge on what might get more people to tune in. And we do that in conjunction with Google Trends, seeing what's trending on the Internet that week.
But, personally, right now, I'm just using them for job searching, so to help me tailor my resume and so forth.
Yeah. Job searching is definitely a big area. I've seen a lot of people talking about that, and AI on both sides of the hiring table, either from the recruiter or hiring manager perspective, and then from the candidate perspective. So we can talk about that a little bit more later.
I'm curious. When you were working on the campaign, there was so much noise or fear early in the year that AI and deepfakes would take over political campaigns. It doesn't feel like we really saw this.² ³
So I'm curious as to what you saw, from your perspective as an organizer. Did you see either AI-based tools being useful to you as an organizer, or did you see any of these impacts from adverse effects from AI tools that were feared?
Well, the role of an organizer is actually pretty old-school. It's a lot of telephone conversations and in-person events. So I didn't really have a ton of use cases for AI in that role. I did send some emails out where I guess I could've gotten some content help from an AI tool. But as far as people talking to me about AI, and their fears of how it would affect the messaging of either candidate or affect them economically, I didn't really hear a lot of that on our side.
But I think there were overall complaints in terms of the strategy that the Democrats employed this year, which was to sort of shy away from economic issues generally. So, yeah, I wouldn't say that it was a major theme at all in my experience as an organizer. But I do know that it was on voters' minds, just not the voters that I spoke to, I guess I should say.
Okay. That's fair enough. So I want to go back to what you were saying about using the AI tools for your professional life and for job searching and such. Can you share a specific story on how you used a tool that had AI or machine learning features and how it helped you, what worked, or what didn't work so well?
Yeah. So right now, as I was just saying, I'm doing a lot of applying. I'm sure you know the recruiting job market collapsed pretty hard starting at the end of 2022. And from estimates that I read, about 40 to 50% of recruiter jobs that existed in 2020 are now gone. And we'll see how many of them come back. But, because the job market is so competitive in my field, I've been using a tool called EarnBetter, which basically will take your resume and rewrite some of it, to emphasize where your qualifications are relevant against a job posting.
And so that's helped me a lot because, typically, if I were to have to do that myself for every single job, it would take me hours and hours and hours to try to come up with that on my own. And this is a tool that's really helping, in terms of just providing those prompts for me, which I think has been fantastic. And then also Claude gives me some ideas on how I want to frame my experience. So I think that's been super helpful and super efficient. So that’s great.
The other thing that I can think of in terms of personal life: I drive an electric car that has quite a few automated features. For example, if I try to back into another car, I can't. My car will literally slam the brakes itself. So, you know, I've kind of seen in those three use cases a strong benefit, personally.
In my professional life, some of the AI tools that I've used as a sourcer, for example, have just been super ineffective. And something I also noticed is that when I put my resume into AI tools, all of the tools that I've used do a job-scan type of thing, and they'll suggest jobs for me that no human would ever suggest for me.
So from my perspective, there's a lot of hype and excitement over these tools, but I just haven't seen them be super effective or helpful, other than writing things more quickly, writing an email more quickly, or writing a resume.
Yeah. I've seen some similarly weird job recommendations from LinkedIn, for jobs that no human would ever say I should be applying for.
Exactly.
Yes. And I remember when I was hiring for my AI team at Wind River in my last corporate role, the ATS had a resume scoring feature that rated each resume 1 to 5 stars. And it was not reliable at all; I learned to mostly ignore it. These tools are not there yet. I'm sure they'll improve over time, but they're not there. Yeah.
I've heard a lot too in terms of the cost savings that companies have pursued, especially in the HR function, which I think is sadly just a big target at all times for cost savings. It's just one of those functions that doesn't have the greatest rep and is not seen as revenue-producing. I don't know if companies are just thinking, “Well, we know the AI is not perfect or even accurate, but it's cheaper, and it kinda does the job.”
So I don't know. I don't know what the future is of these tools, especially in my professional field and the use cases that I've employed them for. But, yeah, I guess if it gets a lot more accurate and it can function like a human brain can, that's great.
But then at the same time, there have also been roles that I've recruited for where it's taken me maybe a week or two to truly understand the profile of what I should be looking for. And I just can't imagine how much AI training would be needed to have some of these models be able to do that quicker or better. I think it's a lot harder than people assume.
Yeah. There are a lot of examples where, with some of these hiring tools that scan resumes or try to predict who's going to be a fit for a role, one of the big issues is bias. All AI tools have biases. It comes from the data that they're fed with, and from the people who train them, who maybe didn't think about some of these potential sources of bias.
I don't know if you had heard, but I believe it was Amazon [who] had run an experiment where they tried using an AI-based hiring tool. And they found that it was reinforcing biases in a harmful way, and they ended up yanking the tool and don't use it anymore for that reason.⁴
Yeah. I mean, that doesn't surprise me. And I think companies are going to keep trying these experiments in the HR and recruiting realm, again because it's an easy target. It's the part of the company that allegedly does not add to revenue at all. I always think, like, do you want to talk to a robot when you need therapy? Not that a recruiter is a therapist, but, I think it tends to be a situation where some people have a lot more anxiety about switching jobs and just feel more comfortable talking to a human.
But even for hourly roles at Amazon, my understanding is they don't even speak to people anymore. They can just log in to an app, apply, and then they get an email or a text saying, “Oh, we have a role open and there are shifts available. You can start.”
So I don't know if they even see people until they show up for their first day.
I'm interested to see if that continues into higher level roles. I don't know if it's a great experience for lower-level roles. It may just be an indication of how a company values a certain worker. I don't know. I don't want to get too deep into it.
But, yeah, there's a lot of just attempts to automate or increase efficiency or lower cost in a lot of different aspects of HR.
Yeah. I've also seen some things about not just the filtering processes being automated, based on text and run through a computer system, but also, for instance, one-way video interviews, and how those are really not fair to certain people who just aren't comfortable on camera. It makes things harder for them. And that's something that a human recruiter can understand and address effectively.
Those tools are actually interesting, because I did one recently where I actually thought, “You know, this is kind of more fair”, in the sense that it gave me, like, 2 or 3 minutes to prepare once I knew what the question was. So I could have been Googling an answer. I could have ChatGPT'd something and made sure that I sounded a lot better than maybe if I were put on the spot.
So I think there's certain applications or certain people where it actually would be more advantageous. But, yeah, I think to your point, most people want to be able to gauge, “Does the recruiter sound excited about me? What kind of a chance do I have?”
I know I can tell that when I talk to recruiters, because it's what I would have done. And even things like, when I had a candidate that I thought a hiring manager was really going to want to talk to, I made sure that I was on time, that they knew I wanted to talk to them.
And sometimes I'll read into it negatively if a recruiter's late. It doesn't always mean anything, but sometimes it does. And I think that's the other thing that people are hoping to get out of those first conversations that are now done in one-way video. It's just a sense of, “What kind of effort should I put into this process?” I think oftentimes it can be an indication, by how the recruiter reacts, of what sort of shot you have at getting the position.
The other concern that I think comes up is that interviewing isn't just a one-way process. The candidate needs to be able to ask questions about and learn about the company as well. And it seems like those formats might not support that as well. Have you had any experience with that aspect of it in your job searching?
That could be a real downside, right, in the sense that I don't get to have some of my concerns addressed immediately. And I might get through the whole process, or invest a lot of time in the process, and get to the end, and they're like, “Oh, we're paying $40,000 less than you want”, or something like that.
I feel like the use case where I'm seeing some of this automated technology employed the most is high-volume roles. And also school districts, where they don't have a big HR department, and they may not have the administrative funds to pay for a big recruiting team, and so they're just trying to save time.
But, yeah, I don't know. I guess I feel like those companies probably think that if you pass that one-way interview, you will have time to ask your questions. And the worst you've done is lose 30 minutes of your time before you get to a real live person you can ask those questions of. But I definitely agree with you and see your perspective. It's just that you won't feel like the people are excited to talk to you. And when you get a video interview, you're just kinda like, “Ugh, okay, you're clearly trying to save time here”. <laughter>
Yeah. So you're seeing it used more as a first filter, like a screening? It's not the entire interview process?
Yeah. It's usually a screening for most roles that I've interviewed for. Except maybe for companies like Amazon: I don't even think they do a video interview. I think they just ask, like, “When can you work?”
<laughter> “Will you work for this amount?”
Because my understanding is some of the Amazon distribution centers are now competitive enough that they can hire people that way. Like, there's enough people that want to do the job that they don't need to really woo people.
But, yeah, not all companies are in that situation. And even for the ones that are, is it a great candidate experience? I don't know. And in the case of Amazon, I think they've probably offended people's sense of decency enough, and are still in business, that they're not worried about it. <laughter> That's my guess.
Yeah. I don't have any experience with the higher-volume hiring. But I know, as someone who has been hiring in engineering and for leaders, that one of the things that I've always felt was under-recognized was that a wrong hire is hugely expensive.
Yeah.
It hurts the team, and it takes a lot of time to address it and to get the right person in. And sometimes there are restrictions that prevent you from hiring to replace someone if you have to end up dismissing the wrong person. So it's just hugely expensive.
Yeah.
Seems shortsighted not to invest the right effort in finding the right person.
Yeah. I've always been very much a proponent of structured hiring processes, which is basically what Greenhouse is built around. It's an ATS tool, and it's the one I'm the most familiar with.
But, yeah, I mean, humans are complicated. I'm sure you've seen there are often people in companies who are like, “I don't like interviewing”, or “I don't want to follow a script. I just like having a conversation, you know.” Not everybody is as concerned that the hire is objective. And there may even be some subjective characteristics that people feel a little bit more passionate about. So I think, in general, hiring is just extremely, extremely hard.
And I think some of these tools probably are hoping to get a little bit more objectivity built into the process. It's just very hard. It's very hard to measure soft skills, and sometimes people's personalities just aren't right, you know? Like, some people just, for whatever reason, rub people the wrong way, or they're too direct, and it hurts team morale. Like, I worked for a director once. He was very direct. And the rest of my team would just be like, “Just don't worry about him”. But it was a thing, you know?
Now whether that's right or wrong? I feel like with hiring and people, you get into so many ethical questions when you're even trying to evaluate a person. Is it ethical to reject someone based on their personality, or their communication style? Because it's probably something that's a little bit beyond their control or it's hard for them to change. And should you just focus on hard skills and things that are more objective? And that I think is going to be a conversation that will be very difficult to get people to ever agree on.
Anyway, I would say that the hardest [part of] being a recruiter is <laughter> trying to herd the cats and get people to make a decision, because everybody has a different idea on what's going to be best for the future and for the team.
Yeah. This has been a really good discussion on the professional side!
I want to jump back for a minute to the personal side. You were talking about your car and how it has these smart features on it. I'm curious as to how well you find that they work. Like, how often does the car automatically brake when it really shouldn't or doesn't need to? Or does it not brake sometimes when it really should, and you have to obviously still be paying attention, and doing the braking yourself?
Yeah, I would say the car actually is one example of AI, though I don't really know how they program the car. It is a computer on wheels. It works ALL the time.
Wow.
It's probably saved me from an accident a couple of times. I will say when it brakes, it brakes hard, and you lurch forward pretty quickly, but that's better than backing into another car and causing damage. The car that I have now, I've never seen it malfunction, ever.
Wow. That's impressive. Does your car get Over The Air updates? Do they send new versions of software to it from time to time? Or is that something that maybe happens and you're not aware of?
No. So I don't have a Tesla; I have a Volkswagen, and I think this is their first electric car. I've definitely had a couple of recalls and some things that they had to fix, but the dealership does all the updates. I have to take it in to have it updated.
So there's nothing, as far as I know, happening in real time. The only thing that happens when I get it updated is that the controls will sort of change. One thing that happened on the last update is that now my car remembers my seat position on both sides, where it didn't used to do that. So when I get in the car, the seat moves up so that I can touch the pedals. It just does that automatically. It didn't do that in the original software version.
But, yeah, I know Teslas have a lot of Over The Air updates, and that's pretty convenient. I don't know why Volkswagen doesn't. Either they don't have that technology, or they think it's safer not to. I'm not really sure, but I can just say in my case that all of the safety features have worked 100% of the time.
That's really awesome. Yeah. One of my teams at Wind River was responsible for the Over The Air update feature. And one of the concerns that I know some people have is, how secure is it?
Right.
I mean, think about how easy it could be to hack a car and attack it and inject a virus or malware into it. So that Over The Air update mechanism has to be really secure. And what happens if the car battery dies in the middle of an update and you ‘brick’ the car? There are a lot of technical reasons why OTA updates can be hard. It sounds like Volkswagen is maybe just a little more cautious. So that might not be a bad thing.
There's a Netflix show, I'm just blanking on the name, but one episode was on hacking cars. And, basically, what happened was somebody hacked a car, a very computerized vehicle, and just pressed the gas and crashed the car remotely.⁵
I used to recruit for cybersecurity people as well. And I definitely think that is a huge concern that people have, in terms of any cybersecurity initiative, whether it's a car or your phone or things that run in our oil and gas pipelines, whatever. That I can definitely see as a big fear. And I don't know. Maybe that's why I chose an older company that's probably a little more risk-averse than Tesla.
Yeah. I actually know some folks who are holding off on buying newer cars because they are not sure if they can trust all the new smart features. Or they are concerned maybe about the data that the car is collecting about them and what happens with that data. So, a lot of concerns there about privacy.
For instance, some of them have not just the external cameras to look for a car behind you, but internal cameras that do sensing inside the car cabin. Or the infotainment system where you've connected up your phone. And now all of a sudden this data from your phone is available in the infotainment system, and then what is the car company doing with that data?
Yeah. Yeah, Volkswagen has sent me their data privacy policy about 7 times in the mail. I know people are worried about it, and I think about it too. I don't even think I have a key to get in the door, so if my fob breaks, I don't think I can get in the car. So, yeah, there are little things that you don't think about when you have your 1980 five-speed Toyota, where you can basically get in and out of it, and it's all kind of manual. There are definitely benefits to that. But I don't want to pollute the environment, and I would rather have a battery-powered car. And they're all very computerized, just because they don't have an injection system.
Our next question is about the concerns around where AI and machine learning systems get the data and content that they use for training the models and tools. A lot of times, companies will use data that we put into an online system or published, or they collect it through something like a car, or a signup on a website.
Yeah.
And they're not always transparent about how they plan to use our data when we sign up for them. So I'm curious to know how you feel about companies that use data and content for training their AI / ML systems and tools, and whether you think companies should be ethically required to get consent from, and compensate, the people whose data they want to use for training.
I'm so removed from the whole process of training AI models that I don't really think about it too much. I'm sure there's data being pulled from pretty much everything that we do. Like, even the fact that I carry my phone when I go for a walk, I'm sure that's being tracked for some reason - Apple Health or whatever.
But, yeah, I don't think about it very much. I would say I personally am more aligned with the European Union approach to privacy. I know about GDPR and CCPA from sending marketing emails for recruiting, so I understand that there are a lot more guardrails there.
And, as someone who was around when the Patriot Act went into effect (without getting too political, that's also something I don't agree with), I think that we're being surveilled and watched a lot more frequently than maybe is best for a democratic system or for people's rights and privacy.
I would be in favor of strengthening that, in whatever capacity we can. I think in this country, with any kind of regulation, most companies will just claim that it will kill jobs. I guess in some cases it will, but I think it's always up to a profitable company to decide that they don't want to pay people anymore.
I don't know if it's human nature, or if it's just sort of the culture of our country since the eighties. But there does seem to be just a big focus in the United States about efficiency and cost-cutting over pretty much everything else. And I think at some point, that's going to break, and I hope it's not too severe of a consequence. But that's where I feel like it's headed if the reins aren't pulled in a bit.
You mentioned GDPR. I think a lot of people have heard of that, the General Data Protection Regulation. But you also mentioned CCPA. Could you maybe talk a little bit about that, for our audience members who might not be familiar with it?
I would call it ‘GDPR light’. GDPR is extremely extensive in terms of the requirements for protecting EU citizens. Or, actually, you just have to be located in the EU, I think, and you have those privacy rights.
But CCPA is like California's version of GDPR, and it's pretty extensive. It basically says that you have a right to know what sort of data is being collected about you if you visit a website, for example, how it's going to be used. And then I believe you also have a right to request that they delete it or return it.
The other way that I learned about this is that I built a website for a business I had going before I moved back to Pennsylvania from California, and there's just a lot of legislation. Basically, there are a lot of rules around how your data is collected, your rights to have visibility into that, and then to ask that it not be used.
I don't remember specifically how much more extensive GDPR is than CCPA, but I know CCPA is not as overarching; GDPR is a little bit more extensive and involves more. CCPA is basically California's attempt to give people more data privacy rights. So people in the state of California have the ability to request that their data be used in a certain way or just not be used at all.
So as someone who has used AI-based tools, do you feel like the tool provider companies have been transparent about sharing where they got the data that they used for their AI models, or whether the creators of that data consented to it being used? Or do you feel like they're not being transparent?
I think it depends on the company, and I would say most companies are only going to be as transparent as they need to be. Again, I use this tool called Claude, which is Anthropic's tool. And sometimes I'll punch in a prompt like, “Give me a controversial topic for a podcast that a lot of people will want to hear about”. And it'll write back, “We're not comfortable giving you a controversial topic that might incite a toxic or discriminatory or racist conversation online”, or something like that.
It just won't give me the answers. But I could do that in Google's version, and it'll suggest away. It'll say, “Do a podcast about Nazis” or something, right? That's an extreme example.
So I think it does vary somewhat, depending on the tool. It seems like there are some companies that are trying to have a little bit more of an ethical guardrail. But, of course, that's the ethics from the perspective of that company, and it may not be what is universal.
So, to get to the point, I would say generally no. Just because in the US, we just don't have a lot of guardrails around data privacy and data use in general. We certainly do not have anything nationwide close to the GDPR, and I think it would be extremely difficult to get that done, unless there's more public outrage over it.
And I just think we live in a country where half the country sees any regulation at all as really bad. And that will take something pretty monumental, I think, to turn around.
Yeah. I think that's a pretty accurate assessment of where we are. Companies will only do what the market forces them to do.
Right.
And so it’s on us, as consumers or members of the public, to say that we want to only support tools that do behave ethically, and do the right thing.
Yeah.
As consumers and members of the public, our personal data or content has probably been used by AI-based tools or systems. Do you know of any cases that you could share?
Hmm. I can't think of anything personally. I just can't think of an example. But I'm sure that I've given my fingerprint or the outline of my face in ways that I would rather have not, but that were required of me to create a login, for example.
I think the IRS is doing it now, where they're collecting biometric information because they don't want people's tax returns stolen. And I forget the third-party vendor that's running that for them. But, yeah, for driver's licenses, you now have to give a little bit more information, allegedly for fraud detection reasons.
But, yeah, I can't think of anything, to answer your first question specifically. But I feel a level of discomfort about having to give out so much identifying information, especially because I don't know how, in maybe a less stable democratic environment, that data could be used by law enforcement in the wrong way, for example. So, yeah, it does feel a little bit, you know, People's Republic of China-ish sometimes, in terms of the level of invasiveness and the ease with which people can be tracked by all sorts of different markers.
Yeah. Here in the US, it seems like the data brokers and the large social media companies are more the ones that are collecting our data, and then doing things with it that are well beyond what we would have originally agreed to when we first signed up.
Yeah. Well, Facebook, of course. And I deleted my Facebook after that first controversy, but I have Instagram, so I'm sure that really accomplished nothing. Yeah, it's really difficult to safeguard your privacy. And I think that's another thing that GDPR accomplishes that we don't have: the law is designed to make it fairly simple to protect your privacy. Whereas here, you need to go into your Facebook account center and change all your settings, and occasionally delete things, and request a copy. It's a huge process, and it doesn't seem very simple. It's obviously built so that they can still extract the things that are valuable to them while we have some semblance of control.
But yeah, I don't know. You're right. That's my view. I feel a little bit resigned about it, unfortunately, because it just seems like an overwhelming problem. But I'm not for it, I guess.
So as we've been talking about, public distrust of AI and tech companies has been growing, partly because of what they're doing with our data, and what we're learning about what they're doing with our data.
So what do you think is THE most important thing that these AI and tech companies would need to do to earn and keep your trust? And do you have any specific ideas on how they can do that?
Well, I mean, you've worked for these companies, so you have a little bit more of an idea of whether they actually want to earn my trust! I don't feel a strong sense that they do. And, again from a progressive and very biased leftist perspective, it reminds me of free trade agreements, where the line was that there wouldn't be a lot of impact to the domestic economy, specifically in jobs, and that was not true.
To answer your question, I don't know if they want to gain my trust. I would kinda ask you: Do you feel like these companies want people to trust them? Or, do you feel like it's purely about a profit motive?
It is a cool technology, and I definitely see the potential to improve people's lives. But I guess I, and probably a lot of other people, are just a little bit gun-shy about that type of proposition, because in the not-too-distant past, that push for efficiency and cost savings and increased profits has come at the cost of people's lives. The labor movement and entire communities have been really affected in a negative way by some of the globalization trends. It hasn't all been negative, and it doesn't mean that we can produce everything in the United States and not trade with anyone ever again. It's just that that seems to be the general culture of how corporations operate, at least since I've been alive (I was born in 1985). I don't know that the business world has ever had a different culture, at least in my lifetime.
It's difficult to think that these companies really do want to earn my trust, or are thinking beyond just how cool the technology is and how disruptive it could be. I don't know if they're thinking beyond the pros to the cons and interested in addressing that, or feel it's even their role to.
The way I look at it is that we live in a capitalist society here in the US, and, basically, businesses will do what they are rewarded for doing. And up until now, they've been able to collect rewards without having to earn our trust. And I think part of it is on us as consumers to say, “No. If I don't trust you, you don't get rewards. You don't get my business. I won't use you. I will use somebody else who does.”
Yeah.
And that takes some effort and some organizing, honestly, to make that happen and to put that pressure on them. But we are starting to see some momentum in that direction.
Yeah.
Certainly, we've seen it in Europe, where there's legislation and different court cases that are starting to happen. And I think regulation is going to lag behind, unfortunately; regulations pretty much always do. But I agree that they're not going to just say, “Yes, we want to earn your trust” without there being a compelling business reason why they NEED to earn our trust.
Right. Yeah. And I guess there's always the idea of the long-term value proposition versus the short-term one. And I think that is another piece that is not talked about enough. American companies right now are very good at hitting their quarterly earnings call goals and thinking about what's going to be best for the company in the short term, and not necessarily over the long term. And there are businesses that subscribe to a B-corp model, where they want the company to last for a long time and operate in a little bit more of a long-term-thinking way.
So, yeah, I agree with you that it will be up to consumers a little bit, to demand a behavior change. But also, one thing I felt like I did in organizing was restoring some trust in the government as a regulatory body that acts in the public interest and works for the public good, and isn't necessarily always thinking about GDP growth and what's best for Wall Street and the economy. With GDP being basically the only measure of what's good for the economy. Or stock prices, right?
So I hope that people get sick of it enough. And we're definitely moving into a very interesting four years, so we'll see what comes of that. And I'm just glad that there are people like you having conversations about really important issues, especially with AI, which, as you know, is affecting the labor market now. And even if people haven't been replaced by an AI tool, companies have been doing a ton of outsourcing and offshoring these last two years as well.
I learned the other day that the company behind ChatGPT, I think, was using contractors in some part of Africa and paying them, like, $1.50 or $2 an hour to train the models. And you tend to see some of that worker exploitation happening, just not in front of our eyes, right?
So that's another thing to address with AI and globalization and this entire trend of efficiency over everything. At some point, people have to decide people's lives are important, and they deserve to have a decent quality of life. So, hopefully, this AI revolution adds to that and doesn't detract from it.
Yeah, you bring up a really good point about the data annotation and the way that they're exploiting people at very low wages to annotate the data. Training an ethical AI tool is not just about sourcing the data ethically. It's also about the way that they do the development, and the annotation of the data is a key part of doing that.
Yes, and I think it's an interesting time in the technology industry. I know that there have been boom and bust cycles where certain roles in the technology sector have been offshored and then brought back. And I know a lot of consulting companies tend to work with offshore teams, with sort of mixed results. And no offense, there are excellent programmers in India and other parts of the world; it's not a black-or-white issue. But I think you've seen these types of ethical questions in other industries, like fashion, for example, in terms of how cheap garments are produced in developing countries where the wages are very low, the working conditions are very unsafe, and the supply chain is very hard to audit.
AI and this revolution, I think, are bringing some of those same ethical questions about the supply chain, and just how the product is developed, to the forefront in technology too. Whereas maybe 4 or 6 years ago, we weren't thinking about it as much. As a recruiter, anyway, I was just hell-bent on trying to get US developers into roles. And we're seeing some of those development jobs be done in other countries, at least in the short term. We'll see how that sticks.
But it's a question of: is it better for these folks to have any money, even though they're being treated terribly? Or is it better to have a complete solution? And my opinion is, if you're going to hire someone to do a job, you should hire them to do a job with some dignity and decent working conditions.
So, yeah, it's just a really complex topic, and again, I think it's going to be hard; it's difficult to get people into alignment here in the US. But I hope that these next four years especially bring a lot of these really important questions to the surface, and that people start to feel really affected by it and want to get involved in a change in a positive direction.
Yeah, I agree completely. That's super important. So, Evan, that's the end of our standard questions. So thank you so much for making the time for this interview. Is there anything else that you'd like to share with our audience?
No. I would just love it if folks who are looking for a new podcast, something to waste an hour of their week on, checked us out. It would be awesome if folks wanted to follow or subscribe on YouTube, or check us out on Apple Podcasts or Spotify, and I'll make sure I get that link to you.
Great. Yeah. I think the work that you're doing on polarization is really critical. We've certainly seen some ill effects from that, and we've got to tamp it down. What you're doing in this area is really going to be helpful and valuable, I think.
Yeah. I mean, I think the reality is 99% of us have the same interests, right? And when we're so polarized, it's very hard for us to do anything about some of the negative things that could come out of AI, when there may be a small minority of people who are just sort of getting away with whatever, because we're too busy arguing with each other.
So, anyway, yeah, that's my take. And thank you again for your time, Karen. It was great seeing you.
Oh, thank you. Yeah. I enjoyed our conversation, Evan. Thank you so much.
Interview References and Links
Evan Miller on LinkedIn
“Strange Hills To Die On” podcast:
Audio: Buzzsprout, Apple Podcasts, Overcast, Spotify
Video: YouTube
On Bluesky
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world who are being affected by AI are using their wetware (brains and human intelligence) with AI-based software tools.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.
We want to hear from a diverse pool of people worldwide in a variety of roles. No technical experience with AI is required. If you’re interested in being a featured interview guest, anonymous or with credit, please get in touch!
AI6P is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts.) All new posts are FREE to read (and listen to). To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Audio Sound Effect from Pixabay
1. Merriam-Webster chose ‘polarization’ as word of the year for 2024, on Dec. 9, 2024.
2. “AI didn’t sway the election, but it deepened the partisan divide”, by Pranshu Verma, Will Oremus, and Cat Zakrzewski / Washington Post, 2024-11-09.
3. “The apocalypse that wasn’t: AI was everywhere in 2024’s elections, but deepfakes and misinformation were only part of the picture”, by Bruce Schneier and Nathan Sanders / Harvard Kennedy School, Ash Center for Democratic Governance and Innovation, 2024-12-04.
4. “Insight - Amazon scraps secret AI recruiting tool that showed bias against women”, by Jeffrey Dastin / Reuters, 2018-10-10.
5. This isn’t the Netflix show Evan was referring to, but it’s a good explanation of car hacking: “Let’s Break Down That Eerie Tesla Scene in ‘Leave the World Behind’ ”, by Rachel Ulatowski / The Mary Sue, 2023-12-13.
If you enjoyed this interview, I’d love to have your support via a heart, share, restack, Note, one-time tip, or voluntary donation via paid subscription!