6 'P's in AI Pods (AI6P)
🗣️ AISW #079: Alicia Yanez, USA-based senior product & ops leader

Audio interview with USA-based senior product & ops leader Alicia Yanez on her stories of using AI and how she feels about AI using people's data and content (audio; 38:45)

Introduction - Yanez

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click here to read the whole post online.)

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Interview - Yanez

Karen: I’m delighted to welcome Alicia Yanez from the USA as my guest today on “AI, Software, and Wetware”. Alicia, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Alicia: Thank you so much, Karen, for bringing me here today. My name is Alicia. I am a very proud Latinx and Chinese product and operations leader, born and raised right here in the heart of Silicon Valley. My father was an industrial engineer from Peru. He inspired me early on, showing me how education can shape my North Star.

Back in college when the first Android phones came out, I’d stay up all night trying to understand how they worked. That curiosity led me to pursue a dual degree in English literature and industrial engineering, with a joint program from Emory and Georgia Tech. When I started my career, I started off as any supply chain industrial engineer would at Coca-Cola. But after four years, I needed to move and shake things up.

I wanted to move to Peru, where my father was from, and I looked at everything from being a potato farmer to selling WiFi. I landed at Uber, which was the catalyst of my whole career. We were growing Uber from the ground up, from about 1000 trips to a million trips. The highlight I have is just handing out flyers to drivers in the highlands of Peru.

After being a director at Walmart and Etsy, I took a year off as a sabbatical to reset my career and redefine how I show up and work. That led me to realize that I really want to go back to startups, to those days at Uber when I was trying to build new products that can really change people’s lives. I’m passionate about sharing what I’ve learned with you around AI and ML.

Karen: Awesome. That’s very exciting. And I love your background. I studied industrial engineering and operations research myself. It’s a good skill set, absolutely. So tell us a little bit about your level of experience with AI and machine learning and analytics. Sounds like you are using it professionally. Have you used it personally, or have you studied the technology?

Alicia: In my career, I’ve used machine learning as the connector between operations and data science and figuring out how we can meet certain metrics that are critical to scaling a business.

At Walmart, I partnered with data science to build a real-time driver matching system, which I’ll talk a little bit more about when we get to it. And at Etsy I also worked on fulfillment and data accuracy, where machine learning models were critical to meeting buyers where they were at, and building confidence.

When I was on the sabbatical, all these generative AI tools were coming out. So I jumped into generative AI and I tried to figure out how it worked. And I’ve used a lot of the different LLMs, ChatGPT. Recently I’ve been exploring agentic AI and how it can help me and my husband in our day-to-day lives.

Karen: Oh, very interesting. You mentioned a couple of projects. In my experience, the projects where you have domain experts who team up with data scientists, those tend to be the ones that have the most practical value. And it sounds like you’ve had some fun working on projects like that.

When I took my IE and OR courses, and this was years ago, they were very heavy in statistics and optimization and using software to analyze data. Did you have any courses like that in your Emory/Georgia Tech program, anything that related to statistics or AI or machine learning?

Alicia: At Georgia Tech, statistics was the foundation for learning, and I took so many courses there. As I’ve been learning more technical tools and resources lately, I am bringing all of what I learned to light, and that’s been fun.

Karen: Can you share a specific story on how you’ve used a tool that included AI or machine learning features? I’d like to hear your thoughts about how the AI features of those tools worked for you or didn’t. Basically, what went well, and what didn’t go so well?

Alicia: Across my career, I’ve seen marketplaces live or die on the parameters of trust, fairness, and application. When I was at Walmart during the pandemic, it was about ensuring fairness to our drivers. As we scaled last mile delivery at Etsy, it was about precision and trust for buyers. And now working with startups, I see the same thing come up again and again.

When I was at Walmart: mind you, this was the height of the pandemic when online grocery was spiking, especially in rural areas where Walmart was most often the main provider. There was this need for online grocery, and at that time Walmart was still heavily relying on third parties. My role was to own this Spark in-house driver app.

Now, the problem is that it took three hours to find a driver. When I came in, coming from Uber, I knew it was so important to find drivers in real time. So I brought in the team from operations and we worked with data scientists to figure out what was the right solution. The idea was “Let’s move away from these manual assignments to a self-learning policy that could balance multiple goals.”

The challenge was that early on, the model over-optimized for speed, which meant that many drivers, especially these new drivers, were left behind. We brought these drivers in to meet with our data scientists and to listen one-on-one to their concerns. And as we did that, the performance of the model got better and better. We were able to release it and we saw that assignment times dropped from nearly three hours to real time. It was first three hours, then it was two hours, and one hour. It was like that for several months until we got to real time. And we saw that fairness improved. We saw the system was able to scale for millions of deliveries. And it turned last mile delivery from this bottleneck to a competitive advantage.

Karen: That sounds great. It sounds like the tools worked well in the end at Walmart. Did you run into any bumps when testing it, where the models didn’t do what you wanted to do at first, and you had to make adjustments?

Alicia: In the beginning, the weights of the parameters were not optimizing. And so we had to continue giving data to the models. We had to work with the data scientists to say, “Okay, how can the model work correctly and how can we get in a position where we feel like it’s sending enough offers to new drivers?” Because the model in the beginning would send offers to the super drivers. We knew that cost of acquisition for drivers was so expensive that we needed them to have a good experience from day one.

Karen: Yeah, that sounds like you found a good solution to it, and it’s good that you were monitoring it for that kind of fairness from the beginning. So how about at Etsy? You had mentioned some work there.

Alicia: Etsy was a very different challenge. We were trying to fix these estimated dates that were much longer than the actual delivery times, because the predictions came from sellers and then carriers who had these wide ranges of delivery dates. We knew that every day we shaved off the estimates would lead to a huge conversion lift. The trade-off was that if we set these delivery dates too conservatively, on-time delivery could be 96%, but conversion would suffer. If we shortened them too aggressively, conversion would rise, but on-time delivery might slip closer to 90%, losing buyer trust.

The solution we had, after working multiple years on increasing delivery speed, was to build a supervised learning model that could make transit predictions much more precise, blending historical shipping data with real-time carrier signals such as weather delays. We had to make sure that we were meeting customer needs. We knew that buyers were far more likely to hit ‘purchase’ when they trusted delivery promises. At the end of the day, when we were able to get the model in production, we saw great improvements. We were able to unlock through our strategy about 180 million in incremental GMS.

Karen: You mentioned GMS – can you explain that acronym for our listeners?

Alicia: Gross Merchandise Sales.

Karen: Okay. That’s obviously an important metric! And you also mentioned working with startups? How is that going?

Alicia: I’m working with different startups that are coming into the e-commerce space with the same issues. They’re seeing these long estimated delivery dates, which hurt conversion. So the question, once again the same one we faced at Etsy and Walmart, is: how do we apply AI and ML to make these predictions better while balancing trust and precision? It’s a problem that all companies will have to face, and it’s how quickly they can get the right data and adapt it to ML to increase buyer trust.

Karen: It sounds like you’ve had some really good experiences with using what some people might call traditional AI to solve problems, and that’s great. A lot of people now, when they hear AI, they’re just thinking about generative AI tools. I’m wondering if you have any examples of when you’ve used those tools, things like ChatGPT, Gemini, and Copilot? I’m curious what has worked well for you and what hasn’t worked so well for you with those?

Alicia: Yesterday my husband and I posted a LinkedIn article about our experience building an AI agent for budgeting. [link] The problem we were facing was that most of these budgeting tools are reactive. They tell you you’ve overspent and there’s not really this reinforcement learning of, like, how do we determine your habits and then change behavior? And so my husband and I built this AI agent that pulls our transactions daily and sends us humanized nuggets like “Your budget for coffee today just flat-lined. Tomorrow is a home-brew day.”

We worked together as partners. My role was scoping, “Hey, here are the product requirements. This is what useful feedback would look like.” My husband engineered the stack. And what worked well for the LLMs was to make the feedback feel fun and personal while actually still changing our habits.

What didn’t work well was relying on these models for raw data accuracy. I tried other agents where they would tell me my budget was like 1000 for the month, even though I knew it was much higher. And so my husband had to build the AI agent on his own.

The big takeaway for us was that automation alone isn’t enough. The best AI products can combine real time data with human insights.

Karen: I hear from a lot of people that they’ve tried using these generative AI tools, and for anything that involves math or budgeting or time or planning, they tend to fall kind of flat. So it’s interesting that your husband and you were able to find a solution that worked for you.

Alicia: In work with my startups, I’ve tried to do some data analysis, because of course, in a startup, you don’t have enough hands to do all of that analysis, and data is key for these startups, right? And so I would do this analysis and realize that, “Oh my gosh. The numbers that they’re having are just fully off”, or “I need to do a lot of prompting to get the data right.” But what I’ll say is, it’s just directional. It might give you an insight or nugget, but don’t rely on it for that.

Karen: Yeah. We need to use the tools for what they’re good at, right? For where they fit the problem and not try to fit them onto a problem that’s not really suited for them.

Alicia: Absolutely.

Karen: Yeah. Great. So have you avoided using AI-based tools for some things? If so, can you share an example of when, and why you chose not to use AI for that?

Alicia: Yeah, I’d say, human connection is something that, especially as you get older, can feel so distant, right? And so for me, when I am sending cold emails to people on LinkedIn, or reaching out to friends, I try to make sure that that communication is authentic.

I’m going to an event next week. The whole premise of the event is that we’re going to be paired up by our agents, and our agents are going to have all these simulated conversations. And then in person, we’re going to meet the people whose agents struck up the best conversations with ours. That’s cool, but let’s not lose the human element of what it means to be human and what it means to care and love for each other. Because I don’t think that an LLM can give you that.

Karen: So each person that’s participating in this event trains their own agent to basically act as their proxy, and then the proxies talk to each other?

Alicia: The company itself is using different agents to talk to each other. And then based on those conversations, we’ll meet with the right matches in person.

Karen: Oh, very interesting. Is this an event for startups in particular?

Alicia: It’s part of Tech Week in SF next week. There’s a whole week of different activities, but I thought this one would be interesting. I go to so many networking events and it’s so hard to figure out who to talk to. This is a great way to solve that problem.

Karen: That’s very interesting. I always wonder when I hear about things like that, how do they validate? How do they know if they got it right? And maybe they’re collecting feedback from you afterwards. Like, how happy are you with the people that you got to talk to? Something like that. Or maybe you don’t know that yet.

Alicia: I don’t know. And honestly, that’s why out of all these events, I want to go to that one. Because I want to see who they decide is a great connection for me and why. Maybe we’ll hit it off and have wonderful work chemistry and start a startup together.

Karen: Very nice. Are there any other examples of cases where you would avoid using an AI-based tool? Either work or personal?

Alicia: I’ve seen examples in recruiting where relying on AI systems can introduce bias. One colleague of mine was telling me that in her experience recruiting, these AI tools weren’t surfacing women or people of color, forcing her to go outside of the system and spend a lot of time finding people that she could bring on. That underscored for me what we’re seeing right now in Silicon Valley, where a lot of these leaders who are creating AI are men who are all making these decisions about what the future of these tools will look like. Bringing women and people of color and diverse opinions and thoughts into these tech roles is so critical for what the next generation of AI products will look like.

Karen: Recruiting is one area that’s been flagged several times as one that reflects traditional biases. It’s in the data that they’re using to train the tool. So the model tends to reinforce or even make worse the biases that are in that original data. AI does have the potential to be more objective and more fair than humans. But if the data already has biases baked into it, then in a lot of cases it can make it worse. And it’s great that your colleague was sensitive to that and aware of it and looking for whether she was getting a representative set of candidates.

Another study I heard of recently said that managers who use generative AI for messaging can lose credibility with their staff. Especially, as you were pointing out, when authenticity is important.

Alicia: Managers are so busy and using these LLMs to streamline communication is wonderful and great. It’s just making sure that you can be there for your team in an authentic way.

Karen: Yeah. I’m interested to hear, as far as the startups that you’re working with, whether they’re using any AI for decision making. That’s one of the areas where, again, people are overwhelmed and sometimes reach a point of decision fatigue.

Alicia: One note I will share is that I think meetings with note recording have become a necessity. And what I can see sometimes is, these notes capture so much information, it’s hard to know what information is important and what information is not.

So what I try to do when I’m sitting in these meetings is still take notes of, like, “Hey, these are the five decisions we need to make. Let’s see what information we can suss out from the whole conversation.” But let’s still, at the end of the call or afterward, make sure that those decisions are being made and communicated to the whole team. Because that’s the thing with startups is you’re going so fast, you’re making so many decisions. It’s often lost in translation what exactly is happening and what is not.

Karen: Yeah, that’s a really good point. I’ve heard from people who use different kinds of note takers, and some rely on them, while verifying, of course, what they get. I was on a call the other day and it said that I said something happened in 2022, and no, that was 2024. That was very wrong. And I don’t know how it came up with 2022!

Some people say mostly, “Yeah, it reminds me that, oh yeah, this person said that, and I forgot to write that down.” So it’s a good resource to have. But a lot of people just don’t want to rely on them at all, and they find that they don’t pay as much attention in a meeting if they aren’t trying to take notes. So it can work against you in that regard too.

Alicia: I can see it both ways.

Karen: And that’s one of the points with this interview series, and one of the reasons that it’s called “AI, Software, and Wetware”, is that we have to not disengage our wetware, our human brains, when we use AI tools.

Alicia: It’s a huge challenge. And I’m sure you’ve talked about this, but just seeing how teenagers are using AI tools. I remember I was overhearing a conversation at one startup where one of the interns was asking, like, “What did we do before LLMs were a thing?” I was just sitting there thinking, “Oh my gosh, okay. These are some of the questions people are asking.” We were okay, you know? It definitely makes a lot of work faster and more efficient. But especially when we’re young, we have to still be using our brains and learning and thinking for ourselves, and I hope that doesn’t get lost.

Karen: There’s some really interesting studies. There’s a whole area on Substack where people are writing about AI and education. And I’ve interviewed a couple high school students, some people that are in college, and going through medical school, and just getting perspectives from them. “Here’s what I can’t use it to teach me because I don’t know enough to ask it the questions.” So the whole impact on education, and people learning, and what are the right skills that kids should have nowadays?

Because AI is not going away. So teaching them how to use their wetware on a day-to-day basis and not just hand it over to a tool – that’s a really big area of discussion, and it’s so interesting.

Alicia: So interesting and so important.

Karen: We talked about the recruiting tool and the fact that the biased data can lead into a biased model that doesn’t represent people. This is a fairly common and growing concern – where these AI and ML systems get the data and the content that they train on. It sounds like for your projects, you were always working with company data. But in a lot of cases, the companies that are developing these AI platforms or the foundational models are using data that users have put into online systems, or they’ve published online. And companies aren’t always transparent about how they intend to use our data when we sign up for a service.

So I’m wondering how you feel about companies that are using data and content for training their AI and ML systems and tools. And specifically: for a company to be operating ethically, do you feel like they need to be required to get consent from, and credit and compensate, the people whose data they use? Or do you feel like it’s okay for them to be scraping everything?

Alicia: That’s a great question. My 2 cents is that companies need to be more transparent about how they’re using our data and how it impacts us.

Humans also need to be thoughtful about how they use AI tools and what information they’re putting into those tools. The domain is still so new and there’s a massive knowledge gap. For example, if one company is selling personal data to another, that should not happen. But what are the guard rails, and what are the guidelines, is something that I personally struggle with.

Until companies step up with clarity and lead the way, we’re going to be operating in the dark, and that erodes trust. So much of my experience is in marketplaces, where building trust with consumers is central to longevity. It’s going to be the same with all these LLMs. What will end up setting each product and each tool apart is which companies can build trust with their buyers versus which companies are losing trust.

Karen: Yeah, that’s a great observation. As far as the use of people’s data, there are over 40 active lawsuits in the US alone against companies that have used copyrighted data, or data that was taken in violation of a site’s terms of use, like YouTube’s, or stolen, pirated books, things like that. So do you feel like these lawsuits are something that you would like to see go through? Do you feel like these are appropriate? Or do you feel like this is something that we all should be accepting because this is what it takes to make the tools work? Although there are studies that show that it doesn’t.

Alicia: Lawsuits need to be handled delicately and we can learn a lot from what happens in those lawsuits. It’s important that we spend more time educating ourselves about what we can do to empower ourselves.

Karen: Yeah, there was this report recently about a new project called Common Pile. It’s a joint project, I think, between US and Canadian researchers. And they worked only with data that wasn’t scraped and wasn’t taken illegally. They worked with properly licensed or public domain or contributed data, and they were able to get models that delivered good results. And so that kind of puts the lie to the idea that, “Oh, we have to scrape it, or we can’t do all this good stuff.”

Alicia: That’s fascinating and amazing.

Karen: A lot of people feel like, “Well, maybe it’s just inevitable and they can’t do it otherwise.” Yeah, they can. To me, that bolsters the case that you can’t justify taking people’s intellectual property without credit, without compensation, or without getting their consent. But the whole issue of how to actually make it happen is pretty thorny.

Alicia: We need more leaders and voices in that space.

Karen: As someone who has used these different AI-based tools, do you feel like the tool providers have been transparent with you about sharing where they got the data for the models that they’ve built that you’re using, or whether the original creators of the data consented to it being used?

Alicia: There’s not a lot of transparency about where the data is coming from. It’s critical that you look at terms and conditions, but I can’t say that I recall ever reading terms and conditions. I should care more about it. A lot of people feel the same way. These systems are powerful, but also very opaque.

Karen: Yeah, I think a lot of that clarity has been missing. They tend to hide things in the terms and conditions, and they’re 20 pages of legalese that nobody can understand. There have been studies showing that 90% of people never read them. But it’s hard to fault people for not reading when they’re really not comprehensible. Some people have tried to put those terms and conditions into an LLM and ask it to explain “What does this mean for my privacy?” Because they really just can’t tell. There’s an attorney in Ireland that I had spoken with earlier in the year, and she said they’re just a horrible mechanism for informed consent because we’re not truly informed. And in a lot of cases we don’t really have a choice about opting out.

Alicia: Yeah, absolutely.

Karen: When you’ve worked with the AI-based systems that you’ve helped to build, what can you share about where the data came from and how it was obtained? It sounds like it was primarily internal company data, but can you elaborate on that?

Alicia: So when I’ve worked closely with these data science teams, while also leading product, in most cases the data primarily comes from operational and customer behavior within the platform. Things like actual delivery dates, or driver acceptance rates, or purchase patterns. What matters most when setting up the right experimentation frameworks is being clear about which metrics we’re trying to optimize for.

While I wasn’t personally the one sourcing the raw data, I was involved in shaping how that data is going to be used responsibly. For example, in the case of some of these e-commerce companies that I’ve been looking at, making sure that we don’t just optimize for speed, but also optimize for fairness for drivers or transparency for buyers. This is where the role of a product leader in these large consumer-facing companies can have a lot of impact in leadership. It’s not just about the data collection itself, but making sure that these models are trained ethically, in a way that aligns with the values the company wants to uphold.

Karen: Yeah. Startups are super interesting. I had spoken with someone last year about startups, and she made the point that the foundational values of the people who started the company tend to persist long after the company is up and running and has become something much bigger; they get ingrained very quickly. So it’s interesting to see, when you are working with these e-commerce startups, that they do care about fairness and being transparent with their customers. That’s really good to hear.

Alicia: Trust is such a critical part of these companies’ longevity and value.

Karen: Yeah, fairness especially is super important, so I’m really glad to hear that you and your teams have been prioritizing it. Some of that may be your influence directly?

Alicia: Yeah, a little bit. I always try to be as close to the customer as possible and really understand their needs and figure out how those needs can translate to key business objectives. It’s a balancing act.

Karen: As consumers and members of the public ourselves, our personal data and content has probably been used by an AI-based tool or system. Do you know of any cases that you could share? Without disclosing any sensitive personal information, of course.

Alicia: In job applications, I’m surprised at how much personal information they’re asking for. Even sensitive things like race or ethnicity. Early in my career I didn’t think twice about filling out that information, and actually thought that these companies were looking for diverse people. But as time has gone on, nowadays I just don’t trust how that information is used once it’s in these applicant tracking systems. Is it stored? Is it shared? Is it fed into an algorithm? And these tools are becoming increasingly commonplace to screen resumes. And so personally, I’ve stopped sharing information about my race. Because I don’t know what they’re doing with it. And I just wish for more transparency and trust baked into these recruiting processes, because it’s not clear.

Karen: Yeah, that’s a really good point, because when you’re applying for a job, they ask for this information. And you either provide it, or in some cases, like in the US with some of these demographics, they will give you a choice of “I prefer not to say”. But yeah, you always wonder if that helps or hurts? And it’s a little odd sometimes if it’s on the same application form. Okay, they’re telling me they’re going to keep it separate, but it’s right here.

Alicia: Yeah. Yeah, exactly.

Karen: Have you had any experiences with an AI-based interview? Some people have been reporting that they’ve had an interview where the other side of the interview was an AI agent and not a human being. Have you ever had that happen yet?

Alicia: I have not, but my husband is a domain expert in data science. And for one of these jobs that he was applying for, he had to be screened through an AI agent. They would ask all these specific questions just to make sure that he actually knew what he was talking about. And he was able to quickly move through these processes.

I, myself, am still a little apprehensive to do them, but I am seeing them more and more. Especially on consulting or part-time gigs, I see this being commonplace.

Karen: Yeah. It sounds like the tools could be helpful. I have to wonder how fair those tools are and what they make their judgments on. Is it based on how fast somebody responds? Some people, especially if they’re neurodivergent, may have different patterns of response and they may get misjudged for that. I’m really curious.

And an interview is supposed to be a two-way street. You’re supposed to be able to interview the company. I don’t know that that actually happens in those types of interviews. If it’s just a first step, maybe not as important. But it’s an interesting development. I’ll say that.

Alicia: One of the projects that he’s working on actually involves working with these different LLMs, trying to trick them and figure out where the information they’re responding with isn’t correct. There is always that necessity to have humans in the loop to validate the information.

Karen: It sounds like, with the job applications, you are very circumspect about what you share. Are there any other cases where you’ve given a company your data and your content, knowing that they might use your information for training in an AI system? A lot of the social media sites, for instance, do this now.

Alicia: Most recently I became aware that ChatGPT has this setting that automatically allows your content to be used for training. And you’re opted in automatically. But I don’t recall ever being asked that question. And I only found out because, in one of the several Slack channels I’m in, someone said, “Hey, by the way, this just got released. Go in and make this change.” And I did.

And I felt like that was such a huge miss on behalf of OpenAI. If you know you’re going to use people’s content to train your models, you should be upfront. And I felt like that was an opportunity for them to build trust. And so I’d love to see them, and the whole industry, ask for our consent before making these decisions.

Karen: That’s a really good point. Are there any social media sites that you use, or do you avoid them because of your concern about what they might do with your data?

Alicia: I do use some social media, but I definitely have a love and hate relationship with social media.

Karen: That’s totally understandable. Have you ever had a time where a company’s use of your personal data or content ever created any specific issues for you, such as privacy or phishing or loss of income, anything like that?

Alicia: Yeah, one time some private information, like my phone number, was linked to a private group that I’m in. And someone reached out to me because she was searching for a certain term in Google and my name popped up in the results.

Karen: Oh, wow.

Alicia: When she called me to ask me about it, and I was so surprised, she told me, “Hey, this is where I found it.” She sent me the screenshots and I was able to go and change that. But I couldn’t believe that that information was being publicized.

Karen: Yeah, that’s really disconcerting to find that out. And one concern with AI: people have been using and stealing data, and data brokers have been active, for many years. But if that data is getting pulled into a large language model and trained into it, there’s almost no way right now to get it back out.

Alicia: Yeah, absolutely.

Karen: Even if you find out afterwards, like you did here, that your phone number was out there and you were able to correct it, if someone scraped it already, then it’s going to be in there.

Alicia: Yeah, she did find me on Google search.

Karen: Yeah. That’s probably disconcerting, just finding out about that. I get so many junk calls. I get way more junk calls on my cell phone than I get real calls. At least the phone companies are using AI and machine learning to try to figure out whether a call is spam, and they notify me of that. So that’s a good thing.

Alicia: Yeah.

Karen: We’ve talked a lot about trust. I think the public distrust of AI and tech companies has been growing. And to some extent I think that’s healthy because we’re realizing more and more “They’re doing what with my data?” But if you think about what companies would need to do to earn and keep your trust, what do you think is the most important thing? And do you have specific ideas on how they could do that?

Alicia: This is where the companies behind these LLMs need to step up and do a better job educating their users, and making transparency part of the actual user experience. Things like consent prompts are so commonplace in platforms like Uber. Can we bring some of those learnings to using LLMs? Put that right in front of the customer and give them the ability to make a decision.

Right now there’s just so many different companies on the market and people are using them blindly for all these different purposes. Companies that win in the long term are the ones that build empathy and trust with their users.

Karen: Yeah, like you said, there’s so many different tools out there on the market. Some of them are just blatantly ignoring empathy and trust and transparency considerations. But some are emerging that are starting to lean on that, especially some companies out of Switzerland. The Swiss have this new free public open source, ethically sourced and trained tool called chat.publicai.co. There’s also Mistral Le Chat. Have you seen any other tools that are doing a good job of this that we could call attention to?

Alicia: I’d love to call attention to one of my friends from Walmart. He built an app called Ask Safely. This is an app that doesn’t store any data. It asks you for your consent. It puts privacy first in AI chats, has no memory, no training.

I am so inspired because I feel like if we had had leaders like this when social media was being built, what a different world that could be.

Karen: So the app’s called Ask Safely. Is that a mobile app or how can people find it?

Alicia: Yeah, it’s a mobile app. You can download it on Apple. I don’t know if it’s on Android yet, but let’s definitely link it on the podcast for listeners.

Karen: Sure. We can definitely include whatever links are appropriate for that:

Ask Safely by SafeLife Inc. is available now on Apple (Mac and iPhone)
Android access is planned for Nov. 2025
(join the waitlist at www.asksafely.ai)

Karen: I love calling out companies that are doing the right thing. They have a bit of an uphill climb competing against the companies that aren’t operating fairly. So I like to give them an extra boost whenever I can.

Alicia: Yeah, exactly.

Karen: Well, Alicia, thank you so much for joining me in this interview. That’s all my standard questions. Is there anything else that you would like to share with our audience?

Alicia: Thank you so much for having me on the podcast. It’s been fun. A couple of things to share: first, I’ve started a Substack called Built for Product. It represents not only my consulting business; after a one-year sabbatical and giving myself time to reflect on my values, I’m putting what I’ve learned up there, like the budget app I just talked about. I put up another post on how I shop my values. I’m going to continue experimenting and testing. If you’re interested in my story, I’d love for you to follow me.

I’m open to new product leadership opportunities. Please feel free to reach out if you need help scaling your marketplace or figuring out how to apply different AI to customer problems.

Karen: Absolutely, yeah, we’ll include a link to your Substack [Built For Product]. And what’s the best way for someone to connect with you about your leadership roles and opportunities? Would that be LinkedIn or contacting you through your Substack, or what would you like people to do?

Alicia: LinkedIn is great, too.

Karen: Okay. Great. Well, thank you so much. Any final thoughts?

Alicia: Thank you, Karen. This is amazing work you’re doing. Thank you for giving me this opportunity to join you today.

Karen: Oh, it’s my pleasure. I like hearing from people who aren’t the eight-figure tech bros who are running things right now, because I think what real people are doing with AI is what matters most. And you’ve got some great insights here, so I’m happy to share them with the world. So thank you for joining me!

Alicia: Thank you.

Interview References and Links

Alicia Yanez on LinkedIn

LinkedIn Article “How My Husband and I Built an AI Agent to Fix Our Budget (& What We Learned Along The Way)”, Alicia Yanez

Alicia Yanez on Substack - Built For Product



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

