📜 AISW #030: Lakshmi Veeramani, India-based AI software leader (AI, Software, & Wetware interview)
An interview with India-based AI software leader and architect Lakshmi Veeramani on her stories of using AI, and how she feels about AI using people's data and content
Introduction - Lakshmi Veeramani
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Lakshmi Veeramani
I’m delighted to welcome Lakshmi Veeramani, joining us from India, as our next guest for “AI, Software, and Wetware”. Lakshmi, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I’m Lakshmi Veeramani, and I bring 23+ years of experience in the software industry, with a decade dedicated to the field of AI.
Currently, I’m a Senior Architect at Persistent Systems, where I focus on developing the AI/ML/GenAI product vision and roadmap for large-scale projects. My responsibilities include designing the architecture for GenAI agent development and implementing it with AWS cloud-native services as an AI/ML/GenAI engineer. I also have the honor of leading the AI/ML/GenAI squad on my project, where we strive to implement engineering standards and best practices that drive innovation and excellence in our work.
I’m always excited to have discussions with you, Karen. Whether it was our daily one-on-ones during our time together at Wind River or our occasional catch-ups for our blogs, I truly value our conversations.
I do as well, Lakshmi; I’m so happy that you’ve agreed to do this interview with me!
Please tell our audience about your level of experience with AI, ML, and analytics - how you’ve used it professionally or personally, and studied the technology.
I have been professionally involved in AI, ML, and analytics for the past 10 years. However, my journey with rule-based fuzzy logic and neural networks dates back even further, to my M.E. project thesis on lateral autopilot design for aircraft in 1999-2000.
In the early stages of my AI career, I utilized machine learning algorithms for analytics, predictive maintenance, and anomaly detection. I always strive to stay updated with the latest advancements in the AI technology space. More recently, I've been focusing on developing generative AI-based agents.
My work primarily revolves around empowering engineers and professionals to make informed decisions, and I take great pride in that effort.
That’s a great background! Can you share a specific story on how you have used AI or ML? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
Good:
I've found that using ChatGPT for searching has become far more convenient than traditional Google searches. It helps me write code faster and more efficiently compared to searching through Stack Overflow.
YouTube's music recommendations are another feature I love. When I'm in the mood for a particular genre or vibe, I just play one song, and the algorithm takes care of curating a perfect playlist, especially since I don’t always remember all the songs I like 😀.
Google Maps has completely transformed the way we travel. I can distinctly recall life before and after its introduction. It has made long trips safer and more convenient, and I’m excited to see how it continues to improve, especially as more safety features are innovated and integrated.
On my iPhone, the Photos app's AI feature recognizes people. So if I want to create a birthday video for someone, I can simply click on their face, and the app gathers all the photos of them in seconds.
Microsoft Outlook's focus hour suggestions, based on my calendar, are helping me manage my time better. Similarly, Zoom’s automatic meeting notes have been a helpful feature.
The face unlock feature on my phone, and in banking apps like PhonePe, has made access so much easier and safer. Even at my office, checking into the building with facial recognition is seamless.
Those are all great examples of AI in everyday life. And I’m so glad to hear of your positive experience with face recognition. It’s one of the AI application areas that has been problematic for bias and not recognizing people properly.
So those are some things that have worked. Is there anything that hasn’t worked well?
Not so good:
The voice-recognition search feature in so many apps has had an unintended consequence. My daughter started using voice commands from the age of three, and now she hardly practices typing, which I feel might hinder her development of spelling and memorization skills at this stage.
I tried starting a YouTube channel and used various AI tools for content creation, but I found them less helpful than I expected. With so many tools available, it sometimes feels like those who master the tools become successful faster than those with original content but less familiarity with the tech.
That’s an interesting observation about your daughter and using voice commands - that relying on it might hurt her later, due to less emphasis on spelling and typing. Some educators I’ve talked to say that they are trying to find ways to teach their students how to use AI, but without it impairing their ability to learn the underlying skills. It’s a big challenge.
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I’m really not a fan of AI-powered health apps, especially when it comes to meditation and mindfulness. It almost feels like it defeats the whole purpose. Meditation is something spiritual, something I do for my soul, so relying on an app to guide that experience just doesn’t sit right with me.
As a parent, one of my biggest challenges is managing my daughter’s screen time. Ideally, I’d love to avoid giving her the iPad altogether, but it’s tough—especially in a world where devices seem to be everywhere.
I’ve read so many great books throughout my life, and I cherish those experiences. But as a mother, I sometimes wonder, will my daughter ever have that same relationship with reading? These days, she’s glued to her iPad from morning to night, and it feels like AI has learned how to capture the attention of kids and teens so effectively. It’s a different challenge than what our parents faced.
Part of me just wants to let her grow in her own way, without pushing her too hard into the “work hard” culture I grew up with. The truth is, I don’t even know what the future job market will look like in 10 years. AI is reshaping the workforce at such a rapid pace, I find myself questioning what kind of dreams I should encourage her to pursue. What does success even look like in a future that’s so uncertain?
And I feel like this is a challenge that parents who work in AI face more than others. We’re keenly aware of how this technology is evolving and how it might impact the next generation. It’s a lot to think about, and honestly, it’s something I’m still figuring out as I go.
I think parents everywhere are figuring that out as they go! 🙂
I know what you mean about reading. I’ve been an avid reader since I was 5. I do like being able to have a new book with me ‘anywhere’ on my phone, nowadays, but I still buy some printed books. It’s a different experience. I remember when e-books and e-readers first became popular, and people were worried that printed books would go away. They haven’t.
A common and growing concern nowadays is where AI and ML systems get the data and content they train on. Some have been trained on copyrighted books that were simply stolen. Some have used data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get consent from (and compensate) people whose data they want to use for training?
One of the things that’s been on my mind is how AI models, especially large language models, are trained on vast amounts of data without necessarily asking for anyone’s permission. This is especially troubling for the arts and creative communities, who I think will face some challenges in the short term. But over time, I believe original music will prevail. I can’t imagine people getting excited about a song that was announced as “composed by AI”.
From my own experience growing up watching Tamil movies and listening to music, it’s often hard to predict which song will become a hit. There’s no one-size-fits-all formula for success. Sometimes, fan loyalty to actors even drives the success of music, whether or not the songs themselves are that great. So, at least in my view, Tamil fans—and many others with deep cultural roots in their music—won’t easily fall for AI-generated music.
The famous Tamil poet Avvaiyar once said “Padal pudhiyadhu”, meaning “Songs are new”. I interpret this as music needing to feel fresh. If it’s just learning from the past, where’s the newness? (We could dive deeper into that, but I’ll leave it for another time!)
All great points - as you know, I’m passionate about ethical use of AI for music 🙂 And I know there have been a handful of AI-generated songs which have built up a lot of ‘streams’. It’s hard to know how many of those were just curiosity about AI vs. people thinking, “Hey, this song is really good, I need to tell my friends”. Or how many would come back to listen to another song created by the same prompter.
The other thing I’m seeing is that the streaming platforms are being overrun by AI-generated music. Most of it’s considered to be mediocre. And the AI platforms that scrape are going to be training on that ‘new’ mediocre input. It seems like it could be a race to the bottom for music quality. As you said - let’s save that topic for another time :)
There are so many areas where AI ethics are critical, and music is one great example. I’m sure you have more 🙂
Yes, Karen, I really appreciate your passion towards the ethical use of AI in music and the work you do!
As a working woman, I also have concerns about AI-driven talent acquisition tools. What if these systems carry biases, especially against women? We’ve fought long and hard to get where we are, and the last thing we need is to face discrimination from technology itself. It’s a real worry.
That’s absolutely a fair concern. In fact, you probably saw the recent report on Workday’s AI hiring tool being biased. (link)
AI is sometimes called a ‘stochastic parrot’ and it easily parrots the biases of the data it was trained on, and the people who trained it. Even when people are conscious of bias and try to mitigate it, it’s challenging to do. So yeah, AI bias in hiring and HR tools is potentially very serious.
In the software world, I’ve noticed that while we used to have just a few star coders on a team, now, with AI, almost everyone can code competently. The difference is going to be how creatively you approach problems; that’s what will make you a star in the current scenario. It’s not about learning new programming languages; it’s about thinking innovatively. And that’s where we, as humans, will continue to outshine AI. After all, AI is learning from what humans have already done—it’s not truly creating something new on its own.
Exactly - data quality and bias checking and ethical sourcing are paramount. And innovative thinking and problem solving are always going to be essential. Getting the problem identified and well-defined is critical, and we haven’t yet seen AI able to handle that.
A related concern that I don’t think we’ve fully addressed is how we as people leaders develop our software teams, how we grow beginning developers into experienced senior developers and leaders. Most junior people learn from struggling with progressively bigger, harder problems and being mentored by senior people. Using AI for ‘coding’ seems to disrupt that growth pattern. People ‘write’ the code, but they don’t necessarily understand the system they’re building.
That’s a great point - we need to figure out how new developers using AI-based tools can build the expertise to understand AI-generated code and systems well enough to know when they’re not designed or coded right.
Transparency seems to be a universal demand from people who want to have that kind of visibility and control over how their personal data is used.
Definitely, Karen - there's this saying we hear a lot these days: "Data is the new oil." And I think it's true—anyone who’s making a commercial product using the data should absolutely pay for the data they're using.
When you’ve worked with building an AI-based tool or system, what can you share about where the data came from and how it was obtained?
Personally, I wouldn’t create any tools based on public domain data. The work I’ve done so far is industry-specific—things like manufacturing, engine data, battery data—and none of that involves human data.
That makes sense. One of my first guests, Ralf Gitzel, described something similar, about how he and his team had to invest a great deal of effort to collect the machine data they needed for their models.
Good to know - I will go through that. I believe that when AI is used wisely, it can really help people. A lot of industry-specific tools are reducing human effort, improving safety, and enhancing lives in many ways.
I agree!
When we’re developing AI solutions, one of the key things to keep in mind is GDPR compliance. Now, while GDPR technically applies only to the EU, it’s actually a really good framework to start with for any region, as I mentioned in my article about The Lost Soldiers and the Map: some map is better than no map when you are lost.
And beyond avoiding penalties, designing AI apps that are GDPR-compliant puts us on the right path toward building “privacy by design” AI systems. This approach not only helps us meet current regulations, but also prepares us for any new rules that might come up in the future. More importantly, it ensures that the AI we’re building is ethical, respecting user privacy at every step.
Absolutely - that’s so important. Yet, as members of the public, there have been cases where our personal or private data or content has been used by an unethical AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Have you ever noticed how you browse something online, and then suddenly, you’re seeing ads for it everywhere, even in other apps? It happens to me all the time.
Yes! Did you hear Tracy’s and Quentin’s stories about this? It seems to be very common nowadays - not only what you browse online, but even things you do and say ‘offline’.
Yes, I have listened to Quentin’s stories. And it doesn’t stop there—back when I used to walk every day from 5 to 6 PM, I’d get all these calls for weight loss programs, right around that time. I blocked those callers, and thankfully, I’ve stopped getting the calls, but it really made me wonder. Is my personal data being shared with these advertising agencies? Whether we like it or not, it feels like AI-powered ads are always following us, and to be honest, I really don’t like that.
I don’t blame you! It’s creepy. I looked into this a while ago (link) and it seems that a lot of phone apps are grabbing our data. It’s usually buried in those tiny terms and conditions. Even having push notifications enabled can give an app our location data, because our location is in the metadata for app notifications. So it’s really hard to avoid and protect yourself.
In India, it’s even worse. They ask for your phone number for every little transaction, and the next thing you know, your number is being passed around to advertisers. I must’ve blocked hundreds of calls by now from the same firms. Even today, if I give my number at a store for billing, it’s pretty much guaranteed I’ll start getting spam calls. It’s like we’ve all just gotten used to it.
I’ve had a similar experience in US grocery stores, where the clerks just ask for your phone number like it’s normal and expected, implying that it isn’t optional. I finally just started saying “I don’t give it out”. At first I got weird looks, but they kind of shrugged and said “ok”. And I’ve been hearing about more people doing this.
We (especially us women) have to learn to not be so nice and agreeable, and push back when people and companies ask us for information that they don’t need. Because as you said, they’ll use it and sell it, and it’s generally not for our benefit; it’s for theirs.
The other spam calls I get nowadays have mostly been political. I was getting 10 or more a day leading up to the November election. My mobile number is not in my voter records, so they probably bought my info from some other source and matched it up. Or maybe they are just robo-dialing all of the possible number combinations in an area code. 😉 Either way, they’re never going to reach me, because I ignore all unknown callers, and I block all junk texts about politics, no matter what side they’re supporting. And my phone has now gotten pretty good about flagging them as ‘possible spam’ - so an ML model somewhere is likely taking care of that.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
It’s good to know what you did there! What I really think is that, by default, apps shouldn’t be collecting our data. Right now, most apps collect and use our data unless we actively opt out—and that’s just wrong. It should be the other way around: apps should need our explicit permission to use any of our personal information.
Absolutely. And I envy the people in the EU who have GDPR to protect them from this. The rest of us don’t - yet.
Yes, it is good that the EU has GDPR. And then just last week, Facebook got into trouble in Ireland for storing passwords in plain text. They got sued for it.1 And it’s shocking that these big companies are still making such basic mistakes when it comes to data security.
Yes - that’s awful! It’s not really surprising any more that public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
Trust is something technology companies have really worked hard to build in other areas. Take Amazon, for example. We’re so comfortable buying online, paying the full amount upfront without a second thought about whether we’ll actually receive our order. That trust didn’t happen overnight—it was built because they’ve consistently compensated customers without making it an exhausting process. The same goes for companies like Zomato, Swiggy, or BigBasket. If there’s an issue with the product, they’ll refund or replace it, no questions asked. That kind of customer-first approach has built a lot of trust.
You’ve put your finger on exactly one of the key issues, I think. The companies that exploit our data are financially motivated to manipulate us into not making more conscious, more frugal decisions.
A recent article I came across from the Harvard Business Review, “AI’s Trust Problem”, really hits the mark on this.2 It talks about how there’s a trust gap between humans and AI that needs to be addressed. The article outlines 12 key AI risks that are frequently cited, and it totally resonates with me. These risks include:
Disinformation
Safety and security
The black box problem
Ethical concerns
Bias
Instability
Hallucinations in large language models
Unknown unknowns
Job loss and social inequalities
Environmental impact
Industry concentration
State overreach
These are serious challenges that AI product companies need to focus on if they want to earn our trust in the same way companies like Amazon have.
Great information! (Readers: a link to the HBR article is in the end notes).
Lakshmi, is there anything else you’d like to share with our audience?
As we navigate the delicate balance between life and death, what truly keeps us alive is the profound sense of growth. Humanity's innate drive to innovate—whether through AI or other technologies—will never cease. However, every invention must be a force for good, uplifting humanity and ensuring we all move forward together.
AI is doing lots of good as well. So I would like to keep the attitude of doing good with the power of AI, and all you need is AI wisdom!!
Interview References and Links
Lakshmi Veeramani on Medium (see her article on Responsible AI)
Lakshmi Veeramani on LinkedIn
Lakshmi Veeramani on Substack
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I don’t use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being a featured interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (it’s free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool; one-time tips are deeply appreciated; and shares, hearts, comments, and restacks are awesome 😊
Series Credits and References
End Notes
Meta was sued and fined for storing passwords in plaintext and allowing them to leak. The password exposures happened in 2019, and Ireland’s DPC fined Meta for it on Sept. 27 of this year. https://cybernews.com/security/meta-100m-fine-dpc-ireland-plaintext-passwords-facebook-leak/
“AI’s trust problem: Twelve persistent risks of AI that are driving skepticism”, by Bhaskar Chakravorti, Harvard Business Review, 2024-05-03.