6 'P's in AI Pods (AI6P)

🗣️ AISW #064: Pavan Vemuri, USA-based product and technology leader

Audio interview with USA-based product and technology leader Pavan Vemuri on his stories of using AI and how he feels about AI using people's data and content (audio; 47:00)

Introduction - Pavan Vemuri

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Pavan Vemuri, provided by Pavan and used with his permission. All rights reserved to him.

Interview - Pavan Vemuri

I’m delighted to welcome Pavan Vemuri from the USA as my guest today on “AI, Software, and Wetware”. Pavan, thank you so much for joining me today on this interview! Please tell us about yourself, who you are, and what you do.

Pavan: Thank you, Karen. I'm so excited to be here. To everyone who's listening, I'm Pavan Vemuri. I come from South India originally, and now Detroit, Michigan is where I call home. As many know, Detroit is globally recognized for its automotive sector and manufacturing strength. Through my work, I bring a fresh AI perspective to this industrial city. Most people think of AI and machine learning as belonging to the West Coast and Bay Area, but here I am, working to bridge that gap in Detroit. I believe innovation happens everywhere, not just in Silicon Valley.

Karen: Absolutely, yes.

Pavan: As Director of Product Engineering at SDVerse, I lead the technical and product roadmap of our marketplace. My focus is implementing key features that deliver value to both buyers and sellers in the automotive software ecosystem.

I've followed a very simple philosophy throughout my career: when I feel too comfortable, it's time to make a change. This mindset guides my path forward and helps me grow. I started my career at Cognizant in India, and after two years there I was feeling comfortable.

I decided to challenge myself by coming to the US for a Master's in Computer Science. Later, after establishing myself at Stellantis in the US, I pursued an MBA in digital leadership at Oakland University to expand my business acumen.

I'm an AI enthusiast and enterprise AI architect at heart. If you need to productionize AI use cases, I'm your go-to person. I specialize in AI-driven digital transformation, specifically in automotive systems.

And beyond the day job, I'm passionate about sharing knowledge as a technical writer. I've published numerous articles on DZone. I've also written a scholarly article on how agentic AI enhances user engagement analysis. This is just my way of contributing back to my field of expertise.

Karen: It's great to hear that you write and you share what you learn. So please be sure to share a link to your DZone articles and the other places where your work is published online, and we'll make sure we include that in your interview. [DZone link]

Pavan: Definitely. In addition to this, I also love talking about AI. I have delivered a keynote speech at Oakland University, the place which has given me so much (I did my MBA there). I will also be speaking about orchestrating AI agents at an upcoming event in Detroit, and at the IEEE Cloud Summit from a technical standpoint. Throughout my career at major companies, specifically in automotive, I have focused on making complex AI technologies accessible and valuable via real-world applications.

Karen: That's a great overview, Pavan, thank you. We're looking forward to hearing your stories about using AI today. So what is your level of experience with AI, machine learning, and analytics-- whether you've used it professionally or personally, or if you studied the technology, or if you've built tools using the technology?

Pavan: My experience with AI, ML, and analytics is extensive, both professionally and personally. I've been working with AI-related technologies for almost a decade now. Professionally, I have led the implementation of multiple AI applications throughout my career, successfully delivering enterprise solutions for automotive systems.

My notable projects have included developing LLM-powered tools that significantly improved content workflow efficiency, and creating AI analytics solutions that automated the extraction of actionable insights from complex data. In my current role, I'm also leading the integration of AI capabilities into our marketplace to enhance user experience and drive adoption.

I also developed intelligent workflows that provide a deeper understanding of user engagement patterns and help inform strategic improvements. I'm also a fan of modernizing AI infrastructure, including migrating models to cloud environments for better efficiency and cost savings. I have built numerous reusable components and streamlined ML workflows that help data science teams accelerate their model development and deployment processes.

In addition to the above, I also have strong experience with both centralized and decentralized MLOps platforms, which help deploy, operate, and manage ML models on software-defined vehicles, and I can advise on the best approach based on the business case. My expertise extends to building efficient microservice architectures for AI applications.

On the research side, I led a team that developed a supervised learning algorithm for predicting warranty costs during the product development phase of the vehicle. That's the early phase of vehicle development, so it provides warranty cost savings. I just love it when I'm able to focus on real value creation with AI.

And as I mentioned earlier, in addition to the above work, I continuously study emerging technologies and write technical articles about AI advancement. This just helps me stay current with the rapidly evolving field, while sharing knowledge with the broader community as well. I'm also working on some cool AI use cases for healthcare in my spare time too.

Karen: That's a lot of activity and a lot of experience. Can you comment on how you use AI tools and features outside of your professional work? Even your spare time sounds like it's for professional development. For instance, do you use large language models with family or community or personal communications or movie recommendations or photos or anything like that?

Pavan: Definitely, yeah. One fun activity with AI that stands out is crafting cool little stories for my son, who is five years old. He gives me the characters he wants in a story, such as a wolf, a bird, or a fox, and so forth. And I'll just ask ChatGPT for a cute little story involving those characters, and also to include Ghibli-style images of the important scenes in the story, so that he gets to visualize what I'm narrating.

It just makes me wonder: when I was growing up, the magic of storytelling was all about mystery, where our own imagination filled in every blank. But today, AI amps up that magic, giving eyes to our imagination. It's a new era of storytelling, I feel, where we're armed with a digital sidekick. I even created a custom GPT for it called Pixie Plotter, which creates cool stories along with Ghibli-style images for each scene.

Pixie Plotter plots a slippery spaghetti scene with the ninjas!

Karen: Yeah, making up stories for your five-year-old son sounds like a fun way to take advantage of the fact that generative AI sometimes makes things up that don't actually exist or aren't true. So that's making hallucinations into a feature and not a bug in this case.

Pavan: True.

Karen: So that's a great example. Can you share a more specific story on how you've used a tool that included AI or machine learning features? I'd like to hear your thoughts about how the AI features of those tools worked for you or didn't. What went well and what didn't go so well?

Pavan: Yep. I leverage many tools to lighten my workload across many tasks. One of the most underrated but incredibly powerful AI features I've found is GPTs in ChatGPT. They are absolute game changers for saving time and effort. There is an 'Explore GPTs' tab on the ChatGPT homepage, and there are several popular GPTs I regularly use. Canva GPT helps me create quick visuals without opening their full app. And there are a couple of CrewAI assistant and builder GPTs, which are very helpful for orchestrating multiple AI agents. But what's exciting is that if you start exploring, you'll discover countless specialized GPTs that can be incredibly handy for specific use cases.

I have even created my own GPT called Prompt Master Studio, only because prompting is such an underrated skill, and it's so often neglected within AI use cases. Anyway, coming back to what it does: it helps write effective and efficient prompts. It's designed to save time and boost productivity by instantly generating custom prompts using the WISER framework from Allie K. Miller, which stands for Who, Instructions, Subtasks, Examples, and Review. This structured approach ensures that prompts are clear, contextual, and drive towards specific outcomes. Whether you need technical prompts for Python automation or ML model optimization, business prompts for drafting emails, or creative writing prompts for character development and plotting ideas, it streamlines the entire process. You simply select a domain, pick a task, provide some details, and my Prompt Master Studio generates a full, customized prompt in seconds. The real value, though, comes when you take that generated prompt and feed it back into ChatGPT or any other model. That's when you truly see exceptional results tailored exactly to your needs.

And what's even more valuable is how the WISER framework gradually changes your thought process over time. You'll find yourself naturally structuring your thinking around these elements, leading to increasingly effective prompts. It's like having a power multiplier for AI interactions that continues to grow in value the more you use it.
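[Editor's note: for readers who'd like to see the WISER idea in code, here is a minimal sketch of a prompt assembler built around the five WISER elements. It is purely illustrative; it is not Pavan's Prompt Master Studio implementation, and all function and field names here are hypothetical.]

```python
# Illustrative sketch only: assemble the five WISER elements
# (Who, Instructions, Subtasks, Examples, Review) into one structured prompt.

def build_wiser_prompt(who: str, instructions: str,
                       subtasks: list[str], examples: list[str],
                       review: str) -> str:
    """Combine the five WISER elements into a single prompt string."""
    subtask_lines = "\n".join(f"{i}. {s}" for i, s in enumerate(subtasks, 1))
    example_lines = "\n".join(f"- {e}" for e in examples)
    return (
        f"Who: {who}\n\n"
        f"Instructions: {instructions}\n\n"
        f"Subtasks:\n{subtask_lines}\n\n"
        f"Examples:\n{example_lines}\n\n"
        f"Review: {review}"
    )

# Example usage for a business prompt:
print(build_wiser_prompt(
    who="You are an experienced product manager.",
    instructions="Draft a status update email for an executive audience.",
    subtasks=["Summarize progress", "Flag risks", "List next steps"],
    examples=["Keep it under 200 words, with bullet points for risks."],
    review="Check that the tone is concise and free of jargon.",
))
```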

Karen: Yeah, I'm glad you mentioned WISER, and it's great to hear that you found it useful to apply. For those in our audience who maybe haven't heard of it, WISER is a prompting framework that Allie K. Miller published on LinkedIn about a year ago. And I'll drop the link into the interview for anyone who wants to check it out. And we will also include a link to Prompt Master Studio. [link to WISER post] [link to PromptMasterStudio]

Pavan: Yep. And another important mention among AI tools: I've become a huge fan of Cursor and similar AI-powered coding environments. They are game changers because they handle the tedious coding tasks while letting you focus on what really matters, which is the creative thinking and problem solving. These tools basically free our mental bandwidth for the big-picture stuff.

There are so many other cool things I do with AI in addition to these; I can't mention all of them due to time constraints. But what fascinates me is how these tools aren't just time savers. They're actually extending human capabilities in ways we couldn't have imagined just a few years ago.

Karen: It sounds like you're a big fan of AI. Can you share a specific example of a time when your human capabilities were extended by using these AI tools?

Pavan: Absolutely. Since we were on the topic of Cursor, let me share two examples of real game changers that I've experienced with Cursor. First, when I'm building similar components. Recently, when I was creating a function-calling tool for an LLM-based application and needed another similar tool, Cursor was just incredible. I simply told it, "Hey, use tool A as reference and give me code for tool B with the same syntax and format." Boom. It just generates everything, puts it in the right path, and I can immediately test it.
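[Editor's note: for context, here is a sketch of what a pair of "similar" function-calling tool definitions might look like, using the JSON-schema style that OpenAI-compatible APIs accept. The tool names and parameters are hypothetical, not from Pavan's project; the point is the repetitive structure that an assistant like Cursor can replicate from a reference.]

```python
# Hypothetical tool A: the reference definition.
tool_a = {
    "type": "function",
    "function": {
        "name": "lookup_part_price",
        "description": "Look up the current price of an automotive software part.",
        "parameters": {
            "type": "object",
            "properties": {
                "part_id": {"type": "string", "description": "Catalog ID of the part."},
            },
            "required": ["part_id"],
        },
    },
}

# Hypothetical tool B: same syntax and format, different behavior.
# This is the kind of near-duplicate an AI coding assistant can generate
# when given tool A as a reference.
tool_b = {
    "type": "function",
    "function": {
        "name": "lookup_part_availability",
        "description": "Check whether an automotive software part is in stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "part_id": {"type": "string", "description": "Catalog ID of the part."},
            },
            "required": ["part_id"],
        },
    },
}
```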

One could argue, "GitHub Copilot does the same. Why are you so excited about Cursor?" That is where the second example, which is even more powerful, comes into play: Cursor actually functions as an MCP host. MCP is the Model Context Protocol, which gives you a universal interface for connecting LLMs to external tools and resources. This means I can give Cursor access to MCP server resources, prompt libraries, and other cool resources, then use it to knock out complex projects and give context to models in an easy, ready-to-use fashion that wasn't possible before. That's the important aspect of it.
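[Editor's note: to make the MCP idea concrete, here is a minimal sketch of a custom MCP server using the FastMCP helper from the official MCP Python SDK. An MCP host such as Cursor can be configured to launch a server like this, giving the model access to the tools and prompt-library entries it exposes. The tool and prompt shown are hypothetical examples, not Pavan's resources.]

```python
# Minimal MCP server sketch (assumes the `mcp` Python SDK is installed).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-resources")

@mcp.tool()
def team_conventions() -> str:
    """Return the team's coding conventions so the model can follow them."""
    return "Use snake_case for functions; every public API needs a docstring."

@mcp.prompt()
def review_prompt(file_name: str) -> str:
    """A reusable prompt-library entry for code review."""
    return f"Review {file_name} against the team conventions and list any issues."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, which MCP hosts can launch directly
```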

Karen: Thank you for sharing that. I'd like to hear about your experiences on how often the tools make mistakes that you have to correct, like API calls that don't exist or code that doesn't run correctly.

Pavan: AI tools definitely make mistakes. It's just the reality. But I have developed some solid strategies to minimize these issues. First, I always provide super specific context in my prompts. That's why I was talking about prompt engineering being so underrated, right?

Instead of asking for "a function to process data," which many people do, I'll say "write a JavaScript function that takes this specific CSV format and returns an array of objects with these properties." That kind of context helps minimize errors, mistakes, and hallucinations, right?
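[Editor's note: here is a small sketch of the contrast Pavan describes, expressed with the OpenAI Python client. The model name and CSV format are placeholder assumptions; the point is that only the specific prompt gives the model enough context to avoid guessing.]

```python
# Vague vs. specific prompts: the extra context in the second prompt is what
# reduces errors and hallucinations. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Write a function to process data."

specific_prompt = (
    "Write a JavaScript function parseOrders(csvText) that takes CSV text "
    "with the header 'order_id,customer,amount' and returns an array of "
    "objects with properties {orderId: string, customer: string, amount: number}. "
    "Skip blank lines and throw an Error on any malformed row."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```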

Another thing that helps is asking the AI to reason about the logic. Let's say you are developing something really complex and have broken it down into multiple steps. Ask the AI to reason through the logic, and it will help catch logical errors before they actually become problems.

Incremental development also helps: instead of asking for the full chunk of code at once, you start small and gradually build up by interacting with the AI. That also helps minimize mistakes. But mistakes are common; how you work around them, or with them, is what matters, I feel.

Karen: Yeah. And you mentioned that you feel like it lets you complete a complex project in a fraction of the time, even with having to deal with any mistakes that it makes. Do you have any data that you can share about how much time tools like Cursor have saved you on a typical development project, even considering how much time it takes you to fix any mistakes that it might make?

Pavan: Yep. This is a very interesting question. In the initial stages, my team and I only used GitHub Copilot for documentation purposes, because the reliability factor was very low. This was early 2024. Recently, though, there have been a lot of advances. At the current stage, you are looking at a 30% gain in efficiency. That's what I have seen overall, factoring in the back and forth, correcting code, and ensuring everything is in order. And it goes back to the things I mentioned earlier, right? How do you factor in making sure the mistakes are minimized, and all that? However, the actual gains vary based on the task for which you are using the tool. For routine boilerplate code, I'm seeing 20 to 40% time savings, whereas for documentation, it could go upwards of 50 to 60%, because that's what LLMs do best. And again, we have to keep in mind the most important factor here: only use it for what you know and can validate.

And software engineering is changing with the advances in these tools. When I started my development journey, it was okay for me not to have knowledge of design patterns, coding standards, architecture, or efficiency. It was fine; I could come in and learn those over time. But nowadays, being able to architecturally guide the tool, be very proficient in design, and ensure the code is production-ready is pivotal for successfully using these tools to do the heavy lifting. The expectation of us is also different with these tools being handy, right? Yeah.

Karen: Yeah, and I don't know that we have time to get into it today, but one thing that always comes up when people talk about this progression and needing to have the experience is juniors who are just starting out. They go in and just generate code with one of these tools. How do they learn, so that they can become that senior person who is able to provide that kind of guidance and architectural framework for getting the tool on the right path?

Pavan: True. That's why it's very important for them to be foundationally sound, right? Having sound coding standards, architecture, and efficiency, because a lot of the uplift is done by the tool, and you should be in a position to validate it or guide it in the right way.

Karen: That's a good overview of the different ways that you have used AI-based tools. I'm wondering now whether you have avoided using AI-based tools for anything. Can you share an example of when you avoided AI, and why you chose not to use it in that case?

Pavan: Yeah, there are definitely specific areas where I intentionally avoid using it. I'll share a few examples. First, I avoid using AI for topics I don't know. I touched upon this a little while ago as well. I'm a huge believer that AI should only be used for things you already have expertise in, where you can properly validate its output. Without that knowledge foundation, I think it's impossible to catch errors or hallucinations that might appear perfectly plausible.

Karen: Yeah, absolutely. That's very wise advice. And you're in good company with some of my other guests, including a medical student I interviewed last year who said the same thing about trying to learn and using it as part of his studies, but he couldn't ask it about things that he didn't already know at least something about. Go on please- you said you have more stories?

Pavan: Yes. I'm also not particularly comfortable with AI-generated images and videos. While the technology has evolved impressively, something about it still doesn't sit right with me. Take the recent trend of Ghibli-style images. They became incredibly popular, but I deliberately avoided uploading any of my photos to ChatGPT, since there wasn't clear information about how these images would be used or stored long term.

Karen: Yeah, a lot of people share those reservations. Did you have any other examples you wanted to share about when you avoid AI?

Pavan: Yeah. There's one other thing: meeting summarization. You see a lot of it happening nowadays. That's another area I really steer clear of, for two important reasons.

First, I've reviewed numerous AI-generated meeting summaries that missed crucial insights or nuances that would've been obvious to an engaged human participant.

Second, I worry about the psychological effect. If we know AI will summarize everything, we might pay less attention during the actual discussions, creating a dangerous dependency that ultimately reduces our engagement and understanding.

Having said that, I just want to reiterate that these boundaries aren't about rejecting the technology entirely, but rather understanding where human judgment, expertise, and active participation still provide irreplaceable value that AI currently cannot match.

Karen: Yeah, those are great observations about the limits of genAI meeting summaries, and I know a number of people who also won't rely on them. One of my recent guests commented that the hard part about using AI is not so much knowing when to use it, but knowing when to stop using it. It sounds like you have figured out when to stop using genAI meeting summaries.

Pavan: Yep. Yep.

Karen: That's great. So one common and growing concern nowadays is where AI and machine learning systems get the data and the content that they train on. They'll often use data that users have put into online systems or published online. And companies are not always transparent with us about how they intend to use our data when we sign up for their services.

So I'm wondering how you feel about companies using data and content for training their AI and ML systems and tools. And specifically what your thoughts are about whether ethical AI tool companies should be required to get consent from and give credit and compensate the people whose data they want to use for training-- something that's called the 3Cs rule.

Pavan: I think what we are dealing with is a fundamental tension between pushing innovation forward and being responsible with people's information. I actually don't see this as an either / or situation. The companies doing this best, like IBM with their AI Ethics Board, or Microsoft with their Office of Responsible AI, have shown that ethical approaches actually strengthen their AI offerings in the market.

First, there's consent. We are evolving towards what I'd call informed participation, where people actually understand and choose to contribute to AI advancement. The reality of consent today is problematic, though. Take the GDPR cookie banners we encounter online. Research from the Norwegian Consumer Council has shown that major tech companies often use deceptive design patterns, also known as dark patterns, to nudge users towards more privacy-intrusive options during privacy settings and consent processes. We see 'accept all' buttons prominently displayed, while 'reject all' options are buried in submenus. This isn't meaningful choice. It's designed to extract consent through inconvenience, is what I feel.

Karen: Oh, definitely. One of my past interview guests, Carey Lening, is a lawyer in Ireland who works in data privacy now. And one of her big points in her interview was about how those kinds of opt-ins don't give people meaningful choices. It makes the mechanism really not useful at all for practical consent. And the ones that try to hide it, that's even worse.

Pavan: Yeah. True. And the second thing is context. For example, data collected for your healthcare shouldn't suddenly become training material for marketing algorithms without your knowledge.

With context, we face the challenge of data being repurposed beyond its original scope. The Royal Free NHS Foundation Trust in London learned this lesson when they shared 1.6 million patient records with DeepMind for an app called Streams without adequately informing patients about how their data would be used. Even when technically legal, I think extracting patient insights from medical records to train commercial AI systems raises serious ethical questions.

Karen: Yeah, absolutely, and I'm glad you brought up context, because that's an interesting concept. In addition to the 3Cs from CIPRI, there's another framework called the 4Cs, by the Algorithmic Justice League (AJL), where they break out control as a separate fourth C. Your point on context sounds quite close to what they describe as control: that I should be able to consent not just to what part of my data is used, but how it is used. And I know that CIPRI and others consider control to be part of consent. The point is not worrying about how many Cs there are, but thinking about how we should be treating creators. And you're obviously thinking about that.

And you mentioned earlier that you've been experimenting with AI for healthcare in your spare time. How have you been able to handle data sourcing for your AI healthcare project in a way that feels ethical to you?

Pavan: Yep, that's a great question. I only use publicly available medical documents currently. I'm just picking them up from PubMed, and even then, only those which say they have the full context and so forth. Those are the only ones I use.

Karen: Okay. So you're working with reports and articles about AI in healthcare, and you're not working with anyone's personal medical data then.

Pavan: Yep. Yep.

Karen: Makes good sense. Alright. Yeah. Thank you.

Pavan: Yep. And the third, I think, is compensation. It doesn't always have to be direct payment. It might be improved services, or transparency about how your contribution helps make technology better for everyone.

Looking at all of the above, I actually think sector-specific guidelines make more sense than one-size-fits-all requirements here, right? Medical data obviously needs stricter protections than public social media posts. The reality is that the most successful AI systems aren't just technically impressive. They're built on a foundation of trust. And that requires balancing innovation with ethical considerations from the very beginning of development.

Karen: I'm glad you brought up trust, because we're going to talk about that more in a few minutes. So as a user of AI-based tools, do you feel like the tool providers have been transparent with you about sharing where the data used for the AI models came from, and whether the original creators of that data consented to its use?

Pavan: Some of the tools I use the most are Anthropic's Claude and OpenAI's ChatGPT, and they offer contrasting levels of information about where the data used for their AI models came from. Anthropic is more transparent than many peers about its approach to user data. OpenAI, however, is not transparent about where the data it obtained to train its models comes from.

I go back to my example about not using the Ghibli-style images produced by ChatGPT, as I have a concern about how the images will be used. OpenAI has publicly stated that it does not disclose detailed information about the datasets used to train its models. The company cites reasons such as the competitive landscape and safety concerns for withholding this information. This lack of disclosure is very concerning, and it's not just OpenAI. Most major commercial AI model providers do not make their training datasets, or even basic information about them, publicly available. I believe AI companies should be more transparent and reveal more details about their training datasets and how they were obtained. This would, at the very least, encourage adoption.

Karen: Yeah. There's a small handful of organizations that have obtained something called a Fairly Trained certification, where they are actually open about confirming that they've acquired all their data from valid sources. They've either taken true public domain content or they've paid creators specifically to create content for them. But that's definitely the exception and not the rule.

And as of May 19th, which is the day we're recording this interview, there are 41 active lawsuits in the US alone on the legalities of AI, for OpenAI and other gen AI companies. And I'll put a link to the site that I used to track that in the interview. [link to list of May 17, 2025].

But the active lawsuits and the anticipation of lawsuits may be part of the reason why most of them don't disclose where they get their data, because they know they're not getting it ethically. And of course, it's not legally settled yet, and it likely won't be for a while. But ethically, most industry observers do accept that OpenAI basically scraped and stole the data that they use to train ChatGPT, without getting consent or crediting or compensating the creators. Anthropic is supposed to be better, and they do have an ISO 42001 certification on their AI use, so I will give them some credit for that. But they also are the defendants in some of those lawsuits for stealing creators’ work.

So do you have any reservations about using ChatGPT or Claude in light of these kinds of ethical concerns?

Pavan: Yeah, that's a really thoughtful question. You are right that the legal landscape is still evolving, with dozens of active lawsuits challenging how AI companies obtain their training data. I think about it like this, right? I'm making an informed choice to use these tools because they dramatically increase my productivity and creative capabilities. At the same time, I recognize there are legitimate concerns about how the training data was collected. The least I can do, and this is what I'm doing right now, is use the tools only to augment my own creativity and skills, and not to generate work that competes with human creators.

Also, I would only develop things that help us be more productive, not things that replace the amazing work each one of us does. For example, I'm right now working on developing an agentic AI framework to support automotive research. I'm developing it in a way that human feedback is gathered in every aspect of the workflow and is pivotal for the success of the framework. I ensure that the framework only augments research and does not replace it. That's the least I could do, and that, I feel, is the real value-add with AI: to augment us, not replace us.

Karen: Yeah, that's an important perspective. I actually like to say that the A in AI should stand for augmented, not artificial. All right. Thank you for that.

And as consumers and members of the public, our personal data or content has probably been used in an AI-based tool or system. Do you know of any cases that you could share, obviously without disclosing any sensitive personal information?

Pavan: Yeah, I have this habit of Googling my name. I believe most of us do it often. Looking at what pops up, I sometimes get surprised about the information that comes up. Most of the time it's a result of something I didn't realize when entering the information into a particular website, or of not knowing how my information would be used.

Over time, I have developed the habit of only inputting information that I'm okay with being used or presented anywhere. And I think twice before putting sensitive information into any website.

Karen: I know people that keep Google Alerts set on their own names and the names of their family members, so they can try to keep on top of things like this. Definitely a concern.

Pavan: Yeah, I have seen numerous occasions where someone's name was randomly mentioned by both Claude and ChatGPT out of the blue. It's very concerning, but unfortunately it's the truth with many of the AI systems nowadays. What's particularly alarming is when these systems confidently reference individuals who weren't part of the conversation or query at all. It is very common for LLMs to infer or hallucinate personal details. But sometimes they can produce specific names and personal information if such information was present in their training data.

Karen: Yeah, absolutely. And that seems like a huge problem ethically, right? If you or I disparage someone and what we said was not true, we would be legally liable for defamation or slander or libel. And most people say that it seems only fair that AI tools should be held accountable too. But other people say that under current law, if the AI tool developers didn't have malicious intent or negligence, then it's not legally actionable. But then, companies that are not taking care to prevent harmful hallucinations could be considered negligent. I'm wondering what you think.

Pavan: It's definitely a concerning issue. I think there is a critical distinction between human speech and AI outputs. That complicates the accountability question here, right? When humans make defamatory statements, there's intent and understanding behind those words, right?

With AI systems, we are dealing with statistical pattern matching that can produce harmful content without any actual knowledge or intent. That said, I believe companies developing these systems bear significant responsibility. While they may not have malicious intent when their AI hallucinates harmful content, the rapid development of these powerful tools without adequate safeguards could certainly cross into negligent territory.

Karen: Yeah, that's a good observation. And the other point that comes up is that it's one thing for the tool to generate it, but if someone then takes it and publishes it without doing due diligence to verify it, then the human who published what the AI tool spit out could be considered negligent as well.

I don't think we've seen all of the legal aspects shaking out on this one yet either. But there were some recent cases where someone was very seriously defamed. I want to say that there was someone who was accused of murdering his children or something like that, really something horrible. And I think that case is just now starting to work its way through the legal system. So yeah, a lot of complications there.

Pavan: Yeah. That is why I say:

You have to always use AI for things that you know, and that you can validate. That's really the crux of it, the core of it. That's very important. And people often try to use it for everything but what they know, and that's where the problems arise.

Karen: Yeah, absolutely. All right, so on the topic of companies using our data and content, do you know of any that have made you aware that they might use your info for training an AI or machine learning system? Or did you get surprised by finding out that somebody was using it? It's often really hard to know. It's sometimes buried in the license terms and conditions, and sometimes they change those after the fact as well.

Pavan: I recently came across an article reporting that Slack uses user data for AI training. And opting out actually requires -- this is mind-blowing -- emailing customer support, as the option is not easily accessible in the app. After further digging, I came to learn that this practice is outlined in their terms, but not always made prominent to users.

The reason I brought this up is to point out that there will be numerous other situations like this that we are unaware of. Not to forget the recent change by LinkedIn, which I think everybody experienced: automatically enrolling all users in a program that allows their data, including posts, profile information, and potentially even private messages, to be used for training generative AI models. This change was implemented quietly, with the relevant setting enabled by default. So you have already given consent, right, without your notice, meaning somebody has to proactively opt out, or else their data is used for training.

Overall, I feel that I don't have a choice nowadays. I either have to use the tool or application and my data is used in some way, or I don't get to use it at all. That's how I feel about this.

Karen: Yeah, LinkedIn actually handled it really badly, because they made the opt-in automatic, and in fact it covered everything we had done up to that point in time, whether we liked it or not. But they did allow opt-outs for people who were covered by GDPR or the Digital Markets Act. So they obviously had a choice about how they did it, and they chose to do it in the way that took as much of our data as they could get away with, which I'm personally not really happy about.

Pavan: Yeah.

Karen: And there are actually two settings in LinkedIn, I don't know if you knew this, where you have to go in to opt out. It's not just one place.

Pavan: Yes. I remember that. Yeah.

Karen: I know what you're saying though. It is a difficult choice sometimes. We either use it and they use our data, or we don't use the tool at all. And sometimes that's not a good option. But I do feel like we are not totally powerless as consumers. We can opt out where those options do exist. We may have to dig to find them, but we can use them.

And in some cases, people are just leaving a platform, like Meta especially I'm hearing about, or Microsoft. But I know for some people, that can be very hard to do either professionally or personally, or both. Some people run their businesses with Instagram and such, so it's very hard to just leave.

Pavan: Yeah.

Karen: Do you know any times when a company's use of your personal data and content has created any specific issues for you, such as privacy or phishing or loss of income, anything like that?

Pavan: No, fortunately I have not faced any issues regarding personal use of my data, at least to my knowledge.

Karen: Ah, that's great. Let's hope it stays that way!

Pavan: Yeah. Yep.

Karen: It sounds like you're being very cautious about using your data and sharing it, so that hopefully will help you.

Pavan: Yep.

Karen: So last question and then we can talk about anything that you want! So we talked about trust a little bit earlier. The public distrust of AI and tech companies has been growing. And in a way I think that's a good thing, because it reflects that we're becoming more aware of what they're doing with our data and how they're using it with or without our consent. What do you think is the one most important thing that AI companies need to do in order to earn and keep your trust? And do you have any specific ideas on how they could do that?

Pavan: Yeah. I think this topic is much needed in today's circumstances. I believe transparency stands out as the most critical factor for earning and maintaining public trust in AI and technology companies. Transparency encompasses clear communication about how AI systems work, what data they use, how decisions are made, and what safeguards are in place to protect users and society. I think Anthropic is doing a great job by implementing several industry-leading measures to build trust, with transparency and safety at the core of their approach. For example, the launch of a dedicated transparency hub that provides public reports on safety and governance, along with other measures such as the Responsible Scaling Policy and the Long-Term Benefit Trust, exemplifies concrete steps that AI companies can take to earn and maintain public trust in a rapidly evolving field.

What caught my interest recently is Anthropic's article called “Tracing the thoughts of a Large Language Model”, which essentially gives us an AI brain scanner. This groundbreaking work reveals how Claude actually thinks behind the scenes before it gives us the responses, right? Anthropic's researchers have developed what they call 'circuit tracing', a technique inspired by neuroscience, that lets them peek into Claude's decision-making process. Rather than just seeing what Claude says, they can trace the actual internal reasoning patterns that happen before an answer appears.

Looking at these articles, I thought to myself, "We have built AI systems so complex that we now need scientific methods originally designed to understand nature in order to comprehend our own engineered creations!" Anthropic's work represents just the beginning of truly understanding these powerful systems we have created but don't fully comprehend. This is a classic example of the transparency I'm looking for, and I'm glad Anthropic is setting the stage for such a thing. But there is still a lot of work to do.

Karen: Yeah, I always like to call out when I see or hear about someone who is trying to do the right thing, because there are not so many of them, and I think they deserve extra credit and attention for it.

Pavan: True.

Karen: Pavan, that's all my standard questions, so thank you so much for joining me for this interview. Is there anything else that you would like to share with our audience?

Pavan: Yes, definitely. I would like to share something I've observed consistently in enterprise AI implementation. The last mile is truly the most challenging part, and that last mile is considerably longer than most people anticipate when it comes to developing AI applications.

We can typically reach about 80% completion of an AI application with relatively standard approaches. But that final 20% requires meticulous strategy, thoughtful design, and disciplined execution. I think this is where actually my passion lies, in productionizing enterprise AI applications, taking promising models and prototypes and transforming them into resilient, scalable solutions that deliver consistent business value. It's a complex challenge that requires deep technical knowledge combined with business acumen and change management skills. It is a unique skill, I believe, and I'm proud to say I have it.

Looking ahead, I firmly believe we are just beginning to unlock what's possible with AI. As an ally in the enterprise space, there's tremendous untapped potential and I'm committed to helping organizations bridge that crucial last mile gap I just talked about. The companies that develop this capability will have a significant competitive advantage in the coming years, and I'm excited to be a part of making that happen.

Karen: That's great. You had mentioned earlier that you love talking about AI. Do you have any upcoming events where people could hear you talk about AI?

Pavan: Yeah. I mentioned earlier that I'm working on orchestrating AI agents for automotive research, with the advancements in SDVs, software-defined vehicles, and so much happening in the automotive space. Having also been part of a research team in my past work, I'm going to talk, at an upcoming conference in Detroit (InCabin and AutoSens) from a business perspective, about how you can orchestrate AI agents to augment automotive research: to stay up to date on what's happening in a particular technology, and how to take in and grasp all that information, and so forth.

And like I said, I also have the technical abilities to take it further. I will also be presenting the same topic at the IEEE Cloud Summit, but this time from a totally technical point of view. So, as I mentioned earlier, I bring a unique blend of business acumen and technical depth together, and that's what I'll be doing at these conferences as well. So yeah, it's really going to be an exciting conversation for me.

Karen: Yeah, that sounds great. I've been involved a little bit with discussions around software-defined vehicles. And one of the things that has come up quite strongly in the past year is the privacy of data that cars are now capturing about people. Everything from the weight of the driver sitting in the driver's seat, or whether their eyes are paying attention, and just so many things, even connecting up their personal devices to the infotainment system and all the information that gets pulled from there. So do you have any insight or any involvement on that aspect of the software-defined vehicle work?

Pavan: Yes. That's a really good point. I'm actually writing an article about safe thinking on how to govern AI in tomorrow's vehicle systems. The automotive industry stands at a critical crossroads, where AI transforms how vehicles operate and interact with humans, right? The things that you spoke about: how much data they can collect, how they use it. So as manufacturers integrate these increasingly sophisticated AI systems, from advanced driver assistance features to emotionally responsive interfaces, the sector faces complex challenges in ensuring these technologies are safe and ethically governed.

So I'm writing this article, which lists out the major considerations to be taken into account for implementing AI in vehicles while maintaining appropriate safety standards and governance frameworks, across features such as ADAS driving features, predictive maintenance systems, personalized comfort systems, voice assistants, and emotional and behavioral monitoring. You touched upon that as well. It's going to be an interesting article. I'll be looking for avenues to publish it, of course. I'm almost finished.

Karen: I was just going to ask you if that was going to be a DZone article, or if you were going to be publishing it somewhere else.

Pavan: Yeah, DZone is for technical articles alone, and this is more of a combined business and technical article. So I'll have to see where it should be published.

Karen: Yeah, not required at all for this interview, but if you're interested in publishing it as a guest post on my Substack newsletter about AI, I would be happy to share that out to my subscribers. I think that would be very interesting for them.

Pavan: Oh, cool. Cool. I have a lot of other automotive software-related or vehicular-related topics. Yeah, I could definitely publish an article on your newsletter. I would love to do that. Yeah.

Karen: Actually, you might want to start your own Substack. If you're writing quite often on these topics, you could start your own newsletter and we could cross post.

Pavan: Thank you so much for that. Yeah. But at this point in time, I'll be glad to write for you. No problem.

Karen: Awesome. Let's plan on doing that. I really appreciate you making time for this interview and sharing your experiences. I wish you the best of luck in your ongoing adventures with AI, and I'll look forward to your upcoming articles.

Pavan: Thank you so much. Like I said, I really love talking about AI, and when I saw this, I was like, "Okay, I definitely want to talk about it." It's a good platform. So thanks for giving me the opportunity.

Interview References and Links

Pavan Vemuri on LinkedIn

Pavan Vemuri on Medium

Pavan Vemuri on DZone



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

