AISW #037: Arun Mozhi Maruthamuthu, India-based senior DevOps engineer
An interview with India-based senior DevOps engineer Arun Mozhi Maruthamuthu on his stories of using AI professionally and personally, and how he feels about AI using people's data and content
Introduction - Arun Mozhi Maruthamuthu
This post is part of our AI6P interview series on "AI, Software, and Wetware". Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, "AI" means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and "AI Fundamentals #01: What is Artificial Intelligence?" for reference.
Quick note for those who prefer listening to reading: Substack now supports read-aloud natively in the Substack app. If you try it on this post, let me know what you think!
Interview - Arun Mozhi Maruthamuthu
I'm delighted to welcome Arun Mozhi Maruthamuthu from India as our next guest for "AI, Software, and Wetware". Arun, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.
I'm Arun Mozhi Maruthamuthu, with over 13 years of experience in the IT industry, and 8+ years specifically in cloud infrastructure. In my current role as a "Member of Technical Staff" at Wind River Systems, I focus on a variety of key areas, such as DevOps, Site Reliability Engineering (SRE), cloud platform management, and product owner responsibilities.
Throughout my career, I've had the opportunity to work across a diverse range of technologies, from traditional IT systems to cutting-edge cloud-based solutions. My expertise has grown from foundational IT roles to leading cloud infrastructure projects, DevOps initiatives, and SRE practices, allowing me to drive efficiency and scalability in the systems I work on.
On a personal note, I am always excited to have discussions with you, Karen. Whether it's during our one-on-one sessions, internal meetings, or even discussing AI-related topics on LinkedIn, I value these conversations as they provide both learning and the opportunity to exchange ideas. I look forward to continuing to collaborate and explore new ideas together!
I'm so glad you agreed to this interview, Arun - I always enjoy talking with you as well.
What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology? (or built tools using the technology, …)
In my previous organization, I worked on ML Proof of Concept (PoC) projects. Currently, I am part of Wind River, where I access AI-related products for both internal use and my own learning. To deepen my understanding, I am taking basic ML courses through various learning platforms and plan to explore data analytics in the future. My experience so far spans the professional and personal use of AI.
Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn't? What went well and what didn't go so well?
WhatsApp AI recommendations: AI suggests quick, context-based replies to my messages. It saves time in formulating long messages, and it also offers a wide range of suggestions on various personal fronts.
Drawback: It gives only text-based suggestions; image- and document-related features are not available.
How often are the suggested replies useful to you?
I would say that this feature provides streamlined and summarized information compared to what Google typically offers. It simplifies searches by giving direct answers, which is especially helpful when you need quick insights. However, there are times when it provides overly long or detailed responses, which can feel less user-friendly.
When AI-powered chat features were initially launched by Meta, I experimented with chatting as if it were a friend. I even vented about some personal things out of curiosity, and I was surprised to see it respond with friendly and empathetic replies. It gave me a glimpse of how AI could evolve into a conversational companion beyond its practical uses.
Currently, WhatsApp's AI chat feature is text-based, and voice-based interactions are not available. If WhatsApp enables voice-based messaging for AI chats in the future, it could significantly enhance the user experience.
With voice-based AI, users could interact more naturally, allowing them to send queries or tasks through voice commands and receive responses audibly or in text form. This would be especially helpful for hands-free use, such as while driving or multitasking. It could also make the platform more accessible for individuals who prefer verbal communication over typing or those with disabilities.
Combining voice capabilities with AI-driven personalization would make WhatsApp a truly versatile tool - not just for chatting, but also for managing daily tasks, resolving queries, or even having friendly, interactive conversations. This evolution could further solidify WhatsApp as a leader in both communication and AI integration.
Tagging in Facebook: I think I first noticed AI on Facebook a long time ago when I saw how it could recognize people in photos and suggest their names for tagging. I explored a little more and found that a deep learning algorithm is used that recognizes unique facial features.
How often are the tagging suggestions correct, and how often are they wrong?
I don't recall any instance where Facebook's auto-tagging feature gave an incorrect suggestion; it almost always worked perfectly for me. However, I've noticed that this feature is no longer available in my account. It seems Facebook has disabled auto-tagging in India, likely due to privacy concerns and regulatory compliance.
Oh, that's interesting to hear!
ChatGPT: Once I started using ChatGPT, I found myself relying less on Google for debugging and coding. It's incredibly helpful for quickly getting started, even in areas outside my expertise.
Drawback: You do need to exercise caution, as it sometimes provides multiple suggestions, some of which might not work as expected.
Can you share a specific example of a code suggestion that didn't work as you expected? Do you find that ChatGPT's code suggestions work better on some programming languages than others?
I can't recall the exact scenario, but I faced issues with Python package installations on macOS. The commands provided weren't working as expected, likely due to administrative access requirements. While ChatGPT suggested a temporary workaround of using a virtual environment for installations, it didn't provide a permanent solution to address the underlying admin access issues. This limitation made troubleshooting and resolving the problem more challenging.
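For readers who hit the same snag, here is a minimal sketch of that virtual-environment workaround, assuming Python 3 on macOS; the environment location and the "requests" package below are illustrative placeholders, not details from Arun's setup.

```python
# Minimal sketch (not from the interview): create an isolated virtual environment
# so that "pip install" does not need admin (sudo) rights. Assumes Python 3 on macOS.
import subprocess
import venv
from pathlib import Path

env_dir = Path.home() / "demo-venv"            # hypothetical location for the environment
venv.EnvironmentBuilder(with_pip=True).create(env_dir)

# Call the environment's own pip (not the system pip) to install a package.
venv_pip = env_dir / "bin" / "pip"             # on Windows this would be Scripts\pip.exe
subprocess.run([str(venv_pip), "install", "requests"], check=True)
```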
Alexa: Alexa has been a lifesaver for managing daily tasks. It helps me stay on top of bills, manage my shopping list, and enjoy music tailored to my preferences. One of its biggest benefits is reducing screen time for my child. Instead of spending hours on screens, my kid uses Alexa to play music for dancing and as a tool for studying, asking questions, and exploring new topics. It's a versatile assistant that makes life more organized, entertaining, and educational for the whole family.
Those are great examples of uses of a smart home assistant. May I ask, how old is your child?
My son is 6 years old and currently in Grade 1. At his school, they start using an iPad lab from Grade 3, and I'm certain they incorporate AI features into their learning tools. For now, they use the Macmillan Education app for English lessons. The app offers a range of resources like video-based stories, grammar practice, pronunciation support, and extra exercises and worksheets, which I find quite beneficial for enhancing his learning experience.
My son is also learning Abacus at a private global institute called "BrainoBrain." They conduct international-level online competitions where AI plays a significant role in evaluating tests and ensuring fairness through fraud detection mechanisms.
I could go on about many more features on my phone, like automatic spam detection for messages and calls, photo memories, auto-correction and text prediction, Face ID, camera enhancements, etc.
AI is becoming an integral part of our daily lives, and as it evolves, there may come a point where it's hard to distinguish between actions performed by humans and those performed by AI. It's clear that AI is increasingly shaping our world and our future.
That is definitely true - it's almost impossible to avoid AI in our everyday lives nowadays!
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Since AI learns from the data we input, I'm always cautious about using it for official work and avoid sharing any confidential information.
That's wise.
I've had a few regrets with Google Maps, especially in situations where it led me down the wrong path or didn't provide the most effective route. One time, it directed me onto a narrow mud road where my car couldn't go any further, causing a significant detour. Because of such experiences, I now only rely on Google Maps when I'm traveling to unfamiliar areas, and I double-check routes when possible to avoid any surprises.
Ha, that's happened to us a few times, too! Once, we were in California, and we decided to check out the town of Mountain View (Google's HQ). Google Maps routed us to some odd location that was nowhere near Mountain View! My husband still refers to that incident as an example of why we can't trust it.
Absolutely, reliance on AI tools like Google Maps has become a significant part of our daily lives, and it sometimes leads to mishaps or accidents. Here are two recent incidents from India that highlight such scenarios:
Accident Due to Incorrect Navigation: Link
In a recent case in Bareilly, Uttar Pradesh, two men lost their lives after following Google Maps' directions. The app mistakenly directed their car onto an incomplete bridge over the Ramganga River, causing the vehicle to plunge into the water.
Dangerous Routes Suggested in Remote Areas: Link
In another incident, travelers in a remote region were misled by Google Maps onto an unsafe, isolated path due to incorrect or incomplete mapping. The detour caused them significant delay and put them in a risky situation.
There have been a few times when I tried creating images of specific objects with backgrounds, but the results didn't turn out as expected.
Yeah, image tools can produce strange results. This past summer, my friend Stella tried to use an AI-based image tool to create a promotional image for her book (on an older woman writing a romance novel), and the results were SO odd - the tool put a white beard on the woman! (link)
Oh God!! I think we can use the current AI image generation tools only for entertainment purposes, to create unrealistic images like your friend's - definitely not for professional use.
Additionally, we don't allow our child to use Alexa or any voice recognition tools without supervision. We're cautious because he might ask questions without fully understanding the consequences or implications of the responses. It's important to ensure he uses these tools safely and responsibly.
That is super important, and it's smart that your family is being so careful about avoiding unsupervised use by your child.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get consent from (and compensate) people whose data they want to use for training?
Ethical AI companies should be transparent about how they collect, use, and store data. Clear, accessible consent forms should explain the purpose behind data collection, how it will be used for training AI, and who will have access to it.
If AI tool companies profit mainly from using a person's data that is not publicly available, they should compensate that person, in my point of view.
I agree, Arun. I also think the 3Cs (consent, credit, and compensation) shouldn't be limited only to content that's not "publicly available". Some AI company execs have used "publicly available" as if it's the same as "public domain", and it's not. As one example, YouTube videos are publicly available, but they're generally copyrighted and not public domain. AI companies shouldn't be making money off them without giving the creators the 3Cs for their work.
Yes. I completely agree with you on this point.
As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
As a user, I feel the transparency regarding the data used to train AI models is often insufficient. While some companies may be more transparent than others, many don't clearly communicate where the data comes from, whether consent was obtained, or if creators are compensated. Ethical AI development should prioritize transparency, clear communication, and fair compensation to build trust and accountability.
Yeah, most AI companies today are not transparent about data sourcing. I like to call out those few who are, whenever I hear about them; I think they deserve our attention and support!
As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Social media content is often leveraged by AI tools in various ways. For example, Facebook's face recognition technology has historically used personal photos without explicit consent to suggest tags.
Similarly, targeted advertisements rely on AI-driven data sharing across platforms. If I search for a product on Amazon or Google, I immediately see ads for that product on social media. It's frustrating and concerning to feel that my privacy has essentially been sold.
Definitely. And an even bigger concern I hear about the photos is: it's one thing to opt in for letting them use the photos to suggest tags. It's another thing for Facebook or Instagram or another service to use those photos for all kinds of other purposes without our consent.
The privacy concerns around ads are important too. I'm always looking for ways to turn off or disable that data sharing, or to keep my browsing private or separate, to minimize that.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It's often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact.
While AI companies provide legal disclosures, the language is often hidden in long terms and conditions. I certainly don't read the terms and conditions in every case; who would read terms that long? They are playing it cleverly to make fools of us, and we keep falling for it.
Ideally, companies should offer more accessible, user-friendly explanations and clear options for opting out of data use for AI training.
Absolutely. Yes, most people don't read the Terms and Conditions - there was a recent study that said 91% never do. It's hard to fault people for not reading them when they aren't understandable anyway!
Do you feel like you have a real choice about opting out, or about declining the changed T&Cs?
I was not even aware of the opt-out option. Most apps can't be used at all if I decline the T&Cs, so there was really no choice left.
How do you feel about how your info was handled?
I dislike my personal information being used without my consent. For example, when I share my mobile number during purchases, whether online or while using a debit or credit card, it often gets misused. I receive spam calls for promotions, donations, or credit card offers, sometimes even from banks where I don't have an account. These unsolicited calls are frustrating and disruptive.
I agree - the sale of our data and this kind of spam are so annoying.
After hearing about multiple such incidents, I deleted Facebook and Instagram long back over safety concerns.
Good for you!
Has a company's use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
Both privacy and phishing concerns: Recently, I received repeated calls claiming to be from FedEx about a returned parcel that I had no knowledge of. When I finally inquired, they said the parcel contained illegal items and was addressed to a location in Kolkata, with my mobile number linked to it. I've never been to Kolkata in my life. The caller insisted I travel to Kolkata immediately to file a complaint with the cyber security department or, alternatively, click a link they provided to file it online. I told them I would look into it and ended the call immediately.
Wow.
This incident highlights the rise in phishing attempts. On top of that, I often receive spam messages, get added to various job-related groups, and encounter scams offering pay for writing product reviews or posting advertisements. These constant intrusions make it difficult to protect personal privacy and avoid potential scams.
It is definitely hard to stay vigilant about these risks.
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
The most crucial step AI companies must take to earn and maintain public trust is ensuring transparency. Users need a clear understanding of how their data is collected, utilized, and protected, as well as the societal impacts of AI technologies.
One practical solution is to provide users with a comprehensive overview of all the platforms where their data is being used. Companies should offer an easy-to-use system, perhaps linked to an email address or phone number, allowing users to access a complete list of services and systems utilizing their data. Additionally, users should have the ability to opt out and request the deletion of their data across all platforms seamlessly.
I agree, Arun - this is where we need to go, as a global society. The big challenge, I think, is for us as consumers to make it enough of a business concern for companies that they will invest in doing it.
Anything else you'd like to share with our audience?
Thank you so much, Karen, for this opportunity. Through this process, I've come to realize the significant impact AI has on our daily lives. Data privacy has become a key concern for all users, and if this issue is properly addressed, AI can truly make our lives easier.
Thank you for taking the time for this interview and sharing your experiences, Arun!
Interview References and Links
Arun Mozhi Maruthamuthu on LinkedIn
About this interview series and newsletter
This post is part of our AI6P interview series on "AI, Software, and Wetware", launched in August 2024. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we're all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post "But I don't use AI":
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical expertise in AI is required.) If you're interested in being a featured interview guest, anonymous or with credit, please check our guest FAQ and get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (it's free)!
Series Credits and References
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
If you enjoyed this interview, I'd love to have your support via a heart, share, restack, Note, one-time tip, or voluntary donation via paid subscription!