AISW #014: Senthil Chidambaram, India-based graph data professional 📜 (AI, Software, & Wetware interview)
An interview with India-based graph data professional Senthil Chidambaram on his stories of using AI, and how he feels about AI using people's data and content
Introduction - Senthil Chidambaram interview
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference.
Interview - Senthil Chidambaram
I’m delighted to welcome Senthil Chidambaram as our next guest for “AI, Software, and Wetware”. Senthil, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
Good day, Karen and all of you! This is Senthil Chidambaram from Chennai, India. I have over 20 years of experience in the software industry. I’m a graph data professional specializing in connected data analytics, uncovering contextual insights and patterns from event histories and transactional data to drive positive impacts for many of our clients and partners.
Alongside my fantastic team, I’m also writing creative notes and Ei-Ai research for our current and future AI - https://www.ei4aibooks.com/ - and I write newsletters related to Ei4Ai and GraphTalk with the hashtag #SimpleSecrets.
Thank you, Senthil. Can you please explain for our audience what you mean by “EI” and “AI”, and how they intersect?
Certainly Karen. It may be very well known to many of you, but here is my version:
Ei - Emotional Intelligence is simply
“To get the best for you and others” - not just self!
Except for a few, people in general may easily forget a definition or theory on its own. So here is my little story on what Ei is and why it's important for Ai, so you can see why I started writing the Ei4Ai Books!
Imagine this scenario as something out of a science fiction movie: future robots have to clear Medical Error exams to become eligible to work as a team. Here is a scenario-based question to convey the high importance of Emotional Intelligence (Ei4AI).
Scenario (Based on True Events):
In an operating room, the Chief AI Surgeon (#Ai1) and an Anesthetist robot (#Ai2) are performing surgery. Suddenly, the patient shows signs of an allergic reaction, and #Ai2 quickly administers epinephrine. #Ai2 also suggests that the Chief AI Surgeon #Ai1’s surgical gloves might be causing the problem. However, #Ai1, being in command, dismisses this, stating the patient had no prior allergy issues. According to AI rules, the team must follow #Ai1's decision, despite the warning from #Ai2.
Question:
How can AI systems in a team-based environment ensure critical warnings, like those from #Ai2, are properly evaluated and acted upon, even when the lead AI (#Ai1) is programmed to follow rule-based decisions?
Answer:
#Ai2, with its deep knowledge of allergies and emotional intelligence, refused to let the Chief Robot #Ai1 continue using the same surgical gloves. #Ai2 recognized that latex allergies can develop after multiple surgeries and may not show up immediately, as was the case here.
Dr. Peter Pronovost, a human doctor, encountered a similar situation. Using self-awareness, empathy, and emotional intelligence, he made a key point: "If I'm wrong, we waste 5 minutes changing gloves, but if you're wrong, the patient dies."
When the surgeon still resisted, Dr. Peter firmly stated, "Dial the dean. This patient has a latex allergy, and I can't allow her to die because we didn't change gloves."
The surgeon eventually complied, and it was later confirmed that the patient did indeed have a latex allergy.
[Source: Dr. Peter Pronovost - from the book 'Black Box Thinking' by Matthew Syed]
We humans sometimes lose sight of ‘empathy’ due to ‘power,’ ‘ego,’ or ‘rewards.’ It’s a reminder that now is the right time to ensure these 'Ei4Ai' principles are embedded into future robots (AI). For any critical decision, the robot has to take a ‘pause’, even for nanoseconds, to validate the context again with an Emotional Intelligence (Ei) weighting: “To get the best for you and others”.
So here is my answer to your original question: what is Ei-Ai, and why Ei4Ai?
"Ei" stands for Emotional Intelligence, which is the ability to recognize, understand, and manage one's own emotions as well as the emotions of others. "4" represents the four key components of emotional intelligence 1:
self-awareness and understanding,
self-management (or self-regulation of emotions),
social awareness (or empathy) and
relationship management (or social skills).
"Ai" stands for Artificial Intelligence, which is the ability of machines to perform tasks that typically require human intelligence.
Ei4Ai means “to make Ai more explainable and responsible for its actions by embedding those key Ei principles for the right decisions”.
If you have time, please check this 3-minute #TechStory here for more context.
“The true challenge lies not in what we can do, but in what we should do.”
Thank you for explaining about Ei4Ai and sharing that story, Senthil.
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I have 10+ years of experience in the Data & Analytics space and have worked with both product-based and service-based companies based out of India, covering the banking, IIoT (Industrial Internet of Things), pharma, and retail domains. Currently I am leading the Graph Center of Excellence (CoE) in a Chennai-based AI and analytics consulting company. I am also self-employed part time as a creative director for https://www.Ei4AiBooks.com.
Can you share a specific story on how you have used AI or ML? What are your thoughts on how well the AI features of those tools worked for you, or didn’t? What went well and what didn’t go so well?
I started working on Ei4AiBooks alongside my official responsibilities, where I used AI and ML tools. In 2015, I began with K-Means clustering (an ML algorithm) to identify the specific features (characteristics) influencing product quality issues.
It was a proof of value for the largest US-based consumer electronics company, and the problem was to predict future battery issues, for proactive customer support and the best customer experience. We received 100 GB+ batches of data with battery-specific details, the history of customer complaints related to batteries, and the product details.
I still remember that, as a team, we tried various Machine Learning (ML) algorithms but couldn’t get the right solution. Later we used PySpark with simple K-Means clustering and found a specific group of battery characteristics causing most of the issues. I can’t share more details here due to NDA, but one common feature was the battery charging cycle, which was also one of the key influencers when combined with other features, including the geo point.
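Here is a minimal PySpark sketch of the kind of K-Means clustering flow described above. All column names, the data path, and the complaint flag are hypothetical illustrations, not the actual NDA-covered project details.

```python
# A minimal sketch of clustering battery features with PySpark K-Means.
# charge_cycles, avg_temp_c, lat, lon, and had_complaint are hypothetical
# columns, and the data path is illustrative only.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("battery-clustering").getOrCreate()

# Load the batched battery telemetry and complaint data (path is illustrative).
df = spark.read.parquet("s3://example-bucket/battery_batches/")

# Combine candidate features into a single vector column for clustering.
assembler = VectorAssembler(
    inputCols=["charge_cycles", "avg_temp_c", "lat", "lon"],
    outputCol="features",
)
features_df = assembler.transform(df)

# Fit K-Means and assign each record to a cluster.
kmeans = KMeans(k=6, seed=42, featuresCol="features", predictionCol="cluster")
model = kmeans.fit(features_df)
clustered = model.transform(features_df)

# Clusters with a high average complaint rate (had_complaint is an
# illustrative 0/1 flag) point to influential combinations of
# battery characteristics.
clustered.groupBy("cluster").avg("had_complaint").show()
```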
Then I shifted my focus to graph analytics, using Neo4j Graph Data Science (GDS) to uncover ‘outliers’ and ‘key influencers’ for insurance and e-commerce clients, and we uncovered a fraudster network.
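To make the connected-data idea concrete, here is a simplified sketch using the networkx library in place of the Neo4j GDS stack the project actually used. The tiny example graph is entirely made up.

```python
# A simplified illustration of finding fraud rings in connected data,
# using networkx instead of Neo4j Graph Data Science. The example
# claims and identifiers are entirely fabricated.
import networkx as nx

G = nx.Graph()
# Claims sharing a phone number or bank account get linked.
G.add_edges_from([
    ("claim_1", "phone_A"), ("claim_2", "phone_A"),
    ("claim_2", "account_X"), ("claim_3", "account_X"),
    ("claim_4", "phone_B"),  # an isolated, likely legitimate claim
])

# Connected components reveal rings of claims joined by shared identifiers;
# unusually large components are candidate fraudster networks.
for component in nx.connected_components(G):
    if len(component) > 3:
        print("possible fraud ring:", sorted(component))

# Centrality scores surface the 'key influencers' inside a ring.
print(nx.degree_centrality(G))
```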
In addition, I’ve applied AI creatively in my personal projects, such as Ei4AiBooks and #BookPunch2 on LinkedIn. I’ve used tools like Midjourney for generating unique images, and ChatGPT and Bing for refining my creative writing, particularly for improving grammar, story flow, and word choice to better connect with a general audience.
Midjourney is the one I started with first, and I was really excited to see my words (prompts) creating colorful images, such as the one above, for my blogs. But at one point, I stopped using it, for two reasons:
1. I came across a blog on Medium where a user had self-published a book with images generated by Midjourney, and he was later questioned over copyright breaches.
2. I started feeling it might make me lazy about being creative if I got used to it. I simply try to be in ‘organic’ ;) mode where possible.
Now I'm using Canva for all my work.
As for ChatGPT, like everyone, I still use it to review my articles and correct the flow, especially for grammar checks and finding the right words with correct punctuation before publishing.
As for challenges, in a recent project we used an Azure OpenAI model for a ‘Product 360’ Q&A system powered by a Knowledge Graph (KG) for Retrieval-Augmented Generation (RAG), and we encountered some issues. While the model performed better compared to SQL-based approaches, there were times when responses were unexpected or incomplete. For example, when asked which components were used in two different products supplied by the same vendor, the model responded, ‘I don’t know,’ and continued to give the same answer to follow-up questions until we restarted the session. This highlighted how difficult it can be to debug such models; unlike traditional programming, it's often tricky to pinpoint the root cause. We believe further prompt tuning or engineering is needed.
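Here is a minimal sketch of the KG-backed RAG flow described above, under stated assumptions: query_knowledge_graph() and call_llm() are hypothetical placeholders, and the example 'facts' are made up. In the actual project, retrieval ran against the knowledge graph and generation used an Azure OpenAI chat model.

```python
# A minimal sketch of Knowledge-Graph-backed RAG. Both helper functions
# below are hypothetical placeholders, not the project's real code.
def query_knowledge_graph(question: str) -> list[str]:
    """Return facts relevant to the question (made-up examples here);
    the real system would run a graph query against the KG."""
    return [
        "(ProductA)-[USES_COMPONENT]->(Chip123)<-[SUPPLIES]-(VendorX)",
        "(ProductB)-[USES_COMPONENT]->(Chip123)<-[SUPPLIES]-(VendorX)",
    ]

def call_llm(prompt: str) -> str:
    # Placeholder: the real system called an Azure OpenAI chat model here.
    return "[LLM response would appear here]"

def answer(question: str) -> str:
    facts = query_knowledge_graph(question)
    # Grounding the prompt in retrieved facts is what the prompt tuning
    # mentioned above aims to improve, to reduce the "I don't know" failures.
    prompt = (
        "Answer using ONLY the facts below. If the facts are insufficient, "
        "say which facts are missing instead of just 'I don't know'.\n\n"
        "Facts:\n" + "\n".join(facts) + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("Which components are used in two products from the same vendor?"))
```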
This is an excellent example, Senthil 👍🏼Although in my experience, it can also be tricky in traditional programming to pinpoint the root cause of bugs!
You are right, Karen. But in traditional programming, developers like us have access to query and system logs and debug pointers. So sometimes it’s a challenge, but we can trace the program flow: what value the user entered, where it changed along the way, and where it failed and threw an error.
But at the core, here is what I would say didn't go well: explainability. Why is my AI/ML model saying or doing this? It's more of a black box as of now.
Good point - explainable AI is an area that’s getting a lot of attention in research. Some people may trust a model just because it’s AI or ML. Many other people will not trust or use an AI or ML model if they can’t understand and justify how it came up with its recommendations.
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I have avoided ‘Ai’ for:
1. Official work. I asked my team not to use ChatGPT for queries related to client-specific context, whether a PowerPoint presentation or data context, simply to avoid sharing client-specific content with ChatGPT, per data privacy and infosec (information security) guidelines.
2. Personal side. Creating images for my #SelfTalk #BookPunch, since at one point I felt like I was just copying the “sum of someone else's creative work” in the name of prompts. Moreover, I realized that to sustain ourselves in this AI world, creative skill is more important than anything, since AI can learn everything from history, but not the future without training data. So I stopped blindly going with ChatGPT or Copilot for my newsletter and blogs.
That makes sense, and it’s great that you’re making your team aware of the risks to client confidentiality. And you’re right about AI needing good quality data to work well in the future. There is concern nowadays about how AI model quality will start to go down once it starts ingesting and training on content that was AI-generated.
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI and ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training?
Though we have the Digital Personal Data Protection Act, 2023 and the IT Act, 2000, what I feel is that the key question is what purpose this data is going to serve. Is it for business profit, or for a better social ecosystem?
For example, dashcams on cars in India can capture the challenges autonomous driving agents face, for better training, and I'm ready to share my driving data … but not my PID data, like “Senthil as a person: when he leaves for the office, which route he prefers, and when he gets back home”. I'm sure this information can also easily be extracted from the data, and when it gets into the wrong hands or minds, it's a big privacy issue for society.
Another recent example: when we make a contribution through Facebook ads, for example to Ketto or Child Trust, immediately the next day we would get multiple calls from different people asking for more donations ... and I kept getting similar kinds of advertisements (based on trained algorithms). So for me it's more ‘PID data for sales’ rather than ‘training’.
Also, the biggest concern is that this digital world is already full of scams and phishing, and it's tough for the general public to know: is this originally from the author, or from bad actors?
In simple terms, whatever the collecting and training purpose may be, an ethical AI company should ‘credit and reference the original content creators, and should use the data wisely for the right purpose’.
When you’ve USED AI-based tools, do you as a user know where the data used for the AI models came from, and whether the original creators of the data consented to its use? (Not all tool providers are transparent about sharing this info)
The simple answer is ‘Nope’, though Bing does seem to share the source/reference link, but not the creators, for actual credit.
If you’ve worked with BUILDING an AI-based tool or system, what can you share about where the data came from and how it was obtained?
I can’t share much due to NDA, but we used to build intelligent apps which help our clients understand their customers’ purchase behaviors by analyzing data context and patterns; we don’t get access to PID info. One example: if a customer has given consent to use their data for personalized product recommendations, we will get a unique ID and general info like occupation, gender, and income range, but not any Personal Identifier (PID). If any PID data is needed, it is masked at the source before we start processing it.
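Here is a hedged sketch of what 'masking PID at the source' can look like, assuming a keyed-hash (pseudonymization) approach; the field names, salt handling, and record layout are illustrative assumptions, not the actual pipeline.

```python
# A sketch of pseudonymizing PID before processing. A keyed hash replaces
# raw identifiers with a stable token, so records can still be joined on
# the masked ID without exposing the PID itself. All fields are made up.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-and-keep-me-in-a-vault"  # never hard-code in practice

def mask_pid(value: str) -> str:
    """Replace a personal identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {
    "customer_name": "Asha K",       # PID: must be masked before processing
    "mobile": "+91-98xxxxxx01",      # PID: must be masked before processing
    "occupation": "teacher",         # general info: allowed with consent
    "income_range": "5-10 LPA",      # general info: allowed with consent
}

masked = {
    "unique_id": mask_pid(record["customer_name"] + record["mobile"]),
    "occupation": record["occupation"],
    "income_range": record["income_range"],
}
print(masked)
```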
As members of the public, there are cases where our personal data or content may have been used, or has been used, by an AI-based tool or system. Do you know of any cases that you could share?
Yes, especially biometric validation for ration shops and passport screening at airports, as we all know. In India we have ‘Aadhaar’-based validation for identity verification, which is linked to banking and tax payments. And now UPI payment service apps like GPay and Amazon Pay are also validating KYC (Know Your Customer) with our mobile numbers. My view is that this leads to more digital scams nowadays, once a fraudster gets our mobile number, and I know there is more PID data available on the dark web for sale.
Yes, it’s kind of ironic that it creates new fraud risk, when the goal is supposed to be to reduce fraud.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML?
I would say ‘No’, since everything we know is cleverly captured in a diplomatic way in the ‘Terms & Conditions’. My view is that the majority of us, including me, have never gone through the complete ‘Terms & Conditions’ (T&C) when we buy or use digital services.
Have you ever been surprised by finding out they were using it for AI?
Meta and Google - how they simply identify people's names in images and pop up comparisons like me and my son in 2015 vs. 2024. So I stopped tagging my son's name, to at least delay the digital footprints that someone could follow ;)
Did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
Yes, there is an option for us to decline the use, but mostly we miss opting out, either in the name of personalized convenience or, for non-technical people, through unawareness.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
There is one incident related to my credit card where I barely escaped a big loss. I had gotten a new credit card and was paying all my bills on time. I got a call from an agency claiming to be from that particular card issuer bank, and they complimented me for paying my bills on time. They clearly knew my name, my phone number, and which card I was using. Then they offered some rewards and asked for my card details and PIN to claim them. Something struck me: why is she asking for my credit limit and PIN? ;) Then I realized this was a fraud/phishing call and immediately reported it to my card issuer, the bank, and they blocked my card. But when I asked my bank how the callers got my number, card details, and the amounts due that I had paid in full, obviously there was no answer. But we know someone in the middle was ‘selling my info’ to the scammers.
This looseness in the handling of our personal data is exactly one of the reasons that public distrust of AI and tech companies has been growing.
I keep remembering this ‘image’ and the quote ‘if the product is free, then we are the product’ (data) for them. It aptly conveys everyone’s distrust of companies like Meta and Google using our data to monitor us and our behaviors.
What do you think is THE most important thing that AI and tech companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
#1 I believe most of us want this to be regulated and monitored by third-party audits, with the right kind of powerful team (Technical + Functional + Government + Risk Analyst), like the Election Commission.
#2 Technically, I like how Apple introduced App Tracking Transparency (ATT) in iOS 14.5, which requires apps to ask for explicit permission before tracking users across different apps and websites. This doesn't directly relate to cookies, but it adds to the broader privacy protection measures. We could give users even more power, to go completely ‘zero’ on sharing of data.
#3 I should get notified, like LinkedIn's ‘who viewed my profile’, if any of my data is used for training or shared with some third-party system, including for any audit purpose.
I read and hear from a lot of people who agree with you on #1 about the need for regulation. I agree that the app tracking transparency in Apple is a good start as well.
LinkedIn actually offers visibility into who viewed our profiles, but only if we pay them for LinkedIn Premium. I’m personally ok with them charging for that, but I completely agree with you that they ought to be transparent about how they are using or sharing our data! That’s especially important since LinkedIn is part of the Microsoft corporate ecosystem. What other data from our Microsoft-related activities does LinkedIn combine with their data? We don’t really know.
Thank you for sharing your views on this with us, Senthil. Anything else you’d like to share with our audience?
In simple terms, AI and ML are definitely powerful tools which will save us time and effort and help us make quick, informed decisions from a wide variety and volume of data and events.
With recent advancements in AI algorithms like Deep Reinforcement Learning, there is no need to train the Ai agent on a history of data; instead, a simple policy pushes it to ‘maximize its rewards’ for the right moves. For more info, you can check here: https://github.com/SENC/AiTennis/blob/master/README.md
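As one small illustration of 'learning by maximizing rewards' rather than from historical labels, here is a tiny tabular Q-learning sketch; the five-state corridor environment is made up purely for this example.

```python
# A tiny tabular Q-learning sketch: the agent learns from rewards alone,
# with no historical training data. The 5-state corridor is a toy
# environment invented for illustration.
import random

N_STATES, GOAL = 5, 4           # states 0..4; reaching state 4 pays +1
ACTIONS = [-1, +1]              # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action_index]

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned action values per state:",
      [[round(v, 2) for v in row] for row in Q])
```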
But when we make Ai agents learn by ‘maximizing rewards’, we should also embed the ethical way to get there, and I'm sure many of us are working toward that common goal. And I also want to thank you, Karen, since I can clearly see that you and your team are on this mission for the right cause. Thank you!
Absolutely!
Thanks for this opportunity to share my views, and I appreciate your initiative in highlighting the need for ethical practice in the AI world too.
Along these lines, I’m working on some little steps to create context-based Ei plugins for Ai, to make it more empathetic, ethical, and socially aware. Here is my idea in a ‘TechStory’ on the need for Ei 4 Ai - a cognitive dissonance.
In simple terms, whatever we have achieved so far in the AI/ML world is mostly a single, well-trained #AiModel which does well at prediction or decision-making in solitary play. But what do these models still need to be trained for? How to play as a team and accept their own mistakes … like a team of AI bots in some life-saving scenario. Yes, Deep Reinforcement Learning is already along these lines, but how can we embed this at the core?
There could be a need for a tiny behavioral concept to be taught and trained as part of AiEthics, and it should be easy to import as a pluggable #KnowledgeGraph - #tinyBrain - #MicroAI as part of our future AI space research.
Interesting, Senthil - where can a reader find out more about your EI Plugins for AI?
This is more in the ‘seed phase’, where I'm designing and developing context-based Ei scenarios. In simple terms, think of a Python library for decisions weighted with emotional context.
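To give a flavor of the idea, here is a purely hypothetical sketch of what such an Ei-weighted decision function might look like; every name, score, and weight is invented for illustration and does not represent Senthil's actual library design.

```python
# A purely hypothetical sketch of an 'Ei-weighted decision': task utility
# is blended with an estimated impact on others. All names, scores, and
# weights are made up for illustration.
def ei_weighted_decision(options, ei_weight=0.4):
    """Pick the option with the best blend of self-utility and impact on others."""
    def score(option):
        # "To get the best for you and others": weight the effect of the
        # decision on everyone else, not just the agent's own objective.
        return ((1 - ei_weight) * option["utility"]
                + ei_weight * option["impact_on_others"])
    return max(options, key=score)

options = [
    {"name": "proceed_without_glove_change", "utility": 0.9, "impact_on_others": 0.1},
    {"name": "pause_and_change_gloves",      "utility": 0.7, "impact_on_others": 0.95},
]
print(ei_weighted_decision(options)["name"])  # -> pause_and_change_gloves
```

The toy options echo the operating-room story from earlier in the interview: once the impact on others carries weight, pausing to change gloves wins even though it costs some task utility.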
Senthil, thank you so much for joining our interview series. It’s been great learning about what you’re doing with artificial intelligence tools, how you decide when to use human intelligence for some things, and how you feel about use of your data!
References
Ei4Ai Newsletter - https://www.linkedin.com/newsletters/micro-series-4-1-ei-vs-ai-6969640876297916416/
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊
Credits and References
References for the 4 components of emotional intelligence:
https://www.health.harvard.edu/mind-and-mood/emotional-intelligence
https://online.hbs.edu/blog/post/emotional-intelligence-in-leadership
https://www.simplypsychology.org/emotional-intelligence.html
https://mindlabneuroscience.com/pillars-of-emotional-intelligence/
https://evolvedmetrics.com/what-are-the-four-steps-of-emotional-intelligence/
Some sources cite 5 components of EI, the 5th being motivation.
#BookPunch is a tagline to highlight powerful lines from non-fiction books that leave a strong impact. These lines hit you like a “punch”, inspiring a mindset shift or positive change in just one or two sentences.