AISW #013: Gilda Alvarez, USA-based CEO of 'Latinas in Data' 🗣️ (AI, Software, and Wetware interview)
An interview with USA-based 'Latinas in Data' CEO and founder Gilda Alvarez on her stories of using AI and how she feels about how AI is using her data and content (audio; 19:18)
Introduction - Gilda Alvarez interview
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available in text and as an audio recording (embedded here in the post, and in our 6P external podcasts). Use these links to listen: Apple Podcasts, Spotify, Pocket Casts, Overcast.fm, YouTube Podcast, or YouTube Music.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference.
Interview - Gilda Alvarez
I’m delighted to welcome Gilda Alvarez as our next guest for “AI, Software, and Wetware”. Gilda, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
My name is Gilda Alvarez. I'm a database architect. I have 25 years in industry. I started with databases when the world of data was just getting started. It was an incredible experience seeing it be born, right?
And today with AI, I'm seeing this birth. We built up to AI. Data is the foundation for AI. I worked for many big companies. I currently have my own company, and I do consulting on many data solutions. I also have a book that shows careers in the data world, including machine learning and AI, so that people can be guided into the data summit, as I call it, which is the area of data.
Yeah, it's great always to talk to someone who's been around a longer time in this world of data, same as I have, because we have, I think, a very different perspective on how it's evolved and how it emerged. And actually machine learning started before I was born, I’m sure before you were born: back in the 1950s. So it's been around for a long time. The technologies have really been maturing, and then the availability of the data in massive volumes has really been driving it. So it's really cool to have that historical perspective.
Yeah!
What is your experience with AI, machine learning, and analytics? Have you used it professionally or personally (sounds like you have), and have you studied the technology?
All right. So when people say AI, people sometimes can think it's kind of robots - oh my gosh, what is this? But under that are multiple technologies. Of course, you have your OpenAI, you have your Microsoft Copilot, you have your multiple AI solutions for productivity in the real world.
But what really was happening before this AI revolution is people were building elements so that we can have this capability through ChatGPT, all the things that we can do, right? So as soon as AI came to the picture, I'm like, Oh my God, great. Now ChatGPT can verify my code and do so many things for me.
I started creating my own co-pilot. I started playing with the technology. What I built is a foundation of many of these things, using technologies like Fabric, Databricks, BigChart, giant lakehouses of data. So the technology had to evolve from the original big data solutions - the Hadoop, the Cloudera, the different solutions that sparked the possibility, right? Because of the movement of data being faster now, we can think faster.
So it was born a long time ago, but it was evolving to get to the point where now you can ask a question and it tells you the whole world. Because all this is what machine learning is, it's teaching the machine so that you can have this answer very fast, right?
I've been able to play at different levels. It's been an interesting journey in the last two years, because this is when things kind of went boom, I think. Part of that is just that I'm a very curious person and I love learning.
Can you share a specific story on how you have used AI or machine learning, and what are your thoughts on how well those AI features of the tools worked for you, or didn’t?
It is evolving. One of the things that I built on my own website with AI: I created a chatbot that uses all the information that I mentor with, so that people can actually go in and ask questions to my virtual mentor. And all these things help me not have to be present constantly. I mentored, in the last 20 years, over 300 people. It's been a long [time], but it's not that many people. So this is how you make it: you scale out so that people can ask questions and get what they want, and I don't have to do anything. It's just there.
By the way, it's free. It's on my website, Gilda Alvarez dot AI - go into my little virtual world and ask any question that has to do with data and all the platforms: your AWS, your Azure, your Google. And it's about careers. That question about: what should I do? What is this tool? How long does it take? What class should I take? All these things that you maybe want to ask somebody, you can ask through that chat.
It sounds like a really interesting tool to have on your website, and I can see how it would let you scale yourself to be able to help more people. How well does it work?
Perfect. I tested it and tested it and tested it. You know, garbage in, garbage out. It's all about cleaning, it's all about massaging, and making sure that you have the right things in there - that will make it successful. Having a little understanding - not so much, but a little understanding - will help you in the long run.
It sounds like you put a lot of effort into tuning it.
Yeah, it's working perfectly now.
That's awesome!
Now, outside of the things that I'm teaching, that's when it gets confused. I'm not here to teach you about life lessons. It doesn't know! You know, there are coaches out there that teach, oh my goodness, imposter syndrome and all these things and the behaviors and all that. That's not what I'm here for, right? So I'm a technical coach and I help people with technical questions.
So when you come to me and like, I want to get a job in data engineering, I go, which platform? AWS, Azure? Google and AWS? Okay, do you know how to do this? Do you know how to do this? So that's my focus.
Now, can I add more to it? Yes, there are always many things that I can add. I just want to hyper-focus on the things that I feel are more important, right? When I chose a career myself, I chose the big players in the technology of data - to me, that was Oracle at that time, and Microsoft SQL Server, right? For the people that I mentor and coach and guide, I push them to the right technology where I see the long-term vision, and all that intelligence that I'm building, so that they know exactly what is worth putting their time into.
That's very good. How have your mentees responded to having the chat available?
They love it. They also have the sessions that I do on Saturdays, where I sit with them and kind of go over the whole thing, so they're not limited to just very quick questions. But I do get very highly technical questions every now and then in the middle of a meeting - wow, how do I do this? And the thing is that the chatbot is not going to answer those questions. These are questions that come from experience with the work. That's something that they have to come to me for.
Very good!
If you’ve avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI for that one thing?
I haven’t yet avoided it. (laugh)
Not at all? Okay. (laugh) So any chance you get, you will try AI first?
Yes. Yes.
Very good.
A common and growing concern nowadays is where AI and machine learning systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up with them.
How do you feel about companies using data and content for training their AI and ML systems and tools? Should ethical AI tool companies get consent from (and compensate) the people whose data they want to use for training?
It gets deeper than that. I wish it was that simple. Wish it was that simple. Some user puts something in there and, you know, maybe they feel hurt that it was used. There are a lot of things happening in this area that go deeper than this. It goes deeper into very private information; it goes deeper into judging and profiling people based on, you know - we have to build a legal structure to protect individuals from what I see happening. I've quit jobs because of the ethics that I've seen, especially around underrepresented people.
Oh wow.
So I feel AI is like a kid that goes to church. But in church, there's this lady who is very homophobic, very - she has her beliefs. And I'm putting my child there and she's babysitting. And all this information is being fed to my child. And then I pull my child out of that, and I realize: this is not what was meant to - what happened?! But it's too late. All this information just got fed into this solution. And the only thing you get to do is to expose this child to others, to the world, to the different things that you know are ethical, so that the child now has the intelligence: maybe that is not the good approach, maybe this is the good approach.
AI is still like that creation of man that does not know what good or bad is. But it's up to us, the adults, the intelligences that have morals and ethical values, to teach this child what is good and bad, because they don't know. They don’t know.
Do I think we're going to eventually build a structure on the use of data from people? Yes, but that's the least of my worries. My biggest worry is the intelligence behind those algorithms that are actually being used to profile, to stereotype, to discriminate. That's what worries me most.
If I heard you correctly, and please restate or correct me if this is wrong, but what I'm hearing you say is that you're more concerned that the data isn't balanced enough or representative enough to mitigate some of the biases that have been a concern with AI.
So you're more concerned that we get enough of the right kinds of data into it, as opposed to worrying about licensing or compensating or crediting the sources of the data.
Is that a fair statement? Or would you like to redirect that?
The best statement. I'm going to copy that statement, because I need it. I don't know how to explain what I'm trying to explain, because I get so deep into algorithms and technical details that I really don't use much of my communication skills. That's what I use AI for - it's very good at it - but you just said it correctly.
Feel free to quote and steal it. It's totally fine!
So when we talk about using AI based tools: as a user of the tools, user of ChatGPT or Gemini or a tool like that, do you feel like you know where the data used for the AI models came from or whether those original creators consented to its use? A lot of tool providers are not transparent about sharing that information.
I don't think they'll ever share, because a lot of machine learning was done for meaning - meaning, new sources. And it's very difficult to backtrack what was fed and what was not fed. I think what we could eventually do is filter the output based on the needs of the user, all right?
For example, you're getting certain things where it says "I am not able to do this", because you know that right there is as far as we can go. But all of these tools were fed everything, everything. So we're going to continue to bring that child back home and tell them: no, no, no, that's a bad word. You should not say that word.
Right.
As members of the public, there are cases where our personal data or content may have been used, or has been used, by an AI-based tool or system. Do you know of any cases that you could share?
I don't have an NDA, so I'm allowed to say certain things. All your information on your loans, your 401Ks, all your information that comes from your financials - it's all available and public. Not public as far as anybody, but if I am an institution and you work for me, I have access to all that information. Okay? I know if you pull money out; all this information is fed into a system, and I know if you are potentially going to be a threat to the organization, see? And it's okay. I mean, companies can protect themselves. I don't see a problem. It's the moment that you start flagging certain people as a threat, because they're going through hardship - that's where I draw the line.
I know of many things that I've fed into artificial intelligence that I feel nobody should even bother with. How many DUIs? Why would you judge somebody's capabilities based on something like that? Tickets - I mean, tickets, short sells, I don't know, financials - all these things that I've seen in my career. It didn't sit well. I walked out of many projects like that. Is it doable? Yeah, I can do it. Do I want to do it? No. I mean, right?
The last question, and you’ve alluded to it a bit, I think:
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies would need to do to earn and to keep your trust? And do you have specific ideas on how they can do that?
I don't. I look at AI as more of a glass half full, because it could be half empty, for sure. Many technologies are being thrown forward without asking us. Nobody asked us about Facebook, and now it's taken over our social media, and it's affecting the psychology of many of our kids. And nobody talked to us about all the TikToks and the influencers and all the things that are now part of our culture. Nobody asked us for permission for that.
But it's here, and we have to navigate through it the best possible way we can as humans. Now we're playing with the big boys, which is AI. And as humans, we are also going to have to figure out how we're going to draw the lines. And that's why it's important to help the AI ethics committees that we have in place in many organizations, because they know what I know. You and I are blinded to the future things that are coming out, because we can't keep up with so many things. These things are monsters that can eat the information of 10 years of human history in three hours, right?
We are the ones who are going to have to draw the line. And to do that, we need to be educated in, not necessarily how to code it, but what is coded, and how it impacts us. So it's a whole different level of solution - not necessarily at the level of code or privacy. It's more at the level of behavioral ethics.
Had I known what Facebook was going to be today, I would have been the first one to say: okay, we need to limit this. We need to have this. Had I known what YouTube was going to be like, I would have been: wait, we need to hold on on this.
And we had TV for many years. We didn't have naked people in the middle of the day. There were restrictions. We had a way to trust that when the TV was on, it wasn't going to show certain things, right? We need to do the same thing. And we need to step up to the day, because it moves faster than we do. And it's us. It depends on us - the parents, the governments, you know, and all the people that are using it and realizing the impact that it has on society - to draw those lines. We cannot let this go, because if we let it, it will be pretty devastating.
Yeah, one of my other guests, Angeline Corvaglia, is very much involved with a group called Data Girl and Friends, which is focused on trying to educate children, and the adults who take care of them, about how to be safer online. And she's looking at some of the bigger companies that have the ability to check that teens aren't sharing inappropriate content with each other. And yet they don't enforce it. They just say, "are you sure?" They could do better than that to make it safer for people, and she has a lot of examples.
Thank you for that discussion. Again, I think it comes back to, in some cases, the biases, and making sure that the companies are putting in the right effort to make sure that models aren't going to act in an unfair way towards people. And that I think is also something that's important for building trust.
Yes. You summarized it properly.
That's all the questions I had for today. Is there anything that you'd like to share with our audience?
Yeah, I just want to let you know that I help and coach technical database people into careers in data. I have a book called “Latina in Data” that shows a little bit of my journey, as well as the careers, the levels of complexity, and the skills that you need for each of these careers. So if you're interested in learning more about data and AI, then buy the book - it's on Amazon, “Latina in Data” - and reach out if you have any questions.
Awesome. Gilda, thank you so much for joining our interview series. It's been great learning about what you're doing with artificial intelligence and machine learning and how you decide when to use human intelligence for some things!
Thank you very much.
And best of luck with your book, “Latina in Data”. It sounds like a great read. Looking forward to it!
Thank you so much for the opportunity. I hope you guys enjoy!
Thank you!
References - Gilda Alvarez
Here are links to Gilda’s website and its free technical data career mentoring chatbot, plus a link for purchasing her book “Latina in Data”.
➡️ “Latina in Data: Navigating the Tech Terrain: A Survival Guide” https://a.co/d/0PrOFWI
➡️ GildaAlvarez.ai (find her technical data career mentoring chatbot here)
➡️ Gilda Alvarez on LinkedIn
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊
Credits and References
Audio Sound Effect from Pixabay