AISW #006: Jermaine Allgood, USA-based software technologist 📜(AI, Software, & Wetware interview)
An interview with USA-based software technologist Jermaine Allgood on his stories of using AI and how he feels about how AI is using people's data and content.
Introduction
I’m delighted to welcome Jermaine Allgood as our next guest in this 6P interview series on “AI, Software, and Wetware”! Today he’s sharing with us his experiences with using AI, and how he feels about AI companies using our data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary for reference on AI-specific terms used in this interview.
Interview
Jermaine, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I am Jermaine Allgood and I have been in technology for 30 years in many different roles, with the last 8 years in Data Science.
Can you tell us more about your experience in data science? Have you used AI, ML, and analytics professionally, or personally, or studied the technology?
I have used AI, ML, and analytics in my professional career in startups and in large corporations. I have worked in one startup that sold Data Science As A Service to small businesses, and, most interestingly, in the political space for a CEO who ran a campaign for US President as an Independent.
This is great context for our discussion, Jermaine - thank you for sharing that.
Can you share a specific story on how you have used AI and ML? What are your thoughts on how well the AI features worked for you, or didn’t? What went well and what didn’t go so well?
I have tried using AI/ML with RAG (via LangChain) to create a one-stop shop for accessing support documents from disparate document repositories. I was successful with the project, but I would not use AI/ML/RAG for this kind of work, as the results can be non-deterministic and just plain wrong (hallucinations).
There’s definitely a risk of AI generating wrong outputs - you’re wise to be wary.
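As an editorial aside, the document-retrieval idea Jermaine describes can be illustrated with a toy sketch. This is not LangChain or his actual implementation; it only mimics the retrieval step of a RAG pipeline with simple word-overlap scoring (real systems use embedding models and vector stores), and all of the document names below are made up.

```python
# Toy sketch of the retrieval step in a RAG-style document search.
# Real pipelines use embeddings and a vector store; here we score
# documents by simple word overlap with the query to show the idea.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

def retrieve(query: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    """Return the names of the top_k documents sharing the most words with the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda name: len(q & tokenize(docs[name])), reverse=True)
    return ranked[:top_k]

# Hypothetical support documents merged from separate repositories.
docs = {
    "vpn_setup.md": "how to configure the corporate vpn client on laptops",
    "printer_faq.md": "fixing common printer errors and paper jams",
    "password_reset.md": "steps to reset a forgotten account password",
}

print(retrieve("reset my password", docs))  # -> ['password_reset.md']
```

In a full RAG system, the retrieved passages would then be fed to a language model to compose an answer, and that generation step is where the hallucinations Jermaine warns about can creep in: the model may produce fluent text that is not actually grounded in the retrieved documents.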
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you didn’t use it?
I am a creative. I draw and paint physically and digitally. I would NOT use AI for this work. Creativity is unique to the person’s interpretation, experiences, feelings etc. and AI is a chimera of others’ work.
So many creative people I know feel the same way, Jermaine!
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training? (Examples: musicians, artists, writers, actors, software developers, medical patients, students, social media users)
I strongly believe permission should be granted, and violations of that should be prosecuted. I believe companies have way too much power and feel immune to prosecution for stealing others’ work. They try to make it seem as if the AI mission is greater, and our rights being trampled on is a small price to pay.
Yes, we often see a certain arrogance among leaders in those companies who act like any data they can get their hands on is fair game for whatever they want to do with it. That’s not cool at all. And then they want to keep the rights to what other humans create with the systems they build! It’s hypocritical. It’s so good to see traction finally forming here in the US around protecting how people’s data and content can be used.
If you’ve worked with building an AI-based tool or system, what can you share about where the data came from and how it was obtained?
In the political startup I worked for, we purchased voter registration data legitimately from reputable sources. The voter registration data we used did not hold PII and consisted of city, state, political party, etc., which is all publicly available statistical information.
Thank you, Jermaine, it’s great to hear of a real example where data was acquired ethically!
As a member of the public, there are probably cases where your personal data or content may have been used, or has been used, by an AI-based tool or system. Do you know of any cases that you could share?
I do not know if my data was used by an AI system. With all of the data breaches, I assume this is the case, but it is just a reasonable assumption.
I agree, nowadays that’s probably the safest assumption in almost all cases!
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you been surprised by finding out they were using it for AI?
Come to think of it, I did have a negative experience with a software company using content to train AI models without explicit consent. My daughter and I are creatives. We do painting, sculpting, and digital art. We were taken by surprise when Adobe announced they were going to use any cloud-based content in their apps for training models. We still had 2 months left on our subscription, and when it expired we decided not to renew and to use open-source graphics programs like Inkscape, Blender, and Penpot instead. Those app makers state they do not and will not use your content for training AI models.
I felt betrayed because Adobe products are the de facto standard for digital art, photo editing, etc. They also made it really difficult to opt out of the terms, which left creatives no choice, particularly those with months left on their subscriptions. I did not follow up, but I believe the bad press and backlash may have caused them to roll back that policy.
That’s a strong example of a case where changing the terms and conditions on existing customers caused those customers to rebel - people like you and your daughter. I saw so many posts about people ditching Adobe and moving to other tools, like you did, and joining Cara.app. 1 There was a good article here on Substack about the move to Cara, and on some of the ‘fair use’ arguments. 2
I also saw lots of posts about creators adopting Glaze 3 and Nightshade 4 to protect their artwork from being misused by AI. Adobe did “clarify their policy” in mid-June 5, like you mentioned. But some of the damage can’t be undone - many artists will never trust Adobe again the way they used to.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
Some years ago, before AI was the big rage, I had a creditor try to say I owed money on an account I had paid off years prior. It was a good thing that I keep records, as I was able to prove the account was settled. My credit score would have been in jeopardy of being reduced had I not retained proof of the payoff. Creditors and financial institutions sell and share data all of the time.
Good that you were successful at getting that resolved, Jermaine. Data being shared and sold is really out of control, and that causes at least two major problems. One is the obvious risk to our privacy, and all of the potential harms that come from that. The other is that there’s no assurance that corrections to wrong data will be propagated to everywhere the data went. Having our data shared without our consent just compounds all of this.
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I believe they should disclose ALL and EVERY method of obtaining data AND inform the public of their plans prior to obtaining the data.
The conflict is that AI companies know that huge amounts of data are needed for their products and services, and that getting permission to use the data would drastically slow down or threaten to halt their operations.
Anything else you’d like to share with our audience?
Thank you!
Conclusion
That’s a wrap! Jermaine, thank you so much for joining our interview series. It’s been great learning about what you’re doing with artificial intelligence, and how you decide when to use human intelligence for some things! Best of luck with your continuing career in AI and technology - it’s so comforting to know that some of the people working in this area care about ethics, like you do.
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read. To automatically receive new 6P posts and support our work, consider becoming a subscriber! (If you like, you can subscribe to only People, or to any other sections of interest. Here’s how to manage sections.)
Credits
Jermaine Allgood on LinkedIn
Jermaine Allgood on Substack
References
https://cara.app/explore