AISW #028: Anonymous7, Canada-based computer science student 📜 (AI, Software, & Wetware interview)
An interview with an anonymous computer science student in Canada on their stories of using AI, and how they feel about AI using people's data and content.
Introduction - Anonymous7 interview
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
📜 Interview - Anonymous7
I’m delighted to welcome Anonymous7 from Canada as our next guest for “AI, Software, and Wetware”. Thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I’m currently a student at the University of Waterloo in Canada, doing my bachelor’s in computer science. I’m part of the co-op program at the university, which means that I complete six internships over the course of my degree. All of them so far have been in software engineering, and I’ve gotten the chance to work on a variety of things ranging from front-end to back-end to R&D.
Outside of computers, I’m involved in a few musical ensembles and I take a lot of social sciences electives. I am very much interested in wetware too :) I have a huge interest in anything to do with humans and their minds and how this affects their lives. So I love learning about the social sciences, history, religion, biology, etc. I’m also passionate about mental health and diversity.
With my electives, I’ve mostly centered on psychology courses, which have been super useful for understanding myself, others, and the world. Part of my interest in AI is probably that it’s a bit of a junction between psychology and computer science. In my courses I’ve found that the two fields reference each other a lot! (e.g. neural networks, multi-store model of human memory)
That’s a great background, and I love seeing that you’re diversifying your electives to complementary fields!
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I’ve worked with AI/ML professionally:
I’ve built some tools with it during my co-ops. In one, I worked on a POC [proof of concept] using ML models to optimize software testing. In another, I tried to build a feature that used generative AI to generate insightful questions about data.
Professionally, it’s been about using pre-made models and trying to adapt or apply them to company needs.
And studied it in my courses:
Academically, it’s been about the math and internals of how different types of AI work: essentially, how the pre-made models I was using at work actually function. I’ve really only taken one course on it, though, and it was very much breadth over depth.
I’ve also used it personally:
I’ve used ChatGPT, Pi, and Grammarly to help me with various tasks, mainly writing, because I sometimes struggle to phrase what I want to say. I use Grammarly to double-check essays for school or really important emails.
I’ve also played around a little with image generation for fun (Bing, AI-generated stickers in various social media platforms, an image generation app on my phone).
Can you share a specific story on how you have used AI or ML? What are your thoughts on how well the AI features of those tools worked for you, or didn’t? What went well and what didn’t go so well?
ChatGPT is really good for helping write emails or phrase something you want to say. But it also sometimes has a distinctive style of “speaking” that makes it sound, well, like AI.
ChatGPT is also surprisingly good for advice, but it’s generally prone to hallucinations. I once asked it to tell me about a specific model of toothbrush that didn’t exist, and it gave me a detailed response.
So I seek advice, but am extra skeptical/careful about it, which is how you should treat any advice you get online. But I find that AI often lies more convincingly than humans do because it can provide you with so many hallucinated details - e.g., I think the response to my fake toothbrush prompt included a story of how the company developed it.
These are excellent observations! Thank you for sharing that fake toothbrush story.
Grammarly works well, but most of its features are paywalled, and I didn’t pay for them. Sometimes it makes suggestions that are off, because no AI is perfect. I find that with AI suggestions, you usually need to go through everything and verify. But it’s still very helpful.
Your experience with Grammarly sounds similar to mine (I also have the free plan). I have only used it for its readability metrics. But when I was evaluating it for that purpose, I remember noticing that the suggestions were sometimes ‘off’.
I’ve used call summarizing too, on Teams. As someone with ADHD, I find this can be extremely helpful, because it can sometimes be hard to focus through a long presentation.
My AI team piloted the Zoom AI summaries last year, and they were not perfect, but not bad. What I liked even better than the summaries was having live captions during the call:
It helped me ‘hear’ better what others were saying.
The caption history window gave me a ‘buffer’ in case I was interrupted - it let me catch up on what I missed when I returned to the call, without bothering anyone else.
Watching it while *I* was talking helped me see if I was enunciating well enough or if I needed to slow down or be more careful with my words.
I haven’t used live captions during calls very much, but I usually have auto captions turned on for social media, and I find them very helpful there. I also find that captions help me ‘hear’ better what people are saying and help me catch everything they’re saying. I think I’ll definitely give live call captions a try!
And I didn’t know that you have ADHD! Have you heard that neurodivergence is more common in tech, and even higher in AI and ML areas of work? (I wrote a post about why AI teams should be more neuro-inclusive a few months ago.) I’d love to hear more about your experiences with communications tools like Teams.
I also didn’t know that I had ADHD until recently :) I’ve heard that neurodivergence is more common in tech, which is really interesting to me, because this is not the only interest of mine where I discovered that ADHD and neurodivergence are apparently over-represented. I didn’t know that it was even higher in AI and ML areas. I’m usually nervous to disclose this to people, so I’m really happy to see the article you wrote and that companies are making efforts to be more neuro-inclusive.
I think AI has a lot of potential for improving accessibility. Live captions and summaries are very helpful for many people, even if they’re not perfect. I also find the other accessibility features of Teams that are not necessarily AI-based and not necessarily targeted towards me to be helpful, like the highlighting of the speaker’s frame, noise suppression options, and the ability to raise your hand. They all provide more options and flexibility when communicating so that everybody can make it work for their brain/body.
Another recent use of AI that I’ve adopted (this is already way more than one specific story, but I found it very interesting): I had a counselor recommend trying AI chatbots to help with social anxiety, specifically Pi and ChatGPT’s Communication Coach. I was surprised at how helpful it was for finding strategies or insights. I’d been very hesitant to use AI for anything mental health related, due to the unreliability of some of its responses, but I found that these chatbots were very good at summarizing the strategies and advice available on the Internet. Pi also adds a lot of reassurance and emotional validation in its responses, which can help with negative self-talk, and asks follow-up questions which can lead to some self-discoveries. I’d say it works similarly to interactive or assisted journaling. You should never use AI as a substitute for counseling or for serious mental health concerns, but if you already have a handle on things and just need help with strategies or questions to ask yourself, it can be really useful.
That’s really interesting to hear that you found the chatbot helpful for these specific mental health purposes. Thank you for sharing that!
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I avoid using AI-based tools for topics I’m unfamiliar with. For example, explaining a concept or giving advice for something on which I have no prior knowledge. ChatGPT is incredibly useful for helping me with things where I do have some knowledge, but am just missing some understanding; but for things I know nothing about, I don’t trust that it will be 100% correct.
That sounds prudent - it’s similar to what my medical student guest Anonymous1 said about how they use LLMs to fill in gaps or explain concepts, but not for new areas.
I also avoid using it to completely write or code something for me. When I use it for writing, I use it for ideas or phrasing, but I feel that it isn’t really my own message if I take what is generated in its entirety.
I’ve heard similar comments from other interview guests, about wanting their writing to sound ‘like them’ and not ‘like AI’.
When I use it for coding, which is rare, it is usually to figure out how to implement logic in a language I’m not familiar with.
Do you have a specific story you could share about a time when you tried using an AI-based tool to write code? How good was the code? Like, did it compile and run, or did it make up calls to library functions that didn’t exist?
I think one time I tried using ChatGPT to generate Python code, and it had a runtime error. The code used libraries I wasn’t familiar with, so I pretty quickly gave up on debugging it and just figured out how to write it using libraries I did know how to use.
But I’ve also used it to generate code that worked, and it sometimes gave me ideas on how to better style the logic. For using it to help with an unfamiliar coding language, I knew the logic I wanted in pseudocode and just needed to know what syntax to use, so I didn’t run into many errors.
In general I think it’s easier to use LLMs to help with smaller or more specific pieces of code rather than asking it to generate a large portion, as in the latter case it becomes the same as trying to read and understand another human’s code :)
Disclaimer: my memory might be a bit inaccurate, as it’s been a while since I used AI to write code.
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI and ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training?
I think for companies to be ethical, they should get consent, and give the option to opt out. Many people are uncomfortable with their data being used for AI, and it raises privacy and intellectual property concerns that I think users should have the choice to avoid.
As well, there are many services now that have added AI that users don’t necessarily want. For example, I and others I know just want to use social media to connect with friends; we don’t use the AI features that have been added, and we don’t want our data to be used for these features.
I think that it’s unfair to the original creators, as AI would not have any intelligence without their data. AI art is also causing a lot of harm among artists - their works are often used without consent or compensation, and companies who use AI to generate graphics profit off of such artists without the artists getting paid.
With AI tools, there is already less motivation to create original content. If we never compensate the people who create such content, I think the overall quality of AI results and of intellectual creations in our world might go down because there will be less and less original human content created.
Great points! And again, a lot of people share your views. I recently chatted with a technical artist who stated that exact concern about dropoffs in new original human content causing AI to degrade.
I haven’t really seen that view discussed, so I’m glad to hear there are others who are thinking the same.
When you’ve USED AI-based tools, do you as a user know where the data used for the AI models came from, and whether the original creators of the data consented to its use? (Not all tool providers are transparent about sharing this info.)
I don’t feel that tool providers have been very transparent. It is not necessarily that they keep it secret, but they are not upfront about it; it is often vaguely worded and/or buried in their policies. You have to go digging to find out specifically where and how data was obtained for the models. I feel that most of my knowledge of where data comes from is from news online and not the companies themselves.
I agree - this has been my experience as well, that the companies aren’t transparent. We’re usually finding out from news and social media, not the companies, like about the Adobe and Meta and LinkedIn policy changes and how they’re using people’s data.
If you’ve worked with BUILDING an AI-based tool or system, what can you share about where the data came from and how it was obtained?
Actually, for one of the AI tools I worked on in an internship, it also wasn’t clear to me where the data came from. My team worked just on using the already established model and was not well connected to the teams that worked on the model itself. For my other internship, the tool used previous software test data from inside the company, so no outside or personal data was used.
As members of the public, there are cases where our personal data or content may have been, or definitely has been, used by an AI-based tool or system. Do you know of any cases that you could share?
I don’t think I know of any specific cases, but there’s this online test proctoring software that my friends at other universities had to use, and it seemed like a massive invasion of privacy. I guess that counts, because it uses computer vision.
Yes, that definitely counts. I had to deal with one of those online test proctoring tools when I did my latest agile certification. It felt really invasive. And I didn’t appreciate having to give them so much personal info, because data breaches are so common. And both requirements came up kind of at the last minute, which made it more stressful. But it was either that or not get the certification, which I’d already paid for and studied for! That’s not much room for ‘consent’.
Lately I’ve been reading about these test monitoring tools being used in universities and schools, and even in high schools and with younger kids. Some schools give kids devices that they have to use for their schoolwork and that have their personal information. Parents who are privacy-conscious are objecting to use of their kids’ data, with good reason.
Are there other examples that have affected you?
Yes, to expand on the proctoring tool: it was software that almost all of my friends who didn’t go to my university had to use for their exams. From their descriptions, it sounded extremely invasive and almost like spyware. They didn’t really have a choice, either, because they needed to pass their courses. It also seemed very unnecessary and not designed for humans. For example, it would track your eye movement to make sure you were looking at the screen, which seemed insane to me because humans naturally look all over when thinking.
I’m thankful my university disallowed it and instead created exams that were more difficult but open book, or had video proctoring by a human. I think there are always alternatives to AI that should really be considered, and if an AI solution is very invasive, then it should be avoided.
Biometric and photo screening at airports also makes me uncomfortable, but I didn’t really know that it was AI-based.
Yes, I’m not sure about other countries, but in the US, the TSA is now taking photos of people and comparing them to IDs, and that’s done with machine learning.
I totally get being uncomfortable with biometrics. They are risky because if ours get ‘stolen’, we can’t change them easily. What kinds of biometrics have you experienced, at airports or in other situations?
I’ve experienced fingerprint and face scanning at airports. It seems that is how it’s done in many countries now (I’ve encountered this in Canada, the US, China, and Japan so far). I guess it makes me uncomfortable because it feels like the government, or a government that isn’t even mine, is keeping really detailed tabs on me.
I’ve encountered biometrics as an authentication method in other situations, but those have always offered alternatives so that I didn’t have to agree to biometrics.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you been surprised by finding out they were using it for AI?
Yes, I think Meta (Instagram) and Grammarly? I don’t know why I didn’t pay that much attention.
If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
Oh, I didn’t feel like I had a real choice about opting out. I’m dependent on Instagram as my main way of keeping in touch and messaging a lot of people, so I can’t easily stop using it.
Yeah, that definitely makes it tough. I know some people who have tried switching to different messaging tools, but the “network effect” makes it a real challenge. If that’s where everybody you want to talk to IS, getting them all to go somewhere else with you at once is hard. My family is going through this now.
Yes, I don’t think I could realistically quit Instagram. It’s where all the students at my university add each other, and also where everyone in one of my hobby communities connects. Meta also owns so many sites that it’s very difficult to avoid them if you want to.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
I haven’t had any AI-related problems but yes, there have been multiple data breaches at various companies where my name and email were leaked. I received tons of spam and phishing emails afterwards, and still do. Luckily, they’ve all been very obvious to me as phishing emails - poor spelling/grammar, outrageous claims, government asking for money, etc. - but I worry about more vulnerable people such as older people or new immigrants who may not be able to identify these emails as easily. I’m also worried that my name and email could be used to create spear phishing emails that would be harder for me to spot, if I ever become someone who would be valuable for scammers to target (e.g. a higher up in a company).
I’m glad to hear that those breaches haven’t caused you any harm!
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
Be transparent about where the data is coming from, and about whether they are going to use my data. I wouldn’t necessarily mind my data being used if I knew which info was being taken, and for what kind of purpose it would be used.
Have it clearly written in their policy and make it visible to users, i.e. not hidden all the way down somewhere.
Yes, both great points, and I think most of the world agrees with you!
Anything else you’d like to share with our audience?
I also want to add that I don’t think companies should be so worried about everybody opting out. AI tools are exciting and extremely useful. People are willing to contribute data in exchange for being able to use these tools, and more people would be willing to if you’re honest rather than trying to hide your policies.
Anonymous7, thank you so much for joining our interview series. It’s been great learning about what you’re doing with artificial intelligence tools, how you decide when to use human intelligence for some things, and how you feel about use of your data!
Interview Links
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools or being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I don’t use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being a featured interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool; one-time tips are deeply appreciated; and shares, hearts, comments, and restacks are awesome 😊