AISW #028: Anonymous7, Canada-based computer science student (AI, Software, & Wetware interview)
An interview with an anonymous computer science student in Canada on their stories of using AI, and how they feel about AI using people's data and content.
Introduction - Anonymous7 interview
This post is part of our 6P interview series on "AI, Software, and Wetware". Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, "AI" means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and "AI Fundamentals #01: What is Artificial Intelligence?" for reference.

Interview - Anonymous7
I'm delighted to welcome Anonymous7 from Canada as our next guest for "AI, Software, and Wetware". Thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I'm currently a student at the University of Waterloo in Canada, doing my bachelor's in computer science. I'm part of the co-op program at the university, which means that I complete six internships over the course of my degree. All of them so far have been in software engineering, and I've gotten the chance to work on a variety of things ranging from front-end to back-end to R&D.
Outside of computers, I'm involved in a few musical ensembles and I take a lot of social sciences electives. I am very much interested in wetware too :) I have a huge interest in anything to do with humans and their minds and how this affects their lives. So I love learning about the social sciences, history, religion, biology, etc. I'm also passionate about mental health and diversity.
With my electives, I've mostly centered on psychology courses, which have been super useful for understanding myself, others, and the world. Part of my interest in AI is probably that it's a bit of a junction between psychology and computer science. In my courses I've found that the two fields reference each other a lot! (e.g. neural networks, multi-store model of human memory)
That's a great background, and I love seeing that you're diversifying your electives to complementary fields!
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I've worked with AI/ML professionally:
I've built some tools with it during my co-ops. In one, I worked on a POC [proof of concept] using ML models to optimize software testing. In another, I tried to create a feature using generative AI to generate insightful questions about data.
Professionally, it's been about using pre-made models and trying to adapt or apply them to company needs.
And studied it in my courses:
Academically, it's been about the math and internals of how different types of AI work: essentially, how the pre-made models I was using at work actually function. I've really only taken one course on it though, and that was very much breadth over depth.
I've also used it personally:
I've used ChatGPT, Pi, and Grammarly to help me with various tasks. Mainly writing, because I sometimes struggle to phrase what I want to say, and I use Grammarly to double-check essays for school or really important emails.
I've also played around a little with image generation for fun (Bing, AI-generated stickers in various social media platforms, an image generation app on my phone).
Can you share a specific story on how you have used AI or ML? What are your thoughts on how well the AI features of those tools worked for you, or didn't? What went well and what didn't go so well?
ChatGPT is really good for helping write emails or phrase something you want to say. But it also sometimes has a distinctive style of "speaking" that makes it sound, well, like AI.
ChatGPT is also surprisingly good for advice, but it's also generally prone to hallucinations. I once asked it to tell me about a specific model of toothbrush which didn't exist, and it gave me a detailed response.
So I seek advice, but am extra skeptical/careful about it, which is how you should treat any advice you get online. But I find that AI often lies more convincingly than humans do because it can provide you with so many hallucinated details - e.g., I think the response to my fake toothbrush prompt included a story of how the company developed it.
These are excellent observations! Thank you for sharing that fake toothbrush story.
Grammarly works well, but most of its features are paywalled, and I didn't pay for them. Sometimes it makes suggestions that are off, because no AI is perfect. I find that with AI suggestions, you usually need to go through everything and verify. But it's still very helpful.
Your experience with Grammarly sounds similar to mine (I also have the free plan). I have only used it for its readability metrics. But when I was evaluating it for that purpose, I remember noticing that the suggestions were sometimes "off".
I've used call summarization on Teams, too. As someone with ADHD, I find this extremely helpful, since it can sometimes be hard to focus through a long presentation.
My AI team piloted the Zoom AI summaries last year, and they were not perfect, but not bad. What I liked even better than the summaries was having live captions during the call:
It helped me "hear" better what others were saying.
The caption history window gave me a "buffer" in case I was interrupted - it let me catch up on what I missed when I returned to the call, without bothering anyone else.
Watching it while *I* was talking helped me see if I was enunciating well enough or if I needed to slow down or be more careful with my words.
I haven't used live captions during calls very much, but I usually have auto captions turned on for social media, and I find them very helpful there. I also find that captions help me "hear" better and catch everything people are saying. I think I'll definitely give live call captions a try!
And I didn't know that you have ADHD! Have you heard that neurodivergence is more common in tech, and even higher in AI and ML areas of work? (I wrote a post about why AI teams should be more neuro-inclusive a few months ago.) I'd love to hear more about your experiences with communications tools like Teams.
I also didn't know that I had ADHD until recently :) I've heard that neurodivergence is more common in tech, which is really interesting to me, because this is not the only interest of mine where I discovered that ADHD and neurodivergence are apparently over-represented. I didn't know that it was even higher in AI and ML areas. I'm usually nervous to disclose this to people, so I'm really happy to see the article you wrote and that companies are making efforts to be more neuro-inclusive.
I think AI has a lot of potential for improving accessibility. Live captions and summaries are very helpful for many people, even if they're not perfect. I also find the other accessibility features of Teams that are not necessarily AI-based and not necessarily targeted towards me to be helpful, like the highlighting of the speaker's frame, noise suppression options, and the ability to raise your hand. They all provide more options and flexibility when communicating so that everybody can make it work for their brain/body.
One more recent use of AI that I adopted (this is already way more than one specific story, but I found it very interesting): a counselor recommended that I try AI chatbots to help with social anxiety, specifically Pi and ChatGPT's Communication Coach. I was surprised at how helpful they were for finding strategies or insights. I'd been very hesitant to use AI for anything mental health related due to the unreliability of some of its responses, but I found that these chatbots were very good at summarizing the strategies and advice available on the Internet. Pi also adds a lot of reassurance and emotional validation in its responses, which can help with negative self-talk, and asks follow-up questions which can lead to some self-discoveries. I'd say it works similarly to interactive or assisted journaling. You should never use AI as a substitute for counseling or for serious mental health concerns, but if you already have a handle on things and just need help with strategies or questions to ask yourself, it can be really useful.
That's really interesting to hear that you found the chatbot helpful for these specific mental health purposes. Thank you for sharing that!
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I avoid using AI-based tools for topics I'm unfamiliar with - for example, explaining a concept or giving advice on something where I have no prior knowledge. ChatGPT is incredibly useful for helping me with things where I do have some knowledge but am just missing some understanding; for things I know nothing about, though, I don't trust that it will be 100% correct.
That sounds prudent - it's similar to what my medical student guest Anonymous1 said about how they use LLMs to fill in gaps or explain concepts, but not for new areas.
I also avoid using it to completely write or code something for me. When I use it for writing, I use it for ideas or phrasing, but I feel that it isn't really my own message if I take what is generated in its entirety.
I've heard similar comments from other interview guests, about wanting their writing to sound "like them" and not "like AI".
When I use it for coding, which is rare, it is usually to figure out how to implement logic in a language I'm not familiar with.
Do you have a specific story you could share about a time when you tried using an AI-based tool to write code? How good was the code? Like, did it compile and run, or did it make up calls to library functions that didn't exist?
I think one time I tried using ChatGPT to generate Python code, and it had a runtime error. The code used libraries I wasn't familiar with, so I pretty quickly gave up on debugging it and just figured out how to write it using libraries I did know how to use.
But I've also used it to generate code that worked, and it sometimes gave me ideas on how to better structure the logic. When I used it to help with an unfamiliar coding language, I knew the logic I wanted in pseudocode and just needed to know what syntax to use, so I didn't run into many errors.
In general, I think it's easier to use LLMs to help with smaller or more specific pieces of code rather than asking them to generate a large portion, as in the latter case it becomes the same as trying to read and understand another human's code :)
Disclaimer: my memory might be a bit inaccurate, as it's been a while since I used AI to write code.
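For readers curious what that "pseudocode first, syntax second" workflow can look like, here's a minimal, hypothetical sketch in Python. The task and file name are invented for illustration (they're not from Anonymous7's actual work); the idea is that you already know the logic, and the only thing you'd ask an LLM for is the unfamiliar syntax.

```python
# Hypothetical task: count the lines in a log file that contain "ERROR".
#
# Pseudocode (the part you already know):
#   for each line in the file:
#       if the line contains "ERROR": increment a counter
#   return the counter
#
# Python syntax (the part you might ask an LLM about):

def count_error_lines(path: str) -> int:
    """Count lines containing 'ERROR' in a text file."""
    total = 0
    with open(path, encoding="utf-8") as f:  # iterates the file line by line
        for line in f:
            if "ERROR" in line:
                total += 1
    return total

if __name__ == "__main__":
    # "app.log" is a made-up file name for this sketch.
    print(count_error_lines("app.log"))
```

Because the logic is pinned down in advance, a wrong suggestion from the model is easy to spot, which fits the point above about keeping LLM coding requests small and specific.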
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI and ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training?
I think for companies to be ethical, they should get consent, and give the option to opt out. Many people are uncomfortable with their data being used for AI, and it raises privacy and intellectual property concerns that I think users should have the choice to avoid.
As well, there are many services now that have added AI that users don't necessarily want. For example, I and others I know just want to use social media to connect with friends; we don't use the AI features that have been added, and we don't want our data to be used for these features.
I think that it's unfair to the original creators, as AI would not have any intelligence without their data. AI art is also causing a lot of harm to artists - their works are often used without consent or compensation, and companies that use AI to generate graphics profit off of those artists without the artists getting paid.
With AI tools, there is already less motivation to create original content. If we never compensate the people who create such content, I think the overall quality of AI results and of intellectual creations in our world might go down because there will be less and less original human content created.
Great points! And again, a lot of people share your views. I recently chatted with a technical artist who raised that exact concern: a drop-off in new original human content could cause AI to degrade.
I haven't really seen that view discussed, so I'm glad to hear there are others who are thinking the same.
When you've USED AI-based tools, do you as a user know where the data used for the AI models came from, and whether the original creators of the data consented to its use? (Not all tool providers are transparent about sharing this info.)
I don't feel that tool providers have been very transparent. It is not necessarily that they keep it secret, but they are not upfront about it; it is often vaguely worded and/or buried in their policies. You have to go digging to find out specifically where and how data was obtained for the models. I feel that most of my knowledge of where data comes from is from news online and not the companies themselves.
I agree - this has been my experience as well, that the companies aren't transparent. We're usually finding out from news and social media, not the companies, like about the Adobe and Meta and LinkedIn policy changes and how they're using people's data.
If you've worked with BUILDING an AI-based tool or system, what can you share about where the data came from and how it was obtained?
Actually, for one of the AI tools I worked on in an internship, it also wasn't clear to me where the data came from. My team worked just on using the already established model and was not well connected to the teams that worked on the model itself. For my other internship, the tool used previous software test data from inside the company, so no outside or personal data was used.
As members of the public, there are cases where our personal data or content may have been, or already has been, used by an AI-based tool or system. Do you know of any cases that you could share?
I don't think I know of any specific cases, but there's this online test proctoring software that my friends at other universities had to use, which seemed like a massive invasion of privacy. I guess that counts, because it's computer vision.
Yes, that definitely counts. I had to deal with one of those online test proctoring tools when I did my latest agile certification. It felt really invasive. And I didn't appreciate having to give them so much personal info, because data breaches are so common. And both requirements came up kind of at the last minute, which made it more stressful. But it was either that or not get the certification, which I'd already paid for and studied for! That's not much room for "consent".
Lately I've been reading about these test monitoring tools being used in universities and schools, and even in high schools and with younger kids. Some schools give kids devices that they have to use for their schoolwork and that have their personal information. Parents who are privacy-conscious are objecting to use of their kids' data, with good reason.
Are there other examples that have affected you?
Yes, to expand on the proctoring tool: it was software that almost all my friends who didn't go to my university had to use for their exams. From their descriptions, it sounded extremely invasive and almost like spyware. They didn't really have a choice, either, because they needed to pass their courses. It also seemed very unnecessary and not designed for humans; e.g. it would track your eye movement to make sure you were looking at the screen, which seemed insane to me because humans naturally look all over when thinking.
I'm thankful my university disallowed it and instead created exams that were more difficult but open book, or had video proctoring by a human. I think there are always alternatives to AI that should really be considered, and if an AI solution is very invasive, then it should be avoided.
Biometric and photo screening at airports also make me uncomfortable, but I didn't really know those were AI-based systems.
Yes, I'm not sure about other countries, but in the US, the TSA is now taking photos of people and comparing them to IDs, and that's done with machine learning.
I totally get being uncomfortable with biometrics. They are risky because if ours get "stolen", we can't change them easily. What kinds of biometrics have you experienced, at airports or in other situations?
I've experienced fingerprint and face scanning at airports. It seems that is how it's done in many countries now (I've encountered this in Canada, the US, China, and Japan so far). I guess it makes me uncomfortable because it feels like the government, or a government that isn't even mine, is keeping really detailed tabs on me.
I've encountered biometrics as an authentication method in other situations, but those have always offered alternatives, so I didn't have to agree to biometrics.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you been surprised by finding out they were using it for AI?
Yes, I think Meta (Instagram) and Grammarly? I don't know why I didn't pay that much attention.
If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
Oh, I didn't feel like I had a real choice about opting out. I'm dependent on Instagram as my main way of keeping in touch and messaging a lot of people, so I can't easily stop using it.
Yeah, that definitely makes it tough. I know some people who have tried switching to different messaging tools, but the "network effect" makes it a real challenge. If that's where everybody you want to talk to IS, getting them all to go somewhere else with you at once is hard. My family is going through this now.
Yes, I don't think I realistically could quit Instagram. It's where all the students at my university add each other, and also where everyone in one of my hobby communities connects. Meta also owns so many sites that it's very difficult to avoid them if you want to.
Has a company's use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
I haven't had any AI-related problems, but yes, there have been multiple data breaches at various companies where my name and email were leaked. I received tons of spam and phishing emails afterwards, and still do. Luckily, they've all been very obvious to me as phishing emails - poor spelling/grammar, outrageous claims, the government asking for money, etc. - but I worry about more vulnerable people, such as older people or new immigrants, who may not be able to identify these emails as easily. I'm also worried that my name and email could be used to create spear phishing emails that would be harder for me to spot, if I ever become someone who would be valuable for scammers to target (e.g. a higher-up in a company).
I'm glad to hear that those breaches haven't caused you any harm!
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
Be transparent about where the data is coming from, and about whether they are going to use my data. I wouldn't necessarily mind my data being used if I knew which info was being taken and for what kind of purpose.
Have it clearly written in their policy and make it visible to users, i.e. not hidden all the way down somewhere.
Yes, both great points, and I think most of the world agrees with you!
Anything else you'd like to share with our audience?
I also want to add that I don't think companies should be so worried about everybody opting out. AI tools are exciting and extremely useful. People are willing to contribute data in exchange for being able to use these tools, and more would be willing if companies are honest rather than trying to hide their policies.
Anonymous7, thank you so much for joining our interview series. It's been great learning about what you're doing with artificial intelligence tools, how you decide when to use human intelligence for some things, and how you feel about use of your data!
Interview Links
About this interview series and newsletter
This post is part of our 2024 interview series on "AI, Software, and Wetware". It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or being affected by AI.
And we're all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post "But I don't use AI".
We want to hear from a diverse pool of people worldwide in a variety of roles. If you're interested in being a featured interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here's how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool; one-time tips are deeply appreciated; and shares, hearts, comments, and restacks are awesome!