📜 AISW #059: Anonymous9, USA-based high school teacher (AI, Software, & Wetware interview)
Written interview with an anonymous USA-based high school teacher on his stories of using AI and how he feels about AI using people's data and content
Introduction
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Anonymous9
I’m delighted to welcome Anonymous9 from the USA as my next guest for “AI, Software, and Wetware”. Thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.
Hello! I am a public school warrior/educator who has LOVED and excelled at his job for 33 of the last 34 years, now just a few months away from retirement. For most of my life, teaching would be the first descriptor on any attempt to nail down my identity, though I give myself strong scores for maintaining a healthy balance of life. I am a devoted husband, father, neighborhood tenant, friend, fan of virtually all of nature, thrilled to engage in many ways of moving through the natural world.
Thank you for that introduction. I love that you describe your multiple roles in life, and not just your professional identity.
(2) What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
Honestly, I minimize technological involvement to the best of my ability, so I suspect that puts me in a different realm than most of you. Early in my teaching career, I saw that it was wise to get my Master’s degree, and I quickly found it most expedient to get a degree in Technology in the Classroom, just as the internet was becoming more easily accessible back in 1992, before cellphones and classroom laptops were ubiquitous. I only bring this up because most of my classmates now run the technology department of their entire school district. Still friends with them, I used most of the classwork to argue why I considered much of technology to be hazardous to learning and student well-being. Time and experience have only strengthened that view.
I would say I have low-level familiarity with AI. Most of it is through the experience of student use of ChatGPT with or without teacher support. But I also have lots of indirect experience of it through conversations with my wife, other teachers, and neighborhood friends/university professors. I suppose use of my wife’s car and technology in general means I have a relationship with AI whether I know it or not.
Yes, it’s really hard to avoid AI in daily life nowadays, even if you choose not to use it intentionally! Can I ask, how ‘smart’ is your wife’s car? Is it “connected” and does it have any AI-based safety features that you’re aware of?
Ha! That question makes me laugh. A good kind. I think the car is stupid, not smart, because it gets irked when I ignore painted lines to make more smooth curves or avoid obstacles. A few times it has decided to put on brakes when I needed to speed up. We have not really worked out Bluetooth, but the car definitely chooses odd times to suddenly play music etc. I DO have to admit, I'm starting to like the backup camera and the navigation tool. The movie “Leave the World Behind” has stuck with me, particularly the scene of Teslas smashing into each other, essentially convincing me I want to drive my own damned car, not turn it over to others.
My car is the simplest thing available. I love it. But I wish I had the old-style roll-down windows, not electronic buttons. I know people who have drowned in cars because they didn’t open the window in time.
I know what you mean about the windows - it’s odd that the auto manufacturers haven’t come up yet with a design that lets people get out when they need to. The only ‘solution’ I’ve seen is wider availability of the ‘break window glass’ devices to keep under your seat for emergencies.
I remember seeing that movie too. 🙂 My family is also still driving an older car without connected features - no backup camera or infotainment system. We hope it keeps running forever!
(3) Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well? (Reference for this question: "But I don't use AI": 8 Sets of Examples of Everyday AI, Everywhere)
First, please know that I would never intentionally use AI, have only been led to do so through involuntary circumstances. Much as I appreciate efficiency, I am far more dedicated to creativity, work ethic, patience, collaboration. I very much want the world to slow down and consume less.
Understood. You are in good company; some of my other interview guests have expressed feeling the same way. (Example: Lis Sandi-Diaz) And, by the way, a recent study by Upwork showed that AI tools aren’t always helping people be more efficient; 77% of people reported feeling like AI is adding to their workload, not reducing it.
Last year I worked closely with a miserable student who was given the task of researching the role of AI in a topic of passion. This student HATES writing, was adamant that he would NOT blend the worlds of writing and his passions. In response I suggested he write about the role of AI in writing. We proceeded to ask ChatGPT to do things like: “Write a paper on the role of writing.” “Convert that paper into the language of an unmotivated high school student.” “Use an equal blend of research evidence and seemingly personal anecdote.” “Add more mistakes in grammar.” “End with a less confident conclusion.”
In the end, the student completed the paper, essentially narrating the quest, submitting a draft that probably saved him from failing the class. In some small way, I believe the process and my efforts helped him become a better writer. I also fear that he will use the tool more regularly even when a teacher is adamantly opposed to it.
Wow, that is a really interesting story and set of prompts. Was “the task of researching the role of AI in a topic of passion” a standard assignment given to a whole class, or was it a special assignment?
I should make clear that I am now an IA (instructional assistant), after 33 years of running the classroom. It is a HUGE step back in terms of stress and responsibility, but also in reward. I don't make the assignments, I help students navigate them.
In this case, the general prompt was for all students, and my job was to make the assignment tolerable/meaningful for them. I THINK it was the first time this teacher has given the assignment. He works in a highly collaborative group, but I see no sign the assignment has continued this year.
Ah, interesting; thank you for elaborating on that.
In another situation, I work closely with a colleague who teaches College Writing in conjunction with the local state university which means she needs to uphold their standards and curriculum as well as that of our high school. It is readily apparent that increasing distrust of students is sucking the wind out of her and she regularly gives tirades to the effect of: "It's gotten to the point that I am suspicious and sad anytime I read essays that are on time. 50% of the time, I see that work I wanted to give a strong grade to is breaking the rules, is mostly created by AI. It makes me sad and want to quit. And the truly sad thing is, the trend is spreading and I am increasingly paranoid about you all.”
That’s sad to hear that she has gotten so disillusioned by students using AI. I’m curious about school policies on allowing students to use AI, and whether AI-based tools are used to detect use of AI or to detect plagiarism. It sounds like your colleague’s university may not be strict about it. How is it in your high school?
Warning: my sarcasm, distaste, and reasons for retiring may come through. From my perspective, there are no meaningful norms any more, no policies that are consistently upheld, no real attempt to hold students accountable. The bottom line is that no kid should be held back from graduating on time. Every teacher I know is absolutely fed up and struggling to try, to care. As has always been the case, teachers are too busy to really collaborate and exchange meaningful policies, and so there is hardly any consistency on anything.
I don't consider this teacher disillusioned. She is aware and she cares and so she tries hard to thwart inappropriate use. But that’s like one traffic officer trying to uphold traffic rules for a whole city. We have policies in print, but they are pretty meaningless in the end, like the “no camping in parks” signs next to every encampment in town. I suspect the University has similar rules in print, similar lack of ability to enforce. My neighborhood is mostly professors in that University and the power of “likes” and popularity surveys is wildly impactful. Students don’t give professors “likes” for upholding policies regarding AI.
Got it … thank you for sharing your experiences with this.
Last one (though I have many more). Recently we happened to travel through the hometown of the voice artist who is a staple source for my wife’s company. He took us to a nice dinner, thanked us profusely with the words “I don't think I could live this life if I didn't have your company as a client.” He was an absolutely wonderful man. Less than a week later, the company’s founder announced an experiment, to secretly use AI voice artists in the work to see if anybody noticed or cared. Nobody noticed or cared, and the company has since stopped using the human voice artist. I consider this an abomination.
Oh, that’s awful. I’ve read about so many creative and talented people who are losing their livelihoods because of people using AI tools. One of my earlier interview guests [Anyela Vega] reported that a friend who was a voice artist had his voice literally stolen, by people who made an AI clone of it without his consent; and now they get income from voice jobs with their illegal clone of his voice, instead of him. That’s so unfair.
I would hesitate to even call the clones “AI voice artists”. They’re not artists. They are just clones.
I did some research on AI voice cloning last summer. I found only a handful of ethically-trained voice cloning tools that let people clone their own voices, but not other people’s. There are dozens of unethical AI voice cloning tools. A few companies had success in taking down some of the unauthorized voices, but there are still so many out there.
I hate to say it but that sounds right, just how these things work. Record and use a distinctive voice. As you know, we have many rules and regulations that are ignored or mocked. And then we have realms that we have yet to regulate.
(4) If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I have many. Essentially, if I know AI is involved, I turn away. As a newcomer to Substack, I won’t read anything illustrated by AI. If I can avoid a robot phone system, I will.
I DO use navigation apps, and occasionally check out what Spotify algorithms (and similar systems) find for me, but I try to avoid them.
Last year, I mentioned to one of my administrators the challenge and time-consuming nature of writing letters of recommendation for all the students who request one. They immediately suggested I use ChatGPT to write the letters, and said that they frequently do so. I lost much respect for the administrator, and made clear that I will never follow that example. I am horrified by anyone who does this.
Ugh. That is disappointing. And there’s the risk that the more often people send in AI-generated letters of recommendation, the less value letters of recommendation will have.
Grades have lost their value for lots of reasons, as have SAT scores. Students are writing (and reading) less. We have been advised to be less personal in recommendations, sticking only to attendance and grades. Personal essays are increasingly written by AI. Hard to say what matters any more.
I mean, if the recommendation letters are supposed to only cover attendance and grades, we definitely don’t need AI for that - just some simple automation that pulls from the school records. That would have about as much value as corporate employment verifications do nowadays in some places; leaders are sometimes told that dates of employment and job titles are all the company will verify. A lot of companies use a third-party service for this. And I’m confident that those employment verifications are automated - and not done with AI.
(5) A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get Consent from (and Credit & Compensate) people whose data they want to use for training? (the “3Cs Rule”)
Absolutely, most definitely, they need to get consent, give credit, and compensate more than they imagine. Because of AI it is only more challenging for musicians, artists, writers, etc. to make a reasonable living from their work, and it is utterly grotesque that publishing work feeds and strengthens the power of the Enemy. I have heard and believe that Spotify is working on AI-created music to feed new music based on people’s listening history. This is staggeringly gross, makes me want to abandon Spotify. Perhaps I will.
Yes, Spotify has done some interesting things with ML for music recommendation algorithms that help people find music they like, which seems ok. But reports started coming out in late 2023 that they’ve also started using AI-generated music [ZDnet article]. And I’ve heard from musicians that their own music is being crowded out on streaming platforms by AI-generated songs. Maybe I should un-sync my Substack podcast from Spotify.
I know beyond the shadow of doubt that what you say is true. Can go on and on and on about it, but then raise my blood pressure too high. I was super proud of Neil Young and Joni Mitchell for steering clear of the platform, but they too have caved. I have yet to cut my ties, but believe I should.
My antidote is going to lots of live music, ALWAYS buying a recording of their music from the merch table rather than through the giants. I have posted a few Spotify recommendations on Substack and believe I lost a couple subscribers due to apparent hypocrisy. That, of course, is not the issue. The ease of sharing work without consulting the artist is of course problematic.
I’ve seen some articles saying that the flip side of so many AI-generated songs is that live music performances will rise again in value. And I can see that happening. But I recall that when I first started looking at uses of AI in music last year, one musician flagged the use of in-concert AutoTune as an abomination! And universities are looking at using AI to ‘improve’ real-time AutoTune. So even a live performance might not be AI-free … but I’m sure many music lovers would still prefer it to lists of AI-generated songs. It’s great that you are supporting live music and artists that way!
(6) As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
I try not to use the tools. Whether or not I use them, I do NOT believe or feel that the process of training AI is appropriate or transparent. Sadly, the pattern of using others' work without consent, credit, or compensation is spreading.
An example of this is the large number of posts on Substack (and other platforms I presume) where people simply post pictures or art they have found without any added value or information. It is actually rare that credit for a photo is given. This is flat out theft. The more it happens, the more it will continue to happen.
I agree. I try to make sure I give credit on every image I use, and it doesn’t take that much time. I don’t get why people don’t give credit.
If you’ve worked with building an AI-based tool or system, what can you share about where the data came from and how it was obtained?
The closest I can come is a neighbor who works in robot science at the local university. Much as I love him, I am frequently furious with him for failing to see the repercussions of science and technology. He had absolutely no concern about robots taking over the warehouse jobs around here, saying “They are bad jobs anyway” and continually professing that “Science will fix the problem” no matter the extent of the problem.
It sounds like he’s quite the optimist. The job impact you’re describing is one of the “top five AI ethics concerns” I identified in my March articles. It seems like many people are unaware of job impacts or other harms until someone close to them gets hurt. I do agree with him that science can help to fix many problems, but helping displaced people transition to new jobs is not up to science. It’s up to our society, and it’s not clear that our society is taking the scope of that challenge seriously yet.
(7) As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
I intentionally stick my head in the sand and avoid AI content, so I’m not able to provide anything useful here.
Fair enough! Staying ‘off the grid’ as much as possible has its merits, for sure.
(8) Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact.
I have likely been irresponsible, not trying to pay closer attention. This series of questions and concerns only further validates my instinct to steer clear of technology as much as possible.
Have you felt like you had a real choice about opting out, or about declining the changed T&Cs? How do you feel about how your info was handled? (Reference for this question: "Your Data is Worth Thousands—Here’s Who’s Selling It (And How to Stop It)")
Similarly, this validates my stance against using Social Media. As indicated before, I am a fairly new user of Substack and love many elements of the platform, but am getting closer to dropping out. [Note: since this interview was written, my guest has indeed left Substack.]
Yeah, the pros and cons of sharing our data and using social media are something we all have to weigh for ourselves.
For instance, I’ve decided to trade off some privacy because I’ve committed myself to speaking out about AI ethics and inclusion. But I still want to protect my privacy as much as possible. So I turned off the AI training feature in Substack; I offer my guests the option of being anonymous; and I set up a subscriber-only chat so people can ask & discuss questions that aren’t public. Those are my tradeoffs. The other part is just being vigilant about any future changes in policies and settings options … that’s ongoing.
(9) Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
I regularly receive texts and emails indicating that security walls have been breached, that personal information has been compromised. I believe some are real and some are phishing. They all terrify me, not individually, but en masse. Increasingly the world seems run by hackers and thieves.
Yeah - one previous guest pointed out (accurately) that this problem with misuse of our data predates the prevalence of AI. The risk with AI is that it makes some scams much easier and more effective. Like voice cloning - there are stories of scammers using an AI clone of someone’s voice to try to get their relatives to send money. I feel like the more we make people aware of these risks, the better they can protect themselves. So that’s part of why I do what I do, and write what I write, and why I do these interviews.
(10) Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I have no idea what they can do. I believe this whole thing is a mirror of human nature. Personally, I don't need to work less or less hard. I consider work as an honor and duty. I believe this is a horrendous example of the cyclical power of expectations. I hope that was sufficiently clear.
(11) Thank you so much for sharing your time and thoughts about AI and data. Is there anything else you’d like to share with our audience?
Good luck out there.
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that all content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊