📜 AISW #077: Sam Illingworth, Scotland-based university professor and writer
Written interview with Scotland-based university professor and writer Sam Illingworth on his stories of using AI and how he feels about AI using people's data and content
Introduction
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack. If it doesn’t fit in your email client, click here to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview
(1) I am delighted to welcome Professor Sam Illingworth from Edinburgh, Scotland as my guest today on “AI, Software, and Wetware”! Sam, thank you so much for joining me on this interview. Please tell us about yourself, who you are, and what you do.
Hi, I’m Sam Illingworth, a university professor and poet based in Edinburgh, where I work at the intersection of education, creativity, and technology. My background is in both science and the arts, and much of my work now explores how we can use tools like AI to create more thoughtful, human-centred ways of working. I also run Slow AI, a newsletter and growing community where I encourage people to use AI differently, not as a way of doing everything faster, but as a way of pausing, reflecting, and asking better questions.
(2) What is your level of experience with AI, ML, and analytics? Have you used these technologies professionally or personally, or studied them?
I am not a developer or an engineer, but I am an experienced user, researcher, and critic of large language models. My Leverhulme-funded project is studying how students in UK universities experience AI in their learning, and I regularly write and publish on how these tools reshape education. Personally, I use AI almost daily, but always in ways that are grounded in reflection rather than productivity hype.
I know a lot of our audience members are very interested in how AI is shaping education. Can you talk a bit about your studies on that?
A lot of my current research looks at how students are actually experiencing these tools in their learning, rather than how institutions or companies imagine they should. With my Leverhulme-funded project we are collecting reflections from students across the UK about what it feels like to use large language models in their studies. Their accounts are often far more nuanced than the headlines suggest. Some students find real value in using AI as a way of clarifying or exploring ideas. Others feel excluded because they lack confidence, guidance, or even access. What this shows is that AI is already part of the learning environment, but unevenly so, and that creates issues of equity we cannot ignore.
In my own teaching I try to model reflective use. For example, I might ask AI to generate alternative framings of an assignment brief, not for students to copy, but so that we can discuss together what those framings reveal about clarity, bias, or hidden assumptions. It turns AI into a discussion partner rather than a shortcut.
That sounds like a very constructive approach! I’m also curious about how you’re using AI daily. How do AI tools impact your teaching? And do you use AI tools in your personal life as well?
Personally, I also use AI almost every day, but always with intention. Sometimes that means asking it to summarise a messy notebook so I can see patterns in my thinking. Sometimes it means prompting it for clichés so that I can avoid them in my poetry. I never fully outsource my voice, but I do use AI to create pauses that help me listen more closely to my own.
(3) Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
Reference for this question: "But I don't use AI": 8 Sets of Examples of Everyday AI, Everywhere
One story that stays with me is when I rushed out an article pitch with the help of ChatGPT. I wanted speed, not reflection. The editor rejected it immediately and asked if AI had written it. That was a turning point for me.
The problem was not the tool, it was me skipping the pause.
Since then, I have shifted to using AI as a mirror: I ask it to challenge my assumptions, reframe my questions, or highlight what I might have overlooked. The results are far richer, and it has become a core part of how I teach and write.
That’s a great insight, and so interesting that your editor detected immediately that your article pitch was AI-generated.
(4) If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Yes. I mostly avoid using AI to draft entire poems or creative work that I intend to publish under my name. My voice as a poet is hard-won, and I do not want to outsource that. What I will do is use AI as a provocateur: I ask it for bad lines, clichés, or metaphors to avoid. That helps me sharpen my own writing without losing ownership of the voice.
(5) A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get Consent from (and Credit & Compensate) people whose data they want to use for training? (the “3Cs Rule”)
Absolutely. Consent, Credit, and Compensation should be the baseline. Too often, creators discover their work has been scraped and commodified without permission. As an educator and a writer, I am deeply uncomfortable with that. Companies profit from our data; the very least they can do is treat us as partners, not raw material.
(6) As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
No. Transparency is still the exception rather than the rule. Most providers hide behind vague terms like “publicly available data” without making clear what that means. For those of us working in education, that opacity is a real problem. If students cannot trust where the knowledge comes from, they cannot trust the outputs.
(7) As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Almost certainly. I publish poems, essays, and academic articles online. It would be naive to assume none of them have been scraped into a dataset somewhere. The difficulty is we rarely know when or where. That uncertainty is part of what drove me to start Slow AI, to create a space where people can think carefully about what they share with these systems.
Your viewpoint is definitely resonating with people, because Slow AI has really taken off!
Thank you, that is kind of you to say. I started Slow AI in July, and the response has been both surprising and humbling. We now have just over 1,300 subscribers, which shows there is a real appetite for using these tools more thoughtfully.
Have you ever checked LibGen, or tried asking an LLM for something specific to your poetry or research articles, to see if your own content comes back?
I haven’t tried LibGen, but I do sometimes give ChatGPT samples of my own writing to see how closely it can echo my voice. With prose it does a fairly good job, but with poetry it still misses the mark, which in many ways is reassuring.
(8) Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML?
Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
The surprises keep coming. When Adobe changed its terms to allow customer images to train its models, it caught many artists off guard. I felt the same when I learned about social media platforms using posts to feed their engines. The opt-outs are often buried or absent, and users are left with little real choice. That lack of agency erodes trust.
Absolutely.
(9) Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
Not directly, but the broader erosion of trust has a personal cost. As a parent of young children, I am cautious about what family data I put online. The sense that even an innocent photo could end up training a model without my consent changes how I behave. It narrows what I feel free to share, which was already very little.
(10) Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
They need to slow down. That may sound strange in a culture obsessed with moving fast, but building trust takes time.
Companies must be transparent about data sources, honest about limitations, and serious about consent. Trust will not come from glossy marketing campaigns; it will come from a willingness to pause, to listen, and to involve communities in shaping how these tools develop.
(11) Anything else you’d like to share with our audience?
Yes. I want to invite people to rethink how they use AI. The temptation is to chase the next tool or the fastest workflow. But resilience comes from reflection. That is what I write about in Slow AI, my weekly newsletter. Each post offers a simple prompt or practice designed to help you use AI more thoughtfully, whether in business, study, or personal life. If you want to explore AI as a mirror rather than a machine, that is the community I am building. Everyone is welcome.
Thank you so much for sharing your thoughts on AI with me and our audience today, Sam! And best of luck to you on Slow AI - I’m looking forward to continuing to watch it grow.
Interview References and Links
Sam Illingworth on LinkedIn
Sam Illingworth on Bluesky
Sam Illingworth’s personal website
Slow AI on Substack
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”: 8 Sets of Examples of Everyday AI, Everywhere.
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References 
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)