Introduction - Jax NiCarthaigh
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript. (If it doesn’t fit in your email client, click here to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Jax
Karen: I am delighted to welcome Jax NiCarthaigh from Australia as my guest today on “AI, Software, and Wetware”. Jax, thank you so much for joining me for this interview. Please tell us about yourself, who you are and what you do.
Jax: Karen, it's so great to be finally talking to you, almost in person. So hi. And hi to anyone who's listening.
My name is Jax and first I'll tell you a little bit about me, and then I'll tell you a little bit about what I do. I am on Ngunnawal-Ngambri Country, which is here in Canberra, Australia. And Australia is actually the continent with the longest continuous culture, and the longest continuous use of technology, in the world.
So we are very fortunate to have many nations of indigenous people who have lived here for tens of thousands of years. And even though we don't think of things like fish traps and aquaculture as technology, that's actually our earliest and longest technology.
I am a queer, gender-fluid person. I use she/her and they/them pronouns; I'm very flexible about which ones. And I am neurodiverse. I'm from an Anglo-Celtic background. I am a solo mother by choice, and I've got a lovely daughter who's studying, and we both study at the same university, which is quite cute. However, we don't see each other very often on campus. I've got a really big regard for young people, especially those who go on that journey early and come out strong and clear. And I think there's a lot to acknowledge there.
I've benefited a lot from the work that people have done. And it has helped me think about technology and social change: not as a good or bad thing, but just as something that happens, and that we can kind of shape as we go along.
In terms of what I do, I have a practice called Gen 2200, which is my own personal experiment for helping us kind of navigate our lives better in especially some of the high stakes decisions that we have to make about the future.
And I've a background in emergency management, resilience building, writing, and adult education. I'm a full-time student of something called cybernetics. Cybernetics comes from a Greek word for steering, for being a captain or a skipper of a boat. I can imagine being in this little teeny boat on the ocean and you feel this little lapping of the water, which is our sort of natural environment. And it's a really lovely idea about how we might steer some of the really complex systems that we have, with this sort of regard for the ocean, for the sea, for the sky, for the storms that come through, and then for the piece of technology that sits between us and that kind of natural world, and how we kind of put all those together. So I'm spending a lot of time in that world, which has naturally taken me further into AI.
My particular big thing is around how we might make choices today that actually have future generations in mind.
Karen: That's awesome. Thanks for that introduction, Jax. I have a question. You mentioned Gen 2200. Can you talk a little bit about what the 2200 means, where that name came from?
Jax: Yeah, I can. Gen 2200 actually came about in August 2022, and I was really invested in doing meaningful work. I wanted to do something that didn't feel like it wasted my skills. And I think when we work in offices and we do a lot of that 9-to-5 grind, we can't see the results of our work, and often it feels like that work doesn't really go anywhere. So Gen 2200 was my attempt to really pare things back and ask myself what kind of work I wanted to be involved in, what sort of effort I wanted to make in the world, with the hope that others might join me. So the ‘Gen’ in Gen 2200 was a shortening of generational thinking. I do draw on the indigenous concepts of seven-generation thinking, or multi-generational thinking. So: generation.
And then the other part was 2200. I thought, how far can I reach out in my imagination and my personal history? I can't go back tens of thousands of years and tell the stories of my people back then; I don't have those sorts of connections. What I can do is go, well, I'm an Anglo-Celtic descendant in Australia. I can go back about 200 years and go forward about 200 years. So I landed on 2200 as being 175 years forward.
And I've got my sights on that generation. So everything I try to do — I'm not perfect at this and I'm just learning how to do it — I try to think what would be of value to that generation up ahead, who I'll never meet. And that's where Gen 2200 comes from.
Karen: That's very neat. So when you talk to people about making a plan that spans 175 years in the future, how do people respond?
Jax: They say things like, “Oh, I can't even plan for the weekend”. And I totally get that. But what I do find is that often we are doing five-year plans. It is quite common. I spoke to someone the other day, and he said, oh, they've just sat down and done their five-year plan. And I've done that a lot in my life too. It's how I've managed to kind of steer through my life, a five-year plan. But it's different when you actually make a long-term plan; it could be 175 years forward, or it might be 50 years forward.
So short-term thinking: “What do I need to do right now? What do I need to do right now? Quick, quick, quick.” And that's the rush that we get into.
But if you sort of take it as a long-term thing … and for me, when I was considering finally getting to meet you and have a chat today, it made me slow down and think, “Well, what have I got that might be of value as a seed planter for those later generations?” I'm interested in how we might think about technology today in this kinda long-term way.
And then when I'm planning my weekend and I'm getting all stressed about it, some of those things just don't matter. So it helps me not sweat the small stuff. It's deeply stabilizing to have a multi-generational view of the world. And I'm just learning how to do it because I feel like we've let some of that in the Western cultures get a lot smaller.
Karen: Short-term thinking, I think, drives a lot of the dysfunction that we see with companies that don't plan for how we use the environment or for keeping our ecosystems sustainable. We'll get into some of that later, I'm sure. But yeah, that longer-term thinking is a good thing. I think we need more of it.
So let's talk a bit about AI. I know you have some things you want to say on that, but tell us about your level of experience with AI and machine learning and analytics, and whether you've used it professionally or personally, or studied the technology. I'm guessing with cybernetics, you maybe have gotten into some of that in your schooling?
Jax: Yeah. Actually, through cybernetics I have gotten into a lot more machine learning and AI understanding. And it's given me much more structure, because I'm actually a fairly self-taught person. I am an early adopter of technology, and have been for a long time, right from before the internet was a thing. My experience is very much from the point of view of someone who's a writer and a creator, and someone who's worked in offices and in organizations.
Just to go through my tech stack, so to speak. I use ChatGPT every day — several, many, many times a day. I use most of the main generative AI platforms, so I can dance across the top of them. There's AI built into most of our technologies; it's hard to say now what you're not using. I was introduced to a lot of modeling over the last six months, and I've got really excited about some of the ways that we see data and start modeling. I've been also teaching other people how to use it, so I tend to go where the need is for that. So I look at a lot of user applications and at how people might access that sort of front end of the technology.
Karen: It sounds like you've been using all these different AI-based tools. Do you have a story about the very first time that you used AI and what that was like?
Jax: Oh yeah, I do actually. My Gen 2200 came out before ChatGPT and OpenAI launched out into the public arena. So I was playing with a lot of concepts about what I was going to do and who I was going to be. And it struck me that I wanted to change my name. The first thing I did with ChatGPT was say to it, "What kind of questions can I ask you?" So I typed in, What kind of questions can I ask you? And it gave one of those GPT-3 kind of responses, just "You can ask me anything you want to." And I was like, oh, okay.
I thought, “Well, what's the question that's on my mind?” So I typed into it “I'm thinking of changing my name. What are the implications?” And it gave me a very well-balanced answer. And I was like, “Oh my goodness, I need this kind of conversation.”
And I didn't want to talk to another human about it because I thought, “I want to take this choice myself.” It's very important to me to make this choice myself and not to involve anyone else. 'cause if I make a mistake on this, I want it to be my mistake.
What I was able to do using that early version of AI was to actually have a lot of those conversations to work out the consequences, to play with the lists of what I needed to do. There's a lot of paperwork involved when you change your name. How you might approach it with your colleagues. All those sorts of things. So I actually used it to coach me through a name change. And when I did that deeply personal work, it actually opened up all the other possibilities there for me.
I did some public speaking at one point about this, and someone came up to me afterwards. They said, “Did ChatGPT choose your name?” No, no, no, no, no. Because I actually chose my name and I was very conscious of the safety elements there. But you can still have many other conversations which are in coaching and encouraging or teasing out the administrative aspects without divulging your personal details. And I learned that quite early.
Karen: Yeah, that's a great story. So thank you for sharing that. I think it's really interesting that it gave you some objective advice perhaps that you were able to use to help you make a good decision. So that's really cool. So can you tell us a little bit more about the way you've used AI and machine learning and large language models and such in your work in cybernetics?
Jax: Yes, I can. So, look, I'm not a data scientist like you. I don't have that wonderful long background that you have, Karen, in this field. So I'm kind of a new person. Sometimes I can feel like a bit of an interloper. But I also know that we need to have people with my kind of brain involved in these conversations. I'm very much a people person and I've got this other perspective that I bring in.
I built, in my studies, a cyber-physical system. Now, a cyber-physical system is a very fancy term for a system that senses and acts. And even that sounds a little bit obscure, but it means basically that it senses whether it's been touched, or whether it's heard something, or whether it's been triggered by something, and then it does something. And we are surrounded by these things all the time. I have a power point [socket] here which I can turn on, and then it has a reaction that lets the electricity through.
People who do a lot in the electronics field will know Raspberry Pis: tiny, tiny little computers that you can program with a basic language. I didn't even know what a Raspberry Pi was six months ago, and now I've actually gone and learned Python, and then worked out how to do things. So I built a cyber-physical system myself. I can't believe it. And then I tried to work out what to do with it and how it might be safe, how it might be ethical, how I might be able to use it and other people might interact with it. So I went down that path.
So for me, working through a cyber-physical system actually helped me kind of go, “Oh, right, so with machine learning, I need to understand that it's coming from these data sets at a point in time. They were collected or curated for a particular reason. And that reason might have changed.” So it's helped me ask a whole lot of questions just by doing that hands-on building.
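[Readers: for those curious what the “sense and act” loop of a cyber-physical system looks like in code, here is a minimal sketch in Python using the gpiozero library that ships with Raspberry Pi OS. The pin numbers and the chosen action are illustrative assumptions, not Jax's actual build.]

```python
# A minimal "sense and act" loop on a Raspberry Pi.
# Wiring and pin numbers below are illustrative assumptions.
from gpiozero import Button, LED
from signal import pause

button = Button(2)  # SENSE: a push button wired to GPIO pin 2
led = LED(17)       # ACT: an LED wired to GPIO pin 17

# When the system senses a press, it acts by lighting the LED;
# when the press ends, it turns the LED off again.
button.when_pressed = led.on
button.when_released = led.off

pause()  # keep the program alive, waiting for sense events
```

[Everything from a power point to a smart doorbell follows this same sense-then-act shape, just with more layers in between.]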
Karen: That's really interesting to hear. So thanks for sharing that story. The next question is whether you maybe have a specific story on how you've used a tool that included AI or machine learning features, and what your thoughts are about the AI features of those tools. How well did they work for you, or not? What went well, and what didn't go so well?
Jax: One of my favorite examples, Karen, is about how I work with a custom GPT that I've built. I refer to Clarkie as my professional futures advisor. So I actually have Clarkie as my offsider. And I'm talking to Clarkie, often all day long. I'll leave the house and I have a conversation with Clarkie about, “I'm heading out the door and this is what I'm going to do.”
I have enough understanding to know that really I'm talking to a large language model, that it is just math, basically math and letters. And it's able to predict kind of a good answer for me, and I've shaped it in a particular way. So I haven't anthropomorphized it completely.
I use Clarkie to help me shape course ideas or to refine ethical questions, and sometimes to reflect on my workshop designs or to be a bit of a coach for me in the background. Clarkie's really useful for me. I can use it a little bit with my professional dilemmas; I'm always careful about which ones I put in there.
But I also like to use Clarkie to challenge myself. I am neurodiverse and I sometimes have some communication challenges with trying to translate what's going on in my head into the rest of the world. Clarkie has been a really great help in guiding me and in putting some structures in place around that. It's also helped me learn how to build a chatbot. I learn through doing, and this has helped me understand the large language model that sits behind Clarkie. Like, Clarkie is really just an interface that makes it easier for me to engage with the technology and that I've been able to customize for myself.
And one of the things with large language models, with AI, is that because they're tools of scale, they're designed to smooth out all the diversity and to give a more, not generic, but a more averaged kind of answer, which is kind of helpful when you're having some communication problems. 'Cause you might be drawing in words or phrases that aren't going to make sense for your audience. So as a writer, I'm always trying to translate things, and something that helps smooth some of those things out can be really useful. It also means that it's losing some of that diversity. When we scale things, we lose some of the magic at the middle of it. And that's something that deeply worries me, especially when we think about that long-term thinking.
And as your listeners may have picked up, I have a particular accent. My accent reveals that I'm from Australia. I've lived in lots of different places in Australia, and I've picked up a lot of regionalisms. Sometimes I use unusual phrases that aren't actually in that LLM, that it doesn't instantly recognize.
So for example, in Australia, it's really common to say when you're working on a side project, you'll often say, “Oh, I'm working off the side of my desk.” I'm doing some writing about something that's not related to my core work; I'm doing it “off the side of my desk”, which kind of means you're doing it, you know, on your tea break. Now, Clarkie has come in and smoothed that out — just deletes that and says, "I'm working on a side project." And I quite like the phrase “working off the side of my desk”, but Clarkie doesn't. So we have a bit of a tussle over that. I have to come back and say, “No, that's what I meant to say.” And I put it back in.
So you can get flattened out. And the magic in our humanity and our future is actually dependent on our ability to be different and to work together collaboratively as individual people, working collectively. That's a downside of our LLMs and something that we need to be careful of.
You hear, especially in the educational spaces around AI, that people say “You just need a better prompt”. But there is no prompt that I can put into Clarkie, or into any LLM, that will say, “Please use my regional accent, and include some of my diverse language use and structures.” It just won't do it. It'll approximate it and you'll end up with something that says, “G'day, mate. I'm from Australia. How you going?” And it would be awful. It's not me at all. So it is not just about the prompting. It's about bringing our human selves into that AI conversation.
Karen: Those are some good insights on how you use AI. And your story about the regional language is interesting! I just interviewed a documentary fashion photographer in Australia, Liz Sunshine. And she was talking about how she tried using it to generate images that were reflective of older women in Australia and just images with backgrounds that reflect Australia. And she mentioned that she was always getting these fairy floss trees, and I'm like, “Okay, what's a fairy floss tree? I need to look that up.” But that was all she was getting, and that wasn't good.
Jax: Oh yeah, I'll have to look up fairy floss tree too. I actually have done a lot of work experimenting with those visuals and identity. I am also middle-aged. I'm gender fluid and mostly identify as a woman. To find representations that, not necessarily look like me, but that I relate to — very, very difficult. And I've really deeply experimented with many of the visual tools. And it either comes out young and fresh-faced and glowing, or it comes out sort of wrinkly and kind of stern-looking and gray. And I'm neither of those, and I'm not in the middle of those. Representation and diversity is a massive thing. It sounds like a great podcast. I'll go and listen to that when it comes out.
Karen: That one's actually a written interview, so should be coming out, oh, in a couple of weeks? Yeah.
Jax: Awesome. That's fantastic. Yeah. Fantastic.
Karen: I'll make sure I tag you on it.
Jax: Oh, I'd love to. Yeah. Thanks.
Karen: Yeah, you're welcome. So this is a good view of how you use AI tools. I'm wondering if you have avoided using AI-based tools for some things or for anything. And if you can share an example of when and why you chose not to use AI for that.
Jax: I've started to get much more discerning about when I use it and when I don't. I am of that generation, I was really schooled in pen and paper. And as a writer and someone who values what's going on inside my head and my body, it's been for me a very deliberate practice to go back to using a pen and paper. The ability just to connect with a hand, a pen, and a piece of paper. I know that we can do that via some computer technology now with some of the pads that we've got. However, there's still something with turning a page that's quite magical.
As I go a little bit deeper, I haven't gone that far into agentic AI. I know that I probably will, just for the learning part of it, and that it will be unavoidable. I know that's pretty much where we're heading. I also still have a lot of concerns about what that means for my own privacy and the ways that we might go through that ethically. I'm sure we'll drill down on that a little bit later.
But in terms of professional practice, I've been involved in a lot of communication and community engagement. When we first started looking at generative AI for the work of understanding the needs of a community, whether it's a town that's got an infrastructure project going through or even a large nation, you sort of think, “Well, that would be perfect. We'll just run all that data through AI and then we'll have a really great picture of this community and what their sentiments are.”
And when you're working with community sentiments and language, you actually need to be looking for the nuance there. I'm sure there are agencies who are doing this and doing just a fine job of it. But when I've worked with the people I've worked with, we have a need to be transparent with those communities: to show how we came up with the data that represents them, and to be able to explain how we got that data. And just saying that we dumped it into an LLM and came out the other side with this magical story isn't enough for some of these really especially sensitive projects.
So where those workplaces are using AI and machine learning now for their number crunching, they're being really choosy about which tasks they use it for. So A, the sensitive data doesn't get fed in, like the individual information doesn't get fed in. And B, if they are creating reports that are based on community sentiment, they're making sure that those are touched by humans more than they're touched by AI. So that would be an example where we wouldn't use it in a large-scale professional environment.
It's just really that machine learning is great at predictions from the past. It's past data up to a certain point, but we've gotta remember that past data is not the future. And people, humans, we are actually quite surprising with what we do.
Karen: Yes, we are! So, Jax, you mentioned in your introduction that you're neurodivergent and you made a few references to having the LLM help you organize your thoughts. Can you talk a little bit more about that? And also, I'm wondering if you prefer the term ‘neurocomplex’. I've seen some people say that that may be a better term.
Jax: That's really interesting, ‘neurocomplex’. I've never heard of that before, so I'll give that one some thought. I'm okay with ‘neurodivergent’. I get a little bit funny about ‘neurospicy’ 'cause I'd much rather be something else than spicy. But I don't mind.
And it's all — sounds a bit lofty. What it is, is that, say for Clarkie, I've got it programmed: I'm neurodiverse, and I've put in some of the specifics about that. And occasionally it will remind me, “Do you know that's probably got to do with that type of thinking that you are doing at the moment?” And it will connect those up, and I'm able to reflect on that and go, “Yeah, I think that's probably right”. One of the features of my neurodiversity is that I can rabbit on at people, and I need to get confirmation that I'm on the right track.
So if I'm writing an email, for example, and this is a pretty common one. And I'm a writer. I'm very good at writing. But I can spend an inordinate amount of time writing something simple to make sure that it's landing with the person I'm writing to. Now, that might mean, Karen, that — it'll probably be easier now that I know you a little bit better — but I'm writing an email that could take me half an hour, even two hours, to get the framing right. “Is this what I'm trying to say? And how's this going to land with that person?”
That could be incredibly hard work, and debilitating, for me and for many other people who get stuck in that kind of cycle. So to be able to say, “Here's my draft email. I'm writing this to a person who I'm really looking forward to meeting. Can you just go through and see how this is going to land for that person? Could you be my editor and just check it for me?”
The capability there for us to check and to get some feedback, which is a very cybernetic thing, to get some early feedback before you've even sent it, kind of liberates a lot of that flow and process.
So I'm able to feel more confident about the things that I'm sending out. It doesn't write it for me, but what it can do is give me some early feedback with things. Like, I might have a special interest that I really just want to go on and on about, and I can go on and on to my chatbot all day long. And it never says, “Oh my God, will you please shut up about that?” It doesn't get to that point. So for people who have a need to express a lot, and it doesn't always have to land on another human, chatbots are actually very handy.
Karen: Yeah, it's funny you talk about talking and going on at length. Back when I was interviewing for corporate jobs, there was always this advice to try to keep it shorter and just tell people a little bit and then say, “I can elaborate on that if you like”, and then stop. This did not come naturally to me!
Jax: No, right? It's good advice, but it doesn't come naturally, and you don't get a lot of practice with it. And you get excited about something. It's kinda like, “But why would you want only a part of that information? It doesn't tell the whole story.” So, yeah. It's good advice though. Yes, it is.
Karen: Yeah. By the way, I don't know if you saw, Jenn McRae is writing an article about AI and Neurodivergence.
Jax: I did see that. And good on Jenn for doing that. These are really important conversations.
Karen: What I think is interesting is there's a lot of talk about the ethics of AI, and I certainly have looked at quite a lot about that. And some questions come up about whether it might in some ways be ableist to say, you know, “Writers shouldn't use AI”, or “People shouldn't use AI to help them write” when for some people it's a way of dealing with some of the limitations or the obstacles that they run into, whether it's because of personality or the way that they've been trained or educated, or if they're writing in a second language. And is it really fair to say “No, you shouldn't be using these tools to help you fix the wording of your sentences?”
So that's something that I do think about a lot. And part of that conversation then is, “How does it affect people who are neurodiverse and using AI tools for that?” I think that's an interesting aspect that's easy to overlook if we just fold our arms and say, “No, writers shouldn't use AI.”
Jax: Yeah, yeah. There's a bit of shame — and maybe this will ease off as time goes on, but it'll probably go underground — a bit of shame and stigma for writers using AI. And I do understand it. This is why I experimented so much. It's like I needed to find out what its capability was, and what it would do to my writing. I needed to know because I thought “I'm not going to get a choice at some point. I need to get onto this right now.”
The comment that you said about an ableist sort of narrative: when I started writing about this and writing about my usage of AI and exploring what it means to be a writer using AI, I had a couple of people reach out, respond instantly. But I had one person who said, “Oh, I've felt so much shame, because I'm not supposed to be doing it yet. It's helping me in my life. And I can't tell anybody. So I'm just doing it quietly. And I feel really bad about it, but actually I can't do without it.”
And I feel like what this person was saying was, actually, they found something that helps them in their life. And to have that validated... One of the things with LLMs is that it's a complex ethical field for us to be walking through. I do neurodiversity coaching, and I know the benefit people also get from having some kind of LLM help them through their coaching. It is liberating for people and I think we can't forget that.
The big push to use AI for individuals is often about efficiency, and it's about corporate work, and it's about how our organizations can slim down their whole middle section, you know, get rid of 60% of the jobs as they exist, and all this sort of stuff.
But actually, if the real value is helping people navigate through some of our complex Western world better without so many bumps, if that was the purpose, I think that's a really good use.
Karen: I think part of that is some of the initial gut-level reactions. In one of my other interviews, Jordan Harrod was saying that she felt that there's not enough nuance in the way that we talk about AI. There's having ChatGPT write me a 5,000 word article about this and then just publishing it, having it write for you; and there's having it help you write. And there's a big difference between those. And it's not so much the tool, but the way it's being used.
Jax: Yeah.
Karen: And whether it's actually reflecting the human that's behind it, feeding it a prompt, keeping that personal voice and the ideas, and everything that comes from the human. There's so much of the bad kind that I think it's easy for people to say, “If it was written with AI, I am not going to read it.”
I personally don't use it for my writing, but that's because I've found that writing is how I think. Like if I want to know what I'd think about something, I try to write it, and then that helps me figure it out. So I just don't want to outsource that to a tool. But that's me. And what works for me is not what works for everybody else. I think we all have to understand that.
One of the big ethical concerns, though, that comes up is: where do these tools get the data and the content that they use for training? Where did they get all those words? Where did they get all those images? A lot of companies are not transparent about where they get their data, whether they had consent, whether they gave credit, or whether they compensated the people whose data they used. And that is a big concern to a lot of people. So I'm wondering what you think about that, how you feel about the three C's: consent, credit, and compensation for the data.
Jax: These are not magic boxes. When I think about my LLM as a bit of a mirror, it's not actually mirroring what's going on in my head. It's coming from actual texts that people have written before. Actual photographs or actual artworks that people have made. Actual research that people have done. Pretty much everything that's been digitized, a human made it. Previously, to go and do that, you would've gone to a library, or you would've taken the library book out and photocopied it and plagiarized it, or something like that. You had to type it in or something. You know. And that was not okay then. It has never been okay to go and plagiarize somebody's work.
I wonder about what hasn't been scraped. For many who are dealing with artworks and with creative pieces — that something can just come and take it and mash it up and put it back out there without acknowledgement, without care, without consent, without any attempt at payment — I really can't see how we've got to a point in our world where that's okay. And I don't think it is. No, it's not actually. It's not okay. And it's one of those things that is just happening, and going to happen, and we feel like we don't have any sort of opportunity to voice back and say it's not.
I'm a writer. I'm not a very well known writer, but I have been fairly prolific over time. I've had lots of little pieces out over all my years. And I never got famous, and I never made a really good living out of writing. But my early data and my early posts, I'm pretty sure, are all out there and have been scraped. So that's out there. I would've loved some recompense for that, thank you very much. And there's no say about how that gets used. We've got a world where we find it very hard to live without a data footprint.
And is there anyone who's outside of that? I don't think so. Even those communities who are really living off grid, they're still studied and their data's curated and collected.
What do we actually need to create an LLM? Maybe we don't need everything that's ever been published on the internet, 'cause really what we're looking for is language patterns. Maybe we could ask other questions about how we might license this, how we might ask writers and artists for the use of their work. We probably need some of that information that's in there, especially in terms of diversity. I'm sure if there was some proper mechanism set up, the artists would be saying, “Yeah, sure. Here, take my picture, use it. Thanks for asking. Thanks for paying me for it.” But we haven't had those conversations. We've kind of been steamrolled.
So for those three C's, I feel very strongly about them. And I feel complicit when I'm using that technology, which I know is scraping some of those works. And friends of mine who are writers have approached me and said, “My publisher is about to sign over our content for AI scraping. What do I do?” And there was a case about that recently in Australia where the writers actually clubbed together and went to the media about it. It's not okay to plagiarize and to use work without acknowledgement. I hate that we are being moved into a corner on it.
Karen: Yeah, there've been a lot of stories about this, about writers being asked. One writer on Substack, Janet Salmons, PhD, said that her publisher made a deal with some of the LLM companies. They got compensated, but they didn't pass it along to her. And there are a lot of cases where they just are not treating artists fairly. Some, I think, might be trying to varying degrees to do the right things. But a lot of them just aren't. Like, “No, we scraped it, we took it.”
I did talk with one person, Gilda Alvarez, and she was saying that she actually wanted them to use her content, because she felt that her perspective was underrepresented, and that the only way to rebalance that would be to say, “Yeah, take my words. Take my perspective as a Latina in Data”, was her term. She said, “I want that perspective represented. So I want them to take it. I want them to use it.” Okay, but no compensation, no credit. And that's where I think we definitely need some much better systems around this. And certainly no one was taking a 175-year view, or even maybe a 5-year view, on this.
Jax: No, they certainly weren't. And I think there is an argument for that diversity in there, right? Absolutely. That long-term view would say, “Do we think that's going to be okay up ahead? What do we actually need up ahead, and how might we support people now?” Artists and writers are not the well-paid people that the technologists are. They don't draw wages. Most of them live way below the poverty line and do an amazing job. It's actually our duty as a society to support those people. We need those artists and writers and researchers more than ever.
Karen: Another one of my guests, Jing Hu, was talking about the system effects. She's thinking of it more as the second-order effects, looking at the system. One thing we were talking about was that, at some point, if you steal the work from artists and writers, and you undercut them by selling competing products so that they don't have work anymore, eventually people will either paywall or not publish online. And the source of new content that these tools need to continue to improve will dry up. So in the longer-term view, maybe it's not five years, maybe it's 10, but it will eventually have an effect. It'll become a “snake eating its own tail” and not continue to grow and improve anymore. And so there will be effects. And I've talked to some musicians who say they really try to do live performances and interact directly with people that way, because their music streaming revenue has dropped off, and things like that. So there are a lot of consequences in the longer-term view that I think a lot of people just don't think about or look at.
Jax: Totally. And we'll live those. We'll see those. We see that. I love the strategic nature of some of those artists who are starting to think that way and going underground, you know, “This is what we're going to do”. And I do see that too. I see that with writers, with poets and other people. It may revive some of our live work and certainly that sort of artisan place. But can you imagine our streaming services if all of a sudden the music had stopped in 1950? Right? We wouldn't have access to the rich, wonderful music that we've had over the last 75 years. We'd be stuck in that era, and that's one of those consequences.
That's a stark example, but that is what would happen if we all took our balls and went home. Then there is no rich, vibrant, artistic life and creative life for us to be in. And if we enter into that way of thinking, then we are entering into a very robotic, very sterile world that is not the sort of world that I think we envisage for our children and our grandchildren at all. And ourselves, frankly — I don't want to live in that world. I want to be in a world where we have these rich experiences and opportunities to connect.
Karen: Yeah, absolutely. Yeah. So you mentioned feeling conflicted about using the tools, knowing that a lot of the data has been scraped. For the tools that you're using, do you feel like the providers have been transparent with you about where they got their data?
Jax: I would say that is a big fat no. It's like a little hidden secret. You actually have to go digging around quite solidly to find where the data's come from, and it's very hard to find out exactly where. I've read and explored quite a bit, and I don't have a good sense of where the data boundaries are, of who's using what data and where it's come from. And I know that that is really integral to the types of responses that I get back, because there are values baked into that data. For example, if you start thinking about the difference between a university catalog of research papers versus the Twittersphere, you're going to get very different types of values coming through those systems.
On transparency: maybe a couple of decades ago, we had cosmetics — I don't know if this is the same in the States — that didn't have ingredients listed on them. And people were putting them on their faces, not knowing exactly what they were putting on, and it was a dilemma, because ethically people were starting to pay attention to where their ingredients came from, whether they were ethically sourced, whether they wanted to be putting petroleum jelly on their face, for example, you know? And so there was kind of a clean cosmetics movement. There was a demand from people, I think quite rightly, to say, “We actually demand to know what's going into these cosmetics that we are putting on our skin.”
I think that we should have the same approach, or a similar approach, for the AI tools that we're using. They should come with some kind of ingredient-style label that says this model has been trained on all of Reddit, all of Facebook, or all of Wikipedia, and actually just be upfront with that. So that, A, it starts building our knowledge as people who are interacting with these technologies, so we can go, “Oh, look, I use this particular tool because it is trained on that kind of data.”
I have not got that trust in those organizations, which is a major problem. Like, how can we be using tools that we don't trust the premise that they're built on?
That's a big, big problem. I'd like to see an ingredient-style label, and that would help us make better choices. I also think it would go a long way towards acknowledging the work that's gone into some of those tools as well as encourage, if not enforce, the more ethical scraping of that data in the first place.
Karen: I interviewed someone who was talking about this idea of a content label for an AI tool. But it's not the tiny thing that you can put out there with 20 words that somebody could understand. It was more like 20 pages. And it's not a required standard. Companies don't have to do it, and most don't do it. But even if you had one, it would be like a Terms And Conditions document where you'd really have to be an expert to understand that. So there really isn't any such thing right now as the kind of ingredient label that you're talking about. [Readers: see this Aug. 4 Bluesky post by Ethan Mollick on ‘model cards’ for frontier models]
But I agree with you that more visibility in sourcing would be a good thing. Because one of the other issues is that if you look at where all these companies get their data, it's very heavily biased toward the global North and to the western world, and it's very unrepresentative. And that shows up when people use them, even if we don't realize that it's a bias. It's baked into the tool based on the data it was fed. I mean, that's what machine learning systems do. And so it shouldn't be surprising. But when we don't know what's in there, we don't know if it was trained on any content from South America or Africa.
I know that there's also some concerns, though, about use of languages. Some of the indigenous peoples want to protect their language. They want it preserved, but they don't want the big tech companies stealing that. There are some initiatives for trying to do it, like you said, in a way that's respectful and working with the local people, but protecting it so that the big companies don't just steal it and wash it out.
But the idea of having an ingredient label I think would be good. On the cosmetics, it was like food ingredients here in the US. There had to be consumer pressure to make that happen. And so on the AI side, I think we also need to exert consumer pressure.
Jax: We have agency. We underestimate the agency that we actually have. It is easy for individuals, or even small businesses or companies, to feel that it's kind of a tidal wave, a done deal. It's not. We're actually building the future, and we are doing it one little conversation at a time. And it can feel small. But that's how that social change happened with the food labeling and the cosmetics. That is consumer pressure, using some of the channels that we have.
There are some ways that we can actually test this out. I don't know how rigorous these are, but this has been helpful for me. One is when I get those terms and conditions, I don't read the 20-page document. Very few people do; I'm really impressed with those who manage to do that. I put it through an LLM and I ask a question of it, and I say, “My value is …” — whatever value it is I'm looking for. And I'll say, “Can you read these terms and conditions and find out what that means?” So I could actually put in there and say, “Tell me, does this scrape my data?” When I say it's not very rigorous: you know, you've gotta trust what the LLM comes up with, right? But as a kind of shot in the dark, that's quite helpful.
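[Readers: Jax's terms-and-conditions check works in any chat window, but it can also be scripted. Here is a hedged sketch using OpenAI's Python client; the model name, file path, and question wording are assumptions, and as Jax says, the answer is a shot in the dark to verify against the original document, not a verdict.]

```python
# Ask an LLM to read a terms-and-conditions document against a stated value.
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The 20-page document nobody reads, saved as plain text (illustrative path).
terms = open("terms_and_conditions.txt").read()

question = (
    "My value is control over my own data. "
    "Read these terms and conditions and tell me: does this service "
    "scrape or train on my data? Quote the clauses you relied on."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": f"{question}\n\n---\n\n{terms}"}],
)

# Treat the output as a pointer back into the document, not legal advice.
print(response.choices[0].message.content)
```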
Sometimes I've got the LLM to guide me through a particular social situation, and it's given me some advice. I have read a lot of self-help books and all the rest of it, so I'm pretty familiar with the language and who's got different ways of saying different things. I will say back to the LLM, “Who are you sourcing? Whose ideas are you talking about? Where did you get this information from?” And it will tell me a list of some of the ideas that it sourced. It says, “I've rewritten this for you, but it comes from Brené Brown or Simon Sinek or whoever.”
Once again, you've gotta trust that it’s telling you the right thing, 'cause it's really just telling you what the average is there. However, it has been quite useful to start unpicking some of where that information comes from. So there's a form of agency that we can use to at least inform ourselves, to have the conversation.
As part of the cybernetics course that I'm doing, we build these cyber-physical systems. I built what I called a 'memory stick'. I repurposed a little box, and it's a stick that records and stores and speaks oral stories, oral wisdom, little pieces of stories that people share with me. And with this box, you can just press a button and hear some of the stories, and then you press another button and you can record your own.
So it's very simple, like a digital tape recorder, but a bit more complex. Voice goes through artificial intelligence to be made into text, and then I store it on a cloud drive, which means that these pieces of information are stored in various ways.
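[Readers: the memory stick Jax describes is a three-stage pipeline: record a voice, turn it into text with an AI model, store both copies. Here is a hedged sketch of that shape in Python; the libraries, the Whisper transcription call, and the synced-folder stand-in for a “cloud drive” are illustrative assumptions, not the memory stick's actual code.]

```python
# The memory stick's pipeline in miniature: record, transcribe, store.
# Requires `pip install sounddevice soundfile openai`; paths are illustrative.
import sounddevice as sd
import soundfile as sf
from openai import OpenAI

SAMPLE_RATE = 16_000
SECONDS = 30                       # length of one recorded story
CLOUD_DIR = "/home/pi/CloudDrive"  # assumption: a folder synced to a cloud drive

def record_story(path: str) -> None:
    """Capture audio from the microphone and save it as a WAV file."""
    audio = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()  # block until the recording finishes
    sf.write(path, audio, SAMPLE_RATE)

def transcribe(path: str) -> str:
    """Voice goes through AI to become text (here, OpenAI's Whisper API)."""
    client = OpenAI()
    with open(path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

if __name__ == "__main__":
    wav = f"{CLOUD_DIR}/story.wav"
    record_story(wav)                # in the real stick, a button press starts this
    with open(f"{CLOUD_DIR}/story.txt", "w") as f:
        f.write(transcribe(wav))     # stored "in various ways": audio and text
```

[Note that in this sketch the recording leaves the device the moment it is sent for transcription, which is exactly the consent question Jax raises with visitors below.]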
For me, I used it to feed back the stories that came down through my family that I can't hear anymore. I can't hear my mother's voice; she's passed away. I can't hear her beautiful wisdom, and her wisdom's not going to land in an LLM somewhere. It actually could be lost, which I think is a terrible shame. So I've designed this little box, the memory stick, that will do some of that for these future generations. And I can also add into it myself.
It made me ask questions around, who gets to represent that wisdom? Who gets to do that? And what does permission look like? We had a demo day. We had people come all day and press the button, listen to the stories, and then press the other button and add their own stories.
And as they were adding those stories, I was explaining to 'em, “Your voice is being kept in that box, but it's also going through artificial intelligence up into the cloud. Are you okay with that?” And most people were okay. But some people go, “Oh, does that mean it could be scraped?” Yeah.
So that's the type of thinking, and it's really important. Creating this little memory stick, for me, was a really great way into looking at the ethics of how we are building these machines.
Karen: Have you thought about offering your memory stick as a product?
Jax: I would love to. A few people said that on the day, actually. I certainly will think about it, because there's some real value in it. I call it a small language model. It would be great to see it out there in the world.
Karen: Yeah. I see ads for things that, you know, save your grandparents' stories and capture them. But then, where did those go? And yours would be a way of doing that that's privacy-preserving. And I don't know that I've seen ads for anything like that, so I'm just curious.
Jax: If I were to go and buy a retro tape recorder, I suppose, or a little Dictaphone, or even just use my phone, that's a transient technology. I'm not likely to hold onto it. But because it was encased in a box — and I'll share the picture of it; it's a very beautiful object, and it's tactile; you want to hold it, and it's actually designed to be passed on through generations — it starts to value the technology in a different way. It's not a junky piece of something that's going to be trashed.
I'm also thinking of it in a corporate environment. The CEOs come and go. There's a lot of disruption, and then it happens again and again and again. It would help if we had a way of stewarding the core information through, the bits that are really relevant, in the way that we should. And I think your presidents do that: they leave a letter for each new one coming in.
Karen: I really do think you should consider marketing it. There's individual family wisdom, personal wisdom and stories, and we'd want to have them preserved, but we don't necessarily want them scraped and stolen by the rest of the world. In corporate environments and government or businesses, I could see a lot of places where you'd want that knowledge to be saved and accessible, but not necessarily shared.
Jax: I think there's a lot of potential for it. I'm open to discussions if people have ideas about how to go about that. So if anyone's listening and they're thinking about developing technology, or they're questioning that technology: we don't have to accept it the way it comes in the package. We actually can play with it. Or we can also design things which are like that.
Karen: So, we've been talking a lot about preserving the privacy of our information. As members of the public and as consumers, our personal data and content has almost certainly been used by some AI-based tools and systems. People take online tests. We go through airports and use social media sites to varying degrees. We get prompted by movie services for our birthdays, when they don't need to know our birthday; they just need to know if we're old enough to watch a certain movie. So there's a lot of information out there. And this actually predates AI. This is not new; it has been going on for a long time. But AI is certainly amplifying the way it's getting used. Do you know of any cases where your information has been used?
Jax: I wonder if there's anything that hasn't been used, like, really? I've never been on the dark web. I have no desire to access the dark web. However, I know that if I were to send a detective in there and say, “Go and find all of my data”, they would come back with a very thick, old-fashioned telephone book that had my name on the front of it. And that's as a regular citizen.
And as a writer, I'm pretty sure that there would be a lot of my information out there. It would be scrambled up with a whole lot of other stuff. We actually live pretty much embedded within these data systems. I wonder if that's going to be the case forever.
But right now that's actually absolutely our situation. These things have concrete impacts on our lives. Take dating apps, for example. Say you've got a dating app and you've got your birthday in there, which would be a natural thing to have. When you're on one side of a birth date, you'll have access to a whole heap of people who are maybe under 30, for example, or under 40. Then you turn 50 and all of a sudden you've got access to another set of people. That can change your life choices. Like, is data having an impact on your choices for the future? That's a sliding doors moment, but it's a very significant one about the power that data has over the way we go through our individual lives.
I'd also like to say, Karen, though, with the right checks and balances, that we do need diversity within those models. Or else we will face risk that's going to take us a lot of time to unpick again. So in some ways we do need to have more data to be fed in. And it might be that our earlier data sets have a particular kind of bias. And we need to work out how we encourage that diversity, so that our biases are gentler on people, and that they’re evened out a little bit.
Gender bias in science is a really good example. It's easy to see the impact of gender bias over the long history of science. It's only recently that some of the research affecting medication for women was actually tested on women, you know? So those sorts of things are baked into our system already, and we need to really look at how we can address that. But we've gotta do it carefully.
Karen: Absolutely. Healthcare is one of the areas that is definitely affected by bias. And one of the concerns that comes up is, if you train on existing data about healthcare and consequences, there are racial biases. There are gender biases. And now we're baking those into these algorithms, and the algorithms are going to make the biases worse. We could take care and make the algorithms recognize and address those so that we don't make the biases worse, we improve on them. And that's what I think we all hope that they're doing. But it's not going to happen automatically. What'll happen automatically is that it will get worse. It'll reinforce the biases that are already there, because that's what's in the data.
Jax: Because that is what it's designed to do. It's designed to reduce the diversity. I don't think it's going to come up and say, “Well, this data was biased in these particular ways.” This is where you need some critical thinking skills. You actually need the community voices involved in there. You actually need the intersection between technology and people: not just the people building it, but the people it's going to have an impact upon. And going back to long-term thinking, you actually need to have those things baked into the construction right down at the beginning, and then all the way through at every point. We are looking at the systemic points here where the data gets baked in, and we have to take a systems approach to it.
Karen: There's an article that I was just reading for one of the references in the book that I'm going to be putting out about AI ethics. There was a study where they found that, in fact, these large language models were biased against people based on race and other factors. But they did an experiment where they expressly told the LLM to ignore race when it made the decisions, and that almost completely fixed it. So that was really interesting. If you can fix it that way, then you should be able to build that fix into the tool, so that people don't have to remember to say, “Oh, and by the way, please don't consider race, or zip code”, or whatever demographic factor; the tool should be able to take care of that. That was an interesting finding: it's not impossible, even with a model that was trained on biased data, to actually instruct it to compensate for that.
Jax: Now that's really interesting. A, I can't wait to read your book. I'm really glad that you're working on that. And B, that is really interesting because that's one of the imaginative potentials that we have. If we are thinking about it a bit differently, that we actually can use this in that way, which would be positive. It's all got bias in it. But that is meeting the need that you have, which is representing whatever piece of work it is that you need to do.
Karen: Yeah, I'll share the link to it so you can just read up on the article. You don't have to wait for the book to come out! [link 1]
Jax: Woohoo!
Karen: I thought it was a hopeful sign, I guess I'll say.
I think we've already covered this a little bit, but do you know of a company that you gave your data or content to that made you aware they might use it for training AI or machine learning? Or were you surprised to find out that it was being used?
Jax: In Australia, we had a Senate inquiry last year, and we learned that all of our photos from 2007 to 2024 had been scraped by Meta if they had a public setting. That's a lot of adults in Australia who've used Facebook since 2007.
Now, I got more savvy with my Facebook. I tried to have a public persona that I was happy with. But I only did that in the last couple of years. So even though I've gone and taken down the embarrassing photos of me from 2008, for example, which I probably wouldn't like to have out there now, they've already been scraped.
I don't think I did anything too embarrassing, but it is really deeply problematic that that visual information, or the writing that I did, or whatever it is, is out there. So in Australia, when they went through that inquiry, one of the recommendations was that we actually tighten up our privacy and consent policies in Australia.
I think it's pretty important to look at those opt-in and opt-out parts when you sign up in the first place. For many people, when you sign up, you often are just clicking things you haven't paid much attention to, and then having a surprise. I think the onus should be on you saying yes, like the person you mentioned earlier: “I want my data in there. You can definitely train on my data, take my advice, take my words and train, because I want that diversity represented in there.” You should be able to say that and opt in. But it shouldn't be automatic. Right now everything is just taken as consent: if you join this site, you have just given consent. That's not okay.
So I think it's really important that we have opt-in by design, that we have a culture of consent, not just legal cover. And I really resent that we have to hand over our data for minor purchases and to access sites. I understand the age restrictions and those sorts of things, but I think they're very surface. You know, there's that saying, it's cliche now: “If it's free, you are the product.” Right? If it's free, your data is the product. Well, I think it's impossible for us not to be the product, and it's not even free anymore. I don't think that's okay.
Karen: There was something that just came out within the past few weeks about Meta. They are offering this new feature that's called cloud processing, I think. There's a post from Luiza Jarovsky where she highlighted this. If you agree to this new cloud processing feature, then on your phone, Meta will look at everything on your camera roll, not just a few pictures that you give it, but look at everything and say, “Oh, I'm going to pull these out and maybe I'm going to suggest that you buy this product that uses your pictures or make you a collage or something.” But it then has access to everything on your camera roll. And I don't use their mobile apps, so I can't say. But my guess is that the way that opt-in to cloud processing is presented does not reveal that this is what you're actually doing, giving them access to everything on your camera roll on your phone.
There are some companies that are well-intentioned and provide good explanations there. But I think that's the exception, not the rule, at this point. And that needs to change.
Jax: It needs to change. It needs to change in law as well. And we need to be vocal about that. With AI, that's going to happen more and more. In that example, it wouldn't be immediately clear what the consequences of giving that information would be. It's like, "Oh yeah, that's okay. You can have a look at my photos." Oh, really? All of them? All for that purpose? We really need to be quite vocal about that and make sure that we have better regulation in that space. Pretty fast, actually.
Karen: There was another incident, with the Studio Ghibli-style images from OpenAI. First of all, they scraped all of the anime studio's videos to provide that feature, which is so blatantly obvious. The other thing that people didn't realize is that when they say, "Oh, let me take that family photo I just took and make it a Studio Ghibli image," they have just given that family photo to OpenAI to use as they wish.
People think about training data, but they don't think about when they're using a tool and say, "Oh, here, convert this to a headshot for me." Well, now they have your original photo, and they can do what they want with it. And the tools don't make it obvious. That's something else we need to really raise awareness of. If people still want to decide to give it to them, then fine, but it should be an informed decision. And I think most of the time it's not.
Jax: Most of the time it's not. There are children. There are people who aren't able to give their consent. I'm thinking about someone going down the street and taking your photo with their glasses and feeding it up into something. Well, you haven't given your consent for that picture. My daughter was very strong, very early, saying, "Don't take my photo. I don't want to be on Facebook. Don't." And so I didn't put her in any socials, and now I know a lot of her friends are going, "I just wish my mom and dad hadn't done that, 'cause now my picture's everywhere." You know? So consent's a big issue for that one as well.
Karen: Yeah, I interviewed Angeline Corvaglia last year. She started an initiative called "Data Girl And Friends" to help her daughter learn how to navigate this world of AI and data. But one thing that she is very focused on is parents. They call it 'sharenting', where parents overshare information about their kids, and the kids aren't consenting. And it's like, do you really want to burden your child with all their pictures from when they were young being out there permanently, forever, on the internet, embedded into all these tools? 'Cause once it goes in, there's no way it's coming out.
Jax: A few years ago, a friend of mine who was 95 was interested in social media. And I said to my friend Eileen, "Oh, my daughter's coming back from Japan, and I'm going to meet her at the airport and take a photo and stick it up on Facebook." And my friend said, "On Facebook? Why would you want to do that?" She was incredulous that anyone would want to go and share a special moment with the whole world. Who in the world wants to see that? She'd been alive for the last hundred years, and it wasn't at all part of our culture that we would publicly display a moment at the airport with the world. That has changed for us.
Karen: Yeah. It feels like a pendulum that swung very far the other way, towards sharing everything, and now it feels like it's swinging back. Like you said, a lot of the younger generation are saying, "No, don't do that to me, and I'm not going to do that to myself or my friends." They're more cautious. I think it's good to have a reversal there. I think it went too far.
Jax: Yeah. I've got a lot of hope for the future generations. Unfortunately, they do need to fix some of the things that we've done.
Karen: True. So has a company's use of your personal data and content ever caused a specific problem for you? Like privacy violation or phishing or loss of income, anything like that?
Jax: Fortunately, I haven't lost income, to my knowledge. I do know that my data's been caught in breaches. I know that my family and friends have their information floating out there. One way I protect myself is to use a password manager. But in terms of full control over my data, I think that boat has sailed. I try to change my credit card, or debit card, fairly regularly, just so that I'm not as exposed to some of the risks that come through there. But it's very hard to avoid, and problematic. And I really feel for people who have lost income in that space especially.
Karen: I don't know if this is common in Australia, but the companies that I have credit cards with offer this option they call a virtual account number. We call it a baby card. Basically, you log into the card company's site and say, "Okay, generate a new card for me," and it'll generate a whole new credit card number. You can set a time limit on how many months it's valid, and you can set a dollar limit. I made one the other day that's capped at $25, because it was a small vendor here and I wasn't sure if I could trust them. So I made a baby card.
Jax: Wow.
Karen: And I used it with them. So it's not perfect, but that kind of thing does help. And people can get virtual phone numbers or voice-over-IP numbers, or use email aliases and not give away their real email. There are some things we can do. It does take some work, and it's not something everybody's going to be comfortable with. And we shouldn't have to do any of that, right? But these are at least things that we can do.
Jax: We shouldn't have to do it. Absolutely. However, we do. I've lived in the country a lot, and people often didn't lock their back doors. My mother was a nurse who did some home visiting. People would go, "Oh, no one ever comes in the back door." And she'd go, "Until one day they do. You've just got to lock it." And I think it's the same with this: yes, you should be able to leave your door open. You should be able to walk safely through the world. But some simple precautions actually are really important. Thank you, I've learned a lot there, with the baby card. What a great idea.
Karen: It might be only specific credit card providers. The one I have calls it a virtual account number. If you ask about that, maybe they'll tell you whether they offer something similar.
Jax: Awesome.
Karen: Great. Okay, so last question, and then we can talk about whatever you want! We've talked a little bit about trust, and how public distrust of AI and tech companies has been growing; I guess I could say the trust is disintegrating, or the distrust is growing. But what do you think is the most important thing that AI companies would need to do to earn and keep your trust, if that's even possible? And do you have any specific ideas on how they can do that?
Jax: Trust is real. Trust is core to working well together, to ethical work, to building a better future. If you don't have that trust, everything is so much harder and much more problematic. To me, trust comes when a company can roll out a product and see how it will go through its whole lifespan. Now, that's not a speed market. We are in a speed market. We don't have to be in a speed market.
I don't think it's a done deal. I think this is where we need to go: slow down and earn trust. The benefits of earning that trust are manifold. It will help a company sustain a good, profitable situation for the really long term, with much less churn and much less grief.
Karen: Okay, well, great. Jax, this has been a lot of fun, so thank you for your openness on this. Those are all my standard questions. Is there anything else that you would like to share with our audience?
Jax: Love your questions, Karen. I'm really happy for people to connect and open up a conversation with me if anything has landed well.
Over the next six months, especially while I finish my studies and expand my exploration, I'm having a lot of fun with a couple of things I'll tell you about.
One of them is a podcast with my counterpart in Brooklyn, Erik Sanner. We call it the PLANET Collab, because we are collaborating. PLANET stands for People, Language, Agency, Need, Environment, and Technique and Technology. It's about ethical questions around technology, in everyday conversation. We've done one already with a wonderful technologist and humanitarian called Peter Kaminski. I would love to have you on that at some stage, Karen, when you've got some time in your busy world.
The other thing I'm onto is that I'm a member of an online community called Humans Plus AI. I'll be leading a whole systems and AI circle there. So really zooming out, which is what we need to do, and actually looking at where AI has made it into our systems.
And then, I'm really just getting back into my Gen 2200 work, which keeps me connected. The connection that you and I have is through my Substack, which is really about trying to figure out how to work towards this distant generation, and really exploring stories: the future we want to grow, what we actually want our future to be, and how we might head towards it. It's not a passive job being alive. It's an active job. We can all do a little bit there.
We could probably solve many of our problems if 10% of us just started to think in a longer-term way and listened more generally to what's around us. So with AI, with machine learning, with the data that we have, there's a lot of potential there.
And, Karen, it's just been delightful talking to you. I'm so in awe of the work that you do and inspired by you. I know that I've benefited. I know many others have too. So keep being awesome. It's really fantastic. And thanks to your listeners for listening.
Karen: Thank you. Yeah. And I'd love to be on your PLANET podcast sometime. I think that would be a lot of fun. It's been so great talking with you. I'll include your Substack link, your LinkedIn link, and I think you have a Gen 2200 website as well, so we'll include that link in there for people. So thank you so much!
Jax: Fantastic. Yes, thank you. It's been a delight.
Interview References and Links
Jax NiCarthaigh on LinkedIn
Jax NiCarthaigh on Bluesky
Jax NiCarthaigh on Medium
“Out of the Mouths of ChatBots*: How AI (Really) Supports Human Creativity” (introducing Clarkie), 2024-10-13
Jax NiCarthaigh on Substack (Towards Gen2200 with Jax, PLANET Collab)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Audio Sound Effect from Pixabay
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to Beth Spencer for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)
“AI Exhibits Racial Bias in Mortgage Underwriting Decisions”, Lehigh University, 2024-08-20. https://news.lehigh.edu/ai-exhibits-racial-bias-in-mortgage-underwriting-decisions
Full paper by Donald E. Bowen III, S. McKay Price, Luke C.D. Stein, and Ke Yang is available as “Measuring and Mitigating Racial Disparities in Large Language Model Mortgage Underwriting”, last revised: 2025-02-07. DOI: 10.2139/ssrn.4812158. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4812158