Introduction - Aarna Sahu
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.

Interview - Aarna Sahu
I’m delighted to welcome Aarna Sahu from the USA as my guest today on “AI, Software, and Wetware”. Aarna, thank you so much for joining me today for this interview! Please tell us about yourself, who you are, and what you do.
Hi, Karen. Thank you so much for having me on the show. I'm Aarna Sahu, and I am currently a high school junior at Prospect High, which is in sunny San Jose, California. And I would say I am a first-generation student. I'm Indian American.
I've been running a podcast channel called Aarna’s News since the 6th grade, because I've been very, very interested in bridging the gender gap in STEM and encouraging my peers and other younger girls to pursue STEM. So, a little bit about that: I have around 3,000 listeners, and I have hit my 100th episode at the time of recording. So, super exciting.
And then, on the side, I am an athlete, so I've been playing track and field, field hockey. And then I recently published 2 books, and I'm working on my 3rd.
Well, that's awesome. And I love that you're taking on this STEM initiative as a teenager. And those are some great accomplishments. There are some adults that don't have that track record yet.
Thank you so much.
Yes. So tell me, what is your level of experience with AI, machine learning, or analytics - whether you've used it in your schooling, for your work on your blog or podcast, or if you've studied the technology?
Absolutely. So my level of experience with, you know, AI and ML is, I would say, pretty much beginner level, and I've dabbled with it. I haven't used it professionally, but, you know, I like using DALL-E, ChatGPT, Gemini, all of the main ones.
And I have studied a little bit of technology my sophomore year. I took a couple of these courses and certifications offered by Google. It was, like, “Intro to Generative AI”, and it talked about encoder/decoder architecture and a lot of that. And I thought it was pretty interesting, so I took a couple of those. They were free, so they were really fun.
And then in my freshman year of high school, I actually joined this research group where we researched self-organizing maps. And I thought that was really interesting because we dabbled with a little bit of machine learning.
And it was kind of the time before ChatGPT. So, me personally, I thought that AI and ML were really cool, even before this whole ChatGPT thing happened. But it was really cool to research this. And, yeah, just as a student, playing around with ChatGPT, asking it to help me with some math homework here and there, has been one of my fortes.
That sounds good. It's great to hear that you've experimented with it in a couple of different ways. So the Google certifications and SOM studies that you took: was that in any way related to your schoolwork? Or was that something that you took on your own initiative outside of school?
Yes. They were actually my own initiatives that I did outside of school. So my school is a Title I school, so a majority of our students are below the poverty line. So, I would say that our school has a lot of cool courses, but at the same time, we're still a little bit limited. So, I was just on LinkedIn, saw some of my friends from other schools posting these things, and I was like, “Wow, I really wanna join that.” So I just applied. Yeah, so I did that a couple of summers ago.
Yeah, I know that you're on LinkedIn. I didn't think it was that common for high school students to be on LinkedIn nowadays.
Yeah. A lot of us are on there.
Awesome. Getting an early start on your professional career!
Yeah. Yeah.
That's great! I'd like to ask if you can share a specific story on how you've used one of the tools that includes AI or machine learning features. I'm interested in your thoughts on how well the AI features of the tools worked for you, or didn't - you know, what went well and what didn't go so well?
Yeah. So I think I mentioned this a little bit before. But as a student, I take AP Calculus BC, AP Physics 1, and a lot of heavy STEM classes. And sometimes, you know, I can't always ask my teachers questions. My parents are always busy. And, you know, just playing around with Gemini and ChatGPT, and asking them questions like, “Hey, how do I solve this math problem? Did I solve this right?” And questions like that. I’d say that that's how I primarily use AI right now.
And, in terms of how well it worked for me, I would say that sometimes I would get wrong answers from ChatGPT. Like, if the correct solution was a “4x cubed”, it would give me a “20x to the 7th” or something. Like, I would get the wrong answers. And if I didn't know any better, I would just probably write that on my answer key and turn that in.
But then, I've learned that, instead of just asking for the answer, instead I should be using AI ethically, and I should ask, like, “How would I go about this?” instead of just copying down the answers. Because sometimes AI is wrong, and it's good to have a moral sense to figure out when it's right and when it's wrong. And, yeah, that's how I pretty much use it.
And I've heard from a lot of people - for instance, I was interviewing a medical student. And he was talking about how he and his friends that he studies with use it to help them bridge a gap in their knowledge. But if they don't really know enough about it, they won't necessarily know if they got a wrong answer.
And so you need to know a little bit, and to be able to prompt it more specifically, in order to get a useful explanation. Like, “How do I get from here to there?” if you already know what the ‘here’ and the ‘there’ are. But it doesn't necessarily tell you, “How do I tackle something?”
I've heard from other guests that using it for words can work great, but that LLMs are much worse at math than they are at other subjects. There was some talk about having a Wolfram Alpha plugin for ChatGPT that would do the math for it, and things like that. The example that you gave, is that a real one where ChatGPT gave you a very wrong answer?
Yeah. ChatGPT has given me really wrong answers. And then, my teachers, they give us the answer key, and you have to solve it yourself. So I would look back at the answer key, and I'd be like, “That is not what I'm getting at all.” So I would tell ChatGPT, “Hey, this is actually the wrong answer. The right answer is this.” And then it would go back and try to solve it. And it would still get the wrong answer. And I would just have to iterate this multiple times until it somehow got the right answer, and then see how it did it, so I could kind of mimic that myself. But, yeah, it is a real-life scenario.
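For readers who want to double-check an AI's calculus the way Aarna describes, a symbolic math library can settle the question deterministically instead of re-prompting the chatbot. Here's a minimal Python sketch using sympy; the specific polynomial is hypothetical, chosen to echo the “4x cubed vs. 20x to the 7th” example above:

```python
# Minimal sketch: verify an LLM's derivative with a computer algebra system.
# The expressions below are made up to mirror the example in the interview.
import sympy as sp

x = sp.symbols("x")
problem = x**4                    # suppose the homework asks for d/dx of x^4
correct = sp.diff(problem, x)     # sympy derives the answer symbolically: 4*x**3
llm_answer = 20 * x**7            # the kind of wrong answer an LLM might return

print(correct)                                  # 4*x**3
print(sp.simplify(correct - llm_answer) == 0)   # False -> the LLM's answer is wrong
```

Unlike a chatbot, a computer algebra system derives the result symbolically rather than predicting likely text, so this kind of check catches exactly the errors Aarna ran into.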
Is there an example you can think of where it DID give you a good answer, pretty much right away?
Yes. Last year, I took AP Computer Science A, and I was running into this one bug. It was one of our projects, at the end of the semester or something. And my dad and I actually sat down. My dad is in STEM. He's a software engineer, cloud architect, all that. And so we were sitting down and going over this one bug for hours. We had been spending so long, and we had everything. We had ChatGPT. We had my teachers, course guides, everything. And I just couldn't get it. So I asked ChatGPT, like, “How would I go about this?” And within just a couple of prompts and responses, it actually told me what was going on and how I could fix it. And it was like the greatest moment ever, because I was able to fix that bug, and my dad and I were so relieved. So while there are some cons, there are also a lot of pros with ChatGPT and AI.
Yeah. Those are great examples. Thank you.
Of course.
Yes. So I'm wondering if you have avoided using AI-based tools for some things or for anything? And if you can maybe share an example of a case when you did avoid it and why you chose to avoid it?
Yeah, so, in a lot of schools, artificial intelligence and using any type of AI tools is strictly forbidden. And, I mean, with good reason. I mean, you shouldn't really cheat your way through school. You can't just ask ChatGPT to write you an essay, and then just turn that in. You have to have your own thoughts.
But I think personally, if you don't have access to your math teacher, if you can't stay after school, if you don't have the resources to get tutoring or all of that: then using it for math questions, using it for STEM, just generally asking questions, is a really good way to fill that gap.
But what I do try to avoid, and what I do avoid, is using it for my humanities classes. Because if I'm answering a question that my teacher gives me, I want to be able to think for myself. I don't want that to be limited, because I know how addictive it might be. Once I enjoy something, and once I think something is really, really easy to use, then I'm going to get addicted to it. There's no way around it.
And it kind of comes back to social media. To be quite frank, I use Instagram a lot. And with TikTok, now that it's kind of gone [1], my social media time has been limited a little bit. But I've just been addicted to social media.
So if I were to get addicted to ChatGPT, then I wouldn't be able to think for myself. I wouldn't be able to answer any questions without the use of that app. So I try to stay away from it for my writing classes, my history classes. And yeah.
Okay, yeah, those are good examples. Could you explain a little bit for our audience what the current code of conduct on AI is in your high school? Is it strictly, you know, you cannot use it for anything that you turn in? Or what is the current policy?
Yeah. So I'm a junior, right? So I'm an upperclassman. But freshmen and sophomores have a little bit more leeway. Like, if you get caught, then you're going to get admonished. Teachers are going to be a little bit harsh on you. But then you get another chance to turn something in, with good reason.
But for some teachers at my school, I know that if even a sentence of your work is flagged as AI, then it's game over. You can't turn in anything - you get a straight zero. It actually got to the point where - I don't know if this is true - but one of my teachers caught a couple of seniors who used artificial intelligence, who had already gotten accepted into their colleges. She threatened to email some of their colleges about the AI thing.
So I would say that it's pretty strict at our school. But, I mean, it kind of depends, teacher to teacher.
So are the teachers using tools - which themselves were probably built with AI - to try to check students' work for AI or for plagiarism?
Yeah. So we use something called Canvas. When you turn in your assignments, you have to turn them in through there. And there's this thing called Turnitin - turnitin.com or .org, I don't really know. And it actually checks if you plagiarized, and if you used AI, I think. And if it hits a certain percentage, I'm pretty sure, then that means you probably used artificial intelligence. So that's how some teachers have been using AI to check if students have been using AI.
Yeah, I've heard some stories about some of those tools. I mean, any tool basically is going to have some percentage of what we call false positives and false negatives. And the false positive, in this case, would be saying that a student used an AI-based tool or had ChatGPT write it for them, when they actually didn't. I know sometimes it keys on use of certain words - like, “delve” is one that I've heard a lot about.
Yeah.
There are different keywords that they key on. But it's going to be wrong some of the time. And when it's wrong, do students have any opportunity to appeal and say, “Look, I didn't do it”? To basically help them recover from being falsely accused of using AI when they really didn't?
Yeah, I mean, I think there is. If a teacher flags you for AI, then you can definitely talk to them and try to figure out what went wrong. Like, for me, I honestly never learned punctuation formally. So I might have a comma here and there that shouldn't be there, or an em dash there. And that's kind of how ChatGPT writes as well. So I get really scared whenever I turn in my own work, because I'm like, “Uh-oh, what if my grammar looks like ChatGPT's?”
But there have been some cases at our school where some students were wrongfully flagged. And they just talked to their teachers. And then, because the teachers knew their academic integrity and they knew that the student actually cared about school, then they were able to just turn in their work.
Yeah, it's good that they have some sort of way to appeal, so that it's not insurmountable to have your work flagged as AI when it's really not.
True. Yeah. Yeah.
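To put rough numbers on why those false positives matter: even a detector with seemingly low error rates can mean that a large share of the students it flags are innocent, because most submitted work is honest. A minimal Python sketch, with made-up rates purely for illustration (not Turnitin's actual numbers):

```python
# Illustrative base-rate arithmetic for an AI detector; all rates are invented.
students = 1000               # essays submitted
cheat_rate = 0.05             # suppose 5% actually used AI
false_positive_rate = 0.02    # detector wrongly flags 2% of honest essays
true_positive_rate = 0.90     # detector catches 90% of AI-written essays

honest = students * (1 - cheat_rate)          # 950 honest essays
cheaters = students * cheat_rate              # 50 AI-written essays

false_flags = honest * false_positive_rate    # 19 students falsely accused
caught = cheaters * true_positive_rate        # 45 real cases caught

share_innocent = false_flags / (false_flags + caught)
print(f"{false_flags:.0f} of {false_flags + caught:.0f} flagged essays "
      f"({share_innocent:.0%}) would be false accusations")
```

With these invented numbers, roughly 30% of flagged essays would be false accusations - which is why an appeal path like the one Aarna describes matters so much.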
So one thing I wanted to explore with you is that one of the concerns that has come up with these tools, ChatGPT and Gemini and others that you've mentioned using, is where they get their data and the content that they use for training. In some cases, the concerns have to do with whether or not they're legally entitled to use it, or if it's basically stealing from people who have copyright on the content that was consumed by these tools.
Another concern can come up when the content is biased. For instance, if you ask for pictures of scientists, it always comes up with a white male scientist, instead of fairly representing the diversity of the world of science.
And so there are a lot of concerns around where these tools get their data, how they get it, whether it's biased, and whether the people who provided it consented, got credited, and were compensated. A lot of times, they will just scrape data from an online system. And it may have been published years before AI was a common thing, so people wouldn't have thought about their work being used for those purposes. And in other cases, book publishers and authors are having their entire books scraped and used.
So there's a lot of controversy on this, and I don't know how much of that has come onto your radar. But I'm wondering what your thoughts are about these companies that are using this data and this content for training their AI and ML systems and tools, and what your thoughts are about the ethics around that?
Yeah. So, I mean, I think that it's definitely a pretty scary topic - you know, just thinking that your own honest work could be just taken away. So I definitely think that AI tool companies should get consent from people. Because, you know, it's their own data. It's their own work. It shouldn't be breached in that sense.
And it's kind of important to know, as a user of some type of AI platform, that there are bound to be errors. Say, for example, I was trying to search for something on ChatGPT. I don't remember what I was trying to search for, but I was trying to get some credible links for it. And so I asked ChatGPT, “Where did you get these resources from?” And it gave me random links. These links didn't even work when I tried to paste them. So there are bound to be errors.
And I feel like if ChatGPT were to pull things off of the Internet, they should probably cite their sources or something, like how students do. So at least they could give credit to the people who actually wrote those.
Yeah, the made-up citations have been a lingering problem for quite a while now, and it used to be really bad. I get the sense that some of the newer tools have gotten better about being able to provide that traceability to the source. But there's still a lot of what my friend Charlotte Tarrant calls “hallu-citations”.
Absolutely.
Yeah. And the quality of the data is also really important, obviously.
Oh, yeah. So something that I just want to point out is that, say, for example, someone posts something on, like, a blog on the Internet. It's public. And I feel like if something has been publicized on the Internet, in the most untwisted way, I think that it's okay, to a certain extent, if AI platforms reference that. Because if it is publicized - like, say, for example, there's a YouTube video of someone doing this really, really cool dance move or whatever, and they publicize that, they put it on the Internet - then sure, people can watch that. People should be able to reference that.
So I think that if it's there, then that's okay. But at the same time, if people were to do anything with that, they would need to ask the person who actually publicized whatever it was.
Yeah, there are 3 main concerns that tend to come up, and some people call them the “3 C's for creative rights”. It's Consent and Credit and Compensation. And that's from a group called CIPRI, which is the Cultural Intellectual Property Rights Initiative. ChatGPT not being able to cite its sources, that fits with the Credit part of it.
The other part is for Consent and Compensation: one nuance that isn't always very clear is that something being ‘publicly available’ does not mean that it's legally ‘public domain’ and free for anybody to use however they want. In the US, at least, the two are not the same legally. People can post things publicly here, like the video of the cool dance move. That video is still copyrighted to them, even though it's free for anyone to watch. But tools like ChatGPT, or the different tools that generate video, are really not legally allowed to use that content without the creator's consent. And that's something that I think is not always fully appreciated. Some of the tech executives even assert that publicly available content is fair game to use, but it's really not.
Yeah.
There are over 30 lawsuits in the US right now trying to sort that out.
Oh, that's crazy. Yeah.
Yeah. So as a consumer or a member of the public, our personal data and content has probably been used by an AI-based tool or system somewhere. Do you know of any cases that you could share - without, obviously, disclosing any sensitive or personal information?
Yeah. So, I was actually with my friend one day, and we were hanging out. And I have the ChatGPT app on my phone, so I just randomly decided to ChatGPT myself. And it was really cool, because it actually gave me a summary of myself, my podcast, and some other initiatives that I've been a part of. And because I already have social media profiles on LinkedIn and all of that, it was really cool to see that through the lens of AI.
I think another way that my own personal data, and millions of people's personal data, has been used is through iPhones and mobile devices. Like sharing your info to the cloud, and fingerprint scanning, Face ID, and all this facial recognition, and all of that. I think that's another way that my data has probably been used by an AI-based tool or system - so that, you know, they could make it easier for me to access my phone. So I think those are some ways that my data has been used.
Do you have any concerns about that use of your data?
I do. I do have a couple of concerns because it's a little bit scary. Like, I've heard that Apple has this constant tracker on your phone. I don't know if that's true. But I've heard that there's a constant red tracker on your phone so that, like, when you pick it up, it might recognize your face or whatever. So if I was with my friends, then it's kinda weird because it would be able to detect me compared to my friends or all of that. So I think that's a little bit iffy.
Yeah, there was a story that broke on January 2nd about Apple [2]. They were settling a lawsuit, dating back to about 2014, where they had been using Siri on iPhones to collect data from people's conversations - NOT while they were on their phone using it as a phone, but just from having Siri on their phone, even if they thought they had disabled it. And they were collecting people's offline conversations and then selling that data from the conversations to advertisers, who would then try to market to those people.
And they're just now settling that lawsuit for it. Really, it's a paltry amount of money. But the fact that they were doing it - it's one thing if data is breached, or it’s an accident, or someone is careless, but that type of surveillance does not just *happen*. There's an infrastructure and there's a product feature, and it had to be intentional. Apple's always had this great reputation for privacy, but this is one case where they really did not live up to that reputation.
Absolutely. It seems pretty dystopian, like a scene out of 1984 or something.
Yeah. Have you ever read that, or watched a movie of it?
I have. I read the book for my English 2 honors class last year. It was pretty interesting, and I think it really does apply, now, more than ever.
Yeah. Definitely. It's a little scary. I probably need to go back and reread it. I read it quite a long time ago.
One other aspect is - you mentioned your phone using your data. Are there any cases where you knowingly gave a company some of your data or content, and then expected or allowed them to use it for training an AI or ML system?
Yeah. So, I'm a junior. And right now, for college admissions, you have to take the SAT or the ACT. Like, it's not optional anymore. So, you know, I've been practicing for that.
Actually, I guess that part was pretty optional, because when I take these practice tests, you can opt in or opt out of letting College Board use your data - your solutions, whether the test really was adaptive, and all of that. And I think they would train their models based on, like, my data and millions of other students' data. So they just asked me a couple of questions at the end of my test, about whether they could use my data and see how it went. And so I obliged.
Another case where I was pretty shocked, I would guess, would be - as someone who doesn't really like to read, I would just kind of skim over the terms of service of things. And I would just click “Okay. Agree. Agree. Agree.” And then, after all of these things that have been popping up about, like, AI and all of that, I've actually taken the time to go through what I'd be signing up for and consenting to. And I realized that I probably should be reading these terms of service, not just skimming them, because there are some pretty shocking things in there.
Yeah. There was an “April Fools” joke once where someone put in something about giving up your firstborn child to them, you know? Just as a joke. It was really meant to be a test to see if people actually read them.
But there was a very recent study that said that over 90% of people never read them. And I think for folks under 25 or under 30, it's more like 94%. They just never, ever read them. But, honestly, they're always written in legalese, and they’re 10 or 20 pages, and you're supposed to scroll through them in these little tiny windows on your phone. And it's just kind of ridiculous. If it's too hard to read, takes too long, and can't be understood, you can't really fault people for not reading them.
I spoke with someone who was a lawyer, but is now a consultant in data protection in Ireland. And she was telling me about a study which found that the typical person uses 96 different common tools. And if they had to read all the terms and conditions, it would take 47 hours of their time just to read them all. And that's ridiculous. Nobody's going to do that. So terms and conditions are really not a good way to truly get informed consent.
Yeah. I mean, that's crazy. Like, 40 plus hours?!
Just to read them. And then you would have to almost be an attorney. Some people I know have actually tried asking ChatGPT or Gemini or Claude to explain. Like, “Here's this terms and conditions document. It's 20 pages. Tell me what it says.” or “Tell me what it says about data retention or privacy and the right to be forgotten.” And so they're trying to use those tools to help them figure out what's important about what's in there. And so a lot of people try different things.
But sometimes even lawyers don't necessarily grasp the implications. There was another case where a law firm was looking at using an AI-based tool to help them with their legal work. This tool had terms and conditions, and with a law firm, you'd assume that they read and digested them. But what THEY didn't even realize was that, by using that tool, if they put their customers' confidential legal information into it, they would be exposing that information to the provider of the AI platform behind it. And that's a HUGE no-no for client confidentiality. But even the lawyers who were evaluating this tool didn't catch on to that. So it's just really, really hard for anybody to know. And if it was that hard for them, it's certainly going to be hard for the rest of us, right?
Absolutely. Yeah. I mean, I myself don't enjoy reading, and that's just with books. I'm not the greatest reader. So a 10- or 15-page document in size 9 Times New Roman font is not my cup of tea, and that's not really what I want to do. So I would just skim and press “Accept”. But yeah.
Yeah. That's almost nobody's cup of tea. I don't know if I know anybody who wants that!
Yeah.
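For anyone who wants to try the “ask an LLM to explain the terms” approach mentioned above, here's a minimal Python sketch using the OpenAI client library. The file name, model choice, and prompt are assumptions for illustration - and, as this conversation makes clear, the summary itself can be wrong, so treat it as a starting point for your own reading, not legal advice:

```python
# Minimal sketch: ask an LLM to summarize a terms-of-service document.
# Assumes OPENAI_API_KEY is set in the environment; the model name and
# file name are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

# Load the terms you'd otherwise skim past.
with open("terms_of_service.txt") as f:
    terms = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any capable model works
    messages=[
        {"role": "system",
         "content": "You explain legal documents in plain language."},
        {"role": "user",
         "content": "What does this document say about data retention, "
                    "sharing my data with third parties, and the right "
                    "to be forgotten?\n\n" + terms},
    ],
)
print(response.choices[0].message.content)
```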
So you mentioned, with your College Board tests, it sounds like it would have truly been optional. It was a practice test? Or it was a real test?
It was a practice test. Yeah.
And it was at the end of the test. So you got through the test and completed it, and you knew what your score and your results would be. And then they asked, “Oh, by the way, can we use your data to help tune the test”, basically?
Yeah. And, I mean, I feel like I did have a choice when opting out. There was the yes or no button. So if I really didn't want to send in any info, I feel like I was able to decline. And I think just having that option to do so is really comforting, I guess. Because sometimes other companies just take your data without even telling you, and that's just bizarre.
Yeah. Well, kudos to the College Board then. I always like to call out when I hear someone trying to do the right thing, so I'll give them credit for that.
Do you know of any times when a company's use of your personal data and content has created any specific issues for you, either privacy or phishing or losing money, anything like that?
I mean, not really for me, because I'm only a student. But something that's really interesting: when I was at that hangout with my friend, I ChatGPT’d myself, and I also searched for my family members online. And what's crazy is that I found some of their phone numbers, some of their addresses, and, like, a lot of their personal information. That information might be right or wrong, but it's still crazy that it's out there for anybody to see. That's insane, that your own personal information can just be out there for anybody to just click on. So yeah.
Yeah. And there's a lot of different ways that that data leaks out and data brokers gather it and then sell it. It's not one of the most ethical industries I've ever heard of, let's just say.
Yeah.
We're discovering, or uncovering, more and more of the cases where companies are using our data: either doing it well and with our consent, or doing it underhandedly, in a way that is basically disintegrating our trust in them as companies and as an industry. What is the one most important thing that a company would have to do to either earn, or earn back, and then keep your trust, so that you'd trust them with your data and with the things that you create?
I think the most important thing is just emphasizing that people's data should be used with their consent. If someone is really, really not comfortable with sharing something, then that shouldn't be used. But if someone's more okay with sharing it, then I think that that should be okay, so long as they give their consent. I feel like people should be given the right to give their data to companies, and that shouldn't just be taken from them or secretly pulled. And they shouldn't just be a part of a case study that they never really signed up for.
Yeah. You mentioned Instagram. There's been a lot of noise lately about Meta and some of the things that they've done with using people's data, and in ways that maybe weren't necessarily agreed to at the time that they initially signed up. So do you have any concerns about them and their use of your data?
I do have concerns. I mean, I'm going to be a little bit biased here, because for some companies, I don't really use their platforms. But Meta controls Instagram and WhatsApp, and I do use those platforms, so those are a little bit more apparent to me. It is still insane that, in some cases, they do use people's data without even telling them.
And I think it just goes to show that you can't really trust what you're using. Like, with Apple secretly listening in on your conversations. If I'm texting my friend, I don't want that info to be shared with some random person from a company. So it just makes me a little bit more cautious about what I'm doing online.
Yeah. There are a lot of initiatives - that's gotten more prominent lately, just because of all the changes and the things that we're discovering - like Signal, which encrypts your messages so that they aren't intercepted and don't get shared. And there's a lot more momentum starting to build around using those tools, instead of tools where you opt in for one thing - to let them recognize your face for sign-in, or to tag you in a photo - and then they use it for all kinds of other purposes.
One common thing I hear from people is that what they want most is transparency - for the companies to truly be upfront about what they are going to do with your data if you click “Yes”, and what happens if you say “No”: maybe you can't use this feature, but you can still use other things. A lot of times, it feels very coarse, like, “You want to use this one small feature that we want your pictures for. But if you consent, we're going to use your pictures for ANYTHING we want.” And that just doesn't seem right.
Yeah. Just doesn't seem right at all.
Yeah. So there's a lot to be aware of. We're still going to be dealing with this for quite a while. I don't think those 30 lawsuits are going to settle out anytime soon. And the tech landscape keeps evolving, which is interesting too.
There are also tools - I don't know if you've heard about this - for ‘poisoning’ images now [Nightshade and Glaze]. So you can see them and look at them - it LOOKS like a picture of a bird or whatever. But if a machine learning tool tried to use it, it wouldn't come out looking like a bird. And so people are using them to protect the things they do put online.
Of course, then it turns into a bit of an arms race where those poisoning tools get better; and the people that want to harvest the images come up with something new; and then the poisoning tools get better ... So it's a bit of a race.
But there's a lot of activity in those areas. It's, I think, really interesting from a scientific perspective just to see what's going on and what's happening with the technologies.
Yeah. That's really cool. And, I mean, I've never even heard of that. That's really cool. I'm the president of my school's Society of Women Engineers Club, and I've been running that since freshman year. And we actually had a meeting a couple months ago about data, ethics, privacy, and AI. And one of our activities was pretty low-key: we just went on our phones, went into our Settings, looked at Privacy, and saw how many apps we gave full access to.
I pulled up all of the apps that I had given access to, and the list was like this big <gesture>. It was huge. There were SO many apps that had access to all of my photos, my location. And it was insane, because a lot of us don't even realize that we're giving this up to someone, to something, without realizing how many strings are attached.
Yeah. And as a matter of principle, being opted IN by default is really concerning for a lot of reasons. And one is that, for instance, if you put this app on your phone, by default, does it assume that it has full access to all of your photos, or only the photos that you want to give it access to? Things like that. Again, this is a case where I think we need to raise our expectations as consumers and say, “No. I will only use the apps that ARE ethical, and treat me fairly, and are transparent with me.” And that's hard to do. But we aren't powerless. We do have some influence that we can wield.
Yeah.
Well, Aarna, thank you so much for joining me on this interview! It's been a lot of fun talking with you. Was there anything else that you would like to share with our audience today?
Of course. Thank you so much, Karen, for letting me be on the show. Some things that I'd like to add:
Check out Aarna’s News, my podcast channel! Little self-plug here. 🙂
And I do have a book coming out, and I'd say it's a little bit of a self-development guide. After interviewing over a hundred women in STEM, I've bundled up their advice, which is super profound, along with my own advice from doing all of those interviews, so that other people - the next generation - can listen on. And it'll come out in around March, I'd say.
And if you guys have any questions, or you want to follow me, then check out my LinkedIn. It's just my name, Aarna Sahu, and I'll be posting a lot there. Thank you so much.
Great. You're welcome. And if you could share a link for the book that's going to be coming out, we'll include that in the article as well, so people can jump right in there and get in on preorders maybe, if you're set up for that.
I will. I will try once I get the link from my publisher.
Awesome. Well, congratulations on the book and on your podcast milestones. That's awesome. And, thanks so much for joining me today, Aarna. Appreciate it!
Thank you so much, Karen.
Thank you.
Interview References and Links
Aarna Sahu on LinkedIn
Aarna Sahu’s podcast “Aarna’s News” (Apple, Spotify)
Aarna’s books (preorder link to follow for book #3)
Other articles referenced in this interview:
Glossary
“AI Fundamentals #01: What is Artificial Intelligence?”
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Audio Sound Effect from Pixabay
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊
[1] This interview was recorded on Jan. 28, 2025, at a time when the US administration’s TikTok ban had caused the app to be unavailable to users in the US. See this article for a full timeline of those events: “The US TikTok ban – a full timeline”, by Olivia Powell / Tom’s Guide, last updated 2025-02-14.
[2] “Apple to pay $95 million to settle lawsuit accusing Siri of eavesdropping”, by Michael Liedtke / Associated Press, 2025-01-02.