
🗣️ AISW #039: Dr. Kristen Parrish, Malaysia-based engineer and technical storyteller (AI, Software, & Wetware interview)

An interview with Malaysia-based engineer and technical storyteller Dr. Kristen Parrish on her stories of using AI and how she feels about how AI is using people's data and content (audio; 30:02)

Introduction - Dr. Kristen Parrish

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Dr. Kristen Parrish - provided by Kristen and used with her permission

Interview - Dr. Kristen Parrish

I’m delighted to welcome Dr. Kristen Parrish from Malaysia as our next guest for “AI, Software, and Wetware”. Kristen, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.

Hi, Karen. Thanks for having me. My name is Kristen Parrish. I'm an electrical engineer on the semiconductor hardware side; have been for a while. I'm currently on kind of a sabbatical in Kuala Lumpur. We moved here for my spouse's job, so I live with my husband and my toddler here in Malaysia.

I write for IEEE Power Electronics Magazine. The Industry Pulse column covers whatever I'm interested in that quarter, looking at industry trends and how they'll affect power electronics engineers. My upcoming article is actually about EDA tools, and especially some of the open source and AI options that are maybe going to impact our field.

So you mentioned EDA, and I know that's an overloaded acronym - for instance, that can mean ‘exploratory data analysis’. For our audience, what do you mean by EDA?

Yes. Great question. It is Electronic Design Automation. So way back, I want to say in the 70s, when integrated circuits were designed, people were still doing designs by hand, on grid paper, of how to lay out transistors. They started doing them with computer-based tools, so CAD, Computer-Aided Design tools. And then simulation, circuit simulation, and things like that. And then how they interact, which is really important when you're making integrated circuits.

Very interesting. Thank you for sharing that.

So what is your personal level of experience with AI and machine learning and analytics? Have you used it professionally or personally or have you studied the technology?

Well, when I was using these kinds of things professionally, I think they weren't really calling them AI or machine learning. Definitely algorithms have been a big thing in the EDA space, and optimization tools and things like that. But only recently, I think, have those all been kind of highlighted under the AI / machine learning bucket.

I did an expat assignment in Japan, actually. And one of the things that they're really proud of in the engineering space is the genetic algorithm, which was used to design the bullet trains - which is why they have that funky kind of flowing shape.

“Figure 1. Coordinate system for train models.” in “Head shape design of Chinese 450 km/h high-speed trains based on pedigree feature parameterization”, doi.org/10.21606/iasdr.2023.231, licensed under CC-BY-NC-4.0

So yeah, algorithms and things have definitely been a part of my career and experience as an engineer.

Personally, I have definitely used AI tools. It seems like it's impossible to avoid them. Even image search, and trying to identify a plant that I have that's dying, and things like that. The default option now, I think, when I Google something, is the AI summary. So yeah, I can't avoid it in my personal life.

And you mentioned that you've seen the trains, and I guess in transportation, you've probably seen other uses of AI in that environment as well? For instance, at the airport?

Well, we lived in China, in Beijing, for three years before coming out to Kuala Lumpur. And certainly, if you live in Beijing, or in China for a day, you'll notice a lot of cameras around. And I am almost sure that a lot of that is getting fed to AI. But I don't know how easy it is to verify some of that.

But definitely a lot of AI, like facial recognition stuff, even going through the Hong Kong airport. Once I was surprised, transiting, that I didn't have to take out my passport to go between the terminals. They just scanned my face and that was a little - a little interesting. Definitely not something I consciously opted into, but they have our information. So I don't know how to opt out of that.

But yeah, definitely facial recognition and image recognition technology, especially out here. All the parking garages have scanners for the license plate. So you often don't have to key anything in. Or you just can drive in and they know your license plate and where you parked.

And actually, yeah, we went to the airport here in Kuala Lumpur. And we didn't opt into this consciously, but when we were leaving, we're like, “Oh, there's a little kiosk where you can type in your license plate number and it tells you where you parked.” Interesting, a little dystopian, but interesting. So yeah, it's everywhere.

That is interesting. Yeah, I hadn't thought about using the facial recognition once you've scanned in as a substitute for your passport, once you're inside the airport terminal. That’s interesting to hear about.

Interesting. Yes.

Yeah. And the license plate scanning, they have that somewhat here. Some of the toll beltlines will use that - they have the transponder.

Oh, yeah. That's true. Yes.

But if you don't have a transponder, they will actually take pictures of your plate and use that to try to track you down and send it to you. So if you loan someone your car, you're going to get the bill for their tolls.

I remember that. Yes, I think that's been a thing for a while. This was so interesting. I immediately had to start thinking about how they did this. Like a big multi-story garage, and they knew your plate, and they're like, “You're in this spot”. Like how… I have questions. Yeah, I want to know. So yeah, definitely a new thing for us.

Yeah, it definitely is everywhere. Can you share a specific story on how you have used a tool that includes AI or machine learning features? I'd like to hear your thoughts about how the AI features of those tools worked for you or didn't - basically what went well or what didn't go so well.

I use a lot of Google and search tools, especially when preparing articles for my column. I don't know how to turn it off. I haven't, I guess, researched in depth, but the AI summary now feels like the default whenever I Google something. And so, whether I like it or not, I'm using that. It's often not correct, has been my experience.

I have noticed myself playing with the prompts a little bit more to try to get it, not even just the AI summary, to even get Google to be better. Maybe a different podcast!

But because of the challenges with Google search, I have tried ChatGPT and Perplexity a couple of times to see if they were any better. And I found that they simplified some things. Again, I'm Googling maybe some technical terms or some specifics. I was trying to Google something about one of the EDA tools I mentioned. It’s software that helps you design printed circuit boards. And Perplexity and ChatGPT could not understand that I was looking for a software tool, even with the word ‘software’ in my search. Versus like a company that would make a printed circuit board, which is a very different thing.

So yeah, my experience with the tools that have been highlighting the large language models and generative AI has been that they are still a little bit buggy, as far as finding concrete answers. And that is, to me, the thing I'm most interested in having an AI do is get me something that is verifiable and real and accurate.

I know a lot of people also use generative AI for generating images or text, or something more squishy, where it's harder to define what is ‘good’ or what is ‘accurate’. But I'm looking for answers from AI, or quick answers, and haven't gotten to the point where I really trust what's coming out of it.

Yeah, and that sense of distrust is well-founded, from a lot of the examples that we've seen. I'm curious, when you're writing your column, obviously there are some concerns there. One is that it could end up giving you sources which aren't valid.

Yes.

And the other is - it depends on how you're using it to help you write the column. If it's just for research, that's one thing. But if you used it to help you write parts of the column, there's some risk that it could be plagiarized content and you wouldn't even realize it.

Yeah.

So I'm curious about your experience with that, and also if you ever tried any of the AI plagiarism checkers. Using AI to check AI, in other words.

Huh.

I've heard of some people doing that, and I'm curious what your experience is.

Yeah, so right now I've only really used those AI tools as kind of a Google alternative. I haven't tried to generate text for a column or any writing I've done. I don't know. I feel like I'm already kind of at the point where - even when I'm Googling articles, or looking up something on LinkedIn trying to find, like, a LinkedIn post - what I found was 15 different posts that had almost the exact same language from all these different sites.

You know, I don't write a ton. Maybe if I wrote, you know, 10 hours a day or something - I'm writing one column a quarter - but I'd like to still sound like me. And I have a distinctive style. My style is probably a little bit more casual. And I feel like if I got an AI to generate some text, it would obviously not feel like me.

So I've only been using it to research information, but not to generate output. I actually am pretty meticulous about referencing things in my column. I have a graduate school publishing background. Everything has to have a reference. And so I'm skeptical of anything that isn't referenced, or that I can't click this link to go read it for myself. And having an AI output that isn't well referenced does not help me.

And kind of as an aside, I very briefly worked on some self-driving car stuff. And this is still an ongoing conversation, because we don't have full self-driving cars yet. But I think the emotional side just isn't considered enough in tech, as much as it should be. People are less forgiving of machines for being wrong than they are of people. We had this conversation a lot. A lot of people get hurt and die in car accidents every year. But you have one autonomous vehicle that makes a wrong decision, and you've shot yourself in the foot for the next couple of years.

And that's kind of how I feel about AI. It's one thing for a person to be wrong and you can yell at them or go ask them to fix it. But what do you do when the AI is wrong? You know, “How many R's are in strawberries?”, that kind of thing. You can't even convince the AI it's wrong. That's something that's a lot harder emotionally, I think, for people to accept, than if a person was wrong.

Yeah, that makes a lot of sense. So we've drifted into my next question, which is whether or not you've avoided using AI tools for something. It sounds like you are avoiding them for use with your writing, which I think makes total sense and I fully support that.

You mentioned LinkedIn, too - seeing all these repeated posts that looked a lot the same. There was a recent study that said over 50%, maybe 54%, of LinkedIn content is AI-generated now. Because people paste a post into an LLM, they get a reply, and they just paste it in. Or people are just copying. So there's much less signal versus all the noise. It's just increasing the noise dramatically and really makes it harder.

I don't know. You can probably tell when you read something that sounds like, “Okay, yeah, that person did not write this.”

Yes. Even when researching for my column, I'll find, like, a market report, or a news report - that I think is a news report - and I'm reading it, and I'm like, “Hey, is this written by a human? This isn’t …”

I don't know. It makes me appreciate my own kind of writing style and quirks. Again, my style is probably a little casual, a little bit like getting happy hour with a coworker and talking about some business thing, more than a Bloomberg-style article. And I feel a little bit more protective of that, I guess.

And so, yeah, I have been avoiding using AI for generative stuff. I'm a power electronics engineer and I'm interested in the power and energy consumption impact of these tools. I don't necessarily see good value for anything I'm interested in doing.

Again, maybe there are applications that people see that these tools are really exciting. But for what I'm doing right now, I haven't had a big need. And it feels like the cons currently outweigh the pros. And maybe that will change, and I'm hopeful that it does.

Yeah, I think it's especially true for the generative AI tools, which are the ones that seem to be getting most of the press.

Yes.

I used an analogy to compare the whole world of AI to an iceberg. And the GenAI, and the robots that walk around and look human-like, are above the waterline. But there's so much below the waterline that is AI that we don't always think about being AI - things like Netflix recommenders, or different optimization tools, or routing in a GPS. So there's a lot of AI and machine learning in our lives that we don't always think about. And I think sometimes there's more value there, but it's also sometimes harder to see that there's really an algorithm based on machine learning under the hood that may be driving a feature.

Right. And those things too, especially, have existed for a long time and have been kind of slowly chugging along. I think we just only started lumping them into the AI bubble because AI is such a hot topic.

Yes. And there's a tendency with marketing to think they have to say “we have AI inside”, or the product won't sell, right? It's the hype you were mentioning earlier.

Right. Exactly.

So are there any other examples of times when you've avoided using AI tools, or any specific tools that you've chosen not to use?

I had Apple Intelligence turned on for a little while. I was just kind of curious to see. My life isn't so busy that I feel like I need more of an assistant on my phone. I guess I've never been a big Siri user. But it was kind of hilarious seeing not just my AI text summaries of things, but also people on the internet who were like - they got a long text about their girlfriend breaking up with them, and the AI summary was like, “relationship over” or something like that. 😆

When I think about, like, the LinkedIn posts that all sound the same - I don't need to spend less time interacting with humans. I'd like to spend more time interacting with humans. To me, the nuances of my friends and family's communications - I don't know, it's a little dystopian, having it boiled down to, like, “dog died, need to go to vet”, or something like that. Like, not a thing that I need in my life.

I have seen some very niche, cool applications of AI and machine learning. But I think right now it feels like many tech companies are trying to say, like, “We have a machine that just does everything for everyone.” And I'm like, “Not everything is for everyone.” I think they will have more impact if you're solving some of these more specific problems. But it feels like they're just trying to do too much. And it's just not, it's not for me. Maybe it's working for some people, but yeah, just not needed right now.

Yeah, there's some subset of researchers who are actually trying to solve the really, really hard problems, and to try to achieve these really grand visions. But in a lot of other cases, I think it's just hype. “We hype it. We get the money from the investors, and then we bail out.” Seeing that pattern happen, in more places than it feels like it ought to be, that's disillusioning.

Yeah. When everyone is trying to find the next Facebook, the next trillion dollar company. Well, I have a million-dollar idea that's now nothing in comparison.

It is kind of interesting how, with these companies, it seems like you have to opt out - and you have to figure out how to opt out. I don't actually know how to turn off the AI summary in Google. I haven't looked it up. Maybe I should, but it's definitely not very obvious.

A couple of weeks ago, I think, Microsoft just turned on AI scraping in Microsoft Word and maybe some other products. And I had to manually go say, “no, you can't use my data”, which felt very weird. I found out about it on LinkedIn or something. There wasn't an obvious notification from Microsoft. I don't think I got an email about it. But it just felt, I don't know, kind of weird. I turned it off. I've got a little bit of trust issues now about whether Microsoft is looking at all of my stuff.

Yeah, I don't know. Google is probably using my Gmail history to anonymously train some language model. You can turn it off, but also you can't untrain a large language model, right? So it's kind of “ship has sailed”. So yeah, it feels wrong to make that opt-out and not explicitly opt-in.

Yeah, I agree with you there. A lot of the defaulting to “opted in” feels manipulative. I don't know if manipulative is the best word, but it's deceitful, maybe.

Feels like, yeah, they know they have to trick you, otherwise you wouldn't do it. Maybe we should stop and ask ourselves some deeper questions then, before going down that road.

You talked a bit about trust already. One common and growing concern that we're seeing is where AI and ML systems get the data and the content that they use for training. And we're seeing a lot of times they'll use data that we put into online systems, like the document clouds, or that we published online, like on LinkedIn. And the companies that run these tools are not always transparent about how they intend to use our data when we sign up for the services.

So could you talk a little bit about how you feel about companies that use data and content for training their AI and ML systems and tools? We've talked about consent, but there are also questions, for some tools, of whether the companies should even be required to compensate the people whose content they used for training - especially when they then make a bunch of money on it.

Yeah, I think having seen the Mira Murati interview - the OpenAI executive who said she didn't know where some of the training data came from. Or a couple of news articles I've seen about how the world is running out of training data. There's a lot of copyrighted data. So if they're just running out, that means that they don't feel like they have the option to buy those copyrights. So that is indicative to me that something's going on. I think too, again,

if you're trying to build a machine that does everything for everyone, then it has to also eat all the data of everything that ever was.

It feels like a kind of Silicon Valley tech attitude of “break it now and ask for forgiveness later”, which maybe has worked in the past. But when you start talking about some of these ethical things or medical things or again, bringing it back to car stuff, you can't do these kinds of things and then walk it back. It's hard to rebuild trust with the public. And again, I don't know how you untrain an LLM. Feels concerning.

One of my other interview guests who works with startups was saying that, if you are starting a new company, you really want to ‘imprint’ it, is their term, from the beginning with these ethical principles and start with ethical data. Because if you start without it, fixing it later is really, really hard.

Yes. Going, again, back to that energy intensity - it's just so difficult and energy-intensive to train these LLMs that, even if you wiped it and started over, that's like a huge, huge process.

Yeah, and in terms of the environmental impact and the resources: it's one thing to spend all the energy on training, but the impact of repeatedly running these for people generating hundreds or thousands of images with too many fingers - it's just WASTE. We're just throwing away all that energy, throwing away all that water, for nothing of any value. And it's one thing if we're helping to solve cancer or finding a more efficient way to design a device.

Yeah. It feels a little like an answer looking for a problem maybe right now. There's lots of really niche, interesting problems that I think would be great to apply algorithms and machine learning to. But again, solving everything, which feels like what OpenAI wants to do, just feels like maybe too tall an order and too unspecific. We can definitely talk about specifics and benchmarking and stuff, because that's the thing I'm passionate about too.

Okay! Good to know! Yeah, let's follow up on that.

So when you've been using these AI-based tools, do you feel like the tool providers have been transparent with you about where the data they used for building their tools came from, and whether or not the original creators of that data had consented to it being used?

Yeah, no, I do not feel like that. Even before AI and OpenAI and ChatGPT and these were all big buzzwords, I think there were challenges with Google. Because they had this Google preview, before it was the AI preview, and people were like, “Well, you're just taking the data from these web pages and you're making it so people don't actually have to go to the webpage. And that messes up how everyone knows how to make money from the internet.”

And so that has been going on for a long time, it feels like. So yes, again, it's, ugh, “break things now and ask forgiveness later” and wait until regulation catches up, which is very, very slow.

Yeah, I think regulation is pretty much always going to be behind. And it's also more complicated because it's not just a single country or a single region.

Right.

The data is sourced globally, the data is being reused globally, and how do we coordinate that? It's a big thing, to figure it out. But that doesn't mean that we shouldn't be trying.

Right. Yes.

As consumers and as members of the public, our personal data and content has almost certainly been used by AI-based tools or systems. Do you know of any examples that you could share when your information may have been used?

Like I said, it feels like it's easier to figure out where it hasn't been! I feel like Apple, they're trying to be really clear, like, if you use this Apple AI thing, it doesn't send your data. We keep it on premise. Those kinds of things. They have at least attempted to.

But again, in that example I mentioned where the Hong Kong airport was able to use my face to let me get through: they have information that I didn't consciously opt into at some point, but maybe I did. Or maybe it's just, again, you sign a bunch of fine print when you fly somewhere. And that's a new add-on to every term and condition for every software I ever use, maybe.

So yeah, I think it's a lot easier to ask which companies have made me aware than which companies are using my data without informing me.

I have lots of bubble tea memberships. They just want your number, and it's free. But definitely yeah, I don't know what the number is used for. And that's on me. I think [it] means that I get spam texts or calls about new job opportunities occasionally. So yeah, it's hard to say.

But information security or data breaches and things, it feels like it's been going on longer than this recent AI bubble. And I used to have my credit card number stolen, I feel like once a year.

Oh, wow.

But it hasn't happened in a while. And I don't know if that's because information security has gotten better, or I haven't lived in the US in a while. So yeah, it's hard to feel like any of my data is really secure, I think.

Your bubble tea rewards - is that something where you have to give your phone number to be a member of the rewards program? Or can you actually opt out, and still get the rewards?

I don't know that there's a way to opt out. You can order online, or, like, you scan a QR code to order. And you have to sign in with your phone number, and you get a text message that verifies your phone number. So the number of random restaurants that have my phone number, because I wanted to order a pizza one day, is very high. It's almost like, if you were really trying to opt out, it would just be so hard that it's not worth it, which I guess is how they get you. It's too hard in this digital world. I'm just like, “I just want to pay with cash and talk to a person and order.” And it's not an option at many places.

Have you ever been surprised to find out that a company was using your information for AI?

I think the biggest example is the Microsoft Word thing, where I guess I was opted into this and I was using it. And I don't know actually how long. Was it like every time I type something in Word? Or if I opened an old Word document during that period? Where they use [it], I have no idea. But definitely was surprised that they didn't notify me before turning it on.

Yeah. So in general, how do you feel like your information's been handled in cases like this?

I have no idea, right? It's all anonymized, sure. But again, you can't go and say, “I want my data back”, right? “I want you to not use my data.”

It's kind of like giving my phone number out to every bubble tea stand here. It just feels like it's only going to get harder, unless there's maybe actual regulation. It feels like it's me as an individual resisting some of these things, and it's pretty tough. I mean, I've definitely seen articles where people are like, “Okay, this is how I'm opting out of Gmail”, and Microsoft and all these tools. It's really hard. It's the equivalent of going to a cabin in the woods, kind of.

Yes, it very much is!

So we talked a lot about trust and how our distrust of these AI and tech companies has been growing. What do you think is THE most important thing that these companies would need to do to earn and to keep your trust? And do you have specific ideas on how they could do that?

For me, the number one thing is: so, we've talked about some of the drawbacks, and maybe illicitly using people's data, and all these things. But again, I'm not convinced, especially for these LLMs, these Large Language Models, that they're doing something that's worth all of this. I don't know what would be worth it, but I just haven't seen the thing.

I think OpenAI released some benchmarks, maybe, where they're like, “This new model uses less power or is X much more accurate” or something. But those aren't independently verifiable benchmarks. Those were internal benchmarks.

To me, it's demonstrating some real value, and also being able to participate in these benchmarks and standards, which I'm sure are rapidly being developed, with all these different AI companies participating in that.

And being more transparent about what the outputs are. Because right now it just seems very fuzzy, right? It's like “We're going to solve everything - don't worry about it”. And I'm a little bit skeptical of that. Definitely like the data privacy issues we've talked about - I don't know how you walk some of those back.

And then, yeah, I think it's getting away from the sameness of some of these big AI tools - which, again, is not just subjective. Like I mentioned, I worked on the written part of the interview at a coffee shop. I was like, “Oh, yes, this is a coffee shop, like many other coffee shops, with a neon glowy sign for a selfie and lots of plants and things.” And - I don't know how to pronounce his name, I think it's Kyle Chayka - he wrote the book [“Filterworld”] about the algorithmic flattening of our world.

And like I said, it's about those LinkedIn posts that are all kind of the same, or how things just have a sameness to them. I enjoy uniqueness of the world and of humanity. And I feel like an AI flattens that. That's their main output, flattening it, and making it all kind of the same. I don't really think that's a good use of energy and all this work, but my personal opinion.

Yeah, I saw a cartoon just the other day. And it shows a machine that's an assembly line, and as input there's all different colors and shapes of bottles. And people were standing there saying, “Oh, I can't wait to see what creative things it comes up with”. And out the other side of the machine come all the same gray bottles with the same shape.

Yeah, it's optimized though!

Yes. Yeah, it is.

Going back to the genetic algorithm I mentioned - that's how they got that weird funky shape of the bullet trains. It looks like something went wrong, but that is exactly what the idea of the genetic algorithm was. They'd make something that fit some parameters, but was so wild that they would never have come up with it themselves. And yeah, it seems like we're doing the opposite with what we have now.

We've talked about the energy usage, and that we want AI and machine learning tools to be used for actually adding value to the world. What things do you think they could be used for that WOULD add value to the world?

Even like 10 or 15 years ago, I think people were talking about doing this in the electronic design automation space, to improve circuit design. How do you make sure that the circuit is laid out and will actually work when it comes out? So anything with a verifiable, measurable output, as an engineer, I'm very interested in. That seems like a good thing to use these for.

My husband started playing this video game - you can see this racing rig behind me with the steering wheel and pedals. And there's an AI trainer for it, which I'm very amused by, where it watches your driving and it's like, “You need to accelerate 10% more into this curve to improve your time”.

That's a hilariously niche problem, but I feel like it's exactly the right kind of thing. The inputs are physics, and I think this was designed by people who used to be F1 coaches or something. The data inputs are ethically sourced and controlled. It has a measurable output: “Okay, you went two seconds faster because of this, because of these results.”

And again, it's not the kind of thing where someone is saying, “We're just using all the data in the world to solve all the problems.” It's a very well-defined problem, I would say. As an engineer, I like well-defined problems because when the problem is well-defined, it's easier to find and measure a solution.

So I think that's a hilarious and very niche application, but checks all my boxes. And so I think, again, in my field of electronic design and tools, we have tools that simulate and say, “okay, this is the efficiency of this circuit”. So this to me is just like maybe an evolution of algorithms and optimizations that we've already had. Problems with measurable improvements that are benchmarkable seem like a good use.

But those aren't going to make all the headlines. So I think it is up to smaller groups, or smaller companies and things, that can go figure it out. And I'm interested to see as AI grows and there's open source tools, like open source AI, that maybe becomes a part of the solution. But yeah, I don't know that I'd see OpenAI and ChatGPT being a big part of some of these solutions.

Yeah, and I think there's also a tendency, like you were saying, it's a hammer looking for a nail.

Exactly.

And it's not always the right thing. Sometimes you just need a screwdriver.

Yes. Yeah.

Anything else you'd like to share with our audience?

Yes. As I mentioned, I'm interested in open source electronic design automation and open source AI. I'm coming up to speed on some coding tools. So there are some things I'm pursuing in parallel. But I have an upcoming article in the March edition of IEEE Power Electronics Magazine that touches on some of these topics. So if you're interested or you want to chat more, definitely drop me a line at kristen at i-triple-e dot org. I snagged that email address. 🙂

I'm interested to see what engineers and software people can do, all together in this space. There are three mega companies in EDA. And I'm interested in open source options, and especially open source AI. These three big companies are also using AI. So definitely relevant.

If anyone is interested in chatting more about this, I am definitely interested in chatting, and in coming up to speed on some coding languages. So please drop me a line at kristen at i-triple-e dot org. And yeah, definitely check out the article! I'm interested in everyone's feedback and thoughts.

Great. If you can share a link to that, we'll include it in the interview, and hopefully we'll get some people that are interested in this topic along with you.

Awesome.

Kristen, thank you so much for making time for this interview. I really had fun talking with you and learned a lot. So thank you very much.

Thanks so much, Karen. I enjoyed it too. Yeah, it's an exciting new world. So we'll see what happens and find some small, well-defined problems to solve, maybe.

That was great. Thanks!

Interview References and Links

Dr. Kristen Parrish on LinkedIn

IEEE Power Electronics Magazine (latest issue, Kristen’s articles)

Some of Kristen’s recent Industry Pulse columns:



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts.) All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!

Series Credits and References

Audio Sound Effect from Pixabay

If you enjoyed this AI6P interview, I’d love to have your support via a heart, share, restack, Note, one-time tip, or voluntary donation via paid subscription!

