6 'P's in AI Pods (AI6P)

🗣️ AISW #062: Jennifer Spykerman, USA-based tech executive

Audio interview with USA-based tech executive Jenn Spykerman on her stories of using AI and how she feels about AI using people's data and content (audio 54:54)

Introduction - Jennifer Spykerman

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Jenn Spykerman, provided by Jenn and used with her permission. All rights reserved to her.

Interview - Jennifer Spykerman

Karen: I’m delighted to welcome Jennifer Spykerman from Colorado as my guest today on “AI, Software, and Wetware”. Jenn, thank you so much for joining me today for this interview! Please tell us about yourself, who you are, and what you do.

Jenn: Yeah. Thanks Karen. Thanks so much for inviting me to join your podcast. A little bit about me. As you mentioned, I live in Denver, Colorado. I grew up in aerospace. I worked for a large aerospace company here in the Denver area. And I worked on all sorts of things that are space-related - mostly software, embedded software including flight software and guidance/nav/control, and then systems engineering.

And the nice thing about working for a large company is you can do all sorts of things. And I did. I took advantage of all of that. So I got a really good taste of all parts of the business of aerospace. And it included some AI activities for satellite systems as well. So that was really helpful, especially now that artificial intelligence and machine learning are becoming so prevalent today.

After I left there, I went to work in the tech industry for a small company called Pivotal, helping companies do large scale transformation. And so now what I do is I bring those pieces together, because I understand how large enterprises work. And I know that a lot of companies are struggling with getting value out of their AI activities and projects that they are delivering. And so that's what I'm doing today.

Karen: That sounds good. Thank you for that background. Very cool that you have worked in aerospace so much. I don't run into a lot of women who have worked in aerospace, and I had spent some time there myself working with real-time systems and software. And yeah, the mission-critical aspects tend to be underappreciated by people who haven't worked in that space.

Jenn: Definitely. Definitely, yes.

Karen: Yeah, "move fast and break things" does not work.

Jenn: Does not work. No. At least not when you get onto the satellite, that's for sure. There's a lot of money at stake, and it becomes very painful sometimes to make sure everything works just so, right?

Karen: Yeah, absolutely. So tell us a little bit about your level of experience with AI and machine learning and how you've used it professionally, or if you use it personally.

Jenn: I use AI all the time personally. Professionally, I've designed systems that use it. For example, the satellite system AI that I did was really a machine learning program to help the test bed engineers understand if there were some slight anomalies, based on patterns, that should be focused on.

Because sometimes when you're just looking at error messages or other telemetry coming off of a satellite or a spacecraft, you can become overwhelmed, and it'll all look the same after a while. And so this way, if there were some strange changes in what that telemetry was saying that were hard to pick up by a human, we could identify what that issue might be prior to launch, which is always the best time to find any issue.

So that was my initial foray into AI. And then since, I have been advising large companies, large financial institutions, and now I'm working with the US Army on how to implement AI solutions and how to demonstrate that they are delivering value to their organization.
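For technically curious readers, the kind of anomaly screening Jenn describes can be sketched in a few lines of Python. This is a minimal z-score pass over a single telemetry channel, with made-up values; the actual test bed system was certainly more sophisticated:

```python
import statistics

def flag_anomalies(readings, threshold=3.0):
    """Flag telemetry readings that deviate strongly from the channel's
    typical behavior, using a simple z-score over the sample."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # a perfectly flat channel has no outliers to flag
    return [i for i, value in enumerate(readings)
            if abs(value - mean) / stdev > threshold]

# A mostly steady channel with one subtle jump at the end.
telemetry = [10.0] * 50 + [10.1] * 49 + [14.0]
print(flag_anomalies(telemetry))  # → [99], the index of the jump
```

A threshold pass like this catches obvious spikes; the subtle pattern shifts Jenn mentions are exactly where simple statistics fall short and machine learning earns its keep.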

Karen: Yeah, that's a great perspective to have, looking for where it adds value. Because there are a lot of people that, it's almost like running around with the hammer, looking for the nail. “Here's AI, what can I do with it?”

Jenn: Exactly. And there's a lot of things to think about. Just recently I was speaking with some folks who are in the financial industry. And the individuals are very knowledgeable in artificial intelligence and what it can do for them. However, what they wanted to do was very focused on their own personal roles and not necessarily tied to what the organization's goals were. And so helping them understand, here's how you might prioritize a project that would not only meet some of the goals of your sub-organization, but you can also demonstrate that it meets the larger organization's goals for the year, around profit and loss or reducing costs or productivity or something like that.

Karen: You mentioned using AI to look at changes in the telemetry before launch. That's a really interesting thing, because sometimes you can see anomalies just by visualizing the data, but sometimes you do need the analytics to point out where the real anomalies are, or the things that you do need to be worried about.

Jenn: Oh, absolutely. Yeah. One thing: it wasn't necessarily that you wanted just to see “Where are there errors? And where might something be going wrong?”, because that's usually pretty straightforward. It's pretty easy to identify those big failures. But where are you maybe seeing some successes where you shouldn't? Or where are you seeing a change in what kind of data is being pulled off of the spacecraft that may not be detected by the test bed itself?

So little, minute things that are difficult to pick up. Sometimes you can visualize some of these sorts of pieces of information. But it was so difficult to really tease the data apart and then understand, “Okay, what am I really looking for here?”

Obviously there's a timestamp, and then there's different data about what subsystem it came from, and what the actual telemetry was, and whether or not it was within range or outside of range. But visualizing that, especially at the time, it was not a straightforward thing.

So it was nice to at least narrow down the types of things that the test engineers needed to look at. And there'd be so much. This was happening constantly. So if you're thinking of thousands and thousands of lines of telemetry, every hour, it just becomes overwhelming to look at and even visualize effectively.

Karen: Yeah. Some of the really high data rates that get used can definitely be a problem. If you fit it all on your screen, you can't even see the bumps in it that you need to be looking for.

Jenn: Yeah. Yeah.

Karen: So you mentioned that you also use AI personally “all the time”. Can you talk maybe about an example of how you use it and how it's helped you? And what works well and what doesn't work well?

Jenn: Yeah, so I use a lot of the LLMs, like many folks do, to help me think through ideas. I'm working on a position paper right now to help the Department of Defense understand how they could transition to a new way to acquire software-centric systems. And understanding how to word things and make it a little bit more interesting, I can bounce ideas off of it.

Now, I do know where it has limitations: if I am really lazy and I just say, “Can you just write this whole section?”, it usually doesn't go really well, because it starts hallucinating pretty quickly and makes up all sorts of stuff. And so you have to use a little bit more precision, and it still takes the work of identifying the research and doing some deep dives and interviewing people and all that old-fashioned stuff. But it helps at least structure thoughts and to say, “Okay, how can I make this more interesting? What other kind of story might fit in here, so I can at least set myself on a path on where to look?”

Some of the deep research tools that are out now, like ChatGPT or Perplexity, are really helpful also just to point me in the right direction. “Okay, where are some potential sources that are valid for me to use?” And the nice thing about working on stuff for the US government is there's a lot of good stuff that's out there, because they're not quite as protective of what they're doing, like private industry sometimes is. So you can find it, you just have to know where to look. And so AI definitely helps me find places to look that I hadn't thought of before.

Karen: That's good to hear about that insight. You had also mentioned using an LLM that you trained on some regulations. Do you maybe want to talk about that a little bit?

Jenn: Oh, sure. Yeah. There's this wonderful regulation called the FAR. It's the Federal Acquisition Regulation. I'm not sure how large it is, but it's pretty extensive, and there's SO much detail in there. In fact, one time I was working on a proposal a long time ago. And it was a 10,000 page proposal that we were submitting to the government. And we found out that the FAR actually dictated that we had to submit our proposal on paper that was a certain percentage recycled paper. So we had to make sure we had the right paper to submit.

And so it covers a lot. The basis for it is to allow for fairness and some transparency in how the acquisition process works in the federal government. However, it is so extensive, it becomes a little bit duplicative. And it's difficult to understand what all you need to be following in order to be compliant when you're working with the government.

So I have a little AI tool that I programmed with Claude. It sucked in all of the FAR. And so a lot of times, as I'm responding to a proposal, for example, I'll just ask Claude, “Does this fit with the FAR, or can you reference the FAR paragraphs that this would be applicable to?” And it's able to help me structure it and make sure that I am compliant. And then also “How can I write a compliant proposal?”, for example. Yeah, it's really helpful in something like that, because there's no way you'd want to go through the FAR all by yourself, that's for sure.
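As an aside, a tool like the one Jenn describes typically retrieves candidate FAR paragraphs first, then asks the model to reason over them. Here is a deliberately crude keyword-overlap sketch, with hypothetical excerpts keyed by paragraph number; a real tool would use embeddings or full-text search:

```python
def find_relevant_paragraphs(regulation, query, top_n=3):
    """Rank regulation paragraphs by how many query words they contain.
    A crude stand-in for the retrieval step such a tool performs."""
    words = set(query.lower().split())
    scored = []
    for ref, text in regulation.items():
        overlap = len(words & set(text.lower().split()))
        if overlap:
            scored.append((overlap, ref))
    scored.sort(reverse=True)
    return [ref for _, ref in scored[:top_n]]

# Hypothetical FAR excerpts (paraphrased, not verbatim regulation text).
far = {
    "15.204-1": "uniform contract format for proposal submission",
    "4.302": "paper records and document retention requirements",
    "52.204-4": "printed or copied double-sided on recycled paper",
}
print(find_relevant_paragraphs(far, "recycled paper for proposal"))
```

Even this toy version surfaces the recycled-paper clause first for a recycled-paper question; the heavy lifting in the real tool is the model's reasoning over whatever the retrieval step returns.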

Karen: Yeah, so the FAR is huge. And did you say you have a 10,000 page proposal that had to be submitted on paper? My mind is blown about that.

Jenn: I was not the one that wrote all 10,000 pages. I ran a proposal when I was in my previous role as a defense contractor. And I had a team working with me. And yes, the proposal was 10,000 pages long. And we had to submit, I can't remember how many copies of that. So it was really a lot of paper.

Karen: Oh my gosh.

Jenn: It was probably like, I remember we had some statistics, but it was like 70,000 pages or whatever it was in the end, because you had to have five or seven copies delivered. So it was a lot of boxes, a lot of bankers’ boxes.

Karen: Wow. I thought we all got away from using physical paper in a lot of cases, but I'm just amazed that they required paper for that.

Jenn: Sometimes. Sometimes they do. I think they are getting away from it these days. Everything that I've delivered in the couple years since, especially as a small business now, which is what I am, is usually all electronic.

Karen: Yeah. So you mentioned that you use Claude for this, and I think you had said earlier that you use ChatGPT. How do you decide which tools you use when?

Jenn: Yeah, it's the style. And I also use Perplexity. I spend a lot of money on all of the different tools. I like ChatGPT occasionally for some of the deep research. It has some good deep research pieces. I like Perplexity. I like the reasoning model and how it demonstrates how it's doing the reasoning. So I use Perplexity and ChatGPT a little interchangeably, especially when I'm doing research and doing an initial outline, let's say of a post, or of a large paper or something that requires some research.

I like Claude because Claude has been able to train on my personal voice a little bit better, so that when I ask it to write something specific, I feel like it is something that I would write in the same tone and with the same style. So I really like the way Claude produces the voice.

So I switch between those three pretty regularly. So I've got a little tab with them all grouped together. And I go back and forth all the time.

Karen: Yeah. Having an LLM that was trained on the FAR requirements, and then using that, seems like a really productive tactic.

Jenn: Yeah, exactly. And one thing I do like for Perplexity, I should mention, is you can turn off whether or not the data that you're sharing with it is sent back for training. So if I'm worried about where my data is going, I enjoy using Perplexity as well, because I have that little piece turned off, which is a really good feature.

I think ChatGPT has something similar. I should double check Claude, so don't quote me. And if anyone's out there and "she's wrong", I totally get it. I probably missed something. But it's a good thing to look for, especially if you're concerned about proprietary data or anything like that.

Karen: Yeah, I've heard from some people that if you have a paid ChatGPT account, that it does let you turn that off. The question then, I guess, is if you trust it.

Jenn: Right? Yes. Trust. And I think when we talk about AI in general, trust has been really the big topic. Should we trust it? Should there be regulations to manage it? And there's always that fine line of trusting something intrinsically versus making sure that companies are either A, held accountable or B, are transparent in how they're doing the data sharing or how they're training their models or how they're using your data in general.

I might add though, because I was reading this book by, what's his name? Chris Voss. He was a former FBI hostage negotiator. And I was struggling with how to have a difficult conversation and I went in, “Pretend that I'm Chris Voss - how would I respond to this?” And it pulled out everything that Chris Voss would probably say, based on what's in his book. Now, is that ethical? Since his material, I'm sure, is copyrighted in a book. I know that is something that we all as a society need to discuss. Because that IS proprietary data. That's artistic value that authors and creators need to feel protected. So there is a balance there.

Karen: Right, so instead of someone going out and buying his book and reading it for themselves, then for a tool like ChatGPT to basically offer that knowledge to anybody without compensating Chris Voss for it, there's a sense that it's not fair.

Jenn: Exactly. Exactly. Yes.

Karen: Yeah, and one of the concerns that tends to happen is, if writers are unwilling to make their content available publicly like that, then we all lose. Because then we don't get the benefit of his knowledge.

Jenn: Exactly! Yes. I mean we want Chris Voss and other authors like him to produce new ideas. And I think that's another caveat that I keep in mind as well is, when I'm creating something, I try to create it and then get ideas. But I don't want to rely on and give up too much of my creativity or thought power to any tool, right? So I want to make sure that it is mine, it is something that I would feel good about presenting at the level of quality that I approve of, based on my experience, right?

Karen: So you're using it to augment your own intelligence?

Jenn: Yes! Yes. And look for ideas and make sure that the words actually do fit together. Because sometimes, when you're in the middle of something, you're like, “Does this even make sense? Please help me make it make sense.”

Karen: Yeah, definitely. So you've talked a lot about using AI and LLMs for words. Have you ever tried any of the tools for generating images or music or any other modes?

Jenn: Yeah, I've done a lot of images. I've played around a little bit with vibe coding just as a software engineer. I have been out of actually engineering software for a hot minute. And the new project I'm working on with the Army, for example, is working with PySpark. And you can pick up Python pretty easily. So to help me refresh, “If I were to solve a problem in PySpark, how would I do that?”

Karen: So you said you have used it for images. For your writing, for your newsletter? Or just personally, for fun?

Jenn: Yeah, sometimes for fun. For my newsletter, the image there was generated by the ChatGPT DALL-E image generator. It's very handy. I also have used Adobe's AI image generator as well. They have different styles. I feel like the DALL-E, you can always kind of pick out like, “I know exactly where that one came from”. Some of the ones that are a little bit more in depth, or a little bit more nuanced, you can get from other tools.

So those are the two from an image generation perspective that I've used. Anything else that is image-related, I usually try to use my own photographs and my own representations, just because it helps me think through it anyway. And I just like to force people to look at the pictures of me, climbing mountains and stuff like that. So a lot of times I try to use my own personal pieces, because I feel like that's the most unique you can still be. But I have used some of those other ones.

Karen: This is a good overview of when you've used different AI tools. I'd like to hear about times when maybe you avoid using AI tools, and why you avoid it for those purposes, and maybe an example.

Jenn: If you're working on something that is not really commonly known. Because, right, these are general purpose models that you are relying on. So if I'm working on something that is really cutting-edge new technology, that's probably not very widely understood, I definitely avoid AI. Because in my limited experience of actually trying to use it, I normally get a lot of hallucinations.

I get hallucinations anyway, which is another reason why you should always read through exactly what you're writing, and ask the AI questions like, "Are you sure? Because I thought that this was this."

But if it is something that's really different or it's hard to find information on, I don't rely on it. I think it's tricky to do that. So that's probably the biggest area that I limit my exposure.

And same with the vibe coding tools. I am new to them, so I know that you need to have a lot of oversight in how you are using those vibe coding tools. And so if it's a simple task, then yeah, it makes a lot of sense. But if it's something a little bit more nuanced, something where I really need to make sure I don't have security problems, then I want to make sure I've got my hands in the code in depth, so that I'm not unwittingly exposing myself to something I don't want to be exposed to.

Karen: Yep. Yeah, very good points.

Jenn: Have you seen that meme where it said “I want AI to do the dishes and do my laundry so that I can do the artistic things, and I can do the creativity.” It's the same thing, right? I don't want AI to take over the things that I find personally fulfilling - the creative problem solving or creating something new. I want to make sure that I'm doing that, and the AI needs to do the boring stuff - it can help me respond to this email that I don't want to respond to right away, in my voice. Help me organize my inbox. Help me do these types of things so I can prioritize my day. That's what I want it to do. I want it to be my personal assistant that doesn't ask for time off.

Karen: There you go.

Jenn: And it also helps me a lot because I have two kids that are in high school, and I help them with math. And if you haven't done trigonometry for a long time, remembering the law of cosines, some of these tools have really helped me. Because it'll explain it in such a way that I'm like, “Oh, that's right.” So I definitely use it to speed up the process of tutoring my kids in high school math right now. It's very handy.
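For readers whose trigonometry is as rusty as Jenn jokes hers was, the law of cosines she mentions generalizes the Pythagorean theorem, and it's small enough to check yourself:

```python
import math

def third_side(a, b, gamma_degrees):
    """Law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(gamma),
    where gamma is the angle between sides a and b."""
    gamma = math.radians(gamma_degrees)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# With a 90-degree angle, the cosine term vanishes and this reduces
# to the Pythagorean theorem: the familiar 3-4-5 right triangle.
print(third_side(3, 4, 90))  # ≈ 5.0
```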

Karen: Yeah. That's really interesting to hear because some of the earlier versions of the tools were really bad at math. And there was some effort to say, “Let's just integrate Wolfram Alpha and let it do the math, because the LLM is so bad at it.” But I'm curious if you've experienced that.

Jenn: Oh yeah, absolutely. It was fun. Even on the simplest math for example, I was structuring a milestone payment plan, based on a percentage of the total contract value. And I asked it, because I was tired and I said, "Can you just break this into even percentages across these different milestones?" And it came up with something completely nonsensical. It did not follow my directions. And I decided that Claude is not a happy math tool.

So I played around with it. I also had issues with when I'd help my kids, oh, probably even a year ago in some of their math, I remember catching some mistakes that I was like, "Wait a minute, what about this?" And it'll be like, "Oh yes, of course. Yes, good catch, of course." YOU should be catching this, ChatGPT, not me.

It's gotten a lot better. But I definitely double check it and make sure that it lines up with, hopefully we have an answer key. Sometimes we don't, but I definitely double check it.

Karen: Yeah. So I'm curious, with your kids being in school, what are their school policies about students using AI?

Jenn: Oh, yes. So they are not supposed to use AI, I know, to generate papers, for example. I'm not sure if they have specific policies that they cannot use it at all. I think it's like the use of calculators a long time ago, where math teachers were against calculators, and then the use of computers. And what can and can't you do, and what are you giving up if you use some of these tools?

I think from my perspective as a parent, I want to make sure that my kids know how to write and analyze data and consolidate it, so that they know how to do that in the real world, before they just 'dump and go' on ChatGPT and have it write their paper for them.

I did one time get a note back, because I know that one of my kids was in a hurry and probably just wanted to get something over with and done. And they did use ChatGPT. It was pretty obvious that they did. And so I did get a note home from the teacher. And we had a conversation about, it's important to, even if it seems difficult, at least throw some things out onto a paper. Make sure that it's your thoughts and it's your analysis, so that you can defend it. Because ChatGPT's really good at analyzing 'To Kill a Mockingbird'. You can make it do a lot of stuff, especially with classic texts, right?

So definitely, from my perspective, I don't want them to do that. Because I think it's important. Because it helps with, not just writing papers for the sake of it, but it helps you for collating information and generating new ideas down the road. So that's my personal policy.

I haven't seen any formal policies come out from the school district, but I have a feeling that they at least don't want you to write a paper solely with AI dumped tools. Because that would probably go against their rules on plagiarism and that sort of thing. That's probably what they're thinking about.

Karen: It sounds like in that case, it was pretty obvious just from looking at it, yeah, this was not written by a human. Do you know if they use any of the AI-driven plagiarism checking tools on student submissions?

Jenn: Not that I know of. And I'm pretty involved in our school district and I haven't heard of any. I use them every once in a while because I am curious whether or not something really is AI-generated. And they're hit or miss on their accuracy.

I use it on my own to see if what I write, even if I just wrote it myself, if it shows up as being AI-generated. So far it hasn't. So I kind of test those out just to see.

But those are, I think, a little bit behind so far at really being able to detect AI. Unless you're just using the free version of ChatGPT and doing a complete dump, like one individual in my household did. So that was pretty obvious, especially if your previous writings were not to that level, right?

Karen: Yeah. So I'm curious if there's anything in the kids' curriculums about teaching them how to use AI effectively?

Jenn: No, nothing in their curriculums to use it effectively. I know that our school board has been investigating it. We had a recent meeting to walk through what's really important, what does education need to look like here in the next 10, 15 years to prepare kids effectively? And I think careers are going to change, as we all can already see and attest to. So what really matters in the future?

I went to a small gathering in San Jose a few weeks ago with Gene Kim, the author of “The Phoenix Project”. And one of the topics during that forum was: Now that there's things like vibe coding tools and it's a little bit easier to pick up coding at a generic level, now you need to really make sure you understand the engineering part. And you really understand, what does it mean to have a good system put together, using those rigorous engineering tools that you learn in college, right? It's not just about slinging code anymore. It's about focusing on the design. It's focusing on the quality and all those pieces. Because, those tools, they're going to get better. But at the same time, there's a lot of nuances for every application. And so you need to have that insight to be able to apply it appropriately.

Karen: Perfect. So one concern I wanted to talk to you about is looking at where these AI and machine learning systems get the data and the content that they use for training. A lot of times they'll use data that people will put into online systems, or have published online, like our newsletters. And companies are not always transparent about how they intend to use our data, even if we sign up directly with them. So I'm wondering how you feel about companies that use this data and content, and whether they should be required to get consent from, and compensate, and credit the people whose data they want to use for training the tools.

Jenn: Yeah. I think, and we talked about this earlier, around making sure that we're still rewarding people for creating new ideas and new content for us all to consume. I think where it can be useful is if someone's using it for research and my blog comes up, your blog comes up, right, as a reference area. That can help drive traffic, and it can help you gain more visibility. So that part could be good.

But at the same time, I do believe that we should know whether or not our data is being scraped off of a website, and what that looks like exactly. I'm definitely not a lawyer. I have friends that are lawyers; I could ask them. But I think we want to make sure that we're not setting ourselves up for a spiral, where we're just circling in the present and not rewarding people for generating the next ideas that move our society forward.

So we do need to continue to reward people for the ideas that they generate and they produce and they put out into the world. And so when, you know, if you're using an AI tool, I personally want to know where the information is coming from. Because I want to understand, like, “Is this a book that I should be reading? Is this a magazine I should be subscribing to? Is this an author that I need to look into and reach out to?”

So I think it's also our responsibility as users to make sure that we understand where this data's coming from. Don't just pop something into Claude and then use it at face value. If it's not data that you already knew in general, make sure that you understand where it's coming from, so that you can give credit, and, if you need to, buy the book or even get it from the library, so that you're using the direct source.

Karen: Yeah, you mentioned about discoverability and finding our newsletters, so I'm curious if you have the setting in Substack for AI training, do you have it turned on? Or is it turned off?

Jenn: Oh, that's interesting. I think I have it turned off. It's funny that you mentioned Substack. I remember LinkedIn when that came out, that LinkedIn quietly added that little piece in there, and I definitely turned it off there. I'm pretty sure I turned it off in Substack as well, but maybe I'll experiment with it and just see if anything changes.

I'm not particularly protective of my own publication because I don't charge. I'm mostly doing it to help think through ideas and make sure that I feel relevant and have a voice. But if I start charging, or I'm looking for monetizing my blog more, then that's definitely something I would consider. But I'm actually looking now to see if I have it turned off. And it looks like I do. Yes. So maybe I'll turn it on, just see what happens.

Karen: Yeah. I've heard a couple of opinions. One is they say that it may hurt discoverability of your content if you turn it off because it will block some crawlers. On the other hand, there are some unethical crawlers that ignore that setting and scrape it anyway. But then other people say, "I want to be discovered. I think my ideas are important and I want these tools to hear them and use them and display them and make them available." So some people feel that way about it.

Jenn: And once again, it comes down to transparency. I was really disappointed with LinkedIn and how they handled that. I felt as authors, that should have been disclosed to us when that became an option or when they decided they wanted to start using that data.

All of these platforms have access to a wealth of information and knowledge about all of us. And I think with that comes a lot of responsibility. And to start with, you have to be transparent. You have to at least give the authors the option to manage their data and how it's being used.

Unethical crawlers, I think, it's like everything else - it makes me angry, but I feel like that's another hurdle that will be a very difficult piece to overcome. It's just like getting rid of all the scammers that I get texts from on a daily basis to pay a toll that I don't have to pay. That will be a difficult thing to overcome. But every platform has the duty and the requirement to their user base to be transparent and to say, "This is what we're doing with your data. Is this okay?"

Karen: Yeah, I think what bothered me the most about LinkedIn wasn't so much that they decided, “Hey, we want to have this option where we start using it and opting people out by default”, but that what they put in, at least for those of us who are not protected by GDPR or the Digital Markets Act, automatically said, “Everything you've done up till now, we are claiming the rights to, regardless of whether you flip this opt-out button from now on.” That option did not protect anything we'd already done. They were just saying “We're already taking this, but we'll let you opt out for what you write in the future.” And that was very much not cool.

And it's not like they technically couldn't do it, because they were able to have the people who are covered by GDPR not subject to this scraping. So it can be done, but they just chose not to do it for the rest of us and not to offer that.

And Facebook did something similar. It's “Yeah, you can put in an opt-out request.” This was last summer. I don't know if you remember this. But they said you can put in an opt-out request, but you have to go through five different hoops to do it. But then they say, “No, we don't have to, so we're going to ignore your request.”

Jenn: And that's really frustrating. I have to admit, I don't use Facebook anymore. I've lost interest in that platform. And it doesn't surprise me that that's how they're approaching it, because they're definitely putting profits over what their initial mission was, right?

I think they've taken a lot of missteps in the past, and it's going to hurt their relevancy in the coming years, for sure - from a Facebook perspective. Instagram, we'll see. I'm sure they're doing the same things with Instagram - same company. Threads, same company.

Karen: Yeah, I wasn't using it much, even as of last summer, but after that it was like, “You know what, if you won't take my opt out, I'm just going to delete all my content.” And that doesn't mean they didn't use it anyway and keep it. But I did at least say, “You know what, that's the last straw. I am really done here.” I keep the account alive just to talk with some of my late husband's friends and such that I don't connect with any other way. But I really don't use it anymore. And I'm not going to put anything personal there because they can't be trusted.

Jenn: Right.

Karen: I can see them going the way of MySpace, and I don't know if they realize that yet, but I think they're desperately trying to avoid it. Have you heard about this new thing where they're going to try to actually introduce ‘AI friends’ as Facebook users that people interact with? They say "People need more friends. They've only got about three and they should have about 15."

Jenn: Who said?

Karen: This is Zuck. This is his latest thing. So anyway, yeah, if you're not there and your kids aren't there, then it's probably a good thing. But that's all the more reason to say, "Boy, I do not want to be there."

Jenn: Exactly. That seems ridiculous to me, really. I was recently listening to a podcast - I forget whether it was Hidden Brain with Shankar Vedantam or someone else - where they were talking about how many friends an individual really needs. And really it's no more than five. Maybe three. Because it does take time to nurture those relationships. It takes effort and energy.

And so most humans can't handle more than a couple close friends. Of course you have other acquaintances, and you have different friends that are a little bit less connected. But as far as close friends, you only really need a handful to have a fulfilling life. And so I couldn't imagine saying “I only have five, so now I have three more AI friends.” That sounds like a terrible idea. But at least you don't have to worry about hurting their feelings. That's for sure. You can stand them up all day and no one's going to complain about that, yeah.

Karen: Yeah. I don't see too many of the younger kids staying on Facebook nowadays. A lot of the people in the generation behind me are just now having kids, and some of them still share photos of their kids. And I say, “Are you really sure you want to do that?” But others are very cautious - they won't put anything there. They only share photos with family members through Signal now, just because they know that those companies can't be trusted.

Jenn: Right. Unless you're Department of Defense, then you just never know. You just have to know who you're inviting to your chat!

Karen: Yeah. Wow. That was pretty amazing.

Jenn: That was a mistake for sure.

Karen: Anyone who's ever held a security clearance knows that would be a firing offense. It's just absurd.

Jenn: I know. I couldn't imagine. Couldn't imagine. Yes. But that is definitely not my problem. Yet.

Karen: Yeah. But the whole question of being able to trust the companies is, I think, a really big one. And I think that's partly what underlies so many lawsuits - public sentiment toward these companies has been going down because "You're doing what with my data?"

Jenn: Right. And there was a lawsuit at least a couple of years ago against a couple of those AI companies - a class action that a lot of authors filed because their proprietary, copyrighted material had been used to train models like ChatGPT, naming OpenAI and Anthropic. And I never heard what happened with that lawsuit. I don't know if you have - maybe it's still working its way through the courts. But I think it will be interesting to hear what comes out of that, and how it will change the way AI is used in the future.

The funny thing is, even if they win the lawsuit, it's like un-mixing a bowl of cookie dough, right? You can't take the eggs back out. Once it's all mixed together, it's all in there. Unless you start completely over and throw the whole thing in the trash, which I'm sure no one would want to do.

So I think that's the difficult part - they made some decisions at the very beginning, and either didn't think of, or didn't want to think of, the obligation they had to ask for permission to use copyrighted material.

Which is another reason why, if you do use AI to do research and writing, you should make sure you know exactly where it's coming from. Because you have to give credit. If it's not something that you made yourself, then you'd better go and take the time to find that out. Because you, as the author, will be the one who bears the brunt of it.

Karen: Exactly. And that's the whole infringement essence. There's a website called ChatGPTisEatingTheWorld.com.

Jenn: Ooh.

Karen: And it's run by a lawyer who keeps track of all of the lawsuits in the US that have to do with AI and copyright infringement. And the last time I checked it, a couple weeks ago, there were 39 active lawsuits.

Jenn: Oh.

Karen: And the one that you're talking about, with the authors whose works were all being used - I believe it's still one of the active suits. But there was a bit of news about that a few weeks ago. There's a way now that you can check, if you're an author - if you wrote a book or some papers - there's now a site where you can go and say, "Tell me if my content was scraped and used in this LibGen dataset." [This interview was recorded on May 12; see this link to The Atlantic site for checking]

And I went in there and looked up some of my academic papers. I found, I think, 11 of them in there. And that's very minor compared to people that have written dozens of books and had them scraped without any credit or compensation. But yeah, as far as I know, that suit is still ongoing. That has not been settled.

Jenn: Yes, that sounds about right, because that one I had been keeping track of. But I'm definitely checking out this ChatGPTisEatingTheWorld.com. Let's see what else is happening.

Karen: For sure. Yeah. That's my go-to when I want to see what's going on with the lawsuits. I think that author's actually on Substack too - I think he joined recently. Edward Lee.

Jenn: Okay.

Karen: I'll verify that and put the link into the interview so people can find it. [link:

]

Jenn: Perfect. Yeah, I'll check that out.

Karen: For all of the AI-based tools that you've used, do you feel like any of those companies have been transparent with you about where they got their data?

Jenn: Whether they're actually transparent, I don't have a good answer for that. But one reason why I do like Perplexity is because it gives me references to where data is coming from. So that gives me some confidence on, “Okay, this is what you're referencing and maybe you're summarizing what is in this published article” or whichever. And so then I can go and do my due diligence and go and take a look at that article myself.

When they don't - Claude, for example, does not normally provide a lot of detail on where it gets its data - then it's hard to say. You just have to hope for the best, unless you specifically say, “Write like Chris Voss”, right? So yeah, I think it depends on the tool that you're using.

As far as vibe coding is concerned, I have not dug into any of those tools and exactly how they're trained. If they're trained on proprietary coding techniques, or on anything specific, I don't have that sort of insight. So I'm not sure what those tools are using. Because to me, those tools are a little more nuanced around "How did you come up with this logic for this solution?" Especially considering half the time they come up with some pretty wild results, so you never know.

Karen: Yeah, yeah. I had used some of the tools a few years ago for generating some Python code for an application. I wanted to analyze some personal data. And I told it to use this one library which interfaced with this EDF data file type. And it made up API calls that didn't exist. But those are easy ones to catch, right? Because the interpreter, when you try to run the program, is going to say, “Nope, doesn't work. Nope.” So those are easy to catch. But the security holes and such are a lot more subtle, and you need to know what you're doing to be able to find those.
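[Editor's note: the failure mode Karen describes - AI-generated code calling a library function that doesn't actually exist - fails loudly the moment it runs, which is what makes it the easy case to catch. A minimal, purely hypothetical sketch; the module and function names below are stand-ins, not the actual EDF library from the interview:]

```python
import types

# Stand-in for a real file-reading library with a limited API
# (hypothetical names; not the actual library from the interview).
edf_reader = types.SimpleNamespace(read_header=lambda path: {"channels": 4})

def load_recording(path):
    try:
        # A hallucinated call: this function was never part of the API,
        # so Python raises AttributeError as soon as the line executes.
        return edf_reader.read_all_signals(path)
    except AttributeError as err:
        return f"caught hallucinated API: {err}"

print(load_recording("session01.edf"))
```

Subtler generated-code problems, like insecure logic that runs without error, produce no such immediate failure, which is Karen's point.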

You mentioned Claude and traceability. I've heard a lot of people talk about how Perplexity is good for that. Claude did just announce something recently where they are now going to be able to trace to the sources.

Jenn: Okay.

Karen: I think they might have been introducing it in the paid plan first, and then rolling it out to the free plan later on. But that is something new that they just came out with as well.

Jenn: Interesting.

Karen: I think even the ones that are giving credit aren't necessarily asking for consent or compensating. They're just taking them, still.

Jenn: No, that is true. As far as consent goes, we can all agree: no one - no matter how good an author or speaker you are, or how much you've published in any medium - gave consent for their information and their copyrighted materials to be used. That's one thing we can definitely agree on: no consent.

Karen: Yeah. And definitely the same for images and for music as well. There's been a lot of scraping with no consent, and no compensation, and a lot of times not even credit.

Jenn: Right. And I think that's where others need to step in - whether it's government organizations or other groups that have some sort of governance - to come up with a standard, and governing bodies need to provide some oversight into that. Because I think as individuals we can do our best. But right now, it's really on the individual users to make sure that they know where the data's coming from. That's how I view it right now.

Karen: So you've mentioned before that you've built some tools that used AI and machine learning. What could you share about where the data came from and how you obtained it for building those models?

Jenn: Oh, for those models, it was data that I had access to, based on historical information. So I knew what good data looked like, and I could train on good data. And I had plenty of that to work with, right? So I could say, "This is what good looks like. This is what I'm looking for." And it was all internal to the company I was working for. I never took anything external. But there was no one else at the time really doing anything like this, so it didn't make sense to try to pull in any external data.

If I were to do it again today, maybe there'd be some other data on data.gov. That's one place where I look for a lot of shared open data. That would be my first stop, I think, for some of these activities: looking for those open, free-to-use data sets that I can use to train a model. But I never used any data in a way it wasn't expected to be used, that's for sure.

Karen: So this application was for machine learning on some satellite data. Was it all actual data, like from a test system, or did you use some synthetic data?

Jenn: I created my own synthetic data. Most of it was actual data that I utilized. But yeah, I would create my own synthetic if I wanted to manipulate the model to look for certain anomalies or something like that.

Karen: You have to create the anomalies if you don't already have data that reflects them.

Jenn: Exactly. Yes. So that would be where I would create my own. But for the most part, I used pages and pages of log material that I could then tag with, “This is good”, “This is bad”, or whatever, “This is what I want flagged. And then if this error stops happening, I want that flagged.” All those types of things.
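[Editor's note: the tagging workflow Jenn describes can be sketched very simply. The keywords and log lines below are hypothetical illustrations, not her actual system - the idea is just to attach a label to each historical log line so it can serve as a supervised training example:]

```python
# Hypothetical keyword rules for labeling log lines as training data.
FLAG_KEYWORDS = ("ERROR", "ANOMALY", "TIMEOUT")

def label_log_line(line: str) -> str:
    """Tag one log line: 'flag' if it matches a rule, else 'good'."""
    return "flag" if any(k in line for k in FLAG_KEYWORDS) else "good"

# Illustrative log excerpt (synthetic, like the anomalies Jenn mentions).
log = [
    "2031-04-01T00:00:01 telemetry nominal",
    "2031-04-01T00:00:02 ERROR sensor dropout",
    "2031-04-01T00:00:03 telemetry nominal",
]

# Pair each line with its label, producing (text, label) training examples.
labeled = [(line, label_log_line(line)) for line in log]
for line, tag in labeled:
    print(tag, "|", line)
```

A real system would feed these (text, label) pairs to a classifier; the point of the sketch is only the tagging step.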

Karen: One last question, and then we can talk about anything else that you want with regard to AI! So based on everything that we're seeing, and the way companies are treating us and our data, it seems like public distrust is growing. And I feel like that's healthy in a way, because we're starting to understand what they're really doing with our data, with or without our consent. So what would be the one thing that you think these companies should do to, I won't say keep our trust, but maybe to earn it?

Jenn: I think transparency builds trust. You can't do everything behind a closed door and then just expect everyone to trust you. Once the novelty of this tool wears off, people have to stop and say, “Wait a minute” from an ethical perspective. I think we see a lot of issues around AI tools that kind of go off the rails and can cause mental distress, for example. Companies need to be forthcoming in how they're addressing some of these riskier areas that can impact the mental well-being of an individual.

Which is another thing I thought of when you mentioned the AI friends on Facebook. That just seems fraught with potential dangers, right? Especially if there aren't good guardrails in place to make sure that you're not responding to someone who may be having a mental health crisis and needs to talk to a real person.

So I think organizations need to be real clear on what the potential impacts are. Not just to the creators where they're getting the data and the models, but how it could potentially impact and cause harm to other people. And they need to be very proactive in what that looks like.

Technology companies have not always been really good about that. I think we saw that in the past with Facebook, even during the 2016 election. The leaders of these organizations need to take ethics seriously and make that part of their business plan. Because consumer trust is really what their businesses will be riding on, from my perspective. Otherwise you'll drive users away, and eventually you'll lose relevancy in the marketplace.

Karen: Yeah. So how do you see your kids responding to how much they trust or distrust these companies? I see a lot of varying perspectives.

Jenn: We have a lot of conversations about the use of AI. It's funny - my daughter recently learned about the environmental impacts of AI, which a lot of us forget about. And it's interesting because they actually have a much different view. They don't really worry too much about using it, but at the same time, I think they're thinking a little more broadly about what the impacts could be beyond the data piece: the energy piece, how we're utilizing energy, how that's going to drive up energy consumption and pollution, and the environmental impacts of building these massive data centers across the world, right? That was a really interesting conversation we had just last week. So I really appreciate how broadly they're thinking about the use of these tools, for sure.

Karen: Do they have concerns, or do you maybe have concerns, about how they work towards having a job and not having AI take their future jobs?

Jenn: I think as a parent you're always terrified that your kids aren't going to be able to make it. I try to have faith that they will figure it out. I think what's really important is that they understand how to read the tea leaves on the changing marketplace. You could read an article today and it's going to say, "All these people graduating from college, they can't find a job." The same thing happened to me when I graduated from college. It was right after the dotcom bust. And no one could find a job. And I was thankful that I did. People find jobs eventually, right? You just have to make sure that you're flexible.

And I think flexibility and creativity are really what's going to set you apart in the future. I was talking to an individual who works on DORA metrics - common metrics for software development. And he and I were talking about: if companies start reducing the number of junior engineers in their organizations, how is that going to impact their talent pipeline, right?

Because the weird thing is that AI can do lots of things, but it doesn't seem to be stopping any of us from aging. I can tell you right now, my knee tells me all the time that I am not getting any younger. So what does that cost look like? Sure, it's great: instead of 20 entry-level employees, you hire 10, and that saves you this much money. But then 10 years down the road, when a bunch of your senior people are leaving, who's training the next generation of folks to take over your company? And how will that impact you over the long term?

I think that's going to be something that some companies will have to reckon with. And it could impact how they operate in the future. It's definitely going to impact organizations, and it's definitely going to impact the talent pipeline, if they say, “Okay, we only need 5 junior employees instead of 10.” Because with only 5 junior employees, how many of those can advance and take over the company over the next 10 or 20 years?

Karen: Yeah, it's definitely going to be interesting to see how that changes. And the effect on the pipeline is something I think people weren't thinking about. There are some studies looking at who can use these tools the most effectively, especially when we think about coding. For junior people, yeah, it helps them move up to a certain minimum level quickly. But the people who seem to get the most value from it are the ones who are experienced and already know what they're doing. They know what to look for. They know how to be alert for security vulnerabilities and some of the other problems that can creep in from generated code or from using somebody else's code. But then the question is: how do those junior people learn what they need to learn to become those senior people in 5 or 10 or 20 years?

Jenn: Right. What I'm saying is: understand when you make certain design decisions. How do you prioritize the use of one thing over another? When should you go all the way back in time? I've done some assembly code, and sometimes it makes sense - you have to do assembly, right? Especially in embedded systems. When do you make those decisions, and why? Vibe coding won't tell you that. You have to have the knowledge and the insight to know how to solve some of those problems.

Karen: Yeah.

Jenn: And I might put a plug in, there's a gentleman that I had met recently, his name is Kent Beck. He has an amazing Substack. It's called “Tidy First”. And if anyone's interested in vibe coding, that's his hobby, and he talks about it all the time. And I've learned a lot just reading his blog and seeing all of the comments that he makes on his Substack. So he does an amazing job and he's also an amazing human, so that always helps too.

Karen: Is it the same Kent Beck that was involved in the agile movement, way back years ago? Or is this a different Kent Beck?

Jenn: It's the same one. Yeah. Yeah.

Karen: Oh, very neat. I know a lot of the more famous writers and such are migrating to Substack, but I hadn't heard that he was there. I'll have to go look for him. Tidy First. [links:

, Tidy First]

Jenn: Yeah. Yeah. Very neat. It's a good one.

Karen: Yeah. Thanks. This has been really fun, Jenn, talking with you about AI. Is there anything else that you'd like to share with our audience?

Jenn: When it comes to AI, I think there's a lot of fear mongering. And I think we all have to understand what these tools are, and what our personal responsibilities are in using them. And not necessarily get wrapped up in all of the doom and gloom, because that just won't get us anywhere. We have to be responsible users and stewards of this technology, and not be afraid to give vocal feedback to companies to hold them accountable, as often as we can.

So yeah, I think that's one thing. And I try not to get too concerned or too terrified when I think of, “There's not going to be any jobs.” I'm just not there yet. I think it will create new jobs. We just don't know what those look like yet. But we'll find out. My kids will probably find out. But yeah, that's where I am today. Tomorrow it might be a completely different story, so we'll see.

Karen: Yeah. The only constant is change, right?

Jenn: That's right! That's all you can count on.

Karen: All right. Thank you so much. It's great having this conversation with you. Again, I appreciate your time.

Jenn: All right. Thank you so much, Karen. It's nice talking to you.

Interview References and Links

Bridgeway Digital Advisors (Jenn Spykerman’s company website)

Jenn Spykerman on Bluesky

Jenn Spykerman on LinkedIn

on Substack (The AI Strategy Navigator)



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to

for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

