
🗣️ AISW #046: Ronke Babajide, PhD, Austria-based cybersecurity leader (AI, Software, & Wetware interview)

An interview with Austria-based cybersecurity leader Ronke Babajide, PhD, on her stories of using AI and how she feels about how AI is using people's data and content (audio; 40:44)

Introduction - Ronke Babajide, PhD

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Ronke Babajide, provided by Ronke and used with her permission

Interview - Ronke Babajide, PhD

I’m delighted to welcome Ronke Babajide from Austria as my guest today on “AI, Software, and Wetware”. Ronke, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Thank you for inviting me, Karen. So as you said, my name is Ronke. I'm based in Vienna, in Austria. And currently, I work for a large cybersecurity vendor, which is actually one of the biggest in the world. And I work as a manager for a team of system engineers.

I used to work as a system engineer myself. I've been in tech for 25-something years now. So I always joke that nowadays, I don't actually work myself anymore because I'm a manager, but you know how it is.

Yes, I spent a lot of years as an individual contributor, and I spent more years as a manager and director and leader, and it's definitely still work.

It is. I know. I just say that to make my team feel better!

The main job, I think, of a manager or a leader is to help the people on their team be more effective, and it's definitely work. It's not the same kind.

Absolutely. It's just so different from what I used to do as work, you know, with the transition from being an individual contributor to being a manager or a leader. It's a big change, really, in the way you approach things. As you said, it's more about enabling other people to do their job as well as they can than actually doing stuff yourself. And that's something you have to learn, especially if you were someone who worked in the space yourself: taking your hands off. It's a big, big step.

You said you worked mostly in cybersecurity?

Nowadays. Yes.

What's your level of experience with AI and machine learning and analytics? Have you used it professionally, in your work in cybersecurity or before that? Or have you studied the technology? Or do you use it personally?

Yes, yes, and yes, I would say. There are different levels of experience that I have with AI and machine learning. Of course, I've always been very interested in the topic even though it's not my area of focus.

In terms of how AI is a part of cybersecurity, for years now, we've been using machine learning models to augment cybersecurity tools. So that has always been a topic. If you think about behavior analysis, just observing what is happening, especially on the network level, or all these tools, especially NDR (network detection and response) and XDR (extended detection and response), they all have a level of artificial intelligence and machine learning algorithms in there. So this has always been a topic.
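Note: For readers curious what this kind of behavior analysis looks like in practice, here is a minimal sketch using scikit-learn's IsolationForest to flag anomalous network flows. The features, numbers, and thresholds are invented for illustration; real NDR/XDR products use far richer telemetry and their own proprietary models.

```python
# Minimal sketch of ML-based network behavior analysis, in the spirit of the
# NDR/XDR tools described above. All features and numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes transferred, duration (s), distinct ports]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 500),  # typical payload sizes
    rng.normal(2.0, 0.5, 500),      # typical connection durations
    rng.integers(1, 4, 500),        # few ports touched per flow
])

# Learn a baseline of "normal" behavior, then score new flows against it.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# A suspicious flow: huge transfer, long-lived, touching many ports.
suspicious = np.array([[500_000, 120.0, 40]])
print(model.predict(suspicious))        # -1 = flagged as anomalous
print(model.predict(normal_flows[:3]))  # 1 = consistent with the baseline
```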

And in terms of what nowadays is artificial intelligence, which is for most people just large language models, I have a big interest in the topic. One of the reasons I'm in tech is because I'm really interested in new developments. That's one of the reasons I'm still in the space, even after all this time and different obstacles that you have when working here. But when ChatGPT kind of burst on the scene, I was definitely one of the first people to start playing around with these things. And I have played with different models. I've tested all kinds of image generation. And I've also had a lot of discussions and exchange with other people about this technology, especially in terms of what it means for society.

So, yeah, I have a big interest in the topic. I've spent quite some time thinking about it, talking about it, using it. I do use it, especially the large language models like ChatGPT. I use them for my work to make stuff more productive. I use Midjourney to create images from time to time, especially for my blogs, for example, or just to test stuff.

So, yeah, I think my interest in AI and machine learning and large language models comes from different angles, really. One is the cybersecurity space, which obviously uses these new technologies. For example, in one of our tools we now have a chatbot that will talk to you and help you do configurations. So we are at that level of integration of this new type of AI into cybersecurity.

But then on the private level, it's also something I enjoy using to make myself more productive. Just like a half an hour ago, I was talking to my husband, and we're thinking about buying a house. And we got the contract sent, and there was language in there that we absolutely didn't understand. So we just asked ChatGPT to explain to us what this was about. And that's really helpful, you know? Something you would normally have to go to a lawyer to explain it to you. You can just ask ChatGPT “What does this mean?”

Yeah. Those are all really good examples. So it's interesting that you're using it to interpret a real estate contract. Most people think about terms and conditions and how long and incomprehensible they are, but real estate contracts could be quite similar. So that's a good use for a language model.

You mentioned about generative AI and how everybody seems to think that that IS what AI is. I use an analogy to an iceberg, where AI is a very big thing, and most of it's under the water where people don't really see it. They see the GenAI, which is above the waterline. They see robots, and they see self-driving cars. But everything else that's going on with machine learning, using it for cybersecurity, using it for optimizations or recommendations – it's all under the waterline. A lot of people just don't realize that it's there and that it's affecting their lives. It's really interesting.

Yeah. And that's such a good point. And I was reminded of this just a couple of days ago. I was watching a talk from some conference, I forget which, about the topic of machine learning and the new language models. And what the guy said was actually very true: that we are at this point with the large language models where everyone talks about them without really knowing what to do with them.

So everyone says, “Oh, we have AI now.” But we have been using machine learning and all these things, even pattern matching and all these, for such a long time. But these things have just kind of vanished from our event horizon because they just work. They're just there, and they do something in the background. We don't even think about that anymore.

Like, there's so much machine learning and artificial intelligence and all the stuff we use on a daily basis, like even our social media feeds and all that. But you don't go on Instagram and say, “Oh, I'm using AI”. Because that's just something that's part of the technology. And at the moment, we're just in this moment of time where everyone realizes that there's a new technology that is impacting the way we do things. And this is a small subset of artificial intelligence.

But I do think that before that, most people didn't even realize that these things really exist. It's just now where normal people have this opportunity to interact with these tools without really knowing what to do with them, really. But they know they are there. They can talk to them. They're impressed even though maybe they just use them once or twice, but they know they exist.

And that's why now everyone just conflates AI with LLMs. Nothing else. Because that's the only visible part, as you said, actually, of the iceberg, of the things that people can actually grasp.

So you've given a few examples of AI-based technologies that you've used with ChatGPT and Midjourney and having a chatbot. Can you share a specific story on using a tool that had AI or machine learning features? I'm curious about your thoughts about the AI features of those tools and how well they worked for you or didn't, and whether you had to iterate with them to get them to do what you want, things like that.

Is your question in terms of these new AI tools like the LLMs, how well they work for me? Or just in general?

Any examples that you think are relevant. You talked about your chatbot for configuration, for instance. Maybe you could talk about that. Like, how did you get that to work, and what mistakes does it make, and how did you get it to be accurate enough to be useful?

Yeah. So, obviously, since I am in the front end part of the company where we are a team of technical people who, you know, work with customers, I’m not part of the development of these tools.

But what I can say is that this technology was released as part of our management tools. We have this big management appliance that manages all the firewalls and all the other components of our cybersecurity platform. And it was very impressive to see that you can now actually talk to this management tool and tell it, for example, to create a configuration for a specific set of firewalls in a specific network environment, and it would do that. And it actually works.

So, of course, there is a likelihood that there might be an error or issues from time to time. I can't tell because we're not using it at scale, obviously, since we are not the end customer and we don't use it on a daily basis. It's a bit hard to do real testing. I assume our development team has tested it well enough for this to be released, I must say that. So it is very impressive, actually, because it shortens the time for a rollout, for example, enormously. It just makes writing a configuration for something so much easier.
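Note: The vendor's configuration chatbot is proprietary, so the sketch below is only an illustration of the general pattern - natural language in, draft configuration out, human review before rollout - assuming an OpenAI-style chat API. The model name, system prompt, and output format here are assumptions, not the vendor's implementation.

```python
# Hypothetical sketch of a natural-language-to-configuration assistant,
# assuming an OpenAI-style chat API. Not the vendor tool discussed above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a firewall configuration assistant. Given a plain-English "
    "request, output only the corresponding firewall policy rules. "
    "Never invent hosts or networks that are not in the request."
)

def draft_config(request: str) -> str:
    """Turn a natural-language request into a draft config for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

# The draft still needs an engineer's review before rollout - the tool
# shortens the work, it doesn't remove the human.
print(draft_config(
    "Allow HTTPS from the 10.0.1.0/24 office subnet to the DMZ web servers, "
    "and deny all other inbound traffic to the DMZ."
))
```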

And I think that is a good thing, in terms of resources in the cybersecurity space and especially in terms of people to run these different tools. We have a big shortage here in Austria. So there's a talent gap that we have in this space. And I think having tools now that will enable people to save time for these repetitive tasks, and enable them to use the time for something else, would actually be a good thing. Because I do not know how we will be closing that gap anytime soon. I don't know what it's like in the US, but here we do have this issue. So this works really well, you know?

I'm not sure how well these tools would work in a really complex space. But for these baseline tasks, you know, these everyday repetitive things, which are easy to automate, for that, it works really well, I would say.

In terms of other tools I've used, like, for example, image generation, I have a big bone to pick with image generation. Something that has been a continual issue – because I tested this, like, over a year ago and the problem is still there – is the whole bias that is inside the image generation. And that's gender bias, race bias. And I don't know what kind of data they use to train this stuff, but it's not good. And especially on Midjourney. I give talks on bias in AI, especially to women, because I feel we need more women in this discussion.

I have this example because I went to Midjourney and I gave it a prompt. I think it was a doctor. I wanted an image of a doctor, and you know the tool spits out, like, 4 images. All 4 were white men. And then I asked again, and again it was only white men. And then I tested this with different jobs like mathematician, software developer. And the result was always the same. It was always four white men. Different stereotypes within those white men, but white men. And if you wanted something else, you would have to add the gender or you would have to add the skin color.

And I feel like, especially if you look at the prompt, “imagine software developer”, it doesn't even make sense if you look at what software developers actually look like these days. A lot of them are Indian. We have so many Indian developers, and there wasn't a single image of a brown-skinned developer within those generated images. So that was over a year ago, or even one and a half years now, when I came to that conclusion, that there's a very deeply baked-in gender bias within the dataset they used to train. I don't know what they used to train, but that was the result. And then I tried this again a couple of months ago, because I did another talk on the topic, for the same organization actually. And the result was still like that.
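Note: The audit Ronke describes can be written down as a simple procedure: prompt the generator with a set of occupations, label the demographics of the returned images, and tally the counts. In this sketch, generate_images and label_demographics are hypothetical placeholders (Midjourney has no official public API, and Ronke ran her tests by hand); only the tallying logic is concrete.

```python
# Sketch of an occupation-prompt bias audit, as described above. The two
# backend functions are placeholders for manual work or external services.
from collections import Counter

OCCUPATIONS = ["doctor", "mathematician", "software developer"]
IMAGES_PER_PROMPT = 4  # Midjourney returns a 2x2 grid of four images

def generate_images(prompt: str, n: int) -> list:
    raise NotImplementedError("placeholder for an image-generation backend")

def label_demographics(image) -> tuple[str, str]:
    raise NotImplementedError("placeholder for human (or classifier) labels")

def audit(occupations: list[str]) -> dict[str, Counter]:
    """Tally perceived (gender, skin tone) labels for each occupation prompt."""
    results = {}
    for job in occupations:
        images = generate_images(f"a portrait of a {job}", IMAGES_PER_PROMPT)
        results[job] = Counter(label_demographics(img) for img in images)
    return results

# In the runs described above, the tally for every occupation was effectively
# {("man", "white"): 4} - exactly the skew this audit is meant to surface.
```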

And that's insane, because there has been a discussion ongoing about the problem with AI image generation for quite a while now. How can this problem persist in this way? And you could say it doesn't matter, you just add, I don't know, the skin color or the gender or the race or whatever to your prompt. But the problem is the representational damage that comes with representing people in such a stereotypical way. If you are, I don't know, a young girl and you want a picture, an image of a software developer, and this is what you get and it's always a man, that does something to your brain. And I have, how can I say, I have this fear that AI will actually augment all the biases we already have within society. So there are different sides to it.

I feel that there's a lot of stuff that will make you more productive. Like, for example, on a daily basis when you need an explanation for something, you want something reviewed, if you just want an abstract of a long text or something. That's something that these tools do really well. But if you look at the impact of the way society is represented within that data that is used within these tools, then we have an issue. And this is a discussion that's not happening. The only discussion we have is, “Oh, what tool can we build next?”, you know? So yeah. Not sure how we can find a balance between these two things, actually.

Yeah. Biases, especially in image generation tools, have gotten a lot more visibility lately at least. But, fundamentally, it really has to start with whoever's building the tool. They have to be careful about how they choose the data that they put into it, and they have to be continually checking for biases and mitigating them. And at least to date, that hasn't happened very much. And what we are finding, there are studies showing that AI is actually reinforcing biases and not helping to mitigate or reduce them in any way. And that's disappointing not just in images, but in things like resume reviews in different areas.

But even in images in particular, there was a recent report. Someone was trying to get an image of a black doctor who was treating children. And even when they put “black” into the prompt, they still couldn't get a black doctor. It was just very strange. Obviously, there's something fundamentally wrong in the way that they've been constructed. And like you said, little girls need to be able - there’s a phrase, “if she can see it, she can be it.” And these tools don't let them see it.

There actually had been some progress, with children who were asked to draw a picture of a scientist, and it being a woman more often lately than it used to be, and that was a sign of progress. But there's some concern that the AI tools that aren't developed ethically, in a way that mitigates biases, that they could actually undo some of that progress. And that's disappointing, something that we certainly need to find a way to do something about.

I just recalled - this is image generation. But as you said, there are other instances where this is a bigger problem, like, for example, CV reviews. But what happened, for example, in Austria was our employment agency wanted to be very at the forefront of AI-developed tools, and they released a chatbot. And that chatbot was supposed to give people an idea what kind of jobs they could choose. When a girl or a woman asked the chatbot for a recommendation, it would recommend typical female jobs, like cleaning, nursing, blah blah blah. But for men, it would recommend software development and IT and whatever. I think they took it offline for a while, and then they tried to reprogram it, and it's much better now.

But, you know, that's the kind of harm you can do to society with these tools, and that's another part of the whole hype discussion. Most people are not aware of the damage you can do. And they just see, “Oh, this is a new tool and it's so intelligent and you can do all these things”, but they don't understand that there is a discussion to be had. And that's why I'm so adamant that we HAVE to bring more women into this discussion. We have to discuss this more publicly, because we can do really big damage.

And the EU, thankfully, has released the EU AI Act. But now it will take years to implement. And I'm a bit afraid that the whole development will be faster than the implementation of these rules. And we will be at the point where the problem is already there and you can't really fix it anymore. A bit like with the Internet.

Yeah. Regulations always lag, and the technologies evolve faster than the laws usually can. They're always playing catch up, and that makes it hard. Especially if companies are so eager to be first to market that they cut corners to get there. It's like, “Well, it works for 80% of the people”. Or “It works for white men.”

Yeah.

Young white men in some cases. There's a lot of ageism out there too in imagery. So that's a definite challenge. How do we course-correct that, and how do we make it so that it is better for companies, it's more in their business interest, to do it right than to cut those corners?

Yep. Absolutely. I think a lot of people are currently afraid that AI will cost them their job, some of them rightfully so. But I think the bigger discussion here is what AI is going to do to our society if we don't check it. Because there's so many decisions that can be taken automatically that you don't actually want to be taken automatically, if it's by a system that's not fair. Let's put it this way, you know? Yeah.

Yeah. And if the system is unfair, or if it's using AI or machine learning and it's producing a biased result, part of the concern, I think, comes when people don't even know that a machine learning algorithm was used to evaluate them, and they have no visibility and no way to escalate or to appeal. And that's certainly something that needs to be considered. But, yeah, you're right. The impacts on society as a whole, there are a lot of, LOT of interesting directions we could go. Maybe we can talk a little bit more about that toward the end.

So you mentioned some tools that you use, in different scenarios where you have used them. Are there any situations where you would avoid using AI-based tools? And can you give an example of when and why you wouldn't use it?

Well, I don't use AI-based tools when I am dealing with sensitive information. Because, obviously – or maybe not obviously to everyone – the information you put into these tools is then part of these tools. So I refrain from putting sensitive company data into AI, for example, or even private data, my own private data. I don't put that into the tool. Because, yeah, that's a security risk.

And then also for example, I write a lot. I don't write with AI because I don't believe in AI writing, because it's just so bland and boring. I think if you have seen a lot of AI writing, you easily detect it, and it just adds no value at all, I feel. When the whole hype cycle started, I tested this with a couple of topics and I wrote a couple of blogs with AI, but it's just generic BS. I don't want to say something rude on your podcast! <laughter> It's just so bad, you know? It's like you're going to a party, and you're having small talk with the most uninteresting person in the room. That's, you know, like an AI blog.

Right. Yeah.

That’s also something where I wouldn't use AI. And then there's always the question of the extent to which I would use AI for certain things. Like, for example, I had a subscription to a service called Taplio, which logs into LinkedIn. And it offers you the opportunity to find viral posts, and you could then even automatically comment on them. And I tested that, and then I decided that is also something I'm not gonna use. Because nobody wants to read AI-generated comments either, because there's such inane stupidity in them.

Yeah, so these are things I don't do. I don't use AI for anything that is supposed to be human interaction. Let's put it this way because that just doesn't add any value.

Yeah, there was a recent study that showed over 50% of the content on LinkedIn nowadays is AI-generated, either people generating a post or automatically generating replies. I didn't know there was a tool, Taplio, that did that for you - that maybe explains why it's been so pervasive. But it's like you said, you get to where you can recognize it easily and say, “Okay. A human didn't write that”, and then discount it. In some cases, you know, if I see too much of it, I just block the person. I don't want to see their comments anymore.

Yeah.

Because I know they won't add any value for me.

And, you know, the sad thing about it is that people get these recommendations to use these tools. They’re told this is a good thing because they will get more visibility. And it's not. It's just off-putting and people would block them or think they're morons, especially if you look at some of the comments that this stuff generates.

And that's the other thing that I'm worried about. I talked about society, but the other part is that the quality of the Internet is deteriorating at a speed that's unimaginable. I never imagined that it would happen that quickly. But I think, like, over 50% of the Internet is already AI-generated. I don't even know.

Wow.

And that adds absolutely no value, because everything that these tools generate is just like a variation of something that's already been there, or it's garbage, or it's even wrong.

So now we have filled up the Internet with all these useless bits of information, and it's getting harder and harder to find something useful in there. And I wonder what this will mean for the Internet as a tool to share information because that's why it was built. And I still remember the beginning of the Internet, when we all thought this would be this amazing tool where people would connect and share valuable information. Nobody told us that this would be the place where you share cat pics and porn.

So, yeah. And now, on top of the cat pics and the porn, you have AI-generated blogs that just, you know, regurgitate all kinds of information that nobody wants to hear. And what are we going to do with this Internet in the long run? Because it's just been, like, one and a half years maybe, and it's already that bad.

And also the other problem is that, when it's like this, there's no new data to train new models on. You cannot train a model on this because that's so useless. So how are we going to find a way to keep human-generated information and creativity accessible? How will we do that? Because it's already taking too much time to find useful information anyway.

Yeah. Those are all really good points. So I want to talk for a bit about where companies get the data that they use for training. In some cases, they've been scraping it illegally. In other cases, by scraping it, they're now picking up some of this AI-generated slop, and there's concern that the system's just gonna start collapsing in on itself, you know, this snake eating its tail or something.

So how do you feel about companies that are sourcing their data and their content from “publicly available” sources - I think that was the term that Mira Murati used - but not public domain? In other words, content that should be protected, or that people do have rights to, but they're ignoring or stepping on those rights. And what are your thoughts about what some people call the 3 Cs - Consent, Credit, and Compensation - that creative people ought to be entitled to for their work?

Basically, I think the whole scraping of the Internet is theft. Because, I mean, yes, you posted something on the Internet at some point and you made it publicly available. But at the time you did this, there was no idea or even imagination that this would happen.

None of us envisioned that there would be tools that would just take these enormous amounts of information from the Internet, without our consent, and use them to create. And that is not even the issue. It's not even the issue that they use it to create new stuff. The issue is that they're making money off it. That they are profiting off the work of other people, without them being compensated. So I personally have a big, big bone to pick again with these companies that do that. Because what they are doing is they're taking the collective work of humanity, stealing it, and turning it into profit for just like a handful of people in the end. And no one was asked whether we want this or whether we don't want this.

And of course, now there are a couple of lawsuits ongoing against some of these companies - I think the New York Times has a lawsuit pending, and there are a couple of other lawsuits happening. But these will take a long time to be resolved. And in the meantime, companies like NVIDIA are stealing video data from all the YouTube creators to create a model that will just make all these people jobless. And, you know, so they don't just steal their creativity and their work and their data. They also use it to destroy their livelihood in the end.

And then, like, I listened to this podcast. It's called 404 Media. I don't know if you know it, but they have a lot of good research on these AI things. And I don't know if you saw or heard the story about Niantic, the company behind Pokémon Go, which is now creating a geographical model out of the data from the users who played Pokémon Go. [link] And, I mean, come on! It's such an interesting discussion, really, to be had.

I'm currently reading a book by the former finance minister of Greece - Yanis Varoufakis, I think his name is - about how these cloud capital companies are creating something he calls “techno-feudalism”, which is replacing capitalism in reality. [link]

And how they are doing that: they have not just turned a lot of jobs into precarious positions where people earn a really small amount of money for the work they're doing, but they've also turned all of us into contributors to the tools that they are selling, you know? Every one of us is somehow contributing to the tools that they are creating, and then they are making money off it, and we aren't.

So this is a very, very, very dark path we're going down, when you don't actually compensate people for what they're doing for you. And this is where we're going. We're going into a direction of companies who are making money out of these things, and they're just, in the end, yeah, stealing the work of others. As harsh as this sounds, but this is the way I see it, really.

Yeah. One of my previous interview guests, Dr. Mary Marcel, characterized this as “socializing the inputs and privatizing the outputs”.

Exactly. And that's the short form of this long sentence that I said! Yeah. That's what's happening. Yeah. Every single one of us is exploited to create input. And the output, the revenue or the profit goes to a handful of people.

Are there any cases that you know of where your own data or your content or your writing maybe has been misused online in a system?

Well, you see, the thing is you can never really tell. What has happened is - I write on Medium. I blog there. And what has happened is that people, you know, AI-generated profiles have actually stolen articles from my publication and just republished them - hoping, I don't know, that we don't notice or whatever.

But since they republished under the same hashtag - which I check, obviously, because I want to see if there's anything interesting under it that I would like to read - I found these articles. And it wasn't just mine. It was also from writers in my publication. And, of course, that is something that happens, you know? It's very easy with these new tools to automate these things. You just scrape sites like Medium or Substack, and then you just repost. And the Internet is vast, and people might not notice that you're using their content.

And what they do is they use especially the viral posts, you know? Those that get a lot of likes and claps, and they will target those. And that's all very easy to do. You can just automate that, and you can just steal stuff from other people and try to make money off it.

Yeah, one of the things that I've seen is some of the AI-based plagiarism checkers that are trying to use AI to counteract that, which I think is interesting.

Yeah. True. That’s also a thing. I feel like these AI checker tools, they're such an interesting discussion too because most of them are pretty useless. I sometimes test these tools because I want to see how efficient they are, and how good they are at catching stuff, especially if I feel that something has been written with AI. And if you use 4 different tools, you get 4 different results, obviously.

And then there's also the question, what actually constitutes AI writing? I am not a native speaker, for example, so I use tools like Grammarly, because obviously I make certain grammatical mistakes, which are due to the fact that my mother tongue is German, and we do say some things differently.

And so I will go over my writing with these tools. And there are AI checkers that will immediately flag that as AI-written, even though it's just corrected some parts. Or there are others that will not detect anything written with AI as AI. So what is the point of these tools? Except making students nervous because their professor might decide that their essay has been written with AI? Because as I said, I feel that AI-written stuff can be detected without any tools. You know, if you've seen a lot, you will see that somebody has actually written something with AI. You don't need an AI checker for that.

There have been stories here about the AI checkers for plagiarism. Any machine learning tool, any tool, is going to have a certain rate of false positives and false negatives. And the false positives are creating problems for students who truly didn't use AI to write it, but it's getting flagged. Their test results are getting thrown out or they're flunking a class unfairly.

And so, obviously, there are a lot of problems even with a machine learning model that's highly accurate by current standards. If you apply it across thousands or millions of people, it's going to cause pain for some people, especially when there's no effective way to appeal. But either way, it's obviously quite stressful for anyone whose work is flagged as AI when it's not.
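Note: The base-rate arithmetic behind this point is worth spelling out. The cohort size, honesty rate, and false-positive rate below are illustrative assumptions, not figures from the interview:

```python
# Even a "highly accurate" checker harms many people at scale.
students        = 1_000_000  # essays checked in a term (assumed)
honest_fraction = 0.90       # students who did NOT use AI (assumed)
false_positive  = 0.01       # honest work wrongly flagged (assumed)

honest_students = students * honest_fraction
wrongly_flagged = honest_students * false_positive

print(f"{wrongly_flagged:,.0f} honest students falsely accused")
# -> 9,000 falsely accused, by a checker that clears 99% of honest work.
```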

Yeah. And also, you have to realize that it's always like this race between the hare and, what was the other animal? I don't know. In this case, it's actually a race between a hare and a hare, between the AI checkers and the others who create these tools. Because, of course, they test their generated content with the AI checkers because they don't want to be flagged.

So it's always like this race, who's up front. And it's just like this pointless discussion, in reality. I think it may make more sense to, you know, educate students in proper use of these new tools. Because they will not go away. Instead of trying to catch someone plagiarizing whatever, because what's the point? I mean, really.

Yeah. There's a lot of discussion. I see a lot of Substacks, a lot of newsletters, about AI and education and the different tactics that teachers are using to try to teach their students how to use it effectively - not just using it to generate an assignment and then not really learning anything, but learning how to use it well. So there are some active and very interesting discussions about that.

Yeah. I can imagine, because it's not an easy fix in the end. But I feel like, as I said, there's an upside to AI, of course, in terms of productivity if you just use it for basic things. Maybe I'm wrong, but there's something I believe. I believe that people are actually interested in knowing things. You know, so if students just regurgitate stuff with AI, there's something else that's wrong. Because usually everyone just has fun learning new things if you do it right. But that's a whole discussion about the education system.

Right. Yeah, it's interesting to see that people talk about using AI tools for different things, and the whole learning process. But there's obviously a lot more aspects and impacts on society beyond education, even though that's obviously a really important one.

As far as society being impacted, what do you see as the main ones that concern you or that you feel we should be paying more attention to?

Well, I think that ties into the whole education thing. I think we need more education on critical thinking. Because what is happening, and this is also one of the dark sides of AI, is that these tools are used to generate a whole deluge of fake news and fake images. And, I mean, I don't know why, but some people don't seem to actually realize that these images, for example, are fake. And, you know, you go on social media, and then there's this obviously AI-created video or picture, and sometimes it's even tagged. And then you go to the comments and people applaud.

Like, just as an example, with the fires in LA currently, which are really devastating: yesterday on Threads, I saw an AI-generated video, you know, with heroic firefighters rescuing animals and stuff. And it was even tagged as AI, because it was from an AI creator. And then there were these people in the comments congratulating all these heroes who were doing so much good. And I'm thinking, what is wrong with all you people?

And that's why I feel that we need a way to make people more aware of the possibilities that AI has created in terms of fake videos, fake images, and fake news. And we have to teach people critical thinking, and understanding what can be real and what can't. And that obviously ties back into the whole education discussion.

There's something fundamentally wrong with the way we teach people stuff, and it has been for a while. And we need to adapt new strategies for this world we are creating.

Unless we feel this is okay, and we want to manipulate people anyway, which is possibly the answer to why everything is so bad with the education system. But, in my opinion, this needs to change really quickly because this is dangerous.

Yeah, there's been a lot of emphasis on using tests to judge whether or not people are learning. And there's a saying that “Any measure, once you use it as a target, becomes less valuable as a measure.” [link]

And so, if you are using tests as a way to measure, “Should these students be allowed to move to the next grade level or to graduate or to get into a college?” Then people start - you know, people do what they're rewarded for doing. If they're rewarded for getting a good score on a test, then they focus on getting the good score, and not on the learning which is supposed to be reflected in the scores.

And so it's, I think, a logical extension of that. When people are rewarded, they feel pressure to get the good score. And if using AI gets them there, and takes off some of the pressure, then that's what they do.

I think that's such a good point. In a time where, you know, the right answer will be at your fingertips within milliseconds, it doesn't make sense to condition people to answer questions correctly. And with tests, I think we really have to move the education system away, to a system where someone is willing to gauge whether someone understands something or not.

If we had a system where teachers had enough time and resources to actually engage with the students on this level, like, understanding what they understand, what they have learned, we would be creating something more useful than, you know, just testing if someone has the right answer, which, you know, sooner or later will be easily circumvented in some way. You know, there will be a technology to just, you know, ace tests. A pair of intelligent glasses or contact lenses or whatever.

Yeah. Those are really good points.

So is there anything else that you'd like to share with our audience, or to say about AI and when you use it, what you think it's good for, how it benefits society versus the harms to society that we need to look out for and address?

One thing I want to say - and I think this is true for every technology that's been launched - is: AI is not going away. Especially these large language models are not going away. Which means that even if you're skeptical or think they're not a good idea (and in some cases and some uses we have talked about, they're not a good idea to use), you should be familiar with the way they work and what you can do with them.

And you should just, you know, play around with them. Try stuff. Don't leave it to other people to explain to you what these things do. Engage with them and try to find out what works for you and what doesn't. Because it's going to be hard enough to live in this world, with all these tools that do things automatically around you, if you understand them. It will be even worse if you don't understand what's going on.

I feel like the whole AI LLM thing is a bit like one of those black swan events that Taleb talks about. We don't know what the next one will be. Where this will go, we have no idea. I mean, there's a lot of discussion, and there's a lot of people who have an opinion on this. But in the end we don't really know.

When the Internet was launched, nobody expected it to look like it does now. So I think it's important to understand that this technology will remain. It will develop into something, whatever that will be. It's important to engage with the fact that this is the world we live in now and acquire some skills.

And it's also important to be part of this discussion: what we as a society want to do with these tools.

  • Do we want to use AI in the military?

  • Do we want to automatically calculate credit scores with AI?

  • Do we want to decide whether people get health insurance based on AI, and all these things?

This is a discussion we have to have, because there is a lot of risk within doing that. So, yeah, learn about it and have a discussion with everyone around you. That's what I'd say.

Alright. Well, Ronke, thank you so much for joining me today for this interview! It's been a lot of fun and I appreciated hearing your insights on it. So thank you.

Thank you, Karen. Thank you for inviting me. It's been fun talking to you. And I've probably chewed your ear because I never stop talking once I start! But thank you for having me.

Oh, it's a great conversation - thank you!

Interview References and Links

Dr. Ronke Babajide on LinkedIn

Ronke Babajide on Substack



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Audio Sound Effect from Pixabay

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊

