6 'P's in AI Pods (AI6P)

🗣️ AISW #060: Jing Hu, UK-based AI and technology journalist

Audio interview with UK-based independent AI journalist, technologist, and scientist Jing Hu on her stories of using AI and how she feels about AI using people's data and content (audio; 50:13)

Introduction -

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Interview -

I am delighted to welcome Jing Hu from the UK as my guest today on “AI, Software, and Wetware”. Jing, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Hey, really nice speaking with you again, Karen. I really loved our last conversation. And when I got your invitation, I was like, “Yay, she finally invited me!” So I was really happy to be on here.

Hi, everyone. I am Jing Hu. Just before we start, a little bit about myself so you know who I am. I was a scientist, and then I became a technologist. I worked in the software industry for about ten years. Now I run an AI journalism newsletter called Second Order Thinkers. It's a newsletter for anyone who is curious about AI's impact beyond the hype.

That's great. Yeah. That's actually one of the reasons I started this interview series: I was reading and seeing so much hype, and not really hearing how it's affecting people on a day-to-day basis, or even, as you're saying, the second order effects and how it affects our broader system. So it's great that you're bringing attention to that.

Oh, great to hear. It's still quite new. There are times I still find myself adjusting the direction of my research in order to make it more approachable. So I guess today's interview is part of my effort to learn how to talk to a general audience.

Well, great. So tell us a little bit about your level of experience with AI and machine learning and analytics, if you've used it professionally or personally or if you’ve studied the technology.

Yeah. I'll just go back to my introduction. One thing I mentioned was that I worked in technology for ten years. For those ten years, companies paid me to identify flaws and ways to improve their software. So one thing to clarify about how we normally see AI: AI is still software. Right now AI is everywhere, so people might think AI is different from any other digital product. But actually, in large part, it isn't.

But there's also another part where it, of course, exceeds the capability of what a traditional product could achieve. Anyways, my responsibility was to find precisely where the minimum change could be made in order to yield the maximum revenue for a company. That's why they paid me a lot to do the work.

That's also why, when I do my day-to-day AI analysis these days, it feels really familiar to me. I know the ins and outs of how a software company thinks, how those products were built, what's operating underneath, and what kinds of vulnerabilities those AI products have. I think there's a big part of the AI industry that a lot of people have ignored, which is research. A lot of researchers are still working very hard to look into the vulnerabilities of AI and the potential harm AI could bring to society, to our industry, and even at the personal level.

It really comes back to my experience working as a researcher. You might not make the immediate connection between chemistry and AI, aside from when we start to talk about AlphaFold. But that's a story for another time, I guess.

But the thing is, because of my research experience, even now I still publish papers every now and then. I really enjoy the feeling of doing research, because I can just immerse myself in the studies that researchers have spent a lot of time doing. So I often get a chance to interview those scholars, because we kind of speak the same language.

The last interview I had the honor of doing was with a group of researchers who looked into how using AI tools impacts our critical thinking. That's one example of examining how AI really impacts our society or our personal abilities. In the end, I wrote up a piece called “Why Thinking Hurts After Using AI”. And I really love things like this: doing the research, doing the work, and being able to influence people, even to help them start to see how AI will change their lives.

I realized I didn't really answer your question about what level of AI experience I have. I normally don't like to call myself an expert, because I don't believe anyone is an AI expert, even if you listen to what Sam Altman says. Is he really an AI expert or not? I tend to think that he's the boy who cried wolf, but that's my personal opinion. Anyways, I know something about AI.

That's a good perspective, and I tend to agree. I find it very hard to call myself an expert, even when I know that I know an awful lot. Because the more you know, the more you know how much there is that you don't know.

Exactly.

And it's just such a big field. That's fair. You gave some good examples of what you've done and worked with. Some people have built machine learning models. Some people have used them for a lot of different purposes, and some people, like you mentioned, have done research on the systemic effects. So those are all examples of the kinds of things we're looking for. So there's no wrong answers.

Thank you.

So that's good. You had mentioned something about the impact on the commercial world and the economy, when we were talking before this, and you had mentioned a series of articles that you were writing about Meta. Did you want to mention that?

Yes. The research I do actually extends across a range of AI topics. I often find myself referencing technology history and comparing it with AI's evolution. So very often, when you read my articles, you will see me compare AI's development with the development of automobiles, or with the development of electricity a few hundred years back. That's interesting, and I do it often.

I also focus a lot more on AI's impact. I look at the commercial side of things, but instead of analyzing one AI company, I tend to analyze the overall impact on an industry. If, in the end, I get to one specific AI company, that's because, based on my prediction and my research, that company will stand out.

For example, I recently wrote a two-part series about how LLMs will drive advertising costs to nearly nothing. And all the advertisements will become ultra-personalized, to an extent that only a few tech giants, for example Meta, will benefit from. So that's why I then analyzed Meta's business model and why the current capability of LLMs would only benefit Meta rather than X or OpenAI. That's the only time I will dive into a specific company. Otherwise, I really focus a lot on AI's impact on our society.

For example, the critical thinking part I just mentioned. I've also done research and a few interviews on AI vulnerability topics. A lot of large language models have started to show the behavior of trying to get positive feedback from humans. Even though they were trained to help humans, they don't see it that way. They see what words they can say in order to get that thumbs-up.

Right now, hearing this, you probably think it's nothing. But, actually, it can become extremely harmful very quickly. There are two cases that I remember very well from one of the studies I reviewed.

The first case sounds like, “Okay, maybe this is just a glitch.” It was a virtual agent asking an AI to book a flight ticket. The AI came back showing a success message. However, the underlying message from the system was that the booking didn't actually succeed.

And that's one. The second one was slightly disturbing, actually, so I apologize for this. It was a virtual agent who suffered from substance abuse. The agent went to the AI for help, but instead of encouraging the agent to get off the substance, the AI actually told it to take more drugs in order to be relieved of the symptoms. So that's how harmful this kind of deceptive AI behavior can be, often without us noticing.

I don't know if you ever heard of this news, but it was sometime last year, and it's an ongoing lawsuit against Character AI. It was a mom who found that her son was plotting violence against her and the dad, because he was encouraged by AI. So unfortunately, I think you will just see more and more of these kinds of harmful behaviors start to happen in our society, the more we interact with these LLMs. So these are normally the topics I look into and the research I do.

Yeah. Those second order effects are really, really important. I didn't hear about the one having to do with encouraging someone to use more drugs. I did hear about the fairly recent case with Character AI where a young person was encouraged to kill themself. And that is obviously not something that we should be tolerating.

No. It's a really complex topic. Because if we stop this, we're blocking a lot of people from earning money. We're also blocking a lot of people from continuing their political careers. And we are blocking the interests of a lot of big enterprises. It's hard to see how we can stop these kinds of harmful behaviors. Maybe I'm somewhat more pessimistic when I think of topics like this. But I do hope I am wrong.

This has been a really good overview of what you're working on, thinking about, and researching, and it's all really interesting. And I'm happy that I found you, and I'm subscribed to you.

I'd like to hear if you have any specific stories on your experience as a user of AI-based systems and tools and features. So if you have any examples of a time that you've used it, why you used it, and what worked well and what didn't work so well.

This is a great question. And, actually, this is a question that will bring us back to a more positive note. So two things.

I think there are two big topics to talk about when we think about this. One is how we see AI as our colleague. And the other is that even when you work with a colleague, you have some kind of boundary, right? I think everyone just needs to keep that in mind: where is the boundary, and how do you set the boundary when working with AI?

So starting with seeing AI as your colleague, one thing I want to highlight is that because of what I do, I push myself to think critically, and I also encourage my readers to always think about the ripple effects, the second order thinking, whenever they come to a topic. So I am always careful not to use AI to an extent that would replace my critical thinking ability.

So when it comes to tools, I start to use words like ‘hire’ or ‘I work with’. But that's just how I see them. I used to run teams when I still worked in software. I think the important thing in working with teams is to know what kind of role to allocate to each of your team members.

So I see Perplexity as my research assistant. I hire Claude and DeepSeek to help me with my copywriting. I used to use NotebookLM to help me produce my podcast. But then, very soon after, I realized that there are just so many people using NotebookLM for their YouTube videos or their podcasts. In the end, whenever I hear anything produced by NotebookLM, I almost want to vomit, because that voice is just so recognizable that I can't listen to it anymore. So I came off NotebookLM - I fired it. Instead, I hired ElevenLabs to become my podcast co-host, to generate what I hope is a higher-quality podcast for my content.

You mentioned using Perplexity as a research assistant. Can you talk a little bit about an example of something that you would use it to help you research, and how that has worked out for you?

I don't know what it was about your question, but for some reason it suddenly brought me back to my time when I was still working in labs. It normally took us a few days to weeks to find and read through research that was really similar to what we were trying to achieve. And it was still the same until things like ChatGPT came out, or even until Perplexity came out. Once I was introduced to Perplexity, everything changed. It really brings a lot of efficiency to my research life. I really appreciate it.

It makes mistakes. I will come to that point later. But instead of spending days just reading through references and trying to find the most relevant studies for a topic I want to write about, it only takes a few minutes to get a list of relevant topics or research, and I can just quickly go through and validate the list it provided.

One thing, though, that I didn't notice right away: I think this is how we work with AI tools. We all develop our own habits working with them. So one habit I've developed when I work with Perplexity is to make sure I verify every single reference provided by Perplexity, and to make sure I only select and read through the ones that are really relevant.

Because even Perplexity, a lot of times - if you don't mind, I'll quickly talk about the potential underlying logic - is still doing the same thing Google has been doing for the last twenty years. It still relies on the H1 and H2 headings in the HTML of each website. So a lot of times it can be wrong. It can be biased. It's really important for us, as humans, to still verify the results that come out of ChatGPT or Perplexity.

But, anyways, this kind of curation and verification loop really gives me a huge efficiency boost I could never have imagined before things like Perplexity were released.

Great. Yeah. Thank you for explaining that. And you mentioned earlier that you worked in chemistry. You've probably kept up with what's going on with using AI-based tools for chemical synthesis and for coming up with new compounds and things like that.

Yeah. That's a very interesting topic. It's actually something I'm writing about. I'm writing an article called “The Economy of Imagination”. A lot of times, this is the fun part about us humans: if we want something and we're happy with the result, we give it a positive name. And if we don't want something, we give it a negative name. So, actually, hallucination is exactly that. It's one side of the same coin as the creativity that you often see from AI. When an AI spits out a result that you really like, you think the AI is being creative. But when it spits out a result you really don't like, you think it's hallucinating. It's doing the same thing.

And this is exactly something that chemistry and biology research benefit from. It is the principle behind AlphaFold. When AlphaFold was created, the researchers provided strict chemical boundaries to the AI - the limitations of how we know chemistry works.

So whenever we enter an amino acid sequence or a DNA sequence, the AI will come up with several random results, but at least results that are based on something. That's why products like AlphaFold save years of time for researchers – from doing random research where they wouldn't know what would come out, to “Oh, there is already a direction from the AI. I just need to double-check the direction.” So that's why you often see researchers, especially those working in the biology or biochemistry fields, talk about how much time AlphaFold really saves them. But it's really a huge topic. I don't know if I answered your question.

Yes, you did - that's a great overview. It's not an area that I'm personally familiar with, so I love hearing about that.

So you mentioned before, I think, about the way that we use or hire these AI tools to do jobs for us. Do you have any advice for people on how they can use tools effectively?

Personally, I really hate people setting a framework for me to follow. So I will just talk about the underlying logic before I throw out three points for you to follow.

I think the logic is really simple. Just think about the time when you work with a colleague. How do you normally work with your colleagues? You enter a company. You are assigned certain responsibilities. And you have a few stakeholders, and you know who will be your main contact, who will be your secondary contact, and what their respective responsibilities are.

And that's how you work with a human. I think it's exactly the same when it comes to working with AI. It's knowing what your responsibility is, and what goal you want to achieve when you work with AI.

So if you think back to that scenario when you just started working on a team, you think exactly the same way when you work with AI tools. You need to start by understanding your responsibility, in the sense of what goal you're trying to achieve with AI. And then, of course, knowing what other people are good at - in this case, what AI tools are good at. But you also need to know what they don't do. For example, your sales colleague wouldn't start coding. It's the same principle: an AI that might be good at research won't necessarily be an AI that is good at copywriting.

So you need to know the limitations of each AI tool you interact with. It's really about knowing when to intervene. Each AI tool has its limitations, and when you get to the point where you're happy with the result but not 100%, that's a great time to apply your critical thinking ability - to still be the great human that you are - and intervene.

One way that you had put it previously was that the most important skill isn't learning how to use AI tools. It's knowing precisely when to stop using them. I liked that.

No, that was a great summary, actually. I'll leave that with you. Joking.

They were your words 🙂

I've been seeing a lot of people interacting with AI tools in the last few years, even when I was still leading the team. That's probably how I came up with those three essential steps:

  • knowing what they're good at,

  • knowing what their limitations are, and

  • knowing when to intervene.

Many people's downfall is misunderstanding what AI is good at. Hence, it becomes difficult for them to know what its limitations are and when to intervene.

I've seen a lot of people, when they interact with AI, constantly prompting it to come up with that perfect result they're looking for. This is not just me. It's also a phenomenon observed by a lot of researchers, especially one group I spoke with just a few months back.

They saw exactly the same thing when novice programmers started to interact with AI code assistants. What happened was, when those novice programmers started to work on their code, they would already have things like Copilot popping up in the background, recommending things they didn't even recognize. But they would just say, “Yes, I'll take it,” without even knowing what it was. And that's actually what's happening now.

So it's really important for every one of us who uses AI tools to recognize when this pattern comes up. I'm not saying I'm already perfect at this practice. A lot of times I still end up going, “Oh, what was I doing for the last ten minutes?” I just kept prompting the AI to do the same thing I knew it would not achieve. I was just being lazy. It's important for us to recognize when this kind of laziness happens and to intervene.

There was one thing I said to a young professional: I think the most important skill these days isn't learning how to use AI tools. Most of us know how to use them. It's knowing precisely when to stop using them. Without a clear boundary, these AI tools become efficiency traps rather than accelerators of your workflow.

Yeah. That's a great insight, and I appreciate you sharing that little story with us. So this has been a good overview of how you use AI-based tools and how you work with them. Have you avoided using AI-based tools for anything? And if so, can you share an example of when and why you chose not to use AI for that purpose?

I never start out by refusing to use a tool. My default position is that I'm going to give it a try. I guess that's just who I am. I'm typically really curious about any new digital product.

And we tend to hang out on a website called Product Hunt. If you don't know the place, it's something like LinkedIn but for new digital products, and that's where I found my inspiration when I still worked in software.

So I wouldn't say I avoid using AI tools. But that's also not quite accurate. I've found most AI tools aren't mature enough that you can actually rely on them. And when I say most, I mean all of them. I was being quite conservative when I wrote an article last year; I said in the title something like “99% of AI tools aren't ready”. But if I could, I would change my statement: 100% of AI tools aren't ready.

The hot topic of the day is image generation from ChatGPT. It's really cute when you first upload your image and it comes up with some Studio Ghibli style that you would never be able to achieve without this kind of large language model operating behind the scenes. But when you start to look into the details, there are just so many little things to be tweaked. Not to mention, they are commercially worthless and harmful to most people in the creative industry.

This is just one example. There are also examples like digital twins, or the customer support services that a lot of companies are trying to adopt as part of their digital products. I think large language models' usefulness will be very much limited to their creativity slash hallucination. We should see hallucination as a feature. But with this, we should also be aware that it comes with huge harm when we try to get accuracy out of it, because that is just not what it's good at.

With the example of Studio Ghibli and everything that's going on with that, there are a couple of things there. One is, obviously, they must have trained on the copyrighted works from the studio in order to be able to recreate new images in that style.

Yeah.

And so that's obviously a concern. Another thing I think people don't realize, and someone pointed this out in a recent article, is that everybody who is uploading a personal family photo and having it transformed into this style is giving them new content by the millions: personal photos that they didn't have before, which they are now free to use however they want.

And that's something I think people just don't think about. It's like, “Oh, this is really cool. Let me try it with this picture I just took of my family member.” You know what? You just gave them permanent license to use that picture of your family member.

Yeah. That is the scary part, really. Though, unfortunately, I did the same thing myself. So the only thing I can hope, knowing the underlying architecture of these kinds of neural networks and large language models, is that there is no way to link 100% of an output back to an input. So hopefully, you won't be able to recognize the original me. It will only create a creepy or weird copy of me. For some reason, I hope that is better. At least just for my own comfort, I guess.

So I want to ask: you had mentioned that you felt originally 99% of AI tools weren't ready, and now you would say it's 100%. What is your definition of ready? What would a tool have to do, or to be, in order to be ready in your opinion?

The definition, by product management principles, I guess, is that a product only qualifies as ready when it's solving a genuine user problem. When we interact with ChatGPT, we hardly realize it, but we often need to prompt and reprompt and reprompt and reprompt to get a desirable result. That's not how a good product should be.

Just think about the fridge in your place. You don't have to keep flipping switches for your fridge to work and do its proper cooling job every evening, so that you don't wake up to a fridge full of spoiled food. That's how a mature product should be. A mature product is something that operates in the background; you don't even notice it exists. If you use that as the standard and come back to your current AI tools, just tell me which AI tool you've ever used is that mature, when you're constantly prompting it in order to get the result you want. So that's my standard.

So it's a combination of accuracy and reliability and some other things, what I tend to think of as architectural qualities of a product.

Yeah.

Okay. That's fair enough. Alright. So I want to talk a little bit about the concern about where AI and ML systems get the data and the content that they train on. We just talked about one example, where OpenAI is sourcing new images by offering these Studio Ghibli features, but there are other ways. A lot of times, companies will use data that we put into online systems or that we publish online with our writing. And companies are not always transparent about how they intend to use our data when we sign up.

So I'm wondering how you feel about companies that use this data and content for training their AI and ML systems and tools, and what you think about whether they should be required to get the Consent from, and Compensate, and Credit the people whose data they want to use for training – or what some people call the 3C's rule.

I think their behavior is just outright theft. There is no way to justify what they have been doing. I believe the 3C's rule is the only way to work with content creators. Unfortunately, I don't think it will ever happen that LLM companies follow the laws.

Or even worse, they just bend the laws to whatever they want. That's exactly what they've been doing, regardless of which president is in power. They try their best to lobby and to make sure the law gives them exemptions from copyright.

That said, there are so many powers in play and different structures to think about when we consider this question. First, the genie is already out of the bottle. There's no way for us to put it back. It won't work if we only regulate one LLM, or only one country.

Let's say America today started to enforce the existing copyright laws. What would happen to those large language model companies? They would go broke tomorrow. And what would happen then? The Chinese AI companies or the European AI companies would win out, and America would be left in a technology void for the future, and for the next generations.

It's really hard to say what's right or wrong, or good or bad, because if we sacrifice one thing, there is a ripple effect that comes with it.

Second order effects, right?!

Yeah. Exactly. I really condemn this kind of behavior from any LLM company. I don't see any way to stop them from doing it. Actually, I take that back. There is one way - for us to find a better approach to achieving artificial general intelligence, instead of trying so hard to train a large language model into artificial general intelligence. I think that path has already proven itself unsustainable and highly unlikely to get there. It's really resource-intensive. It's hurting a lot of people along the way. The only people who make money are probably the executives and some highly paid AI scientists, but that's it. So I seriously hope a group of researchers in the tech community will very soon find another path to achieve AI.

Okay. Yeah, that's a good set of insights there.

I think it was too deep.

No, no, it's good!

It’s a little bit sad. Yeah.

Deep is good. And, you know, ‘sad’ is honestly kind of realistic about where we are. We're not really in a good place right now, with the way that people's data has been used and the way that creators have been treated.

But I guess I'm still optimistic that we can find ways out of it, that companies will respond to market forces, and we are the market. So if we don't use or pay for tools that are unethical, and we look for tools that are ethical and support them - I feel like it takes a lot of snowflakes to make an avalanche, but that doesn't mean we shouldn't bother trying to be snowflakes.

Yeah. We're going to need a lot of snowflakes!

Yep. Alright. So one of the concerns that I've heard a lot from people who use AI-based tools is that they aren't always aware of the unethical practices companies are following, or how they've stolen data. As you said, it's theft, basically. The companies aren't transparent about where they get their data from, how they get it enriched, and whether they treat those data enrichment workers fairly; or about, for instance, the impacts on society. Another second order impact, I guess, would be the environmental consequences and such.

But they feel like companies have not been transparent about any of this. And I'm wondering, of all the tools that you've used, do you feel like any of those tool providers have been transparent about where their data came from, or about how they've developed their tools?

You know what's funny? When I was reading the questionnaire you give to your interviewees, you noted how deeply LinkedIn hides their privacy settings. I promptly went back to check my LinkedIn privacy settings. I was like, “Hmm, did I turn that off?”

Again, it goes back to what's best for those companies when they build these products. What's best for them is to hide these opt-out tick boxes as deep as possible, so you can't find them. Most people won't notice them. That's one.

Second, we don't have enough snowflakes who really care about this, enough to protest or to chip in to whatever initiatives we might have ongoing. So I think the force to get companies to do the right thing is small. Aside from having a completely different technology to achieve AI, another way is to make these behaviors so expensive that companies are better off making it obvious to people that their data is used to train AI.

To achieve that, though, we need help from regulators or from politicians who are willing to do it. But if you think about it, for them to be reelected, they need sponsors. Unfortunately, the backers are those with money, and that points to the tech companies. So it's kind of a self-reinforcing loop. But, again, call me a pessimist. I do hope there is a way to solve this, and I do hope this will be the minimum thing that tech companies could do for people like us who still care about our data privacy.

Yeah. LinkedIn is a great example because in a way, I feel like it proves to some extent that regulation and policies can have an effect. Because if you remember, when that happened, all of us in the US, and some other countries that didn't have any privacy protections, we were opted in by default for everything that we've done up till now, and there was no choice about it. But that did NOT happen to the people who were covered by GDPR. They were not opted in.

So LinkedIn obviously had the power not to opt us in. They simply chose to do it where they could get away with it. And the regulations and policies in Europe protected them - I think also the Digital Markets Act; one of my guests was saying that it has had as much of an impact as GDPR on this. And so it's obviously possible. To me it says that, yes, it sucks that we did not get a chance to protect our content, but it shows that content can be protected. I mean, that's assuming that LinkedIn isn't just quietly using it anyway.

Yeah. I think it is possible to an extent. It's funny that you just mentioned that. I think there was a policy process that started late last year, and it was still at the stage of gathering responses from relevant parties. The UK regulator was trying to see if creators are happy with an opt-out approach instead of opt-in. And, of course, all the creators opposed this. I don't know what the progress on it is. We keep making fun of “the EU regulates and China replicates”. So that's how we see the world of innovation going. I guess whatever country starts to regulate really hard just gets classified as, “Oh, you're going to become Europe now,” with no power and no say in technology whatsoever.

Yeah. It's a catchy saying. Sitting here in the US, I don't perceive Europe as not being innovative, though. We've seen a lot of technical innovations come from Europe, and I don't know that that is necessarily stopping.

The other thing, regulations don't have to be anti-innovation, I don't think. You set some guardrails, and it gives people clear expectations and ways to operate, and you just operate within that. And so it's not necessarily anti-competitive or anti-innovation to have regulations and policies.

The other thing is, you know, you talk about second order effects. Setting guidelines that protect creators protects the entire ecosystem. Right? Because these companies are looking for sources of writing, sources of images, and sources of content that can be used to improve the models. If musicians are suddenly disincentivized from continuing to create new music, then where's the new content going to come from to train the tools and make them better?

Oh, that's a good point. Unfortunately, I guess a lot of large language model companies are trying to solve that with content created by LLMs. They're hoping this will resolve the potential problem you just mentioned. If that is what they have in mind, they still have a long way to go, because from whatever we've seen so far, AI-generated content is just not high-quality enough. And AI-generated content is always going to repeat whatever the AI was fed. They will be creating more or less the same content as what they already know.

This is an interesting concept that I've talked about at some point: the density of information value. It's a concept that's worth expanding on. You need to see a piece of information in terms of the density of the value it creates. So let's say today you have a piece of news that only you know about. That makes it a secret, and makes it so valuable. But if you duplicate that piece of information, then the value is suddenly cut in half. And if you keep replicating it, the density of the value becomes so diluted that, in the end, it becomes useless.

So what's really important is to see which party will create the highest density of information value, so that the other parties have no choice but to pay for it. Does that make sense?

Yes. Yes. It does. I was just thinking about successive dilution of a substance. It's like diluting a chemical, right?

Uh, yeah, in some way, really similar, yes.

So you were talking before about opt-out policies, and I haven't heard lately about what's going on with the UK. I know there was quite a bit of activity on people saying, “Look, we need to speak up in the UK and stop them from giving away creators' content.” There was a lot of discussion about that.

If your audience is interested in this topic, the best person to go to for this kind of deep dive is Graham. He excels in content production and content rights, especially in Europe and in the UK. So it would be a great topic to discuss with him.

Yeah. I follow Graham, and I'll definitely drop a link to his Substack in the interview notes for our readers, so that's great.

So last question, and then we can talk about anything else that you want. Public distrust of the AI and tech companies has been growing, partly because I think we're realizing what they're doing with our data that we didn't really agree to or consent to, or that we have concerns about. What would you say is THE most important thing these companies would need to do to earn, and then to keep, your trust? And do you have specific ideas on how they could do that?

Or do you think it's not even possible?

I am trying my best to see if I can put any positive spin on it. I don't think we have the incentive structure for any commercial AI company to care about users. So that's my short answer. If you want a long one, I will continue.

Yeah, no, I think you're right. It's part of the systemic or maybe ecosystem level effects, that companies really only do what they are rewarded for doing. And if the incentives or rewards or regulations aren't there to make them do what we consider to be the right thing, then most of them aren't going to do it.

Yeah. It's really something where we need to come back and look at the economic structure of the current AI industry. If you look at companies like OpenAI and Anthropic, they operate at a loss. The only way they can get more money to sustain the business is to raise more money. That's why you hear a lot of big promises from CEOs. Like Sam Altman keeps saying that they already know how to build AGI. And I was like, if you know how, then why don't you build it?

I mean, yes. Maybe he still has the integrity not to release it even if they have built it, but no one knows what's true now.

Anyways, aside from Meta and Microsoft, I guess - because Microsoft doesn't really depend on its AI Copilot tool, and Meta doesn't really depend on its AI tool either; not to mention Meta has a huge ecosystem to best utilize this creativity slash hallucination from AI without seeing that much negative impact on its usage. So I think there are a few companies that are better positioned to develop AI in the long run without sacrificing too much of their integrity. But it's really hard in terms of the whole AI ecosystem nowadays.

And then you see people trying to raise money. And all that money is used to train more sophisticated models, or to pay for data centers. In return, people can prompt things like, “Hey, give me a Studio Ghibli version of my family picture” or “Hey, tell me where I should go tomorrow on a date with my girlfriends.” Those aren't the things that really create a sustainable commercial solution or a sustainable economy for these LLM tools in the long run. So until we find ways to stop them at the macro level of the economy, I don't think the micro things we do will really make any impact on those companies.

Yeah. I think that's a fair summary, so thank you for sharing that.

This has been a really fun interview, Jing. Thank you so much for accepting my invitation and joining me on this interview. Is there anything else that you would like to share with our audience?

If you don't mind, I'll advertise my newsletter.

Of course!

I really enjoy interacting with readers. Lately, I started an initiative to have a weekly research hour with whoever would like to join. Anyways, that's a side topic.

So for whoever is curious about AI's impact beyond the hype, as you heard us discuss in the last hour, please do subscribe to my Substack. It's J W H O dot Substack dot com.

Great. Thank you. Are you using subscriber chat for the research hours, or how are you doing that?

Ah, good question. I haven't figured that part out yet. I only have the idea.

Oh, it's a great idea. It would be really interesting to be able to join that. Of course, coordinating time zones globally is always a challenge.

I think 40, even up to 50%, of my subscribers are in North America. So I would just largely favor North American time zones and see what happens. Probably in the afternoon, so I can at least cover America and Europe.

Makes sense. Well, great.

I mean, morning your time.

Yes, so that sounds great. I'm going to be on the lookout for that. So thank you so much, Jing.

I look forward to you joining us. Thank you too. And thank you very much for inviting me to this chat. What's important to me is to speak with hosts like you, where we can have a proper, genuine conversation.

Yeah. Well, this has been a lot of fun, so thank you so much for accepting my invitation. It's funny. I say, and I mean it, that if someone is interested, please just DM me and say, “Hey, I want to be one of your guests.” But so many people feel like they really need to have that personal invitation to say, “Yes, I want YOU to be my guest.” I'm glad it worked out. Thank you so much.

I guess it's really a culture thing. I try not to be too pushy. Probably what I learned from the British in the last ten years is, like, “There is a hint. If you didn't catch my hint, it means you don't want it.” But I guess I will need to start adjusting, then, because I'm moving.

Yes. Well it's been great talking with you. Thank you so much.

It's always good chatting with you.

Great. Thank you, Jing!

Interview References and Links

Jing Hu on LinkedIn

Jing Hu on Medium

Jing Hu on Substack (2nd Order Thinkers)

Suggested article by Jing Hu: “What Altman and Rockefeller Have in Common?”



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to

for the “Created With Human Intelligence” badge we use to reflect our commitment that all content in these interviews will be human-created:

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊

