📜 AISW #058: Grace Shao, Hong Kong-based AI consultant (AI, Software, & Wetware interview)
An interview with Hong Kong-based AI writer and consultant Grace Shao on her stories of using AI and how she feels about AI using people's data and content
Introduction -
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview -
I’m delighted to welcome Grace Shao from Hong Kong as my guest for “AI, Software, and Wetware”. Grace, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
I started my career as a tech and business reporter. Over the years, I’ve made a few pivots, but I have come full circle now, running AI Proem and focusing on AI and tech developments. With the increasing information gap between Western audiences and China, I spend a fair amount of time trying to help decipher China’s AI and tech space in a nuanced way for my English audience, which is mostly investors and industry professionals.
I am a tech and AI analyst and writer. In addition to writing the newsletter AI Proem, I often write about AI, technology, and corporate governance for Fortune, The Diplomat, Diplomat Risk Intelligence, Economist Intelligence Unit, and FT Chinese.
Beyond my editorial work, I am the founder of Proem Communications, where I work with AI and tech companies on their international positioning or media training needs. I have advised clients such as Lenovo, Ant Financial, Kuaishou, PayPal, and Alibaba, as well as AI companies and growth-stage consumer tech and biotech firms.
I grew up in Canada and Mainland China, but I have also lived and worked in Singapore, the US, and Hong Kong. I enjoyed traveling a lot before the kids came and was fortunate enough to visit over 45 countries before settling down to the lifestyle I have now.
I’m curious about the “Proem” name you chose for your newsletter and your company. It’s not a commonly used word in English, so many people in our audience may not know it. (The dictionary definition says it means ‘preliminary comment’ or ‘prelude’.) May I ask, how did you come to choose the “Proem” name for your newsletter and your company?
I really liked the word “preface” and wanted to use that at first, but it is quite commonly used by businesses; in fact, there is a popular coffee shop in Hong Kong called Preface, and I think I also found an IT consulting firm under that name on a Google search. I still liked the simplicity of a single word, so after thinking about it for a long time, I looked up words similar to “preface” and found “proem.” I was a bit worried that the word isn’t commonly known, but now I kind of like that; it’s a bit geeky and nerdy. And people ask about it, just as you have, so it’s an opportunity for me to tell the story.
It means an introduction. For the newsletter, AI Proem, I wanted to show people that I was also on a learning journey, and that they could follow along to learn the basics and dive deeper as we progress. For the consulting side, Proem Communications, I thought it flowed well too: the work I do is mostly advising companies on how to prevent or mitigate crises and conflicts, so it’s good to be prepared and “know the basics”.
That’s a cool origin story! Thank you for sharing it.
What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I cover the development of AI mostly from a business perspective. I combine my experiences working in tech strategy and journalism and bring analyses of industry trends, business models, and the impact of tech development on society. I don’t do many product reviews, but I’m always open to learning more about new products on the market and new vertical integrations of AI.
Do you use any AI-based tools personally, outside of your work?
Not as much; I’m quite mindful of the excessive energy each query uses up, so if a simple search can get me the answer, I still try to use search engines.
Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
I use Perplexity, OpenAI Deep Research, DeepSeek, and Kimi for my research. I admit, they’ve made research so much more efficient. However, I am still quite wary of the factual accuracy of some sources the chatbots cite, so I usually double-check the original source and conduct traditional desktop research when writing my pieces.
I wasn’t familiar with Kimi - that’s interesting to learn about. How do you decide which of those four tools to use, and when, for your research?
Kimi is made by a Chinese AI company, so I use it when I research local Chinese material or need help with translation copyediting.
I still find Perplexity the best at research and I often double-check sources and fact-check with ChatGPT. DeepSeek is just another alternative, but I don’t use it as often.
I have not been able to find a really good image-generation tool yet, so if you have any tips, please send them my way.
Unfortunately, there aren’t yet any image generation tools that I’d be comfortable recommending, based on the ethics of how they’ve been developed! But if I hear of one, I will definitely let you know.
The ‘most ethical’ at present seem to be Adobe Firefly and NVIDIA Generative AI (link). I’ve read positive comments about Canva’s ethics, but their partnerships and their acquisition of Leonardo have raised some concerns (link1, link2, link3; here’s one guide to using Canva legally).
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I think I try not to let it write for me. However, I do occasionally seek editorial advice. I don’t depend on AI tools for full-text drafts, but I sometimes prompt the bot to tell me how I could improve the flow of a few paragraphs, and I find that quite useful. It’s almost like having a copy editor, and if you prompt the programs right, you can even have an editor check whether you’ve missed key arguments, or challenge you. Essentially, it’s having an editor at your disposal.
I have obviously had editors in my full-time journalist roles, and they were wonderful at 1) offering guidance on points I might be missing, 2) providing fact-checking, and 3) copyediting grammatical errors. The beauty of a human editor is in the human interaction, the ability to talk through nuances or misunderstandings. The challenge is that they may sometimes carry biases and influence your coverage. An AI tool can, in many ways, do those three tasks, but it doesn’t challenge your judgment as much, I guess. In a way that’s beneficial: I feel like my editorial decisions are then solely mine. On the other hand, it’s always good to have your biases checked.
That’s a good insight - thank you.
A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get consent from (and compensate) people whose data they want to use for training?
Everything is on the public internet, so it is very hard to control; when we join these platforms, we’ve all agreed that they can use our data. That said, as a trained journalist, I always say to give credit or link to sources if the information is not first-hand.
Fair enough :)
As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
Not the technicalities of how they obtain the data.
As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Not familiar with any.
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I think it would be immensely helpful, especially for the AI-native generation, if companies could be more transparent about how they collect data, how they use it for training, and how that may shape the information their models produce (how it could be biased). What I worry about is that the younger generation that is fully AI-native won’t have the eyes to spot AI-generated content vs. human-written work anymore. Obviously, as the technology advances, that will become more challenging. Still, if companies disclose their models’ biases or even faults, it would at least help people stay alert to potential misinformation or misuse of information.
Do you have any ideas or recommendations on how we can help our next generation become more savvy about detecting AI and using it effectively?
I’m still struggling with it and trying to learn how to teach our next generation about this, especially as content becomes more “human-like”.
Anything else you’d like to share with our audience?
My goal is to provide insightful analyses of new technology, apply my knowledge of consumer tech strategy to frontier sectors, and use my bilingual/bicultural background to bridge the gap for you, the audience. I want to meet more of you online and offline, build relationships and understanding, and maybe find some *spark* in unexpected opportunities.
There is so much noise out there. I aim to filter out the noise for you and provide thoughtful insights that could be valuable for your decision-making, whether you’re an investor, a researcher, a policymaker, or an entrepreneur.
While I can’t predict the future, I work hard to analyze current developments and to bring you my connections and learnings from across Asia. So, join my learning journey by subscribing to AI Proem!
See more on why I started AI Proem here.
Grace, thank you so much for joining me on this interview, and I wish you continued good luck with your newsletter and consulting business!
Interview References and Links
See Grace’s recent video interview with Alex Kantrowitz on Big Technology Podcast:
Grace Shao on LinkedIn
Grace Shao on Substack (AI Proem, Proem Communications)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)