📜 AISW #087: Jean Gan, Singapore-based Senior Legal Manager
Written interview with Singapore-based Senior Legal Manager Jean Gan on her experiences using AI and how she feels about AI using people's data and content
Introduction - Jean Gan
This article features a written interview with Jean Gan, a 🇸🇬 Singapore-based lawyer. Jean Gan is a senior in-house legal counsel with 15 years’ experience across APAC, specialising in contracts, compliance, and cross-border transactions. She is completing a Global MBA and pursuing a PhD in Law focused on AI and dispute resolution.
Beyond her corporate role, Jean is the Founder of Global Legal AI, a platform that provides research, tools, and insights to help professionals navigate AI laws and practise responsible AI governance. The Global AI & Law (GAIL) Network is the community arm of Global Legal AI that hosts panels, expert sessions and discussions to connect professionals and drive shared learning.
On LinkedIn, Jean is the founder and leader of AIgnite Women, an international initiative driving collaboration and inclusion in the AI governance space. She also runs How to Legal AI and Beyond the Clauses, where she shares practical insights on legal strategy, AI, and the future of work.
In this interview, we discuss:
using AI tools to help craft clear, concise materials as an in-house counsel
how the prevalence of AI overviews is impacting law firms and other companies that rely on internet traffic click-throughs
why she supports the 3Cs (consent, credit, and compensation)
her ideas on what AI companies can do to build transparency and trust
and more. Check it out, and let us know what you think!
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack. If it doesn’t fit in your email client, click HERE to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Jean Gan
(1) I’m delighted to welcome Jean Gan from Singapore as my guest on “AI, Software, and Wetware”. Jean, thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.
Jean: Hi Karen, thank you so much for having me. I’m really excited and honoured to be part of this conversation.
I wear a few hats. I’ve been a senior in-house legal counsel for about 15 years, and I’m currently completing my MBA by the end of this year. I’m also pursuing a PhD in Law at Leicester Law School, focusing on AI governance and dispute resolution.
Beyond my legal career, I founded Global Legal AI and AIgnite Women. Global Legal AI focuses on responsible AI, compliance, and governance, where I speak, host, and moderate webinars - from panel discussions and solo talks to “Ask Me Anything” sessions. AIgnite Women centres on advancing women’s leadership and equality in tech governance.
At the heart of what I do is helping organisations navigate the intersection of law, ethics, and AI in a practical way. I also started How to Legal AI, a platform designed to help legal professionals use generative AI in their work smartly, efficiently, and responsibly.
Those sound like excellent initiatives for helping people use AI constructively and ethically in law work!
(2) What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
Jean: My experience with AI is both professional and academic.
At a professional level, I use AI in my work as an in-house legal counsel to improve efficiency in contract review, compliance monitoring, and risk analysis.
Academically, my PhD focuses on AI regulation, arbitration, corporate legal frameworks, and access to justice.
Of course, I’m always mindful of the ethical issues and potential errors that come with using these tools. On a personal level, I use AI tools daily, from drafting assistants to analytics dashboards and content creation platforms, while maintaining a cautious approach toward their risks and limitations.
(3) Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
Reference for this question: “But I don’t use AI”: 8 Sets of Examples of Everyday AI, Everywhere
Jean: Yes, AI overviews on Google. They're easy to access and appear right at the top of your search results. However, there's growing discussion about how they might be affecting the content and search industry. When AI overviews summarise information instantly, users are less likely to click through to the original websites. This means publishers, law firms, and content creators could see less web traffic and fewer opportunities for visibility, even when their work informs the AI's summary. It's a shift that challenges how information is shared, credited, and monetised online.
Still, people value convenience and speed, especially in a fast-paced world.
For in-house counsel, using AI tools can be particularly useful because business leaders don’t want long legal memos. They want sharp, clear, and commercially focused advice. Visual tools like tables or diagrams make complex issues easier to grasp and communicate.
(4) If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Jean: I avoid using AI for sensitive drafts or confidential contracts, especially when I’m unsure how the platform retains or reuses data. If the privacy policy isn’t transparent, I always use secure internal tools instead.
It’s also important not to outsource thinking. There’s a growing concern that over-reliance on AI could weaken our critical thinking skills. The substance should still come from you, while AI handles the labour-intensive parts such as collating data, summarising information, or organising it into tables or visuals.
(5) A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get Consent from (and Credit & Compensate) people whose data they want to use for training? (the “3Cs Rule”)
Jean: I strongly support the 3Cs Rule: Consent, Credit, and Compensation.
If an AI model benefits from someone’s creative or intellectual contribution, ethical governance requires that the individual’s agency, attribution, and economic value be recognised.
In practice, this means organisations should obtain clear consent before using data or creative works, provide proper acknowledgment where human input shapes outcomes, and ensure fair reward structures for contributors. These principles form the foundation of transparent and trustworthy AI ecosystems.
Without such safeguards, AI risks deepening inequality by concentrating benefits among technology owners while diminishing the value of the human expertise and creativity that make innovation possible.
(6) As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
Jean: Transparency is one of the weakest points in the current AI ecosystem. Most users don’t know where training data comes from, and disclosures are often vague or overly technical.
As someone researching AI governance, I believe explainability should go beyond algorithms and extend to data provenance. Users deserve to know whose work trained the model and whether it was done lawfully and ethically.
(7) As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Jean: Before GenAI became popular through OpenAI and ChatGPT, AI systems had already been around. Most of us have contributed to AI datasets, often without realising it. Social media posts, biometric scans, and behavioural analytics are routinely used for training. Even at airports or online verification portals, facial recognition systems collect and retain data by default. The problem is that consent is usually implied rather than informed, which weakens individual control.
Just take a look at your phone. If you’ve noticed that advertisements or sponsored posts start appearing with content similar to something you recently discussed with a friend, it’s quite obvious what’s happening.
(8) Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
Jean: Yes, I’ve seen my data used for AI training without clear notice.
Reddit sold user content to OpenAI and Google in 2024 without direct user consent or real opt-outs. Deleting posts didn’t undo it.
LinkedIn later updated its terms to use profile and post data for Microsoft’s AI tools, with opt-outs buried in settings and unclear for many users.
Google also broadened its 2024 privacy policy to include Gmail and Docs data for AI use, offering only partial controls that limit functionality if turned off.
Across all three, consent felt more like a checkbox than a real choice.
(9) Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
Jean: While I have not been personally affected, I have seen professionals experience phishing, impersonation, and data leakage due to weak data governance. Once information is copied or sold, it cannot be retrieved. Data risk is cumulative and irreversible. Organisations must treat data as a lifecycle asset, accountable from creation to deletion.
(10) Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
Jean: Trust depends on transparency, accountability, and inclusion. AI companies must:
Disclose data sources and governance practices clearly.
Embed ethical oversight and independent auditing into model development.
Include multidisciplinary voices such as legal, ethics, social science, and user communities in decision-making.
Public trust is not a communications exercise; it is the outcome of sound governance.
(11) Anything else you’d like to share with our audience?
Jean: AI is redefining how law, business, and leadership operate. In my view, there are currently three camps: #YesAI, #NoAI, and #IDontKnowAI, the last made up of those who prefer to wait and see what others are doing and how this develops. Nobody is being forced to use AI, although some organisations are already making it part of their workflow.
Over time, AI skills will, in my opinion, become one of the key requirements employers look for in many industries. But, and it’s an important but, if you choose to use AI, it should be used smartly, efficiently, and responsibly.
My focus is to ensure that this transformation remains accountable, equitable, and human-centred. I currently have a full plate: I am finalising several playbooks to help legal professionals navigate the AI landscape, publishing insights on LinkedIn, writing academic articles and guest pieces, and preparing my first book on AI governance and law. Alongside this, I continue working with regional professionals, drafting white papers on corporate AI governance, and building the Global Legal AI and AIgnite Women networks.
We’ll look forward to your book on AI and governance, Jean. Thank you so much for sharing your AI experiences with us!
Interview References and Links
Jean Gan on LinkedIn
Global Legal AI and the Global AI & Law (GAIL) Network
AIgnite Women
How to Legal AI
Beyond the Clauses
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% human-authored, 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber:
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs Rule: Consent. Credit. Compensation.©”
Credit to Beth Spencer for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)