Beyond ethics: risks of AI use
A summary of the business and personal risks that even people who aren't concerned about AI ethics should be aware of
This article is a preview of a chapter in my upcoming book, “Ethical AI In A Nutshell”. Comments are welcome & appreciated and will be reflected in the book where appropriate with credit to commenters.
Reminder: This article and any associated materials are not a substitute for legal or professional advice and are meant for general information only. See the Terms.
Beyond Ethics: Why All Of This Matters
Anyone reading this newsletter probably already knows that I care about AI ethics and write a lot about it. Let’s set ethics aside for a moment, though, and assume that knowing whether an AI tool is developed ethically isn’t a concern for you.
Whether you use AI for family or personal tasks or for your business, using outputs from AI tools can lead to trouble for you – especially if those AI tools were unethically developed, but even if they weren’t. Business risks – and the business case for AI ethics [135] – can include legal alignment, security and confidentiality, and the impact on the workforce.
“AI has never existed in isolation. Privacy, consumer protection, security, and human rights legislation all apply to AI.” - Kiran Minnasandram, VP and CTO of Wipro FullStride Cloud, quoted in [136].
Here are some tips on the dangers – even if you don’t care about the ethics per se – and what you can do to protect yourself.

Legal Alignment and Copyright Infringement
Companies that train their AI models on copyrighted content that they didn’t properly license are exposed to copyright infringement lawsuits. (As of the writing of this book in summer 2025, 41 such lawsuits were in progress in the USA.) In some cases, the damages being sought are in the range of US$100,000 per work infringed (and those tools have been trained on many thousands of works).
What people often don’t realize is that using an AI-based tool that wasn’t ethically sourced can put YOU at risk of legal and financial liability for copyright infringement, too. This applies whether you build a product on top of the AI platform or simply use what you generate with the AI tool. With only one known exception, the major AI tool providers do not indemnify (protect) their users against this risk. The risks of using what you generate with the AI tool are all on you.
Even in cases where the AI tool provider has claimed that their tool is “enterprise-safe”, it might not be. One known example is Adobe. In mid-2023, they announced the beta of their Firefly image generation tool and claimed it was trained only on properly licensed content. They were so confident it was “enterprise-safe” that they publicly promised to indemnify their enterprise customers against copyright infringement lawsuit damages from using Firefly [137]. However, in April 2024, after Firefly was officially released, it came to light that Firefly had been partly trained on Midjourney images (reportedly about 5%, or possibly a million images).
Midjourney’s founder, David Holz, is reported to have admitted to scraping millions of images off the internet without explicit consent (i.e., not licensed), which supports the conclusion many have reached that Midjourney is not ethically sourced [138]. If it’s not, then the use of Midjourney images to train Adobe Firefly ‘poisons the well’, so enterprise customers using Adobe Firefly might not be safe from copyright lawsuits after all [139]. Even if Adobe keeps its promise to indemnify enterprise customers against damages from copyright infringement, those customers will likely still have to endure the expense and trouble of being sued.
An additional potential legal risk is that inaccurate information in the training dataset can propagate into inaccuracies in the results. As the user of the tool, whether for personal or business purposes, you’re on the hook for the accuracy of the outputs that you generate and share. If the training data contains errors that defame someone or cause them to be falsely accused of something, and you share or publish that information, any legal liability is on you — not the tool provider who used the dirty data to train their AI.
I am not a lawyer, but my understanding is that being unaware of where and how the AI tool you use got its training data wouldn’t be a reliable legal defense. (“Ignorance is no excuse.”) If you’re evaluating AI tools for your business, it’s a good idea to do your due diligence on the tools or retain someone trustworthy to do it for you. The book section “Four Things You Can Do” has some tips on what to look for.
Security and Confidentiality Risks
AI tools are, in some ways, no different from other enterprise (or personal) tools. This is especially true for cloud-based software – and most of the major LLMs and other AI-based tools are cloud-based. That means that what you – or anyone in your organization who’s using the tools – do will be relayed to and through the company that hosts the tool in the cloud. How safe is that information?
Unless the provider explicitly states that your content remains private to you, an AI tool is likely to use whatever content someone puts in – and how that person interacts with the tool – as further training data. Data can ‘leak’ into an AI tool that way, and ‘leak’ back out to someone else who uses the updated tool.
Now it’s true that some AI tool providers say that for paying enterprise customers, they’ll commit to ‘multi-tenancy’ protections so that your content won’t be shared outside of your organization.
Is that good enough? No, for at least 3 reasons.
Confidentiality: Not everything known to anyone in your organization should be shared with everyone. Imagine someone in HR asking ChatGPT to summarize information from a spreadsheet that has everyone’s salaries, or data about planned layoffs. Even if it doesn’t leak outside of your company, would it really be ok for it to leak outside of HR, or even to someone else in HR who isn’t authorized to know about the salaries or layoffs? No.
Tools are built on platforms: Even if an AI tool company promises they won’t train on your data, they may use a third-party AI platform provider (e.g., OpenAI) to run their service. How sure can you be that the AI platform provider will respect the confidentiality of what you put into the tool and not use it to train their models?
Data breach risks: Even if the AI company genuinely intends and commits to keeping your organization’s data confidential, breaches happen.
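If your team does decide to send content to a cloud-hosted AI tool, one practical mitigation is to scrub obviously sensitive material before anything leaves your network. Below is a minimal sketch in Python of what that could look like; the patterns and the send_to_llm placeholder are illustrative assumptions, not any provider’s API, and a real deployment would use a vetted data-loss-prevention tool rather than a few regular expressions.

```python
import re

# Illustrative patterns only - a real deployment would rely on a vetted
# data-loss-prevention (DLP) tool, not a handful of regular expressions.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "SALARY": re.compile(r"\$\s?\d{2,3}(,\d{3})+(\.\d{2})?\b"),
}

def redact(text: str) -> str:
    """Replace sensitive-looking substrings with placeholder tags
    before the text is sent to any third-party AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a cloud LLM API. In practice you
    # would call your provider's SDK here, ideally under an enterprise
    # agreement that excludes your prompts from model training.
    raise NotImplementedError

if __name__ == "__main__":
    risky = "Summarize: jane.doe@example.com earns $123,456 per year."
    print(redact(risky))
    # Prints: Summarize: [EMAIL REDACTED] earns [SALARY REDACTED] per year.
```

Redaction doesn’t solve the confidentiality problem – someone can still paste in layoff plans that no regex will catch – but it does reduce the amount of sensitive data that leaves your control by default.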
Workforce Impact
As we covered earlier, because of the sources of data used to train them, generative AI tools are known to be biased towards Western values and ways of thinking and working. If your business is global or based outside the Western world, your tools may clash with your company values, and these biases may undermine your people. The AI tools may also not work very well in your workforce’s native or preferred language [140].
GenAI tools in particular also don’t work well at present for people with disabilities, which further disadvantages members of your staff who are already coping with disabilities. Differences have also been observed in how men and women use AI tools in the workplace. Some initial analyses indicate that:
Women are penalized more harshly than men for using AI tools, even when use of AI tools is otherwise encouraged.
Women are more likely to have ethical concerns about using AI.
Telling women to “just lean in” is not the answer [141]. Pushing for broad adoption of AI tools without addressing these biases will not be fair to them.
Additionally, CEOs and workers are not ‘on the same page’ about the productivity impact of AI tools. An Upwork study from July 2024 found a big disconnect between CEO expectations of increased productivity and workers’ experiences and expectations of how using AI affects their workloads.
"Despite 96% of C-suite leaders expressing high expectations that AI will enhance productivity, 77% of employees using AI say these tools have added to their workload, and nearly half (47%) of employees using AI report they do not know how to achieve the expected productivity gains." [142]8
Interestingly, freelancers seem to be benefiting more from AI. (Note: The study only covered U.S., UK, Australia, and Canada. It would be helpful to see newer data across a wider pool of people in more than these 4 countries.)
Branding Risks
AI image, video, and music generation tools can be fun to experiment with. But even if they were ethically developed, they can also give you outputs which:
(1) are not unique ‘enough’ and/or
(2) you don’t own the rights to.
That’s a concern if you want to use the images, videos, or music for your business.
Uniqueness
Not all generative AI tool providers commit to providing you with output that’s truly unique. Even the AI tools that claim your output will be ‘unique’ are not promising you that no one else will ever get an image that’s similar enough to be confusing.
Technically, all it takes for an image to be ‘unique’ is a short line that’s different – or one dot. And if you’re using the tool to create an image that’s a basis for your personal or business ‘brand’, you probably want something that will be distinctively yours.
Say you’re starting a coffee shop in a certain niche (e.g., for book lovers), in a certain location (e.g., Austin, Texas), or with a certain style (e.g., a French café), and you put those characteristics into an AI tool prompt to generate a logo. Any competitor (current or future) who gave a generative AI tool a similar prompt could get an image very similar to the one you end up building your brand on. (And they may already have done so before you.)
Building your business, logos, collateral, and reputation on AI-generated brand content that might be confusingly similar to a competitor is a risk worth keeping in mind.
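If you do go ahead with an AI-generated logo, one low-effort sanity check is to compare it against known competitor marks before committing to it. Here’s a minimal Python sketch using perceptual hashing; it assumes the third-party Pillow and ImageHash packages, and the file names and cutoff are hypothetical. It’s no substitute for a trademark search or legal advice, but it can flag near-duplicates early.

```python
# Rough visual-similarity check between a generated logo and an existing one,
# using perceptual hashing (pip install Pillow ImageHash).
from PIL import Image
import imagehash

def similarity_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the perceptual hashes of two
    images. Smaller numbers mean the images look more alike."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    distance = similarity_distance("my_ai_generated_logo.png", "competitor_logo.png")
    print(f"Perceptual hash distance: {distance}")
    if distance <= 8:  # illustrative cutoff, not a legal standard
        print("Visually quite similar - worth a closer human (and legal) look.")
```

A low distance doesn’t prove infringement and a high one doesn’t prove safety; “confusingly similar” is ultimately a judgment for humans (and, if it comes to it, courts), not a script.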
Ownership Rights
Most of the generative AI tool providers say that they retain the copyrights and grant you a license to use the output. But you generally don’t own that output. And you may not have the rights to create any new derivative works from it. Or you may need to pay for a higher-end license on the AI tool site to get some of those rights. This is typically spelled out in the Terms and Conditions (T&Cs) for use of the site, or in the user agreement for the paid license.
It’s worth checking the T&Cs and user agreement to see what rights you do have. If the rights are tied to a monthly or yearly subscription, be sure you understand whether those rights will be perpetual – i.e., will your rights to use that output survive after you stop paying for your subscription?
What Happens to our Wetware (Brains)?
When I was in high school, I worked on the school newspaper. During my senior year, we decided to publish competing editorials – one pro, one con – on whether calculators should be allowed in our school. One of my debate team colleagues and I flipped a coin to pick who would write which one (we were well used to arguing both sides of a topic regardless of what we believed). I don’t have a copy of those old articles any more, but as I recall, their gists were: “calculators save time and let students focus on learning concepts” vs. “manual calculations help students understand how math and physics concepts translate to solving problems”.
Sound familiar? 😊 Decades later, we’re having similar discussions about AI tools and whether they’re good or bad for students to use in school.
Impact of AI on Learning
I’ve interviewed high school students, a medical school student, a computer science undergrad, and a bunch of teachers. And I do my best to read what dozens of thoughtful people say about AI and education on Substack. (See the book resources page for a list of people who write well about AI and education.) Here’s what I’ve been hearing:
Yes, AI tools can save time and let teachers deliver core concepts more quickly.
Yes, students can use AI tools to cheat and avoid learning. (Not that cheating in school is anything new; it predates AI.)
Yes, there are ways to design tests so that students can’t just use an LLM to generate the test answers.
Yes, there are tools to detect “AI plagiarism”, but those (AI-based!) tools sometimes wrongly accuse students who didn’t write with AI.
Impact of AI on Thinking
A related question is what using AI does to our brains outside of school. Again, the initial studies and stories range widely. Examples:
Some claim tremendous productivity increases from having a tireless virtual assistant that not only writes for them but also asks questions that help them think more deeply.
Some neurodivergent people report that LLMs help them with executive function and organizing their thoughts.
Many people use LLMs to help them with spelling and grammar, particularly if writing in a second (or third …) language.
Some songwriters love being able to quickly mock up a demo track to help them figure out if a song is any good before wasting a musician friend’s time trying to play it.
All of these uses of AI augment our brains and seem like good things.
But AI tools can generate a lot of “slop” that isn’t worth reading, viewing, or listening to. Even setting aside the supposed “tells” that really aren’t reliable (e.g., use of em dashes), many of us can just tell when a comment or post has been AI-written or when an image is AI-generated. The result has a bland sameness that makes folks go ‘ugh’ and stop reading. There’s still a big difference between prose written by a human and polished by an LLM, and prose ‘written’ by an LLM. Not only that, the internet is becoming saturated with AI-generated (not just AI-assisted) content, which AI tools are now picking up as new training data. It’s a snake eating its tail.
It’s been reported since at least 2020 that relying on GPS instead of map-reading causes people to lose their knowledge of their surroundings and their sense of direction [143]. Likewise, relying on an LLM as a “cognitive prosthetic” [144] can cause people to use less critical thinking and miss out on true learning, as this recent joint study by Microsoft and Carnegie Mellon indicated [145]:
“The more the workers tapped AI for help, the less critical thinking they did. The study further notes that reliance on AI changed the way workers enacted critical thinking faculties, shifting their focus towards “information verification, response integration and task stewardship” in such instances.”
A common business example is writing a paper or a marketing strategy report that summarizes information from a set of references. Like writing a book report in school, the goal isn’t just to produce the artifact; it’s for the writer to learn about the ideas in the references so they can figure out what the marketing strategy should be. Taking too many shortcuts with an LLM will indeed generate a report, but the learning process will be skipped, and the odds are that the strategy won’t be innovative. An LLM will generate a composite of other strategies it has already read about in its training dataset, because that’s what it’s designed to do. That’s not the kind of innovation that will help you win market share.
Impact of AI on Career Development
Another common example from the tech world is using a coding LLM to write software. LLMs are getting better at quickly spitting out boilerplate code and jump-starting the creation of a prototype. But the code tends to degrade with additional requests for changes, to the point where the LLM will start breaking code it wrote a few prompts ago. (I’ve had this happen when using an AI code assistant for Python code.)
These phenomena aren’t unique to writing code, either. Interview guests have reported similar issues with writing marketing plans and legal terms & conditions.
And the coding LLMs still fall down on design, security, and other concerns that professional software engineers learn through experience and mentorship.
All of this raises the question: if LLMs do the work that a junior staffer would do, how are junior staffers ever going to build the skills they need to be effective senior staffers? How will newbie software developers learn how to design a secure, performant system and how to find and fix the bugs the LLMs create?
Impact of AI on Creativity
A recent study indicates that reliance on AI tools may give a short-term performance boost while impairing human creativity in the longer term, rather like “steroids in sports” [146]:
“Our findings reveal that while LLM assistance can provide short-term boosts in creativity during assisted tasks, it may inadvertently hinder independent creative performance when users work without assistance, raising concerns about the long-term impact on human creativity and cognition.”
Bottom Line: Impact of AI on our Wetware
It’s too soon to fully understand the long-term impacts of AI on our brains, but we can already see some of the risks. And studies reported by the IAPP show that paying attention to ethical AI practices, or ‘responsible AI management’ (RAIM), already brings business benefits [147]:
“Respondents clearly stated RAIM produced substantial value and did so in areas important to business strategy and competitiveness such as product quality, trustworthiness and reducing regulatory risk.”
To use AI tools wisely, learn:
How to use AI tools to augment our wetware (not replace it), and
When to stop using AI tools and switch back to our human brains.
To build AI tools wisely – if you can influence how AI tools are built – learn:
How to take advantage of research like I’ve reported above that can help you design and deliver AI tools that actively support critical thinking and creativity.
What AI ‘governance’ is and how it can help you manage risks.
My upcoming book, “Ethical AI In A Nutshell”, covers specific recommendations for actions you can take to learn and to act, so you can better protect yourself and your business from these risks. To be notified of future previews and release announcements, subscribe (free).
Credit goes to Beth Spencer for the “Created With Human Intelligence” badge we use to reflect our commitment that all content in these articles will be human-created.
[135] Agbese, M., Halme, E., Mohanani, R., & Abrahamsson, P. (2024). Towards a Business Case for AI Ethics. In S. Hyrynsalmi, J. Münch, K. Smolander, & J. Melegati (Eds.), Software Business - 14th International Conference, ICSOB 2023, Proceedings (pp. 231-246). (Lecture Notes in Business Information Processing; Vol. 500 LNBIP). Springer. doi.org/10.1007/978-3-031-53227-6_17
[136] “Why neglecting AI ethics is such risky business - and how to do AI right”, by David Gewirtz / ZDNET, 2025-04-06.
[137] “Adobe is so confident its Firefly generative AI won’t breach copyright that it’ll cover your legal bills”, by Chris Stokel-Walker / Fast Company, 2023-06-08.
[138] “What We Know About the Midjourney Model”, by Christian Heidorn / Tokenized, 2025-05-31.
[139] “Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images”, Maitreya Shah / Harvard, 2024-04-03.
“Legal risks loom for Firefly users after Adobe’s AI image tool training exposed”, Constantine von Hoffman / MSN Martech, 2024-04-22.
[140] “Fumbling in Babel: An Investigation into ChatGPT's Language Identification Ability”, by Wei-Rui Chen, Ife Adebara, Khai Duy Doan, Qisheng Liao, Muhammad Abdul-Mageed, in NAACL 2024 Findings, last revised 2024-04-08.
[141] “Leaning in is not the answer for women not using generative AI”, Karen Smiley / Agile Analytics and Beyond, 2025-03-15 - commentary on “Women Are Avoiding Using Artificial Intelligence. Will Their Careers Suffer?”, featuring Rembrand M. Koning, by Michael Blanding / HBR, 2025-02-20.
[142] "Upwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence", Upwork, 2024-07-23. The full study is available via “From Burnout to Balance: AI-Enhanced Work Models”..
[143] “Habitual use of GPS negatively impacts spatial memory during self-guided navigation”, by Louisa Dahmani and Véronique D. Bohbot, in Nature Sci Rep 10, 6310 (2020). DOI: 10.1038/s41598-020-62877-0.
“GPS use and navigation ability: A systematic review and meta-analysis”, by Laura Miola, Veronica Muffato, Enrico Sella, Chiara Meneghetti, Francesca Pazzaglia, in Journal of Environmental Psychology, Vol. 99, Nov. 2024, 102417, 2024-09-14.
“The Forgotten Art of Map Reading: Boosting Spatial Awareness”, Very Big Brain, 2024-11-29.
[144] “Is AI sparking a cognitive revolution that will lead to mediocrity and conformity?”, by Wolfgang Messner, The Conversation, 2025-06-02. (Via Guy Kawasaki, 2025-06-10)
[145] “AI Is Making You Dumber, Microsoft Researchers Say”, By Dimitar 'Mix' Mihov / Forbes, 2025-02-11.
“The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers”, Microsoft Research and Carnegie Mellon University, ACM ISBN 979-8-4007-1394-1/25/04, published in CHI ’25, DOI: 10.1145/3706598.3713778, 2025-05-01.
For a less academic take without a paywall, try: “The LLM Paradox: The More We Use LLMs, The Less We Can Use Them Well - Part 2”, Arun Palanichami / Attending To Attention, 2025-03-31.
[146] “Study explores the impact of LLMs on human creativity”, by Ingrid Fadelli / TechXplore, 2024-10-30. Based on DOI: 10.48550/arxiv.2410.03703.
“Human Creativity in the Age of LLMs: Randomized Experiments on Divergent and Convergent Thinking”, by Harsh Kumar, Jonathan Vincentius, Ewan Jordan, Ashton Anderson, in CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Article No.: 23, Pages 1 - 18, 2025-04-25. DOI: 10.1145/3706598.3714198
[147] “Responsible AI Management: Evolving Practice, Growing Value”, by Professor Dennis Hirsch, Jared Ott, and Angie Westover-Munoz / The Ohio State University Program on Data and Governance and IAPP (the International Association of Privacy Professionals), 2024-06 (PDF).