📜 AISW #073: Dee McCrorey, USA-based transformation strategist
Written interview with USA-based transformation strategist Dee McCrorey on her stories of using AI and how she feels about AI using people's data and content.
Introduction -
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack. If it doesn’t fit in your email client, click here to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview -
Karen: I’m delighted to welcome Dee McCrorey from the USA as my guest for “AI, Software, and Wetware”. Dee, thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.
Dee: I’ve worn a lot of hats over the years, Karen, but the common thread is this: I’ve spent a lifetime navigating change — personally, professionally, and institutionally.
My roots in Silicon Valley run deep. My first job with a real paycheck was in semiconductor manufacturing at National Semiconductor while attending college in the early ’70s. Growing up in a military family, and later living abroad in Europe and the Middle East, taught me early how to adapt across systems and cultures.
When I returned to the Valley in the ’80s, I deepened the systems lens I’d first glimpsed earlier in my career — bringing that perspective into organizational change work. I became known for helping large tech companies transform from the inside out, especially during the messier post-launch phases when things went off the rails. Over time, those "triage" instincts became part of my reputation.
Over the decades, I’ve led major initiatives across the hardware and software landscape and later authored a book on innovation. Today, I write A Bridge to AI (AB2AI), a Substack newsletter that cuts through the hype to explore how AI and automation are reshaping work — and the workplace writ large — and what people can do to stay relevant and resilient.
Since launching in January, I’ve introduced Pink Slip Pivot (PSP), free monthly strategy sessions for those navigating layoffs or career transitions, which run through December.
Thanks to a bit of serendipity, in early August the publisher of AI Supremacy decided to build a Mighty Networks community in an effort to design a broader ecosystem sparking deeper connections, collaborations, debates, and shared discovery among members. He invited me to host The AI Inflection Point, a space inside the AI Vanguard Society, which I recently launched and announced to my subscribers. I believe congrats are also in order for your SheWritesAI Community that’s now part of this same network!
On the side, I’m developing a narrative podcast, Seeing Through Silicon, about the early “Wild West” days of Silicon Valley. Off the page, I’m a mom, a storytelling ‘Geema’, a mentor to local college students, and a grateful eco-volunteer living oceanside — not far from where it all began.
Karen: Thank you for that self-introduction, Dee – such a wide-ranging set of experiences! Yes, the SheWritesAI Community in AI Vanguard Society is working out beautifully so far, and I’m setting up an Everyday Ethical AI Community as well for my subscribers. I’m so grateful to him for being a wonderful ally and sponsoring us, and I love that we’re building such rich network connections there. And I want to hear more about your podcast on the “Wild West” of 1970s tech! 🙂
Dee: Seeing Through Silicon pulls back the curtain on the semiconductor industry’s rise in the ‘70s — and the predominantly female workforce that powered it from behind the scenes. Through a blend of memoir and creative storytelling, it traces how the cultural, social, and political forces of the time shaped the Valley’s values and practices — many of which still echo in today’s debates around AI and emerging technologies. By offering a more grounded lens on how we got here, I hope it will help us navigate what comes next.
Karen: That sounds so useful, Dee — it’s great that you are working on sharing that perspective. With your background in tech and Silicon Valley, I’m also wondering if you ever saw the series “Halt and Catch Fire” and if so, how realistic you think it was?
Dee: Yes! “Halt and Catch Fire” was realistic for what it was aiming to portray: a fictionalized insider's view of the personal computer revolution of the 1980s and the early days of the WWW in the early 1990s. Although it took a while to find its place in TV lineups, the series eventually found its fan base (including me).
Karen: My husband and I are both computer geeks and we enjoyed the show too 🙂 What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
Dee: I’ve always been an early adopter of technology, but with AI, my starting point was more of a curious explorer than a technical builder — driven by an interest in its societal, cultural, and historical implications. Curiosity, though, has a way of opening unexpected doors.
Over time — and especially through Substack conversations with wickedly smart tech builders — I’ve found myself venturing into deeper waters. While I’m not a developer by training, I’m now exploring development partnerships using tools like Lovable, an AI-powered app builder, to prototype a concept I first imagined over a decade ago: a Collaborative Risktaking™ Platform. The goal? To reimagine how organizations navigate high-stakes decisions by tapping collective intelligence in more dynamic, inclusive ways.
Karen: You’ve had a dynamic career journey. I’m curious about how you made your decisions and whether any mentors helped to guide you.
Dee: I’ve been fortunate to have early-career opportunities working alongside some of the sharpest technical minds — people who helped shape my systems thinking muscle. That early mentoring laid the foundation for how I make sense of complexity today, especially at the intersection of innovation, transformation, and human behavior.
Interestingly, the dots I connected early on led me straight into the messier side of innovation — the part where things don’t go as planned. Over time, it became part of my toolkit. I developed a reputation as a kind of "tech cleanup crew" — someone who could come in post-launch and steady the ship. Eventually, both employers and clients hired me for those turnaround instincts, and honestly, that’s a pretty solid skill to have in your toolbox.
These days, my mentorship style leans into tough questions, connecting the dots between strategy and tactics, and helping others make clear-eyed decisions—not necessarily “right” or “wrong,” but aligned with who they are and what they want next.
Karen: Makes sense. I’d like to hear your thoughts on how AI is reshaping roles and career paths, and where you think mentorship can have the greatest impact—early on, during mid-career reinvention, or even later in helping others adapt or transfer skills?
Dee: Mentoring first-time and returning college students — especially amid today’s unpredictable terrain — has been deeply grounding. That experience even inspired two recent pieces I wrote on Substack, which may resonate with your readers:
• Strategic, Surgical, and Scrappy: A Career Mindset for Rough Terrain
• The AI Productivity Boom Feels Risky — But It Might Be Your Invitation to Experiment
Do I ever look back and wonder if being cast as the ‘cleanup pro’ was a career detour? Sure. Did some choices steer me away from the design table and into triage mode? Maybe. But in hindsight, that lens has served me well — especially now, at a moment when AI is reshaping everything.
Still, there’s risk in being perpetually cast in the damage-control role rather than being part of the design and direction from the start. And for women in tech, that pattern is all too familiar. I saw it during my semiconductor days — and I’m seeing echoes of it now in AI.
Karen: That’s an intriguing insight about women being sidelined into reactive roles in tech and AI. I don’t think many women will be surprised that it happened years ago, but it’s disappointing that it’s still happening today. Do you perhaps have a specific story you can share from your time in the semiconductor industry about a tech mess that women were tasked with cleaning up? And is there an example you can share of this happening in today’s AI world (without compromising confidentiality or anyone’s privacy)?
Dee: Absolutely. Back in the fabs, I saw firsthand how high-risk decisions — often made upstream by men in technical or leadership roles — created real downstream danger. I wasn’t a hazmat worker, but I was on the floor when toxic gas leaks and acid spills happened. These weren’t theoretical risks; they were physical, immediate, and often fell hardest on women — the majority of those working on the front lines.
That experience pushed me to move into training and eventually management, where I could influence decisions earlier in the process. I wanted those closest to risk to have a voice upstream. Sometimes it worked. But too often, decisions were still made without input from the people who had to live with the consequences.
Fast forward to today, and I see the same dynamics in AI. High-stakes choices — how systems are designed, which data is used, how automation is deployed — are often made without foresight, again mostly by men in top technical and executive roles. And once again, women are brought in after the fact — in ethics, HR, or PR — to clean up the fallout: regulatory, human, or reputational.
It’s especially troubling in HR, where AI tools are being used to justify so-called “efficiency gains” — including termination decisions made without human review. That’s a new form of harm, but the pattern is familiar: those excluded from decision-making are tasked with managing its consequences. [links below in References]
That’s why it’s critical to get women — and anyone who's ever been part of the cleanup crew — at the table before the big calls are made. People who see the whole system and ask the hard questions are exactly who we need shaping what comes next.
Karen: I completely agree! Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
Dee: I regularly use a range of AI and machine learning tools — ChatGPT, Claude, Perplexity, Gemini — for research, analysis, and ideation. I also use Notion’s AI features to build automated training and educational solutions, which has opened up new ways to deliver content efficiently without sacrificing personalization.
For communication workflows, I’m going to leverage Notion more, especially since it added a meeting recording feature that captures both sides of a meeting — a huge time saver, though I always double-check for nuance.
Lately, I’ve been exploring Mistral Le Chat, especially for building research and analysis agents. What excites me about Mistral is its commitment to open, portable, and privacy-preserving large language models. That philosophy of user-centered design and transparency really aligns with how I believe these tools should evolve.
Because I care deeply about responsible innovation, I approach AI tools with both curiosity and caution. I’m not in the “move fast and break things” camp. I believe we need strong governance and real accountability — not just for the developers, but for those funding and deploying these systems. That said, I’ve also seen how excessive bureaucracy can stall progress. So I advocate for balance — designing with intention, not just speed.
My own history in tech reminds me that bias doesn’t just show up in datasets; it shows up in who gets to ask the questions, define the problems, and build the tools. That’s why including diverse perspectives in AI development isn’t optional but essential.
Karen: I couldn’t agree more, Dee. If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Dee: Yes, I intentionally avoid entering any sensitive personal information into AI tools — things like my Social Security number, driver’s license, or birth date. I’m especially cautious when a tool’s data practices aren’t transparent or when it’s unclear how that information might be stored, shared, or repurposed.
Ironically, while I’m careful about what I input directly, many companies are already using our personal data behind the scenes — particularly in areas like hiring and healthcare. Algorithms now help screen resumes, rank candidates, and even assess “culture fit.” In healthcare, AI is increasingly used to guide treatment decisions and flag potential risk — often without patients fully understanding how those decisions are made or what data is being used.
One of the biggest concerns is the lack of clear boundaries. Where does automation end and human judgment begin? That ambiguity makes trust the real tipping point.
Who are we trusting — the developers who built the model? The company that licensed it? The vendor managing it day-to-day? Too often, those roles are siloed, and with that comes a lack of accountability.
We may not always be able to opt out, but that doesn’t mean we should opt out of the conversation. Whether it’s hiring, healthcare, or other high-stakes domains, we need to push for transparency, human oversight, and ethical alignment early in the tech lifecycle — because these systems are already shaping outcomes that affect all of us, whether we engage with them directly or not.
Karen: Absolutely.
A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get Consent from (and Credit & Compensate) people whose data they want to use for training? (the “3Cs Rule”)
Dee: My optimism on this topic swings wildly — sometimes within the same day 😣 In a better world, creators wouldn’t have to worry about their work being scraped or repurposed without consent. But that’s not the world we live in. Economic sovereignty — especially for creatives — is rarely granted. More often, it has to be fought for, clawed back inch by inch.
AI companies often justify their practices by saying they use only “publicly available” data. But as developer Ed Newton-Rex, who resigned from Stability AI over this issue, pointed out: “Publicly available” doesn’t mean anyone gave permission. It just means no laws were broken to get it. That distinction matters.
The deeper problem is how we’ve normalized trading privacy for convenience. Millions of people sign up for “free” tools and platforms without realizing what’s being extracted in return. Our rights aren’t lost all at once — they erode quietly, click by click, platform by platform.
And with recent court rulings leaning toward generative AI companies and expanding the definition of “fair use,” we’re entering what will likely be a long legal reckoning between creators and platforms.
The real question now is whether we’re willing to design systems that put human creativity, consent, and fairness at the center — or whether we’ll keep letting convenience and corporate interest write the rules by default.
Karen: All great observations, Dee. The recent rulings on fair use were a let-down for creators. But the judges’ rulings did leave the door pretty wide open for them to win damages for market disruption and for how the companies stored the content they scraped. So all isn’t lost yet. It’s definitely going to take a long time to settle out, though.
As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
Dee: Why prioritize transparency when there are no real consequences for hiding the truth?
Karen: That sounds like a NO 🙂
Dee: I’d love to see AI companies lead with trust and transparency instead of power and profit — but we’re not there yet. And my deeper worry is that the pendulum won’t just swing away from transparency — it could collapse entirely. We may be heading toward a future with no real guardrails at all: ethically, operationally, or legally. And history tells us it often takes something catastrophic to force change.
What does catastrophic look like? It’s not just creative theft or misinformation. It’s militarized AI. Mass surveillance. Algorithmic control over speech, behavior, and thought. And it may not come from the usual suspects. The real danger might lie with fast-moving, well-funded startups that treat governance like a joke and see self-regulation as optional.
In a world without accountability, no one feels obligated to act responsibly. And in that vacuum, the old question applies: If a tree falls in the forest and no one hears it — did it really fall? That’s where we are with AI transparency right now.
Unless we create real systems of accountability — for those building, funding, and deploying these tools — we’ll keep mistaking silence for safety. And that’s a risk none of us can afford.
Karen: Good points - and I think a lot of people assume we have no power to make those demands for accountability, but I feel like we can’t (and shouldn’t) just give up.
As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Dee: At this point, we should all assume our personal data has been used by AI systems — often without our full awareness or meaningful consent. And given the lack of enforceable guardrails or regulatory teeth, it’s likely happening far more than we realize.
Take online job applications. Candidates are often required to hand over highly personal information just to be considered. Increasingly, that data is analyzed by AI-driven recruitment tools, which can create invisible barriers — perpetuating historical bias or filtering candidates based on flawed or opaque criteria. Most applicants never find out how their data was used or why they were screened out.
These risks aren’t theoretical. Data breaches are already happening. Bias is already baked into many systems. And the stakes are real — because access to jobs, hiring decisions, and even performance evaluations are being shaped by tools that lack transparency and oversight.
What we need is a multi-layered response: strong regulatory oversight, ethical design practices, transparent policies, and meaningful data protections. But we also need ongoing cross-sector dialogue to ensure that speed and convenience don’t come at the cost of human dignity, opportunity, or fairness.
As AI becomes more embedded in everyday life and business, protecting privacy can’t remain an afterthought. It has to be foundational.
Karen: Yeah, hiring and staffing decisions are a great example of your earlier point on how biases get baked into AI features. You may have heard about the Amazon experiment with AI for resume screening? They had to discontinue it because they realized it was reinforcing historical biases against women. [link]
I found one recent study while I was researching my upcoming book on AI ethics [Everyday Ethical AI: A Guide For Families & Small Businesses]. The researchers at Lehigh University were evaluating racial and other biases in mortgage underwriting decisions. And they found that all of the major LLMs were unfairly biased. But they also found that giving the LLMs explicit instructions to ignore race as a criterion almost completely eliminated the bias. [link] So these problems *aren’t* unsolvable. People do have to care enough about fairness to work on them!
Dee: I think your book is excellent — and so timely! I’m helping you to promote it on Substack and LinkedIn because I feel that it should be in the hands of everyone impacted by artificial intelligence, whether in their small business, on the job, or educating their children.
Karen: I appreciate that, Dee! I’m setting the launch price for the full ebook as low as Amazon will let me. I feel it’s SO important for everyone to understand what’s behind the chatbox prompts and the smart systems that we’re all using every day, whether we realize it or seek them out or not!
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs? How do you feel about how your info was handled?
Dee: Transparent policies and informed consent aren’t just ethical checkboxes; they’re foundational to building trust. That’s one reason I made the decision in 2024 to significantly reduce my engagement across social media platforms. I simply no longer trusted that these companies were being transparent in how they govern or use our data.
We’re seeing what happens when Big Tech is left to regulate itself. Take LinkedIn, for example. In 2024, it quietly updated its terms after reports by 404 Media surfaced that the platform — along with parent company Microsoft — had likely used member data to train AI models. In contrast, LinkedIn users in the EU, EEA, and Switzerland were offered opt-out settings, thanks to stronger regional data privacy laws. U.S. users were not.
Karen: Yes, that was an egregious example of how NOT to handle people’s consent to the use of their data!
Dee: The consequences go far beyond social media. As AI adoption accelerates in the workplace, there’s growing tension between employers eager to automate and a workforce that remains unsure — and often unaware — of how their data is being used. What’s at stake? Job security, workplace surveillance, and erosion of personal privacy.
Employers already have vast stores of employee data, and it’s not far-fetched to imagine that some will begin monetizing this information under the guise of it being “anonymous.” But anonymization doesn’t always mean protection.
Karen: Absolutely, and I think a lot of people who aren’t familiar with how anonymization works just assume — trust — that it’s effective at protecting their privacy. But it’s often not.
Dee: AI is now being embedded throughout the employee lifecycle, from onboarding to performance tracking to exit interviews. Some companies are even using these tools to influence decisions about promotions or terminations, often hidden in compliance policies employees must accept as part of standard hiring agreements.
And transparency is being further undermined by legal tools like NDAs. We’re seeing a trend where hiring agreements, non-disclosure clauses, and compliance documents are being rewritten to quietly authorize AI-driven surveillance. Ironically, AI is also being used to manage those very same NDAs — automating enforcement without necessarily protecting the individual.
We’re not just facing a technological challenge but a cultural and governance reckoning as well. If we don’t intervene with stronger protections and shared standards now, the normalization of invisible monitoring and consent-by-default could become the new workplace baseline.
Karen: Those are all excellent observations, Dee. And I think we agree that these are not the kind of workplace environments we want our children and grandchildren to have to live with.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
Dee: Over the years, I’ve received notifications about personal data breaches, though I can’t definitively trace the few phishing attempts I’ve experienced back to any one incident. Like many, I’ve dealt with the inconvenience of updating passwords and login credentials — more of a disruption than a direct harm.
But I don’t take that for granted. It feels like a matter of when, not if, especially with the healthcare industry becoming an increasingly attractive target for cybercriminals.
And when that happens, it won’t just impact individuals — it will ripple across entire communities. The intersection of AI, data security, and critical infrastructure needs far more attention before we cross into territory we can’t easily walk back from.
Karen: Do you feel like there’s still time, that we aren’t yet past the point of no return? We haven’t yet crossed into that territory?
Dee: I think we’re standing at the edge of two thresholds: a tipping point and an inflection point. How do we distinguish between the two?
A tipping point is when something snaps — often quietly — and suddenly the damage becomes irreversible. In the context of AI, data security, and critical infrastructure, a tipping point could look like a coordinated cyberattack that doesn’t just bring down a hospital or a utility — but an entire region’s ability to function. If a large-scale breach erodes public trust so deeply that people begin to avoid care, question diagnoses, or reject digital systems outright, we won’t just have a technical problem — we’ll face a legitimacy crisis.
And we’ve already brushed against that edge.
The Colonial Pipeline hack disrupted fuel supplies across the East Coast. Ransomware attacks have frozen hospital systems, delayed surgeries, and jeopardized access to life-saving data. The Cambridge Analytica scandal, the Equifax breach, and AI-driven election interference didn’t just exploit system vulnerabilities — they fractured public trust.
Each of these moments was a warning flare. High-stakes, high-impact — and still, not enough to trigger the kind of collective response this moment demands. The next breach may not offer that luxury.
But we’re not quite there — yet. Right now, we’re still at an inflection point — a window, however narrow, to shape the trajectory of trust between humans and AI. We still have time to harden systems. To increase transparency. To demand meaningful oversight. To ensure that trust is earned—not assumed.
But that window won’t stay open forever. Every breach, every flawed deployment, every opaque algorithm chips away at that fragile foundation.
That’s why I believe trust between humans and AI is the inflection point of our time.
The question isn’t just can we build better tools — but will people believe those tools are safe, fair, and aligned with human values? Because once trust is lost at scale, it’s incredibly hard to rebuild.
Karen: Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
Dee: There’s a long-standing mantra in Silicon Valley: “We just make the tech — how people use it is another story.” That kind of detachment has fueled years of unchecked innovation, where proprietary algorithms scaled rapidly without meaningful oversight or responsibility.
But if tech companies want to build real trust, they need to do more than issue polished statements or go on yet another technology apology tour that fails to land. Trust must be earned through action, accountability, and a demonstrated willingness to operate with integrity, not just innovation.
The late Dr. Aaron Lazare, a leader in the study of apologies, outlined four core elements that define a meaningful one:
• Acknowledgment of the offense
• An explanation of what happened
• Genuine remorse and humility
• Reparation or justice that addresses the harm
Not every apology needs all four, but at least one must speak to the psychological needs of the offended party. And in this case, the “offended party” is increasingly everyone impacted by opaque data practices, biased models, and tech deployed without consent.
If we can't trust how systems are built, trained, and governed — if we can't trust the people behind them — then the technology itself, no matter how powerful, risks becoming irrelevant. Or worse, dangerous.
Karen: Agreed, Dee. I’m curious, regarding oversight and responsibility, what’s your take on AI regulation? For instance, do you think a federal approach to AI regulation here in the US can create a more cohesive and effective framework for ethical guardrails and accountability, vs. the patchwork of state laws we have now?
Dee: For a while, many hoped the EU’s data protection laws might set a global precedent, especially as the U.S. remained stuck debating whether AI oversight should be federal or state-led. In hindsight, that may have been overly optimistic on both counts.
Just in the last month:
• The U.S. released its 2025 AI Action Plan, emphasizing deregulation, open-source innovation, and infrastructure scale.
• The EU began enforcing the AI Act, tightening privacy, liability, and oversight.
• China doubled down on centralized governance, linking AI access to national objectives.
• And tech giants began offering $300M packages to lure top AI researchers across borders.
From a distance, these sound like policy debates. Up close, they’re reshaping how we work, adapt, and build new ways to earn a living:
• Which tools your team is expected to adopt — and who gets a say
• How your resume gets screened — or whether your role gets re-scoped
• Whether retraining is funded — or becomes a personal burden
• Whether AI decisions in your workplace are accountable — or opaque
• Whether the rules around automation protect your path — or help you pivot
While the AI Action Plan isn’t why I’m launching The AI Inflection Point — a companion series to A Bridge to AI and the community that sits inside the AI Vanguard Society — it was the signal I needed. A reminder that we’re standing at a narrowing window. I believe we’re still at the inflection point, where trust, oversight, and human values can shape the road ahead — if we act now.
Karen: That’s all of my standard questions; thank you so much for sharing these thoughts! Is there anything else you’d like to share with our audience?
Dee: I’d love to invite your readers to explore The Relevance Project™ if they’re navigating change — or questioning their relevance in an AI-shaped world.
This four-part, interactive toolkit series — paired with an engaged peer community — is designed to help you reset your direction, regain momentum, and lead with clarity.
Each toolkit includes practical, actionable tools such as:
🧠 self-assessments and diagnostics
🗺️ 90-day strategic roadmaps
🤖 AI adaptation plans
🪞 leadership reflection guides
All crafted to help you respond to real-world shifts with insight and confidence.
The full series includes:
🔹 The Relevance Reset™ (featuring the Relevance Pulse Check™) — Available now
🔹 Strategic Jumpstart™ (Coming in September)
🔹 The AI Adaptation Blueprint™ (October release)
🔹 Leading Through the Shift™ (November release)
All four toolkits are included with a Premium subscription via my Substack, A Bridge to AI.
Prefer to purchase individually or as a bundle? Visit my StackShelf creator page for à la carte access and special offers launching this fall.
(StackShelf was designed by product developer Karo Zieminski, who writes the Product with Attitude newsletter.)
Karen: That sounds like an amazing toolkit, Dee. Thank you so much for sharing your thoughts and experiences on AI with us!
Interview References and Links
References for Question #3:
AI-driven layoffs and performance reviews risk violating anti-discrimination laws under Title VII, the ADEA (Age Discrimination in Employment Act), and the ADA if algorithms penalize protected groups. Employers remain fully liable — even if decisions are made by AI (wired.com, natlawreview.com).
The case Mobley v. Workday has been certified as a collective action lawsuit, alleging that Workday’s AI-based hiring platform systematically disadvantaged older, disabled, or Black applicants—possibly violating federal law (culawreview.org).
A recent survey found that 94% of U.S. senior managers using AI rely on it for decisions about promotions, raises, and layoffs—creating new liability risks as AI tools increasingly guide, rather than support, decisions (axios.com).
Dee McCrorey’s website
Dee McCrorey on LinkedIn
Dee McCrorey on Substack (A Bridge to AI)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)