AISW #012 Amanda Rose Fadely, USA-based aerospace professional 📜(AI, Software, & Wetware interview)
An interview with USA-based aerospace professional, investor, and author Amanda Rose Fadely on her stories of using AI and how AI companies are using her data and content.
Introduction
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Interview - Amanda Rose Fadely
As our next guest for “AI, Software, and Wetware”, I’m delighted to welcome Amanda Rose Fadely. She writes on Substack at Read the Instructions and is a member of the SmallStack Team.
Thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
Hi Karen, thanks for having me as a guest! I have a background in engineering, program management, and technical process development in the aerospace industry. I’m also a real estate investor, a writer here on Substack (currently authoring my first memoir), a mom, and a grandma.
You sound busy! 😊 And I love that you’re referring to it as your ‘first’ memoir.
What is your experience with AI, ML, and analytics? Have you used it professionally or personally, studied the technology, built tools using the technology, etc.?
Professionally, I have managed satellite programs that implemented machine learning algorithms for use in an experimental on-board processing application. These satellites were also equipped with hardware/software environments that were meant to support customer AI applications.
Personally, I use only very basic direct AI applications, such as ChatGPT and image generation software.
Can you share a specific story on how you have used AI/ML? What are your thoughts on how well the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
When I first started using ChatGPT, I thought “Yay, here is an on-demand ‘smart friend’ to bounce ideas off of or check my work – woo hoo!” But over time, I saw so many errors in its responses that I’ve lost trust. I still use it, and I know it will get better in time, but I use it for more basic checks than I originally thought I would.
For image generation applications, it seems that becoming adept at prompting techniques is key, and I’m still very much a beginner in this space.
I’m seeing a lot of ‘AI experts’ marketing courses or articles on best practices for prompting. This doesn’t seem sustainable, and I’d love to see a future where it is normal to have very simple and clear documentation for AI applications that guide users on these practices.
In general, if AI continues to evolve at its current rate, then methods of communicating its capabilities and how to use it need to evolve at essentially the same rate, lest it become the playground of ‘expert users’ vs. being truly accessible to the general public.
That’s a great point, Rose - a truly intelligent system ought to be able to guide its users on how to use it effectively!
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you didn’t use it?
I’ve struggled a bit with the idea of applications that modify personal photos using AI. For example, the ones where you input your regular old selfie and the output is ‘beautiful you’ in a fancy dress at a gala you’ve never attended.
It concerns me that girls are using these applications, and it makes me wonder whether they only add to their struggles to accept their imperfect beauty in favor of machine-generated versions of perfection.
On the other hand, it could prove to be a positive thing in the long run. Perhaps eventually it will be understood that essentially any image you see on the internet will have been modified by AI, such that girls will truly understand that what they see online isn’t real, unlike their real bodies and their friends' real bodies – leading to more, not less, self-acceptance. That could prove to be even better than a world where it is never quite clear whether you are looking at something real or something modified.
I like that idea, Rose; greater self-acceptance would definitely be a positive outcome.
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do YOU feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training?
I’m very concerned about the introduction of bias into AI, so in general I’m extremely open to the use of my data for training algorithms. For me, I’m willing to give up some privacy in exchange for being sure that the algorithms aren’t only being trained by data from over-represented groups.
That said, I still want a choice about the use of my data. I want to be alerted and asked about that usage beforehand in an open, obvious, and clear way. I want to be asked often, even to the point of it being annoying!
Being willing to give up some of your own privacy to help ensure representation is an unselfish perspective, Rose - I admire that. And I completely agree that consent is critical.
As a member of the public, there are probably cases where your personal data or content has been used, or may have been used, by an AI-based tool or system. Do you know of any cases that you could share?
Hmm. In short, I’ll just say I don’t trust Meta.
That’s reasonable 😏
Public distrust of AI companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I’d love to see companies sharing easy-to-digest data on how they are continually ensuring that their models will take into account the diversity of humanity and (where applicable) sharing how they are keeping the front ends of their products accessible and useful to the general public.
Almost everyone I’ve interviewed says that transparency about models and data usage is a top priority for them to be able to trust an AI company - or really, any tech company. I agree that accessibility is important too, especially when an AI or ML model is controlling access to physical or online resources.
Anything else you’d like to share with our audience?
I’m thrilled to announce that my first book, an anthology titled The Evolution of Leadership in STEM: Women Catalyzing Change, will be published this fall! Keep an eye on Read the Instructions and/or LinkedIn for updates on the book launch date and opportunities to order your copy!
Subscribe to Read the Instructions for memoir development progress reports and other personal musings.
I can be found here on LinkedIn, or on my Substack publication Read the Instructions.
Amanda Rose, thank you so much for joining our interview series! It’s been great learning about what you’re doing with artificial intelligence, why you still use human intelligence for some things, and how you feel about AI and tech companies using your data. Your anthology book sounds amazing; congratulations, and I’m excited for it to come out 😊
Final Thoughts
Be the first to know about Amanda’s anthology, and get updates on her memoir, by following her on LinkedIn or Substack:
To learn more about how Amanda & her SmallStack teammates are helping smaller newsletters succeed on Substack, check out the SmallStack newsletter:
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains) with AI-based software tools or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read. To automatically receive new 6P posts and support our work, consider becoming a subscriber! (If you like, you can subscribe to only People, or to any other sections of interest. Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊