📜 AISW #084: Emily, USA-based computational memory researcher
Written interview with USA-based computational memory researcher Emily on her stories of using AI and how she feels about AI using people's data and content
Introduction -
This article features a written interview with Emily, a 🇺🇸 USA-based computational memory researcher, author of The Nth Dimension on Substack, and creator of EXD-Net (Emotional Experience Destabilization Network). We discuss:
the “heartbreak model” that she uses to “make the math more human” and help people understand EXD-Net
why she avoids AI systems based on biometric data
how confidentiality of patient data has been protected at clinical trial companies where she’s worked
and more. Check it out, and let us know what you think!
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack. If it doesn’t fit in your email client, click HERE to read the whole post online.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview -
(1) I’m delighted to welcome Emily from the USA as my guest for “AI, Software, and Wetware”. Emily, thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.
Thank you so much for having me, Karen! I’m so excited to be here. My name is Emily and I’m a researcher. Right now, I’m working on an original theoretical and computational framework called EXD-Net, which stands for Emotional Experience Destabilization Network. It uses attractor dynamics inspired by Hopfield networks to theorize how destabilizing experiences perturb memory states, leading to drift, topological reorganization, and agentic-centered, identity-level change.
I know that sounds a bit dense, but I try to make the work accessible and relatable by writing about it on my Substack from a humanistic angle. My goal is to help people feel the concept, not just study it. I like to think of it as “humanized computation”.
One of my most popular pieces introduced EXD-Net v1 through the lens of heartbreak, which my friends jokingly call “the heartbreak model”. It’s silly, but it does make it more digestible. These aren’t just abstract algorithms. EXD-Net theorizes how people remember, change, and grow. I think there’s something powerful about making complex systems feel intimate and emotionally resonant. Most people don’t care about theory unless they can resonate with it, and my hope is to make that feeling the entry point. The more we can connect science to lived experience, the more accessible and meaningful it becomes. That’s exactly what opens the conversation.
Thanks for that introduction, Emily. Can you explain your heartbreak model briefly? (And we’ll include a link to your article in the interview post.)
So EXD-Net is a hybrid theoretical and computational framework for understanding memory, specifically how memory changes in response to chaos. What this means is that it looks at how certain emotionally intense or disruptive experiences, or what I call destabilizers, lead to adaptive learning and even identity transformation. So for example, in a breakup, if you’ve been in a relationship for quite a while, your identity often associates stability with that relationship, and it becomes a part of your emotional architecture. But when that ends, your entire foundation (which was associated with the relationship) collapses. That’s destabilization.
And because memory is associative, everything starts to feel unstable. You wake up thinking about them unprovoked. You hear a song and you recall an old memory. Your internal landscape has been disrupted. But this destabilizing event provides the opportunity for internal reorganization. You relearn who you are, you reassociate old memories to new contexts, and you rebuild a new sense of stability for this new version of yourself.
So now heartbreak isn’t just an emotional event; it becomes a computational event (mathematical, even!). That’s why it’s been dubbed the heartbreak model. A bit silly and quirky, but I think it’s also fun, because it makes the math feel more human.
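For readers who’d like a concrete feel for the attractor dynamics Emily mentions, here is a minimal sketch of a classical Hopfield network in Python. To be clear, this is not EXD-Net or Emily’s code; the network size, the stored “memories”, and the bit-flip “destabilizer” are illustrative assumptions, just to show how a perturbed memory state can settle back into its old attractor or drift toward a different one.

```python
# Minimal classical Hopfield network: store a few binary "memory" patterns,
# then perturb the network state (a stand-in for a destabilizing experience)
# and watch which attractor it settles into.
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian weight matrix for +/-1 patterns (no self-connections)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def settle(W, state, sweeps=50):
    """Asynchronous updates until the state stops changing (an attractor)."""
    state = state.copy()
    for _ in range(sweeps):
        prev = state.copy()
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):
            break
    return state

n_units = 64
memories = rng.choice([-1, 1], size=(3, n_units))   # three stored "memories"
W = store(memories)

# Start at one stored memory, then "destabilize" it by flipping some units.
state = memories[0].copy()
flip = rng.choice(n_units, size=20, replace=False)   # strong perturbation
state[flip] *= -1

final = settle(W, state)
overlaps = memories @ final / n_units   # similarity to each stored memory
print("overlap with each stored memory:", np.round(overlaps, 2))
```

Running this with different perturbation strengths shows the basic intuition: small perturbations tend to settle back into the original memory, while stronger ones can land the state in a different attractor altogether.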
(2) What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology, or built tools using the technology?
I got my Master’s in AI, and I’ve been studying the field for a good while now, since before tools like ChatGPT became mainstream. Before that, I was working in a research lab in my free time, analyzing latent spaces in autoencoders to understand how they modeled graph-structured learning in humans, like mental maps or conceptual navigation. Essentially applying graph theory to neural architectures. Professionally, I was a clinical data scientist at a global medical device company. So I’ve worked with AI both in academic research settings and real-world applications.
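As an aside for technically inclined readers: Emily’s lab work isn’t published here, but a rough sketch of the general kind of analysis she describes might look like the following. Everything in it is a hypothetical stand-in (the latent vectors are randomly generated rather than taken from a trained autoencoder’s encoder), purely to illustrate turning a latent space into a graph and summarizing it with graph-theoretic measures.

```python
# Hypothetical sketch: treat autoencoder latent vectors as nodes, connect each
# point to its nearest neighbors, and analyze the resulting "latent space graph"
# with standard graph-theory tools.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Stand-in for "encode the data": in real work these would come from a trained
# autoencoder's encoder; here we just sample fake 8-D latent vectors.
latents = rng.normal(size=(200, 8))

# Build a k-nearest-neighbor graph over the latent space.
k = 5
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(latents)
dists, idxs = nbrs.kneighbors(latents)

G = nx.Graph()
for i, (drow, irow) in enumerate(zip(dists, idxs)):
    for d, j in zip(drow[1:], irow[1:]):   # skip the point itself (distance 0)
        G.add_edge(i, int(j), weight=float(d))

# Graph-theoretic summaries of the latent "mental map".
print("connected components:", nx.number_connected_components(G))
print("average clustering coefficient:", round(nx.average_clustering(G), 3))
```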
That’s a good overview of your professional experience. Have you used AI outside of work as well?
Yes, definitely. I do use AI when I’m doing research or learning new concepts. As much as I love scrolling through Stack Exchange forums, AI tools have been really helpful in cutting down the number of tabs I have open and synthesizing the information across them. It’s a great way to process information when I’m working with concepts from multiple fields.
(3) Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well? (Reference for this question: “But I don’t use AI”: 8 Sets of Examples of Everyday AI, Everywhere)
I use ChatGPT constantly. I find it really helpful to fine-tune and mirror my thoughts back to me. Sometimes I just need things worded differently to click. It’s been a great reflection tool in my thinking and learning process, honestly.
While I think it’s great for reflection, it doesn’t have the ability to offer new insights. It’s not omniscient. Which I guess is a good thing, but if I were to rely solely on what it does reflect back, then there would be this huge disparity. So while it’s great for synthesizing, your work doesn’t end there.
Additionally, it does get confused, and sometimes it might reflect the wrong thing back. So if you’re not paying close attention, you might not even notice, because the thing with LLMs is that they come off pretty confident even when they’re wrong, which is particularly dangerous.
Yes, there was that study earlier this year which said that 47% of the time when they are wrong, LLMs remain highly confident that they are right!
(4) If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Some AI-based tools I avoid are biometric systems such as facial recognition. I understand there are cases where it’s inevitable, like at TSA, but in my personal usage I’m pretty selective with what tools I use and what information I provide. I find that in a world where all our information is out in the open, the one thing I do have control over is my choice in what and how I interact with these tools and platforms.
I haven’t been flying, but Tracy Bannon told me in her interview last year that we actually can opt out of TSA taking our photos and using facial recognition. But understandably, many people didn’t feel comfortable opting out, and that’s probably even more true now.
(5) A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get Consent from (and Credit & Compensate) people whose data they want to use for training? (the “3Cs Rule”)
I think it is something that is inevitable. AI systems and tools do require data to learn and improve. But at the same time, there is more “sensitive” information out there that I believe does need prior consent and attribution.
When your work, or whatever you choose to share, is publicly available, scraping is almost a certainty. This is important because, of course, you want your work to be searchable. However, the other side is that your work may also be used to train models without your consent.
In regard to this, I do believe companies should be explicit about what data is being collected from users and how, and there should be attribution when the work is being used. Citing sources is one attempt at this, and it’s a good step. However, I think there should be a more easily accessible option on the user’s side to decide whether they allow scraping of their work. Ultimately, this still comes down to your own choice of what you share publicly online, but having a scraper block, or requiring prior consent and giving credit where credit is due, is definitely important. You have a right to your work. That is your IP, and companies should acknowledge and honor that.
(6) As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
I like to believe, for the most part, that providers aim to be as transparent as possible. When you use a product, there are generally terms and conditions you have to agree to prior to usage. Those are there for a reason. Whether we choose to actually read them or gloss over them is another thing. I know for myself I don’t usually read the terms, but I do know that typically, when you agree, there will be a section that tells you how your data will be collected and used. But just because we click accept without reading doesn’t absolve companies from being misleading. Just because something is technically stated doesn’t mean it’s actually transparent.
True! Since you’ve worked with building an AI-based tool or system, what can you share about where the data came from and how it was obtained?
When I worked as a clinical data scientist, our data came from the patients, the doctors, the products used, and anything else that might be related. HIPAA compliance and anonymization were obviously implemented, but everything from the case, and anything connected to the patient’s health, was tracked for up to a year or so to ensure product efficacy and safety outcomes. Of course, patients are only admitted into the trials after meeting certain requirements and understanding that their data will be used and studied.
Were there any limitations on how patients’ data could be used, or by whom? Or was it generally pretty open? I can imagine that for research purposes, the companies would want to keep the use of the data unrestricted.
Patient privacy is actually very important, so not just anyone could access the data. Even when I worked directly on a specific clinical trial, I would have to request access to that trial’s data. And after a certain amount of time, we had people who would check back in to see if data access was still needed to do the analyses.
Different clinical trials had different people responsible for overseeing the databases, while data scientists would have to request access and determine which variables from the given data were important for the efficacy and safety analyses. It was quite a hassle at times to figure out who was in charge of which clinical trial and to request access, but it goes to show how much the company valued our patients’ data. I can’t speak for all companies, but that was how my company handled data privacy.
(7) As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Oh, I’m sure there are so many, but facial recognition for one. Recommendation systems in our shopping habits or content consumption would be another. I personally find recommendation systems incredibly helpful when it comes to finding new shows to watch or new songs to listen to.
(8) Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
I think it’s well known that the majority of companies nowadays, if not all of them, definitely collect and use our data. Terms and conditions give us a sense of acknowledgement, but I don’t necessarily think we typically have much of a choice in declining if we want to use the product. And even if we choose to opt out, more often than not it’s a real hassle to actually do so.
As a data scientist, I would like to think that the data I provide is a minuscule part in the grand scheme of it all. Still, I would hope that I fly under the radar a bit when it comes to targeted data usage, but that doesn’t mean I’m unaware of how tracking works.
(9) Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
Fortunately for me, I haven’t experienced any such issues that I can recall, so I can’t speak much on that matter.
(10) Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I think the most important thing AI companies can do is be open about the work and research they’re doing, and what they’re learning. For example, I believe Anthropic comes off as an AI company that is more open and transparent than others. They talked about safety and alignment very early on, write extensively about how Claude is trained, and explicitly state what it isn’t trained on. But more than that, I think the position and tone a company takes really show where its priorities are. Anthropic’s tone and documentation make you feel like they’re focused on ethics and alignment, which isn’t always the case in this field.
(11) Anything else you’d like to share with our audience?
Thanks so much for having me, Karen!
I really love how you humanize AI by showing the reality of how people actually use it in everyday life. That really resonates with me, because that’s exactly what I’m trying to do with my own work: giving it a more humanistic lens to make the complexity of it all feel a bit more intimate. It’s really nice to be able to speak about this not just as code or systems, but as things that actually affect us and how we live.
If anyone reading this wants to connect, exchange ideas, or chat about research, especially around memory, cognition, or computational modeling, I’d genuinely love that. Feel free to reach out to me directly. I love a good conversation with curious minds.
Thank you again!
Interview References and Links
Emily on GitHub (direct contact)
Emily on Substack (The Nth Dimension)
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% human-authored, 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber:
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”
Credit to for the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created:
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)