AISW #078: Jeremiah Cioffi, JD, MA, USA-based attorney and AI entrepreneur
Written interview with USA-based attorney and AI entrepreneur Jeremiah Cioffi on his stories of using AI and how he feels about AI using people's data and content
Introduction - Jeremiah Cioffi, JD, MA
This post is part of our AI6P interview series on "AI, Software, and Wetware". Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack. If it doesn't fit in your email client, click here to read the whole post online.)
Note: In this article series, "AI" means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and "AI Fundamentals #01: What is Artificial Intelligence?" for reference.
Interview - Jeremiah Cioffi
I'm delighted to welcome Jeremiah Cioffi from the USA as my guest today for "AI, Software, and Wetware". Jeremiah, thank you so much for joining me for this interview! Please tell us about yourself, who you are, and what you do.
Hello! My name is Jeremiah Cioffi, and I live just outside Washington, D.C., though I am originally from Vermont. I spent twelve years on active duty in the Army, first as a military intelligence officer and then as an attorney. I currently work as an incident response and data privacy attorney at Octillo Law, as a defense counsel in the Army Reserve, as an adjunct professor at Vanderbilt Law School, and as an advisor at an AI start-up called AkivaOS.
Wow! You sound busy!
What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I started using AI several years ago for personal research. A ChatGPT query here and there to get me on the right path. My use evolved when I attended graduate school at the Johns Hopkins School of Advanced International Studies. I created podcasts out of assigned readings using NotebookLM and listened to them while exercising and in the car. I also became fascinated by the game GeoGuessr, where players are dropped into a random location via a street-level panorama and must guess their position on a world map. I used LLMs to learn how to identify locations by uploading an image and prompting the model to walk me through how it identifies the location.
Another great use case is resume building. I recently conducted a job search and used various LLMs to build my resumes. I would submit a complete job description and ask the LLM to provide me with key terms to use in my resume based on the description. I also used an LLM to create my current exercise plan, tailoring it with strict prompts and fitness goals.
Another use case is AI detection tools such as Grammarly and GPTZero. I use these when I suspect someone used AI to generate a document. I input the document, and the tool gives a percentage breakdown of how much of the text is likely human-generated and how much is likely AI-generated. The tools are not perfect - for instance, people say that em dashes indicate AI use, but I personally love using an em dash. But they are helpful, especially when the AI-generated percentage is high.
I love using em dashes too, and I see lots of writers comment that they doggedly refuse to give them up! I'm curious, what do you do when you conclude that a document was most likely written by someone using AI?
It's an interesting question. In a context where I hope to learn either something from or about the author, I feel let down when I discover it was likely written by AI. It loses any sense of being genuine and makes me question the author's motives. In an educational or professional setting, it further leads me to investigate whether the author violated any ethical principles.
The nuance here is that if the author is up front about the use of AI, either in the body of the text or a footnote, my feeling of disappointment disappears. In a context where the author is immaterial to my purpose - such as in a factual documentation of an event - I don't feel quite as let down. But it does lead me to more thoroughly check underlying sources to ensure accuracy.
That makes sense. What else can you share about how you got involved with AI technologies?
I became so fascinated with AI during my master's program that I decided to study it for my thesis. I had been depressed reading about how authoritarian regimes use AI to marginalize and surveil their citizens and set out to identify how opposition groups use AI to challenge these regimes. I identified three positive use cases centered on 2024 election cycles.
(1) In Venezuela, journalists were targeted for reporting news counter to the Maduro regime in the wake of the 2024 elections. They turned to AI, creating AI newscasters to report the news, which insulated individual journalists from regime repression and enabled them to provide a fuller picture of life in Venezuela.
(2) In Pakistan, jailed opposition leader Imran Khan and his social media team used ElevenLabs to create speeches from his jail cell. Khan passed notes to his social media team through his legal team. His social media team turned the notes into speeches and uploaded hours of Khanās voice to ElevenLabs, resulting in speeches in Khanās voice that they then broadcast to his followers, galvanizing them ahead of the 2024 elections.
(3) In Belarus, the opposition endorsed an AI candidate for parliament named Yas Gaspadar. He was a chatbot built on ChatGPT. While he was not allowed on the ballot, he was a source of otherwise-censored information for Belarusians who sought opposition viewpoints. These innovative uses of AI had real-world impact, primarily through creating a more open information space and protecting people the authoritarian regimes sought to repress.
Those are three great positive examples of using AI for support to marginalized people!
Once I became familiar with AI, I sought ways to work it into my professional life. This led me to AkivaOS. I started working with Awab Shamsi, the founder of AkivaOS, as an advisor because I was drawn to his vision for transforming how AI and automation can handle high-stakes, compliance-heavy workflows, and how it can do so transparently and ethically. I've had the opportunity to work closely with the incredible AkivaOS team and witness their orchestration of complex workflows in legal, compliance, and regulated industries, using a blend of ChatGPT, Gemini, Claude, retrieval-augmented pipelines, and human-in-the-loop review to deliver speed, precision, and security at scale.
Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn't? What went well and what didn't go so well?
(Reference for this question: "But I don't use AI": 8 Sets of Examples of Everyday AI, Everywhere)
One of the coolest projects I've been involved with was advising Team Akiva as their engineers built out a Freedom of Information Act (FOIA) request automation workflow inside AkivaOS. This workflow is complex - hundreds of pages, cross-referenced exemptions, and strict deadlines. Traditionally, processing a single FOIA request could take weeks. The team designed a workflow where GPT-4, Gemini, and Claude acted as specialized AI agents within AkivaOS:
One agent classified documents based on exemption codes and sensitivity.
Another summarized findings and drafted plain-language responses.
A third agent cross-checked everything against statutory guidelines via retrieval-augmented knowledge bases.
And critically, human-in-the-loop checkpoints ensured attorneys and paralegals reviewed every decision before release.
The result? A process that used to take 40+ hours was reduced to under 6 hours - without sacrificing accuracy or compliance. It wasn't perfect at first. Early on, we noticed that LLMs occasionally hallucinated exemption codes or misinterpreted edge-case statutes. The team quickly responded by adding structured prompt chaining, a fact-checking layer, and agency-specific RAG pipelines.
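To make the shape of that kind of pipeline concrete, here is a minimal, hypothetical sketch in Python. It is not the AkivaOS implementation: the agent functions are stand-ins for calls to models like GPT-4, Gemini, or Claude, and the document fields, exemption codes, and review prompt are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class FoiaDocument:
    doc_id: str
    text: str
    exemptions: list[str] = field(default_factory=list)  # e.g., "(b)(6)" personal privacy
    summary: str = ""
    statute_check: str = ""
    approved: bool = False

def classify_exemptions(doc: FoiaDocument) -> FoiaDocument:
    """Agent 1 (stand-in): tag the document with likely FOIA exemption codes.
    A real pipeline would call an LLM with a classification prompt here."""
    if "personnel file" in doc.text.lower():
        doc.exemptions.append("(b)(6)")  # illustrative code only
    return doc

def summarize(doc: FoiaDocument) -> FoiaDocument:
    """Agent 2 (stand-in): draft a plain-language summary of the findings."""
    doc.summary = f"Document {doc.doc_id}: {doc.text[:80]}"
    return doc

def cross_check_statutes(doc: FoiaDocument) -> FoiaDocument:
    """Agent 3 (stand-in): check claimed exemptions against statutory guidance,
    e.g., via a retrieval-augmented knowledge base, and flag anything uncertain."""
    doc.statute_check = "flag for review" if doc.exemptions else "no exemptions claimed"
    return doc

def human_review(doc: FoiaDocument) -> FoiaDocument:
    """Human-in-the-loop checkpoint: nothing is released without sign-off."""
    print(f"[REVIEW] {doc.doc_id}: exemptions={doc.exemptions}, check={doc.statute_check}")
    doc.approved = input("Approve for release? (y/n) ").strip().lower() == "y"
    return doc

def process_request(docs: list[FoiaDocument]) -> list[FoiaDocument]:
    """Run each document through the agent chain, then require human approval."""
    return [human_review(cross_check_statutes(summarize(classify_exemptions(d))))
            for d in docs]

if __name__ == "__main__":
    batch = [FoiaDocument("DOC-001", "Excerpt from a personnel file..."),
             FoiaDocument("DOC-002", "Meeting minutes, nothing sensitive...")]
    released = [d for d in process_request(batch) if d.approved]
    print(f"{len(released)} of {len(batch)} documents approved for release")
```

The point of the structure is that the agents only propose; the human_review step is the gate, mirroring the checkpoints described above.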
This experience reinforced something we should not lose sight of: AI's real power isn't in replacing people; it's in augmenting human expertise and unlocking efficiency without losing control.
That sounds like a constructive use of AI. I'm curious how their experience with human reviews and trust in the results will play out. On the one hand, the human-in-the-loop checkpoints are clearly essential. But there are reports that over time, humans tend to trust the AI system more and their oversight becomes less effective. So that's something I watch out for, and I'm looking forward to someday hearing about an AI company handling it well!
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
My basic rule for using AI-based tools is that I never allow the tool to have the final say. Let me illustrate this with a common example. When I research a new topic, I often use an LLM to identify and summarize on-point secondary sources. But I NEVER blindly accept the output as truth. I review the summary to get a sense of the source and then independently verify the source - both for its existence and for its content. This is an area where lawyers get into trouble. It seems every couple of weeks there is a news story about a lawyer who submitted a written product to a court containing hallucinated, LLM-generated case citations. Not only is this unethical and lazy, but it's also allowing the AI tool to have the final say. I don't let AI have the final say.
That's definitely wise. I'm curious, of all of the LLMs out there, do you find yourself preferring one tool over another for researching new topics? For instance, some of the LLMs now tout their ability to provide specific source links for their citations. Others don't yet have it, or they may only offer it in higher-level plans.
My favorite tool is Perplexity Pro. It allows you to select from a variety of LLMs and offers three different modes: search, research, and labs. I find myself using the research mode most often. I've found that its source links for citations are largely accurate. I also like that Perplexity Pro has an incognito mode where my activity is not saved.
I also like the GPT-5 prompting guide (https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide). I've gotten better results across LLMs when I incorporate its prompting guidance. One of the tips is to prompt different reasoning effort levels depending on the task. Reducing reasoning effort limits tangential tool-calling and latency, which is helpful for simpler tasks.
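As a quick illustration of that tip (not taken from the guide itself), here is a minimal sketch assuming the OpenAI Python SDK's reasoning_effort parameter on a reasoning-capable model; the model name and the simple keyword-based routing are placeholder assumptions, and other providers expose similar controls differently.

```python
# Minimal sketch: match reasoning effort to task complexity.
# Assumes the OpenAI Python SDK and a reasoning-capable model; the model name
# and the keyword-based routing below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def pick_effort(task: str) -> str:
    """Crude router: spend less reasoning on simple, mechanical requests."""
    simple_prefixes = ("summarize", "reformat", "translate", "list")
    return "low" if task.lower().startswith(simple_prefixes) else "high"

def run(task: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5",                       # placeholder model name
        reasoning_effort=pick_effort(task),  # lower effort -> lower latency
        messages=[{"role": "user", "content": task}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(run("Summarize the key FOIA exemptions in two sentences."))
```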
As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
No - most companies are vague when it comes to training data sources. When providers say "trained on publicly-available data," that could mean anything from open datasets to scraping copyrighted books, private blogs, or personal health forums, often without creators knowing. The lack of transparency can create legal and ethical risks; for instance, users can generate derivative content that carries IP liability. It can also erode trust. If creators don't know how their work is used, confidence in the tools collapses.
Absolutely. Have you seen any AI tool yet that does a good job on this? I like to call out companies that are trying to do the right thing.
While I have not seen enough transparency from AI companies, I am heartened by the movement to establish this transparency. From the European AI Act to nonprofit organizations such as the Mozilla Foundation, we are seeing both legal and public pressure to force more openness on training data. I expect to see this effort grow in the near future.
If you've worked on building an AI-based tool or system, what can you share about where the data came from and how it was obtained?
AkivaOS segregates sensitive data, secures client pipelines, and requires explicit consent for fine-tuning. They use tools like ChatGPT, Gemini, and Claude, and make sure they clearly articulate this use and explain to users how these tools train their models and use the information they input.
It sounds like your customers are providing the sensitive data and youāre taking steps to sequester it, maybe with multi-tenancy or a similar capability. So thatās good.
As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
During my transition from active military service to the civilian sector, I elevated my LinkedIn game. This included providing robust amounts of professional information. It did not even cross my mind that this information could be used by an AI-based tool. It wasn't until I saw an announcement on LinkedIn giving users the ability to opt out of having their content used to train AI that I learned the error of my ways.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It's often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
About a year ago, LinkedIn gave users the option to opt out of having their content used to train AI. I have been on LinkedIn for more than a decade and never thought to dive into their terms and conditions to see how they were using my content. Once I found out, I opted out. Here is a link for folks interested in doing the same: LinkedIn and generative AI (GAI) FAQs
To LinkedIn's credit, they make it simple to opt out - all it takes is a toggle from "yes" to "no." However, I do not know for how long LinkedIn used my content to train AI, and they are not going back and removing my content from training data prior to my opting out. This episode solidified my resolve to take charge of my content. I need to review terms and conditions and take the necessary steps to opt out where appropriate. I recommend everyone undertake a review of their social media accounts to determine if they are comfortable with how their content is being used, and to change that use where they can.
LinkedIn seems to be handling their opt-out notifications better this year. In September 2024 they didn't handle it well at all. The default was opt-in unless you were in a country covered by GDPR or a similar regulation. And the opt-out they offered didn't cover their use of our data up to that point in time, which they did without getting our consent. (For anyone interested in this, see my Sept. 19, 2025 LinkedIn post.)
Has a companyās use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
No major issues so far - I think this is a combination of luck and good cyber hygiene. I use MFA, change passwords regularly, and spend time understanding privacy and security policies so I can choose an appropriate course of action. That said, some of my information has been part of various breaches, but in each case I acted promptly to change passwords and activate provided credit monitoring services. These actions helped minimize my privacy concerns.
Those are solid practices that it would be good for anyone to follow.
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
From my perspective advising at AkivaOS, it comes down to one thing: radical transparency. Right now, most AI companies are black boxes. People don't know what data was used, how models make decisions, or where their information goes. That opacity fuels fear and skepticism.
Here's what companies must start doing:
Be Honest About Data - Explain clearly what's collected, where it came from, and how it's used; no legal gymnastics.
Give Users Control - Let people opt in, opt out, and manage their data.
Build Accountability - Provide audit trails and explainable AI so users can trust not just the output, but the process.
This is the philosophy built into AkivaOS from the ground up. Because it serves legal, government, and compliance-heavy industries, Awab designed it with explicit consent, human-in-the-loop control, and full auditability baked in.
The companies that succeed in AI's next chapter won't just have the smartest models; they'll have the deepest trust. And from where I sit, watching AkivaOS evolve, that's the future AkivaOS is building toward.
You bring up a really good point on transparency about how models make decisions. Explainability has been a weak point for models based on neural networks. And consent - true informed consent - has been a big challenge for many companies (some might say all).
That's all of my standard questions. Is there anything else you'd like to share with our audience?
I encourage everyone to experiment with AI tools, and to do so responsibly. It will save you blood, sweat, tears, and, most importantly, time. A healthy foray into LLM use could be creating a workout plan or meal plan. Just make sure you don't give it the final say - confirm its output with a health professional! The following link is a great place to start: https://www.heart.org/en/news/2025/03/27/ai-can-serve-up-ideas-for-healthy-meals-in-a-snap.
That's a great suggestion, Jeremiah - and as you said earlier, always verify what an LLM tells you. Thank you so much for making the time for this interview!
Interview References and Links
AkivaOS website
Jeremiah Cioffi on LinkedIn
Jeremiah Cioffi on Bluesky
Jeremiah Cioffi on Substack
About this interview series and newsletter
This post is part of our AI6P interview series on "AI, Software, and Wetware". It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we're all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post "But I Don't Use AI".
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you're interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it's free)!
Series Credits and References
Disclaimer: This content is for informational purposes only and does not and should not be considered professional advice. Information is believed to be current at the time of publication but may become outdated. Please verify details before relying on it.
All content, downloads, and services provided through 6 'P's in AI Pods (AI6P) publication are subject to the Publisher Terms available here. By using this content you agree to the Publisher Terms.
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their "3Cs' Rule: Consent. Credit. Compensation©."
Credit to for the "Created With Human Intelligence" badge we use to reflect our commitment that content in these interviews will be human-created.
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too!)