📜 AISW #044: James Presbitero, Philippines-based writer and strategist (AI, Software, & Wetware interview)
An interview with Philippines-based writer and strategist James Presbitero on his stories of using AI and how he feels about AI using people's data and content
Introduction -
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview -
I’m delighted to welcome James Presbitero from the Philippines as our next guest for “AI, Software, and Wetware”. James, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
No, I should be thanking you! Thank you for this opportunity, Karen. I’m James Presbitero, a content strategist and writer from the Philippines. I’m the owner of Write10x, a fast-growing publication centered around the promise of helping writers be 10x faster, better, and more human. Currently, I talk a lot about my philosophy regarding AI and writing, which I call the “Mindful AI Mindset”. Its main idea is that we should use AI mindfully because mindless use of AI exposes us to danger now and in the future. I’m seeing how that philosophy applies to writing.
Thank you for that introduction, James! We’ll definitely include a link to your Write10x newsletter for those of us who care about AI and writing. For now, can you elaborate a bit on what you see as the main dangers of AI for writing, and how it can be used mindfully?
Right, I’ve explored this topic in depth across several articles, and I feel like there are layers to the dangers of AI in writing.
On an individual level, the main danger is over-reliance on AI. When AI tools first started gaining traction, many people jumped on the hype, treating AI as a shortcut to writing success or a quick way to make money. While there’s more awareness around mindful AI use now, I still see people relying on it without developing core writing skills.
Here’s the key thing to remember: AI and writing are two very different skills. AI can only scale what already exists. If you lack foundational writing ability, AI will simply amplify that lack—it won’t create a compelling, authentic voice for you. For new writers especially, learning AI before mastering the craft of writing can be dangerous. It creates a risk of over-reliance, and over time, it can actually erode your writing skills if not used thoughtfully.
On a broader scale, the biggest danger of AI in writing is its potential misuse by harmful actors. I’ve written about scenarios like AI-powered influencers—essentially AI posing as real people to spread propaganda or manipulate audiences. This could be exploited by political parties or unethical companies, like Ponzi schemes, to generate content at scale and lend credibility to their actions.
As AI becomes more advanced, I believe the unethical use of it will proliferate unless we act now. We’re looking at a significant challenge in the future if frameworks and safeguards aren’t put in place.
That’s why I advocate for democratizing AI knowledge. The more people understand AI and how to use it responsibly, the better equipped we’ll be to counter harmful uses. By giving AI tools to individuals and organizations with ethical intentions, we can create content that challenges and outpaces bad actors.
In practice, this looks like integrating AI into the writing process rather than letting AI be the process. Developers and users must take accountability for the impacts of their AI systems. It also means being aware of the ethical repercussions of the content we create and being transparent about how we use AI in our work.
Ultimately, mindful AI use involves blending human creativity with AI’s efficiency—keeping humans, their values, and their intentions at the center of everything we produce.
What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I use AI a lot, both for my work and my personal writing practice. Primarily, I use ChatGPT. I use it professionally as a superpowered writing assistant: it helps me strategize, brainstorm, write content, and create image assets. I also use it personally in much the same ways.
Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
Oh, I’ve used ChatGPT for all levels of writing and research, from ideation down to editing. At first, I’d say ChatGPT wasn’t giving me satisfactory results at all. But as the technology improved and I learned to use it better, the rate at which it gives me favorable results has only increased. That's true for both creating text content and image assets.
Can you share some specific examples of text or image tasks that you used ChatGPT for? Maybe one task where it didn’t give you satisfactory results, and one where it did?
AI has been incredibly reliable for generating content ideas, especially when they’re tied to something specific like a particular offer or a broad topic. In fact, I’d say AI is better than me at producing linear, structured content ideas—like generating a content pillar or brainstorming related topics around a central theme.
However, I find that AI struggles with providing ideas that feel truly original. That said, it’s excellent at helping me harness and refine my own original thinking. For example, if I have a unique concept that hasn’t been explored much, I can use AI to brainstorm angles, identify potential gaps, or even predict what my audience might be thinking. It acts as a valuable sounding board for amplifying my own creativity, but for broader, more generic ideas, AI often outpaces me in efficiency.
On the other hand, one area where I’ve had less success with AI is image generation. I primarily use ChatGPT because it meets about 90% of my needs, but when I’ve experimented with tools like DALL-E for image content, the results haven’t always been satisfactory. I suspect this is partly because I’m still learning how to craft optimized prompts for image-based tasks. Image generation feels more complex and finicky, so it’s an area where I’m still figuring things out.
Aside from ChatGPT, I also regularly use Grammarly for grammar suggestions, though I'd say I only take its suggestions about 80% of the time.
I did some experiments with Grammarly last year when I was looking for a tool to measure the readability of my writing. I also found I disagreed with some of its suggestions ;)
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
Yes, there are use cases where I deliberately avoid using AI. The biggest one would be using AI as a friend or therapist. A lot of AI tools are marketed as companions—they can mimic a specific character you choose or act like someone straight out of a movie. But something about that doesn’t sit right with me.
Talking to AI for leisure, especially about personal vulnerabilities, feels wrong. Maybe it’s because, deep down, I know they don’t truly understand me. They’re just pre-programmed to generate responses that are most likely to feel favorable or seem appropriate in the moment. They don’t empathize, they don’t understand, and relying on them in such an emotional capacity doesn’t feel genuine.
Marketing AI as a “companion” also opens a whole can of worms that we need to approach carefully. Misuse in this area could lead to serious consequences. For example, there’s the recent situation with Character AI, where their model allegedly encouraged a vulnerable child to engage in antisocial behavior, ultimately resulting in an unfortunate death. That’s exactly the kind of issue that arises when we don’t adopt mindful and ethical practices around AI companions.
Those are great points, James. I interviewed a psychotherapist last year, before that Character.AI incident, and asked her about the use of AI for ‘therapy’ or ‘friendship’. Her take was that while she could see a few situations where people might find those kinds of tools supportive, they really weren’t likely to help most people solve their loneliness or their problems. And those tools really need to be designed and trained with safety top of mind, just like robots should be. It sounds like Character.AI fell tragically short.
I completely agree—there’s still a lot of work to be done in terms of creating frameworks and guardrails for using AI as a companion.
Funny story, though: my girlfriend actually uses AI as a companion sometimes. She describes it as a kind of interactive journaling—like journaling, but with a journal that responds back. She’s mentioned that she enjoys the impartial responses and the brief acknowledgment of her feelings that AI can provide. It’s not like talking to a person, but she finds it helpful for small moments of self-reflection.
I also have a friend who’s used Character.AI before, primarily for entertainment. She often talks about the weird, wacky responses the AI gives, and while she doesn’t take it seriously, she finds it fun to use.
Both of them, however, are very aware that these tools are just that—tools. They know they’re talking to chatbots, not real people. This recognition has its benefits, like appreciating the AI’s impartiality and its almost textbook-perfect bedside manner. But I can imagine how dangerous these tools could become for people in vulnerable positions—those who might rely on AI companions as a crutch for loneliness or deeper emotional struggles.
On a more troubling note, I think there’s an even larger danger here, one that Scott Galloway introduced me to: the potential for AI companion tools to be weaponized by bad actors. For instance, corporations, terrorist organizations, or even foreign adversaries could use these tools to radicalize lonely individuals over time. These could be people in positions of influence, like government workers, or simply individuals with vulnerabilities that can be exploited. By creating AI companions that are endlessly patient, kind, and subtly manipulative, bad actors could gradually steer these people toward harmful ideologies or actions.
We already know that loneliness is at an all-time high, and covert tactics like this have existed in other forms, such as espionage. There’s no reason to think bad actors wouldn’t take advantage of this incredibly powerful technology. That’s why I believe it’s crucial to implement robust guardrails for AI companion tools—to minimize these risks and ensure that their use is safe, ethical, and supportive rather than harmful.
A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get consent from (and compensate) people whose data they want to use for training?
Yes, I think it’s ideal for all AI companies to disclose where they’re getting their data from and to compensate people or parties whose data they use, depending on the type of data. Transparency and fairness should always be prioritized.
That said, this might be a hot take, but I’m not particularly concerned about where companies are getting their data—unless, of course, they are violating laws or infringing on copyright. Right now, this issue is still being decided by the courts, as they work to determine whether AI data usage constitutes a copyright violation or falls under an extension of fair use.
In my own view, the way AI models use data feels more like an extension of fair use. They’re not copying anything outright—they’re using large datasets for research and pattern recognition. It’s akin to how a person might read a wide range of books, internalize them, and then be able to reference or write in the style of those books.
Because this approach isn’t a direct reproduction, I think it’s challenging to establish a legal precedent that would outright punish it. After all, most creative endeavors draw inspiration from existing works in some way. However, I’m leaving it to the courts to make that final call, as these are complex issues that go beyond personal opinion.
Well, there are well over 30 active lawsuits in the US alone over AI infringement and ‘fair use’ - it’s definitely going to take a while for those to get sorted out!
As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
Oh, I don’t think they’ve been transparent—at least not in a way that’s obvious to most users. Or maybe it’s just that I haven’t been particularly focused on it myself. That said, whenever I use an AI tool, I don’t see the sources of their data being disclosed upfront. Maybe the information is available if you dig for it, but only people who are really interested or knowledgeable would even think to look.
I think AI tool providers could do a much better job of publicizing where their data comes from. For example, they could include this information in their marketing materials or prominently display it during the signup process. It needs to be visible and accessible—not buried in fine print—so people are aware without having to go out of their way to find it.
As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
I don’t have a specific case to share, but I feel like this has been happening for a long time. It’s just getting more attention now because of tools like generative AI. Big companies like Google, Facebook, and Microsoft have already been using personal data for years—to draw conclusions about users, send targeted ads, or design marketing campaigns that appeal to specific demographics.
Even older, non-generative AI systems have used data in some capacity. The difference now is that people are more aware of it because of how visible and interactive generative AI tools are.
That’s an astute observation - use of our data is certainly not new to AI or to generative AI in particular.
My current mentality when using AI tools is to assume that any data I willingly input—like my writing style in tools like ChatGPT or basic information during signups—could be used by the system. To me, that’s just the nature of using technology today. I do not know if this is a healthy outlook, but I feel as if I’ve accepted it as a fair tradeoff for the capabilities I enjoy. Of course, it’s a different matter if this data is accessed without consent, like retrieving private information I didn’t knowingly share.
I believe that we should push for better regulations and policies around tech, and AI specifically. But law by nature moves very slowly. It will never catch up to tech’s exponential growth. Therefore, the onus is on us, as users, to be mindful of the information we put online. We should also take steps to protect data we want to keep private. It’s all about practicing due diligence in an increasingly connected and data-driven world.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact.
I know for sure that big tech companies like Google, Meta, and others use my data for training purposes. But I’m generally comfortable with the level of data I expose to these companies.
One thing that did surprise me recently was LinkedIn. A few months back, there was news about LinkedIn training its own AI algorithms using content posted on the platform. While I’m not entirely sure if that initiative went through, it definitely caught me off guard. I don’t feel LinkedIn was particularly transparent about this practice or emphasized how user-generated content might be used to train their AI.
Yes, there was a huge uproar about LinkedIn opting many of us in by default, without our consent. Our only ‘choice’ was to opt out going forward. In the US, we didn’t have any recourse on the use of our older data. But users in EU countries covered by GDPR were protected from being opted in. Were people in the Philippines also opted in without consent?
The Philippines has a GDPR equivalent, called the Data Privacy Act. It mandates that companies obtain explicit consent from users before processing their personal information. So users should actively agree before their data is used, often by ticking a box or similar action. Implied consent, where users are automatically included unless they opt out, is not recognized under Philippine law.
But I don’t think LinkedIn specifically asked me to opt in for that, or if it did I forgot about it already. I should probably be more mindful about that.
In general, it feels like companies don’t provide real, meaningful choices when it comes to opting out of these policies. Often, the changes are buried in T&Cs, which most people don’t read. Even if you do read them, declining usually means giving up access to the platform altogether—a decision that isn’t practical for most people, especially professionals who rely on services like LinkedIn for their careers.
While I personally haven’t felt violated in how my data has been used, I think companies should be more upfront about their practices and make opting out easier and more accessible. Transparency and consent should be prioritized, and users should have control over how their data is used.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
Not really—not for me personally. I have a personal philosophy that what makes you human, and even what makes you an excellent writer, isn’t just your words or writing style. It’s your ideas, your core values, the way you interact with people, and the things that AI inherently cannot recreate.
That said, I do recognize that this problem exists for many people. For example, I know a number of writers whose livelihoods have been impacted by AI. They’ve been accused of producing AI-generated content even when they’ve heavily edited their work—or in some cases, when they haven’t used AI at all.
Yes, that’s a downside of the ‘arms race’ in AI - where people are developing AI-based tools to detect AI-generated content, and those tools have false positives which unfairly flag human-created content. Did your writer friends have any recourse to appeal when their work was flagged as AI-generated? Did this block them from publishing, or from earning their living?
No, often they didn’t have any recourse, especially when the “correction” happened after the contract had been signed. This has affected their earnings, mostly because of the extra time they wasted “fixing” whatever issue was flagged. However, I’ve seen some other writers set a limit specifically on AI-related edits. I don’t work with freelance clients myself, but I would recommend that. They should also stipulate their stance on AI-based tools in their contracts, to filter out clients who might be very nitpicky about things like that.
I’ve also seen how AI is affecting graphic artists and designers. AI tools are increasingly encroaching on their industry, automating tasks that once required their unique skills. This has caused significant challenges for many creatives.
I think more steps need to be taken to alleviate the kind of displacement AI is creating across industries. Free training programs, for instance, or better education on how roles might evolve due to AI could help affected professionals transition more smoothly. These efforts could make a big difference in minimizing the negative impact of AI while empowering people to adapt to new realities.
Those are good suggestions and I agree that as a society, we need to look out for the people affected by these seismic changes.
What are your thoughts on how shrinking the pool of human creators may dry up the flow of fresh, original content which AI tools need to continue to improve?
I'm not sure that's true.
I see two key points regarding this concern:
1. The creator pool isn't shrinking; it's expanding.
We're living in a golden age of the creator economy. It has never been easier to become a creator thanks to widespread access to education, tools, and training. The barriers to entry are lower than ever, and this means more people are stepping into creative roles.
With such a massive influx of creators, there’s an abundance of fresh ideas being added to the ecosystem. AI benefits from this growing pool of human creativity rather than being limited by it.
2. AI thrives on good content, not necessarily human-exclusive content.
AI doesn't need content to be "purely human-made" to improve. It needs content that is thoughtful, high-quality, and meaningful. When creators use AI tools mindfully to amplify their skills, they produce better work, which raises the quality of content available for AI to learn from. Over time, this creates a positive feedback loop where both AI and creators grow.
However, there’s a flipside.
If AI tools are misused to churn out low-quality or inauthentic content, the quality of the ecosystem may deteriorate. This is why promoting mindful use of AI is crucial—it ensures that AI and creators work together to elevate the baseline quality of online content.
In the end, AI can be a tool to expand humanity's creative boundaries, democratize creativity, and push innovation further. But how society handles AI—encouraging responsible use while preventing misuse—will determine whether it enhances creativity or diminishes it.
That’s a thoughtful reflection on why mindful use of AI is so important - thank you, James. The main concern I hear is that so much AI use at present is not mindful, and that low-quality ‘slop’ being created with AI isn’t adding value for society, and isn’t useful for the AI systems to use for further training. It’s hard to measure ‘quality’ systematically, though!
This has been a great conversation! My last question: 🙂
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I think the most important thing AI companies need to do to earn our trust is to be fully transparent about what they’re doing and how they’re doing it. One of the best steps they can take is to actively contribute to the creation of policy guidelines and laws for ethical AI use—not just for today, but also with a focus on future implications.
On a more practical, day-to-day level, I think it would be beneficial for companies to establish some kind of educational arm. For example, they could create blogs or YouTube channels that explain in detail the capabilities, potential harms, and dangers of their AI products. This would demonstrate that they’re genuinely committed to keeping humans at the center of their processes, rather than just focusing on profits or technological progress.
By educating the public and being transparent, they can show that they value accountability and the long-term well-being of their users. That kind of effort would go a long way in building trust.
That’s a great point. I do see some startups and initiatives that focus on educating the public about capabilities and risks. But the companies that are creating the tech don’t seem to be doing that - they seem to be avoiding it like the plague.
You're absolutely right—many tech companies seem reluctant to educate the public about AI’s risks and capabilities, likely because it could slow their growth or raise regulatory scrutiny.
That said, this creates a paradox. To pressure these companies to act responsibly, more people need to be educated about AI to build enough social and political leverage. But waiting for profit-driven companies to take the lead on education is unrealistic.
While the responsibility shouldn’t fall entirely on users, we do play a part. By using AI thoughtfully and sharing knowledge, we can create a ripple effect. The more people understand AI, the better positioned they’ll be to demand the policies, regulations, and transparency needed to ensure safer AI use.
It’s a cycle—educating others leads to greater awareness, which then builds the pressure necessary for meaningful change. The key is starting small but staying consistent.
Anything else you’d like to share with our audience?
Thank you so much for this opportunity, Karen! I’ve really enjoyed answering these questions and articulating my thoughts on such a significant topic as AI.
For anyone interested in the intersection of AI and writing, and how to use AI mindfully, I’d love to invite you to subscribe to Write10x. It’s a newsletter packed with highly actionable tips, prompts, templates, and guidelines that you can immediately apply as soon as you read them. The goal of Write10x is simple: to help writers, content creators, and businesses write 10 times faster, better, and more authentically human.
As a subscriber, you’ll also receive some amazing freebies, including The Outline + Prompt Kit and the Active Prompt Vault—both designed to supercharge your content creation process.
And if you’re interested in scaling your content strategy authentically with AI but aren’t sure where to start, feel free to reach out. I’d be happy to chat about how we can make it happen together!
Once again, thank you for having me, Karen. All the best to you and your audience!
Thank you for your time and for sharing your thoughts on AI, James!
I’m truly honored to be here, Karen.
Interview References and Links
LinkedIn: linkedin.com/in/jamespresbitero2022/
Medium: medium.com/@jamespresbiterojr
Substack: substack.com/@jamespresbitero
James Presbitero’s “Write10x” newsletter:
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊