📜 AISW #040: Aysu Keçeci, Turkey-based sustainability consultant
An interview with Turkey-based sustainability business development consultant Aysu Keçeci on her stories of using AI and how she feels about AI using people's data and content.
Introduction - Aysu Keçeci
This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content. (This is a written interview; read-aloud is available in Substack.)
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.
Interview - Aysu Keçeci
I’m delighted to welcome Aysu Keçeci from Turkey as our next guest for “AI, Software, and Wetware”. Aysu, thank you so much for collaborating with me on this interview! Please tell us about yourself, who you are, and what you do.
I’m a 24-year-old passionate sustainability professional with a diverse background. I have experience with Sustainability Business Development, AI & Sustainability, Corporate Sustainability, and Entrepreneurship. I’m currently living in Istanbul, Turkey, and preparing to move to Europe in the upcoming months.
My journey in sustainability began four years ago when my project, WE—a gamification-powered application designed to encourage recycling behavior, integrated with an IoT smart recycling bin—was recognized as one of the Global Top 50 projects in the Google Solution Challenge, an international sustainability competition. My story was also exclusively published on Google's official blog.
Since then, I’ve constantly worked to expand my skills and create impact.
Following this achievement, I initially founded the startup WE, offering solutions for corporate environments and partnering with the region's top industrial corporations. However, when the materials and chip crisis arose during the pandemic, I shifted my focus from hardware to software.
Then I founded GateZero — a gamification-powered SaaS platform designed for corporate sustainability. The platform leveraged AR [Augmented Reality] and gamification to promote sustainable behaviors among employees, tracked their progress through interactive sustainability dashboards, and provided automated reports for companies’ ESG [Environmental, Social, & Governance] reporting processes.
In addition to my entrepreneurial journey, I’ve also worked in sustainability-focused roles at technology companies. I’m currently taking on consulting roles at companies, mostly for their Sustainability Business Development needs, while also providing content creation support around sustainability, AI, and green innovation.
I also run Road to Earth 2.0, a blog dedicated to sustainability, AI, and systemic change. It’s a space for curious minds seeking to better understand our world, the systems we live in, and the future we’re shaping.
The blog is a place to envision a feasible alternative world with reimagined building blocks. It draws on elements of history, sociology, economics, and philosophy to offer a more holistic, macro-level perspective—aiming to contribute to the number, awareness, and, hopefully, the vision of those striving for this change.
That’s a commendable mission!
What is your level of experience with AI, ML, and analytics? Have you used it professionally or personally, or studied the technology?
I’ve come to realize that AI, ML, and analytics have seamlessly integrated into nearly every aspect of our lives—far beyond the moments we consciously notice. I’ve actively used these technologies in my previous startup experiences and in the products we developed. For example, in GateZero, I utilized AI for generating reports for companies, designing tools that analyzed current states, and provided actionable insights.
When it comes to generative AI, I frequently use it for media content creation. I’ve experimented with many different tools to generate visual content, as the models are constantly being updated. However, after certain updates, I don’t always get the results I’m looking for, which keeps me in a constant cycle of discovering new tools. It’s fascinating to see how much progress has been made in just a year—it’s truly impressive.
Additionally, since English is not my native language, I often use AI for quick grammar checks, which has been incredibly helpful.
One tool that has become a constant part of my daily life recently is Rewind. I feel like I’m just at the beginning of this journey, but the idea of having my augmented brain partner—an AI companion of sorts—within my digital world is exciting and full of potential.
Can you share a specific story on how you have used a tool that included AI or ML features? What are your thoughts on how the AI features [of those tools] worked for you, or didn’t? What went well and what didn’t go so well?
One specific example that comes to mind is when I was able to make website adjustments or add features to an application we were developing, even without much coding knowledge, by simply following GPT’s guidance. It was incredible not just to depend on my developer teammate but also to contribute directly to the project itself. In a way, AI feels like it’s reducing the exclusivity of knowledge, and I think that’s a beautiful thing. It eliminates the need to spend time on mechanical tasks that don’t require creativity, allowing more room for innovation.
Another instance was when we created a promotional video for GateZero, where I heavily relied on AI tools. AI proved invaluable in terms of cost-effectiveness and speed, enabling me to produce a video that met my expectations without being an expert in video production. This was about a year ago, and considering how much AI has advanced since then, I’m sure even better results could be achieved today.
I used several tools, including Runway, Eleven Labs, Adobe AI tools, and D-ID, to bring this project to life. With Runway, I generated unique stock videos and replaced my own appearance with a fun virtual character to better showcase the AR product idea—one of my favorite uses of AI. For voice generation in parts of the video where I wasn’t speaking, I used Eleven Labs, and when I ran into synchronization issues between visuals and audio, I resolved them using D-ID, avoiding the need for re-recording. Additionally, I used Adobe’s AI tools for tasks like resizing mockups and enhancing generated visuals.
Those are great examples of how you combined AI tools to accelerate your work. Can you talk a bit about your experiences with using AI in GateZero operations? Specifically, “I utilized AI for generating reports for companies, designing tools that analyzed current states, and provided actionable insights.” Which AI tool did you use? What parts worked well, and what parts didn’t work so well? For instance, did you ever find mistakes or ‘hallucinations’ in the reports or bugs in the tools, or did you need to iterate on the insights to get something you could deliver to companies?
Using the GPT-3 API, we developed several tools. For instance, one tool provided recommendations to improve a company’s sustainability performance based on their existing sustainability reports, structured within a framework we designed. Another tool analyzed sustainable behaviors among employees and generated actionable outputs and improvement suggestions.
While we didn’t encounter any critical errors—mainly because the tools weren’t handling highly sensitive tasks—there were instances where the model presented outdated information, as GPT’s knowledge didn’t include the most recent years at the time.
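For readers curious about the mechanics: here is a minimal sketch of how a GPT-3-based recommendation helper like the one Aysu describes might be wired up. It uses the legacy OpenAI Python SDK’s completion API; the prompt wording, model choice, and function names are illustrative assumptions, not GateZero’s actual implementation.

```python
# Minimal sketch of a GPT-3-based sustainability recommendation helper.
# Assumes the legacy OpenAI Python SDK (v0.x); all names and prompt text are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def recommend_improvements(report_excerpt: str) -> str:
    """Ask GPT-3 for improvement suggestions based on a sustainability report excerpt."""
    prompt = (
        "You are reviewing a corporate sustainability report.\n"
        f"Report excerpt:\n{report_excerpt}\n\n"
        "List three concrete, actionable recommendations to improve the company's "
        "sustainability performance, each with a brief rationale."
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3-era completion model
        prompt=prompt,
        max_tokens=300,
        temperature=0.4,           # keep outputs relatively focused
    )
    return response.choices[0].text.strip()

# Example usage:
# print(recommend_improvements("Scope 2 emissions rose 12% year over year..."))
```

The second tool Aysu mentions—analyzing employee behaviors and suggesting improvements—could follow the same pattern with a different prompt and input data.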
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
When it comes to AI—or technology in general—I consider myself an early adopter. I enjoy experimenting with and using new tools, and I rarely feel hesitant about it. At most, a few seconds of worst-case misuse scenarios might cross my mind before diving in. But that’s just part of any technological advancement, and ultimately, how we choose to use these tools—positively or negatively—is in our hands. For this reason, I actively use AI and love trying out new things.
That said, I try to distance myself from the culture of fast consumption as much as possible. When I want to deeply research or understand something, I still turn to books or human-written content rather than relying on quick AI-generated answers. These sources provide depth and nuance that AI often cannot replicate.
For example, in the context of art, while I find AI-generated music assets impressive and occasionally enjoy listening to them, I still prefer music created by human artists. One of the things that makes me connect with art is the story behind it and the bond I feel with the artist.
That’s a sentiment I hear often (and agree with) about music and art!
A common and growing concern nowadays is where AI and ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI/ML systems and tools? Should ethical AI tool companies be required to get consent from (and compensate) people whose data they want to use for training?
This doesn’t personally worry me too much, as long as something unique to me isn’t being copied or commercialized without my consent—like training an AI on Scarlett Johansson’s voice and monetizing it, for example. I think my lack of fear stems from my belief in the necessity and potential of AI.
I believe ethical AI regulations, restrictions, and oversight are absolutely necessary—and urgent.
I know many people feel that they’re helping humanity by generously allowing their work to be used for these purposes without compensation. That is commendable, and it sounds like you feel that way. Where I get uneasy is with companies basically forcing *everyone* to do this, against their best interests, when a few other people will profit greatly from it. And across the world, not everyone can benefit from the tools. That doesn’t seem fair.
I’m also unsure how realistic it has been, until now, to expect a clear and enforceable structure for distinguishing between public and protected data on the internet, especially considering the massive scale of information involved. AI models like LLMs require immense amounts of data to train, and naturally, they rely on public data from the internet, the world’s largest information source.
In an ideal world, there would be clear distinctions about which data is shared under specific licenses and permissions, ensuring only appropriately authorized data is used. However, in today’s reality, the internet lacks such a structured system to enforce these distinctions effectively, making it no surprise that issues arise when working with data at this scale.
Even the multi-billion-dollar film industry, supported by thousands of pages of legal texts, laws, and teams of lawyers to protect its content, struggles to prevent pirated copies of films from being shared as public content. Once these works are circulated across the internet in this way, they are falsely treated as public domain, regardless of their original protected status. Without a centralized authority or reliable tracking mechanism to determine content ownership and associated rights, it becomes nearly impossible to manage and monitor millions of pieces of data in such massive data pools.
Now, consider the vast amounts of data on the internet, much of which was previously deemed to hold little or no value. It’s hard to imagine this data existing in a structured format, clearly labeled as public or protected, and being managed effectively across the digital landscape.
I think things will work differently from now on, especially regarding consent, data ownership, and the compensation of contributors, as we have only recently encountered and faced the practical implications of these concepts.
I believe much of the concern comes from fears about certain creative professions becoming obsolete. However, when it comes to creativity, I’m not convinced it can truly be replicated or surpassed by AI. Creativity, at its core, is unique and holds an intrinsic value that can’t be replaced. Of course, this depends on both the creator and the audience, but as long as people continue to seek genuine creativity—and I believe they will, despite commercial pressures—its value will endure.
Studies are already showing that AI tools ARE cutting into livelihoods for many creators - writers, artists, musicians, and others - and diminishing their value in real terms. My sense is that the concerns go beyond creative work becoming obsolete, though, into fundamental questions of fairness and ethics.
Until AI advancements, humanity and nature have been somewhat isolated in their relationship, and when it comes to tackling climate change or global challenges, we’ve struggled to make sufficient progress on our own. We could certainly use all the help we can get. Now, we’re developing technology that can actively participate in problem-solving and design processes, even pointing out, “No, this isn’t beneficial”, and producing solutions that may surpass what we could imagine unaided. In that sense, AI holds immense promise.
One major concern is that most of the business people who say they want to solve all of the world’s problems are profiting tremendously from the tools they build with the data they are taking without compensating people, so their work isn’t exactly altruistic.
As a user of AI-based tools, do you feel like the tool providers have been transparent about sharing where the data used for the AI models came from, and whether the original creators of the data consented to its use?
Unfortunately, there is a clear lack of transparency when it comes to the data sources used for AI models, especially with something as massive as large language models (LLMs). Even the companies developing these models often don’t have a full understanding of where all the data originates.
In my experience, when working on products I’ve developed, we made a point of tracking and sharing the data we used and its sources because it was an important aspect for the company. However, since the scale of these products wasn’t as vast as an LLM, it was much easier and more practical to implement.
That makes sense.
As consumers and members of the public, our personal data or content has probably been used by an AI-based tool or system. Do you know of any cases that you could share (without disclosing sensitive personal information, of course)?
Since it’s something I encounter so frequently, I think I’ve become much less attentive to it. One of the more innocent AI interactions that sometimes feels strange is when WhatsApp suggests the next word I’m about to type—and gets it surprisingly right.
As I mentioned, because I encounter these situations so often, I’ve likely become desensitized to them, which makes it harder to recall specific examples. However, the moments that stand out the most usually involve audio rather than text. For instance, after discussing a particular topic with a friend, I often end up seeing ads or solutions related to that conversation shortly afterward.
That’s a common experience - several of my past interview guests have shared stories about this, too!
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you ever been surprised by finding out that a company was using your info for AI? It’s often buried in the license terms and conditions (T&Cs), and sometimes those are changed after the fact. If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs? How do you feel about how your info was handled?
Rather than examples of companies that might use my data, it’s easier for me to think of examples of those that likely don’t—like Apple, which feels more secure to me. Many companies probably ask for consent or disclose my data usage, but it’s often buried within pages of terms and conditions or casually mentioned during account creation. Keeping track of or preventing such usage feels like such a high-friction process that I usually don’t manage to do it effectively.
From my experience working directly with users, I’ve seen that almost no one reads the agreements or notices shown at the start of an app. In daily life, we’re exposed to so many of these interactions, often alongside regulations, that it creates a sense of fatigue—I think many of us feel this way. These regulations sometimes feel less like meaningful protections and more like a way to confirm that they’ve obtained our consent, even if it’s not entirely clear or deliberate. As a result, I rarely feel like I’m given a real choice in these matters.
Yes, there was a study showing that over 90% of people never read the terms and conditions. And since they’re so hard to understand, it’s hard to fault them for it!
At least for the cases I can identify, I choose not to give permission. However, when I consider the data I’ve unknowingly shared or how it may be inferred, those efforts don’t seem very effective in providing actual protection.
On occasion, particularly with beta versions of products I like, I do allow my data to be used if I believe it could genuinely benefit the product’s development.
Unfortunately, I’m no longer surprised when I find out a company has used my data. If anything, I react more strongly when I see that it has been used in a beneficial way. In many ways, this issue isn’t new with AI—it’s part of the broader, ongoing problem of data security that remains unresolved.
That’s true. You mentioned Apple. They have definitely cultivated a reputation for protecting privacy. So a lot of people are disillusioned by the recent revelations about Apple settling the lawsuit over using Siri on the iPhone to listen to people’s conversations and then selling the data about the conversations. How do you feel about learning that?
Every software and system we use collects data for analytics, telemetry, or to analyze our preferences and digital behaviors. At this point, what matters to me as a user is whether that data is collected in an anonymized way or in a manner that can be linked to my digital identity. This distinction makes all the difference in how I perceive the ethics and privacy implications of such practices.
That’s a good observation. In cases where the data is being sold to brokers and merchants for them to target selling to us, the data IS connected to us as individuals. If it’s being aggregated and used for general product improvement, like you mentioned with a beta version, that’s more acceptable. We don’t always know how they will use it, though.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy, phishing, or loss of income? If so, can you give an example?
While not directly related to AI usage, I recently experienced a challenging situation involving data insecurity. In Turkey, there’s an electronic platform called E-gov, a government service that provides citizens with access to various digital services. It enables many administrative processes to be handled efficiently online. However, your inclusion in this system is, of course, independent of your consent or choice.
Not long ago, it was reported that the personal data of nearly all citizens was likely stolen through this platform and is being sold online on platforms like Telegram for very small amounts of money. The stolen data includes identity information, addresses, phone numbers, information about relatives, and more.
In my case, someone who had only my phone number was able to find my address and use it to threaten me, claiming they could harm my relatives. This was a deeply unsettling experience and a stark reminder of how data insecurity can have real, tangible consequences.
I must emphasize again: data insecurity isn’t something new that arrived with AI. In fact, it has likely followed a steep upward curve since the advent of the virtual environment.
Oh wow. Having all of the government’s data on people breached is a nightmare! How awful that you were threatened like that 🙁 And you’re right, data theft and exploitation started long before generative AI tools became so broadly used.
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I don’t think this is a problem we don’t know how to solve—it’s something that companies either choose not to do or, in some cases, are unable to do. Transparency, radical transparency, and efforts to close the gaps that allow for misuse are what we all expect from these companies.
We also need information from companies like the MAANG giants or OpenAI that shows they’re trying to understand and address what’s happening within their systems. Yet, even they admit in interviews that they’re often uncertain about certain aspects. Sometimes, they avoid sharing information simply because it doesn’t reflect well on them.
For example, just as they fail to fully disclose the emissions data from the data centers driving the AI boom, they are equally opaque about the implications of their systems. What’s different here is that they often don’t fully understand the potential consequences themselves—or they don’t seem particularly motivated to figure it out, given the cutthroat competition of this capitalist environment.
I agree with you that the biggest obstacle is the financial factors that steer company decisions. Until they have a financial incentive to earn our trust by operating transparently and ethically, and by doing diligence on understanding the risks and issues, most of them won’t bother.
Anything else you’d like to share with our audience?
I’d like to invite everyone to join me in imagining Earth 2.0—a feasible alternative world with reimagined building blocks.
When we consider that our systems operate on supply and demand, the reason we haven’t solved the climate crisis or other global problems isn’t because we lack the means but because we haven’t truly wanted to solve them. So why, despite our constant complaints about the state of the world, haven’t we taken real action? Perhaps it’s because we struggle to imagine the rigid building blocks of an alternative world.
What if we knew there was a tangible alternative worth fighting for? Would things change? That thought is what inspired me to start my blog.
If you have ideas about what Earth 2.0 could look like—with all its foundational systems—I warmly invite you to share them. Let’s reimagine, learn more, and explore the possibilities and pathways together.
And Karen, thank you so much for inviting me. It was such a pleasure meeting you and doing this together!
Thank you for sharing your experiences and thoughts, Aysu! And I love that you’re focusing on inspiring people to envision what our new world could and should be. Good luck, and I hope we can continue this conversation and explore the parallels between the AI ecosystem and the climate crisis and other aspects of sustainability. Motivating companies and people to care about solving both is a core challenge!
Interview References and Links
Aysu Keçeci on LinkedIn
Road to Earth 2.0 on Aysu Keçeci’s Substack
About this interview series and newsletter
This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.
And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see post “But I Don’t Use AI”:
We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!
6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!
Series Credits and References
If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊