AISW #010: Sofia Zätterström, Sweden-based strategic tech & education executive 📜(AI, Software, & Wetware interview)
Interview with Sweden-based strategic tech & education exec & board member Sofia Zätterström on her stories of using AI & how she feels about AI using people's data & content
Introduction
This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.
Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and non-generative AI. See this Glossary for reference.
Interview - Sofia Zätterström
I’m delighted to welcome Sofia Zätterström as our next guest for “AI, Software, and Wetware”. Sofia, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.
My name is Sofia Zätterström, and I’m from Stockholm, Sweden. As an engineer with a deep passion for innovation and transformation, I have spent my career working across various roles and sectors, often in an international context. Currently, I serve as a strategic executive in tech and as a board member for two educational organisations. My commitment, both professionally and personally, is to contribute to sustainable development.
That is an admirable commitment!
What is your experience with AI, ML, and analytics? Have you used them professionally or personally, or studied the technology?
I am exploring AI, initially incorporating it into my professional activities, where it has proven to be highly effective and efficient. I've also taken two courses on AI, focusing on its practical applications and usage.
While I won't go into specific details, AI, ML, and analytics have naturally been important in my work in the tech sector. Additionally, in my role as a board member of educational organisations, we are actively exploring how to introduce various AI tools while ensuring that integrity and personal data security are not compromised.
Can you share a specific story on how you have used AI or ML? What are your thoughts on how well the AI features of those tools worked for you, or didn’t? What went well and what didn’t go so well?
As a parent (and not the best cook), I often use AI to find simple recipes for dinner 🙂. With my children, we’ve also explored using AI to get started with programming. While it might not yet replace a formal programming education, it’s a fun way to quickly see results and maintain their interest.
From a professional standpoint, I have several examples of AI’s impact. One experience was when I was involved in creating an educational program on sustainable development. Several research groups were responsible for writing different modules, and AI was incredibly useful in unifying the tone of voice across the content. It helped elevate the language to a level appropriate for our audience and ensured there were no overlaps between modules. Although everything was carefully proofread by humans afterward, I’m confident that we wouldn’t have achieved the same quality and consistency without the help of AI.
It’s great that you’ve found constructive ways to use AI in your home life. And unifying tone of voice from multiple contributors is a great example of leveraging large language models for what they’re best at, instead of generating content from scraped sources.
If you have avoided using AI-based tools for some things (or anything), can you share an example of when, and why you chose not to use AI?
I currently avoid using AI for sensitive tasks at work, such as strategy development, unless I have a clear understanding of how the data is stored and used. I refrain from utilising AI in projects involving personal data or other sensitive information. Initially, I experimented with using AI to create LinkedIn posts but quickly stopped, as I felt that AI-generated text didn’t truly represent me in that context.
I agree - and I think people can mostly tell when they’re hearing our authentic voices vs. reading pablum generated by a tool.
A common and growing concern nowadays is where AI/ML systems get the data and content they train on. They often use data that users put into online systems or publish online. And companies are not always transparent about how they intend to use our data when we sign up.
How do you feel about companies using data and content for training their AI and ML systems and tools? Should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training?
I appreciate the value of using diverse sources for training AI/ML systems, as it can lead to better outcomes and help mitigate issues like bias and discrimination that might arise if training data were solely sourced from the internet.
However, I also recognise the importance of fair compensation for creators and individuals whose data is being used. Just as a store pays licensing fees to play a song because it enhances the store's brand and sales, a similar model could perhaps be implemented for AI training.
Good comparison - and I know there are initiatives to develop the traceability and ‘data provenance’ to track which sources are being used in generating new outputs. That definitely needs to be part of the solution.
When you’ve used AI-based tools, do you as a user know where the data used for the AI models came from, and whether the original creators of the data consented to its use? (Not all tool providers are transparent about sharing this info)
I haven't built AI-based tools myself, only used them, and I have to admit that I haven't always known with complete certainty where the data used to train these models came from.
As a user, I believe it's crucial for companies to be more transparent about the origins of the data they use, so that we can make informed decisions about the tools we engage with. However, it's equally important that we, as users, start asking questions and educating ourselves on this subject.
I agree - I’m hearing more and more about how much people value transparency and building ‘data literacy’.
As members of the public, there are cases where our personal data or content may have been used, or has been used, by an AI-based tool or system. Do you know of any cases that you could share?
I'm aware that my personal data may have been used by AI-based tools, though I don't have specific examples. It has likely occurred in areas like social media, online test-taking tools, or when colleagues have introduced AI-powered note-taking applications in online meetings.
Yes, I’ve had all of those too. It’s always a bit concerning when a new AI-based note-taking ‘attendee’ shows up in a call. One of the participants may have opted in, but the other participants generally don’t have the choice to opt in or out, even though their participation will be recorded and analyzed.
Do you know of any company you gave your data or content to that made you aware that they might use your info for training AI/ML? Or have you been surprised by finding out they were using it for AI? If so, did you feel like you had a real choice about opting out, or about declining the changed T&Cs?
When we piloted Microsoft Copilot at work, we were informed about how our data was secured. However, I can't recall most companies explicitly informing me that my data might be used for AI/ML training. In many cases, it still doesn't feel like there's a real option to opt out.
Agreed - it’s not simple: we have to look at whether the use of data is made transparent, whether an opt-out is offered (or is the default), and whether opting out is really a viable choice.
Has a company’s use of your personal data and content created any specific issues for you, such as privacy or phishing? If so, can you give an example?
No.
That’s great to hear, Sofia; you are lucky, and I hope it stays that way!
Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? Do you have specific ideas on how they can do that?
I understand this is a tricky area because, on one hand, many of us eagerly anticipate further technological advancements, but on the other hand, we rightly raise concerns about how AI is used.
To earn and keep my trust, I believe AI companies need to prioritise transparency, actively combat bias and discrimination, involve diverse perspectives in development, and engage in open discussions about fair compensation for training data. These are key actions that could help maintain trust.
However, it's also important not to over-regulate at this stage, as that could stifle crucial innovation. Striking the right balance is definitely challenging!
Absolutely!
That’s a wrap for today. Sofia, thank you so much for joining our interview series. It’s been great learning about what you’re doing with artificial intelligence tools, how you decide when to use human intelligence for some things, and how you feel about use of your data!
Final thoughts
Sofia on LinkedIn
About this interview series and newsletter
This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or being affected by AI.
We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!
6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)
Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊