6 'P's in AI Pods
People
AISW #019: Roberto Becchini, software architect 🗣️ (AI, Software, & Wetware interview)


An interview with Italy-based software architect and artist Roberto Becchini on his stories of using AI and how he feels about how AI is using people's data and content (audio; 25:36)

Introduction - Roberto Becchini interview

This post is part of our 6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available in text and as an audio recording (embedded here in the post).

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Roberto Becchini, provided by Roberto and used with his permission

Interview - Roberto Becchini

I’m delighted to welcome Roberto Becchini as our next guest for “AI, Software, & Wetware”. Roberto, thank you so much for joining me today! Please tell us about yourself, who you are, and what you do.

Thank you, Karen. I'm Roberto Becchini. I'm a father, technologist, artist, and curious mind. I live in Italy and work for a global technology company as a system architect and technologist. I've been in technology fields forever. The first machine I touched was a DEC PDP-11 with a Winchester drive containing a whopping 20 million bytes of storage, quite large for the time.

Yet my interests span many other fields. Visual art and foreign languages predominate in this period. I'm also a non-professional artist and founded, with a group of other people, a small nonprofit local artist association where I live, to promote culture and artistic expression. This is who I am.

Study for Alcione - chalk and pastel pencils on paper. Roberto Becchini © 2023

Great. Thank you for sharing that background. And for those who are reading the article, you'll see an image of a gorgeous drawing by Roberto. You're very talented. 😊

Thank you.

Tell us about your experience with AI, machine learning, and analytics. Have you used it professionally or personally, or studied the technology?

My interest in Artificial Intelligence started many years ago, when the field was not really as important as it is today. 

I studied cybernetics and natural and artificial neural networks at university in the ‘90s. Since then, I've kept following the field's progress and maintained interest in a few specific areas related to AI:

  • Pattern recognition and content generation algorithms, 

  • The impact of technology on visual arts, especially painting, drawing, and photography, and on society, and

  • The enablement of these technologies in edge and far-edge devices: this is more related to my profession, but still within my area of focus for this field.

I still consider myself a practitioner in the field of AI, machine learning, and all the disciplines that revolve around the application of artificial neural networks and cybernetics. The field is immense and always evolving, providing a lot of stimuli for the curious mind. 

Alas, I’m a complete beginner in the analytics fields: statistics, data management, and data processing. While AI is not yet a major part of my personal or professional life, I keep myself up to date with training and reading, since the trends are quite clear nowadays. And as a technologist, I need and want to know as much as possible about this field. So yeah, I keep myself relatively involved.

That’s great context, Roberto - thank you for sharing it!

Can you share a specific story on how you have used AI or machine learning? And what are your thoughts on how well the AI features of those tools worked for you, or didn’t? What went well and what didn’t go so well?

The most interesting stories are hidden behind corporate walls, as one may guess. But on the personal side, my goals are:

  • to understand what tools are being developed for both individuals and businesses, 

  • to see what tools provide technical countermeasures against intellectual property and copyright infringement, especially in the field of photography, and

  • what technologies and architectures support the use of AI in edge devices and in non-connected or loosely connected scenarios.

So this is very interesting, quite fascinating, because it's not related to specific algorithmic or mathematical subjects. It's more about integrating the technology into certain types of devices.

As a simple user (thus not developing the core of AI technologies or their algorithms), I am experimenting, as many people are, with:

  • A few generative tools, such as OpenAI’s ChatGPT, the AI integrated into the Zoom chat application, Microsoft Copilot, Leo AI (an AI integrated into the Brave web browser), which is quite interesting - not that powerful, but good enough; and Midjourney and Perplexity AI. There are many others I still need to try and explore, but these are what I've played with so far.

  • And then on the other side, Nightshade and Glaze, image poisoning tools from the University of Chicago that are meant to prevent, as much as possible, your images from being used for (or being useful to) AI training when you did not accept that use. So these are also two tools that I’m playing with - more interesting and impactful, in my opinion, than the generative ones, at least in my universe.

The experience with generative tools has so far been twofold:

  • On one side, the technology looks very promising, and for specific tasks it’s already very capable, provided you have a skilled human driving it and getting the results. Think about medicine, or research on new materials or new proteins. That's pretty amazing.

  • On the other hand, my generalist experiments often suffer from errors. Some of them are very subtle, so you don't see them at first, and they require knowledge to spot.

I ended up using generative AI to get inspiration, to address specific subjects or draft ideas, to overcome “blank page” syndrome, to kick-start. It's a sparring partner to generate thinking when you start something.

The most important outcome I got is that you still need a human in the loop, at least until machines are able to take initiative on their own. The human is still important in the work process, even if the role changes.

Then you need a human to program them. Today we program computers with computer languages, and the trend I've seen is that we're trying to transform these classical, traditional computer languages into something where you can tell the machine what you want in natural language, like English. The prompt is a way to give instructions to a computer. That's a kind of programming, a new type. It's new, so it will evolve; what we see today is probably in its infancy.

And then you generally need a human to interpret the results, unless you're blindly trusting a machine that can only be statistically right. And when it's wrong, you don't know why. So you still need the human.

I'd assume that more specialized, not generally available AI will provide more interesting and valuable outcomes - think of chemical research, finance operations, and so on. Those are probably more powerful than the ones that one can try for free or that are available to the general public.

So this is my experience with generative AI. The experience with Nightshade and Glaze is still at an early stage. Quite interesting; it looks promising. The bird picture attached to this interview is a photograph of one of my works, as you said, and it's poisoned with Nightshade - so, in theory, harder to use for AI training.

I need to trust the tool's creators and their advancement of the technology, since at present I don't have the means to test whether it's really doing what I think it's doing. Of course, there are tests; the possibility exists. I just don't have the opportunity to do that at this very moment.

The tool runs locally on your machine, so you don't need to upload your image anywhere. It uses the CPU or GPU. It's quite convenient, anybody can try it, and it's free to use.

Thank you, Roberto. These are great examples and insight, and I'll add the links to Nightshade and Glaze in the end notes of the article. I already have them on my ethical shoestrings tool page.

Could you talk a bit about your use of Nightshade to protect the drawing you shared above in this post? Obviously we don’t want to share the unprotected version online — but can you give your opinion on whether any differences between the protected & unprotected images are visible to your eye? Or whether someone who didn’t create the drawing would be likely to be able to tell that it’s protected? 

The Nightshade and Glaze tools are quite simple to use. The UI is super simplified, the settings are very minimal, and the controls are just a handful. This is an advantage, since the user might not want to be an expert in the underlying technologies that drive the tool. I attached to this interview a link to the arXiv paper that describes this technology.

The tool allows you to select the level of poisoning. The lower the level, the less visible the modifications are and the faster the generation of the poisoned image, especially if you don't have a powerful machine. And most likely, the resulting protection at a lower setting may be less effective.

Nightshade and Glaze do different jobs. They have different goals, but the controls you have are pretty much the same. It's about trade-offs.

So far I haven't seen any method to identify or remove the poisoned pixels from images. Consider that this is a classic attack-countermeasure race: Nightshade is the attacker, modifying and poisoning the content, while the defenders, the people training AI, will try to find ways to defeat these attacks. So it's still in progress, by no means a static situation.

I would assume this scenario continues, so we'll see improvements on both sides of this game. By the way, the modifications at the lowest setting are not really visible. At the higher settings, depending on the picture, there may be artifacts you can see. But all in all, it's okay so far, and I hope it will progress further.
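[Editor's note: to make the trade-off Roberto describes concrete, here is a toy sketch. This is NOT Nightshade's actual algorithm - Nightshade computes optimized, model-targeted perturbations - and the `poison` function name and bounded random noise are purely illustrative assumptions. It only shows how an "intensity" setting caps the visible per-pixel change, so lower settings alter the image less.]

```python
import random

def poison(image, intensity, seed=0):
    """Toy stand-in for an image-poisoning tool: add a bounded random
    perturbation to each pixel (values in [0, 1]). Real tools compute
    carefully optimized perturbations; here, `intensity` simply caps
    how far any pixel can move, modeling the low/high settings."""
    rng = random.Random(seed)
    return [
        [min(1.0, max(0.0, p + rng.uniform(-intensity, intensity))) for p in row]
        for row in image
    ]

def max_change(a, b):
    """Largest absolute per-pixel difference between two images."""
    return max(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

# A uniform gray 8x8 "image".
img = [[0.5] * 8 for _ in range(8)]
low = poison(img, 0.02)   # "low" setting: changes barely visible
high = poison(img, 0.20)  # "high" setting: visible artifacts possible

print(max_change(img, low) <= 0.02)   # True
print(max_change(img, high) <= 0.20)  # True
```

The point of the sketch is only the knob: a higher intensity permits larger pixel changes (potentially visible artifacts, and in real tools stronger protection), while a lower one keeps the image nearly untouched.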

That's good information, so thank you for sharing that. So how much effort was it for you to poison and protect this one image? Did it take a long time to run on your local machine?

Oh, yeah, quite a bit. That's due to my setup, which is a bit old now. This image took a couple of hours on a 4-core i7 CPU. My GPU was not sufficiently powerful for this, so I had to run it on the CPU, which is quite slow. On more modern machines, it could take between 20 and 90 minutes.

I've seen benchmarks on the recommended GPU types, which, again, I don't have. I would encourage people to give it a try. It's really interesting.

Yeah. That's great to hear. Thank you for that.

If you have avoided using AI-based tools for some things (or for anything), can you share an example of when, and why you chose not to use AI in that case? 

That requires a long answer, probably. AI, in all its forms, is just a tool. And as such, its usefulness depends on what one wants to achieve.

Also, the choice of tools is a very personal one, dictated by a number of circumstances that can differ from case to case. There is nothing inherent in AI that makes it a bad tool. As usual, it's the human or the organization wielding the tool that makes the difference. This applies to all technologies.

For the moment, I'm avoiding AI for generating art. It's a personal choice, since my path to art requires traditional tools: the materials, the effort, and the relationship with the work you're doing. It's super important. That is my reason.

And for the same reasons, I also avoid digital graphics tablets and computer art, even though I love them and know how to use them. The underlying technology is super fascinating, and it was one of my branches of study in the past. But again, my path is different as far as art is concerned.

As a business employee, I avoid public AI tools in order to minimize the risk of leaking company, customer, or personal information. That's the practical reason I try to avoid public AI tools, unless I'm experimenting for personal use - so nothing really big.

In this context, the use of these tools must be considered very carefully. Normally, they are regulated inside the business, and I must say that AI/ML-related technologies will be a key factor in the future for the automotive field, for example. Use of AI in a controlled environment is, of course, very welcome. But for the moment, as a business employee, I just stick with company directives.

As a consumer, I try to avoid tools that do not respect IP or copyright, even when that disrespect is enforced in fine print and service contracts. Adobe's case has been in the news recently. I try to avoid tools that limit or remove access to the means of my intellectual and physical production. I avoid tools that diminish my value by augmenting value elsewhere. I don't want that; it's alienating.

And at present, the use of AI-powered technologies presents a number of what I consider huge downsides. Some of them are not specific to AI tech alone, but they are there and must be considered very, very carefully.

You mentioned IP and copyright concerns. A common and growing concern nowadays is where AI and machine learning systems get the data and content they train on. They often use data that users put into online systems or publish online.

You mentioned Adobe. That's a case where artists and photographers put their images online, and then found out that they were being used. And companies are not always transparent, even at the beginning, about how they intend to use our data when we sign up.

How do you feel about companies using data and content for training their AI and ML systems and tools? For instance, should ethical AI tool companies get consent from (and compensate) people whose data they want to use for training?

We are observing in the news cases of corporations allegedly stealing IP and copyrighted content from the public domain, disregarding the licenses it's published under. Just because something is public doesn't mean the license allows you to take it.

This is, of course, something that should be countered, and there are already laws that governments can enforce for that purpose. Some of them may need a review to rebalance the power between big corporations and nations - that is, citizens. Right now, I think the balance favors the former. Citizens, consumers, stakeholders, and business customers can also do their big part in steering these companies' behavior in the right direction.

Consent and compensation are a good starting point as long as they are fair:

  • Consent that is free from restrictive clauses, where declining shall not prevent the rest of the agreement from being executed, especially if I’m already paying for the service or the product.

  • Compensation that is proportional, indexed to the company's public revenues. Compensating peanuts for content that generates trillions in revenue is not really fair.

And interestingly, these considerations can apply to all the data we give to corporations, not just data for AI training.

Yeah. That's an excellent observation, Roberto, because the training data is only part of the data that these tools actually gather and use.

When you’ve USED AI-based tools, do you, as a user, feel like you've been informed about where the data that's used in the AI models came from, and whether the original creators of the data consented to its use? Not all tool providers are transparent about sharing this information.

Actually, I assume by default that the data is used to train AI and is taken without consent, or with consent given via accepting restrictive clauses. I normally look in the documentation to see whether they state the opposite explicitly. Some of them do. It's not that fair.

Yeah. I would guess that you wouldn't very often find such statements. I've looked too, and I very seldom find them. 

Is there any provider of a tool that you've used that does mention getting consent for the data that they use? If so, I'd like to give them a shout out and give them some credit.

So far, I have not found tools that allow you to selectively opt out of data gathering - or, better, that have no data gathering by default, so that you need to opt in for it. But that's my own sample, and it's very small. Most likely, there are some out there.

Remarkably, there are tools that deliver exactly what their main function is (for example, digital art production) and explicitly tell you that you own the data and that they don't train any AI with your work. I haven't tried it yet, but in my stream of news my attention was captured by procreate.com, a promising approach with apparently no shady terms.

DeviantArt is another one that explicitly protects, and gives users tools to protect, the material they upload, even if they tell you they cannot protect it from external scraping. That's fair; at least it's in writing.

Substack is also another good one, allowing you to opt out of AI training, even if I think an opt-in approach would be better. But at least the setting is there.

Agreed. Yes. So as members of the public, there are cases where our personal data or content may have been, or has been, used by an AI-based tool or system. Do you know of any cases like this that you could share?

Yes. Besides the cases we can now read about almost every week in the news (TSA, biometrics), another example, a peculiar one, made quite an impression on me. I use a popular chat application on my smartphone, and some months ago that application created digital avatars of my face by scraping my personal pictures, out of the blue. The avatars were made automatically, and I didn't even know about the feature until my avatar was available in the application to use as an icon, as a synthetic face.

It's impressive technology, I must admit. I was really impressed, and the avatars really captured my likeness, even if in a synthesized way. But it left me with a bitter feeling, both about the messaging application itself and about the moral compass of the company controlling it. So, not pleasant.

Wow, yeah - scraping your pictures like that without your consent feels creepy, even if creating an avatar ‘for you’ was well-intentioned. That’s not cool.

Do you know of any company that you gave your data or content to, that made you aware that they might use your info for training AI and machine learning? Or have you been surprised by finding out that they were using it for AI? It sounds like this is one case where you got surprised.

I know about the cases in the news and about the services I use. To spot shady terms, all it takes is a careful read of the terms and conditions of those services and products. In the vast majority of the cases I've seen, you have no real choice.

A word about choice. Very often, not using the service is not a choice one can really afford to make. Most, if not all, of the time in the modern age, in the western world, the choice not to use certain services cannot realistically be made, for several reasons.

Yeah. That's definitely true, especially in cases where government or financial services companies are involved, or even under the guise of security. It's all the more reason why their uses of our data need to be constrained more carefully.

Has a company's use of your personal data and content created any specific issues for you such as privacy or phishing? And if so, can you give an example?

So far, this has never happened to me. Yet I want to mention here a piece of art by a US filmmaker, Mitch McGlocklin, titled “Forever”, that addresses this specific issue. I’ll post the link in the interview text. I think it’s time well spent watching that 7-minute LiDAR-based movie, and the feeling of a life reduced to a set of data points may be unsettling. Give it a try. It's a nice piece, with a nice script.

Thank you for sharing that. I will definitely have to check out that movie, and the movie link is in the article for the reader. So thank you for sharing that!

Final question. Public distrust of AI and tech companies has been growing. What do you think is THE most important thing that AI companies need to do to earn and keep your trust? And do you have specific ideas on how they can do that?

In my view, it's very simple: do not harm society. Companies are part of society, not something alien to it. They should not harm society, and they should submit to regulations that try to protect citizens, even if this may reduce innovation, progress, and profit capabilities.

Debates around a recent European regulation on AI (the “AI Act”) show a bit of the battlefield between businesses and nations - that is, citizens. It's a fine balance, and so far it's a bit unbalanced towards businesses and corporations.

Yeah. It definitely shows the battlefield is active. The good thing is that there is now some traction on regulations, and the conversations are happening. So that's progress.

And public sentiment seems to be pretty strong, and it's growing as awareness is increasing. So that all seems good.

Yep.

Is there anything else you’d like to share with our audience?

Thank you, Karen, for including me in your interview series. It has been fun and useful to organize my thoughts around this important contemporary subject. You're doing a great job, and I always enjoy your Substack.

Oh, thank you, Roberto. Thank you. And thank you so much for making time to contribute to the conversation. It's been a great pleasure for me to learn what you're doing with artificial intelligence tools, how you're deciding when to use your human intelligence for some things, and how you feel about the use of your data. Appreciate it!

Thank you.

Interview References

Roberto Becchini on LinkedIn

Roberto Becchini’s DeviantArt portfolio



About this interview series and newsletter

This post is part of our 2024 interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or being affected by AI.

We want to hear from a diverse pool of people worldwide in a variety of roles. If you’re interested in being featured as an interview guest (anonymous or with credit), please get in touch!

6 'P's in AI Pods is a 100% reader-supported publication. All new posts are FREE to read (and listen to). To automatically receive new 6P posts and support our work, consider becoming a subscriber (free)! (Want to subscribe to only the People section for these interviews? Here’s how to manage sections.)


Enjoyed this interview? Great! Voluntary donations via paid subscriptions are cool, one-time tips are appreciated, and shares/hearts/comments/restacks are awesome 😊



Credits and References

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Thanks for reading 6 'P's in AI Pods! This post is public, so feel free to share it.

