ScarJo, pizza glue, Recall spyware? AI ethics & safety news 🗣️
The major AI companies have all taken some hits this past week with the discovery of bad behavior by their humans or by the AI systems their humans built. Here's a short rundown. (Audio; 5:47)
AI Ethics in the News
Alarms have been going off lately regarding inadequate attention to AI ethics, fairness, IP risks, safety, and privacy. Here are some examples from the major companies we'll be profiling in Parts 4–7 of our article series on the ethics of genAI for music.
Adobe
Adobe’s profile took a bit of a hit when it was disclosed on April 22 that Firefly (for images) wasn’t actually 100% trained on Adobe’s own stock images, as the company had claimed. Rather, Midjourney images (which were likely generated from unlicensed images) made up a small percentage of the training set (potentially over 1 million images).1
This has had two consequences:
Inclusion of Midjourney-generated images for training Firefly creates exactly the risk of IP infringement that Adobe was otherwise well-positioned to help their customers avoid.
The discovery has damaged Adobe’s credibility somewhat with regard to AI, since their original statement about their training data has been revealed as untruthful.
Google
Google has caught flak for multiple recent goofs in AI-based search results. The example that’s gone viral in the past 24 hours is recommending the use of glue to keep cheese from sliding off a pizza.2
The source turned out to be a probably-sarcastic Reddit post from more than 10 years ago. The original Reddit thread has come alive again, with:
people commenting on the correct amount of glue to use on pizza, and
others hijacking the thread with unrelated comments in hopes of getting their personal “facts” included in a future Google Search AI overview.3
The pizza glue incident is a great example of the importance of scrubbing content before using it in an AI model and presenting it as credible or authoritative.
Meta
Meta is coping with legal issues newly raised in Europe about unsafe, improper handling of children’s accounts and information.4
This is obviously an extremely serious issue that doesn’t need further comment.
Microsoft
Microsoft’s announcement of its AI-based Windows “Recall” feature this past week at the Build conference has raised serious questions about whether it’s spyware. It’s being flagged for potential security and privacy issues, such as automatically capturing screenshots that can include passwords and PII (personally identifiable information).5
This is contributing further to public distrust regarding privacy and security when AI is used.
OpenAI
OpenAI’s Superalignment team (roughly, the group responsible for developing AI guardrails to ensure long-term safety) has been decimated in recent weeks by departures.6
Chief scientist and co-founder Ilya Sutskever, who co-led the Superalignment team, quit on May 15.7 As context, he had been instrumental in the board’s initial firing of CEO Sam Altman for not being “consistently candid”, before Altman was rehired (all in November 2023).
Superalignment team co-lead Jan Leike resigned hours later, citing ethical concerns and under-resourcing of the Superalignment team.8
Other team members have left9, and the rest of the Superalignment team has since been disbanded and dispersed elsewhere within OpenAI.
A related major event, also involving OpenAI, is the debacle over the likely-unauthorized use of Scarlett Johansson’s voice for “Sky” in GPT-4o.10 There is some speculation online about whether the release of GPT-4o with an unauthorized clone of her voice was the “last straw” that prompted the resignations. However, there’s no evidence of a connection.
Relevance to our series on genAI ethics for music
As we reported in March, most people in the US (at least) already believed that content contributors deserve to be compensated for use of their work. The EU AI Act also sets transparency requirements around traceability for use of content.
All of these incidents are unrelated to music per se, although the ScarJo incident does involve voice cloning (which we’ve covered in earlier articles11). They do impact our understanding of the ethics of the companies involved, and they definitely affect public opinion about AI.
One consequence of these events is that public sentiment is solidifying around enforcement of content ownership across the board, as well as regulation to help ensure safety, security, and privacy. More US states are introducing laws to govern IP and NIL (name, image, and likeness) protections relating to AI, and talk at the federal level has picked up.
This is directly relevant to our series on music, since the “4Cs” (consent, control, credit, and compensation)12 are essential for treating music contributors fairly.
This is obviously a fast-moving area; I’ll post short updates as news comes out.
References
See this “AI for Music” page for a complete set of links to all posts and company profile pages in the article series on ethics of generative AI for music. Posts published to date, and pending, cover voice cloning and all 5 companies listed in this post.
“Legal risks loom for Firefly users after Adobe’s AI image tool training exposed”, by Constantine von Hoffman / MSN/MarTech, 2024-04-22
“Google promised a better search experience — now it’s telling us to put glue on our pizza”, by Kylie Robison / The Verge, 2024-05-23
“My cheese slides off the pizza too easily”, r/Pizza on Reddit
“Meta slapped with child safety probe under sweeping EU tech law”, by Ryan Browne / CNBC, 2024-05-16
“Microsoft’s new Recall feature for Copilot+ PCs criticized as ‘spyware’”, by Carl Franzen / VentureBeat, 2024-05-21
“Another OpenAI employee announced she quit over safety concerns hours before two execs resigned”, by Jyoti Mann / Business Insider, 2024-05-22
“Ilya Sutskever, OpenAI co-founder and longtime chief scientist, departs”, by Kyle Wiggers / TechCrunch, 2024-05-14
“Sam Altman gracefully thanked his OpenAI cofounder who quit. Then another exec quit hours later.”, by Meghan Morris / Business Insider, 2024-05-15
“OpenAI’s Long-Term AI Risk Team Has Disbanded”, by Will Knight / Wired, 2024-05-17
“Scarlett Johansson says she is ‘shocked, angered’ over new ChatGPT voice”, by Bobby Allyn / NPR, 2024-05-20
Credit for the 4Cs (consent, control, credit, compensation) phrasing goes to the Algorithmic Justice League (led by Dr. Joy Buolamwini).
Credit for the original 3Cs (consent, credit, and compensation) belongs to CIPRI (Cultural Intellectual Property Rights Initiative) for their “3Cs' Rule: Consent. Credit. Compensation©.”