4 Comments

Hi Karen, I read the first two parts of your Ethics in Generative AI for Music series with great interest. Although I've been writing about developments in GenAI for a while, my focus has not been on ethics. I'm also an avid listener and music enthusiast, so this topic holds a particular appeal for me. I have a few points to ponder after reading your first "deep-dive" article on beatoven.ai:

1) If a tool did not use deep-learning, neural-net-based technology but instead used a "symbolic" formulation (doubtful whether this is possible, since symbolic AI was more or less abandoned after the early failures of the 70s, in contrast to the very optimistic view of AI in the 60s), would we still have ethical concerns? To put it more clearly: if no music were used to train the tool, and it instead relied on "algorithms" a human has formulated to produce "passable" music, what would the ethical concerns look like?

2) I believe a global agreement on properly marking AI-generated material is one of the building blocks of ethical generation. I am not sure whether there is any work on standardising the label format, etc.

3) Let us assume for a moment that the global AI companies agreed to properly compensate all of their source creators for any work created through their models; would we still have an ethical problem? Let me use an analogy: if an LLM were trained only on non-copyrighted material (e.g. works whose copyright has expired under U.S. law), would there be any ethical issues?

I'm not offering any solutions or reaching a verdict; I'm just trying to enlarge the scope of the ethics argument beyond a simple binary of white = ethically sourced and black = unethically sourced.

Please continue the very useful deep-dives you've been doing on these tools, for people who may not have the time or opportunity to try them all out (with more being introduced every day).

author

Thank you for sharing these comments and questions, and for the encouragement!

The emerging view on AI ethics is that unfair use of unlicensed source material is unethical. Ethical use requires the 4Cs: Consent, Control, Credit, and Compensation for contributors. This applies to source material built into a genAI model, as well as to source material a user provides to the tool as a reference.

Some quick replies to your thoughtful questions:

1) Whether symbolic or not, AI-based tools that rely on rules or patterns defined by human experts, and that don't exploit music creators, can definitely be ethical. Some do exist, like Beatoven.

2) As you noted, making ethical AI work requires traceability and transparency. Globally agreed standards are still some way off, but there is traction on identifying music and whether AI was used to create it (example: Content ID). See the rough sketch of a possible label after this list.

3) Some tools train their genAI models only on public-domain (not merely 'publicly available') material, or on commissioned works. Those tools can also be ethical. Beyond that, Compensation alone isn't enough: ethical use has to start with Consent and Control, and must include Credit.
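
To make the "label format" idea in point 2 concrete, here is a rough sketch of what a machine-readable provenance label for a generated track might look like, organised around the 4Cs. It is purely illustrative: the field names and structure are hypothetical and are not taken from Content ID or any published standard.

```python
# Hypothetical example only: a minimal machine-readable label a genAI music
# tool could attach to an exported track. Field names are illustrative and
# not drawn from any published standard.
ai_content_label = {
    "work_id": "track-2024-0001",              # identifier assigned by the generating tool
    "generated_by_ai": True,                   # plain yes/no flag for listeners and platforms
    "tool": "example-music-generator",         # hypothetical tool name
    "model_version": "1.0",
    "training_data": {
        "consent_obtained": True,              # Consent: contributors agreed to inclusion
        "contributors_credited": True,         # Credit: a contributor list is published
        "compensation_model": "royalty-share", # Compensation: how contributors are paid
    },
    "user_reference_material": None,           # Control: any reference audio the user supplied
}

# A platform could check the flag before distribution, e.g.:
if ai_content_label["generated_by_ai"]:
    print("AI-generated track; provenance fields attached.")
```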

What do you think? :)


I think you covered all my points. It's still early days, so I hope the AI industry will find the right way to stay ethical while still releasing great products.

author

Exactly! And I think the more we can all do to raise awareness and share ideas about ethical use, the more likely it is that the AI industry will get there quickly. 😊
