Wondering why some people are saying that most current AI tools (especially generative AI) weren't developed ethically or responsibly, why it matters, and what you can do? This deep dive is for you.
This article covers geographic locations of data centers and localized water usage issues: https://www.theguardian.com/environment/2025/apr/09/big-tech-datacentres-water (thanks to Janet Salmons for the link)
New article by Masheika Allgood on water use: https://open.substack.com/pub/eatyourfrog/p/there-isnt-enough-water-for-all-of
Another thought on concern 5, the impact of human use of AI tools on people & jobs: an Upwork study from July 2024 found a big disconnect between C-suite expectations of increased productivity and workers' experiences of how AI use actually affects their workloads. Interestingly, freelancers seem to be benefiting more than full-time employees. It would be helpful to see newer data across a wider pool of people in more than these 4 countries!
"Despite 96% of C-suite leaders expressing high expectations that AI will enhance productivity, 77% of employees using AI say these tools have added to their workload, and nearly half (47%) of employees using AI report they do not know how to achieve the expected productivity gains."
"Upwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence", https://investors.upwork.com/news-releases/news-release-details/upwork-study-finds-employee-workloads-rising-despite-increased-c, 2024-07-23
Full study is available at https://www.upwork.com/research/ai-enhanced-work-models
Here's an excellent, thoughtful article on generative AI and the myths that surround it:
"Challenging The Myths of Generative AI", Eryk Salvaggio / TechPolicy.Press, 2024-08-29
https://www.techpolicy.press/challenging-the-myths-of-generative-ai/
I came across an article today about how generative AI tools are falling short in the accessibility of their tool designs for low-vision or disabled users. This is a different kind of bias -- not what the tool generates, but how the tool can be used and by whom -- which I didn't really cover in this article. Here's the article for anyone interested:
https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities
Here are some additional references on environmental impact and water use that surfaced today:
UNEP (different link than I included in the original article, similar date): "Artificial Intelligence (AI) end-to-end: The Environmental Impact of the Full AI Lifecycle Needs to be Comprehensively Assessed - Issue Note", https://wedocs.unep.org/handle/20.500.11822/46288
"Using ChatGPT is not bad for the environment", https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for - examines calculations on water usage, points out flaws in older reporting on research, and compares genAI use to other daily activities that consume water. Worth a read; my initial thoughts, from a quick skim, are that it's not clear if 50k "questions" is a reasonable basis for comparing one year's worth of use of genAI to other activities, and in what adoption timeframe. The other factor is that water demand for data centers IS already directly competing with agricultural needs in some geographies. Even if worldwide supplies are abundant enough on paper, location does matter.
Please feel free to add any additional articles or references you come across.
Scale AI is yet another example of how AI companies or their outsourcers mistreat data workers, via Devansh:
https://substack.com/@chocolatemilkcultleader/note/c-99476073?r=3ht54r