The hidden costs of AI adoption

I don’t hate AI. I want to make that clear upfront. I spent part of the Christmas break building my own AI server. I use AI tools in my work. I see genuine potential in the technology for automating tedious tasks that drain time and energy, freeing us to do work that matters.

But I’m also a musician, and my work, like that of countless other artists, has almost certainly been scraped to train AI models. My permission wasn’t sought. I haven’t been compensated.

So, when CultureSuite started adopting AI tools more widely, I felt compelled to look harder at what we were signing up for. What I found troubled me. And what troubles me more is that hardly anyone in our sector seems to be talking about it.

The uncomfortable findings

Most of us know about the copyright issues. The scraped archives, the stolen art, the lawsuits. But the problems run deeper.

The human cost is staggering. Major AI providers have outsourced essential work to countries like Kenya, the Philippines and India. Workers are paid as little as $2 an hour to review and label content. These workers are exposed to graphic violence and traumatic material, often with minimal psychological support. Investigations have described conditions as “digital sweatshops”.

This is the hidden labour that makes AI ‘work’. Every time we use these tools, we benefit from it.

The environmental impact is significant. By one widely cited estimate, generating a single AI image uses roughly as much energy as fully charging a smartphone, and data centres require enormous amounts of water for cooling. For a sector that cares about sustainability, this should matter.

The bias problem cuts against our values. Research has shown that AI systems discriminate based on dialect and disproportionately associate women with domestic roles. For a sector that champions diversity and representation, using these tools uncritically risks undermining the values we claim to hold.

What I still don’t know

I’ve spent months on this research and I’m not close to having all the answers. I don’t know how to square AI’s genuine usefulness with its significant harms. I don’t know where the line is between pragmatic adoption and ethical compromise. I don’t know how small organisations with no technical expertise navigate this landscape when even those with resources find it overwhelming. I don’t know if collective sector-wide action would make any difference to companies operating at global scale.

What I do know is I’ve been researching this largely alone. And I suspect many of you are too.

What I do know

AI isn’t going away. We need to find ways to use these tools without compromising our ethical, moral and environmental responsibilities.

I see a lot of promise in the open-source community, where the focus is different: huge effort is going into making AI run on consumer-grade hardware, massively reducing its environmental footprint.

There is also a drive to train smaller models on similarly open-sourced data, ultimately giving us more focused tools with proven provenance for their training data.
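To make that concrete, here’s a minimal sketch of what running a small open model on your own hardware can look like. It assumes Python with the Hugging Face transformers library installed, and the model named is purely illustrative, not a recommendation:

```python
# A minimal sketch: running a small open-weight model entirely on your own
# machine, so prompts never leave your hardware.
# Assumes: pip install transformers torch
# The model name below is illustrative only; swap in any small open model
# whose data sourcing you have vetted.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B parameters; runs on a laptop
)

prompt = "Write one friendly sentence welcoming visitors to a community gallery."
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```

Because everything runs locally, no prompts are sent to a third party to be retained or trained on, and the energy cost is whatever your own machine draws.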

We shouldn’t be doing this separately

The cultural sector is full of people trying to figure out AI adoption in isolation. Some are rushing in. Some are holding back. Most are making decisions without full information because who has time to read academic papers on algorithmic bias when there’s a season to programme and tickets to sell?

I understand the pressure. The promise of AI solving our resource constraints is seductive, especially for a chronically underfunded sector.

But we’re better when we work together. We share knowledge about ticketing systems and CRM platforms. We collaborate on accessibility standards. Let’s bring that same collaborative mindset to AI.

What to look for

For those who want a starting point, here’s what my research suggests:

  • Search for green AI providers – A growing number of companies are building sustainable alternatives using open-source models that require a fraction of the energy.
  • Look for zero data retention policies – This means the provider doesn’t store your queries or use them to train their models. This is particularly important if you handle customer or artist data.
  • Check where your data is stored – GDPR compliance matters. Some providers guarantee European servers. Major platforms often don’t say where your data ends up.
  • Ask vendors direct questions – If a provider can’t explain their environmental practices, worker conditions or data policies, that’s a red flag.
  • Keep AI away from customer-facing touchpoints until you’re confident – Internal experimentation is low-risk. Where AI touches your audience or artists, proceed with caution.
  • Don’t force AI use on your team – Trust their expertise to know when it might be a useful tool to adopt.
  • Build skills, not dependencies – If your team can’t function without AI, that’s a vulnerability, not a strength.

An invitation

I’d like to hear from others grappling with this. What are you finding? What have you learned? If there’s an appetite, perhaps we can build something more formal collectively – a space for the sector to share research, vet tools and develop guidance. But it needs to be genuinely collaborative, not one company’s initiative.

If you want to be part of this conversation, you can reach me here. I can’t promise I’ll have answers. But I can promise I’ll listen.
