What does 2026 hold for Australia’s artists? They’re going to need us more than ever before: this is the year we pass irreversible turning points in AI’s devastation of artists’ rights and respect for their work, let alone any control we still have over its encroachment on our everyday lives. And yet instead of safeguarding against those risks and protecting artists, the global conversation is moving towards whether human rights should instead be attributed to AI itself.

How deep and how risky has that encroachment become? It’s not just AI slop – but that’s how the normalisation begins. 

The carols I heard in supermarket chain Lidl over Christmas, for example, were AI-generated – and I wasn’t the only one who noticed: a Reddit user complained “It makes me want to jump off a bridge every time I’m forced to hear it”. Last year a TikToker used an AI-generated image to fake a memory for her grandmother, a person with dementia, then shared her reaction in a video widely condemned as manipulative, exploitative and unethical. And political deepfakes have further deteriorated public debate and trust in democracy, from election campaign material to the gross “Trump Gaza” video glamorising genocide as real estate fantasy.

Writing for Time magazine in 2023, Mustafa Suleyman – now CEO of Microsoft AI – described AI as a radically democratic technology, announcing: “We are about to see the greatest redistribution of power in history”. Speaking to BBC Radio 4 in the last days of 2025, Suleyman made a stark confession: “I honestly think that if you’re not a little bit afraid at this moment, then you’re not paying attention.”

It’s not just artists: unchecked AI threatens all of humanity 

2023 was also the year that an eminent group of over a thousand tech leaders wrote an open letter published by the Future of Life Institute, demanding that their peers “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” to avoid “profound risks to society and humanity”. The letter sought a “public and verifiable” pause – enforced by governments, if not enacted voluntarily – affording time to introduce ethical and regulatory guardrails and legislate “liability for AI-caused harm”. With 33,705 signatures as I write, the experts caution that AI threatens the “loss of control of our civilization”. And while former OpenAI governance researcher Daniel Kokotajlo predicted last year that unchecked AI “releases a bioweapon, killing all humans” by 2027, he’s now a little less cautious – but only marginally: he now thinks humanity has got until the early 2030s.

Almost three years after the Future of Life letter there’s been no pause, no guardrails and no ethical frameworks. OpenAI has just advertised a “Head of Preparedness” role to address AI’s increasingly powerful psychological, ecological and military threats, as well as “new risks of severe harm” as yet unknown. In that same 2023 Time article, Suleyman was still welcoming AI tools that “will plan and run hospitals or invasions just as much as they will answer your email.” The distance between Cory Doctorow’s “enshittification” and Hannah Arendt’s “banality of evil” grows smaller and smaller.

At best, AI slop deadens our imaginations and desensitises our cultural and ethical judgement. At worst, unchecked AI transmogrifies creativity into a weapon of mass destruction. While there are many artists voluntarily and venturously using AI in their practice, for writers such as myself whose work has been stolen with neither consent nor compensation to profit AI developers, our involuntary complicity is truly sickening. 

Artists create our future; AI diminishes our imaginations

2026 is the year to champion artists’ rights and artists’ work as though the future of the planet were at stake.

We’re starting from a low base: despite copyright and moral rights legislation, using artists’ work without permission has long been normalised and the defence of their rights derided. In 2015, anti-migrant and anti-Islam group Reclaim Australia used Jimmy Barnes’ Khe Sanh at their rallies without consent, and when Barnes insisted they cease, their condescending response began: “We are deeply saddened”. In 2018, conservative politician Cory Bernardi made political use of Australian songs without the artists’ consent and, when challenged, ridiculed their objections, claiming that “music is for everyone”. This approach was recently echoed by Richard Grenell, president of the renamed “Donald J. Trump and John F. Kennedy Memorial Center” who, after musicians objected to the politicisation of their work, claimed that “true artists perform for everyone”.

Australian law in this area is clear: using music for any political purpose – whether at events, on social media, in videos, or any form of advertising – requires written approval and the payment of a licence. There’s hard work behind the cultural value created by Australian artists; just because a song, a painting or a poem is publicly available does not make it copyright-free.

Rather than moving towards improved understandings of artists’ rights, we’re seeing the conversation shift in the opposite direction: the question is now being asked whether AI should be granted recognition akin to human rights. This is a grotesque reversal of exploitation: any AI rights negotiation must begin and end with the rights of the artists exploited. Anthropomorphising AI to shift the debate on what happens if it threatens us only deflects responsibility away from the real people causing real harm. After all, even the gun lobby’s wildly popular yet patently nonsensical “guns don’t kill – people do” slogan sets the responsibility squarely on the human being wielding the weapon.

Late last year the Albanese government ruled out any Text and Data Mining (TDM) exemption that would excuse copyright breaches by AI developers, but stopped short of introducing regulations, launching instead a National AI Plan that establishes an “AI Safety Institute to monitor, test and share information on emerging AI capabilities, risks and harms”. Monitoring alone is not enough: without urgently implemented regulation, these risks and harms will continue to multiply.

Centre artists in policy to make their work visible – while AI scrambles to make all work invisible

Having led the world on banning young people from harmful social media content, it’s time for the Australian government to lead the world on an AI approach that makes the work of artists visible and valued. This means centring their work – and indeed, work itself – in policy and regulation. 

On this front, we’re not starting from behind at all: Australia’s national cultural policy, Revive, has “Supporting the artist as worker and celebrating artists as creators” as one of its five pillars. Revive explicitly recognises the precarious nature of creative work and, characteristic of its time of writing, focuses on upholding copyright, fair payment and safe workplaces, rather than the imminent threat AI poses across all of these areas.

How best to articulate the nature of creative work in policy? How will we stop treating the technologies of “content creation”, artist exploitation and social harm as separate things? How comfortable are we in articulating the difference between the output of generative AI and the work of artists? How can policy and regulation make explicit the connection between artists’ work and Australian culture in ways that disrupt big tech’s relentless mission to render that work invisible and reduce culture to commodity?

These are just a few of the big questions facing Revive – and just in time: it’s up for review this year. Australia’s first cultural policy ever to be implemented is also the first arts policy ever to survive into a second term of government; a senate inquiry into its impact is due to report on 25 June 2026.

Art is indeed for everyone – and just like the public service we value in health and education, art is hard work. Bernardi and Grenell’s claims fail as arguments for wholesale theft: they’re arguments for sophisticated policy approaches and ambitious public investment. Such exclamations stop just short of the next logical step: if we want everyone to be able to experience art, we need to value, support and champion the people who create that work.

We also need to reality-check the AI conversation – urgently. We need to jettison the notion of AI development as plucky, limitless and ethically inert opportunity, and move rapidly towards recognising its threats and managing its risks.

Artists are the most powerful people in our society. They ignite our imaginations, focus our emotions and strengthen our courage. AI’s mission to render artists invisible diminishes us all. Making their work visible is a crucial step away from sci-fi dystopias and towards an inspirational future. 

IMAGE: Bruce Queck, Hall of Mirrors: Asia-Pacific Report (2011). 24 clocks record the instant of a social or ecological tragedy. A barcode scanner allows visitors to measure the quantity of destructive events taking place during their visit.