Welcome to a midweek update from Unmade. Today: Unmade’s AI expert shares her misgivings about the speed with which we are sliding into a new world.
If you’ve been thinking about upgrading to an Unmade membership, this is the perfect time. Your membership includes:
Member-only pricing for our HumAIn (May 28) and REmade (October 1) conferences;
A complimentary invitation to Unmade’s Compass event series (November);
Member-only content and our paywalled archives;
Your own copy of Media Unmade.
Experience the power of next generation audience intelligence, with News Corp Australia’s evolution of Intent Connect.
Collecting more than 2bn audience signals monthly from 17.2m Australians across over 25 premium digital brands, Intent Connect empowers you to plan targeted campaigns, book premium media and measure real outcomes effortlessly.
Step into a new era of digital advertising with Intent Connect, and place your brand at the heart of marketing’s most in-demand commodity: consumer intent.
Future Perfect?
During another epochal week for AI, Cat McGinn, curator of our AI conference for media and marketing, HumAIn, writes about her uneasy relationship with our robot overlords
The curse, and sometimes the saving grace, of being a writer is the tendency to chronicle events as they happen, viewing the world in the future perfect tense: that which will have taken place.
The English novelist Graham Greene talked about the “splinter of ice in the heart of a writer”. This perspective can create a sense of disconnection, but it can also be a coping mechanism.
I think about, use and build with AI tools daily. I find these tools offer a lot of benefit in my work: analysis, synthesising large volumes of copy, research, giving me a (terrible) first draft to tilt against, and removing the inhibition of the blank page.
I use it to check readability, to preempt the criticism of my real-life editor, and, much as with my real-life editor, I generally disagree, then grudgingly accept the feedback.¹
But being a writer in the maw of AI disruption feels a little like arming a polar bear with a flamethrower on an already melting ice cap. It’s powerful, exhilarating and self-sabotaging at once.
I am struck by the realisation that we have moved past the point of exploration to acceptance. Amidst yesterday’s flurry of reaction to OpenAI’s new GPT-4o, which makes GPT-4-level AI technology freely available to everyone, whispers of dissent about data, ethics, copyright, and broader societal concerns have been all but silenced by the buzz.
UK comic Daniel Kitson’s theatre piece “It’s Always Right Now, Until It’s Later” examines the moment when the past becomes the future; when things that had seemed impossible suddenly elide into inevitability. That moment is where we find ourselves with AI.
We seem somehow to have allowed ourselves to fast-forward through the period when we were still questioning the moral and legal rectitude, and the logistical feasibility, of using large language models (LLMs) trained on the work of artists, writers, journalists and creators to generate new art, writing, creativity and media.
Now the debate is about the practicalities of how to license LLMs trained on stolen work, rather than whether we should be doing this at all without remunerating creatives. However deeply you might value the importance of craft, the return on investment on an hour’s copywriting or concepting, weighed against the ROI on an image or piece of copy that costs a junior prompt jockey’s wages plus 0.03 a pop to generate, will be increasingly hard for agencies or clients to justify, in this or any other economy.
Australia, like many other countries, has only specific defences under its copyright legislation rather than a broad exception, meaning the creation of new, innovative systems may well not be a legal justification for appropriating or repurposing the entire body of a creator’s original work. The US does have a fair use provision, but the argument is far from resolved there.
Forgive the distant crinkling of my tinfoil beanie, but I have the sense that the dramatic discourse about the threat to human existence that accompanied the release of ChatGPT et al in late 2022 may have been a deliberate shifting of the Overton window - the range of ideas considered acceptable in public discourse at any given time - away from notions of creative ownership and compensation towards “well, at least the robots probably won’t eat us”.
Our current state of affairs is akin to Goldilocks sitting amidst the wreckage of Baby Bear’s chair and announcing, “Until there’s a hefty legal challenge, let’s all carry on as though this is completely fine.”
The probable TikTok ban in the US and its likely fallout in this market highlight the absurdity of mitigating one platform's misinformation while ushering in total societal disruption, on the flimsy premise that Western ownership equates to Western interests. Large tech corporations exist outside notions of nationhood; their governance is material, not ideological.
If we are to decide that restricting tech companies based on their lack of concern for the wider population is a viable move, then we must apply that logic to the AI giants. I am aware that every word I’ve ever published has been chewed through in nanoseconds by large language models, and I continue to feed our AI overlords with these tiny sacrifices.
Unmade's decision not to take steps yet to prevent our content from being used as AI training data reflects a calculated risk: our content, independence, expertise and perspective may be worth more within the machine than without.
Like wolves licking the bloodied blade, we hope the gains will offset the cuts and we won’t bleed out.
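For what it’s worth, the “steps” in question are mostly a matter of a few lines in a site’s robots.txt file. A minimal sketch, using the publicly documented AI crawler tokens GPTBot (OpenAI), Google-Extended (Google’s AI training control) and CCBot (Common Crawl), and relying, as robots.txt always does, on the crawler choosing to comply:

# Ask OpenAI’s crawler not to fetch the site for training
User-agent: GPTBot
Disallow: /

# Google-Extended is a control token rather than a separate crawler; it governs AI training use
User-agent: Google-Extended
Disallow: /

# Common Crawl, whose corpus feeds many training datasets
User-agent: CCBot
Disallow: /

It is a request rather than a lock; nothing obliges a crawler to honour it.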
You may feel as though the progression of technological change is already determined. It’s a well-established pattern: tech companies make rapacious advances and we’re left to nurse the casualties.
I believe this is a road as yet untravelled: we can still consider and implement alternatives.
For example, Japan’s leading media organisation Yomiuri Shimbun has joined forces with tech giant NTT Corp on training and utilising LLMs, announcing their “hope to contribute to the creation of a better society by working collaboratively.”
I’ve said it before, and no doubt I’ll be carried to my rest still shouting it: AI is a collective technology, and it requires a collective response.
We’re surveying our readers about their experience of, and sentiment towards, AI. At the time of writing, 83% of respondents to our AI survey said they felt there is insufficient regulation in place for AI. I think this reflects a general and legitimate unease, and a need for greater guardrails. We’ll talk more about that at HumAIn, the week after next.
Whether it’s government legislation or a code of practice we agree to as an industry, we can aim to influence the changes and disruption.
Ride the wave, instead of being rag-dolled by it.
Tickets to HumAIn, curated by Cat McGinn, are currently on sale
Unmade Index dips as SCA takeover stumble reverberates
Tim Burrowes writes:
The share price of Southern Cross Austereo sagged a little more on Tuesday as the market continued to contemplate whether ARN Media’s takeover bid could be saved.
On a day when most media and marketing stocks dipped, SCA was down another 1.16%, to a market capitalisation of $203m. Meanwhile, ARN was flat.
Of the larger media and marketing stocks, Seven West Media had the worst day, losing 2.5%. Nine bucked the trend, improving by 0.65%.
The Unmade Index was down 0.35% for the day, landing at 533.5 points.
Time to leave you to your Wednesday. We’ll be back tomorrow with an audio-led update in which we talk to News Corp’s sales supremo Louise Barrett about this week’s D_Coded announcements.
Have a great day.
Toodlepip…
Tim Burrowes
tim@unmade.media
¹ Editor’s note: The acceptance of feedback referred to by the author is not an easily observable phenomenon.