It would be harsh, but not entirely inaccurate, to observe that Scott Farquhar’s contribution to the intellectual property debate has been ironically light on intellect.
The concept of expecting tech companies to act in the best interests of the public or of content owners is ridiculous. They have consistently proven willing to run roughshod over norms of business conduct, push the boundaries of local laws, and lobby to the detriment of competitors.
Content owners can begin to take matters into their own hands to force negotiations with AI companies - the NYT has done so through the courts, but cloud security provider Cloudflare has proposed a novel and ingenious way to help content owners recoup costs and get paid a reasonable fee for the use of their content in AI training.
Cloudflare sits in front of many of the world's largest publishers and can proactively block AI crawlers from scraping websites when they are detected.
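To make that concrete, here is a minimal sketch of user-agent-based crawler blocking, assuming the crawler identifies itself honestly. The bot tokens are real published AI-crawler user agents, but the handler and its names are hypothetical - this is not Cloudflare's actual edge logic, and production bot management also relies on behavioural signals, since a User-Agent header can be spoofed.

```python
# Minimal illustrative sketch of user-agent-based AI-crawler blocking.
# Hypothetical handler, not Cloudflare's implementation: real systems also
# use behavioural signals, because a User-Agent header is trivially spoofed.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """True if the User-Agent matches a known, self-identifying AI crawler."""
    return any(token in user_agent for token in AI_CRAWLER_TOKENS)

def handle_request(headers: dict) -> tuple[int, str]:
    """Return 403 for detected AI crawlers; pass everyone else through."""
    if is_ai_crawler(headers.get("User-Agent", "")):
        return 403, "Automated AI scraping is not permitted on this site."
    return 200, "<normal page content>"
```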
However, just blocking them is not enough of a disincentive. Cloudflare's novel approach is to trap unauthorised AI scrapers in a labyrinth of, ironically, its own AI-generated content slop. The ultimate effect would be to fill the models' data sources with large volumes of nonsense, potentially reducing the value of model pre-training by introducing unreliable source data.
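The labyrinth idea itself is simple enough to sketch: once a request is flagged as a scraper, serve deterministic decoy pages whose links lead only to more decoys, so a link-following crawler never escapes and only ingests junk. Everything below is hypothetical and deliberately crude - the real scheme reportedly uses plausible-looking AI-generated text, which is far harder for a training pipeline to filter out than word salad.

```python
# Illustrative decoy-maze sketch: each path deterministically maps to a
# nonsense page whose links point deeper into the maze. Hypothetical code;
# the real approach serves plausible AI-generated text, not word salad.
import hashlib
import random

WORDS = ["synergy", "quantum", "heritage", "velocity", "orchid", "ledger"]

def decoy_page(path: str, n_links: int = 5) -> str:
    """Build a nonsense HTML page for `path`; its links lead to more decoys."""
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)  # same path, same page
    rng = random.Random(seed)
    body = " ".join(rng.choice(WORDS) for _ in range(80))
    links = "".join(
        f'<a href="/maze/{rng.getrandbits(64):x}">continue</a>' for _ in range(n_links)
    )
    return f"<html><body><p>{body}</p>{links}</body></html>"
```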
This imposes a very real cost on AI companies if the billions they are investing in training are undermined by poor source data.
Cloudflare is proposing a gatekeeper-style relationship on behalf of publishers that would let paying partners in to the sweet, sweet copyrighted material, and lock freeloaders out in a world of slop.
Seems only fair to me.
As is typically the case, I can see both sides of this.
On the fair dealing argument, how is it any different when a consultant googles non-gated content to put together a report that he sells to a client without compensating the owners of the intellectual property? Large language models are essentially doing the same thing, only they do it (for me at least) for a flat monthly fee instead of hiring somebody to be a researcher. I just prompt ChatGPT with a question like “What's the lay of the land on fair dealing in Australia, with respect to the regulatory and legal environment for paying royalties to IP owners?” I could ask McKinsey the same question, and they would use the same sources; in either case, as far as I know, the owners of the intellectual property wouldn't get a dime out of it.
Conversely, if some of the wilder tech-bro dystopian fantasies come to pass and we are all put out of work by AI, marketers are still going to need customers, and customers are going to need some way of raising money to buy stuff. The idea of AI firms compensating people for their intellectual property is one way to put money in people's pockets so that they can keep buying, because otherwise, unless you can automate demand (and I don't see how you can), the whole economy collapses on a global basis - assuming, that is, these tech bros aren't just raising the spectre of eliminating all work as a way to raise funding from gullible investors.
The thing about fair dealing (its name in Australia; "fair use" is the US doctrine, a distinction I would expect him to know) is that the listed fair dealing purposes don't impact the owner's revenue. It's hard to argue the same for AI's use of their work.
Also, many of the AI examples he gave are not AI, or certainly could be done without it, such as Spotify recommendations and traffic re-routing. Just because something is algorithm-driven does not make it AI. There are better examples that can only be AI rather than a conventional algorithm. Or, alternatively, explain how AI does, say, Spotify recommendations better than a simple algorithm (a sketch of which follows below) - not too hard for a tech chief, I would suggest.
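To show how far a "simple algorithm" gets you, here is a toy, non-AI recommender: plain "people who listened to X also listened to Y" co-occurrence counting. The data and function names are invented for the example; this is obviously not Spotify's production system, just evidence that recommendations don't inherently require AI.

```python
# Toy non-AI recommender: rank tracks by how often they co-occur with a
# seed track across listening sessions. No model, no training - just counting.
# Data and names are invented for illustration.
from collections import Counter
from itertools import combinations

sessions = [
    ["track_a", "track_b", "track_c"],
    ["track_a", "track_b"],
    ["track_b", "track_c", "track_d"],
]

def recommend(seed: str, history: list[list[str]], top_n: int = 3) -> list[str]:
    """Return the top_n tracks most often heard alongside `seed`."""
    counts = Counter()
    for session in history:
        if seed not in session:
            continue
        for a, b in combinations(session, 2):
            other = b if a == seed else a if b == seed else None
            if other is not None:
                counts[other] += 1
    return [track for track, _ in counts.most_common(top_n)]

print(recommend("track_a", sessions))  # ['track_b', 'track_c']
```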
I also disagree that the current legislation is able to cope with AI. His interview is proof.
The "let AI use IP because it's transformative" is interesting. I see the argument, as we all research from others, but AI is different because of the sheer scale, speed and volume. If Person X rips off Person Y's work, at least Person X is identifiable and has put in some effort to create something.
The iPod argument is a completely different matter and actually shows how legislation *does* need to evolve with tech; to say no at this relatively early stage in the game is naïve - at the very least, leave the door open. And drawing a parallel to the ABC quoting someone for a news piece really is desperate.
I think the last exchanges are telling - his view is simply that AI is what's best for society, and therefore creators should subsidise it. The flaw there is: what will then motivate creators, researchers, writers and the like if they cannot get compensation, or even recognition, for their work? I would have liked to explore that point, but based on his previous answers, I don't think there would have been a compelling solution.
"let AI use IP because it's transformative" - I wonder whether Scott would apply the same thought to other forms of IP protection. Let's allow anyone to rip of patented inventions because that would also be super transformative?