Meta’s global policy head, Sir Nick Clegg, has backed calls for an international agency to guide the regulation of artificial intelligence if it becomes autonomous, saying governments globally should avoid “fragmented” laws around the technology.
But Clegg downplayed suggestions of payment for content creators such as artists or news outlets whose work is scraped to train chatbots and other generative AI systems, suggesting such information would be available under fair use arrangements.
“Creators who lean in to using this technology, rather than trying to block it or slow it down or prevent it from drawing on their own creative output, will in the long run be better placed than those who set their face against this technology,” Clegg told Guardian Australia.
“We believe we’re using [data] entirely in line with existing law. A lot of this data is being transformed in the way it’s being deployed by these generative AI models. In the long run, I can’t see how you put the genie back in the bottle, given that these models do use publicly available information across the internet, and not unreasonably so.”
Clegg, Meta’s president of global affairs and a former British deputy prime minister, said the company sought to set “an early benchmark” on transparency and safety mitigations with the release this week of Llama 2, its large language model developed with Microsoft.
Large language models, or LLMs, use huge datasets – including data publicly accessible online – to produce new content. OpenAI’s ChatGPT chatbot is one example. The rapid rise of such services has prompted scrutiny of the ethical and legal concerns around the technology, including copyright, misinformation and online safety.
Australia’s federal government is now working on regulating AI and has released a consultation paper floating a ban on “high-risk” uses of artificial intelligence, amid concerns about deepfakes, automated decision-making and algorithmic bias.
With two weeks left in the consultation, the major themes aired have been safety and trust. Ed Husic, Australia’s minister for industry and science, said the government wanted better frameworks so it could “confidently deploy” AI in areas as diverse as water quality, traffic management and engineering.
“I have been saying to the roundtables, the era of self-regulation is over,” he told Guardian Australia.
“We should expect that appropriate rules and credentials apply to high-risk applications of AI.”
In his only Australian interview, Clegg encouraged the creation of consistent AI rules internationally, pointing to processes under way through the G7 and OECD.
“Good regulation will be multilateral regulation, or aligned across major jurisdictions. This technology is bigger than any company or country. It would be self-defeating if regulation emerges in a fragmented way,” he said.
“It’s terribly important the main jurisdictions, including Australia, work together with others. There’s no such thing as a solo solution in this regulatory space.”
Clegg said Meta was encouraging tech companies to start setting their own guidelines on transparency, accountability and safety while governments formulated laws. He said Meta, Microsoft, Google, OpenAI and others were developing technology to help users detect content produced by AI, but warned it would be “virtually unfeasible” to detect AI-generated text.
OpenAI’s Sam Altman last month suggested an international agency oversee the development of AI technology, citing the International Atomic Energy Agency as a possible model. Clegg stopped short of endorsing such a measure to guide current technology, describing LLMs as “sophisticated guessing machines”, but backed some international oversight if AI became more powerful.
“The fundamental idea is, how should we as a world react if and when AI develops a degree of autonomy or agency?” Clegg said.
“Once we do that, we do cross a Rubicon. If that happens, by the way, there’s debate among experts; some say in the next 18 months, some say not within 80 years. But once you cross that Rubicon, you’re in a very different world.
“The large language models we’ve released are very primitive compared to that vision of the future. It’s not our mission to build artificial general intelligence, that’s not what Meta is about and trying to build. But if it does emerge, I do think, whether it’s the IAEA or some other regulatory model, you’re in a completely different ballgame.”
A more immediate concern with existing technology is how AI models are trained and what data they scrape, raising questions about copyrighted material. Clegg said Llama 2 was not trained on data from Meta users, and that the company believed it was respecting intellectual property rights.
“We believe we’re doing so squarely in line with existing law, existing standards of fair use of data,” he said.
“I totally understand copyright owners and others are going to start raising questions about exactly what the implications are for their own intellectual property.”
Clegg dismissed comparisons with Australia’s news media bargaining code, which required platforms to pay news outlets for the value their content brought to Facebook. He said discussion was continuing about how public information that is “transformed” and used in different settings should be treated.
“It seems to us, at least, that this is something good for the world, it’s innate to this technology, but I strongly suspect what you’re going to have is lots of very arcane arguments about whether the data is used in unaltered form or in a transformed form,” Clegg said.
“The internet in a sense operates, particularly on the principle of fair use, that people can use publicly available data in a pretty versatile way. Otherwise the internet simply wouldn’t operate.
“I accept the laws, on the statute book, are written for a world in which generative AI didn’t exist. I strongly suspect this is something that will play out in courts, parliament and so on.”
Mia Garlick, Meta’s regional director of public policy, said the company was reviewing the Australian government’s plans and would provide feedback on the discussion paper, adding that the technology must be developed “responsibly”.
“We support reform that recognises the rapidly developing space of AI and support the development of industry standards to ensure we all innovate responsibly,” she said in a statement.
“We’re already seeing the positive impact of AI for many people, and we believe it should benefit everyone, not just a few companies.”