Meta’s global policy head, Sir Nick Clegg, has backed calls for an international agency to guide the regulation of artificial intelligence if it becomes autonomous, saying governments globally should avoid “fragmented” laws around the technology.
But Clegg downplayed calls for payment to content creators such as artists and news outlets whose work is scraped to train chatbots and generative AI, arguing such information would be available under fair use arrangements.
“Creators who lean in to using this technology, rather than trying to block it or slow it down or prevent it from drawing on their own creative output, will in the long run be better placed than those who set their face against this technology,” Clegg told Guardian Australia.
“We believe we’re using [data] entirely in line with existing law. A lot of this data is being transformed in the way it’s being deployed by these generative AI models. In the long run, I can’t see how you put the genie back in the bottle, given that these models do use publicly available information across the internet, and not unreasonably so.”

Clegg, Meta’s president of global affairs and a former British deputy prime minister, said the company sought to set “an early benchmark” on transparency and safety mitigations with the release this week of Llama 2, its large language model developed with Microsoft.
Large language models, or LLMs, use huge datasets – including data publicly accessible online – to produce new content; OpenAI’s ChatGPT chatbot is a prominent example. The rapid rise of such services has prompted scrutiny of the ethical and legal concerns around the technology, including copyright, misinformation and online safety.

Australia’s federal government is working on AI regulation and has released a consultation paper floating a ban on “high-risk” uses of artificial intelligence, citing concerns about deepfakes, automated decision-making and algorithmic bias.
With two weeks left in the consultation, the major themes aired have been safety and trust. Ed Husic, Australia’s minister for industry and science, said the government wanted better frameworks so it could “confidently deploy” AI in areas as diverse as water quality, traffic management and engineering.
“I have been saying to the roundtables, the era of self-regulation is over,” he told Guardian Australia.
“We should expect that appropriate rules and credentials apply to high-risk applications of AI.”
In his only Australian interview, Clegg encouraged the creation of consistent AI rules internationally, pointing to processes under way through the G7 and OECD.
“Good regulation will be multilateral regulation, or aligned across major jurisdictions. This technology is bigger than any company or country. It would be self-defeating if regulation emerges in a fragmented way,” he said.
“It’s terribly important the main jurisdictions, including Australia, work together with others. There’s no such thing as a solo solution in this regulatory space.”

Clegg said Meta was encouraging tech companies to start setting their own guidelines on transparency, accountability and safety while governments formulated laws. He said Meta, Microsoft, Google, OpenAI and others were developing technology to help users detect content produced by AI, but warned it would be “virtually unfeasible” to detect AI-generated text.
OpenAI’s Sam Altman last month suggested an international agency oversee the development of AI technology, citing the International Atomic Energy Agency as an example. Clegg stopped short of endorsing such a measure to guide current technology.