LONDON, Sept 20 (Askume) – The world’s biggest technology companies have made a last-ditch effort to persuade the European Union to take a softer stance on regulating artificial intelligence in a bid to avoid the risk of billions of dollars in fines.
In May, after months of intense negotiations between different political groups, EU lawmakers approved the Artificial Intelligence Act, the world’s first comprehensive set of rules governing the technology.
But until the code of conduct attached to the law is finalized, it is unclear how strictly the rules for “general-purpose” artificial intelligence (GPAI) systems such as OpenAI’s ChatGPT will be enforced, and whether the companies behind them could face copyright lawsuits or billions of dollars in fines.
The EU has invited businesses, academics and others to help draft the code of conduct and has received about 1,000 applications, an unusually high number, according to people familiar with the matter. The sources requested anonymity because they were not authorized to speak publicly.
The code of conduct on artificial intelligence will not be legally binding when it comes into force late next year, but it will provide companies with a checklist they can use to demonstrate compliance. Companies that claim to comply with the law but ignore the guidelines could face legal challenges.
“The code of conduct is important. If we get it right, we can continue to innovate,” said Boniface de Champris, senior policy manager at trade group CCIA Europe, whose members include Amazon (AMZN.O), Google (GOOGL.O) and Meta (META.O).
“If it’s too narrow or too specific, it’s too difficult,” he said.
Data capture
Companies such as Stability AI and OpenAI face the question of whether using best-selling books or photo archives to train their models without the authors’ permission constitutes copyright infringement.
Under the Artificial Intelligence Act, companies will be obliged to provide a “detailed summary” of the data used to train their models. In theory, content creators who find that their work has been used to train artificial intelligence models may be able to seek compensation, although this is being tested in the courts.
Some business leaders say the required summary should include little detail to protect trade secrets, while others say copyright owners have a right to know whether their material has been used without permission.
OpenAI, which has been criticized for refusing to answer questions about the materials it uses to train its models, has also applied to join the working group, a person familiar with the matter said, speaking on condition of anonymity.
Google has also submitted an application, a spokesperson told Askume. At the same time, Amazon said it “looks forward to contributing its expertise and ensuring the success of the code of conduct.”
Maximilian Gehntz, head of artificial intelligence policy at the Mozilla Foundation, the nonprofit behind the Firefox web browser, expressed concern that AI companies are “going to great lengths to avoid transparency.”
“The Artificial Intelligence Act is the best opportunity to shed light on this important aspect and uncover at least part of the black box,” he said.
Big business and priorities
Some business leaders have criticized the EU for prioritizing technology regulation over innovation, and those responsible for drafting the code of conduct will have to seek a compromise.
Last week, former European Central Bank President Mario Draghi told the EU it needed better coordinated industrial policy, faster decision-making and massive investment to keep up with China and the United States.
Thierry Breton, a vocal supporter of EU regulation and critic of non-compliant tech companies, resigned as the EU’s internal market commissioner this week after clashing with European Commission President Ursula von der Leyen.
Against this backdrop of rising protectionism within the European Union, domestic technology companies are hoping the Artificial Intelligence Act will be applied with a light touch that benefits emerging European companies.
Maxime Ricard, policy manager at Allied for Startups, a network of trade groups representing small tech companies, said: “We have insisted that these obligations be manageable and, if possible, tailored to startups.”
Once the code is published in the first half of next year, tech companies will have until August 2025 to bring their compliance efforts into line with it.
Nonprofit organizations including Access Now, the Future of Life Institute and Mozilla have also applied to help draft the code.
“As we enter a phase of clarifying many of the AI Act’s obligations in greater detail, we must be careful not to allow large AI players to undermine important transparency provisions,” Gehntz said.