#AIAct published in the Official Journal of the European Union! AI-Sustainability Nexus

#AIAct is published: The final version in all EU languages is available here.

That was one quite quick step for the Official Journal, and one giant leap for Europe.

Next steps: The AI Act enters into force on 1st August 2024 and will be fully applicable 24 months after its entry into force, i.e. from 2nd August 2026, except for the following milestones (a quick date check follows the list):

  • Bans on prohibited practices (will apply six months after the entry into force date, i.e. 2nd February 2025);  
  • Codes of practice (nine months after entry into force, i.e. 2nd May 2025);  
  • General-purpose AI rules including governance (12 months after entry into force, i.e. 2nd August 2025); 
  • Obligations for high-risk systems (36 months after entry into force, i.e. 2nd August 2027). 
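
As a quick sanity check on the timeline, the short sketch below reproduces the dates listed above. It assumes the periods are counted from 2nd August 2024, the day after entry into force, which is how the explicit application dates in the Act line up.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

# Entry into force: 1 August 2024; the Act's explicit application dates
# correspond to counting the transition periods from the following day.
start = date(2024, 8, 2)

milestones = {
    "Prohibited practices": 6,            # months after entry into force
    "Codes of practice": 9,
    "GPAI rules incl. governance": 12,
    "Full applicability": 24,
    "High-risk obligations (Annex I)": 36,
}

for name, months in milestones.items():
    print(f"{name}: {start + relativedelta(months=months):%d %B %Y}")
```

Running this reproduces 2 February 2025, 2 May 2025, 2 August 2025, 2 August 2026 and 2 August 2027, matching the list above.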

A leap forward

The pandemic has accelerated the digital transformation: the transition from the extractive economy to a new economic space, the digital economy, happened like a leap. In this new space, the algorithms that now shape our economy and society are often developed with few legal or regulatory restrictions and without commonly held sustainability and ethical standards. Consequently, ensuring that these technologies align with our shared values and existing legal and regulatory frameworks is crucial.  

The EU appears to go further in this arduous feat by striving to create socially responsible and environmentally sustainable tech, fostering not only a trustworthy market but also, and indeed especially, a safe and transparent socio-economic and bio-cultural ecosystem. 

Namely, as covered in our previous post on the subject matter, the latest text of the EU AI Act features a revised definition of AI systems aligned with the OECD definition, together with a risk-based approach that assesses the lawfulness of AI system development and use based on the level of risk to fundamental rights. According to the EU Parliament, the Act aims to ensure that fundamental rights, together with democracy, the rule of law and environmental sustainability, are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. Obligations for high-risk systems will apply three years after the Act’s entry into force, i.e. from 2nd August 2027. 

For high-impact general-purpose AI (GPAI) models with systemic risk, the obligations are more stringent. If these models meet specific criteria, they will be required to conduct model evaluations, assess and mitigate systemic risks, report serious incidents to the Commission, ensure cybersecurity measures, and report on their energy efficiency, among other obligations. These obligations will apply twelve months after the Act’s entry into force, i.e. from 2nd August 2025. 

Why is this important for sustainability?

Unsustainable development of the digital economy in general contributes to apparent environmental harms, such as CO2 emissions, which are nowadays roughly equivalent to those of the global aviation industry. 

In the AI sector, for instance, research shows that training a generative AI model like ChatGPT can directly evaporate 700,000 litres of clean freshwater, although this information has remained largely undisclosed. More significantly, the increasing global demand for AI could lead to a staggering 4.2 to 6.6 billion cubic metres of water withdrawal by 2027. By comparison, this surpasses the annual water usage of countries like Denmark. 

On top of this outstanding water footprint, by 2027 AI servers could also gulp down some 134 terawatt-hours of electricity per year, rivalling the annual energy consumption of entire countries like Sweden, the Netherlands, or Argentina. 
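
To make those two comparisons concrete, here is a rough back-of-envelope check in Python. The AI-sector projections are the figures quoted above; the country baselines (roughly 1 billion cubic metres of annual water withdrawal for Denmark and roughly 130 TWh of annual electricity consumption for Sweden) are approximate values assumed here purely for illustration.

```python
# Rough back-of-envelope check of the comparisons above. The AI-sector
# projections come from the cited research; the country baselines are
# approximate assumed reference values, for illustration only.
ai_water_bcm_2027 = (4.2, 6.6)      # projected AI water withdrawal, billion m^3/year
denmark_water_bcm = 1.0             # assumed: Denmark's annual withdrawal, billion m^3

ai_energy_twh_2027 = 134.0          # projected AI server electricity use, TWh/year
sweden_electricity_twh = 130.0      # assumed: Sweden's annual electricity use, TWh

low, high = (w / denmark_water_bcm for w in ai_water_bcm_2027)
print(f"Water: roughly {low:.1f}x to {high:.1f}x Denmark's annual withdrawal")
print(f"Electricity: about {ai_energy_twh_2027 / sweden_electricity_twh:.1f}x "
      f"Sweden's annual consumption")
```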

Therefore, data acquisition and AI training must be done in far less resource-intensive ways. 

To address these and related environmental sustainability issues, the AI Act places generative AI models such as the one powering ChatGPT among the models that can be held accountable for, among other things, environmental unsustainability. Namely, they will have to mitigate systemic environmental risks and report on their energy efficiency. 

Last but definitely not least: breaching the EU AI Act’s obligations could be quite expensive, as fines can reach up to 7% of the global annual turnover of the entity violating the Act. 

What can we do?

At a ‘micro-level’, i.e. for AI developers, more responsible data practices mean reducing the data load in data acquisition and AI training, not only in terms of privacy, security, and transparency, but also in terms of environmental responsibility. More specifically, to save energy we need federated learning algorithms (a minimal sketch follows below), and, for this, we need 6G. 
At a ‘macro-level’, to achieve truly sustainable AI, it is essential to holistically address not only the carbon and water footprints, but also, and indeed especially, the systemic environmental and broader sustainability risks. By considering all aspects of sustainability, we can strive to ensure that AI development and deployment minimize their environmental impact across multiple dimensions. This comprehensive approach acknowledges the interconnectedness of various resources and the need to manage them sustainably. 
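
For readers wondering what the federated learning mentioned above looks like in code, the sketch below is a minimal, illustrative federated averaging (FedAvg) loop in Python with NumPy: each client trains on its own data shard, and only the model weights travel to the server. The toy linear-regression model, the synthetic client data and the hyperparameters are hypothetical choices for illustration, not a production recipe.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server aggregation: weight each client's model by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three hypothetical clients, each holding a private data shard that never leaves it.
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                              # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local_ws, [len(y) for _, y in clients])

print("learned weights:", global_w)              # should approach [2.0, -1.0]
```

The point relevant to sustainability is that each round moves only a small weight vector per client rather than the raw datasets, which is precisely the kind of reduced data load the ‘micro-level’ argument above is about.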
