Tuesday, June 25, 2024

The AI Act’s fine line on critical infrastructure

Must read

As EU policymakers make progress in defining an upcoming rulebook for Artificial Intelligence, the question of the extent to which AI models used to manage critical infrastructure should be subject to tight requirements remains open.

The AI Act is reaching a critical stage in the legislative process, with the European Parliament set to reach a common position in the coming weeks. The legislative proposal is the world’s first attempt to put in place a comprehensive set of rules for Artificial Intelligence based on its potential risks.

A critical aspect of the draft law is its category of AI systems that can cause significant harm, which must comply with stricter obligations on quality and risk management. However, when it comes to critical infrastructure, how to assess risk remains a matter of debate.

AI in critical infrastructure

Artificial Intelligence is increasingly employed in managing critical infrastructure, notably for project development, maintenance and performance optimisation.

An example on the construction side is Sweco Netherlands, an engineering consultancy tasked with extending Bybanen, the light-rail system of Bergen, Norway’s second-largest city, taking into account the existing tram lines, adjacent roads, cycle lanes, pedestrian zones and surrounding public areas.

To bring these different factors together, Sweco NL used a digital twin model to visualise its project and understand how design changes would impact the timeline, costs and surroundings. The company estimates it reduced construction errors by 25% as a result.

Another area of application for this technology is dams. In 2017, HDR, a US engineering and construction company, applied machine learning to a dam’s digital twin model to simulate how the infrastructure would be affected by changes such as natural shifting and erosion of the surrounding soil over time.

The model allowed dam operators to detect anomalies such as cracks with two-centimetre accuracy, distinguish them from harmless algae growth, and take corrective measures before they grew into more significant problems.

Regulatory approach

The original AI Act proposal noted that “it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.”

In the EU Council of Ministers, member states clarified that the concept of the safety component should be distinguished from the management system itself. In other words, in a dam, the mechanism to open the valves is the management system, whilst the technology that monitors the water pressure is a safety component.

In the European Parliament, the MEPs spearheading the work on the AI Act proposed to differentiate the management of traffic on roads, rail and air, from supply networks like water, gas, heating, energy and electricity, in compromise amendments obtained by EURACTIV.

While the Council included digital infrastructure like cloud services and data centres in the list of high-risk use cases, since the intent is to prevent “appreciable disruptions in the ordinary conduct of social and economic activities”, the Parliament’s lawmakers have so far not done so.

The addition to the high-risk list caused significant anxiety in the telecom industry, which uses AI to manage network capacity, plan upgrades, detect fraud and improve energy efficiency. The question is whether the malfunction of any of these algorithms might bring the whole system down.

Where to draw a line

For example, if a telecom operator miscalculates traffic peaks in different areas of its network, would that lead to internet outages? A representative of telecom operators told EURACTIV they are not aware of any situation where that occurred, branding the issue as ‘highly hypothetical’.

More generally, critical infrastructure operators are concerned that, by casting the high-risk category of the AI regulation too wide, they might be precluded from using tools that help make their systems more efficient and secure.

A case in point is that member states excluded AI-powered cybersecurity tools from the definition of safety component.

Modern anti-virus malware analysis relies on predictive models and machine learning; without this exclusion, critical infrastructure service providers would have been precluded from using virtually all commercially available anti-virus software.

At the same time, AI-powered management systems are not without risks. Kris Shrishak, a technologist at the Irish Council for Civil Liberties, cited the case of India in 2012, when a miscalculation of the electric grid’s peak demand led to perhaps the largest blackout in history.

Hence the argument for a more granular approach to the high-risk categorisation: one that distinguishes cases where AI solutions make the infrastructure safer and where their failure would not entail an imminent threat.

Physical maintenance, for instance, is often costly and time-consuming, which can lead to infrastructure falling into disrepair. Not employing AI’s capacity to identify patterns and spot anomalies before they develop into bigger problems can also come at a cost.

Last year, amid the energy crisis prompted by Russia’s invasion of Ukraine, France, usually Europe’s largest electricity exporter, became a net importer as a record number of its nuclear reactors were taken offline for maintenance.

[Edited by Nathalie Weatherald]
