As with comparable reforms in the past (such as the GDPR), the EU framework is likely to influence the development of a regulatory response to AI in New Zealand and in other jurisdictions. In addition, the AI Act could be of direct relevance to certain New Zealand businesses, given the possibility of extra-territorial application where AI systems are used in the EU.
This article summarises some of the key features of the new regime.
Overview of the AI Act
As in the original draft legislation (summarised in our article here), the AI Act adopts a risk-based approach to regulating AI systems with various tiered obligations.
The risk categories are defined in detail in the Act, but the key features can be summarised briefly as follows:
Unacceptable risk

AI systems posing an “unacceptable” level of risk to safety or human rights are strictly prohibited. The list of prohibited systems was extensively negotiated between the institutions of the EU last year. Several of the finalised categories reflect the fears of dystopian misuse that underpinned the original draft legislation in 2021 – including social scoring systems, systems that predict the likelihood of a person committing a criminal offence, and systems that deploy “subliminal techniques” to distort behaviour (an example given is “machine-brain interfaces”).

High risk

Systems that create a “high risk” to health and safety or human rights are not banned, but are subject to various quality management obligations, including strict “conformity assessments” (to assess conformity with existing laws where applicable, and to confirm compliance with various other safeguards set out in the Act). The majority of these obligations fall on the “providers” (i.e. the developers) of high-risk AI systems.

Low / minimal risk

For lower-risk activities, the AI Act imposes more general requirements, such as transparency obligations. These require that an AI system make clear to users that they are interacting with an AI system (unless that is obvious from the context), and that any limitations on the system’s capabilities are disclosed.

Providers of all AI systems are encouraged under the AI Act to adhere voluntarily to codes of conduct. These are expected to set out requirements relating to a range of factors, including sustainability, accessibility for persons with a disability, and diversity of development teams.
Various exceptions will apply. In particular, the AI Act will not apply to systems which are used for the sole purpose of research and innovation or for non-professional use. In addition, it will not apply to systems used exclusively for military or defence purposes.
Next steps and implications
The AI Act is expected to enter into force in May 2024, following final linguistic and proofing checks and formal endorsement by the Council of the EU. Implementation will then be staggered: the bans on unacceptable AI systems will apply within six months; codes of practice take effect within nine months; and obligations for high-risk systems take effect within three years.
We expect that many New Zealand businesses will have a close interest in the reforms. The framework captures any provider whose system’s “output” is used in the EU, regardless of where the provider is based. The AI Act could therefore apply directly to New Zealand developers of AI systems whose systems are used in the EU, and it will be important for such businesses to ensure they comply with the new regime. Breach of the prohibition on unacceptable uses can result in fines of up to €35 million or, in some cases, up to 7% of total worldwide annual turnover. Breach of the obligations for “high-risk” systems can result in fines of up to €15 million or up to 3% of total worldwide annual turnover.
In addition, the reforms will be of more general relevance to other businesses, given the likely influence of the AI Act on regulatory developments in New Zealand. As with the GDPR, which triggered a convergence in international privacy laws towards similar standards, the detailed and prescriptive requirements of the AI Act could serve as a “best practice” framework which drives other AI regulation here and overseas. At the same time, other jurisdictions are considering more flexible approaches (see our article here) which may be better suited for keeping pace with the accelerating development of AI technology. It will be interesting to see which approach the New Zealand government favours, should it look to implement its own reforms in due course.
Bell Gully’s Consumer, Regulatory and Compliance (CRC) team have been closely monitoring international developments in the regulation of AI. If you have any questions, please get in touch with the contacts listed, or your usual Bell Gully adviser.