Four key AI developments to close out 2024 – holiday reading for NZ businesses

20 December 2024

In a year that has seen major technological changes for many New Zealand businesses through the increasing adoption of AI, it seems appropriate that 2024 finishes with a flurry of noteworthy regulatory developments in New Zealand and overseas. To assist with staying ahead of the curve and future-proofing AI initiatives that are currently underway, this article sets out a brief summary of some key recent developments that businesses should be aware of, including an update on biometrics reform, regulatory sandboxes, and relevant deadlines for consultations that close early in the New Year.

New Zealand Developments

1. The Financial Markets Authority launches “regulatory sandboxes”

On 10 December, the FMA announced its intention to conduct a “regulatory sandbox” trial from January to July 2025. The “sandboxes” are intended to provide a controlled environment where businesses in the financial services sector can test innovative technologies (including AI products) under the supervision of the FMA. The purpose of the trial is to facilitate innovation and reduce costs by allowing firms to test upcoming AI products and other technologies to ensure compliance with regulations and supervisory expectations before the products are launched. Applicants must have an initial concept, a minimum viable product, or a fully functional product ready for testing.

The sandbox also provides an opportunity for the FMA to ensure regulations are effective and fit-for-purpose. If the trial period is successful, the FMA will decide later in 2025 whether the sandbox initiative should become permanent. The sandbox initiative aligns with approaches in other jurisdictions, including the UK, Australia, and Singapore.

Banks, lenders, fintechs and any providers of AI-driven financial services should consider this opportunity for testing new offerings in a controlled environment, and potentially contributing to the appropriate design of financial regulation in New Zealand. Applications for the sandbox pilot are expected to open early in 2025.

2. Proposed new Biometric Code

The Office of the Privacy Commissioner (OPC) has confirmed its intention to issue a Biometrics Privacy Code (Code). The announcement follows a consultation on a draft version of the Code earlier this year (summarised here). On the basis of that consultation, the Code has been amended to improve clarity and refine the proposed obligations, including the addition of new notification requirements.

Though not specific to AI, the Code will be relevant for many businesses using AI systems that process “biometric information” (i.e. an individual’s physical or behavioural features such as their face, fingerprints, or voice). That could include retailers using facial recognition systems for security purposes. Any business that processes biometric information should carefully consider the Code which, though partially simplified from the initial draft, is still likely to impose significant new compliance burdens.

In particular, the Code will impose new obligations to undertake “proportionality tests” for the collection of biometric information (which will require consideration of whether any less privacy-invasive alternatives are available), obligations to provide notice to individuals, and limits on the use of biometric information. In relation to the disclosure obligations, agencies will be required to provide separate notices setting out their biometric processing practices, distinct from standard privacy policies.

The OPC has also issued draft guidance with examples of how the Code may apply. Submissions on the amended draft Code and guidance are due by 14 March 2025. The finalised Code is then anticipated to take effect in mid-2025.

International Developments

3. Australia’s Select Committee Recommends Comprehensive AI Legislation

In Australia, the recently established Select Committee on Adopting Artificial Intelligence has published a report on the impact of AI technology (here). The report makes 13 recommendations to the Australian Government, including the adoption of comprehensive “whole of economy” AI legislation to regulate the use of AI, rather than reliance on existing frameworks or piecemeal regulation. The Select Committee also recommended that AI be categorised according to risk, enabling stricter requirements for high-risk uses, with a principles-based approach to defining high-risk uses of AI supplemented by non-exhaustive examples. The report noted that any AI that impacts the rights of people at work should be classified as a high-risk use, including automated AI tools for CV scanning, shift rostering and performance evaluation.

If adopted, the recommendations would align Australia more closely with similar approaches in other jurisdictions such as the EU, where a comprehensive AI Act has been adopted (summarised here). This stands in contrast to the New Zealand government’s preference, noted in a recent cabinet paper, to avoid standalone AI legislation, on the basis that “further regulatory intervention should only be considered to unlock innovation or address acute risks.”

4. European Data Protection Board (EDPB) Issues Guidance on AI and Personal Data

The EDPB published a formal opinion on 18 December, offering a significant analysis of data protection issues linked to AI. While not directly applicable in New Zealand, this guidance offers valuable insights for businesses seeking to establish strong data protection controls when training and using AI models.

Among other things, the opinion considers whether AI models can be considered “anonymous”. This is relevant in the privacy context because anonymised information will generally not constitute “personal information” (defined under the New Zealand Privacy Act 2020 as including “information about an identifiable individual”). The EDPB’s opinion outlines two criteria that must both be satisfied for an AI model to be considered anonymous: (i) the likelihood of extracting personal data directly from the model must be “insignificant”; and (ii) queries to the model must not generate identifiable personal information, even unintentionally. If either condition is not met, the EDPB considers that the AI model is not anonymous, such that privacy laws will apply.

The opinion highlights the need for businesses to carefully consider any claims of anonymity in the context of AI systems and ensure that appropriate technical safeguards and detailed Privacy Impact Assessments (PIAs) underpin those claims.

Looking ahead

From domestic initiatives like the FMA’s regulatory sandbox and the proposed Biometrics Code to international developments such as the EDPB’s opinion, the regulation of AI is advancing rapidly. Staying informed and proactive is essential, not only to ensure compliance but also to leverage emerging opportunities in this fast-changing technological environment. We encourage New Zealand businesses involved in the development or adoption of AI technologies to participate in the consultations noted above, to ensure they stay informed about these important changes and remain well-positioned to adapt to a future increasingly shaped by AI.

If you have any questions about the matters raised in this article, or require any assistance with a submission on the Biometrics Code or an application to participate in the FMA’s regulatory sandbox initiative, please get in touch with the contacts listed or your usual Bell Gully adviser.

This article was written with the helpful assistance of Hannah Giles, a Summer Intern in Bell Gully’s Summer Internship Programme.


Disclaimer: This publication is necessarily brief and general in nature. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.