While the concept of artificial intelligence (AI) has been around for some time, first emerging through logic and rule-based systems in the 1950s, it received only intermittent mainstream attention for decades, piquing the interest of the press at moments such as IBM’s “Deep Blue” beating chess champion Garry Kasparov in 1997, and the introduction of Apple’s “Siri”, the first mainstream AI-powered virtual assistant, in 2011.
However, the recent emergence of, and open access to, Gen AI following OpenAI’s release of ChatGPT (initially powered by GPT-3.5) in November 2022 has exponentially increased public awareness of the potential of Gen AI to impact both business and our daily lives. The ability of applications such as ChatGPT to write text or code, compose music and create digital art has caught the public imagination, and that has translated into widespread experimentation and adoption, as well as a high level of business activity and expectation around the technology.
A tech “arms race” of incredible scale and momentum
Eyewatering investments
Of course, the development of Gen AI did not happen overnight. Rather, it was the result of huge investment and the commitment of resources by some of the largest and best-funded companies in the world. For example, Microsoft announced its multi-billion-dollar investment in OpenAI in January 2023, and in October 2024 OpenAI announced that it had raised US$6.6 billion of new funding at a post-money valuation of US$157 billion. At that valuation, OpenAI alone is worth more than 87% of the individual companies that make up the S&P 500, making it one of the highest valued private companies in the world.
Microsoft has not been alone in making eyewatering investments. Amazon and Google have each invested billions in Anthropic, the developer of the “Claude” chatbot competitor to ChatGPT, and Google has separately made significant investments in Gemini, its own family of AI models. In April 2024, Meta announced it would invest US$35 billion to US$40 billion in AI development, and Apple is following suit with its own significant investments in AI-powered offerings.
Investments and growth in relation to AI hardware and infrastructure have been of similar scale. Nvidia has made significant investments in the development of advanced microchips that are critical to AI processing (and, as a result, has seen its market value surge by more than 1,000% over two years). There has also been an explosion of investment in data centres to support the high-performance computing that is the “back office” for AI (although power constraints and environmental concerns are growing issues in this market). In that regard, Oracle has recently announced it is investing in small modular nuclear reactors to power new data centres given the AI demand outlook. Microsoft has also recently entered into a deal with Constellation Energy to restart the Three Mile Island nuclear plant in Pennsylvania to meet the surge in energy demand for data centres needed for AI use.
Enormous value growth
These investments have resulted in remarkable growth in the value of these companies. In mid-2024, the “Magnificent 7” (Apple, Microsoft, Alphabet (Google), Amazon, Nvidia, Meta and Tesla) made up a combined 35.5% of the S&P 500, and earlier in 2024 the combined market capitalisation of these companies was roughly equal to the combined size of the stock markets of Canada, Japan and the UK. Analysts attributed much of the recent surge in performance of these stocks to the potential of AI.
Large productivity gains predicted to drive GDP growth
Most commentators are predicting significant productivity gains across economies as a result of the development and adoption of AI. A June 2023 McKinsey report estimated that Gen AI could add the equivalent of US$2.6 trillion to US$4.4 trillion annually to the global economy.1
Key legal challenges
The increasing adoption and use of Gen AI has given rise to various concerns and potential legal challenges. These include:
- The need for transparency and “explainability” of Gen AI’s conclusions.
- Potential copyright infringement arising from Gen AI, if models are developed based on third party content. The use of Gen AI has resulted in claims overseas by content creators and artists alleging misuse of their work to train Gen AI systems (discussed below).
- The impact of potential job losses due to automation.
- Misinformation and manipulation through use of AI algorithms – including the rising proliferation of deep fakes and “fake news”.
- Risks to data privacy including ensuring appropriate guardrails when using personal information in connection with training Gen AI models.
- The risk of embedded bias and discrimination in the algorithms leading to unfair treatment of individuals or groups.
- The risk of unreliable or fabricated outputs (“AI hallucinations”) and inadequate oversight and quality controls.
- Potential competition risks, if market share is concentrated in those leading the development of AI, or if data sharing between businesses increases using AI systems.
- The environmental impact of Gen AI given the substantial energy consumption required to operate Gen AI systems.
- The appropriate liability regime for harm and loss, and which party should bear liability.
These are a mixture of business, legal, moral and societal risks. Other risks will likely emerge as experience and usage grows.
Copyright suits indicate an industry in early stages
In terms of copyright, the New York Times’ proceeding against OpenAI and Microsoft2 is seen as a significant test case of the constraints applicable to the development of Gen AI. The New York Times is arguing that OpenAI and Microsoft have used its copyrighted material to “train” the large language models incorporated into ChatGPT and Copilot, and that this unauthorised use and reproduction infringes copyright.3 OpenAI and Microsoft are arguing “fair use”, including that their Gen AI products do not serve as a market substitute for the New York Times’ copyrighted content.
Perplexity has also faced scrutiny following a claim by News Corp, the parent company of the New York Post and Dow Jones (owner of The Wall Street Journal), alleging that Perplexity scraped content without permission and engaged in massive-scale copying to train its AI search engine.4
Similar claims by other content creators have been filed. Notably, in September 2023, a group of prominent US authors, including Jonathan Franzen, John Grisham, George R.R. Martin and Jodi Picoult, through the Authors Guild, brought a class action claim against OpenAI alleging copyright infringement by using their works to train ChatGPT.5 Other similar claims have been brought against Meta Platforms6 and the AI image generator, Stability AI.7 The claimants in the latter proceeding recently succeeded in partially defending an application to strike out the claim, which will now continue to the discovery phase.
The outcome of these cases will have a significant impact on how large language models and other AI models are trained in the future and how Gen AI tools are used and, ultimately, how much it costs to use them.
Other publications and content creators have taken the commercial route, seeking deals with the tech companies for compensation. For instance, in April 2024 the Japanese-owned Financial Times announced a strategic partnership and licensing agreement with OpenAI.
Regulatory landscape of Gen AI – “Get ahead” or “wait and see”?
Varying regulatory approaches
As we wrote in June 2023 in this article, the pace of AI’s development has meant that regulation of Gen AI is evolving rapidly around the globe, with different jurisdictions taking different approaches and moving at different speeds. The European Union and China are the only jurisdictions that have enacted comprehensive AI-specific legislation. Other jurisdictions currently rely on existing regulatory frameworks to govern and regulate the use of AI and either have AI-specific principles or guidelines in place, or are in the early stages of proposing AI-specific legislation. Some jurisdictions have passed or proposed legislation with purported extra-territorial effect.
The map below shows the varying approaches:
Regulation is not straightforward as it needs to address the various risks referred to above, while being careful not to stifle the innovation and efficiency gains that AI has the potential to offer. How different jurisdictions address this balance over time will be interesting to watch.
What about New Zealand?
High levels of adoption and use indicated
This year, the AI Forum New Zealand, in conjunction with Victoria University of Wellington and Callaghan Innovation, surveyed 232 New Zealand organisations to measure their use of, and views on, AI.8 The survey found that 67% of respondents reported using AI in their organisations, and 96% of respondents agreed that AI has made workers more efficient in their work.
High contribution to GDP expected
In an Analytical Note released in July 2024, the New Zealand Treasury observed that, as an “advanced, high-skilled economy, New Zealand is likely to make more substantial short-term productivity gains from AI than less developed, lower-skilled economies”.9 However, the Treasury went on to caution that New Zealand could lag behind comparable jurisdictions in reaping the benefits of AI use, as the uptake of advanced digital technologies and digital innovation tends to be slower in New Zealand than elsewhere. The Treasury attributed this potential slower uptake partly to lower levels of research and development investment by New Zealand firms.10
In August 2024, Microsoft estimated that Gen AI could add NZ$76 billion to New Zealand’s annual GDP by 2038.11
Regulation
There are currently no AI-specific laws in New Zealand. Rather, New Zealand relies on existing regulatory frameworks to govern the use and deployment of AI. These existing laws are supplemented by some non-mandatory principles issued by the Privacy Commissioner.
The New Zealand statutes relevant to the use and adoption of Gen AI are set out below. A wide range of other legal considerations may also be relevant depending on the particular context, including contractual terms, competition law, confidentiality and legal privilege, as well as industry-specific regulatory obligations.
| Statute | Purpose and potential applicability to Gen AI |
| --- | --- |
| Privacy Act 2020 (Privacy Act) | Regulates how personal information is collected, used, disclosed and stored. This is supported by the Privacy Commissioner’s Artificial Intelligence and Information Privacy Principles, released in September 2023, and the initial set of expectations around AI use released in May 2023. Personal information could be used to train Gen AI models, which would engage privacy principles relating to how information is collected and disclosed, and the ability of individuals to correct personal information. |
| Copyright Act 1994 | Provides protection for original literary, dramatic, musical and artistic “works” and sets out remedies for infringement of such protected works. Machine-generated works can attract copyright protection in New Zealand. On the other hand, content created using Gen AI may infringe protected original works if a substantial part of the copyright work has been taken. Further discussion on New Zealand’s stance on AI and copyright can be found in our article here. |
| Fair Trading Act 1986 | Includes various consumer law obligations and prohibitions, including prohibitions against misleading or deceptive conduct and unconscionable conduct. This may apply where Gen AI promotes a product in a misleading way, for example where AI “hallucinations” present false or misleading information as fact. |
| Human Rights Act 1993 | Makes it unlawful to discriminate on various grounds, including sex, marital status and religion. If the data used to train Gen AI models is unbalanced or reflects bias or unfairness, this could be embedded into any decision or output produced by Gen AI. |
| Harmful Digital Communications Act 2015 (HDCA) | Aims to deter and prevent harm caused to individuals by digital communications, and to provide victims of harmful digital communications with a means of redress. The HDCA could be engaged in some specific circumstances, for instance, if harmful statements are made about individuals and involve a serious or repeated breach of the communication principles under the HDCA. |
In a paper to Cabinet from July 2024,12 the Office of the Minister of Science, Innovation & Employment recommended that, rather than developing a “standalone AI Act”, existing frameworks could be updated as needed to balance the competing interests of enabling AI innovation and mitigating AI risks. The Cabinet paper discouraged regulating AI based on “speculated harms”, as doing so may harm productivity, and encouraged a proportionate, risk-based approach to regulating the technology, to support the use of AI in New Zealand and boost innovation and productivity. This seems to signal that New Zealand’s approach will be to adjust the existing statutes above, rather than to create a new AI Act.
While these general-purpose, principle-based laws can apply to the regulation of AI and its derivative products, it remains to be seen how well this existing regulatory framework can cope with AI issues, given their complexity.
The Australian Treasury recently released a discussion paper as part of a wider review into the impact of AI on consumer protection legislation.13 As discussed in our article on this topic here, the outcome of this review could potentially influence the regulatory response to AI in New Zealand, given the close parallels between Australian and New Zealand consumer laws.
With different approaches being taken in different jurisdictions, New Zealand has the advantage of being able to assess which of the various international approaches is most effective and fit-for-purpose in an economy of our size.
Key takeaways for New Zealand directors and managers
- The scale of global investment in AI and Gen AI is so significant that its effect on New Zealand business and our daily lives will inevitably continue to increase.
- AI is likely to become increasingly part of software that businesses use, potentially through simple, routine updates. It will be important for directors and managers to understand their AI usage and be aware of the risks (see above) and opportunities this can present to the business.
- The continued proliferation of AI use will give rise to new business challenges, and force consideration of new solutions – e.g., next generation nuclear power is on the agenda as one answer to powering AI.
- Much as they do with cyber-risk, directors and managers will need to keep up to date with AI technology developments, so that they are well placed to use the available tools to optimise their businesses, and to put in place appropriate systems and guardrails as the technology and its usage develop and change.
- Amendments to existing legislation, rather than a new AI-specific statute, look like the route New Zealand will take under the current government. It remains to be seen how the existing legislation, even with AI-specific amendments, can cope with complex AI issues, given that it was not drafted with AI in mind. This uncertainty potentially increases the importance of the self-protection New Zealand businesses need to employ through their own systems, policies and guardrails.
If you have any questions about the matters covered in this article, you can get in touch with Richard Massey (Partner, Consumer, Regulatory and Compliance), Laura Littlewood (Partner, Technology and Commercial) or James Gibson (Partner, Corporate and M&A), who can advise or connect you with our wider specialists in this area.
For more insights on AI, you may find our previous articles helpful:
- The AI Act – Europe leads the charge on AI regulation - Bell Gully (March 2024)
- AI Act: a step closer to the first rules for Artificial Intelligence - Bell Gully (May 2023)
- High Court provides guidance on artificial intelligence under the New Zealand Patents Act 2013 - Bell Gully (March 2023)
[1] McKinsey Global Institute, “The economic potential of generative AI: The next productivity frontier”, June 2023.
[2] Filed in December 2023 in the US District Court for the Southern District of New York: New York Times Company v. Microsoft Corp., et al., Case No. 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023).
[3] Harvard Law Review, “NYT v. OpenAI: The Times’s About-Face”, 10 April 2024.
[4] Filed in October 2024 in the US District Court for the Southern District of New York: Dow Jones & Company, Inc. and NYP Holdings, Inc. v. Perplexity AI, Inc., Case No. 1:24-cv-7984 (S.D.N.Y. Oct. 21, 2024).
[5] Authors Guild, et al. v. OpenAI Inc., et al., Case No. 1:23-cv-8292 (S.D.N.Y. Sept. 19, 2023).
[6] Christopher Farnsworth v. Meta Platforms, Inc., Case No. 3:24-cv-6893 (N.D. Cal. Oct. 1, 2024). See also Blake Brittain, “Meta hit with new author copyright lawsuit over AI training”, Reuters, 3 October 2024.
[7] Illustrators Sarah Andersen, Kelly McKernan and Karla Ortiz sued Stability AI in the US District Court for the Northern District of California in January 2023. See Sarah Andersen, et al. v. Stability AI Ltd., et al., Case No. 23-cv-00201-WHO (N.D. Cal. Jan. 13, 2023).
[8] AI Forum New Zealand, “New Zealand’s AI Productivity Report”, September 2024.
[9] New Zealand Treasury, Analytical Note: The impact of artificial intelligence – an economic analysis, Harry Nicholls and Udayan Mukherjee, July 2024.
[10] New Zealand’s research and development investment was 0.8% of GDP in 2019, compared to an OECD average of 1.8%.
[11] Accenture and Microsoft, “New Zealand’s Generative AI opportunity”, 21 August 2024.
[12] Ministry of Business, Innovation & Employment, Approach to work on Artificial Intelligence Cabinet Paper, 25 July 2024.
[13] Australian Treasury, “Review of AI and the Australian Consumer Law” Discussion Paper, October 2024.