AI regulation - Australia and the Pacific
By Damian Kelly
As AI technology advances at a rapid rate, we have seen various businesses begin to implement AI in their workplaces.
We answer below some common questions on Legal AI, AI in business, the risks, and Intellectual Property rights comparing Australia and the Pacific islands.
Regulating AI. Is it possible? Are there any legal regulations on Artificial Intelligence in Australia and the Pacific? What are the current and expected legislative directions?
There is no specific legislation in force in Australia which is designed to regulate artificial intelligence (AI). There have been some steps taken by the Australian government to create voluntary frameworks and most recently public consultations to assist in the drafting of legislation.
In 2019 the Australian Department of Industry, Science and Resources established the AI Ethics Framework which includes core ethical principles which businesses may adopt in order to build public trust and consumer loyalty toward AI-enabled products and services. This framework is voluntary.
The Department of Industry, Science and Resources has also commenced a consultation process via the Safe and Responsible AI in Australia discussion paper. The discussion paper highlights two types of legislation that may have the effect of regulating AI:
‘general regulations’ which have the effect of governing AI depending on its application (for example, the use of personal information in the development of an AI system would be regulated in part by the Australian Consumer Law).
‘sector-specific regulations’ which have the effect of governing AI when it is used in a particular sector (for example, AI when used as a medical device as defined in the Therapeutic Goods Act 1989 (Cth)).
Public consultations closed on 4 August 2023, and further legislative steps are expected once the findings are published.
Most Pacific Islands do not have AI-specific regulation.
Several Pacific jurisdictions, including PNG, Vanuatu, Solomon Islands and Fiji, are considering the development of regulations. Beyond governments identifying the need for AI regulation, however, there have been limited steps toward instituting it.
Is AI an author? – Who owns the right to works created by Artificial Intelligence from a legal perspective?
AI is not recognised as an author under Australian copyright laws.
In order for a work to obtain copyright protection an author must contribute “independent intellectual effort”. As an AI-system is designed, supervised and otherwise in the control of humans (i.e. lacking independence) it is unable to be an author of copyrighted works. However, a human who uses AI or creates an AI-system to create a work may be able to obtain copyright protection.
Determining who owns the right to works created by AI is yet to be addressed by legislators or the courts.
The drafting of copyright laws across the Pacific precludes AI from being an author of a copyrighted work.
Relevantly, the copyright laws of Tonga, PNG, Vanuatu, Samoa and Fiji stipulate that a natural person, physical person or an individual may be an author, while in Solomon Islands an individual or body corporate may be deemed an author.
AI in business – is this the end of human skills? What types of legal and economic risks associated with the use of AI you recognize in your clients’ businesses?
There are three key business risks associated with AI use:
Errors – in the context of coding, it has been found that developers with access to an AI assistant were likely to produce less secure code than those who wrote the code manually. Errors in coding or automated processing can cause significant financial loss or affect the efficiency of software.
Misinformation – AI software produces information based on patterns that have developed from the data the software has been fed, and many AI tools cannot verify information.
Privacy risks – the main privacy concerns surrounding AI are the potential for data breaches and unauthorised access to personal information.
Without proper AI regulation, the person using AI or the AI-system’s owner, creator or controller is likely to be liable for mistakes, errors or the fallout from privacy breaches. It is important that businesses have proper procedures and policies in place to limit the damage AI may cause to a business.
Legal AI – a threat or an opportunity for legal business? Do you use Artificial Intelligence in your legal practice? What benefits/risks do you recognize when using Artificial Intelligence as a lawyer?
The risks facing businesses apply equally to legal professionals using AI, and further risks arise; as with every nascent technology, a lawyer must exercise caution when using AI. These additional risks include:
Legal privilege – by disclosing information to an AI-system, especially a publicly accessible one (such as ChatGPT), a lawyer may inadvertently disclose confidential or privileged information;
Lack of discretion – AI-systems are created by humans with biases and are bound to follow computational rules; this means they lack the human discretion required to make judgments or decisions; and
Errors, misinformation and liability – AI-systems often provide factually incorrect information, which may also be out-of-date and not peer-reviewed (i.e. drawn from sources that are themselves inaccurate or unreliable). A lawyer who fails to correct errant information will be liable for those errors.
For a global perspective on AI regulations and legislation, see this report from IAG Global.