Safeguards are lacking in Australia’s foray into AI, writes Rachael Greaves.
The transformative tide of the Artificial Intelligence (AI) revolution is washing over the world, promising to reshape societies and systems.
Australia finds itself grappling with this wave, with a report commissioned by Industry, Science and Resources Minister Ed Husic in February calling the nation “relatively weak” in the type of AI powering global entities like ChatGPT.
Although the country faces a potential shortfall of skilled workers and computing power, the real journey extends far beyond mastering this technology. On the frontier of the digital revolution, Australia is not just confronting AI as a technological force, but as a societal one.
With the Federal Government’s deadline for input on responsible AI just past, regulating this transformative technology to address societal concerns represents an impending challenge for Australia and the rest of the world, given its advancements don’t respect national boundaries.
However, it’s crucial to emphasise that AI extends beyond generative applications like ChatGPT. The true AI revolution began when regulators and governments started adopting AI auto-classification for data risk and value, outpacing the wider market.
Today, AI’s applications are vast, spanning rules-as-code, machine learning, and governance, all areas in which the government continues to develop expertise.
Navigating ethics and compliance
The potential of AI to democratise services and information, improve healthcare, combat climate change, and address many other global challenges cannot be overstated. We must also appreciate the potential for AI to create new professions we cannot yet conceive, just as the internet gave rise to professions such as app developers and social media managers.
The ultimate challenge is ensuring these benefits are distributed equitably across society.
As Australia follows in the footsteps of the European Union, the US, the UK, and New Zealand in implementing AI regulations, we’re entering a period of introspection. Technologies with impacts on citizens will need to be analysed for compliance, creating a possible upheaval of existing algorithm-supported services.
This shift towards new regulation brings with it a ripple effect of pressure and anxieties that may, ironically, be more inhibitive to AI adoption than the supposed lack of skilled workers or computing power.
To this end, Australia’s foray into AI regulation must include safeguards against inevitable security risks, and improve transparency to protect consumers and businesses.
Minister Husic is not reinventing the wheel for AI regulation. The concepts of responsible AI have been around for many years, with Australia being one of the first economies to publish a Responsible AI Ethics Framework.
These existing frameworks, along with those from UNESCO, the G20, and the OECD, embody a universal understanding of the standards necessary when an algorithm impacts a person: transparency, explainability, and contestability. They are our tools to ensure any AI system, no matter how sophisticated or simple, can be subject to scrutiny.
AI’s potency does not lie solely with innovative generative systems. Any algorithm, if inaccurate and not subject to transparency or explainability, can cause irreparable harm. Cases such as the Robodebt debacle illustrate that AI simply facilitates the propagation of these damaging algorithms. The debate remains over the thresholds of ‘high-risk’ systems, but the concept of ‘harm’ is universal and inevitably linked to AI.
As Australia approaches AI regulation, the strategies of the EU, one of the first likely to enforce an AI Act, will be pivotal. Even in a post-Brexit world, the UK did not abandon or reimagine the EU’s General Data Protection Regulation (GDPR), signifying the potential global influence of the EU’s AI Act.
There may also be a necessity for a substantial pivot by AI companies. They may need to rethink existing machine learning, neural network, and predictive AI solutions, and shift towards more ‘white box’ models. However, if vendors can meet the expectations of one significant jurisdiction, they are likely to meet the expectations of all.
The AI journey is about more than harnessing technological power. It’s about ensuring the responsible use of a tool with the potential to reshape society profoundly. In this journey, the narrative we shape around AI will be a testament to our commitment to transparent and responsible innovation, a commitment that will determine our position in the future of AI.
*Rachael Greaves is an analyst, security practitioner, and information management specialist, and CEO and co-founder of Castlepoint Systems.
While it’s important to monitor the growth and influence of AI, we should not lump all technological advances into a single bucket.
Rules-as-code and (although I am loath to mention the two together) Robodebt are not AI. They are imperative algorithms: formulae that will return the same answers every time. With Robodebt, for example, the issue was that the formula used a known-bad method of assessing variable income, averaging it over an extended period of time.
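To make the distinction concrete, here is a minimal, hypothetical sketch of that kind of deterministic calculation. The function names, figures, and rate logic are invented for illustration and greatly simplified; the point is only that the same inputs always produce the same output.

```python
# Hypothetical, simplified sketch of an imperative income-averaging
# calculation in the Robodebt style. Names and numbers are invented.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Smear a reported annual income evenly across fortnights.

    This is the known-bad step: it assumes income was earned evenly,
    which is false for casual and variable workers.
    """
    return annual_income / FORTNIGHTS_PER_YEAR

def alleged_overpayment(annual_income: float,
                        declared_fortnightly: float,
                        reduction_rate: float) -> float:
    """A pure formula: the same inputs always give the same output."""
    assumed = averaged_fortnightly_income(annual_income)
    # Any gap between the averaged figure and the declared figure is
    # (wrongly) treated as undeclared income reducing the entitlement.
    return max(0.0, (assumed - declared_fortnightly) * reduction_rate)

# Deterministic: no training, no inference, no learning involved.
print(alleged_overpayment(52_000, 1_000, 0.5))  # always 500.0
```

Run it a thousand times and it returns the same number; the harm came from the formula itself, not from any machine learning.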
There are different ways to employ AI. One example of such an approach would be to sort past payments into two categories, good and bad, then let the machine do its best to figure out why those payments are good or bad, and make its own guesses in the future, without us understanding why.
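A minimal sketch of that supervised approach might look like the following. The library (scikit-learn), the features, and the labels are all assumptions made for illustration; the point is that the decision rules are inferred by the model rather than written by us.

```python
# Hypothetical sketch of learning "good" vs "bad" payments from examples.
# Library choice, features, and data are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each payment is described by a few numeric features, e.g.
# [amount, days_since_last_payment, declared_income_that_fortnight]
payments = [
    [1200.0, 14, 0.0],
    [300.0, 7, 950.0],
    [980.0, 14, 120.0],
    [450.0, 28, 600.0],
]
labels = ["bad", "good", "bad", "good"]  # human-assigned categories

model = RandomForestClassifier(random_state=0)
model.fit(payments, labels)  # the machine works out its own rules

# Future payments are classified by patterns the model inferred,
# patterns we never wrote down and may not be able to explain.
print(model.predict([[1100.0, 14, 50.0]]))
```

Unlike the imperative formula above, what the model has "decided" is opaque unless we add explainability on top, which is exactly why transparency and contestability matter.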
And AI is very interesting technology; it just needs to be used correctly, as an assistant, not as an authority.