The need for ethical AI leadership

Setting clear standards for how the tech is being implemented, with honest and transparent communication, will help government allay public concerns, writes Sanja Galic.

Over the past year, businesses and citizens alike have been using AI in their daily lives. ChatGPT, released only in late 2022, has already become ubiquitous, along with similar generative artificial intelligence tools. From students using AI as a tutoring aid to individuals creating workout routines or meal plans, there is broad acceptance and adoption of AI as an everyday lifestyle tool.

As such, it’s not surprising that the Digital Citizen Report 2024 finds that the majority of Australians support the government using AI to improve services. More than half (55%) are in favour of “extensive usage”, with support particularly high among younger people and higher-income households.

However, alongside the many advantages of AI there are growing public concerns about its risks and its potential to do harm as well as good. Australians want reassurance about risk management and clear governance: 94% have concerns about AI and 92% want government regulation of it.

Such widespread concern about AI risks in government services demands proactive ethical leadership. With AI presenting risk as well as reward, there’s an opportunity for the Australian government to take a stronger leadership role in responsible AI implementation.

Advantages of AI in digital government

AI offers clear benefits to digital government. It can speed up communications and service delivery, and enable 24/7 support. Especially in high-migration countries like Australia, where around 300 different languages are spoken, AI can also make translation easier for culturally and linguistically diverse communities.

The Australian government has enjoyed some success with AI, notably during the Covid-19 pandemic, when it was used to analyse large datasets to monitor the spread of the virus, predict outbreaks and manage resource allocation.

The Australian Border Force has deployed AI-powered SmartGates that use biometrics to speed up identity confirmation at airports. The National Drought Map, developed by the Department of Agriculture, uses AI to analyse real-time weather data to help government provide support where it’s needed most.

The Australian Taxation Office previously experimented with an AI-driven chatbot called Alex to help people with tax queries, and AI chatbots have since been rolled out across other myGov platforms.

Australians are particularly supportive of use cases such as navigation and mapping (42%), predictive text and autocorrect (37%) and language translation (33%).

The trust and transparency problem

A lack of transparency over AI in digital government results in concerns about accountability, bias, and fairness. Citizens may struggle to trust decisions made by opaque algorithms, and the absence of clear oversight increases the risk of biased outcomes, misuse of data and erosion of public trust in government institutions.

Governments overseas have already experienced challenges with AI. The UK’s university admissions system was thrown into chaos by a flawed exam-grading algorithm that unfairly penalised pupils from schools that had historically performed less well. The US ran into facial recognition bias in an app for asylum seekers, which struggled to recognise darker skin tones. In the Netherlands, low-income families were falsely accused of fraud because of racial profiling in a benefits algorithm.

Australians’ concerns about the risks of AI in government services are varied. They include a preference for speaking with a person (57%), data security and privacy issues (49%) and the potential for job losses (44%).

There’s a strong demand for transparency: 46% want full transparency into the code behind services, and 88% want at least some transparency around how AI is used in government services. This desire is higher among some of the most concerned groups, such as people who have recently struggled with their mental health or whose finances are precarious.

Benefits of strong AI leadership

Although the pressure to deploy AI safely is high, government organisations should find this encouraging: it is a mandate for strong AI leadership. Setting clear, ethical standards for how AI is implemented, with honest and transparent communication, will help to allay public concerns, improve adoption and realise the potential benefits faster.

Several governments have already taken this step, with measures such as Canada’s Directive on Automated Decision-Making, Singapore’s Model AI Governance Framework and the EU’s proposed AI Act.

Australia itself has developed AI Ethics Principles, which are now being used as the foundation for a national framework for the assurance of artificial intelligence in government.

Developing this nationally consistent approach is an important step in setting clear expectations for appropriate practice, and in helping agencies at all levels of government to deploy AI safely and responsibly and win public confidence.

Sanja Galic is senior client partner at Publicis Sapient
