Why we need to hit ‘pause’ on digital ID

Australia’s regulatory frameworks are not designed for digital technology and now we are playing Russian roulette with our citizens through a dubious surveillance program, writes James Wilson.

James Wilson

Around 1.6 million Australians already use facial verification to access 70 different government services, and the government earlier this month revealed plans to pour $256 million into upgrading and expanding its opt-in ‘digital ID’ system.

All technology can be used for good and evil – nuclear, the internet, smartphones.

The bad typically rears its head when technology moves fast and regulation moves slow, a problem we have with facial recognition technology in Australia right now.

Facial recognition has many potential benefits, like identifying a trafficked child before they leave a country, processing passengers through an airport more efficiently or preventing under-age gamblers from entering a betting facility. The potential is exciting. On the other side, it can be used (and is) to identify and arrest anti-government protesters and quell democracy.

The challenge with facial recognition has arisen because the technology has developed at a blinding pace and found its way into the hands of public sector agencies and law enforcement before the rest of us could say ‘smile’.

In 2017 the federal government established a new scheme to allow government agencies, telecommunications companies and banks to use facial recognition technology to collect and share images of people across the country. While this included a clause for “robust privacy safeguards” to prevent misuse, the details are particularly vague, leaving the door open for enforcement on the grounds of “community safety”. In effect, it could be used by a wide range of agencies to confirm the identity of any Australian with a passport or driver’s licence, regardless of whether they are suspected of a crime.

Why is this so troubling?

Lack of consent

Firstly, consent. No one in Australia is asking whether anyone wants to share their photo.

Earlier this year it was confirmed that Australian Federal Police (AFP) officers trialed the controversial facial recognition technology Clearview AI from late 2019. Clearview, founded by an Australian in New York, claims to hold a database of billions of photos pulled from platforms like Facebook, Instagram and employment websites. While the AFP admitted to a limited trial to support child exploitation investigations, where does that leave the right to privacy for the millions of social media users who did not consent to be tracked in this way?

Digital and civil rights groups have voiced serious concerns about the lack of consent around police use of facial recognition technology in Australia – a concern we need to address.

In London last year, police officers sat in a van beside a train station trialling live facial recognition software, matching every face that went past against a list of wanted criminals. One man, tipped off by a bystander that he was being watched, pulled his jumper over his face because he did not want to participate. He was taken aside and his face was forcibly scanned on a mobile device. He was also fined for his protest.

Worsening inequality

Then there is the other human rights problem: linking appearance with criminality.

US researcher Joy Buolamwini has spent the last several years exploring how accurately facial recognition and artificial intelligence (AI) technologies work on people from different ethnic backgrounds. Not very well, she first learned, when the robots she was working with responded better when she wore a white mask.

Imagine how it would feel as an Indigenous Australian, already a target of bias and discrimination, being accused of something you did not do. In the wake of 2020’s Black Lives Matter protests, what level of social progress can we achieve if we allow this bias to spread further across law enforcement, and potentially beyond, from bank loan assessments to job screening?

Applying the brakes

With so many question marks raised, much of the Western world is taking a similar approach to facial recognition as to COVID-19 – keeping its distance and moving cautiously to slow its spread. This year San Francisco became the first city to ban facial recognition use by law enforcement, and Portland quickly followed suit.

In April, New Zealand announced a world-first, flexible approach. Rather than legislating AI specifically, the country will update existing regulations on an ongoing basis to govern appropriate uses as the technology evolves.

Unlike our attitude to coronavirus, Australia’s approach to facial recognition is lagging both comparable economies and the private sector. Recently, major corporations including Microsoft, Amazon and IBM placed moratoriums on selling facial recognition technology to law enforcement.

In our business, we speak with large corporations every day interested in ways to leverage AI and machine learning technologies, and the first thing we do is step them through the risks. In lieu of a reliable government framework, we follow our own Responsible AI Framework. We address topics like the impact on racial and ethnic minorities, and what it means for a company’s relationship with the community if it applies the technology in the wrong way.

Australia already has a shaky track record on data use and privacy, marked by poor implementation and a failure to articulate the value to the public – My Health Record being the most notable example. Now we have a federal facial recognition database that all states are signatories to. A database we arguably do not need, considering we already manage national security well. Australians should feel uncomfortable.

Our regulatory frameworks are not designed for digital technology and now we are playing Russian roulette with our citizens through a dubious surveillance program.

The government would be wise to hit pause on its database until the risks are fully understood and addressed.

James Wilson is CEO at Eliiza and Host of AI Australia Podcast
