Meeting AI’s sustainability challenges

Despite AI’s promise of efficiency and increased productivity, a critical consideration looms: the significant energy demands associated with its operation, writes Kelly Brough.

Last year saw Generative AI propel itself to the forefront of public awareness with the introduction of ChatGPT. Now, it is forming a foundation for the way governments are looking to operate and innovate over the coming years.

However, with net zero targets proving an ongoing challenge, governments need to be careful to ensure inefficient AI practices aren’t harming their sustainability aims.

Australia is taking a pioneering role in the implementation of AI, with the Albanese Government recently announcing a partnership with Microsoft through the Digital Transformation Agency, making it one of the first governments in the world to deploy a Generative AI service.

Australian Public Service (APS) staff will be able to trial new ways to innovate and enhance productivity, with a view to delivering better government services for the Australian people. In parallel to this, the federal government recently announced its plan to unveil the ‘Responsible Artificial Intelligence Adopt’ program: a $17 million program aiming to help Australian businesses adopt AI.

At this crucial juncture, careful consideration needs to be given to aligning these technologies with eco-friendly principles. As the influence of Generative AI continues to grow, striking the right balance between technological advancement and environmental responsibility remains an ongoing challenge that governments must navigate judiciously.

Bytes vs the Earth

The substantial processing power of AI presents both an opportunity and a challenge for public sector organisations looking to advance their sustainability goals. 

We have already seen AI put to good effect in environmental protection, with the Australian Institute of Marine Science’s (AIMS) open access AI tool acting as a prime example.

The tool employs Machine Learning and AI to extract and share data from coral reef images, helping marine biologists monitor reef health in real time. It reports on coral reef composition with 80-90 percent precision and operates 700 times faster than traditional manual assessment methods. This approach not only saves months of labour but also frees up valuable resources dedicated to reef management.

However, without mitigation, technologies like Generative AI can also carry a significant carbon footprint. This is because training Large Language Models demands powerful computing resources and consumes substantial amounts of energy.

For example, training GPT-3 alone is estimated to have consumed a staggering 1.287 gigawatt hours of electricity. Compared with the average energy consumption of Australian households, that equates to powering 215 homes for a year. The associated carbon emissions, around 502 tonnes, are roughly equivalent to the yearly emissions of 33 Australians (estimated at 15.4 tonnes per person). On a mass scale, this adds up very quickly.
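
As a rough sanity check, the arithmetic behind those comparisons can be sketched in a few lines of Python. The average household consumption of roughly 6,000 kWh per year is an assumption used here purely for illustration; the other numbers are those quoted above.

    # Back-of-envelope check of the GPT-3 training figures quoted above.
    training_energy_kwh = 1_287_000   # 1.287 GWh expressed in kWh
    household_kwh_per_year = 6_000    # assumed average Australian household consumption
    emissions_tonnes = 502            # associated CO2 emissions from training
    per_capita_tonnes = 15.4          # estimated annual emissions per Australian

    print(training_energy_kwh / household_kwh_per_year)  # ~215 homes powered for a year
    print(emissions_tonnes / per_capita_tonnes)          # ~33 people's annual emissions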

Researchers estimate that routine use of large language models contributes even more significantly to the carbon footprint dilemma. If current usage patterns persist, machine learning systems could monopolise the entirety of global energy production by 2040.

So how can we navigate this technological advancement within a broader sustainability agenda?

Efficiency is the key to sustainability

One of the keys to reducing the carbon footprint of Generative AI models is to lessen the computational load. This can be achieved by adopting a precise energy estimation approach that is non-intrusive, vendor-independent and covers a wide range of hardware. By refining training and deployment processes, it becomes possible to cut energy usage, emissions and expenses.
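
As a minimal sketch of what such an estimate might look like in practice, the snippet below multiplies device count, average power draw, utilisation and time, then scales by data-centre overhead and a grid carbon factor. The formula, the power-usage-effectiveness value and the example hardware figures are illustrative assumptions, not figures from any specific vendor or from the article.

    # Rough, vendor-independent estimate of training energy and emissions.
    # All figures below are illustrative assumptions.
    def estimate_training_energy_kwh(num_gpus, gpu_power_watts, utilisation, hours, pue=1.5):
        """Energy = devices x average power x time, scaled by data-centre overhead (PUE)."""
        it_energy_kwh = num_gpus * gpu_power_watts * utilisation * hours / 1000
        return it_energy_kwh * pue

    def estimate_emissions_tonnes(energy_kwh, grid_kg_co2_per_kwh=0.7):
        """Convert energy to emissions using an assumed grid carbon-intensity factor."""
        return energy_kwh * grid_kg_co2_per_kwh / 1000

    # Example: 512 GPUs averaging 300 W, 80% utilised, for 14 days of training.
    energy = estimate_training_energy_kwh(512, 300, 0.8, 14 * 24)
    print(f"{energy:,.0f} kWh, ~{estimate_emissions_tonnes(energy):.0f} t CO2e")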

Reducing the training load also doesn’t necessarily mean compromising on quality. Data suggests that training an AI model on 70% of the complete dataset had minimal effects on accuracy (less than 1%), while achieving a notable 47% reduction in energy consumption.
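
A minimal illustration of that idea is to deliberately train on a random 70 percent subset of the available data and check the accuracy that results. The scikit-learn library and the toy digits dataset below are assumptions chosen purely for illustration, not the setup behind the figures cited above.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Keep only 70% of the available training examples.
    X_small, _, y_small, _ = train_test_split(X_train, y_train, train_size=0.7, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_small, y_small)
    print(f"Accuracy when trained on 70% of the data: {model.score(X_test, y_test):.3f}")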

Recycling models

Opting to modify pre-trained models for different tasks or domains, instead of starting from square one, is also an effective strategy. By fine-tuning existing large models for your specific needs, you can avoid gathering new data or training entirely new models, reducing energy consumption and carbon footprint.

Experiments have shown that by training a significantly smaller “student” model, merely 6% of the size of the original “teacher” model, you can attain the same accuracy level (99%) while consuming 2.7 times less energy.
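
A minimal sketch of this teacher-and-student approach, commonly known as knowledge distillation, is shown below in PyTorch. The layer sizes, temperature and loss weighting are illustrative assumptions rather than details of the experiments cited.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative large "teacher" and much smaller "student" classifiers.
    teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
    student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """Blend a soft-label loss (match the teacher) with the usual hard-label loss."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # One illustrative training step on random data.
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, y)
    loss.backward()
    optimiser.step()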

Exploring an AI resolution

Encouraging the use of AI within the public sector has the potential to amplify energy efficiency and foster innovations geared towards low-carbon solutions. Developing a sustainable technology strategy has become the core mission for most purpose-driven government CIOs.

Yet it’s crucial to highlight that not every scenario requires a Generative AI solution, given the carbon footprint it can create. The balance between energy savings and costs needs to be weighed up, and recognising the nuanced needs of each application will allow us to make conscious choices. Often, a simpler diagnostic AI method may be more environmentally sustainable and appropriate than a Generative AI approach.

AI will become integral to government solutions in the coming years, but governments and businesses will need to take proactive steps to reduce its potential negative environmental impacts before it is too late. Without this, the significant energy consumption of Generative AI could stop them from achieving their sustainability aims.

*Kelly Brough is Managing Director Data and AI at Accenture Australia and NZ
