You wouldn’t usually associate Pikachu with protest.
But a figure dressed as the iconic yellow Pokémon joined a protest last week in Turkey to demonstrate against the country’s authoritarian leader.
And then a virtual doppelgänger made the rounds on social media, raising doubt in people’s minds about whether what they were seeing was true. (Just to be clear, the image in the post shown below is very much fake.)
This is the latest in a spate of incidents involving AI-generated (or AI-edited) images that can be made easily and cheaply and that are often posted during breaking news events.
Doctored, decontextualised or synthetic media can cause confusion, sow doubt, and contribute to political polarisation. The people who make or share these media often benefit financially or politically from spreading false or misleading claims.
How would you fare at telling fact from fiction in these cases? Have a go with this quiz and learn more about some of AI’s (potential) giveaways and how to stay safer online.
How’d you go?
As this exercise might have revealed, we can’t always spot AI-generated or AI-edited images with just our eyes. Doing so will also become harder as AI tools become more advanced.
Dealing with visual deception
AI-powered tools exist to try to detect AI content, but these have mixed results.
Running suspect images through a search engine to see where else they have been published – and when – can be a helpful strategy. But this relies on there being an original “unedited” version published somewhere online.
Perhaps the best strategy is something called “lateral reading”. It means getting off the page or platform and seeing what trusted sources say about a claim.
Ultimately, we don’t have time to fact-check every claim we come across each day. That’s why it’s important to have access to trustworthy news sources that have a track record of getting it right. This is even more important as the volume of AI “slop” increases.
Gaia has been the most successful ESA space mission ever, so why did they turn Gaia off? What did Gaia achieve? And perhaps most importantly, why was it my favourite space telescope?
Running on empty
Gaia was retired for a simple reason: after more than 11 years in space, it ran out of the cold gas propellant it needed to keep scanning the sky.
The telescope made its final observation on 15 January 2025. The ESA team then performed a few weeks of testing, before telling Gaia to leave its home at a point in space called L2 and start orbiting the Sun away from Earth.
L2 is one of five “Lagrange points” in the Sun–Earth system where gravitational conditions make for a nice, stable orbit. L2 is located 1.5 million kilometres from Earth on the side facing away from the Sun.
L2 is a highly prized location because it’s a stable spot to orbit, it’s close enough to Earth for easy communication, and spacecraft can use the Sun behind them for solar power while looking away from the Sun out into space.
It’s also too far away from Earth to send anyone on a repair mission, so once your spacecraft gets there it’s on its own.
Gaia used its thrusters for the last time to push itself away from L2, and is now drifting around the Sun in a “retirement orbit” where it won’t get in anybody’s way.
Gaia’s goal was to build a detailed three-dimensional map of our Milky Way galaxy. To do this, it measured the precise positions and motions of 1.46 billion objects in space. Gaia also measured brightnesses and variability, and those data were used to provide temperatures, gravitational parameters, stellar types and more for millions of stars. One of the key pieces of information Gaia provided was the distance to millions of stars.
A cosmic measuring tape
I’m a radio astronomer, which means I use radio telescopes here on Earth to explore the Universe. Radio light is the longest wavelength of light, invisible to human eyes, and I use it to investigate magnetic stars.
But even though I’m a radio astronomer and Gaia was an optical telescope, looking at the same wavelengths of light our eyes can see, I use Gaia data almost every single day.
I used it today to find out how far away, how bright, and how fast a star was. Before Gaia, I would probably never have known how far away that star was.
This is essential for figuring out how bright the stars I study really are, which helps me understand the physics of what’s happening in and around them.
A huge success
Gaia has contributed to thousands of articles in astronomy journals. Papers released by the Gaia collaboration have been cited well over 20,000 times in total.
Gaia has produced too many science results to share here. To take just one example, Gaia improved our understanding of the structure of our own galaxy by showing that it has multiple spiral arms that are less sharply defined than we previously thought.
Not really the end for Gaia
It’s difficult to express how revolutionary Gaia has been for astronomy, but we can let the numbers speak for themselves. Around five astronomy journal articles are published every day that use Gaia data, making Gaia the most successful ESA mission ever. And that won’t come to a complete stop when Gaia retires.
The Gaia collaboration has published three data releases so far. In each release, the collaboration processes and checks the data, adds some important analysis, and publishes all of that in one big hit.
And luckily, there are two more big data releases with even more information to come. The fourth data release is expected in mid to late 2026. The fifth and final data release, containing all of the Gaia data from the whole mission, will come out sometime in the 2030s.
This article is my own small tribute to a telescope that changed astronomy as we know it. So I will end by saying a huge thank you to everyone who has ever worked on this amazing space mission, whether it was engineering and operations, turning the data into the amazing resource it is, or any of the other many jobs that make a mission successful. And thank you to those who continue to work on the data as we speak.
Finally, thank you to my favourite space telescope. Goodbye, Gaia, I’ll miss you.
Plant behaviour may seem rather boring compared with the frenetic excesses of animals. Yet the lives of our vegetable friends, who tirelessly feed the entire biosphere (including us), are full of exciting action. It just requires a little more effort to appreciate.
One such behaviour is the dynamic opening and closing of millions of tiny mouths (called stomata) located on each leaf, through which plants “breathe”. In this process they let out water extracted from the soil in exchange for precious carbon dioxide from the air, which they need to produce sugar in the sunlight-powered process of photosynthesis.
Opening the stomata at the wrong time can waste valuable water and risk a catastrophic drying-out of the plant’s vascular system. Almost all land plants control their stomata very precisely in response to light and humidity to optimise growth while minimising the damage risk.
How plants evolved this extraordinary balancing act has been the subject of considerable debate among scientists. In a new paper published in PNAS we used lasers to find out how the earliest stomata may have operated.
Tiny valves, global consequences
Much depends on the way stomata behave: plant productivity, sensitivity to drought, and indeed the pace of the global carbon and water cycles.
However, they are difficult to observe in action. Each stoma is like a tiny, pressure-operated valve: a pair of “guard cells” surrounds an opening, or pore, which lets water vapour out and carbon dioxide in.
When pressure increases in stomatal guard cells, the pore opens – and vice versa. Artemide/Shutterstock
When fluid pressure increases inside the guard cells, they swell up to open the pore. When pressure drops, the cells deflate and the pore closes. To understand stomatal behaviour, we wanted to be able to measure the pressure in the guard cells – but it’s not easy.
Lasers, bubbles and evolution
Enter Craig Brodersen of Yale University with a newly developed microscope-guided laser. It can create microscopic bubbles inside the individual cells that operate the stomatal pore.
When Brodersen spent a sabbatical at the University of Tasmania (where I am based), we found we could determine the pressure inside stomatal cells by tracking the size of these bubbles and how quickly they collapsed. This involved theoretical calculations guided by bubble expert Philippe Marmottant, of the French National Centre for Scientific Research (CNRS) in Grenoble.
This new tool gave us the perfect opportunity to explore how the behaviour of stomata is different among major plant groups. The aim was to test our hypothesis that the evolution of stomatal behaviour follows a predictable trajectory through the history of plant evolution.
We argue it began with a relatively simple ancestral passive control state, currently represented in living ferns and lycophytes, and developed to a more active hormonal control mechanism seen in modern conifers and flowering plants.
Against this hypothesis, some researchers have previously reported complex behaviours in some of the most ancient of stomata-bearing plants, the bryophytes. We wanted to test this finding using our newly developed laser instrument.
400 million years of development
What we found was firstly that our laser pressure probe technique worked extremely well. We made nearly 500 measurements of stomatal pressure dynamics in the space of a few months. This was a marked improvement on the past 45 years, in which fewer than 30 similar measurements had been made.
Secondly, we found that the stomata of our representative bryophytes (hornworts and mosses) lacked even the most basic responses to light found in all other land plants.
This result supported our earlier hypothesis that the first stomata, found in ancestors of the modern bryophytes 450 million years ago, were likely very simple valves. They would have lacked the complex behaviours seen in modern flowering plants.
Our results suggest that stomatal behaviour has changed substantially through the process of evolution, highlighting critical changes in functionality that are preserved in the different major land plant groups that currently inhabit the Earth.
How plants will survive the future
We can now say with confidence that stomata in mosses, ferns, conifers and flowering plants all behave in very different ways. This has an important corollary: they will all respond differently to the sweeping changes in atmospheric temperature and water availability they face now and into the near future. Modelling stomatal behaviour will help us to predict these impacts and highlight plant vulnerability.
In terms of agricultural benefit, our new laser method should be fast and sensitive enough to reveal even small differences in the behaviour of closely related plants. This may help to identify crop variants that use water in a more efficient or productive way, which will assist plant breeders to find varieties that better translate increasingly unpredictable soil water supplies into food.
So next time you look upon a leaf, consider the frantic pace of dynamic calculation and adjustment of millions of little mouths, reacting as your breath falls upon them. Realise that our own fate, tied to the performance of forests and crops in future climates, hangs on the behaviour of the stomata of different species. A good reason for us to understand these unassuming little valves.
German anaesthesiologist Joachim Boldt has an unfortunate claim to fame. According to Retraction Watch, a public database of research retractions, he is the most retracted scientist of all time. To date, 220 of his roughly 400 published research papers have been retracted by academic journals.
Boldt may be a world leader, but he has plenty of competition. In 2023, more than 10,000 research papers were retracted globally – more than any previous year on record. According to a recent investigation by Nature, a disproportionate number of retracted papers over the past ten years have been written by authors affiliated with several hospitals, universities and research institutes in Asia.
Academic journals retract papers when they are concerned that the published data is faked, altered, or not “reproducible” (meaning it would yield the same results if analysed again).
Some errors are honest mistakes. However, the majority of retractions are associated with scientific misconduct.
But what exactly is scientific misconduct? And what can be done about it?
From fabrication to plagiarism
The National Health and Medical Research Council is Australia’s primary government agency for medical funding. It defines misconduct as breaches of the Code for the Responsible Conduct of Research.
In Australia, there are broadly eight recognised types of breaches. Research misconduct is the most severe.
These breaches may include failure to obtain ethics approval, plagiarism, data fabrication, falsification and misrepresentation.
This is what was behind many of Boldt’s retractions. He made up data for a large number of studies, which ultimately led to his dismissal from the Klinikum Ludwigshafen, a teaching hospital in Germany, in 2010.
In another case, China’s He Jiankui was sentenced to three years in prison in 2019 for creating the world’s first genetically edited babies using the gene-editing technology known as CRISPR. His crime was that he falsified documents to recruit couples for his research.
Not every breach is deliberate. In one case, a published paper contained image duplication and misrepresentation of data. This led to the journal retracting the paper and launching an investigation, which concluded the breach was unintentional and resulted from the pressures of academic research.
Fewer than 20% of all retractions are due to honest mistakes. Researchers usually contact the publisher to correct errors when they are detected, with no major consequences.
The need for a national oversight body
In many countries, an independent national body oversees research integrity.
In the United Kingdom, this body is known as the Committee on Research Integrity. It is responsible for improving research integrity and addressing misconduct cases. Similarly, in the United States, the Office of Research Integrity handles allegations of research misconduct.
In contrast, Australia lacks an independent body directly tasked with investigating research misconduct. There is a body known as the Australian Research Integrity Committee, but it only reviews the institutional procedures and governance of investigations to ensure they are conducted fairly and transparently – and it does so with limited effectiveness. For example, last year it received 13 complaints, only five of which were investigated.
Instead Australia relies on a self-regulation model. This means each university and research institute aligns its own policy with the Code for the Responsible Conduct of Research. Although this code originated in medical research, its principles apply across all disciplines.
For example, in archaeology, falsifying an image or deliberately reporting inaccurate carbon dating results constitutes data fabrication. Another common breach is plagiarism, which can also be applied to all fields.
But self-governance on integrity matters is fraught with problems.
Investigations often lack transparency and are carried out internally, creating a conflict of interest. Often the investigative teams are under immense pressure to safeguard their institution’s reputation rather than uphold accountability.
A 2023 report by the Australia Institute called for the urgent establishment of an independent, government-funded research integrity watchdog.
The report recommended the watchdog have direct investigatory powers and that academic institutions be bound by its findings.
The report also recommended the watchdog should release its findings publicly, create whistleblower protections, establish a proper appeals process and allow people to directly raise complaints with it.
Research credibility is on the line
The consequences of inadequate oversight are already evident.
One of the biggest research integrity scandals in Australian history involved Ali Nazari, an engineer from Swinburne University. In 2022 an anonymous whistleblower alleged Nazari was part of an international research fraud cartel involving multiple teams.
Investigations cast doubt on the validity of the 287 papers Nazari and the other researchers had collectively published. The investigations uncovered numerous violations, including 71 instances of falsified results, plagiarism and duplication, and 208 instances of self-plagiarism.
If Australia had an independent research integrity body, there would be a clear governance structure and an established, transparent pathway for reporting breaches at a much earlier stage.
Timely intervention would help reduce further breaches through swift investigation and corrective action. Importantly, consistent governance across Australian institutions would help ensure fairness. It would also reduce bias and uphold the same standards across all misconduct cases.
The call for an independent research integrity watchdog is long overdue.
Only through impartial oversight can we uphold the values of scientific excellence, protect public trust, and foster a culture of accountability that strengthens the integrity of research for all Australians.
The age-old saying “you are what you eat” rings true – diet quality affects our health from the inside out. While a healthy diet can improve health and wellbeing, a poor diet increases the risk of chronic health conditions such as obesity, diabetes and heart disease.
But Australians’ diets appear to be getting worse, not better. Our new modelling study suggests by 2030, our diets will comprise almost 10% less fruit, and around 18% more junk food. This puts us further away from national targets for healthy eating.
A public health priority
A healthy diet is a priority area of the National Preventive Health Strategy. This strategy sets clear goals to improve diet quality by 2030, including increasing fruit and vegetable intake, and reducing consumption of discretionary or “junk” food.
Junk foods (such as cakes, chips, chocolate, confectionery, certain takeaway foods and sugary drinks) are high in saturated fat, salt and sugar, and should only be consumed occasionally and in small amounts.
The preventive health strategy stipulates adults should be consuming two serves of fruit and five serves of vegetables per day, and should be reducing discretionary foods to less than 20% of total energy intake.
Currently, we’re sitting well short of these targets.
We wanted to know whether we might be able to achieve these goals by 2030. So we combined unique data on Australians’ diets with predictive models to map out how our diets are likely to change by 2030.
The CSIRO Healthy Diet Score survey has been running since 2015. This survey uses short questions to measure intake of the five healthy food groups, including fruit and vegetables, as well as discretionary foods. The questions ask about how often people eat certain foods, and how much they eat, to determine an individual’s average daily consumption.
We analysed data from more than 275,000 people who completed this survey between 2015 and 2023. We used predictive modelling techniques called generalised linear models to forecast future diet trends against the national targets. We also broke our findings down by sex and age.
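As a rough illustration of this kind of forecasting, the sketch below fits an ordinary least-squares line (the simplest generalised linear model, with an identity link) to yearly fruit-serve averages and projects it to 2030. The yearly values here are invented placeholders chosen to mimic a small decline, not the CSIRO survey data, and the real study's models were more sophisticated.

```python
# Minimal trend extrapolation, assuming a linear decline.
# The serve values are hypothetical, not the CSIRO survey data.
def fit_linear_trend(years, values):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = list(range(2015, 2024))                      # survey years 2015-2023
fruit = [1.54 - 0.0125 * (y - 2015) for y in years]  # hypothetical daily serves

slope, intercept = fit_linear_trend(years, fruit)
forecast_2030 = intercept + slope * 2030
print(f"trend: {slope:.4f} serves/year; 2030 forecast: {forecast_2030:.2f} serves")
```

With a steady decline of about 0.0125 serves per year, the projection lands around 1.35 serves per day by 2030 – the same style of extrapolation, applied to the real survey data, underlies the figures reported below.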
What we found suggests we’re heading in the wrong direction.
Overall, we found fruit consumption is declining. On average, Australians were eating 0.1 fewer serves of fruit in 2023 than they did in 2015. If this trend continues, we expect a further 9.7% decrease in the average serves of daily fruit to 1.3 serves per day by 2030, well below national targets.
While vegetable consumption appears steady at around 3.7 serves per day, this is well below the recommended daily intake of 5 serves per day.
Concerningly, we are also seeing an increase in consumption of discretionary foods. Average daily intake increased by 0.7 serves between 2015 and 2023, with a further 0.8 serve increase predicted by 2030 (an 18% rise). That’s a 1.5 serve (40%) increase in just 15 years.
We can’t put an exact figure on how junk food intake stacks up against the targets, because we looked at serves per day, while the targets are about the proportion of total energy. However, the intakes we identified almost certainly represent well over 20% of total energy intake.
Things look worse for women. By 2030, women are predicted to be eating 13.2% less fruit and 21.6% more discretionary foods compared to 2023. For men, our predictions suggest a 4.8% decrease in fruit intake and a 19.5% increase in junk foods.
Despite a greater change in women, men are still predicted to be eating more discretionary foods by 2030 (6.3 serves per day for men versus 4.6 for women).
For Australians aged 30 and above, both fruit and vegetable intake are declining. Adults aged 31–50 have the lowest reported fruit and vegetable intake, but the largest change is in adults 71 and older. For these older Australians, we estimate a 14.7% decrease in fruit consumption and a 6.9% decrease in vegetable consumption by 2030. That’s equivalent to a decrease of 0.5 serves of fruit and 0.2 serves of vegetables since 2015.
Discretionary food intake is on the rise in all age groups, but particularly in younger adults.
However, while young Australians (18–30 years) may be eating more discretionary foods, they’re also the only group eating more healthy food. Both fruit and vegetable consumption are increasing for young Australians, with our modelling suggesting a 10.7% and 13.2% respective rise in average serves per day by 2030.
Although this is a positive sign, it’s not enough, as these projections still put young Australians below the recommended daily intake.
Some limitations
Our modelling helps us to understand diet trends over recent years and project these into the future.
However, the research doesn’t tell us what’s driving the worrying trends we’ve observed in Australian diet quality. There are likely to be a variety of factors at play.
For example, many Australians understand what a “healthy balanced diet” is, but what we eat could be affected by social and personal preferences.
It could also be related to cost of living and other pressures which can make fresh food harder to obtain. Also, the area where we live can influence how easy or hard it is to make healthy food choices.
Understanding the root causes behind these changes is a vital area of future research.
In terms of other limitations, our study only focused on the diet quality of Australian adults and didn’t investigate trends in children’s diets.
Also, we only looked at fruit, vegetables and junk food in this study. But we are currently studying changes in the whole diet, taking in other food groups as well.
What can we do?
Australian diets are going in the wrong direction, but it’s not too late to correct the path. We need to ensure all Australians understand what constitutes a healthy diet, and can afford to maintain one.
While no one person, sector or organisation can do this alone, by working together we can put a greater focus towards eating a healthy diet. This includes reviewing policy around the availability and price of fresh fruits and vegetables, as well as looking at our own plates and swapping the junk food for healthier options.
Danielle Baird, a Team Leader in Nutrition and Behaviour at CSIRO, contributed to this article.
Late last year, a massive ocean swell caused by a low pressure system in the North Pacific generated waves up to 20 metres high, and damaged coastlines and property thousands of kilometres from its source.
Two years earlier, another storm system southeast of New Zealand also whipped up massive waves, with the swell reaching as far as Canada, battering Pacific island coasts along the way.
These storms, and the swells they create, are facts of nature. But while we understand a lot about the extraordinary forces at work, we can still do more to predict their impact and coordinate global warning systems.
How big waves are born
Waves are made by wind blowing over a water surface. The longer and stronger the wind blows, the more energy is transferred into those waves.
As well as an increase in wave height, sustained high wind speeds generate waves with a longer period – that is, more time (and distance) between successive wave crests. Oceanographers refer to the mix of wave heights and periods (and to some extent directions) as a “sea” state.
Once the wind stops blowing, or the sea moves away from the wind that is generating it, the waves become swell and start to separate. The longest-period waves move fastest and shorter-period waves more slowly.
Most waves resulting from a storm have periods of 12–16 seconds, with the individual waves travelling at speeds of 60–80km per hour.
But very large storms with high, sustained winds can generate waves with periods of more than 20 seconds. These waves travel much faster, over 100km per hour in the open ocean, and their energy (which travels more slowly than individual waves) can cover 1,500km in 24 hours.
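The relationship between period and speed described above can be sketched with standard deep-water wave physics: individual crests travel at a phase speed of c = gT/2π, while the swell's energy travels at the group speed, which in deep water is half the phase speed. This is a textbook approximation, not a calculation from the article itself.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed_kmh(period_s: float) -> float:
    """Deep-water phase speed of individual crests: c = g*T/(2*pi), in km/h."""
    return G * period_s / (2 * math.pi) * 3.6

def group_travel_km(period_s: float, hours: float) -> float:
    """Distance swell energy covers: group speed is half the phase speed."""
    return phase_speed_kmh(period_s) / 2 * hours

# A 20-second swell: crests move at over 100 km/h, and the swell's
# energy covers on the order of 1,400 km in a day.
print(round(phase_speed_kmh(20)), "km/h")
print(round(group_travel_km(20, 24)), "km in 24 hours")
```

Running the same formula for a typical 12–16 second storm swell gives crest speeds of roughly 67–90 km/h, consistent with the figures quoted above.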
Ocean waves, particularly long-period swells, lose very little energy as they travel. And unless they collide with an island and break, they are capable of travelling great distances.
By comparison, shorter period waves take much longer to travel and lose more energy. If they encounter a wind field moving in another direction, this also removes energy and reduces their height.
But sometimes, a particularly strong storm system can generate long-period waves with enough energy to travel across the Pacific, reaching shores thousands of kilometres away.
A unique characteristic of such long-range swells is that individual waves contain a lot more energy than shorter-period local waves. They grow to greater heights as they “shoal” in shallow water, and can hit shorelines and structures with greater force, causing more damage and danger.
Waves are generated by wind blowing over water, with the distribution of energy changing depending on their stage of evolution. CC BY-NC-ND
The ‘Code Red 2’ swell
The “Code Red 2” swell was a good example of this in action. It was generated by a massive storm system southeast of New Zealand in July 2022. The “significant wave height”, or average of the largest third of the system’s waves, reached 13 metres. Individual waves were up to twice this height.
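The definition of "significant wave height" used above – the average of the largest third of waves – is simple enough to show directly in code. The buoy record below is a made-up sample for illustration only.

```python
def significant_wave_height(heights):
    """Mean height of the largest one-third of waves (often written H_1/3)."""
    ranked = sorted(heights, reverse=True)
    top_third = ranked[:max(1, len(ranked) // 3)]
    return sum(top_third) / len(top_third)

# Hypothetical record of wave heights in metres. The significant wave
# height sits well below the single biggest wave, which is why individual
# waves can be nearly twice the reported figure.
sample = [3, 5, 4, 7, 6, 13, 9, 5, 8]
print(significant_wave_height(sample))  # -> 10.0, though the largest wave is 13 m
```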
The storm system was unusual due to very strong southerly winds blowing northward from near Antarctica for over 2,000km. This resulted in long-period (20-second) swells moving north into the Pacific Ocean.
The swell first reached Tahiti, where waves closed most of the south-facing coast, prompting a Code Red warning. This was only the second such warning since 2011 (hence its name), and resulted in massive waves at the Teahupo'o surf break, location of the 2024 Olympic surfing event.
The swell also caused flooding along the south coast of Rarotonga and other Pacific Islands before continuing north across the equator to reach the south coast of Hawaii – 7,000km from where it was generated.
Due to their direction and very long period, large waves reached places they don’t usually affect, literally crashing weddings and breaking over houses. The swell then carried on to hit the Californian coast some 10,000km away, eventually reaching Canada more than a week after it was initially generated.
Tracking the July 2022 Code Red II swell across the Pacific. CC BY-NC-ND
The ‘Eddie’ swell
More recently, the “Eddie” swell was generated by an extremely intense low pressure system in the North Pacific in December 2024. Waves near the centre of the storm reached heights of 20 metres, with a 22-second period.
The resulting swell hit Hawaii first, where waves were large enough to run the Eddie Aikau Big Wave Invitational at Waimea Bay, a surfing event that requires such large waves it has only been run 11 times in its 40-year history (and which gave the swell its name).
Due to its very long period, the swell was able to continue southward, still with a lot of energy. It reached the north coast of Ecuador and Peru, 8,500km from where it began, where it destroyed fishing boats. And it finally hit Chile, 11,000km from its source, where it closed ports and inundated coastal promenades.
These coasts typically receive large southwest swells. But this rare, long-period north swell was able to reach normally protected north-facing sections of coast, causing uncharacteristic damage.
Tracking the December 2024 Eddie swell across the Pacific. CC BY-NC-ND
Predicting local impacts
It can be difficult to sound warnings for these types of long-period waves, as they are generated so far from the affected shorelines that they are missed by local forecasters and emergency managers.
New early warning systems are being developed that take global wave forecasts and downscale them to take into account the shape of the local coastline. The wave information is then combined with predictions of tide and storm surge to give warnings of when coastal impacts may occur.
These systems will give emergency managers, ports and coastal infrastructure operators – and the public – better information and more time to prepare for these damaging wave events.
Australians largely support transforming the economy to increase recycling, repurpose products and reduce waste, according to a new report from the Productivity Commission. But the transition is being impeded by inconsistent regulations.
The interim report of the commission’s inquiry into Australia’s circular economy, released Wednesday night, also finds consumers need more information about the durability and repairability of products.
The report says that despite increased awareness of the benefits of a circular economy, the transformation has been complex and progress has been slow.
A circular economy is based on three principles. The first is designing and making goods without waste and pollution. This includes using renewable energy to reduce carbon emissions.
The second is keeping products and materials in use for as long as possible. This can be achieved by maintaining or repairing products to extend their life.
The third principle is regeneration. This means promoting activities with positive outcomes. This could include activities to deal with biodiversity loss, or social benefits through food relief and donations.
Some businesses are already using circular economy practices, but compared with other developed countries, Australia is well behind. A recent CSIRO study found only 3.7% of the Australian economy is circular, half the global average of 7.2%.
In December last year, the federal government released the National Circular Economy Framework, providing guidance on how to increase circularity.
Coinciding with this, the Productivity Commission evaluated circular economy opportunities in six priority sectors – built environment, food and agriculture, textiles and clothing, vehicles, mining and electronics.
Priority areas
The priority areas were selected based on the impact their materials have on the environment and the economy.
For example, the construction sector uses large quantities of materials which are expensive to recycle. While the increased use of electric vehicles is a bonus for the environment, the lithium-ion batteries they use pose a fire risk if incorrectly managed.
How much impact a particular area has on Australia was also taken into account.
For example, Australians are the largest consumers of textiles in the world per capita. But most of these are imported, limiting our influence on how they are made.
The impact and effectiveness of existing policies and regulations were also considered. Stakeholders across government and community sectors provided detailed submissions that informed the commission’s assessment.
Getting consumers, government and business onboard
The Productivity Commission noted material consumption and waste generation have not changed since 2010. This is partly because consumers are not repairing, reusing or recycling appliances, practices that are central to a circular economy.
While the report recommends how food waste should be managed, consumers need to change their behaviour to reduce the waste they generate.
To do this, however, consumers need information to make informed purchasing decisions. For e-waste, they need easy access to repair services so they can extend the life of their products rather than buying new ones.
The report repeats earlier recommendations about repairs and reuse from the Productivity Commission’s 2021 Right to Repair inquiry.
That inquiry recommended the government develop a product labelling scheme giving consumers information about how durable household appliances are and whether they can be repaired.
We believe implementing these recommendations would bring Australia in line with global best practice, as reflected in the European Union's Ecodesign for Sustainable Products Regulation.
Impeded by regulations
This report highlights the importance of consistent policies and regulations. These currently vary across sectors and jurisdictions.
Standards enabling the use of recycled materials in construction, consistent rules on the disposal of lithium-ion batteries, and consistent kerbside recycling guidelines are all needed.
In its final report in December, the Circular Economy Ministerial Advisory Group recommended new legislation, a governance model and investment in innovation to help Australia move to a circular economy.
Help for business
When designed well, circular business models have the potential to reduce waste materials and carbon emissions.
Comparing the circular and linear economies. Productivity Commission, CC BY-SA
However, changing industry and consumer practices is a major undertaking. Inconsistent regulations are slowing the transformation, and making processes more innovative and experimenting with new technologies can be costly.
The Productivity Commission report says government can help reduce barriers to the implementation of circular business models, given business has a pivotal role in driving this transition.
It also supports product stewardship, an approach where producers, importers and brands are responsible and liable for the impact their products have on the environment and on human health across the product life cycle.
Regulations for product stewardship were identified in the report as important, particularly for textiles and clothing, vehicles, EV batteries, solar panels and consumer electronics.
Towards net zero
Several international studies have reported that a circular economy will be needed to achieve net zero targets.
In Australia, the industry sector including mining, manufacturing and construction is responsible for around 34% of total emissions. Using materials more efficiently will help reduce them.
Agriculture, despite its small contribution to GDP (2.4%), contributes 18% of Australia's greenhouse gas emissions.
As the report notes, most of these agricultural emissions come from livestock (80%) and the use of synthetic fertilisers (15%). Yet only food waste is identified as one of the priority areas.
It should be noted, though, that food waste accounts for only 3% of total emissions. So reducing emissions from agriculture itself, by switching to renewable fertilisers and changing livestock diets, should also be a priority.
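A quick back-of-the-envelope calculation, using only the percentages quoted above (this is illustrative arithmetic, not figures from the report itself), shows why focusing on food waste alone misses the bigger sources:

```python
# Illustrative arithmetic using the percentages quoted above.
# Agriculture contributes ~18% of total national emissions; within
# agriculture, ~80% comes from livestock and ~15% from synthetic
# fertilisers. Food waste accounts for ~3% of total emissions.
agriculture_share = 0.18  # agriculture's share of total emissions

livestock_of_total = agriculture_share * 0.80   # livestock's share of ALL emissions
fertiliser_of_total = agriculture_share * 0.15  # fertilisers' share of ALL emissions
food_waste_of_total = 0.03                      # food waste's share of ALL emissions

# Livestock alone is several times larger than food waste as a source:
print(f"livestock: {livestock_of_total:.1%}, "
      f"fertiliser: {fertiliser_of_total:.1%}, "
      f"food waste: {food_waste_of_total:.1%}")
# → livestock: 14.4%, fertiliser: 2.7%, food waste: 3.0%
```

In other words, livestock emissions alone account for roughly 14% of the national total, nearly five times the share attributed to food waste.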
The Productivity Commission will send its final report to government by August this year.
This week, France hosted an AI Action Summit in Paris to discuss burning questions around artificial intelligence (AI), such as how people can trust AI technologies and how the world can govern them.
Sixty countries, including France, China, India, Japan, Australia and Canada, signed a declaration for “inclusive and sustainable” AI. The United Kingdom and United States notably refused to sign, with the UK saying the statement failed to address global governance and national security adequately, and US Vice President JD Vance criticising Europe’s “excessive regulation” of AI.
Last week, I attended the inaugural AI safety conference held by the International Association for Safe & Ethical AI, also in Paris, where I heard talks by AI luminaries Geoffrey Hinton, Yoshua Bengio, Anca Dragan, Margaret Mitchell, Max Tegmark, Kate Crawford, Joseph Stiglitz and Stuart Russell.
As I listened, I realised the disregard for AI safety concerns among governments and the public rests on a handful of comforting myths about AI that are no longer true – if they ever were.
1: Artificial general intelligence isn’t just science fiction
The most severe concerns about AI – that it could pose a threat to human existence – typically involve so-called artificial general intelligence (AGI). In theory, AGI will be far more advanced than current systems.
AGI systems will be able to learn, evolve and modify their own capabilities. They will be able to undertake tasks beyond those for which they were originally designed, and eventually surpass human intelligence.
AGI does not exist yet, and it is not certain it will ever be developed. Critics often dismiss AGI as something that belongs only in science fiction movies. As a result, the most critical risks are not taken seriously by some and are seen as fanciful by others.
However, many experts believe we are close to achieving AGI. Developers have suggested that, for the first time, they know what technical tasks are required to achieve the goal.
AGI will not stay solely in sci-fi forever. It will eventually be with us, and likely sooner than we think.
2: We already need to worry about current AI technologies
However, current AI technologies are already causing significant harm to people and society. These harms arise through mechanisms such as fatal road and aviation crashes, warfare, cyber incidents, and even encouraging suicide.
According to MIT’s AI Incident Tracker, the harms caused by current AI technologies are on the rise. There is a critical need to manage current AI technologies as well as those that might appear in future.
3: Contemporary AI technologies are ‘smarter’ than we think
A third myth is that current AI technologies are not actually that clever and hence are easy to control. This myth is most often seen when discussing the large language models (LLMs) behind chatbots such as ChatGPT, Claude and Gemini.
There is plenty of debate about exactly how to define intelligence and whether AI technologies truly are intelligent, but for practical purposes these are distracting side issues.
It is enough that AI systems behave in unexpected ways and create unforeseen risks.
Several AI chatbots appear to display surprising behaviours, such as attempts at ‘scheming’ to ensure their own preservation. Apollo Research
For example, existing AI technologies have been found to engage in behaviours that most people would not expect from non-intelligent entities. These include deceit, collusion, hacking, and even acting to ensure their own preservation.
Whether these behaviours are evidence of intelligence is a moot point. The behaviours may cause harm to humans either way.
What matters is that we have the controls in place to prevent harmful behaviour. The idea that “AI is dumb” isn’t helping anyone.
4: Regulation alone is not enough to make AI safe
Last year the European Union's AI Act, the world's first comprehensive AI law, was widely praised. It built on established AI safety principles to provide guidance on AI safety and risk.
While regulation is crucial, it is not all that’s required to ensure AI is safe and beneficial. Regulation is only part of a complex network of controls required to keep AI safe.
These controls will also include codes of practice, standards, research, education and training, performance measurement and evaluation, procedures, security and privacy controls, incident reporting and learning systems, and more. The EU AI Act is a step in the right direction, but a huge amount of work is still needed to develop the mechanisms required to make it work.
5: It’s not just about the AI
The fifth, and perhaps most entrenched, myth is the idea that AI technologies on their own create risk.
AI technologies form one component of a broader “sociotechnical” system. There are many other essential components: humans, other technologies, data, artefacts, organisations, procedures and so on.
Safety depends on the behaviour of all these components and their interactions. This “systems thinking” philosophy demands a different approach to AI safety.
Instead of controlling the behaviour of individual components of the system, we need to manage interactions and emergent properties.
With AI agents on the rise – AI systems with more autonomy and the ability to carry out more tasks – the interactions between different AI technologies will become increasingly important.
So far, little work has examined these interactions and the risks that could arise in the broader sociotechnical system in which AI technologies are deployed. AI safety controls are required for all interactions within the system, not just for the AI technologies themselves.
AI safety is arguably one of the most important challenges our societies face. To get anywhere in addressing it, we will need a shared understanding of what the risks really are.
Earth is crossing the threshold of 1.5°C of global warming, according to two major global studies which together suggest the planet’s climate has likely entered a frightening new phase.
Under the landmark 2015 Paris Agreement on climate change, humanity is seeking to reduce greenhouse gas emissions and keep planetary heating to no more than 1.5°C above the pre-industrial average. In 2024, temperatures on Earth surpassed that limit.
This was not enough to declare the Paris threshold had been crossed, because the temperature goals under the agreement are measured over several decades, rather than short excursions over the 1.5°C mark.
But the two papers just released use a different measure. Both examined historical climate data to determine whether very hot years in the recent past were a sign that a future, long-term warming threshold would be breached.
The answer, alarmingly, was yes. The researchers say the record-hot 2024 indicates Earth is passing the 1.5°C limit, beyond which scientists predict catastrophic harm to the natural systems that support life on Earth.
2024: the first year of many above 1.5°C
Climate organisations around the world agree last year was the hottest on record. The global average temperature in 2024 was about 1.6°C above the average temperatures in the late-19th century, before humans started burning fossil fuels at large scale.
Earth has also recently experienced individual days and months above the 1.5°C warming mark.
But the global temperature varies from one year to the next. For example, the 2024 temperature spike, while in large part due to climate change, was also driven by a natural El Niño pattern early in the year. That pattern has dissipated for now, and 2025 is forecast to be a little cooler.
These year-to-year fluctuations mean climate scientists don’t view a single year exceeding the 1.5°C mark as a failure to meet the Paris Agreement.
However, the new studies published today in Nature Climate Change suggest even a single month or year at 1.5°C global warming may signify Earth is entering a long-term breach of that vital threshold.
What the studies found
The studies were conducted independently by researchers in Europe and Canada. They tackled the same basic question: is a year above 1.5°C global warming a warning sign that we’re already crossing the Paris Agreement threshold?
Both studies used observations and climate model simulations to address this question, with slightly different approaches.
In the European paper, the researchers looked at historical warming trends. They found when Earth’s average temperature reached a certain threshold, the following 20-year period also reached that threshold.
This pattern suggests that, given Earth reached 1.5°C warming last year, we may have entered a 20-year warming period when average temperatures will also reach 1.5°C.
The Canadian paper analysed month-to-month data. June last year was the 12th consecutive month of temperatures above the 1.5°C warming level. The researcher found that 12 consecutive months above a climate threshold indicates the threshold will be breached over the long term.
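The logic of the European paper's test can be sketched in a few lines. The function below, run here on made-up annual anomaly data (the function name and synthetic series are illustrative, not taken from either study), checks whether the first year to exceed 1.5°C also begins a 20-year period whose average exceeds 1.5°C:

```python
# Sketch of the threshold test described above, on synthetic data.
# Assumption: `anomalies` is a list of annual global-mean temperature
# anomalies (°C above the pre-industrial baseline), in year order.
def first_exceedance_starts_long_term_breach(anomalies, threshold=1.5, window=20):
    """Return True if the first year above `threshold` begins a
    `window`-year period whose mean also exceeds `threshold`."""
    for i, temp in enumerate(anomalies):
        if temp > threshold:
            period = anomalies[i:i + window]
            if len(period) < window:
                return None  # not enough data to evaluate the full window
            return sum(period) / window > threshold
    return False  # threshold never exceeded in the record

# Synthetic warming trend: anomalies rising 0.02°C per year from 1.0°C.
synthetic = [1.0 + 0.02 * year for year in range(60)]
print(first_exceedance_starts_long_term_breach(synthetic))  # → True
```

On a steadily warming series like this one, the first single year above 1.5°C does indeed mark the start of a 20-year period averaging above 1.5°C, which is the pattern the researchers found in the historical record.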
Both studies also indicate that even if stringent emissions reductions begin now, Earth is still likely to cross the 1.5°C threshold.
Heading in the wrong direction
Given these findings, what humanity does next is crucial.
For decades, climate scientists have warned burning fossil fuels for energy releases carbon dioxide and other gases that are warming the planet.
But humanity’s greenhouse gas emissions have continued to increase. Since the Intergovernmental Panel on Climate Change released its first report in 1990, the world’s annual carbon dioxide emissions have risen about 50%.
Put simply, we are not even moving in the right direction, let alone at the required pace.
The science shows greenhouse gas emissions must reach net-zero to end global warming. Even then, some aspects of the climate will continue to change for many centuries, because some regional warming, especially in the oceans, is already locked in and irreversible.
If Earth has indeed already crossed the 1.5°C mark, and humanity wants to get below the threshold again, we will need to cool the planet by reaching “net-negative emissions” – removing more greenhouse gases from the atmosphere than we emit. This would be a highly challenging task.
Feeling the heat
The damaging effects of climate change are already being felt across the globe. The harm will be even worse for future generations.
Australia has already experienced 1.5°C of warming, on average, since 1910.
Our unique ecosystems, such as the Great Barrier Reef, are already suffering because of this warming. Our oceans are hotter and seas are rising, hammering our coastlines and threatening marine life.
These studies are a sobering reminder of how far short humanity is falling in tackling climate change.
They show we must urgently adapt to further global warming. Among the suite of changes needed, richer nations must support the poorer countries set to bear the most severe climate harms. While some progress has been made in this regard, far more is needed.
A major shift is also needed to decarbonise our societies and economies. There is still room for hope, but we must not delay action. Otherwise, humanity will keep warming the planet and causing further damage.