Wednesday, 15 April 2026

Climate change - Antarctic Emperor Penguins and fur seals now endangered species

 

The beloved emperor penguin and Antarctic fur seal are now officially endangered. Here’s what can be done

The Conversation, CC BY-ND
Mary-Anne Lea, University of Tasmania; Jane Younger, University of Tasmania, and Noemie Friscourt, University of Tasmania

In 1902, British explorer Robert Falcon Scott spotted a group of large black and white birds at Ross Island, Antarctica. This was among the many milestones of Scott’s famous Discovery expedition: the discovery of the first known emperor penguin breeding colony.

Now, only 124 years since this penguin colony was discovered, emperor penguins have officially been listed as endangered, along with the Antarctic fur seal. As the world warms, Antarctic krill are shifting southwards and sea ice is shrinking to record lows. These unprecedented changes are having a domino effect on both species.

These are the first penguin and pinniped – marine mammals that have front and rear flippers – to be given this conservation status in the Southern Ocean. Their perilous situation is a critical turning point, and shows how rapidly the Antarctic environment is changing.

At the same time, the spread of highly contagious avian influenza, or bird flu, adds a new and immediate threat to Southern Ocean wildlife, compounding the pressures of climate change on stressed species.

Antarctic fur seal with pups at Salisbury Plain on South Georgia. The number of fur seals has dropped by over 50% since 1999. Posnov/Getty

Dramatic declines linked to climate change

The first emperor penguin breeding colony was discovered at Cape Crozier, on Ross Island, during Robert Falcon Scott’s Discovery expedition in 1902. A decade later, Scott’s Terra Nova expedition returned, in part to collect emperor penguin eggs. It was an ill-fated expedition, immortalised in Apsley Cherry-Garrard’s famous book, The Worst Journey in the World.

In the 1960s, Scott’s son, Sir Peter Scott, one of the founders of modern conservation, helped establish the International Union for the Conservation of Nature’s Red List. Just 124 years after those early discoveries at Cape Crozier, that same framework has now been used to classify emperor penguins as endangered. The swift arc from discovery to extinction risk is a striking reminder of how quickly the species’ fortunes have changed.

Over nine years, between 2009 and 2018, emperor penguin numbers fell by 10%. Their numbers are expected to halve by 2073.

Southern elephant seals are now officially listed as vulnerable. Mary-Anne Lea, CC BY-ND

The decline is more pronounced for Antarctic fur seals. Hunted to the brink of extinction in the early 1800s, by 1999 their numbers had rebounded to an estimated 2.1 million mature seals. But since then, the global population has decreased by more than 50%, to about 944,000 mature individuals.

In just a decade, they have been reclassified on the IUCN’s Red List, going from “least concern” – species that are widespread and at low risk of extinction – to “endangered”. The Red List is the most comprehensive information source on the extinction risk of species. This shows the remarkable speed at which these seals are declining.

Climate change and bird flu

Both of these dramatic declines are linked to climate change. Warming ocean temperatures and a reduction in sea ice affect the availability of the Antarctic fur seal’s key prey, Antarctic krill. Krill are shifting southwards and moving deeper, potentially making them less accessible to some predators. Competition with a growing population of whales has also increased.

Emperor penguins, by contrast, are completely dependent on sea ice. They use it as a stable platform for courtship, incubating their eggs and rearing chicks. But as sea ice declines and becomes less reliable, their breeding success is increasingly threatened. If the ice breaks up before chicks are fully developed, many are unable to survive.

At the same time, the spread of highly contagious bird flu adds a new and immediate threat to Southern Ocean wildlife. High mortality associated with avian influenza has also caused the uplisting of the southern elephant seal to “vulnerable” this week.

Some elephant seal populations have seen more than 90% of pups die, alongside sharp declines in breeding adults. This represents tens of thousands of animals lost, with many Antarctic fur seals also dying in bird flu outbreaks.

Emperor penguin chicks at Cape Crozier. Mary-Anne Lea, CC BY-ND

We need to know more

Emperor penguins, Antarctic fur seals and southern elephant seals are three of the more widely researched Southern Ocean predators. But there is still a lot we don’t know, because of the remote location and the difficulty of sustaining research over time. And there are many species we know far less about. Antarctic ice seals, including Weddell seals, crabeater seals, leopard seals and Ross seals, have “unknown” population trends on the IUCN Red List, meaning there is not enough data to tell whether numbers are declining.

These recent listings make clear the urgent and ongoing need for improved, real-time monitoring. We need to know much more about wildlife health and population trends, the Antarctic environment and sea ice quality.

Human-driven threats facing Antarctic wildlife are many, and cumulative. To respond, we need to better protect Antarctic habitat and the species that live there. We need to reduce the interaction of marine species with industrial fishing. And we must improve how we assess current and suspected threats in Antarctica as evidence of impacts grows.

Defining these animals as endangered is a stark reminder of how quickly Antarctica is changing before our eyes. Without a rapid reduction in greenhouse gas emissions and sustained conservation action, these species may be lost forever. The Conversation

Mary-Anne Lea, Professor in Marine/Polar Predator Ecology, University of Tasmania; Jane Younger, Senior Lecturer in Southern Ocean Vertebrate Ecology, Institute for Marine and Antarctic Studies, University of Tasmania, and Noemie Friscourt, Research Associate, Institute for Marine and Antarctic Studies, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, 12 April 2026

Artificial intelligence Part 3: specific industry impacts - film and television

ChatGPT image
The impact of AI is most pronounced in the film and television industry, with a variety of occupations affected by the technology. The WGA and SAG-AFTRA union strikes in the United States in 2023 highlighted the concerns of people employed in the creative industries. Breaking the film and television industry down into its subsectors, the role of AI can be readily defined -

CGI and VFX production
AI now covers environment generation, crowd simulation, rotoscoping, motion cleanup, texture creation and background characters. 
  • Rotoscoping, cleanup and compositing have traditionally employed large pools of junior labour, and these tasks are being automated rapidly. Mid-tier VFX companies are under existential pressure as work bifurcates: very high-end boutique work survives while commodity work becomes fully AI generated. The roles disappearing are junior asset builders, repetitive compositing roles and the large teams that produce background elements.
  • Relevant AI tools include Unreal Engine, Blender, Runway, Sora and similar programs.
Acting and performance
AI can produce, and is already producing, synthetic actors to create digital doubles and AI-generated crowds.
  • Background artists are already being displaced, in a limited manner, by AI-generated crowds and extras. This displacement of extras, crowd performers and minor background roles is expected to increase.
  • Voice acting is severely threatened as synthetic voices are now near indistinguishable from real human voices and can be used for minor characters, video games, commercials and dubbing. Studios can license a voice and use it indefinitely.
  • AI tools include Nvidia and Runway AI products.
Writing
AI can already develop plot structure, dialogue drafts, storyboards, episode outlines and alternate scene ideas. Writers' rooms that once had 6-12 junior writers now require only a head writer, 2-3 senior writers and AI-assisted drafting tools; a showrunner with AI assistance may need just those senior writers rather than a full room.

Localisation and dubbing are already occurring using AI, replacing human translators and lip-sync dubbing artists at scale.

The reality is that, with time and patience, AI will enable very small teams to produce cinema-quality films. Early AI-made films can already be found on YouTube; however, many of these projects suffer from continuity failures and technical deficiencies in storytelling structure.

The safest roles in the AI era are those with creative authority, not basic production: showrunner, art director, creative director, lead animator, production designer. These are decision-making roles that decide what should exist rather than merely producing it.

Saturday, 11 April 2026

Artificial intelligence Part 2: impact on the structure of employment and reduction of entry level roles

ChatGPT image
As artificial intelligence (AI) continues to be developed and implemented in various forms across workplaces, its impact on employment is becoming apparent even at this early stage of adoption. When discussing AI, it is important to define what the technology actually does.

AI carries out three activities across all industries -
  1. automates the repetitive layer
  2. compresses the workforce pyramid
  3. raises the value of senior decision-makers (to an extent)
Organisational structure offers a simple example of this impact.
An organisation that once had this structure -
  • 1 Director
  • 3 Senior professionals
  • 15 junior staff
Under AI, this becomes a structure with -
  • 1 Director
  • 3 Senior professionals
  • 3-5 AI-assisted operators
Across many white-collar industries, AI removes what can be called the "first draft economy". Many jobs existed primarily to produce first drafts of various outputs: reports, media releases, policy notes, research documents, scripts, designs and code. AI can now produce much of this almost instantaneously.

AI is starting to hollow out the traditional 'career ladder'. The junior roles that people once used to enter professions are disappearing first. This has long-term consequences for how expertise and experience are developed in society, creating a "pipeline problem" that is becoming one of the dominant structural challenges of implementing AI.

AI compresses some organisational hierarchies and increases the number of people or functions a single leader can manage – the 'span of control' – while reducing certain management layers. Hierarchical compression is only one aspect of AI's impact; the very shape of organisations also changes, with -
  • fewer administrative workers
  • fewer reporting layers
  • smaller teams with higher productivity
  • leaders responsible for larger spans of activity
Roles that involve accountability, legal responsibility or political authority will remain human dominated. AI does reduce the documentation workforce that produces reports, compiles data, drafts documents and summarises information. It does not replace roles that have decision authority, physical presence, strategic judgement and/or legal accountability. 

As another example of structural change, before AI implementation, a very large organisation often had this structure -
  • Executive leadership
  • Senior managers
  • Middle managers
  • Supervisors/team leaders
  • Large operational workforce
After AI implementation, the organisation could be structured as -
  • Executive leadership
  • Senior specialists
  • Fewer managers
  • AI-enabled reduced operational staff
Effectively the middle and bottom tiers shrink.

This multi-part series on AI has been researched and compiled using Claude (Anthropic), ChatGPT (OpenAI) and Grok (xAI). Later posts will list specific industries where change is already happening.

Wednesday, 8 April 2026

Health and coffee

Does coffee raise your blood pressure? Here’s how much it’s OK to drink

Olga Pankova/Getty Images
Clare Collins, University of Newcastle

Coffee first entered human lives and veins over 600 years ago.

Now we consume an average of almost two kilos per person each year – sometimes with very specific preferences about blends and preparation methods. How much you drink is influenced by genes acting on your brain’s reward system and caffeine metabolism.

Coffee can raise your blood pressure in the short term, especially if you don’t usually drink it or if you already have high blood pressure.

But this doesn’t mean you need to cut out coffee if you have high blood pressure or are concerned about your heart health. Moderation is key.

So how does coffee affect your blood pressure? And if yours is high, how much is OK to drink?

What is high blood pressure?

Blood pressure is the force blood exerts on artery walls when your heart pumps. It’s measured by two numbers:

  • the first and biggest number is systolic blood pressure, which is the force generated when your heart contracts and pushes blood out around your body

  • the lower number, diastolic blood pressure, is the force when your heart relaxes and fills back up with blood.

Normal blood pressure is defined as systolic blood pressure of less than 120 millimetres of mercury (mm Hg) and diastolic blood pressure of less than 80 mm Hg.

Once your numbers consistently reach 140/90 or more, blood pressure is considered high. This is also called hypertension.

Knowing your blood pressure numbers is important because hypertension doesn’t have any symptoms. When it goes untreated, or isn’t well-controlled, your risk of heart attacks and strokes increases, and existing kidney and heart disease worsens.

About 31% of adults have hypertension, with half unaware they have it. Of those taking medication for hypertension, about 47% don’t have it well controlled.

How does coffee affect blood pressure?

Caffeine in coffee is a muscle stimulant that increases the heart rate in some people. This can potentially contribute to an irregular heartbeat, known as arrhythmia.

Caffeine also stimulates the adrenal glands to release adrenaline. This makes your heart beat faster and your blood vessels constrict, which increases blood pressure.

Blood caffeine levels peak between 30 minutes and two hours after a cup of coffee. Caffeine’s half-life is 3–6 hours, meaning blood levels will reduce by about half during this time.

The range is due to age (kids have smaller, less mature livers so can’t metabolise it as fast), genetics (people can be fast or slow metabolisers) and whether you usually drink it (regular consumers clear it faster).
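The half-life figures above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only and not from the article: the 100 mg dose and the 5-hour half-life are assumed values (the article quotes a 3–6 hour range).

```javascript
// Exponential decay: caffeine remaining after `hours`, given a half-life.
// remaining = dose * 0.5^(hours / halfLife)
function caffeineRemaining(doseMg, hours, halfLifeHours) {
  return doseMg * Math.pow(0.5, hours / halfLifeHours);
}

// Assuming a 100 mg dose (roughly one cup) and a 5-hour half-life:
console.log(caffeineRemaining(100, 5, 5));  // 50 mg after one half-life
console.log(caffeineRemaining(100, 10, 5)); // 25 mg after two
console.log(caffeineRemaining(100, 24, 5).toFixed(1)); // ~3.6 mg the next day
```

Changing the half-life to 6 hours models a slower metaboliser, who would still carry noticeably more caffeine into the evening.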

The impact of caffeine on blood pressure from coffee (and cola, energy drinks and chocolate) varies. Research reviews report increases in systolic blood pressure of 3–15 mm Hg and in diastolic blood pressure of 4–13 mm Hg after consumption.

The effect of caffeine also depends on a person’s usual blood pressure. An increase in blood pressure may be more risky if you have hypertension and existing heart or liver disease, so it’s best to discuss your coffee consumption with your doctor.

What else is in coffee?

Coffee contains hundreds of phytochemicals: compounds that contribute flavour, aroma, or influence health and disease.

Phytochemicals that directly affect blood pressure include melanoidins, which regulate the body’s fluid volume and activity of enzymes that help control blood pressure.

Quinic acid is another phytochemical shown to lower systolic and diastolic blood pressure by improving the lining of blood vessels, allowing them to better accommodate blood pressure rises.

Can coffee cause hypertension?

In a review of 13 studies that included 315,000 people, researchers examined associations between coffee intake and the risk of hypertension.

During study follow-up periods, 64,650 people developed hypertension, with the researchers concluding coffee drinking was not associated with an increased risk of developing the condition.

Even when they examined data by gender, amount of coffee, decaffeinated versus caffeinated, smoking or years of follow-up, coffee was still not associated with an increased risk of developing hypertension.

The only exceptions suggesting lower risk were for five studies from the United States and seven low-quality studies, meaning those results should be interpreted with caution.

A separate Japanese study followed more than 18,000 adults aged 40–79 years for 18.9 years. This included about 1,800 people who had very high blood pressure (grade 2-3 hypertension), with systolic blood pressure of 160 or above or diastolic blood pressure of 100 or above.

Here, risk of dying from cardiovascular disease, including heart attack or stroke, was double among those drinking two or more cups of coffee a day compared to non-drinkers.

There were no associations with death from cardiovascular disease for those who had either normal blood pressure or mild (grade 1) hypertension (systolic blood pressure 140–159 or diastolic blood pressure 90–99).

The bottom line

There is no need to give up coffee. Here’s what to do instead:

  1. know your blood pressure, health history and which food and drinks contain caffeine

  2. consider all factors that influence your blood pressure and health – family history, diet, salt and physical activity – so you can make informed decisions about what you consume and how much you move

  3. be aware of how caffeine affects you and avoid it before having your blood pressure measured

  4. avoid caffeine in the afternoon so it doesn’t affect your sleep

  5. aim to moderate your coffee intake by drinking four cups or less a day or switching to decaf

  6. if you have systolic blood pressure of 160 or above or diastolic blood pressure of 100 or above, consider limiting to one cup a day, and talk to your doctor. The Conversation

Clare Collins, Laureate Professor in Nutrition and Dietetics, University of Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, 7 April 2026

Artificial intelligence: fast html code of comets example

The example above is HTML code generated by AI in one second to show comets crossing the sky.
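The embedded page itself is not reproduced in this text version. As a rough illustration only (a hand-written sketch, not the AI's actual output), the core of a "comets crossing the sky" animation is a per-frame position update; in a browser this logic would run inside requestAnimationFrame and each comet would be drawn to a canvas element.

```javascript
// Move one comet by its velocity; wrap around when it leaves the frame.
// Pure movement logic only -- the browser drawing code is omitted.
function updateComet(comet, width, height) {
  let x = comet.x + comet.vx;
  let y = comet.y + comet.vy;
  if (x > width) x = 0;   // re-enter from the left edge
  if (y > height) y = 0;  // re-enter from the top edge
  return { x, y, vx: comet.vx, vy: comet.vy };
}

// One comet about to cross the right-hand edge of an 800x600 sky:
console.log(updateComet({ x: 799, y: 100, vx: 6, vy: 2 }, 800, 600));
// -> { x: 0, y: 102, vx: 6, vy: 2 }
```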

Friday, 3 April 2026

Environment - Microplastics have been located at every level of the world's oceans

ChatGPT image
Microplastics have been found throughout the world's oceans and at all levels of the water column, following a comprehensive survey of 1,885 sites across the planet. The survey, conducted by researchers from Japan, China, New Zealand, Italy, the Netherlands and the United States, located microplastics at all ocean depths, including the deepest parts. The Mariana Trench, for example, recorded more than 13,000 microplastic particles per cubic metre nearly 7 miles down.

Of particular concern is the finding that the smallest particles were distributed almost evenly throughout the water column, rather than being concentrated at the surface. Another key finding is that the polymers in these plastics account for a surprisingly large share of the carbon in the water: at depths of 2,000 metres, polymers comprise as much as 5 per cent of it.



These high carbon levels may reduce the capacity of oceans to absorb carbon dioxide from the atmosphere, and thus accelerate global warming.

The full report can be accessed here: Microplastics in the ocean

Artificial intelligence: A glowing sun: coded by AI in 0.5s

Thursday, 2 April 2026

Easter 2026 - customs

AI generated image - ChatGPT
The Easter period for Christians celebrates the resurrection of Jesus from the dead, a chief tenet of their faith. It is a period of holidays, religious practices, rituals and the consumption of specific foods such as hot cross buns and chocolate eggs (chocolate rabbits are another popular form).

For the religious faithful, the concept of resurrection is one where, through faith in God, followers of Jesus are resurrected spiritually, walk a new existence through eternal salvation and dwell in the Kingdom of Heaven.

The custom of Easter eggs is a symbol of life and rebirth, connected with the empty tomb upon the resurrection of Jesus. Eggs were previously chicken eggs dyed in different colours, but in recent decades they have taken on a sweet form through the use of chocolate. In Orthodox traditions, dyed eggs are still the common practice.

The hot cross bun is a spiced fruit bun marked with a cross on the top. Traditionally eaten on Good Friday in the Christian calendar, the bun marks the end of the season of Lent. Parts of the bun carry different meanings, but the cross on top unmistakably represents the crucifixion of Jesus. The bun has a long history stretching back to the 6th century, with variations occurring in the centuries thereafter.

May the period of Easter be one of reflection and celebration in a conflicted world.

Tuesday, 24 March 2026

Climate change - the world continues to heat up

 

The latest world climate report is grim, but it’s not the end of the story

Andrew King, The University of Melbourne

It’s no secret our planet is heating up.

And here’s the evidence: we’ve just experienced the 11 hottest years on record, with 2025 being the second or third warmest.

The annual State of the Climate report, published today by the World Meteorological Organization, suggests we’re still too reliant on fossil fuels. And that’s pushing us further from our goal to decarbonise.

So what is happening to our climate? And how should we respond?

The climate picture

Unfortunately, the most recent climate data makes for grim reading.

Let’s look back at 2025, through the lens of four climate change indicators.

Carbon dioxide

We now have a record amount of carbon dioxide in the atmosphere, about 50% higher than pre-industrial levels. And we’re still emitting large amounts of carbon dioxide through our use of fossil fuels; in 2025, global emissions reached record highs. The carbon dioxide we emit can stay in the atmosphere for a long time, so the longer we keep emitting large amounts, the more concentrated it becomes.

Temperature

In 2025, the world experienced its second or third warmest year on record, depending on which dataset you use. The average temperature was about 1.43°C above the pre-industrial average.

This is particularly unusual given we observed weak La Niña conditions in the Pacific region. La Niña is a climate pattern characterised by temperature changes in the Pacific Ocean. It typically creates milder, wetter conditions in Australia and has a cooling effect on the global average temperature. But even with La Niña conditions, the planet stayed exceptionally hot.

And each of the last 11 years was hotter than any earlier year in the global temperature series. This is true across all the different datasets used in the report, although it does not mean a new record was set each year.

Oceans and ice

In 2025, the heat held within the world’s oceans reached a record high. And as our oceans continue to warm, sea levels will also rise. Hotter oceans also speed up the process of acidification, where oceans absorb an increased amount of carbon dioxide with potentially devastating consequences for some marine animals.

The amount of Arctic and Antarctic ice is also well below average. This report shows sea ice extent, a measure of how much ocean is covered by at least some sea ice, is at or close to record low levels in the Arctic. Meanwhile, the amount of ice stored in glaciers has also significantly decreased.

Extreme weather

Research shows many of the most devastating extreme weather events of 2025 were exacerbated by human-driven climate change. The heatwaves in Central Asia, wildfires in East Asia and Hurricane Melissa in the Caribbean are just three examples. Through attribution analysis, which is how scientists determine the causes of an extreme weather or climate event, this report highlights how our greenhouse gas emissions are making severe weather events more common and intense.

How does Australia stack up?

Compared to most other countries, Australia has a disproportionate impact on the global climate.

This is largely because our per capita carbon dioxide emissions are about three times the global average. That means, on average, each of us emits more carbon dioxide than people in any European country or the US.

Emissions matter because they exacerbate the greenhouse effect: the process by which greenhouse gases, such as carbon dioxide and methane, trap heat near Earth’s surface. By emitting more greenhouse gases, we contribute to global warming. And research suggests Earth is warming twice as fast today as in previous decades.

However, Australia is also experiencing first-hand the adverse effects of human-induced climate change.

In 2025, we lived through our fourth-warmest year on record. The annual surface temperatures of the seas around Australia reached historic highs, beating the record temperatures set in 2024. And last March was the hottest March we’ve seen across the continent.

Here in Australia, we are also battling longer and hotter heatwaves and bushfire seasons. And scientists warn these extreme weather events will only become more common.

The Bureau of Meteorology’s annual summary highlights how Australia’s climate is changing.

So what can we do?

The 2025 State of the Climate Report shows how much, and how quickly, we are changing our climate. And it is worryingly similar to previous reports, highlighting the need for urgent action.

The priority should be decreasing our emissions. This would slow down global warming, which will only continue if we keep the status quo. Some countries are already decarbonising rapidly, in part through transitioning to renewable electricity supplies. Others, including Australia, need to move much faster to reduce emissions.

Crucially, we must also meet our net zero targets. In Australia, as in many other countries, we are aiming to reach net zero by 2050. The sooner we reach net zero, the more likely we are to avoid harmful climate change impacts in future. To achieve net zero, we need to significantly reduce our emissions while also increasing how much carbon we remove from the atmosphere.

Even if we meet our net zero targets, climate change will not magically disappear. However, by turning away from fossil fuels and cutting our greenhouse gas emissions now, we may spare future generations from its worst effects. That’s the least we can do. The Conversation

Andrew King, ARC Future Fellow and Associate Professor in Climate Science, ARC Centre of Excellence for 21st Century Weather, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, 22 March 2026

Artificial intelligence - graphic design examples

 
AI generated image - ChatGPT
One of the industries most immediately affected by artificial intelligence (AI) is graphic design. Some images can appear as artistic creations (as shown above). Others can be created with a realistic appearance that is increasingly hard to detect as artificial (as shown below). Images take only seconds to create and can be easily adjusted and edited.

AI generated image - ChatGPT

Artificial intelligence Part 1: restructuring the workforce - what does AI do?

AI generated image - Chat GPT
Media reports, opinion editorials and speculation by public commentators about artificial intelligence (AI) have been fuelling considerable instability in the sharemarket-listed ICT sector of major economies, as well as workforce concerns about potential employment losses. The reality is that the impact of AI is not well understood or clearly defined: it is an emerging technology whose full ramifications are yet to be measured. Most job losses and employment reductions so far have occurred in information technology companies, predominantly in software development and business support teams. This, however, does not represent the true extent of the transformation that is coming.

A key feature of the articles and reports to date has been their misrepresentation of the actual impact: the effect of AI has been over-emphasised in the technology sector and underplayed in the rest of the economy. In reality, AI will affect white-collar occupations the most and be more far-reaching than has so far been reported.

Current state of play
AI is built on large language models and works best for rules-based, screen-based work with set parameters. Workplace transformation is already occurring in -
  • Knowledge work (particularly entry level)
- Administrative assistants.
- Data entry clerks
- Paralegals doing document reviews
- Junior accountants
- Basic market researchers
  • Content production (that is essentially formulaic)
- Copywriters for generic marketing
- SEO (Search Engine Optimisation) article writers
- Basic graphic production
- Translation of common languages

These roles are not eliminated but fewer staff are needed as productivity rises.
  • Software roles (with a focus on junior roles)
- Junior coders
- QA testers
- Routine debugging of software

Senior engineers remain in demand; however, the career ladder below them is compressed and positions are reduced.
  • Customer interaction roles (accelerating an existing trend)
- call centre agents
- Tier-1 technology support
- Scheduling and booking staff

These roles can be reduced or removed through the use of chatbots and AI voice agents.

This blog will publish a series of posts on the use of AI and its evolving effect on the workforce and the economy.

Sunday, 15 March 2026

Artificial intelligence - the fourth industrial revolution

Sunrise over the earth from space  AI created
The advent of artificial intelligence (AI) heralds the fourth industrial revolution, building on three previous waves of technological change. The AI revolution, as it is termed, brings with it the very real prospect of genuine employment reductions (not redeployment) and hence social dislocation. Occupations will be replaced, and workforce reductions can and will occur, often at lightning speed.

So what were the previous industrial revolutions?

1st industrial revolution: mechanisation driven by water and steam power, and the move away from agrarian economies.

2nd industrial revolution: the era of electrification and new power sources, which in turn enabled new advances in mechanisation and the advent of the assembly line. Mass production became possible in both consumer goods and business-to-business methods such as machine tools.

3rd industrial revolution: the information economy and the internet – computers, semiconductors, automation and early-stage robotics. The move from analog to digital also falls within this era.

And now the 4th industrial revolution, which heralds artificial intelligence, machine learning, quantum computing, biotechnology advances and connectivity between physical, digital and biological systems.

The 4th industrial revolution is so significant because of the very character of AI itself. These systems learn and improve on their own, make decisions and automate cognitive work. This is a genuine paradigm shift, and one whose end point and ultimate objectives are not at all clear.

Tuesday, 10 March 2026

Future age weapons now a reality

 

Israel’s ‘Iron Beam’: why laser weapons are no longer science fiction

Rafael Advanced Defense Systems
James Dwyer, University of Tasmania

As conflict escalates following the US and Israeli attacks on Iran, and Iran’s subsequent retaliatory strikes, reports have emerged that Israel may have used laser weapons to shoot down rockets fired by Hezbollah from Lebanon.

While the reports are unconfirmed, video circulating on social media appears to show rockets being destroyed within moments of launching without visible intervention – consistent with the effect of a “directed energy weapon” such as a laser.

It wouldn’t be the first time Israel has used its cutting-edge Iron Beam laser air defence system, but the incident offers a glimpse into a changing landscape where high-tech militaries are scrambling to keep up with barrages of small rockets and cheap, increasingly capable drones.

What is Iron Beam?

Most defensive systems use rocket-propelled missiles against incoming threats. Iron Beam, however, uses a laser – also known as a directed energy weapon.

Where a missile destroys a drone, shell or rocket by crashing into it or exploding near it, Iron Beam destroys targets by burning them with an extremely powerful laser.

Manufactured by Rafael Advanced Defense Systems, which “serves as Israel’s High-Energy Laser National Center for Excellence and National Lethality Lab”, a smaller version of Iron Beam was first successfully tested in 2022. The system was first used in practice last year, to shoot down drones launched by Hezbollah.

Using a 100 kilowatt solid state laser mounted on a mobile trailer, Iron Beam can be strategically deployed and moved depending on the current threat vector, and adds an additional layer of defence to Israel’s existing, layered defensive systems.

How is it different to the Iron Dome, David’s Sling and Arrow air defences?

The biggest advantage of laser weapons over missiles is cost. A single Iron Dome interceptor missile costs about US$50,000 – which means the costs add up quickly when defending against large or frequent attacks.

Firing the Iron Beam laser costs a lot less. In 2022, Israel’s then prime minister Naftali Bennett said each shot cost around US$3.50, and more recent estimates suggest the cost may now have fallen as low as US$2.50 per shot.
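The cost gap is easiest to see with a quick calculation. A minimal sketch using the per-shot figures quoted above (the barrage size of 1,000 is an illustrative assumption):

```python
# Rough cost comparison between interceptor missiles and laser shots,
# using the per-shot figures quoted in the article (USD).
IRON_DOME_INTERCEPTOR = 50_000  # cost of one Iron Dome interceptor missile
LASER_SHOT = 2.50               # recent per-shot estimate for Iron Beam

def defence_cost(threats: int, cost_per_shot: float) -> float:
    """Total cost of countering `threats` incoming objects, one shot each."""
    return threats * cost_per_shot

# Defending against an illustrative barrage of 1,000 drones or rockets:
missiles = defence_cost(1_000, IRON_DOME_INTERCEPTOR)  # 50,000,000
lasers = defence_cost(1_000, LASER_SHOT)               # 2,500
print(f"missiles: ${missiles:,.0f}  lasers: ${lasers:,.0f}")
```

On these figures, the laser defends the same barrage for roughly one twenty-thousandth of the cost.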

Black and white photo of a laser beam hitting a small object
An infrared image of high-energy laser test targeting a drone. Office of Naval Research / Lockheed Martin

The economics alone present a powerful motivator for militaries to develop and deploy these weapons.

Another significant advantage of Iron Beam and other directed energy weapons is that they don’t run out of ammunition. Whereas a missile battery needs to be reloaded after use, an energy weapon just needs power.

The only limiting factor for the number of shots is overheating due to the huge amounts of energy expended. Eventually a laser weapon needs to stop firing to cool down, or it will be damaged by the heat.

There’s little public information on how many shots these weapons can fire or at what rate before overheating, but it is widely assumed they can still easily outfire most conventional munitions.

Of course, Iron Beam doesn’t operate in isolation: Israel still possesses its other defensive capabilities. The cheaper Iron Beam can be used first, then backed up with other systems if needed.

The other limitation for directed energy weapons is range. They can’t reach as far as missiles such as David’s Sling or Arrow, so they are only useful for countering drones, artillery and short-range missiles.

Directed energy weapons on the ground can’t reach high-flying long-range ballistic missiles. What’s more, they are less effective in rainy, damp or cloudy conditions.

What role is Iron Beam playing in the current conflict?

Iron Beam (and other directed energy weapons being developed and deployed by other countries) are not intended to replace existing defensive systems, but to supplement them. The radically lower cost per shot provides far greater flexibility to counter “low cost” threats such as one-way drones or artillery shells.

In last year’s conflict with Iran, the United States, United Kingdom and Israel rapidly discovered they were expending large numbers of extremely expensive missiles to counter relatively cheap Iranian missiles, rockets and drones.

The US has responded with a crash program to quickly arm its fighter jets with larger numbers of cheaper anti-drone rockets.

Directed energy weapons offer many of the same (if not greater) benefits for ground and naval-based defences.

Both the US and Israel reportedly expended a large proportion of their defensive missiles during the last conflict with Iran in 2025. Using directed energy weapons can also help preserve stores of these munitions.

Missile stockpiles cannot be replenished quickly. Even then, a large or sustained attack would soon deplete them again.

An option that provides defence against shorter-range or slower threats allows the more expensive missiles to be held in reserve.

Where to from here?

War lasers may still sound like science fiction. But Israel is far from alone in developing and deploying them.

The US has tested laser drone and missile defences on navy ships. Both China and Japan have also tested naval and ground-based directed energy weapons.

For naval vessels in particular the benefits of directed energy weapons are immense. Reloading defensive missiles at sea is difficult, or often impossible, requiring a return to port.

In a high-intensity conflict (or a lower-intensity but prolonged conflict) this can present a significant challenge. It can also leave vessels vulnerable when they have depleted their missile stores, or are in port to rearm.

Running out of munitions is often a significant concern for defensive systems. Directed energy weapons lessen this worry – so we are likely to see them more and more as technology develops.The Conversation

James Dwyer, Lecturer, School of Social Sciences, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Thursday, 5 March 2026

International affairs - regime change rarely works

 

Does regime change ever work? History tells us long‑term consequences are often disastrous

Matt Fitzpatrick, Flinders University

The latest US-Israeli bombings in Iran differ from last year’s, because one of the stated aims this time is regime change.

Engaged in the mass murder of civilians at home and fomenting violence abroad, the current Iranian regime has few friends internationally.

Many would be glad to see Iran undergo a far-reaching program of political reform. For many in the Iranian diaspora, regime change imposed from outside is better than none.

But the historical record of imposed regime change, particularly as undertaken by the United States, is patchy at best.

Things rarely go to plan, and the long-term consequences are often disastrous.

Afghanistan and Iraq

Some immediate examples spring to mind.

Still fresh in the public mind would be the shocking scenes of desperate Afghans trying to leave Kabul in 2021 as the United States conceded it could not permanently defeat the Taliban.

This admission came after two decades, thousands of deaths of US and allied troops and tens of thousands of Afghan deaths.

Many would also remember then-US President George W. Bush’s disastrous speech in May 2003 about America’s regime change efforts in Iraq, begun in March that year. Here, Bush addressed the press while standing in front of a huge banner that said “Mission Accomplished”; the implication was regime change had been achieved in just a few months.

In fact, what followed was another decade of US fighting to try to stabilise Iraq, with actions arguably not wound up until 2018 or even beyond.

Once again this came at a huge cost to civilian lives, with The Lancet estimating as early as 2004 that around 100,000 “excess deaths” had occurred as a result of the US attempt to effect regime change there.

Thereafter, Iraq was continuously wracked by violence and civil war. Notably, ISIS took advantage of its weakened state to establish its “caliphate” on Iraqi territory, leading to yet another wave of US intervention.

But US attempts to impose regime change have a much longer and equally unsuccessful history, as well.

From the Bay of Pigs to Iran

The phrase “Bay of Pigs” has become a synonym for the inability to overthrow a government.

Aimed at overthrowing Fidel Castro in Cuba in April 1961, not only was then-US President John F. Kennedy’s foray into regime change unsuccessful (Castro died in his sleep with his regime still in control of Cuba at the age of 90 in 2016), it also led to the execution of CIA operatives there.

The US also faced the embarrassment of having to swap tractors for the freedom of the Cuban exiles who had carried out the failed invasion for them.

In 1953, the US and Britain actually did succeed in overthrowing Iranian Prime Minister Mohammad Mossadeq after he’d announced Iran’s oil industry would be nationalised in response to Western oil companies’ intransigence on royalties and control.

This regime change effort by the US did “succeed” in the short run, but it led to a series of events that culminated in the repressive regime the US aims to replace today.

Mossadeq’s toppling led to the shah of Iran, Mohammad Reza Pahlavi, becoming an absolutist monarch in the cruellest tradition.

His savage repression led in no small way to the 1979 Iranian revolution, which became the vehicle for the present theocratic government to come to power.

It is one of the ironies of history that the son of the dictatorial shah is now presenting himself as the logical candidate to bring democracy to a new Iran.

From the colonial era to WWII

Some might reach further back and argue regime change in Germany worked after the second world war.

It is worth remembering, however, that this was far from a simple process. It involved occupying Germany for more than a generation, decades of trials against ex-Nazis and splitting the country in two for more than 40 years.

As the epicentre of the Cold War, this is hardly an experiment in regime change that could be easily replicated.

Earlier examples of regime change from the colonial period provide similar lessons.

Large armies of invading colonial forces were able to pull down governments in Africa and Asia and prop up unpopular ones.

But once the occupying forces sought to remove their militaries or lost the will to resort to massacres to reinforce their rule, the shift towards decolonisation or self-rule became increasingly irresistible.

In the Dutch East Indies, French-ruled Vietnam, British India and the Belgian Congo, governments imposed by external powers were rarely viable once the threat of force was removed.

Czechoslovakia’s Prague Spring protests in 1968 – an effort to throw off Soviet-imposed rule – were quickly crushed by the USSR, showing once again that regime change “works” for as long as you are prepared to enforce it with violence.

By 1989, however, the Soviet Union’s appetite for enforcing its hegemony across eastern Europe had waned, leading to a largely peaceful transition to democracy across the region.

A failure to learn from history

Today’s US leaders are unlikely to accept the counsel of history.

But they would do well to remember the simple message of former US Secretary of State Colin Powell’s “Pottery Barn” rule for attempts to overthrow governments: you break it, you own it.

At present, however, the view from Washington seems to be that you can just break states and hope someone else will fix it for you.The Conversation

Matt Fitzpatrick, Professor in International History, Flinders University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, 22 February 2026

14,000 satellites in orbit above Earth, with more to come: too many?

 

Too many satellites? Earth’s orbit is on track for a catastrophe – but we can stop it

Astronomer’s view of a star obscured by streaks from Starlink satellites. Rafael Schmall/Wikimedia Commons, CC BY
Gregory Radisic, Bond University and Samantha Lawler, University of Regina

On January 30 2026, SpaceX filed an application with the US Federal Communications Commission for a megaconstellation of up to one million satellites to power data centres in space.

The proposal envisions satellites operating between 500 and 2,000 kilometres in low Earth orbit. Some of the orbits are designed for near-constant exposure to sunlight. The public can currently submit comments on this proposal.

SpaceX’s filing is just the latest among exponentially growing satellite megaconstellation proposals. Such satellites operate with a single purpose and have short replacement life cycles of about five years.

As of February 2026, approximately 14,000 active satellites are in orbit. An additional 1.23 million proposed satellite projects are in various stages of development.

The approval process for these satellites focuses almost entirely on the limited technical information companies must submit to regulators.

Cultural, spiritual, and most environmental impacts aren’t taken into account – but they should be.

The night sky will drastically change

At this scale of growth, the night sky will change permanently and globally for generations to come.

Satellites in low Earth orbit reflect sunlight for about two hours after sunset and before sunrise. Despite engineering efforts to make them less bright, truck-sized satellites from many megaconstellations look like moving points in the night sky. Projections show future satellites will significantly increase this light pollution.

In 2021, astronomers estimated that in less than a decade, 1 in every 15 points of light in the night sky would be a moving satellite. That estimate only included the 65,000 megaconstellation satellites proposed at the time.

Once deployed at a scale of millions, the impacts on the night sky may not be easily reversed.

While the average satellite only lasts about five years, companies design these megaconstellations for nearly continuous replacement and expansion. This locks in a continuous, industrialised presence in the night sky.
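The scale of that continuous replacement is worth sketching. A back-of-envelope calculation, assuming the ~5-year lifespan and the satellite counts quoted above (both figures are taken from the article, and the steady-state model is a simplifying assumption):

```python
# Back-of-envelope launch cadence needed just to keep a constellation at
# constant size, assuming every satellite is replaced after its ~5-year life.
def sustaining_launch_rate(constellation_size: int, lifespan_years: float) -> float:
    """Satellites that must be launched per year purely for replacement."""
    return constellation_size / lifespan_years

# Today's roughly 14,000 active satellites:
print(sustaining_launch_rate(14_000, 5))      # 2800.0 per year
# A hypothetical one-million-satellite megaconstellation:
print(sustaining_launch_rate(1_000_000, 5))   # 200000.0 per year
```

Even before any expansion, a million-satellite constellation would require launching hundreds of satellites every day just to stand still.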

All this is causing a space-based “shifting baseline syndrome”, where each new generation accepts a progressively more degraded night sky. Criss-crossing satellites become the new normal.

And for the first time in human history, this shifting baseline means kids today won’t grow up with the same night sky every previous generation of humanity had access to.

A comic showing Earth satellites at different points in time.
The Conversation, CC BY-SA

Houston, we have a ‘mega’ problem

Concerns over the sheer volume of proposed satellites come from many sides.

Scientific concerns include bright reflections and radio emissions from satellites that will disrupt astronomy.

Industry experts also note traffic management and logistical concerns. There’s currently no form of unified space traffic management in the same way that exists in aviation, for example.

Megaconstellations also increase the risk of Kessler syndrome, a runaway chain reaction of collisions. There are already 50,000 pieces of debris in orbit that are ten centimetres or larger. If satellites stopped all collision avoidance manoeuvres, the latest data shows we can expect a major collision in 3.8 days.
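That 3.8-day figure can be turned into collision probabilities over time. A minimal sketch, assuming collisions arrive as a Poisson process with the mean waiting time quoted above (the Poisson model is an assumption for illustration, not part of the cited data):

```python
import math

# If major collisions arrive as a Poisson process with a mean waiting time
# of 3.8 days (the no-avoidance figure cited above), the probability of at
# least one collision within t days is 1 - exp(-t / 3.8).
MEAN_DAYS_BETWEEN_COLLISIONS = 3.8  # assumed from the estimate in the text

def collision_probability(days: float) -> float:
    """Probability of at least one major collision within `days` days."""
    return 1 - math.exp(-days / MEAN_DAYS_BETWEEN_COLLISIONS)

for t in (1, 3.8, 7, 30):
    print(f"{t:>5} days: {collision_probability(t):.1%}")
```

Under this model, the chance of a major collision passes 60% within the first four days and is near certainty within a month.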

Major cultural concerns abound, too. Satellite light pollution will negatively impact Indigenous uses of the night sky for longstanding oral traditions, navigation, hunting, and spiritual traditions.

Launching so many satellites uses up vast amounts of fossil fuels, damaging the ozone layer. After the satellites have served their purpose, the end-of-life plan is to burn them up in the atmosphere. This poses another environmental concern – depositing vast quantities of metals into the stratosphere, causing ozone depletion and other potentially harmful chemical reactions.

All this feeds into legal concerns. Under international space law, countries – not companies – are liable for harm caused by their space objects.

Space lawyers are increasingly trying to understand if international space law can actually hold corporations or private individuals accountable. This is especially important as the risk of damage, death or permanent environmental damage grows.

We can no longer ignore the gaps in regulation

Currently, the main regulations concerning satellite proposals are technical, such as deciding which radio frequencies they will use. At national levels, regulators focus on launch safety, lessening environmental impacts on Earth, and liability if something goes wrong.

What these regulations don’t capture is how hundreds of thousands of bright satellites change the night sky for scientific study, navigation, Indigenous teaching and ceremony, and cultural continuity.

These are not traditional “environmental” harms, nor are they technical engineering concerns. They’re cultural impacts that fall into a regulatory blind spot.

This is why the world needs a Dark Skies Impact Assessment, as proposed by space lawyers Gregory Radisic and Natalie Gillespie.

It’s a systematic way to identify, document, and meaningfully consider all the impacts of a proposed satellite constellation before it goes ahead.

How would such an assessment work?

First, evidence must be gathered from all stakeholders. Astronomers (both amateur and professional), atmospheric scientists, environmental researchers, cultural scholars, affected communities, and industry all bring their perspectives.

Second, it’s essential to model any cumulative effects of the satellites. Assessments should analyse how constellations will change night sky visibility and skyglow, orbital congestion, and the risk of casualties on the ground.

Third, it will define clear criteria for when unobstructed sky visibility is critical for science, navigation, education, cultural practice, and shared human heritage.

Fourth, it must include mitigation pathways such as brightness reduction, orbital design changes, and deployment adjustments to lessen harm. This should include incentives for using as few satellites as possible for a given project.

Finally, the findings must be transparent, independently reviewable, and directly tied to licensing and policy decisions.

It’s not a veto tool

A Dark Skies Impact Assessment doesn’t prevent space development. It clarifies trade-offs and improves decision making.

It can lead to design choices that reduce brightness and visual interference, orbital configurations that lessen cultural impact, earlier and more meaningful consultation, and cultural considerations where harm can’t be avoided.

Most importantly, it ensures that communities affected by satellite constellations aren’t finding out about them after approval has already been granted and bright lights crawl across their skies.

The question is not whether the night sky will change – it’s already changing. Now is the time for governments and international institutions to design fair processes before those changes become permanent.The Conversation

Gregory Radisic, Fellow at the Centre for Space, Cyberspace and Data Law; Senior Teaching Fellow, Faculty of Law, Bond University and Samantha Lawler, Associate Professor, Astronomy, University of Regina

This article is republished from The Conversation under a Creative Commons license. Read the original article.