Tuesday, 5 January 2021

Artificial Intelligence - AI - the jury is still out

'Artificial intelligence', or AI. The term evokes images from film and television, such as The Terminator franchise, in which the fictional 'Skynet' computer system becomes self-aware and decides to wipe out the human race, or the computer HAL from Stanley Kubrick's 2001: A Space Odyssey, which attempts to eliminate the astronauts on board their spacecraft. Or, more latterly, the androids of Star Wars, Star Trek and the Alien film franchise.

For most people, exposure to some form of AI already occurs through 'robo' systems that handle debt recovery, conduct surveys or provide basic insurance quotes. But what is AI exactly?

AI can be defined as any method, process or technique that enables computers to mimic human intelligence. This is achieved with decision trees, logic algorithms, if-then rules and machine learning. Machine learning (ML) itself is built around neural networks, which programmers create using a learning algorithm and train on terabytes of data. Computers thereby become able to train themselves to recognise specific words, phrases or images.
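To make the difference between hand-written rules and machine learning concrete, the short Python sketch below contrasts a simple if-then rule with a tiny learned model. It is purely illustrative and not drawn from any system mentioned in this post; the insurance-quote scenario, the toy data and the use of scikit-learn's DecisionTreeClassifier are assumptions made for the example.

# A minimal sketch contrasting a hand-written if-then rule with a small
# machine-learned model. Assumes Python with scikit-learn installed.
from sklearn.tree import DecisionTreeClassifier

# Hand-coded if-then rule: offer a standard premium only to drivers over 25
# with no prior claims (hypothetical business rule).
def rule_based_quote(age, prior_claims):
    if age > 25 and prior_claims == 0:
        return "standard"
    return "refer"

# Machine learning version: the same kind of decision is learned from example
# data (toy figures invented for illustration) rather than written as rules.
training_features = [[22, 1], [30, 0], [45, 0], [19, 2], [35, 1], [50, 0]]
training_labels = ["refer", "standard", "standard", "refer", "refer", "standard"]

model = DecisionTreeClassifier().fit(training_features, training_labels)

print(rule_based_quote(28, 0))       # rule-based answer: "standard"
print(model.predict([[28, 0]])[0])   # learned prediction for the same applicant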

How is AI applied at present? AI is currently used for multiple forms of modelling and decision-making recommendations (predictive, detection-based, prescriptive), computer vision (facial recognition, image analysis, sensors), autonomous machines (drones, large vehicles in controlled environments, robotic assembly lines) and conversational platforms (virtual assistants, translation, inquiry assistance).

There are, however, significant risks in the development and unrestricted use of AI. The late physicist Stephen Hawking saw AI as a direct threat to the human race if not controlled, and a range of AI-related failures have already been experienced by Amazon, Microsoft and the US justice system. The Loomis case in the US state of Wisconsin in 2013 is a key example. In that case, an AI-based risk assessment tool known as the COMPAS program was used in determining the length of sentence the offender, Eric Loomis, should serve for offences involving a stolen car. COMPAS rated Loomis as high risk, contributing to a sentence at the higher end of the penalty range and longer than would have been expected for the crime involved. The case remains controversial.

The Australian Institute of Company Directors (AICD) has warned that "AI and ML designed intelligently and deployed sensitively herald immense opportunity. But the technology is not without risk. Flawed algorithms and biased data sets can lead to unintended outcomes while increased automation will likely reduce the needs for employees engaged in repetitive work" (June 2019).


