And it tells you this is a car.
It’s pretty clear what’s gone wrong.
From the limited number of images it was trained with, the AI has decided colour is the strongest way to separate cars and vans.
But the amazing thing about the AI program is that it came to this decision on its own – and we can help it refine its decision-making.
We can tell it that it has wrongly identified the two new objects – this will force it to find a new pattern in the images.
But more importantly, we can correct the bias in our training data by giving it more varied images.
These two simple actions taken together – and on a vast scale – are how most AI systems have been trained to make incredibly complex decisions.
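To make that loop concrete, here is a rough sketch in Python of the simplest possible learner: a rule that tests every feature it is given and keeps whichever one best separates the labelled examples. The feature names, numbers and labels below are invented for illustration – no real image classifier works on two hand-picked numbers – but the sketch shows how a biased training set produces a biased rule, and how corrected labels and more varied examples change what is learned.

def train_stump(examples, labels):
    """Try every feature and threshold; keep the rule that makes the fewest mistakes."""
    best = None
    for f in range(len(examples[0])):
        for threshold in sorted(set(e[f] for e in examples)):
            for low, high in (("car", "van"), ("van", "car")):
                mistakes = sum(
                    1 for e, y in zip(examples, labels)
                    if (low if e[f] <= threshold else high) != y
                )
                if best is None or mistakes < best[0]:
                    best = (mistakes, (f, threshold, low, high))
    return best[1]

def predict(rule, example):
    f, threshold, low, high = rule
    return low if example[f] <= threshold else high

# Feature 0 = redness (0 to 1), feature 1 = length in metres (both invented).
# Biased training set: every car happens to be red and every van white.
X = [[0.9, 4.2], [0.8, 5.4], [0.1, 4.3], [0.2, 5.6]]
y = ["car", "car", "van", "van"]

rule = train_stump(X, y)
print(predict(rule, [0.9, 5.8]))   # a long red van is wrongly called a "car"

# The two corrections described above: fix the wrong labels and add more
# varied examples (red vans, white cars), then train again.
X += [[0.9, 5.8], [0.85, 6.0], [0.15, 4.1], [0.1, 4.4]]
y += ["van", "van", "car", "car"]

rule = train_stump(X, y)
print(predict(rule, [0.9, 5.8]))   # now "van": length, not colour, decides

With only red cars and white vans to learn from, the best rule it can find is about colour; once red vans and white cars are added, length becomes the more reliable way to tell the two apart.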
How does AI learn on its own?
Supervised learning is an incredibly powerful training method, but many recent breakthroughs in AI have been made possible by unsupervised learning.
In the simplest terms, this is where complex algorithms and huge datasets allow the AI to learn without any human guidance.
ChatGPT might be the most well-known example.
The amount of text on the internet and in digitised books is so vast that over many months ChatGPT was able to learn how to combine words in a meaningful way by itself, with humans then helping to fine-tune its responses.
Imagine you had a big pile of books in a foreign language, maybe some of them with images.
Eventually you might work out that the same word appeared on a page whenever there was a drawing or photo of a tree, and another word whenever there was a photo of a house.
And you would see that there was often a word near those words that might mean “a” or maybe “the” – and so on.
ChatGPT carried out this kind of close analysis of the relationships between words to build a huge statistical model, which it then uses to make predictions and generate new sentences.
It relies on enormous amounts of computing power, which allow the AI to memorise vast numbers of words – alone, in groups, in sentences and across pages – and then to read and compare how they are used, over and over again, in a fraction of a second.
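On a tiny scale, that word-counting idea can be sketched in a few lines of Python. This toy example is not how ChatGPT actually works – it relies on a neural network trained on vastly more text – but it shows the basic intuition: count which words tend to follow which, then use those counts to predict the next word and string a sentence together. The example text is invented.

from collections import Counter, defaultdict

# Invented example text standing in for the "pile of books".
text = (
    "the tree is tall . the house is small . "
    "the tree is green . the house is warm ."
)
words = text.split()

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

# Which words has the model seen after "is", and how often?
print(following["is"].most_common())
# [('tall', 1), ('small', 1), ('green', 1), ('warm', 1)]

# Generate a short sentence by repeatedly picking the most common follower.
word, sentence = "the", ["the"]
for _ in range(4):
    word = following[word].most_common(1)[0][0]
    sentence.append(word)
print(" ".join(sentence))   # "the tree is tall ."

Scaled up from a few invented sentences to most of the text humans have put online, and from simple counts to a neural network with billions of adjustable settings, next-word prediction of this kind is at the heart of how systems like ChatGPT generate text.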
Should I be worried about AI?
The rapid advances made by deep learning models in the last year have driven a wave of enthusiasm, but they have also drawn more public attention to concerns over the future of artificial intelligence.
There has been much discussion about the way biases in training data collected from the internet – such as racist, sexist and violent speech or narrow cultural perspectives – lead to artificial intelligence replicating human prejudices.
Another worry is that artificial intelligence could be tasked to solve problems without fully considering the ethics or wider implications of its actions, creating new problems in the process.
Within AI circles this has become known as the “paperclip maximiser problem”, after a thought experiment by the philosopher Nick Bostrom.
He imagined an artificial intelligence that, asked to create as many paperclips as possible, slowly diverts every natural resource on the planet to fulfil its mission – including killing humans to use as raw materials for more paperclips.
Others say that, rather than focusing on murderous AIs of the future, we should be more concerned with the immediate problem of how people could use existing AI tools to increase distrust in politics and scepticism of all forms of media.
In particular, the world’s eyes are on the 2024 presidential election in the US, to see how voters and political parties cope with a new level of sophisticated disinformation.
What happens if social media is flooded with fake videos of presidential candidates, created with AI and each tailored to anger a different group of voters?
In Europe, the EU is creating an Artificial Intelligence Act to protect its citizens’ rights by regulating the deployment of AI – for instance, a ban on using facial recognition to track or identify people in real-time in public spaces.
These are among the first laws in the world to establish guidelines for the future use of these technologies – setting boundaries on what companies and governments will and will not be allowed to do – but, as the capabilities of artificial intelligence continue to grow, they are unlikely to be the last.
Expert view: Raise them like children
“The answer to our future, if we were to re-imagine it, is not found in trying to control the machines or program them in ways that restrict them to serving humanity, it’s found in raising them like a sentient being, and literally raising them like one of our children. And as we observe how humanity has been behaving in front of those machines – the way we respond to tweets or the way we interact with the news and so on – we are not being very good parents; we are not showing the best of us. And if the machines were to mimic our intelligence, and become more of who we are, we are in trouble. The only way we can get our future to be re-imagined as a Utopia, is to actually start behaving like the kinds of parents who could teach those machines the values that would make them want to care about us.”
Mo Gawdat – author and former chief business officer of Google X