Artificial General Intelligence (AGI)

This is a term you will hear more and more. It is seen as different from ordinary Artificial Intelligence because it suggests a more generalised way of solving problems. What I mean is that it can tackle many different kinds of problems with the same model (the same AI brain) without having to be retrained or rebuilt with a different architecture.

One example of this is AlphaZero. Originally, an AI would be created to play one specific game, e.g. chess, and it could only play that game. It was not a general intelligence but a very specific one: it could do one thing, play chess. It was structured that way. All its data was chess data; it learned how to play chess, and to play it very, very well.

Show it another game and it was useless. However, researchers have now developed AI models that can learn any such game, hence the term Artificial General Intelligence. It suddenly moves from being a specific tool for a specific job to a general-purpose tool for a range of jobs.

The question then is: is it more intelligent? The bigger question is 'what is intelligence?' That is a long-running debate on its own. Is the AI becoming sentient, or even more intelligent? At some point we do need to explain exactly what is going on inside the computer running an AI programme.

But for now, let us just talk generally about it so that you can frame it. It is a tool. It has been created by humans and is not really able to recreate or reproduce itself, at least not in any meaningful way. What you have is some clever maths and code running in what is often colloquially called a black box, with an artificial neural network inside.

The neural network is not something sinister. It is a series of algorithms, or one big algorithm, in which the computer does something it does well: calculations. These calculations are mainly matrix multiplications. Just lots of them, millions or even billions of calculations, done very rapidly.
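To make that concrete, here is a minimal sketch, in plain Python with made-up numbers, of the kind of calculation a neural network repeats over and over: multiplying an input by a matrix of weights and passing the result through a simple activation function.

```python
# A minimal sketch of one neural-network layer: a matrix multiplication
# followed by an activation function. The weights here are made up;
# in a real network they are learned from data.

def matmul(matrix, vector):
    """Multiply a matrix (a list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def relu(values):
    """A common activation function: keep positives, zero out negatives."""
    return [max(0.0, v) for v in values]

# Two inputs feeding three "neurons" (a 3x2 weight matrix).
weights = [[ 0.5, -0.2],
           [ 0.1,  0.8],
           [-0.3,  0.4]]
inputs = [1.0, 2.0]

outputs = relu(matmul(weights, inputs))
print(outputs)  # one layer's worth of the billions of calculations
```

A real network stacks many such layers and uses vastly larger matrices, but each step is still just this arithmetic done at enormous speed.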

So it learns from the data, repeating these calculations and adjusting its weights until it can confidently tell the difference between, say, a cat and a dog to a high level of accuracy. If intelligence is the ability of a thing to learn, then AI programs are intelligent, but that intelligence is limited and specific.
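A toy illustration of that learning loop, using a single artificial neuron (a perceptron) and made-up data points rather than real cat and dog images: the program nudges its weights every time it gets an answer wrong, and keeps going until its answers come out right.

```python
# A toy learning loop: a single artificial neuron (a perceptron)
# learning to separate two classes of made-up data points.
# Real image classifiers juggle millions of weights, but the
# principle is the same: adjust the weights whenever the answer is wrong.

# Made-up training data: (feature1, feature2) -> label (0 = "cat", 1 = "dog")
data = [((1.0, 1.0), 0), ((2.0, 1.5), 0),
        ((4.0, 4.5), 1), ((5.0, 4.0), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(point):
    total = bias + sum(w * x for w, x in zip(weights, point))
    return 1 if total > 0 else 0

# Keep learning: pass over the data repeatedly, nudging the weights
# and bias whenever a prediction is wrong.
for epoch in range(100):
    for point, label in data:
        error = label - predict(point)
        if error != 0:
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, point)]
            bias += learning_rate * error

print([predict(p) for p, _ in data])  # should match the labels: [0, 0, 1, 1]
```

The point of the sketch is how narrow it is: this neuron learns to draw one line between two clusters of numbers, nothing more. Show it anything else and, like the chess-only AI, it is useless.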

It has no emotions, despite what the Google employee thought. Its apparent feelings were just learned responses, built up from trawling through the internet and assimilating similar responses. It told the employee what it thought he wanted to hear; it could not operate from independent thought. Yet is that not too dissimilar to how we learn?