2.1 Artificial Intelligence

This is nothing new, although it feels like it. Artificial Intelligence started in the 1950s with some conviction, but it is only as we have developed faster, smaller and cheaper processors that it has become a very powerful tool. The phrase Artificial Intelligence is a very broad and popular term; Machine Learning is probably the more accurate one.

What Artificial Intelligence attempts to do is mimic the human brain in solving problems. It learns by making small changes, similar to a child learning to read. It is an area that uses Artificial Neural Networks, which are loosely modelled on how the brain works.

Things that we take for granted, e.g. recognising the difference between a dog and a cat, we can do from a very early age with almost 100% accuracy. To us this is easy, but for a computer it is very, very difficult. It has to be trained to recognise a dog: it is shown tens of thousands of labelled images of dogs and cats and gradually works out which is which. Then it basically has a good guess.

Where the computer scores over humans is when it has to look at millions of bits of data, searching for patterns. It can do this far faster than any human, and for longer; it never needs a tea break.

However, what it cannot do is then go and make a cup of Yorkshire tea; even robots struggle with that. The code behind Artificial Intelligence is a mixture of clever programming and some useful maths. This is not beyond the scope of mere mortals, as there are ways round the maths.

The code mimics the brain with what are called Artificial Neural Networks, which work almost, but not exactly, like the neurons in the brain. This is a fascinating area of development, and it is not beyond the reach of the ordinary coder, as tools are being made available that take it out of the exclusive domain of computer scientists.
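To make the idea of "learning by small changes" concrete, here is a minimal sketch of a single artificial neuron, the building block of those networks. It is a toy example in plain Python, not anyone's production code: the neuron takes a weighted sum of its inputs, and training nudges the weights a little each time it gets an answer wrong. The task here (learning the logical AND of two inputs) is chosen purely for illustration.

```python
# A single artificial neuron: a weighted sum of inputs passed through a
# simple step activation. It "learns" by making small adjustments to its
# weights whenever its guess is wrong - a tiny version of how Artificial
# Neural Networks are trained.

def step(total):
    # Fire (output 1) if the weighted sum reaches the threshold
    return 1 if total >= 0 else 0

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return step(total)

def train(samples, epochs=50, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Nudge each weight a little in the direction that reduces the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: learn the logical AND of two inputs from labelled examples
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
for inputs, target in data:
    print(inputs, "->", predict(weights, bias, inputs))
```

After a few passes over the examples, the small adjustments add up and the neuron reproduces AND correctly. Real networks stack thousands of these neurons in layers, but the principle, guess, measure the error, adjust slightly, repeat, is the same.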