Artificial Intelligence (AI) is talked about everywhere, from its admirable role in cancer diagnosis to scaremongering about robots taking over jobs. Where did AI come from though, and how did it creep up on us all of a sudden?
With a slant towards the UK and Europe, this article attempts to give a holistic view of AI’s journey and shed light on its breadth and scale. It also aims to dispel some myths, raise awareness of the issues that surround AI, and suggest areas for consideration in the future.
It’s been a rocky road
With all the buzz around AI in the last few years, many people would be surprised to know that it has actually been around for over 70 years, starting back in the 1950s with Alan Turing and his now-famous Turing Test (popularized by the 2014 film ‘The Imitation Game’). This test set the benchmark for intelligent machines for years to come and is still referenced today, more than 70 years on.
The 50s also saw contributions from the likes of John McCarthy and Isaac Asimov, who helped bring the term Artificial Intelligence into the public’s imagination through research directives and science fiction respectively.
The 60s then expanded this perception with the popularization of robots, in films such as 2001: A Space Odyssey and in reality at the Stanford Research Institute, which created Shakey, the world’s first mobile intelligent robot. At the same time, the decade dismissed the promise of neural networks (the perceptron, while not actually the first neural network, is often cited as such) following Marvin Minsky and Seymour Papert’s Perceptrons book. This dismissal then extended to the rest of AI in the first of what came to be called the AI winters. Just over a decade after the term AI was coined, rejection set in, fuelled by unrealistic expectations and wasted money. AI, it seemed, was dead, left to science fiction writers with little room in the real world.
Was that it for AI? Not so. Expert and knowledge-based systems came to the rescue. The 80s saw the introduction of symbolic, logic-based reasoning through the ‘manual’ creation of rules (for example, IF-THEN statements). Unlike their predecessors, these systems were confined to specific problems, such as diagnosis and classification, rather than general problem-solving. This knowledge-based period in the AI timeline again attracted substantial funding, including the UK’s Alvey Programme (thought by many to be a response to Japan’s ambitious Fifth Generation Project) and the CYC project.
In time, however, disappointment returned with the realization that these systems were too static and inflexible when scaled up or required to adapt to changing environments.
The AI winter was back again, leaving disappointing memories of what is now termed ‘Good Old-Fashioned AI’ (GOFAI). What next? Where was AI to go now? Dabbling with ‘brute force’ approaches brought successes, such as IBM’s Deep Blue beating world chess champion Garry Kasparov in 1997, but the real promise came with the resurgence of adaptive computation and neural networks.
This resurgence proved fruitful. After many iterations, it brought us to the new millennium, which saw the advent of machine learning, robotics, computer vision, and speech recognition. That is not to say it was an easy ride to the present day; there were many disappointments and realignments along the way. Successes, however, were now real and repeatable, IBM Watson’s victory on the quiz show Jeopardy! in 2011 being a showcase.
Look where we are now
While the timeline above highlights the key events in AI’s journey to the present day, it is worth noting that a major factor in AI’s standing today is the convergence of supporting technologies. The proliferation of data, the availability of computing power, and investment in accessible tools and platforms have eased AI’s entry into every aspect of modern life.