A Brief History of Artificial Intelligence
The AI we experience today was nothing more than someone’s wildest dream just a century ago. In fact, the concept of artificial intelligence and what we now think of as robots can be traced back thousands of years, long before you or I or any of our ancestors were ever born.
I could spend an entire book going through the history of AI, from ancient concepts to today’s applications. But in the interest of space (and keeping your focus), I’m going to do a brief rundown of how we got where we are today, thanks to the many brilliant minds that came before us who dared to believe in the possibility of creating intelligent machines with the potential to change the world.
The concept of artificial intelligence goes back thousands of years. The myth of Talos in ancient Greece is often cited as the first mention of artificial intelligence in literature. In this myth, Talos is a giant bronze man commissioned by Zeus to protect the island of Crete from invaders. Like other forms of AI in literature, Talos is eventually destroyed. It seems that even the gods couldn’t handle the fear of losing control of a machine.
This early conception of a robot highlights our human fascination with mechanized versions of ourselves. We’ve come a long way since then as a society, and the twentieth century marked a rapid shift from simply dreaming about AI to developing it and weaving it into our culture.
The Early to Mid-20th Century
All we need to do to understand the twentieth century’s fascination with artificial intelligence is take a look at the popular media of the era. In The Wizard of Oz, we encounter the Tin Man, a “man” made entirely of tin. We can’t ignore the humanoid robot in Metropolis, which is itself a dark study of the dangers of mechanization in the modern world. And then there’s HAL in 2001: A Space Odyssey, an AI computer that decides to kill the astronauts aboard the spaceship rather than allow itself to be disconnected.
By the 1950s, enough mathematicians, scientists, and philosophers believed in AI to start exploring it seriously as a way to incorporate machines into human society. A.M. Turing’s 1950 paper “Computing Machinery and Intelligence” explored the potential of teaching machines to “think” and marked a turning point for the field. The paper introduced what is now known as the Turing Test, in which a machine is judged to exhibit human-level intelligence if a human evaluator cannot reliably tell its responses apart from a person’s.
While he had a lot of the fundamental ideas and elements of machine learning figured out, Turing lacked one essential component needed to test his ideas — the right computer.
See, in the 1950s, computers were extremely expensive. I’m not talking about the couple of thousand dollars you’d spend on a high-end Mac. In the early 1950s, you could expect to spend upwards of $200,000 a month (in today’s money) just to lease a computer.
So, naturally, computers weren’t accessible to everyone. Only prestigious Ivy League universities and major technology companies could afford one. And even if you happened to work at one of the lucky universities or companies that had a computer, you still had to apply for funding to use it for your experiment. To get funding, you would need to demonstrate a reliable proof of concept and have the support of enough high-profile individuals willing to vouch for your abilities and ideas.
Lacking access to a computer and the money to rent one, Turing was not able to turn his ideas into a reality. However, his research and resulting paper were not in vain. Even though there had been stories, discussions, and papers about artificial intelligence, there wasn’t a formal field of AI until 1956, when the term “artificial intelligence” was officially coined at a conference that took place at Dartmouth College. The Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) was dedicated to exploring and discussing the potential for this new field of AI.
Unfortunately, the conference itself was not a huge success as far as conferences go. Attendees did not collaborate as the organizers, John McCarthy and Marvin Minsky, had hoped. Instead of engaging in discussions and collaborating on projects, attendees came and went as they pleased and appeared altogether uninterested in the ideas being proposed. In the end, no great consensus was reached about what artificial intelligence was or how it should be studied.
While it may have fallen short as an academic conference, the DSRPAI had a lasting impact. The one big takeaway was a shared agreement that AI was, in fact, achievable. This was enough to take the field where it needed to go, and it provided the foundation for the AI work of the next two decades.
The Late 20th Century
While it may not have met the expectations of its hosts, who had hoped for a more collaborative effort and acknowledgment of their work so far, the DSRPAI conference fueled the new field of AI for the next few decades. From 1957 until 1974, AI research and testing flourished!
Computers were able to store more information, and they became faster at processing it. During this time, computers also became less expensive and more accessible to people outside of wealthy private institutions and corporations.
New developments led to funding from the American government, which had a particular interest in developing a machine that could translate and transcribe spoken language. (This was during the Cold War, when the U.S. was worried about Russian influence and spies.)
Unfortunately, many technological advances still had to be made before computers could perform these kinds of functions. While they were vastly more capable than the computers of 1950, the computers of the 1960s and 1970s simply didn’t have the power to store and process enough information to exhibit intelligence.
And so, without the computational power needed to reach the level of artificial intelligence the government was looking for, funding ran out and was not reinstated. AI went on hold for about ten years, a period some call “the AI winter.”
AI all but disappeared until the 1980s, when commercial applications of AI technology came into play. The most prominent application of AI after the AI winter came in 1986 when a commercial expert system helped save the company Digital Equipment Corporation an estimated $40 million annually.
This was fueled by Edward Feigenbaum’s introduction of the concept of expert systems, which could mimic the human decision-making process. The Japanese government was especially interested in expert systems and funded related projects with $400 million between 1982 and 1990. However, funding dwindled when developers were unable to meet most of the Japanese government’s ambitious goals.
In the New Millennium
A funny thing happened when governments backed off from AI: the field was able to grow and thrive as never before. Most of AI’s landmark goals were achieved by developers working without government funding in the 1990s and into the 2000s. One of the better-known examples came in 1997, when IBM’s Deep Blue chess computer defeated the world chess champion and grandmaster Garry Kasparov. This highly publicized event demonstrated a computer’s ability to make decisions and operate at a high level of intelligence.
Another milestone in 1997 was the implementation on Windows of speech recognition software developed by Dragon Systems. This opened up opportunities for people who had trouble typing, or who struggled to communicate for any reason, to interact with their computers and with the world.
While these advances happened without government funding, it would be reductive to say that getting the government out of AI was the key to moving the field forward. What really happened was that computers finally overcame the storage and processing limits that had kept us from developing AI successfully for so many years.
Moore’s Law observes that the number of transistors on a chip, and with it computer memory and speed, roughly doubles every couple of years. By the 1990s, computers had finally become fast enough, with enough memory, for developers to put into practice many of the ideas and concepts their predecessors had worked out 40 or 50 years before.
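To get a feel for how quickly that doubling compounds, here is a minimal sketch in Python; the two-year doubling period and the time spans are my own illustrative assumptions, not figures from this chapter:

```python
# A minimal sketch of exponential growth under Moore's Law-style doubling.
# The two-year doubling period and the year counts are illustrative assumptions.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return how many times capacity has multiplied after `years`,
    assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (10, 20, 40):
        print(f"After {years} years: roughly {growth_factor(years):,.0f}x the original capacity")
```

Under those assumptions, forty years of doubling works out to roughly a million-fold increase in capacity, which helps explain why ideas sketched in the 1950s only became practical decades later.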
What once seemed and felt impossible is now a part of our everyday lives, and our ability to store and process information keeps increasing.
Imagine what’s possible if computers keep doubling in speed and memory at that pace! The excitement of finding out what people will develop next for machines to learn is part of what drives me every day to keep pushing the boundaries of artificial intelligence forward.
Will we hit points in the future where we once again have to wait for technology to catch up to our ideas, especially as the sheer amount of data that computers need to take in keeps growing?