The History of Artificial Intelligence

  • Crypto Guru
    July 31, 2023, 10:07

With AI systems hitting the headlines and taking the tech world by storm, society is divided into two groups: those who are delighted by the incredible achievements of technology, and those who don’t trust machines and are afraid that AI will take away their jobs.

Of course, it’s very interesting to consider both points of view, but the arguments rest mainly on assumptions rather than accurate predictions – it’s simply too early to judge. So let’s first understand how we got this far. In this article, Grapherex experts have gathered a brief history of artificial intelligence and its evolution.

What Is Artificial Intelligence?

AI refers to computers, robots, and software that try to imitate the problem-solving and decision-making abilities of humans. Keep in mind that AI is a simulation of human intelligence processes; it isn’t real, independent intelligence. Typical applications include expert systems, natural language processing (NLP), speech recognition, and computer vision.

This technology is trending, as it could be (and already is) of great importance in various fields—for example, entertainment, shopping, healthcare, finance, education, and robotics. We already know about and use AI-powered assistants, autonomous vehicles, facial recognition apps, social media recommendation systems, and spam filters.

The market is growing, just like the whole tech industry. Revenue from the global AI software market is predicted to reach $126 billion by 2025, according to Statista. You’ve probably heard about the latest development, ChatGPT: a chatbot with artificial intelligence developed by OpenAI and capable of interacting in a conversational way. The number of use cases for such technology is almost limitless.

Origins of AI Research

Scientific AI research started in the middle of the 20th century, when mathematicians and philosophers began to explore whether machines could carry out tasks that typically require human thinking. A pioneer of AI was Alan Turing, who proposed the concept of a machine that could imitate human intelligence. In his 1950 paper ‘Computing Machinery and Intelligence’, Turing described how to build intelligent machines and how to test whether they are intelligent. The test he proposed is known today as the Turing Test. Since those days, AI has come a long way.

A Brief Timeline of AI

Here is a very brief timeline of how this technology developed.

Before 1949 – Computers of the time had two major limitations that had to be solved first. The first was that they couldn’t store commands, only execute them: a machine could be told what to do, but couldn’t remember what it had done. Secondly, computing was enormously expensive, as leasing a computer could cost almost $200,000 per month. Serious work on machine intelligence therefore had to wait for machines to become more capable and more affordable.

1956 – The Dartmouth Conference brought together scientists interested in creating ‘thinking’ machines. It was there that the term ‘artificial intelligence’ was coined and a well-formulated concept of AI was presented for the first time. The main topics of interest were rule-based systems, symbolic reasoning, and decision-making.

The 1960s and 1970s – AI research flourished. Many experiments and tests were run and a great deal of research was carried out; although progress demanded enormous time and effort, scientists were convinced that AI could do something valuable. Joseph Weizenbaum’s ELIZA appeared in 1966, an early conversational program and a great leap for natural language processing. Governments also took an interest in the technology and began sponsoring developers. The overall optimism was high.

The 1970s and 80s – Researchers realised that symbolic reasoning alone was not a viable path to intelligence, so they turned to machine learning: statistical methods that let computers learn from large amounts of data. Neural networks, first proposed decades earlier, re-emerged as a practical approach (a toy sketch of this idea follows the timeline below). The world saw two key changes: a significant expansion of the algorithmic toolkit, and a boost in funding.

The ‘AI Winter’ – This period started in 1984, when the term was raised during a public debate at the annual meeting of the AAAI, then called the American Association for Artificial Intelligence. The winter began with pessimism in the AI community and the press. Teaching machines to ‘think’ was a long and extremely time-consuming process, and computers were not yet powerful enough to work as smoothly as people wanted them to. The systems of the day were narrowly specialised and weak. Interest and sponsorship decreased, which led to a decline in research.

The 1990s and 2000s – Fortunately, even without government funding and public hype, AI remained strong. Renewed progress in neural networks enabled advances in robotics, computer vision, and natural language processing. Some years later, deep learning, a branch of machine learning built on so-called deep neural networks, made it possible to dramatically improve speech and image recognition.
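
To make ‘learning from data’ concrete, here is a deliberately simplified toy sketch in Python of a perceptron, the single-neuron model behind those early neural networks, learning the logical AND function from four examples. The learning rate and epoch count are arbitrary illustrative choices, not parameters of any historical system.

```python
# Toy perceptron: a single artificial neuron learning the logical AND
# function from examples. Illustrative sketch only, not historical code.

# Training data: inputs (x1, x2) and the target output of logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # weights start at zero
lr = 0.1                      # learning rate (arbitrary choice)

for epoch in range(20):       # repeated passes over the data
    for (x1, x2), target in data:
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output
        # Perceptron learning rule: nudge the weights after each mistake
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

for (x1, x2), target in data:
    prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print(f"AND({x1}, {x2}) = {prediction} (expected {target})")
```

The point is the learning rule: instead of being programmed with the answer, the model adjusts its weights whenever it makes a mistake, until its predictions match the data.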

Modern-Day AI

The present day is characterised by rapid advances in AI, including cutting-edge deep learning and reinforcement learning systems. AI plays an increasing role in fields like linguistics, healthcare, and finance. Virtual assistants, medical diagnostics, and self-driving cars are novelties that surround us daily. The number of new applications and programs is rapidly increasing, while researchers explore new ideas like quantum computing and neuromorphic computing.

One of the trends is the creation of more human-like interactions, led by voice assistants like Siri and Alexa. Modern natural language processing systems enable machines to understand and respond to human speech with high accuracy. ChatGPT, for example, the AI language model developed by OpenAI, is designed to understand natural language and generate human-like responses to a whole range of queries and prompts. Trained on a vast amount of text data, including books, articles, and websites, it comes up with meaningful and coherent responses.
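
As an illustration of the conversational interface, here is a minimal sketch of how a developer might query a ChatGPT-style model through the openai Python package. It uses the 2023-era ChatCompletion interface (newer versions of the library expose a different client), and the API key and prompt are placeholders.

```python
# Minimal sketch: one conversational exchange with a ChatGPT-style model,
# using the 2023-era openai package (pip install openai).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute a real key

# The chat API takes the conversation so far as a list of role-tagged messages
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the Turing Test in one sentence."},
    ],
)

# The model's reply comes back as another 'assistant' message
print(response["choices"][0]["message"]["content"])
```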

Future of AI

As for the future, AI is likely to be increasingly influential in solving some of the biggest challenges that society faces. Algorithms will help analyse climate change, improve healthcare, and boost security. AI will have a lasting impact on how we work, learn, and communicate.

However, there are ethical concerns surrounding the development of AI and its social implications. Ethics is a genuinely hard thing to encode in software or teach to a machine, so we will probably see heated debates on this topic in the next few years. Some people believe that AI ethics should even be taught in schools.

The potential outcomes of the global implementation of artificial intelligence and algorithms may also seem scary. There is a fear of machines taking over people’s jobs. In 2004, researchers from MIT and Harvard published a thorough study of the job market, identifying the professions most likely to undergo automation. Not only drivers but also managers can be replaced by powerful algorithms: Uber, for instance, could manage millions of taxi drivers with minimal human supervision.

Modern programs are powerful and efficient, even more so than their creators expected. It remains to be seen what the future holds for us.