The History and Evolution of Artificial Intelligence
1964 - 1967
ELIZA
A program that could carry on an intelligent dialogue without real-time programming of its responses.
Significance
Demonstrated a machine’s ability to react to language stimuli and “think” on its own.
Implications
Showed scientists the power of machines to recognize and process language.
Although scientists had been interested in the concept of artificial intelligence since the early 1950s, AI remained just that—an idea—for at least another decade.[2]
As computers evolved over the following decade and beyond, they became less expensive and gained the ability to process increasing quantities of data. Machines that were once only accessible to universities and other well-funded institutions now had a “seat at the table” in more organizations. And, as machines gained greater exposure and the capacity to store more commands and “knowledge,” scientists and experts began transforming artificial intelligence from a concept into a reality in the 1960s.
One of the earliest innovations was the interactive ELIZA program, credited to scientist Joseph Weizenbaum of the Massachusetts Institute of Technology (MIT). ELIZA could converse in English with another party on its own, without requiring real-time, third-party programming for responses.
Experts and government entities alike were interested in the potential of artificial intelligence to transcribe and translate languages. To see whether machines could realize this goal, the U.S. government – namely, the Defense Advanced Research Projects Agency (DARPA) – funded AI research at several institutions in the 1970s. Other governments worldwide followed suit with similar initiatives over the next two decades.
The 1970s saw success with Natural Language Processing (NLP)—interactions between computer programs and human language. Through NLP, machines could process a multitude of documents, and by analyzing their language, they could deduce overarching themes and conclusions.
1970
Natural Language Processing
Technology for processing and analyzing large volumes of documentation to identify themes and overarching ideas.
Significance
Among the first programs able to reach intelligent conclusions without human interaction.
Implications
Precursor to more sophisticated AI, like machine learning and deep learning.
1997
Deep Blue
IBM’s chess-playing computer.
Significance
The first machine to beat a reigning world chess champion and grandmaster (Garry Kasparov of Russia).
Implications
Showed the world that a machine could react to real-time actions and mimic human decisions – even those of the most brilliant minds.
Machine Learning
Throughout the 1990s and 2000s, the world saw the arrival of even more dynamic artificial intelligence through machine learning—a form of AI that uses data and algorithms “to imitate the way that humans learn, gradually improving …accuracy.”[3]
When we use AI today, it is often in the form of machine learning – i.e., programs that improve their functioning and output based on the data they ingest. Machine learning allows computers to learn from data so that programmers don’t have to give them explicit instructions for every function.
While a subset of AI, machine learning does not fully embrace the idea that a machine can function precisely the way an autonomous human would. Instead, machine learning “aims to teach a machine how to perform a specific task and provide accurate results by identifying patterns.”[4]
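To make that distinction concrete, here is a minimal sketch of the “learning from data” idea. It assumes Python with the scikit-learn library; the toy data and equipment-failure scenario are hypothetical, chosen only to show that the program infers a rule from examples rather than being given one.

```python
# Minimal sketch: a model learns a rule from examples instead of
# being given explicit instructions (assumes Python + scikit-learn).
from sklearn.linear_model import LogisticRegression

# Toy training data: hours of equipment use -> failed (1) or not (0).
# Values are illustrative only.
X_train = [[100], [250], [400], [900], [1200], [1500]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # the "learning" step: fit patterns from data

# The model now generalizes to inputs it has never seen.
print(model.predict([[300], [1100]]))   # e.g. [0 1]
print(model.predict_proba([[1100]]))    # class probabilities
```

Note that nowhere in the sketch is a threshold such as “more than 600 hours means failure” written down; the model derives its own boundary from the examples it is shown.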
The Last Decade of AI Innovation: At the Intersection of Big Data
Artificial intelligence is all about data.
AI enables humans to better understand and thus optimally leverage the massive amounts of data at their fingertips. It is a tool that helps users determine which data is and isn’t relevant in any given scenario.
Specifically, we can’t talk about AI without talking about Big Data, which, in the words of Gartner, is “high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.”[5]
Big Data differs from typical data sets, which are manageable and easily digestible. Big Data, by contrast, refers to large volumes of complex, dynamic assets that require sophisticated tools to access and process.
Big Data took on a new meaning in the mid-2000s with the rise of cloud computing, when machines began exchanging information without being physically connected. The possibilities for gathering, storing, and exchanging data with other sources became seemingly endless.
Today, the success of any modern business hinges on Big Data, or the ability to mine vast volumes of information and intelligence for essential insights. This has been a critical component in the rise of artificial intelligence, a technology that enables users to process and analyze Big Data. As a result, AI—and especially machine learning—has become increasingly crucial for businesses across all industries.
When AI is applied to Big Data, the results can be transformational for businesses. Machine learning can (see the sketch after this list):
Detect deviations in data
Determine the probability of future outcomes
Discover patterns within data structures that are too big for humans to comprehend
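As a brief illustration of the first capability, the following sketch flags a deviation in a hypothetical set of property claim amounts. It assumes Python with the scikit-learn library; the data, the model choice (IsolationForest), and the parameters are assumptions made for illustration, not a prescribed approach.

```python
# Illustrative sketch: detecting deviations (anomalies) in data.
# Assumes Python + scikit-learn; the claim amounts are hypothetical.
from sklearn.ensemble import IsolationForest

# Daily claim amounts (in dollars); most are routine, one is extreme.
claims = [[1200], [950], [1100], [1300], [48000], [1050], [990]]

detector = IsolationForest(contamination=0.15, random_state=0)
labels = detector.fit_predict(claims)  # -1 = anomaly, 1 = normal

for amount, label in zip(claims, labels):
    if label == -1:
        print(f"Flagged as deviation: ${amount[0]:,}")
```

The same fit-then-predict pattern scales from this toy list to the large, fast-moving data sets described above, which is precisely where machine learning earns its keep.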
AI Today… and Beyond
AI has come a long way over the past several decades and will continue to evolve. Now is the time to determine AI's role in your organization’s digital business strategies—both today and in the future.
In the upcoming sections of this eBook, we will delve deeper into the world of AI and its varying levels of complexity. We will also examine its role in the property insurance industry, from the insurance provider space to the realm of restoration contractors.