Chapter 2:
The Nuts and Bolts:
A Deeper Dive into Data, AI, and… Humans
As we established in the first chapter, the concept of AI is simple at its core: a machine can take actions and exhibit human-like decision-making.
The benefits of using AI are also straightforward.
AI can help organizations streamline business processes and workflows to enhance decision-making and achieve greater efficiency and productivity. In turn, people can focus less on rote, administrative tasks and more on strategic initiatives. This enables professionals to spend more time leveraging their unique skill sets to pursue larger business transformation goals.
While the benefits of AI are undeniable, a lot goes into executing these technologies successfully. AI simplifies many jobs by removing manual, repetitive tasks from a person's to-do list, but there are complexities to address before it works practically for a business.
How Does AI Work?
In the beginning, AI machines were limited to following a prescriptive computer program, step by step.
Today, AI technologies are more autonomous and “intelligent” because they are built using data science, which IBM defines as combining “math and statistics, specialized programming, advanced analytics, artificial intelligence (AI), and machine learning with specific subject matter expertise to uncover actionable insights hidden in an organization’s data.”[6]
While Artificial Intelligence and data science are two different concepts and practices, they are also closely intertwined.
When building AI technology, programmers often use algorithms that guide the platforms and applications to process input data and turn it into output data. These algorithms are built for flexibility, so that machines can appropriately follow different workflows whenever necessary.
Artificial Intelligence algorithms enable machines to make decisions using data, including real-time data. They “combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data.”[7]
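To make that contrast concrete, here is a minimal, hypothetical sketch in Python. The claim fields, the approval threshold, and the tiny training set are all invented for illustration; the point is simply that the first function only follows the instructions it was given, while the second learns its decision logic from historical data.

```python
# Hypothetical example: the column names, threshold, and data are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Step-by-step instructions: the machine does exactly what it is told, nothing more.
def rule_based_decision(claim_amount: float) -> bool:
    return claim_amount < 10_000  # approve anything under a fixed threshold

# Data-driven approach: the decision logic is learned from historical examples.
history = pd.DataFrame({
    "claim_amount": [2_000, 8_500, 15_000, 40_000, 3_200, 22_000],
    "water_damage": [0, 1, 1, 1, 0, 0],
    "approved":     [1, 1, 1, 0, 1, 0],
})
model = LogisticRegression().fit(
    history[["claim_amount", "water_damage"]], history["approved"]
)

# The same new claim, evaluated both ways.
new_claim = pd.DataFrame({"claim_amount": [12_000], "water_damage": [1]})
print(rule_based_decision(12_000))        # the fixed rule simply says no
print(bool(model.predict(new_claim)[0]))  # the learned model weighs the evidence
```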
Avoiding “Garbage In, Garbage Out”
The old saying, “garbage in, garbage out,” is especially relevant in the overlapping worlds of data science and AI. For AI-driven machines to be effective, they need precise and informed programming, and their learning must be guided by algorithms built exclusively with high-quality input data.
No matter how “intelligent” a machine is, to perform at the level of a human, it needs a comprehensive set of rules and logic (solid, future-proof computer programming) to guide it through its operations.
Since data has become so dynamic, programmers and data scientists can rarely “set it and forget it” when it comes to algorithmic development and programming for AI-based machines.
[6] https://www.ibm.com/topics/data-science
[7] https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/
Users of AI can overcome data quality issues by integrating the human back into the technology.
Today, responsible AI use requires data scientists to perform quality checks—regular audits to identify duplicates, outliers, missing information, corrupted files, and even typos.
While human intervention is required more frequently in some cases than others, consistent human oversight is always critical to data integrity. It takes a combination of good programming and regular data checks to ensure that AI tools have a data set that’s complete, accurate, relevant, and high quality to produce the output—or intelligence—that organizations need to remain competitive in their fields.
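As one illustration of what these routine checks can look like in practice, here is a minimal sketch using pandas. The file name, column names, and validity rules are assumptions made for this example only, not a prescription.

```python
# Hypothetical data quality check: file name, columns, and rules are assumed.
import pandas as pd

records = pd.read_csv("property_records.csv")

# Duplicates: the same record captured more than once.
duplicate_count = records.duplicated().sum()

# Missing information: fields left blank in the source systems.
missing_by_column = records.isna().sum()

# Outliers: values far outside the typical range (here, beyond 3 standard deviations).
sqft = records["square_footage"]
outliers = records[(sqft - sqft.mean()).abs() > 3 * sqft.std()]

# A simple typo / inconsistency check on a categorical field.
valid_statuses = {"approved", "pending", "denied"}
typos = records[~records["claim_status"].str.lower().isin(valid_statuses)]

print(f"{duplicate_count} duplicates, {len(outliers)} outliers, {len(typos)} suspect status values")
print(missing_by_column)
```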
As we will discuss in greater detail later in this book, AI isn’t magic. Companies must balance human performance and machine output to capitalize on the opportunities it offers.
"
Companies must balance human performance and machine output to capitalize on the opportunities it offers.
"
How to Ensure You Are Working With the Best Data Sets
An organization’s success with AI of any type hinges on the quality of data.
Your AI tools must be able to consistently access the right data to produce precise, accurate property insights. However, data silos, which are data collections isolated from other information sources and inaccessible to other stakeholders or business units, pose challenges to this ability. These isolated data sets are often stored in only one location, digital or otherwise (as we will discuss further in Chapter 6).
In general, consistently ensuring that an organization works with integrated (i.e., un-siloed), good-quality data is challenging. The number of data sources available to property insurers has grown exponentially in recent years, and it will only continue to grow.
Basically, data management can be downright difficult.
The process begins with a deep assessment of your data – identifying where it all lives and then conducting an honest review of its quality.
From there, your organization can take a few different approaches to make sure your AI technology is working with the best data sets and delivering maximum benefit.
1. Adjust Your Data Governance Model to Maximize the Power of AI
Defined as “the collection of processes, policies, roles, metrics, and standards that ensures an effective and efficient use of information,” data governance “helps establish data management processes that keep your data secured, private, accurate, and usable throughout the data life cycle.”[8]
In other words, companies set up data governance structures to set the quality standard for data stored in their technology and to ensure that it is kept and used securely.
To guarantee that your AI has the high-quality input data it needs to operate efficiently and at full capacity, your company should revise its data governance policies to ensure that all input data sets are accurate, consistent in formatting, complete, timely, and relevant. Establishing data quality standards, defining the processes surrounding data, and determining who will provide human oversight of data management are all crucial to optimizing the AI experience, and those written standards can even be codified into automated checks, as sketched below.
So, aligning your data management practices with company goals and regulatory compliance is key to ensuring that your AI technology is ideally positioned to facilitate transformational operational efficiency and productivity.
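Here is a minimal, hypothetical sketch of what codifying governance standards might look like. The field names, formats, and rules below are illustrative assumptions, not an actual policy.

```python
# Hypothetical governance rules: fields, formats, and patterns are assumed for illustration.
import pandas as pd

GOVERNANCE_RULES = {
    "policy_id":   {"required": True},
    "loss_date":   {"required": True, "format": "%Y-%m-%d"},   # consistent formatting
    "postal_code": {"required": True, "pattern": r"^\d{5}$"},
}

def check_against_policy(df: pd.DataFrame) -> list[str]:
    """Return a list of governance violations found in the data set."""
    violations = []
    for column, rules in GOVERNANCE_RULES.items():
        if column not in df.columns:
            violations.append(f"missing required column: {column}")
            continue
        if rules.get("required") and df[column].isna().any():
            violations.append(f"{column}: contains empty values")
        if "format" in rules:
            parsed = pd.to_datetime(df[column], format=rules["format"], errors="coerce")
            if parsed.isna().any():
                violations.append(f"{column}: inconsistent date formatting")
        if "pattern" in rules:
            if not df[column].astype(str).str.match(rules["pattern"]).all():
                violations.append(f"{column}: values do not match the expected pattern")
    return violations
```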
2. Conduct Regular Data Audits
Organizations with a dedicated team of professionals who oversee data quality will reap the greatest results from AI solutions. This team will perform regular data audits to ensure that data is always handled securely and within the bounds of established data governance models.
These routine audits are a necessary step in empowering your organization to “uncover silos, access issues, or areas where a greater depth or breadth of collection would be beneficial.”[9]
We have discussed the importance of establishing the right blend of AI and human performance in a company’s operations, and here is an excellent example of where human interaction plays a vital role in the success of AI.
A specialized team of people monitoring data quality and practices will see where there is room for improvement—both in terms of data and how AI is interacting with it—and can identify how to execute AI enhancements and upgrades. Note that it may be necessary to equip data management professionals with the right resources for conducting effective audits—data quality technology solutions or otherwise.
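As a complement to the record-level quality checks shown earlier, a recurring audit might also look across data sources for staleness or thin coverage. The source names, freshness threshold, and figures in this sketch are invented for illustration only.

```python
# Hypothetical source-level audit: names, counts, and thresholds are assumed.
from datetime import datetime, timedelta

sources = [
    {"name": "underwriting_db", "last_updated": datetime(2023, 1, 5),  "records": 120_000},
    {"name": "claims_platform", "last_updated": datetime(2023, 3, 1),  "records": 85_000},
    {"name": "field_estimates", "last_updated": datetime(2022, 6, 20), "records": 4_200},
]

STALE_AFTER = timedelta(days=90)  # assumed freshness standard from the governance model

def audit(sources, as_of):
    # Flag any source that has not been refreshed within the agreed window.
    for src in sources:
        age = as_of - src["last_updated"]
        status = "STALE - review collection cadence" if age > STALE_AFTER else "OK"
        print(f"{src['name']:<18} {src['records']:>8} records   updated {age.days} days ago   {status}")

audit(sources, as_of=datetime(2023, 3, 15))
```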
3. Establish an Integrated Digital Ecosystem
Both property insurance carriers and restoration contractors should establish a digital architecture of solutions that can speak with one another and share data between platforms. The more integrated these systems are, the less likely it will be for unwanted data silos to exist or develop over time.
If data from an underwriting technology can automatically populate within claims technology, for example, the possibilities for inconsistencies and miscommunications are greatly reduced. An integrated digital ecosystem lays the framework for accuracy and important collaboration between all stakeholders in a property insurance workflow.
With an integrated digital ecosystem, there is one centralized source of truth. As a result, AI technologies have a single place from which they pull data. This optimizes both performance and output data, so businesses can make decisions based on the most reliable and consistent intelligence.
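To make the idea of a single source of truth concrete, here is a hypothetical sketch. The PropertyRecord fields and the two consumer functions are invented for illustration; a real integration would go through each platform's own APIs.

```python
# Hypothetical shared record store: fields and functions are illustrative only.
from dataclasses import dataclass

@dataclass
class PropertyRecord:
    address: str
    year_built: int
    roof_type: str

# One centralized store that every platform reads from.
SOURCE_OF_TRUTH = {
    "policy-123": PropertyRecord("12 Main St", 1987, "asphalt shingle"),
}

def underwriting_view(policy_id: str) -> dict:
    record = SOURCE_OF_TRUTH[policy_id]
    return {"address": record.address, "year_built": record.year_built}

def claims_view(policy_id: str) -> dict:
    # Claims pulls the same underlying record, so it can never drift from underwriting.
    record = SOURCE_OF_TRUTH[policy_id]
    return {"address": record.address, "roof_type": record.roof_type}

print(underwriting_view("policy-123"))
print(claims_view("policy-123"))
```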
4. Work Exclusively With Trusted Data Sources
Because AI technologies tap into all your data sources, it’s essential to work with vendors who can guarantee that their property data is always up-to-date, accurate, and comprehensive. You should also work to procure the right permissions to use and distribute the data from each of your sources to avoid any compliance issues down the road.
For example, CoreLogic’s CoreAI™ leverages data from the industry's most comprehensive and accurate property data source. This ensures that its solutions don’t pull different data sets from various sources and then produce insights based on inconsistent or contradictory collections of data. Additionally, CoreLogic gets explicit permission from every client before using any data in its proprietary database.
[8] https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-a-data-governance/
[9] https://www.capterra.com/resources/how-to-conduct-a-data-audit/