
Artificial Intelligence, ChatGPT, and the Relationship Between Humans and Machines

By: Dr. Bob Lindner, Chief Science & Technology Officer, Co-Founder

If the explosive launch of ChatGPT has taught us anything, it’s that there is a growing appetite for engaging with AI. According to a recent UBS study, the chatbot from OpenAI reached 100 million monthly active users in January, only two months after its launch. By comparison, it took TikTok about nine months to reach that milestone and Instagram two-and-a-half years.

While ChatGPT and the generative AI that powers it represent the latest advancements in AI and machine learning, the fact is that organizations and individuals have been trying to harness the power of AI for years. Some see it as the wave of the future. Others are scared of what it portends for the complicated relationships between humans and machines.

Many people are so afraid of being displaced by the automation that artificial intelligence brings that they overlook the benefits of this amazing technology. But the fear of “robots replacing humans” isn’t the only thing that gives people pause. There’s also concern that machines will make unacceptable errors. Of course, when people make the occasional mistake, we’re used to giving them the benefit of the doubt, but we struggle to do the same for machines because we don’t know how to contextualize their errors.

Why do we react so emotionally to AI? How can we shift our perspectives? And how can we actually score recommendations in AI systems? The hope is that with greater understanding, we can apply AI to more business settings and drive greater success.

Digging deeper into our fears and hesitations

Behaviorally, we tend to fear things we don’t understand or that seem beyond our control. When it comes to risk specifically, we struggle to assess it in an objective, rather than emotional, way.

For example, think about self-driving cars. The thought of a car without a driver makes many of us uneasy. Even though more than 75% of us will be in at least one major car accident during our driving lifetimes, we’re reluctant to put autonomous cars with a comparable driving record on the road. The probability of an accident is likely no higher than with a human behind the wheel, but the combination of not knowing the exact risk and not being in control makes it harder to accept. We’re simply not used to making decisions based on probability; we’re used to listening to our gut.

In my experience, the best way to get comfortable with assessing risk objectively is practice. Over time, it becomes more natural to look at the numbers rather than at our emotional response. Of course, understanding exactly how AI works helps too.

Understanding how to assess risk associated with AI

AI operates in two kinds of systems: deterministic and probabilistic. In a deterministic system, the outcome can be determined with relative certainty. This includes apps like Amazon, DoorDash, and Venmo, which generate predictable types of data within a confined system. These are usually not considered “mission-critical,” and as a result, we’re willing to tolerate some level of inaccuracy in their algorithms. For example, when Netflix recommends a movie that doesn’t actually interest us, we don’t cancel our subscription to the service. We just look at the next recommendation in the queue or scan the top 10 titles of the week. We’re forgiving.
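To make that distinction concrete, here is a toy sketch in Python. The functions, names, and numbers are made up for illustration and don’t come from any real product: a deterministic routine returns the same answer for the same input every time, while a probabilistic one returns a confidence score rather than a guaranteed answer.

```python
import random

def deterministic_total(prices):
    """Deterministic: the same cart always produces the same total."""
    return round(sum(prices), 2)

def probabilistic_match(record_a, record_b):
    """Probabilistic: the output is a confidence score, not a guaranteed answer."""
    # Stand-in for a learned model; real systems estimate this score from data.
    base = 0.8 if record_a["name"] == record_b["name"] else 0.3
    return min(1.0, max(0.0, base + random.uniform(-0.05, 0.05)))

print(deterministic_total([19.99, 5.00]))          # always 24.99
print(probabilistic_match({"name": "Dr. Smith"},
                          {"name": "Dr. Smith"}))  # a score near 0.8, never a certainty
```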

Probabilistic systems have built-in uncertainty; the exact output is not known. Think about the difficulty of forecasting the weather. It’s hard for us to grasp the uncertainty of probabilistic systems, and the stakes get even higher when we’re dealing with mission-critical data, as we are in healthcare technology. In order to process the data with a probabilistic AI system, we have to score it and set a threshold for “good” data; anything with a score below our threshold is discarded, and anything higher is deemed an acceptable level of risk and included in the data set.

The first step is to understand how these systems work; the second is to score the data and set thresholds that match your risk tolerance.
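As a rough illustration of that second step, the sketch below filters model output by a confidence score. The field names and the 0.90 threshold are assumptions made for the example, not Veda’s actual pipeline.

```python
records = [
    {"provider_id": "A-101", "address": "123 Main St", "confidence": 0.97},
    {"provider_id": "B-202", "address": "45 Oak Ave",  "confidence": 0.62},
    {"provider_id": "C-303", "address": "9 Elm Blvd",  "confidence": 0.91},
]

RISK_THRESHOLD = 0.90  # tune this to match your own risk tolerance

# Keep records the model scored at or above the threshold; discard the rest.
accepted = [r for r in records if r["confidence"] >= RISK_THRESHOLD]
rejected = [r for r in records if r["confidence"] < RISK_THRESHOLD]

print(f"accepted {len(accepted)} records, discarded {len(rejected)}")
```

Raising the threshold trades coverage for confidence; lowering it does the opposite. That trade-off is exactly the risk-tolerance decision described above.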

Take a risk

With machine learning models, we are training a system to learn and adapt in order to improve, so it’s necessary to make assessments on an ongoing basis rather than measuring an automation system’s performance once and only once. Because of that, patience is essential; data can and will change, depending on many factors.
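As a simple sketch of what ongoing assessment can look like, the snippet below re-scores a model on each new batch of labeled data and flags dips below a target accuracy. The metric, the 0.95 target, and the function names are illustrative assumptions, not a prescribed monitoring setup.

```python
from datetime import date

def evaluate_batch(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

history = []  # (date, accuracy) pairs collected over time

def assess(batch_date, predictions, labels, alert_below=0.95):
    accuracy = evaluate_batch(predictions, labels)
    history.append((batch_date, accuracy))
    if accuracy < alert_below:
        print(f"{batch_date}: accuracy {accuracy:.2%} dipped below target -- review or retrain")
    return accuracy

# The same model, assessed on two different days' labeled data.
assess(date(2023, 3, 1), ["a", "b", "a"], ["a", "b", "a"])
assess(date(2023, 4, 1), ["a", "a", "a"], ["a", "b", "b"])
```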

While risk makes people uncomfortable regardless of the setting, it’s time to address those fears and the reluctance to move forward. Once we have tangible examples and parallels, we tend to relate to risk, and tolerate it, better.

As for ChatGPT and its generative AI brethren, the key will be for each person who engages with these tools to determine what level of risk they are willing to take. For most of us, a simple chat about something mundane or unimportant is likely acceptable. For some, exchanging critical data or asking the tool to perform an important function will be a bridge too far. For now.

Dr. Bob Lindner is the Chief Science & Technology Officer and Co-Founder of Veda. More about Veda’s science and technology: Automation, Machine Learning, and the Universe: Q&A with Bob Lindner.
