The most up-to-date, comprehensive, and accurate source of data. Your organization can access profiles of every active provider in the U.S.—over 6 million.
See how we’ve helped leading healthcare organizations achieve significant cost savings, improve data accuracy, and enhance patient care. Here, you will find our results, research, reports, and everything else our scientists are testing in the Veda Lab – no lab coat required.
At Veda we understand that every data point is an opportunity to improve the healthcare experience. And we can see the potential when data is no longer a barrier.
Dr. Bob Lindner is the Chief Science and Technology Officer at Veda, a company addressing provider directory data challenges.
It’s no surprise to anyone who works with data—it’s messy. In every industry and every business, there are data anomalies and issues that can impact the story data tells. If we have any hope of improving data practices and making collected data truly actionable, we first have to acknowledge its limitations and then explore modern solutions for improving it.
Bad Data Is The Norm
With the new federal administration exploring cost-cutting measures and releasing data nearly daily, a specific example caught my eye—a Social Security disbursements-by-age graph, with the data suggesting that 210-year-olds are receiving Social Security entitlements. As a data scientist who has been working with healthcare data for over 10 years, I wasn't shocked by this graph.
I recently saw one dermatologist who was practicing at 20 different variations of one address; imagine the extra legwork required of a patient just to figure out where to book an appointment. Or how about two providers with the exact same name, but one is a veterinarian on the West Coast and the other is a physician in New York? There is state licensing info for both of them, but the only one with a federal National Provider Identifier (NPI) is the veterinarian. These are complex data problems occurring every day.
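To make the address problem concrete, here is a minimal sketch of the kind of normalization that collapses trivial variants of one location. The abbreviation table and sample addresses are illustrative only, not Veda's production logic.

```python
import re

# A small, illustrative subset of USPS-style abbreviations (not exhaustive).
ABBREVIATIONS = {
    "st": "street", "ste": "suite", "ave": "avenue",
    "blvd": "boulevard", "dr": "drive", "rd": "road", "fl": "floor",
}

def normalize_address(raw: str) -> str:
    """Canonicalize one address string so trivial variants collapse together."""
    tokens = re.sub(r"[^\w\s]", " ", raw.lower()).split()
    return " ".join(ABBREVIATIONS.get(tok, tok) for tok in tokens)

variants = [
    "123 Main St., Ste 4",
    "123 MAIN STREET SUITE 4",
    "123 Main Street, Suite #4",
]
print({normalize_address(v) for v in variants})
# All three collapse to one canonical form: {'123 main street suite 4'}
```

Real-world matching is harder (misspellings, moved suites, vanity addresses), but even this simple pass shows why 20 "different" addresses can turn out to be one office.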
Data engineers know that a lot of data in every industry is collected manually, and this often introduces errors that are quickly propagated and magnified throughout downstream processes. In fact, most data systems in the modern economy, all around the globe, have shockingly out-of-date practices. With a spotlight on data issues right now, it’s important to dig deeper and examine data processes to have any hope of modernizing databases and making data functional.
If we can speed up endpoint-to-endpoint connections across the healthcare lifecycle and remove unnecessary steps from the clerical process, the result will be improved patient experiences.
However, simplifying workflows isn’t enough unless the data driving the information is accurate and timely.
In the fast-paced world of healthcare, sluggish provider data is a liability, not a luxury. Backlogged rosters pile up, decisions stall, and resources drain away. But what if provider data moved faster?
How Veda’s Speed Redefines Provider Data Management
Imagine what is possible when automation delivers provider rosters at unprecedented speeds. That’s the power of Veda. We’re not just automating data; we’re redefining it. In the future of provider data, speed isn’t just a goal – it’s how we connect health systems and payers to solve complex healthcare data challenges.
Manual provider data approaches are especially troublesome when handling large provider rosters, some containing hundreds of rows. Handling the volume of data created in healthcare every day is infeasible without AI.
Where AI Comes In
One of the main benefits of AI is the ability to complete tasks quickly compared with manual methods; the reduced processing times free up resources for other meaningful work.
Large, unruly provider rosters or atypical formats? Not a problem with robust and reliable (and patented) AI. When data quality is maintained by automation, it also means rosters don't need to be reworked or fixed again later.
AI also delivers on what we call "synthetic attestation." This is an attestation that occurs with no provider intervention or effort. While this is important in all specialties, it's especially impactful for behavioral health, where providers rarely have a spare moment to pick up the phone and self-attest. Synthetic attestation uses the data providers are already creating in their day-to-day workflows.
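As a toy illustration of the concept, not Veda's actual patented mechanism, a synthetic attestation can be derived from activity a provider already generates, such as claims. The field names and the 90-day window below are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical activity records from a provider's normal workflow;
# this schema is illustrative, not Veda's.
claims = [
    {"npi": "1234567890", "address": "123 main st suite 4",
     "service_date": date(2025, 1, 20)},
    {"npi": "1234567890", "address": "123 main st suite 4",
     "service_date": date(2025, 2, 3)},
]

def synthetic_attestation(claims, npi, address, as_of, window_days=90):
    """Treat recent real-world activity at an address as evidence that the
    provider practices there -- no phone call or web form required."""
    cutoff = as_of - timedelta(days=window_days)
    recent = [c["service_date"] for c in claims
              if c["npi"] == npi and c["address"] == address
              and c["service_date"] >= cutoff]
    return {"attested": bool(recent), "last_seen": max(recent, default=None)}

print(synthetic_attestation(claims, "1234567890", "123 main st suite 4",
                            as_of=date(2025, 2, 10)))
# {'attested': True, 'last_seen': datetime.date(2025, 2, 3)}
```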
Faster Data, Faster Care
With accurate data that quickly gets to where it needs to be, providers are displayed correctly, decision-making is improved, and patients have faster access to care.
Deepfakes Can Damage Businesses—Here’s How To Fight Back
Deepfakes—AI-generated synthetic media in which visuals or audio are manipulated to create deceptively realistic content—are often discussed in terms of their impact on the public’s perception of current events, but they pose a growing threat to businesses as well. Created and leveraged by unscrupulous actors, deepfakes can enable fraud, perpetuate misinformation and cause lasting brand damage.
Whether they take the form of a fabricated video, cloned voice or contrived image, deepfakes can erode trust and disrupt operations in ways many companies aren’t prepared for. Members of Forbes Technology Council discuss some of the specific ways deepfakes could be used to hurt a company and what leaders can do to defend their organizations (or respond when a deepfake succeeds).
Regularly Review Employee LinkedIn Profiles
“We’ve noticed LinkedIn profiles for people who claim to work at our company but who don’t or never have. Such deepfake profiles damage our company because our people, our reputation and our brand are being abused. Leaders can respond to this specific use of deepfakes by periodically reviewing all ‘employees’ of your company. Look for surprises and flag the frauds for review by LinkedIn.” – Robert Lindner, Veda
Meghan: When we founded Veda, we set out to create lasting infrastructure in the healthcare industry that allows accurate data to flow automatically between payers and providers. That meant inventing new ways of processing data that were both secure and accurate, and then publishing our work through the patent process. Ten years later, we are staying true to those objectives—we've built AI tools to modernize healthcare and we've shared our discoveries through the patent process so our solutions can fuel further innovation.
Bob: We needed to bring a fresh perspective to the problems surrounding provider data that have remained stagnant for over four decades. By creating wholly new approaches to the trillion-dollar data administration problem in healthcare, we knew that our solutions were innovative and unique. So we began early in our company’s history with the patenting of Veda’s technology—protecting our inventions in the short term, while also benefitting all of us in the long run.
Veda’s patents protect our entity resolution engine, AI modeling engine, ML training data process & platform, and web-scale data collection.
How else has Veda committed to AI development?
Bob: I’m an astrophysicist and I built AI tools in radio astronomy before founding Veda. Scientists have been building innovative AI tools for decades and have a cultural rigor that drives them to test and publish their findings.
We’ve recruited a team of PhD scientists—from physics to molecular genetics and astronomy—who help build and test Veda’s in-house LLM technology, train our machine learning models, and develop the infrastructure that is the foundation for Veda’s patented systems.
What makes your AI systems different from others in the industry?
Bob: Our AI is trained on Veda’s proprietary training data, which is ethically sourced and high quality. Our training data is used to fine-tune Veda’s models and help solve critical healthcare-specific tasks with the highest possible performance.
Plus, Veda’s AI models are entirely owned by Veda with no external dependencies. Our application of AI differentiates us from others in the industry because it leverages LLMs and contextual understanding but does not produce hallucinations. We allow the model to select correct answers, not to invent free-form text.
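One way to picture 'selection, not generation': the model is only allowed to rank values that actually appear in source data, so it cannot invent an answer from thin air. The sketch below is a toy stand-in that uses frequency where a real system would use a learned scorer.

```python
from collections import Counter

# Candidate values observed in real source data; the system may only
# choose among these, so it cannot hallucinate an address that never existed.
observations = [
    "123 main street suite 4",
    "123 main street suite 4",
    "900 oak avenue floor 2",
]

def select_answer(observations):
    """Return the best-supported observed candidate instead of free-form text.
    Frequency is a toy proxy for a learned model's confidence score."""
    return Counter(observations).most_common(1)[0][0]

print(select_answer(observations))  # '123 main street suite 4'
```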
Meghan: Our company is founded on scientific rigor and was built specifically for healthcare from Day 1. We have over 80 combined years of AI expertise, and our commitment to science and data integrity compels us to approach problems differently. It hasn’t always been easy. We did the hard work upfront. We threw out the rule book and asked ourselves, “How do I ensure I can access care?”
Putting ourselves in the patients’ shoes is how we began to turn these challenges on their heads and look at them differently—we’ve calibrated our success to the patient’s ability to use the data to access care. What does that mean technologically? It means our AI systems must provide hallucination-free, predictable, and measurable results because that is what our customers expect and it is what patients deserve.
Bob: It was essential we build the system in a new way. The blend of patents is what makes our AI systems so unique. The patented technology works together, in parallel, to tackle complex data curation challenges with speed and accuracy previously thought impossible.
Which provider data problem is Veda’s AI solving?
Bob: All of them. But the one I’m particularly excited about, and that our most recently granted patent underscores, is our ability to automate intake at scale.
Meghan: Veda's technology isn't just a single model. It offers many capabilities working in tandem toward one comprehensive function. There are several foundational data challenges that our technology solves. One of the unique benefits of our patented technology is that it can be assembled in different ways to address many kinds of healthcare industry problems.
Bob: For example, our patented entity resolution system efficiently matches the identity of healthcare providers. The special challenge in this problem is that healthcare providers change lots of their information over the course of their careers, so the system needs to connect their identities while allowing for a normal amount of drift in some fields over time.
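A minimal sketch of drift-tolerant matching follows, with illustrative weights (this is not Veda's patented engine): stable identifiers dominate the score, while drift-prone fields contribute without being able to veto a match.

```python
from difflib import SequenceMatcher

# Illustrative weights: identity-stable fields dominate; drift-prone fields
# (address, phone) contribute but cannot veto a match on their own.
WEIGHTS = {"npi": 0.5, "name": 0.3, "address": 0.1, "phone": 0.1}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted field similarity between two provider records."""
    return sum(w * similarity(rec_a.get(f, ""), rec_b.get(f, ""))
               for f, w in WEIGHTS.items())

old = {"npi": "1234567890", "name": "Jane Q. Smith",
       "address": "123 main street suite 4", "phone": "555-0100"}
new = {"npi": "1234567890", "name": "Jane Smith",
       "address": "900 oak avenue floor 2", "phone": "555-0199"}

print(f"{match_score(old, new):.2f}")
# Stays high despite a move and a new phone number, because the stable
# fields (NPI, name) carry most of the weight.
```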
Why do you need AI to solve provider data problems?
Meghan: Veda’s AI can cut through data barriers and ensure that people can access care when they need it the most. That’s why we founded Veda—because everyone deserves access to accurate, up-to-date information that empowers them to get the care they need.
What are the risks of using AI in healthcare and how can they be mitigated?
Meghan: While everyone is looking to AI and automation for solutions, in healthcare the AI isn’t living up to the hype. In a race to reduce costs, many have lost sight of the problem they are trying to solve and have left out foundational components of professional services, actual results, and rigorous testing. In fact, I think the irresponsible development of some AI tools could negatively impact the companies that are taking a transparent and tested path.
For instance, imagine a business trying a new product for the first time, and it doesn't go well: the product breaks, it's costly, and it leaves a negative impression. After that bad experience, you might be reluctant to try another product in that category. This can happen with AI too—if one company delivers poor results, people might dismiss AI solutions altogether and revert to outdated methods, which ultimately hurts innovation.
Bob: We succeed with AI when it is effective, robust, and focused on responsibly making an impact. While there is a risk posed by poorly designed and underperforming tools, I see an opportunity for Veda to prove our integrity to the industry. We’re proud to showcase our patented AI and machine learning solutions, which were developed and tested with an unwavering commitment to scientific rigor and ethical, security-forward principles.
Ready for Veda’s provider data solutions? Contact us.
Veda Announces Tenth AI and Machine Learning Patent
Proprietary Technology Leads the Health Data Industry
MADISON, February 6, 2025 – Veda Data Solutions, Inc. (Veda), a healthcare technology company solving complex provider data challenges, announced its tenth patent has been granted by the United States Patent and Trademark Office, with four patents secured in the last four months.
“Our provider data solution is the only one of its kind,” said Veda Chief Science & Technology Officer and patent author Dr. Bob Lindner. “The 10 patents work in tandem to deliver automation, speed, and provider data accuracy that others can’t match. Our IP portfolio spans the entire operational pipeline: web-scale data collection, entity resolution, automatic semantic recognition and transformation, accuracy modeling, and human-in-the-loop interactivity.”
Why did Veda patent its AI technology?
Veda is committed to building responsible and transparent AI. The patent process is rigorous and ensures inventors are creating technology with unique value while openly sharing their research to fuel an innovation ecosystem.
What does Veda’s patented AI technology do?
Veda’s patented technology definitively solves provider data problems plaguing the healthcare industry.
Veda offers the optimal solution for automatic mass-scale demographic information management along with automatic roster ingestion, directory accuracy, network construction, and network adequacy optimization.
Veda’s best-in-class product leaves behind flawed, biased, and outdated notions of “sources of truth” and attestation, instead leaning on artificial intelligence and sound scientific design to produce reliable and reproducible results.
Is Veda’s AI secure?
Veda's AI systems are HITRUST-certified and built entirely in-house. Veda's implementation of its patented technology is bias- and hallucination-free, with all customer data and services firewalled within the United States for maximum security.
At Veda, provider data is treated with the same reverence for security and privacy that is required for patient data.
What is next for Veda’s proprietary innovations?
With 14 more pending patents, Veda continues innovating to remain the optimal solution for provider data roster automation and data accuracy scoring.
“Veda’s technology isn’t only patented, it’s powerful. Innovated precisely for healthcare organizations and their unique data problems, our patents are essential to the delivery of fast and accurate data to Veda’s customers,” said Veda CEO Meghan Gaffney. “Veda was the first to tackle the provider roster data problem successfully and continues to develop innovative solutions in healthcare data today. With our patented approach, organizations can dramatically reduce operating costs by automating complex business rules for data extraction, transformation, and loading.”
About Veda
Veda blends science and imagination to solve healthcare's most complex data issues. Using AI, machine learning, and human-in-the-loop automation, our solutions dramatically increase productivity, enable compliance, and empower healthcare businesses to focus on delivering care. Veda's platforms are simple to use and require no technical skills or drastic system changes because we envision a future for healthcare where data isn't a barrier—it's an opportunity. To learn more about Veda, visit vedadata.com and follow us on LinkedIn.
In this episode, #MillenniumLive is joined by Dr. Bob Lindner, Chief Science & Technology Officer and Co-Founder at Veda, for a deep dive into the fascinating world of artificial intelligence (AI). Bob shares his insights on what excites him most about AI development, exploring the balance between innovation and responsibility. Tune in as Bob discusses the differences between supervised and unsupervised learning, the critical role of data science in AI modeling, and why modeling is essential to delivering impactful results.
We’ll look at the future of healthcare data and the challenges it faces, and how Veda is positioned to lead the charge in transforming the industry. Whether you’re an AI enthusiast or just curious about the technology shaping our future, this episode is packed with knowledge, thought-provoking discussions, and practical advice for businesses exploring AI solutions.
AI accountability in healthcare for business success
Chief Healthcare Executive – Many in the public are leery of AI. By committing to transparency and accountability, health organizations can emerge as leaders in innovative and responsible AI implementation.
AI is becoming integral to healthcare, revolutionizing everything from clinical outcomes to operational efficiencies. Stakeholders across the industry—payers, providers, and pharmaceutical companies—are leveraging AI technologies like machine learning, generative AI, natural language processing, and large language models to streamline processes and close gaps in care. These innovations are transforming aspects like image analysis and claims processing through data standardization and workflow automation.
However, integrating AI into healthcare is not without its hurdles. Public trust in AI has plummeted, dropping globally from 61 percent in 2019 to just 53 percent in 2024, with many skeptical about its application.
Certifying outcomes from AI-driven practices remains unregulated territory, and transparency around how algorithms impact health data practices and decision-making is lacking. For example, AI models designed for real-time automation can quickly process flawed data, leading to erroneous outcomes. AI transparency and ethical practices must evolve toward greater accountability and compliance to advance the industry.
For healthcare executives, though, establishing and showcasing ethical and transparent AI practices goes beyond following existing guidelines. By committing to transparency and accountability, organizations can position themselves as leaders in innovative and responsible AI implementation.
To effectively demonstrate these principles, healthcare business leaders should consider the following:
Implement rigorous validation protocols: Ensure that your organization’s AI algorithms undergo thorough and unbiased third-party validation. This step is crucial for verifying the accuracy, reliability, and safety of AI outputs. Validation helps to mitigate risks and ensures that AI systems operate as intended.
Promote transparency: Be transparent about how your AI models work and how they impact data processes. This includes disclosing the use of AI to patients, payers, and providers, and providing clear explanations of the AI’s role in decision-making processes. Transparency builds trust and helps stakeholders understand the value and limitations of AI technologies.
Commit to ethical standards: Adhere to ethical guidelines and best practices in AI development and deployment. This includes addressing potential biases, ensuring data privacy, and prioritizing patient safety. Ethical AI practices foster a culture of accountability and integrity within your organization.
Engage with stakeholders: Actively involve stakeholders in the development and implementation of AI systems. Gather feedback, address concerns, and make adjustments based on input from patients, providers, and others. Engaging with both internal and external stakeholders helps to build trust and ensures that AI solutions meet needs and expectations.
Stay ahead, informed, and compliant: Keep abreast of evolving regulations and guidelines related to AI in healthcare. Ensure that your AI systems comply with all relevant regulatory requirements. Staying informed and compliant helps to mitigate legal risks and demonstrates a commitment to responsible AI use.
Q&A with Bob Lindner on why sustainably-fed AI models are the path forward
As an AI company whose models are powered by proprietary training data, we took notice of the New York Times article “When A.I.’s Output Is a Threat to A.I. Itself.” Illustrating exactly what happens when you make a copy of a copy, the article lays out the problems that arise when AI-created outputs become AI inputs, and the cycle repeats…and repeats.
Veda focuses on having the right sources and the right training data to solve provider data challenges. A data processing system is only as good as the data it's trained on; if the training data becomes stale—or is a copy of a copy—inaccurate outputs will likely result.
We asked Veda’s Chief Science & Technology Officer, Bob Lindner, PhD, for his thoughts on AI-model training, AI inputs, and what happens if you rely too heavily on one source.
Veda doesn’t use payers’ directories as inputs in its AI and data training models. Why not?
At Veda, we use what we call “sustainably-fed models.” This means we use hundreds of thousands of input sources to feed our provider directory models. However, there is one kind of source we don’t use: payer-provided directories.
Health plans spend millions of dollars of effort to build their provider directories. By lifting that data directly into Veda's AI models, we would become permanently dependent on that ongoing payer spending.
We aim to build accurate provider directories that allow the payers to stop expensive administrative efforts. A system that depends on payer-collected data isn’t useful in the long term as that data will go away.
The models would begin ingesting data that was generated by models, and quality would decay just as the New York Times article describes. Instead, we use sustainably sourced inputs that won't be contaminated or affected by the model outputs.
Veda does the work and collects first-party sources that stand independently, without requiring the payer directories as inputs.
Beyond the data integrity problems, if you are using payers’ directories to power directory cleaning for other payers, you are effectively lifting the hard work from payer 1 and using it to help payer 2, potentially running into data sharing agreement problems. This is another risk of cavalier machine learning applications—unauthorized use of the data powering them.
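The decay Bob describes is easy to demonstrate with a toy simulation: refit a simple model on its own samples each "generation," with no fresh real-world data. Because each refit is noisy (and the spread estimate is biased low), diversity tends to shrink over generations, the statistical version of a copy of a copy.

```python
import random
import statistics

# Toy model collapse: each generation "trains" (refits) on samples produced
# by the previous generation's model instead of on fresh real-world data.
random.seed(0)
mu, sigma = 0.0, 1.0  # the original, real data distribution
for generation in range(1, 21):
    samples = [random.gauss(mu, sigma) for _ in range(20)]  # small sample
    mu = statistics.mean(samples)    # refit the model on its own output
    sigma = statistics.stdev(samples)
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
# sigma tends to drift downward: the copy of a copy loses variety.
```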
Can you give us an analogy to describe how problematic this really is?
Imagine we make chocolate and we are telling Hershey that they should just sell our chocolate because it’s way better than their own. We tell them, “You could save a lot of money by not making it yourselves anymore.”
However, we make our chocolate by buying a ton of Hershey’s chocolate, remelting it with some new ingredients, and casting it into a different shape.
In the beginning, everything is fine. Hershey loves the new bar and they’re saving money because we’re doing the manufacturing. Eventually, they turn off their own production. Now, with the production turned off, we can’t make our chocolate either. The model falls apart and in the end, no one has any chocolate. A real recipe for disaster.
Why It Took Language Processing For AI To Go Mainstream
Scientists and technologists have been using AI for decades. We’ve used it to do complicated calculations and run algorithms and equations that we couldn’t previously conceive of. Your favorite streaming services have been using it for years to recommend shows and movies. But looking at media coverage of the past year, you’d think that AI was just developed. Why is mainstream AI language processing now taking off?
In late 2022, AI experienced an onslaught of media attention that made it seem like a brand-new capability. Why are legislators and regulators now racing to regulate something that has existed for about as long as color TV?
Learning To Learn
Tools powered by AI have essentially learned to learn. The language models we’re all seeing now train themselves with two primary algorithms. First, they can look at any sentence in any context and try to predict the next one.
The other way that language models learn is by guessing words in a sentence when some of those words are randomly removed. These are examples of implicitly supervised (self-supervised) training, and they're made possible because these tools use the entire corpus of the internet as training data. This is the actual breakthrough.
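For a hands-on feel for the masked-word objective, the sketch below uses the open-source Hugging Face transformers library (an illustrative choice, not something the article prescribes) to ask a pretrained model to fill in a hidden word.

```python
# Requires: pip install transformers torch
# (downloads a pretrained model on first run)
from transformers import pipeline

# Masked-word prediction: the model guesses a word that has been hidden
# from the sentence -- the second self-supervised objective described above.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill_mask("The patient booked an appointment with a [MASK]."):
    print(f'{guess["token_str"]:>12}  score={guess["score"]:.3f}')
```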
Rural healthcare challenges: How bad data deepens disparities
In rural healthcare, timely access to crucial mental healthcare and other specialized services presents a significant challenge. Over the last decade, numerous rural hospitals have shuttered, with more at risk of closure due to staffing shortages, declining reimbursement rates, diminished patient volume, and challenges attracting talent. A key part of the answer to rural healthcare's challenges is better, more accurate data.
With very few options for specialty and subspecialty providers, rural patients often endure long journeys for necessary care. According to a Pew Research Center report, the average drive to a hospital in a rural community is approximately 17 minutes, nearly 65 percent longer than the average drive time in urban areas. Such systemic failures not only exacerbate disparities but also challenge the very foundation of patient care.
A functioning rural health system relies on legions of specialty care doctors conducting outreach visits across vast geographic areas. In principle, this approach presents an efficient means to provide rural patients with access to specialty care, eliminating the need for extensive travel to major urban centers. However, the persistence of inaccurate data poses a significant barrier to achieving comprehensive access to specialty care in rural regions.
Discover Bob Lindner’s take on how bad data exacerbates rural healthcare challenges and impacts patients on Chief Healthcare Executive.
Veda’s provider data solutions help healthcare organizations reduce manual work, meet compliance requirements, and improve member experience through accurate provider directories. Select your path to accurate data.
Velocity
ROSTER AUTOMATION
Standardize and verify unstructured data with unprecedented speed and accuracy.