Confidence scores in AI are numbers that indicate how certain an AI tool is about an output, such as a diagnosis or a medical code. These scores are typically statistical estimates of how likely the output is to be correct, derived from the data the model was trained on. Much like a dating app's match score, they can mislead users into treating the output as more reliable than it really is. For clinicians using generative AI summaries, a displayed confidence score can lead to unintended errors if they trust the technology over their own judgment.
For example, an off-the-shelf AI tool might assign a high confidence score to a diagnosis based on its population-level training, while accounting for neither a particular clinician's patient population nor local health patterns. This leaves clinicians with an incomplete picture and can lead to mistakes.
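To make this concrete, here is a minimal, hypothetical sketch in Python of where such a percentage often comes from: the largest class probability a model assigns gets surfaced to the user as the confidence score. The candidate diagnoses and raw model outputs below are invented for illustration, not any particular vendor's method.

```python
import numpy as np

# Minimal sketch: a displayed "confidence score" is often just the largest
# class probability the model assigns (a softmax over its raw outputs).
# The candidate diagnoses and logits below are hypothetical.

def softmax(logits):
    exp = np.exp(logits - np.max(logits))   # subtract max for numerical stability
    return exp / exp.sum()

candidates = ["type 2 diabetes", "prediabetes", "metabolic syndrome"]
logits = np.array([3.1, 1.4, 0.2])          # raw model outputs for one patient

probs = softmax(logits)
top = int(np.argmax(probs))
print(f"Suggested: {candidates[top]} (confidence {probs[top]:.0%})")
# The percentage is the model's internal probability estimate, not a
# guarantee that the suggestion is correct for this patient.
```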
AI confidence scores often appear as percentages, suggesting a certain likelihood of a code or diagnosis being correct. However, for healthcare professionals not trained in data science, these numbers can seem deceptively reliable. There are four significant risks associated with relying on these scores:
1. Misunderstanding of context: Many AI tools are trained only on population-level data and don't account for a provider's specific patient demographics. This leads to broad assumptions and an incomplete picture for clinicians.
2. Overreliance on displayed scores: A 95% confidence score can make clinicians assume there's no need to investigate further, oversimplifying data complexities and encouraging them to bypass their own critical review.
3. Misrepresentation of accuracy: The intricacies of healthcare don't always match statistical probabilities. A high confidence score might match population-level data, but it can't diagnose a particular patient with certainty, creating a false sense of security (a simple calibration check, sketched after this list, can expose the gap).
4. False security generates errors: If clinicians follow an AI recommendation too closely based on high scores, they might miss other potential diagnoses, leading to delayed critical interventions or billing mistakes.
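The third and fourth risks come down to calibration: a displayed score is only meaningful if predictions labeled "95%" are actually right about 95% of the time for a site's own patients. The sketch below, which uses simulated data in place of real records, shows one way an organization could check that, bin by bin.

```python
import numpy as np

# Hypothetical calibration check: does "95% confident" actually mean
# "right about 95% of the time" for this site's patients? The simulated
# arrays below stand in for a site's own labeled history; no real data.

def calibration_report(confidences, correct, bins=5):
    """Group past predictions by displayed confidence and compare the
    stated confidence with the accuracy actually observed in each group."""
    edges = np.linspace(0.5, 1.0, bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            print(f"stated ~{confidences[mask].mean():.0%} -> "
                  f"observed {correct[mask].mean():.0%} ({mask.sum()} cases)")

rng = np.random.default_rng(0)
confidences = rng.uniform(0.5, 1.0, 500)            # scores the tool displayed
# Simulate a miscalibrated tool: real-world accuracy lags the displayed score.
correct = (rng.random(500) < confidences - 0.10).astype(float)

calibration_report(confidences, correct)
```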
To create trustworthy AI outputs, it's better to use the following methods:
1. Localize and update AI models often: Tailoring AI models to include local data, such as specific health patterns and demographics, makes the output more relevant. For example, type 2 diabetes is more prevalent in Alabama than in Massachusetts, so timely, localized data is crucial. Regular retraining and audit processes ensure the models reflect current standards and discoveries (a rough audit-and-retrain sketch follows this list).
2. Thoughtfully display outputs for the end user: Consider how each user interacts with data and design outputs to meet their needs. Instead of a single confidence score, show contextual data such as how often similar predictions have proven accurate in specific populations or settings. Comparative displays help users weigh AI recommendations more effectively (the second sketch after this list shows one way to pair scores with local context).
3. Support, but don't replace, clinical judgment: The best AI tools guide users without making decisions for them. Use stacked rankings to present a range of diagnostic possibilities with the strongest matches on top (the second sketch after this list ranks candidates this way), allowing clinicians to apply their professional judgment.
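As a rough sketch of the first method, the example below simulates a local audit-and-retrain step using scikit-learn. The synthetic "local" batches, the features, and the 0.02 drift threshold are illustrative assumptions, not a production pipeline or any specific vendor's process.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical audit-and-retrain step on synthetic "local" data.

rng = np.random.default_rng(1)

def local_batch(n=400, drifted=False):
    """Stand-in for a quarter of local encounters; after drift, a different
    feature (think: a shift in local health patterns) drives the outcome."""
    X = rng.normal(size=(n, 4))
    signal = X[:, 2] if drifted else X[:, 0]
    y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Train on an initial local batch and record a baseline for future audits.
X0, y0 = local_batch()
model = LogisticRegression(max_iter=1000).fit(X0, y0)
baseline_auc = roc_auc_score(y0, model.predict_proba(X0)[:, 1])

# Periodic audit: score the newest local batch; retrain if performance drifts.
X1, y1 = local_batch(drifted=True)
current_auc = roc_auc_score(y1, model.predict_proba(X1)[:, 1])
if baseline_auc - current_auc > 0.02:
    print(f"Drift detected (AUC {current_auc:.2f} vs {baseline_auc:.2f}); "
          "retraining on recent local data")
    model = LogisticRegression(max_iter=1000).fit(X1, y1)
```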
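And as a sketch of the second and third methods, the snippet below replaces a single confidence percentage with a stacked ranking of candidate codes, each shown next to how often similar suggestions were later confirmed at that site. The candidate codes are real ICD-10 codes, but every score and local hit rate here is invented for illustration.

```python
# Hypothetical display layer: a stacked ranking of candidate codes shown next
# to how often similar suggestions were later confirmed at this site. Every
# number (and the choice of candidates) is invented for illustration.

model_scores = {                        # model's probability per candidate
    "E11.9  Type 2 diabetes": 0.78,
    "R73.03 Prediabetes": 0.15,
    "E88.81 Metabolic syndrome": 0.07,
}

local_hit_rate = {                      # observed local accuracy of similar
    "E11.9  Type 2 diabetes": 0.64,     # past suggestions, by candidate
    "R73.03 Prediabetes": 0.71,
    "E88.81 Metabolic syndrome": 0.55,
}

print("Suggested codes (strongest match first) - for clinician review:")
for code, score in sorted(model_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {code:<28}  model score {score:.0%} | "
          f"confirmed locally {local_hit_rate[code]:.0%}")
```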
Clinicians need tech tools that support their expertise and discourage blind reliance on confidence scores. By blending AI insights with real-world context, healthcare organizations can provide safer patient care and build smoother workflows.
Brendan Smith-Elion is VP, Product Management at Arcadia. He has spent more than 20 years in the healthcare vendor space, with a passion for product management and experience in business development and BI engineering roles. At Arcadia, he's dedicated to driving transformational outcomes for clients through data-powered, value-focused workflows. He started his career at Agfa, where he led the cardiology PACS platform, and later worked at Chartwise and athenahealth. His most recent role was at Alphabet/Google, working on a healthcare data platform.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.