Joe Kiani, Masimo

As artificial intelligence becomes an integral part of health technology, it carries both incredible potential and significant risks. Nowhere is this more evident than in how AI models serve, or fail to serve, diverse patient populations. Equity is no longer just a policy concern; it is a design imperative. Joe Kiani, Masimo and Willow Laboratories founder, is among the health tech leaders urging the industry to prioritize fairness, representation and inclusivity as core functions of AI development.


Willow Laboratories’ latest innovation, Nutu™, is a platform that provides personalized health insights through real-time metabolic and behavioral data. The platform’s AI models are designed to reflect the experiences and needs of people from different races, genders, socioeconomic backgrounds, and health profiles.


The Cost of Homogenous Data

One of the primary reasons AI models fall short for marginalized communities is the lack of diversity in training data. If a model is developed using datasets that primarily represent white, male, urban populations, it tends to perform best for that group and worse for others. That gap can be dangerous: misinterpreted symptoms, inaccurate predictions and biased recommendations can reinforce existing disparities rather than close them.


Startups in the health tech space are increasingly recognizing the challenge of developing AI models that serve diverse populations. To address this, many platforms are tested with a wide variety of users, encompassing different dietary habits, stress levels, and cultural approaches to health. This inclusive data collection is essential for ensuring AI models can respond appropriately to the varied needs of people from different backgrounds. By broadening the range of inputs used in development, platforms can be more effective at delivering personalized health recommendations and improving outcomes for a broader user base.


Equity Requires Intentional Design

Creating inclusive AI models means going beyond broad demographic categories. It involves understanding how different populations interact with health care, experience stress, and interpret digital feedback. Nutu, for example, adjusts its recommendations not only based on physiological signals but also on behavioral inputs. A person’s work schedule, food availability, or caregiving responsibilities can all affect how and when they engage with health platforms.


Nutu’s AI model interprets those patterns to offer guidance that fits the user’s life, not just their numbers. Joe Kiani, Masimo founder, notes, “Our goal with Nutu is to put the power of health back into people’s hands by offering real-time, science-backed insights that make change not just possible but achievable.” This focus on achievable, user-driven change is at the heart of equitable AI design.


The Limits of One-Size-Fits-All Recommendations

Many health tools start with broad advice, such as eating fewer carbs, sleeping more, and exercising regularly. But those tips don’t account for the realities of someone working two jobs, caring for a family member or managing cultural expectations around food and wellness.


Equitable AI must recognize those nuances. Nutu’s platform adapts to each user’s baseline, tracking what works for them and adjusting accordingly. It eliminates the pressure to conform to a generic ideal and focuses instead on consistent, supportive improvements. This approach improves outcomes and increases trust. When people see that a platform understands their context, they’re more likely to stay engaged and act on its recommendations.


Building With Community, Not Just for It

A crucial part of developing equitable AI is collaborating closely with the communities it aims to serve. Developers need to engage with users directly through interviews, pilot programs, and continuous feedback. This ensures the platform reflects diverse perspectives, particularly in language, tone, and functionality.


Incorporating feedback from varied test groups helps ensure the AI feels welcoming and usable to a broad audience. Key aspects of this process include ensuring language accessibility, crafting culturally sensitive messaging, and designing adaptable interfaces. These steps go beyond regulatory compliance to demonstrate a commitment to respect and inclusivity.


Continuous Auditing and Feedback Loops

Equity isn’t a one-time checkbox. AI models must be monitored continuously to ensure they remain fair and accurate as more users join. That includes reviewing recommendation patterns for signs of bias, soliciting user feedback, and updating algorithms as needed.


Nutu builds these feedback loops into its development cycle. It doesn’t just track biometric outcomes; it also learns from user interactions. If a certain group consistently ignores a type of prompt or responds poorly to a recommendation, that information is flagged and investigated. This adaptive learning process is essential for maintaining equity at scale. Static models can’t keep up with evolving populations or changing health needs.
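As a rough illustration of the kind of check such a feedback loop might run, the sketch below flags demographic groups whose response rate to recommendation prompts falls below a threshold. The group labels, data shape, and threshold here are invented for illustration, not drawn from Nutu’s actual pipeline.

```python
from collections import defaultdict

def flag_low_engagement(interactions, min_response_rate=0.4):
    """Flag groups whose prompt-response rate falls below a threshold.

    `interactions` is a list of (group, responded) pairs, where `responded`
    indicates whether the user acted on a recommendation prompt.
    """
    shown = defaultdict(int)
    acted = defaultdict(int)
    for group, responded in interactions:
        shown[group] += 1
        if responded:
            acted[group] += 1
    # Groups below the threshold are surfaced for human review and
    # investigation, not automatically "corrected" by the model.
    return {g: acted[g] / shown[g] for g in shown
            if acted[g] / shown[g] < min_response_rate}

logs = [("A", True), ("A", True), ("A", False),
        ("B", False), ("B", False), ("B", True), ("B", False)]
print(flag_low_engagement(logs))  # group "B" acts on only 1 of 4 prompts
```

The point of a check like this is that it measures engagement per group rather than in aggregate: a healthy overall response rate can hide one community for whom the prompts simply don’t land.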


Ethical Use of Demographic Data

Demographic data plays a key role in improving representation within AI, but it must be handled responsibly. Developers need to be transparent about what data is collected, how it will be used, and the safeguards in place to protect it. On Nutu, user data is anonymized and encrypted to protect privacy without compromising performance.


It’s important that users understand how their information contributes to enhancing the platform and that they have control over what they choose to share. Building this transparency fosters trust, which is as crucial in healthcare as the functionality of the technology itself.


Measuring Success Beyond Averages

In traditional AI development, performance is often evaluated based on averages, assuming that one-size-fits-all solutions work for everyone. However, equitable AI goes beyond this approach. Developers need to assess how their models perform across various demographic groups, aiming for fairness and parity, not just overall accuracy. This might involve fine-tuning models for specific populations to ensure they work as effectively for diverse users as they do for the majority.
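One simple way to make that assessment concrete is to break accuracy out by group instead of reporting a single average. The minimal sketch below uses invented group labels and data; the equity signal it surfaces is the gap between the best- and worst-served group, which an overall accuracy figure would hide.

```python
def per_group_accuracy(records):
    """Compute overall accuracy, per-group accuracy, and the equity gap.

    `records` is a list of (group, prediction, actual) tuples.
    """
    totals, correct = {}, {}
    hits = 0
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == actual:
            correct[group] = correct.get(group, 0) + 1
            hits += 1
    overall = hits / len(records)
    by_group = {g: correct.get(g, 0) / totals[g] for g in totals}
    # The equity question is the *gap* between groups, not the average.
    gap = max(by_group.values()) - min(by_group.values())
    return overall, by_group, gap

data = [("X", 1, 1), ("X", 0, 0), ("X", 1, 1), ("X", 1, 0),
        ("Y", 0, 1), ("Y", 1, 0), ("Y", 1, 1), ("Y", 0, 0)]
overall, by_group, gap = per_group_accuracy(data)
# Overall accuracy looks acceptable, but group "Y" lags group "X".
```

Tracking the gap alongside the average is what turns “aiming for fairness and parity” from a slogan into a measurable target that can trigger the fine-tuning the section describes.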


Offering users more control over how recommendations are presented can help tailor the experience to individual needs. Ultimately, equitable AI is not about applying the same approach to everyone. It is about ensuring that everyone has an equal opportunity to succeed, no matter their background, health status, or personal circumstances. This level of consideration leads to more inclusive, actionable outcomes and a greater sense of trust and empowerment for users.


Equity as a Business Advantage

Designing for equity is not only the right approach; it also offers a competitive edge. As the digital health market continues to grow, users are increasingly drawn to platforms that align with their values and address their unique needs. Investors and partners are taking notice as well. Companies that can show they deliver equitable performance are more likely to secure partnerships with health systems, employers, and public health agencies that are focused on reducing disparities in healthcare. This shift makes equity not just a moral imperative but a business advantage.