You are looking for that next big bet. Artificial Intelligence (AI) seems to be the hottest thing in town. You’re also aware that healthcare is ripe for transformation through digital innovation. That is why you made time to be a judge at that pitch fest which has just kicked off.
Ten seconds into the first pitch, you have heard the terms “AI,” “data” and “healthcare.” Will this be your next investment, or should you flash your buzzword sheet and be skeptical? Follow this lightning-round data and AI due diligence run sheet to decide.
Minute One: The Use Case
Not everything is better with AI. There needs to be a convincing value proposition for using AI. The impact of a product is defined by the value it delivers to consumers, not by the fact that AI runs in its engine room.
Dissect the use case and focus on the problem statement. Remove any reference to AI: Is it still clear what the product delivers and why it matters? Scan for client quotes endorsing the value proposition and demonstrating consumer-driven product design and validation.
Nothing says “this product matters” better than prospective users asking for its creation and having been part of its conception. Nothing points stronger to a lack of market relevance than the absence of the consumer voice.
Now, scrutinize AI operationalization. AI is a powerful data analytics and automation tool for assisting humans with making faster and better-informed decisions, but AI cannot and should not take human decision-makers out of the loop.
The stakes of using AI in healthcare are high: Tasked with making diagnostic, prognostic and therapeutic decisions, users of AI-powered products and consumers of their outputs set a high bar for adoption and trust.
Ethical design features for AI models deployed as part of real-life workflows are fairness, explainability, transferability, accountability, robustness, protection of user data rights and a clearly stated purpose of use. All of these need to be part of product design and the MLOps cycle. Does the pitch refer to ethical, regulatory and equity aspects arising from the use of AI in the presented product? Founders need to have accounted for three things.
3. Human factors impacting the design, implementation and validation of robust workflows as the product runs in its operational environment.
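To make "fairness" concrete, here is a minimal, purely illustrative sketch of one common fairness audit, the demographic parity difference, which compares the rate of positive model outputs across patient groups. The group labels and data below are invented for illustration and are not from any product discussed here.

```python
# Illustrative fairness check: "demographic parity difference" measures the
# gap in positive-prediction rates between groups. Group names and data
# are hypothetical examples.

def demographic_parity_difference(predictions, groups):
    """Return the max difference in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels of the same length, e.g. patient cohorts
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        totals = rates.setdefault(group, [0, 0])  # [positives, count]
        totals[0] += pred
        totals[1] += 1
    positive_rates = [pos / n for pos, n in rates.values()]
    return max(positive_rates) - min(positive_rates)

# A model satisfying demographic parity has a gap of 0; audits typically
# flag models above a chosen threshold (any cutoff here would be arbitrary).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A real audit would use an established toolkit and several complementary metrics, since no single fairness measure captures every equity concern.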
Do you hear the term "disrupt"? Transformation and change are needed; "disruption," however, is counterproductive in clinical settings, where adoption of new technology is predominantly a behavioral change task. The proverbial Silicon Valley paradigm "move fast and break things" does not apply to health AI.
Now, do high-level AI tech due diligence. Is a novel neural network pitched as the differentiating feature? No AI with a chance of being published at this year's NeurIPS should show up in a product you expect to reach the market within the next three years. The risk that such AI fails to meet the required performance levels and regulatory standards is substantial.
AI innovation is a fast-moving field; what’s new today is old news in four months. The distinguishing feature of an AI-powered product is not its bleeding-edge AI model. On the contrary, the path to market leads through a scalable, intelligently and efficiently growing dataset.
Minute Two: The Data
Data is the most important component of any AI-powered product. Expect data engineering to consume over 80% of the work of delivering the product, with 20% or less going to coding AI algorithms and analytical work.
Understanding the data ecosystem will give you a deeper picture not only of the stakeholders involved but also of the maturity of data management and of how the project partners use data to deliver outcomes. The questions you should ask are:
1. Is there an MVP AI model and disclosure of any of its performance metrics?
2. How is data collected, who owns it and where is it stored? This is particularly important if personalized sensitive patient data is used. Legal definitions and privacy laws differ around the globe.
3. Does data collection scale as the solution is deployed, i.e., does the data repository grow?
4. What data governance and standardization frameworks are in place?
5. Are auditing methods in place for monitoring AI algorithms before and after their deployment and regulatory approval?
6. Is outcomes data looped back into the AI model during operation, and if so, for which purpose?
7. How and when is data used to assist the human decision maker?
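For questions 1 and 5, a credible answer involves tracked performance metrics and an auditing hook around the deployed model. The sketch below is a hypothetical, minimal example of such a check for a binary classifier: it recomputes sensitivity and specificity on a batch of labelled outcomes and flags the model against an agreed baseline. The function names, data and thresholds are illustrative assumptions, not any vendor's actual method.

```python
# Hypothetical post-deployment audit: recompute sensitivity and specificity
# on recent labelled outcomes and flag the model if it drops below agreed
# baselines. All names, data and thresholds are illustrative.

def sensitivity_specificity(y_true, y_pred):
    """Compute (sensitivity, specificity) for 0/1 labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def audit(y_true, y_pred, min_sensitivity=0.90, min_specificity=0.80):
    """Return a small audit report against example baseline thresholds."""
    sens, spec = sensitivity_specificity(y_true, y_pred)
    ok = sens >= min_sensitivity and spec >= min_specificity
    return {"sensitivity": sens, "specificity": spec, "pass": ok}

# Example batch: the model misses one positive and one negative case.
report = audit(y_true=[1, 1, 1, 1, 0, 0, 0, 0],
               y_pred=[1, 1, 1, 0, 0, 0, 0, 1])
print(report)  # sensitivity 0.75 < 0.90, so "pass" is False
```

In a regulated product this logic would sit inside a proper MLOps monitoring pipeline with alerting and versioned baselines; the point of the question is whether any such loop exists at all.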
It’s not “big data” but quality data that makes the difference. A crucial ingredient for a fair AI model is a balanced, unbiased dataset supporting the use case at hand. Increasingly, the creation of synthetic data has become a useful tool to complement and augment existing datasets.
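A quick, hypothetical sketch of the kind of sanity check that should back up any "balanced dataset" claim: compute the share of each class label and flag datasets where one class is severely under-represented. The 0.3 minimum share is an arbitrary example cutoff, not a standard.

```python
# Illustrative class-balance check for a labelled dataset. The labels and
# the 0.3 minimum-share threshold are example assumptions.
from collections import Counter

def class_shares(labels):
    """Return the fraction of the dataset belonging to each class label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def is_balanced(labels, min_share=0.3):
    """True if every class holds at least min_share of the dataset."""
    return min(class_shares(labels).values()) >= min_share

labels = ["disease"] * 90 + ["healthy"] * 10  # a 90/10 split
print(class_shares(labels))  # {'disease': 0.9, 'healthy': 0.1}
print(is_balanced(labels))   # False: consider rebalancing or augmentation
```

An imbalanced result like this is where techniques such as resampling or synthetic data generation come in, to complement and augment the existing dataset rather than replace it.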
Minute Three: The Team
It takes a village to build a health AI product. Design, implementation, integration, validation, production and maintenance of AI technology at scale is an art. A successful team in that business will be composed of data scientists, software engineers, cloud architects, UX designers, regulatory and ethics experts, AI/ML developers, clinicians and subject matter experts in the sector in which the AI product will be commercialized.
Ideally, the consumer will be embedded in product design cycles and product validation trials as well. Make sure you understand whether you are dealing with a start-up or a scale-up, as this defines which type of skills and personalities the business needs to grow.
From Judging To Coaching
The start-up scene is fiercely competitive. Pitch-fests are depicted as shark tanks and battlefields. Being a judge requires thinking quickly on one’s feet, and it takes coaching skills and personal entrepreneurial experience to deliver valuable feedback to participants.
As I discussed in a previous article on rational decision-making in emotionally charged situations, asking tough and targeted questions ensures that AI is used to deliver innovative and valuable products to consumers.
(Source: Forbes, "AI, Data And Healthcare: Buzzword Bingo Or Elevator Pitch?" forbes.com)