Our AI research aims to learn from the journey of every patient with the logic of an expert and the scale of the machine.
Over 80% of patient data remains unstructured. Mendel's mission is to ingest both structured and unstructured data and synthesize the intricacies of clinical language into a coherent understanding of each patient.
A patient's journey is more akin to a book than a page. While generic LLMs grapple with limited context windows, Mendel focuses on processing thousands of pages per patient. This allows us to decode a coherent narrative by connecting the dots amid conflicting information.
In healthcare, hallucinations are unacceptable. Any failure of a system should be consistent and traceable, not merely a product of chance or probability.
Every output our models produce is accompanied by a clear rationale. Users can trace the system’s reasoning and dive deeper into its suggestions, fostering trust.
LLMs represent cutting-edge deep learning techniques that fuel today's natural language processing (NLP) applications. While they are powerful, standard LLMs fall short in enterprise clinical use cases.
LLMs can converse fluently but often lack deep understanding, leading to superficial reasoning and occasional "hallucinations" in their responses. This makes them unsuitable for critical clinical applications without the necessary controls and optimizations.
Purely rule-based symbolic systems face the opposite problem: simply enumerating every conceivable rule isn't a feasible way to replicate the nuanced judgment and adaptive learning of human clinicians.
Instead of providing a holistic understanding of patient care, such systems can become rigid and overly complex, missing out on the intricacies and evolving nature of medical knowledge and practice.
We've spent 5 years investing in R&D to develop a system capable of clinical reasoning through a hybrid approach that combines deep learning with symbolic AI.
We’re employing processes commonly used in logic, mathematics, and computer science to approach clinical thinking as a form of algebra.
Our research aims to sidestep the pitfalls of symbolic systems by using our proprietary generative symbolic learning method, paired with large language modeling.
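To make the hybrid idea concrete, here is a minimal, hypothetical sketch (not Mendel's actual implementation; the variable names, rule set, and data shapes are all assumptions): a neural extractor proposes a candidate fact from the record, and a deterministic symbolic layer either accepts it with a traceable rationale or rejects it outright, so failures are consistent rather than probabilistic.

```python
# Hypothetical hybrid check: an LLM-style extractor proposes a candidate
# clinical fact, and symbolic rules validate it deterministically.
from dataclasses import dataclass

@dataclass
class Candidate:
    variable: str  # e.g. "stage" (illustrative variable name)
    value: str     # e.g. "IV"
    evidence: str  # source sentence the extractor points to

# Symbolic constraints: each rule returns (ok, rationale).
RULES = {
    "stage": lambda v: (v in {"0", "I", "II", "III", "IV"},
                        f"stage '{v}' checked against a closed staging vocabulary"),
}

def validate(candidate: Candidate) -> dict:
    """Accept a candidate only if the applicable rule passes,
    attaching a traceable rationale either way."""
    rule = RULES.get(candidate.variable)
    if rule is None:
        # No symbolic coverage: defer rather than guess.
        return {"accepted": False,
                "rationale": "no rule coverage; deferred to human review"}
    ok, rationale = rule(candidate.value)
    return {"accepted": ok,
            "rationale": rationale,
            "evidence": candidate.evidence}

result = validate(Candidate("stage", "IV",
                            "Imaging consistent with stage IV disease."))
# result["accepted"] is True, with a rationale and evidence attached.
```

The design point is that the symbolic layer can only veto or confirm, never hallucinate: any value outside the closed vocabulary is rejected the same way every time.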
In a recent study, we found that using large language models (LLMs) like GPT-3.5 alone reduced performance by 64.72% on average across 13 medical variables, relative to Mendel’s hybrid system. Combining symbolic clinical reasoning models with LLMs significantly improves interpretation of electronic medical records.
Using actual medical records annotated by a dedicated team of physicians and symbolic modeling, we are well-positioned to drive innovation in clinical AI.
This hybrid approach is the bedrock of our fully integrated AI suite of clinical-specific data processing products.
We bring together industry-leading NLP models for OCR, document segmentation, document classification, named-entity extraction, relation extraction, de-identification, and LLM-based retrieval and generation (e.g., question answering) with clinical reasoning models to serve compliant intelligence from millions of patient journeys.
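The stages above can be sketched as a simple chain of record-transforming functions. This is an illustrative toy, not Mendel's pipeline: the stage names, the `[NAME:…]` tagging convention, and the keyword-based classifier are all assumptions made for the example.

```python
import re

def deidentify(record: dict) -> dict:
    # Toy de-identification: mask anything tagged as a name
    # (assumes an upstream step has bracketed PHI as [NAME:...]).
    record["text"] = re.sub(r"\[NAME:[^\]]*\]", "[REDACTED]", record["text"])
    return record

def classify(record: dict) -> dict:
    # Toy document classifier keyed on a single keyword.
    record["doc_type"] = ("pathology" if "biopsy" in record["text"].lower()
                          else "other")
    return record

def run_pipeline(record: dict, stages) -> dict:
    """Apply each stage in order, threading the record through."""
    for stage in stages:
        record = stage(record)
    return record

out = run_pipeline({"text": "Biopsy report for [NAME:J. Doe]."},
                   [deidentify, classify])
# out["doc_type"] == "pathology", and the patient name is masked.
```

Ordering matters in a chain like this: de-identification runs before classification so that no downstream stage ever sees raw identifiers.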
Our R&D team consists of top-caliber AI scientists and clinical experts. We’re building novel technology with a novel approach: training AI scientists to think like physicians, and physicians to think like AI scientists.