A poor night’s sleep might signal far more than fatigue—it could foreshadow diseases striking years later. Stanford Medicine researchers have unveiled SleepFM, the first artificial intelligence foundation model that deciphers physiological signals from a single overnight sleep study to forecast risks for more than 100 health conditions. Published January 6 in Nature Medicine, the study leverages 585,000 hours of polysomnography data from 65,000 participants, transforming routine sleep labs into predictive powerhouses.
“We record an amazing number of signals when we study sleep,” said Emmanuel Mignot, MD, PhD, the Craig Reynolds Professor in Sleep Medicine and co-senior author. “It’s a kind of general physiology that we study for eight hours in a subject who’s completely captive. It’s very data rich.” Polysomnography captures brain waves via electroencephalography, heart rhythms through electrocardiography, respiratory airflow, muscle activity, eye movements, and leg twitches—streams long underutilized beyond basic sleep staging.
“From an AI perspective, sleep is relatively understudied,” noted James Zou, PhD, associate professor of biomedical data science and co-senior author. “There’s a lot of other AI work that’s looking at pathology or cardiology, but relatively little looking at sleep, despite sleep being such an important part of life.” SleepFM, akin to large language models like ChatGPT, segments data into five-second epochs and learns inter-signal relationships.
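As a rough illustration of that first step, a multichannel overnight recording can be chopped into fixed five-second epochs with a simple reshape. This is a sketch only; the channel count and sampling rate below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def segment_epochs(signal, sampling_rate_hz=128, epoch_seconds=5):
    """Split a (channels, samples) recording into five-second epochs,
    returning (epochs, channels, samples_per_epoch) and dropping any
    trailing partial epoch."""
    samples_per_epoch = sampling_rate_hz * epoch_seconds
    n_epochs = signal.shape[1] // samples_per_epoch
    trimmed = signal[:, : n_epochs * samples_per_epoch]
    stacked = trimmed.reshape(signal.shape[0], n_epochs, samples_per_epoch)
    return stacked.transpose(1, 0, 2)

# e.g., 3 channels of 61 seconds sampled at a hypothetical 128 Hz
x = np.zeros((3, 61 * 128))
print(segment_epochs(x).shape)  # (12, 3, 640)
```

Each resulting epoch bundles every channel's samples for the same five-second window, which is what lets the model learn relationships across signals.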
Harmonizing the Body’s Nocturnal Symphony
The model’s breakthrough lies in ‘leave-one-out contrastive learning,’ a technique that masks one data stream—say, breathing—and tasks the AI with reconstructing it from others, forging a unified ‘language of sleep.’ “SleepFM is essentially learning the language of sleep,” Zou explained. “One of the technical advances that we made in this work is to figure out how to harmonize all these different data modalities so they can come together to learn the same language.”
Initially validated on core tasks, SleepFM matched or surpassed state-of-the-art models in sleep-stage classification and apnea severity diagnosis, as detailed in the Stanford Medicine News Center release. The real innovation emerged when paired with longitudinal health records from Stanford’s Sleep Medicine Center, founded in 1970 by William Dement, MD, PhD, the father of sleep medicine. Researchers linked 35,000 patients’ studies (1999-2024, ages 2-96) to up to 25 years of outcomes.
Scanning over 1,000 disease categories, SleepFM pinpointed 130 predictable with a concordance index (C-index) above 0.75, meaning the model correctly ranks which of any two comparable patients will develop the condition first at least 75% of the time. Standouts included Parkinson’s (0.89), prostate cancer (0.89), breast cancer (0.87), dementia (0.85), all-cause mortality (0.84), hypertensive heart disease (0.84), and heart attack (0.81), per SciTechDaily.
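The C-index behind these numbers has a simple pairwise reading, which a naive implementation makes concrete (toy data, for illustration only):

```python
def concordance_index(times, risks, events):
    """Fraction of comparable patient pairs where the higher predicted
    risk belongs to the patient whose event occurs sooner."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i has an observed event
            # before patient j's follow-up time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

# toy cohort: risk scores perfectly ordered by event time
times  = [2, 5, 7, 10]
events = [1, 1, 1, 0]      # last patient is censored (no event observed)
risks  = [0.9, 0.7, 0.4, 0.1]
print(concordance_index(times, risks, events))  # → 1.0
```

A C-index of 0.5 is chance-level ranking, 1.0 is perfect; SleepFM's 0.81 to 0.89 range on the standout conditions sits well above the 0.7 threshold some existing clinical models clear.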
Out-of-Sync Signals as Disease Harbingers
Interpretability tools revealed no lone signal suffices; predictive power stems from discordance, like a dormant brain paired with an alert heart. “The most information we got for predicting disease was by contrasting the different channels,” Mignot said. “Body constituents that were out of sync… seemed to spell trouble.” Heart signals dominated circulatory forecasts, brain waves mental health risks, yet multimodal fusion yielded peak accuracy.
Co-lead authors Rahul Thapa, a Stanford biomedical data science PhD student and Knight-Hennessy scholar, and Magnus Ruud Kjaer from Technical University of Denmark, drove the effort alongside collaborators from Copenhagen University Hospital, BioSerenity, University of Copenhagen, and Harvard Medical School. Funding came from NIH grant R01HL161253, Knight-Hennessy Scholars, and Chan-Zuckerberg Biohub.
Reactions rippled across platforms. “AI can predict 130 diseases from 1 night of sleep,” tweeted Zou, amassing over 11,000 likes. Eric Topol, MD, highlighted the paper via infographic, while X users like @ShiningScience noted its prowess in cancers, circulatory woes, and mental disorders, citing the Nature Medicine DOI.
From Lab Goldmine to Clinical Frontier
Polysomnography’s richness—overlooked amid AI’s focus on imaging or genomics—positions sleep studies for revival. Becker’s Hospital Review emphasized SleepFM’s edge, noting that models with C-indices of 0.7 are already deemed clinically viable. Yet hurdles loom: explainability, integration with wearables, and ethical risk disclosure.
“It doesn’t explain that to us in English,” Zou admitted. “But we have developed different interpretation techniques to figure out what the model is looking at.” Future iterations eye consumer devices, per Fox News, where Dr. Harvey Castro cautioned: “A significant signal doesn’t equal ready medicine.”
Stanford Report and ScienceDaily amplified the trove’s potential, while Reuters tied it to broader AI health scans. On X, @StanfordAIMI promoted it as a tool for cancer, Parkinson’s, and heart risks.
Reshaping Preventive Care Horizons
For industry insiders, SleepFM signals a paradigm shift: sleep as a holistic biomarker, rivaling bloodwork or genetics. MarkTechPost detailed its Cox proportional hazards layer for time-to-event modeling atop patient embeddings, incorporating age and sex. External validation across cohorts like MrOS and MESA confirmed robustness.
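That time-to-event head can be approximated by a Cox partial-likelihood loss over a linear projection of patient embeddings plus covariates. The sketch below is a simplified stand-in with synthetic data and hypothetical weights, not the published model:

```python
import numpy as np

def cox_partial_nll(risk_scores, times, events):
    """Negative log Cox partial likelihood (Breslow form, no tie
    handling). Each observed event's risk score is contrasted against
    everyone still at risk (follow-up time >= that event's time)."""
    order = np.argsort(-times)                   # descending follow-up time
    scores = risk_scores[order]
    evts = events[order]
    # running log-sum-exp gives the log of each position's risk set
    log_risk_set = np.logaddexp.accumulate(scores)
    return -np.sum((scores - log_risk_set)[evts == 1]) / max(evts.sum(), 1)

rng = np.random.default_rng(1)
n, dim = 32, 16
emb = rng.normal(size=(n, dim))                  # sleep-study embeddings
covariates = rng.normal(size=(n, 2))             # e.g., age and sex
w = rng.normal(size=dim + 2) * 0.1               # hypothetical linear Cox layer
risk = np.concatenate([emb, covariates], axis=1) @ w
times = rng.exponential(5.0, size=n)             # synthetic follow-up times
events = rng.integers(0, 2, size=n).astype(float)
loss = cox_partial_nll(risk, times, events)
```

Minimizing a loss of this form learns which embedding directions raise the hazard for each condition, while naturally handling censored patients who never develop it during follow-up.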
This foundation model unlocks downstream applications, from risk stratification to personalized interventions. As Zou reflected, “We were pleasantly surprised that for a pretty diverse set of conditions, the model is able to make informative predictions.” With polysomnography underused yet scalable, SleepFM could embed in health systems, per Inside Precision Medicine, prioritizing high-risk patients amid clinician shortages.
The work, rooted in Stanford’s half-century archive, underscores the value of clinical data troves once unlocked by AI. As X buzz from @bensmithlive (3,500+ likes) proclaimed, the patterns had long lurked unseen; now one night’s rest can forecast health futures, heralding precision medicine’s next era.


WebProNews is an iEntry Publication