In Week #219 of the Doctor Penguin newsletter, the following papers caught our attention:
1. Drug Discovery. Idiopathic pulmonary fibrosis (IPF), a progressive and debilitating interstitial lung disease, is associated with a high mortality rate due to the lack of effective therapies.
In just 18 months, Ren et al. completed the process from target discovery to preclinical candidate nomination for IPF using a generative AI-driven drug-discovery pipeline. The pipeline identified TNIK as an anti-fibrotic target and generated a highly specific TNIK inhibitor, INS018_055. The compound was synthesized and demonstrated selective anti-fibrotic activity in multiple mouse and rat models of fibrosis. A phase I clinical trial involving 78 healthy participants supported the safety and tolerability of INS018_055. Unlike other target-discovery approaches, this pipeline was developed using a "time machine" approach, enabling it to search for targets across multiple diseases and aging: models are trained on data published before a specific time point and validated on their ability to predict targets that gained attention in the pharmaceutical industry after that point. In summary, this study showcases the potential of generative AI platforms to provide time-efficient solutions for discovering target-specific drugs.
Read Paper | Nature Biotechnology
2. Neurodegenerative Disorders. A new dataset describes the clinical signs and symptoms associated with various brain disorders.
Mekkes et al. constructed clinical disease trajectories by mining medical record summaries from 3,042 brain donors of the Netherlands Brain Bank (NBB) with various brain disorders, capturing 84 neuropsychiatric signs and symptoms through a computational pipeline of parsers and natural language processing techniques. These trajectories can help address fundamental research questions, such as identifying clinical subtypes and investigating heterogeneity within disorders, ultimately contributing to a more personalized approach to medicine. The authors demonstrated the value of the dataset through temporal analyses across dementia subtypes, predictive modeling of end-stage neurodegenerative diseases, and subtype identification in dementia, multiple sclerosis, and Parkinson's disease. The datasets and ontologies are accessible through their website (https://nnd.app.rug.nl).
Read Paper | Nature Medicine
3. Large Language Model. Clinical notes are typically filled with technical language and abbreviations that make them difficult to read and understand for patients and their care partners.
Zaretsky et al. conducted a cross-sectional study assessing GPT-4's ability to convert inpatient discharge summaries into patient-friendly language and format. They compared the readability and understandability of the original discharge summaries from 50 patients with the transformed, patient-friendly versions generated by the LLM. Two physicians reviewed each patient-friendly discharge summary for accuracy on a 6-point scale. In 54 of 100 reviews (54.0%), the summaries received the highest possible rating of 6, and in 56 reviews (56.0%) the summaries were judged entirely complete. The patient-friendly discharge summaries consistently met the recommended sixth- or seventh-grade reading level according to conventional readability standards. However, 18 reviews flagged safety concerns, primarily omissions of key information, along with several inaccurate statements (hallucinations).
Read Paper | JAMA Network Open
4. Dermatology. What is the current state of dermatology mobile apps with AI features?
Wongvibulsin et al. analyzed 41 direct-to-consumer dermatology apps with AI features and found several alarming patterns. Notably, none of the apps were approved by the US Food and Drug Administration (FDA), and only two included disclaimers about the lack of regulatory approval. The apps showed poor transparency regarding their development methods, validation processes, AI model effectiveness, the data used for development, and the handling of user images. This lack of transparency raises concerns about potential biases, inappropriate recommendations, and user privacy. To address these issues, app developers should, at a minimum, disclose the specific AI algorithms used; the datasets used for training, testing, and/or validation; the extent of clinician input; the existence of supporting publications; how user-submitted images are used and handled; and the measures implemented to safeguard data privacy.
Read Paper | JAMA Dermatology
-- Emma Chen, Pranav Rajpurkar & Eric Topol