Doctor of Medicine, Kyoto University (2006)
Doctor of Philosophy, Kyoto University (2017)
Recurrence risk stratification of patients undergoing primary surgical resection for hepatocellular carcinoma (HCC) is an area of active investigation, and several staging systems have been proposed to optimize treatment strategies. However, as many as 70% of patients still experience tumor recurrence at 5 years post-surgery. We developed and validated a deep learning-based system (HCC-SurvNet) that provides risk scores for disease recurrence after primary resection, directly from hematoxylin and eosin-stained digital whole-slide images of formalin-fixed, paraffin-embedded liver resections. Our model achieved concordance indices of 0.724 and 0.683 on the internal and external test cohorts, respectively, exceeding the performance of the standard Tumor-Node-Metastasis classification system. The model's risk score stratified patients into low- and high-risk subgroups with statistically significant differences in their survival distributions, and was an independent risk factor for post-surgical recurrence in both test cohorts. Our results suggest that deep learning-based models can provide recurrence risk scores that may augment current patient stratification methods and help refine the clinical management of patients undergoing primary surgical resection for HCC.
DOI: 10.1038/s41598-021-81506-y
PubMed ID: 33479370
PubMed Central ID: PMC7820423
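As an illustrative sketch (not the authors' implementation), the concordance index reported for HCC-SurvNet can be computed from observed recurrence times, event indicators, and predicted risk scores; Harrell's formulation counts, over all comparable patient pairs, how often the higher predicted risk corresponds to the earlier observed recurrence. All function and variable names here are hypothetical:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable patient
    pairs in which the patient with the higher predicted risk score
    recurred earlier. Tied risk scores count as half-concordant."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        # Order the pair so `first` is the patient with the shorter follow-up.
        first, later = (i, j) if times[i] < times[j] else (j, i)
        if not events[first]:
            # Censored before the other's event time: the ordering of the
            # true recurrence times is unknown, so the pair is skipped.
            continue
        comparable += 1
        if risk_scores[first] > risk_scores[later]:
            concordant += 1
        elif risk_scores[first] == risk_scores[later]:
            concordant += 0.5
    return concordant / comparable
```

For example, with recurrence times `[2, 5, 3, 8]`, event indicators `[1, 1, 0, 1]`, and risk scores `[0.9, 0.4, 0.7, 0.5]`, four pairs are comparable and three are concordant, giving a C-index of 0.75.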
Detecting microsatellite instability (MSI) in colorectal cancer is crucial for clinical decision making, as it identifies patients with differential treatment response and prognosis. Universal MSI testing is recommended, but many patients remain untested. A critical need exists for broadly accessible, cost-efficient tools to aid patient selection for testing. Here, we investigate the potential of a deep learning-based system for automated MSI prediction directly from haematoxylin and eosin (H&E)-stained whole-slide images (WSIs).

Our deep learning model (MSINet) was developed using 100 H&E-stained WSIs (50 with microsatellite stability [MSS] and 50 with MSI) scanned at 40× magnification, each from a patient randomly selected in a class-balanced manner from the pool of 343 patients who underwent primary colorectal cancer resection at Stanford University Medical Center (Stanford, CA, USA; internal dataset) between Jan 1, 2015, and Dec 31, 2017. We internally validated the model on a holdout test set (15 H&E-stained WSIs from 15 patients; seven cases with MSS and eight with MSI) and externally validated the model on 484 H&E-stained WSIs (402 cases with MSS and 77 with MSI; 479 patients) from The Cancer Genome Atlas, containing WSIs scanned at 40× and 20× magnification. Performance was primarily evaluated using the sensitivity, specificity, negative predictive value (NPV), and area under the receiver operating characteristic curve (AUROC). We compared the model's performance with that of five gastrointestinal pathologists on a class-balanced, randomly selected subset of 40× magnification WSIs from the external dataset (20 with MSS and 20 with MSI).

The MSINet model achieved an AUROC of 0.931 (95% CI 0.771-1.000) on the holdout test set from the internal dataset and 0.779 (0.720-0.838) on the external dataset. On the external dataset, using a sensitivity-weighted operating point, the model achieved an NPV of 93.7% (95% CI 90.3-96.2), sensitivity of 76.0% (64.8-85.1), and specificity of 66.6% (61.8-71.2). On the reader experiment (40 cases), the model achieved an AUROC of 0.865 (95% CI 0.735-0.995). The mean AUROC performance of the five pathologists was 0.605 (95% CI 0.453-0.757).

Our deep learning model exceeded the performance of experienced gastrointestinal pathologists at predicting MSI on H&E-stained WSIs. Within the current universal MSI testing paradigm, such a model might contribute value as an automated screening tool to triage patients for confirmatory testing, potentially reducing the number of tested patients and thereby resulting in substantial test-related labour and cost savings.

Funding: Stanford Cancer Institute and Stanford Departments of Pathology and Biomedical Data Science.
DOI: 10.1016/S1470-2045(20)30535-0
PubMed ID: 33387492
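The screening metrics reported for MSINet can be sketched with plain Python; this is a hedged illustration, not the study's code. The AUROC is computed via the rank-sum (Mann-Whitney U) formulation, and sensitivity, specificity, and NPV are computed at a chosen score threshold (such as the sensitivity-weighted operating point described above). All names are hypothetical:

```python
def auroc(labels, scores):
    """AUROC via the rank-sum formulation: the probability that a
    randomly chosen positive case outscores a randomly chosen
    negative case, counting score ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def screen_metrics(labels, scores, threshold):
    """Sensitivity, specificity, and NPV at an operating point.
    For an MSI screening tool, the NPV reflects how often a
    below-threshold call truly corresponds to an MSS case."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv
```

Lowering the threshold trades specificity for sensitivity and NPV, which is the rationale for a sensitivity-weighted operating point in a triage setting: missed MSI cases are costlier than extra confirmatory tests.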
To automate the grading of histological images of engineered cartilage tissues using deep learning.

Cartilaginous tissues were engineered from various cell sources. Safranin O and fast green-stained histological images of the tissues were graded for chondrogenic quality according to the Modified Bern Score, which ranks images on a scale from zero to six according to the intensity of staining and cell morphology. The whole images were tiled, and the tiles were graded by two experts and grouped into four categories with the following grades: 0, 1-2, 3-4, and 5-6. Deep learning was used to train models to classify images into these histological score groups. Finally, the tile grades per donor were averaged. The root mean square errors (RMSEs) were calculated between each user and the model.

Transfer learning using a pretrained DenseNet model was selected. The RMSEs of the model predictions and 95% confidence intervals were 0.49 (0.37, 0.61) and 0.78 (0.57, 0.99) for each user, which were in the same range as the inter-user RMSE of 0.71 (0.51, 0.93).

Using supervised deep learning, we could automate the scoring of histological images of engineered cartilage and achieve results with errors comparable to inter-user error. Thus, the model could enable the automation and standardization of assessments currently used for experimental studies as well as release criteria that ensure the quality of manufactured clinical grafts and compliance with regulatory requirements.
DOI: 10.1016/j.joca.2020.12.018
PubMed ID: 33422705
Web of Science ID: 000610553800055
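The evaluation described above, averaging tile grades per donor and then comparing two graders (or a grader and the model) by RMSE, can be sketched as follows. This is a hedged illustration under assumed inputs (parallel lists of tile grades plus a donor label per tile), not the study's code:

```python
from math import sqrt
from collections import defaultdict

def donor_rmse(tile_grades_a, tile_grades_b, donors):
    """Average the tile-level grades per donor for each grader,
    then compute the RMSE between the two graders' donor-level
    mean scores."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # donor -> [sum_a, sum_b, n]
    for grade_a, grade_b, donor in zip(tile_grades_a, tile_grades_b, donors):
        sums[donor][0] += grade_a
        sums[donor][1] += grade_b
        sums[donor][2] += 1
    # Squared difference of per-donor means, averaged over donors.
    sq_err = sum((sa / n - sb / n) ** 2 for sa, sb, n in sums.values())
    return sqrt(sq_err / len(sums))
```

For example, with tile grades `[4, 6, 2, 4]` versus `[5, 5, 1, 1]` over donors `['x', 'x', 'y', 'y']`, the donor means are (5, 5) and (3, 1), giving an RMSE of √2 ≈ 1.41. Averaging per donor before computing the error prevents donors with many tiles from dominating the comparison.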