index.json
1 lines (1 loc) · 255 KB
[{"authors":"geert","categories":null,"content":"Geert Litjens is an assistant professor of Computational Pathology at the Radboud University Medical Center. His research is at the intersection of machine learning, medical imaging, and oncology. He co-chairs the Computational Pathology Group, which develops automated machine learning systems for cancer detection, biomarker discovery and quantification, and improved prognostication.\nHe is also the developer of the ASAP software package for analyzing and visualizing whole-slide images and (co-)organizer of several high-profile challenges in medical imaging such as PROMISE12, CAMELYON, and PANDA. He (co-)authored over 60 publications in medical, imaging, and machine learning conferences and journals.\n","date":-62135596800,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":-62135596800,"objectID":"d2023b12df26468a958bad1ad1f46dcc","permalink":"https://geertlitjens.nl/authors/geert/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/geert/","section":"authors","summary":"Geert Litjens is an assistant professor of Computational Pathology at the Radboud University Medical Center. His research is at the intersection of machine learning, medical imaging, and oncology. He co-chairs the Computational Pathology Group, which develops automated machine learning systems for cancer detection, biomarker discovery and quantification, and improved prognostication.","tags":null,"title":"Geert Litjens","type":"authors"},{"authors":[],"categories":[],"content":"Overview The central PANCAIM concept is to successfully exploit available genomic and clinical data to improve personalized medicine of pancreatic cancer. PANCAIM’s concept is unique as it integrates the whole spectrum of genomics with radiomics and pathomics, the three future pillars of personalized medicine. The integration of these three modalities is very challenging, both in the clinic and for AI. 
PANCAIM uses an explainable, data-efficient, two-stage AI approach. AI biomarkers transform the unimodal data domains into interpretable likelihoods of intermediate disease features. A second AI layer merges the biomarkers and provides an integrated assessment of prognosis, prediction, and monitoring of therapy response to assist in clinical decision making. PANCAIM builds on four key components of AI in healthcare: data providers, clinical expertise, AI developers, and MedTech companies that connect to data and bring AI to healthcare. Data quantity and quality are the main factors for successful AI. Partners provide eleven pan-European repositories of almost 6000 patients that are open to ongoing accrual. SME Collective Minds builds the GDPR-compliant data platform that hosts the data and provides a trusted connection to healthcare for further, sustainable data accrual. SME TheHyve builds tooling to connect to more genomic repositories (EOSC Health). Six pan-European academic centers provide clinical expertise across all modalities and help realize a curated, high-quality annotated dataset. Partners also include expert AI healthcare researchers across all clinical modalities with a proven track record. Finally, Siemens Healthineers provides its AI expertise and tooling to bring AI into healthcare for clinical validation and swift clinical integration in 3000 healthcare institutes.\n Integration of CT and pathology Tasks Pathology is a diagnostic gold standard for PDAC. However, the information it provides is limited to definitive confirmation of the diagnostic entity and the size and locoregional extent of the tumour. The only tumour-intrinsic feature is the grade of differentiation, a historical concept that is of no relevance for the management of PDAC. The rich information contained in tumour morphology integrates the results of interactions at all levels (genetic, epigenetic, microenvironmental), but is left totally unexplored. 
It is only with the advent of AI that attempts at deciphering this rich information have been undertaken. Recent studies on a variety of other cancers show that AI can extract information from routine pathology tissue sections that relates to underlying genomic aberrations and allows prognostic discrimination between patients who are currently lumped within the same clinical stage. Although this has been shown and applied in, for example, breast cancer [Jaber et al, Breast Cancer Research, 2020] and mesothelioma [Courtiol et al, Nature Medicine 2019], these insights have not yet been applied to PDAC. A key reason is the limited availability of large datasets that combine radiology, pathology, genetics, and clinical follow-up. With the wealth of data accessible in its repository, PANCAIM is in a unique position to decipher and fully exploit the prognostic and predictive information that hitherto has remained unmined in surgical and biopsy specimens from PDAC patients. Specifically, it is our ambition to use novel machine learning techniques such as neural image compression [Tellez et al, TPAMI 2019] to elucidate key pathomics features which can help predict prognosis, treatment response, and genetic alterations in PDAC (Figure 9, Figure 8). 
By additionally applying the latest advances in explainable artificial intelligence, such as attention-weighting and saliency mapping, PANCAIM furthers the acceptance and integration of these pathomic features in clinical practice.\n","date":1641911520,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641911520,"objectID":"52ccaaeb2dd85fe36982faabea895972","permalink":"https://geertlitjens.nl/project/eu-pancaim/","publishdate":"2022-01-11T15:32:00+01:00","relpermalink":"/project/eu-pancaim/","section":"project","summary":"Pancreatic cancer AI for genomics and personalized Medicine","tags":["Active","EU","Pancreatic cancer","Radiology","Pathology","Genetics","Machine Learning"],"title":"PANCAIM","type":"project"},{"authors":[],"categories":[],"content":"Overview BIGPICTURE, a pathology-led consortium, has the vision to become the catalyst of the digital transformation in pathology. Our mission is to create the first European GDPR-compliant platform, in which both quality-controlled Whole Slide Imaging (WSI) data and advanced artificial intelligence (AI) algorithms will exist. The BIGPICTURE platform will be built on existing assets of ELIXIR EU data infrastructure, including the federated European Genome-phenome Archive (EGA) technology for managing the exchange of confidential information between contributors and users. The consortium will use Cytomine, an established open-source, cross-platform framework, to develop unique tools for access to WSI, including annotations and visualisation of algorithm results, while we will develop new and generic models to facilitate AI development and mining of WSI data. By engaging and building consensus with all the relevant stakeholders, we will contribute to the development of a regulatory framework for digital slides and AI-based methods. Finally, BIGPICTURE envisions sustainability of its platform through a community-based model which relies on reciprocity, value creation and inclusiveness. 
To achieve our vision, we have brought together Europe’s leaders in the field of computational pathology who have access to national and European high-performance computing infrastructures as well as Europe’s fully digitalised pathology departments. Additionally, the consortium currently has access to approximately 4.5 million clinical WSIs covering a wide range of indications through 17 partners and 23 third parties from the largest European and international pathology and trial groups. Our consortium is further strengthened by the presence of the European Society of Pathology, Digital Pathology Association, FDA and 9 SMEs as partners, while we are further supported by professional societies and patient advocates.\n","date":1641911512,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641911512,"objectID":"9efb3b4ee70b9b5ec2ee79491ff52c72","permalink":"https://geertlitjens.nl/project/imi-bigpicture/","publishdate":"2022-01-11T15:31:52+01:00","relpermalink":"/project/imi-bigpicture/","section":"project","summary":"European AI-integrated Digital Pathology Repository","tags":["Active","IMI","Pathology","Machine Learning"],"title":"IMI BigPicture","type":"project"},{"authors":[],"categories":[],"content":"Overview An ever-increasing number of treatment options is available to prostate cancer patients. Although this is a positive development, it also increases the complexity of selecting the right therapy for the individual patient. The fusion of in vivo and ex vivo information streams, such as from radiology and pathology, offers a promising avenue for improved models of disease physiology and progression, and consequently, better strategies for treatment selection. 
However, to build accurate models, large sets of fused radiology/pathology data are needed, which have been impossible to obtain because acquiring such datasets is time-consuming and expensive.\nIn this project, we propose an artificial-intelligence-based platform that can automatically combine large archival sets of digitized histopathological slides and multi-parametric MRI (mp-MRI) and leverage them to build a disease model which will improve 1) identification of clinically significant prostate cancer, 2) selection of patients for active surveillance, and 3) prediction of lutetium-PSMA treatment success.\n Flowchart of the interlocking work packages Tasks This project encompasses five objectives: I) establish a collection of combined mp-MRI and digitized prostatectomy specimens from 1350 patients, II) build an automated reconstruction algorithm for the generation of 3D tissue volumes from 2D digitized histopathology slides, III) develop registration techniques for spatial alignment of ex vivo histopathology to in vivo mp-MRI, IV) learn mp-MRI / histopathology correlations using unique deep streaming generative models, and V) evaluate the learned correlations for improved diagnostics, active surveillance, and lutetium-PSMA treatment selection.\nThe impact of the project will not just be improved diagnostic and treatment decisions for patients, but can also be the starting point of an entirely new field of cross-medical-specialty research; the developed platform can be leveraged for other cancer types and even non-oncological diseases.\n","date":1641911396,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641911396,"objectID":"9483401cbade790bc59f8c14a630e478","permalink":"https://geertlitjens.nl/project/vidi-icarus/","publishdate":"2022-01-11T15:29:56+01:00","relpermalink":"/project/vidi-icarus/","section":"project","summary":"Improving CAre for prostate cancer patients through artificial-intelligence-based Radiology/pathology 
fUSion","tags":["Active","NWO","Prostate cancer","Radiology","Pathology","Machine Learning"],"title":"ICARUS","type":"project"},{"authors":[],"categories":[],"content":"Overview Computational pathology, the application of advanced machine learning (ML) methods to digitized tissue sections, can revolutionize cancer care and research. Specifically, I propose a paradigm shift by moving away from the currently used manual grading systems towards ML-supported patient prognostication. However, significant knowledge gaps are hindering the field of computational pathology. We do not know how to: 1) effectively leverage global and local information in whole-slide images (WSIs), 2) identify pan-cancer and cancer-specific prognostic features, and 3) make ML models explainable and interpretable.\n Flowchart of the interlocking work packages Tasks This ambitious project will address these critical knowledge gaps by building on the novel stochastic streaming gradient descent (SSGD) method developed in my group. First, I will push SSGD to the next level by integrating hierarchical hyperparameter optimization and separable convolutions. Second, to identify pan-cancer and cancer-specific prognostic biomarkers, I will integrate innovative multi-task and cross-task learning algorithms with SSGD. Third, I will leverage the latest advances in concept learning and natural language processing to endow deep neural networks with unprecedented transparency and explainability. Last, I will validate our developed methodology in the largest dataset of oncological WSIs globally.\nBy publicly releasing all developed tools and data, the proposed project will have a scientific multiplier effect on the fields of computational pathology, machine learning, and oncology. Specifically, the enhanced SSGD method can open new research areas for ML that require data across scales, such as remote sensing. My novel approach to ML explainability can encourage the adoption of innovative technologies, such as self-driving cars. 
Last, the derived cancer-specific and pan-cancer biomarkers will have a tremendous impact on the quest to understand cancer development and progression, and ultimately on public health and the economy.\n","date":1641911378,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641911378,"objectID":"82988a050c86dc341208e021c5d6d125","permalink":"https://geertlitjens.nl/project/erc-aiscap/","publishdate":"2022-01-11T15:29:38+01:00","relpermalink":"/project/erc-aiscap/","section":"project","summary":"Interpretable Artificial Intelligence across Scales for Next-Generation Cancer Prognostics","tags":["Active","ERC","Pathology","Machine Learning"],"title":"AISCAP","type":"project"},{"authors":[],"categories":[],"content":"Overview Optimal treatment decisions for cancer patients are hampered by variability in grading among pathologists. When there is a suspicion of cancer, typically a tissue biopsy is taken. This biopsy is stained with hematoxylin and eosin (H\u0026E) and evaluated by a pathologist for the presence and aggressiveness (i.e. grading) of the tumor. Professional organizations have drafted standardized guidelines on how this grading should be performed; however, there is still significant inter- and intra-observer variation among pathologists. Artificial intelligence, specifically through deep learning, has been shown to increase efficiency and consistency in histopathological diagnosis, and, recently, to perform grading at the expert level. We have lab-validated prototypes of such algorithms that could significantly impact clinical practice, but have not yet been tested in routine diagnostics. The aim of this project is three-fold.\n Example of automated prostate cancer grading, including bar plots of grades assigned by multiple pathologists Tasks First, we will further develop our existing grading algorithms for prostate and breast cancer, to enhance robustness to variance in data sources and disease (sub)types. 
Second, we will conduct studies to determine the most efficient way to integrate algorithms into the routine workflow. Last, we will evaluate the most promising workflow prospectively in pilot lines at both peripheral and academic centers to assess performance. Currently, there are no algorithms commercially available that can perform pathologist-level grading of cancer. Thus, this project can have both significant societal and economic impact. From a societal perspective, we can make expert grading available at locations without access to subspecialized pathologists. Outside the Netherlands, there are pathologist shortages in countries such as India and China, where these algorithms could have even more impact. Thirona is already ISO 13485-certified and has experience with CE certification and FDA 510(k) clearance. As such, upon completion of this project, algorithms could be commercially available quickly, offering the potential for a significant disruption of the healthcare market.\n Examples of prostate biopsy, transurethral resection, and prostatectomy ","date":1641911361,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641911361,"objectID":"b7745b896385008e59f836054e7b010a","permalink":"https://geertlitjens.nl/project/ppp-deepgrading/","publishdate":"2022-01-11T15:29:21+01:00","relpermalink":"/project/ppp-deepgrading/","section":"project","summary":"AI-supported cancer grading for more efficient and accurate histopathological diagnosis","tags":["Active","HealthHolland","Prostate cancer","Breast cancer","Pathology","Machine Learning"],"title":"DeepGrading","type":"project"},{"authors":["Wouter Bulten","Kimmo Kartasalo","Po-Hsuan Cameron Chen","Peter Ström","Hans Pinckaers","Kunal Nagpal","Yuannan Cai","David F. Steiner","Hester van Boven","Robert Vink","Christina Hulsbergen-van de Kaa","Jeroen van der Laak","Mahul B. Amin","Andrew J. Evans","Theodorus van der Kwast","Robert Allan","Peter A. 
Humphrey","Henrik Grönberg","Hemamali Samaratunga","Brett Delahunt","Toyonori Tsuzuki","Tomi Häkkinen","Lars Egevad","Maggie Demkin","Sohier Dane","Fraser Tan","Masi Valkonen","Greg S. Corrado","Lily Peng","Craig H. Mermel","Pekka Ruusuvuori","Geert Litjens","Martin Eklund","Américo Brilhante","Aslı Çakır","Xavier Farré","Katerina Geronatsiou","Vincent Molinié","Guilherme Pereira","Paromita Roy","Günter Saile","Paulo G. O. Salles","Ewout Schaafsma","Joëlle Tschui","Jorge Billoch-Lima","Emíio M. Pereira","Ming Zhou","Shujun He","Sejun Song","Qing Sun","Hiroshi Yoshihara","Taiki Yamaguchi","Kosaku Ono","Tao Shen","Jianyi Ji","Arnaud Roussel","Kairong Zhou","Tianrui Chai","Nina Weng","Dmitry Grechka","Maxim V. Shugaev","Raphael Kiminya","Vassili Kovalev","Dmitry Voynov","Valery Malyshev","Elizabeth Lapo","Manuel Campos","Noriaki Ota","Shinsuke Yamaoka","Yusuke Fujimoto","Kentaro Yoshioka","Joni Juvonen","Mikko Tukiainen","Antti Karlsson","Rui Guo","Chia-Lun Hsieh","Igor Zubarev","Habib S. T. Bukhar","Wenyuan Li","Jiayun Li","William Speier","Corey Arnold","Kyungdoc Kim","Byeonguk Bae","Yeong Won Kim","Hong-Seok Lee","Jeonghyuk Park","the PANDA challenge consortium"],"categories":[],"content":"","date":1640995200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1642693767,"objectID":"942918826f548f9a6fc2546336afd686","permalink":"https://geertlitjens.nl/publication/bult-22/","publishdate":"2022-01-20T15:49:26.665921Z","relpermalink":"/publication/bult-22/","section":"publication","summary":"Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to be accelerators for medical imaging innovations, but their impact is hindered by lack of reproducibility and independent validation. 
With this in mind, we organized the PANDA challenge--the largest histopathology competition to date, joined by 1,290 developers--to catalyze development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted κ, 95% confidence interval (CI), 0.840-0.884) and 0.868 (95% CI, 0.835-0.900) with expert uropathologists. Successful generalization across different patient populations, laboratories and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.","tags":[],"title":"Artificial intelligence for diagnosis and Gleason grading of prostate cancer: the PANDA challenge","type":"publication"},{"authors":["Leander van Eekelen","Hans Pinckaers","Michiel van den Brand","Konnie M. Hebeda","Geert Litjens"],"categories":[],"content":"","date":1635724800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641904985,"objectID":"9ffe16a0ff176132db5db6e6eca2aaa4","permalink":"https://geertlitjens.nl/publication/eeke-21/","publishdate":"2022-01-11T12:43:00.029432Z","relpermalink":"/publication/eeke-21/","section":"publication","summary":"Cellularity estimation forms an important aspect of the visual examination of bone marrow biopsies. In clinical practice, cellularity is estimated by eye under a microscope, which is rapid, but subjective and subject to inter- and intraobserver variability. In addition, there is little consensus in the literature on the normal variation of cellularity with age. Digital image analysis may be used for more objective quantification of cellularity. 
As such, we developed a deep neural network for the segmentation of six major cell and tissue types in digitized bone marrow trephine biopsies. Using this segmentation, we calculated the overall bone marrow cellularity in a series of biopsies from 130 patients across a wide age range. Using intraclass correlation coefficients (ICC), we measured the agreement between the quantification by the neural network and visual estimation by two pathologists and compared it to baseline human performance. We also examined the age-related changes of cellularity and cell lineages in bone marrow and compared our results to those found in the literature. The network was capable of accurate segmentation (average accuracy and Dice score of 0.95 and 0.76, respectively). There was good neural network-pathologist agreement on cellularity measurements (ICC=0.78, 95% CI 0.58-0.85). We found a statistically significant downward trend for cellularity, myelopoiesis and megakaryocytes with age in our cohort. The mean cellularity began at approximately 50% in the third decade of life and then decreased by approximately 2% per decade to 40% in the seventh and eighth decade, but the normal range was very wide (30-70%).","tags":["Bone marrow; cellularity; deep learning; digital pathology; segmentation"],"title":"Using deep learning for quantification of cellularity and cell lineages in bone marrow biopsies and comparison to normal age-related variation.","type":"publication"},{"authors":["Kimmo Kartasalo","Wouter Bulten","Brett Delahunt","Po-Hsuan Cameron Chen","Hans Pinckaers","Henrik Olsson","Xiaoyi Ji","Nita Mulliqi","Hemamali Samaratunga","Toyonori Tsuzuki","Johan Lindberg","Mattias Rantalainen","Carolina Wählby","Geert Litjens","Pekka Ruusuvuori","Lars Egevad","Martin 
Eklund"],"categories":[],"content":"","date":1625097600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641904986,"objectID":"679ffc5387d24ef7bac4e569e09271ab","permalink":"https://geertlitjens.nl/publication/kart-21-a/","publishdate":"2022-01-11T12:43:05.548457Z","relpermalink":"/publication/kart-21-a/","section":"publication","summary":"Diagnosis and Gleason grading of prostate cancer in biopsies are critical for the clinical management of men with prostate cancer. Despite this, the high grading variability among pathologists leads to the potential for under- and overtreatment. Artificial intelligence (AI) systems have shown promise in assisting pathologists to perform Gleason grading, which could help address this problem. In this mini-review, we highlight studies reporting on the development of AI systems for cancer detection and Gleason grading, and discuss the progress needed for widespread clinical implementation, as well as anticipated future developments. PATIENT SUMMARY: This mini-review summarizes the evidence relating to the validation of artificial intelligence (AI)-assisted cancer detection and Gleason grading of prostate cancer in biopsies, and highlights the remaining steps required prior to its widespread clinical implementation. We found that, although there is strong evidence to show that AI is able to perform Gleason grading on par with experienced uropathologists, more work is needed to ensure the accuracy of results from AI systems in diverse settings across different patient populations, digitization platforms, and pathology laboratories.","tags":["Artificial intelligence; Gleason grading; Histopathology; Prostate cancer; Uropathology"],"title":"Artificial Intelligence for Diagnosis and Gleason Grading of Prostate Cancer in Biopsies-Current Status and Next Steps.","type":"publication"},{"authors":["O. Turner","B. Knight","A. Zuraw","G. Litjens","D. 
Rudmann"],"categories":null,"content":"","date":1622505600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1622505600,"objectID":"78b9ab3e22e84ae736765982090f93dd","permalink":"https://geertlitjens.nl/publication/turn-21/","publishdate":"2021-06-09T08:35:55.623367Z","relpermalink":"/publication/turn-21/","section":"publication","summary":"The 2019 manuscript by the Special Interest Group on Digital Pathology and Image Analysis of the Society of Toxicologic Pathology suggested that a synergism between artificial intelligence (AI) and machine learning (ML) technologies and digital toxicologic pathology would improve the daily workflow and future impact of toxicologic pathologists globally. Now 2 years later, the authors of this review consider whether, in their opinion, there is any evidence that supports that thesis. Specifically, we consider the opportunities and challenges for applying ML (the study of computer algorithms that are able to learn from example data and extrapolate the learned information to unseen data) algorithms in toxicologic pathology and how regulatory bodies are navigating this rapidly evolving field. Although we see similarities with the \"Last Mile\" metaphor, the weight of evidence suggests that toxicologic pathologists should approach ML with an equal dose of skepticism and enthusiasm. There are increasing opportunities for impact in our field that leave the authors cautiously excited and optimistic. Toxicologic pathologists have the opportunity to critically evaluate ML applications with a \"call-to-arms\" mentality. Why should we be late adopters? 
There is ample evidence to encourage engagement, growth, and leadership in this field.","tags":["artificial intelligence; deep learning; digital toxicologic pathology; machine learning; neural networks"],"title":"Mini Review: The Last Mile-Opportunities and Challenges for Machine Learning in Digital Toxicologic Pathology.","type":"publication"},{"authors":["Michela Antonelli","Annika Reinke","Spyridon Bakas","Keyvan Farahani","Annette Kopp-Schneider","Bennett A. Landman","Geert Litjens","Bjoern Menze","Olaf Ronneberger","Ronald M. Summers","Bram van Ginneken","Michel Bilello","Patrick Bilic","Patrick F. Christ","Richard K. G. Do","Marc J. Gollub","Stephan H. Heckers","Henkjan Huisman","William R. Jarnagin","Maureen K. McHugo","Sandy Napel","Jennifer S. Goli Pernicka","Kawal Rhode","Catalina Tobon-Gomez","Eugene Vorontsov","Henkjan Huisman","James A. Meakin","Sebastien Ourselin","Manuel Wiesenfarth","Pablo Arbelaez","Byeonguk Bae","Sihong Chen","Laura Daza","Jianjiang Feng","Baochun He","Fabian Isensee","Yuanfeng Ji","Fucang Jia","Namkug Kim","Ildoo Kim","Dorit Merhof","Akshay Pai","Beomhee Park","Mathias Perslev","Ramin Rezaiifar","Oliver Rippel","Ignacio Sarasua","Wei Shen","Jaemin Son","Christian Wachinger","Liansheng Wang","Yan Wang","Yingda Xia","Daguang Xu","Zhanwei Xu","Yefeng Zheng","Amber L. Simpson","Lena Maier-Hein","M. Jorge Cardoso"],"categories":[],"content":"","date":1622505600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641904987,"objectID":"c59ff973c57628d5f8db814ab26f3394","permalink":"https://geertlitjens.nl/publication/anto-21/","publishdate":"2022-01-11T12:43:06.598145Z","relpermalink":"/publication/anto-21/","section":"publication","summary":"International challenges have become the de facto standard for comparative assessment of image analysis algorithms given a specific task. 
Segmentation is so far the most widely investigated medical image processing task, but the various segmentation challenges have typically been organized in isolation, such that algorithm development was driven by the need to tackle a single specific clinical problem. We hypothesized that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. To investigate the hypothesis, we organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities. The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data and small objects. The MSD challenge confirmed that algorithms with consistently good performance on a set of tasks preserved their good average performance on a different set of previously unseen tasks. Moreover, by monitoring the MSD winner for two years, we found that this algorithm continued generalizing well to a wide range of other clinical problems, further confirming our hypothesis. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to non-AI experts.","tags":["eess.IV","cs.CV","cs.LG","68T07"],"title":"The Medical Segmentation Decathlon","type":"publication"},{"authors":["J. van der Laak","G. Litjens","F. 
Ciompi"],"categories":null,"content":"","date":1619827200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1619827200,"objectID":"d8ad8afd884456302918d819d7201a94","permalink":"https://geertlitjens.nl/publication/laak-21/","publishdate":"2021-06-09T08:35:55.609472Z","relpermalink":"/publication/laak-21/","section":"publication","summary":"Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.","tags":null,"title":"Deep learning in histopathology: the path to the clinic.","type":"publication"},{"authors":["T. de Bel","J. Bokhorst","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1619827200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1619827200,"objectID":"2c47bf3ec3fcc5882af473389a55a0fc","permalink":"https://geertlitjens.nl/publication/bel-21/","publishdate":"2021-06-09T08:35:55.630403Z","relpermalink":"/publication/bel-21/","section":"publication","summary":"Variation between stains in histopathology is commonplace across different medical centers. This can have a significant effect on the reliability of machine learning algorithms. In this paper, we propose to reduce performance variability by using cycle-consistent generative adversarial (CycleGAN) networks to remove staining variation. 
We improve upon the regular CycleGAN by incorporating residual learning. We comprehensively evaluate the performance of our stain transformation method and compare its usefulness in addition to extensive data augmentation to enhance the robustness of tissue segmentation algorithms. Our steps are as follows: first, we train a model to perform segmentation on tissue slides from a single source center, while heavily applying augmentations to increase robustness to unseen data. Second, we evaluate and compare the segmentation performance on data from other centers, both with and without applying our CycleGAN stain transformation. We compare segmentation performances in a colon tissue segmentation and kidney tissue segmentation task, covering data from 6 different centers. We show that our transformation method improves the overall Dice coefficient by 9% over the non-normalized target data and by 4% over traditional stain transformation in our colon tissue segmentation task. For kidney segmentation, our residual CycleGAN increases performance by 10% over no transformation and around 2% compared to the non-residual CycleGAN.","tags":["Adversarial networks; Histopathology; Stain normalization; Structure segmentation"],"title":"Residual cyclegan for robust domain transformation of histopathological tissue slides.","type":"publication"},{"authors":["Annika Reinke","Matthias Eisenmann","Minu D. Tizabi","Carole H. Sudre","Tim Rädsch","Michela Antonelli","Tal Arbel","Spyridon Bakas","M. Jorge Cardoso","Veronika Cheplygina","Keyvan Farahani","Ben Glocker","Doreen Heckmann-Nötzel","Fabian Isensee","Pierre Jannin","Charles E. Kahn","Jens Kleesiek","Tahsin Kurc","Michal Kozubek","Bennett A. Landman","Geert Litjens","Klaus Maier-Hein","Bjoern Menze","Henning Müller","Jens Petersen","Mauricio Reyes","Nicola Rieke","Bram Stieltjes","Ronald M. Summers","Sotirios A. 
Tsaftaris","Bram van Ginneken","Annette Kopp-Schneider","Paul Jäger","Lena Maier-Hein"],"categories":[],"content":"","date":1617235200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641904988,"objectID":"8ea8febbb37d4326ebcafdcf90988c73","permalink":"https://geertlitjens.nl/publication/rein-21-a/","publishdate":"2022-01-11T12:43:07.646097Z","relpermalink":"/publication/rein-21-a/","section":"publication","summary":"While the importance of automatic image analysis is increasing at an enormous pace, recent meta-research revealed major flaws with respect to algorithm validation. Specifically, performance metrics are key for objective, transparent and comparative performance assessment, but relatively little attention has been given to the practical pitfalls when using specific metrics for a given image analysis task. A common mission of several international initiatives is therefore to provide researchers with guidelines and tools to choose the performance metrics in a problem-aware manner. This dynamically updated document has the purpose to illustrate important limitations of performance metrics commonly applied in the field of image analysis. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts.","tags":["cs.CV","eess.IV"],"title":"Common Limitations of Image Processing Metrics: A Picture Story","type":"publication"},{"authors":["M. Balkenhol","F. Ciompi","Ż. Świderska-Chadaj","R. van de Loo","M. Intezar","I. Otte-Höller","D. Geijs","J. Lotz","N. Weiss","T. de Bel","G. Litjens","P. Bult","J. 
van der Laak"],"categories":null,"content":"","date":1617235200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1617235200,"objectID":"108168ed2262cba5ab819d7c8faf14fe","permalink":"https://geertlitjens.nl/publication/balk-21/","publishdate":"2021-06-09T08:35:55.603363Z","relpermalink":"/publication/balk-21/","section":"publication","summary":"The tumour microenvironment has been shown to be a valuable source of prognostic information for different cancer types. This holds in particular for triple negative breast cancer (TNBC), a breast cancer subtype for which currently no prognostic biomarkers are established. Although different methods to assess tumour infiltrating lymphocytes (TILs) have been published, it remains unclear which method (marker, region) yields the most optimal prognostic information. In addition, to date, no objective TILs assessment methods are available. For this proof of concept study, a subset of our previously described TNBC cohort (n = 94) was stained for CD3, CD8 and FOXP3 using multiplex immunohistochemistry and subsequently imaged by a multispectral imaging system. Advanced whole-slide image analysis algorithms, including convolutional neural networks (CNN), were used to register unmixed multispectral images and corresponding H\u0026E sections, to segment the different tissue compartments (tumour, stroma) and to detect all individual positive lymphocytes. Densities of positive lymphocytes were analysed in different regions within the tumour and its neighbouring environment and correlated to relapse free survival (RFS) and overall survival (OS). We found that for all TILs markers the presence of a high density of positive cells correlated with an improved survival. None of the TILs markers was superior to the others. The results of TILs assessment in the various regions did not show marked differences between each other. The negative correlation between TILs and survival in our cohort is in line with previous studies. 
Our results provide directions for optimizing TILs assessment methodology.","tags":["Artificial intelligence; Multispectral imaging; Prognosis; Triple negative breast cancer; Tumour infiltrating lymphocytes"],"title":"Optimized tumour infiltrating lymphocyte assessment for triple negative breast cancer prognostics.","type":"publication"},{"authors":["H. Pinckaers","W. Bulten","J. Van der Laak","G. Litjens"],"categories":null,"content":"","date":1614556800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1614556800,"objectID":"af83b06acfef205c698056f61e1b6902","permalink":"https://geertlitjens.nl/publication/pinc-21/","publishdate":"2021-06-09T08:35:55.618365Z","relpermalink":"/publication/pinc-21/","section":"publication","summary":"Prostate cancer is the most prevalent cancer among men in Western countries, with 1.1 million new diagnoses every year. The gold standard for the diagnosis of prostate cancer is a pathologist's evaluation of prostate tissue. To potentially assist pathologists, deep-learning-based cancer detection systems have been developed. Many of the state-of-the-art models are patch-based convolutional neural networks, as the use of entire scanned slides is hampered by memory limitations on accelerator cards. Patch-based systems typically require detailed, pixel-level annotations for effective training. However, such annotations are seldom readily available, in contrast to the clinical reports of pathologists, which contain slide-level labels. As such, developing algorithms which do not require manual pixel-wise annotations, but can learn using only the clinical report would be a significant advancement for the field. In this paper, we propose to use a streaming implementation of convolutional layers, to train a modern CNN (ResNet-34) with 21 million parameters end-to-end on 4712 prostate biopsies. The method enables the use of entire biopsy images at high-resolution directly by reducing the GPU memory requirements by 2.4 TB. 
We show that modern CNNs, trained using our streaming approach, can extract meaningful features from high-resolution images without additional heuristics, reaching similar performance as state-of-the-art patch-based and multiple-instance learning methods. By circumventing the need for manual annotations, this approach can function as a blueprint for other tasks in histopathological diagnosis. The source code to reproduce the streaming models is available at https://github.com/DIAGNijmegen/pathology-streaming-pipeline.","tags":null,"title":"Detection of prostate cancer in whole-slide images through end-to-end training with image-level labels.","type":"publication"},{"authors":["D. Tellez","G. Litjens","J. van der Laak","F. Ciompi"],"categories":null,"content":"","date":1612137600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1612137600,"objectID":"406fbc308fa587c0c074518addcbcc0e","permalink":"https://geertlitjens.nl/publication/tell-21/","publishdate":"2021-06-09T08:35:55.589373Z","relpermalink":"/publication/tell-21/","section":"publication","summary":"We propose Neural Image Compression (NIC), a two-step method to build convolutional neural networks for gigapixel image analysis solely using weak image-level labels. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, retaining high-level information while suppressing pixel-level noise. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations. We compared several encoding strategies, namely reconstruction error minimization, contrastive training and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. We found that NIC can exploit visual cues associated with image-level labels successfully, integrating both global and local visual information. 
Furthermore, we visualized the regions of the input gigapixel images that the CNN attended to, and confirmed that they overlapped with annotations from human experts.","tags":null,"title":"Neural Image Compression for Gigapixel Histopathology Image Analysis.","type":"publication"},{"authors":["Annika Reinke","Matthias Eisenmann","Minu Dietlinde Tizabi","Carole H Sudre","Tim Rädsch","Michela Antonelli","Tal Arbel","Spyridon Bakas","Jorge Cardoso","Veronika Cheplygina","Keyvan Farahani","Ben Glocker","Doreen Heckmann-Nötzel","Fabian Isensee","Pierre Jannin","Charles Kahn","Jens Kleesiek","Tahsin Kurc","Michal Kozubek","Bennett A Landman","Geert Litjens","Klaus Maier-Hein","Anne Louise Martel","Henning Müller","Jens Petersen","Mauricio Reyes","Nicola Rieke","Bram Stieltjes","Ronald M Summers","Sotirios A Tsaftaris","Bram van Ginneken","Annette Kopp-Schneider","Paul Jäger","Lena Maier-Hein"],"categories":[],"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1641904989,"objectID":"4ec8f066c1744c0c23f90c6849d865e4","permalink":"https://geertlitjens.nl/publication/rein-21/","publishdate":"2022-01-11T12:43:08.676176Z","relpermalink":"/publication/rein-21/","section":"publication","summary":"","tags":[],"title":"Common limitations of performance metrics in biomedical image analysis","type":"publication"},{"authors":["D. Geijs","H. Pinckaers","A. Amir","G. Litjens"],"categories":null,"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1609459200,"objectID":"bf681b6e8bce83231a8dc00ed0572c36","permalink":"https://geertlitjens.nl/publication/geij-21/","publishdate":"2021-06-09T08:35:55.594364Z","relpermalink":"/publication/geij-21/","section":"publication","summary":"The high incidence of BCC skin cancer has caused the workload of pathologists to rise to unprecedented levels. 
Acquiring outlined annotations for training deep learning models classifying BCC is often tedious and time-consuming. End-to-end learning provides relief in labelling data by using a single label to predict a clinical outcome. We compared the performance of multiple-instance learning (MIL) and streaming for detecting BCC in 420 slides collected from 72 BCC-positive patients. This resulted in ROC AUCs of 0.96 and 0.98 for streaming and MIL, respectively. Saliency and probability maps showed that both methods were capable of classifying BCC in an end-to-end way with single labels.","tags":null,"title":"End-to-end classification on basal-cell carcinoma histopathology whole-slides images","type":"publication"},{"authors":["K. Faryna","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1609459200,"objectID":"809666351eb09ffdc1dd8207cba42b28","permalink":"https://geertlitjens.nl/publication/fary-21/","publishdate":"2021-06-09T08:35:55.641364Z","relpermalink":"/publication/fary-21/","section":"publication","summary":"","tags":null,"title":"Tailoring automated data augmentation to H\u0026E-stained histopathology","type":"publication"},{"authors":["Z. Swiderska-Chadaj","T. de Bel","L. Blanchet","A. Baidoshvili","D. Vossen","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1598918400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1598918400,"objectID":"2a94f3abf50cb4f43d1816ead048ef52","permalink":"https://geertlitjens.nl/publication/swid-20-b/","publishdate":"2020-10-20T20:31:23.658973Z","relpermalink":"/publication/swid-20-b/","section":"publication","summary":"Algorithms can improve the objectivity and efficiency of histopathologic slide analysis. 
In this paper, we investigated the impact of scanning systems (scanners) and cycle-GAN-based normalization on algorithm performance, by comparing different deep learning models to automatically detect prostate cancer in whole-slide images. Specifically, we compare U-Net, DenseNet and EfficientNet. Models were developed on a multi-center cohort with 582 WSIs and subsequently evaluated on two independent test sets including 85 and 50 WSIs, respectively, to show the robustness of the proposed method to differing staining protocols and scanner types. We also investigated the application of normalization as a pre-processing step by two techniques, the whole-slide image color standardizer (WSICS) algorithm, and a cycle-GAN based method. For the two independent datasets we obtained an AUC of 0.92 and 0.83 respectively. After rescanning the AUC improves to 0.91/0.88 and after style normalization to 0.98/0.97. In the future our algorithm could be used to automatically pre-screen prostate biopsies to alleviate the workload of pathologists.","tags":null,"title":"Impact of rescanning and normalization on convolutional neural network performance in multi-center, whole-slide classification of prostate cancer","type":"publication"},{"authors":["Wouter Bulten","Maschenka Balkenhol","Jean-Joël Awoumou Belinga","Américo Brilhante","Aslı Çakır","Lars Egevad","Martin Eklund","Xavier Farré","Katerina Geronatsiou","Vincent Molinié","Guilherme Pereira","Paromita Roy","Günter Saile","Paulo Salles","Ewout Schaafsma","Joëlle Tschui","Anne-Marie Vos","Brett Delahunt","Hemamali Samaratunga","David J. Grignon","Andrew J. Evans","Daniel M. Berney","Chin-Chen Pan","Glen Kristiansen","James G. Kench","Jon Oxley","Katia R. M. Leite","Jesse K. McKenney","Peter A. Humphrey","Samson W. Fine","Toyonori Tsuzuki","Murali Varma","Ming Zhou","Eva Comperat","David G. Bostwick","Kenneth A. Iczkowski","Cristina Magi-Galluzzi","John R. 
Srigley","Hiroyuki Takahashi","Theo van der Kwast","Hester van Boven","Robert Vink","Jeroen van der Laak","Christina Hulsbergen-van der Kaa","Geert Litjens"],"categories":[],"content":"","date":159624e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1603277985,"objectID":"c87452665986e337c09375ce78bc9228","permalink":"https://geertlitjens.nl/publication/bult-20-a/","publishdate":"2020-10-21T10:59:44.836543Z","relpermalink":"/publication/bult-20-a/","section":"publication","summary":"The Gleason score is the most important prognostic marker for prostate cancer patients, but it suffers from significant observer variability. Artificial intelligence (AI) systems based on deep learning can achieve pathologist-level performance at Gleason grading. However, the performance of such systems can degrade in the presence of artifacts, foreign tissue, or other anomalies. Pathologists integrating their expertise with feedback from an AI system could result in a synergy that outperforms both the individual pathologist and the system. Despite the hype around AI assistance, existing literature on this topic within the pathology domain is limited. We investigated the value of AI assistance for grading prostate biopsies. A panel of 14 observers graded 160 biopsies with and without AI assistance. Using AI, the agreement of the panel with an expert reference standard increased significantly (quadratically weighted Cohen's kappa, 0.799 vs. 0.872; p = 0.019). On an external validation set of 87 cases, the panel showed a significant increase in agreement with a panel of international experts in prostate pathology (quadratically weighted Cohen's kappa, 0.733 vs. 0.786; p = 0.003). In both experiments, on a group-level, AI-assisted pathologists outperformed the unassisted pathologists and the standalone AI system. 
Our results show the potential of AI systems for Gleason grading, but more importantly, show the benefits of pathologist-AI synergy.","tags":[],"title":"Artificial Intelligence Assistance Significantly Improves Gleason Grading of Prostate Biopsies by Pathologists","type":"publication"},{"authors":["H. Pinckaers","B. van Ginneken","G. Litjens"],"categories":null,"content":"","date":159624e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":159624e4,"objectID":"51c7b4dabdd650696365a639fd7301db","permalink":"https://geertlitjens.nl/publication/pinc-20/","publishdate":"2020-10-20T20:31:23.674944Z","relpermalink":"/publication/pinc-20/","section":"publication","summary":"Due to memory constraints on current hardware, most convolutional neural networks (CNN) are trained on sub-megapixel images. For example, most popular datasets in computer vision contain images much less than a megapixel in size (0.09MP for ImageNet and 0.001MP for CIFAR-10). In some domains such as medical imaging, multi-megapixel images are needed to identify the presence of disease accurately. We propose a novel method to directly train CNNs using any input image size end-to-end. This method exploits the locality of most operations in modern CNNs by performing the forward and backward pass on smaller tiles of the image. In this work, we show a proof of concept using images of up to 66-megapixels (8192x8192), saving approximately 50GB of memory per image. Using two public challenge datasets, we demonstrate that CNNs can learn to extract relevant information from these large images and benefit from increasing resolution. We improved the area under the receiver-operating characteristic curve from 0.580 (4MP) to 0.706 (66MP) for metastasis detection in breast cancer (CAMELYON17). We also obtained a Spearman correlation metric approaching state-of-the-art performance on the TUPAC16 dataset, from 0.485 (1MP) to 0.570 (16MP). 
The code to reproduce a subset of the experiments is available at https://github.com/DIAGNijmegen/StreamingCNN.","tags":null,"title":"Streaming convolutional neural networks for end-to-end learning with multi-megapixel images","type":"publication"},{"authors":["G. van Leenders","T. van der Kwast","D. Grignon","A. Evans","G. Kristiansen","C. Kweldam","G. Litjens","J. McKenney","J. Melamed","N. Mottet","G. Paner","H. Samaratunga","I. Schoots","J. Simko","T. Tsuzuki","M. Varma","A. Warren","T. Wheeler","S. Williamson","K. Iczkowski","I. Members"],"categories":null,"content":"","date":159624e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":159624e4,"objectID":"3ee5dad8a4e87ef9eee9c2246ae09500","permalink":"https://geertlitjens.nl/publication/leen-20/","publishdate":"2020-10-20T20:31:23.692538Z","relpermalink":"/publication/leen-20/","section":"publication","summary":"Five years after the last prostatic carcinoma grading consensus conference of the International Society of Urological Pathology (ISUP), accrual of new data and modification of clinical practice require an update of current pathologic grading guidelines. This manuscript summarizes the proceedings of the ISUP consensus meeting for grading of prostatic carcinoma held in September 2019, in Nice, France. Topics brought to consensus included the following: (1) approaches to reporting of Gleason patterns 4 and 5 quantities, and minor/tertiary patterns, (2) an agreement to report the presence of invasive cribriform carcinoma, (3) an agreement to incorporate intraductal carcinoma into grading, and (4) individual versus aggregate grading of systematic and multiparametric magnetic resonance imaging-targeted biopsies. 
Finally, developments in the field of artificial intelligence in the grading of prostatic carcinoma and future research perspectives were discussed.","tags":["Biopsy; Carcinoma","pathology; Carcinoma","Ductal","pathology; Consensus; Humans; Male; Neoplasm Grading","standards; Neoplasm Invasiveness; Pathology","Clinical","standards; Predictive Value of Tests; Prostatic Neoplasms","pathology; Urology","standards"],"title":"The 2019 International Society of Urological Pathology (ISUP) Consensus Conference on Grading of Prostatic Carcinoma.","type":"publication"},{"authors":null,"categories":null,"content":"Interview with Wouter Bulten and myself on AI in medical imaging for popular science magazine Kijk, with a specific focus on our automated Gleason grading algorithm.\n","date":1588291200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1588291200,"objectID":"da112c188feac03441b79dfe77d80aa2","permalink":"https://geertlitjens.nl/media/kijk-gleason/","publishdate":"2019-04-23T21:32:50+02:00","relpermalink":"/media/kijk-gleason/","section":"media","summary":"Interview with Wouter Bulten and myself on AI in medical imaging for popular science magazine Kijk, with a specific focus on our [automated Gleason grading algorithm](/publication/bult-20/).","tags":[],"title":"Interview with Kijk on AI in Medical Imaging","type":"media"},{"authors":null,"categories":null,"content":"AI in medical imaging is often pitched as either an aid or a threat to the physician. But what is the impact for the patient? Can they benefit from the introduction of AI and if so, in which way? 
In this talk I will sketch the impact of AI on prostate cancer diagnosis from the perspective of the patient.\n","date":1582243200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1582243200,"objectID":"ddf6eaa478e7f63654def75ad26dfcd2","permalink":"https://geertlitjens.nl/talk/visit-patient-foundation/","publishdate":"2020-05-11T14:21:55+02:00","relpermalink":"/talk/visit-patient-foundation/","section":"talk","summary":"AI in medical imaging is often pitched as either an aid or a threat to the physician. But what is the impact for the patient? Can they benefit from the introduction of AI and if so, in which way? In this talk I will sketch the impact of AI on prostate cancer diagnosis from the perspective of the patient.","tags":[],"title":"Visit Dutch Prostate Cancer Patient Foundation","type":"talk"},{"authors":null,"categories":null,"content":"Article covering some of the latest advances of AI in medical imaging, with specific focus on our JAMA Oncology paper on automated Gleason grading (on the cover of the Science section).\n","date":1580515200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1580515200,"objectID":"22ed5d98fded16222e635135f10e979a","permalink":"https://geertlitjens.nl/media/nrc-gleason/","publishdate":"2020-05-11T00:00:00Z","relpermalink":"/media/nrc-gleason/","section":"media","summary":"Article covering some of the latest advances of AI in medical imaging, with specific focus on our [JAMA Oncology](/publication/bult-20/) paper on automated Gleason grading (on the cover of the Science section).","tags":[],"title":"NRC article on artificial intelligence in medicine","type":"media"},{"authors":null,"categories":null,"content":"Interview together with Wouter Bulten on our publication in JAMA Oncology on using AI for automated Gleason grading.\n 
","date":1578528e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1578528e3,"objectID":"d007d8c50c99ad8b9f7be89719c4a23d","permalink":"https://geertlitjens.nl/media/bnr-interview-gleason/","publishdate":"2020-05-11T21:08:25+02:00","relpermalink":"/media/bnr-interview-gleason/","section":"media","summary":"Interview together with Wouter Bulten on our publication in JAMA Oncology on using AI for automated Gleason grading.","tags":[],"title":"BNR Wetenschap Vandaag - Interview AI for prostate cancer grading","type":"media"},{"authors":null,"categories":null,"content":"Interview in the NOS op 3 tech podcast on AI for medical imaging\n ","date":1578528e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1578528e3,"objectID":"ab66731b5979b721d9f43069d6c46590","permalink":"https://geertlitjens.nl/media/nos-techpodcast-gleason/","publishdate":"2020-05-11T00:00:00Z","relpermalink":"/media/nos-techpodcast-gleason/","section":"media","summary":"Interview in the NOS op 3 tech podcast on AI for medical imaging","tags":[],"title":"NOS Tech Podcast on AI for Gleason Grading","type":"media"},{"authors":["Z. Swiderska-Chadaj","K. Hebeda","M. van den Brand","G. Litjens"],"categories":null,"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577836800,"objectID":"5e29ee5196d237603c249ab373a8d8d4","permalink":"https://geertlitjens.nl/publication/swid-20-c/","publishdate":"2020-10-20T20:31:23.643942Z","relpermalink":"/publication/swid-20-c/","section":"publication","summary":"In patients with suspected lymphoma, the tissue biopsy provides lymphoma confirmation, classification, and prognostic factors, including genetic changes. We developed a deep learning algorithm to detect MYC rearrangement in scanned histological slides of diffuse large B-cell lymphoma. The H\u0026E-stained slides of 287 cases from 11 hospitals were used for training and evaluation. 
The overall sensitivity to detect MYC rearrangement was 0.93 and the specificity 0.52, showing that prediction of MYC translocation based on morphology alone was possible in 93% of MYC-rearranged cases. This would allow a simple and fast prescreening, saving approximately 34% of genetic tests with the current algorithm.","tags":null,"title":"Artificial intelligence to detect MYC translocation in slides of diffuse large B-cell lymphoma","type":"publication"},{"authors":["W. Bulten","H. Pinckaers","H. van Boven","R. Vink","T. de Bel","B. van Ginneken","J. van der Laak","C. Hulsbergen-van de Kaa","G. Litjens"],"categories":null,"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577836800,"objectID":"295adbb5b62475170a8d287048af23b6","permalink":"https://geertlitjens.nl/publication/bult-20/","publishdate":"2020-05-11T12:24:14.129177Z","relpermalink":"/publication/bult-20/","section":"publication","summary":"BACKGROUND: The Gleason score is the strongest correlating predictor of recurrence for prostate cancer, but has substantial inter-observer variability, limiting its usefulness for individual patients. Specialised urological pathologists have greater concordance; however, such expertise is not widely available. Prostate cancer diagnostics could thus benefit from robust, reproducible Gleason grading. We aimed to investigate the potential of deep learning to perform automated Gleason grading of prostate biopsies. METHODS: In this retrospective study, we developed a deep-learning system to grade prostate biopsies following the Gleason grading standard. The system was developed using randomly selected biopsies, sampled by the biopsy Gleason score, from patients at the Radboud University Medical Center (pathology report dated between Jan 1, 2012, and Dec 31, 2017). 
A semi-automatic labelling technique was used to circumvent the need for manual annotations by pathologists, using pathologists' reports as the reference standard during training. The system was developed to delineate individual glands, assign Gleason growth patterns, and determine the biopsy-level grade. For validation of the method, a consensus reference standard was set by three expert urological pathologists on an independent test set of 550 biopsies. Of these 550, 100 were used in an observer experiment, in which the system, 13 pathologists, and two pathologists in training were compared with respect to the reference standard. The system was also compared to an external test dataset of 886 cores, which contained 245 cores from a different centre that were independently graded by two pathologists. FINDINGS: We collected 5759 biopsies from 1243 patients. The developed system achieved a high agreement with the reference standard (quadratic Cohen's kappa 0.918, 95% CI 0.891-0.941) and scored highly at clinical decision thresholds: benign versus malignant (area under the curve 0.990, 95% CI 0.982-0.996), grade group of 2 or more (0.978, 0.966-0.988), and grade group of 3 or more (0.974, 0.962-0.984). In an observer experiment, the deep-learning system scored higher (kappa 0.854) than the panel (median kappa 0.819), outperforming 10 of 15 pathologist observers. On the external test dataset, the system obtained a high agreement with the reference standard set independently by two pathologists (quadratic Cohen's kappa 0.723 and 0.707) and within inter-observer variability (kappa 0.71). INTERPRETATION: Our automated deep-learning system achieved a performance similar to pathologists for Gleason grading and could potentially contribute to prostate cancer diagnosis. The system could potentially assist pathologists by screening biopsies, providing second opinions on grade group, and presenting quantitative measurements of volume percentages. 
FUNDING: Dutch Cancer Society.","tags":null,"title":"Automated deep-learning system for Gleason grading of prostate cancer using biopsies: a diagnostic study","type":"publication"},{"authors":["J. Linmans","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577836800,"objectID":"9f9182822fedac3b38c61b247a45ecd1","permalink":"https://geertlitjens.nl/publication/linm-20/","publishdate":"2020-10-20T20:31:23.706608Z","relpermalink":"/publication/linm-20/","section":"publication","summary":"Successful clinical implementation of deep learning in medical imaging depends, in part, on the reliability of the predictions. Specifically, the system should be accurate for classes seen during training while providing calibrated estimates of uncertainty for abnormalities and unseen classes. To efficiently estimate predictive uncertainty, we propose the use of multi-head CNNs (M-heads). We compare its performance to related and more prevalent approaches, such as deep ensembles, on the task of out-of-distribution (OOD) detection. To this end, we evaluate models trained to discriminate normal lymph node tissue from breast cancer metastases, on lymph nodes containing lymphoma. We show the ability to discriminate between in-distribution lymph node tissue and lymphoma by evaluating the AUROC based on the uncertainty signal. Here, the best performing multi-head CNN (91.7) outperforms both Monte Carlo dropout (88.3) and deep ensembles (86.8). Furthermore, we show that the meta-loss function of M-heads improves OOD detection in terms of AUROC.","tags":null,"title":"Efficient Out-of-Distribution Detection in Digital Pathology Using Multi-Head Convolutional Neural Networks","type":"publication"},{"authors":["L. van Eekelen","H. Pinckaers","K. Hebeda","G. 
Litjens"],"categories":null,"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577836800,"objectID":"639adea9e7e15fccda1164b4517ffed0","permalink":"https://geertlitjens.nl/publication/eeke-20-a/","publishdate":"2020-10-20T20:31:23.756597Z","relpermalink":"/publication/eeke-20-a/","section":"publication","summary":"","tags":null,"title":"Multi-class semantic cell segmentation and classification of aplasia in bone marrow histology images","type":"publication"},{"authors":["Z. Swiderska-Chadaj","K. Hebeda","M. van den Brand","G. Litjens"],"categories":null,"content":"","date":1577836800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577836800,"objectID":"4cb540290544c72adebac43722796c75","permalink":"https://geertlitjens.nl/publication/swid-20/","publishdate":"2020-10-20T20:31:23.723535Z","relpermalink":"/publication/swid-20/","section":"publication","summary":"Diffuse large B-cell lymphoma (DLBCL) is the most common type of B-cell lymphoma. It is characterized by a heterogeneous morphology, genetic changes and clinical behavior. A small specific subgroup of DLBCL, harbouring a MYC gene translocation, is associated with worse patient prognosis and outcome. Typically, the MYC translocation is assessed with a molecular test (FISH), which is expensive and time-consuming. Our hypothesis is that genetic changes, such as translocations, could be visible as changes in the morphology of an HE-stained specimen. However, it has not proven possible to use morphological criteria for the detection of a MYC translocation in the diagnostic setting due to lack of specificity. In this paper, we apply a deep learning model to automate detection of MYC translocations in DLBCL based on HE-stained specimens. The proposed method works at the whole-slide level and was developed based on a multicenter data cohort of 91 patients. 
All specimens were stained with HE, and the MYC translocation was confirmed using fluorescence in situ hybridization (FISH). The system was evaluated on an additional 66 patients, and obtained an AUROC of 0.83 and an accuracy of 0.77. The proposed method presents a proof of concept, giving insight into the applicability of deep learning methods for the detection of genetic changes in DLBCL. In future work we will evaluate our algorithm for automatic pre-screening of DLBCL specimens to obviate FISH analysis in a large number of patients.","tags":null,"title":"Predicting MYC translocation in HE specimens of diffuse large B-cell lymphoma through deep learning","type":"publication"},{"authors":null,"categories":null,"content":"Artificial intelligence is starting to play a more important role in diagnostics, including the diagnosis of prostate cancer. In this talk I will sketch the current applications of AI and give directions for future use, also with respect to clinical research.\n","date":1572912e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1572912e3,"objectID":"444f1f4d9fb1663d0f75b9011d759718","permalink":"https://geertlitjens.nl/talk/prostate-cancer-academy/","publishdate":"2020-05-11T14:21:55+02:00","relpermalink":"/talk/prostate-cancer-academy/","section":"talk","summary":"Artificial intelligence is starting to play a more important role in diagnostics, including the diagnosis of prostate cancer. In this talk I will sketch the current applications of AI and give directions for future use, also with respect to clinical research.","tags":[],"title":"Artificial Intelligence in Prostate Cancer Diagnostics","type":"talk"},{"authors":["Oscar A. Debats","Geert J. S. Litjens","Henkjan J. 
Huisman"],"categories":[],"content":"","date":1572566400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1603275860,"objectID":"b663cb46855e422f25a323a7add5a964","permalink":"https://geertlitjens.nl/publication/deba-19/","publishdate":"2020-10-21T10:24:19.793979Z","relpermalink":"/publication/deba-19/","section":"publication","summary":"To investigate whether multi-view convolutional neural networks can improve a fully automated lymph node detection system for pelvic MR Lymphography (MRL) images of patients with prostate cancer. A fully automated computer-aided detection (CAD) system had been previously developed to detect lymph nodes in MRL studies. The CAD system was extended with three types of 2D multi-view convolutional neural networks (CNN) aiming to reduce false positives (FP). A 2D multi-view CNN is an efficient approximation of a 3D CNN, and three types were evaluated: a 1-view, 3-view, and 9-view 2D CNN. The three deep learning CNN architectures were trained and configured on retrospective data of 240 prostate cancer patients that received MRL images as the standard of care between January 2008 and April 2010. The MRL used ferumoxtran-10 as a contrast agent and comprised at least two imaging sequences: a 3D T1-weighted and a 3D T2*-weighted sequence. A total of 5089 lymph nodes were annotated by two expert readers, reading in consensus. A first experiment compared the performance with and without CNNs and a second experiment compared the individual contribution of the 1-view, 3-view, or 9-view architecture to the performance. The performances were visually compared using free-receiver operating characteristic (FROC) analysis and statistically compared using partial area under the FROC curve analysis. Training and analysis were performed using bootstrapped FROC and 5-fold cross-validation. 
Adding multi-view CNNs significantly ( ","tags":[],"title":"Lymph node detection in MR Lymphography: false positive reduction using multi-view convolutional neural networks","type":"publication"},{"authors":["J. van der Laak","F. Ciompi","G. Litjens"],"categories":null,"content":"","date":1571270400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1571270400,"objectID":"e17bba9024b086217662f2258d62720e","permalink":"https://geertlitjens.nl/publication/laak-19/","publishdate":"2019-10-17T19:22:55.632136Z","relpermalink":"/publication/laak-19/","section":"publication","summary":"A deep-learning model for cancer detection trained on a large number of scanned pathology slides and associated diagnosis labels enables model development without the need for pixel-level annotations.","tags":["artificial intelligence","letter","opinion","deep learning"],"title":"No pixel-level annotations needed","type":"publication"},{"authors":null,"categories":null,"content":"Interview for the show ‘De zomeravond van…’ for Omroep P\u0026amp;M, which is an interview program for (former) residents of the municipality Peel \u0026amp; Maas. It partly highlights my work, but also some personal items.\nAbove you can find the full link to the video on the Omroep P\u0026amp;M website (via the Video button). For convenience I have also added a link to YouTube, which you can see here:\n ","date":1565222400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1565222400,"objectID":"9f1baede2877a4b58780c7473e989797","permalink":"https://geertlitjens.nl/media/omroep_pm_zomeravond/","publishdate":"2019-10-06T00:00:00Z","relpermalink":"/media/omroep_pm_zomeravond/","section":"media","summary":"Interview for the show 'De zomeravond van...' for Omroep P\u0026M, which is an interview program for (former) residents of the municipality Peel \u0026 Maas. It partly highlights my work, but also some personal items.","tags":[],"title":"Interview in 'De zomeravond van...' 
van Omroep P\u0026M","type":"media"},{"authors":["Z. Swiderska-Chadaj","H. Pinckaers","M. van Rijthoven","M. Balkenhol","M. Melnikova","O. Geessink","Q. Manson","M. Sherman","A. Polonia","J. Parry","M. Abubakar","G. Litjens","J. van der Laak","F. Ciompi"],"categories":null,"content":"","date":1564617600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1564617600,"objectID":"00a4fd309a9ed2451399a4a0a5d7b5c1","permalink":"https://geertlitjens.nl/publication/swid-19/","publishdate":"2019-10-06T19:22:55.647128Z","relpermalink":"/publication/swid-19/","section":"publication","summary":"The immune system is of critical importance in the development of cancer. The evasion of destruction by the immune system is one of the emerging hallmarks of cancer. We have built a dataset of 171,166 manually annotated CD3 and CD8 cells, which we used to train deep learning algorithms for automatic detection of lymphocytes in histopathology images to better quantify immune response. Moreover, we investigate the effectiveness of four deep learning based methods when different subcompartments of the whole-slide image are considered: normal tissue areas, areas with immune cell clusters, and areas containing artifacts. We have compared the proposed methods in breast, colon and prostate cancer tissue slides collected from nine different medical centers. Finally, we report the results of an observer study on lymphocyte quantification, which involved four pathologists from different medical centers, and compare their performance with the automatic detection. The results give insights on the applicability of the proposed methods for clinical use. U-Net obtained the highest performance with an F1-score of 0.78 and the highest agreement with manual evaluation (κ=0.72), whereas the average pathologists agreement with reference standard was κ=0.64. 
The test set and the automatic evaluation procedure are publicly available at lyon19.grand-challenge.org.","tags":["computational pathology","deep learning","immune cell detection","immunohistochemistry"],"title":"Learning to detect lymphocytes in immunohistochemistry with deep learning","type":"publication"},{"authors":["D. Tellez","G. Litjens","J. van der Laak","F. Ciompi"],"categories":null,"content":"","date":1564617600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1564617600,"objectID":"2d698de7ff5b217717172948c6a4ee82","permalink":"https://geertlitjens.nl/publication/tell-19/","publishdate":"2019-10-06T19:23:29.071174Z","relpermalink":"/publication/tell-19/","section":"publication","summary":"We propose Neural Image Compression (NIC), a two-step method to build convolutional neural networks for gigapixel image analysis solely using weak image-level labels. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, retaining high-level information while suppressing pixel-level noise. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations. We compared several encoding strategies, namely reconstruction error minimization, contrastive training and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. We found that NIC can exploit visual cues associated with image-level labels successfully, integrating both global and local visual information. Furthermore, we visualized the regions of the input gigapixel images where the CNN attended to, and confirmed that they overlapped with annotations from human experts.","tags":null,"title":"Neural Image Compression for Gigapixel Histopathology Image Analysis","type":"publication"},{"authors":["D. Tellez","G. Litjens","P. Bándi","W. Bulten","J. Bokhorst","F. Ciompi","J. 
van der Laak"],"categories":null,"content":"","date":1564617600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1564617600,"objectID":"d38ee7c67e1e3b3999619dba06513653","permalink":"https://geertlitjens.nl/publication/tell-19-a/","publishdate":"2019-10-06T19:22:55.662145Z","relpermalink":"/publication/tell-19-a/","section":"publication","summary":"Stain variation is a phenomenon observed when distinct pathology laboratories stain tissue slides that exhibit similar but not identical color appearance. Due to this color shift between laboratories, convolutional neural networks (CNNs) trained with images from one lab often underperform on unseen images from the other lab. Several techniques have been proposed to reduce the generalization error, mainly grouped into two categories: stain color augmentation and stain color normalization. The former simulates a wide variety of realistic stain variations during training, producing stain-invariant CNNs. The latter aims to match training and test color distributions in order to reduce stain variation. For the first time, we compared some of these techniques and quantified their effect on CNN classification performance using a heterogeneous dataset of hematoxylin and eosin histopathology images from 4 organs and 9 pathology laboratories. Additionally, we propose a novel unsupervised method to perform stain color normalization using a neural network. Based on our experimental results, we provide practical guidelines on how to use stain color augmentation and stain color normalization in future computational pathology applications.","tags":["computational pathology","convolutional neural network","deep learning"],"title":"Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology.","type":"publication"},{"authors":["G. Litjens","F. Ciompi","J. Wolterink","B. de Vos","T. Leiner","J. Teuwen","I. 
Išgum"],"categories":null,"content":"","date":1564617600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1564617600,"objectID":"b90335ad4523479dafbdb331ac5b870d","permalink":"https://geertlitjens.nl/publication/litj-19/","publishdate":"2019-10-06T19:22:55.632136Z","relpermalink":"/publication/litj-19/","section":"publication","summary":"Cardiovascular imaging is going to change substantially in the next decade, fueled by the deep learning revolution. For medical professionals, it is important to keep track of these developments to ensure that deep learning can have a meaningful impact on clinical practice. This review aims to be a stepping stone in this process. The general concepts underlying most successful deep learning algorithms are explained, and an overview of state-of-the-art deep learning in cardiovascular imaging is provided. This review discusses 80 papers, covering modalities ranging from cardiac magnetic resonance, computed tomography, and single-photon emission computed tomography, to intravascular optical coherence tomography and echocardiography. Many different machine learning algorithms were used throughout these papers, with the most common being convolutional neural networks. Recent algorithms such as generative adversarial models were also used. The potential implications of deep learning algorithms on clinical practice, now and in the near future, are discussed.","tags":["artificial intelligence","cardiovascular imaging","deep learning"],"title":"State-of-the-Art Deep Learning in Cardiovascular Image Analysis.","type":"publication"},{"authors":["K. Dercksen","W. Bulten","G. 
Litjens"],"categories":null,"content":"","date":1559347200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1559347200,"objectID":"a4d9e1d17f3dca0a2897a74b0161be6c","permalink":"https://geertlitjens.nl/publication/derck-19/","publishdate":"2019-06-01T00:00:00Z","relpermalink":"/publication/derck-19/","section":"publication","summary":"Large amounts of unlabelled data are commonplace for many applications in computational pathology, whereas labelled data is often expensive, both in time and cost, to acquire. We investigate the performance of unsupervised and supervised deep learning methods when few labelled data are available. Three methods are compared: clustering autoencoder latent vectors (unsupervised), a single-layer classifier combined with a pre-trained autoencoder (semi-supervised), and a supervised CNN. We apply these methods to hematoxylin and eosin (H\u0026E) stained prostatectomy images to classify tumour versus non-tumour tissue. Results show that semi-/unsupervised methods have an advantage over supervised learning when few labels are available. Additionally, we show that incorporating immunohistochemistry (IHC) stained data provides an increase in performance over only using H\u0026E.","tags":null,"title":"Dealing with Label Scarcity in Computational Pathology: A Use Case in Prostate Cancer Classification","type":"publication"},{"authors":["T. de Bel","M. Hermsen","J. Kers","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1559347200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1559347200,"objectID":"7c6ee82f92e39b13ef8d035488253f9f","permalink":"https://geertlitjens.nl/publication/bel-19/","publishdate":"2019-06-01T00:00:00Z","relpermalink":"/publication/bel-19/","section":"publication","summary":"Precise segmentation of the vertebrae is often required for automatic detection of vertebral abnormalities. 
This especially enables incidental detection of abnormalities such as compression fractures in images that were acquired for other diagnostic purposes. While many CT and MR scans of the chest and abdomen cover a section of the spine, they often do not cover the entire spine. Additionally, the first and last visible vertebrae are likely only partially included in such scans. In this paper, we therefore approach vertebra segmentation as an instance segmentation problem. A fully convolutional neural network is combined with an instance memory that retains information about already segmented vertebrae. This network iteratively analyzes image patches, using the instance memory to search for and segment the first not yet segmented vertebra. At the same time, each vertebra is classified as completely or partially visible, so that partially visible vertebrae can be excluded from further analyses. We evaluated this method on spine CT scans from a vertebra segmentation challenge and on low-dose chest CT scans. The method achieved an average Dice score of 95.8% and 92.1%, respectively, and a mean absolute surface distance of 0.194 mm and 0.344 mm.","tags":null,"title":"Stain-Transforming Cycle-Consistent Generative Adversarial Networks for Improved Segmentation of Renal Histopathology","type":"publication"},{"authors":null,"categories":null,"content":"Clinical pathology is at the forefront of a digital revolution. In addition to the digital workflow, pathologists will also come into contact with machine learning algorithms aimed at improving their diagnostic accuracy and efficiency. 
In this presentation I highlight some applications which will be among the first to see use in clinical practice.\n","date":1557964800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1557964800,"objectID":"fa482a2ad519d746a1cb55a42e4b6bef","permalink":"https://geertlitjens.nl/talk/nordic-digital-pathology-symposium/","publishdate":"2018-06-04T14:21:55+02:00","relpermalink":"/talk/nordic-digital-pathology-symposium/","section":"talk","summary":"Clinical pathology is at the forefront of a digital revolution. In addition to the digital workflow, pathologists will also come into contact with machine learning algorithms aimed at improving their diagnostic accuracy and efficiency. In this presentation I highlight some applications which will be among the first to see use in clinical practice.","tags":[],"title":"Applications of Machine Learning for Clinical Practice","type":"talk"},{"authors":null,"categories":null,"content":"Above you can find the full link to the article and the video on the Omroep Gelderland website (via the Video button). For convenience I have also uploaded the video to YouTube, which you can see here:\n In addition, a longer podcast was made by Vera Eisink from Omroep Gelderland which covers our research in a bit more detail. It is hosted on SoundCloud and can be listened here:\n","date":1557100800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1557100800,"objectID":"2b161b63aa4820d91909a188bb34f407","permalink":"https://geertlitjens.nl/media/omroep_gelderland_interview/","publishdate":"2019-05-27T15:49:11+02:00","relpermalink":"/media/omroep_gelderland_interview/","section":"media","summary":"Above you can find the full link to the article and the video on the Omroep Gelderland website (via the Video button). For convenience I have also uploaded the video to YouTube, which you can see here:","tags":[],"title":"Report on Omroep Gelderland on 'De Nieuwe Mens'","type":"media"},{"authors":["M. Maas","G. Litjens","A. 
Wright","U. Attenberger","M. Haider","T. Helbich","B. Kiefer","K. Macura","D. Margolis","A. Padhani","K. Selnæs","G. Villeirs","J. Fütterer","T. Scheenen"],"categories":null,"content":"","date":1556668800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1556668800,"objectID":"7a34f8eaddfbbd0241244af4e33e73a3","permalink":"https://geertlitjens.nl/publication/maas-19/","publishdate":"2019-05-01T00:00:00Z","relpermalink":"/publication/maas-19/","section":"publication","summary":"The aims of this study were to assess the discriminative performance of quantitative multiparametric magnetic resonance imaging (mpMRI) between prostate cancer and noncancer tissues and between tumor grade groups (GGs) in a multicenter, single-vendor study, and to investigate to what extent site-specific differences affect variations in mpMRI parameters. Fifty patients with biopsy-proven prostate cancer from 5 institutions underwent a standardized preoperative mpMRI protocol. Based on the evaluation of whole-mount histopathology sections, regions of interest were placed on axial T2-weighed MRI scans in cancer and noncancer peripheral zone (PZ) and transition zone (TZ) tissue. Regions of interest were transferred to functional parameter maps, and quantitative parameters were extracted. Across-center variations in noncancer tissues, differences between tissues, and the relation to cancer grade groups were assessed using linear mixed-effects models and receiver operating characteristic analyses. Variations in quantitative parameters were low across institutes (mean [maximum] proportion of total variance in PZ and TZ, 4% [14%] and 8% [46%], respectively). 
Cancer and noncancer tissues were best separated using the diffusion-weighted imaging-derived apparent diffusion coefficient, both in PZ and TZ (mean [95% confidence interval] areas under the receiver operating characteristic curve [AUCs]: 0.93 [0.89-0.96] and 0.86 [0.75-0.94]), followed by MR spectroscopic imaging and dynamic contrast-enhanced-derived parameters. Parameters from all imaging methods correlated significantly with tumor grade group in PZ tumors. In discriminating GG1 PZ tumors from higher GGs, the highest AUC was obtained with the apparent diffusion coefficient (0.74 [0.57-0.90], P ","tags":null,"title":"A Single-Arm, Multicenter Validation Study of Prostate Cancer Localization and Aggressiveness With a Quantitative Multiparametric Magnetic Resonance Imaging Approach.","type":"publication"},{"authors":null,"categories":[],"content":"This is the first part of a three-part tutorial on how to get started with the CAMELYON dataset. This first part will focus on training a basic convolutional neural network using PatchCAMELYON, TensorFlow 2.0, Keras and TensorFlow Datasets. Part 2 will cover applying your trained model to a whole-slide image and visualizing the results, and Part 3 will cover how to use the full dataset to train a model at different resolution levels, sampling strategies, and data augmentation.\nTo get started you need to set up a Python environment with NumPy, Matplotlib and TensorFlow 2.0. To use the PatchCAMELYON dataset with TensorFlow Datasets you will need to use my fork of the project for now, as the pull request to add PatchCAMELYON to the master branch is not yet approved. 
To this end, you need to clone the repository and add it to your Python environment:\ngit clone https://github.com/GeertLitjens/tensorflow_datasets\rcd tensorflow_datasets\rpython setup.py develop\r After this step you should be able to import the relevant packages with the following cell:\n# Import NumPy to handle arrays and Matplotlib for plotting loss curves\rimport numpy as np\rimport matplotlib.pyplot as plt\r# Import TensorFlow and relevant Keras classes to set up the model\rimport tensorflow as tf\rfrom tensorflow.keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten, Dropout\rfrom tensorflow.keras.models import Model\rfrom tensorflow.keras.optimizers import SGD\rfrom tensorflow.keras.utils import to_categorical\rfrom tensorflow.keras.callbacks import ModelCheckpoint\r The next cell will automatically download PatchCAMELYON from Zenodo and prepare the TensorFlow Datasets:\nimport tensorflow_datasets as tfds\rpcam, pcam_info = tfds.load(\u0026#34;patch_camelyon\u0026#34;, with_info=True)\rprint(pcam_info)\r tfds.core.DatasetInfo(\rname=\u0026#39;patch_camelyon\u0026#39;,\rversion=1.0.0,\rdescription=\u0026#39;The PatchCAMELYON dataset for identification of breast cancer metastases in lymph nodes. This dataset has been extracted from the larger CAMELYON dataset of 1399 whole-slide images, which created for the CAMELYON challenges at ISBI 2016 and 2017.It contains 96x96 RGB patches of normal lymph node and tumor tissue in a roughly 50/50 distributions. It packs the clinically-relevant task of metastasis detection into a straight-forward image classification task, akin to CIFAR-10 and MNIST. 
This increases the ease of use by removing the complexity of handling large whole-slide images.\u0026#39;,\rurls=[\u0026#39;https://github.com/basveeling/pcam\u0026#39;, \u0026#39;https://camelyon17.grand-challenge.org/\u0026#39;],\rfeatures=FeaturesDict({\r\u0026#39;image\u0026#39;: Image(shape=(96, 96, 3), dtype=tf.uint8),\r\u0026#39;label\u0026#39;: ClassLabel(shape=(), dtype=tf.int64, num_classes=2)\r},\rtotal_num_examples=327680,\rsplits={\r\u0026#39;test\u0026#39;: \u0026lt;tfds.core.SplitInfo num_examples=32768\u0026gt;,\r\u0026#39;train\u0026#39;: \u0026lt;tfds.core.SplitInfo num_examples=262144\u0026gt;,\r\u0026#39;validation\u0026#39;: \u0026lt;tfds.core.SplitInfo num_examples=32768\u0026gt;\r},\rsupervised_keys=(\u0026#39;image\u0026#39;, \u0026#39;label\u0026#39;),\rcitation=\u0026#39;\u0026#34;\u0026#34;\u0026#34;\r@ARTICLE{Veeling2018-qh,\rtitle = \u0026#34;Rotation Equivariant {CNNs} for Digital Pathology\u0026#34;,\rauthor = \u0026#34;Veeling, Bastiaan S and Linmans, Jasper and Winkens, Jim and\rCohen, Taco and Welling, Max\u0026#34;,\rmonth = jun,\ryear = 2018,\rarchivePrefix = \u0026#34;arXiv\u0026#34;,\rprimaryClass = \u0026#34;cs.CV\u0026#34;,\reprint = \u0026#34;1806.03962\u0026#34;\r}\r@article{Litjens2018,\rauthor = {Litjens, G. and Bándi, P. and Ehteshami Bejnordi, B. and Geessink, O. and Balkenhol, M. and Bult, P. and Halilovic, A. and Hermsen, M. and van de Loo, R. and Vogels, R. and Manson, Q.F. and Stathonikos, N. and Baidoshvili, A. and van Diest, P. and Wauters, C. and van Dijk, M. 
and van der Laak, J.},\rtitle = {1399 H\u0026amp;E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset},\rjournal = {GigaScience},\rvolume = {7},\rnumber = {6},\ryear = {2018},\rmonth = {05},\rissn = {2047-217X},\rdoi = {10.1093/gigascience/giy065},\rurl = {https://dx.doi.org/10.1093/gigascience/giy065},\reprint = {http://oup.prod.sis.lan/gigascience/article-pdf/7/6/giy065/25045131/giy065.pdf},\r}\r\u0026#34;\u0026#34;\u0026#34;\u0026#39;,\rredistribution_info=,\r)\r Now we have our dataset ready, it is time to define our model. The cell below defines a very simple VGG-like convolutional neural network using Keras.\r```python\r#First setup the input to the network which has the dimensions of the patches contained within PatchCAMELYON\rinput_img = Input(shape=(96,96,3))\r# Now we define the layers of the convolutional network: three blocks of two convolutional layers and a max-pool layer.\rx = Conv2D(16, (3, 3), padding=\u0026#39;valid\u0026#39;, activation=\u0026#39;relu\u0026#39;)(input_img)\rx = Conv2D(16, (3, 3), padding=\u0026#39;valid\u0026#39;, activation=\u0026#39;relu\u0026#39;)(x)\rx = MaxPool2D(pool_size=(2,2), strides=(2,2))(x)\rx = Conv2D(32, (3, 3), padding=\u0026#39;valid\u0026#39;, activation=\u0026#39;relu\u0026#39;)(x)\rx = Conv2D(32, (3, 3), padding=\u0026#39;valid\u0026#39;, activation=\u0026#39;relu\u0026#39;)(x)\rx = MaxPool2D(pool_size=(2,2), strides=(2,2))(x)\rx = Conv2D(64, (3, 3), padding=\u0026#39;valid\u0026#39;, activation=\u0026#39;relu\u0026#39;)(x)\rx = Conv2D(64, (3, 3), padding=\u0026#39;valid\u0026#39;, activation=\u0026#39;relu\u0026#39;)(x)\rx = MaxPool2D(pool_size=(2,2), strides=(2,2))(x)\r# Now we flatten the output from a 4D to a 2D tensor to be able to use fully-connected 
…","date":1556106103,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1556106103,"objectID":"d91c943005e36a04aff38d3b3151fa95","permalink":"https://geertlitjens.nl/post/getting-started-with-camelyon/","publishdate":"2019-04-24T13:41:43+02:00","relpermalink":"/post/getting-started-with-camelyon/","section":"post","summary":"This post is the first of a three-post series on using deep learning to tackle the CAMELYON Challenge. This first post covers basic convolutional neural network training using the PatchCAMELYON dataset and TensorFlow 2.0.","tags":["CAMELYON","machine learning","tutorial"],"title":"Getting Started With Camelyon (Part 1)","type":"post"},{"authors":null,"categories":[],"content":"In 2018 Jeroen van der Laak and I were nominated for and eventually won a Radboud Science Award for our work on the CAMELYON challenge. One part of the prize was the opportunity to turn our research into an educational program for elementary schools, specifically ages 9 to 12. This post is a summary of my experience going through this process and showcases the things we developed. Luckily, we were not in this alone: we got great support from the Radboud Wetenschapsknooppunt and two of our PhD students, Meyke Hermsen and Maschenka Balkenhol. In addition, the kids from the elementary schools made a great introduction for us:\n An ‘educational program’ might sound a bit vague, so I’ll dive into some specifics. The idea was to develop six activities in total, which the children could do under the supervision of their teachers (and partly by themselves) and which would guide them through the background and the different steps of our research. These activities would take place over the course of several weeks, with lessons to prepare, execute, present, and provide feedback on the activities. In the end the children needed to define their own research questions in relation to the activities and execute them. 
The biggest challenge was to build these activities in such a way that they stay true to our research results but are also understandable for children of varying ages.\nWe quickly decided to split our research into two separate components with separate activities, which would come together in the final activity. The two components were: histopathological diagnostics and artificial intelligence. In the end we came up with these six activities:\n When are computers intelligent? What are applications of artificial intelligence you encounter? How do you train a smart computer? What is a diagnosis and how do you perform one? How does a pathologist make diagnoses? Diagnosing cancer with artificially intelligent computers. The first two activities were aimed at getting a discussion going in the classroom on what the definition of intelligence is. Think along the lines of: Is a calculator intelligent? Or a navigation system? Secondly, the children were asked to discuss this at home and find examples of devices in their own house which they thought were intelligent. In the end we would provide them with what we, within this project, mean by artificial intelligence: a computer program which automatically learns by example and can generalize what is learned to unknown situations, similar to the way they learn new skills.\nThe third activity was the first real hands-on activity, for which I adapted the excellent webapp Teachable Machine by Google. I translated the app to Dutch and made some fixes for tablets and phones so it would be easier to use in the classroom. My adaptation can be found here, with the source code here. This webapp allows you to train a three-class classifier with your webcam. One fun application is to turn hand gestures into instruments, as depicted in the video below.\n The idea of this activity was that children can figure out what a computer can easily learn and what is more difficult. 
For example, if you teach it to react to your face, will it also react to your friend’s face? Can it discriminate between your left and right hand? And if it doesn’t, what do you need to do to make it learn the difference? Many challenges that actually appear in machine learning, like bias, can be explored this way in a playful manner.\nThe next two activities moved away from artificial intelligence completely and focused on a key job of a doctor: obtaining a diagnosis. In Activity 4 we developed several disease scenarios and the children had to identify what questions you need to ask to figure out what disease the person has. Specifically, we had a scenario where a child was suffering from either food poisoning or the flu. This activity intends to show how the process of a diagnosis works and how, by asking the right questions, you can narrow down your options and eventually figure out what ails the patient.\nActivity 5 moved to the domain of histopathology. Here we introduced the microscope and the concept of ‘good’ cells and ‘bad’ cells (cancer). We prepared a lot of images (one example below) to show the children the differences in appearance and how a pathologist, with a microscope, can see what is wrong with a patient. We also gave them a couple of images where they had to figure out for themselves which were the good and bad cells.\nThe last activity combined artificial intelligence and histopathological diagnostics. Here again we used a webapp (source here), which I made based on the face classifier by Brendan Sudol. Here the students could upload images with cells and the computer would tell them whether they were good or bad. Initially they had to classify these images themselves and then see if the computer agreed. 
Additionally, they could try to find cases where they think the computer was wrong and see if a …","date":1555674074,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1555674074,"objectID":"346ee5dc3fa29f95f155a544b5f2fd38","permalink":"https://geertlitjens.nl/post/ml-for-elementary-school/","publishdate":"2019-04-19T13:41:14+02:00","relpermalink":"/post/ml-for-elementary-school/","section":"post","summary":"In 2018 Jeroen van der Laak and I were nominated for and eventually won a Radboud Science Award for our work with the CAMELYON challenge. One part of the prize was the opportunity to turn our research into an educational program for elementary schools, specifically ages 9 to 12. This post is a summary of my experience going through this process and showcases the things we developed.","tags":["teaching","radboud science award","elementary school"],"title":"CAMELYON for Elementary School","type":"post"},{"authors":["Lilija Aprupe","Geert Litjens","Titus J. Brinker","Jeroen van der Laak","Niels Grabe"],"categories":null,"content":"","date":1554724714,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1554724714,"objectID":"e728cdbac61b33e77dac039ad04eef99","permalink":"https://geertlitjens.nl/publication/apru-19/","publishdate":"2019-04-08T13:58:34+02:00","relpermalink":"/publication/apru-19/","section":"publication","summary":"Recent years have seen a growing awareness of the role the immune system plays in successful cancer treatment, especially in novel therapies like immunotherapy. The characterization of the immunological composition of tumors and their micro-environment is thus becoming a necessity. In this paper we introduce a deep learning-based immune cell detection and quantification method, which is based on supervised learning, i.e., the input data for training comprises labeled images. 
Our approach objectively deals with staining variation and staining artifacts in immunohistochemically stained lung cancer tissue and is as precise as humans. This is evidenced by the low cell count difference to humans of 0.033 cells on average. This method, which is based on convolutional neural networks, has the potential to provide a new quantitative basis for research on immunotherapy.","tags":[],"title":"Robust and accurate quantification of biomarkers of immune cells in lung cancer micro-environment using deep convolutional neural networks.","type":"publication"},{"authors":["W. Bulten","H. Pinckaers","C. Hulsbergen-van de Kaa","G. Litjens"],"categories":null,"content":"","date":1552867200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1552867200,"objectID":"bcc08b66dce05fa705bf6a0c15382be6","permalink":"https://geertlitjens.nl/publication/bult-19-a/","publishdate":"2019-03-18T00:00:00Z","relpermalink":"/publication/bult-19-a/","section":"publication","summary":"Grading prostate cancer is a time-consuming process and suffers from high inter- and intra-observer variability. Advances in computer-aided diagnosis have shown promise in improving histopathological diagnosis. We trained a deep learning system using data retrieved from the patients records to grade digitized prostate biopsies. Our system is the first that can automatically classify background, benign epithelium, Gleason 3, 4, and 5 on a gland-by-gland level in prostate biopsies. 532 glass slides containing 2162 prostate biopsies, evaluated by an experienced urogenital pathologist were collected and scanned. 596 biopsies were kept separate for evaluation, the remaining 1576 were used to train the deep learning algorithm (see table for Gleason grade distribution). A single label denoting the Gleason score (e.g. 3+4=7) was available for each biopsy, without information on tumor location or volume. 
To generate detailed annotations for training we used two previously trained deep learning networks to first segment the epithelium and, subsequently, to detect cancer. The Gleason grade from the patient record was assigned to the cancerous epithelium. These generated weakly annotated regions of tumor were then used to train a Gleason grading system. To evaluate, the system was applied to the biopsies in the test set. We used the total predicted surface area of each growth pattern to determine the Gleason score of the biopsy. Predicted tumor areas smaller than 15% of total epithelial tissue were considered unreliable (e.g. incomplete glands at the edges of the biopsy) and ignored for slide level classification. For predicted grades only areas larger than 5% of all epithelial tissue were considered, which is also common in clinical practice. Predicting whether a biopsy contains tumor resulted in an accuracy of 86% (linear weighted kappa (k) of 0.73, area under the ROC curve of 0.96). We compared the predicted primary Gleason grade to the one from the pathologists’ report. Our system achieved an accuracy of 75% (k 0.64). On predicting the Grade Group (using primary and secondary pattern), our system achieved an accuracy of 67% (k 0.57). Misclassifications of more than one grade are rare. Our deep learning system automatically identifies Gleason patterns and benign tissue on a gland-by-gland basis. This can be used to determine the biopsy-level Grade Group and Gleason score, and show which parts of the tissue contribute to this prediction. Improvements need to be made to decrease misclassifications, for example in areas with inflammation.","tags":null,"title":"Automated Gleason Grading of Prostate Biopsies Using Deep Learning","type":"publication"},{"authors":["A. Simpson","M. Antonelli","S. Bakas","M. Bilello","K. Farahani","B. van Ginneken","A. Kopp-Schneider","B. Landman","G. Litjens","B. Menze","O. Ronneberger","R. Summers","P. Bilic","P. Christ","R. Do","M. 
Gollub","J. Golia-Pernicka","S. Heckers","W. Jarnagin","M. McHugo","S. Napel","E. Vorontsov","L. Maier-Hein","M. Cardoso"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"508be65cf29f7076ca781e681e5c890d","permalink":"https://geertlitjens.nl/publication/simp-19/","publishdate":"2019-01-01T00:00:00Z","relpermalink":"/publication/simp-19/","section":"publication","summary":"Semantic segmentation of medical images aims to associate a pixel with a label in a medical image without human initialization. The success of semantic segmentation algorithms is contingent on the availability of high-quality imaging data with corresponding labels provided by experts. We sought to create a large collection of annotated medical image datasets of various clinically relevant anatomies available under open source license to facilitate the development of semantic segmentation algorithms. Such a resource would allow: 1) objective assessment of general-purpose segmentation methods through comprehensive benchmarking and 2) open and free access to medical image data for any researcher interested in the problem domain. Through a multi-institutional effort, we generated a large, curated dataset representative of several highly variable segmentation tasks that was used in a crowd-sourced challenge - the Medical Segmentation Decathlon held during the 2018 Medical Image Computing and Computer Aided Interventions Conference in Granada, Spain. Here, we describe these ten labeled image datasets so that these data may be effectively reused by the research community.","tags":null,"title":"A large annotated medical image dataset for the development and evaluation of segmentation algorithms","type":"publication"},{"authors":["O. Geessink","A. Baidoshvili","J. Klaase","B. Ehteshami Bejnordi","G. Litjens","G. van Pelt","W. Mesker","I. Nagtegaal","F. Ciompi","J. 
van der Laak"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"ab3fd9b1138b1d92b7ac397af0eb08d8","permalink":"https://geertlitjens.nl/publication/gees-19/","publishdate":"2019-01-01T00:00:00Z","relpermalink":"/publication/gees-19/","section":"publication","summary":"Purpose:Tumor-stroma ratio (TSR) serves as an independent prognostic factor in colorectal cancer and other solid malignancies. The recent introduction of digital pathology in routine tissue diagnostics holds opportunities for automated TSR analysis. We investigated the potential of computer-aided quantification of intratumoral stroma in rectal cancer whole-slide images. Methods: Histological slides from 129 rectal adenocarcinoma patients were analyzed by two experts who selected a suitable stroma hot-spot and visually assessed TSR. A semi-automatic method based on deep learning was trained to segment all relevant tissue types in rectal cancer histology and subsequently applied to the hot-spots provided by the experts. Patients were assigned to a ‘stroma-high’ or ‘stroma-low’ group by both TSR methods (visual and automated). This allowed for prognostic comparison between the two methods in terms of disease-specific and disease-free survival times. Results: With stroma-low as baseline, automated TSR was found to be prognostic independent of age, gender, pT-stage, lymph node status, tumor grade, and whether adjuvant therapy was given, both for disease-specific survival (hazard ratio = 2.48 (95% confidence interval 1.29–4.78)) and for disease-free survival (hazard ratio = 2.05 (95% confidence interval 1.11–3.78)). Visually assessed TSR did not serve as an independent prognostic factor in multivariate analysis. Conclusions: This work shows that TSR is an independent prognosticator in rectal cancer when assessed automatically in user-provided stroma hot-spots. 
The deep learning-based technology presented here may be a significant aid to pathologists in routine diagnostics.","tags":null,"title":"Computer aided quantification of intratumoral stroma yields an independent prognosticator in rectal cancer","type":"publication"},{"authors":null,"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"fd36605688ef45e10dc233c860158012","permalink":"https://geertlitjens.nl/cv/","publishdate":"2019-01-01T00:00:00Z","relpermalink":"/cv/","section":"","summary":"Geert's CV","tags":null,"title":"CV","type":"widget_page"},{"authors":["W. Bulten","P. Bándi","J. Hoven","R. de Loo","J. Lotz","N. Weiss","J. der Laak","B. van Ginneken","C. de Kaa","G. Litjens"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"a3c1d969d7a36098f899d146f2d1c4cd","permalink":"https://geertlitjens.nl/publication/bult-19/","publishdate":"2019-01-01T00:00:00Z","relpermalink":"/publication/bult-19/","section":"publication","summary":"Given the importance of gland morphology in grading prostate cancer (PCa), automatically differentiating between epithelium and other tissues is an important prerequisite for the development of automated methods for detecting PCa. We propose a new deep learning method to segment epithelial tissue in digitised hematoxylin and eosin (H\u0026E) stained prostatectomy slides using immunohistochemistry (IHC) as reference standard. We used IHC to create a precise and objective ground truth compared to manual outlining on H\u0026E slides, especially in areas with high-grade PCa. 102 tissue sections were stained with H\u0026E and subsequently restained with P63 and CK8/18 IHC markers to highlight epithelial structures. Afterwards each pair was co-registered. 
First, we trained a U-Net to segment epithelial structures in IHC using a subset of the IHC slides that were preprocessed with color deconvolution. Second, this network was applied to the remaining slides to create the reference standard used to train a second U-Net on H\u0026E. Our system accurately segmented both intact glands and individual tumour epithelial cells. The generalisation capacity of our system is shown using an independent external dataset from a different centre. We envision this segmentation as the first part of a fully automated prostate cancer grading pipeline.","tags":null,"title":"Epithelium segmentation using deep learning in H\u0026E-stained prostate specimens with immunohistochemistry as reference standard","type":"publication"},{"authors":["H. Pinckaers","W. Bulten","G. Litjens"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"291ab9d0e67daa3a23a11858f4cda715","permalink":"https://geertlitjens.nl/publication/pinc-19/","publishdate":"2019-01-01T00:00:00Z","relpermalink":"/publication/pinc-19/","section":"publication","summary":"Prostate cancer is the most common cancer for men in Western countries, counting 1.1 million new diagnoses every year. The incidence is expected to increase further, due to the growing elderly population. This is leading to a significantly increased workload for pathologists. The burden of this time-consuming and repetitive workload has the potential to be decreased by computational pathology, e.g., by automatically screening prostate biopsies. The current state-of-the-art in many computational pathology tasks uses patch-based convolutional neural networks. Developing such algorithms requires detailed annotations of the task-specific classes on whole-slide images, which are challenging to create due to the low availability of pathologists. 
Therefore, it would be beneficial to be able to train using labels the pathologist already provides for regular clinical practice in the form of a report. However, these reports correspond to whole-slide images which are of such a high resolution that current accelerator cards cannot process them at once due to memory constraints. We developed a method, streaming stochastic gradient descent, to train a convolutional neural network end-to-end with entire high resolution images and slide-level labels extracted from pathology reports. Here we trained a neural network on 2812 whole prostate biopsies, at an input size of 8000x8000 pixels, equivalent to 50x total magnification, for a binary classification, cancerous or benign. We achieved an accuracy of 84%. These results show that we may not need expensive annotations to train classification networks in this domain.","tags":null,"title":"High resolution whole prostate biopsy classification using streaming stochastic gradient descent","type":"publication"},{"authors":["P. Bándi","M. Balkenhol","B. van Ginneken","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"e9fecd47837988ba8e7a4c8bfc757614","permalink":"https://geertlitjens.nl/publication/band-19-a/","publishdate":"2020-05-11T12:24:14.133175Z","relpermalink":"/publication/band-19-a/","section":"publication","summary":"Modern pathology diagnostics is being driven toward large scale digitization of microscopic tissue sections. A prerequisite for its safe implementation is the guarantee that all tissue present on a glass slide can also be found in the digital image. Whole-slide scanners perform a tissue segmentation in a low resolution overview image to prevent inefficient high-resolution scanning of empty background areas. However, currently applied algorithms can fail in detecting all tissue regions. 
In this study, we developed convolutional neural networks to distinguish tissue from background. We collected 100 whole-slide images of 10 tissue sample--staining categories from five medical centers for development and testing. Additionally, eight more images of eight unfamiliar categories were collected for testing only. We compared our fully-convolutional neural networks to three traditional methods on a range of resolution levels using Dice score and sensitivity. We also tested whether a single neural network can perform equivalently to multiple networks, each specialized in a single resolution. Overall, our solutions outperformed the traditional methods on all the tested resolutions. The resolution-agnostic network achieved average Dice scores between 0.97 and 0.98 across the tested resolution levels, only 0.0069 less than the resolution-specific networks. Finally, its excellent generalization performance was demonstrated by achieving averages of 0.98 Dice score and 0.97 sensitivity on the eight unfamiliar images. A future study should test this network prospectively.","tags":null,"title":"Resolution-agnostic tissue segmentation in whole-slide histopathology images with convolutional neural networks","type":"publication"},{"authors":null,"categories":null,"content":" This presentation was the first part of a half-day workshop on deep learning in medical imaging. It introduces the basic deep learning concepts, contrasts them with more traditional pattern recognition approaches, and shows some examples from the field. 
If you are interested in a more thorough overview of different applications, I can recommend this publication.\n","date":1537083e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1537083e3,"objectID":"a456dcd0a126cfab6189aab48f845943","permalink":"https://geertlitjens.nl/talk/intro-deep-learning/","publishdate":"2018-06-16T00:00:00Z","relpermalink":"/talk/intro-deep-learning/","section":"talk","summary":"This presentation aims to provide an introduction to the basics and fundamental concepts of deep learning in medical imaging.","tags":["Machine Learning","Deep Learning","Medical Imaging","Tutorial"],"title":"Introduction to Deep Learning in Medical Imaging","type":"talk"},{"authors":["P. Bandi","O. Geessink","Q. Manson","M. van Dijk","M. Balkenhol","M. Hermsen","B. Bejnordi","B. Lee","K. Paeng","A. Zhong","Q. Li","F. Zanjani","S. Zinger","K. Fukuta","D. Komura","V. Ovtcharov","S. Cheng","S. Zeng","J. Thagaard","A. Dahl","H. Lin","H. Chen","L. Jacobsson","M. Hedlund","M. Cetin","E. Halici","H. Jackson","R. Chen","F. Both","J. Franke","H. Kusters-Vandevelde","W. Vreuls","P. Bult","B. van Ginneken","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1533081600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1533081600,"objectID":"48bf9c1a746f820b84f2b72f4a298e4e","permalink":"https://geertlitjens.nl/publication/band-18/","publishdate":"2018-08-01T00:00:00Z","relpermalink":"/publication/band-18/","section":"publication","summary":"Automated detection of cancer metastases in lymph nodes has the potential to improve assessment of prognosis for patients. To enable fair comparison between the algorithms for this purpose, we set up the CAMELYON17 challenge in conjunction with the IEEE International Symposium on Biomedical Imaging 2017 conference in Melbourne. Over 300 participants registered on the challenge website, of which 23 teams submitted a total of 37 algorithms before the initial deadline. 
Participants were provided with 899 whole-slide images for developing their algorithms. The developed algorithms were evaluated based on the test set encompassing 100 patients and 500 whole-slide images. The evaluation metric used was a quadratic weighted Cohen's kappa. We discuss the algorithmic details of the ten best pre-conference and two post-conference submissions. All these participants used convolutional neural networks in combination with pre- and postprocessing steps. Algorithms differed mostly in neural network architecture, training strategy and pre- and postprocessing methodology. Overall, the kappa metric ranged from 0.89 to -0.13 across all submissions. The best results were obtained with pre-trained architectures such as ResNet. Confusion matrix analysis revealed that all participants struggled with reliably identifying isolated tumor cells, the smallest type of metastasis, with detection rates below 40%. Qualitative inspection of the results of the top participants showed categories of false positives, such as nerves or contamination, which could be targets for further optimization. Last, we show that simple combinations of the top algorithms result in higher kappa metric values than any algorithm individually, with 0.93 for the best combination.","tags":null,"title":"From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge","type":"publication"},{"authors":null,"categories":null,"content":"For a special series of Summer Mini-Lectures Jeroen van der Laak and I were interviewed by Karlijn Meinders on AI in diagnostic pathology. 
","date":1531872e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1531872e3,"objectID":"f7d5bd3fa141b6324f1d3a3f59248fcf","permalink":"https://geertlitjens.nl/media/wetenschapvandaag-zomercollege/","publishdate":"2019-04-23T21:33:47+02:00","relpermalink":"/media/wetenschapvandaag-zomercollege/","section":"media","summary":"For a special series of Summer Mini-Lectures Jeroen van der Laak and I were interviewed by Karlijn Meinders on AI in diagnostic pathology.","tags":[],"title":"BNR Wetenschap Vandaag - Zomercollege","type":"media"},{"authors":null,"categories":null,"content":"Should doctors fear for their jobs due to the rise of AI? In this presentation I explained which aspects of their job will change and why they should not fear, but welcome the introduction of AI in healthcare.\n","date":1529020800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1529020800,"objectID":"2e1a71ff38b88c8fbaa8311be59025cd","permalink":"https://geertlitjens.nl/talk/bessensap/","publishdate":"2019-04-25T21:19:56+02:00","relpermalink":"/talk/bessensap/","section":"talk","summary":"Should doctors fear for their jobs due to the rise of AI? In this presentation I explained which aspects of their job will change and why they should not fear, but welcome the introduction of AI in healthcare.","tags":[],"title":"Bessensap","type":"talk"},{"authors":null,"categories":null,"content":"Interview with BNR Beter on the digitization of pathology and the role of artificial intelligence in the workflow of the pathologist. Together with prof. Paul van Diest. 
","date":1528070400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1528070400,"objectID":"5a89eff49e4210abf2d8f70edbcca1a7","permalink":"https://geertlitjens.nl/media/beter-interview/","publishdate":"2019-04-23T21:08:25+02:00","relpermalink":"/media/beter-interview/","section":"media","summary":"Interview with BNR Beter on the digitization of pathology and the role of artificial intelligence in the workflow of the pathologist. Together with prof. Paul van Diest.","tags":[],"title":"BNR Beter - Interview","type":"media"},{"authors":["G. Litjens","P. Bandi","B. Ehteshami Bejnordi","O. Geessink","M. Balkenhol","P. Bult","A. Halilovic","M. Hermsen","R. van de Loo","R. Vogels","Q. Manson","N. Stathonikos","A. Baidoshvili","P. van Diest","C. Wauters","M. van Dijk","J. van der Laak"],"categories":null,"content":"","date":1527811200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1527811200,"objectID":"61dc979978420c6c04a2a057d9c77d43","permalink":"https://geertlitjens.nl/publication/litj-18/","publishdate":"2018-06-01T00:00:00Z","relpermalink":"/publication/litj-18/","section":"publication","summary":"Background: The presence of lymph node metastases is one of the most important factors in breast cancer prognosis. The most common strategy to assess the regional lymph node status is the sentinel lymph node procedure. The sentinel lymph node is the most likely lymph node to contain metastasized cancer cells and is excised, histopathologically processed and examined by the pathologist. This tedious examination process is time-consuming and can lead to small metastases being missed. However, recent advances in whole-slide imaging and machine learning have opened an avenue for analysis of digitized lymph node sections with computer algorithms. For example, convolutional neural networks, a type of machine learning algorithm, are able to automatically detect cancer metastases in lymph nodes with high accuracy. 
To train machine learning models, large, well-curated datasets are needed. Results: We released a dataset of 1399 annotated whole-slide images of lymph nodes, both with and without metastases, in total three terabytes of data in the context of the CAMELYON16 and CAMELYON17 Grand Challenges. Slides were collected from five different medical centers to cover a broad range of image appearance and staining variations. Each whole-slide image has a slide-level label indicating whether it contains no metastases, macro-metastases, micro-metastases or isolated tumor cells. Furthermore, for 209 whole-slide images, detailed hand-drawn contours for all metastases are provided. Last, open-source software tools to visualize and interact with the data have been made available. Conclusions: A unique dataset of annotated, whole-slide digital histopathology images has been provided with high potential for re-use.","tags":null,"title":"1399 H\u0026E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset","type":"publication"},{"authors":null,"categories":null,"content":"Clinical pathology is at the forefront of a digital revolution. In addition to the digital workflow, pathologists will also come into contact with machine learning algorithms aimed at improving their diagnostic accuracy and efficiency. In this presentation I highlight some applications which will be among the first to see use in clinical practice.\nThis presentation showed applications of deep learning for diagnostic histopathology. The slides are shown below. 
Note that some slides are a bit scrambled due to the conversion from PowerPoint to Google Slides.\n ","date":1523577600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1523577600,"objectID":"f4a68f43ec927019f67a5ac6e53c104c","permalink":"https://geertlitjens.nl/talk/dag-van-de-pathologie/","publishdate":"2018-04-13T14:21:55+02:00","relpermalink":"/talk/dag-van-de-pathologie/","section":"talk","summary":"Clinical pathology is at the forefront of a digital revolution. In addition to the digital workflow, pathologists will also come into contact with machine learning algorithms aimed at improving their diagnostic accuracy and efficiency. In this presentation I highlight some applications which will be among the first to see use in clinical practice.","tags":[],"title":"Dag van de Pathologie","type":"talk"},{"authors":null,"categories":null,"content":"Interview with the Foundation which supports prostate cancer patients in the Netherlands. They supported my Bas Mulder Award application and were interested in my research. This interview gives a good overview of the goals of the project for a general audience. Note that the interview is in Dutch.\n","date":1519862400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1519862400,"objectID":"3f673552ae97041271726ea762008fbc","permalink":"https://geertlitjens.nl/media/prostaatkankerstichting/","publishdate":"2019-04-26T10:22:02+02:00","relpermalink":"/media/prostaatkankerstichting/","section":"media","summary":"Interview with the Foundation which supports prostate cancer patients in the Netherlands. They supported my Bas Mulder Award application and were interested in my research. This interview gives a good overview of the goals of the project for a general audience. Note that the interview is in Dutch.","tags":[],"title":"Interview with Prostate Cancer Patient Foundation","type":"media"},{"authors":["D. Tellez","M. Balkenhol","I. Otte-Holler","R. van de Loo","R. Vogels","P. Bult","C. Wauters","W. 
Vreuls","S. Mol","N. Karssemeijer","G. Litjens","J. van der Laak","F. Ciompi"],"categories":null,"content":"","date":1519862400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1519862400,"objectID":"909177765a8620fb34153b86a95abe95","permalink":"https://geertlitjens.nl/publication/tell-18/","publishdate":"2018-03-01T00:00:00Z","relpermalink":"/publication/tell-18/","section":"publication","summary":"Manual counting of mitotic tumor cells in tissue sections constitutes one of the strongest prognostic markers for breast cancer. This procedure, however, is time-consuming and error-prone. We developed a method to automatically detect mitotic figures in breast cancer tissue sections based on convolutional neural networks (CNNs). Application of CNNs to hematoxylin and eosin (H\u0026E) stained histological tissue sections is hampered by: (1) noisy and expensive reference standards established by pathologists, (2) lack of generalization due to staining variation across laboratories, and (3) high computational requirements needed to process gigapixel whole-slide images (WSIs). In this paper, we present a method to train and evaluate CNNs to specifically solve these issues in the context of mitosis detection in breast cancer WSIs. First, by combining image analysis of mitotic activity in phosphohistone-H3 (PHH3) restained slides and registration, we built a reference standard for mitosis detection in entire H\u0026E WSIs requiring minimal manual annotation effort. Second, we designed a data augmentation strategy that creates diverse and realistic H\u0026E stain variations by modifying the hematoxylin and eosin color channels directly. Using it during training combined with network ensembling resulted in a stain invariant mitosis detector. Third, we applied knowledge distillation to reduce the computational requirements of the mitosis detection ensemble with a negligible loss of performance. 
The system was trained in a single-center cohort and evaluated in an independent multicenter cohort from The Cancer Genome Atlas on the three tasks of the Tumor Proliferation Assessment Challenge (TUPAC). We obtained a performance within the top-3 best methods for most of the tasks of the challenge.","tags":null,"title":"Whole-Slide Mitosis Detection in H\u0026E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks","type":"publication"},{"authors":["W. Bulten","C. Hulsbergen - van de Kaa","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1517443200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517443200,"objectID":"5c4cc8f004b5f5e9e3f3953cba95feed","permalink":"https://geertlitjens.nl/publication/bult-18/","publishdate":"2018-02-01T00:00:00Z","relpermalink":"/publication/bult-18/","section":"publication","summary":"Prostate cancer is generally graded by pathologists based on hematoxylin and eosin (H\u0026E) stained slides. Because of the large size of the tumor areas in radical prostatectomies (RP), this task can be tedious and error prone with known high interobserver variability. Recent advancements in deep learning have enabled development of automated systems that may assist pathologists in prostate diagnostics. As prostate cancer originates from glandular tissue, an important prerequisite for development of such algorithms is the possibility to automatically differentiate between glandular tissue and other tissues. In this paper, we propose a method for automatically segmenting epithelial tissue in digitally scanned prostatectomy slides based on deep learning. We collected 30 single-center whole mount tissue sections, with reported Gleason growth patterns ranging from 3 to 5, from 27 patients that underwent RP. Two different network architectures, U-Net and regular fully convolutional networks with varying depths, were trained using a set of sparsely annotated slides. 
We evaluated the trained networks on exhaustively annotated regions from a separate test set. The test set contained both healthy and cancerous epithelium with different Gleason growth patterns. The results show the effectiveness of our approach given a pixel-based AUC score of 0.97. Our method contains no prior assumptions on glandular morphology, does not directly rely on the presence of lumina and all features are learned by the network itself. The generated segmentation can be used to highlight regions of interest for pathologists and to improve cancer annotations to further enhance an automatic cancer grading system.","tags":null,"title":"Automated segmentation of epithelial tissue in prostatectomy slides using deep learning","type":"publication"},{"authors":["D. Geijs","M. Intezar","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1517443200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517443200,"objectID":"9d0850f560cdd3cd850d2ec0727eb230","permalink":"https://geertlitjens.nl/publication/geij-18/","publishdate":"2018-02-01T00:00:00Z","relpermalink":"/publication/geij-18/","section":"publication","summary":"Assessment of immunohistochemically stained slides is often a crucial diagnostic step in clinical practice. However, as this assessment is generally performed visually by pathologists it can suffer from significant inter-observer variability. The introduction of whole slide scanners facilitates automated analysis of immunohistochemical slides. Color deconvolution (CD) is one of the most popular first steps in quantifying stain density in histopathological images. However, color deconvolution requires stain color vectors for accurate unmixing. Often it is assumed that these stain vectors are static. In practice, however, they are influenced by many factors. This can cause inferior CD unmixing and thus typically results in poor quantification. 
Some automated methods exist for color stain vector estimation, but most require a significant amount of each stain to be present in the whole slide images. In this paper, we propose a method for automatically finding stain color vectors and unmixing IHC stained whole slide images, even when some stains are sparsely expressed. We collected 16 tonsil slides and stained them for different periods of time with hematoxylin and a DAB-colored proliferation marker Ki67. RGB pixels of the WSIs were converted to the hue-saturation-density (HSD) color domain, and subsequently K-means clustering was used to separate stains and calculate the stain color vectors for each slide. Our results show that staining time affects the stain vectors and that calculating a unique stain vector for each slide yields better unmixing than using a standard stain vector.","tags":null,"title":"Automatic color unmixing of IHC stained whole slide images","type":"publication"},{"authors":["D. Tellez","M. Balkenhol","N. Karssemeijer","G. Litjens","J. van der Laak","F. Ciompi"],"categories":null,"content":"","date":1517443200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517443200,"objectID":"d5065e8dda6fcb49e97d519af2866b68","permalink":"https://geertlitjens.nl/publication/tell-18-a/","publishdate":"2018-02-01T00:00:00Z","relpermalink":"/publication/tell-18-a/","section":"publication","summary":"The number of mitotic figures per tumor area observed in hematoxylin and eosin (H and E) histological tissue sections under light microscopy is an important biomarker for breast cancer prognosis. Whole-slide imaging and computational pathology have enabled the development of automatic mitosis detection algorithms based on convolutional neural networks (CNNs). These models can suffer from high generalization error, i.e. 
trained networks often underperform on datasets originating from pathology laboratories different from the one that provided the training data, mainly due to the presence of inter-laboratory stain variations. We propose a novel data augmentation strategy that exploits the properties of the H and E color space to simulate a broad range of realistic H and E stain variations. To the best of our knowledge, this is the first time that data augmentation is performed directly in the H and E color space, instead of RGB. The proposed technique uses color deconvolution to transform RGB images into the H and E color space, modifies the H and E color channels stochastically, and projects them back to RGB space. We trained a CNN-based mitosis detector on homogeneous data from a single institution, and tested its performance on an external, multicenter cohort that contained a wide range of unseen H and E stain variations. We compared CNNs trained with and without the proposed augmentation strategy and observed a significant improvement in performance and robustness to unseen stain variations when the new color augmentation technique was included. In essence, we have shown that CNNs can be made robust to inter-lab stain variation by incorporating extensive stain augmentation techniques.","tags":null,"title":"H\u0026E stain augmentation improves generalization of convolutional networks for histopathological mitosis detection","type":"publication"},{"authors":["Z. Swiderska-Chadaj","H. Pinckaers","M. van Rijthoven","M. Balkenhol","M. Melnikova","O. Geessink","Q. Manson","G. Litjens","J. van der Laak","F. 
Ciompi"],"categories":null,"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1514764800,"objectID":"3ea4fab0c092e3f6cdada7c8d29316bc","permalink":"https://geertlitjens.nl/publication/swid-18/","publishdate":"2018-01-01T00:00:00Z","relpermalink":"/publication/swid-18/","section":"publication","summary":"Recent advances in cancer immunotherapy have boosted the interest in the role played by the immune system in cancer treatment. In particular, the presence of tumor-infiltrating lymphocytes (TILs) has become a central research topic in oncology and pathology. Consequently, a method to automatically detect and quantify immune cells is of great interest. In this paper, we present a comparison of different deep learning (DL) techniques for the detection of lymphocytes in immunohistochemically stained (CD3 and CD8) slides of breast, prostate and colon cancer. The compared methods cover the state-of-the-art in object localization, classification and segmentation: Locality Sensitive Method (LSM), U-net, You Only Look Once (YOLO) and fully-convolutional networks (FCNN). A dataset with 109,841 annotated cells from 58 whole-slide images was used for this study. Overall, U-net and YOLO achieved the highest results, with an F1-score of 0.78 in regular tissue areas. The U-net approach was more robust to biological and staining variability and could also handle staining and tissue artifacts.","tags":null,"title":"Convolutional Neural Networks for Lymphocyte detection in Immunohistochemically Stained Whole-Slide Images","type":"publication"},{"authors":["H. Pinckaers","G. 
Litjens"],"categories":null,"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1514764800,"objectID":"f582cb04bcbbb58dac33ca48bde9ab54","permalink":"https://geertlitjens.nl/publication/pinc-18/","publishdate":"2018-01-01T00:00:00Z","relpermalink":"/publication/pinc-18/","section":"publication","summary":"To train deep convolutional neural networks, the input data and the intermediate activations need to be kept in memory to calculate the gradient descent step. Given the limited memory available in the current generation accelerator cards, this limits the maximum dimensions of the input data. We demonstrate a method to train convolutional neural networks holding only parts of the image in memory while giving equivalent results. We quantitatively compare this new way of training convolutional neural networks with conventional training. In addition, as a proof of concept, we train a convolutional neural network with 64 megapixel images, which requires 97% less memory than the conventional approach.","tags":null,"title":"Training convolutional neural networks with megapixel images","type":"publication"},{"authors":["W. Bulten","G. Litjens"],"categories":null,"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1514764800,"objectID":"d0c84e96a38af8874631295d37366a10","permalink":"https://geertlitjens.nl/publication/bult-18-a/","publishdate":"2018-01-01T00:00:00Z","relpermalink":"/publication/bult-18-a/","section":"publication","summary":"We propose an unsupervised method using self-clustering convolutional adversarial autoencoders to classify prostate tissue as tumor or non-tumor without any labeled training data. The clustering method is integrated into the training of the autoencoder and requires only little post-processing. Our network trains on hematoxylin and eosin (H\u0026E) input patches and we tested two different reconstruction targets, H\u0026E and immunohistochemistry (IHC). 
We show that antibody-driven feature learning using IHC helps the network to learn relevant features for the clustering task. Our network achieves an F1 score of 0.62 using only a small set of validation labels to assign classes to clusters.","tags":null,"title":"Unsupervised Prostate Cancer Detection on H\u0026E using Convolutional Adversarial Autoencoders","type":"publication"},{"authors":null,"categories":null,"content":"Report by Nieuwsuur on the results of our JAMA publication which showed that AI could perform at the level of expert pathologists. ","date":1513123200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1513123200,"objectID":"731e20ad53b89dbc981521bf03f1218d","permalink":"https://geertlitjens.nl/media/nieuwsuur_camelyon/","publishdate":"2019-04-22T21:08:25+02:00","relpermalink":"/media/nieuwsuur_camelyon/","section":"media","summary":"Report by Nieuwsuur on the results of our [JAMA publication](/publication/ehte-17/) which showed that AI could perform at the level of expert pathologists.","tags":[],"title":"Report on Nieuwsuur about CAMELYON.","type":"media"},{"authors":["B. Ehteshami Bejnordi","M. Veta","P. van Diest","B. van Ginneken","N. Karssemeijer","G. Litjens","J. van der Laak","t. Consortium"],"categories":null,"content":"","date":1512086400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1512086400,"objectID":"5c89790677a66912549d04f2817dec00","permalink":"https://geertlitjens.nl/publication/ehte-17/","publishdate":"2017-12-01T00:00:00Z","relpermalink":"/publication/ehte-17/","section":"publication","summary":"Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. 
Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. 
For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P ","tags":null,"title":"Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer","type":"publication"},{"authors":["B. Bejnordi","G. Zuidhof","M. Balkenhol","M. Hermsen","P. Bult","B. van Ginneken","N. Karssemeijer","G. Litjens","J. van der Laak"],"categories":null,"content":"","date":1506816e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1506816e3,"objectID":"2010ee905929416d27f191cb36873170","permalink":"https://geertlitjens.nl/publication/bejn-17-b/","publishdate":"2017-10-01T00:00:00Z","relpermalink":"/publication/bejn-17-b/","section":"publication","summary":"Currently, histopathological tissue examination by a pathologist represents the gold standard for breast lesion diagnostics. Automated classification of histopathological whole-slide images (WSIs) is challenging owing to the wide range of appearances of benign lesions and the visual similarity of ductal carcinoma in-situ (DCIS) to invasive lesions at the cellular level. Consequently, analysis of tissue at high resolutions with a large contextual area is necessary. We present context-aware stacked convolutional neural networks (CNN) for classification of breast WSIs into normal/benign, DCIS, and invasive ductal carcinoma (IDC). We first train a CNN using high pixel resolution to capture cellular level information. The feature responses generated by this model are then fed as input to a second CNN, stacked on top of the first. Training of this stacked architecture with large input patches enables learning of fine-grained (cellular) details and global tissue structures. Our system is trained and evaluated on a dataset containing 221 WSIs of hematoxylin and eosin stained breast tissue specimens. 
The system achieves an AUC of 0.962 for the binary classification of nonmalignant and malignant slides and obtains a three-class accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC, demonstrating its potential for routine diagnostics.","tags":null,"title":"Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images","type":"publication"},{"authors":null,"categories":null,"content":"How will AI impact medicine, and specifically, pediatric medicine? These questions I will try to answer during this presentation with specific examples. Specifically, I will also address the impact in low-resource countries.\n","date":1497225600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1497225600,"objectID":"8b4fd91410185488505bab11eb63c43c","permalink":"https://geertlitjens.nl/talk/kindergeneeskunde/","publishdate":"2017-06-12T00:00:00Z","relpermalink":"/talk/kindergeneeskunde/","section":"talk","summary":"How will AI impact medicine, and specifically, pediatric medicine? These questions I will try to answer during this presentation with specific examples. Specifically, I will also address the impact in low-resource countries.","tags":[],"title":"The Digital Doctor","type":"talk"},{"authors":["S. Steens","E. Bekers","W. Weijs","G. Litjens","A. Veltien","A. Maat","G. van den Broek","J. van der Laak","J. Fütterer","C. van der Kaa","M. Merkx","R. 
Takes"],"categories":null,"content":"","date":1493596800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1493596800,"objectID":"55630d2422509dba5d1e61e06419d828","permalink":"https://geertlitjens.nl/publication/stee-17/","publishdate":"2017-05-01T00:00:00Z","relpermalink":"/publication/stee-17/","section":"publication","summary":"Purpose of this feasibility study was (1) to evaluate whether application of ex-vivo 7T MR of the resected tongue specimen containing squamous cell carcinoma may provide information on the resection margin status and (2) to evaluate the research and developmental issues that have to be solved for this technique to have the beneficial impact on clinical outcome that we expect: better oncologic and functional outcomes, better quality of life, and lower costs. We performed a non-blinded validation of ex-vivo 7T MR to detect the tongue squamous cell carcinoma and resection margin in 10 fresh tongue specimens using histopathology as gold standard. In six of seven specimens with a histopathologically determined invasion depth of the tumor of [Formula: see text] mm, the tumor could be recognized on MR, with a resection margin within a 2 mm range as compared to histopathology. In three specimens with an invasion depth of [Formula: see text] mm, the tumor was not visible on MR. Technical limitations mainly included scan time, image resolution, and the fact that we used a less available small-bore 7T MR machine. Ex-vivo 7T probably will have a low negative predictive value but a high positive predictive value, meaning that in tumors thicker than a few millimeters we expect to be able to predict whether the resection margin is too small. 
A randomized controlled trial needs to be performed to test our hypothesis: better oncologic and functional outcomes, better quality of life, and lower costs.","tags":["Ex-vivo; Magnetic resonance imaging; Squamous cell carcinoma; Tongue; Validation"],"title":"Evaluation of tongue squamous cell carcinoma resection margins using ex-vivo MR.","type":"publication"},{"authors":["P. Bandi","R. de Loo","M. Intezar","D. Geijs","F. Ciompi","B. van Ginneken","J. van der Laak","G. Litjens"],"categories":null,"content":"","date":1491004800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1491004800,"objectID":"fd6e98ce45adfc947e3d9c2e0504684a","permalink":"https://geertlitjens.nl/publication/band-17/","publishdate":"2017-04-01T00:00:00Z","relpermalink":"/publication/band-17/","section":"publication","summary":"Tissue segmentation is an important pre-requisite for efficient and accurate diagnostics in digital pathology. However, it is well known that whole-slide scanners can fail to detect all tissue regions, for example due to tissue type or weak staining, because their tissue detection algorithms are not robust enough. In this paper, we introduce two different convolutional neural network architectures for whole slide image segmentation to accurately identify the tissue sections. We also compare the algorithms to a published traditional method. We collected 54 whole slide images with differing stains and tissue types from three laboratories to validate our algorithms. We show that while the two methods do not differ significantly, they outperform their traditional counterpart (Jaccard index of 0.937 and 0.929 vs. 0.870, p ","tags":null,"title":"Comparison of Different Methods for Tissue Segmentation In Histopathological Whole-Slide Images","type":"publication"},{"authors":["F. Ciompi","O. Geessink","B. Bejnordi","G. Silva de Souza","A. Baidoshvili","G. Litjens","B. van Ginneken","I. Nagtegaal","J. 
van der Laak"],"categories":null,"content":"","date":1491004800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1491004800,"objectID":"5e58e4cd11ba17a4b585b36f0de57074","permalink":"https://geertlitjens.nl/publication/ciom-17/","publishdate":"2017-04-01T00:00:00Z","relpermalink":"/publication/ciom-17/","section":"publication","summary":"The development of reliable imaging biomarkers for the analysis of colorectal cancer (CRC) in hematoxylin and eosin (H\u0026E) stained histopathology images requires an accurate and reproducible classification of the main tissue components in the image. In this paper, we propose a system for CRC tissue classification based on convolutional networks (ConvNets). We investigate the importance of stain normalization in tissue classification of CRC tissue samples in H\u0026E-stained images. Furthermore, we report the performance of ConvNets on a cohort of rectal cancer samples and on an independent publicly available dataset of colorectal H\u0026E images.","tags":null,"title":"The importance of stain normalization in colorectal tissue classification with convolutional networks","type":"publication"},{"authors":["M. Dalmis","G. Litjens","K. Holland","A. Setio","R. Mann","N. Karssemeijer","A. Gubern-Mérida"],"categories":null,"content":"","date":1485907200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1485907200,"objectID":"1c38ce9d7a91f33011019f96c1f512c6","permalink":"https://geertlitjens.nl/publication/dalm-17/","publishdate":"2017-02-01T00:00:00Z","relpermalink":"/publication/dalm-17/","section":"publication","summary":"Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task. 
However, applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, in addition to various MRI artifacts, makes it a challenging task to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as \"U-net.\" We used a dataset of 66 breast MRIs randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: nonbreast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used Dice Similarity Coefficient (DSC) to measure the performances of the automated methods, with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed based on manual and automated segmentations. The average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 obtained from 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. 
The average DSC values for FGT segmentation obtained from 3C U-net, 2C U-nets, and atlas-based methods were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than 0.957 as obtained from 2C U-nets (P ","tags":["MRI; breast segmentation; deep learning"],"title":"Using deep learning to segment breast and fibroglandular tissue in MRI volumes","type":"publication"},{"authors":["G. Litjens","T. Kooi","B. Ehteshami Bejnordi","A. Setio","F. Ciompi","M. Ghafoorian","J. van der Laak","B. van Ginneken","C. Sánchez"],"categories":null,"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1483228800,"objectID":"d268cec438479049c16ce266ce870779","permalink":"https://geertlitjens.nl/publication/litj-17/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/litj-17/","section":"publication","summary":"Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.","tags":null,"title":"A Survey on Deep Learning in Medical Image Analysis","type":"publication"},{"authors":["T. Kooi","G. Litjens","B. van Ginneken","A. Gubern-Mérida","C. Sánchez","R. Mann","A. den Heeten","N. 
Karssemeijer"],"categories":null,"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1483228800,"objectID":"0e6ab122d08f5b062a4ef6a4aee28bc3","permalink":"https://geertlitjens.nl/publication/kooi-17/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/kooi-17/","section":"publication","summary":"Recent advances in machine learning have yielded new techniques to train deep neural networks, which resulted in highly successful applications in many pattern recognition tasks such as object detection and speech recognition. In this paper, we provide a head-to-head comparison between a state-of-the-art mammography CAD system, relying on a manually designed feature set, and a Convolutional Neural Network (CNN), aiming for a system that can ultimately read mammograms independently. Both systems are trained on a large data set of around 45,000 images, and results show the CNN outperforms the traditional CAD system at low sensitivity and performs comparably at high sensitivity. We subsequently investigate to what extent features such as location and patient information and commonly used manual features can still complement the network and see improvements at high specificity over the CNN, especially with location and context features, which contain information not available to the CNN. Additionally, a reader study was performed, where the network was compared to certified screening radiologists on a patch level, and we found no significant difference between the network and the readers.","tags":["Breast cancer; Computer aided detection; Convolutional neural networks; Deep learning; Machine learning; Mammography"],"title":"Large scale deep learning for computer aided detection of mammographic lesions","type":"publication"},{"authors":["M. Ghafoorian","N. Karssemeijer","T. Heskes","I. van Uden","C. Sánchez","G. Litjens","F. de Leeuw","B. van Ginneken","E. Marchiori","B. 
Platel"],"categories":null,"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1483228800,"objectID":"0e454615e2a098b3c1933f62ddce85cd","permalink":"https://geertlitjens.nl/publication/ghaf-17-c/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/ghaf-17-c/","section":"publication","summary":"The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).","tags":null,"title":"Location Sensitive Deep Convolutional Neural Networks for Segmentation of White Matter Hyperintensities","type":"publication"},{"authors":["S. Laban","G. Giebel","N. Klümper","A. Schröck","J. Doescher","G. Spagnoli","J. Thierauf","M. Theodoraki","R. Remark","S. Gnjatic","R. Krupar","A. Sikora","G. Litjens","N. Grabe","G. Kristiansen","F. Bootz","P. Schuler","C. Brunner","J. Brägelmann","T. Hoffmann","S. 
Perner"],"categories":null,"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1483228800,"objectID":"0837322b886e4c88e11f1c4b14b0f17b","permalink":"https://geertlitjens.nl/publication/laba-17/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/laba-17/","section":"publication","summary":"Melanoma associated antigens (MAGE) are potential targets for immunotherapy and have been associated with poor overall survival (OS) in head and neck squamous cell carcinoma (HNSCC). However, little is known about MAGE in lymph node metastases (LNM) and recurrent disease (RD) of HNSCC. To assess whether MAGE expression increases with metastasis or recurrence, a tissue microarray (TMA) of 552 primary tumors (PT), 219 LNM and 75 RD was evaluated by immunohistochemistry for MAGE antigens using three monoclonal antibodies to multiple MAGE family members. Mean expression intensity (MEI) was obtained from triplicates of each tumor specimen. The median MEI compared between PT, LNM and RD was significantly higher in LNM and RD. In paired samples, MEI was comparable in PT to respective LNM, but significantly different from RD. Up to 25% of patients were negative for pan-MAGE or MAGE-A3/A4 in PT, but positive in RD. The prognostic impact of MAGE expression was validated in the TMA cohort and also in TCGA data (mRNA). OS was significantly lower for patients expressing pan-MAGE or MAGE-A3/A4 in both independent cohorts. MAGE expression was confirmed as a prognostic marker in HNSCC and may be important for immunotherapeutic strategies as a shared antigen.","tags":["HNSCC; MAGE; cancer-testis antigens; head and neck squamous cell carcinoma; melanoma-associated antigen"],"title":"MAGE expression in head and neck squamous cell carcinoma primary tumors, lymph node metastases and respective recurrences: implications for immunotherapy","type":"publication"},{"authors":["O. Debats","A. Fortuin","H. Meijer","T. Hambrock","G. Litjens","J. 
Barentsz","H. Huisman"],"categories":null,"content":"","date":147528e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":147528e4,"objectID":"619d2eb03af757843773cd87d568e9af","permalink":"https://geertlitjens.nl/publication/deba-16-a/","publishdate":"2016-10-01T00:00:00Z","relpermalink":"/publication/deba-16-a/","section":"publication","summary":"The key to MR lymphography is suppression of T2* MR signal in normal lymph nodes, while retaining high signal in metastatic nodes. Our objective is to quantitatively compare the ability of ferumoxtran-10 and ferumoxytol to suppress the MR signal in normal pelvic lymph nodes. In 2010, a set of consecutive patients who underwent intravenous MR Lymphography (MRL) were included. Signal suppression in normal lymph nodes in T2*-weighted images due to uptake of USPIO (Ultra-Small Superparamagnetic Particles of Iron Oxide) was quantified. Signal suppression by two USPIO contrast agents, ferumoxtran-10 and ferumoxytol was compared using Wilcoxon's signed rank test. Forty-four patients were included, of which all 44 had a ferumoxtran-10 MRL and 4 had additionally a ferumoxytol MRL. A total of 684 lymph nodes were identified in the images, of which 174 had been diagnosed as metastatic. USPIO-induced signal suppression in normal lymph nodes was significantly stronger in ferumoxtran-10 MRL than in ferumoxytol MRL (p ","tags":["Lymph node metastases; MR lymphography; Prostate cancer; USPIO contrast agents"],"title":"Intranodal signal suppression in pelvic MR lymphography of prostate cancer patients: a quantitative comparison of ferumoxtran-10 and ferumoxytol.","type":"publication"},{"authors":["B. Bejnordi","M. Balkenhol","G. Litjens","R. Holland","P. Bult","N. Karssemeijer","J. 
van der Laak"],"categories":null,"content":"","date":1472688e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1472688e3,"objectID":"a25a82c65c90b1532ae1065004abfa8c","permalink":"https://geertlitjens.nl/publication/ehte-16-a/","publishdate":"2016-09-01T00:00:00Z","relpermalink":"/publication/ehte-16-a/","section":"publication","summary":"This paper presents and evaluates a fully automatic method for detection of ductal carcinoma in situ (DCIS) in digitized hematoxylin and eosin (H\u0026E) stained histopathological slides of breast tissue. The proposed method applies multi-scale superpixel classification to detect epithelial regions in whole-slide images (WSIs). Subsequently, spatial clustering is utilized to delineate regions representing meaningful structures within the tissue such as ducts and lobules. A region-based classifier employing a large set of features including statistical and structural texture features and architectural features is then trained to discriminate between DCIS and benign/normal structures. The system is evaluated on two datasets containing a total of 205 WSIs of breast tissue. Evaluation was conducted both on the slide and the lesion level using FROC analysis. The results show that to detect at least one true positive in every DCIS containing slide, the system finds 2.6 false positives per WSI. The results of the per-lesion evaluation show that it is possible to detect 80% and 83% of the DCIS lesions in an abnormal slide, at an average of 2.0 and 3.0 false positives per WSI, respectively. Collectively, the results of the experiments demonstrate the efficacy and accuracy of the proposed method as well as its potential for application in routine pathological diagnostics. To the best of our knowledge, this is the first DCIS detection algorithm working fully automatically on WSIs.","tags":null,"title":"Automated Detection of DCIS in Whole-Slide H\u0026E Stained Breast Histopathology Images","type":"publication"},{"authors":["B. 
Bejnordi","G. Litjens","N. Timofeeva","I. Otte-Holler","A. Homeyer","N. Karssemeijer","J. van der Laak"],"categories":null,"content":"","date":1472688e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1472688e3,"objectID":"afad9f90499851f6b27e746b250d87e8","permalink":"https://geertlitjens.nl/publication/ehte-16/","publishdate":"2016-09-01T00:00:00Z","relpermalink":"/publication/ehte-16/","section":"publication","summary":"Variations in the color and intensity of hematoxylin and eosin (H\u0026E) stained histological slides can potentially hamper the effectiveness of quantitative image analysis. This paper presents a fully automated algorithm for standardization of whole-slide histopathological images to reduce the effect of these variations. The proposed algorithm, called whole-slide image color standardizer (WSICS), utilizes color and spatial information to classify the image pixels into different stain components. The chromatic and density distributions for each of the stain components in the hue-saturation-density color model are aligned to match the corresponding distributions from a template whole-slide image (WSI). The performance of the WSICS algorithm was evaluated on two datasets. The first originated from 125 H\u0026E stained WSIs of lymph nodes, sampled from 3 patients, and stained in 5 different laboratories on different days of the week. The second comprised 30 H\u0026E stained WSIs of rat liver sections. The results of qualitative and quantitative evaluations using the first dataset demonstrate that the WSICS algorithm outperforms competing methods in terms of achieving color constancy. The WSICS algorithm consistently yields the smallest standard deviation and coefficient of variation of the normalized median intensity measure. Using the second dataset, we evaluated the impact of our algorithm on the performance of an already published necrosis quantification system. 
The performance of this system was significantly improved by utilizing the WSICS algorithm. The results of the empirical evaluations collectively demonstrate the potential contribution of the proposed standardization algorithm to improved diagnostic accuracy and consistency in computer-aided diagnosis for histopathology data.","tags":null,"title":"Stain specific standardization of whole-slide histopathological images","type":"publication"},{"authors":["O. Debats","M. Meijs","G. Litjens","H. Huisman"],"categories":null,"content":"","date":1464739200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1464739200,"objectID":"9f3c4788df83c0cbb65d4e70bd739c20","permalink":"https://geertlitjens.nl/publication/deba-16/","publishdate":"2016-06-01T00:00:00Z","relpermalink":"/publication/deba-16/","section":"publication","summary":"To investigate whether atlas-based anatomical information can improve a fully automated lymph node detection system for pelvic MR lymphography (MRL) images of patients with prostate cancer. Their data set contained MRL images of 240 prostate cancer patients who had an MRL as part of their clinical work-up between January 2008 and April 2010, with ferumoxtran-10 as contrast agent. Each MRL consisted of at least a 3D T1-weighted sequence, a 3D T2*-weighted sequence, and a FLASH-3D sequence. The reference standard was created by two expert readers, reading in consensus, who annotated and interactively segmented the lymph nodes in all MRL studies. A total of 5089 lymph nodes were annotated. A fully automated computer-aided detection (CAD) system was developed to find lymph nodes in the MRL studies. The system incorporates voxel features based on image intensities, the Hessian matrix, and spatial position. After feature calculation, a GentleBoost-classifier in combination with local maxima detection was used to identify lymph node candidates. 
Multi-atlas-based anatomical information was added to the CAD system to assess whether this could improve performance. Using histogram analysis and free-response receiver operating characteristic analysis, this was compared to a strategy where relative position features were used to encode anatomical information. Adding atlas-based anatomical information to the CAD system reduced false positive detections both visually and quantitatively. Median likelihood values of false positives decreased significantly in all annotated anatomical structures. The sensitivity increased from 53% to 70% at 10 false positives per lymph node. Adding anatomical information through atlas registration significantly improves an automated lymph node detection system for MRL images.","tags":null,"title":"Automated multistructure atlas-assisted detection of lymph nodes using pelvic MR lymphography in prostate cancer patients","type":"publication"},{"authors":["R. Remark","T. Merghoub","N. Grabe","G. Litjens","D. Damotte","J. Wolchok","M. Merad","S. Gnjatic"],"categories":null,"content":"","date":1464739200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1464739200,"objectID":"b32f3aa6b022cbaac24a8e93d79e698f","permalink":"https://geertlitjens.nl/publication/rema-16/","publishdate":"2016-06-01T00:00:00Z","relpermalink":"/publication/rema-16/","section":"publication","summary":"Despite remarkable recent achievements of immunotherapy strategies in cancer treatment, clinical responses remain limited to subsets of patients. Predictive markers of disease course and response to immunotherapy are urgently needed. Recent results have revealed the potential predictive value of immune cell phenotype and spatial distribution at the tumor site, prompting the need for multidimensional immunohistochemical analyses of tumor tissues. 
To address this need, we developed a sample-sparing, highly multiplexed immunohistochemistry technique based on iterative cycles of tagging, image scanning, and destaining of chromogenic substrate on a single slide. This assay, in combination with a newly developed automated digital landscaping solution, democratizes access to high-dimensional immunohistochemical analyses by capturing the complexity of the immunome using routine pathology standards. Applications of the method extend beyond cancer to screen and validate comprehensive panels of tissue-based prognostic and predictive markers, perform in-depth in situ monitoring of therapies, and identify targets of disease.","tags":null,"title":"In-depth tissue profiling using multiplexed immunohistochemical consecutive staining on single slide","type":"publication"},{"authors":null,"categories":null,"content":"During Alpe d’HuZes in 2016 I was given the honor to receive the Bas Mulder Award to continue my research. I was invited to speak a bit about my work during the live broadcast of the event. ","date":1464739200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1464739200,"objectID":"fdea8e0cc940dca62a478e5943bcfb57","permalink":"https://geertlitjens.nl/media/alpehuzes-interview/","publishdate":"2019-04-23T21:33:56+02:00","relpermalink":"/media/alpehuzes-interview/","section":"media","summary":"During Alpe d'HuZes in 2016 I was given the honor to receive the Bas Mulder Award to continue my research. I was invited to speak a bit about my work during the live broadcast of the event.","tags":[],"title":"Interview Regional TV at Alpe d'HuZes","type":"media"},{"authors":["G. Litjens","K. Safferling","N. 
Grabe"],"categories":null,"content":"","date":1451606400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1451606400,"objectID":"2315f2140b00401b995a10905b3e1de6","permalink":"https://geertlitjens.nl/publication/litj-16/","publishdate":"2016-01-01T00:00:00Z","relpermalink":"/publication/litj-16/","section":"publication","summary":"Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slides to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. 
Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.","tags":null,"title":"Automated robust registration of grossly misregistered whole-slide images with varying stains","type":"publication"},{"authors":["G. Litjens","R. Elliott","N. Shih","M. Feldman","T. Kobus","C. Hulsbergen-van de Kaa","J. Barentsz","H. Huisman","A. Madabhushi"],"categories":null,"content":"","date":1451606400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1451606400,"objectID":"82cfec958c1e605dfa7cb1cc96ca2de8","permalink":"https://geertlitjens.nl/publication/litj-16-b/","publishdate":"2016-01-01T00:00:00Z","relpermalink":"/publication/litj-16-b/","section":"publication","summary":"To determine the best features to discriminate prostate cancer from benign disease and its relationship to benign disease class and cancer grade. The institutional review board approved this study and waived the need for informed consent. A retrospective cohort of 70 patients (age range, 48-70 years; median, 62 years), all of whom were scheduled to undergo radical prostatectomy and underwent preoperative 3-T multiparametric magnetic resonance (MR) imaging, including T2-weighted, diffusion-weighted, and dynamic contrast material-enhanced imaging, was included. The digitized prostatectomy slides were annotated for cancer and noncancerous disease and coregistered to MR imaging with an interactive deformable coregistration scheme. Computer-identified features for each of the noncancerous disease categories (eg, benign prostatic hyperplasia [BPH], prostatic intraepithelial neoplasia [PIN], inflammation, and atrophy) and prostate cancer were extracted. Feature selection was performed to identify the features with the highest discriminatory power. 
The performance of these five features was evaluated by using the area under the receiver operating characteristic curve (AUC). High-b-value diffusion-weighted images were more discriminative in distinguishing BPH from prostate cancer than apparent diffusion coefficient, which was most suitable for distinguishing PIN from prostate cancer. The focal appearance of lesions on dynamic contrast-enhanced images may help discriminate atrophy and inflammation from cancer. Which imaging features are discriminative for different benign lesions is influenced by cancer grade. The apparent diffusion coefficient appeared to be the most discriminative feature in identifying high-grade cancer. Classification results showed increased performance by taking into account specific benign types (AUC = 0.70) compared with grouping all noncancerous findings together (AUC = 0.62). The best features with which to discriminate prostate cancer from noncancerous benign disease depend on the type of benign disease and cancer grade. Use of the best features may result in better diagnostic performance.","tags":["Adenocarcinoma","diagnosis/pathology/surgery; Aged; Diagnosis","Differential; Humans; Magnetic Resonance Imaging","methods; Male; Middle Aged; Prostatectomy; Prostatic Neoplasms","diagnosis/pathology/surgery; Retrospective Studies"],"title":"Computer-extracted Features Can Distinguish Noncancerous Confounding Disease from Prostatic Adenocarcinoma at Multiparametric MR Imaging.","type":"publication"},{"authors":["G. Litjens","C. Sánchez","N. Timofeeva","M. Hermsen","I. Nagtegaal","I. Kovacs","C. Hulsbergen-van de Kaa","P. Bult","B. van Ginneken","J. 
van der Laak"],"categories":null,"content":"","date":1451606400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1451606400,"objectID":"67a699fe05242bd055785a569fef8047","permalink":"https://geertlitjens.nl/publication/litj-16-c/","publishdate":"2016-01-01T00:00:00Z","relpermalink":"/publication/litj-16-c/","section":"publication","summary":"Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.","tags":null,"title":"Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis","type":"publication"},{"authors":["A. Setio","F. Ciompi","G. Litjens","P. Gerke","C. Jacobs","S. van Riel","M. Wille","M. Naqibullah","C. Sánchez","B. 
van Ginneken"],"categories":null,"content":"","date":1451606400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1451606400,"objectID":"91c313e7e38c2494dca4ee57b51a49a7","permalink":"https://geertlitjens.nl/publication/seti-16/","publishdate":"2016-01-01T00:00:00Z","relpermalink":"/publication/seti-16/","section":"publication","summary":"We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, for which the outputs are combined using a dedicated fusion method to get the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We showed that the proposed multi-view ConvNets are highly suited for false positive reduction in a CAD system.","tags":["Cancer;Computed tomography;Design automation;Feature extraction;Lesions;Lungs;Solids;computed tomography;computer-aided detection;convolutional networks;deep learning;lung cancer;pulmonary nodule"],"title":"Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks","type":"publication"},{"authors":["G. Litjens","J. Barentsz","N. Karssemeijer","H. 
Huisman"],"categories":null,"content":"","date":1446336e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1446336e3,"objectID":"4edffde050a5b7712316a745051678f5","permalink":"https://geertlitjens.nl/publication/litj-15-b/","publishdate":"2015-11-01T00:00:00Z","relpermalink":"/publication/litj-15-b/","section":"publication","summary":"To investigate the added value of computer-aided diagnosis (CAD) on the diagnostic accuracy of PI-RADS reporting and the assessment of cancer aggressiveness. Multi-parametric MRI and histopathological outcome of MR-guided biopsies of a consecutive set of 130 patients were included. All cases were prospectively PI-RADS reported and the reported lesions underwent CAD analysis. Logistic regression combined the CAD prediction and radiologist PI-RADS score into a combination score. Receiver-operating characteristic (ROC) analysis and Spearman's correlation coefficient were used to assess the diagnostic accuracy and correlation to cancer grade. Evaluation was performed for discriminating benign lesions from cancer and for discriminating indolent from aggressive lesions. In total, 141 lesions (107 patients) were included for final analysis. The area-under-the-ROC-curve of the combination score was higher than for the PI-RADS score of the radiologist (benign vs. cancer, 0.88 vs. 0.81, p = 0.013 and indolent vs. aggressive, 0.88 vs. 0.78, p ","tags":null,"title":"Clinical evaluation of a computer-aided diagnosis system for determining cancer aggressiveness in prostate MRI","type":"publication"},{"authors":["E. Vos","T. Kobus","G. Litjens","T. Hambrock","C. Hulsbergen-van de Kaa","J. Barentsz","M. Maas","T. 
Scheenen"],"categories":null,"content":"","date":1438387200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1438387200,"objectID":"a3c186457ac4948612b9f773764756af","permalink":"https://geertlitjens.nl/publication/vos-15/","publishdate":"2015-08-01T00:00:00Z","relpermalink":"/publication/vos-15/","section":"publication","summary":"The aim of this study was to determine and validate the optimal combination of parameters derived from 3-T diffusion-weighted imaging, dynamic contrast-enhanced imaging, and magnetic resonance (MR) spectroscopic imaging for discriminating low-grade from high-grade prostate cancer (PCa). The study was approved by the institutional review board, and the need for informed consent was waived. Ninety-four patients with PCa who had undergone multiparametric MR imaging (MRI) before prostatectomy were included. Cancer was indicated on T2-weighted images, blinded to any functional data, with prostatectomy specimens as the reference standard. Tumors were classified as low grade or high grade based on Gleason score; peripheral zone (PZ) and transition zone (TZ) tumors were analyzed separately. In a development set (43 patients), the optimal combination of multiparametric MRI parameters was determined using logistic regression modeling. Subsequently, this combination was evaluated in a separate validation set (51 patients). In the PZ, the 25th percentile of apparent diffusion coefficient (ADC) derived from diffusion-weighted imaging and washout (WO25) derived from dynamic contrast-enhanced MRI offered the optimal combination of parameters. In the TZ, WO25 and the choline over spermine + creatine ratio (C/SC) derived from MR spectroscopic imaging showed the highest discriminating performance. 
Using the models built with the development set, 48 (74%) of 65 cancer lesions were classified correctly in the validation set. Multiparametric MRI is a useful tool for the discrimination between low-grade and high-grade PCa and performs better than any individual functional parameter in both the PZ and TZ. The 25th percentile of ADC + WO25 offered the optimal combination in the PZ, and the choline over spermine + creatine ratio + WO25 offered the optimal combination in the TZ. The ADC parameter has no additional value for the assessment of PCa aggressiveness in the TZ.","tags":null,"title":"Multiparametric Magnetic Resonance Imaging for Discriminating Low-Grade From High-Grade Prostate Cancer","type":"publication"},{"authors":null,"categories":null,"content":"To accurately determine the outcome of clinical trials careful analysis of the biomarkers is required. In recent years this has become more and more complex due to the quantity of biomarkers that has to be assessed in new clinical trials. Furthermore, as we move to more personalized therapy, we need to be able to measure even subtle changes in biomarker expression, putting a larger emphasis on accurate and precise biomarker quantification. These changes have made manual assessment of biomarker expression tedious, time-consuming, and, often, inaccurate.\n Quantification of CD3-positive cells inside (yellow) and outside (green) the invasive margin of prostate cancer (orange) specimens. Within this project, funded by the Humboldt Foundation, we set out to develop machine learning tools to automate the quantification and discovery of biomarkers in a variety of clinical trials. 
Specifically, we looked at applications in prostate cancer immunotherapy, lung cancer prognosis, and MAGE-expression in head and neck cancers.\n","date":1427883394,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1427883394,"objectID":"73b82fdca559d0218563f5242365bb35","permalink":"https://geertlitjens.nl/project/aahctd/","publishdate":"2015-04-01T12:16:34+02:00","relpermalink":"/project/aahctd/","section":"project","summary":"Machine learning methods were used to automatically extract and quantify biomarkers from histopathological clinical trial data.","tags":[],"title":"Automated Analysis of Histopathological Clinical Trial Data","type":"project"},{"authors":["B. Bejnordi","G. Litjens","M. Hermsen","N. Karssemeijer","J. van der Laak"],"categories":null,"content":"","date":1422748800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1422748800,"objectID":"cd51f282435afd044e42a3144c8778fa","permalink":"https://geertlitjens.nl/publication/bejn-15/","publishdate":"2015-02-01T00:00:00Z","relpermalink":"/publication/bejn-15/","section":"publication","summary":"This paper presents a new algorithm for automatic detection of regions of interest in whole slide histopathological images. The proposed algorithm generates and classifies superpixels at multiple resolutions to detect regions of interest. The algorithm emulates the way the pathologist examines the whole slide histopathology image by processing the image at low magnifications and performing more sophisticated analysis only on areas requiring more detailed information. However, instead of the traditional usage of fixed sized rectangular patches for the identification of relevant areas, we use superpixels as the visual primitives to detect regions of interest. Rectangular patches can span multiple distinct structures, thus degrading the classification performance. 
The proposed multi-scale superpixel classification approach yields superior performance for the identification of the regions of interest. For the evaluation, a set of 10 whole slide histopathology images of breast tissue were used. Empirical evaluation of the performance of our proposed algorithm relative to expert manual annotations shows that the algorithm achieves an area under the Receiver operating characteristic (ROC) curve of 0.958, demonstrating its efficacy for the detection of regions of interest.","tags":["histopathology","whole-slide imaging","multi-scale superpixels","region of interest","breast cancer"],"title":"A multi-scale superpixel classification approach for region of interest detection in whole slide histopathology images","type":"publication"},{"authors":null,"categories":null,"content":"In Nijmegen it is typical to give a 10-minute layman’s presentation on your PhD research before the start of the official defense.\n","date":1421971200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1421971200,"objectID":"bab62b68417e4e48ec435cfca00a5c64","permalink":"https://geertlitjens.nl/talk/lekenpraatje/","publishdate":"2015-01-23T00:00:00Z","relpermalink":"/talk/lekenpraatje/","section":"talk","summary":"The layman's presentation on my PhD research on computerized detection of prostate cancer in multi-parametric MRI.","tags":[],"title":"PhD Thesis Defense","type":"talk"},{"authors":["G. Litjens","B. Bejnordi","N. Timofeeva","G. Swadi","I. Kovacs","C. Hulsbergen-van de Kaa","J. 
van der Laak"],"categories":null,"content":"","date":1420070400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1420070400,"objectID":"fbdc52c6ed8cf65e4f30964359077340","permalink":"https://geertlitjens.nl/publication/litj-15/","publishdate":"2015-01-01T00:00:00Z","relpermalink":"/publication/litj-15/","section":"publication","summary":"Automated detection of prostate cancer in digitized H\u0026E whole-slide images is an important first step for computer-driven grading. Most automated grading algorithms work on preselected image patches as they are too computationally expensive to calculate on the multi-gigapixel whole-slide images. An automated multi-resolution cancer detection system could reduce the computational workload for subsequent grading and quantification in two ways: by excluding areas of definitely normal tissue within a single specimen or by excluding entire specimens which do not contain any cancer. In this work we present a multi-resolution cancer detection algorithm geared towards the latter. The algorithm methodology is as follows: at a coarse resolution the system uses superpixels, color histograms and local binary patterns in combination with a random forest classifier to assess the likelihood of cancer. The five most suspicious superpixels are identified and at a higher resolution more computationally expensive graph and gland features are added to refine classification for these superpixels. Our methods were evaluated in a data set of 204 digitized whole-slide H\u0026E stained images of MR-guided biopsy specimens from 163 patients. A pathologist exhaustively annotated the specimens for areas containing cancer. The performance of our system was evaluated using ten-fold cross-validation, stratified according to patient. Image-based receiver operating characteristic (ROC) analysis was subsequently performed where a specimen containing cancer was considered positive and specimens without cancer negative. 
We obtained an area under the ROC curve of 0.96 and a 0.4 specificity at a 1.0 sensitivity.","tags":null,"title":"Automated detection of prostate cancer in digitized whole-slide images of H\u0026E-stained biopsy specimens","type":"publication"},{"authors":["G. Litjens"],"categories":null,"content":"","date":1420070400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1420070400,"objectID":"856b4acce8b986aa62060876715c173b","permalink":"https://geertlitjens.nl/publication/litj-15-a/","publishdate":"2015-01-01T00:00:00Z","relpermalink":"/publication/litj-15-a/","section":"publication","summary":"Prostate MRI is becoming an increasingly popular imaging technique for the detection of prostate cancer. However, it requires a substantial amount of expertise and time from radiologists to accurately report on prostate MRI. Furthermore, quantitative analysis is needed for accurate assessment of cancer aggressiveness in vivo. Computer-aided detection and diagnosis (CAD) systems are excellent tools to tackle these challenges. In this thesis the design of such a system is discussed. CAD systems are typically a connected pipeline of differing algorithms performing consecutive tasks, for example segmentation, feature extraction and classification. We followed a similar outline in this thesis. In Chapter 2 we discuss the segmentation of the prostate capsule in the setting of a 'grand challenge'. Further division of the prostate capsule in distinct anatomical zones is the topic of Chapter 3. After segmentation of the capsule and the prostate zones, features discriminative for cancer and cancer aggression are discussed in Chapter 4 and 5. The interconnection of the components into one unified CAD system is subsequently discussed in Chapter 6 and the evaluation of the system in a clinical setting in Chapter 7. 
In this last Chapter we not only show that designing a fully automated CAD system is feasible, but that it can be used to the benefit of radiologists.","tags":null,"title":"Computerized detection of cancer in multi-parametric prostate MRI","type":"publication"},{"authors":null,"categories":null,"content":"Interview about my participation in FameLab in 2015.\n","date":1420070400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1420070400,"objectID":"d8b6b2664097ed985020f0909851c898","permalink":"https://geertlitjens.nl/media/vox-famelab/","publishdate":"2019-04-23T21:32:50+02:00","relpermalink":"/media/vox-famelab/","section":"media","summary":"Interview about my participation in FameLab in 2015.","tags":[],"title":"Interview with Vox about FameLab","type":"media"},{"authors":null,"categories":null,"content":"ASAP (Automated Slide Analysis Platform) was developed by the Computational Pathology Group, part of the Diagnostic Image Analysis Group, at the Radboud University Medical Center. It was started out of frustration with the freely available software for annotating multi-resolution digital pathology images. For more details head to the project site.\n","date":1396347934,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1396347934,"objectID":"a892a29b488b96a0632a17e8dd11f4ca","permalink":"https://geertlitjens.nl/project/asap/","publishdate":"2014-04-01T12:25:34+02:00","relpermalink":"/project/asap/","section":"project","summary":"Open source whole-slide image IO library and viewer","tags":["Active","Software","Viewer","Open-Source"],"title":"ASAP","type":"project"},{"authors":["G. Litjens","R. Toth","W. van de Ven","C. Hoeks","S. Kerkstra","B. van Ginneken","G. Vincent","G. Guillard","N. Birbeck","J. Zhang","R. Strand","F. Malmberg","Y. Ou","C. Davatzikos","M. Kirschner","F. Jung","J. Yuan","W. Qiu","Q. Gao","P. Edwards","B. Maan","F. van der Heijden","S. Ghose","J. Mitra","J. Dowling","D. Barratt","H. Huisman","A. 
Madabhushi"],"categories":null,"content":"","date":1391212800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1391212800,"objectID":"078ad90742ef3d13f74bf7d520e9f883","permalink":"https://geertlitjens.nl/publication/litj-14/","publishdate":"2014-02-01T00:00:00Z","relpermalink":"/publication/litj-14/","section":"publication","summary":"Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. 
The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p","tags":null,"title":"Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge","type":"publication"},{"authors":["G. Litjens","O. Debats","J. Barentsz","N. Karssemeijer","H. Huisman"],"categories":null,"content":"","date":1388534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1388534400,"objectID":"0c9f831bb43040aa266adac42c617010","permalink":"https://geertlitjens.nl/publication/litj-14-c/","publishdate":"2014-01-01T00:00:00Z","relpermalink":"/publication/litj-14-c/","section":"publication","summary":"Prostate cancer is one of the major causes of cancer death for men in the western world. Magnetic resonance imaging (MRI) is being increasingly used as a modality to detect prostate cancer. Therefore, computer-aided detection of prostate cancer in MRI images has become an active area of research. In this paper we investigate a fully automated computer-aided detection system which consists of two stages. In the first stage, we detect initial candidates using multi-atlas-based prostate segmentation, voxel feature extraction, classification and local maxima detection. The second stage segments the candidate regions and using classification we obtain cancer likelihoods for each candidate. Features represent pharmacokinetic behavior, symmetry and appearance, among others. The system is evaluated on a large consecutive cohort of 347 patients with MR-guided biopsy as the reference standard. This set contained 165 patients with cancer and 182 patients without prostate cancer. Performance evaluation is based on lesion-based free-response receiver operating characteristic curve and patient-based receiver operating characteristic analysis.
The system is also compared to the prospective clinical performance of radiologists. Results show a sensitivity of 0.42, 0.75, and 0.89 at 0.1, 1, and 10 false positives per normal case. In clinical workflow the system could potentially be used to improve the sensitivity of the radiologist. At the high specificity reading setting, which is typical in screening situations, the system does not perform significantly different from the radiologist and could be used as an independent second reader instead of a second radiologist. Furthermore, the system has potential in a first-reader setting.","tags":null,"title":"Computer-aided detection of prostate cancer in MRI","type":"publication"},{"authors":["G. Litjens","N. Karssemeijer","J. Barentsz","H. Huisman"],"categories":null,"content":"","date":1388534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1388534400,"objectID":"41c34fe678d27cc94bca8bbbcb626ab7","permalink":"https://geertlitjens.nl/publication/litj-14-e/","publishdate":"2014-01-01T00:00:00Z","relpermalink":"/publication/litj-14-e/","section":"publication","summary":"PURPOSE Accurate reporting of multi-parametric prostate magnetic resonance imaging (mpMRI) is difficult and requires substantial experience. We investigate the effect of computer-aided diagnosis (CAD) on the diagnostic accuracy of prostate MRI reporting. METHOD AND MATERIALS Two consecutive cohorts of patients were used. One for training/development of the CAD system (347 patients) and one for the prospective evaluation (130 patients). Both cohorts comprise mpMRI and subsequent MR-guided biopsy and pathology. The mpMRIs were ESUR guideline compliant and performed on a Siemens 3T MRI without the use of an endo-rectal coil. Both cohorts were prospectively reported by one of ten radiologists according to the PI-RADS guidelines. Experience of the radiologists ranged from inexperienced to very experienced (1-20 years). 
The computer-aided diagnosis (CAD) system consisted of a voxel classification stage and a subsequent candidate segmentation and classification stage. Features include quantified T2, ADC, pharmacokinetics, texture and anatomical characteristics. ROC and FROC analysis was used to evaluate performance. For the prospective validation the CAD system assigned a score to each radiologist-identified lesion. Logistic regression combining the radiologist and CAD scores was used to emulate independent, prospective CAD reading. Subsequently, the diagnostic performance in detecting intermediate-to-high-grade cancer of the CAD system alone, the radiologist alone and the radiologist CAD-system combination was evaluated using sensitivity and specificity for the different PI-RADS thresholds. Bootstrapping was used to assess significance. RESULTS FROC analyses showed that the CAD system could detect 82% of all intermediate-to-high-grade lesions at 1 false positive per case. Combined CAD and radiologist score significantly improved the sensitivity at a PI-RADS 4 threshold over the radiologist alone (0.98 for the combination, 0.93 for the radiologist alone, p = 0.029). A significantly improved specificity was found at a PI-RADS threshold of 3 (0.25 versus 0.09, p = 0.013). CONCLUSION CAD can achieve excellent performance. As a second observer to characterize prostate lesions it can improve sensitivity and specificity in discriminating intermediate-to-high-grade cancer. CLINICAL RELEVANCE/APPLICATION Improving the performance of mpMRI in the detection of prostate cancer by CAD can prevent unnecessary biopsies.","tags":null,"title":"Computer-aided Detection of Prostate Cancer in Multi-parametric Magnetic Resonance Imaging","type":"publication"},{"authors":["G. Litjens","H. Huisman","R. Elliott","N. Shih","M. Feldman","J. Futterer","J. Bomers","A.
Madabhushi"],"categories":null,"content":"","date":1388534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1388534400,"objectID":"983c6c3d1df1f411ae771bf92f5b9b5c","permalink":"https://geertlitjens.nl/publication/litj-14-b/","publishdate":"2014-01-01T00:00:00Z","relpermalink":"/publication/litj-14-b/","section":"publication","summary":"","tags":null,"title":"Distinguishing benign confounding treatment changes from residual prostate cancer on MRI following laser ablation","type":"publication"},{"authors":["G. Litjens","R. Elliott","N. Shih","M. Feldman","J. Barentsz","C. Hulsbergen - van de Kaa","I. Kovacs","H. Huisman","A. Madabhushi"],"categories":null,"content":"","date":1388534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1388534400,"objectID":"416e79575c2927e02bb1bcae4e1b6e96","permalink":"https://geertlitjens.nl/publication/litj-14-a/","publishdate":"2014-01-01T00:00:00Z","relpermalink":"/publication/litj-14-a/","section":"publication","summary":"","tags":null,"title":"Distinguishing prostate cancer from benign confounders via a cascaded classifier on multi-parametric MRI","type":"publication"},{"authors":["E. Vos","T. Kobus","G. Litjens","T. Hambrock","C. de Kaa","M. Maas","T. Scheenen"],"categories":null,"content":"","date":1388534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1388534400,"objectID":"8c1674d2fea87d40d8973dcd67370226","permalink":"https://geertlitjens.nl/publication/vos-14-c/","publishdate":"2014-01-01T00:00:00Z","relpermalink":"/publication/vos-14-c/","section":"publication","summary":"","tags":null,"title":"Multiparametric MR imaging for the assessment of prostate cancer aggressiveness at 3 Tesla","type":"publication"},{"authors":["G. Litjens","H. Huisman","R. Elliott","N. Shih","M. Feldman","S. Viswanath","J. Fütterer","J. Bomers","A. 
Madabhushi"],"categories":null,"content":"","date":1388534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1388534400,"objectID":"49c1fbd50deb511a392502488a69fe79","permalink":"https://geertlitjens.nl/publication/litj-14-d/","publishdate":"2014-01-01T00:00:00Z","relpermalink":"/publication/litj-14-d/","section":"publication","summary":"","tags":null,"title":"Quantitative identification of magnetic resonance imaging features of prostate cancer response following laser ablation and radical prostatectomy","type":"publication"},{"authors":["E. Vos","G. Litjens","T. Kobus","T. Hambrock","C. Kaa","J. Barentsz","H. Huisman","T. Scheenen"],"categories":null,"content":"","date":1356998400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1356998400,"objectID":"2b29c9a6cac3a72d51f973e1913ccc6b","permalink":"https://geertlitjens.nl/publication/vos-13/","publishdate":"2013-01-01T00:00:00Z","relpermalink":"/publication/vos-13/","section":"publication","summary":"","tags":null,"title":"Assessment of Prostate Cancer Aggressiveness Using Dynamic Contrast-enhanced Magnetic Resonance Imaging at 3 T","type":"publication"},{"authors":["K. Nagel","M. Schouten","T. Hambrock","G. Litjens","C. Hoeks","B. Haken","J. Barentsz","J. Fütterer"],"categories":null,"content":"","date":1356998400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1356998400,"objectID":"6b7a4797b9a4662a97a2590be5e59076","permalink":"https://geertlitjens.nl/publication/nage-13/","publishdate":"2013-01-01T00:00:00Z","relpermalink":"/publication/nage-13/","section":"publication","summary":"Purpose: To determine if prostatitis and prostate cancer (PCa) can be distinguished by using apparent diffusion coefficients (ADCs) on magnetic resonance (MR) images, with specimens obtained at MR-guided biopsy as the standard of reference. Materials and Methods: The need for institutional review board approval and informed consent was waived.
MR-guided biopsies were performed in 130 consecutive patients with cancer-suspicious regions (CSRs) on multiparametric MR images obtained at 3 T. In this retrospective study, 88 patients met the inclusion criteria. During the biopsy procedure, an axial diffusion-weighted sequence was performed and ADC maps were generated (repetition time msec/echo time msec, 2000/67; section thickness, 4 mm; in-plane resolution, 1.8 × 1.8 mm; and b values of 0, 100, 500, and 800 sec/mm(2)). Subsequently, a confirmation image with the needle left in situ was acquired and projected on the ADC map. The corresponding ADCs at the biopsy location were compared with the histopathologic outcomes of the biopsy specimens. Linear mixed-model regression analyses were used to test for ADC differences between the histopathologic groups. Results: The study included 116 biopsy specimens. Median ADCs of normal prostate tissue, prostatitis, low-grade PCa (Gleason grade components 2 or 3), and high-grade PCa (Gleason grade components 4 or 5) were 1.22 × 10(-3) mm(2)/sec (standard deviation, ± 0.21), 1.08 × 10(-3) mm(2)/sec (± 0.18), 0.88 × 10(-3) mm(2)/sec (± 0.15), and 0.88 × 10(-3) mm(2)/sec (± 0.13), respectively. Although the median ADCs of biopsy specimens with prostatitis were significantly higher compared with low- and high-grade PCa (P ","tags":null,"title":"Differentiation of Prostatitis and Prostate Cancer by Using Diffusion-weighted MR Imaging and MR-guided Biopsy at 3 T","type":"publication"},{"authors":["G. Litjens","J. Barentsz","N. Karssemeijer","H.
Huisman"],"categories":null,"content":"","date":1356998400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1356998400,"objectID":"792bb48000695031609b97a8de8bd54e","permalink":"https://geertlitjens.nl/publication/litj-13/","publishdate":"2013-01-01T00:00:00Z","relpermalink":"/publication/litj-13/","section":"publication","summary":"Purpose: To evaluate the performance of the prostate imaging reporting and data standard (PI-RADS) and the effect of reader experience. Materials and Methods: A consecutive patient cohort of 254 patients who underwent both a detection MRI between January 1st, 2012 and December 31st, 2012 and a subsequent MR guided biopsy were included in this study. All patients were prospectively reported by one radiologist according to the PI-RADS guidelines. In total 10 different radiologists reported patients in this cohort. Of these 10 radiologists, 2 radiologists can be considered experts (19 and 12 years of experience with prostate MRI) and 8 can be considered inexperienced (3 years or less experience with prostate MRI). Together, the 2 experienced readers reported 108 patients and 146 were reported by the 8 inexperienced readers. The radiologists reported 436 lesions in these patients of which 339 were biopsied. Of these 339 biopsied lesions 190 were prostate cancer. Of these 190 prostate cancers, 127 lesions had a Gleason 4 or higher component and were considered high-grade prostate cancer, all others were considered low grade. The distribution of non-cancers, low-grade cancers and high-grade cancer was similar between the inexperienced and experienced observers (44%/19%/36% vs. 46%/16%/38%). Each lesion received, according to the PI-RADS guidelines, a score between 1 and 5. The sensitivity, specificity, positive predictive value and negative predictive value were calculated at each of the PI-RADS scores relative to the biopsy results. High-grade cancers with a PI-RADS score above or equal to the threshold are true positives.
Non-cancers below the threshold were considered true negatives. This was done for both the inexperienced and experienced radiologists. Results: In total 19 PI-RADS 2, 67 PI-RADS 3, 112 PI-RADS 4 and 141 PI-RADS 5 lesions were biopsied. No PI-RADS 1 lesions were biopsied. The inexperienced reader sensitivities for PI-RADS 2, 3, 4 and 5 are: 1, 1, 0.96 and 0.69 respectively. The experienced readers obtained 1, 1, 0.98 and 0.71. The corresponding specificities were 0, 0.16, 0.48 and 0.71 for the inexperienced and 0, 0.07, 0.36 and 0.76 for the experienced readers. The positive and negative predictive values were 0.46, 0.50, 0.61, 0.71 and 1, 1, 0.93, 0.74 for the inexperienced readers. For the experienced readers we obtained 0.46, 0.48, 0.57, 0.84 and 1, 1, 0.96, 0.78 respectively. Conclusion: In this population we can see that especially PI-RADS 4 and 5 classifications have excellent sensitivity, specificity, PPV and NPV characteristics. From this data we conclude that only PI-RADS 4 and 5 lesions require biopsy; inexperienced and experienced readers have sensitivities of 0.96 and 0.98 at this threshold. Experience matters: the number of unnecessary biopsies in PI-RADS 5 lesions reduces from 29/100 to 16/100 between inexperienced and experienced readers. Clinical relevance: PI-RADS reported lesions may help reduce the number of unnecessary biopsies. The strong effect of experience emphasizes the need for adequately trained radiologists for reporting prostate MR.","tags":null,"title":"Initial prospective evaluation of the prostate imaging reporting and data standard (PI-RADS): Can it reduce unnecessary MR guided biopsies?","type":"publication"},{"authors":["M. Maas","M. Koopman","G. Litjens","A. Wright","K. Selnas","I. Gribbestad","M. Haider","K. Macura","D. Margolis","B. Kiefer","J. Futterer","T.
Scheenen"],"categories":null,"content":"","date":1356998400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1356998400,"objectID":"14ce600f6c40168ae6af91d2c8d54026","permalink":"https://geertlitjens.nl/publication/maas-13-b/","publishdate":"2013-01-01T00:00:00Z","relpermalink":"/publication/maas-13-b/","section":"publication","summary":"","tags":null,"title":"Prostate Cancer localization with a Multiparametric MR Approach (PCaMAP): initial results of a multi-center study","type":"publication"},{"authors":["G. Litjens","T. Hambrock","C. Hulsbergen-van de Kaa","J. Barentsz","H. Huisman"],"categories":null,"content":"","date":1343779200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1343779200,"objectID":"594b11f9c85ee420a432390df7113d0e","permalink":"https://geertlitjens.nl/publication/litj-12-b/","publishdate":"2012-08-01T00:00:00Z","relpermalink":"/publication/litj-12-b/","section":"publication","summary":"Purpose: To determine the interpatient variability of prostate peripheral zone (PZ) apparent diffusion coefficient (ADC) and its effect on the assessment of prostate cancer aggressiveness. Materials and Methods: The requirement for institutional review board approval was waived. Intra- and interpatient variation of PZ ADCs was determined by means of repeated measurements of normal ADCs at three magnetic resonance (MR) examinations in a retrospective cohort of 10 consecutive patients who had high prostate-specific antigen levels and negative findings at transrectal ultrasonographically-guided biopsy. In these patients, no signs of PZ cancer were found at all three MR imaging sessions. The effect of interpatient variation on the assessment of prostate cancer aggressiveness was examined in a second retrospective cohort of 51 patients with PZ prostate cancer. Whole-mount step-section pathologic evaluation served as reference standard for placement of regions of interest on tumors and normal PZ.
Repeated-measures analysis of variance was used to determine the significance of the interpatient variations in ADCs. Linear logistic regression was used to assess whether incorporating normal PZ ADCs improves the prediction of cancer aggressiveness. Results: Analysis of variance revealed that interpatient variability (1.2-2.0 × 10(-3) mm(2)/sec) was significantly larger than measurement variability (0.068 × 10(-3) mm(2)/sec ± 0.027 [standard deviation]) (P = .0058). Stand-alone tumor ADCs showed an area under the receiver operating characteristic curve (AUC) of 0.91 for discriminating low-grade versus high-grade tumors. Incorporating normal PZ ADC significantly improved the AUC to 0.96 (P = .0401). Conclusion: PZ ADCs show significant interpatient variation, which has a substantial effect on the prediction of prostate cancer aggressiveness. Correcting this effect results in a significant increase in diagnostic accuracy. © RSNA, 2012.","tags":null,"title":"Interpatient Variation in Normal Peripheral Zone Apparent Diffusion Coefficient: Effect on the Prediction of Prostate Cancer Aggressiveness","type":"publication"},{"authors":["G. Litjens","J. Barentsz","N. Karssemeijer","H. Huisman"],"categories":null,"content":"","date":1328054400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1328054400,"objectID":"6d58feebcba121e2383342bc4a214858","permalink":"https://geertlitjens.nl/publication/litj-12/","publishdate":"2012-02-01T00:00:00Z","relpermalink":"/publication/litj-12/","section":"publication","summary":"MRI has been shown to have great potential in prostate cancer localization and grading, but interpreting those exams requires expertise that is not widely available. Therefore, CAD applications are being developed to aid radiologists in detecting prostate cancer. Existing CAD applications focus on the prostate as a whole. However, in clinical practice transition zone cancer and peripheral zone cancer are considered to have different appearances.
In this paper we present zone-specific CAD, in addition to an atlas based segmentation technique which includes zonal segmentation. Our CAD system consists of a detection and a classification stage. Prior to the detection stage the prostate is segmented into two zones. After segmentation features are extracted. Subsequently a likelihood map is generated on which local maxima detection is performed. For each local maximum a region is segmented. In the classification stage additional shape features are calculated, after which the regions are classified. Validation was performed on 288 data sets with MR-guided biopsy results as ground truth. Free-response Receiver Operating Characteristic (FROC) analysis was used for statistical evaluation. The difference between whole-prostate and zone-specific CAD was assessed using the difference between the FROCs. Our results show that evaluating the two zones separately results in an increase in performance compared to whole-prostate CAD. The FROC curves at .1, 1 and 3 false positives have a sensitivity of 0.0, 0.55 and 0.72 for whole-prostate and 0.08, 0.57 and 0.80 for zone-specific CAD. The FROC curve of the zone-specific CAD also showed significantly better performance overall (p ","tags":null,"title":"Automated computer-aided detection of prostate cancer in MR images: from a whole-organ to a zone-based approach","type":"publication"},{"authors":["G. Litjens","N. Karssemeijer","H. Huisman"],"categories":null,"content":"","date":1325376e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1325376e3,"objectID":"81bfb2af8877ef04cc7ac2b2723bab57","permalink":"https://geertlitjens.nl/publication/litj-12-d/","publishdate":"2012-01-01T00:00:00Z","relpermalink":"/publication/litj-12-d/","section":"publication","summary":"","tags":null,"title":"A multi-atlas approach for prostate segmentation in MRI","type":"publication"},{"authors":["G. Litjens","O. Debats","W. van de Ven","N. Karssemeijer","H. 
Huisman"],"categories":null,"content":"","date":1325376e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1325376e3,"objectID":"47b22b8070fd47a7b4220ed179017f08","permalink":"https://geertlitjens.nl/publication/litj-12-a/","publishdate":"2012-01-01T00:00:00Z","relpermalink":"/publication/litj-12-a/","section":"publication","summary":"Zonal segmentation of the prostate into the central gland and peripheral zone is a useful tool in computer-aided detection of prostate cancer, because occurrence and characteristics of cancer in both zones differ substantially. In this paper we present a pattern recognition approach to segment the prostate zones. It incorporates three types of features that can differentiate between the two zones: anatomical, intensity and texture. It is evaluated against a multi-parametric multi-atlas based method using 48 multi-parametric MRI studies. Three observers are used to assess inter-observer variability and we compare our results against the state of the art from literature. Results show a mean Dice coefficient of 0.89 +/- 0.03 for the central gland and 0.75 +/- 0.07 for the peripheral zone, compared to 0.87 +/-0.04 and 0.76 +/- 0.06 in literature. Summarizing, a pattern recognition approach incorporating anatomy, intensity and texture has been shown to give good results in zonal segmentation of the prostate.","tags":null,"title":"A pattern recognition approach to zonal segmentation of the prostate on MRI","type":"publication"},{"authors":["G. Litjens","J. Barentsz","N. Karssemeijer","H. 
Huisman"],"categories":null,"content":"","date":1325376e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1325376e3,"objectID":"f1a2f1ff973f1437a406ce25bb4c5e02","permalink":"https://geertlitjens.nl/publication/litj-12-c/","publishdate":"2012-01-01T00:00:00Z","relpermalink":"/publication/litj-12-c/","section":"publication","summary":"Purpose: The recent PI-RADS standard considers T2-weighted (T2W) MR the best imaging modality to characterize central gland (CG) lesions. In this study we assessed whether computer-aided diagnosis using T2 texture and relaxation features can separate benign and malignant CG lesions. Materials and Methods: MR scans of 101 patients were included in this study. The reference standard was MR-guided MR biopsy. Of these patients 36 had benign disease (e.g. benign prostatic hyperplasia) and 65 had prostate cancer. Lesions were annotated on the T2W sequence using a contouring tool. A quantitative T2 relaxation map was computed using an estimator that combines the T2W and proton density images with a turbo-spin-echo signal model and a gain factor. The latter was estimated using an automatically selected muscle reference region. Several texture voxel features were computed on the resulting T2-map: co-occurrence matrix based homogeneity, neighboring gray-level dependence matrix based texture strength, and multi-scale Gaussian derivative features. For the latter 5 scales between 2 and 12 mm and derivatives up to the second order were calculated. For the matrix based features we calculated several histogram bin sizes (8, 16 and 32) and kernel sizes (4, 8 and 12 mm). The total number of texture features was 42. A linear discriminant classifier with feature selection was trained to compute the cancer likelihood for each voxel in the lesion. A feature selection was performed in a nested cross-validation loop using 10 folds. Cross-validation was performed in a leave-one-patient-out manner.
For each annotated region a summary lesion likelihood was computed using the 75th percentile of the voxel likelihoods. The diagnostic accuracy of the lesion cancer likelihood was evaluated using receiver-operating characteristic (ROC) analysis and bootstrapping. Results: An area under the ROC curve of 0.76 (95% bootstrap confidence interval 0.64 - 0.87) was obtained for determining cancer likelihood using texture features, which is similar to radiologist performance reported in the literature when they only have T2W images available, like in this study. Conclusion: A novel method for characterizing lesions in T2-weighted MRI using texture descriptors was developed. The performance is in the range of values reported in the literature for radiologists. Clinical relevance: A CAD system for classification of CG lesions could improve the characterization of these lesions, which might result in better treatment planning.","tags":null,"title":"Computerized characterization of central gland lesions using texture and relaxation features from T2-weighted prostate MRI","type":"publication"},{"authors":["E. Vos","G. Litjens","T. Kobus","T. Hambrock","C. Hulsbergen van de Kaa","H. Huisman","T. Scheenen"],"categories":null,"content":"","date":1325376e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1325376e3,"objectID":"bf606a2735a240392aebe958a0cc2db7","permalink":"https://geertlitjens.nl/publication/vos-12/","publishdate":"2012-01-01T00:00:00Z","relpermalink":"/publication/vos-12/","section":"publication","summary":"","tags":null,"title":"Dynamic contrast enhanced MR imaging for the assessment of prostate cancer aggressiveness at 3T","type":"publication"},{"authors":["O.
Huisman"],"categories":null,"content":"","date":1320105600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1320105600,"objectID":"2a06d9a6c2b58d0235c20980c4cd2e2d","permalink":"https://geertlitjens.nl/publication/deba-11/","publishdate":"2011-11-01T00:00:00Z","relpermalink":"/publication/deba-11/","section":"publication","summary":"PURPOSE: Computer aided diagnosis (CAD) of lymph node metastases may help reduce reading time and improve interpretation of the large amount of image data in a 3-D pelvic MRI exam. The purpose of this study was to develop an algorithm for automated segmentation of pelvic lymph nodes from a single seed point, as part of a CAD system for the classification of normal vs metastatic lymph nodes, and to evaluate its performance compared to other algorithms. METHODS: The authors' database consisted of pelvic MR images of 146 consecutive patients, acquired between January 2008 and April 2010. Each dataset included four different MR sequences, acquired after infusion of a lymph node specific contrast medium based on ultrasmall superparamagnetic particles of iron oxide. All data sets were analyzed by two expert readers who, reading in consensus, annotated and manually segmented the lymph nodes. The authors compared four segmentation algorithms: confidence connected region growing (CCRG), extended CCRG (ECC), graph cut segmentation (GCS), and a segmentation method based on a parametric shape and appearance model (PSAM). The methods were ranked based on spatial overlap with the manual segmentations, and based on diagnostic accuracy in a CAD system, with the experts' annotations as reference standard. RESULTS: A total of 2347 manually annotated lymph nodes were included in the analysis, of which 566 contained a metastasis. The mean spatial overlap (Dice similarity coefficient) was: 0.35 (CCRG), 0.57 (ECC), 0.44 (GCS), and 0.46 (PSAM). 
When combined with the classification system, the area under the ROC curve was: 0.805 (CCRG), 0.890 (ECC), 0.807 (GCS), 0.891 (PSAM), and 0.935 (manual segmentation). CONCLUSIONS: We identified two segmentation methods, ECC and PSAM, that achieve a high diagnostic accuracy when used in conjunction with a CAD system for classification of normal vs metastatic lymph nodes. The manual segmentations still achieve the highest diagnostic accuracy.","tags":null,"title":"Automated 3-Dimensional Segmentation of Pelvic Lymph Nodes in Magnetic Resonance Images","type":"publication"},{"authors":["G. Litjens","P. Vos","J. Barentsz","N. Karssemeijer","H. Huisman"],"categories":null,"content":"","date":1298937600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1298937600,"objectID":"920e259a3e3bae461991e6cb7c2112ad","permalink":"https://geertlitjens.nl/publication/litj-11/","publishdate":"2011-03-01T00:00:00Z","relpermalink":"/publication/litj-11/","section":"publication","summary":"Development of CAD systems for detection of prostate cancer has been a recent topic of research. A multi-stage computer aided detection scheme is proposed to help reduce perception and oversight errors in multi-parametric prostate cancer screening MRI. In addition, important features for development of computer aided detection systems for prostate cancer screening MRI are identified. A fast, robust prostate segmentation routine is used to segment the prostate, based on coupled appearance and anatomy models. Subsequently a voxel classification is performed using a support vector machine to compute an abnormality likelihood map of the prostate. This classification step is based on quantitative voxel features like the apparent diffusion coefficient (ADC) and pharmacokinetic parameters. Local maxima in the likelihood map are found using a local maxima detector, after which regions around the local maxima are segmented. 
Region features are computed to represent statistical properties of the voxel features within the regions. Region classification is performed using these features, which results in a likelihood of abnormality per region. Performance was validated using a 188 patient dataset in a leave-one-patient-out manner. Ground truth was annotated by two expert radiologists. The results were evaluated using FROC analysis. The FROC curves show that inclusion of ADC and pharmacokinetic parameter features increases the performance of an automatic detection system. In addition it shows the potential of such an automated system in aiding radiologists diagnosing prostate MR, obtaining sensitivities of 74.7% and 83.4% at 7 and 9 false positives per patient, respectively.","tags":null,"title":"Automatic Computer Aided Detection of Abnormalities in Multi-Parametric Prostate MRI","type":"publication"},{"authors":["O. Debats","T. Hambrock","G. Litjens","H. Huisman","J. Barentsz"],"categories":null,"content":"","date":129384e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":129384e4,"objectID":"54211808ee72d160e4b165d1b1bf5481","permalink":"https://geertlitjens.nl/publication/deba-11-a/","publishdate":"2011-01-01T00:00:00Z","relpermalink":"/publication/deba-11-a/","section":"publication","summary":"PURPOSE: Magnetic Resonance Lymphography (MRL) with ferumoxtran-10 as contrast agent has a high diagnostic accuracy in the detection of lymph node metastases in prostate cancer (PCa) patients, but ferumoxtran-10 is no longer available. Ferumoxytol may be an alternative. Our purpose was to compare the diagnostic accuracy of ferumoxytol-based MRL with ferumoxtran-10-based MRL. METHOD AND MATERIALS: Our dataset contained the MRL images from patients who had had two MRLs as part of their clinical work-up in 2009 or 2010: one with ferumoxtran-10 and one with ferumoxytol as contrast agent. Each MRL consisted of at least a 3D T1-weighted sequence and a 3D T2*-weighted sequence.
Two expert readers (an experienced radiologist and an experienced researcher), reading in consensus, interpreted each lymph node as normal or metastasized, based on the contrast uptake pattern in the T2*-weighted images, and delineated the lymph nodes in both types of MRL. Based on these delineations, quantitative measurements of the contrast-induced signal decrease were performed using histogram analysis. Measurements of signal decrease inside the lymph nodes were used to discriminate metastatic from normal nodes. The areas under the receiver operating characteristic curves (AUC) were computed to compare diagnostic accuracy, and the statistical significance of the difference between the two contrast agents was tested using the R Project for Statistical Computing and the ROCKIT software package. RESULTS: A total of 57 lymph nodes from four patients were included in the analysis, of which 7 were metastatic. The AUC was 0.96 for ferumoxtran-10-MRL, which is similar to results found in previous studies, and 0.87 for ferumoxytol-MRL. The difference between ferumoxtran-10 and ferumoxytol was statistically significant (p","tags":null,"title":"Detection of Lymph Node Metastases with Ferumoxtran-10 vs Ferumoxytol","type":"publication"},{"authors":["M. Schouten","K. Nagel","T. Hambrock","C. Hoeks","G. Litjens","J. Barentsz","J. Fütterer"],"categories":null,"content":"","date":129384e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":129384e4,"objectID":"917868194547fe1802a2d958f374af68","permalink":"https://geertlitjens.nl/publication/scho-11/","publishdate":"2011-01-01T00:00:00Z","relpermalink":"/publication/scho-11/","section":"publication","summary":"PURPOSE To prospectively discriminate between normal prostate tissue, prostatitis and prostate cancer, based on apparent diffusion coefficient (ADC)-values, using MR-guided biopsy specimens as the standard of reference. 
METHOD AND MATERIALS The requirement to obtain institutional review board approval was waived. MR-guided biopsies were performed in 130 consecutive patients with cancer suspicious regions (CSRs) on multi-parametric MR imaging at 3T. Exclusion criteria were patients with suspicion of prostate cancer recurrence after therapy, and biopsy specimens which could not be categorized within the following histopathological groups: normal prostate tissue, prostatitis and prostate cancer. During the biopsy procedure, an axial diffusion-weighted sequence was acquired with construction of ADC maps (TR/TE, 2000/67 ms; section thickness of 4 mm; in-plane resolution, 1.8 x 1.8 mm and b-values of 0, 100, 500 and 800 s/mm2). To verify the biopsy location, a confirmation scan with the needle left in situ was acquired. This confirmation scan was projected on the calculated ADC map in order to draw a region-of-interest (ROI) on the ADC map, representing the biopsied CSR. The obtained ADC-values of this ROI were compared with the histological outcomes of the biopsy specimens. A one-way ANOVA with post-hoc comparison was used to test for ADC-value differences among the three histological groups. Differences were considered to be significant at p","tags":null,"title":"Differentiation of Normal Prostate Tissue, Prostatitis, and Prostate Cancer: Correlation between Diffusion-weighted Imaging and MR-guided Biopsy","type":"publication"},{"authors":["W. van de Ven","G. Litjens","J. Barentsz","T. Hambrock","H. Huisman"],"categories":null,"content":"","date":129384e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":129384e4,"objectID":"cba8e042765bcf1fa6f1ae902ea90632","permalink":"https://geertlitjens.nl/publication/ven-11-a/","publishdate":"2011-01-01T00:00:00Z","relpermalink":"/publication/ven-11-a/","section":"publication","summary":"MR to TRUS guided biopsies can be a cost-effective solution for prostate biopsies. 
Prostate cancer can be detected on MRI and a biopsy can be directed towards a suspicious region. With the help of an accurate MR-US registration method the tumor can also be targeted under transrectal US guidance. For heterogeneous tumors, the needle should be guided towards the most aggressive part of the tumor. Not the tumor size, but the size of this smaller tumor hotspot determines the required accuracy of the registration. We investigate the percentage of tumors that are heterogeneous and the corresponding hotspot volume. Results show a hotspot in 63% of the tumors, with a median volume of 0.3 cm3. By assuming a spherical shape, the required accuracy can be determined. For a 90% tumor hit-rate, the registration error should be less than 2.87 mm.","tags":["MR-US registration; accuracy; hotspot; prostate biopsy"],"title":"Required accuracy of MR-US registration for prostate biopsies","type":"publication"},{"authors":["G. Litjens","J. Barentsz","N. Karssemeijer","H. Huisman"],"categories":null,"content":"","date":129384e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":129384e4,"objectID":"f6bea4d805a0aad84f84ecd36fde9736","permalink":"https://geertlitjens.nl/publication/litj-11-b/","publishdate":"2011-01-01T00:00:00Z","relpermalink":"/publication/litj-11-b/","section":"publication","summary":"PURPOSE Interpretation of multi-parametric MRI findings in the peripheral zone (PZ) or the transition zone (TZ) of the prostate is different. Therefore, this study investigates the performance of zone-specific computer-aided detection (CAD) as opposed to whole-prostate CAD. METHOD AND MATERIALS 117 consecutive prostate MRIs from 2009 were extracted from our database. 71/117 MRIs showed no malignant findings, 26/117 patients had a PZ tumor, 20/117 a TZ tumor. The MRIs were acquired on a 3T MR scanner (Siemens Trio Tim, Erlangen, Germany) and included T2-weighted images (T2WI), dynamic contrast enhanced MRI (DCE-MRI), and diffusion-weighted images (DWI). 
From DCE-MRI and DWI pharmacokinetic parameters (PK) and ADC maps were calculated respectively. Lesion locations were indicated by an expert radiologist. Histology was obtained using MR-guided biopsy or prostatectomy. A two-stage classification strategy was used. The prostate was segmented using an atlas based method including PZ and TZ. First stage voxel classification resulted in a likelihood map, in which local maxima were detected. Then, a region was segmented for each local maximum. Second stage classification resulted in a malignancy likelihood per region. Voxel features used were the T2WI intensities, PK and ADC values and blob detection values for T2WI, ADC and PK images. For the second stage 25th- and 75th-percentiles within the segmented regions were calculated for all voxel features including the initial likelihood map. Classification in both stages was performed using a whole-prostate classifier or two separate zone-specific classifiers. The first stage used linear discriminant classifiers, the second stage support vector machine classifiers. Validation was performed in a leave-one-patient-out manner. FROC calculation and statistical analysis were performed using the JAFROC software package. The figure-of-merit (FOM) used is the area under the alternative FROC (AFROC) curve. RESULTS Zone-specific CAD was significantly better than whole-prostate CAD (FOM 0.63 vs. 0.48, p ","tags":null,"title":"Zone-specific Automatic Computer-aided Detection of Prostate Cancer in MRI","type":"publication"},{"authors":null,"categories":null,"content":"Prostate cancer is the most commonly diagnosed malignancy and the second leading cause of cancer death among men in the Netherlands. Due to the shortcomings of the current diagnostic pathway for prostate cancer, especially with respect to assessing cancer aggressiveness, alternative strategies are being investigated. 
Magnetic resonance imaging (MRI) has emerged as an important modality to assist and potentially replace (part of) the current diagnostic pathway. The high complexity of prostate MRI and the lack of sufficient expertise among the radiological community at large have opened the door for (semi-)automated analysis of prostate MRI by computer systems, with or without human intervention.\n Transversal slide through the prostate in a T2-weighted MRI sequence. Within this project such a system was developed and evaluated. It consisted of several key components: segmentation of the prostate gland in MRI, discovering cancer-specific features, system development and evaluation. The results were reported through a number of publications which are listed below and summarized in my PhD Thesis.\n","date":1262858145,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1262858145,"objectID":"8a08a28001a330e49485719f70e974e2","permalink":"https://geertlitjens.nl/project/mpmri-pca/","publishdate":"2010-01-07T11:55:45+02:00","relpermalink":"/project/mpmri-pca/","section":"project","summary":"Automation of multi-parametric MRI diagnostics via traditional pattern recognition.","tags":[],"title":"Computerized Detection of Prostate Cancer in Multi-Parametric MRI","type":"project"},{"authors":["H. Huisman","P. Vos","G. Litjens","T. Hambrock","J. Barentsz"],"categories":null,"content":"","date":1262304e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1262304e3,"objectID":"795d68b64c2cd74b353e8291c5b26e1f","permalink":"https://geertlitjens.nl/publication/huis-10/","publishdate":"2010-01-01T00:00:00Z","relpermalink":"/publication/huis-10/","section":"publication","summary":"One in 10 men will be diagnosed with prostate cancer during their life. PSA screening in combination with MR is likely to save lives at low biopsy and overtreatment rates. Computer Aided Diagnosis for prostate MR will become mandatory in a high volume screening application. 
This paper presents an overview including our recent work in this area. It includes screening MR setup, quantitative imaging features, prostate segmentation, and pattern recognition.","tags":null,"title":"Computer aided detection of prostate cancer using T2W, DWI and DCE-MRI: methods and clinical applications","type":"publication"},{"authors":["G. Litjens","M. Heisen","J. Buurman","B. Romeny"],"categories":null,"content":"","date":1262304e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1262304e3,"objectID":"6d1d4a2c1709a7ca632bed7e98ca3ebe","permalink":"https://geertlitjens.nl/publication/litj-10/","publishdate":"2010-01-01T00:00:00Z","relpermalink":"/publication/litj-10/","section":"publication","summary":"Pharmacokinetic modeling is increasingly used in DCE-MRI high risk breast cancer screening. Several models are available. The most common models are the standard and extended Tofts, the shutterspeed, and the Brix model. Each model and the meaning of its parameters is explained. It was investigated which models can be used in a clinical setting by simulating a range of sampling rates and noise levels representing different MRI acquisition schemes. In addition, an investigation was performed on the errors introduced in the estimates of the pharmacokinetic parameters when using a physiologically less complex model, i.e. the standard Tofts model, to fit curves generated with more complex models. It was found that the standard Tofts model is the only model that performs within an error margin of 20% on parameter estimates over a range of sampling rates and noise levels. This still holds when small complex physiological effects are present.","tags":null,"title":"Pharmacokinetic models in clinical practice: what model to use for DCE-MRI of the breast?","type":"publication"},{"authors":["G. Litjens","L. Hogeweg","A. Schilham","P. de Jong","M. Viergever","B. 
van Ginneken"],"categories":null,"content":"","date":1262304e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1262304e3,"objectID":"322e6312b4e5aa45a4ee17effbcdb56a","permalink":"https://geertlitjens.nl/publication/litj-10-a/","publishdate":"2010-01-01T00:00:00Z","relpermalink":"/publication/litj-10-a/","section":"publication","summary":"A method is proposed to simulate nodules and diffuse infiltrates in chest radiographs. This allows creation of large annotated databases for training of both radiologists and computer aided diagnosis systems. Realistic nodules and diffuse infiltrates were generated from three-dimensional templates segmented from CT data. These templates are rescaled, rotated, projected and superimposed on a radiograph. This method was compared, in an observer study, to a previously published method that simulates pulmonary nodules as perfectly spherical objects. Results show that it is hard for human observers to distinguish real and simulated nodules when using templates (AUC-values do not significantly differ from .5, p > .05 for all observers). The method that produced spherical nodules performed slightly worse (AUC of one observer differs significantly from .5, p = .011). Simulation of diffuse infiltrates is challenging but also feasible (AUC = 0.67 for one observer).","tags":["Algorithms; Computer Simulation; Humans; Imaging","Three-Dimensional","methods; Lung Diseases","radiography; Models","Biological; Radiographic Image Enhancement","methods; Radiographic Image Interpretation","Computer-Assisted","methods; Radiography","Thoracic","methods; Reproducibility of Results; Sensitivity and Specificity; Tomography","X-Ray Computed","methods"],"title":"Simulation of nodules and diffuse infiltrates in chest radiographs using CT templates","type":"publication"},{"authors":["P. Snoeren","G. Litjens","B. van Ginneken","N. 
Karssemeijer"],"categories":null,"content":"","date":1262304e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1262304e3,"objectID":"e7a2725c1aadedf9ba8cac1662ad4848","permalink":"https://geertlitjens.nl/publication/snoe-10/","publishdate":"2010-01-01T00:00:00Z","relpermalink":"/publication/snoe-10/","section":"publication","summary":"This paper addresses the hypothesis that artificially implanted lung nodules from computed tomography exams (CT exams) into chest radiographs can improve the performance of a computer aided detection system (CAD system). Twenty-four three-dimensional lung nodules were segmented and projected in five directions, mimicking 120 X-rayed nodules. The CAD system was tested by fivefold cross validation on a publicly available database. The results were evaluated by a free-response receiver operating characteristic analysis (FROC). It was found that the performance of the CAD system trained with simulated nodules comes close to the performance of state of the art CAD systems that are trained with real nodules. The CAD system trained with real nodules did improve by adding simulated nodules, but only when there were few real nodules used for training.","tags":null,"title":"Training a Computer Aided Detection System with Simulated Lung Nodules in Chest Radiographs","type":"publication"},{"authors":["Geert Litjens"],"categories":null,"content":"","date":1230768e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1230768e3,"objectID":"320b51cc849a6d244ee4f955e8cee9d7","permalink":"https://geertlitjens.nl/publication/litj-10-b/","publishdate":"2020-10-21T09:26:40.845088Z","relpermalink":"/publication/litj-10-b/","section":"publication","summary":"Breast cancer is a disease which impacts the lives of thousands of people. In the entire world over half a million people die due to breast cancer every year, mostly women. 
However, when breast cancer is detected in the early stages of disease the five-year survival rate approaches 100%. So, it is very important to detect breast cancer as early as possible. That is why in most modern Western societies screening programs for breast cancer have been developed. Most of these screening programs focus on x-ray mammography. However, for women who have an increased risk of breast cancer these screening programs are not adequate. These women usually develop breast cancer at a younger age and x-ray mammography for those women has a low sensitivity. For these cases, and for inconclusive findings in x-ray mammography in other women, dynamic contrast-enhanced (DCE) MRI is used. In DCE MRI a contrast agent is used which takes advantage of the fact that tumor vasculature is leaky and sloppy; thus the contrast agent tends to accumulate in the tumor, leading to increased signal intensity in T1-weighted images. Because we have a time series of images it is possible to look at kinetic behavior. However, as kinetic curves have a large inter- and intra-patient variability, as well as variability depending on the imaging site, the analysis of these curves is not straightforward. Pharmacokinetic modeling could be an answer to those problems as it can be used to obtain lesion-specific physiological parameters. To use pharmacokinetic modeling, however, we need high temporal resolution data, which is not readily available. The University of Chicago Medical Center obtained several high temporal resolution data sets for the initial part of the kinetic enhancement curve in addition to the regular low temporal clinical scans. These data sets were the basis for this research. When analyzing such data several factors play an important role. The first of these is the extraction of the signal-vs.-time curves from the data sets. In this report we used a small graphical user interface to extract the data. 
The low and high temporal resolution images were obtained in different orientations, which was a problem we also needed to solve. The second step after extraction of the signal-vs.-time curves was the conversion of the signal intensity to contrast agent concentration. In the literature there were several methods that were used to accomplish this, but all were based on the use of a gradient recalled echo signal model. We first investigated the assumption that we can neglect T2* effects, which we concluded was allowed. To estimate concentration the tissue T1 at time 0 has to be known. As we had no additional T1 measurements, we used a reference tissue approach to estimate T1. We investigated if the simplifications often used in this method were allowed and we concluded that it was better to use the full model. The last part of the conversion to concentration is the estimation of uncertainty in the concentration curves, which in itself contains several uncertainties. We derived an algebraic expression for these uncertainties using a Taylor expansion of concentration uncertainty. On average the uncertainty levels are around 10% of the concentration. The third step was choosing a pharmacokinetic model; we inspected a total of four models: the standard and extended Tofts models, the shutter speed model and the Brix model. We first assessed the ability of each model to find correct minima using a forward-backward simulation approach. We then simulated data that has the same temporal and uncertainty characteristics as real clinical data and used the same forward-backward approach to estimate model performance. We concluded that for data with those specific characteristics only the standard Tofts model performed adequately. After that we started investigating the data requirements for all models and we could see that for all models except the standard Tofts model the data requirements, especially on temporal resolution, are high. 
Lastly we did an investigation into the errors introduced by assuming that the underlying physiological processes are more simplistic, which is what we do when we use the standard Tofts model. The fourth step was finding the arterial input function, which is used as an input for the pharmacokinetic model. In the literature there are several methods; we discussed three: the use of a standardized input function (population averaged or mathematical), the use of a single reference tissue approach and the use of a multiple reference tissue approach. We found that errors caused by using a population averaged input function can be quite large as deviations from the true local input functions are seen directly in the pharmacokinetic parameters. The single reference tissue approach is another way to estimate the input function. We found that when we know the exact pharmacokinetic parameters of the reference tissue the errors are considerably lower than in the use of a standardized AIF. When the pharmacokinetic parameters of the reference tissue are wrong, however, we can still induce large errors in parameter estimates. The third option was the use of a multiple reference tissue approach, which gave the best results. If multiple reference tissues are available within the data set this option should be used. The last step is to put together the pieces from the previous steps and use that to analyze the clinical data. We were able to use 14 patient data sets. Although a small number, we were able to see that there seems to be a relation between malignancy and Ktrans values. Benign tissues seemed to have lower Ktrans values when compared to malignant tissues. Another question was if we were able to cluster different cancer types according to pharmacokinetic parameters, but we have too little data to support that claim.","tags":null,"title":"Pharmacokinetic modeling in breast cancer MRI","type":"publication"},{"authors":["G. Litjens","M. Heisen","J. Buurman","A. Wood","M. Medved","G. 
Karczmar","B. Haar-Romeny"],"categories":null,"content":"","date":1230768e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1230768e3,"objectID":"f53ad6984851612755adef0fdafbe51b","permalink":"https://geertlitjens.nl/publication/litj-09/","publishdate":"2009-01-01T00:00:00Z","relpermalink":"/publication/litj-09/","section":"publication","summary":"PURPOSE For standardized interpretation of DCEMRI curves, calculation of contrast agent (CA) concentration from signal intensity over time is desired. Accurate measurement of tissue T1 before and after CA administration is thus necessary. Current T1 measurement methods are time-consuming. We propose the use of the 'reference tissue' method for fast T1 measurements concurrent with DCEMRI data acquisition, but with use of a reference phantom. METHOD AND MATERIALS The 'reference tissue' method is based on the approximation that in T1-weighted gradient echo images, signal intensity is proportional to 1/T1; thus signal intensity can be referenced to a tissue or a phantom with a known T1. We compared this method to the 'variable flip angle' method, most commonly used in clinical practice. We compared the 'reference tissue' method (TR/TE = 25/1.1 ms, a=40°) to the 'variable flip angle' method (TR/TE = 25/1.1 ms, a=3/5/10/15/20/25/30/35/40°), using the Eurospin T05 phantom, in which 10 out of 18 vials containing agar with varying concentrations of Gd-DTPA (T1 range: 281–1384 ms) were used. RESULTS With the 'reference tissue' method, using 9 tubes successively as a reference for the remaining tube, the average error in the estimation of T1 was 8.5%, with a standard deviation of 6.1%, and was random. Using the 'variable flip angle' method, the average error was 5.7% with a standard deviation of 3.8%. Using a two-sided Student's t-test we found no statistically significant differences in the performance of the two methods (p-value = 0.52). 
CONCLUSION As DCEMRI imaging is done in a heavily T1-weighted regime, the 'reference tissue' method can be used to concurrently measure T1, and quantify CA concentration. Use of an agar phantom will require a correction for lower proton density in biological tissue. Proton density can be measured prior to CA administration, or tabulated values can be used. We demonstrated the feasibility of fast T1 measurements using a reference phantom, providing T1 maps without additional scanning time. This will allow quantification of the CA concentration throughout the DCEMRI scan, which cannot be achieved using the current clinical method. CLINICAL RELEVANCE/APPLICATION The use of a reference phantom for determining T1 can lead to a drastic reduction in scanning time and thus patient discomfort when compared to a regular clinical 'variable flip angle' scan.","tags":null,"title":"T1 Quantification: Variable Flip Angle Method vs Use of Reference Phantom","type":"publication"},{"authors":null,"categories":null,"content":"Overview Built on the success of its predecessor, CAMELYON17 is the second grand challenge in pathology organised by the Diagnostic Image Analysis Group (DIAG) and Department of Pathology of the Radboud University Medical Center (Radboudumc) in Nijmegen, The Netherlands.\nThe goal of this challenge is to evaluate new and existing algorithms for automated detection and classification of breast cancer metastases in whole-slide images of histological lymph node sections. This task has high clinical relevance and would normally require extensive microscopic assessment by pathologists. The presence of metastases in lymph nodes has therapeutic implications for breast cancer patients. Therefore, an automated solution would hold great promise to reduce the workload of pathologists while at the same time reduce the subjectivity in diagnosis.\n Detection of breast cancer metastases in lymph nodes. 
Task The TNM system is an internationally accepted means to classify the extent of cancer spread in patients with a solid tumour. It is one of the most important tools for clinicians to help them select a suitable treatment option and to obtain an indication of prognosis. Since the histological assessment of lymph node metastases is an essential part of TNM classification, CAMELYON17 will focus on the pathologic N-stage, in short: pN-stage.\nIn clinical practice several lymph nodes are surgically removed after which these nodes are processed in the pathology laboratory. In this challenge we forged artificial patients, with 5 slides provided for each patient where each slide corresponds to exactly one lymph node.\nThe task in this challenge is to determine a pN-stage for every patient in the test dataset. To compose a pN-stage, the number of positive lymph nodes (i.e. nodes with a metastasis) are counted. For the evaluation of the results we use five class quadratic weighted kappa where the classes are the pN-stages.\nWebsite Further information, registration, and the results are available on the challenge website.\n","date":-62135596800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":-62135596800,"objectID":"2c3e90f24b6bc6cd3da533c26acbf573","permalink":"https://geertlitjens.nl/project/camelyon/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/project/camelyon/","section":"project","summary":"Detection of breast cancer metastases in lymph nodes.","tags":["Challenge","Lymph Nodes","Metastases","Machine Learning"],"title":"CAMELYON","type":"project"},{"authors":null,"categories":null,"content":"Most men die with, not because of prostate cancer. This high incidence-to-mortality ratio sounds like a positive trait, but comes with its own share of problems: high risk of overdiagnosis and overtreatment with significant patient morbidity. 
To combat overtreatment, several models have been developed to assign patients to risk categories with differing treatment regimes. Although these models show good correlation with patient outcome on the group level, their benefit for the individual patient remains limited.\n Prostate cancer segmentation using convolutional neural networks. Several groups have shown that quantifying the tumour and its micro-environment at the cellular level can result in biomarkers with strong prognostic potential, for example tumour/stroma ratio, the presence and composition of immune infiltrates or the amount of proliferating (Ki67-positive) cells. However, these findings have not translated to clinical practice due to the cumbersome and subjective manual extraction of these biomarkers from tissue slides.\nRecent years have seen the more widespread introduction of whole-slide imaging systems, which for the first time allow computerized processing of tissue slides. Automated extraction of aforementioned quantitative biomarkers through image analysis can achieve the required accuracy and robustness to impact clinical practice. In tandem with these developments, computer vision has seen a machine learning revolution over the past decade due to the advent of deep learning.\nIn this project, we will combine deep learning and digitized whole-slide imaging of prostate cancer for reproducible extraction of quantitative biomarkers. Furthermore, due to the ability of deep learning systems to learn relevant features without human intervention, we expect to identify novel biomarkers which allow us to further improve the current risk models.\nThe aim of this project is to prevent unnecessary surgery and adjuvant therapy for individual patients by improving currently established risk models. Risk models will be enhanced through the addition of pre- and post-operative quantitative biomarkers obtained via image analysis and deep learning. 
We will focus both on the accurate and objective quantification of biomarkers already identified in literature and the discovery of novel biomarkers.\n","date":-62135596800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":-62135596800,"objectID":"e00b71851dc1664bd52e9c15d57a1f58","permalink":"https://geertlitjens.nl/project/deeppca/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/project/deeppca/","section":"project","summary":"Less unnecessary therapy for prostate cancer patients.","tags":["Active","Machine Learning","Prostate Cancer","Computational Pathology"],"title":"Deep PCa","type":"project"},{"authors":null,"categories":null,"content":"Due to the tripling of skin cancer incidence over the past two decades, more skin biopsies and resections are performed than ever before. This has led to an enormous increase in workload for pathologists, who perform the microscopic diagnostics of skin samples. Most of microscopic skin analysis is not professionally challenging, but it is time consuming and can lead to reduced time for more complex diagnostics and increased wait time for patients. Machine learning and specifically deep learning offers a path to automating the diagnoses of skin samples, which would reduce the pressure on pathologists and the cost of diagnosis, both in time and money.\n Annotation of a Basal Cell Carcinoma, a skin cancer with, typically, good prognosis. We propose not just to develop an algorithm which can perform skin diagnostics at the level of an expert pathologist, but also explicitly identify the most fruitful way of integrating these algorithms into the routine workflow. This project is exceedingly timely, as by the end of 2019 all histopathological diagnostics in the Radboud University Medical Center will be done digitally.\nThe project consists of four work packages. In work package 1, we will collect a large retrospective cohort of annotated and labeled skin biopsies and resections from multiple centers. 
The focus of work package 2 is on the development of algorithms for segmentation of different skin tissue classes, subtyping of basal cell carcinoma, and identification of rare incidental findings. Work packages 3 and 4 cover the development and prospective evaluation of the optimal algorithm-integrated workflow in a real-world clinical setting.\nAfter completion, we will have the world’s first prospectively evaluated algorithm-supported workflow for digital pathology, and a valuable, expert-labeled, retrospective dataset of skin specimens; both excellent targets for valorization. Lastly, it will increase the time pathologists have for complex diagnostics and reduce the wait time for patients.\n","date":-62135596800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":-62135596800,"objectID":"8e67021670157263095910414d400a75","permalink":"https://geertlitjens.nl/project/deepderma/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/project/deepderma/","section":"project","summary":"Improving efficiency and accuracy of skin cancer diagnostics.","tags":["Active","Machine Learning","Skin Cancer","Computational Pathology"],"title":"DeepDerma","type":"project"}]