PathoDuet: Foundation Models for Pathological Slide Analysis of H&E and IHC Stains

December 15, 2023 · Entered Twilight · 🏛 arXiv.org

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: CODE_OF_CONDUCT.md, CONFIG.md, CONTRIBUTING.md, LICENSE, README.md, banner.png, convert_to_deit.py, generate_train_data.py, main_bridge.py, main_cross.py, main_lincls.py, main_moco.py, moco, overall.png, transfer, used_TCGA.csv, vits.py

Authors: Shengyi Hua, Fang Yan, Tianle Shen, Lei Ma, Xiaofan Zhang
arXiv ID: 2312.09894
Category: cs.CV (Computer Vision)
Cross-listed: cs.AI
Citations: 27
Venue: arXiv.org
Repository: https://github.com/openmedlab/PathoDuet ⭐ 222
Last Checked: 1 month ago
Abstract
Large amounts of digitized histopathological data point to a promising future for developing pathological foundation models via self-supervised learning methods. Foundation models pretrained with these methods serve as a good basis for downstream tasks. However, the gap between natural and histopathological images hinders the direct application of existing methods. In this work, we present PathoDuet, a series of models pretrained on histopathological images, and a new self-supervised learning framework in histopathology. The framework features a newly introduced pretext token and later task raisers to explicitly exploit certain relations between images, such as multiple magnifications and multiple stains. On this basis, two pretext tasks, cross-scale positioning and cross-stain transferring, are designed to pretrain the model on Hematoxylin and Eosin (H&E) images and transfer the model to immunohistochemistry (IHC) images, respectively. To validate the efficacy of our models, we evaluate their performance over a wide variety of downstream tasks, including patch-level colorectal cancer subtyping and whole slide image (WSI)-level classification in the H&E field, together with expression-level prediction of IHC markers, tumor identification, and slide-level qualitative analysis in the IHC field. The experimental results show the superiority of our models on most tasks and the efficacy of the proposed pretext tasks. The code and models are available at https://github.com/openmedlab/PathoDuet.
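The cross-scale positioning pretext task asks the model to locate a high-magnification patch within its low-magnification context. PathoDuet learns this via a pretext token inside the network; the toy sketch below (pure Python, with a hypothetical `locate_patch` helper not taken from the repository) only illustrates the underlying localization objective by exhaustive search over a small grid.

```python
def locate_patch(field, patch):
    """Toy illustration of the cross-scale positioning task: return the
    (row, col) offset at which `patch` appears inside the larger `field`.
    Both arguments are 2-D lists standing in for a low-magnification
    region and a downsampled high-magnification crop. PathoDuet learns
    this mapping with a pretext token; this is plain template matching."""
    fh, fw = len(field), len(field[0])
    ph, pw = len(patch), len(patch[0])
    for r in range(fh - ph + 1):          # slide over all row offsets
        for c in range(fw - pw + 1):      # slide over all column offsets
            if all(field[r + i][c + j] == patch[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c)             # exact match found
    return None                           # patch not present


# Example: a 4x4 "field" and the 2x2 sub-region starting at (1, 2).
field = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
patch = [[6, 7],
         [10, 11]]
print(locate_patch(field, patch))  # → (1, 2)
```

In the actual framework the position is predicted from learned features rather than recovered by matching, which is what makes the task a useful self-supervised training signal.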
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision