Evaluating Pretraining Strategies for OCT-Based Macular Degeneration Classification
Book/Journal Article | DKFZ-2025-02639
2025
Springer Nature Switzerland, Cham
ISBN: 978-3-031-86650-0 (print), 978-3-031-86651-7 (electronic)
Abstract: Pretraining is a crucial step for improving the performance of deep learning models and accelerating training. Ideal pretraining strategies enable fast adaptation to the target domain and support the development of foundation models. While various foundation models have recently been published for natural scene images, the medical domain still lacks a general pretraining scheme that handles arbitrary acquisition modalities, diverse diseases, and varying anatomical structures. Current evaluations mostly center on common applications such as organ segmentation in abdominal images, or consider only a few selected pretraining strategies. In this paper, we compare various self-supervised and unsupervised pretraining strategies on a longitudinal macular degeneration classification task. We compare pretraining schemes tailored specifically to the medical domain as well as schemes from the natural scene image domain. Our results show that scaling up pretraining schemes outweighs models pretrained specifically on the medical or OCT scan domain. The code and hyperparameter settings can be found in our GitHub repository: https://github.com/MIC-DKFZ/mario.
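The abstract describes a pretrain-then-fine-tune workflow: a backbone pretrained on some source domain is adapted to the target OCT classification task. Below is a minimal sketch of that general workflow, not the paper's actual pipeline (which is in the linked repository); the torchvision ResNet-50 backbone, class count, learning rate, and random stand-in data are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of macular degeneration classes

# Start from a backbone pretrained on natural scene images (ImageNet),
# one of the pretraining sources the abstract contrasts with medical-domain schemes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
# Replace the classification head to match the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of preprocessed OCT scans (B, 3, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for OCT B-scans.
if __name__ == "__main__":
    x = torch.randn(8, 3, 224, 224)
    y = torch.randint(0, NUM_CLASSES, (8,))
    print(f"loss: {finetune_step(x, y):.4f}")
```

Swapping the `weights=` argument (or loading a checkpoint from a medical-domain pretraining scheme) while keeping the fine-tuning loop fixed is one straightforward way to compare pretraining strategies under identical adaptation conditions.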