000285099 001__ 285099
000285099 005__ 20240229155101.0
000285099 0247_ $$2doi$$a10.1186/s12880-023-01128-w
000285099 0247_ $$2pmid$$apmid:37907876
000285099 037__ $$aDKFZ-2023-02223
000285099 041__ $$aEnglish
000285099 082__ $$a610
000285099 1001_ $$0P:(DE-He78)ccd97c91195da1dc12bb5c621a879126$$aStrack, Christian$$b0$$eFirst author$$udkfz
000285099 245__ $$a'A net for everyone': fully personalized and unsupervised neural networks trained with longitudinal data from a single patient.
000285099 260__ $$aLondon$$bBioMed Central$$c2023
000285099 3367_ $$2DRIVER$$aarticle
000285099 3367_ $$2DataCite$$aOutput Types/Journal article
000285099 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1698937778_13408
000285099 3367_ $$2BibTeX$$aARTICLE
000285099 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000285099 3367_ $$00$$2EndNote$$aJournal Article
000285099 500__ $$a#EA:E010#
000285099 520__ $$aWith the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of this study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images. The change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein-GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip co-registration of the images. Furthermore, no additional training data, no pre-training of the networks, and no (manual) annotations are necessary. The model achieved an AUC score of 0.87 for tumor change. We also introduced modified RANO criteria, for which an accuracy of 66% can be achieved. We present a novel deep learning approach that uses data from just one patient to train neural networks to monitor tumor change. Evaluating the results on two different datasets shows the method's potential to generalize.
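The abstract describes training one Wasserstein-GAN per patient on just two timepoints, with data augmentation standing in for co-registration and the generator's residual serving as a tumor-change map. Below is a minimal, hypothetical PyTorch sketch of that idea; the network sizes, optimizer settings, flip-based augmentation, and residual readout are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch: per-patient Wasserstein-GAN on two longitudinal scans.
# All architecture and training details are assumptions for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Maps the baseline scan toward the follow-up scan.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    # Wasserstein critic: scores how "follow-up-like" an image is.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 32 * 32, 1),
        )

    def forward(self, x):
        return self.net(x)

def augment(x):
    # Random flips stand in for the augmentation that replaces co-registration.
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-2])
    return x

# Two scans of one patient; random tensors stand in for T1w slices.
scan_t0 = torch.rand(1, 1, 128, 128)
scan_t1 = torch.rand(1, 1, 128, 128)

G, D = Generator(), Critic()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

for step in range(200):
    # Critic updates with weight clipping (original WGAN formulation).
    for _ in range(5):
        opt_d.zero_grad()
        fake = G(augment(scan_t0)).detach()
        loss_d = D(fake).mean() - D(augment(scan_t1)).mean()
        loss_d.backward()
        opt_d.step()
        for p in D.parameters():
            p.data.clamp_(-0.01, 0.01)
    # Generator update: make G(t0) indistinguishable from the t1 scan.
    opt_g.zero_grad()
    loss_g = -D(G(augment(scan_t0))).mean()
    loss_g.backward()
    opt_g.step()

# The residual serves as a change map; aggregating it would track tumor
# volume change in the paper's setting.
with torch.no_grad():
    change_map = (G(scan_t0) - scan_t0).abs()
```

Note that no external training data, pre-training, or annotations enter this loop, matching the abstract's claim; only the patient's own two scans are used.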
000285099 536__ $$0G:(DE-HGF)POF4-315$$a315 - Bildgebung und Radioonkologie (POF4-315)$$cPOF4-315$$fPOF IV$$x0
000285099 588__ $$aDataset connected to CrossRef, PubMed, Journals: inrepo02.dkfz.de
000285099 650_7 $$2Other$$aBrain Tumor
000285099 650_7 $$2Other$$aLongitudinal
000285099 650_7 $$2Other$$aMRI
000285099 650_7 $$2Other$$aMachine learning
000285099 650_7 $$2Other$$aNeural networks
000285099 650_7 $$2Other$$aPersonalized
000285099 650_7 $$2Other$$aPrivacy-safe
000285099 650_7 $$2Other$$aUnsupervised
000285099 650_7 $$2Other$$aWasserstein-GAN
000285099 650_7 $$2Other$$aZero-training data
000285099 7001_ $$aPomykala, Kelsey L$$b1
000285099 7001_ $$0P:(DE-He78)3d04c8fee58c9ab71f62ff80d06b6fec$$aSchlemmer, Heinz-Peter$$b2$$udkfz
000285099 7001_ $$aEgger, Jan$$b3
000285099 7001_ $$0P:(DE-He78)ec13544e7fd4c62ac008490a4547e990$$aKleesiek, Jens$$b4$$udkfz
000285099 773__ $$0PERI:(DE-600)2061975-3$$a10.1186/s12880-023-01128-w$$gVol. 23, no. 1, p. 174$$n1$$p174$$tBMC medical imaging$$v23$$x1471-2342$$y2023
000285099 909CO $$ooai:inrepo02.dkfz.de:285099$$pVDB
000285099 9101_ $$0I:(DE-588b)2036810-0$$6P:(DE-He78)ccd97c91195da1dc12bb5c621a879126$$aDeutsches Krebsforschungszentrum$$b0$$kDKFZ
000285099 9101_ $$0I:(DE-588b)2036810-0$$6P:(DE-He78)3d04c8fee58c9ab71f62ff80d06b6fec$$aDeutsches Krebsforschungszentrum$$b2$$kDKFZ
000285099 9101_ $$0I:(DE-588b)2036810-0$$6P:(DE-He78)ec13544e7fd4c62ac008490a4547e990$$aDeutsches Krebsforschungszentrum$$b4$$kDKFZ
000285099 9131_ $$0G:(DE-HGF)POF4-315$$1G:(DE-HGF)POF4-310$$2G:(DE-HGF)POF4-300$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$aDE-HGF$$bGesundheit$$lKrebsforschung$$vBildgebung und Radioonkologie$$x0
000285099 9141_ $$y2023
000285099 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bBMC MED IMAGING : 2022$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0320$$2StatID$$aDBCoverage$$bPubMed Central$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0501$$2StatID$$aDBCoverage$$bDOAJ Seal$$d2023-05-02T09:05:38Z
000285099 915__ $$0StatID:(DE-HGF)0500$$2StatID$$aDBCoverage$$bDOAJ$$d2023-05-02T09:05:38Z
000285099 915__ $$0StatID:(DE-HGF)0030$$2StatID$$aPeer Review$$bDOAJ : Open peer review$$d2023-05-02T09:05:38Z
000285099 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0113$$2StatID$$aWoS$$bScience Citation Index Expanded$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0160$$2StatID$$aDBCoverage$$bEssential Science Indicators$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0561$$2StatID$$aArticle Processing Charges$$d2023-10-24
000285099 915__ $$0StatID:(DE-HGF)0700$$2StatID$$aFees$$d2023-10-24
000285099 9201_ $$0I:(DE-He78)E010-20160331$$kE010$$lE010 Radiologie$$x0
000285099 9201_ $$0I:(DE-He78)HD01-20160331$$kHD01$$lDKTK HD zentral$$x1
000285099 9200_ $$0I:(DE-He78)E010-20160331$$kE010$$lE010 Radiologie$$x0
000285099 980__ $$ajournal
000285099 980__ $$aVDB
000285099 980__ $$aI:(DE-He78)E010-20160331
000285099 980__ $$aI:(DE-He78)HD01-20160331
000285099 980__ $$aUNRESTRICTED