Project

Deep Learning for enabling cost-effective high-throughput Total Body PET

Code
DOCT/011729
Duration
02 January 2024 → 21 September 2025 (Ongoing)
Doctoral researcher
Research disciplines
  • Natural sciences
    • Artificial intelligence not elsewhere classified
    • Modelling and simulation
  • Medical and health sciences
    • Nuclear imaging
  • Engineering and technology
    • Biomedical image processing
    • Biomedical instrumentation
Keywords
Deep Learning, Image artifact removal, Total-Body PET
 
Project description

Since the advent of Total-Body PET (TB-PET), sub-one-minute acquisitions at reasonable dose levels and with very good image quality are within reach. However, the high unit price and maintenance cost, about 3 to 4 times those of a conventional PET-CT, combined with the limited patient throughput achievable in practice, significantly impede implementation in clinical routine. The MEDISIP research group proposes a novel TB-PET system concept, the so-called Walk-Through PET (WT-PET), comprising two flat detector panels between which patients are scanned in an upright, standing position. This removes the need to position the patient on a bed, improving throughput and reducing the need for assistance by personnel. The device is based on a new generation of monolithic detectors that deliver much higher, isotropic spatial resolution at a lower component cost than conventional pixelated detectors. The geometry further reduces the cost, as the detector surface required for the flat panels is almost halved compared to current annular TB-PET devices. Thus, the WT-PET offers the benefits of TB-PET at a price in the same order as conventional short-axial-field-of-view systems.

Nevertheless, the unique system concept faces three limitations that must be addressed to obtain images of diagnostic quality. First, the flat panels inherently limit the range of projection angles acquired around the patient, resulting in image artifacts after reconstruction. Second, we want to avoid CT acquisition for attenuation and scatter correction (ASC), both to limit the device's total cost and to reduce the accumulated dose delivered to the patient; alternative (non-CT) ASC methods that maintain image quality are therefore needed. Finally, the upright, standing position provides less support to the patient and may lead to motion artifacts, which must be corrected. This project aims to overcome these issues by incorporating Deep Learning algorithms into the conversion of coincidence data into the desired image. The data required for training and evaluating the neural networks will be generated with GATE Monte Carlo simulations and enriched with experimental data where possible.
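
To make the approach concrete, the sketch below shows one way such a Deep Learning correction step could be trained on paired simulated data: a small residual CNN in PyTorch that maps limited-angle reconstructions to artifact-reduced reference images. The dataset, architecture, and training settings are illustrative assumptions only (the placeholder data is random, standing in for GATE-simulated image pairs); this is not the project's actual implementation.

# Minimal sketch: a residual CNN that maps limited-angle reconstructions
# to artifact-reduced images, trained on paired simulated data.
# All shapes, the data source and the architecture are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader


class SimulatedPairs(Dataset):
    """Pairs of (limited-angle reconstruction, artifact-free reference) slices.

    Random arrays stand in for GATE-simulated data; in practice the pairs
    would be loaded from the Monte Carlo simulation output.
    """

    def __init__(self, n_samples: int = 64, size: int = 128):
        rng = np.random.default_rng(0)
        self.reference = rng.random((n_samples, 1, size, size), dtype=np.float32)
        # Corrupt the reference to mimic limited-angle artifacts (placeholder).
        shifted = np.roll(self.reference, shift=4, axis=-1)
        self.limited_angle = 0.5 * (self.reference + shifted)

    def __len__(self):
        return len(self.reference)

    def __getitem__(self, idx):
        return (torch.from_numpy(self.limited_angle[idx]),
                torch.from_numpy(self.reference[idx]))


class ResidualDenoiser(nn.Module):
    """Small CNN that predicts a correction added to its input (residual learning)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # input plus learned artifact correction


def train(epochs: int = 2):
    loader = DataLoader(SimulatedPairs(), batch_size=8, shuffle=True)
    model = ResidualDenoiser()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):
        for degraded, reference in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(degraded), reference)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")


if __name__ == "__main__":
    train()

Residual learning (predicting a correction that is added to the input) is a common choice for artifact-removal tasks, since the network only has to model the artifact pattern rather than the full image content.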