MICCAI 2022 Daily – Tuesday
“I proposed a transformer-based method because transformers are used in natural language processing to process a sequence of text or audio,” he explains. “If I model multiple slices as a sequence of images, I can use this transformer-based method to extract the relationships between different slices in the whole sequence.”

By jointly processing the stack of slices as a sequence, SVoRT registers each slice using the context of the other slices. To do this, it must first implement the forward model to simulate slices from the volume.

“For a realistic forward model, we must take the point-spread function and the motion into account,” he points out. “We want this method to be fast enough during training and inference, so we need to implement these operators on GPUs. However, traditional deep learning frameworks like PyTorch don’t have these operators, which is one of the challenges we face in trying to implement this method.”

This paper assumes the brain is a rigid body and therefore focuses on rigid registration; Junshen is looking to extend the method to whole-body imaging and deformable motions. He also hopes to work across the whole pipeline. Currently, once the slices are registered, they are put …
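The idea of treating a stack of slices as a sequence so that each slice attends to all the others can be sketched in PyTorch. This is a minimal illustration, not SVoRT itself: the network, layer sizes, and the choice of predicting six rigid parameters (three rotations, three translations) per slice are assumptions for the sake of the example.

```python
# Hedged sketch: a transformer encoder over a sequence of 2-D slices that
# predicts one rigid transform per slice. Names and sizes are illustrative.
import torch
import torch.nn as nn

class SliceTransformer(nn.Module):
    def __init__(self, slice_size=64, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Flatten each slice and project it to a token embedding.
        self.embed = nn.Linear(slice_size * slice_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        # Predict 6 rigid parameters (3 rotations, 3 translations) per slice.
        self.head = nn.Linear(d_model, 6)

    def forward(self, slices):  # slices: (batch, n_slices, H, W)
        b, n, h, w = slices.shape
        tokens = self.embed(slices.reshape(b, n, h * w))
        tokens = self.encoder(tokens)  # each slice attends to all the others
        return self.head(tokens)       # (batch, n_slices, 6)

model = SliceTransformer()
stack = torch.randn(2, 8, 64, 64)      # 2 stacks of 8 slices each
params = model(stack)
print(params.shape)  # torch.Size([2, 8, 6])
```

Because self-attention is computed across the whole sequence, the predicted transform for each slice depends on every other slice in the stack, which is the contextual registration the quote describes.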
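The forward model described above (rigid motion plus a point-spread function, run on the GPU) can be approximated with PyTorch's differentiable resampling primitives. This is a simplified sketch under stated assumptions: a separable 1-D Gaussian PSF along the slice-select axis and a single rigid affine per slice, not the paper's actual operators.

```python
# Hedged sketch of a slice-acquisition forward model: rigidly move a 3-D
# volume, blur along the through-plane axis with a Gaussian PSF, and
# extract one slice. All tensor ops run on GPU if the inputs live there.
import torch
import torch.nn.functional as F

def gaussian_psf(n=5, sigma=1.0):
    # 1-D Gaussian point-spread function along the slice-select axis.
    x = torch.arange(n, dtype=torch.float32) - n // 2
    g = torch.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def simulate_slice(volume, theta, z_index, psf_sigma=1.0):
    """volume: (1, 1, D, H, W); theta: (1, 3, 4) rigid affine matrix."""
    # Resample the volume under the rigid transform (differentiable).
    grid = F.affine_grid(theta, volume.shape, align_corners=False)
    moved = F.grid_sample(volume, grid, align_corners=False)
    # Convolve with the PSF along depth, then take the requested slice.
    psf = gaussian_psf(sigma=psf_sigma).view(1, 1, -1, 1, 1)
    blurred = F.conv3d(moved, psf, padding=(psf.shape[2] // 2, 0, 0))
    return blurred[:, :, z_index]  # (1, 1, H, W)

vol = torch.rand(1, 1, 16, 32, 32)
theta = torch.eye(3, 4).unsqueeze(0)   # identity rigid transform
sl = simulate_slice(vol, theta, z_index=8)
print(sl.shape)  # torch.Size([1, 1, 32, 32])
```

`affine_grid` and `grid_sample` are stock PyTorch operators, which is why this sketch only approximates the quote: a physically accurate, efficient PSF-and-motion operator is exactly what the interviewee says is missing from standard frameworks and had to be implemented by hand.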