TY - JOUR
T1 - Rigid Medical Image Registration Using Learning-Based Interest Points and Features
AU - Zou, Maoyang
AU - Hu, Jinrong
AU - Zhang, Huan
AU - Wu, Xi
AU - He, Jia
AU - Xu, Zhijie
AU - Zhong, Yong
PY - 2019/8/1
Y1 - 2019/8/1
N2 - Medical image registration is an important technique for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy and interventional radiology. In this study, we propose a learning-based approach named “FIP-CNNF” for rigid registration of medical images. Firstly, pixel-level interest points are computed by a fully convolutional network (FCN) trained in a self-supervised manner. Secondly, feature detection, description and matching are learned by a convolutional neural network (CNN). Thirdly, random sample consensus (RANSAC) is used to filter outliers, and the transformation parameters with the most inliers are found by iteratively fitting transforms. In addition, we propose “TrFIP-CNNF”, which uses transfer learning and fine-tuning to boost the performance of FIP-CNNF. Experiments are conducted on a nasopharyngeal carcinoma dataset collected from West China Hospital. For CT-CT and MR-MR image registration, TrFIP-CNNF performs slightly better than the scale-invariant feature transform (SIFT) and FIP-CNNF. For CT-MR image registration, the precision, recall and target registration error (TRE) of TrFIP-CNNF are much better than those of SIFT and FIP-CNNF, and even several times better than those of SIFT. TrFIP-CNNF achieves promising results, especially for multimodal medical image registration, demonstrating that FCN interest points and CNN features provide a feasible approach to improving image registration.
AB - Medical image registration is an important technique for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy and interventional radiology. In this study, we propose a learning-based approach named “FIP-CNNF” for rigid registration of medical images. Firstly, pixel-level interest points are computed by a fully convolutional network (FCN) trained in a self-supervised manner. Secondly, feature detection, description and matching are learned by a convolutional neural network (CNN). Thirdly, random sample consensus (RANSAC) is used to filter outliers, and the transformation parameters with the most inliers are found by iteratively fitting transforms. In addition, we propose “TrFIP-CNNF”, which uses transfer learning and fine-tuning to boost the performance of FIP-CNNF. Experiments are conducted on a nasopharyngeal carcinoma dataset collected from West China Hospital. For CT-CT and MR-MR image registration, TrFIP-CNNF performs slightly better than the scale-invariant feature transform (SIFT) and FIP-CNNF. For CT-MR image registration, the precision, recall and target registration error (TRE) of TrFIP-CNNF are much better than those of SIFT and FIP-CNNF, and even several times better than those of SIFT. TrFIP-CNNF achieves promising results, especially for multimodal medical image registration, demonstrating that FCN interest points and CNN features provide a feasible approach to improving image registration.
KW - CNN feature
KW - Deep learning
KW - Interest point
KW - Medical image registration
UR - http://www.scopus.com/inward/record.url?scp=85075263055&partnerID=8YFLogxK
U2 - 10.32604/cmc.2019.05912
DO - 10.32604/cmc.2019.05912
M3 - Article
AN - SCOPUS:85075263055
VL - 60
SP - 511
EP - 525
JO - Computers, Materials and Continua
JF - Computers, Materials and Continua
SN - 1546-2218
IS - 2
ER -