A novel multimodality anatomical image fusion method based on contrast and structure extraction
Arbab Sufyan
Department of Computer Science, BUITEMS, Quetta, Pakistan
Corresponding Author
Muhammad Imran
Department of Electrical Engineering, BUITEMS, Quetta, Pakistan
Control, Automotive, and Robotics Lab, National Center of Robotics and Automation, Rawalpindi, Pakistan
Correspondence
Muhammad Imran, Department of Electrical Engineering, BUITEMS, Quetta, 87300, Pakistan.
Email: [email protected]
Syed Attique Shah
Department of Computer Science, BUITEMS, Quetta, Pakistan
Hamayoun Shahwani
Department of Telecom Engineering, BUITEMS, Quetta, Pakistan
Arbab Abdul Wadood
Quetta Institute of Medical Sciences, Quetta, Pakistan
Funding information: National Center of Robotics and Automation, Pakistan
Abstract
Imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) capture different levels of detail about objects of interest, helping medical practitioners examine a patient's disease from multiple perspectives. A single medical image may not suffice for a critical decision; complementary information from another modality can therefore support better decision-making. Image fusion techniques play a vital role in this regard by combining important details from different medical images into a single, information-enhanced image. In this article, we present a novel weighted-term multimodality anatomical medical image fusion method. The proposed method first eliminates distortions from the source images and then extracts two pieces of crucial information: the local contrast and the salient structure. The local contrast and salient structure are combined to obtain the weight map, which is then passed through a fast guided filter to remove discontinuities and noise. Lastly, the refined weight map is fused with the source images using pyramid decomposition to produce the final fused image. The proposed method is assessed and compared both qualitatively and quantitatively with state-of-the-art techniques. The results demonstrate the superior performance and efficiency of the proposed method.
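The pipeline described above (local contrast + salient structure → weight map → guided-filter refinement → weighted fusion) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function names and parameters are assumptions, a box-filter guided filter stands in for the fast guided filter, and a single-scale weighted merge stands in for the Laplacian-pyramid fusion step.

```python
# Minimal sketch of a contrast-and-structure weighted fusion pipeline.
# All function names, filter sizes, and regularization values are
# illustrative assumptions, not the paper's actual implementation.
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def local_contrast(img, size=7):
    # Local contrast: locally averaged absolute Laplacian response.
    return uniform_filter(np.abs(laplace(img)), size)

def salient_structure(img):
    # Salient structure: gradient magnitude of the image.
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def guided_filter(guide, src, radius=8, eps=1e-3):
    # Box-filter guided filter (He et al. formulation); a stand-in
    # for the fast guided filter used in the paper.
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse(img1, img2):
    # Per-image weight: product of local contrast and salient structure.
    w1 = local_contrast(img1) * salient_structure(img1)
    w2 = local_contrast(img2) * salient_structure(img2)
    # Refine each weight map with the guided filter (source as guide).
    w1 = guided_filter(img1, w1)
    w2 = guided_filter(img2, w2)
    # Normalize weights so they sum to one at every pixel.
    total = w1 + w2 + 1e-12
    w1, w2 = w1 / total, w2 / total
    # Single-scale weighted merge (stand-in for the pyramid fusion).
    return w1 * img1 + w2 * img2

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
fused = fuse(a, b)
print(fused.shape)
```

In a full implementation, the final step would blend Laplacian-pyramid levels of the source images using Gaussian-pyramid levels of the refined weights, which avoids seams at weight-map boundaries.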
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
Data will be provided upon request.