ISINET: AN INSTANCE-BASED APPROACH FOR SURGICAL INSTRUMENT SEGMENTATION

C. GONZÁLEZ*, L. BRAVO-SÁNCHEZ* AND P. ARBELÁEZ

23RD INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION (MICCAI), 2020

Abstract

We study the task of semantic segmentation of surgical instruments in robotic-assisted surgery scenes. We propose the Instance-based Surgical Instrument Segmentation Network (ISINet), a method that addresses this task from an instance-based segmentation perspective. Our method includes a temporal consistency module that exploits the inherent, previously overlooked temporal information of the problem. We validate our approach on the existing benchmark for the task, the Endoscopic Vision 2017 Robotic Instrument Segmentation Dataset [2], and on the 2018 version of the dataset [1], whose annotations we extended for the fine-grained version of instrument segmentation. Our results show that ISINet significantly outperforms state-of-the-art methods: our baseline version doubles the Intersection over Union (IoU) of previous methods, and our complete model triples it.
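The instance-based perspective described above can be summarized as follows: an instance segmentation model predicts a set of (mask, class, score) triples per frame, and a semantic label map is assembled by pasting the masks in increasing order of confidence so that the highest-scoring instance wins any overlap. The sketch below illustrates only this generic fusion step, not the authors' implementation; the names `Instance` and `instances_to_semantic` are illustrative, not from the paper.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Instance:
    mask: np.ndarray   # boolean HxW mask for one detected instrument
    label: int         # instrument-type id (1..K); 0 is background
    score: float       # detection confidence

def instances_to_semantic(instances, height, width):
    """Fuse per-instance predictions into one semantic label map.

    Masks are pasted from least to most confident, so pixels claimed by
    several instances end up with the label of the highest-scoring one.
    """
    semantic = np.zeros((height, width), dtype=np.int64)  # 0 = background
    for inst in sorted(instances, key=lambda i: i.score):
        semantic[inst.mask] = inst.label
    return semantic
```

A temporal consistency module, as described in the abstract, would act on top of such per-frame outputs, using predictions from neighboring frames to stabilize the instance labels over time.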

Results


Table 1. Comparison against the state-of-the-art on the EndoVis 2017 dataset.

| METHOD          | DT | CHALLENGE IoU | MEAN CLASS IoU |
|-----------------|----|---------------|----------------|
| TernausNet [28] |    | 35.27         | 10.17          |
| MF-TAPNet [15]  |    | 37.35         | 10.77          |
| ISINet (Ours)   |    | 53.55         | 26.92          |
| ISINet (Ours)   |    | 55.62         | 28.96          |
| ISINet (Ours)   | ✓  | 66.27         | 36.48          |
| ISINet (Ours)   | ✓  | 67.74         | 38.08          |

Table 2. Comparison against the state-of-the-art on the EndoVis 2018 dataset.

| METHOD          | DT | CHALLENGE IoU | MEAN CLASS IoU |
|-----------------|----|---------------|----------------|
| TernausNet [28] |    | 46.22         | 14.19          |
| MF-TAPNet [15]  |    | 67.87         | 24.68          |
| ISINet (Ours)   |    | 72.99         | 40.16          |
| ISINet (Ours)   |    | 73.03         | 40.21          |
| ISINet (Ours)   | ✓  | 77.19         | 44.58          |
| ISINet (Ours)   | ✓  | 77.47         | 45.29          |
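Tables 1 and 2 report Challenge IoU and mean class IoU. As a rough guide to what these columns measure, the sketch below computes a per-class intersection over union and its mean over the classes that appear in either mask. This is a generic illustration with hypothetical function names, not the official EndoVis evaluation script (the challenge metric additionally averages over frames and only over classes present in each frame).

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class intersection over union for integer label maps.

    Classes absent from both prediction and ground truth are left as NaN
    so they do not distort the mean.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(1, num_classes + 1):  # class 0 is background
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union > 0:
            ious[c - 1] = np.logical_and(p, g).sum() / union
    return ious

def mean_class_iou(ious):
    # Average only over classes that occur in at least one of the masks.
    return np.nanmean(ious)
```

On a 2x2 toy example with two instrument classes, a prediction that gets class 2 exactly right but misses one pixel of class 1 yields per-class IoUs of 0.5 and 1.0, hence a mean class IoU of 0.75.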

Qualitative results


Figure 1. Each row depicts an example result for the task of instrument-type segmentation on the EndoVis 2017 and 2018 datasets. The columns, from left to right, show: the image, the ground-truth annotation, the segmentation of TernausNet [28], the segmentation of MF-TAPNet [15], and the segmentation of our method, ISINet. The colors represent instrument types.

References


[1] Allan, M., Kondo, S., Bodenstedt, S., et al.: 2018 robotic scene segmentation challenge. arXiv preprint arXiv:2001.11190 (2020)

[2] Allan, M., Shvets, A., Kurmann, T., Zhang, Z., Duggal, R., Su, Y.H., et al.: 2017 robotic instrument segmentation challenge. arXiv preprint arXiv:1902.06426 (2019)

[15] Jin, Y., Cheng, K., Dou, Q., Heng, P.A.: Incorporating temporal prior from motion flow for instrument segmentation in minimally invasive surgery video. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. pp. 440–448. Springer International Publishing, Cham (2019)

[28] Shvets, A.A., Rakhlin, A., Kalinin, A.A., Iglovikov, V.I.: Automatic instrument segmentation in robot-assisted surgery using deep learning. In: 17th IEEE International Conference on Machine Learning and Applications (ICMLA) (2018)
