Semantic segmentation of surgical scenes is a fundamental task that must be solved in the pursuit of autonomous robot-assisted surgery and image-guided interventions. The objective of this task is to create an automatic method for identifying and spatially delimiting the relevant objects in a surgical scene, i.e., assigning each pixel a class label such as instrument, organ, or background. This task is the first step towards solving computer-aided surgery problems such as organ overlay and 3D reconstruction, instrument tracking, navigation, and the programming of robot arms for precision- and strength-critical tasks. Our work focuses primarily on footage from robot-assisted surgery scenes; however, the methods developed for natural images are directly applicable to laparoscopic surgery scenes, widening the scope and range of applications of our work.
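The per-pixel labeling at the heart of the task can be sketched as follows. This is a minimal illustration, not part of our method: the class names and the toy score array are hypothetical, and a real system would obtain the per-pixel scores from a trained segmentation network.

```python
import numpy as np

# Hypothetical per-pixel class scores for a tiny 2x2 "image".
# Channel order (illustrative only): 0 = background, 1 = instrument, 2 = organ.
logits = np.array([
    [[2.0, 0.1, 0.3], [0.2, 3.0, 0.1]],
    [[0.5, 0.2, 4.0], [1.0, 0.9, 0.8]],
])  # shape (H, W, C)

# Semantic segmentation assigns each pixel its highest-scoring class,
# producing a dense label mask that delimits each object's spatial extent.
mask = logits.argmax(axis=-1)
print(mask)  # → [[0 1]
             #    [2 0]]
```

The resulting mask simultaneously identifies the objects (which classes are present) and spatially delimits them (which pixels belong to each class).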