Computer vision systems have achieved remarkable performance across a wide variety of tasks, such as recognition, segmentation, detection, and generation. However, these systems have also been shown to be vulnerable to semantically meaningless perturbations: recent works have demonstrated that, while accurate, they lack robustness. This property is undesirable for intelligent systems on which we wish to rely in the real world. In the Center, we have worked on robustness along various dimensions. In particular, we have (1) designed biologically inspired techniques to improve robustness, (2) proposed novel semantically oriented dimensions for assessing robustness, (3) studied how inexpensive techniques applied during system deployment can provide robustness benefits, (4) investigated the pervasiveness of the lack of robustness in the medical domain, and (5) shown how techniques for improving robustness can be harnessed to improve the performance of super-resolution systems.