Research

Overview

My main research interest is visual attention, both biological and artificial. I am fascinated by the ability of humans and other animals to efficiently extract high-quality, contextual visual information from their environment across a wide range of situations, and I seek to better understand why artificial systems still struggle to achieve similar flexibility and robustness.

Laboratory for Cognition and Attention in Time and Space

At the Laboratory for Cognition and Attention in Time and Space (Lab for CATS), I work with students doing independent study projects during the academic year and full-time research over the summer term. We work on a variety of vision-related projects. Some recent and ongoing projects include:

  • Visualizing Attention in Action Recognition: Despite their enormous success, deep learning models are frequently far more brittle than human observers, prone to error under small changes in input. Previous research has shown that by identifying and comparing the specific visual information driving the decisions of deep learning models and of human observers, we can devise new approaches to improve model robustness. In this project, we focus on action recognition models.
  • Eye Tracking and Psychophysics of Attention: In collaboration with Professor Breeden, this is a collection of projects focused on understanding human visual cognition and attention. Beyond expanding our knowledge of how human vision works, a primary aim of this work is to identify aspects of visual processing that are currently poorly represented in artificial systems and, through an improved understanding of biological vision, to suggest novel mechanisms and approaches for those systems. Recent areas of focus include asymmetric visual search, scientific figure understanding, and cue integration in spatial vision.
  • SMILER maintenance and development: The Saliency Model Implementation Library for Experimental Research (SMILER) is an open-source bundle of saliency models wrapped under a common API. Students work with SMILER to resolve open issues on GitHub and extend the library by wrapping new models into the framework.
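SMILER's actual interface is documented in its repository; purely as an illustration of the common-API idea behind it (hypothetical class and method names, not SMILER's real code), wrapping heterogeneous saliency models behind one interface might look like:

```python
from abc import ABC, abstractmethod

class SaliencyModel(ABC):
    """Hypothetical common interface; illustrative only, not SMILER's API."""
    @abstractmethod
    def compute_saliency(self, image):
        """Return a saliency map with the same shape as `image` (2D grayscale)."""

class CenterBiasModel(SaliencyModel):
    # Toy model: saliency falls off with (Manhattan) distance from the center.
    def compute_saliency(self, image):
        h, w = len(image), len(image[0])
        cy, cx = (h - 1) / 2, (w - 1) / 2
        return [[1.0 / (1.0 + abs(y - cy) + abs(x - cx)) for x in range(w)]
                for y in range(h)]

class ContrastModel(SaliencyModel):
    # Toy model: saliency is absolute deviation from the mean intensity.
    def compute_saliency(self, image):
        flat = [p for row in image for p in row]
        mean = sum(flat) / len(flat)
        return [[abs(p - mean) for p in row] for row in image]

def run_all(models, image):
    # Common driver: every wrapped model is invoked the same way,
    # so experiments need no per-model glue code.
    return {name: m.compute_saliency(image) for name, m in models.items()}

image = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
maps = run_all({"center": CenterBiasModel(), "contrast": ContrastModel()}, image)
```

The payoff of this pattern is that adding a new model only requires implementing the one abstract method, which is essentially what "wrapping new models into the framework" entails.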

Publications

Peer-Reviewed Journal and Conference Papers

Saeed Ghorbani, Calden Wloka, Ali Etemad, Marcus A. Brubaker, and Nikolaus F. Troje (2020) Probabilistic Character Motion Synthesis Using a Hierarchical Deep Latent Variable Model. Computer Graphics Forum Paper Link; Video Abstract; SCA 2020 Talk

John K. Tsotsos, Iuliia Kotseruba, and Calden Wloka (2019) Rapid visual categorization is not guided by early salience-based selection. PLOS ONE 14(10):1-23 Paper link

Iuliia Kotseruba, Calden Wloka, Amir Rasouli, and John K. Tsotsos (2019) Do Saliency Models Detect Odd-One-Out Targets? New Datasets and Evaluations. Proc. of British Machine Vision Conference (BMVC) – Oral Presentation PDF Dataset

Calden Wloka, Iuliia Kotseruba, and John K. Tsotsos (2018) Active Fixation Control to Predict Saccade Sequences. Proc. of Conference on Computer Vision and Pattern Recognition (CVPR) PDF

Calden Wloka and John K. Tsotsos (2016) Spatially Binned ROC: A Comprehensive Saliency Metric. Proc. of Conference on Computer Vision and Pattern Recognition (CVPR) Link

John K. Tsotsos, Iuliia Kotseruba, and Calden Wloka (2016) A Focus on Selection for Fixation. Journal of Eye Movement Research 9(5):2, 1-34 Link

Neil D.B. Bruce, Calden Wloka, Nick Frosst, Shafin Rahman, and John Tsotsos (2015) On computational modeling of visual saliency: Examining what’s right, and what’s left. Vision Research 116:95-112 Link

Preprints and Technical Reports

Joshua Abraham and Calden Wloka (2021) Edge Detection for Satellite Images without Deep Networks arXiv paper

Calden Wloka and John K. Tsotsos (2020) An Empirical Method to Quantify the Peripheral Performance Degradation in Deep Networks arXiv paper

Calden Wloka, Toni Kunić, Iuliia Kotseruba, Ramin Fahimi, Nicholas Frosst, Neil D. B. Bruce, and John K. Tsotsos (2018) SMILER: Saliency Model Implementation Library for Experimental Research. arXiv paper GitHub repository

Invited Talks

Calden Wloka. “Modelling Fixations Beyond a Static Saliency Map.” CIPPRS Award Talk, Conference on Robots and Vision, Burnaby BC, Canada, May 2021

Calden Wloka. “Some Salient Issues with Saliency Models.” A.I. Socratic Circles (AISC), Toronto ON, Canada, October 2020 Recorded talk

Calden Wloka. “Putting Computational Saliency in Context.” Facebook Reality Labs, Redmond WA, USA, January 2020

Calden Wloka. “An Overview of Computational Saliency.” Centre for Neuroscience Studies Seminar Series, Queen’s University, Kingston ON, Canada, September 2014

Theses

Calden Wloka (2019) An Evaluation of Saliency and Its Limits. Doctoral Dissertation, York University PDF
Winner of the Canadian Image Processing and Pattern Recognition Society John Barron Doctoral Dissertation Award

Calden Wloka (2012) Integrating Overt and Covert Attention Using Peripheral and Central Processing Streams. Master’s Thesis, York University PDF
Winner of the Joseph Liu Best Thesis Award and York University Thesis Award