
Richard Wildes

Associate Professor, EECS Department
Member, Centre for Vision Research
Member, IC@L
Associate Director, VISTA


2021 – 2022 Research Highlights

Computational Understanding of Visual Spacetime

Vision-based systems are a central research topic in computing, robotics, and artificial intelligence. This prominence arises as current trends in automotive, consumer, and home technology (e.g., Waymo, Amazon Go, Apple HomePod) leverage video data to understand and react to the environment. Moreover, interesting scientific questions arise as we seek to understand how detailed information about the world can be recovered from captured video alone. My research targets the theoretical and engineering principles that underpin video understanding.

Some highlights of the past year include the development of a theoretical explanation for why ConvNets learn oriented bandpass filters (Hadji & Wildes 2020), a novel approach and state-of-the-art algorithms for activity anticipation in videos (Zhao & Wildes 2020), and a novel approach and state-of-the-art algorithms for image denoising (Su, Cheung, Wildes & Lin 2020).
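To make the first result concrete: an oriented bandpass filter responds most strongly to image structure within a particular spatial frequency band and at a particular orientation. The minimal Python sketch below (an illustration only, not the analysis of Hadji & Wildes 2020) examines a 2D convolutional kernel in the frequency domain to estimate its dominant spatial frequency and orientation; the function name and the Gabor-like test kernel are hypothetical.

    import numpy as np

    def filter_orientation_and_band(kernel):
        """Characterize a 2D conv kernel in the frequency domain.

        Returns the dominant spatial frequency (cycles/sample) and
        orientation (radians) of the kernel's magnitude spectrum, as a
        quick check of whether a filter looks oriented and bandpass
        (energy concentrated away from DC, along one orientation).
        """
        # Zero-pad the small kernel for finer frequency sampling.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(64, 64))))
        fy, fx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
        # Frequencies measured relative to the spectrum centre (DC).
        u, v = fx - 32, fy - 32
        radial_freq = np.hypot(u, v) / 64.0
        orientation = np.arctan2(v, u)
        return radial_freq, orientation

    # Example: a Gabor-like kernel, i.e., an oriented bandpass filter.
    y, x = np.mgrid[-3:4, -3:4]
    gabor = np.exp(-(x**2 + y**2) / 4.0) * np.cos(2 * np.pi * 0.25 * x)
    print(filter_orientation_and_band(gabor))  # ~0.25 cycles/sample, horizontal frequency

Applied to a network's learned kernels, a spectrum whose energy concentrates away from DC along a single orientation is the oriented bandpass structure that the cited work seeks to explain.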

Research Highlights

  1. I. Hadji and R. P. Wildes, Why convolutional networks learn oriented bandpass filters: Theory and empirical support, arXiv e-print arXiv:2011.14665v1, 2020.
  2. W. Su, G. Cheung, R. P. Wildes and C.-W. Lin, Graph neural net using analytical graph filters and topology optimization for image denoising, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020.
  3. H. Zhao and R. P. Wildes, On diverse asynchronous activity anticipation, Proceedings of the European Conference on Computer Vision (ECCV), 2020.