My current work focuses on how to improve 3D displays by better understanding how the human visual system processes depth from disparity and other cues.
I'm interested in the neural underpinnings of lightness perception and filling-in, which I study using psychophysics and computational modeling. Currently I'm most interested in how well low-level models can account for errors in lightness perception (i.e., lightness illusions). My current model (FLODOG) combines simple spatial filtering by contrast-sensitive receptive fields with response normalization, and accounts for many lightness illusions, including many variants of White's effect. FLODOG includes no filling-in or high-level perception, so it is particularly interesting that it accounts for so many illusions. On the other hand, there is evidence for a role for both filling-in and higher-level perception in lightness processing, so eventually the field will require a unified approach.
In change-detection experiments, subjects often display a surprising inability to detect large changes to their visual environment. Based on these findings, several researchers have argued that visual memory capacity is extremely limited. On the other hand, there is strong evidence, including some of my own work, that these experiments underestimate memory capacity.
Change blindness demos - I've put together some visual demos of the traditional flicker task, along with some novel variations. (The images used and the Java source code for the display are both freely available.)
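For readers unfamiliar with the flicker task: the original image A and a modified image A' alternate, separated by brief blank frames, until the observer spots the change. A minimal sketch of that frame schedule follows; the 240 ms image / 80 ms blank durations are typical values from the change-blindness literature, not necessarily the ones used in these demos.

```python
# Sketch of the flicker change-blindness paradigm's frame schedule.
# Durations are typical literature values, assumed for illustration.
def flicker_schedule(n_cycles, image_ms=240, blank_ms=80):
    """Return (frame_name, duration_ms) pairs for n_cycles of A/A' alternation."""
    frames = []
    for _ in range(n_cycles):
        frames += [("A", image_ms), ("blank", blank_ms),
                   ("A_prime", image_ms), ("blank", blank_ms)]
    return frames

schedule = flicker_schedule(2)
```

In a real experiment each pair would be drawn to the screen for the listed duration; here the schedule is just data, which makes the paradigm's structure easy to see.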
For the Matlab Psychophysics Toolbox (PCWIN) - example code for simple RSVP-style experiments, as well as some code I've written for my lightness perception work.
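The core of an RSVP (rapid serial visual presentation) experiment is simply presenting items one after another at a fixed stimulus onset asynchrony (SOA). The downloadable code above is Matlab; here is the timing logic sketched in Python for illustration, with a hypothetical 100 ms SOA and made-up item list.

```python
# Illustrative sketch of RSVP timing (the actual toolbox code is Matlab).
# SOA and items are hypothetical.
def rsvp_onsets(items, soa_ms=100):
    """Pair each stream item with its onset time at a fixed SOA."""
    return [(i * soa_ms, item) for i, item in enumerate(items)]

stream = rsvp_onsets(["K", "7", "T", "3", "X"])
```

A real Psychophysics Toolbox script would swap the onset list for actual screen flips locked to the display's refresh, but the scheduling idea is the same.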
Every now and then I experiment with making interesting images or visual illusions that have nothing to do with my current research.
(c) 2015 Alan Robinson (robinson@cogsci.ucsd.edu)