Date of Award

2019

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Neuroscience

College

College of Graduate Studies

First Advisor

Thomas Naselaris

Second Advisor

Truman Brown

Third Advisor

Jane Joseph

Fourth Advisor

Tom Jhou

Fifth Advisor

Andy Shih

Abstract

When you picture the face of a friend or imagine your dream house, you are using the same parts of your brain that you use to see. How does the same system manage both to accurately analyze the world around it and to synthesize visual experiences without any external input at all? We approach this question and others by extending the well-established theory that the human visual system embodies a probabilistic generative model of the visual world. That is, just as visual features co-occur with one another in the real world with a certain probability (the feature “tree” has a high probability of occurring with the feature “green”), so do the patterns of activity that encode those features in the brain. With such a joint probability distribution at its disposal, the brain can not only infer the cause of a given activity pattern on the retina (vision), but can also generate the probable visual consequence of an assumed or remembered cause (imagery). This formulation predicts that the encoding of imagined stimuli in low-level visual areas resembles the encoding of seen stimuli in higher areas. To test this prediction we developed imagery encoding models, a novel tool that reveals how the features of imagined stimuli are encoded in brain activity. We estimated imagery encoding models from brain activity measured while subjects imagined complex visual stimuli, and then compared these to visual encoding models estimated from a matched viewing experiment. Consistent with our proposal, imagery encoding models revealed changes in spatial frequency tuning and receptive field properties that made early visual areas during imagery more functionally similar to higher visual areas during vision. Likewise, the signal and noise properties of voxel activity during vision and imagery favor the generative-model interpretation. Our results provide new evidence for an internal generative model of the visual world, while demonstrating that vision is just one of many possible forms of inference that this putative internal model may support.
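
In formal terms, the generative-model picture sketched above can be summarized as follows (a minimal notational sketch; the symbols $c$ for a latent cause and $r$ for a retinal or cortical activity pattern are illustrative choices, not the dissertation's own notation). Given a joint distribution $p(c, r)$, vision corresponds to inferring the cause of an observed activity pattern,

$$p(c \mid r) \propto p(r \mid c)\, p(c),$$

while imagery corresponds to generating the probable activity pattern implied by an assumed or remembered cause,

$$r \sim p(r \mid c).$$

On this reading, the same joint distribution supports both directions of computation, which is why the abstract frames vision as just one of several forms of inference the internal model may carry out.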

Rights

All rights reserved. Copyright is held by the author.
