Development and Regeneration; Central Visual Processing – NEI Council 06/21/19


>>Our next speaker is going to be Martha Flanders. Martha is a program officer at the NEI, and she oversees a portfolio of grants in central visual processing.
>>SAVP. So it's a real pleasure for me, really an honor, to tell you about this great group of portfolios called strabismus, amblyopia, and visual processing. This is really a cluster of portfolios, and I am the leader of the group. We've decided as a group to present part of it next October, with an invited guest, and the other part here; the bulk of that part is really central visual processing. So the cluster of portfolios is about visual processing and the disorders that influence it, like strabismus and myopia. It's also a group of portfolios that ranges from very cellular and molecular approaches in the Development portfolio all the way to psychophysics. So it's really a broad range,
which is partly why we decided to break it up into two, and so
what I want to accomplish today is just to give you a feel for what goes on in these two portfolios, Development and Central Visual Processing, and for what drives our principal investigators, what they are thinking. So what is that portfolio composed of? Just to step back and get
the big picture, this is drawn from the FY18
awards in research project grants, and you’ve probably
seen this pie chart before. Retinal disease has the biggest number of funded research project grants, and SAVP is the second biggest. In this recent count it has actually shrunk a little bit from previous years, and I think that may be because a lot of the SAVP investigators are the ones getting BRAIN funding. Now, you hear again and again that the Eye Institute is a very important part of the BRAIN Initiative, and many of our PIs, retina but also SAVP, are being funded by BRAIN. What we learned last time, in
January is that many of the BRAIN awards go to
NEI PIs. We also may have mentioned that the working group that provided the vision for the BRAIN Initiative, the first one and now the update of that vision, was co-led by SAVP PIs: Bill Newsome for the original BRAIN vision, and now John Maunsell for the update. So vision science, in my opinion, has really always led in neuroscience, and you see that very much in BRAIN. So what I want to do next is
zoom in on the SAVP portfolio and give you yet another pie chart. You can see that the Central Visual Processing portfolio is the largest; in fact, I've left it empty in the pie chart because in future slides I'm going to subdivide it yet again. This is showing you that the other portfolios, myopia, ocular motor, and so forth, are going to be saved for the fall, and today, in the yellow boxes, I'll be talking about the Development and Regeneration portfolio and the Central Visual Processing portfolio. So as an overview of the
presentation, I'm going to start with Development and Regeneration, which is actually Tom Greenwell's portfolio; he's here in the back, I hope, to answer your questions. I'll give you just a snapshot of what's in that portfolio, starting with axon guidance, synaptogenesis, and synaptic patterning, that is, the kinds of synapses that are forming on the postsynaptic neurons. That pattern forms the basis of complex visual functions, as I'll try to explain, and serves as a substrate for plasticity, which I'll talk about a bit. Now, when we transition into visual function and things like experience-dependent plasticity, we're transitioning into Central Visual Processing, my portfolio, where we start to talk less about development and more about function. So within my own portfolio of
Central Visual Processing, I'll give several examples. As I said, I want to share with you the motivation and the thinking of the PIs in this portfolio rather than go through all the details of what they do. So I'm going to talk about that thinking in terms of trying to get at the biophysical basis of neuronal computations, and the idea that there is a progressive feature extraction as we go from the retina to V1 to higher levels. Then I'll get into some examples of complex visuomotor functions from projects in the portfolio, and even a brain-machine interface that I'll be showing you. So, starting with axon guidance:
this is the part of Tom’s Development and
Regeneration portfolio that our own Carol Mason is funded
through and I just have a quick snapshot of some of the factors
involved in the guidance of axons as they come out of the
retina and try to find their way to the appropriate places in
the brain. The kind of retinotopic mapping that is maintained in that process is also covered in this portfolio; for instance, the gradients in the superior colliculus would be in this Development and Regeneration portfolio. Also, the cartoon on the bottom
is showing a picture of synaptogenesis, and I’ve chosen
a cartoon here that shows some of the important players in
synaptogenesis. Synaptogenesis, of course, is one neuron contacting another after it has grown out to find the right neuron to contact. On the right you see the postsynaptic side: the dendritic spines are the places where the synapses are forming. You also see glial cells bracketing the synaptic cleft; glial cells are often important players in synaptic plasticity and in getting the synapses formed, and formed right. I also want to mention that the red
squiggly things are molecules of the major histocompatibility complex; these are players in the story about the PirB receptor that has been made famous by Carla Shatz. As you probably know, the PirB receptor is important in synaptogenesis and synaptic plasticity, especially in cases where you're trying to reopen the critical period for amblyopia therapy. Okay, good, this is where I really
needed the pointer because I want to draw your attention
to this complex synapse. So now we're on to the topic of synaptic patterning. I mentioned axon outgrowth and all the molecular mechanisms for synaptogenesis; now here's a recent study from the Fox laboratory where the investigators are starting to ask whether these are just simple, one-on-one connections. In this case, here is a simple pattern of synapse between a retinal ganglion cell axon coming in to the dorsal lateral geniculate nucleus. Some synapses are simple; some are much more complex. Here you see, color coded, the different retinal terminals of different types that have converged on these postsynaptic neurons. So the question addressed in this recent study from Fox was, first of all, what is the function of having these complex synapses? Does it matter to the animal? So what they did in this study
was they made a knockout mouse that was lacking a synaptic
adhesion molecule and they found that in these knockout mice
they could indeed wipe out the complex synapses but not
the simple synapses. And these mice did indeed have vision; they had visual acuity. They were tested in a Morris water maze where they could see a vertical stripe indicating where the platform was, so they could perform simple visual tasks with just the simple synapses. But as soon as the task got more complicated, like distinguishing a vertical from a horizontal grating to tell where the platform was, the mice that didn't have the complex synapses performed significantly worse. So the message is that maybe
there’s an association between the complex synapses and
complex visual function. So what are the implications
of this? Well, one obvious practical implication is if we
want to regenerate the axons maybe we need to worry about
getting the right patterns of synaptic connection if we are to
restore normal visual function. So that’s a practical
implication, but why do we have these complex synapses anyway?
Here I want to take you to another study and this time
we’re in the central visual processing portfolio.
This is a pretty recent study from Chen and Andermann in Cell. And what they're doing in this
study is they’re looking closely again at these boutons from the
axons of the retinal ganglion cells that will come in here to
the lateral geniculate of the thalamus. So they are looking
closely at those complex synapses and by wonders of
technology, they are actually able to quantify the function
of each individual bouton. So the mouse in the study is being shown visual stimuli that are either coming on, going off, or both, and this is to characterize whether the incoming axons are from cells that have an ON or OFF characterization from their properties in the retina. Different neurons have different properties: other neurons coming from the retina might prefer a certain direction of motion, horizontal, vertical, or oblique.
The direction of motion in the retina would be a functional
characteristic that the retina is carrying to the LGN. And this
cartoon shows various patterns of convergence. We know there are complex synapses, but is there a reason for them? Well, you see various possibilities here. For instance, all the neurons that prefer a vertical or horizontal direction of motion might converge and combine, but they could have different ON/OFF preferences, all right? So there's a pooling of information, and they quantified all the different kinds of patterns. Here's another example where the inputs could have any direction-of-motion preference, but it's only the ON axons from the ganglion cells that converge. So you really have a structure here that picks out certain features and makes the response invariant to other things; it's feature invariance. You have a kind of pooling of information that lets a postsynaptic neuron in the LGN be sensitive to, say, horizontal motion regardless of whether the light is coming on or off. So this kind of invariance is a functional explanation for what seems like complex, hard-wired circuitry that has come from development. Okay? Was that clear?
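To make that idea concrete, here is a minimal toy sketch in Python, purely my own illustration and not the authors' model, of how pooling inputs that share a direction preference but differ in ON/OFF sign yields a direction-selective, polarity-invariant response:

```python
# Toy sketch (not the authors' model): an LGN neuron that pools retinal
# boutons sharing a direction preference but differing in ON/OFF sign.
import numpy as np

def retinal_response(pref_dir_deg, sign, stim_dir_deg, stim_polarity):
    """Firing of a toy retinal input: tuned to direction, gated by ON/OFF polarity."""
    if sign != stim_polarity:          # an OFF bouton stays quiet for an ON stimulus
        return 0.0
    delta = np.deg2rad(stim_dir_deg - pref_dir_deg)
    return max(0.0, np.cos(delta))     # broad cosine tuning around the preferred direction

# A hypothetical LGN relay cell pooling four boutons: all prefer 0 deg (horizontal),
# but two are ON and two are OFF.
boutons = [(0, "ON"), (0, "ON"), (0, "OFF"), (0, "OFF")]

def lgn_response(stim_dir_deg, stim_polarity):
    return sum(retinal_response(d, s, stim_dir_deg, stim_polarity) for d, s in boutons)

for polarity in ("ON", "OFF"):
    for direction in (0, 90):
        print(polarity, direction, lgn_response(direction, polarity))
# Horizontal motion drives the cell whether the stimulus comes on or goes off
# (polarity invariance), while vertical motion does not (direction selectivity kept).
```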
However, if you think a little more deeply about this, the authors did think more about it and mentioned it in the discussion of the paper. So you have some functional logic, as they called it, some information-processing reasons for these patterns, but they also mentioned that the patterns would provide a substrate for plasticity. Imagine that this is what brain circuits do: there's an opportunity for change when we learn things, when circuitry is changing for various reasons, and you want plasticity built into the system. Having hard-wired, complex convergent patterns would actually give the opportunity for plasticity: if you could enhance some synapses but not others, that would change the functional characteristics of the cell. And this same type of thing,
imaging the functional activity in dendritic spines is also
done in visual cortex. So I've now taken you from the lateral geniculate to the primary visual cortex, and through this amazing technology, this one from the Fitzpatrick lab, you see a case where, with two-photon calcium imaging, you can see the activity of a neuron, usually from the cell body, in response to a visual stimulus. But you can also quantify the activity in individual dendritic spines, which is what's shown up here. In this example the animal, a ferret this time, is being shown motion in different directions, and you see the angle, the direction of motion, here. They've quantified activity in the cell body of a pyramidal neuron in visual cortex of the head-fixed animal while the visual stimulus moves in various directions, and we can say that this pyramidal neuron prefers the 270-degree direction; that is its directional preference. But now look at the dendritic spines. You see, color coded in blue, that the cell body likes 270 degrees, and some of the dendritic spines are most active for that direction, but the ones coded in red are active for the opposite direction, in fact. So we can quantify this, and I actually think that is the most exciting part, just to be able to quantify this in a living visual cortex while the alert animal is watching a stimulus on the screen.
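As an aside, the way a "preferred direction" is typically summarized from responses like these is a simple response-weighted vector average. Here is a minimal sketch, my own illustration with made-up numbers rather than the paper's data:

```python
# Minimal sketch: estimate a preferred direction from trial-averaged responses
# measured at several motion directions (made-up example values, not real data).
import numpy as np

directions_deg = np.arange(0, 360, 45)                           # stimulus directions tested
responses = np.array([0.2, 0.1, 0.3, 0.5, 1.0, 2.0, 1.2, 0.4])   # e.g. mean dF/F per direction

theta = np.deg2rad(directions_deg)
vector = np.sum(responses * np.exp(1j * theta))   # response-weighted vector sum
preferred_deg = np.rad2deg(np.angle(vector)) % 360
selectivity = np.abs(vector) / np.sum(responses)  # 0 = untuned, 1 = perfectly selective

print(f"preferred direction ~ {preferred_deg:.0f} deg, selectivity index {selectivity:.2f}")
# The same calculation can be applied to each dendritic spine's responses to ask
# whether a spine's preference matches or differs from that of the soma.
```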
So now this provides the opportunity. I think the biggest thrust of this Nature paper was that we can do this in primary visual cortex, and it now gives the opportunity to pursue all sorts of questions about plasticity in the circuit of primary visual cortex. And I'm not going to take you
all through this, don't worry, but this is from a review article by Michael Stryker, and I found it very helpful in trying to understand what was going on in my portfolio, because there's a lot of activity in this area and people are looking at the different things that are being modulated. I put in the yellow box the real target of all this modulation: you're seeing dendritic spines and synapses that can be unsilenced, changes in spine density so that new spines grow, all subject to some form of plasticity. So you might have a circuit that helps with synaptic plasticity in various ways, either by unsilencing synapses, promoting synaptogenesis, or even just heightening the activity or responsivity of an entire neuron, so that a pyramidal cell can enhance its synaptic connections to other pyramidal cells in the circuit as a substrate for memory formation. So plasticity in this system is
key, and listed here are just some of the approaches: investigators in the portfolio are looking at acetylcholine receptors, even transplantation of embryonic inhibitory neurons. But the big picture is really this local circuit feeding into the pyramidal neurons, the key players, modulating the excitation and inhibition of the system and giving rise to plasticity. So the punchline here is that V1 experience-dependent plasticity is a local-circuit phenomenon, and there are lots of players, lots of PIs and projects in this portfolio, looking at plasticity in very detailed ways. However, this V1 plasticity is only a relatively small part of the portfolio, so I want to move on now and talk quickly about some of the other parts. Now, I've made
a pie chart to help me understand my own portfolio,
I’ve broken it down. I went through all the grants
and broke it down into various areas. I’ve already given
examples of V1 plasticity. Then there is the structure and function of the thalamus, which also includes the pulvinar, an important area of the thalamus, and we have studies in the portfolio looking at connections between the thalamus and the cortex and back, so the thalamocortical circuit is represented here. But the bulk of the portfolio is really these other areas: very basic, fundamental research that tries to get at visual processing, the anatomy of the system, structure and function of the cortex, visual cortical representations, processing mechanisms, and higher visual cortical functions. There is a wealth of Hubel-and-Wiesel-type work going on in this portfolio, and it is very much BRAIN Initiative related as well. So this, as I said, is
basic, fundamental research. Disorders like amblyopia and the various agnosias would be served by an understanding of this vast literature, but what has really come across more recently is its importance for current efforts toward visual cortical prosthetics. BRAIN is funding two projects on visual cortical prosthetics in which electrodes are being inserted into the brains of live subjects, and the teams are trying to figure out the correct pattern of stimulation to use to restore some kind of vision. At this point, all of a sudden, we start to appreciate that it's a good thing we know something about visual cortical processing, because there's so much that needs to be done to fine-tune the right patterns to restore vision. For example, one issue that came up right away is that you can't always get the right stimulation site; the electrodes sometimes have to be implanted outside the foveal region of V1, someplace else, so it's a good thing we know the anatomy. The stimulation timing is critically important because there's plasticity in this tissue, and there's also adaptation of these neurons. So if you have a camera mounted on the head and you are trying to use that image to figure out what pattern of stimulation to give, it's a good thing we know something about what kind of stimulation that circuit naturally gets from the thalamus. And then the one thing that has really come across is eye movements. All of a sudden the camera is on the head, but the visual cortex naturally gets signals in which the retina is moving all the time; the camera is not moving, so what do you do about eye movements? It just makes me glad that we've done so much basic research on eye movements, because here's a problem that really needs to be tackled.
There is so much in this portfolio that I thought I'd back up a little bit and, as I said, try to give you an understanding of what the people in this portfolio are thinking. So I'd break it into two conceptual approaches. The first looks for explanations in terms of the biophysical basis of neuronal computation. The portfolio is all about visual processing, visual computations, and the scientists in this portfolio aren't happy unless they can envision a mechanism at the cellular level for the visual perceptual phenomena we're trying to study. To put it simply, everyone seems to have in mind the morphology of the neurons, their membrane properties, and the synaptic connections in the local circuit and beyond. Those are the basic things we want to keep in mind when we're measuring in the lab. We can measure things like the response of a neuron to the orientation of a stimulus, to the direction of its motion, to its contrast, to its spatial frequency, to binocularity. Those are things we can measure by giving stimuli in carefully controlled directions or at different spatial frequencies, but we would like to relate them back to the biophysics: the morphology, which actually determines receptive field size, and the membrane properties, which determine the dynamics and how quickly a neuron adapts. So the PIs in this field talk about a neuron's tuning: what is the neuron like, what is its tuning, what are its preferences? This is one thing that is in most PIs' hearts, to try to understand the responses in terms of the biophysical mechanism.
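When PIs talk about "tuning," the practical object is usually a tuning curve fit to measured responses. Here is a minimal sketch, my own illustration with made-up firing rates rather than anyone's data, of fitting a circular-Gaussian (von Mises style) orientation tuning curve:

```python
# Minimal sketch: fit an orientation tuning curve to made-up firing rates.
# The recovered parameters (preferred orientation, width, amplitude, baseline) are the
# kind of quantities one then tries to relate back to circuit and membrane properties.
import numpy as np
from scipy.optimize import curve_fit

def tuning_curve(theta_deg, baseline, amplitude, pref_deg, kappa):
    """Von Mises-style tuning over orientation (180-degree periodic)."""
    delta = np.deg2rad(2 * (theta_deg - pref_deg))   # factor 2: orientation, not direction
    return baseline + amplitude * np.exp(kappa * (np.cos(delta) - 1))

orientations = np.arange(0, 180, 22.5)                          # stimulus orientations tested
rates = np.array([3.0, 4.5, 9.0, 14.0, 10.5, 5.0, 3.5, 2.8])    # mean spikes/s (hypothetical)

params, _ = curve_fit(tuning_curve, orientations, rates,
                      p0=[3.0, 10.0, 70.0, 2.0])                # rough initial guesses
baseline, amplitude, pref_deg, kappa = params
print(f"preferred orientation ~ {pref_deg % 180:.1f} deg, amplitude {amplitude:.1f} spikes/s")
```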
The other conceptual approach, and the two are not mutually exclusive, is the theory that's in every textbook: when we're thinking about visual processing, we're thinking about a hierarchy, from features presented on the retina, to relatively simple features represented in primary visual cortex, and then, as you go on through the visual pathway into, for example, temporal cortex, to more complicated things like shapes and faces. So there's a hierarchy built in, and I'm willing to call this a theory because it is always in the framework of our thinking. A couple of quick examples of
these two ways of thinking. I thought I would use some work from our own council member, Jose-Manuel Alonso (if you don't understand this, I'll explain it at lunch), who has in a beautiful way used the interplay between characteristics like ON and OFF, whether the light comes on or goes off, and features like spatial frequency and contrast sensitivity. You see here in the diagram that there are complex interplays between neurons carrying that type of information, even in primary visual cortex. Jose-Manuel and his colleagues
in a Nature paper, which I like very much, discovered that there are more OFF neurons in primary visual cortex and coined the phrase "dark dominance," which does indeed sound like Star Wars to me. But more than that, they've used it to explain the mapping of features in primary visual cortex. Some of these interplays also suggest gestalts, like there should be sensitivity to motion and depth. And the gestalt I got from reading some of these papers is that maybe this explains why some art is more beautiful to us than other art. I wonder how many of us have ever thought about whether we're seeing what's actually there, or seeing what our brains are telling us we see. What I took from the papers is that large white objects, like the river or the clouds here, look larger and fluffier than they would otherwise, and he's nodding, so maybe I got this right, and you also have a better ability to see fine detail in the dark areas. So maybe Ansel Adams was on to something here. Another thing I wanted to
briefly mention is the importance of eye movements, which I put in the category of biophysical explanations. It has been appreciated, especially by Rucci and colleagues, and maybe this is a rebirth of activity, that as we scan a scene, as you see here, the yellow lines are saccades to different fixation points. Now, ideally a fixation point should be where your eye is still; that's why they call it fixation. But if you look closely at a fixation point, like the F12 fixation point there, you can see a close-up of the movement of the eye over time. These periods in between the saccades are supposed to be steady fixations, right? And you can see here in red what's actually happening during what we thought would be a steady fixation: it's as if the retina is rubbing itself across the visual image, much as we rub our fingers across textures that we like to feel.
So it has been appreciated more and more, and I think I see this on every study section now: what about eye movements? If the eye is moving, then the spatial frequency tuning and contrast sensitivity of a neuron, which we have gone to such trouble to quantify, change. The image being conveyed from the retina to the visual cortex is different in terms of its spatiotemporal frequency content because of this retinal blur.
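A back-of-the-envelope way to see why fixational drift matters: a drift of v degrees per second across a grating of f cycles per degree modulates each photoreceptor at roughly v times f cycles per second. A minimal sketch with hypothetical numbers:

```python
# Minimal sketch: fixational drift turns spatial structure into temporal modulation.
# Numbers are hypothetical, just to show the scaling (temporal freq = drift speed x spatial freq).
drift_speed_deg_per_s = 0.5            # assumed order-of-magnitude drift speed
spatial_freqs_cpd = [1, 4, 10, 30]     # grating spatial frequencies, cycles per degree

for f in spatial_freqs_cpd:
    temporal_freq_hz = drift_speed_deg_per_s * f
    print(f"{f:>2} cyc/deg grating -> ~{temporal_freq_hz:.1f} Hz modulation on the retina during drift")
# For fine gratings, even a "steady fixation" delivers substantial temporal modulation,
# which is why tuning measured with a stabilized image can differ from natural viewing.
```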
So people are starting to appreciate that. It's relevant to V1 neurophysiology, where people are starting to worry about eye movements and try to take them into account, but also to visual cortical prosthetics, as I mentioned: if the eye really is moving all the time, and if there is purpose to those small eye movements, then we should know that in a visual cortical prosthetic interface. This one can't be skipped
because it's the example of feature extraction, the idea I mentioned that everyone subscribes to. What's shown here, in the nonhuman primate brain, is that face patches have been discovered, well characterized, and shown to form a face patch network. As you go from the occipital representations down here to the temporal lobe, you basically go from a picture-based representation to something that's view invariant. If you want to recognize a face, it's fine if it's facing forward, but you would also like to recognize it from different views. Somehow the processing in this stream has gotten you to view invariance, and this paper from Freiwald also discovered a new area in the temporal pole that was active only for familiar faces. So this network is doing very beautiful things, and I think Anitha is going to talk more about some of these elegant transformations going on in this network. So I wanted to show examples
from what I call computational fMRI. I see I'm out of time, so I won't go through this, but as part of the portfolio there is some very elegant work; this is an example from Epstein, published in PLoS Computational Biology, which is one of the best journals for that type of thing. It's an fMRI study with a gorgeous experimental design in which people were looking at different scenes in the magnet, and through the use of a convolutional neural network model, in a computationally intensive way, the investigators were able to show that the occipital place area was sensitive to the path, those are the red lines, that the subject intends to take through the scene as they're looking at it.
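For a flavor of what "computational fMRI" typically involves, here is a generic sketch of the encoding-model idea, my own illustration and not the Epstein group's actual pipeline: regress CNN features of each image onto each voxel's response and test prediction on held-out scenes. The feature and response arrays below are random stand-ins.

```python
# Generic encoding-model sketch (not the published pipeline): predict voxel responses
# from CNN features of the viewed scenes, then test generalization to held-out scenes.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_scenes, n_features, n_voxels = 200, 512, 50
cnn_features = rng.normal(size=(n_scenes, n_features))    # stand-in for features from a pretrained CNN
voxel_responses = rng.normal(size=(n_scenes, n_voxels))   # stand-in for fMRI responses per scene

X_train, X_test, y_train, y_test = train_test_split(
    cnn_features, voxel_responses, test_size=0.25, random_state=0)

model = Ridge(alpha=10.0).fit(X_train, y_train)            # one linear map per voxel
predicted = model.predict(X_test)

# Prediction accuracy per voxel: correlation between predicted and measured responses.
accuracy = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out prediction r = {np.median(accuracy):.2f}  (random data, so ~0 here)")
```

With real features and data, the voxels that are well predicted, and the feature dimensions that carry the prediction, are what let investigators relate an area like the occipital place area to quantities such as intended navigational paths.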
Well, I'm sorry, I've run out of time. I will show you the last slide very quickly: very recent work from Richard Andersen's lab at Caltech. This is a photograph of the most
recent subject in his BMI studies, shown with permission. You see the subject here, who is paralyzed, and you can see there's a connector on his head while he looks at a video screen. What's being recorded in the brain comes from Utah arrays implanted in both the posterior parietal cortex and the motor cortex, so those grids are electrophysiological recordings of many, many neurons from those two areas. The waveforms show that they really got a lot of different neurons and stable recordings in this most recently implanted subject. They told us that he required very little training: as soon as he was connected to the computer interface, after the implants had healed and so forth, he was able to move the cursor on the screen with his own thoughts. And this, I think, is an advance:
you're probably aware of BMI technology, but this one uses both posterior parietal activity, and the posterior parietal cortex is a planning area for movement, so the subject can think about where to move, and motor cortex activity as well. The interplay between the information in these two areas is expected to give more sophisticated control.
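For flavor, the basic idea behind this kind of cursor control is a decoder that maps the population firing rates from the two arrays onto intended cursor velocity. A minimal sketch of that mapping, my own generic illustration and not the Andersen lab's decoder, might look like this:

```python
# Generic sketch of a linear cursor decoder: firing rates from two arrays -> cursor velocity.
# Not the lab's actual algorithm; real systems typically use Kalman-filter-style decoders
# whose weights are fit from calibration trials.
import numpy as np

rng = np.random.default_rng(0)

n_ppc, n_m1 = 96, 96                            # channels per Utah array (assumed)
n_units = n_ppc + n_m1

W = rng.normal(scale=0.01, size=(2, n_units))   # placeholder weights mapping counts to (vx, vy)
b = np.zeros(2)

def decode_velocity(spike_counts_100ms):
    """Map one 100 ms bin of spike counts from both arrays to a cursor velocity."""
    return W @ spike_counts_100ms + b

cursor = np.zeros(2)
for _ in range(50):                                  # simulate 5 seconds of closed-loop updates
    counts = rng.poisson(lam=3.0, size=n_units)      # stand-in for binned spike counts
    cursor += 0.1 * decode_velocity(counts)          # integrate velocity over the 100 ms bin
print("final cursor position:", np.round(cursor, 2))
```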
So, you know, with a little bit of practice and better training of the algorithm, I think this guy is going to be playing video games, and according to my son, that's what every young man lives for: playing real video games with his friends. So I went very
quickly through some of these aspects of the two portfolios
just trying to give you a flavor of what’s in them.
I left a little bit of a gap because our invited speaker, Anitha, is going to tell you more. In the interest of time, shall we just move on to Anitha and ask questions at the end? Or do you want questions now?
>>Yeah, let's entertain questions. The one slide you skipped over that I thought was interesting: you have to tell us how playing Pokemon changes your brain.
>>That's the one I left out, so okay. I wanted to try to give a good
balance of the portfolio, so I wanted to be sure to say we have both nonhuman primate work and fMRI studies. In terms of the face patches, there are both nonhuman primate studies from the Livingstone lab and human fMRI from the Grill-Spector lab, and in both cases these studies are interested in development in young subjects: we talked about plasticity, so how do these things change over time? In the adult there are these very well defined face patches in the network. How do we get there; how does this develop in the young? So the Pokemon paper in Nature Human Behaviour shows that in the adult temporal lobe there are very well defined face and nonface areas: in some areas you see fMRI activity when the person is looking at a nonface object, in others there are very definite face responses, and this pattern is really quite repeatable across subjects. In their design, they compared activity in subjects who had and had not had Pokemon-playing experience when they were younger, and they were able to show, significantly, that those who had played Pokemon had distinct representations for Pokemon figures. So that speaks to development. I left it out because it's very nice but not unexpected given what we know from development.
>>Thank you so much for a nice
summary of the NEI portfolio in the brain. I was surprised to hear that the visual processing section was shrinking, and I wonder if there is a relationship with the moment when the mouse became popular as an animal model in this field. I'm curious to know the relative percentages of mouse versus, you know, carnivores and nonhuman primates, now versus, let's say, 5 or 10 years ago.
>>You know, my impression is
that it's held pretty steady, and Michael had the portfolio before me, so I'm looking at him.
>>There's obviously been a great increase in mouse studies; when I first started there were almost none. When I was an SRO on the study section and an application came in that involved mice, the response would be: why mice, they can't see anyway, why are we studying them? So there was almost no representation. But this portfolio has one of the highest representations of nonhuman primate studies in all of NIH, so it has a really substantial number of people doing nonhuman primate work. The number of mouse studies is growing as the basic research is being done to understand what the pathways and the representations are, so there is an increase, a definite increase, that gets bigger every year.
>>So my perspective was only
over 3 years, and all I was trying to say is that even in those 3 years I've seen more go to BRAIN. We haven't done an analysis of that; that's how I'd explain why it shrank slightly in the last 5 years.
>>This is a very impressive
portfolio, and the basic science advances are tremendous, but I didn't see a lot of disease-specific research here. In particular, do you feel that the amblyopia research is keeping pace with these basic findings? Is it a competitive pool of grants of the same quality you're discussing here? Similarly, I didn't see much about cortical remapping in blindness, which is a big issue if you're going to do these prosthetics: if someone is blind from retinal disease and the cortex gets remapped, are you stimulating a map that is going to be useful and accurate? Are those active areas in the portfolio?
>>That's a great segue to the
upcoming Council presentations in October. Those will be the parts of the SAVP group that deal specifically with strabismus and amblyopia, and also myopia, and also some of the perception and psychophysics, maybe looking at that sort of remapping. We also have the low vision portfolio for some of those issues. What I presented today was really mostly the fundamental research, and they'll tell you in October that those areas are going strong, too.
