Who's in the Video
Vincent Pieribone is Associate Fellow, The John B. Pierce Laboratory, and Associate Professor, Cellular & Molecular Physiology and Neurobiology, Yale University School of Medicine. He attended New York University College[…]

Vincent Pieribone is most excited about figuring out a way to study the brain without destroying it.

Question: What’s been one of your best “Ah-Ha” moments?

Vincent Pieribone: Let's see, big ah-ha moments. That's interesting. One of the projects that I'm most excited about these days is trying to develop new ways of studying the brain, and one of the things I've been working on is optical methods of observing the brain. In other words, ways of literally visualizing brain function. So, literally seeing when nerve cells are firing in response to something that's going on, in real time. And having computers analyze these kinds of images, or movies of the brain, and deriving an understanding of how the brain is processing the world around us. And the future of this is prosthetics and things like this. And we were developing a probe to try to do this, where we were combining something that glowed with something that sensed voltage. Voltage is the big thing that we are concerned about in the brain; little nerve cells are changing their electrical potentials and we'd like to see that. So, we're creating these artificial proteins, which combine naturally occurring things that sense voltage and naturally give off light. And we put them together in various ways, and 90 percent of them don't work at all, and it's very frustrating.

I plotted out a whole series of experiments for my graduate student: you should insert this here, you should put these two together there. And of course everything I suggested didn't work, and in the middle of our discussion he said, "Well, I want to make this kind of combination." And I said, well, that's foolhardy, because you're a student and don't know better. And of course that's the only one that did work, at the end of the day. So, I was eating crow for a long time. But it was that moment when we had gone through a lot of these and things didn't look very good, and he started looking at me like, "I think he's crazy, and I think I made a mistake by doing my thesis with him." And then you put the one in and it happens. And you're watching it, and of course the first impression is, something's wrong because it's working. So, we've done something wrong. We literally spent a week saying, okay, let's do it again because there's something wrong, it looks too good. The worst thing that can happen is you get all excited about something and then you find out it was some idiotic mistake, that you've left a switch in the wrong position – of course we've done that more often than not. And this was that kind of moment for me where, I don't know when it was during the week, I kind of finally go, this may actually be real. And there was one aspect to the way that the thing we made was responding that didn't make sense to me. That's when I thought it wasn't real. And I showed it to a friend at Yale, and he said, "Oh no, that's how it should act," because of this thing that you didn't know about. And then when I learned that, I said, oh, then it is doing what it's supposed to be doing. And then I got very excited and it all started coming together. And that was a moment for him, for the student; I guess he was also sighing with relief that he was going to finally graduate. But it's just this idea that – wow, it did work. I can't believe we did this, and something actually came out of it.

And so then we followed up to study that, and he ultimately got his thesis and graduated and worked for McKinsey, you know. So, there goes that. But he was a good scientist and he finished, and he went back to Japan and then worked at a different job. But it was that moment, and I think he still sees it – he writes me all the time from his job about how he misses that emotional feeling of, you know. And then we went on to make lots of different variations on that, each of which had its own little personality, and that's kind of a moment.

And then recently we had the same experience, this was about two weeks ago in the lab, modifying these things again: something we didn't think would work, did. And it's just little things. They're not things that end up on the cover of Time, but they are things that are incrementally moving us forward. We know all the people in our business are going to be really impressed by this, and that's what makes us – we know that when we go to a meeting, somebody is going to go, "Wow! That's cool." And that's the gratification that we need, because that's our peer saying, "God, I know how much work that must have been, and you must have been really excited."

Question: How will this research come into play for prosthetics?

Vincent Pieribone: A while back, some five or six years ago, I was struck by a couple of things that happened in my life. One of them was, I saw a couple of movies, and one was "The Diving Bell and the Butterfly," about a high cervical injury – a spinal cord injury patient who ended up writing this whole book by blinking his eyes. It's really a beautiful story. And the movie I thought was well done, and sad. But there's this notion that when you get spinal cord damage high up – high cervical meaning high up in the neck – you lose all the ability to use your lower organs. In other words, you lose the ability to move: muscular, motor, and sensory. And it's the kind of thing they call "locked-in syndrome," which is basically, the brain is completely functioning, and your visual system is generally okay because that enters your brain above the spinal cord. Most of the upper cranial nerves come in, so they can usually move their mouths, and sometimes they can breathe and things like this. But essentially they have no control of their lower body. And when we were writing this book, we interviewed a kid who was graduating from high school and got into a fight on the night of his graduation, and someone cut him in the neck with a knife, and he was paralyzed like this when he was like 18. And now he's 25 to 30, around that age, and he's just been living in what is essentially a prison. It's horrible to imagine. The suicide rate is very high in this group and it's just terrible. There are almost 100,000 people in America every year who get this from auto injuries and motorcycle injuries and things like this. And in Connecticut, where we have no helmet laws, we have lots of people like this. But this is just completely untouched by medical science. There's nothing that we can do.

And you're dealing with a problem where the brain has made these connections to the spinal cord when we were tiny embryos, and then as we grew and grew, these connections kind of stretched and stretched into an adult human. And the cells in our brain are like 20 microns, tiny little things. And they send these long fibers that travel all the way down to our spinal cord. And they, of course, tell our spinal cord to move and do all the things we do. And then in reverse, there are all these fibers that come up and carry all this tactile information about what we're touching and feeling. And when you cut that, all the nerve cells are still up there in the brain, they're still doing what they're doing. They're still saying move your leg, move your arm. But nobody's listening. And those fibers cannot regrow. And there's been a lot, a lot of money spent on trying to get these things to regrow.

But the analogy I always use is, the size comparison would be: if I were a nerve cell, my arm would be projecting all the way out and touching someone on their shoulder somewhere in North Carolina, right? That's the distance and the size relationship. And then somewhere in Delaware, my arm gets cut off. You know? And then I have to blindly find my way, with a new arm, all the way back to that person's shoulder in that specific town, in that specific, you know. And that connection happened at a time when that person was right next to me. I essentially made the connection when they were standing next to me. And then as they moved all the way down to North Carolina, my arm just grew. But the chances later in life that my arm is going to be able to find its way down there without all those clues is essentially zero. So, I'm not really a big fan of a lot of this research attempting to regrow these things. Unfortunately, I wish it were the case, but the idea that I'm going to somehow learn to reach out and find that person when there's really no map to do that – there's a map when we're embryos, when there's all kinds of things going on telling us what to do. There are also thousands and thousands of connections, or millions of connections, that are made and then pruned and removed as we go, because they are misconnected. So, it's a kind of process of elimination. So there really isn't any way to retrain those neurons to find their way down there.

So I saw this and sort of thought to take a different approach to the process, which namely is kind of – since those cells are still in the brain, and you're still able to say, "move your arm," "move your leg," but there's no way for it to get through. It's, can we bypass the damage and have a computer, through these methods, grab that information in the brain and watch the nerve cells and watch you think, essentially? Where you say, I want to move my arm, and what are the neural things that happen in your brain? What do the neurons do to instruct your arm to do that, and can we capture that signal and then say, okay, your brain is saying it wants to do this? And feed that information to robotics and to computers and have them do it.

It's sort of a "Matrix" kind of thing where you could read that information – and I don't mean it in the all-sophisticated way it is portrayed in these films. I mean more things like: can you move a wheelchair left, can you move a wheelchair right? Can you open the door, can you answer the phone? Start very simple, with a series of commands that could be learned. And they can do that now with EEG and things like this, but we would like to go beyond that, to where you could maybe write an email or send an email, or eventually move a robotic arm – that would be the thing. You'll never be able to play a piano, maybe, with the arm. But you don't really need the arm to play the piano. You may be able to think the notes and have it play, that kind of thing.

And so, all that's feasible, really. It's actually feasible to do that, because whatever you're capable of doing, all those commands have to have been constructed in your brain, and what we lack, really, is only the ability to capture that information and translate it into what the outcome would be. So every time you go to play this C note on the piano, your brain creates a certain specific pattern of activity. And if we knew that pattern – and computers today are so powerful they can deconstruct these signals, which seem rather abstract to our minds – computers can take those huge datasets and crunch them and say, okay, that's what this is and that's what that is.
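As a toy illustration of that idea (the numbers, neuron counts, and note labels here are all hypothetical, not his lab's actual analysis), here is what "crunching the data and saying that's what this is" can look like in its simplest form: learn the average firing-rate pattern for each imagined note, then label new activity by whichever learned pattern it sits closest to.

import numpy as np

def learn_patterns(firing_rates, labels):
    # firing_rates: (n_trials, n_neurons); labels: one intended note per trial.
    # Returns the average firing-rate pattern for each note.
    labels = np.asarray(labels)
    return {note: firing_rates[labels == note].mean(axis=0) for note in np.unique(labels)}

def which_note(pattern, learned):
    # Label a new pattern by the closest learned pattern (nearest centroid).
    return min(learned, key=lambda note: np.linalg.norm(pattern - learned[note]))

# Toy data: 6 simulated "neurons", two imagined notes, 20 trials each.
rng = np.random.default_rng(0)
c_trials = rng.normal([5, 1, 4, 1, 3, 2], 0.5, size=(20, 6))
e_trials = rng.normal([1, 5, 1, 4, 2, 3], 0.5, size=(20, 6))
X = np.vstack([c_trials, e_trials])
y = ["C"] * 20 + ["E"] * 20

learned = learn_patterns(X, y)
print(which_note(rng.normal([5, 1, 4, 1, 3, 2], 0.5), learned))  # expect "C"

Real decoders work on far noisier, higher-dimensional recordings, but the shape of the problem is the same: repeated patterns paired with known intentions, then pattern matching on new activity.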

But I looked at this whole process, from capturing the information to processing the information to the robotic action, and everything is in place to have this ready to go except the capturing of the information. So, except for the ability to read out what the brain is doing in what I would call a non-invasive way – we don't have that yet, and that's the limiting technology. So, I refocused my science several years ago on trying to fix that one problem: to try to fix the capturing of that information. Now, the traditional way to do it is you put in wire electrodes, unfortunately, and you record individual nerve cells on the tips of these tiny, fine wires that are half the thickness of a human hair. They go in, and with luck they hit some of the cells. And people have done this kind of work in animals and have shown amazing things. You can record maybe a hundred neurons from a monkey, and as the animal moves his arm around, you can reconstruct his movements exactly, within a certain degree of freedom, just from the activity of a hundred neurons in his brain.
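A rough sketch of that kind of reconstruction, under simplified assumptions: the data below are simulated, not from any real experiment, and the decoder is just an ordinary least-squares linear map from the firing rates of 100 hypothetical neurons to two-dimensional arm velocity. Published decoders are more sophisticated, but this is the core of the idea.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_neurons = 2000, 100

# Hypothetical ground truth: each simulated neuron is weakly tuned to 2-D velocity.
true_tuning = rng.normal(size=(n_neurons, 2))
velocity = rng.normal(size=(n_samples, 2))   # (vx, vy) over time
rates = velocity @ true_tuning.T + rng.normal(0, 0.5, size=(n_samples, n_neurons))

# Fit the decoder: find W so that rates @ W approximates velocity (least squares).
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Reconstruct movement from held-out activity.
test_velocity = rng.normal(size=(200, 2))
test_rates = test_velocity @ true_tuning.T + rng.normal(0, 0.5, size=(200, n_neurons))
predicted = test_rates @ W
print("correlation with true vx:", np.corrcoef(predicted[:, 0], test_velocity[:, 0])[0, 1])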

In these experiments, if the animal moves his arm to sort of follow a cursor on a screen, you can have the computer understand what the animal is attempting to do before he actually does it, and therefore move the object for the animal. And eventually the animal will stop actually moving his arm at all and just use his brain, and the computer will move the cursor around on the screen on its own, and the animal will then take his arm and do something else with it. So, the animal is scratching his butt while his brain is doing the computer games. You know.

And so, the brain is able to teach the computer how to do these things.  And so if you see those experiments, it just makes you think about the possibilities that would exist for humans, these people who are just locked in. 

So, this gentleman they did this with – there are only two humans who have ever had these wires put into their brain, and a company in Massachusetts did it. And we interviewed this guy for a book, and he was really, like, overwhelmed by it, and so excited by the notion of it. And unfortunately, they put these in him and they only worked for a short time, and then they stopped giving a signal. But for him it was the most exciting thing in his life, because, he said, for a moment he had control of something in his world. I mean, this is a guy who has to sit there and have people change him and bathe him and feed him and everything, right? And for a moment, he could do something of his own volition. It was such an empowering feeling, I think. So, even simple things, to be able to do them. So that's kind of been the goal, really. It's a long way to go and it's a struggle, but the goal ultimately is to kind of help get these people out of this prison that has taken their ability to do anything. That's the long answer to your question.

The idea would be, every time these nerve cells do their little thing – which is everything we do, as I'm moving my hands, as we're talking, and as you're thinking. All that is just lots of nerve cells firing, and those nerve cells firing is what gives us consciousness, what lets us know who we are, what we're doing; everything about us is basically those cells. And we have to be able to see those cells firing. And their firing is their way of communicating with each other, and we have to listen to those conversations. That's the name of the game. And we have to listen to lots of conversations. There are just millions and millions of neurons in the brain involved in any one activity. So, we need to record from some subset of those. We need to see what they're saying. And of course, they're speaking a language which is very foreign to us, and we need to see what they're saying while they're doing something, and train a computer to make that connection: that when Vincent is lifting his hand like this, these cells are doing something, and when he does that, they're doing something different.

And so, we need to train the computer on all these activities, and then in the future, in a person, when they go to do something, it can just read that thought activity and have it perform that action. So with the first people who did this, they had them sit in a chair and they said, "Imagine yourself picking up this coffee cup." Or, imagine yourself moving the joystick, or if there's a TV screen, imagine yourself moving a cursor in this direction or that direction. And then the brain cells will fire, the computer captures that sequence of events, and then when that happens in the future, it will move the cursor that way, and when it sees a different sequence, it moves the cursor the other way.
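That cue-and-capture procedure can be sketched in the same toy style as the earlier example. Nothing here is the actual clinical setup: the cues, the fake recording function, and the cursor logic are all stand-ins, just to show how labeled trials from imagined movements become a mapping that drives a cursor.

import numpy as np

# Hypothetical cues and the cursor step each one should produce.
CUES = {
    "imagine moving the cursor left": np.array([-1.0, 0.0]),
    "imagine moving the cursor right": np.array([+1.0, 0.0]),
}

def record_window(cue, rng, n_neurons=16):
    # Stand-in for the recording hardware: returns a fake firing-rate vector
    # whose mean shifts with the cue, so there is a pattern to learn.
    offset = 2.0 if "left" in cue else -2.0
    return rng.normal(offset, 1.0, size=n_neurons)

rng = np.random.default_rng(2)

# Calibration: present each cue many times and store (activity, cue) pairs.
trials = [(record_window(cue, rng), cue) for cue in CUES for _ in range(30)]
centroids = {cue: np.mean([x for x, c in trials if c == cue], axis=0) for cue in CUES}

# Online use: decode a new window of activity and move the cursor accordingly.
cursor = np.zeros(2)
new_activity = record_window("imagine moving the cursor left", rng)
decoded = min(centroids, key=lambda c: np.linalg.norm(new_activity - centroids[c]))
cursor += CUES[decoded]
print("decoded cue:", decoded, "| cursor now at:", cursor)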

And so, our goal is to try to turn those tiny little conversations between brain cells, those electrical conversations, into little flashes of light, basically, so that every time a cell is talking, we can see it, rather than having to put a little microphone down next to each one of them like we do now. This way we just look at them, and we run a very high-speed camera that would say, okay, he's talking, he's talking, she's talking, that kind of thing, and then when this pattern of activity happens, that means this, and when that pattern happens, that means that. So, it's that little step of turning the information that's buried in the brain into something that can be captured by the computer and translated.
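Here is a very simplified sketch of that readout step, with invented numbers rather than real imaging data: treat each cell's fluorescence over time as a trace from the high-speed camera, compute its change relative to baseline (a ΔF/F-style measure), and flag the frames where the signal jumps well above the noise, i.e., the moments when that cell is "talking."

import numpy as np

def detect_events(traces, baseline_frames=50, threshold=3.0):
    # traces: (n_cells, n_frames) fluorescence intensities from the camera.
    # Returns a boolean (n_cells, n_frames) array marking putative firing events.
    baseline = traces[:, :baseline_frames].mean(axis=1, keepdims=True)
    noise = traces[:, :baseline_frames].std(axis=1, keepdims=True) + 1e-9
    dff = (traces - baseline) / baseline           # fractional change, ΔF/F
    return dff > threshold * (noise / baseline)    # event when the jump clears the noise

# Toy "movie": 3 cells, 500 frames, one simulated flash on cell 1 around frame 200.
rng = np.random.default_rng(3)
traces = rng.normal(100.0, 1.0, size=(3, 500))
traces[1, 200:205] += 20.0
events = detect_events(traces)
print("cell 1 event frames:", np.flatnonzero(events[1]))

The frame-by-frame event patterns produced this way are the raw material the earlier decoding sketches would consume.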

And the idea would be, with these proteins that we've been working with that come from coral, you can potentially image these things through the skull – through a thin skull, per se, in a person, but directly through the skull. So you don't have to have all these horrible wires going through people's brains. And of course, in the case of people who are in that situation – if you're not really using your brain – this guy who we interviewed was more than glad to volunteer to do this, because really, essentially, he's not using much of his brain any longer. He can't feel anything for 90% of his body; he can't execute any function. So, I think the FDA and these bodies are thinking about doing that, because these people really have no hope whatsoever anyway, and I think our goal is to kind of give them some kind of functionality, or return some kind of functionality to them. So, that's where the research focus is. But it involves these little crazy proteins, which got us off on some sort of tangent into studying these strange proteins that came out of coral that help us do this.

Recorded on January 21, 2010

