Anesthesia and Consciousness

As anyone unfortunate enough to have experienced it with any frequency will tell you, anesthesia is a strange thing. One moment you’re in the operating room, chatting nervously with the people who will soon hold your life in their hands, and the next, you’re staring at the ceiling of the recovery room. It’s not like sleep, with its natural transition from one state to the next; it’s abrupt and disorienting.

The Atlantic recently published a long-form article focusing on consciousness through the lens of anesthesia. While I wasn’t in love with how the article handled the discussion of awareness during surgery, overall it’s well written and touches on many of the key issues involved in trying to figure out this whole consciousness thing. Moreover, it reminded me of a talk given by Gilles Plourde at the Cognitive Sciences Summer Institute on the Evolution and Function of Consciousness, which I had the honor of attending last summer. Dr. Plourde (who I’m surprised wasn’t mentioned in the article) is an anesthesiologist with a tremendous moustache who has done some really interesting work using brain imaging and electrophysiology to try and figure out what exactly anesthetics do to disrupt consciousness (tl;dr, it probably has a lot to do with thalamocortical communication). All of the sessions from the Summer Institute were recorded and put online, and I’ve embedded Dr. Plourde’s talk below (though I strongly encourage you to check out the full list).

Somewhat related, there have been a bunch of interesting papers recently on how anesthetics influence functional connectivity in the brain:

Lewis, L. D., Weiner, V. S., Mukamel, E. A., Donoghue, J. A., Eskandar, E. N., Madsen, J. R., et al. (2012). Rapid fragmentation of neuronal networks at the onset of propofol-induced unconsciousness. Proceedings of the National Academy of Sciences of the United States of America, 109(49), E3377–86. doi:10.1073/pnas.1210907109

Heine, L., Soddu, A., Gómez, F., Vanhaudenhuyse, A., Tshibanda, L., Thonnard, M., et al. (2012). Resting state networks and consciousness: alterations of multiple resting state network connectivity in physiological, pharmacological, and pathological consciousness states. Frontiers in Psychology, 3, 295. doi:10.3389/fpsyg.2012.00295

Liang, Z., King, J., & Zhang, N. (2012). Intrinsic organization of the anesthetized brain. The Journal of Neuroscience, 32(30), 10183–10191. doi:10.1523/JNEUROSCI.1020-12.2012

Schröter, M. S., Spoormaker, V. I., Schorer, A., Wohlschläger, A., Czisch, M., Kochs, E. F., et al. (2012). Spatiotemporal reconfiguration of large-scale brain functional networks during propofol-induced loss of consciousness. The Journal of Neuroscience, 32(37), 12832–12840. doi:10.1523/JNEUROSCI.6046-11.2012


Introduction to NMR/MRI

Aside

FMRI has become an industry standard for neuroimaging, and while it’s relatively easy to understand the basics of the BOLD response and how neural activity can affect blood flow, trying to visualize the fundamentals of MRI physics can be really difficult. Luckily, the friendly New Zealand company Magritek has produced an incredibly easy-to-follow and informative series of videos covering the basic physics of magnetic resonance all the way up through 2D MRI. They also make a really neat desktop MRI apparatus that uses the Earth’s magnetic field as its primary field, thus avoiding the need for cryogenics and superconducting magnets.
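If you’re curious why an Earth’s-field instrument can skip the superconducting magnet, the Larmor relation f = (γ/2π)·B tells most of the story: precession frequency scales with field strength, so a weaker field just means lower-frequency (and much simpler) electronics. Here’s a quick back-of-the-envelope sketch of my own; the ~50 µT figure for the Earth’s field is approximate, and 3 T stands in for a typical clinical scanner, so neither number comes from Magritek:

```python
# Larmor frequency for hydrogen protons: f = (gamma / 2*pi) * B
GAMMA_OVER_2PI_MHZ_PER_T = 42.577  # proton gyromagnetic ratio, MHz per tesla

def larmor_mhz(b_tesla: float) -> float:
    """Precession frequency (MHz) of protons in a field of b_tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T * b_tesla

for label, b in [("Earth's field (~50 uT)", 50e-6),
                 ("Clinical scanner (3 T)", 3.0)]:
    print(f"{label}: {larmor_mhz(b):.6g} MHz")

# Earth's field: ~0.00213 MHz (about 2 kHz, i.e. audio-range electronics)
# 3 T scanner:   ~128 MHz (radio-frequency hardware, superconducting magnet)
```

A couple of kilohertz is audio-range, which is a big part of why an Earth’s-field system can be a desktop box rather than a cryogen-filled bore.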

CNS 2011

My trip to San Francisco was possible largely thanks to the hospitality of my best friend Rupe, who has an incredible view from his balcony.

Earlier this month I had the opportunity to attend the Cognitive Neuroscience Society 2011 Annual Meeting in beautiful San Francisco, where my lab presented some of our recent research.

Sylvia, Dr. Mangels, & Belèn with our poster: "Task goals and achievement mindset influence attention to feedback and learning success in a challenging memory task"

This was my first time at a big academic conference, and I really had an incredible time. Few things make me happier than interesting ideas and intelligent people, and CNS provided 40 solid hours of learning and conversation which left me smiling like an idiot by the time I got on the plane back to New York. Aside from filling 25 pages of my Moleskine with notes (which I’m in the process of transcribing and putting online), the chance to talk with many of the names I’ve been reading for years was really exciting, and possible in no small part thanks to the fact that my advisor turned out to be friends with (or have worked with) almost everyone I wanted to meet. It was particularly great to chat with Roberto Cabeza, whose Attention to Memory model inspired my insight research. On a more practical level, it was really valuable to talk with current graduate students in programs and labs I’m considering applying to for my PhD, and I learned some important things for when it comes time to make my decision.

I’m sure I would have enjoyed the conference had I gone on my own, but it was great to be there with 5 other members of my lab. Though I met a bunch of great people at the Student Society dinner, it was really nice to always have someone I knew to grab lunch with or chat to during coffee breaks.

Equally important to having people to talk to is having people to watch your things while you collapse between sessions.

The conference went from 8am to 7pm every day, so I didn’t get much of a chance to see friends in the city, but I did experience my first authentic Mission burrito during our lab dinner at Pancho Villa Taqueria. The Foursquare tips said to get my burrito mojado (wet, hence the sauce and cheese), and it was definitely the right decision.

The horchata was also excellent.

I also rode the cable car for the first time, which I had somehow neglected to try despite all the time I’ve spent in San Francisco. I couldn’t help feeling like a monkey as I hung off the side (thanks, Eddie Izzard), but it was preferable to walking back up Nob Hill to where I was staying.

I’ll post a link here once my notes are up online, but for now, I leave you with nightmare EEG baby, courtesy of the EGI booth.

EEG baby knows your darkest secrets.

P.S.: Tal Yarkoni, currently a post-doc at CU Boulder, is not just a talented researcher and blogger, but also a very funny man. Check out his CNS timeline for a good laugh.

Watson

Dave, who knows more than a bit about this sort of complex computer mumbo-jumbo, pointed out that Watson likely isn’t actually processing the questions semantically, but rather basing its answers on statistical relationships. Though this doesn’t significantly change the things I address in my post, it’s an important distinction to make.

This evening was the first airing of the long-awaited Jeopardy! match between two of the game’s most successful players and Watson, a question-parsing supercomputer developed by IBM. It’s obvious that Big Blue wouldn’t have given the go-ahead for an official match until they could be relatively sure of a victory (or at least of holding their own), but that knowledge doesn’t make watching this machine play any less impressive. At least judging from what I’ve seen, Watson seems to be pretty on par with the best human players, if not a little bit better, though that got me wondering about how much of the computer’s ability was due to its processing, and how much benefit it got from its uniquely non-human characteristics.

Our lab does a lot of work investigating how things like emotion, anxiety, and stress influence learning, memory, and cognition. We often do this by having study participants answer a series of general knowledge trivia questions under different experimental manipulations. Put simply, our research and that of other labs has shown that the stress of being in a test environment, anxiety about getting the answers right, and focus on negative emotion after getting an answer wrong all contribute to a person doing significantly worse in these kinds of situations than others who aren’t as stressed, worried, or sensitive to negative feedback.

Add to those relatively high-level concerns the fact that humans are easy to distract, have different reading speeds, make inconsistent muscle movements, and are subject to things like being hungry or tired. Compare that to a computer that never needs to eat or sleep, has only one possible goal (answering questions), and doesn’t have any feelings to be hurt or things to worry about.

Suddenly bringing a human to a trivia fight might not be the best idea.

Of course, there are many areas where humans have a leg up on our robotic competition, particularly when it comes to “creative” thinking, language use, and joking. That being said, Watson seems to be surprisingly good at the kind of word-play problems that are common on Jeopardy!, which might be a good indication of how close it is to our level of ability.

Looking at the big picture, competing on a game show is unimportant compared to the possible real-world applications of a technology like Watson. As computers become more and more able to do things historically only done by humans, an obvious question is when that all-important line is crossed and the computers are better at being us than we are.

Though there are many ways to determine where this point is, and just as many arguments against any particular measure, let’s stick with Jeopardy! for a second. The way I see it, to be better than a human at answering trivia questions, Watson only has to be as good as the best human player, at least in terms of “thinking” ability. In a game between Watson and a human champion with exactly equal semantic-processing/question-answering abilities, the computer will always win. Because Watson doesn’t care if it gets a question wrong, but Ken Jennings really doesn’t want to be beaten by a computer. Because Watson will always hit the buzzer at exactly the right time, and Joe Human might slip. For all the reasons I talked about above, Watson doesn’t have to be any smarter than a human to beat us.
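To make that buzzer argument concrete, here’s a toy simulation of my own; it isn’t based on anything IBM has published, and every number in it (the 90% accuracy, the 10 ms machine latency, the 50 ms of human timing jitter, the 250 ms lockout penalty) is invented purely for illustration:

```python
import random

# Toy model of the buzzer argument above. Both players know any given answer
# with the same probability; they differ only in buzzer timing.
P_KNOW = 0.9             # shared chance of knowing a given answer
MACHINE_LATENCY = 0.01   # machine buzzes a fixed 10 ms after the buzzer opens
HUMAN_JITTER_SD = 0.05   # human aims for the opening but is off by ~50 ms
LOCKOUT_PENALTY = 0.25   # buzzing early locks you out, roughly a 250 ms delay

def contested_clue(rng: random.Random) -> str:
    """Both players know the answer; whoever buzzes in first gets it."""
    offset = rng.gauss(0.0, HUMAN_JITTER_SD)          # human timing error
    human_latency = LOCKOUT_PENALTY if offset < 0 else offset
    return "human" if human_latency < MACHINE_LATENCY else "machine"

def clue(rng: random.Random) -> str:
    machine_knows = rng.random() < P_KNOW
    human_knows = rng.random() < P_KNOW
    if machine_knows and human_knows:
        return contested_clue(rng)
    if machine_knows:
        return "machine"
    if human_knows:
        return "human"
    return "nobody"

rng = random.Random(0)
n = 100_000
results = [clue(rng) for _ in range(n)]
for who in ("machine", "human", "nobody"):
    print(f"{who}: {results.count(who) / n:.1%}")

# With identical knowledge, jitter-free timing alone hands the machine the
# large majority of contested clues.
```

Under these made-up numbers the jitter-free player walks away with over 80% of the answered clues despite knowing exactly as much as its opponent, which is the whole point: equal knowledge plus perfect timing is already a rout.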