Anesthesia and Consciousness

As anyone unfortunate enough to have experienced it with any frequency will tell you, anesthesia is a strange thing. One moment you’re in the operating room, chatting nervously with the people who will soon hold your life in their hands, and the next you’re staring at the ceiling of the recovery room. It’s not like sleep, with its gradual transition from one state to the next; it’s abrupt and disorienting.

The Atlantic recently published a long-form article on consciousness seen through the lens of anesthesia. While I wasn’t in love with how the article handled the discussion of awareness during surgery, overall it’s well written and touches on many of the key issues involved in trying to figure out this whole consciousness thing. It also reminded me of a talk given by Gilles Plourde at the Cognitive Sciences Summer Institute on the Evolution and Function of Consciousness, which I had the honor of attending last summer. Dr. Plourde (who I’m surprised wasn’t mentioned in the article) is an anesthesiologist with a tremendous moustache who has done some really interesting work using brain imaging and electrophysiology to figure out what exactly anesthetics do to disrupt consciousness (tl;dr: it probably has a lot to do with thalamocortical communication). All of the sessions from the Summer Institute were recorded and put online, and I’ve embedded Dr. Plourde’s talk below (though I strongly encourage you to check out the full list).

Somewhat related, there have been a bunch of interesting papers recently on how anesthetics influence functional connectivity in the brain:

Lewis, L. D., Weiner, V. S., Mukamel, E. A., Donoghue, J. A., Eskandar, E. N., Madsen, J. R., et al. (2012). Rapid fragmentation of neuronal networks at the onset of propofol-induced unconsciousness. Proceedings of the National Academy of Sciences of the United States of America, 109(49), E3377–86. doi:10.1073/pnas.1210907109

Heine, L., Soddu, A., Gómez, F., Vanhaudenhuyse, A., Tshibanda, L., Thonnard, M., et al. (2012). Resting state networks and consciousness: alterations of multiple resting state network connectivity in physiological, pharmacological, and pathological consciousness States. Frontiers in Psychology, 3, 295. doi:10.3389/fpsyg.2012.00295

Liang, Z., King, J., & Zhang, N. (2012). Intrinsic organization of the anesthetized brain. The Journal of Neuroscience, 32(30), 10183–10191. doi:10.1523/JNEUROSCI.1020-12.2012

Schröter, M. S., Spoormaker, V. I., Schorer, A., Wohlschläger, A., Czisch, M., Kochs, E. F., et al. (2012). Spatiotemporal Reconfiguration of Large-Scale Brain Functional Networks during Propofol-Induced Loss of Consciousness. The Journal of Neuroscience, 32(37), 12832–12840. doi:10.1523/JNEUROSCI.6046-11.2012

 

Disproving Radical Solipsism with Microsoft Excel


Start digging even a little into the problem of consciousness and philosophy of mind, and you’ll quickly encounter radical solipsism: the view that it is impossible to prove the existence of anything beyond your own conscious experience, let alone an external reality. Though most philosophers don’t take the claim very seriously, no one has yet produced a convincing argument against it.

Eric Schwitzgebel has just posted an interesting argument suggesting that there may be ways to experimentally disprove radical solipsism, using nothing more than your own mathematical ability and a copy of Excel. It’s certainly preliminary work, but fascinating nonetheless.

Watson

Dave, who knows more than a bit about this sort of complex computer mumbo-jumbo, pointed out that Watson likely isn’t actually processing the questions semantically, but rather basing its answers on statistical relationships. Though this doesn’t significantly change the things I address in my post, it’s an important distinction to make.

This evening was the first airing of the long-awaited Jeopardy! match between two of the game’s most successful players and Watson, a question-parsing supercomputer developed by IBM. While it’s obvious that Big Blue wouldn’t have given the go-ahead for an official match until they could be relatively sure of a victory (or at least of holding their own), that knowledge doesn’t make watching this machine play any less impressive. Judging from what I’ve seen, Watson is pretty on par with the best human players, if not a little better, which got me wondering how much of the computer’s ability is due to its processing, and how much benefit it gets from its uniquely non-human characteristics.

Our lab does a lot of work investigating how things like emotion, anxiety, and stress influence learning, memory, and cognition. We often do this by having study participants answer a series of general-knowledge trivia questions under different experimental manipulations. Put simply, our research and that of other labs has shown that the stress of being in a test environment, anxiety about getting the answers right, and a focus on negative emotion after getting an answer wrong all contribute to a person doing significantly worse in these kinds of situations than others who aren’t as stressed, worried, or sensitive to negative feedback.

Add to those relatively high-level concerns the fact that humans are easily distracted, read at different speeds, make inconsistent muscle movements, and are subject to being hungry or tired. Compare that to a computer that never needs to eat or sleep, has only one possible goal (answering questions), and has no feelings to hurt or anything to worry about.

Suddenly, bringing a human to a trivia fight might not be the best idea.

Of course, there are many areas where humans have a leg up on our robotic competition, particularly when it comes to “creative” thinking, language use, and joking. That being said, Watson seems to be surprisingly good at the kind of word-play problems that are common on Jeopardy!, which might be a good indication of how close it is to our level of ability.

Looking at the big picture, competing on a game show is unimportant compared to the possible real-world applications of a technology like Watson. As computers become more and more capable of doing things historically done only by humans, the obvious question is when that all-important line gets crossed and the computers become better at being us than we are.

Though there are many ways to determine where this point is, and just as many arguments against any particular measure, let’s stick with Jeopardy! for a second. The way I see it, to be better than a human at answering trivia questions, Watson only has to be as good as the best human player, at least in terms of “thinking” ability. In a game between Watson and a human champion with exactly equal semantic-processing/question-answering abilities, the computer will always win: Watson doesn’t care if it gets a question wrong, but Ken Jennings really doesn’t want to be beaten by a computer; Watson will always hit the buzzer at exactly the right time, while Joe Human might slip. For all the reasons I talked about above, Watson doesn’t have to be any smarter than a human to beat us.
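If you want to convince yourself of the buzzer point, here is a toy Monte Carlo sketch (in Python) of two players who know the answers equally often but differ only in buzzer consistency. Every number in it (the knowledge probability, the latencies, the miss rate) is made up for illustration; nothing here is a measurement of Watson or of any human contestant.

```python
"""Toy sketch of the "equal brains, unequal buzzers" argument.
All parameters are invented for illustration only."""
import random

N_QUESTIONS = 10_000
P_KNOW = 0.7               # both players "know" the answer equally often
MACHINE_LATENCY = 0.010    # seconds after the buzzer opens, essentially constant
HUMAN_LATENCY_MEAN = 0.012
HUMAN_LATENCY_SD = 0.015   # fast on average, but inconsistent
HUMAN_MISS_RATE = 0.05     # occasionally fumbles the buzzer entirely

machine_score = 0
human_score = 0

for _ in range(N_QUESTIONS):
    machine_knows = random.random() < P_KNOW
    human_knows = random.random() < P_KNOW

    machine_time = MACHINE_LATENCY if machine_knows else None

    human_time = None
    if human_knows and random.random() > HUMAN_MISS_RATE:
        # Latency varies from question to question; buzzing too early is
        # modeled crudely as being locked out and re-buzzing much later.
        t = random.gauss(HUMAN_LATENCY_MEAN, HUMAN_LATENCY_SD)
        human_time = t if t > 0 else t + 0.25

    # Whoever knows the answer and buzzes first gets the question.
    if machine_time is not None and (human_time is None or machine_time < human_time):
        machine_score += 1
    elif human_time is not None:
        human_score += 1

print(f"machine answered {machine_score}, human answered {human_score}")
```

With parameters like these, the more consistent buzzer takes the majority of the questions both players know, even though the two “brains” are identical.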

The Science of Morality

I was lucky enough last night to attend a fantastic panel discussion on the Science of Morality at the 92nd Street Y as part of the World Science Festival. Philosophers Daniel Dennett and Patricia Churchland and neuroscientists Antonio Damasio and Marc Hauser discussed the philosophy, psychology, and biology of this fundamental aspect of human nature. Below are some of the more interesting ideas discussed.

Reduced D2 receptor densities lead to difficulty learning from errors and negative feedback:
Genetically Determined Differences in Learning from Errors (Science)
“Go” and “NoGo”: Learning and the Basal Ganglia (DANA)
This is of particular interest to me due to its implications for ADHD.

Breakdown of Theory of Mind in social, emotional, and moral decision-making:
Mindblindness: An Essay on Autism and Theory of Mind (Google Scholar, book)

PFC damage inhibits normal social/moral behavior:
On the neurology of morals (Nature Neuroscience)

Fundamental neurochemical differences affect social behavior:
The effects of oxytocin and vasopressin on partner preferences in male and female prairie voles (Microtus ochrogaster) (PubMed)

Most of these articles are behind paywalls, so let me know if you need access to one of them and I’ll use my school account.
