Think Smart: A Neuroscientist's Prescription for Improving Your Brain's Performance
When I was a medical student, neither teachers nor students placed much emphasis on the brain. The curriculum included only a first-year course in neuroanatomy, followed two years later by a one-month rotation spent working with patients on the neurology wards. After graduation, most medical students tended to avoid specialty training in careers devoted to treating the brain (neurology and neurosurgery), based on the general perception that not much could be done to heal or even improve the lives of many of the patients afflicted by brain diseases. I remember vividly my father's disappointment when I told him I was interested in neurology and psychiatry rather than obstetrics (his specialty). "You can't do anything for most of the patients you'll encounter in either of those specialties, and it's awfully depressing to just diagnose and not treat," he told me.
A lot has changed since then. We have learned more about the brain in the past decade than we did in the previous two hundred years. If he were still alive, my father would be amazed at the effective treatments now available for many brain diseases such as multiple sclerosis, migraine, and epilepsy, to mention just the most common. Neuroscience (brain science) is currently one of the most popular career choices among students attracted to science. Psychiatry and neurology are in the process of merging into the hybrid discipline of neuropsychiatry. But these advances didn't happen spontaneously. The shift from nihilism and pessimism toward curiosity and hope was stimulated principally by new ways of imaging the brain.
Until the middle part of the twentieth century, what little was known about the brain consisted of a mélange of speculation and dogmatism, the product of hosts of hoary old men staring through microscopes at brightly colored dye-stained neurons. Thanks to advances in brain-imaging techniques over the last thirty-plus years, it's currently possible for neuroscientists (many of whom are now women) to observe the development of the brain in real time, without any need for either speculation or dogmatism. The principle behind these illuminating (in both senses of the term) imaging techniques is straightforward: blood flow to the brain varies with activity. The greater the activity, the greater the flow of blood needed to replenish the oxygen and glucose used by the active neurons. This isn't any different from what happens elsewhere in the body.
When you lift a hundred-pound barbell at the health club in an effort to attract the attention of someone nearby in whom you're romantically interested, blood flow increases in the muscles of your arms and chest to meet those muscles' increased need for oxygen and glucose. Similarly, when you use a specific circuit in the brain, the components of that circuit will become more active and call on the circulatory system to provide additional glucose and oxygen. Positron-emission tomography (PET) and functional magnetic resonance imaging (fMRI) detect the changes in blood flow within active parts of the brain and record them while the subject lies within a special scanner.
Thanks to fMRI and other imaging techniques, we know that the brain never wears out, that it gets better the more we use it, and that it changes in structure and function throughout our lives. As a consequence of this plasticity we sculpt our brains according to our life experiences. As a result, no two brains are exactly alike, not even the brains of identical twins who, while they share the same genetic makeup, don't share identical experiences. Because of this diversity in the brain's organization and structure from one person to another, it's often possible to reach valid conclusions about a person on the basis of his brain's organization.
Perhaps you're thinking, "These differences could be the result of self-selection: people with superior visual attention might naturally be attracted to video games." To cover that possibility, Bavelier and Green scrounged around the campus and, after a good deal of effort, came up with thirty-two non-video-playing students. Half of them were assigned to play the puzzle game Tetris, and half to play the action game Unreal Tournament. All played their assigned game for thirty hours over the course of about thirty days.
In case you're not familiar with Unreal Tournament, it features a cornucopia of enemies and hazards coming at you on the screen from every direction. Allow your attention to waver for a few milliseconds and ... lights out! In contrast, the classic falling-blocks puzzle game Tetris is far less visually demanding. Since the puzzle pieces always drop from the top of the screen, there isn't any need to scan elsewhere.
With practice, "video-game players can process visual information more quickly and can track more objects on a computer screen than nonplayers," according to Bavelier. Two processes serve as limiting factors in tracking objects on a screen.
The first, attentional blink, is the half-second recovery time required to detect a second target during a rapid-fire sequence of targets. For example, if I ask you to watch for a white letter appearing on a computer screen at some point in a stream of black letters, you'll have no problem doing so. But if I then ask you to look for an X appearing after the white letter in a position ranging from immediately after the white letter to eight letters later, things become more complicated.
No problem if the X comes three or more letters later. But if the X appears very close in time to the white letter, you'll probably miss it. That's because while your brain is "busy" processing the white letter, it will be unable, because of attentional blink, to process anything else. This brief but critical gap can be narrowed if a person regularly spends several hours a week playing action-video games. In essence, action-video games enable players to shorten their attentional blink and thereby perceive and respond to threatening targets that typically spring unexpectedly from the periphery of the screen.
The second process, subitizing, refers to the ability to look at an array of objects and immediately and correctly enumerate them without resorting to counting. For example, when you quickly glance at the checkout lines in the supermarket and without counting automatically select the shortest line, you're subitizing. Most people can do that with up to four objects. Anything beyond that subitizing limit of four must be counted and requires extra time, 250 to 350 milliseconds, for each additional item. But for action-video-game players, the subitizing limit is about 50 percent higher: roughly six objects rather than four. What's more, only one hour a day for ten days of action-video gaming with an emotionally intense shooting game such as Medal of Honor is sufficient to improve both visual attention and processing time. No improvement in either factor occurs among players of Tetris or other non-action-video games, Bavelier and Green discovered.
Although no one has so far come up with a completely satisfying explanation for these differences, I suspect they're due to the increased threat and fear levels experienced by players of games like Medal of Honor, where players can lose their "lives" rather than simply fail to solve a puzzle as with Tetris.
This arousal of fear and aggression in response to perceived threats also plays a part in explaining why violent video games incite violent behavior in certain predisposed players. The amygdala, a small almond-shaped nucleus below the cerebral cortex that responds to threatening or fearful faces, doesn't distinguish between events occurring in a game and the same thing happening in "real life."
"Playing action-video games can alter fundamental characteristics of the visual system," Bavelier says. Essentially, video games provide a convenient and easily accessible means for changing the brain. The resulting fine-tuning of visual attention will enable you to see more, and respond more quickly and more accurately to simultaneously occurring events.
Cognitive Versus Physical Fidelity
In order to understand the value of digital game-based learning as a training tool for brain enhancement, it's helpful to distinguish between what's called cognitive and physical fidelity. Physical fidelity means that the training program faithfully replicates the real-life situation. For example, early flight simulators consisted of the front of a real airplane hooked up to computer displays. Sitting in one, you almost couldn't tell whether you were in the cockpit of an actual plane or in a simulator.
Several years ago I experienced firsthand the effects of digital virtual learning: a flight simulation based on physical fidelity. An airline pilot patient of mine was required by the FAA to undergo simulator testing after recovering from a recent head injury. As part of the evaluation, the FAA requested that I accompany him to the testing so that I could be interviewed about the risk that he might experience an epileptic seizure as a result of the head injury. While in the simulator my patient performed so well that he successfully completed the testing a half-hour early. Since a half-hour of paid-for time remained, the evaluators asked if I would like to try the simulator. Caught off guard, I readily agreed to what I anticipated would be an experience no different from playing a video game.
A few minutes later, I found myself strapped into a seat and at the controls of what had once been the cockpit of a real 747 but was now part of a simulated testing protocol. I remember two things about the experience: the initial thrill of looking through the windshield and observing how the scene changed as I moved the yoke (control wheel); and the uneasiness I immediately experienced when the instructor told me after several minutes of admittedly enjoyable flight simulation, "It's now time to land the airplane. Listen to my instructions and I'll tell you how to do it." Of course I reminded myself for the umpteenth time that the simulated flight wasn't "real." But somehow that didn't calm me down.
The brain, in contrast to a computer, changes in both size and function as it ages. The brain reaches its maximum size (measured by weight) somewhere between twenty and thirty years of age and decreases progressively in size for the remainder of the life span. Function too changes with age: as we age we experience slower reaction times and declines in spatial processing and working memory, among other functions. Yet these changes aren't due so much to the loss of brain cells, as was formerly believed, as to a failure to maintain the neuronal circuitry linking neurons to one another. In support of this view, neuroscientists have found that the number of synapses linking neurons to one another in the cerebral cortex decreases with age, while the number of neurons themselves doesn't change very much.
Given these facts about the brain, the metaphor of brain-as-computer is of limited usefulness. What metaphor will enable us to put into practice all of the different pathways to brain enhancement and improvement suggested in this book?
The best and most helpful metaphor for the brain that I have come across was suggested by neurologist Kenneth Rockwood of Dalhousie University, Halifax, Nova Scotia, Canada. "Perhaps we should think of the metaphor of a series of marathons," he says. "As our brains age, we must prepare them to resist injury -- equip them with good education, train them thoughtfully with challenging regimens, support them with nurturing environments, and be prepared to refresh them from time to time."
Rockwood's metaphor allows for both the structural and functional changes that accompany brain growth, development, and aging, and for the active approach that will enable us to help our brains achieve optimal performance throughout our lives. The metaphor is also consistent with the brain's varying performance depending upon its physical conditioning, which can always be improved by additional effort and training.
Rockwood concludes: "We should recognize that performance can vary dramatically from one marathon to the next. Perhaps the most important consequences of such a metaphor are that we should aim to see cognitive aging as a challenge for which we must prepare, that we gain enough equanimity to accept the slowing that even elite athletes experience, and that we reflect on what has been achieved along the way."