It might not seem like it when you’ve forgotten your email password for the third time in as many days, but your brain is capable of amazing things.
It can instantly process the intricate sensory inputs needed to understand the world while simultaneously directing the motor neurons that let us navigate it. It can read complex emotions from minute facial cues, adapt to new information and learn new skills even into old age. Sure, computers are faster at algorithm-driven calculations, but the human brain remains far ahead when it comes to linguistic abilities, pattern recognition and creativity.
So much about how the brain works is still a mystery, but researchers at The University of Texas at Austin are making rapid advancements in prevention, treatment and care in the area of brain health.
Andreana Haley often uses the term “healthspan” when discussing her work. As it sounds, healthspan denotes not how many years the average person will live, but rather the number of years one can expect to spend in reasonably good health. And while it’s easier to pinpoint when death occurs than the moment when a person can no longer be described as “healthy,” it’s clear that human healthspan lags considerably behind lifespan.
In Haley’s lab, healthspan depends on brain function.
“Cognition is by far the most important determinant of quality of life and independence,” says Haley, an associate professor of psychology who began her career working with Alzheimer’s patients. While interacting with these older patients, she realized something discouraging.
“If you wait for someone to already have a diagnosis of Alzheimer’s disease or vascular dementia, it’s too late,” Haley explains. “Those processes that led the brain to clinical symptoms have been going on for decades.”
Haley opted to pursue prevention instead of cure, focusing her work on individuals in midlife (ages 40-60) and on identifying and changing those factors that predict problems in later life.
What does it take to keep our brains from deteriorating before we reach retirement? It turns out that cognitive health isn’t maintained simply by doing word puzzles or other “mental calisthenics.” The brain is, after all, just another organ in the body. And the same habits that imperil our hearts and livers may also chip away at our gray matter.
Research has shown links between obesity and other metabolic issues in middle age and cognitive decline in later life. The exact mechanisms are not fully understood, but clogged arteries delivering insufficient blood (and thus oxygen) to the brain are one possible culprit. Thus, interventions that help prevent or delay cardiovascular disease may also improve mental outcomes. And, yes, by “interventions” I mean the dreaded diet and exercise.
Haley’s lab is working on several projects involving changes to nutrition and exercise habits of middle-age subjects, using behavioral and neuroimaging techniques to measure the effects of these interventions. Researchers aim to answer several questions: What are the neurological markers that predict cognitive decline in later life? What underlying mechanisms drive this decline? And which individuals can be helped by a particular intervention?
Haley notes that the body affects the brain and the brain affects the body. These bidirectional connections are not easy to disentangle, she says, but they’re extremely interesting to pursue. Neurodegeneration affects cognitive function, personality and mood, but the brain regions that control weight, appetite and blood pressure also degenerate, so the brain shapes the very metabolic factors that, in turn, shape the brain.
For those who will not benefit sufficiently from lifestyle interventions, Haley may have a workaround — transcranial laser stimulation, also called low-level laser therapy. The idea is that rather than trying to increase the amount of oxygenated blood reaching the brain via behavioral changes, you bypass the vasculature and stimulate energy production directly in the brain by aiming a specific wavelength of light at the region that is struggling.
The light used in research is a specific, near-infrared wavelength, and Haley is quick to remind us that only a fraction of this light will reach its target, which is why it’s so important to carefully calibrate the dose. Her lab is lucky, she points out, that the areas they want to affect lie within the frontal and temporal lobes, unblocked by hair follicles, which (even when hair is shaved) can disrupt light waves.
The transcranial laser stimulation study — which tests cognitive task performance of subjects being treated with the technique while also gathering physiological data — is still underway, so you may want to hold off on canceling your gym membership until the results are analyzed.
Haley cautions that no single intervention will work for everyone, even within the lifestyle approaches.
“A one-size-fits-all diet is not going to be it,” she says. “There are so many genetic and environmental interactions in that realm that we’re going to have to figure out how to recommend individualized diets. It’s not going to be a ‘do this’ or ‘do that’ for everyone.”
She also notes that it isn’t necessary to eliminate cognitive decline entirely. Great strides could be made in both quality of life and health care costs by delaying its onset long enough for some lesser indignity to kill us.
“At some point,” Haley says, “we’re going to end up at the physiological end of life, and if we can push cognitive health to that point, I think we’ve done well.”
We talk about depression as though it is a singular entity, manifesting itself more or less identically in its victims with a progression as predictable as that of the flu. But in reality the disorder occurs in a bewildering array of configurations.
In a recent study, Chris Beevers’ lab surveyed 10,000 college students with depression about their specific symptoms. Although the survey contained only 13 questions, researchers found that 60 percent of respondents had unique combinations of symptoms.
“I was completely floored by that,” says Beevers. “We checked it like five times because I thought we’d done it wrong in terms of computing.”
A few key symptoms such as sadness and anhedonia (the clinical term for inability to enjoy anything) were present in most patients, but the remainder of the profiles were surprisingly diverse. Think of it as one of those build-your-own-salad places. Everyone starts with lettuce, but one person might add peppers, carrots, and garbanzo beans, while another opts for hard-boiled eggs, red onions and that imitation crab stuff.
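The arithmetic behind that diversity is striking. Assuming, purely for illustration, that each of the 13 survey questions is scored on a four-point scale (the study’s actual scoring may differ), the number of possible symptom profiles dwarfs the sample size:

```python
# Hypothetical illustration: if each of 13 symptom questions is
# scored on a 0-3 scale, how many distinct profiles are possible?
questions = 13
levels = 4  # assumed scale; the actual survey's scoring may differ

profiles = levels ** questions
print(profiles)  # 67108864 -- tens of millions of possible profiles
```

With far more possible profiles than the 10,000 respondents, the finding that most combinations were unique becomes less mysterious, though no less important for treatment.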
Beevers, a professor of psychology and director of UT’s Institute for Mental Health Research, is trying to make sense of all that diversity. Ultimately, he hopes not only to identify subgroupings of depression, but to determine which treatment options are most and least effective for patients with particular clusters of symptoms. To accomplish this, he and his colleagues are gathering large amounts of behavioral and physiological data, and then creating algorithms to predict which factors make a person more likely to respond to a particular treatment.
The main treatment Beevers studies is a nonpharmaceutical intervention called cognitive behavioral therapy (CBT). The treatment aims to break patients of negative patterns of thought (that can perpetuate their symptoms) through practice in identifying and redirecting those thoughts. The idea isn’t necessarily to turn every loss into a win, but to step back and get a bit of perspective.
“It’s not just ‘think positively about yourself,’ ” says Beevers. “It’s more like ‘think more accurately.’ Just that people are open to the idea that ‘I need to pay attention to what I’m thinking because my thoughts may not always be true.’ That is, I think, one of the big lessons of CBT. ‘Don’t believe everything you think’ is kind of a good philosophy.”
As with any learned skill, getting benefits from CBT requires strenuous repetition, but the hope is that as the neural circuits underlying more realistic outlooks strengthen and those maintaining negative ruts weaken, the process becomes automatic.
CBT has well-documented success in the treatment of anxiety disorders, but its efficacy is lower when used for depression. Beevers hopes to improve outcomes by creating more customized forms of the intervention.
“I’m very interested in developing precise and specific treatments that target a specific mechanism,” he explains. “As opposed to CBT, which is kind of a broad, multifaceted treatment, and so if it does work, you often don’t know why it works.”
One possible mechanism Beevers is looking at is “self-dislike” (probably more commonly described as “self-loathing” by those actively experiencing it). Rather than using a general version of CBT for subjects who reported this symptom, his lab is testing an intervention that specifically addresses self-dislike.
“The idea is that if we can target these really important symptoms in a population, changing those symptoms could have a cascading effect,” Beevers says. So, once the self-dislike dissipates, things such as mood and sleep may also improve.
The goal of this approach is to spare suffering patients and overwhelmed care providers the trial and error that typifies current treatment recommendations.
“There’s been a lot of discovery,” Beevers notes of his field, “but not a lot of translation.”
There’s still much research to be done before a person can walk into a clinic, complete a quick assessment, and be matched to the treatment options most likely to speed recovery, but a common device may help on both data-gathering and intervention fronts — the ubiquitous smartphone.
Your phone knows quite a lot about your habits without ever looking at the content of your text messages, which Beevers assures us this technology would not do. One of his newest projects is looking at whether smartphones can be used to spot early behavioral changes related to depression. Factors such as the number of calls/texts made in a day or how many places the phone visits beyond just home and work could be used to flag deviations from normal behavior in its owner.
It’s not unlike your bank’s fraud early warning system, except that instead of scanning for dubious out-of-state purchases, the system might sound the alarm if you stopped replying to friends’ texts or skipped too many social engagements. By notifying the patient or care provider of such changes, the system could allow for an intervention to be made before things deteriorate further. This is crucial because preventing a depressive episode in someone exhibiting sub-clinical symptoms is far easier than pulling that person out of a full-blown depression.
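As a rough sketch of how such a fraud-style early-warning check might work (a hypothetical illustration, not Beevers’ actual system), one could compare each day’s activity count against the user’s own recent baseline and flag only unusually low activity:

```python
import statistics

def flag_anomaly(history, today, z_threshold=2.0):
    """Flag a drop in daily activity (e.g., texts sent) relative to the
    user's own baseline. A hypothetical sketch, not the lab's system."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False  # no variation in baseline; nothing to compare against
    z = (today - mean) / stdev
    return z < -z_threshold  # flag only unusually LOW activity

baseline = [24, 30, 27, 25, 31, 28, 26]  # texts per day over a week
print(flag_anomaly(baseline, 5))   # True: sharp drop in messaging
print(flag_anomaly(baseline, 29))  # False: within the normal range
```

A real system would need to handle weekly rhythms, holidays and gradual drift, but the core idea is the same: the alarm is calibrated to each person’s own habits, not a population average.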
For now, the plan is just to gather and analyze the data. But if it proves useful, the system could also be designed to provide daily statistics to its users — sort of a mental health Fitbit for those who enjoy obsessing over quantifications of their activity levels. And, who knows, keeping track of your daily mental “steps” could itself be a way to maintain emotional well-being.
How do we perceive the world around us? How is the brain able to understand three-dimensional shapes from the two-dimensional images projected onto it? How do we know where one object ends and another begins?
“That’s a difficult problem, and a lot of people don’t appreciate how difficult it is because it seems simple,” says Bill Geisler, who has little patience for the common analogy that our visual systems work like a camera.
“Within the blink of the eye, your brain solves all these problems,” he adds. “It seems simple because it happens so quickly, but in fact probably 35 to 40 percent of all the gray matter in your brain is devoted to just doing basic vision perception. It’s very, very sophisticated to do, and it’s very, very difficult, and it’s taken millions of years of evolution to create a visual system that can do what ours can do.”
Making things even more challenging is the fact that our intuitions about how the brain processes visual information are terrible. If researchers based their work on best-guess “armchair” hypotheses, they wouldn’t get very far. This is why Geisler, a professor of psychology and director of the Center for Perceptual Systems, and his colleagues are taking a different approach. Rather than starting with the physiology of our visual systems, they are examining the environments in which those systems evolved.
This means collecting data on so-called natural scenes (i.e., everything around us) and analyzing their statistical properties. From these analyses, far better hypotheses can be formed about what the brain would need to do in order to keep us from walking into trees and the countless other seemingly easy functions it performs daily.
Geisler offers an illustration: If we see a shadowed figure in the distance and have to guess whether it is male or female, how would we accomplish such a task?
“Well, just knowing the statistical differences in height between males and females would help you perform it better,” he explains. Such statistical properties are all around us, and our brain observes and uses them constantly without our even being aware of these behind-the-scenes calculations.
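Geisler’s example can be sketched as a simple “ideal observer” that picks whichever population distribution makes the observed height more likely. The height statistics below are rough approximations chosen for illustration, not figures from his lab:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of a normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Illustrative population statistics in cm (approximate, assumed values).
MALE_MU, MALE_SIGMA = 175.0, 7.0
FEMALE_MU, FEMALE_SIGMA = 162.0, 6.5

def guess_sex(height_cm):
    """Ideal-observer-style guess: pick whichever distribution makes
    the observed height more likely (equal priors assumed)."""
    p_male = gaussian_pdf(height_cm, MALE_MU, MALE_SIGMA)
    p_female = gaussian_pdf(height_cm, FEMALE_MU, FEMALE_SIGMA)
    return "male" if p_male > p_female else "female"

print(guess_sex(180))  # male
print(guess_sex(158))  # female
```

The brain, on this account, is doing something analogous with countless environmental statistics at once, without any conscious effort on our part.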
One of the ways Geisler’s lab uncovers these statistical relationships is by using a device that takes stereoscopic photographs (comparable to what you would see through your two eyes) while also gathering measurements of the real-world camera-to-object distance for each corresponding pixel in the photo. From this data, they can run mathematical and computational analyses to determine what the “rules” might be for parsing a particular arrangement of pixels.
One innovation that emerged from their research is an application that can enlarge and sharpen photographs with less loss of quality than Photoshop can offer. Another is an algorithm that may improve the focusing speed of low-vision aids that work by showing their wearers enhanced video of what is in front of them, and that require constant refocusing. Cameras focus by searching until they find the lens configuration that yields the most contrast (i.e., the least blur). But the human eye snaps to the correct spot in just one try.
This is due to something called chromatic aberration, which occurs because a lens (both the eye’s lens and a camera’s) cannot perfectly focus all wavelengths of light onto the same point. Even when a scene is in focus, certain colors will be very slightly blurred. And when a scene is out of focus, which wavelengths are more blurred indicates whether the lens is focused too far or too near. It is this property that our visual systems exploit to find the correct focus without all the trial and error. An algorithm created by Geisler’s lab mimics this technique and may allow the cameras of low-vision aids to retain better focus when adjusting to rapidly changing scenes.
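The trial-and-error contrast search that conventional cameras perform can be sketched in a few lines. The sharpness model and step sizes below are toy values for illustration; the point is how many measurements the search burns through before settling, compared with the visual system’s one-shot estimate:

```python
def contrast(lens_pos, true_focus=42.0):
    """Toy sharpness model: contrast peaks when the lens position
    matches the in-focus position. Purely illustrative."""
    return 1.0 / (1.0 + (lens_pos - true_focus) ** 2)

def search_focus(start=0.0, step=8.0, tol=0.01):
    """Classic contrast-maximization autofocus: probe ahead, compare,
    halve the step on overshoot. Counts the contrast measurements."""
    pos, measurements = start, 0
    while step > tol:
        here = contrast(pos)
        ahead = contrast(pos + step)
        measurements += 2
        if ahead > here:
            pos += step   # keep moving toward higher contrast
        else:
            step /= 2     # overshot (or flat): refine with smaller steps
    return pos, measurements

pos, n = search_focus()
print(f"settled at {pos} after {n} contrast measurements")
```

The search eventually lands on the in-focus position, but only after dozens of probe-and-compare steps; exploiting the sign of the chromatic blur instead collapses that whole loop into a single estimate.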
Further down the road, the Center for Perceptual Systems may someday aid in the development of prosthetic devices that would allow people with damaged eyes (but intact visual cortexes) to see. If you can predict the activity of the visual cortex in response to natural stimuli, Geisler says, it is possible to build a prosthetic device that mimics that activity.
Such technologies probably are not in our immediate future. Nor are they limited to vision. In theory, any activity in the brain, once understood, could be replicated with the kinds of sci-fi neural prosthetics Geisler describes. But getting to that full understanding of what is going on in the brain will take much more work.
“I think it’s important,” Geisler says, “for the public to recognize that this is not a solved problem.”
Although evolution has had plenty of time to optimize how we perform visual tasks, it’s still working out the kinks in some of our more recently acquired skills. Reading, which has only been around for about 5,000 years, is one such skill.
“The brain didn’t evolve to read,” says Jessica Church-Lang, in describing the unique challenges of written communication. “We have to take these visual mechanisms and these language mechanisms and get them to work together in a way that’s novel.”
That novelty may account for why reading difficulties are so common, affecting 10 to 15 percent of the population.
Church-Lang, an assistant professor of psychology, focuses much of her research on children with reading difficulties. She is working on a study tracking the effects of educational interventions in middle school students who speak English as a second language (ESL). The multisite study, conducted by teams in Austin and Houston in conjunction with the Texas Center for Learning Disabilities, assesses reading improvements in these children over a two-year period of intervention, while also measuring the structural and functional brain changes that accompany them. Church-Lang’s lab handles the functional brain data, analyzing fMRI recordings of which areas of the brain are active (and how active they are) when children with reading difficulties perform tasks, how those activity patterns change over time and how they compare with those of children who master the written word more easily.
“What this can tell us above and beyond behavioral measures is sometimes brain changes are out of pace with behavioral changes,” she says. “So it could be that brain change precedes behavioral change, and so you don’t actually see any behavioral difference yet, but you’re starting to see brain-related changes, and this could be a way to measure whether an intervention is working. It can help us understand potentially the mechanisms that are supporting the learning-related change. And that could help us target interventions.”
In the study, Church-Lang’s lab is also looking for markers of which children will benefit from a particular intervention by studying brain data from before and after said intervention. The idea is that certain differences will already exist pre-intervention and can predict how effective that intervention will be. Because no one fix is going to address all students’ needs, it’s not so much a question of whether an intervention works; it’s for whom it will work and for whom it won’t. The ESL learners are an especially interesting group to study, according to Church-Lang, because many of the monolingual interventions used in schools may not be ideal for them.
Another reason to focus on this group is that children’s reading difficulties often go unnoticed at younger ages. During the latter half of grade school, Church-Lang says, education shifts away from reading instruction.
“The common saying is it switches from learning to read to reading to learn,” Church-Lang says.
It is at this point that students who don’t have a good handle on reading really start to struggle, but also when helping them becomes more challenging. Children have far greater neuroplasticity than adults, meaning their brains can change more rapidly in response to new information, which is why it’s easier to learn a musical instrument if you start young. As the brain matures, gaining better control over impulses, emotions and attention, that plasticity declines. It’s the trade-off of adulthood – you can sit through a two-hour movie without throwing a tantrum (though two and a half hours might be pushing it), but you can’t learn a new skill as quickly.
For this reason, Church-Lang says, “Later interventions and knowing how to do those best is even more important because a lot of kids don’t get really identified or are falling through the cracks, and so we need to have solutions for these older kids even if it’s not as easy.”
The study in bilingual struggling readers is also exciting to Church-Lang because it focuses on a population often overlooked by neuroscience.
“We’re always really grateful for families that volunteer,” she says, acknowledging the difficulties both for parents driving in from great distances and for children enduring the more tedious aspects of data collection.
“We’re trying to communicate that science can be something that everyone participates in. Because for a long time, neuroimaging was focused on kind of easy samples, which would be professors’ kids.”
Church-Lang says studying a more diverse sample is helping us understand the variability in the brain and the variety of ways that brains can function.
“Fear is an adaptive emotion,” says associate professor of psychology Marie Monfils. “We, humans and other animals, are predisposed to be fearful of things/beings that stand to cause us harm.”
Under the right circumstances, humans can develop phobias of comically nonthreatening stuff – see koumpounophobia (fear of buttons) and pogonophobia (fear of beards) – but it’s much easier to acquire a fear of something that could have posed a danger to our species at some point in our evolution (most notably, what Monfils describes as “scary-looking critters”). Such fears helped our ancestors avoid getting eaten or poisoned by other members of the animal kingdom, but they’re more hindrance than help for many modern city dwellers.
Monfils’ lab studies both how we develop fears and how we might rid ourselves of them. One such subject is social acquisition of fear. An individual’s fears don’t always arise from traumatic experiences. A person might be afraid of airplanes because he or she endured a terrifying flight, but such a phobia could also be triggered by traveling with a parent who displays anxiety about flying, or even by reading about plane crashes on social media. Monfils’ work suggests that we’re most likely to learn fears in this way from close friends and family members. Among her current projects is a study of social acquisition of fear in rats that aims to understand whether and how this differs from social learning in general.
“We’ve identified some brain regions that appear to be specifically engaged during this form of learning, and not direct fear acquisition.” Monfils elaborates. “Now, I want to know: Are these regions specific to social fear acquisition or social information more broadly speaking?”
To answer that question, her lab uses pairs of rats that can learn from each other. Some rats learn positive things; others learn aversions. Then, working with associate professor of psychology Joanne Lee, the lab uses a cell-labeling technique to show whether similar groups of neurons are involved in both types of learning or whether learning fear is its own special thing.
Monfils’ work with fear in animals may, in turn, help humans with their fear of animals. A collaboration with psychology professor Mike Telch is translating some of her findings into an intervention designed to improve outcomes in people spooked by snakes and spiders. This new approach combines aspects of therapies called “extinction” and “reconsolidation blockade.”
Extinction is similar to the exposure therapy approach that occasionally crops up in TV sitcom plots. The subject is exposed to the thing he fears in increasing doses, with no negative outcomes until he is finally ready to fly on an airplane or go to the top of a tall building again (at which point, hilarity ensues in the TV version). The problem with so-called extinction is that it doesn’t actually eliminate the fear memory (i.e., snakes are scary). It merely creates another memory alongside it (i.e., that time I looked at a snake photo and nothing bad happened). These two memories compete, with fear often winning and snakes becoming scary again.
Reconsolidation blockade is based on the idea that retrieved memories are malleable for several hours. This is because each time you pull up that snakes-are-scary memory, it then needs to be reconsolidated and returned to long-term storage in your brain. Certain drugs can prevent that reconsolidation by blocking the protein synthesis that enables it. That would be great if those drugs were deemed safe for use in humans, but they are not.
Monfils’ Retrieval+Extinction technique aims to retrieve a fear memory, allow enough time for it to reach that malleable state, and then introduce the extinction session.
“The idea is that rather than creating a second memory trace, extinction training would be incorporated into the initial fear memory as it reconsolidated, de facto updating it as a safe memory,” says Monfils. “We have found that this approach is more effective, on average, than standard extinction in persistently attenuating fear memories.”
Like the other scientists I spoke to, Monfils enthusiastically expounds on the wonders of the brain. And she may be the person who finally persuades me to wear a bike helmet.
“Virtually all that we do — speak, read, run, interact, love, learn — depends on our brain,” Monfils says. “While the brain is capable of significant plasticity, a brain injury can be an immense challenge from which to recover. Protect your brain!”