Neuropsychological assessment more efficient than MRI for tracking disease progression in memory clinic patients

Investigators at the University of Amsterdam, The Netherlands, have shown that progression of disease in memory clinic patients can be tracked efficiently with 45 minutes of neuropsychological testing. MRI measures of brain atrophy proved less reliable at picking up changes in the same patients. This finding has important implications for the design of clinical trials of new anti-Alzheimer drugs: if neuropsychological assessment is used as the outcome measure or “gold standard,” fewer patients would be needed to conduct such trials, or the trials could be of shorter duration.

The US Food and Drug Administration and its counterparts in other countries, such as the European Medicines Agency, require that pharmaceutical companies test and prove the effectiveness of new drugs through experimental studies. In the case of Alzheimer’s disease, this means amelioration of cognitive and behavioral symptoms, or at least a slowing of the rate of cognitive and behavioral decline. Until now, the outcome measures in this type of research have been cognitive and behavioral rating scales, such as the Alzheimer Disease Assessment Scale (ADAS). If the effect of a new drug cannot be demonstrated with such a scale, the drug will not be approved. The problem with scales like the ADAS is that they are quite crude and cannot pick up subtle changes, especially in early stages of the disease. As an alternative, MRI measures of brain atrophy have been proposed as the outcome in clinical trials because of their allegedly better ability to detect subtle changes, which would mean that fewer patients are needed in clinical trials of new drugs to show a treatment effect.

The Dutch investigators tested this claim at the memory clinic of the Academic Medical Centre, University of Amsterdam, by comparing neuropsychological assessment and MRI measures of brain atrophy in 62 patients with no or early cognitive impairment, but no dementia. At baseline and after two years, neurologists examined the study participants and judged whether or not their cognition was normal. After two years of follow-up, 28 patients were considered normal, and 34 had mild cognitive impairment or had progressed to dementia, mostly Alzheimer’s disease. At baseline and at follow-up, all patients had a state-of-the-art MRI scan, and memory and other cognitive functions were assessed with five standard neuropsychological tests.

In the group that the neurologists considered normal at follow-up, cognitive performance was indeed normal at baseline and remained so after two years. In the group considered impaired, however, cognition was already abnormal at baseline and declined considerably over the next two years. The MRI measures concerned the volumes of the left and right hippocampus, which are extremely important for memory functioning and are the first structures to degenerate during the Alzheimer disease process.
The volume of the hippocampus decreased by less than 1% in the normal group over the follow-up interval, and by more than 3% in the impaired group. The pattern of findings was similar for both techniques, but MRI showed less pronounced differences between the two groups at baseline than the cognitive tests did and, more importantly, less pronounced differences in the rate of change. Using the rates of change collected in this study, one can calculate the number of patients that would be needed for a hypothetical clinical trial of a new drug; a generic version of this calculation is sketched below. The investigators concluded that only half as many patients would be needed if neuropsychological assessment were used as the gold standard rather than MRI measures of brain atrophy. However, Dr. Edo Richard, one of the neurologists conducting the study, says, “Whichever outcome is selected, evaluation of functioning as it can be noticed by patients will always be needed to confirm the clinical relevance of any treatment effect.”

Journal Reference:
1. Ben Schmand, Anne Rienstra, Hyke Tamminga, Edo Richard, Willem A. van Gool, Matthan W.A. Caan, Charles B. Majoie. Responsiveness of Magnetic Resonance Imaging and Neuropsychological Assessment in Memory Clinic Patients. Journal of Alzheimer’s Disease, January 2014. DOI: 10.3233/JAD-131484
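As a rough illustration of the sample-size argument, the sketch below runs a generic two-arm power calculation on a change-score outcome: the more responsive the outcome (the larger its mean change relative to its variability), the fewer patients are needed to detect the same slowing of decline. The formula, the assumed 50% treatment effect, and the numbers are textbook placeholders for illustration only; they are not taken from the Schmand et al. data.

```python
from math import ceil

def patients_per_arm(change_mean, change_sd, effect_fraction=0.5,
                     z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-arm trial whose outcome is
    the change score over follow-up (two-sided alpha = 0.05, power = 0.80).
    The treatment is assumed to reduce the mean decline by `effect_fraction`."""
    delta = effect_fraction * change_mean          # expected treatment effect
    return ceil(2 * ((z_alpha + z_beta) * change_sd / delta) ** 2)

# Hypothetical illustration (numbers are NOT from the study): an outcome with
# a larger standardized change needs fewer patients to show the same slowing.
print(patients_per_arm(change_mean=1.0, change_sd=1.0))   # responsive outcome
print(patients_per_arm(change_mean=1.0, change_sd=2.0))   # noisier outcome
```

With a noisier outcome the required sample grows, which is the direction of the investigators' comparison between MRI and cognitive testing.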

Chronic pain research delves into brain: New insight into how brain responds to pain

Source: University of Adelaide

Summary: New insights into how the human brain responds to chronic pain could eventually lead to improved treatments for patients, researchers say.

Neuroplasticity is the term used to describe the brain’s ability to change structurally and functionally with experience and use. “Neuroplasticity underlies our learning and memory, making it vital during early childhood development and important for continuous learning throughout life,” says Dr Ann-Maree Vallence, a Postdoctoral Fellow in the University of Adelaide’s Robinson Institute. “The mechanisms responsible for the development of chronic pain are poorly understood. While most research focuses on changes in the spinal cord, this research investigates the role of brain plasticity in the development of chronic pain.”

Chronic pain is common throughout the world. In Australia, approximately 20% of adults suffer moderate to severe chronic pain, and more than 100 million Americans are believed to be affected. Dr Vallence, who is based in the Robinson Institute’s Neuromotor Plasticity and Development Group, has conducted a study on patients with chronic tension-type headache (CTTH), a common chronic pain disorder. CTTH is characterized by a dull, constant feeling of pressure or tightening that usually affects both sides of the head, occurring for 15 days or more per month. Other symptoms include poor sleep, irritability, disturbed memory and concentration, and depression and anxiety.

“People living with chronic headache and other forms of chronic pain may experience reduced quality of life, as the pain often prevents them from working, amongst other things. It is therefore imperative that we understand the causes of chronic pain, not just attempt to treat the symptoms with medication,” Dr Vallence says.

In this study, participants undertook a motor training task consisting of moving their thumb as quickly as possible in a specific direction. The change in performance (or learning) on the task was tracked by recording how quickly subjects moved their thumb. A non-invasive brain stimulation technique was also used to obtain a measure of the participants’ neuroplasticity.

“Typically, when individuals undertake a motor training task such as this, their performance improves over time and this is linked with a neuroplastic change in the brain,” Dr Vallence says. “The people with no history of chronic pain got better at the task with training, and we observed an associated neuroplastic change in their brains. However, our chronic headache patients did not get better at the task and there were no associated changes in the brain, suggesting impaired neuroplasticity. These results provide a novel and important insight into the cause of chronic pain, and could eventually help in the development of a more targeted treatment for CTTH and other chronic pain conditions,” she says.
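To make the group comparison concrete, here is a minimal sketch of one way such motor-learning data could be analysed: fit a learning slope (change in movement speed across training blocks) for each participant and compare the slopes of headache patients and controls. The simulated data, group sizes, and the simple t-test are assumptions for illustration; the published study's measures and statistics may differ.

```python
import numpy as np
from scipy import stats

def learning_slope(speeds):
    """Slope of movement speed across training blocks for one participant.
    A positive slope indicates improvement (learning) on the task."""
    blocks = np.arange(len(speeds))
    slope, _intercept = np.polyfit(blocks, speeds, deg=1)
    return slope

# Hypothetical data: rows = participants, columns = training blocks.
rng = np.random.default_rng(0)
controls = rng.normal(1.0, 0.1, (15, 10)) + 0.03 * np.arange(10)   # improve
patients = rng.normal(1.0, 0.1, (15, 10))                          # stay flat

control_slopes = [learning_slope(p) for p in controls]
patient_slopes = [learning_slope(p) for p in patients]

# Independent-samples t-test on the learning slopes of the two groups.
t, p = stats.ttest_ind(control_slopes, patient_slopes)
print(f"t = {t:.2f}, p = {p:.4f}")
```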

Play it again, Sam: How the brain recognizes familiar music

Source: McGill University

Summary: Research reveals that the brain’s motor network helps people remember and recognize music that they have performed in the past better than music they have only heard. The study sheds new light on how humans perceive and produce sounds, and may pave the way for investigations into whether motor learning could improve or protect memory against cognitive impairment in aging populations.

Research from McGill University reveals that the brain’s motor network helps people remember and recognize music that they have performed in the past better than music they have only heard. A recent study by Prof. Caroline Palmer of the Department of Psychology sheds new light on how humans perceive and produce sounds, and may pave the way for investigations into whether motor learning could improve or protect memory in aging populations. The research is published in the journal Cerebral Cortex.

“The memory benefit that comes from performing a melody rather than just listening to it, or saying a word out loud rather than just hearing or reading it, is known as the ‘production effect’ on memory,” says Prof. Palmer, a Canada Research Chair in Cognitive Neuroscience of Performance. “Scientists have debated whether the production effect is due to motor memories, such as knowing the feel of a particular sequence of finger movements on piano keys, or simply due to strengthened auditory memories, such as knowing how the melody tones should sound. Our paper provides new evidence that motor memories play a role in improving listeners’ recognition of tones they have previously performed.”

For the study, researchers recruited twenty skilled pianists from Lyon, France. The group was asked to learn simple melodies by either hearing them several times or performing them several times on a piano. The pianists then heard all of the melodies they had learned, some of which contained wrong notes, while their brain electric signals were measured using electroencephalography (EEG).

“We found that pianists were better at recognizing pitch changes in melodies they had performed earlier,” said the study’s first author, Brian Mathias, a McGill PhD student who conducted the work at the Lyon Neuroscience Research Centre in France with collaborators Drs. Barbara Tillmann and Fabien Perrin. The EEG measurements revealed larger changes in brain waves, and increased motor activity, for previously performed melodies than for heard-only melodies about 200 milliseconds after the wrong notes.
This reveals that the brain quickly compares incoming auditory information with motor information stored in memory, allowing us to recognize whether a sound is familiar. “This paper helps us understand ‘experiential learning’, or ‘learning by doing’, and offers pedagogical and clinical implications,” said Mathias. “The role of the motor system in recognizing music, and perhaps also speech, could inform education theory by providing strategies for memory enhancement for students and teachers.” This study was conducted within the framework of the European Erasmus Mundus Auditory Cognitive Neuroscience exchange program, in which North American researchers complete a research project in collaboration with a European laboratory for 6-12 months.
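For orientation, here is a bare-bones sketch of the kind of EEG comparison described: cut epochs around wrong-note onsets and compare the average amplitude in a window around 200 ms between previously performed and previously heard melodies. The sampling rate, analysis window, and simulated data are hypothetical; this is not the authors' processing pipeline.

```python
import numpy as np

FS = 500  # sampling rate in Hz (hypothetical)

def epochs(eeg, onsets, pre=0.1, post=0.5):
    """Cut fixed-length windows around each wrong-note onset (in samples)."""
    pre_s, post_s = int(pre * FS), int(post * FS)
    return np.stack([eeg[o - pre_s:o + post_s] for o in onsets])

def mean_amp_200ms(erp, pre=0.1, window=(0.15, 0.25)):
    """Average ERP amplitude in a window around 200 ms after the wrong note."""
    start = int((pre + window[0]) * FS)
    stop = int((pre + window[1]) * FS)
    return erp[start:stop].mean()

# Hypothetical single-channel EEG and event onsets for the two conditions.
rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 1.0, FS * 600)              # 10 minutes of data
performed_onsets = rng.integers(FS, FS * 590, 40)
heard_onsets = rng.integers(FS, FS * 590, 40)

erp_performed = epochs(eeg, performed_onsets).mean(axis=0)
erp_heard = epochs(eeg, heard_onsets).mean(axis=0)

print("performed:", mean_amp_200ms(erp_performed))
print("heard:    ", mean_amp_200ms(erp_heard))
```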

The Brain Processes Complex Stimuli More Cumulatively Than We Thought

The finding represents a new view of how the brain creates internal representations of the visual world. “We are excited to see if this novel view will dominate the wider consensus,” said senior author Dr. Miyashita, who is also Professor of Physiology at the University of Tokyo’s School of Medicine, “and also about the potential impact of our new computational principle on a wide range of views on human cognitive abilities.”

The brain recalls the patterns and objects we observe by developing distinct neuronal representations that go along with them (this is the same way it recalls memories). Scientists have long hypothesized that these neuronal representations emerge in a hierarchical process limited to the same cortical region in which the representations are first processed. Because the brain perceives and recognizes the external world through these internal images, any new information about the process by which this takes place has the power to inform our understanding of related functions, including knowledge acquisition and memory.

However, studies attempting to uncover the functional hierarchy involved in the cortical processing of visual stimuli have tried to characterize this hierarchy by analyzing the activity of single nerve cells, which are not necessarily correlated with nearby neurons, leaving those analyses incomplete. In a new study appearing in the 12 July issue of the journal Science, lead author Toshiyuki Hirabayashi and colleagues focus not on single neurons but on the relationship between neuron pairs, testing the possibility that the representation of an object in a single brain region emerges in a hierarchically lower brain area.

“I became interested in this work,” said Dr. Hirabayashi, “because I was impressed by the elaborate neuronal circuitry in the early visual system, which is well studied, and I wanted to explore the circuitry underlying higher-order visual processing, which is not yet fully understood.”

Hirabayashi and colleagues analyzed nerve cell pairs in cortical areas TE and 36, the latter of which is hierarchically higher, in two adult macaques. After the animals viewed six sets of paired stimuli for several months to learn to associate related objects (a process that can give rise to pair-coding neurons in the brain), the researchers recorded neuron responses in areas TE and 36 of both animals as they again performed this task.

The neurons exhibited pair association, but not where the researchers would have expected. “The most surprising result,” said senior author Dr. Yasushi Miyashita, “was that the neuronal circuit that generated pair-association was found only in area TE, not in area 36.” Indeed, based on previous studies, which indicated that the number of pair-coding neurons in area TE is much smaller, the researchers would have expected the opposite.

During the study, Miyashita and other team members observed that in area TE of the macaque cortex, unit 1 neurons (or source neurons) provided input to unit 2 neurons (or target neurons), which, unlike unit 1 neurons, responded to both members of a stimulus pair. “The representations generated in area TE did not reflect a mere random fluctuation of response patterns,” explained Dr. Miyashita, “but rather, they emerged as a result of circuit processing inherent to that area of the brain.” In area 36, meanwhile, members of neuron pairs behaved differently: on average, unit 1 as well as unit 2 neurons responded to both members of a stimulus pair.
Neurons in area 36 received input from area TE, but only from its unit 2 neurons. Taken together, these findings led the authors to hypothesize a hierarchical relationship between areas TE and 36, in which paired associations first established in the former area are propagated to the latter. Here, area 36 represents the next level of a so-called feed-forward hierarchy.

The work by Hirabayashi and colleagues suggests that the detailed representations of objects commonly observed in the brain are attained not by a buildup of representations within a single area, but by the emergence of these representations in a hierarchically prior area and their subsequent transfer to the brain region that follows, where they become sufficiently prevalent for the brain to register. The work also reveals that the brain activity involved in recreating visual stimuli emerges in a hierarchically lower brain area than previously thought.

Moving forward, the Japanese research team plans to expand upon this research, continuing to contribute to studies worldwide that aim to give scientists the best possible tools for obtaining a dynamic picture of the brain. As a next step, the team hopes to further elucidate interactions between the various cortical microcircuits that operate in memory encoding. Dr. Miyashita has conjectured that these microcircuits are manipulated by a global brain network, and using the results of this latest study, he and his colleagues are poised to evaluate this assumption further. “It will also be important to weave the neuronal circuit mechanisms into a unified framework,” said Dr. Hirabayashi, “and to examine the effects of learning on these circuit organizations.”

Equipped with their new view of cortical processing, the team also hopes to trace the causal chain of memory retrieval across different areas of the cortex. “I am excited by the recent development of genetic tools that will allow us to do this,” said Dr. Miyashita. A better understanding of how object representations pass from one area of the brain to the next will shed even greater light on elusive aspects of this hierarchical organ.
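One common way to quantify whether a neuron is "pair-coding" is to correlate its responses to the two members of each learned stimulus pair, as in the hedged sketch below. The firing rates are invented, and the index shown is a generic version of such measures, not necessarily the exact analysis used by Hirabayashi and colleagues.

```python
import numpy as np

def pair_coding_index(responses_a, responses_b):
    """Correlation, across learned stimulus pairs, between a neuron's mean
    response to one member of each pair (A) and to the other member (B).
    A high value means the neuron responds similarly to both members of a
    pair, i.e. it carries the learned association."""
    return np.corrcoef(responses_a, responses_b)[0, 1]

# Hypothetical firing rates (spikes/s) for one neuron over six stimulus pairs.
rng = np.random.default_rng(2)
rates_a = rng.uniform(5, 30, 6)                 # responses to pair members A1..A6
rates_b_coding = rates_a + rng.normal(0, 2, 6)  # tracks A: a pair-coding neuron
rates_b_random = rng.uniform(5, 30, 6)          # unrelated: not pair-coding

print(pair_coding_index(rates_a, rates_b_coding))   # close to 1
print(pair_coding_index(rates_a, rates_b_random))   # near 0
```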

Brain Frontal Lobes Not Sole Center of Human Intelligence, Comparative Research Suggests

May 13, 2013 — Human intelligence cannot be explained by the size of the brain’s frontal lobes, say researchers.  Research into the comparative size of the frontal lobes in humans and other species has determined that they are not — as previously thought — disproportionately enlarged relative to other areas of the brain, according to the most accurate and conclusive study of this area of the brain.

It concludes that the size of our frontal lobes cannot solely account for humans’ superior cognitive abilities.

The study by Durham and Reading universities suggests that supposedly more ‘primitive’ areas, such as the cerebellum, were equally important in the expansion of the human brain. These areas may therefore play unexpectedly important roles in human cognition and its disorders, such as autism and dyslexia, say the researchers.

The study is published in the Proceedings of the National Academy of Sciences (PNAS) today.

The frontal lobes are an area in the brain of mammals located at the front of each cerebral hemisphere, and are thought to be critical for advanced intelligence.

Lead author Professor Robert Barton from the Department of Anthropology at Durham University, said: “Probably the most widespread assumption about how the human brain evolved is that size increase was concentrated in the frontal lobes.
“It has been thought that frontal lobe expansion was particularly crucial to the development of modern human behaviour, thought and language, and that it is our bulging frontal lobes that truly make us human. We show that this is untrue: human frontal lobes are exactly the size expected for a non-human brain scaled up to human size.

“This means that areas traditionally considered to be more primitive were just as important during our evolution. These other areas should now get more attention. In fact there is already some evidence that damage to the cerebellum, for example, is a factor in disorders such as autism and dyslexia.”
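The "scaled up to human size" comparison is, in essence, an allometric scaling argument: regress (log) frontal-lobe volume on (log) total brain volume across non-human primates and ask whether the human value sits on that line. The toy sketch below illustrates only the logic; the volumes and scaling rule are invented and are not the study's dataset or method.

```python
import numpy as np

# Hypothetical log-log allometric check. Volumes (cm^3) are invented for
# illustration and are not data from the Durham/Reading study.
nonhuman_brain = np.array([75., 90., 120., 180., 275., 350., 400., 500.])
nonhuman_frontal = 0.35 * nonhuman_brain ** 1.1        # made-up scaling rule

slope, intercept = np.polyfit(np.log(nonhuman_brain),
                              np.log(nonhuman_frontal), 1)

human_brain = 1300.0
predicted_frontal = np.exp(intercept + slope * np.log(human_brain))
observed_frontal = predicted_frontal * 1.02            # pretend observation

# If the observed value sits close to the prediction, the frontal lobes are
# "exactly the size expected" for a primate brain scaled up to human size.
print(f"predicted {predicted_frontal:.0f} cm^3, observed {observed_frontal:.0f} cm^3")
```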

The scientists argue that many of our high-level abilities are carried out by more extensive brain networks linking many different areas of the brain. They suggest it may be the structure of these extended networks more than the size of any isolated brain region that is critical for cognitive functioning.

Previously, various studies have tried to establish whether humans’ frontal lobes are disproportionately enlarged compared with those of other primates such as apes and monkeys. They have produced a confused picture, with different methods and measurements leading to inconsistent findings.

Using Anticholinergics for as Few as 60 Days Causes Memory Problems in Older Adults

May 7, 2013 — Research from the Regenstrief Institute, the Indiana University Center for Aging Research and Wishard-Eskenazi Health on medications commonly taken by older adults has found that drugs with strong anticholinergic effects cause cognitive impairment when taken continuously for as few as 60 days. A similar impact can be seen with 90 days of continuous use when taking multiple drugs with weak anticholinergic effect.

The study of 3,690 older adults is among the first to explore how length of use of this group of drugs affects the brain. The study is available online in advance of publication in a print issue of Alzheimer’s & Dementia, the journal of the Alzheimer’s Association. The research was funded by a grant (R24MH080827) from the National Institute on Aging.

Anticholinergic drugs block acetylcholine, a nervous system neurotransmitter. Drugs with anticholinergic effects are sold over the counter and by prescription. Older adults commonly use over-the-counter drugs with anticholinergic effects as sleep aids and to relieve bladder leakage. Drugs with anticholinergic effects are frequently prescribed for many chronic diseases including hypertension, cardiovascular disease and chronic obstructive pulmonary disease.

A list of drugs noting their anticholinergic burden can be found on the Aging Brain Care website.

The Regenstrief Institute, IU Center for Aging Research and Wishard-Eskenazi Health researchers reported that continuously taking strong anticholinergics, like many sleeping pills or antihistamines, for only 60 days caused memory problems and other indicators of mild cognitive impairment. Taking multiple drugs with weaker anticholinergic effects, such as many common over-the-counter digestive aids, had a negative impact on cognition in 90 days.
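The exposure pattern the researchers describe (a strong anticholinergic taken continuously for about 60 days, or a combined burden from several weaker drugs for about 90 days) can be expressed as a simple screening rule. The sketch below uses illustrative burden scores and thresholds paraphrased from this article; it does not reproduce the actual Aging Brain Care list or the study's exposure definition.

```python
# Minimal sketch of flagging cumulative anticholinergic exposure, loosely
# modeled on anticholinergic-burden scales (scores here are illustrative,
# not taken from the Aging Brain Care list).
ACB_SCORE = {
    "diphenhydramine": 3,   # strong anticholinergic (e.g., OTC sleep aid)
    "oxybutynin": 3,        # strong (bladder leakage)
    "ranitidine": 1,        # weak (digestive aid)
    "metoprolol": 1,        # weak
}

def flag_exposure(daily_meds, days):
    """Return True if the regimen matches the risk pattern described in the
    article: a strong anticholinergic taken continuously for >= 60 days, or a
    combined weak burden of >= 2 taken continuously for >= 90 days."""
    scores = [ACB_SCORE.get(m, 0) for m in daily_meds]
    strong = any(s >= 2 for s in scores)
    weak_burden = sum(s for s in scores if s == 1)
    return (strong and days >= 60) or (weak_burden >= 2 and days >= 90)

print(flag_exposure(["diphenhydramine"], days=75))            # True
print(flag_exposure(["ranitidine", "metoprolol"], days=95))   # True
print(flag_exposure(["ranitidine"], days=120))                # False
```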

“We found that a high anticholinergic burden — either from one or multiple drugs — plus two to three months of continuous exposure to that high burden approximately doubled the risk of developing cognitive impairment,” said Noll Campbell, Pharm.D., study co-author and Regenstrief Institute investigator. “Millions of older adults are taking sleeping pills or prescription drugs year after year that may be impacting their organizational abilities and memory.”

Dr. Campbell is also an IU Center for Aging Research scientist, a research assistant professor in the Department of Pharmacy Practice, Purdue University College of Pharmacy, and a clinical pharmacy specialist in geriatrics with Wishard-Eskenazi Health Services.

“While the link between anticholinergics and cognitive impairment has been reported by our group and others, the cumulative burden of anticholinergics was rather unexpected, as was the lack of a clear association between anticholinergic burden and dementia,” said Regenstrief Institute investigator Malaz Boustani, M.D., MPH. Dr. Boustani, the senior author of the study, is also associate director of the IU Center for Aging Research and an associate professor of medicine at the IU School of Medicine. He sees patients at the Healthy Aging Brain Center at Wishard-Eskenazi Health.

“The fact that taking anticholinergics is linked with mild cognitive impairment, involving memory loss without functional disability, but not with Alzheimer’s disease and other dementing disorders, gives hope. Our research efforts will now focus on whether anticholinergic-induced cognitive impairment may be reversible,” Dr. Boustani said.

Turning Alzheimer’s Fuzzy Signals Into High Definition

May 7, 2013 — Scientists at the Virginia Tech Carilion Research Institute have discovered how the predominant class of Alzheimer’s pharmaceuticals might sharpen the brain’s performance.

One factor even more important than the size of a television screen is the quality of the signal it displays. Having a life-sized projection of Harry Potter dodging a Bludger in a Quidditch match is of little use if the details are lost to pixelation.

The importance of transmitting clear signals, however, is not relegated to the airwaves. The same creed applies to the electrical impulses navigating a human brain. Now, new research has shown that one of the few drugs approved for the treatment of Alzheimer’s disease helps patients by clearing up the signals coming in from the outside world.

The discovery was made by a team of researchers led by Rosalyn Moran, an assistant professor at the Virginia Tech Carilion Research Institute. Her study indicates that cholinesterase inhibitors — a class of drugs that stop the breakdown of the neurotransmitter acetylcholine — allow signals to enter the brain with more precision and less background noise.

“Increasing the levels of acetylcholine appears to turn your fuzzy, old analog TV signal into a shiny, new, high-definition one,” said Moran, who holds an appointment as an assistant professor in the Virginia Tech College of Engineering. “And the drug does this in the sensory cortices. These are the workhorses of the brain, the gatekeepers, not the more sophisticated processing regions — such as the prefrontal cortex — where one may have expected the drugs to have their most prominent effect.”

Alzheimer’s disease affects more than 35 million people worldwide — a number expected to double every 20 years, leading to more than 115 million cases by 2050. Of the five pharmaceuticals approved to treat the disease by the U.S. Food and Drug Administration, four are cholinesterase inhibitors. Although it is clear that the drugs increase the amount of acetylcholine in the brain, why this improves Alzheimer’s symptoms has been unknown. If scientists understood the mechanisms and pathways responsible for improvement, they might be able to tailor better drugs to combat the disease, which costs more than $200 billion annually in the United States alone.

In the new study, Moran recruited 13 healthy young adults and gave them doses of galantamine, one of the cholinesterase inhibitors commonly prescribed to Alzheimer’s patients. Two electroencephalogram (EEG) recordings were taken, one with the drug and one without, as the participants listened to a series of modulating tones while focusing on a simple concentration task.

The researchers were looking for differences in neural activity between the two drug states in response to surprising changes in the sound patterns that the participants were hearing.

The scientists compared the results with computer models built on a Bayesian theory of the brain known as the Free Energy Principle, a leading account of the basic rules of neuronal communication and of how complex networks are formed.

The theory hypothesizes that neurons seek to reduce uncertainty, which can be modeled and calculated as free energy. Connecting tens of thousands of neurons behaving in this manner produces the probability machine that we call a brain.
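A minimal way to see the "precision" idea at work is a toy predictive-coding update in which a belief is nudged by a precision-weighted prediction error; raising sensory precision (the role often attributed to acetylcholine in such models) makes the belief track surprising input more faithfully. This sketch is only an illustration of the principle, not the computer models used in the study, and its gain values are invented.

```python
# Toy predictive-coding update: a belief about a sensory cause is revised by
# a precision-weighted prediction error. This illustrates the free-energy idea
# the article invokes; the "low" vs "high" acetylcholine precisions are invented.
def track_input(inputs, sensory_precision, prior_precision=1.0):
    belief = 0.0
    for x in inputs:
        prediction_error = x - belief
        # Higher sensory precision gives the incoming signal more weight.
        gain = sensory_precision / (sensory_precision + prior_precision)
        belief += gain * prediction_error
    return belief

tones = [1.0, 1.0, 1.0, 2.0]        # a surprising change in the final tone
print(track_input(tones, sensory_precision=0.5))   # low precision: sluggish update
print(track_input(tones, sensory_precision=4.0))   # high precision: tracks the change
```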

Moran and her colleagues compiled 10 computer simulations based on the different effects that the drug could have on the brain. The model that best fit the results indicated that it was the early, low-level stages of the brain’s sensory processing that benefited from the drug, producing clearer, more precise signals.

“When people take these drugs you can imagine the brain bathed in them,” Moran said. “But what we found is that the drugs don’t have broad-stroke impacts on brain activity. Instead, they are working very specifically at the cortex’s entry points, gating the signals coming into the network in the first place.”

Distorted Thinking in Gambling Addiction: What Are the Cognitive and Neural Mechanisms?

Apr. 8, 2013 — Fascinating new studies into brain activity and behavioural responses have highlighted the overlap between pathological gambling and drug addiction. The research, which is presented at the British Neuroscience Association Festival of Neuroscience (BNA2013) has implications for both the treatment and prevention of problem gambling.

Dr Luke Clark, a senior lecturer at the University of Cambridge (UK), told the meeting that neurocognitive tests of impulsivity and compulsivity, and also positron emission tomography (PET) imaging of the brain have started to show how gambling becomes addictive in pathological gamblers — people whose gambling habit has spiralled out of control and become a problem.

“Around 70% of the British population will gamble occasionally, but for some of these people, it will become a problem,” he said. “Our work has been seeking to understand the changes in decision-making that happen in people with gambling problems. It represents the first large scale study of individuals seeking treatment for gambling problems in the UK, at a time when this disorder is being re-classified alongside drug addiction as the first ‘behavioural addiction’. Given the unique legislation around gambling from country to country, it is vital that we understand gambling at a national level. For example, 40% of the problem gamblers at the National Problem Gambling Clinic report that the game they have a problem with is roulette on Fixed Odds Betting Terminals; this kind of gambling machine is peculiar to the British gambling landscape.”

In a collaboration between the University of Cambridge and Dr Henrietta Bowden-Jones, director of the UK’s only specialist gambling clinic, based in the Central and North West London NHS Trust, Dr Clark and his colleagues compared the brains and behaviours of 86 male pathological gamblers with those of 45 healthy men without a gambling problem.

“We approach gambling within the framework of addiction, where we think that problematic gambling arises from a combination of individual risk factors, such as genetics, and features of the games themselves. To study individual factors, we have been testing gamblers at the National Problem Gambling Clinic on neurocognitive tests of impulsivity and compulsivity, and we have also measured their dopamine levels using PET imaging,” said Dr Clark.

The tests showed that problem gamblers had increased impulsivity, similar to people with alcohol and drug addictions, but there was less evidence of compulsivity. Levels of dopamine — a neurotransmitter involved in signalling between nerve cells and which is implicated in drug addiction — showed differences in the more impulsive gamblers.

“Previous PET research has shown that people with drug addiction have reduced dopamine receptors. We predicted the same effect in pathological gamblers, but we did not see any group differences between the pathological gamblers and healthy men. Nevertheless, the problem gamblers do show some individual differences in their dopamine function, related to their levels of impulsivity: more impulsive gamblers showed fewer dopamine receptors,” said Dr Clark. “These studies highlight the overlap between pathological gambling and drug addiction.

“To study the properties of the games themselves and how they relate to problem gambling, we have focussed on two psychological distortions that occur across many forms of gambling: ‘near-miss’ outcomes (where a loss looks similar or ‘close’ to a jackpot win) and the ‘gambler’s fallacy’ (for example, believing that a run of heads means that a tail is ‘due’, in a game of chance). In one important discovery, we were the first lab to show that gambling ‘near-misses’ recruit brain regions that overlap with those recruited in gambling ‘wins’. These responses may cause ‘near-misses’ to maintain gambling play despite their objective status as losses.”
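The gambler's fallacy part of this is easy to check numerically: in a fair sequence, a run of heads does not make tails any more likely. The quick simulation below is included purely as an illustration of why the belief is a distortion.

```python
import random

# Simulate a fair coin: after a run of three heads, the chance that the next
# flip is tails is still about 0.5, so a tail is never "due".
random.seed(0)
flips = [random.choice("HT") for _ in range(1_000_000)]

tails_after_run = total_runs = 0
for i in range(3, len(flips)):
    if flips[i - 3:i] == ["H", "H", "H"]:
        total_runs += 1
        tails_after_run += flips[i] == "T"

print(tails_after_run / total_runs)   # approximately 0.5
```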

Dr Clark said that these findings had implications for both prevention and treatment. “Gambling distortions like the ‘near-miss’ effect may be amenable to both psychological therapies for problem gambling, and also by drug treatments that may act on the underlying brain systems. By understanding the styles of thinking that characterise the problem gambler, we may also be able to improve education about gambling in teenagers and young adults, to reduce the number of people developing a gambling problem.”

The researchers also found a striking demonstration of the underlying brain regions that are involved in gambling when they studied the gambling behaviour of patients who had experienced brain injury due to a tumour or stroke.

“We have seen that two gambling distortions — the ‘gambler’s fallacy’ and the ‘near-miss’ effect — that are evident in the general population, and which appear to be increased in problem gamblers, are actually abolished in patients with damage to the insula region of the brain,” he said. “This suggests that in the healthy brain, the insula may be a critical area in generating these distorted expectancies during gambling play, and that interventions to reduce insula activity may have treatment potential.

“The insula is quite a mysterious part of the brain, tucked deep inside the lateral fissure. It is important in processing pain and, more broadly, in representing the state of the body in the brain, and it is striking that gambling is a very visceral, exciting activity. Our ongoing neuroimaging work will look at the relationship between responses in the insula and the body during our gambling tests.”

Future work will investigate the styles of thinking that are in evidence when the problem gamblers at the National Problem Gambling Clinic play the simplified games the researchers have developed. “This is the first study to directly look at whether these biases are more pronounced in problem gamblers. We are also starting to recruit the siblings of problem gamblers (those who do not have a gambling problem themselves) in order to look at underlying vulnerability factors,” concluded Dr Clark.

This research is funded by grants from the UK’s Medical Research Council, and involves further collaboration with researchers at Imperial College London and the University of Oxford.

Non-Invasive Mapping Helps to Localize Language Centers Before Brain Surgery

Apr. 8, 2013 — A new functional magnetic resonance imaging (fMRI) technique may provide neurosurgeons with a non-invasive tool to help in mapping critical areas of the brain before surgery, reports a study in the April issue of Neurosurgery, official journal of the Congress of Neurological Surgeons.

Evaluating brain fMRI responses to a “single, short auditory language task” can reliably localize critical language areas of the brain — in healthy people as well as patients requiring brain surgery for epilepsy or tumors, according to the new research by Melanie Genetti, PhD, and colleagues of Geneva University Hospitals, Switzerland.

Brief fMRI Task for Functional Brain Mapping
The researchers designed and evaluated a quick and simple fMRI task for use in functional brain mapping. Functional MRI can show brain activity in response to stimuli (in contrast to conventional brain MRI, which shows anatomy only). Before neurosurgery for severe epilepsy or brain tumors, functional brain mapping provides essential information on the location of critical brain areas governing speech and other functions.

The standard approach to brain mapping is direct electrocortical stimulation (ECS), in which electrodes placed on the brain surface are used to stimulate small patches of cortex while function is tested. However, this requires several hours of testing and may not be applicable in all patients. Previous studies have compared fMRI techniques with ECS, but mainly for determining the side of language function (lateralization) rather than its precise location (localization).

The new fMRI task was developed and evaluated in 28 healthy volunteers and in 35 patients undergoing surgery for brain tumors or epilepsy. The test used a brief (eight minutes) auditory language stimulus in which the patients heard a series of sense and nonsense sentences.

Functional MRI scans were obtained to localize the brain areas activated by the language task — activated areas would “light up,” reflecting increased oxygenation. A subgroup of patients also underwent ECS, the results of which were compared to fMRI.
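For readers unfamiliar with how such activation maps are produced, the sketch below shows the generic logic: build a task regressor for the sentence blocks, convolve it with a haemodynamic response shape, regress each voxel's signal on it, and treat voxels with large t-values as "activated." The block design, response kernel, and data are invented and do not reproduce the Geneva protocol.

```python
import numpy as np

# Generic single-voxel GLM illustration (not the study's analysis pipeline).
TR, n_scans = 2.0, 240
task = np.zeros(n_scans)
for start in range(10, n_scans, 40):
    task[start:start + 15] = 1.0                      # sentence blocks "on"

hrf = np.exp(-np.arange(0, 20, TR) / 4.0)             # crude haemodynamic kernel
regressor = np.convolve(task, hrf)[:n_scans]
X = np.column_stack([regressor, np.ones(n_scans)])    # design matrix

rng = np.random.default_rng(3)
voxel = 0.8 * regressor + rng.normal(0.0, 1.0, n_scans)   # a responsive voxel

beta, res, *_ = np.linalg.lstsq(X, voxel, rcond=None)
sigma2 = res[0] / (n_scans - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
print("t =", beta[0] / se)                             # large t -> voxel "lights up"
```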

Non-invasive Test Accurately Localizes Critical Brain Areas

Based on responses to the language stimulus, fMRI showed activation of the anterior and posterior (front and rear) language areas of the brain in about 90 percent of subjects — neurosurgery patients as well as healthy volunteers. Functional MRI activation was weaker and the language centers more spread-out in the patient group. These differences may have reflected brain adaptations to slow-growing tumors or longstanding epilepsy.

Five of the epilepsy patients also underwent ECS using brain electrodes, the results of which agreed well with the fMRI findings. Two patients had temporary problems with language function after surgery. In both cases, the deficits were related to surgery or complications (bleeding) in the language area identified by fMRI.

Functional brain mapping is important for planning for complex neurosurgery procedures. It provides a guide for the neurosurgeon to navigate safely to the tumor or other diseased area, while avoiding damage to critical areas of the brain. An accurate, non-invasive approach to brain mapping would provide a valuable alternative to the time-consuming ECS procedure.

“The proposed fast fMRI language protocol reliably localized the most relevant language areas in individual subjects,” Dr. Genetti and colleagues conclude. In its current state, the new test probably isn’t suitable as the only approach to planning surgery — too many areas “light up” with fMRI, which may limit the surgeon’s ability to perform more extensive surgery with necessary confidence. The researchers add, “Rather than a substitute, our current fMRI protocol can be considered as a valuable complementary tool that can reliably guide ECS in the surgical planning of epileptogenic foci and of brain tumors.”

Spatial Memory: Mapping Blank Spots in the Cheeseboard Maze

Mar. 21, 2013 — IST Austria Professor Jozsef Csicsvari, together with collaborators, has uncovered processes by which the formation of spatial memory is manifested in a map representation.

During learning, novel information is transformed into memory through the processing and encoding of information in neural circuits. In a recent publication in Neuron, IST Austria Professor Jozsef Csicsvari, together with his collaborator David Dupret at the University of Oxford, and Joseph O’Neill, postdoc in Csicsvari’s group, uncovered a novel role for inhibitory interneurons in the rat hippocampus during the formation of spatial memory.

During spatial learning, space is represented in the hippocampus through plastic changes in the connections between neurons. Jozsef Csicsvari and his collaborators investigate spatial learning in rats using the cheeseboard maze apparatus. This apparatus contains many holes, some of which are selected to hide food in order to test spatial memory. During learning trials, animals learn where the rewards are located, and after a period of sleep, the researchers test whether the animal can recall these reward locations. In previous work, they and others have shown that memory of space is encoded in the hippocampus through changes in the firing of excitatory pyramidal cells, the so-called “place cells.”

A place cell fires when the animal arrives at a particular location. Normally, place cells always fire at the same place in an environment; however, during spatial learning the place of their firing can change to encode where the reward is found, forming memory maps.

In their new publication, the researchers investigated the timescale of map formation, showing that during spatial learning, pyramidal neuron maps representing previous and new reward locations “flicker,” with both firing patterns occurring. At first, old maps and new maps fluctuate, as the animal is unsure whether the location change is transient or long-lasting. At a later stage, the new map and so the relevant new information dominates.
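One standard way to detect such "flickering" is to correlate the momentary population firing-rate vector with templates of the old and new maps and ask which template it matches better, as in the hedged sketch below. The cell counts, rates, and templates are simulated for illustration and are not the authors' analysis.

```python
import numpy as np

# Sketch of detecting map "flickering" via population-vector correlation.
rng = np.random.default_rng(4)
n_cells = 50
old_map = rng.uniform(0, 10, n_cells)    # mean rates under the old reward layout
new_map = rng.uniform(0, 10, n_cells)    # mean rates under the new reward layout

def expressed_map(population_vector):
    """Label a moment by whichever map template its rates correlate with more."""
    r_old = np.corrcoef(population_vector, old_map)[0, 1]
    r_new = np.corrcoef(population_vector, new_map)[0, 1]
    return "new" if r_new > r_old else "old"

# Simulated moments during learning: some resemble the old map, some the new.
moments = ([old_map + rng.normal(0, 2, n_cells) for _ in range(5)]
           + [new_map + rng.normal(0, 2, n_cells) for _ in range(5)])
print([expressed_map(m) for m in moments])   # a mixture = "flickering"
```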

The scientists also investigated the contribution of inhibitory interneuron circuits to learning. They show that these interneurons, which are extensively interconnected with pyramidal cells, change their firing rates during map formation and flickering: some interneurons fire more often when the new pyramidal map fires, while others fire less often with the new map. These changes in interneuron firing were only observed during learning, not during sleep or recall. The scientists also show that the changes in firing rate are due to map-specific changes in the connections between pyramidal cells and interneurons. When a pyramidal cell is part of a new map, the strengthening of a connection with an interneuron causes an increase in the firing of that interneuron. Conversely, when a pyramidal cell is not part of a new map, the weakening of the connection with the interneuron causes a decrease in the interneuron’s firing rate. Both the increase and the decrease in firing rate can be beneficial for learning, allowing the regulation of plasticity between pyramidal cells and controlling the timing of their firing.

The new research therefore shows that not only excitatory neurons modify their behaviour and exhibit plastic connection changes during learning, but also the inhibitory interneuron circuits. The researchers suggest that inhibitory interneurons could be involved in map selection — helping one map dominate and take over during learning, so that the relevant information is encoded.
