The human brain has often been referred to as the Final Frontier of science, as unlocking its secrets would likely lead to the greatest transformation mankind has ever known. We are what we think, and so any deeper understanding of the nature of thought and brain function will lead to better thinking at all levels: individual, societal and perhaps even cellular.
Within every skull sits a “three-pound enigma” that has more storage capacity, processing power and connections than all the computers on the planet put together. Composed of 50-200 billion neurons connected by between 100 trillion and 10 quadrillion synaptic junctions, the brain operates at a scale that is difficult to comprehend. Our best tools can currently record the activity of only a few neurons at a time, so we have only just begun to scratch the surface of understanding the overall system.
The brain is phenomenally efficient. If we could understand and replicate aspects of its architecture and ruleset, we could develop computers several million times more powerful than anything we have today. Researchers at IBM and Stanford University have started down this road, reverse-engineering the brain in software. They modeled a cat’s cerebral cortex within the Blue Gene/P supercomputer (the world’s fourth most powerful supercomputer at the time), and although Blue Gene/P had 144 terabytes of RAM at its disposal, its simulated cat brain ran about 100 times slower than a real one. In fact, using just 30 watts of electricity (enough to power a dim light bulb), the human brain outperforms the Blue Gene/P supercomputer by a factor of a million (Hsu, 2009). A processor designed to be as smart as the human brain using current design methods would require at least 10 megawatts to operate, roughly the output of a small hydroelectric plant (Howard, 2012b; Kety, 1957; Rolfe & Brown, 1997; Sokoloff, 1960).
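As a rough back-of-envelope illustration of the scale implied by these figures (the wattages come from the paragraph above; the calculation itself is only an illustration), the power gap alone works out to several hundred thousand-fold:

$$\frac{P_{\text{brain-equivalent processor}}}{P_{\text{brain}}} \approx \frac{10\ \text{MW}}{30\ \text{W}} = \frac{1\times 10^{7}\ \text{W}}{3\times 10^{1}\ \text{W}} \approx 3.3\times 10^{5}$$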
In 2013, the European Commission committed over a billion dollars to fund 10 years of brain research under an initiative called the Human Brain Project (HBP). The HBP brings together over 80 partner institutions across Europe to focus research on three main areas: 1) neuroscience, 2) medicine, and 3) future computing technologies. The central objective of the HBP is to develop a realistic brain simulation with a twofold mission: 1) to better understand neurological operations and disorders; and 2) to improve computer technologies through neurologically inspired design.
Not to be outdone by Europe, President Obama announced a US counterpart to the HBP, the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative, also in 2013. The BRAIN project is focused on understanding the human mind in order to develop new treatments, preventions and cures for neurological disorders such as Alzheimer’s and autism. While this initiative hasn’t received nearly as much press as the HBP, it is far better funded – President Obama committed a little over $3 billion, or around $300 million per year, to the effort – more than twice the HBP budget.
Although the HBP and BRAIN initiatives have slightly different goals, both of these large-scale programs, along with countless smaller efforts around the world, are inspiring researchers to work day and night to scan, model and understand the human brain. A truly global effort is now underway, and countless benefits for computing, society and healthcare await on the other side. In terms of scale, President Kennedy’s historic challenge of landing on the moon looks rather tame by comparison.
Understanding how this densely interwoven three-pound mass functions, let alone how it generates thought and consciousness, is truly a monumental task. Many people have called the race to understand the human brain the Grand Research Challenge of the 21st Century. I agree with them.
Neurological breakthroughs expected to result from these projects will directly benefit the lives of millions. From traumatic brain injury to autism to dementia, neurological disorders are estimated to affect over one billion people globally. The US National Institute of Mental Health (NIMH) reports that 1 in 4 American adults (25%) suffer from a diagnosable mental disorder. The Alzheimer’s Association estimates that 5.4 million people in the US suffer from Alzheimer’s Disease (AD). An estimated 10 million people worldwide suffer from Parkinson’s Disease (PD), with one million cases in the US alone and at least 60,000 new cases diagnosed each year. The true number of neurological and psychological disorders is difficult to enumerate, however, because these conditions typically go undiagnosed, are misdiagnosed, or are diagnosed so late that the disease has already reached an advanced stage. As a result, the actual number of cases is most likely much higher than these figures suggest. Since many neurodegenerative diseases are age-related, these numbers are expected to skyrocket as the baby boomer generation settles into retirement. By 2050, the cost of treating Alzheimer’s patients in the US alone is projected to reach $1,100,000,000,000. Per year. Brain disorders are a very big deal.
Although still in its infancy, neuroscience has become a fast-growing, highly interdisciplinary field, incorporating a wide range of tools and techniques from other specializations to scan and study the functional, structural, molecular, cellular and cognitive aspects of the nervous system. Many layers of activity are involved in any motor or cognitive process, so mapping brain function and dysfunction essentially means mapping all neuronal networks, which will require multilevel data spanning both behavioural and cognitive output (Leergaard et al., 2012; Turner et al., 2013). New systemic and interdependent biomarkers should therefore be most readily found through synchronous, multi-modal data capture and correlation.
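To make the notion of synchronous, multi-modal capture and correlation concrete, the sketch below shows one simple way two recordings sampled at different rates might be aligned onto a shared timebase and then correlated. It is purely illustrative: the signal names, sampling rates and interpolation-based alignment are assumptions for the example, not a description of any particular lab’s pipeline.

```python
import numpy as np

def resample_to(timestamps, values, common_t):
    """Linearly interpolate an irregularly sampled signal onto a shared timebase."""
    return np.interp(common_t, timestamps, values)

def correlate_modalities(t_a, sig_a, t_b, sig_b, rate_hz=100.0):
    """Align two modalities (e.g. a neural trace and a behavioural measure)
    onto one clock and return their Pearson correlation."""
    t0 = max(t_a[0], t_b[0])
    t1 = min(t_a[-1], t_b[-1])
    common_t = np.arange(t0, t1, 1.0 / rate_hz)   # shared timebase over the overlap
    a = resample_to(t_a, sig_a, common_t)
    b = resample_to(t_b, sig_b, common_t)
    return np.corrcoef(a, b)[0, 1]

# Toy example: a synthetic "neural" signal and a noisy, slightly lagged "behavioural" readout.
rng = np.random.default_rng(0)
t_neural = np.linspace(0, 10, 2000)               # sampled at 200 Hz
t_motion = np.linspace(0, 10, 300)                # sampled at 30 Hz
neural = np.sin(2 * np.pi * 0.5 * t_neural)
motion = np.sin(2 * np.pi * 0.5 * (t_motion - 0.1)) + 0.3 * rng.standard_normal(300)

print(f"cross-modal correlation: {correlate_modalities(t_neural, neural, t_motion, motion):.2f}")
```

In practice the correlation step would be far richer (lagged, nonlinear, multivariate), but the prerequisite is the same: streams captured synchronously enough to be placed on one clock.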
At Oxford University, our Computational Neurology Lab (OxCNL) is taking on this Grand Research Challenge of the 21st Century and has partnered with hospitals, clinicians, psychiatrists, surgeons and several large data providers to build and test a wearable device that both captures and interprets multi-modal data. The team is actively using my Fundamental Code Unit (FCU) and Brain Code (BC) theories to map these different sources into a shared coordinate system, where they can be effectively analyzed for new patterns. Further clinical trials of the device are expected throughout the fall.
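The FCU and BC mappings themselves are beyond the scope of this sketch; the toy example below only illustrates what “a shared coordinate system” means in practice, by normalizing features from several capture streams onto comparable scales so they can be analyzed jointly. All stream names, feature choices and the z-scoring step are illustrative assumptions, not the FCU/BC method.

```python
import numpy as np

def to_shared_coordinates(modalities):
    """Project features from heterogeneous modalities into one normalized
    coordinate system by z-scoring each feature and concatenating them.
    A generic illustration only -- not the FCU/BC mapping itself."""
    columns = []
    for name, features in modalities.items():
        x = np.asarray(features, dtype=float)          # samples x features
        mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-9
        columns.append((x - mu) / sigma)               # unit-free, comparable scales
    return np.hstack(columns)                          # samples x all features

# Hypothetical per-sample features from three capture streams of a wearable.
shared = to_shared_coordinates({
    "eeg_bands": np.random.rand(50, 4),   # e.g. power in four frequency bands
    "speech":    np.random.rand(50, 2),   # e.g. pitch variance, speaking rate
    "gait":      np.random.rand(50, 3),   # e.g. stride time, sway, cadence
})
print(shared.shape)  # (50, 9) -- one shared coordinate system across modalities
```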
No system yet exists for objectively and reliably diagnosing neurological conditions, nor for quantitatively monitoring their progression across multiple modalities, so the device could be a game-changer for neurologists and clinicians everywhere. The Oxford CNL team has high hopes for the technology, as does the industry. Dr. Richard Wert, Senior Fellow at Intel, claims that “this research could perhaps be the missing link to understanding the human brain.”
Projects such as the HBP and BRAIN initiatives are bringing a great deal of new interest and fresh ideas to the common cause of mapping, modeling and understanding the human brain. Insights into brain function will yield entirely new methods of diagnosing and treating neurological conditions and neurodegenerative disease. These same insights will likely inspire radically new designs in computer hardware and software. With billions of dollars now invested in solving this Grand Research Challenge, we do hope to see some breakthroughs soon.
At Oxford, we hope to contribute a few.