Computational Neuroscience: Brains vs. Binary

Brains and computers share a core similarity in how they process information: computers use the binary system of 1s and 0s to encode and organize information, relying on extremely fast electrical signaling to process data. Patterns of these two digits are compiled into long strings, providing the variety needed to encompass the characters, numbers, and symbols that comprise human communication. Because these digits are processed billions of times per second, the enormously long strings appear to be woven together into information that transcends its one-dimensional nature.
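This flattening of symbols into a one-dimensional bit string can be sketched in a few lines of Python (the message here is arbitrary, chosen only for illustration):

```python
# Encode a short message as the one-dimensional bit string a computer stores.
message = "Hi"
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
print(bits)  # 0100100001101001 -> 'H' is 01001000, 'i' is 01101001
```

Every character becomes eight binary digits; only the conventions of an encoding like UTF-8 let the string be read back as text.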

At the most basic biological level, the neurons in the brain fire, or signal, electrochemical impulses, roughly analogous to these 1s and 0s. As electrochemical signals, however, this processing is on the one hand much slower (a neuron fires at most roughly 1,000 times per second, compared with a processor’s billions of operations per second) but on the other hand far more modifiable. Dendrites don’t merely register the presence or absence of an impulse; they also read its modulation and the rate at which impulses arrive. This combination of effects allows for an enormous number of permutations and variations, incorporating multiple dimensions and characteristics of information. Essentially, while a computer processes each decision as just 1 or 0, yes or no, white or black, the human brain expands those two options into an entire gradient of possible responses.

Harry Collins, professor at Cardiff University, expounds on this idea by explaining how knowledge is embedded in our methods of communication in a variety of ways. In addition to basic, symbol-based information, Collins writes of embodied knowledge, embrained knowledge, and encultured knowledge. He separates action into “regular action” and “behavior-specific action,” noting that the majority of our day-to-day decisions either follow or establish ‘rules’ and appear to us easy to understand. The hidden instructions inherent in this behavior, however, reveal that we “cannot encapsulate all that [we] know about [regular action] into a formula…What is more, what counts as following the rules varies from society to society and situation to situation” (Collins).
Manuel Castells writes concerning communication that “all forms of communication…are based on the production and consumption of signs…It is precisely this ability of all forms of language to encode ambiguity and to open up a diversity of interpretations that makes cultural expressions distinct from formal/logical/mathematical reasoning” (403). The inherent “ambiguity” of our brain’s processing abilities, while restricting its speed, also lends it the ability to deal with high levels and multiple dimensions of complexity.
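The contrast between a binary gate and a graded, rate-based response can be caricatured in a toy sketch (this is an illustration, not a biological model; the function names are invented, and the 1,000 spikes/second ceiling echoes the rough figure above):

```python
# Toy contrast: a binary gate yields exactly two outputs, while a
# rate-coded "neuron" yields a gradient of responses to the same inputs.
def binary_gate(signal, threshold=0.5):
    return 1 if signal > threshold else 0

def rate_coded(signal, max_rate=1000):
    # Firing rate scales with input strength, capped at ~1000 spikes/second.
    return min(max(signal, 0.0), 1.0) * max_rate

inputs = [0.2, 0.5, 0.8]
print([binary_gate(s) for s in inputs])  # [0, 0, 1]
print([rate_coded(s) for s in inputs])   # [200.0, 500.0, 800.0]
```

The binary gate collapses three different inputs into two answers; the rate-coded version preserves the gradient.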

To visualize what’s going on, picture the neurons in the brain as a network of billions of interconnected cells, each with thousands of connections, whereas a computer’s processor is more of a pipeline. The brain can calculate many things at once, moving in multiple directions around this network, calling on intuitive and embedded information and interpolation to supplement the data presented to it. A computer, by contrast, is (technically) limited to one task per processor core. Multi-core hardware now allows genuine parallel processing, but at the end of the day, computers still follow a more binary and deterministic process, as opposed to the graded and stochastic one carried out by the brain.
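The pipeline-versus-network distinction can be sketched with Python’s standard concurrency tools (the squaring task is a hypothetical stand-in for any computation):

```python
from concurrent.futures import ThreadPoolExecutor

def task(x):
    return x * x

data = [1, 2, 3, 4]

# Pipeline: one task at a time, in order.
sequential = [task(x) for x in data]

# Parallel: tasks dispatched across several workers at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(task, data))

print(sequential == parallel)  # True: same results, different scheduling
```

Even here, the parallelism is orchestrated and deterministic: each worker still marches through its own binary pipeline.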


No matter how technology continues to advance, the brain remains a far more subjective and evolutionary entity, whereas the computer, however complex, is still deterministic. The computer draws conclusions to programmed questions, where the brain immerses itself in a field of search, often using roundabout methods of questioning and exploration to arrive at a “result.” Humans have built-in common sense and intuition: processes which, even if they are ultimately quantifiable as merely a massive collection of both pre-programmed and learned algorithms, we are far from being able to replicate in computer architecture. Whether the brain is just an incredibly advanced computer or is guided and influenced by some non-mathematical, unquantifiable set of expressions, suffice it to say that the computer of today operates and achieves results in an entirely different way than the human brain does. The “results” achieved by the brain (or, to be more specific, its “design process”) often take into account so many factors and parameters that the outcome is far from some optimized result.

Optimization brings with it some promise of an “architectural solution,” some notion that, through the power of computing, we can “solve our problem.” Merriam-Webster defines an algorithm as a “step-by-step procedure for solving a problem or accomplishing some end, especially by a computer.” At first this sounds promising in the vast decision-making field of design. Optimization, however, is inherently inhuman. In mathematical terms, design is about local optima rather than global optima, because design resides in a field that is entirely subjective. A global optimum would have to hold across all times and situations, but we design in scenarios that are fluid and temporally variant: what’s popular today might not be popular tomorrow, and what’s available here may not be available there. Optimization is flawed because it produces a static, snapshot answer: it once again presumes that there is some one style that humanity will cling to, and champion, and live with forever. It glorifies the end and seeks to reduce the process to the most efficient model possible. But why would we seek something so unchangeable? Design is subjective, and bound to a temporal landscape just as much as a physical one. Computers are not.
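The local-versus-global distinction can be made concrete with a simple hill-climbing sketch (the “preference landscape” and starting points are arbitrary choices for illustration):

```python
import math

def f(x):
    # A bumpy "preference landscape": several peaks, only one global best.
    return math.sin(3 * x) - 0.1 * x * x

def hill_climb(f, x, step=0.01, iters=10_000):
    # Greedy search: take any small step that improves f; stop when none does.
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break
    return x

local = hill_climb(f, x=2.0)   # settles on the nearest peak
best = hill_climb(f, x=0.0)    # a different start finds a taller peak
print(f(local) < f(best))      # True: the first search missed the global peak
```

The greedy search stops at whichever peak is nearest its starting point; which answer counts as “optimal” depends entirely on where, and when, you begin, which is the essay’s point about design.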

The proper allocation of computer resources lies not in the continued pursuit of computational creativity, but in the careful integration of (and restricted reliance on) pure processing. Computers offer us the ability to process information incredibly rapidly, but it is up to us to recognize this as merely another step in a living, fluid process, rather than a definitive answer. “Computers have an unfortunate tendency to present us with binary choices at every level, not just at the lowest one, where the bits are switching” (Lanier 63). It is up to us to avoid deferring to this yes/no procedural mindset and to properly prioritize human creativity.

Collins, Harry. “Humans, Machines, and the Structure of Knowledge.” Stanford Humanities Review 4.2 (1995): 67-83. Print.

Lanier, Jaron. You Are Not a Gadget: A Manifesto. New York: Alfred A. Knopf, 2010. Print.
