BCI Potential Unrealized
The ability to control an external device with one’s mind has the potential to enhance humanity in a variety of ways. Applications include control over neuroprosthetics (movement), phones and computers (spelling, communication, research…), drones, and VR worlds. Despite great progress, however, much work remains. BCIs are not yet a viable commercial technology (Chaudhary et al., 2021). They are scarcely used outside laboratories for practical applications, and effective translation from proof-of-concept prototypes into reliable applications remains elusive (Chavarriaga et al., 2016).
The Mind is Critical
To decode a brain signal is to ascribe meaning to it. Meaning is subjective, a phenomenon of the human mind. What mental states and processes does the signal represent? The subjective mind is obviously an indispensable part of any decoding. Its categories — perceptual experience, thought, emotion, imagination and visualization, goals, attention, intention, prediction, learning, inner speech, and so on, along with their combinations — are what allow the meaning of the brain signal to be understood and labeled in the first place.
Labels (words and word sequences) that accurately represent the mind require a conceptual understanding of it. The problem is that the mind remains largely ignored. In a typical BCI task, for example, only one (e.g., “visualize the cursor moving left”) or maybe two (e.g., “visualize it moving left, with high cognitive workload”) aspects of mind are considered.
The components of mind activated during device control, however, are many and varied. Herein lies a tremendous opportunity to improve brain signal decoding. Since the mind includes a dozen or more components for almost any mental or behavioral task, they too can be part of the decoder label. Increasing the number of labels allows multiple decoders for the same task.
The more comprehensively your set of decoding labels represents the mind, the higher the percentage of mind, and brain, that can be decoded. For example, the label “imagine raising my left arm” includes a variety of associations. One set might be “grasp my phone, send an important text, with focused attention, while feeling excited and confident, predicting the feeling of the phone against my hand, and the inner speech ‘grasp phone’ phoneme sequence.” Each of these mental components (and their combinations), expressed in the brain, has a brain signal feature set available for decoding. These labels are the tools for decoding a much higher percentage of the brain signal.
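The multi-component labeling described here can be sketched in code. The following is a minimal, hypothetical illustration — the component labels, feature names, and thresholds are all invented — of decoding several mind-component labels from a single trial rather than one:

```python
# Hypothetical sketch: one mental command ("imagine raising my left arm")
# is represented by a SET of mind-component labels, each with its own
# detector, rather than a single label. All names/values are illustrative.

def decode_components(features, detectors):
    """Return the subset of mind-component labels whose detector fires."""
    return {label for label, detect in detectors.items() if detect(features)}

# Toy per-component detectors: each thresholds one invented signal feature.
detectors = {
    "motor_imagery_left_arm": lambda f: f.get("mu_desync_c4", 0.0) > 0.5,
    "attention_focused":      lambda f: f.get("frontal_theta", 0.0) > 0.3,
    "emotion_confident":      lambda f: f.get("alpha_asymmetry", 0.0) > 0.1,
}

# One trial's (invented) feature values.
trial_features = {"mu_desync_c4": 0.7, "frontal_theta": 0.4, "alpha_asymmetry": 0.05}

decoded = decode_components(trial_features, detectors)
# Here two of the three component labels would be decoded for this trial.
```

The point of the sketch is structural: each mind component contributes its own label and detector, so a richer label set covers more of the signal.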
Decoder/classifier categories can include any aspect of mind that is strongly and consistently activated during a mental command (in a given context).
Classifiers and the Mind
A narrow set of classifiers inevitably reflects a narrow view of the user’s mind. If your classifier represents a single mental state or process (a single “paradigm”), the neural correlate of that mind feature can only be decoded as that feature. A classifier can only classify brain signal features that are labeled. Or, more precisely, it can only knowingly and explicitly classify them.
However, with most real-world tasks, dozens of mental states and processes are active, often simultaneously. With an imagined movement, for example, the user’s mind can include not just imagination but perception (visual, auditory, somatosensation…), emotion (excitement, confidence, frustration, anxiety), belief, motivation, goals, attention, cognitive workload, intention, and prediction, as well as hunger, thirst, fatigue, pain, and more. All of these represent additional classification “shots on goal” which, assuming they are valid, will increase classification accuracy.
If the categories of mind can be broadened to include a much higher percentage of the mind, and then divided into multiple classifiers, this is great news for signal decoding. The brain’s activity can now be more comprehensively and precisely covered by this (mind-based) classifier set. This of course assumes a one-to-one correspondence between the (subjective or experiential) mind and the brain. Is this the case?
The Mind/Brain Connection
Mind and brain activity, if they do not outright mirror one another, are very closely connected. Consider human movement. The efferent (away from the brain) signal has the potential to become associated with, and respond to, any mental state or process. This includes not only a direct causal pathway, such as (immediate) movement intention to basal ganglia/motor cortex to efferent signal. The efferent signal also responds indirectly to the cortex, including perception, recognition, meaning, emotion, prediction, higher goals, level of attention, and so on.
Where is the proof that movement is affected by most of the mind? A lab experiment isn’t required. Just notice, as you read this sentence, the movement of your head and eyes, your facial expression, and your body language. You’ll see that perception and recognition (of letters and words), meaning, thoughts, goals and intentions (to read, understand, stop and think, reflect…), emotion, motivation, and more can all affect movement instantaneously, as is the case with most activities. Now, if brain activity doesn’t mirror the mind’s, how could those mental states and processes all affect movement? How could a person possibly control his or her movement with skill and precision, in real time? Clearly the mind influences, if not completely controls, movement every waking moment.
A close mind-to-brain connection is also the basis of the entire field of cognitive neuroscience. It’s been well established experimentally that a particular type or instance of stimulus, thought, emotion, imagination, intention and the rest are closely connected to (large-scale, coordinated) patterns of neural activity. Distinct brain oscillations underlie specific cognitive functions (Cox et al., 2018).
The good news is that the mind can be used to unlock the secrets of the brain. The first step is to take the mind seriously as a “real” phenomenon. Then, understand it. Finally, define its contents, and their function through space and time, in the brain. In other words, create a model of the mind. This “functional mind map” can then be connected to brain activity (functional neural networks) and signal. From here, a brain signal “signature” for any mental command can be developed. This signature can serve as the basis of a decoder/classifier, or set thereof, to decode and classify brain signals that match it.
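The signature idea can be made concrete with a minimal sketch. Here a mental command’s “brain signal signature” is stored as a template feature vector, and an incoming trial is classified by its similarity to that template; the vectors, the similarity threshold, and the choice of cosine similarity are all assumptions for illustration, not a prescribed method:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical signature: an averaged feature vector over past trials of
# one mental command (e.g., "visualize the cursor moving left").
signature = [0.8, 0.1, 0.6, 0.3]

def matches_signature(trial, threshold=0.9):
    """Classify a trial as the command if it is close enough to the signature."""
    return cosine_similarity(trial, signature) >= threshold

# A trial resembling the signature matches; a dissimilar one does not.
close_trial = [0.75, 0.15, 0.55, 0.35]
far_trial = [0.0, 1.0, 0.0, 0.0]
```

A set of such signatures, one per mental command (or mind component), is one simple way a “functional mind map” could be linked to decodable signal patterns.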
The User-Centered Approach
Taking advantage of the user’s subjective mind in BCI is an idea supported by other researchers. They argue the field would benefit greatly from a more “user-centered” approach. The user’s role in device control and performance has been minimized, and taking into account his or her mental states and skills could have a substantial impact in improving BCI efficiency, effectiveness, and usability (Lotte et al., 2018).
I not only agree, but argue that the user-centered approach should be extended further — beyond user states and traits. There’s no reason it can’t include the entire range of mental states and processes. Instead of focusing on a few ad hoc mental states, there is an opportunity to systematically define the user’s mind. The only requirement is a mind model: its states and processes as they occur through space and time, inside the brain. Once defined, this set can then be narrowed to the components activated during device use, within a given context — environment, situation, activity, task, recent performance, etc.
Mind-based Brain Signal Classifiers
As a mind component (or combination thereof) and its corresponding brain signal are expressed across task trials, a signature of activity can be identified. This signature is not a sparse-code (one-off) expression but rather a range of similar expressions. This mind-based signature can then be used as, or to develop, a brain signal classifier.
Developing a mind-based classifier involves the following: (1) acknowledge the mind exists (it’s not “the brain” but a subjective phenomenon in its own right); (2) define it, using a mind model; (3) define the mind’s components most active during a mental command, within a mind/brain/body/environment context; (4) identify the brain signal characteristics corresponding to these mind signatures, to create brain signal signatures and classifiers; and (5) identify those most interesting, motivating, and relevant to the user during that task.
A set of classifiers can be developed corresponding to the user’s mind components. Having potentially dozens of viable classifiers at one’s disposal greatly increases the classification “shots on goal” available for decoding. And this set can be selectively applied (or weighted) depending on the user’s predicted mind activation during a given task and context.
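The weighting step can be sketched as follows. This is a hedged illustration, not a proposed implementation: the component names, classifier scores, and context weights are invented, and the combination rule (a simple weighted sum) is one plausible choice among many:

```python
# Sketch of a weighted mind-based classifier set. Each per-component
# classifier emits a score for the trial; scores are weighted by how
# strongly each mind component is predicted to be active in the current
# task + context. All names and numbers are illustrative.

def weighted_decision(scores, context_weights):
    """Combine per-component classifier scores into one evidence value,
    weighting each component by its predicted contextual activation."""
    return sum(scores[name] * context_weights.get(name, 0.0) for name in scores)

# Per-component classifier outputs for one trial (e.g., probabilities).
scores = {"imagery": 0.9, "attention": 0.6, "emotion": 0.2}

# Predicted activation of each component in a hypothetical "focused
# cursor task" context; components absent from the context get weight 0.
context_weights = {"imagery": 0.7, "attention": 0.2, "emotion": 0.1}

evidence = weighted_decision(scores, context_weights)
```

Swapping in a different `context_weights` dictionary for a different task or situation is the “selectively applied (or weighted)” step described above.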
Standing in the way of a mind-first classifier development approach, however, is the lack of a mind model. The human mind is poorly understood (Poldrack & Yarkoni, 2016). Currently, no accurate or systematic method exists for defining a mental state or process within a given task and context. Because of this shortcoming, even a sharp turn toward a user-centered approach, though helpful, will have limited effect.
On the bright side, a model isn’t entirely necessary — if one simply recognizes there’s a lot going on in the user’s mind, all of which affects brain activity. And even better news: a viable mind/brain model has already been developed.
User Empowerment
A greater emphasis on the subjective mind can empower the BCI user. Focusing on the mind as one side of the mind/brain coin allows the user to see that she’s in control. As she manipulates her own mind, she simultaneously controls her brain signal. This can be done with clear conscious intent. However the mind is manipulated, the brain follows. Clarifying this relationship helps the user control her brain signal more naturally yet intentionally.
Classifiers based on the mind also encourage personalization. In consultation with others, the user can research and define the states and processes easiest for her to achieve strongly and consistently. These could include aspects of her mind most closely aligned with her lifestyle, interests, favorite activities, professional goals, and so on.
In addition, mind-first classifiers can serve as “mind targets” for the user to try to “hit” or activate. More precise and comprehensive targets — clearly understood by the user — cannot help but increase user performance and classification accuracy.
The Core Problem: The Mind is Ignored
The idea that the various components of the mind are strongly correlated with brain activity and signal is obvious. After all, it’s the main aim of cognitive neuroscience — to elucidate this connection. Yet there’s a tendency (driven by materialism) to minimize, if not ignore, the subjective. This neglect by the brain science community trickles down to the field of BCI. Although a specific task may be described in great detail, the mind that executes it — beyond that singular mental component — is for the most part ignored.
Minimizing the mind causes its corresponding brain signal to be both decoded and encoded sub-optimally. Signal classification suffers accordingly. Many positive states of mind — feeling calm, confident, happy, content, motivated, etc. — remain unacknowledged. So do unwanted states — feeling frustrated, impatient, anxious, distracted, in pain, etc. If these mental categories are not accounted for subjectively, they won’t be included as part of the classifier, and won’t be classified (as such).
The good news is that any mental phenomenon can be included as part of a classifier set, especially if it can be activated strongly and consistently. Any mental state or process (or combination thereof) can be represented by its own classifier. Also, if a mind component is sporadically and unpredictably activated, a “noise classifier” could be matched to this state, with subsequent matching signals filtered out. Any component of mind can be labeled as noise or as part of the signal — once acknowledged as being part of the brain’s activity in the first place.
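The noise-classifier idea can be sketched in a few lines. Here a detector matched to a sporadic mind state (a flash of frustration, in this invented example) flags trials whose signal should be filtered out before command classification; the feature name, threshold, and trial values are all assumptions for illustration:

```python
# Sketch of a "noise classifier": a detector matched to a sporadic,
# unpredictable mind state is used to filter out matching trials.
# Feature names and values are invented for illustration.

def is_noise(features):
    # Hypothetical rule: strong "frustration_marker" activity marks
    # the trial as noise rather than part of the command signal.
    return features.get("frustration_marker", 0.0) > 0.8

trials = [
    {"frustration_marker": 0.10, "command_score": 0.9},
    {"frustration_marker": 0.95, "command_score": 0.4},  # flagged as noise
    {"frustration_marker": 0.30, "command_score": 0.7},
]

# Keep only trials the noise classifier does not match.
clean_trials = [t for t in trials if not is_noise(t)]
```

Whether a given component is routed to a command classifier or to a noise filter like this one is a labeling decision — which is the point of the paragraph above: the component must first be acknowledged before it can be labeled either way.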
Summary
Once the user’s mind is defined accurately, connected to the brain, and used to build brain signal signatures, higher-performing classifiers (and “mind targets”) can be developed for the BCI designer, trainer and user to benefit from.
References
Chaudhary, U., Chander, B. S., Ohry, A., Jaramillo-Gonzalez, A., Lule, D., & Birbaumer, N. (2021). Brain computer interfaces for assisted communication in paralysis and quality of life. International Journal of Neural Systems, 31. https://doi.org/10.1142/S0129065721300035
Chavarriaga, R., Fried, O., Kleih, S., Lotte, F., & Scherer, R. (2016). Heading for new shores! Overcoming pitfalls in BCI design. Brain-Computer Interfaces, 4, 60.
Cox, R., Schapiro, A., & Stickgold, R. (2018). Variability and stability of large-scale cortical oscillation patterns. Network Neuroscience, 2(4), 481. https://doi.org/10.1162/netn_a_00046
Lotte, F., Jeunet, C., Mladenovic, J., N’Kaoua, B., & Pillette, L. (2018). A BCI challenge for the signal processing community: Considering the user in the loop. In Signal Processing and Machine Learning for Brain-Machine Interfaces. IET, pp. 1-2.
Poldrack, R. A., & Yarkoni, T. (2016). From brain maps to cognitive ontologies: Informatics and the search for mental structure. Annual Review of Psychology, 67, 587.