Document Type
Article
Publication Date
5-9-2019
Abstract
How can we explain consciousness? This question has become a vibrant topic of neuroscience research in recent decades. A large body of empirical results has been accumulated, and many theories have been proposed. Certain theories suggest that consciousness should be explained in terms of brain functions, such as accessing information in a global workspace, applying higher order representations to lower order representations, or predictive coding. These functions could be realized by a variety of patterns of brain connectivity. Other theories, such as Information Integration Theory (IIT) and Recurrent Processing Theory (RPT), identify causal structure with consciousness. For example, according to these theories, feedforward systems are never conscious, and feedback systems always are. Here, using theorems from the theory of computation, we show that causal structure theories are either false or outside the realm of science.
Recommended Citation
Doerig, A., Schurger, A., Hess, K., & Herzog, M. H. (2019). The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness. Consciousness and Cognition, 72, 49-59. https://doi.org/10.1016/j.concog.2019.04.002
Copyright
The authors
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Included in
Cognition and Perception Commons, Cognitive Psychology Commons, Other Psychiatry and Psychology Commons, Other Psychology Commons, Philosophy of Mind Commons, Psychological Phenomena and Processes Commons
Comments
This article was originally published in Consciousness and Cognition, volume 72, in 2019. DOI: 10.1016/j.concog.2019.04.002