Posted in Special Feature on 28th Jan 2019
Caitjan Gainty, Department of History, King’s College London, UK.
How do Neurologists learn to look and to see? How does the ‘gaze of the Neurologist’ determine difference or pathology? Trained experts, whether they be medical doctors or Ornithologists, see the world differently from those who lack their training. Indeed, training is, in good part, learning to see.1 So what is it that the Neurologist sees, as a result of their professional clinical training in habits of looking, that those in other areas of medicine and science, and outside these realms altogether, don’t?
In 2016, we (a sociologist and a historian) decided to explore this question. In part, our ‘neurovisions’ project built on work one of us (Rose) had already done to explore and elucidate the birth of the ‘neuromolecular’ gaze: a way of looking that envisages the internal mental world of the patient in terms of the activity of neurons, neurotransmitters, and neural circuits and their normal and abnormal structure and function.2 This ‘gaze’ has become increasingly familiar to non-specialists, via the public circulation of images like brain scans which, we are told, enable us to see the mysteries of the brain from dementia to musical appreciation. Yet, in important ways, these images, like all images, are not the thing itself. Rather, brain scans and other medical images are always only visual representations, able to offer one – but never the only – way of reflecting, re-imagining, re-creating disease.
Indeed, one does not even need to look very far back to find an approach in tension with our contemporary inclination to render neurologic difference and pathology as the result of events occurring at the molecular level. Towards the end of his life, the great Neurologist Alexander Romanovich Luria described the approach that he had taken in The Man With A Shattered World and The Mind of a Mnemonist3 as ‘romantic science’.4 Oliver Sacks explicitly followed this genre: in his introduction to the 1987 edition of The Man With A Shattered World, he described his approach of melding the synthetic biography of the individual case with clinical analysis as “the dream of a novelist and a scientist combined”.5 As the historian Anne Harrington has pointed out, this was also the way that the world of the neurodiverse – to use the current term – was portrayed in such movies as Rain Man (a savant with mental disabilities) and The Curious Incident of the Dog in the Night-time (the world of a young man diagnosed with Asperger’s).6 Examining this evidence, Harrington suggested that some Neurologists had been less concerned with the question of “what mechanisms have gone wrong” and more with “what is it like to be a person with a brain injury, Alzheimer’s disease, autism, or Tourette syndrome.”6 This seemed good in theory, we thought, but what did holism look like in practice? What were the practices of neurologic looking that this ‘romantic’ approach required? Were these practices a more ‘holistic’ mode of apperception of the patient and the world that he or she inhabited?
These questions led us to the five short films of the Psychiatrist/Neurologist Kurt Goldstein (1878-1965), whose career spanned the middle decades of the twentieth century. We were drawn to Goldstein’s films not only because they fit the general chronology of the ‘romantic science’ of neurology, but also because Goldstein himself was explicitly committed to holism. Pushing back against the tendency toward localism in his own time, Goldstein argued that neurological injury refashioned the brain as a whole, so that the orientation of the patient’s body, self and relation to the world also, of necessity, changed.7 “The better we became at observing,” Goldstein noted, “the more we came to ascertain that more or less none of the actions are performed normally anymore after a lesion in the nervous system”.8 Goldstein’s mode of neurological looking thus emphasised the crucial importance of multiple, not singular, diagnostic tests. Though Goldstein saw his therapeutic role as facilitator of the brain’s subsequent ‘healing,’ this was not primarily aimed at the restoration of function. Instead, Goldstein’s ‘therapy’ largely consisted of the creation of a new attitude of the body towards its environment.
Goldstein was greatly aided in his work by the use of the motion picture camera, which was widely regarded among scientific and medical communities of this era as an instrument both fundamentally attuned to scientific work and clinically multifaceted. The still camera had been a mainstay of Neurologists in the nineteenth century, so that when the famed physiologist and cinematographer Étienne-Jules Marey (1830-1904) helped to install the motion picture camera at the Salpêtrière, this was in some ways the extrapolation of a much longer visualising tradition. But the motion picture camera transformed the work done by its still forebear in one radical way: its ability to capture and retain motion, that most ephemeral yet fundamentally critical bellwether of neurologic pathology.
Like his contemporaries, Goldstein understood the motion picture’s significance not only as documentation, but as active diagnostic, therapeutic and pedagogic intervention.9 Diagnostically and therapeutically, filmmaking made possible the capturing of Goldstein’s ‘holism’ in the first place, since it could show patients interacting with their environments in ways that the still photo or case report could not. We meet patients not only as the subjects of Goldstein’s testing, but crucially also en passant, catching glimpses of the constant accommodation of their neurologic to their lived selves as it happens. Goldstein used the unique capabilities of the camera not only to capture but also to edit and construct motion. And in this way, his vision of neurologic pathology as an ephemeral, unstable and dynamic phenomenon was given fullest articulation by film. Film also created a richer pedagogical experience. For watching these films was not intended as a passive process of witnessing or commenting on a procedure or course of neurologic action.10 Instead, like many medical films of this period, Goldstein’s films also offered an active and immersive education in a specific – neurological – way to see. To watch these films was to learn to see as Goldstein did: to acquire the necessary habits of looking that made these films make sense.
What we found fascinating when examining Goldstein’s films, however, was not just the empirical grounding in neurologic practice – the ‘deromanticising’ of neurology’s romance – that they accomplished. It was also, perhaps more, their illegibility to current practicing Neurologists.11 Some measure of distance would, of course, be presumed between past practices of looking and ours today. Yet no amount of looking at Goldstein’s films, despite their intended pedagogic workings, could shift the foreignness of his vision. They did not seem compatible, even as historical precursors, with contemporary neurological looking practices.
The current unintelligibility of these films confirms what we already theorise: that habits of looking are never ‘natural’ or ‘objective’ but instead culturally constructed, deeply contextual acts. The style of looking evident in these films seems so unnatural and counterintuitive to us now that we instantly understand them as constructed artefacts. Embedded in this recognition is the tacit acknowledgment that our own practices of neurological looking must also be constructed and historically specific. In the process of learning to ‘see’ properly, contemporary Neurologists, like Goldstein, see their own processes as ‘natural,’ objective, as the development of a vision of what is really there. But in fact, learning to see requires the cultivation of a productive myopia, in which certain things are obscured and others highlighted as suits our current notions of neurological disease, our current predispositions toward certain explanatory structures. In this sense, Goldstein’s cinematographic exercises are consistent with ours today, emphasising as they do not what is seen but the significance of the focusing of sight as a timeless, fundamental neurologic act.
These ‘neurovisions’ – these ways of getting the full picture of a neurological disorder – have shifted alongside medicine and neurological practices. It is these larger processes of seeing that we seek to unearth in our current research. Though we initially understood “neurovision” as quite specifically bound up in the act of seeing neurologically itself, more recently we have come to recognise that seeing neurologically is not merely a visual practice. Even with Goldstein, the patient was put through specific tests or procedures to ‘render visible’ the consequences of the injury, to intensify them so that they were clearer to the observer, even to force into visibility some symptoms that would not, in the ordinary course of events, be visible. Today, what might once have been thought of as the simple act of skilled observation, as portrayed in Rembrandt’s famous 1632 painting, The Anatomy Lesson of Dr. Nicolaes Tulp, is insufficient. It is true, as Andrew Lees has pointed out,12 that the Neurologist is a kind of detective, in search of the clues that will identify the ‘villain’ – the lesion or internal anomaly responsible for the ailment of the patient. But he or she no longer stands alone in front of the patient, trained vision informed only by the ‘case history’. The expert gaze must be amplified and supplemented by images provided by, and often interpreted by, a range of other technologists. We invest our hopes and beliefs in X-ray, CT, MRI, fMRI and all of the other technologies capable of augmenting our vision. But they are not only augmentations. These technologies actually do more: they change the scale at which the Neurologist sees the condition and the form in which its origin is conceptualised.
Today, there is also a new relation of time to vision: the push to diagnose diseases earlier, the belief that ‘earlier is almost always better’ when it comes to making a person into a patient – identifying a prodrome which can reveal that he or she is ‘presymptomatically ill’ – means that different kinds of looking, for different kinds of signs and symptoms, are required. This shifted temporality has, in turn, shifted what counts as a symptom. Small differences of function, that might otherwise be considered well within the realm of normal variation, now carry with them the potency of future pathology.
Perhaps the most radical shift has occurred in clinical genetics. No longer content merely to establish a disorder’s genetic cause by charting its occurrences across a lineage, the neurological gaze has shifted once more. It no longer focuses on the external symptoms of disease perceived by the patients themselves or others around them – however slight, however nascent. New technologies of gene sequencing have rendered the invisible visible, and refocused clinical attention on those small variations in gene sequences that, in some but not all cases, will eventually reveal themselves in symptoms. Despite the rhetoric of precision and ‘personalisation’ associated with the predictive power of contemporary molecular genetics, the person so diagnosed now enters a world of probabilities and uncertainties. They are, as it were, ‘patients in waiting’. Will they get sick, when will they get sick, how quickly will their disease progress, when is early intervention warranted, how will these ‘genetic instructions’ play out in their own, singular, individual body and brain? If, as we know, these questions haunt those with a family history of single gene disorders such as Huntington’s Disease – so much so that around half of those with such a family history of neurological disorder refuse to take the genetic test – how much more so for the disorders whose genetics is multiple, probabilistic, developmental and contingent on many other factors for its emergence. What is the neurological gaze, then, in these proliferating conditions of uncertainty?
Then there are the disorders which we cannot ‘see’ at all; where none of our technologies reveal anomalies. Can we still think of some disorders as ‘functional’? What is the status of a disorder that only exists in the sometimes erratic narratives of a patient’s story? When it comes to Chronic Fatigue Syndrome, Gulf War Syndrome and many more conditions characterised by a somewhat confused concatenation of neurological, mental and physical complaints, are we confronted with ‘real’ disorders even though our current practices of looking don’t see them? Or is it something that the patient has acquired ‘culturally’, simply an embodiment of a “cultural construction”? Are these symptoms of something else: a psychiatric disorder? Or is this a problem awaiting ‘objective’ confirmation? And if so, on whose authority? Must we wait for a physiological test? A more precise brain scan? Better gene sequencing? Or is it a matter of clinical ‘art’, the culturally-constructed gaze of the clinician, further specified by years of personal experience and a lifetime of successes and failures burned deep into the memory?
Some rather important issues are embedded within these questions about ways of seeing. Beyond the ambiguities of the distinction between functional and organic disorders, perhaps even the borders between neurology and psychiatry are at stake. If all psychiatric disorders are, in the end, to be regarded as brain disorders, and if all neurological disorders are also to be regarded, ultimately, as emerging from molecular anomalies in brain and nervous system, why should the division stand? Should we not abandon the idea of a romantic, holistic science, with its respect for the experience of being a person in the world? Or, just perhaps, should the line of development be the other way round: should Neurologists resist the siren call of the new brain sciences and their claims to know, at last, the reality of the disorder, and recognise, as did Goldstein, that what was at stake for the patient was a whole new way of being in the world? In recognising that as one possible way of being a human being, we might be able to look anew at those ways of being that those of us who think of ourselves as ‘normal’ take so much for granted. By exploring the history of these ways of seeing, we aim not only to get a fuller picture of how and what Neurologists see, but also to explore how modes of medical looking more generally are created and continually re-defined, solving some problems and creating others with each new definition.
1. See e.g. Daston L. On Scientific Observation. Isis, 2008;99(1):97-110.
2. Rose N, Abi-Rached J. The birth of the neuromolecular gaze. History of the Human Sciences, 2010;23(1):1-26.
3. Luria AR. (1972). The Man With A Shattered World (L. Solotaroff, Trans.) Cambridge, MA: Harvard University Press. Luria AR. (1932/1976). The Mind of a Mnemonist (L. Solotaroff, Trans.). Cambridge, MA: Harvard University Press.
4. Luria AR. (1979) The Making Of Mind: A Personal Account Of Soviet Psychology. (Trans. and Ed. M. Cole & S. Cole.). Cambridge, MA: Harvard University Press.
5. Sacks O. (1987) Introduction, The Man With A Shattered World, Cambridge, MA: Harvard University Press, p. xii.
6. Harrington A. The Inner Lives of Disordered Brains. Cerebrum, 2005. http://www.dana.org/Cerebrum/2005/The_Inner_Lives_of_Disordered_Brains/
7. Goldstein K. (1995) The Organism: A Holistic Approach To Biology Derived From Pathological Data In Man. New York: Zone Books.
8. Goldstein K. (1925) Zur Theorie der Funktion des Nervensystems, Archiv für Psychiatrie 74, 398.
9. Geroulanos S, Meyers T. (2014) Experimente im Individuum: Kurt Goldstein und die Frage des Organismus. Berlin: August.
10. See e.g., Gainty C. (2012) Going After the High-Brows: Frank Gilbreth and the Surgical Subject. Representations, 1-27; Ostherr K. (2013) Medical Visions. New York, NY: Oxford. For an older view on cinematographic roles, see Didi-Huberman G. (1993) The Invention Of Hysteria: Charcot And The Photographic Iconography Of The Salpêtrière. Cambridge, MA: MIT.
11. A selection of these films can be seen at our website: https://neurovision.org.uk/
12. Lees A. (2017). Mentored by a Madman: The William Burroughs Experiment. London: Notting Hill Editions.