That little voice inside our heads

While sitting on the train I catch myself thinking about watching an episode of my favorite series on Netflix when I get home. Immediately, the voice in my head says, “nah, you’ll probably end up watching the entire season.” “Nah, I have self-control,” I tell myself. You can imagine which argument won in my internal dialogue.

I caught myself mid-thought; that is, I suddenly became aware of the voice in my head. I deliberately engaged in listening to my inner speech, hearing each argument almost in full sentences. It has the acoustic and structural qualities of external speech, including sarcastic tones. In this blog post I want to focus on voluntary verbal thought: how are neuroscientists studying it? How do deaf people experience the concept of an inner voice?

Alberto Montt, used under CC

Our inner speech

Our inner voice plays a key role in the regulation of cognition and behavior. We can hear it in our daily lives reminding us about the day’s schedule (alongside our working memory), giving us grief for doing something wrong, and encouraging us to overcome challenges. In other words, it is there when we need it the most (and is helpful most of the time).

Nick Seluk, used under CC

The silent production of words inside our brain (also referred to in the literature as inner voice, inner speech, verbal thinking, covert self-talk, internal monologue, and internal dialogue) bridges language and thought. It is estimated that we hear it for at least one-quarter of our conscious life.

One of the main theoretical perspectives on inner speech was given in the 1930s by Russian psychologist Lev Vygotsky, who theorized that inner speech is an internalized version of overt (external) speech. According to Vygotsky, inner speech develops from children’s conversations with their caregivers, passing through a phase in which they talk aloud to themselves (private speech) before finally arriving at fully internalized covert speech. Elaborating on Vygotsky’s ideas, Fernyhough proposed that inner speech takes two forms: expanded inner speech (with qualities such as tone and accent, phonologically similar to overt speech) and an abbreviated version called condensed inner speech, which Vygotsky described as ‘thinking in pure meanings’, a compressed version of the first. Under stressful circumstances, we can switch between the two. Who hasn’t started talking to themselves slowly, or even thought out loud, in a stressful or difficult situation? In Fernyhough’s model, the default setting for inner speech is condensed, switching to the expanded form under cognitive challenge.

The relationship between inner and overt speech is still a matter of study. Early descriptions by Watson assumed inner speech was just a sub-vocalization (attenuated motor commands) of overt speech. This was refuted when Smith induced paralysis in himself (!!) and could still hear his inner speech even without articulation. Still, the expanded version of the voice in my head sounds a lot like the way I talk out loud, with the same rhythm and tempo (even though I think I sound better in my head). There is a model that captures this resemblance and understands inner speech as overt speech with blocked motor execution, a “motor simulation” view. In this theory inner speech is seen as a truncated overt speech production process, though where exactly this truncation lies is unclear. Opposed to this motor simulation view is the “abstraction view”, which holds that inner speech is “the consequence of the activation of abstract linguistic representations”.

While talking to ourselves in silence we use some of the same brain areas involved in spoken speech, mostly Broca’s area (associated with speech production), Wernicke’s area (associated with speech comprehension), and the inferior parietal lobule. This suggests a relationship between inner and overt speech, since both activate essential language areas. Nevertheless, several neuroimaging studies suggest that even though inner and overt speech share a common cerebral network, they engage certain regions differently and produce different activation patterns. Results vary: some support the idea that inner speech is overt speech with blocked execution processes, while others imply otherwise. These discrepancies could be explained by the different types of tasks and degrees of awareness involved in each study.

Summoning your inner voice

If you are like me, then you are constantly hearing yourself (especially on the train when your phone is uncharged), but this is not a universal trait; people experience it with varying frequency. Experiments studying covert speech need inner-chatty people. But how much attention or monitoring is needed in an experimental setup? Different degrees of awareness (from word repetition to word generation) during the monitoring of one’s own inner speech are associated with different patterns of brain region activation [8]. This makes it difficult to elicit the exact type of inner voice researchers want to measure.

There are different methodological approaches to studying inner speech: some aim to capture it spontaneously, while others try to inhibit it in order to measure the processes affected by its suppression. A simple way to collect subjective inner speech data is through self-report questionnaires, mostly focused on the context and functions of self-talk. Researchers also use a method called Descriptive Experience Sampling, in which reports are collected at random intervals. The researchers give participants a beeper to carry as they go about their daily lives, and whenever they hear a beep during the day they take notes. These notes are later discussed in an expositional interview with the researcher. Other methods are dual-task designs, in which researchers interfere with or block inner speech through a secondary task that suppresses subvocal articulation. The articulatory suppression task can be the repetition of a word, a well-learned sequence of words (e.g. the days of the week), or a phrase while completing a primary task (usually a planning task). Dual-task designs have been widely used in children and adults to study the links between inner speech and performance on problem-solving and spatial working memory tasks.

For behavioral, physiological, and neuroimaging (fMRI, iEEG) objective data collection, participants are usually asked to repeat sentences in their head, articulate words sub-vocally, or imagine speech with varying characteristics, but it is difficult to ensure that the phenomenon of inner speech continues while scanning.

Are you talking to me?

How do I know that my inner voice is really my own? How can I simulate other people’s voices and accents in my head and still know it’s me? Scott suggests that perhaps what we “hear” is the prediction of our own voice: an attenuation mechanism in which an efference copy is made and causes sensory attenuation of the audible sound (after all, we already know what we are saying). This is similar to trying to tickle ourselves: the motor system creates an efference copy, the sensations are predicted, there is no surprise, and the resulting experience is less intense or canceled altogether.

It is hypothesized that a failure in this copy system could underlie auditory verbal hallucinations (AVH) in patients with schizophrenia, leading them to perceive their inner dialogue as not self-produced (for more about schizophrenia and the hallucinating brain, click here and here).

Sign me up!

What happens in the case of deaf people? Do they “hear” an inner voice too? Here, the concept of hearing takes on another meaning: lacking an auditory language, they mostly think in their everyday communication system, sign language (alongside visual imagery and, in some cases, lip-reading). Of course, this varies between individuals, for example depending on whether they are congenitally deaf or not (there is a nice Quora thread if you want to know more). According to McGuire et al., “inner signing” uses the same brain areas used for inner speech in hearing individuals. There is also a review by Atkinson et al. on voice hallucinations in deaf people.

Listen in

When you started reading this blog entry, your inner voice was most probably accompanying you, and I hope it raised more questions about this methodologically challenging, interesting yet ordinary phenomenon. Talk to yourself about it later!
