Australasian Science: Australia's authority on science since 1938

How We Sense Time

Credit: freshidea/Adobe


By Jack Brooks

Our sense of time is critical to our everyday experience, from consciousness to movement and learning.

Have you ever looked at one eye in the mirror and then shifted your gaze to the other eye, surprisingly seeing no movement of your eyes? Or misheard something, such as the Jimi Hendrix lyric ‘scuse me while I kiss the sky as ‘scuse me while I kiss this guy? At first glance these are simply sensory limitations, but when we look under the hood they are clues about how we process time.

Unlike other senses, time is directional; it unfolds as one event after another. Psychologists have pondered how the sense of time arises as, unlike the traditional senses, there is apparently no receptor or sensory system capable of signifying time.

And yet time must arise from somewhere, as we can track durations ranging from fractions of a millisecond to days with considerable accuracy. The former is the time it takes sound to travel between our ears, and detecting this difference enables us to localise sound. The latter is demonstrated by our daily circadian rhythms. This article will focus on events that unfold within the most recent 2–3 seconds, a window psychologists dub the subjective present.
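To get a feel for the millisecond end of that range, we can estimate the largest interaural time difference the brain has to detect. The head width and speed of sound below are typical textbook figures, used purely for illustration.

```python
# Rough estimate of the maximum interaural time difference (ITD):
# the extra time sound takes to reach the far ear when a source
# sits directly to one side of the head.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C (assumed)
EAR_SEPARATION = 0.21    # m, a typical adult value (assumed)

max_itd_ms = EAR_SEPARATION / SPEED_OF_SOUND * 1000
print(f"Maximum ITD: {max_itd_ms:.2f} ms")  # roughly 0.6 ms
```

Sounds only slightly off-centre produce differences far smaller than this, which is why localising them demands sub-millisecond timing.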

Free Will

Philosophers have debated for hundreds of years whether we have the free will to do what we wish or if our actions are pre­determined. The question was pigeonholed as a thought experiment until Benjamin Libet of the University of California, San Francisco (UCSF) devised a clever experiment to test it. Libet’s interest in studying consciousness was sparked when he spent a year in Australia in the 1950s with Nobel Prize winner John Eccles. It was not until 1983 that Libet published his seminal paper on consciousness. He measured activity in the brain using electroencephalography (EEG). Participants watched a single-handed clock that completed a revolution every 3 seconds, and were instructed to make a fist at any time they chose. Afterwards, participants reported the clock position when they had made the decision to make a fist.

The EEG recordings showed that neural activity associated with making the fist started as much as 1 second before participants were aware they had made a decision. This suggests that an unconscious decision was made, and that only then was the mind made aware of the decision. Some considered this conclusion flawed, as the only choice participants were free to make was the timing of a single action.

Recently another team performed a similar experiment, but this time participants chose whether to push a button with their left or right hand. These scientists used fMRI to home in on neural activity in the parietal cortex, an area of the brain responsible for processes prior to the final movement command. They claimed they were able to predict the side chosen at above-chance levels as early as 7 seconds before participants were aware of their decision. This is a controversial viewpoint, at odds with our everyday experience that we have complete agency over what we do and when we do it.

Although everyday experience never suggests that free will is an illusion, our sense of agency is malleable. In another experimental paradigm, participants click a mouse and a light flashes on the screen in front of them. There is a short delay between the click and the flash. The task is repeated many times to strengthen the expectation that clicking the mouse causes the flash. In the test trials the delay is dramatically reduced without the participants’ knowledge. Remarkably, participants now judge the light as occurring before the click.

Having participated in a similar experiment myself, I still felt that my click caused the light to flash, even though I perceived the flash preceding the click. This paradox is a testament to the strength of the illusion of free will: even when we are presented with sensory input shouting determinism, we choose free will.

Time Creates Space?

Adaptation paradigms are also well suited to investigating how our brain maps space, in particular the skin. These maps of the body enable us to make accurate movements. When adjacent locations on the skin are stimulated at around the same time, they come to be represented adjacently in the brain. The more often this coincident stimulation occurs, the stronger the neural connections become. Thus, to map space, we rely on assumptions about the statistical properties of stimuli hitting the skin. Part of the reason this works is that we have poor spatial resolution on the skin but excellent temporal resolution.
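The coincidence principle above can be sketched as a simple Hebbian-style rule: connections between skin sites strengthen only when their stimulation falls within a short time window. The site labels, window and learning rate here are invented for illustration, not taken from any real experiment.

```python
# Hebbian-style sketch of temporal coincidence building a spatial map:
# skin sites stimulated at nearly the same time strengthen their
# mutual connection; sites stimulated far apart in time do not.
from collections import defaultdict

weights = defaultdict(float)  # connection strength between skin sites
LEARNING_RATE = 0.1           # assumed increment per coincidence
WINDOW_MS = 20.0              # assumed window that counts as "coincident"

def stimulate(site_a: int, site_b: int, delay_ms: float) -> None:
    """Strengthen the a-b connection only if stimulation is coincident."""
    if delay_ms <= WINDOW_MS:
        weights[(site_a, site_b)] += LEARNING_RATE

# Adjacent sites 1 and 2 are often touched together; 1 and 9 rarely.
for _ in range(50):
    stimulate(1, 2, delay_ms=5.0)
stimulate(1, 9, delay_ms=300.0)

print(weights[(1, 2)] > weights[(1, 9)])  # True: 1 and 2 map as neighbours
```

Over many touches, statistically coincident sites end up strongly linked while distant ones do not, which is how timing alone can stand in for spatial information.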

Our own experiments have probed further to elucidate the mechanisms of how motion, rather than discrete touch, influences the mapping of space by the brain. We exploited the tunnel effect: when a moving object passes behind something and returns out the other side quicker than expected it is still perceived as a single object and not some other object. However, the “object constancy” gained from the tunnel effect comes at the cost of influencing the brain’s map of space.

When we used a similar setup on the skin (Fig. 1), participants felt that the untouched skin patch covered by a band (equivalent to the tunnel) had shrunk. Participants had felt the brush along the entire length of their forearm, but their forearm felt shorter.

Our constant-velocity model showed that the brush speed before and after the band predicts the length participants sensed for the missed patch. That is, the brain uses the known parameters – the crossing time and the speed – to calculate the unknown distance of the missed skin patch (velocity × time), thereby reshaping its spatial map of the skin.
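That velocity × time calculation can be sketched in a few lines. The speeds, times and band width below are made-up values for illustration, not the experiment's actual parameters.

```python
# Sketch of the constant-velocity model: the brain infers the length
# of the untouched skin patch from the brush speed and the time the
# brush spent "hidden" under the band.

def inferred_patch_length(speed_cm_s: float, crossing_time_s: float) -> float:
    """Distance = velocity x time for the unfelt stretch of skin."""
    return speed_cm_s * crossing_time_s

# Brush moving at 5 cm/s, hidden under the band for 0.8 s:
print(inferred_patch_length(5.0, 0.8))  # 4.0 cm inferred
```

If the band actually covers, say, 6 cm of forearm but the brush reappears sooner than the true distance warrants, the inferred length comes out shorter than 6 cm — and the forearm feels shorter, as participants reported.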

A similar mechanism is responsible for the failure to see your eyes move when you shift your gaze from one eye to the other in a mirror. The sensory information hitting your retina during this eye movement (a saccade) is suppressed due to its low quality, so what you perceive during the saccade is actually an estimate using information from each end of the saccade.

Although this mechanism distorts space–time, it stabilises our visual field, which is important given we make tens of thousands of saccades every day. This bias is also why, when you first gaze at a clock, the second hand can appear eerily frozen for a moment.

Many Clocks

Until recently a clock model of time perception dominated the field. Its central concept was a single clock in the brain with three parts: a pacemaker, an accumulator and a switch. The pacemaker generated pulses at regular intervals, which were summed by the accumulator. The switch reset the accumulator when necessary.

This model seemed to account for perceptual illusions such as the oddball effect (Fig. 2). In this illusion, the same object such as a pineapple is flickered on a screen, but unexpectedly the pineapple is replaced with a novel object like a pool ball. When participants judge which stimulus in the stream was displayed for the longest, they consistently pick the pool ball, as if time has expanded.

Proponents of the clock model account for this by purporting that the oddball stimulus increases arousal, leading to an increase in the counter rate, with more ticks equating to more time. However, others found that the neural response to the repeated object in the oddball effect may adapt or diminish, a phenomenon known as repetition suppression. Thus the oddball only seems to appear for longer because the repeated object is perceived to appear for less time.
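A toy version of that single-clock account, including the arousal explanation of the oddball effect, might look like this. The pulse rate and the arousal scaling factor are illustrative assumptions, not measured quantities.

```python
# Minimal sketch of a single internal clock (pacemaker-accumulator
# model): pulses are emitted at a regular rate, summed, and reset.
# Arousal scales the pulse rate, so an arousing oddball accumulates
# more pulses in the same physical duration - "expanding" time.

class InternalClock:
    def __init__(self, pulses_per_second: float):
        self.rate = pulses_per_second  # pacemaker rate (assumed value)
        self.accumulator = 0.0         # summed pulses

    def run(self, duration_s: float, arousal: float = 1.0) -> None:
        self.accumulator += self.rate * arousal * duration_s

    def reset(self) -> None:
        # The "switch" resets the accumulator between intervals.
        self.accumulator = 0.0

clock = InternalClock(pulses_per_second=10)
clock.run(1.0)                 # a 1 s repeated stimulus
baseline = clock.accumulator
clock.reset()
clock.run(1.0, arousal=1.2)    # same 1 s, but an arousing oddball
print(clock.accumulator > baseline)  # True: more pulses, more time
```

The repetition-suppression account turns this logic around: rather than the oddball gaining pulses, the repeated stimulus loses apparent duration as its neural response fades.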

If the single-clock model is incorrect then how do we tell time? A/Prof Derek Arnold of The University of Queensland was part of a team that made a key discovery favouring a multi-clock model. In the experiments, participants viewed a small flickering stimulus for 15 seconds to adapt the eye. After adaptation, the perceived duration of a 600 ms stimulus displayed in the same visual region declined by as much as 20%. If the test stimulus was placed at another location, its perceived duration was unaffected, as that location remained unadapted.

This is most consistent with a new model of time in which independent clocks track the durations of different images. This model requires a shift in our thinking, as it suggests that we have no single mental representation of time, in contrast with our other senses, which have well-characterised representations such as the body map for the sense of touch.

The emerging view is that neural networks serve as these clocks, in much the way that the ripples in a pond can be used to establish when a pebble was dropped in.

In 2016, a team from UCLA tested whether brain slices from mice could learn to tell time. The cells were repeatedly stimulated with bursts of light lasting 100–500 ms, and their electrical responses were measured. Would cells trained with one stimulus interval show an enhanced response to that interval over other intervals? The answer was in the affirmative: cells in a dish could self-organise into time-keeping circuits and recognise the trained interval.

This in vitro finding, paired with the perceptual findings, fits well with a non-clock model, and suggests that it is nurture over nature when it comes to learning time.

Turning Back the Clock

Our processing and sense of time degrade as we age. The elderly are less able to correctly detect the order of sounds, and hence minuscule changes in the timing of syllables that help us distinguish different words may be lost to them. Can these deficits be fixed? Can an everyday person improve their sense of time?

In the late 19th century, psychologist William James wrote: “Like other senses, too, our sense of time is sharpened by practice”. At the time little was known about the veracity of this statement, as the only known data came from the self-experimentation of Max Mehner.

A century later, prominent neuroscientist Michael Merzenich tested these claims. Participants in the experiment completed almost 1000 trials per day for 10 days. On each trial, two tones were played, separated by a small interval, and participants had to select which tone they felt was longer. After 10 days of training, the difference in duration required for participants to correctly discriminate the tones had halved. This was great news, as it suggested that some types of hearing loss could be reversed. Further experiments confirmed that, in the right conditions, the learning generalises to other tones and intervals.
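The training effect can be caricatured with a toy trial function in which a listener answers correctly only when the tones differ by at least their just-noticeable difference (JND), and guesses otherwise. All thresholds and durations here are invented for illustration.

```python
# Toy sketch of one trial in the two-tone duration task: the listener
# picks the longer tone correctly if the difference exceeds their JND,
# and otherwise guesses at chance. Training shrinks the JND.
import random

def trial(standard_ms: float, difference_ms: float, jnd_ms: float) -> bool:
    """Return True if the listener correctly picks the longer tone."""
    if difference_ms >= jnd_ms:
        return True
    return random.random() < 0.5  # below threshold: a coin flip

# Before training a JND of ~50 ms makes a 30 ms difference a guess;
# after the threshold halves to ~25 ms, 30 ms is reliably detected.
print(trial(standard_ms=100, difference_ms=30, jnd_ms=25))  # True
```

Halving the JND, as Merzenich's participants did, means reliably hearing differences that were previously pure guesswork.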

Today’s brain-training apps make similar claims, but whether they work and have therapeutic potential is a question for time.

Jack Brooks is a PhD student investigating touch perception and body representations with Neuroscience Research Australia at The University of NSW.