Minkowski once said: “Space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality”. According to this, and according to what we know today from relativistic mechanics, it is not always possible to treat space and time as distinct and independent domains. Because the universe is a dynamical system, events are naturally described as points in a four-dimensional space-time continuum. However, a timeless space can still be extracted from this continuum, thanks to our ability to identify events occurring at different locations as simultaneous. It is therefore plausible that a disturbance in the neural processing of simultaneous events across spatial locations would disturb the representation of space itself.
 
The concept of simultaneity is rather elusive when it comes to the brain. Consider, for example, the simple act of knocking on a door. As our knuckles hit the door, a sound is produced, our retinas register the image of the contact, the tactile organs detect the mechanical impulse, and the proprioceptive system carries the information of an expected impact. Although these various sensorimotor signals originate from the same instantaneous event, the act of knocking, differences in transmission and processing rates mean they reach any integrative brain center with substantial temporal scatter. The fact that we nevertheless perceive these sensory events as happening “at the same time” must therefore be the outcome of active reconstruction processes that remove the temporal scatter of multimodal information, based on the prior assumption that the different sensory streams share a common cause.
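To make this idea concrete, here is a minimal sketch in Python of the kind of recalibration such a process could perform. The latency values and the `recalibrate` helper are illustrative assumptions, not a model of any specific neural circuit:

```python
# Illustrative only: arrival times (ms) of signals from a single knocking
# event, scattered by assumed modality-specific transmission latencies.
arrival_ms = {"auditory": 35.0, "visual": 70.0, "tactile": 20.0}

# Hypothetical learned estimates of each modality's latency.
learned_latency_ms = {"auditory": 30.0, "visual": 65.0, "tactile": 15.0}

def recalibrate(arrivals, latencies):
    """Subtract each modality's learned latency from its arrival time,
    recovering an approximately common event time under the prior
    that all signals share one cause."""
    return {m: t - latencies[m] for m, t in arrivals.items()}

print(recalibrate(arrival_ms, learned_latency_ms))
# -> {'auditory': 5.0, 'visual': 5.0, 'tactile': 5.0}: the scatter is gone,
#    and the three streams are attributed to the same instant.
```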
 
Here we consider the hypothesis that the neural processing of simultaneity is relevant not only to the perception of time. We suggest that it also shapes our sense of space, and in particular its proprioceptive representation.
 
To test this hypothesis, we use a state-of-the-art robotic haptic device to build a virtual reality environment that realizes a game of pong, engaging the visual, haptic, auditory and proprioceptive modalities. As presented in the figure below (left panel), the subject holds the handle of the robotic device, with her hand hidden beneath a screen. By moving the handle, she controls the displacement of a paddle presented on the screen and tries to hit a ball bouncing in the two-dimensional workspace. In the non-delayed mode (middle top panel) the hand and the paddle are aligned; in the delayed mode (middle bottom panel), a delay can be introduced between the modalities.
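As a sketch of how such a delay can be injected, the following Python snippet assumes a fixed sampling rate and delays only the hand-to-paddle visual mapping; the class name and parameters are illustrative, not our experimental code:

```python
from collections import deque

class DelayedPaddle:
    """Illustrative mapping from hand position to a delayed paddle position.

    Assumes position samples arrive at a constant rate; the delay is
    realized as a FIFO buffer of delay_s * sample_rate_hz past samples.
    """

    def __init__(self, delay_s: float, sample_rate_hz: float):
        n = max(1, round(delay_s * sample_rate_hz))
        # Pre-fill with zeros so the paddle starts at the origin.
        self._buffer = deque([0.0] * n, maxlen=n)

    def update(self, hand_pos: float) -> float:
        delayed = self._buffer[0]      # oldest sample: the rendered paddle
        self._buffer.append(hand_pos)  # newest sample enters the queue
        return delayed

# A 3 ms delay at 1 kHz sampling: the paddle lags the hand by 3 samples.
paddle = DelayedPaddle(delay_s=0.003, sample_rate_hz=1000.0)
print([paddle.update(float(t)) for t in range(6)])
# -> [0.0, 0.0, 0.0, 0.0, 1.0, 2.0]
```

The same buffering scheme can in principle be applied independently to the haptic and auditory channels, which is what allows delays to be introduced between modalities rather than to all of them at once.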
 
Preliminary results (right panel) show the initial positions (red) and blind-reaching endpoint positions (blue) of a representative subject before 30 minutes of training with the delayed pong. The reaching endpoint positions after that practice are depicted in green. On average, our five subjects show significantly longer reaching movements after training (t-test, P = 0.03).
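For reference, a comparison of this kind could be run as below; a paired test is one natural choice for within-subject before/after data, and the two arrays are placeholder values standing in for per-subject mean reaching amplitudes, not our measurements:

```python
from scipy import stats

# Placeholder per-subject mean reaching amplitudes (cm), NOT our data:
# one value per subject, before and after the delayed-pong training.
before = [11.2, 10.8, 12.1, 11.5, 10.9]
after = [12.4, 11.9, 13.0, 12.6, 11.8]

# Paired comparison: each subject serves as their own control.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```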
 
[Figure: Pong.png. Left: experimental setup. Middle: non-delayed (top) and delayed (bottom) modes. Right: blind-reaching endpoints of a representative subject before (blue) and after (green) training, with initial positions in red.]
 
This study aims to contribute to our understanding of how the brain represents space and time, and to provide preliminary evidence for future work with stroke survivors suffering from spatial representation deficits such as hemispatial neglect.