Cognitive robotics is a multi-disciplinary field that draws on research in robotics, artificial intelligence, and neuroscience to design robots that can perceive their environment through multisensory channels, plan movements, anticipate the outcome of their actions and the actions of other agents, and learn. These robots can work in collaboration and in physical contact with humans in a variety of applications such as medicine, agriculture, and industrial automation. Many researchers across campus are members of both the Zlotowski Center for Neuroscience and the ABC Robotics Initiative; they use neuroscience theories to improve the design and control of robots, and use robots to study neuroscience.
Neuro-engineering is a discipline that uses engineering techniques to understand, repair, replace, enhance, interface with, or otherwise exploit neural systems. Neuro-engineering draws on computational and experimental neuroscience to create models from the system level down to the level of single neurons. It also draws on electrical and biomedical engineering to process signals from neural tissue, and encompasses elements from robotics, cybernetics, computer engineering, neural tissue engineering, materials science, and nanotechnology. The goals of neuro-engineers include restoring and augmenting human function through direct interactions between the nervous system and artificial devices, and understanding how information is coded and processed in the sensory and motor systems and how it can be manipulated through interactions with artificial devices, including brain-computer interfaces and neuroprosthetics.
Cognitive Robotics and Neuro-engineering Researchers
My lab conducts research mostly related to computer vision and robotics, using machine learning techniques and specifically deep neural networks. Research topics include:
- Visualization and modeling of the activity in deep artificial neural networks
- Development of modular and efficient network architectures
- Computer vision problems such as visual object tracking and parts-and-wholes interaction
- Addressing robotics tasks using deep learning approaches
- Applying deep learning and computer vision techniques to several problem domains, including ultrasound imaging and agricultural phenotyping
The interdisciplinary Computational Vision Lab (iCVL) studies biological (and in particular, human) vision and machine vision from both theoretical and applied perspectives. We bring together these fields to (1) develop algorithmic solutions to challenges in computer vision and image understanding, (2) devise computational explanations of biological visual function, and (3) employ insights from studying vision for the exploration of other types of information processing, both sensory and cognitive. To meet these goals, research is highly interdisciplinary, involving various combinations of computational and mathematical work, machine learning techniques, behavioral exploration and visual psychophysics (with both humans and animals), and inquiry into visual neuroscience.
Our research focuses on interfacing biology with microelectronics. In particular, we study the integration of biological materials (such as DNA, proteins, and cells) with micro- and nano-electronic devices that will harness their unique functionalities for the development of the next generation of personalized health monitoring applications (such as electronic skin patches and implantable sensors that can continuously monitor our health).
My research in the Intelligent Systems Engineering Laboratory (www.bgu-isel.org) in the Department of Industrial Engineering and Management at Ben-Gurion University of the Negev focuses on the analysis and engineering of intelligent systems capable of dexterous motion. We develop deterministic and stochastic models for motion generation and representation, and apply them to the analysis of human motion and to the synthesis of robotic motion. Based on our models, we study the interaction between perception and action in physical, virtual, and augmented environments. Finally, we apply our findings to design and construct intelligent, integrated systems capable of dexterous motion in various application fields (agriculture, rehabilitation, the digital factory). In systems developed for agriculture, we hope our studies will enhance production yield along with sustainability. In systems developed for upper-limb rehabilitation, we hope our studies will lead to improvements in patients' quality of life. In systems developed for the digital factory, we hope to attain robustness and high performance in remote scenarios.
Studies motor control with behavioral, electrophysiological, and neuroimaging techniques, with a particular interest in understanding the function of the cerebellum.
In the Levy-Tzedek Lab, we study the effects of aging and disease (such as stroke) on motor control and motor learning, and we design rehabilitation tools and techniques.
Specifically, we design gamified human-robot interactions for healthy aging, as well as for post-stroke rehabilitation.
The Maidenbaum Lab studies the interaction between humans and their surrounding environment: how do we represent our spatial surroundings in our brain? How are these representations modulated by different sensory input channels and by memory? How are real, virtual, and augmented environments coded? And how can we use insights from this basic science to rehabilitate, assist, and augment human spatial skills?
We use naturalistic gamified paradigms in order to test human spatial memory and navigation in healthy participants and in patients, and computational models in order to decode environmental features such as directions, locations, and targets.
The lab is also interested in non-physical spaces, aiming to extend findings from spatial cognition to other dimensions such as time, social and abstract concept spaces.
My students and I apply neuroscience theories about human sensorimotor control, perception, adaptation, learning, and skill acquisition in the development of human-operated medical and surgical robotic systems. We also use robots, haptic devices, and other mechatronic devices as a platform to understand the human sensorimotor system in real-life tasks like surgery, and in virtual tasks like virtual reality games or surgical simulation. We hope that this research will improve the quality of treatment for patients, facilitate better training of surgeons, advance the technology of teleoperation and haptics, and advance our understanding of the brain.
The lab develops computational and algorithmic tools for processing and analyzing biological and medical images. Current research projects include object detection, segmentation, atlas construction, and shape analysis in high-throughput microscopy and magnetic resonance imaging (MRI), with a particular emphasis on the brain.
Research in the lab of Dr. Oren Shriki uses mathematical analyses of brain activity and machine learning techniques to develop novel diagnostic tools for neurological and psychiatric disorders. The lab also develops computational models of neuronal networks to gain insights into how changes in neural dynamics lead to brain disorders and how neural plasticity may assist in restoring healthy neural dynamics. A major focus of the lab is on translational neuroscience and neurotechnologies, such as brain-computer interfaces, a system for real-time epileptic seizure prediction, and a novel pilot helmet that monitors the pilot's brain.