The lab aims to provide a teaching and research platform for students to learn, investigate, develop, and apply advanced methodologies in: telerobotic control using various modalities (hand gestures, voice, virtual reality); advanced kinematic modeling; imaging and viewpoint optimization; artificial intelligence and machine vision algorithms; and dexterous grasping.
 
Academic Advisor: Sigal Berman
Technical Manager: Yossi Zehavi

The laboratory is equipped with a 6 degrees of freedom (DOF), 6 kg payload articulated manipulator (Motoman); a 5 DOF, 2 kg payload articulated robot (CRS); an overhead rail system with three hoists for mounting vision equipment; a large array of cameras and lenses; a Bumblebee 3D stereo camera; and a variety of sensors, including an electronic scale.

Object-action telerobotics
Researchers: Sigal Berman, Jason Friedman, Tamar Flash.
Partners: Weizmann Institute of Science
A telerobotic methodology based on identifying the intended actions of the human operator from hand configuration during object grasping. The robot then selects the grasp most suitable for performing the required action, based on both task requirements and robot capabilities.
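As a rough illustration of the selection step, the sketch below rates hypothetical grasp candidates by a weighted combination of task fit and robot feasibility; the names, weights, and scores are invented for illustration and are not the lab's published method.

```python
# Minimal sketch of action-based grasp selection (hypothetical scores,
# not the published method): each candidate grasp is rated for how well
# it supports the recognized action and how feasible it is for the robot.

def select_grasp(candidates, action):
    """Return the candidate grasp with the best combined score.

    candidates: list of dicts with precomputed scores in [0, 1]:
      'task_fit'    - suitability of the grasp for the intended action
      'feasibility' - reachability/force-closure score for the robot
    action: the action identified from the operator's hand configuration.
    """
    def score(g):
        # Weighted sum; the 0.6/0.4 weights are illustrative only.
        return 0.6 * g['task_fit'][action] + 0.4 * g['feasibility']
    return max(candidates, key=score)

grasps = [
    {'name': 'power',     'task_fit': {'pour': 0.9, 'handover': 0.4}, 'feasibility': 0.8},
    {'name': 'precision', 'task_fit': {'pour': 0.3, 'handover': 0.9}, 'feasibility': 0.7},
]
print(select_grasp(grasps, 'pour')['name'])  # -> 'power'
```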
Smooth 3D robotic motion
Researchers: Sigal Berman, Dario Liebermann, Joe McIntyre.
Partners: Tel Aviv University, Université Paris Descartes
Dynamic trajectory changes are required in many robotic applications in which the final destination is not known at movement initiation. Such applications include object tracking, telerobotics, and operation in dynamic environments. We develop real-time algorithms for 3D robotic motion based on advanced modeling of human arm motion.
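One widely used model of human reaching is the minimum-jerk trajectory (Flash and Hogan, 1985); the sketch below generates such a point-to-point trajectory as a static baseline. Dynamic target changes, the focus of this project, require extending beyond this formulation.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Minimum-jerk point-to-point trajectory, a standard model of human
    reaching, applied per axis in 3D.
    x0, xf: 3D start/end positions; T: movement duration in seconds."""
    t = np.linspace(0.0, T, n)
    s = t / T
    # Fifth-order polynomial blend; velocity and acceleration vanish at both ends.
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return x0 + (xf - x0) * blend[:, None]

traj = minimum_jerk(np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.1, 0.2]), T=1.0)
```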
Multi-viewpoint optimization
Researchers: Doron Marrese, Helman Stern, Sigal Berman.
When setting up a machine vision system, camera and light-source placement can greatly influence system performance. Placement optimization is important for applications with multiple light sources and cameras, as well as in the single-camera case. Tool condition monitoring (TCM) is important for reducing scrap and rework. We are developing viewpoint optimization methods and validating them in a TCM application.
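A minimal sketch of the placement idea, assuming a toy visibility score; real viewpoint optimization uses far richer sensor models of occlusion, resolution, and illumination.

```python
import numpy as np

# Illustrative grid search over candidate camera positions: score each pose
# by a hypothetical visibility measure and keep the best one.

def visibility_score(cam_pos, target, normals):
    """Toy score: mean alignment between surface normals and the view direction."""
    view = target - cam_pos
    view = view / np.linalg.norm(view)
    return float(np.mean(normals @ -view))

target = np.zeros(3)
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.2, 0.98]])  # hypothetical surface normals
candidates = [np.array([x, y, 1.0]) for x in (-1, 0, 1) for y in (-1, 0, 1)]
best = max(candidates, key=lambda c: visibility_score(c, target, normals))
```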

A human-robot collaborative reinforcement learning algorithm
Researchers: Helman Stern, Uri Kartoun, Yael Edan
A reinforcement learning algorithm, CQ(λ), enables collaborative learning between a robot and a human. The algorithm expedites the learning process by taking advantage of human intelligence and expertise. The robot has enough self-awareness to adaptively switch its collaboration level from autonomous (self-performing) to semi-autonomous (human intervention and guidance). This variable-autonomy approach is demonstrated and evaluated using a fixed-arm robot that learns the optimal shaking policy for emptying the contents of a plastic bag.
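To convey the flavor of the approach, here is a generic Q(λ) update paired with a simple performance-based switching rule; the published CQ(λ) collaboration criterion is more elaborate, and all sizes and parameters below are illustrative.

```python
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))      # action-value estimates
E = np.zeros_like(Q)                     # eligibility traces
alpha, gamma, lam = 0.1, 0.95, 0.8       # illustrative learning parameters

def update(s, a, r, s_next):
    """One Q(lambda) backup spread over all traced state-action pairs."""
    delta = r + gamma * Q[s_next].max() - Q[s, a]
    E[s, a] += 1.0
    Q[:] += alpha * delta * E
    E[:] *= gamma * lam

def collaboration_level(recent_rewards, threshold=0.0):
    """Ask for human guidance when recent performance stagnates (illustrative rule)."""
    return 'autonomous' if np.mean(recent_rewards) > threshold else 'semi-autonomous'
```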
Scheduling robotic toast making by two-level hierarchical reinforcement learning
Researchers: Helman Stern, Amit Gil, Yael Edan, Uri Kartoun.
A reinforcement learning algorithm is developed for scheduling a single transfer agent (a robot arm) through a set of sub-tasks in a sequence that achieves optimal task execution. Execution of a complex task was demonstrated using a Motoman UP-6 six degrees of freedom fixed-arm robot applied to a toast-making system. The algorithm finds a sequence of toast transitions with the objective of minimal completion time for a multiple-toast-making system. Experiments examined the trade-offs between exploration of the state space and exploitation of the information already acquired.
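The exploration-exploitation trade-off examined in the experiments is commonly controlled with an ε-greedy selection rule; a minimal sketch follows (the state and sub-task encodings are hypothetical).

```python
import random

def epsilon_greedy(Q, state, eps=0.1):
    """Pick a random sub-task with probability eps, else the best known one.
    Q: table of estimated returns, indexed as Q[state][action]."""
    n_actions = len(Q[state])
    if random.random() < eps:
        return random.randrange(n_actions)                    # explore the state space
    return max(range(n_actions), key=lambda a: Q[state][a])   # exploit acquired knowledge
```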
Optimal collaboration in human-robot target recognition
Researchers: Avital Bechar, Joachim Meyer, Yael Edan.
A methodology was developed for determining the best collaboration level for an integrated human-robot target recognition system in unstructured environments. Four human-robot collaboration levels for target recognition tasks were defined, tested, and evaluated. The collaboration levels span an extensive range of automation, from manual to fully autonomous.
Performance analysis of human-robot collaboration in target recognition tasks
Researchers: Yuval Oren, Avital Bechar, Yael Edan
This research evaluated the performance of an integrated human-robot system in target recognition tasks. The system model comprises four human-robot collaboration levels, designed specifically for target recognition tasks and spanning an extensive range of automation, from manual to fully autonomous. The objective function quantifies the influence of robot, human, environment, and task parameters through a weighted sum of performance measures, and enables determination of the optimal collaboration level based on these parameters. To simplify the analysis of the function, a modified signal detection theory was applied.
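A minimal sketch of evaluating such a weighted-sum objective for one collaboration level under equal-variance signal detection theory; the weights and parameter values below are hypothetical, not those of the published model.

```python
from statistics import NormalDist

N = NormalDist()

def rates(d_prime, cutoff):
    """Hit and false-alarm probabilities for sensitivity d' and cutoff c."""
    hit = 1 - N.cdf(cutoff - d_prime / 2)
    false_alarm = 1 - N.cdf(cutoff + d_prime / 2)
    return hit, false_alarm

def objective(d_prime, cutoff, p_target, w_hit=1.0, w_miss=-1.0, w_fa=-0.5):
    """Weighted sum of performance measures for one operating point."""
    hit, fa = rates(d_prime, cutoff)
    return (p_target * (w_hit * hit + w_miss * (1 - hit))
            + (1 - p_target) * w_fa * fa)

print(objective(d_prime=1.5, cutoff=0.2, p_target=0.3))
```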
Influence of human reaction time in human-robot collaborative target recognition systems
Researchers: Dror Yashpe, Avital Bechar, Yael Edan
A reaction-time model, based on Murdock (1985), is incorporated into Bechar's model and analyzed.
The analysis reveals new collaboration levels that are preferable when the cost of human reaction time is high. In these collaboration levels, the human concentrates only on objects the robot recommended or, in other cases, only on objects the robot did not mark. Since the human ignores one type of object, the system reduces the total reaction-time cost, resulting in better performance.
The human ignores objects by setting the cutoff point to an extreme value. The analysis shows how the system type, human sensitivity, the probability that an object is a target, and the time cost all influence the phenomenon of extreme cutoff-point selection.
When human sensitivity is low, the human discriminates poorly between targets and other objects. When the system gives high priority to avoiding false alarms, the human prefers an extreme positive cutoff point, resulting in no objects marked as targets and no false alarms. For systems that give high priority to not missing targets, an extreme negative cutoff point is preferred, resulting in all objects marked as targets and no misses. The analysis shows that the time costs affect the position of the optimal cutoff point: the phenomenon introduced above arises at higher human sensitivities as the time cost increases. Furthermore, the analysis shows that collaboration with a human is less profitable when the time cost is high.
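The extreme-cutoff phenomenon can be reproduced with a small numerical sweep; the payoff values and the way the reaction-time cost enters below are illustrative assumptions, not the published model.

```python
import numpy as np
from statistics import NormalDist

N = NormalDist()
d_prime, p_t = 0.5, 0.3                             # low human sensitivity, target prior
v_hit, c_miss, c_fa, c_rt = 1.0, -0.5, -2.0, -0.3   # payoffs and reaction-time cost

def expected_value(cutoff):
    hit = 1 - N.cdf(cutoff - d_prime / 2)
    fa = 1 - N.cdf(cutoff + d_prime / 2)
    marked = p_t * hit + (1 - p_t) * fa             # objects the human responds to
    return (p_t * (v_hit * hit + c_miss * (1 - hit))
            + (1 - p_t) * c_fa * fa
            + c_rt * marked)                        # each response costs reaction time

cutoffs = np.linspace(-4, 4, 161)
best = cutoffs[np.argmax([expected_value(c) for c in cutoffs])]
print(f"optimal cutoff: {best:.2f}")                # drifts to an extreme as c_rt grows
```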
Algorithms for dynamic switching of a collaborative human-robot system in target recognition tasks
Researchers: Itshak Tkach, Yael Edan, Avital Bechar
A set of algorithms was developed for real-time dynamic switching between collaboration levels in a human-robot target recognition system. The algorithms were developed for a closed-loop controller that maximizes system performance despite deviations in parameter values, and were evaluated through a thorough simulation analysis. These developments enable smooth real-time adaptation of the combined human-robot system to changes in the environment, the human operator, and robot performance. System performance was analyzed in simulations for a variety of target probability distributions. The improvement achievable by each algorithm was calculated as the mean over 200 independent simulations per target probability distribution. The numerical results indicated that the developed dynamic-switching algorithms improved system performance.
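A minimal sketch of a margin-based switching rule of the kind described; the level names and performance estimates are hypothetical, and the published algorithms use different criteria.

```python
def switch_level(estimates, current, margin=0.05):
    """estimates: dict mapping collaboration level -> online performance estimate.
    Switch only when another level clearly beats the active one, to avoid
    chattering between levels as estimates fluctuate."""
    best = max(estimates, key=estimates.get)
    if estimates[best] - estimates[current] > margin:
        return best          # switch: clear improvement expected
    return current           # hold the current level

# Hypothetical levels and estimates, updated online in a real system.
perf = {'manual': 0.62, 'human-supervised': 0.71, 'robot-led': 0.69, 'autonomous': 0.55}
level = switch_level(perf, current='manual')   # -> 'human-supervised'
```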
Biomechanical energy conversion technology
Researchers: Raziel Riemer, Amir Shapiro, Itzik Melzer
Graduate student opportunity!
This project aims to develop technology that extracts energy from the human body to provide the electrical power needed for external devices such as radio communication, GPS, cell phones, and more. Such technology would reduce user dependence on electrical sources, which are unavailable in some circumstances (hiking, in third-world countries, and during military operations). This type of technology could also be applied to the design of better prostheses and exoskeletons.
Finding intuitive hand gestures for machine interaction
Researchers: Helman Stern, Juan Wachs, Yael Edan
Development and collection of numerical indices that quantify hand gesture intuitiveness, where intuitiveness is the cognitive association between a command or intent and its physical gestural expression. Intuitiveness factors are costly and time-consuming to obtain, hence the need for automated methods for their acquisition. Software was developed to automate the collection of subjects' gestural responses to two sets of command stimuli: car navigation and robot control. We found a power function analogous to Zipf's law, where a small number of gestures, out of the many possible, are used to express most of the commands. Evidence of matching of complementary gesture-command pairs was found. Gesture preferences are highly individualized, even within a single culture, providing evidence against the hypothesis of the universality of gestures.
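The Zipf-like pattern can be checked by fitting a power law to rank-ordered gesture frequencies in log-log space; the counts below are invented for illustration.

```python
import numpy as np

# Fit frequency ~ rank**(-s) by linear regression in log-log coordinates.
counts = np.array([120, 60, 38, 29, 22, 18, 15, 12, 10, 8])  # uses per gesture, sorted
ranks = np.arange(1, len(counts) + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"power-law exponent s ~ {-slope:.2f}")
```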
Improving joint torque calculations through an optimization method for segments, angles, and body segment parameters
Researchers: Raziel Riemer, Elizabeth T. Hsiao-Wecksler, Placid M. Ferreira
Inverse dynamics is a procedure commonly used in the biomechanical analysis of human movement; it calculates reaction forces and torques at the body joints. Two main sources of error in inverse dynamics calculations are inaccuracies in measured segmental motions and in estimates of anthropometric body segment parameters (BSP). We are developing a method that uses the over-determined nature of inverse dynamics to formulate a nonlinear optimization problem that reduces the error in the calculated joint torques.
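A toy version of the idea for a single rotating segment: treat the segment's inertial parameter as an unknown, and let an optimizer shrink the mismatch between a top-down (kinematics-based) torque estimate and a bottom-up (force-measurement-based) one. All signals below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 1, 200)
theta = 0.5 * np.sin(2 * np.pi * t)               # measured joint angle (rad)
I_true = 0.12                                     # true segment moment of inertia
alpha = np.gradient(np.gradient(theta, t), t)     # angular acceleration
tau_measured = I_true * alpha + 0.01 * np.random.randn(t.size)  # "bottom-up" torque

def residual(params):
    """Top-down torque minus bottom-up torque; zero for perfect BSP estimates."""
    I = params[0]
    return I * alpha - tau_measured

fit = least_squares(residual, x0=[0.2])
print(f"estimated inertia: {fit.x[0]:.3f}")       # converges near 0.12
```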

Real-time hand gesture interface for browsing medical images
Researchers: Helman Stern, Juan Wachs, Yael Edan, Michael Gillam, Craig Feied, Mark Smith, Jon Handler
Partners: The Institute of Medical Informatics, Washington, DC
A gesture interface was developed to let users, such as doctors and surgeons, browse medical images in a sterile medical environment. A vision-based gesture capture system interprets the user's gestures in real time to manipulate objects in an image visualization environment. The gesture system relies on real-time robust tracking of the user's hand based on a color-motion fusion model, in which the relative weights applied to the motion and color cues are adaptively determined according to the state of the system. A state machine switches between gestures such as directional navigation, zoom, and rotate, as well as a sleep state. A beta test of a system prototype was conducted during a live brain biopsy operation, in which neurosurgeons browsed MRI images of the patient's brain using the sterile hand gesture interface. The surgeons indicated that the system was easy to use and fast, with high overall satisfaction.
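A sketch of the adaptive cue-fusion step, with an invented weighting rule; the published model determines the weights from the system state differently.

```python
import numpy as np

def fuse(color_lik, motion_lik, motion_energy, k=5.0):
    """Blend per-pixel likelihoods from a skin-color model and from frame
    differencing. color_lik, motion_lik: HxW maps in [0, 1]; motion_energy:
    mean absolute frame difference, used here to set a saturating weight so
    the color cue dominates when the hand is nearly static."""
    w_motion = motion_energy / (motion_energy + 1.0 / k)
    return (1 - w_motion) * color_lik + w_motion * motion_lik

h = w = 4
color = np.random.rand(h, w)    # stand-ins for real likelihood maps
motion = np.random.rand(h, w)
fused = fuse(color, motion, motion_energy=0.2)
```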
TELE-GEST: Real-time hand gesture robotic control (vision-based gesture language)
Researchers: Helman Stern, Juan Wachs, Uri Kartoun, Yael Edan
This work describes a real-time system for controlling a telerobotic arm through visual recognition of hand gestures, using an optimized fuzzy C-means (FCM) algorithm. To command a robot movement, the user evokes a gesture from a vocabulary of thirteen gestures. A supervised fuzzy C-means algorithm is trained to classify the user's hand gestures from salient image features. Results revealed a recognition accuracy of 98.90% for the user-dependent system and 98.21% for the user-independent system.
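A minimal unsupervised fuzzy C-means implementation, to show the clustering core of such a classifier; the published system adds supervision and parameter optimization on top, and the feature data below are synthetic.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50):
    """Fuzzy C-means. X: (n, d) feature vectors; c: clusters; m: fuzzifier."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))        # fuzzy memberships (n, c)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))                # memberships from distances
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 4])
centers, U = fcm(X, c=2)
labels = U.argmax(axis=1)   # crisp gesture class per sample
```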

Fuzzy system for surveillance picture understanding and multi-object tracking
Researchers: Helman Stern, Armin Shmilovici, Uri Kartoun
The last stage of any automatic surveillance system is the interpretation of the information acquired from its sensors. This work focuses on the interpretation of motion pictures taken by a surveillance camera, i.e., image understanding. A prototype fuzzy expert system is presented that can describe, in natural-language-like terms, simple human activity in the field of view of a surveillance camera. The system has three components: a pre-processing module for image segmentation and feature extraction, an object-identification fuzzy expert system (static model), and an action-identification fuzzy expert system (dynamic temporal model). The system was tested on a video segment of a pedestrian.
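A tiny fuzzy-inference sketch in the spirit of the action-identification module; the membership functions and activity labels are invented for illustration.

```python
def trimf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_action(speed):
    """Map an object's speed (m/s) to the best-matching fuzzy activity label."""
    memberships = {
        'standing': trimf(speed, -0.1, 0.0, 0.4),
        'walking':  trimf(speed, 0.2, 1.2, 2.2),
        'running':  trimf(speed, 1.8, 3.5, 6.0),
    }
    return max(memberships, key=memberships.get)

print(classify_action(1.0))  # -> 'walking'
```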