Currently, I work on the development of hybrid schemes that combine numerical and learning methods for performing dexterous, in-hand manipulation with simple, adaptive robot hands. A constrained optimization scheme utilizes analytic models that describe the kinematics of adaptive hands and classic conventions for modelling the manipulation problem quasistatically. This scheme forms the core of a simulation module that accounts for static phenomena and also provides intuition about the problem mechanics. A machine learning (ML) scheme is used to split the problem space, deriving task-specific manipulation models that account for difficult-to-model dynamic phenomena (e.g., slipping). In this respect, the ML scheme: 1) employs the simulation module to explore the feasible manipulation paths for a specific hand-object system, 2) feeds the feasible paths to the experimental setup (a robot arm-hand system), which collects manipulation data in an automated fashion, 3) uses clustering techniques to group together similar manipulation trajectories, 4) trains a set of task-specific manipulation models and 5) uses classification techniques to trigger a task-specific model based on the user-provided task specifications.
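Steps 3-5 of this pipeline can be sketched as follows. This is a minimal stand-in on synthetic data, not the actual implementation: the trajectories, the "motor command" targets and the nearest-centroid trigger are all hypothetical, and the task-specific "models" are reduced to per-cluster means for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "manipulation trajectories": 2-D object displacements drawn
# from two distinct manipulation strategies (hypothetical data).
trajectories = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.1, size=(20, 2)),   # strategy A
    rng.normal(loc=[1.0, 1.0], scale=0.1, size=(20, 2)),   # strategy B
])

def kmeans(X, k=2, iters=20):
    """Minimal k-means; deterministic init (first/last sample) for the sketch."""
    centroids = X[[0, -1]].astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

# Step 3: cluster similar manipulation trajectories.
centroids, labels = kmeans(trajectories)

# Step 4: train one task-specific model per cluster (here just the cluster
# mean of a fabricated "motor command" stands in for a learned model).
commands = trajectories @ np.array([0.5, 0.5])
models = {j: commands[labels == j].mean() for j in range(2)}

# Step 5: a nearest-centroid classifier triggers the task-specific model
# that matches the user-provided task specification.
def trigger(task_spec):
    j = int(np.argmin(np.linalg.norm(centroids - task_spec, axis=1)))
    return models[j]

print(trigger(np.array([1.0, 1.0])))
```

In the actual scheme the per-cluster models are trained on the automatically collected experimental data rather than on cluster statistics; the sketch only shows how the clustering, model set and triggering classifier fit together.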
This project focuses on the automated extraction of dexterous, within-hand manipulation primitives that simplify the control of adaptive hands. More precisely, a constrained optimization scheme is employed by a simulation module to explore the feasible paths for each robot hand design and to provide good initial estimates. Based on these estimates, an automated experimental setup gathers data from numerous manipulation trials without supervision, detecting unstable grasps or the loss of a particular grasp. The raw manipulation data are stored in a database and a clustering method is used to group together similar manipulation strategies. The feature variables used are: i) the object pose (3 variables for 2D tasks and 6 variables for 3D tasks) and ii) the equivalent motor positions at the beginning and the end of the manipulation task. The data of the identified groups are projected onto a lower-dimensional manifold using a dimensionality reduction technique. Appropriate primitive / synergy matrices are extracted that facilitate intuitive, simplified control of the examined devices.
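The projection and primitive-matrix extraction can be illustrated with PCA via the singular value decomposition (assuming PCA as the dimensionality reduction technique; the data, dimensions and activations below are synthetic placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical manipulation data from one identified cluster: rows are
# trials, columns are the recorded feature variables (pose + motor positions).
latent = rng.normal(size=(50, 2))                 # 2 underlying primitives
mixing = rng.normal(size=(2, 6))                  # 6 recorded variables
data = latent @ mixing + 0.01 * rng.normal(size=(50, 6))

# PCA via SVD: the leading right singular vectors form the primitive /
# synergy matrix spanning the cluster's low-dimensional manifold.
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
synergy = Vt[:2]                                  # 2 x 6 primitive matrix

# Simplified control: command the device with 2 synergy activations
# instead of all 6 original variables.
activation = np.array([1.0, -0.5])
motor_command = mean + activation @ synergy

# The 2-D projection reconstructs the 6-D data almost perfectly.
recon = mean + (data - mean) @ synergy.T @ synergy
print(np.allclose(recon, data, atol=0.1))
```

Because the synthetic data are (nearly) rank two, two primitives suffice; in practice the number of retained components is chosen from the explained variance of the experimental cluster data.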
I proposed a methodology for discriminating between different objects using a single grasp with an underactuated robot hand equipped with force sensors. The technique combines the benefits of simple, adaptive robot grippers (which can grasp successfully without prior knowledge of the hand or the object model) with an advanced machine learning technique for classification (Random Forests). Unlike prior work in the literature, the proposed methodology does not require object exploration, release or re-grasping, and it works for arbitrary object positions and orientations within reach of a grasp. The feature space consists only of the actuator positions and the force sensor measurements at two specific time instances of the grasping process. A feature-importance calculation procedure identifies the most crucial features, determining the minimum number of sensors required. The efficiency of the proposed method is validated with two experimental paradigms involving: i) two sets of fabricated model objects with different shapes, sizes and stiffness and ii) a set of everyday-life objects.
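The classification and feature-importance steps can be sketched with scikit-learn's Random Forest (assuming scikit-learn is available; the feature vectors below are synthetic, with only the last two "sensor" features carrying class information, so the importance ranking should recover them):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical single-grasp feature vectors: actuator positions and force
# readings sampled at two time instances (8 features total). Class-1
# objects are "stiffer", so only features 6 and 7 carry class information.
n = 200
X = rng.normal(size=(n, 8))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1, 6:] += 2.0

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature-importance ranking identifies the most crucial features,
# i.e. the minimum sensing needed for reliable discrimination.
ranking = np.argsort(clf.feature_importances_)[::-1]
print(sorted(ranking[:2].tolist()))
```

In the actual methodology the importances are computed over real grasp data, and sensors whose features rank consistently low are candidates for removal.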
Upon contact with the object surface, adaptive hands tend to reconfigure, imposing certain parasitic object motions. This project focuses on estimating and compensating for these motions using an appropriate motion of the robot arm. More specifically, we synthesize a constrained optimization scheme that provides theoretical estimates of these parasitic object motions. Subsequently, a machine learning scheme is formulated to estimate the parasitic object motion from the contact forces exerted on the object. The optimization scheme provides insight into the problem mechanics but requires extensive knowledge of the hand and object parameters; the machine learning scheme requires no information about the hand-object system parameters but offers no such insight. The two methods are compared and their advantages and disadvantages are discussed. The proposed methods can be used with any type of adaptive hand and their efficiency is validated through extensive experimental paradigms.
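The model-free estimation and compensation idea can be sketched as a regression from contact forces to parasitic motion. This is a deliberately simplified stand-in: the data and the linear force-to-motion map below are fabricated, and the actual learning scheme need not be linear.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: contact forces on the object (inputs) and the
# parasitic planar object motion induced as the hand reconfigures (outputs).
forces = rng.normal(size=(100, 3))
true_map = np.array([[0.02, 0.0], [0.0, 0.05], [0.01, -0.03]])
motion = forces @ true_map + 0.001 * rng.normal(size=(100, 2))

# Model-free scheme: fit a map from measured forces to parasitic motion,
# with no knowledge of the hand-object parameters.
W, *_ = np.linalg.lstsq(forces, motion, rcond=None)

# Compensation: command the arm with the negative of the predicted motion.
f_new = np.array([1.0, -0.5, 0.2])
arm_correction = -(f_new @ W)
print(np.allclose(W, true_map, atol=0.01))
```

The optimization-based counterpart would instead predict the same motion from analytic hand-object models, which is what makes the two schemes directly comparable.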
Advanced learning schemes for EMG-based interfaces have been proposed, decoding human motion and/or intention from myoelectric activations. The learning schemes take advantage of both a classifier and a regressor that cooperate advantageously: the classifier splits the task space, and task-specific regressors confront the non-linear relationship between the EMG signals and the human motion. Three different task features are discriminated: the subspace to move towards, the object to be grasped and the task to be executed with the object. Task-specific models provide better estimation accuracy than "general" models.
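The classifier-regressor cooperation can be sketched as follows, on synthetic "EMG" data where each task obeys a different input-output mapping. The nearest-centroid classifier and the linear per-task regressors are illustrative simplifications, not the actual models used.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical EMG data for two tasks: each task has a different linear
# EMG-to-motion mapping, so a single "general" regressor would average them.
maps = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0, 1.0])]
offsets = [0.0, 3.0]                      # tasks occupy different EMG regions
emg = np.vstack([rng.normal(size=(80, 4)) + o for o in offsets])
task = np.repeat([0, 1], 80)
motion = np.concatenate([emg[task == t] @ maps[t] for t in (0, 1)])

# Classifier stage: nearest task centroid splits the task space.
centroids = np.array([emg[task == t].mean(axis=0) for t in (0, 1)])

# Regressor stage: one task-specific linear model per task.
regs = [np.linalg.lstsq(emg[task == t], motion[task == t], rcond=None)[0]
        for t in (0, 1)]

def decode(x):
    """Route the EMG sample to its task, then apply the task-specific model."""
    t = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    return float(x @ regs[t])
```

Because each regressor only has to fit its own region of the task space, each task-specific fit is exact here, whereas a single global linear model could not represent both mappings at once.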
A series of affordable, lightweight, intrinsically compliant, underactuated robot hands has been developed. The OpenBionics robot hands cost less than $100 and weigh less than 200 g, while our new anthropomorphic prosthetic hand costs less than $200 and weighs less than 300 g. The prosthetic hand takes advantage of a novel selectively lockable differential mechanism (a variation of the whiffle-tree differential mechanism) that can block the motion of each finger using a simple button. A total of 16 different index, middle, ring and pinky combinations can be implemented using the differential mechanism and a single actuator. These can be combined with the 9 discrete positions of the thumb to produce a total of 144 different grasping postures. The proposed prosthetic hand concept won the 2015 Robotdalen International Innovation Award, and the OpenBionics team is currently preparing a commercial version in collaboration with Robotdalen (www.robotdalen.se). The same concept also won 2nd Prize in the 2015 Hackaday Prize (www.hackaday.io/prize).
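The posture count follows directly from the mechanism's combinatorics, which a few lines make explicit (the lock/drive encoding below is only a notational convenience):

```python
from itertools import product

# Each of the four fingers (index, middle, ring, pinky) can be independently
# locked or driven by the single actuator via the selectively lockable
# differential: 2^4 = 16 finger combinations.
finger_combos = list(product([0, 1], repeat=4))   # 0 = locked, 1 = driven

# Combined with the 9 discrete thumb positions: 16 * 9 = 144 postures.
postures = [(combo, thumb) for combo in finger_combos for thumb in range(9)]

print(len(finger_combos), len(postures))
```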
I proposed a methodology that uses computational geometry and set theory methods to quantify the anthropomorphism of robot arms and hands. The quantification is achieved through advanced comparisons of human and robot workspaces. The final score of anthropomorphism uses a set of weighting factors that can be adjusted according to the specifications of each study, always providing a normalized score between 0 (non-anthropomorphic) and 1 (human-identical). The proposed methodology can be used to grade the human-likeness of existing and new robotic arms and hands, as well as to provide specifications for the design of the next generation of anthropomorphic robot artifacts. Such arms and hands can be used for human-robot interaction applications, humanoid robots or even the development of advanced prostheses.
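The structure of such a score can be sketched with voxelized workspaces and a weighted sum (an assumed form for illustration: the actual methodology uses richer computational-geometry comparisons, and both the second criterion and the weights below are hypothetical):

```python
import numpy as np

# Toy voxelized 2-D workspaces represented as sets of reachable cells.
human_ws = {(x, y) for x in range(10) for y in range(10)}
robot_ws = {(x, y) for x in range(5, 15) for y in range(10)}

# One per-criterion score in [0, 1]: workspace overlap (Jaccard index).
overlap = len(human_ws & robot_ws) / len(human_ws | robot_ws)

# Study-specific weighting factors; they sum to 1, so the final score
# stays normalized: 0 = non-anthropomorphic, 1 = human-identical.
criteria = np.array([overlap, 0.8])    # second criterion is a placeholder
weights = np.array([0.7, 0.3])
score = float(weights @ criteria)
print(round(score, 3))
```

Adjusting the weights lets each study emphasize the workspace comparisons that matter for its application while keeping scores of different robots directly comparable.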
Functional Anthropomorphism concerns a human-to-robot motion mapping approach whose first priority is to guarantee the execution of a specific functionality in task space and then, having satisfied this prerequisite, to optimize the anthropomorphism of structure or form, minimizing a "distance" between the human and robot motions/structures. The proposed mapping schemes achieve human-like robot motion even for robot artifacts with arbitrary kinematics (hyper-redundant robot arms and m-fingered hands).
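For a linearized task constraint, the "guarantee the task first, then stay close to the human" priority has a closed form, which the sketch below illustrates (the task map, task value and human configuration are all hypothetical; real mappings involve non-linear kinematics and iterative solvers):

```python
import numpy as np

# Hypothetical linearized setting: a 1-D task constraint A q = b on a
# 3-joint (redundant) arm, and a human-like joint configuration q_h.
A = np.array([[1.0, 1.0, 1.0]])        # task map (1-D task, 3 joints)
b = np.array([1.5])                    # required task-space value
q_h = np.array([0.2, 0.9, 0.1])       # human-like configuration

# Solve  min ||q - q_h||  subject to  A q = b :
# the task is satisfied exactly, and the redundancy is spent on
# staying as close as possible to the human posture.
q = q_h + A.T @ np.linalg.solve(A @ A.T, b - A @ q_h)

print(q, np.allclose(A @ q, b))
```

The same structure carries over to hyper-redundant arms and m-fingered hands: the more redundant the artifact, the larger the solution set within which the anthropomorphism "distance" can be minimized.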