The purpose of my thesis was to study the performance of an eye-tracking system intended to be used as an HMI to drive a wheelchair. Many disabled people have difficulty driving with other input devices, such as a joystick or a mouse. The possibility of driving a wheelchair using only eye movements would give them freedom and independence.
The first part of my thesis consists in determining properties of the eye tracker such as accuracy and repeatability, with the focus on the estimation of the uncertainty. After a first set of disastrous data I learned some tricks to make the eye tracker work better, such as tying my hair back! Thanks to this and to some filtering I was able to obtain much cleaner data. These data showed that the eye tracker presents a strong systematic error, mainly due to the calibration phase of the instrument. Knowing this, it was decided to compute the uncertainty in a slightly different way than usual: considering the deviation of each sample from the true value instead of from the mean. From the values obtained there is no evidence that the covariance ellipse has a preferred orientation, so it was decided to use a circle instead of an ellipse. This reduces the determination of the covariance to a single parameter: the radius R. In nominal conditions this value is 114 pixels.
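The computation described above can be sketched as follows. This is a minimal illustration, not the thesis code: the function name, the synthetic gaze samples, and the target coordinates are all made up for the example; only the idea (deviations taken with respect to the true target point, pooled into a single radius) comes from the text.

```python
import numpy as np

def gaze_uncertainty_radius(samples, true_point):
    """Radius R of a circular uncertainty region, computed from the
    deviations with respect to the known true target point rather than
    the sample mean, so that the systematic error is included."""
    deviations = samples - true_point              # (N, 2) gaze errors in pixels
    # Mean squared distance from the true point, pooled over both axes
    msd = np.mean(np.sum(deviations**2, axis=1))
    return np.sqrt(msd)

# Synthetic gaze samples around a target at (500, 300) px, with a
# deliberate bias to mimic the systematic (calibration) error
rng = np.random.default_rng(0)
true_point = np.array([500.0, 300.0])
samples = true_point + rng.normal([40.0, -30.0], 60.0, size=(200, 2))
R = gaze_uncertainty_radius(samples, true_point)
```

Because the bias is not subtracted out, R is larger than the purely random spread would suggest, which is exactly the point of using the true value instead of the mean.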
During the tests it was observed that the performance varies widely, from good estimates to nightmares. This is due to the influencing parameters: the instrument's performance is strongly affected by changing conditions such as head movements, lighting, objects in front of the eye and so forth. It is therefore interesting to model the uncertainty as a function of these parameters. To this end, another set of tests was designed to identify such a model, which as a first assumption was taken to be linear. The influencing parameters modeled and estimated in real time are: the target position on the screen, the number of eyes detected by the eye tracker, and the head displacement in three directions. With the model built, in 84% of the cases we obtain correct values, where "correct" means that the true value is contained within the estimated uncertainty.
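A linear model of this kind can be identified by ordinary least squares. The sketch below is illustrative only: the synthetic data, the gains in the fake "measured" radius, and the variable names are assumptions; what it shows is the structure of the fit, with the six influencing parameters listed above as regressors.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

# Hypothetical influencing parameters logged during a test session:
# target x/y on screen (px), eyes detected (1 or 2),
# head displacement along three axes (mm)
X = np.column_stack([
    rng.uniform(0, 1920, N),        # target x
    rng.uniform(0, 1080, N),        # target y
    rng.integers(1, 3, N),          # eyes detected
    rng.normal(0, 20, (N, 3)),      # head dx, dy, dz
])

# Synthetic "measured" uncertainty radius, for illustration only
r = 80 + 0.02 * X[:, 0] + 0.5 * np.abs(X[:, 3]) + rng.normal(0, 5, N)

# Linear model r ≈ A @ beta (intercept plus six parameters),
# fitted by ordinary least squares
A = np.column_stack([np.ones(N), X])
beta, *_ = np.linalg.lstsq(A, r, rcond=None)
r_hat = A @ beta                    # real-time uncertainty prediction
```

At run time the same `A @ beta` product gives the predicted uncertainty radius from the parameters estimated in real time; a coverage figure such as the 84% quoted above is then obtained by checking how often the true value falls inside the predicted region.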
The second part consists in developing two HMIs and testing their performance in a virtual environment. The virtual environment used reproduces the mechatronics laboratory.
Using a virtual environment is very useful since it allows all the tests to be carried out in complete safety for the user. Some tests were also performed on the real system, obtaining good results… most of the time.
Two HMIs have been implemented. The first one works with buttons, which is one of the most common methods in the literature: essentially, the wheelchair moves toward the part of the screen you are looking at. If you look left you go left, if you look down you go backward, if you look straight ahead you go straight, and if you look right you go right. The second HMI assigns a velocity according to the point you are looking at, embedding a continuous control law.
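A control law in the spirit of the second HMI can be sketched as follows. The function name, screen size, gains, and the exact mapping are illustrative assumptions, not the thesis implementation; the idea shown is that the gaze point varies the commanded velocity continuously instead of switching between discrete buttons.

```python
def gaze_to_velocity(gx, gy, width=1920, height=1080,
                     v_max=1.0, w_max=1.5):
    """Map a gaze point (gx, gy) in screen pixels to a linear
    velocity v [m/s] and an angular velocity w [rad/s].
    Gains v_max and w_max are illustrative."""
    # Normalize the gaze to [-1, 1] with the screen centre as origin
    x = 2.0 * gx / width - 1.0
    y = 1.0 - 2.0 * gy / height     # top of screen -> +1 (forward)
    v = v_max * y                   # look up: forward, look down: backward
    w = -w_max * x                  # look left: turn left (positive w)
    return v, w

# Looking at the top centre of the screen: full speed ahead, no turning
v, w = gaze_to_velocity(960, 0)
```

Unlike the button HMI, intermediate gaze positions here produce intermediate velocities, which is what gives the smoother paths reported below.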
In order to determine the performance of the system, each HMI was tried by seven subjects and several metrics were considered, with the following results. The collisions with the walls were few for both HMIs, which is a positive result: it means the subjects are able to drive while avoiding obstacles. The accuracy of the maneuvers does not seem to depend on the HMI used but only on the subject's driving skills. With both HMIs the acceleration remains within the comfortable range. One issue for the system is the difficulty in depth perception caused by the limited field of view of the camera. Comparing the two HMIs yields the following results: HMI-2 allows more agile driving, since it produces smoother paths and lets the user finish the task in less time, while with HMI-1 the users are able to reach higher velocities. Summarizing, HMI-1 seems better for precise maneuvers close to the target, while HMI-2 is more agile and hence better for driving up to the approach phase.