By Georgios Diamantopoulos. A thesis submitted to the University of Birmingham for the degree of Doctor of Philosophy. Department of Electronic, ...
As eye-tracking technology continues to develop, a transparent solution that combines a speech recognition engine with automated eye-tracking could enable fully automated analysis of experiments similar to the case study of Chapter 6. Such a development would further encourage researchers to perform the required experiments without being burdened by the manual work involved in transcribing the speech, selecting and rating the eye-movements and synchronising them to the text – work whose result would at best be crude if done manually.
If any truth is found in the EAC model, numerous doors will open for human-computer interaction, or perhaps human-computer training. For example, Malloy (1987) found that spelling performance could be significantly improved by having the person visualise the word while being directed to look up and to their left. One cannot help but envisage a computer system which, with the aid of the REACT eye-tracker and through the use of speech synthesis, could consistently reinforce this behaviour in pupils while providing them with interesting spelling tasks and giving feedback in terms of their spelling score – a system not that different from an educational game.
APPENDIX A: EYE-TRACKING HARDWARE
A recent attempt to build a low-cost, lightweight eye-tracker was made by Babcock and Pelz (2004). Their eye-tracker comprises one eye camera (monocular) and one scene camera, both mounted on a glasses frame worn on the subject's head; the two images are multiplexed into one video stream and recorded onto videotape for offline processing. In combination with the Starburst algorithm (Li et al., 2005), it enables the eye-tracker user to track the subject's eye-movements in any natural environment. The entire system is also lightweight enough to package into a backpack and hence allows the subject to be mobile. After off-line processing, the eye-movements can be overlaid on the scene camera video.
The REACT eye-tracker presented in this thesis is a modified, scaled-down version of the eye-tracker presented by Babcock and Pelz (2004); the reasons for choosing this design are explained in detail in Chapters 2 and 3. This appendix briefly presents both eye-trackers as well as their similarities and differences, which Table 31 summarizes. Figure 48 shows a photograph of the experimenter wearing the eye-tracker.
Both eye-trackers are monocular. They could easily be extended for binocular eye-tracking, but this would add to their invasiveness to the subject's field of view as well as complicate post-processing if the two eye-movement data sources were to be combined. In both cases, a close-up video of the eye and its movement is recorded through a Supercircuits PC206XP monochrome pinhole camera; the relevant camera properties are summarized in Table 32.
The PC206XP camera encompasses some of the most desirable properties for a head-mounted eye-tracking system.
At only 10 grams, it is a very lightweight camera and thus decreases or even eliminates subject discomfort. The well-respected commercial EyeLink-II eye-tracker, which is also head-mounted, weighs an enormous 420 grams (SR Research, 2009). In our preliminary tests with the EyeLink-II, this weight disadvantage often caused subjects to complain, ask for a break after a short time (10-15 minutes) of wearing the eye-tracker, or even ask to discontinue the experiment. The total weight of the REACT eye-tracker is approximately 60 grams, which allows for prolonged experiments with little to no user discomfort. In fact, as discussed in the hardware usability evaluation section of Chapter 5, most subjects answered that they would be open to taking part in a 20-minute experiment where they would have to wear the eye-tracker at all times.
Its low weight also allows it to be mounted on a lightweight, non-metal frame, such as safety glasses. This keeps down the total weight of the eye-tracker headgear and contributes to a positive user attitude towards the eye-tracker.
The camera is also physically small; at a square 8.5mm, it occupies very little of the subject's field of view and rarely obscures objects in the environment. In combination with the camera being mounted off-centre, and thus not blocking the subject's central field of view, subjects are barely aware of the eye-tracker's presence in their visual field. As discussed in the usability evaluation section, subjects generally answered that they were unaware of the tracker's existence in their visual field.
Despite its small size, the camera is able to capture at a relatively high-resolution:
640x480 pixels interlaced video at 29.97 frames per second. After de-interlacing, this becomes 640x240 pixels at 59.94 fields per second. Such a resolution is well suited to a head-mounted eye-tracker that is not targeted at analysing high-speed eye-movements.
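The de-interlacing step above can be sketched in a few lines: an interlaced frame stores the two fields on alternating rows, so splitting even and odd rows yields two half-height images, doubling the temporal sampling rate (29.97 fps × 2 ≈ 59.94 fields/s). This is an illustrative sketch, not the thesis's actual processing code.

```python
import numpy as np

def deinterlace(frame):
    """Split a 640x480 interlaced frame into its two 640x240 fields.

    Even rows belong to one field, odd rows to the other, so a
    29.97 fps interlaced stream yields fields at ~59.94 Hz.
    """
    even_field = frame[0::2, :]  # rows 0, 2, 4, ... -> 240 rows
    odd_field = frame[1::2, :]   # rows 1, 3, 5, ... -> 240 rows
    return even_field, odd_field

frame = np.zeros((480, 640), dtype=np.uint8)  # one captured frame
top, bottom = deinterlace(frame)
print(top.shape, bottom.shape)  # (240, 640) (240, 640)
```

Note that each field covers the full image area at half the vertical resolution, which is why the effective capture becomes 640x240.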
The camera itself requires a 12V DC power source and draws a maximum current of 20mA. Again, such low requirements help minimise the head-mounted hardware.
Both the REACT eye-tracker and the eye-tracker by Babcock and Pelz (2004) use a combination of an infrared light source pointing towards the subject’s eye and a non-infrared filter mounted on the camera lens to introduce the dark pupil effect as explained in past chapters. The effect of capturing the eye using the infrared spectrum is described in extensive detail in the next section.
On the hardware side:
A standard infrared (IR) LED that emits light at a wavelength of 940nm is used as the illumination source. It requires 1.2V to power and provides an amount of infrared light that is safe for the subject's eyes. When illumination regulations come into effect (Hansen and Ji, 2010), it is almost certain that the REACT eye-tracker will require no modifications in order to comply.
The non-infrared filter attached to the camera lens is a Kodak Wratten 87c equivalent (Babcock and Pelz, 2004), which blocks most (400-700 nm) of the natural light spectrum (380-750 nm; Starr, 2005) and thus allows only the infrared light emitted by the IR LED to pass.
The only two components that require power in the REACT eye-tracker are the PC206XP camera (12V) and the IR LED (1.2V). A standard 220V AC input, 12V DC output power supply was used, in conjunction with a power transformer that converts 12V to 1.2V DC to power the LED. If the application required portability, this power supply could easily be replaced by a series of 8x1.5V batteries. The circuit diagram for the power transformer is shown in Figure 49 below.
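For a rough sense of the step-down involved, Ohm's law gives the series resistor that would drop the 12V supply to the LED's 1.2V forward voltage. The 20mA operating current used here is an assumed typical figure for a standard IR LED, not a value from the thesis; the actual circuit is the one shown in Figure 49.

```python
def series_resistor(v_supply, v_forward, i_led):
    """Series resistor (ohms) that drops the excess supply voltage
    across itself at the chosen LED operating current (Ohm's law)."""
    return (v_supply - v_forward) / i_led

# Assumed values: 12 V supply, 1.2 V LED forward voltage, 20 mA current.
r = series_resistor(12.0, 1.2, 0.020)
print(round(r))  # 540 -> nearest standard E12 value would be 560 ohms
```

The LED would then dissipate only about 24mW, well within the budget of a head-mounted design.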
Mounting the eye-tracker and cabling

Since the REACT eye-tracker is so lightweight, it can be safely and easily mounted on a pair of plastic safety glasses. In fact, with the absence of the scene camera and laser diode (Babcock and Pelz, 2004), it is even possible to remove the plastic glass from the front of the safety glasses and use just their frame. This significantly reduces the obstruction of the subject's field of view; even though the plastic glass is clear, it adds to the sensation of "wearing" something. In other words, with the plastic glass removed, some subjects may even forget they are wearing the eye-tracker after quickly acquiring a level of comfort with it (as discussed in the usability evaluation).
The cabling between the eye-tracker headset and the ground components, if chosen inappropriately, can cause discomfort to the user. Four cables are required:

- Ground (common)
- 12V+ (power source for the camera)
- 1.2V+ (power source for the IR LED)
- Video signal carrier

For example, if the cables are too heavy, they can tilt the safety glasses, as the cables can often weigh more than the frame itself. Similarly, if the cables are too thick, they can be a constant reminder to the subjects that they are wearing the eye-tracker. On the other hand, if the wrong types of cables are used, the eye-tracker will not work; specifically, if the video is carried in a non-shielded wire, the video signal does not arrive intact due to electrical interference. Thus, a shielded 4-core wire is used.
APPENDIX C: PUBLISHED AND SUBMITTED PAPERS
Published papers include:
Diamantopoulos, G., Woolley, S. I., and Spann, M., 2008. A critical review of past research into the Neuro-Linguistic Programming Eye-Accessing Cues model. In: Proceedings of the First International NLP Research Conference. Surrey, UK.
Diamantopoulos, G. and Spann, M., 2005. Event Detection for Intelligent Car Park Video Surveillance. Elsevier Journal of Real-Time Imaging, Special Issue on Video Object Processing for Surveillance Applications, 11 (3), pp. 233.
The paper that follows is for submission to the Special Issue of the Signal, Image and Video Processing Journal entitled “Unconstrained Biometrics: Advances and Trends”.
Georgios Diamantopoulos
School of Electronic, Electrical and Computer Engineering
University of Birmingham
Birmingham B15 2TT
United Kingdom
+44 77 324 903 85
email@example.com

Sandra I. Woolley
Michael Spann

Abstract

The Neuro-Linguistic Programming Eye-Accessing Cues model suggests that there is a correlation between eye-movements and the internal processing mode that people employ when accessing their subjective experience. Past research on the model used direct viewing or, at best, video recording of the subject to select and rate eye-movements. These methods are not only error-prone but also have a low-detail output compared to a modern eye-tracker (e.g. timing information is missing), negatively influencing the reliability of the results. While a plethora of eye-tracker designs is available to date, none has been designed to track non-visual eye-movements (eye-movements that result from neurophysiological events and are not associated with vision), which tend to range outside the normal visual field; existing designs therefore perform poorly in such cases. This paper accordingly introduces a set of novel algorithms for the extraction of the relevant eye features (pupil position, iris radius and eye corners), which are combined to calculate the 2D gaze direction and to classify each eye-movement into one of the eight classes of the model.
The applicability of the eye-tracker is demonstrated through a pilot study that serves as a real-world evaluation.
Keywords: eye-movements, eye-tracking, neuro-linguistic programming eye-accessing cues, eye feature extraction, gaze

1 Introduction

Eye-tracking is a field that has been actively researched for the past few decades, and while eye-trackers have changed dramatically over that time, eyes are still mainly studied as a functional organ of vision. With the exception of studies of rapid eye-movements during sleep, it is only recently that eyes have been studied as a part of the brain and its function (in the waking state). While there is a reasonable number of studies relating eye-movements to activities such as reading text or maps, very few studies have involved the recording and tracking of eye-movements during non-visual task performance.
To be clear, the term non-visual tasks refers to those tasks which do not explicitly rely on vision to be performed. Similarly, the term non-visual eye-movements refers to eye-movements associated with non-visual tasks.
The Neuro-Linguistic Programming (NLP) Eye-Accessing Cues (EAC) model was introduced in  and suggests that the direction of non-visual eye-movements indicates the modality (visual, auditory, kinaesthetic) of the subjective experience a person is currently accessing. Simply put, when a person is looking down and to their right, they are said to be accessing a feeling associated with the experience they are talking about or examining internally.
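The directional mapping described above can be sketched as an eight-way classification of the 2D gaze offset from the eye's rest position. The sector labels below are only directional placeholders; the exact assignment of directions to EAC modalities (and its dependence on handedness) follows the model's tables, not this sketch.

```python
import math

# Eight 45-degree sectors, counter-clockwise from "right".
# Directional labels only; mapping sectors to EAC modalities
# (visual/auditory/kinaesthetic) is left to the model's tables.
SECTORS = [
    "right", "up-right", "up", "up-left",
    "left", "down-left", "down", "down-right",
]

def classify_gaze(dx, dy):
    """Map a 2D gaze offset (dx, dy) from the rest position to one of
    eight 45-degree sectors. dx > 0 means the subject looks to their
    right (from their own point of view), dy > 0 means up."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    sector = int(((angle + 22.5) % 360.0) // 45.0)
    return SECTORS[sector]

print(classify_gaze(1.0, -1.0))  # down-right: a kinaesthetic access
                                 # under the model's generalization
```

A real classifier would also need a dead zone around the rest position so that small fixational movements are not misread as accessing cues.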
While it cannot be denied that eye-movements are hard-wired to brain function, the NLP EAC model was not scientifically validated by its authors, and a critical review of the EAC research that appeared in later years and attempted to (dis-)prove the model showed that the results suffered from severe methodological and experimental flaws. The main flaw relevant to this discussion is the use of direct viewing to record, select and rate the eye-movements, which has been the source of very significant limitations.

(This is a generalization offered by Bandler and Grinder (1979) for a cerebrally normally-organized right-handed person.)
Studies relevant to the NLP EAC model and other such models would have benefited from the use of eye-tracking systems. However, selecting a suitable eye-tracking system for this task does not prove easy, for several reasons, the most important being that eye-trackers to date have been designed to track visual eye-movements. Whilst the classification of visual versus non-visual eye-movements is not significant in itself, visual eye-movements are normally bound by a much smaller field of view. By contrast, when a person is thinking, his or her eyes will usually shift to one extremity of the eye socket (irrespective of the direction). Had the person been asked to look at the location indicated by this shift, he or she would have turned his or her head and performed a much smaller eye-movement to reach the target; this behaviour is largely undocumented. What is documented, however, is the tendency of subjects to shift their eyes when asked to answer a question (not related to a visual task) before returning to look at the interviewer.
It is therefore no surprise that eye-trackers designed to track visual eye-movements fail to track non-visual eye-movements of such extremity, motivating the development of an eye-tracker that is able to track them.