The Eye-Tracking Universal Driver (ETU-Driver) has been developed as a software layer between the actual eye-tracker driver and the end-user application, providing device-independent data access and control. The ETU-Driver consists of a COM object that implements an interface common to all eye trackers, and a set of supporting DLL libraries (API-Converters), which "convert" the original manufacturers' APIs into the common API used by the ETU-Driver. The benefit of using the ETU-Driver comes from the fact that any end-user application implemented on top of it can access data from a newly installed eye tracker simply by copying the corresponding API-Converter into the ETU-Driver installation folder.
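The converter idea can be sketched as a plain adapter pattern: the application codes against one common interface, and each converter wraps a vendor API behind it. This is a minimal illustrative sketch in Python with hypothetical names (the real ETU-Driver exposes a COM interface, not these classes):

```python
from abc import ABC, abstractmethod

# Hypothetical common interface that every API-Converter implements.
class TrackerAPI(ABC):
    @abstractmethod
    def start_tracking(self) -> None: ...

    @abstractmethod
    def stop_tracking(self) -> None: ...

    @abstractmethod
    def read_sample(self) -> tuple:
        """Return one gaze sample as (timestamp_ms, x, y)."""

# One converter per manufacturer: it hides the vendor API
# behind the common interface. Here the "vendor" is simulated.
class FakeVendorConverter(TrackerAPI):
    def __init__(self):
        self._running = False
        self._t = 0

    def start_tracking(self):
        self._running = True

    def stop_tracking(self):
        self._running = False

    def read_sample(self):
        self._t += 4  # e.g., a 250 Hz device
        return (self._t, 512.0, 384.0)

# The end-user application sees only TrackerAPI, so a new device
# works as soon as its converter is dropped into place.
def collect(tracker: TrackerAPI, n: int):
    tracker.start_tracking()
    samples = [tracker.read_sample() for _ in range(n)]
    tracker.stop_tracking()
    return samples
```

An application would call `collect(FakeVendorConverter(), 3)` and receive three timestamped gaze samples without knowing which device produced them.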
This tool was developed within WP2 of COGAIN (IST Network of Excellence). So far, ETU-Driver is able to access the following eye trackers:
EyeChess is a chess game with several playing modes. One mode offers end-game solving; the other two are regular games against the computer or against a network player. Before players start a network game, they must invite an opponent or be invited by one. Each player is registered automatically on the developer's site when EyeChess starts, and thus becomes accessible to other EyeChess players.
Tic-Tac-Toe is a simple game where you need to complete a three-piece line before your opponent does.
Lines is a game well known in eastern Europe, where you need to form a single-colour line or cross of five pieces to get rid of the balls that fill the board with every move (except moves in which you score).
WarLines is a game created by the author and based on Lines. Players play against the computer (not implemented yet) or a network player. To win, a player must score 100 points before the opponent does.
EyeCheckers is a checkers game played against the computer or a network player. Before players start a network game, they must invite an opponent or be invited by one. Each player is registered automatically on the developer's site when EyeCheckers starts, becoming accessible to other EyeCheckers players.
Connect4 is a well-known game where you need to complete a four-piece line before your opponent does.
iSeq is a simple synthesizer with 9 notes and 12 beats.
iSin is an advanced synthesizer based on a MIDI device, with 5 customizable channels, 5 octaves, and a practically unlimited number of beats (one beat = 1/4 note). Notes are customizable in value (from a whole note down to 1/32) and volume (velocity).
Each game comes in two versions, depending on how it obtains eye-tracking data. The version that runs in the MyTobii environment is commercial (distributed by Special Effect). The version that uses the ETU-Driver is free.
It is a Skype client with a gaze-contingent interface. It works only on 32-bit Windows, only with MyTobii, and only with Skype versions prior to 5.0.
iComponent is an application for recording eye movements from different eye trackers (so far, SMI EyeLink, SMI iViewX / RED-m and Tobii) and for later visualization of the gathered data. The software consists of a recording engine, several visualization views (including replay) and several user-defined tasks (at least for observing a set of pictures, Internet pages, or background video during recording). Tasks and drivers are plug-in libraries. Templates for creating drivers and tasks are available, so other developers can extend iComponent by implementing and adding their own solutions.
UPD: Support for Tobii T/X has been added.
Fixation Detector is a tool that detects fixations in raw gaze data (samples). The only gaze-data values it needs are the gaze X-Y coordinates and each sample's timestamp. It has been developed as a collection of fixation-detection algorithms; currently, it supports three of them: 'fixation size', 'speed' and 'dispersion'. The names reflect the measured property of a set of samples that is decisive for fixation detection.
The tool is implemented as a COM server. The manual included in the installation package contains a short example of using Fixation Detector as well as a description of its COM interface.
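To illustrate what a dispersion-based detector does, here is a short sketch of the standard dispersion-threshold (I-DT) approach in Python. This is not the Fixation Detector's actual implementation (its internals are not documented here); the thresholds and the sample format `(timestamp_ms, x, y)` are assumptions:

```python
def _dispersion(window):
    """Dispersion of a sample window: (max x - min x) + (max y - min y)."""
    xs = [s[1] for s in window]
    ys = [s[2] for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_duration=100):
    """samples: list of (timestamp_ms, x, y), sorted by time.
    Returns fixations as (start_ms, end_ms, centroid_x, centroid_y)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow a window covering at least min_duration milliseconds.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if _dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < n and _dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```

For a gaze trace that dwells on one point, jumps, and dwells on another, the function reports two fixations with their centroids and time spans.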
The Gaze-to-Word Mapping (GWM) tool has been developed as a collection of gaze-to-word mapping engines, text mask creators, and translation and word-frequency dictionaries for on-line detection of the word in focus from gaze data while reading a text, and for its translation if it is recognized as a word unknown to the reader. The tool is implemented as a COM server. It can track multiple non-overlapping documents simultaneously and highlight the word in focus in different ways.
A text can be loaded into GWM using an appropriate text mask creator. The latest version of this tool provides two ways to load a text:
GWM can use more than one mapping engine to map gaze to words. Each engine votes for a word, and the word that gets the highest score becomes the word in focus. The latest version of the GWM tool has only one mapping algorithm, which is a simplified version of the algorithm used in the iDict application (developed within the I-EYE project). The mapping results are saved in 2 files:
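The voting scheme can be sketched as follows: each engine assigns scores to candidate words for the current fixation, the scores are summed, and the top-scoring word wins. The two engines below (a nearest-word engine and a reading-order prior) are hypothetical examples, not GWM's actual engines:

```python
def nearest_word_engine(fix, words):
    """fix: (x, y) fixation; words: {word_id: (center_x, center_y)}.
    Closer words get higher scores."""
    scores = {}
    for wid, (wx, wy) in words.items():
        d = ((fix[0] - wx) ** 2 + (fix[1] - wy) ** 2) ** 0.5
        scores[wid] = 1.0 / (1.0 + d)
    return scores

def reading_order_engine(fix, words, last_word_id=0):
    """Slight preference for the word right after the previous one."""
    return {wid: (0.2 if wid == last_word_id + 1 else 0.0)
            for wid in words}

def word_in_focus(fix, words, engines):
    """Sum the votes of all engines and return the winning word id."""
    votes = {wid: 0.0 for wid in words}
    for engine in engines:
        for wid, score in engine(fix, words).items():
            votes[wid] += score
    return max(votes, key=votes.get)
```

With word centers laid out on a line and a fixation landing near the second word, the combined vote selects that word even when the distance scores of neighbours are close.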
Currently, GWM has evolved into something more than just gaze-to-word mapping. It now provides word-in-focus translation functionality, like iDict. Several dictionaries may serve for the translation of a word that has been recognized as problematic. A problematic word is a word that receives a long gaze fixation. Each word has its own dwell time (normally between 1 and 1.5 sec.), after which a translation pop-up appears above the fixated word. The dwell time is based on word frequency, so GWM imports an appropriate frequency dictionary for the language of the tracked text. Only English frequency dictionaries are shipped, but users can add their own space- or tab-separated TXT files (two columns: a word and its frequency value) to the [GWM]\FreqDicts folder. Each file must follow the naming convention "[langCode]_[delimiter]_[anyText].txt", where langCode is a two-character language code (see ISO 639-1), delimiter is a three-character abbreviation of the delimiter between the two columns (spc = space, tab = tab, com = comma), and anyText can be any text, e.g., the dictionary's unique name.
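The file-naming convention above, and a frequency-driven dwell time, can be sketched like this. The filename parser follows the stated convention exactly; the dwell-time formula (rare words get the long end of the 1.0-1.5 s range) is a hypothetical mapping, since GWM's actual formula is not documented here:

```python
import re

# Three-letter delimiter codes from the naming convention.
_DELIMS = {"spc": " ", "tab": "\t", "com": ","}

def parse_dict_filename(name):
    """Parse '[langCode]_[delimiter]_[anyText].txt' into
    (language code, delimiter character, label)."""
    m = re.fullmatch(r"([a-z]{2})_(spc|tab|com)_(.+)\.txt", name)
    if not m:
        raise ValueError("not a frequency-dictionary file: " + name)
    lang, delim, label = m.groups()
    return lang, _DELIMS[delim], label

def dwell_time(word, freq, max_freq, lo=1.0, hi=1.5):
    """Hypothetical frequency-to-dwell-time mapping: the most frequent
    word gets `lo` seconds, an unknown word gets `hi` seconds."""
    f = freq.get(word.lower(), 0)
    return hi - (hi - lo) * (f / max_freq)
```

For example, `parse_dict_filename("en_tab_bnc.txt")` yields the language code `en`, a tab delimiter, and the label `bnc` (the dictionary name here is invented for illustration).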
However, the latest GWM version contains no translation dictionaries yet; they should come later.
Testing applications were developed for each text mask creator to demonstrate how GWM works. These applications are shipped with the GWM COM library.
The aim of the SpeechText project is to study the real-time transfer between two communication modes, speech and writing, in human interaction, using a method called "print interpreting". This means translating spoken language and accompanying significant audible information into written text simultaneously with the talk. The text is typed on a computer and displayed on a screen where the letter-by-letter emerging text is visible. Print interpreting is needed as a communication aid to give people with a hearing disability access to speech. Since they have acquired the language in a hearing speech culture and can usually speak it, they need an interpretation that is as close as possible to the original speech. This interpretation must also convey an impression of the speaker and the linguistic variation. The challenges of print interpreting are the demands of simultaneity (requiring a high production rate) and verbatim transcription. Another important challenge is to transfer all the relevant auditory information (including non-language sounds from the surroundings, etc.) into a visible modality that is understandable to the hearing impaired.
The objectives of the study are 1) to investigate the process of print interpreting, 2) to examine the comprehensibility of the interpretation, and 3) to develop new technology and methods for analyzing and supporting print interpreting. In the narrow sense, the process means the real-time conversion act and the changes in the message; in the broader sense, it covers the whole communicative event, including the activity of interpreting and the actions of the participants and their interaction. Comprehensibility will be examined in terms of readability and coherence. The main research methods are textual and multimodal analysis, and eye-movement analysis. Because the research problems are multidisciplinary, they will be studied in an interdisciplinary collaboration combining approaches from Linguistics, Translation Studies (especially Interpreting Studies), and Computer Science.
The practical aim of the study is to develop new technological solutions and to improve the accessibility of communication. Results on the reading process can provide valuable information for developing better ways of rendering the text, and thereby help make hearing-impaired persons more equal partners in ubiquitous communication situations. In addition, the study will contribute to a deeper understanding of the relationship between writing and speaking, and between verbal and non-verbal communication, and produce new information on their interchangeability in various media.