Eye Tracking for HCI (via ordinary web cameras)
| Research Area: | Uncategorized |
| Status: | Not started |
| Members: | |

Description:
Several alternative input modalities already exist, but each has drawbacks:

- Voice commands ("high-level languages", so called because their command syntax is close to human language): systems whose input data are words and/or other sounds are inappropriate for the goal proposed here. A simple mouse movement is quicker and more accurate, and audio input is prone to ambient interference and to variations in pitch and diction.
- Touch screens: still require hand movement.
- Brain signals: an efficient brain-signal classifier/interpreter needs expensive and/or non-ergonomic apparatus attached to the user's head.
So why keep searching for another solution instead of just using what we already have, the traditional mouse? Because the eyes are faster…
Most eye-tracking systems have three main phases: face localization, eye localization, and eye tracking (position analysis). The first phase can be skipped by more advanced algorithms.

[Figure: eye localization results]

The main difficulties in getting the job done well are, obviously, human morphological diversity (eyebrow proximity to the eye, for example), eyeglasses, accessories (hats, caps), shadows, and ambient illumination conditions.

[Figure: detection under different conditions and for different subjects]

[Table: the most relevant techniques for each phase of eye tracking]
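To make the three-phase pipeline concrete, here is a minimal sketch in Python using OpenCV's bundled Haar cascades. The cascade files, detection parameters, and the upper-half-of-face heuristic are illustrative assumptions, not techniques prescribed by this project:

```python
# Minimal sketch of the pipeline: (1) face localization,
# (2) eye localization, (3) position analysis across frames.
# Haar cascades and parameter values are illustrative choices.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # ordinary web camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Phase 1: face localization narrows the search region.
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[fy:fy + fh, fx:fx + fw]

        # Phase 2: eye localization inside the face only; searching
        # just the upper half reduces false hits from nostrils/mouth.
        upper = face_roi[:fh // 2, :]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(upper, 1.1, 10):
            cx, cy = fx + ex + ew // 2, fy + ey + eh // 2
            # Phase 3 (position analysis) would track (cx, cy) over
            # frames and map it to a gaze estimate; here we only
            # mark the detected eye center.
            cv2.circle(frame, (cx, cy), 3, (0, 255, 0), -1)
        cv2.rectangle(frame, (fx, fy), (fx + fw, fy + fh), (255, 0, 0), 1)

    cv2.imshow("eye localization", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Restricting the eye search to the detected face (and to its upper half) is one cheap way to cope with the difficulties listed above, though shadows, glasses, and strong illumination changes will still defeat a plain Haar detector.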