Learning Eye Movements Strategies on Tiled Large High-Resolution Displays Using Inverse Reinforcement Learning
Jul 1, 2015
R. A. A. Mohammed
O. Staadt
Abstract
In this paper, we modeled eye movements on tiled Large High-Resolution Displays (LHRDs) as a Markov decision process (MDP). We collected eye movements from users who participated in free-viewing task experiments on an LHRD and examined two different inverse reinforcement learning (IRL) algorithms. The presented approach used information about the possible eye movement positions. We showed that it is possible to automatically extract a reward function from users' eye movement behavior based on effective features using IRL. We found that the Itti and HoG features show the highest positive reward weights for both algorithms. Most interestingly, we found that the reward function captured expert behavior information sufficient to predict eye movements. This is valuable information for estimating the internal states of users, for example for intention recognition, adapting visual interfaces, or placing important information.
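The abstract describes recovering a reward function over gaze-position features (such as Itti saliency and HoG responses) from recorded eye movements via IRL. The sketch below is a hypothetical, simplified illustration of linear-reward feature-matching IRL, not the paper's implementation; the grid size, the placeholder feature matrix, and the random demonstration scanpaths are all assumptions made for the example.

```python
import numpy as np

# Hypothetical toy setup: discretize the tiled display into a grid of gaze
# positions (states). Each state gets a feature vector standing in for image
# features such as Itti saliency and HoG responses (random placeholders here).
rng = np.random.default_rng(0)
n_states, n_features = 64, 4                 # 8x8 gaze grid, 4 features
phi = rng.random((n_states, n_features))     # state-feature matrix (placeholder)

# Demonstrated gaze trajectories: sequences of visited states, standing in
# for recorded scanpaths from the eye-tracking experiments.
demos = [rng.integers(0, n_states, size=20) for _ in range(10)]

# Empirical feature expectations of the "expert" (mean feature vector
# over all gaze positions visited in the demonstrations).
mu_expert = np.mean([phi[t].mean(axis=0) for t in demos], axis=0)

def policy_feature_expectations(w, n_rollouts=50, horizon=20):
    """Feature expectations of a softmax policy over states induced by the
    linear reward r(s) = w . phi(s) (state-only toy dynamics)."""
    r = phi @ w
    p = np.exp(r - r.max())
    p /= p.sum()
    visits = rng.choice(n_states, size=(n_rollouts, horizon), p=p)
    return phi[visits].reshape(-1, n_features).mean(axis=0)

# Feature-matching update: move the reward weights toward the expert's
# feature expectations and away from the current policy's.
w = np.zeros(n_features)
for step in range(200):
    mu_policy = policy_feature_expectations(w)
    w += 0.1 * (mu_expert - mu_policy)

print("learned reward weights:", np.round(w, 3))
```

In this toy version, features that are over-represented at the demonstrated gaze positions end up with larger positive weights, which mirrors the paper's finding that Itti and HoG features received the highest positive reward weights.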
Type
Publication
2015 International Joint Conference on Neural Networks (IJCNN)
Computer Displays
Eye Movement Position
Eye Movements Strategy Learning
Feature Extraction
Free Viewing Task Experiments
Histogram-of-Oriented Gradients
HoG Features
Inverse Reinforcement Learning Algorithm
Itti Features
Learning (Artificial Intelligence)
LHRD
Markov Decision Process
Markov Processes
MDP
Object Tracking
Reward Function
Sensors
Tiled Large High-Resolution Displays
Visualization