Complete ASL Eye Tracker Integration

Paradigm Elements for ASL is the easiest way to collect data with your ASL eye tracker

  • XDAT markers for stimulus onsets, responses, and events during movies, sent via a network or parallel port.
  • Infant calibration routines display movies and images as target points.
  • Drag and drop areas of interest and fixation detection.
  • Gaze contingencies using Paradigm's Python scripting interface.
  • Take stimuli screenshots for ASL Results integration.

See Paradigm Elements in Action

This short video demonstrates how easy it is to add ASL Eye Tracker data collection to your Paradigm experiments. You'll learn how to add a calibration sequence, control recording, and mark stimulus onsets and AOI fixations using Paradigm Elements' simple drag and drop interface.

Watch now >>

Flexible Stimulus Presentation and ASL Eye Tracker Integration

Paradigm Elements for ASL allows you to build your experiment using Paradigm's intuitive experiment builder and integrate it with your ASL system using a small set of drag and drop commands. You'll be able to mark stimulus and response onsets, start and stop data recording, and calibrate your subjects using a simple graphical user interface. Paradigm also gives you access to real-time gaze data from within your experiment using its Python scripting interface. Using this simple scripting API, you'll be able to create gaze contingencies, measure gaze durations, and jump to other parts of your experiment based on gaze position or pupil size.

Millisecond Accurate Event Markers

Paradigm Elements for ASL synchronizes ASL XDAT markers with the onsets of stimuli and responses, allowing you to mark any experiment event in your eye tracker data. Elements can also send markers at any point during a stimulus (e.g. a movie or sound file) to mark multiple events during the same stimulus.

XDAT markers are sent over a network port, so you can finally upgrade your lab computer without worrying about installing a parallel port card.

Conditional Response Markers

Using Paradigm Elements for ASL, you can specify unique XDAT markers for correct, incorrect, and non-responses. Paradigm will send the correct- or incorrect-response XDAT marker based on the correct response you specify in the stimulus event, or a "no response" marker if the event times out.
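
The conditional marker behavior described above amounts to a simple three-way decision. The sketch below is a plain-Python illustration of that logic, not Paradigm's actual implementation; the XDAT codes and the `response_xdat` function name are hypothetical.

```python
# Illustrative XDAT codes -- in Paradigm you would assign your own
# values in the stimulus event's settings.
XDAT_CORRECT = 10
XDAT_INCORRECT = 20
XDAT_NO_RESPONSE = 30

def response_xdat(expected_key, pressed_key):
    """Pick the XDAT marker for a trial outcome.

    pressed_key is None when the stimulus event times out
    without any response.
    """
    if pressed_key is None:
        return XDAT_NO_RESPONSE
    return XDAT_CORRECT if pressed_key == expected_key else XDAT_INCORRECT
```

For example, with `"f"` as the correct response, a press of `"f"` yields the correct-response marker, `"j"` the incorrect-response marker, and a timeout the no-response marker.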

Simultaneous Triggers to Multiple Devices

Paradigm Elements for ASL can be used in combination with our Paradigm Elements for Ports product to enable simultaneous data collection from multiple devices. For example, a combined EEG and ASL eye tracker experiment or a combined Biopac and ASL eye tracker experiment. Paradigm Elements lets you conduct cutting edge multi-modal research using a variety of devices.

Drag and Drop Areas of Interest and Fixation Detection

Measure fixation durations in a single area of interest or across multiple AOIs. You can define a static set of AOIs or have them change position and size with each stimulus. Detected fixations in each AOI are uniquely marked in your eye tracking data for easy analysis. You can also wait for gazes of a minimum duration.
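
To make the AOI idea concrete, here is a minimal plain-Python sketch of rectangular AOI hit-testing and per-AOI dwell-time accumulation. It is an illustration of the concept only; the `AOI` class and `fixation_durations` function are hypothetical stand-ins, not Paradigm's API, and real fixation detection also filters out saccades and blinks.

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """A rectangular area of interest in screen coordinates."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx, gy):
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

def fixation_durations(samples, aois, dt_ms):
    """Accumulate time (ms) that gaze samples spend inside each AOI.

    samples: iterable of (x, y) gaze positions at a fixed sample interval.
    dt_ms:   time between consecutive samples in milliseconds.
    """
    totals = {a.name: 0.0 for a in aois}
    for gx, gy in samples:
        for a in aois:
            if a.contains(gx, gy):
                totals[a.name] += dt_ms
    return totals
```

Per-stimulus AOIs, as described above, would simply mean building a fresh list of `AOI` rectangles for each trial.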

Infant and Standard Calibration

Paradigm Elements for ASL includes a Calibration Event that displays standard targets as well as infant targets that can present movies and images. Calibration sequences can display 2, 5 or 9 targets.

The Calibration Event allows you to control the calibration sequence from your eye tracking machine, making calibrating each participant faster and easier.

Gaze Contingencies using Paradigm's Python Scripting Interface

Paradigm Elements for ASL features an integrated Python scripting interface that gives you access to real-time eye position and pupil data. Using a set of simple scripting functions you can develop sophisticated gaze contingency routines, jump to different parts of your experiment based on gaze location and show alerters to re-engage distracted subjects.
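
A common gaze contingency is "continue once the participant has looked at a region for a minimum duration." Since Paradigm's actual scripting functions aren't shown in this overview, the sketch below uses a hypothetical `wait_for_gaze` helper fed by any stream of (x, y) gaze samples; in a real script, that stream would come from Paradigm's real-time gaze data.

```python
def wait_for_gaze(sample_stream, region, min_ms, dt_ms):
    """Return True once gaze stays inside region for at least min_ms.

    sample_stream: iterable of (x, y) gaze positions (hypothetical stand-in
                   for Paradigm's real-time gaze feed).
    region:        (x0, y0, x1, y1) screen rectangle.
    min_ms:        required continuous dwell time in milliseconds.
    dt_ms:         time between consecutive samples in milliseconds.
    """
    x0, y0, x1, y1 = region
    held = 0.0
    for gx, gy in sample_stream:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            held += dt_ms
            if held >= min_ms:
                return True
        else:
            held = 0.0  # gaze left the region: restart the dwell timer
    return False
```

On success, your script could jump to the next part of the experiment; on failure (e.g. after a timeout), it could show an alerter to re-engage a distracted subject, as described above.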
