Chris S. Crawford

University of Florida PhD student

About me

My work focuses on Human-Robot Interaction (HRI) and Brain-Computer Interfaces (BCI). My goal is to leverage software engineering, novel sensing technologies, and robotics to create tools and applications that enhance interactions between humans and robots. I combine research in Human-Computer Interaction and BCI to study Brain-Robot Interaction (BRI). My BRI research centers on the design, implementation, and evaluation of HRI software applications that use neurophysiological measures to interpret human behavior. Through this research I investigate the use of electroencephalogram (EEG) signals to control robots and to evaluate users' states while they interact with machines. Work in this area aims to provide an efficient alternative for object manipulation when hand mobility is restricted, and to improve industrial human-robot collaboration.

My Research

Block-Based Interactive EEG Visualization

This research investigates ways to use Visual Programming Languages (VPLs) with neurophysiological measurements of electroencephalogram (EEG) signals acquired with a Brain-Computer Interface (BCI). These data can be used to understand cognitive and affective states such as fatigue, cognitive workload, engagement, attention, and frustration. An interface equipped with a visual block-based programming environment enables users to interact with visualizations mapped to EEG data. Working in a drag-and-drop VPL lets users tap data streams captured from the BCI as they arrive and quickly iterate on visualization designs. See 'Using a Visual Programing Language to Interact with Visualizations of Electroencephalogram Signals' for more information.
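
As a rough illustration of the kind of data stream such visualization blocks might consume, the sketch below computes per-band EEG power with NumPy. The sampling rate, band boundaries, and the random stand-in window are assumptions for illustration, not the system described in the paper.

```python
import numpy as np

FS = 128  # assumed sampling rate (Hz); consumer headsets are in this range

# Illustrative frequency bands often used for engagement/workload metrics.
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_powers(window: np.ndarray) -> dict:
    """Return mean power per band for a 1-D, single-channel EEG window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)   # bin frequencies (Hz)
    return {
        name: float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

if __name__ == "__main__":
    # Random stand-in for one second of data from a BCI stream.
    window = np.random.randn(FS)
    # Each value could be bound to a visualization block (e.g., a bar height).
    print(band_powers(window))
```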

Other Projects

Brain-Drone Race

The Brain-Drone Race is a competition featuring users' cognitive ability and mental endurance. During this event competitors are required to out-focus opponents in a drone drag race fueled by electrical signals emitted from the brain. On April 16, 2016, 16 participants competed using the Emotiv insight headsets and DJI Phantom 2 drones. Although others had previously demonstrated drone manipulation via EEG, this was the first public demonstration of a competitive Brain-Drone event. For more information visit www.braindronerace.com.
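
The event itself ran on Emotiv's and DJI's own software; purely as a sketch of the general control pattern, the snippet below maps a normalized focus score to forward velocity with a dead zone. `read_focus_score` and `send_forward_velocity` are hypothetical stand-ins, not real SDK calls.

```python
import random
import time

THRESHOLD = 0.6   # assumed focus level below which the drone hovers
MAX_SPEED = 2.0   # assumed forward speed cap (m/s)

def read_focus_score() -> float:
    """Hypothetical stand-in for a headset SDK call returning focus in [0, 1]."""
    return random.random()

def send_forward_velocity(speed: float) -> None:
    """Hypothetical stand-in for a drone API call; here it just logs."""
    print(f"forward velocity: {speed:.2f} m/s")

def control_loop(duration_s: float = 5.0, hz: float = 10.0) -> None:
    """Hover below the dead-zone threshold; scale speed with focus above it."""
    for _ in range(int(duration_s * hz)):
        focus = read_focus_score()
        if focus >= THRESHOLD:
            # Rescale [THRESHOLD, 1] -> [0, MAX_SPEED].
            speed = MAX_SPEED * (focus - THRESHOLD) / (1 - THRESHOLD)
        else:
            speed = 0.0
        send_forward_velocity(speed)
        time.sleep(1.0 / hz)

if __name__ == "__main__":
    control_loop()
```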

Perceptual Computing: Tongue Protrusion Detection

This work investigates a method of detecting tongue protrusion gestures using the tongue's color and texture characteristics. By taking advantage of recent advances in computer vision, real-time tongue gesture detection is possible with video streams from a standard web camera. Tongue gesture detection has the potential to supplement user interaction and provide a more immersive experience in applications such as games and video communication. It could also aid communication for mobility-impaired users. See 'Using Cr-Y Components to Detect Tongue Protrusion Gestures' for a description of a process presented at the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15) that uses YCbCr color space manipulation and a support vector machine to detect left, right, and downward tongue protrusions in real time.
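
A minimal sketch of this kind of pipeline, using OpenCV for the YCbCr conversion and scikit-learn for the SVM, appears below. The feature statistics, ROI size, and randomly generated training data are illustrative assumptions, not the published method.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def cr_y_features(bgr_roi: np.ndarray) -> np.ndarray:
    """Extract simple Cr/Y statistics from a mouth-region image.

    Mean and standard deviation of the Y and Cr channels are illustrative
    stand-ins for the paper's actual feature set.
    """
    ycrcb = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YCrCb)
    y, cr, _ = cv2.split(ycrcb)
    return np.array([y.mean(), y.std(), cr.mean(), cr.std()])

# Hypothetical training data: (mouth ROI, label) pairs where labels encode
# left, right, downward, or no protrusion.
rois = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(40)]
labels = np.random.choice(["left", "right", "down", "none"], size=40)

clf = SVC(kernel="rbf")
clf.fit(np.stack([cr_y_features(r) for r in rois]), labels)

# At runtime, the mouth ROI of each webcam frame would be classified the same way.
print(clf.predict([cr_y_features(rois[0])]))
```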

Televoting: Voting for Deployed Military Personnel

Many members of the armed services are overseas during elections and as a result cannot cast their ballots in person. Although the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) gives soldiers located overseas the right to mail in absentee ballots, these ballots often go uncounted because of shipping issues. This research investigates Televoting, an approach to Internet voting (E-Voting), modeled after telemedicine systems, that uses video communication technology. Televoting attempts to address security issues that have plagued previous E-Voting platforms by producing a paper ballot instead of storing votes on a server. See 'Televoting: Secure, Overseas Voting' for a discussion of the system design and the voting process users experience when using Televoting.
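
Purely to illustrate that design choice, the sketch below renders a voter's selections to a local printable file and never touches the network; the ballot format and helper function are hypothetical, not Televoting's actual implementation.

```python
from datetime import datetime, timezone
from pathlib import Path

def render_paper_ballot(selections: dict[str, str], out_dir: Path = Path(".")) -> Path:
    """Write the voter's selections to a printable local file.

    The key property shown is that nothing is transmitted to or persisted
    on a server; the only artifact is the ballot sent to a printer.
    """
    lines = [f"OFFICIAL BALLOT  {datetime.now(timezone.utc):%Y-%m-%d %H:%MZ}", ""]
    lines += [f"{race}: {choice}" for race, choice in selections.items()]
    path = out_dir / "ballot.txt"
    path.write_text("\n".join(lines))
    return path  # in practice this would go straight to a printer

print(render_paper_ballot({"President": "Candidate A", "Senate": "Candidate B"}))
```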

Multi-robot Surveillance Systems

Unmanned robotic systems are used in the military for surveillance and reconnaissance missions. However, current systems pair each robot with one or more dedicated operators. In addition, human-in-the-loop models and systems with an autonomy component create issues because human operators tend to intervene more frequently when the autonomy does not behave as they expect. This research investigates the impact spatial and temporal cues have on operators' trust in human-multi-robot systems. See 'Affecting operator trust in intelligent multirobot surveillance systems' for a discussion of the system design and of how these cues influence operator trust.

Selected Media