Labs & Groups

Animal-Computer Interaction Lab
Faculty: Melody Jackson, Thad Starner, Clint Zeagler, Scott Gilliland
Description: We explore the emerging area of animal-computer interaction, focusing on interfaces for inter-species communication and on the design and evaluation of interactive technology for users of multiple species.

Augmented Environments Lab
Faculty: Blair MacIntyre, Jay Bolter
Description: Lab activities focus on understanding how to build interactive computing environments that directly augment a user's senses with computer-generated material. Researchers are interested in augmenting the user's perception, and place particular emphasis on the interaction between the users and their environment.

Brain Lab
Faculty: Melody Jackson
Description: The Brain Lab explores innovative ways of accomplishing human-computer interaction through biometric inputs. Biometric interfaces identify and measure small changes in a person's behavior or physiological responses to certain stimuli. The work has potential in many areas, especially for providing individuals with disabilities a means of personal "hands-off" control of computers and other devices.

comp.social lab
Faculty: Eric Gilbert
Description: The lab focuses on the design and analysis of social media. According to their website, lab researchers "like puppies, mixed methods, and new students (particularly M.S.)."

Computational Behavior Analysis Lab
Faculty: Thomas Ploetz
Description: The Computational Behavior Analysis Lab focuses on applied machine learning, that is, developing systems and innovative sensor-data analysis methods for real-world applications. The primary application domain for the lab's research is computational behavior analysis, in which methods are developed for automated, objective behavior assessment in naturalistic environments.

Computational Enterprise Science Lab
Faculty: Rahul Basole
Description: The Computational Enterprise Science Lab focuses on the design, analysis, and management of complex enterprise systems (e.g. organizations, supply chains, business ecosystems) using information visualization, modeling/simulation, and system science approaches.

Computational Linguistics Laboratory
Faculty: Jacob Eisenstein
Description: This lab works on machine learning approaches to understanding human language and is especially interested in non-standard language, discourse, computational social science, and statistical machine learning.

Computational Perception Lab
Faculty: Aaron Bobick, Tucker Balch, Henrik Christensen, Frank Dellaert, Irfan Essa, Jim Rehg, Thad Starner
Description: The Computational Perception Laboratory (CPL) was established to explore and develop the next generation of intelligent machines, interfaces, and environments for modeling, perceiving, recognizing, and interacting with humans and for all forms of behavior analysis from data.

Computer Vision Lab
Faculty and Affiliates: Devi Parikh, Dhruv Batra, Stefan Lee
Description: The Computer Vision Lab (CVL) works on a range of problems in visual intelligence. These include, but are not limited to, building agents that can understand visual content, make decisions and act on that understanding, and communicate with humans in natural language about visual content. The lab aims for agents that are interpretable, demonstrate common sense, and can work effectively with humans to accomplish shared goals.

Contextual Computing Group
Faculty: Thad Starner
Description: The Contextual Computing Group develops applications and interfaces for the computer to be aware of what the user is doing and to assist the user as appropriate. Several current projects at the research stage are envisioned to work together to assist a user in routine tasks such as automatically scheduling an appointment, re-directing an urgent phone call appropriately based on the user's schedule and current activity, and recognizing that the user is engaged in conversation and would prefer to take the phone call later.

Contextualized Support for Learning
Faculty: Mark Guzdial
Description: Led by Mark Guzdial, the Contextualized Support for Learning (CSL) lab aims to create "collaborative Dynabooks." We are a team of faculty, graduate students, and undergraduate students who design and implement innovative technology with the goal of improving learning, then empirically explore the benefits and usefulness of that technology with real users.

Culture and Technology Lab (CAT)
Faculty: Betsy DiSalvo
Description: The CAT Lab studies how culture impacts the use and production of technology, focusing on learning applications, computer science education, and the design of new technologies, with culture as a point of convergence.

Design & Intelligence Laboratory
Faculty: Ashok Goel, Keith McGreggor, Spencer Rugaber
Description: The Design & Intelligence Laboratory conducts research into human-centered artificial intelligence and computational cognitive science, with a focus on computational creativity. Current projects explore analogical reasoning in biologically inspired design, visual reasoning on intelligence tests, meta-reasoning in game-playing software agents, and learning about ecological and biological systems in science education.

Electronic Learning Communities
Faculty: Amy Bruckman
Description: The concept that people learn best when they are making something personally meaningful—also known as constructionism—is the lab's guiding philosophy. Computer networks have the potential to facilitate community-supported constructionist learning. The Electronic Learning Communities Lab examines ways communities of learners can motivate and support one another's learning experiences.

Entertainment Intelligence Lab
Faculty: Mark Riedl
Description: The Entertainment Intelligence Lab focuses on computational approaches to creating engaging and entertaining experiences. Some of the problem domains they work on include computer games, storytelling, interactive digital worlds, adaptive media, and procedural content generation. They expressly focus on computationally "hard" problems that require automation, just-in-time generation, and scalability of personalized experiences.

Everyday Computing Lab
Faculty: Beth Mynatt
Description: We introduce a new area of interaction research, everyday computing, by focusing on scaling ubiquitous computing with respect to time. Our motivations for everyday computing stem from the desire to support the informal, unstructured activities typical of our everyday lives. Our goal is to understand the transformation of everyday life as computing is ubiquitously integrated into informal, daily activities and routines.

Graphics Lab
Faculty: Greg Turk, Karen Liu, Jarek Rossignac, Irfan Essa, Jim Rehg, Blair MacIntyre
Description: The Graphics Lab is dedicated to research in all aspects of computer graphics, including animation, modeling, rendering, image and video manipulation, and augmented reality.

Hx Lab
Faculty: Lauren Wilcox
Description: The Health Experience and Applications (Hx) Lab investigates how intelligent, interactive technologies can be designed and developed to facilitate personal health-related information awareness and understanding. We study, design and prototype computing tools for digital capture and communication of personal health status and progress, drawing from the perspectives of clinical caregivers, families, and individuals.

Information Interfaces Group
Faculty: John Stasko
Description: At the Information Interfaces Lab, computing technologies are developed that help people take advantage of information to enrich their lives. The lab group develops ways to help people understand information via user interface design, information visualization, peripheral awareness techniques, and embodied agents. The goal is to help people make better judgments by learning from all the information available to them.

Laboratory for Interactive Artificial Intelligence
Faculty: Charles Isbell
Description: Our fundamental research goal is to understand how to build autonomous agents that live and interact with large numbers of other intelligent agents, some of whom may be human. Progress towards this goal means that we can build artificial systems that work with humans to accomplish tasks more effectively; can be more robust to changes in environment, relationships, and goals; and can better co-exist with humans as long-lived partners.

Machine Learning and Perception Lab
Faculty and affiliates: Dhruv Batra, Devi Parikh, Stefan Lee
Description: We work at the intersection of machine learning, computer vision, natural language processing, and AI, with a focus on developing intelligent systems that are able to concisely summarize their beliefs across different sub-components or 'modules' of AI (vision, language, reasoning, planning, dialog, navigation), and interpretable AI systems that provide explanations and justifications for why they believe what they believe.

Magic Lab
Faculty: Jarek Rossignac
Description: Our research focuses on the design, representation, simplification, compression, analysis, and visualization of highly complex 3D shapes, structures, and animations.

Mobile Robot Laboratory
Faculty: Ron Arkin
Description: The role of the Mobile Robot Laboratory is to discover and develop fundamental scientific principles and practices that are applicable to intelligent mobile-robot systems. In addition, the laboratory facilitates technology transfer of its research results to yield solutions for a range of applications.

PIXI Lab
Faculty: Keith Edwards
Description: The PIXI Lab is a group of researchers at the GVU Center at Georgia Tech who are exploring the boundaries between interaction and infrastructure. We take a human-centered approach to our research: understanding the needs and practices of people through empirical methods, designing compelling user experiences that fit that context, and then building the underlying systems and networking infrastructure necessary to realize that user experience. We are dedicated to creating technology that is not simply usable but also useful.

Social Dynamics and Well Being Lab
Faculty: Munmun De Choudhury
Description: The Social Dynamics and Well-Being Lab studies, mines, and analyzes social media to derive insights into improving our health and well-being.

Social and Language Technologies (SALT) Lab
Faculty: Diyi Yang
Description: The Social and Language Technologies (SALT) lab aims to build socially aware language technologies. Broadly, we study both the content and social aspects of human language via methods from natural language processing, deep learning, and machine learning, as well as theories from social science and linguistics, with the aim of developing interventions that facilitate human-human and human-machine communication.

Socially Intelligent Machines Lab
Faculty: Andrea Thomaz
Description: The vision of our research is to enable robots to function in dynamic human environments by allowing them to flexibly adapt their skill set through learning interactions with end users. We call this Socially Guided Machine Learning (SG-ML), exploring the ways in which machine learning agents can exploit principles of human social learning. To date, our work in SG-ML has focused on two research thrusts: (1) interactive machine learning, and (2) natural interaction patterns for human-robot interaction (HRI).

Technologies and International Development Lab
Faculty: Michael Best
Description: The lab's research focuses on information and communication technologies for social, economic, and political development. In particular, the lab studies mobile phones, the Internet, and Internet-enabled services, examining their design, impact, and importance within low-income countries of Africa and Asia. The lab researches engineering, public policy, HCI/usability, and sustainability issues as methods to assess and evaluate social, economic, and political development outcomes.

Ubiquitous Computing Lab
Faculty: Gregory Abowd, Rosa Arriaga, Sauvik Das, Thomas Ploetz, Agata Rozga, Thad Starner
Description: We are interested in ubiquitous computing (ubicomp) and the research issues involved in building and evaluating ubicomp applications and services that impact our lives. Much of our work is situated in settings of everyday activity, such as the classroom, the office, and the home. Our research focuses on several topics, including automated capture of and access to live experiences, context-aware computing, applications and services in the home, natural interaction, software architecture, technology policy, security and privacy issues, and technology for individuals with special needs.

Work 2 Play Lab
Faculty: Rebecca (Beki) E. Grinter
Description: In the last decade, computing has left the office and entered people's domestic and recreational lives. Consequently, computing affects our lives, shaping not just how we work, but also how we play. Moreover, computing potentially allows individuals to blur the boundaries by letting us conduct domestic routines while in the office, or working from a cafe in an urban center. Researchers in the Work 2 Play Lab are interested in using a variety of empirical techniques to advance the state of the knowledge in how computing affects our lives from work to play.