Latest Episode
March 17, 2020
Featuring Drs. Blair MacIntyre and Jay Bolter
In a previous episode of the Interaction Hour, we discussed one space that could benefit from virtual reality. A group that included one of our faculty, Neha Kumar, was using the technology in education, working with local teachers to develop virtual lessons that showed improved engagement and performance. Today, we return to the topic. Virtual and augmented reality remain among the most promising technologies, but what they are, what they will become, and where we will benefit from them are still up for debate. Even more pressing are the potential pitfalls, such as privacy, which, without proper vigilance, could be exploited in much the same way social media has been.
Past Episodes
March 17, 2020
Featuring Dr. Matthew Gombolay
Are humans too willing to transfer trust to AI systems that may not yet have earned it? What factors lead to that trust? What's the threshold for how trustworthy a system like an autonomous vehicle must be before we deploy it worldwide, and how do we get there?
February 25, 2020
Featuring Shagun Jhaver
Online communities like Reddit or Twitter act as town halls, where opinions are shared and everyone, in theory, has a voice. Only, it doesn't always work like that. What was once optimistically viewed as a boon to public discourse, promising open and logical discussions where anyone with a keyboard and an internet connection could speak their piece, has instead become a bit of a Wild West. Message boards have degraded into sources of harassment, misinformation, radicalization, and more. The question is: How can you moderate while also maintaining the promise of free speech?
February 25, 2020
Featuring Dr. Matthew Gombolay
Machine learning. It's a term often used, but not always understood, in the world of technology. Every day, new products and capabilities are introduced and adopted by people all over the world, but there's a disconnect between researcher and consumer. How is a system trained? Why does it make certain decisions under certain conditions? What kind of reasoning goes into its decision-making, and how can we trust that its choices are informed, objective and, ultimately, correct?
February 25, 2020
Featuring Dr. Diyi Yang
Think about the most recent news headline you read. Was it completely objective, devoid of any presupposition of truth or language that might lead readers down one particular path of understanding? Or did it, more likely, contain subtle cues about how the message was being framed, casting doubt on its veracity or reliability? Every day, we are inundated with texts that, on the surface, proclaim themselves arbiters of truth but, through simple word choice and message framing, can bias their readers. Luckily, new tools are being developed to help us become more critical consumers of media. In this podcast, we chat with Diyi Yang about how artificial intelligence can help us identify subjective bias in text, and how AI itself can reflect our own preexisting biases.