Recent concerns from tech luminaries about a robot apocalypse may be overblown, but artificial intelligence researchers need to start thinking about security measures as they build ever more intelligent machines, according to a group of AI experts.
Paul Stamatiou knows how to get your attention online.
The popularity of image-sharing sites like Instagram has made photo filters — single-click visual enhancements with names like “earlybird” and “inkwell” that make pictures appear richer, grainier or dated — part of the everyday digital vernacular.
Within a few decades, perhaps sooner, robotic weapons will likely be able to select and attack targets – including humans – with no human controller needed.
Augmented reality systems, which overlay digital interfaces onto the physical world, may eventually edge out virtual reality and significantly alter human perception. While VR products such as Oculus Rift, Gear VR and HTC’s Vive get closer to launch, timelines for augmented reality devices such as Microsoft’s HoloLens and the Google-backed Magic Leap remain vague. Even so, some believe AR is more likely to become integrated into our everyday activities and, in turn, affect the way we interact, work and communicate.
It’s surprisingly transfixing watching robots – great, lumbering, 7ft-tall humanoids – trying to walk, use power tools and drive a car. But it’s not the only spectator sport on hand at the DARPA Robotics Challenge.
For 40 days and 40 nights since Spring 2015 commencement, College of Computing faculty have traveled the globe. They threw off velvet tams and threw on Tevas for busy summers – trekking to conferences, teaching abroad, creating new partnerships for Georgia Tech, and more.
In a damning indictment of this constant, inexorable, shouted conversation we call life in the Twitter age, researchers have found that roughly a quarter of all tweets are not credible.