While the sessions on marketing/communications and the state of journalism seemed more prominent, SXSW Interactive wouldn’t be what it is without the innovators venturing into new, imaginative realms, developing technologies that are already shaping the future. Days Three and Four provided us with an opportunity to tour the Trade Show and explore other venues where possible futures were on display.
More VR/AR

SXSW 2016 introduced the democratization of virtual reality/augmented reality (VR/AR) technology: smartphones strapped into a variety of goggles made possible VR/AR experiences once available only in large lab environments. This year showed that companies are doubling down on the possibilities of VR, primarily in the world of gaming. Everywhere we turned at the Trade Show, companies were featuring applications of VR. In one of the more comprehensive examples, Sony featured a VR headset combined with a tactile-response body suit as part of its WOW Factory. The headset places the player visually in a virtual environment, while the body suit allows for accurate movement within that environment as well as tactile feedback from the virtual space. I didn’t have the opportunity to try the immersive experience (a large crowd had gathered, and any line looked long), but what I observed was the most immersive VR/AR experience I have encountered thus far. Once this technology is further refined and its cost is brought within reach of the average consumer, something like this system is likely the “game controller” of the future.
Artificial Intelligence (AI) is already baked into many of the technology products in our pockets and around our homes. Apple’s Siri is a form of AI, and many mobile apps have automated functions that do far more than respond; they actually anticipate what we might need based on feedback from the sensors and information available to the device. If I ask Siri, for example, to remind me to pick up milk on the way home, Siri works with features in the OS to create a GPS-driven, time-specific notification about purchasing milk the next time I am near a grocery store. This example, however, is just the tip of the iceberg. My iPhone demonstrates the myriad ways it learns and responds automatically to its environment. Maps and Waze learn my commonly used destinations relative to time of day. So when I open either app in the morning to check for traffic issues, my iPhone already knows where I am going and how long it will take to get there, without my actually telling it where I plan to go.
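To make the mechanism concrete, here is a minimal sketch of how a location-triggered reminder could work. This assumes nothing about Apple’s actual implementation; the names (`Reminder`, `due_reminders`) and the simple geofence approach are invented for illustration. The idea is just that a reminder carries a location and a radius, and the device fires it when a GPS fix falls inside that fence.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Reminder:
    text: str
    lat: float       # latitude of the trigger location (e.g. a grocery store)
    lon: float       # longitude of the trigger location
    radius_m: float  # fire when the device comes within this distance

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def due_reminders(reminders, device_lat, device_lon):
    """Return the reminders whose geofence the device has entered."""
    return [r for r in reminders
            if haversine_m(r.lat, r.lon, device_lat, device_lon) <= r.radius_m]
```

A real OS would, of course, tie this to background location updates and a store directory rather than polling a list, but the geofence test at the core is the same.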
In our homes, the Nest and other connected IoT (Internet of Things… I know, it’s not the best of names, but it is descriptive and the one that stuck) devices handle functions around the house, adapting its conditions to our needs. My Nest, for instance, knows through experience the optimal heating and cooling schedule for our house. Tying into its cloud-based back end, it knows the weather conditions, senses whether we are moving about the house, and adapts itself to maintain the optimal climate. Other devices, like smart outlets, smart lights, and network-enabled appliances, can talk to one another and work together to create optimal conditions in our homes. As the technology develops, IoT devices will be self-directed enough to respond to our location and the time of day without our having to monitor and control every detail. While some may miss managing these myriad details, the ultimate smart home (or office) frees our mental and physical energies for other things. I’m just looking forward to the smart lawn mower, edger and trimmer… but I fear that is a ways off.
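A toy sketch of the kind of rule a learning thermostat might apply (this is not Nest’s actual algorithm; the function name, thresholds, and offsets are all invented for the example): back off the setpoint when the house is empty, run cooler overnight, and otherwise hold a comfort temperature.

```python
def target_temp_f(hour, occupied, outside_f, comfort_f=70, away_offset=6):
    """Pick a thermostat setpoint from time of day, occupancy, and weather.

    A stand-in for what a learning thermostat does: drift toward the
    outdoor temperature when nobody is home, run cooler for sleeping
    hours, and hold the learned comfort setpoint otherwise.
    """
    if not occupied:
        # Nobody home: save energy by relaxing the setpoint toward outside.
        if outside_f < comfort_f:
            return comfort_f - away_offset   # let it cool off in winter
        return comfort_f + away_offset       # let it warm up in summer
    if hour >= 22 or hour < 6:
        return comfort_f - 2                 # cooler for sleeping
    return comfort_f
```

A real device learns `comfort_f` and the sleep window from observed behavior rather than taking them as parameters, and folds in sensor data well beyond a single occupancy flag.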
Bots were another form of AI under discussion at SXSW 2017. Companies are developing bots and deploying them into customer service and other environments. The next time you jump onto a company website and use its “live chat” service, you may not be talking to a person on the other end, but to a natural-language-processing virtual assistant with the capacity to address a wide variety of customer inquiries. Bot implementations like these have myriad applications across industries, handling many functions more efficiently than our current models.
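A minimal sketch of the routing idea behind such bots: production systems use trained natural-language models, but even this keyword matcher shows the shape of the problem, mapping an utterance to a known intent, answering the ones it can, and escalating to a human agent when nothing matches. The intents, keywords, and replies here are all made up for illustration.

```python
# Hypothetical intent table: keywords that signal an intent, and a canned reply.
INTENTS = {
    "order_status": (["order", "shipped", "tracking"],
                     "Let me look up your order…"),
    "returns":      (["return", "refund", "exchange"],
                     "Here is our returns policy…"),
    "hours":        (["hours", "open", "closed"],
                     "We're open 9am-6pm, Monday-Saturday."),
}

def route(message: str) -> str:
    """Answer the message if an intent matches; otherwise escalate."""
    words = set(message.lower().split())
    best_reply, best_hits = None, 0
    for keywords, reply in INTENTS.values():
        hits = len(words & set(keywords))
        if hits > best_hits:
            best_reply, best_hits = reply, hits
    return best_reply or "Connecting you to a human agent…"
```

The business appeal is visible even at this scale: the known cases resolve instantly and for free, and only the residue reaches a paid agent.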
AI is already infused into our lives, and its role will continue to expand as it is embedded within a growing number of devices. The goal within the AI/IoT community is to apply these technologies in ways that enhance human life, taking over basic, functional operations and systems and leaving us free to direct our energies to other ends.
Last year Google proclaimed (rightly) that the grand experiment of the self-driving car was a success. More than 2.5 million miles in a variety of driving conditions, without an accident caused by the test vehicle, realized at a basic level the promise of this technology and shifted the burden to the social and legal factors that will move driverless vehicles into production. SXSW 2017 featured more sessions on driverless cars, with a lingering edge of uncertainty about when we will start seeing full-fledged driverless cars on the road. The freight industry has the most to gain from this technology, and I anticipate seeing it applied in that arena first. At the consumer level, once people see these vehicles in action and experience their benefits and convenience, they will eventually adopt the technology. Driverless cars have a sense of inevitability about them, promising to give us more time and to get us where we are going more safely and with less congestion. Adoption and legislation will be the most difficult hurdles to clear as this technology emerges.
Projected Interfaces

Sony featured several implementations of out-of-the-box, consumer- and business-ready projected interactive environments. When we visited Mashable House and were invited into their Speakeasy (thanks, Carlye!), we came across several of these small, lightweight, Android-driven devices that project an interactive environment onto a flat surface. Once the environment is projected, all of the functionality of a touch screen is available to the user. We played games, manipulated maps, and explored other environments on the devices. At Sony’s offsite display, two projected environments allowed people to manipulate objects in space as part of a 3D spatial design project. While robust and responsive, there were moments when the way the image was projected interfered with my interaction within the space. That glitch aside, SXSW 2017 will be remembered as the year these interfaces arrived at the consumer level. It will be interesting to see the many implications of this technology’s availability.
A Not So Paranoid Android

The most awe-inspiring and slightly creepy tech moment of SXSW occurred on our journey through the Japan Factory. NTT and Osaka University’s famous professor Hiroshi Ishiguro have collaborated to design the next generation of android. Seeing “Android U” leaves you in a state of disbelief: from her appearance and movements, it is difficult to tell that she is not human. The bridge the engineers are desperate to cross is into the realm of natural conversation. While we did not see a complete demonstration, the descriptions indicated that they are moving closer to an android that is nearly conversant. This exhibit “expanded my mental dimensions,” as one of the clearly Japanese-to-English signs read. In other words, it blew my mind.
An Era of Careful Innovation
While tech innovation remains a critical part of the SXSW Interactive experience, there was, for the second year, nothing entirely and completely new. Each of the technologies mentioned above is an incremental advancement on things that developers and engineers have been working on for years, sometimes decades. Even in the realm of social platforms there was nothing truly new. Facebook launched its Stories feature in seeming conjunction with SXSW Interactive; a new feature, yes, but Stories is essentially a Snapchat layer within the Facebook universe. It appears, as I suggested last year, that the tech sector is slowing the pace of year-over-year disruptions in order to move forward more carefully. The aim seems to be to better integrate technologies whose personal and social value is fairly immediate, rather than innovating for the sake of innovation. However, with renewed confidence in the markets under a business-friendly Administration, I imagine we will see the pace of more aggressive innovation pick up once again in the coming years.
If the current era of careful innovation does give way to something more aggressive, the next economic disruptions are already apparent in the broad implementation of the technologies on display this year. First, there is a solid business case for the mass deployment of autonomous vehicles in several current enterprises; this implementation could be quite a blow to the human capital employed in the trucking industry. Second, the emergence of more capable bot technology could unseat, at the least, the first layer of people who staff customer service points for brands; from a purely fiscal standpoint, the business case favors bots and kiosks over people for first-layer, front-line support. Finally, the continued development of androids better able to carry on a conversation points to disruption in many different service industries. If I am right that tech development will pick up its pace in this more open and business-friendly era, look for these and other implementations to continue to disrupt the market and challenge all of us to adapt and keep innovating.