If you’ve ever watched a television show aimed at children, you’ll notice that the characters often break the fourth wall to interact with their pint-sized audiences. The actors will pause after they ask a question in anticipation of a tiny voice shouting the answer on the other side of the screen. Children come to believe that the characters on TV can see and hear them just as clearly as they can see Big Bird and Sid the Science Kid.
New smart TV technology is making such beliefs much more rational, even among adults. The "smart" designation on new television sets now signifies not only software upgrades but hardware components as well, especially microphones and cameras that enable voice control, motion sensing, and facial recognition. These features usher in a new world of interactive and intuitive interfaces, but they also introduce distinct security and privacy concerns that are already making consumers uncomfortable.
Take a recent piece from TechSpot, which labels smart TVs "dangerous" spy tools because of their cameras, microphones, and vulnerability to hacking. At any moment, the piece suggests, someone could be watching you watch TV. Such alarmist claims mark a shift in how televisions are perceived; until now, they have been largely immune to the privacy controversies so common on the social web.
While nearly all internet-connected technology has a degree of vulnerability, consumers can easily weigh the benefits against the risks (and they continuously do on Facebook) before they make a purchase or download. But that risk assessment may not always be favorable for smart TVs, as many people aren’t yet sure why their television needs a camera and microphone. Consumers like watching Netflix on their TVs, but some other “smart” features may still feel gimmicky or unnecessary—especially if they inadvertently allow hackers, marketers, and government agents into their living rooms.
Of course, it should be emphasized that there isn't any evidence that anyone's home smart TV has ever been hacked, only that the possibility exists. Speakers at the recent Black Hat security conference argued that any smart TV app that communicates with the outside world can be compromised through remote access, which could allow hackers to record video from the television's camera, steal usernames and passwords, and redirect the browser to virus-infested websites that could cause further damage. Such charges were persuasive enough that Senator Chuck Schumer called for minimum security standards on all units.
While manufacturers scramble to resolve such vulnerabilities, the burden falls on marketers to present a messaging solution. Their task is twofold: first, continue to highlight the benefits of smart TVs; and second, get ahead of the narrative by addressing security issues prominently as part of the brand promotion. New technologies have always arrived alongside new privacy concerns, and it's far better to address those concerns with buyers upfront than to let others drive the conversation. Once users are satisfied with the privacy precautions in place, they'll embrace the technological upside.
None of this is to suggest that the identified security issues should be marginalized or brushed aside. Smart TVs are under suspicion partly because we're unsure not of how hackers might use them, but of how we will. Once we grow accustomed to smart TVs' functions and, more importantly, the value they add to our lives, we will be far more willing to accept the same sort of privacy risks we already tolerate with our smartphones and laptops. Just as Millennials, naturally acclimated to social media, roll their eyes at Facebook privacy concerns, today's smart TV security worries may soon look like an equally dated generational divide.
Ultimately, the ability to access personalized content experiences (think: automatic streaming, next-level show recommendations, individual ads) will grow as smart TV technology expands to include facial and voice recognition—and in a not-too-distant future, mood analysis. These highly individual features are part of an overall move toward a semantic web, where we won’t have to search for relevant information, because it will find us.
To get there, we will have to collectively determine how comfortable we are with the possibility that someone (or something) has access to our personal information. Does handing over that data, or the ability to access it, always increase our enjoyment of a technology? What if Big Bird is really watching us watch him? These new developments may leave us vulnerable in the "privacy" of our own living rooms or, like children, we may find they simply add to our enjoyment.
Freddie Laker is the founder and CEO of Guide, a technology startup that turns online news and blogs into video. He was previously VP of Strategy at SapientNitro, one of the world's largest digital agencies.