This post was originally featured on the Pristine Blog.
I've argued that context is king in eyewear computing. I'd like to take that one step further and clarify when apps should and shouldn't exist on Glass. The operative term in that sentence is "when."
Most of the major consumer web services on Glass (Facebook, Twitter, Gmail, CNN, NYT, etc.) are content services. These services thrive because they provide users with an enormous amount of fresh content every day. But Glass isn't a content-driven form factor (unlike the iPad, which is very content-centric); Glass is a contextual form factor. There's a mismatch between the major consumer web services and Glass. None of them are Glass-centric. Phrased differently, would any of the apps listed above have been written Glass-first?
I like to phrase things in salient terms that Google never would, but probably should: if your Glass app isn't relevant to what the user is physically doing at a given moment in time, your app shouldn't be in view at all. My stern language is actually a superset of Google's more gently worded Glass development guidelines. The window of time in which relevant, useful information can be presented is narrow, usually just a few seconds. This is an inherent problem for all of the traditional consumer web services, and the Glass Mirror API exacerbates it. These services have no way of knowing what you're doing RIGHT NOW; they push information that has nothing to do with what the user is physically doing.
I understand Google's thinking behind the design of the Mirror API. The Mirror API makes it extremely easy to develop apps that send bundles of HTML-encoded information to the user. The problem is that the Mirror API pushes information without enough context. Neither the Mirror API nor natively written Android apps on Glass have any way of knowing what the user is physically doing at a given point in time, which means the information being pushed can't be all that contextual. Yes, Glass supports geo-fencing, which provides some location-based context. Even so, location-based context rarely correlates precisely with what the user is physically doing within a given five-second window. When information is pushed to Glass, it's only immediately viewable for a few seconds. Given the intrinsic latency of the Mirror API and the lack of specificity provided by geo-fencing, it's practically impossible to push truly contextual information to Glass.
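To make the mismatch concrete, here's a minimal sketch in Python of the two ingredients just described: a timeline-item payload modeled on the Mirror API's card format (the `html`, `notification`, and `speakableText` fields follow the Mirror API's timeline item; the coordinates and helper names are hypothetical). Notice that nothing in the payload describes what the user is physically doing, and the geofence check, the platform's coarsest context signal, only answers "is the user near this point?"

```python
import math


def make_timeline_item(html, speak=False):
    """Build a Mirror API-style timeline-item payload.

    The card carries content (HTML) and a delivery hint, but no field
    describes the user's current physical activity -- that's the gap.
    """
    item = {
        "html": html,
        "notification": {"level": "DEFAULT"},  # announce the card with a chime
    }
    if speak:
        item["speakableText"] = "New update"  # read aloud on "read aloud" tap
    return item


def within_geofence(lat1, lon1, lat2, lon2, radius_m):
    """Return True if two points are within radius_m meters (haversine)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m


# A content card, ready to push -- context-free by construction.
card = make_timeline_item("<article><p>Score update: 21-14</p></article>")

# Location-based "context": the user is within 100 m of a (hypothetical)
# coffee shop. True, but it says nothing about whether they're ordering,
# walking past, or sitting in a meeting next door.
near_shop = within_geofence(30.2672, -97.7431, 30.2675, -97.7431, 100)
```

The point of the sketch is that the geofence boolean is the richest signal available to a Mirror API service at push time; everything else about the user's moment-to-moment activity is invisible to it.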
I've spoken to quite a few individuals who have app ideas for Glass. Glass is a unique platform with specialized marginal value. Successful Glass apps must be contextually driven and must take advantage of these unique traits. Context is king in eyewear computing. Developers who would like to make a significant sum of money must hold themselves to that standard.