Published: Feb 14, 2013
Thanks to Marco Dini for translating this article to Italian.
For the real football fan, seeing a live game at the stadium can be very unsatisfying. Today’s mega-stadiums make it difficult to see or hear much of what’s happening on the field. Fans miss out on the detailed commentary and nuanced information that live television provides. They don’t get the benefits of replays or see the specifics of the technical moments.
The NFL knows this is a problem and has partnered with a New York company called Fanvision Entertainment to introduce a solution. Fanvision sells or rents fans a handheld device that’s slightly larger than a standard mobile phone with a slew of features aimed at the devoted football fan.
The device provides instant replays, player stats, coach-to-quarterback conversations (on a 15-second delay) and even the coach’s locker room speech. All this only happens on these special devices within the confines of the stadium.
To accommodate a system like this, stadiums are wiring up. Gillette Stadium in Foxboro, MA, home of the New England Patriots, has put in 300 wi-fi hotspots and snaked thousands of feet of networking cable through the stadium. Fans can now participate in the game through their portable devices, in addition to simply being there.
Football isn’t the only sport looking at these portable screen experiences. We walk around with hyper-connected supercomputers in our pockets and purses, which makes it easy to rethink the experiences everyone can have.
Imagine being in a foreign city, trying to get across town to catch a train. Not knowing where you are relative to the train station. Getting to the station and struggling to find your way to the train. Interpreting the schedule to ensure you’re on the right train at the right time. Knowing what your food options are before you get onboard.
What could we do with that hyper-connected supercomputer in our pocket? We could start to design an experience that puts us at ease and helps us navigate the world.
We could start simply, by giving directions and time estimates for when you’ll arrive at the train station, whether you take a cab, walk, or use other local transportation. This is easy to do with today’s technology.
On the same screen, we could have the information about the train we hope to catch. If it looks like we may have trouble making it to the station in time, we could give information about other routes to get to our destination city.
When we arrive at the station, the screen could change once again. Now it might present a countdown to the departure of our train. It could give us routes to find the nearest restrooms or the food we like. (If we want to get super advanced, we could order our food while we’re on our way to the station, pay for it electronically, and pick it up once we arrive.)
As our departure time nears, the screen could change once more to tell us how to get from where we are in the station to the track. This could also indicate where the class of car we’ve purchased lives (Is it at the near end, or do we have to walk down the train a bit?) and even where we’ll most likely find the empty seats.
As we’re seated on the train, our screen can produce our electronic ticket for the conductor and list the services on the train, like the cafe car. A display with the map of our progress and the time to our stop will also be helpful as the trip proceeds.
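The train scenario above can be pictured as a simple state machine: the app watches the traveler’s context and swaps screens at each transition. Here is a minimal sketch in Python; all of the screen names, location labels, and the ten-minute threshold are invented for illustration, not part of any real rail or Fanvision system.

```python
# Sketch: picking which screen to show from the traveler's context.
# Location labels, screen names, and thresholds are all hypothetical.

from dataclasses import dataclass

@dataclass
class TravelContext:
    location: str              # "en_route", "at_station", or "on_train"
    minutes_to_departure: int  # time until the train leaves

def choose_screen(ctx: TravelContext) -> str:
    """Return the screen most useful for the traveler's current context."""
    if ctx.location == "en_route":
        # Directions to the station plus an arrival estimate.
        return "directions_and_eta"
    if ctx.location == "at_station":
        if ctx.minutes_to_departure <= 10:
            # Departure is close: route to the track and our car's position.
            return "track_and_seating"
        # Plenty of time: countdown, restrooms, and food options.
        return "countdown_and_amenities"
    if ctx.location == "on_train":
        # E-ticket, onboard services, and a progress map.
        return "ticket_and_services"
    return "directions_and_eta"  # sensible default

print(choose_screen(TravelContext("at_station", 25)))  # countdown_and_amenities
```

The interesting design work isn’t in the branching itself, which is trivial, but in choosing where those breakpoints belong, which is exactly what observing real travelers reveals.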
The train-catching scenario above takes the Fanvision idea a step further. It adds an awareness of context.
Fanvision, while a nifty idea, is only aware that the game is going on. Its information changes follow a broadcast model: every fan sees the same content at the same time.
But in the train scenario, we see an awareness of the user’s position, their travel goals, and their food needs. This deeper awareness creates a better experience, but it requires a more advanced approach.
All of the individual pieces are available today. The mapping and routing applications are freely available (though companies like Apple have discovered they aren’t as simple to implement as they appear). Schedule applications have been around for decades. Restaurant reviews and “find the nearest restroom” functionality is easy to find.
Individually, these are pretty simple applications. However, it’s their combination that makes the experience compelling.
Apple has taken this idea of awareness and built it into a wonderful store application for their iOS devices. Customers can, from anywhere in the store, summon a salesperson, schedule a Genius Bar appointment, or even purchase the product they are holding by scanning it and charging the card associated with their Apple ID.
Again, the individual elements of the application aren’t particularly new or novel. But the combination of these elements has started a revolution in retail. Pretty soon, we’ll see these types of applications in every retail environment. (Retailers call this type of thinking “cross-channel design.” However, I prefer “context-aware” because it reinforces the idea that we should be paying attention to the users’ context.)
We’ve never really designed for context before. Sure, we could tell if the user was logged in or if they had established an account with us. But those weren’t really the users’ current context; they only dealt with system states.
Real context-aware applications combine several features, based on what we’re trying to do and how far along we’ve gotten. They take full advantage of the capabilities of the technologies we’re carrying. And they adapt to what’s most important right now.
Existing design processes don’t work for this. A static wireframe can’t capture the dynamic updating of these displays. We’ve never had to think about how the visual design adapts to the changing needs of the user.
This process starts with mapping out the current experience. If we watch people traveling to board the train, we can discover what they need and when they need it. We can see where the context breakpoints occur, where information needs to shift in the application.
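One way to capture those observations is as an experience map: each stage paired with the information travelers needed there, with a breakpoint wherever those needs change. A small sketch, with entirely hypothetical stage names and needs drawn from the train scenario:

```python
# Sketch: field observations recorded as an experience map.
# Each entry pairs a journey stage with the information needed there.
# Stage names and needs are illustrative, not from a real study.

journey = [
    ("traveling to station",  ["directions", "arrival estimate"]),
    ("inside the station",    ["departure countdown", "restrooms", "food"]),
    ("approaching the track", ["track number", "car position", "empty seats"]),
    ("seated on the train",   ["e-ticket", "onboard services", "progress map"]),
]

def context_breakpoints(stages):
    """Transitions between stages whose information needs differ."""
    return [
        (before[0], after[0])
        for before, after in zip(stages, stages[1:])
        if set(before[1]) != set(after[1])
    ]

for frm, to in context_breakpoints(journey):
    print(f"{frm} -> {to}")
```

Each printed transition marks a moment where the display should reorganize itself, which is a far more useful deliverable for this kind of work than a single static wireframe.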
Designing for situational awareness will become the norm. Right now, it’s still a burgeoning frontier. And like all frontiers, it’s all about experimenting and seeing what works. The teams that are most comfortable with working in fluid conditions are the ones who will see the biggest advantage.
Jared M. Spool is the founder of User Interface Engineering. He spends his time working with the research teams at the company and helps clients understand how to solve their design problems.
Where do you think the future of context-aware design is headed? Tell us about it on our blog.