Ok…where to start with this one?! I’ve been doing a lot of digging lately. I’ve also been asking myself some pretty big questions. No, not the “what is life all about” question. More like the “where is the web going” question. What direction will technology take? How will technology fit into our lives in 5 years, 15 years, 50 years? I recently wrote a post on artificial intelligence and its place within the web; obviously it’s not there yet. Well…not truly there yet. It might be someday, but what I think we can count on as a virtual certainty is the concept of context-aware development and the Internet of Things (IoT).
What is the Internet of Things…
I’m sure you’re all familiar with the Internet of Things, yeah? Well, just in case you aren’t, here’s kinda the concept. The IoT is everything that essentially has a technological pulse, along with its ability to collect, compile, and exchange data. From the electronic control module of an automobile to the smart refrigerator, your cellphone to your coffeemaker, headphones, wearables, even your washing machine. The Internet of Things is all these things being able to connect, not only to the internet, but to each other. And that’s the rub, right? I mean, think about what you would do if your car already knew the best route to work depending on the flow of traffic for that particular day. Or your FitBit woke you up and then signaled your coffeemaker to start brewing that morning cup of joe! It’s a compelling concept and one that’s quite executable… I think.
What I want to talk about is its connection to context-aware development. Now, some of you may not be familiar with this term, so let me elaborate. Development in websites, applications, mobile, and the like has taken an awe-inspiring (in my mind!) trajectory this last decade. Going from static HTML/CSS sites, to content management systems like WordPress and Drupal, and then onto scripting and programming frameworks that execute both client-side and server-side activities. We currently have more technologies that can talk to each other and work with each other than ever before. But context-aware development brings in the outside world.
Yeah, think about it for a minute. What is context? It’s the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed (courtesy of Google). So, context-aware means that the behavior of a device will be enhanced depending on the context. It will essentially take outside factors (like sunlight, movement, and signals from other devices) into account to determine what the best user experience should be. Let me explain. Let’s say you’re walking down a street; it’s super cloudy, then all of a sudden the clouds break and the sun comes out. Imagine the website you’re viewing on your mobile phone adjusting the contrast so it’s easier to see the page. Now, I know what you’re thinking: there’s auto-contrast. But that’s built into your phone, not the website itself.
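Here’s a rough sketch of what that could look like in code. Browsers have been experimenting with an `AmbientLightSensor` (part of the Generic Sensor API, still experimental and behind flags in Chromium) that reports illuminance in lux. The lux thresholds below are my own made-up guesses, not any standard:

```typescript
// Map ambient illuminance (lux) to a page-level contrast mode.
// Thresholds are illustrative guesses, not standards.
type ContrastMode = "low" | "normal" | "high";

function contrastFor(lux: number): ContrastMode {
  if (lux < 50) return "low"; // dim room: soften the contrast
  if (lux < 10_000) return "normal"; // typical indoor light
  return "high"; // direct sunlight: crank it up
}

// In a browser you might wire it up roughly like this
// (AmbientLightSensor is experimental and behind a flag):
//
//   const sensor = new AmbientLightSensor();
//   sensor.addEventListener("reading", () => {
//     document.body.dataset.contrast = contrastFor(sensor.illuminance);
//   });
//   sensor.start();
```

A stylesheet could then key off `[data-contrast="high"]` to bump up the page’s contrast ratio — the website itself reacting to the sun, not the phone.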
I’ll give you another example. Let’s say you are on the subway reading a Boston Business Journal article. There’s a particularly shaky section of track, and the subway starts rattling back and forth, making it extremely hard to read the text. Well, what if the text enlarged itself to make it easier to read? That would be a much better experience, would it not? Or the button that you want to click on gets bigger as a direct product of the condition of your environment. Or maybe the button turns white in a dark room, and black in a light room. That’s pretty kick-ass if you think about it.
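The subway idea is actually within reach today, since browsers expose device movement through `devicemotion` events. A minimal sketch — the shake thresholds and scale factors here are invented for illustration:

```typescript
// Scale body text up as device shake intensity rises.
// Inputs are acceleration components in m/s^2, like a
// devicemotion-style reading; thresholds are illustrative.
function textScaleFor(ax: number, ay: number, az: number): number {
  const magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
  if (magnitude < 2) return 1.0; // steady: normal size
  if (magnitude < 6) return 1.15; // light rattle: nudge it up
  return 1.3; // shaky subway car: big text
}

// Browser wiring sketch:
//   window.addEventListener("devicemotion", (e) => {
//     const a = e.acceleration;
//     if (a && a.x != null && a.y != null && a.z != null) {
//       document.body.style.fontSize = `${textScaleFor(a.x, a.y, a.z)}rem`;
//     }
//   });
```

In practice you’d want to smooth the readings over a second or two so the text doesn’t jitter between sizes on every bump.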
Wearables…connecting your body to…well, everything else
The Apple Watch came out on the market and interested a certain section of the population. I don’t think it’s selling like hotcakes, but it’s still a pretty cool device. And let’s face it, it’s made by Apple, so it’ll just keep getting better as new versions of it come out. But there are a few things the Apple Watch and other wearables can do. The biggest thing I see is that most of these wrist devices can receive and collect information from the most pivotal environmental factor to a good user experience—your body!
Now, it can receive data like your heart rate (pulse), sleeping patterns, number of steps, type of activity (like jogging/cycling) and so on. As of right now, almost all of these devices need to be synced up with an iPhone or similar device. But, again, as time goes on I think we’ll see these wearables getting smarter and more compact, just like the cellphone. All the data it receives can be super useful in giving the wearer the ultimate experience. I also think that Google Glass (another wearable) will eventually make a big splash when the world is ready. For some reason it wasn’t well-received. I wonder why? Blog post for another day.
Let me give you a few examples of context-aware devices:
Let’s say you’re in the middle of a workout and your heart rate is elevated. Someone sends you a text and your wrist device holds off on letting it through until your heart rate is back to normal and you can look at it. Or the flip-side to that. Let’s assume you’re a doctor in the middle of a workout and someone needs emergency heart surgery. The sender can label the text message (or phone call) as “exigent” and your wearable can send you a quick buzz signifying that you might want to take this call! Or maybe you’re a senior citizen with a wearable that can tell if you’re having a heart arrhythmia; you can’t get to a phone because of the pain, and your wearable connects you to emergency services. Either way, you get the drift.
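The workout-notification idea boils down to a small piece of decision logic. Here’s a sketch — the field names, the `urgent` flag, and the 1.2× resting-rate threshold are all hypothetical, not any real wearable SDK:

```typescript
// Decide whether to deliver a notification now or hold it until
// the wearer's heart rate settles. Thresholds are illustrative.
interface WristNotification {
  urgent: boolean; // sender-flagged "exigent" messages always get through
}

function deliverNow(
  n: WristNotification,
  heartRateBpm: number,
  restingBpm: number
): boolean {
  if (n.urgent) return true; // emergency surgery: buzz immediately
  return heartRateBpm < restingBpm * 1.2; // hold routine texts mid-workout
}
```

The same shape of rule could drive the arrhythmia case too: an abnormal rhythm pattern flips a flag, and the device escalates to emergency services instead of deferring.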
HMIs…the connection for all connections
Human Machine Interface (HMI) is a pretty broad term that can be applied liberally to iPods, washing machines, coffeemakers, automobiles, stereos, computers, and so on. It really started in the industrial space with things like heavy machinery, but in the age of computers that industrial association has kind of subsided. An HMI essentially provides a graphical user interface (GUI) that connects a human to a machine. A great example of this is your car stereo (I have XM!). It’s a visual representation of all the different channels you’re going through to get to the music you want. You can also control the volume, bass, treble, etc. Now, with the Internet of Things, the ambulatory devices should be able to connect with the stationary ones. Human Machine Interfaces will allow for connections to be made (and synced) from your cellphone to your car, your wearable to your coffeemaker, your iPad to your whatever!
Here’s another example:
Let’s take everything we’ve learned and try to put it into scenarios that would work. Think about this, you get home from a long morning run and you need to get ready for work. Your wearable locates how far away you are from your home and signals your coffeemaker to start brewing when you get close. You’ve got a fresh pot of coffee when you get home. But wait, there’s more.
You forgot to wash your clothes last night so you program your washing machine to start a short cycle, then throw your clothes in before you hop in the shower. You get out of the shower and throw your clothes in the dryer. Then you get a text from your coworker saying that the morning meeting has been pushed up by 30 minutes, oh shit! That signals your car to start and put the AC on (or get warm if you’re in a colder climate) and it also signals your car to find the most appropriate and quickest route to the office that morning.
Now, your car knows that you’re going to be rushed (it can feel your elevated heart rate), so it finds a station to play soothing music (think Enya) while you drive to work. You get your clothes out of the dryer, throw on that nice collared shirt and hop in your air-conditioned car that’s playing relaxing tunes and already knows the quickest way to get you to that rescheduled meeting. Life is good! Sah-weet!!!!
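The geofenced coffee trigger in that scenario is easy to sketch: compute the runner’s distance from home with the haversine formula and brew once they cross a radius. The 500 m radius is an arbitrary choice for illustration:

```typescript
// Great-circle distance between two lat/lon points in meters.
function haversineMeters(
  lat1: number, lon1: number,
  lat2: number, lon2: number
): number {
  const R = 6_371_000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Brew when the runner is within 500 m of home (radius is arbitrary).
function shouldStartBrewing(
  runnerLat: number, runnerLon: number,
  homeLat: number, homeLon: number
): boolean {
  return haversineMeters(runnerLat, runnerLon, homeLat, homeLon) < 500;
}
```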
Are there any ramifications?
Of course, there’s always another side to that coin, right? Technology already controls a portion of our lives. Many are addicted to Facebook and other social channels. People text and drive. People text and walk. We tune out the outside world to live in our virtual bubbles. But I think connecting the Internet of Things, using context-aware development techniques, and devices getting smarter and more compact, are just going to help improve our lives.
Now, some would say that being this connected isn’t great: it means people work longer hours and lose touch with their family lives. But the truth is that we can take this technology and make it work for us.
Your wearable tells you if you’ve been stagnant for too long a period, and can jolt you to get moving. Well, what if it could say something like “hey, go do something fun like hang out with your kids.” What if your cellphone or computer knew that you were spending too much time on it, and automatically shut down? There are lots of ways we could use all this technology and the IoT to our advantage.
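Those nudges are just simple rules over the data the devices already collect. A sketch, with hypothetical thresholds (60 minutes still, 180 minutes of screen time) that aren’t from any real product:

```typescript
// Pick a nudge based on today's activity data.
// Thresholds are hypothetical, not from any real device SDK.
function nudgeFor(
  minutesStill: number,
  screenMinutesToday: number
): string | null {
  if (screenMinutesToday > 180) return "shut-down"; // phone cuts you off
  if (minutesStill > 60) return "go-move"; // stagnant too long: get moving
  return null; // all good, no nudge
}
```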
When will this happen? What will make this happen?
Honestly, I’m not quite sure. I know the next wave of CSS modules (often loosely called “CSS4” — parts are published, but browser support is still thin) does a little experimenting with context-aware features like pointer and hover. Media Queries Level 4 builds on the media queries that really shaped the face of responsive design when they first came out.
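Those interaction media features can also be read from script via `window.matchMedia`. Here the decision function is pure and testable anywhere, with the browser wiring sketched in comments; the 44 px / 24 px tap-target sizes are common accessibility guidance, not part of the spec:

```typescript
// Pick a minimum tap-target size from the Level 4 `pointer` media
// feature: coarse pointers (fingers) need bigger buttons than fine
// ones (mice). Pixel values are common guidance, not spec-mandated.
type PointerKind = "coarse" | "fine" | "none";

function minTargetPx(pointer: PointerKind): number {
  return pointer === "coarse" ? 44 : 24;
}

// In a browser you could feed it the real media query result:
//   const kind: PointerKind =
//     window.matchMedia("(pointer: coarse)").matches ? "coarse" : "fine";
//   document.documentElement.style.setProperty(
//     "--tap", `${minTargetPx(kind)}px`);
```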
With all these technologies, and all the people working on (and with) these technologies, I can’t imagine it’ll be more than 2 or 3 years before we start seeing context aware development integrated with the Internet of Things. Now, for how long it’ll take to perfect it — well…maybe that will be never! Is anything ever really perfected?
Either way, I’m looking forward to this next evolution in the technology space. Context aware development and the Internet of Things will change the way we interact with technology and ultimately each other.