I’ve been thinking about this for a month now, with varying intensity, and I thought I needed to park it somewhere as I’ve not had anyone to share it at length with. One of the things that greatly amuses and disturbs me, at the same time, is the simplicity and obviousness of the thought, which makes me wonder why no one else has thought of it. Or perhaps someone has, and is working somewhere in a lab to make it happen.

For the sake of this experiment, I am going to consider Apple as the company that makes it happen. I have two reasons for picking Apple. Reason one is quite objective: I cannot think of any technology company today more prepared than Apple to tackle this problem. When I say ‘prepared’, I don’t mean having the right amount of resources at hand (which, granted, Apple has plenty of) but having the right area of focus. Two things stand out about Apple. Firstly, Apple has always been individual-focused, putting the consumer of its products first and designing things around their needs. Secondly, Apple is one of the few companies known for, and with the distinct advantage of, ‘seamless integration’ across all its products, owing to the fact that it creates the hardware and software itself and is directly responsible for most of the services its products use as well. And as we move towards contextual computing, the focus on the individual, with seamless integration of everything the individual uses, becomes more important than ever. Reason two for picking Apple is a little subjective, and I shall discuss it at the end of the article. Let’s start with the experiment now, shall we?

As we stand today in 2013, we deal with computing devices in different shapes and sizes which are all capable of doing many similar things. The world of technology is bringing the internet to everything that one uses, and they are calling it ‘The Internet of Things’. That is still happening, but the problem I am more interested in is one we already face. We have different devices that we deal with every day, which can do the same things in more or less the same way. If we only talk of computers (both desktops and laptops), phones and tablets, just think of the numerous things that you can do with each of them. For the most part, all computing devices are used for three primary things:

  1. Consuming content
  2. Creating content
  3. Communication

When it comes to creating content, which involves writing articles like these or entire books, editing a movie or creating an animated one, composing and editing music, among other things, the computer is king. We are gaining more power with every iteration of a mobile device, like the iPhone or the iPad, but for the most part and for most creations, computers are key. Then comes consumption of content, in which these mobile devices have the upper hand already. It is so convenient to have your iPhone or iPod with you when you are travelling, to consume your music or podcasts, or even to read books, which is a lot better if you have something like the iPad. The biggest advantage mobile devices have when it comes to consumption is, well, the mobility, and also the fact that you hold the content in your hand. There is nothing like it! Anyone who has tried to read an entire book on a computer, even a notebook computer, knows how different it feels from reading one on an iPad. Finally, we have communication, and again the mobile devices win there, mainly because they are always with us. However, there are other things that one can do on any of one’s devices without much difference, or based on personal preference. For example, a lot of people don’t mind using the iPad to write long articles and long emails, but others always prefer a keyboard. And even then, some would add a Bluetooth keyboard to their iPads or even iPhones, while others would just go back to using their Macs for typing anything long. Then there are things like browsing the web or reading email, which can be done quite elegantly on any device. There are a lot of decisions to be made before one can actually do something with these devices.

“I am more than my devices”

And that decision-making is part of the problem. Let’s say I own a 27″ iMac, a 13″ MacBook Pro, an iPad and an iPhone, and I am in my room thinking of emailing a friend about this new idea which just struck me that would change the world. I am most severely distracted by having to make a decision about which device to use to compose a simple email consisting of a few lines. At first, I might think the iPhone would be the right choice given that it is in my hand, because I just used it to reply to a text message. However, I might then decide to use the iPad which sits across the table, considering that I might write a little more and it will be easier to type on the bigger screen. But then, being the overly elaborate person that I am, I might decide to pluck my MacBook Pro from the bag and type on it, because I like typing on a physical keyboard when it comes to long emails. Also, considering that I never shut down my notebook, I just have to take it out, open it, launch the email app and I am ready to go. And then my eyes catch a glimpse of the iMac just as it notifies me of a friend’s Facebook update, the same friend I wanted to email. And I think, perhaps I should check his update and then start writing on the iMac itself and not bother taking the MacBook Pro out of the bag. With a big screen, of course, I can have Facebook, the Mail app and iTunes playing a Beethoven symphony all sitting side by side very comfortably. And as I begin to write, I have almost forgotten why I was so excited about this idea, and now it just seems mediocre at best, for I can’t remember much of the details. And the moment just passes by. And the world will not change. Actually, it would anyway, but I won’t have anything to do with that change. And that makes me realise how all these devices are supposed to help me do things better but instead just end up creating new problems to deal with.
Don’t get me wrong, I am not against technology; in fact, I am quite the cheerleader when it comes to that. But I am also on the side of things where I see the flaws in existing technology that make it more of a burden than a benefit.

So we come to the first problem that we must solve creatively to usher in the contextual era. The solution is to always put the individual first. Anything that happens is caused by, or happens to, an individual instead of a device, and the device that the event happens on is just an attribute. This requires all our devices to be smart enough to talk to each other, letting each other know, at the very least, which one is being used and for what. Let’s look at some examples of what I mean by that. I am at my desk, working on my Mac. I get tired and decide to go out for a little walk. So I lock my Mac and leave my top-secret, world-changing work safely behind. Now, if the Mac and the iPhone could talk to each other, the iPhone could very well tell the Mac, “Hey, the master and I are leaving, I don’t know where to, but you better lock yourself so no one can look at anything confidential, okay pal?” and the Mac would know when to lock and unlock itself based on the proximity of the iPhone. Nothing fancy, very basic security. Here’s another scenario: I have the iPhone in my pocket, I am working on the Mac, and I get an email. It is most likely that I’ll open the email on the Mac, as that is what I am currently working on. Thanks to IMAP, the email I read or delete on one device is marked as such on all other devices. But I go back to my iPhone later to see the notification for the email still there. If only the Mac could tell all my devices, “Hey you all! The master just read this email on me. Just wanted to let you all know.” That would make things a lot easier. Let’s look at an iPhone-specific example. Suppose when I get a text message or a phone call, the iPhone could tell the device I am currently working on to display a notification alerting me about the text message or phone call I am receiving. And because both the iPhone and the Mac have my contacts and email, the Mac can be smarter in displaying the notification.
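As a thought experiment, the proximity-locking behaviour can be sketched in a few lines of Python. Everything here is hypothetical (the class names, the proximity callback are all invented); in a real implementation the ‘in range’ signal would come from something like Bluetooth signal strength rather than a method call:

```python
# Toy simulation of proximity-based locking; not a real Apple API.

class Device:
    def __init__(self, name):
        self.name = name
        self.locked = False

class Mac(Device):
    def on_companion_left(self):
        # The iPhone told us the owner walked away: lock the screen.
        self.locked = True

    def on_companion_near(self):
        # The iPhone is back in range: unlock for the owner.
        self.locked = False

class IPhone(Device):
    def __init__(self, name, companion):
        super().__init__(name)
        self.companion = companion

    def update_proximity(self, in_range):
        # In reality this would be driven by wireless signal strength.
        if in_range:
            self.companion.on_companion_near()
        else:
            self.companion.on_companion_left()

mac = Mac("iMac")
phone = IPhone("iPhone", companion=mac)
phone.update_proximity(False)   # owner walks away; the Mac locks
phone.update_proximity(True)    # owner returns; the Mac unlocks
```

The Mac never needs to know *where* the owner went; it only reacts to what the iPhone tells it, which is the whole point of the devices talking to each other.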
Along with the name and picture of the person calling, it could show the messages I’ve received from or sent to that person in the last two days, over IM or email or Facebook or Twitter. That way, before I’ve even answered the call, I’ll have some context for what the call might be about. And of course, I can answer and end the call from the Mac itself, using its speakers and built-in microphone, if I so choose. Therefore, I would not have to reach into my pocket, take out the iPhone, handle the call, put it back and continue working on the Mac. Also, it can be a user preference on the iPhone to disable the ringtone and vibration when I am working on another device capable of talking to the iPhone. That would result in lower battery usage and also reduce the number of places you’d get notifications from. Now, technically, in the case of the email, both devices receive the email but the main device alerts the other not to notify the user, whereas in the case of a phone call, only the iPhone receives the notification, which it then passes to the Mac to show to the user. But to the user, it is transparent. There is always one device the user is using, and that is the one which gets all the notifications. And the way to implement this would be a simple master-slave setup. The device that the user is currently working on declares itself as the master to all the other devices. This causes them to go into a slave-sleep state whereby any notifications they receive are automatically transferred to the master. Then it is up to the master to decide what to do with the notifications. A master like the Apple TV can decide to show the picture and name of the person, but it does not have other information, like social updates and emails, so it can choose to ignore that, unlike the Mac. The Mac, on the other hand, could receive the same notification itself, as with the email, and can then choose to ignore the one from the iPhone.
Also, as the master is aware of its own capabilities, it can present actions to the user appropriately. The Mac can allow the user to receive the call on the Mac itself, which it does by sending a request to the slave to pass the call through. The Apple TV does not have a microphone, so it can avoid showing such an action. To summarize, the slave informs the master of everything that happens to it, and the master decides what to do with the information. The master can also choose to request the slave to perform a service that it is capable of. Another point to note here is that a device does not necessarily have to be sleeping to become a slave. One might very well be watching a video or listening to music on the Apple TV whilst working on the Mac, i.e. one device is still passive while the other is actively used. Or one could be working on the Mac while having a chat with a colleague on the iPad, i.e. both devices are actively being used. In such cases, two things can happen. Either the devices decide between themselves which should be the master, based on certain rules like the kind of work being done and a device-based priority, or, as we discuss later, the secondary device can become a second screen, in which case it is automatically demoted to being a slave. So almost everything happens automatically, but the user can always override which device is the master.
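Here is a toy sketch of that master-slave setup, again with invented class names, capability strings and actions; the essential behaviour is just that slaves forward everything to the master, and the master filters the actions it offers by its own capabilities:

```python
# Hypothetical master-slave notification routing; all names invented.

class Device:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.master = None
        self.shown = []          # notifications displayed on this device

    def declare_master(self, others):
        # The device currently in use tells every other device to defer to it.
        for d in others:
            d.master = self

    def receive(self, notification):
        if self.master is not None and self.master is not self:
            # Slave: pass the notification along instead of showing it.
            self.master.show(notification)
        else:
            self.show(notification)

    def show(self, notification):
        kind, payload = notification
        actions = []
        # Only offer "answer here" if this device has a microphone.
        if kind == "call" and "microphone" in self.capabilities:
            actions.append("answer-on-this-device")
        self.shown.append((kind, payload, actions))

mac = Device("Mac", ["screen", "microphone"])
apple_tv = Device("Apple TV", ["screen"])      # no microphone
iphone = Device("iPhone", ["screen", "microphone", "cellular"])

mac.declare_master([apple_tv, iphone])
iphone.receive(("call", "Alice"))   # the call surfaces on the Mac,
                                    # with an "answer here" action
```

An Apple TV master running the same code would show the call but offer no answer action, exactly the capability-based behaviour described above.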

So how do the devices know they belong to me, and therefore must always focus on me? Simple. They all share a common ID that belongs to me. I log in with my ID on all my devices and they just know. All devices support iCloud already; that should be the one that ties them together. And I, as an individual, can then become more powerful than the device. Any app or any service that wants to send me a notification would send it to me, without having to worry about which device I’ll receive it on. That would require two changes on the devices. First, the notification part: Apple has already solved it beautifully on the iOS side. As apps can’t always be running in the background on a mobile device, they don’t have the ability to directly send notifications to the user. Instead, they register with Apple’s push notification service, which gathers all notifications for the user and shows them in the notification centre and on the lock screen. Now, if the Mac got the same treatment for its apps, that would make it perfect. Right now, apps on the Mac have to be running to send notifications. That is to say, most apps: Calendar on the Mac can send notifications without running, as it has a daemon running in the background, and Twitter and Facebook have no apps but just live in the notification centre and share sheets, so they can always notify. But for things like Mail or Messages or anything else, the app must be running to notify the user. Instead, if they all used Apple’s push notification service like iOS, the Mac could notify the user without any app running at all. And the server won’t need to know which device the user is using, as the devices decide that amongst themselves. The server would just push the notification to the user’s iCloud ID, and whichever device is the master gets the notification. The master could then decide whether the slaves get the notification or not. By default, only the master would get it.
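A rough sketch of what ‘push to a person, not a device’ means. The service and its methods are made up for illustration, but the key point is that the server only knows the user’s ID, and the devices have already elected a master amongst themselves:

```python
# Hypothetical push service keyed by user ID, not by device.

class Device:
    def __init__(self, name, is_master=False):
        self.name = name
        self.is_master = is_master
        self.inbox = []

class PushService:
    def __init__(self):
        self.users = {}                     # user_id -> list of devices

    def register(self, user_id, device):
        self.users.setdefault(user_id, []).append(device)

    def push(self, user_id, notification):
        # The server does not pick a device; it hands the notification
        # to the user's device group, and the elected master shows it.
        devices = self.users.get(user_id, [])
        master = next((d for d in devices if d.is_master), None)
        if master is not None:
            master.inbox.append(notification)

service = PushService()
service.register("me@icloud.example", Device("iPhone"))
mac = Device("Mac", is_master=True)          # currently in use
service.register("me@icloud.example", mac)
service.push("me@icloud.example", "New email from a friend")
# Only the master (the Mac) receives it; the iPhone stays quiet.
```

Changing which device is the master changes where the next notification lands, without the server ever knowing or caring.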
Let’s take the case where the user gets a phone call, sees the notification on the Mac and decides not to answer the call right now, instead sending a text message back to the caller saying that he’d call later. In that case, the Mac can choose to let the iPhone retain the notification of the call, so that when the user decides to return the call, he may not have his Mac around (he may be outside) and the notification on the iPhone would then remind him to call back on the chosen number. Another case where this is important would be iMessages. The user receives an iMessage on the Mac, replies to it there and has a conversation. Later he might want to continue the conversation on his iPad in the car. So such messages can be distributed to all slaves at the master’s discretion.

The second part is how my devices would talk to the devices of people I interact with regularly, like, say, my kids and my spouse. The important thing to remember is that everything is going to be dealt with on an individual basis and not on a device basis. So every individual has unique devices that belong to him/her. And individuals related to each other in some way can decide to share things with each other. But what about devices that two people share? My wife and I would probably share the iMac to do certain things but have separate iPads. Simple. We would have separate user accounts on the computer, and they’d act as virtual devices. So if I am not logged in to my user account on that machine, that is as good as a unique iPhone/iPad that is sleeping and not being used.

What can be done with shared devices?

Remember, the devices or services are not what is shared. People have relationships, and those determine what they want to share with each other. Say, for example, my wife receives a phone call from an old friend who wants to meet soon, and she wants to confirm whether I am free on the weekend. At that point, I am in a different room, on a different floor, working on my Mac. She can choose to transfer the call to me, and I’d receive a call just like any other, except that it would notify me visually that this was a call transferred to me by my wife. Or she can choose to conference me in so we can all talk together. Now, how is this different from a phone conference? Well, this is where it gets interesting. What if my iPhone has no coverage? Or is out of battery? A conference call in that case would be impossible. But now, as we’ve removed the layer of devices and we just deal with the person, my wife just conferences me in and her device automatically notifies my master device, the one I am currently interacting with, to handle the call. And because the devices talk to each other, there is no need for me to have cell coverage on the iPhone to be able to take this call. Obviously, we need something to connect to each other with. In the office or at home, it could easily be the local wifi network. And outside, it could work over the internet. In fact, it could also connect peer-to-peer when there is no wifi, just like AirDrop does; that, of course, means the devices all have to be in the same room to talk to each other. And to the user, it’s all transparent.
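The call-transfer idea can be sketched the same way. These objects are all hypothetical; the point is simply that the call is routed to the recipient’s *current master* device, whatever that happens to be:

```python
# Toy model of transferring a call to a person rather than a phone.

class Device:
    def __init__(self, name):
        self.name = name
        self.active_calls = []

    def handle_call(self, caller):
        self.active_calls.append(caller)

class Person:
    def __init__(self, name):
        self.name = name
        self.master = None      # whichever device is in use right now

def transfer_call(caller_name, recipient):
    # No cell coverage needed on the recipient's iPhone: the call goes
    # straight to whatever device the recipient is currently using.
    recipient.master.handle_call(caller_name)

me = Person("Me")
me.master = Device("Mac")       # I'm at my desk; the iPhone is elsewhere
transfer_call("Old friend (via wife)", me)
# The Mac, as my master device, is now handling the call.
```

Swap the master for an iPad or an Apple TV and the same transfer still works, which is exactly why dealing with the person rather than the device matters here.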

“AirPlay is no child’s play”

When the devices can talk to each other, why not let them have a good conversation every now and then, eh? Right now, AirPlay does one thing, and it does it well. It allows one to transmit what one sees or hears on one device to another device. That requires some devices to work as AirPlay receivers while others work as senders. For example, the Apple TV, an AirPort Express connected to speakers, or other AirPlay-enabled speaker systems are receivers, while others, like the iPhone, iPod, iPad and the newer MacBooks, are senders. What if it was bi-directional and added an extra piece of functionality we’ll call ‘AirPlay Actions’? Let’s discuss the direction first. We’ve already been talking of the devices talking to each other. So what do they use to talk to each other, if not AirPlay? Well, exactly! AirPlay, of course! It’s just that instead of transferring only audio and video, AirPlay can transfer any kind of content, in either direction. That’s what makes all that we have discussed so far possible. So devices use AirPlay to tell each other who the current master is, share notifications amongst themselves, tell the master what services they can offer when requested, so that the master can call upon them as required, and also share other types of user content besides audio and video, like text. And AirPlay will use whatever is available to communicate with the user’s other devices. So if it’s the iPhone and the user is outside, it can use cellular data; in the office or at home, the wifi network; and if the devices are nearby, direct wifi like AirDrop. This would make it convenient to send messages to other people as well. For example, I send a message to my wife about dinner plans for tonight. If she’s at home reading a book on her iPad, she’ll get it on the iPad. If she’s driving back from work in her car (we’ve not talked about other devices, but a car could be one of them), her iPhone could get the message, as the car has no connectivity of its own. But the car and the iPhone can talk to each other, and as the car is being driven right now, it declares itself the master. So the iPhone passes the message along to the car, which can then pause the radio and read the message aloud to my wife.
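Picking the link is the simplest part to sketch. Assuming, purely for illustration, a fixed preference order of direct peer link, then wifi, then cellular:

```python
# Hypothetical transport selection for device-to-device AirPlay traffic.

def pick_transport(available):
    """Return the preferred link from the set of currently usable ones."""
    # Prefer a direct peer link (AirDrop-style), then local wifi,
    # then cellular data as the last resort.
    for link in ("peer-to-peer", "wifi", "cellular"):
        if link in available:
            return link
    return None     # no way to reach the other device right now

pick_transport({"wifi", "cellular"})    # at home: wifi wins
pick_transport({"cellular"})            # outside: cellular is all we have
```

A real system would weigh cost and battery rather than use a fixed list, but the user-visible behaviour is the same: the message just arrives.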

So that’s AirPlay with bi-directional support, using any network that’s available. The other biggie: AirPlay Actions. One of the most under-rated features of OS X is AppleScript, which allows one to talk to any AppleScript-aware app and make it do things repetitively, schedule them, or create an entire workflow using a combination of apps. With devices demoted to just being devices and the person being the most important entity, AirPlay Actions could create a way for devices to ask each other to perform user-defined tasks. This is important because the user should not, as far as possible, have to pick a device before he is able to do something. Sometimes that is because it causes distraction, as discussed before; at other times it is simply not possible to access the other device, like when one is on a train. Let’s take an example. The user is on the train reading an article on his iPhone, and he suddenly gets an email with a couple of images that need some processing before they can be uploaded to his website. The iPhone, as it stands today, is capable of downloading and displaying images from the email, editing the images using apps on the device itself, as well as uploading the images to a website, either using the web browser or probably an app. But there are things that a Mac can still do better. If it is more than just simple editing and retouching, the Mac is the only device the user can use, because it can run something like Photoshop. Batch editing is also possible on the Mac because of its more powerful processor, as well as the ability to write scripts that work on a lot of files and perform similar actions automatically. Also, downloading and uploading a lot of hi-res images over cellular costs a lot more than it would on the home broadband network. This is where AirPlay Actions can change the way one thinks of working amongst devices.
Without having physical access to the Mac, the user can, from the iPhone, ask the Mac to download all the images from this particular email. The user can then ask the Mac to apply the usual scripts he uses for such images, which might do some cropping, resizing, thumbnailing, sharpening and anything else in Photoshop or any other app. And finally, use another script to upload them all to the website and, when it’s done, just notify the user. That way, when the user is back home, all the work has been done for him already. Of course, one can use remote desktop to share the screen of the Mac and do the same things. But here’s the difference: cellular data used for about 20 minutes of remote-desktop screen sharing versus a few text commands sent to a Mac. Also, the difference between zooming and pinching on an iPhone to view a Mac’s screen and pressing buttons to do things remotely is huge. And of course, not everyone wants to use scripting to do their work, although that’s the only possible way to do a lot of automation, but the system can provide a lot of predefined actions that it can perform. For instance, one can buy a movie on the iPhone while on the train, to watch tonight, and ask the Mac to download it right now, so that when one is home, the movie is waiting. Or one can ask the Mac for a file that one wants to look at from the device, which could be anything: a PDF, an ebook, a presentation, etc. Or one can ask, from the Mac, for the iPhone/iPad to become a keyboard or a trackpad if the batteries in the user’s keyboard or mouse run out. Or one might ask the iPad to act as a secondary screen for the Mac for a more productive environment. And whether the devices are in the same room, on the same wifi network or anywhere else in the world, it would all work transparently for the user.
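To give a feel for why this is so much cheaper than screen sharing, here is a toy version of an ‘AirPlay Action’: the task is just a few bytes of structured text sent to the Mac, which runs the steps and notifies the user back. The task format and method names are invented for illustration:

```python
# Hypothetical AirPlay Action: a tiny declarative task sent to the Mac.
import json

def make_task(steps):
    # A task is just a list of named steps; a few bytes of text, versus
    # minutes of remote-desktop video streamed over cellular.
    return json.dumps({"steps": steps})

class Mac:
    def __init__(self):
        self.log = []

    def run_task(self, task_json, notify):
        for step in json.loads(task_json)["steps"]:
            self.log.append(step)   # e.g. invoke a Photoshop script here
        notify("All images processed and uploaded.")

notifications = []
task = make_task([
    "download attachments from message 42",
    "run crop-resize-sharpen script",
    "upload results to website",
])
mac = Mac()
mac.run_task(task, notify=notifications.append)
# The iPhone only sent the task text; the Mac did all the heavy lifting
# and pushed one notification back when it finished.
```

The entire round trip over cellular is the task string plus one notification, which is the whole economic argument against remote desktop for this kind of job.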

There are a lot more things one can do once all devices can talk to each other that we are not even discussing; this has already been a very long article. One interesting thing is this: the Apple TV could know when the kids are in the room and automatically enable parental controls, disabling content that’s not suitable for kids. Or devices could be smart enough to know my context all the time, so that when I text my wife, “Where are you?”, her iPhone can automatically tell me her current location and also where she’s going. Of course, there are also many issues that we did not discuss; the above example definitely requires permission from the wife, granted either permanently or perhaps through a manual way for her to tell the phone to share the details on a case-by-case basis. Also, when you add in all the other things that are becoming smart, like the refrigerator, car, washing machine, home security system, running shoes, and even clothes, things are taken to an entirely new level. And the whole beauty of all this is that everything just needs to be able to talk to other devices and, for the most part, that’s all. A thing can then ask a more capable device to do the heavy lifting for it. My clothes can have sensors that monitor my heart rate, blood pressure and other health-related stuff, but they don’t have to store or analyze anything; they just send it to any device that can take it, and that device, like the iPhone or the Mac, can do lots of magic with it.

To sum it up, two philosophical changes are required in the way we use things. One, everything is done by or to a person, not a device; the device only acts as the medium. Two, a person performs actions independently of the device he has access to at present.

Coming back to the second reason for choosing Apple: I started writing this a day or two before February 24, Steve Jobs’ birthday, and I wanted to post it by that day as a tribute to his life and work. I did finish writing it on time but couldn’t get around to editing it for publishing any sooner. It turned out to be quite a long post, probably the longest I’ve ever published. So, I want to thank everyone who stuck around to read it in its entirety.

And here’s to Steve Jobs, the guy who put soul into products and art into our hands.