Screens Are For Consumption, Not For Interaction
Projecting images onto a flat surface has been one of the favourite pastimes for countless generations. It was only in the 1960s that screens began to offer users the ability to interact with the displayed images. The NLS (oN-Line System), designed by Douglas Engelbart, included a raster-scan monitor and a three-button mouse, and was among the first computer systems to implement a graphical user interface (GUI).
What was the reasoning behind implementing such a novel system?
Hand-Eye Coordination
Before the advent of the GUI, users communicated with computers through command languages (the command-line interface, or CLI). This communication was clunky, awkward, and error-prone. GUIs reduced the chance of making a mistake and increased the chance of getting the computer to do the intended thing. Success in a GUI comes down to the accuracy of the user's hand-eye coordination; success in a CLI comes down to the user's ability to recall the correct command syntax.
Command Line Interface Matures
As computing power increases, the CLI is maturing. Gone are the early days when users were expected to type arcane, cryptic commands. As the modern CLI advances, its commands read more and more like natural English prose.
Speech recognition is also maturing by leaps and bounds. We're now standing on the verge of a Star Trek-like situation: soon we'll be able to speak commands and the computer will understand and obey.
Screenless Interaction
Products such as the Amazon Echo listen to all speech continuously, monitoring for the wake word that, when spoken, triggers an action. We don't need visual cues to interact with Amazon's Alexa bot; speech alone does the job when it comes to humans interacting with a machine.
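The wake-word pattern described above can be sketched in a few lines. This is a toy illustration, not Amazon's implementation: real devices perform on-device acoustic keyword spotting on the audio stream itself, whereas this sketch assumes speech has already been transcribed to text and simply splits the transcript at a hypothetical wake word.

```python
# Toy sketch of wake-word triggering: everything the device "hears"
# is scanned, but only speech following the wake word becomes a command.

WAKE_WORD = "alexa"  # hypothetical wake word for illustration


def extract_command(transcript: str, wake_word: str = WAKE_WORD):
    """Return the command that follows the wake word, or None.

    Speech that never contains the wake word is ignored entirely,
    which is what lets the device listen continuously without acting.
    """
    words = transcript.lower().split()
    if wake_word not in words:
        return None  # no wake word: the device stays passive
    idx = words.index(wake_word)
    command = " ".join(words[idx + 1:])
    return command or None  # wake word alone carries no command


# Example: only the second utterance triggers an action.
extract_command("what a nice day")        # → None
extract_command("Alexa play some music")  # → "play some music"
```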
This screenless interaction will only get better as bot technology matures. By comparison, using screens to interact with machines feels like trying to fit a square peg into a round hole.
Screens Are Relegated Back To Consumption
Of course, we can still talk to the bot and ask it to show us an image on the screen, or to play a movie, and then sit back and consume the visual content. But we never have to reach out and touch the screen to make something happen.
Intrigued? Want to learn more about the bot revolution? Read more detailed explanations here:
The Age of Self-Serve is Coming to an End
Only No UX Is Good UX
Stop Building Lame Bots!
Four Types Of Bots
Is There A Downside To Conversational Interfaces?
Are Bots just a Fad? Are GUIs really Superior?
How to Design a Bot Protocol
Breaking The Fourth Wall In Software
Bots Are The Anti-Apps
How Much NLP Do Bots Need?