Ah okay, gotcha now. Thanks for taking the time to explain.
I don’t think that voice communication with machines is necessarily the equivalent of the command line. Bots equipped with AI can also be trained to offer piecemeal discovery, which is similar to how humans talk to each other. As a matter of fact, this is how the web architecture works right now, so it would be only a mild stretch to retrofit incremental discovery onto text/voice-activated bots.
On the web, the purpose of a conversation does not have to be predetermined before the conversation commences. Thanks to REST principles, the user can discover, via stepwise refinements, what is available and what might be of interest or use. Roy Fielding calls this Hypermedia As The Engine Of Application State (HATEOAS). The only difference with bots is that bots do not speak hypermedia; they only speak plain text.
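To make that concrete, here is a minimal sketch in Python of what stepwise discovery could look like. Everything in it is invented for illustration: the coffee-shop resources, the link relations, and the in-memory “server” standing in for HTTP. The point is only that the client learns its next options from the response itself rather than from a manual.

```python
# A toy, in-memory "API" standing in for a real server; every name here
# is made up for illustration. Each resource advertises its next steps
# as hypermedia links.
RESOURCES = {
    "/": {
        "title": "entry point",
        "links": [{"rel": "coffee-shops", "href": "/coffee-shops"}],
    },
    "/coffee-shops": {
        "title": "coffee shops near you",
        "links": [{"rel": "order", "href": "/coffee-shops/42/order"}],
    },
    "/coffee-shops/42/order": {
        "title": "place an order",
        "links": [],  # end of this particular conversation
    },
}

def fetch(href):
    """Stand-in for an HTTP GET."""
    return RESOURCES[href]

def next_steps(resource):
    """The client learns what it can do next from the message itself."""
    return {link["rel"]: link["href"] for link in resource["links"]}

# Stepwise refinement: start at the entry point and follow whichever
# link relation matches the user's current interest. None of the steps
# were predetermined before the conversation began.
resource = fetch("/")
while next_steps(resource):
    rel, href = next(iter(next_steps(resource).items()))
    print(f"discovered '{rel}' -> {href}")
    resource = fetch(href)
```

Nothing past the entry point is hardcoded in the client; swap in different resources and the same loop discovers a different conversation.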
Pre-web, computers connected via networks were incapable of that level of sophistication: the only way to interact was to determine the exact purpose of the planned interaction beforehand. Roy Fielding calls that “out-of-band communication”. Both the web and bots offer a much more advanced “in-band communication” model, meaning that all the information necessary to process the request is contained within the message itself. There is no need to reach for an operations manual or API document in order to figure out what to do next.
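As a rough illustration of the difference (with made-up endpoints on both sides), the out-of-band style bakes knowledge from the manual into the client, while the in-band style only trusts what each message says:

```python
# Out-of-band style (pre-web): the interaction's shape is fixed before
# the conversation starts. Every endpoint below is memorized from the
# docs; the paths are invented for illustration. If the server changes,
# the client breaks until a human re-reads the manual.
def order_coffee_out_of_band(http_get, http_post):
    http_get("/v2/coffee-shops")                   # memorized from the docs
    return http_post("/v2/coffee-shops/42/order")  # memorized from the docs

# In-band style (the web): the client memorizes only the entry point and
# a vocabulary of link relations. Each message carries the next steps.
def order_coffee_in_band(http_get, http_post):
    resource = http_get("/")
    links = {l["rel"]: l["href"] for l in resource["links"]}
    resource = http_get(links["coffee-shops"])
    links = {l["rel"]: l["href"] for l in resource["links"]}
    return http_post(links["order"])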
So in conclusion, I think that future communication with machines is going to be pretty much indistinguishable from communication with humans. Standing at a bus stop, listening to someone speak to the party on the other end of the line, you won’t be able to tell whether they are talking to a human or to a machine.