Programming the ENIAC. (Photo: US Army)
Computers have been around for a very long time. ENIAC, the
first general-purpose electronic computer, was constructed over seventy years
ago. And we have always struggled with how to communicate with them.
Your smartphone screen is littered with colorful graphic
images representing applications (apps). The screen is touch sensitive, and you
call up an app by simply touching an icon. Once in the app (for instance, Facebook), you can view your feed or notifications
by touching the appropriate widgets. Photos can be zoomed by a reverse pinch on
the screen. When you want to type, a virtual keyboard appears as if by magic. If
you’d like, you can specify an Italian or German keyboard. All this by touching
and swiping and pinching on the smooth glass, illuminated with virtual tokens.
In 1946, that would have been beyond the imagination of all
but the most brilliant visionaries. The ENIAC was difficult to interact with:
programming it meant plugging thousands of cables and setting thousands of
switches by hand. Setting up even a simple calculation could take days or weeks.
And things improved only slowly and gradually.
For many years we typed our commands into the computer and
saw the results printed back to us, first on paper and later on green,
glimmering cathode ray tube (CRT) displays. There were no pictures, graphics,
or diagrams, only letters and numbers. (Although truly talented and bored programmers
could print out convincingly recognizable images of Snoopy or Marilyn Monroe using
only row after row of carefully selected characters – at least when viewed from
a distance).
Then along came Apple Computer, which freed us from the
tyranny of the keyboard by giving us the mouse and a graphic (as
opposed to character-based) display. This was the graphical user interface (GUI),
which presented a virtual desktop containing figurative icons representing things you
might want to do – email, word processing, drawing, and so on. By pointing and
clicking with the mouse, you could fire up the email application without typing
an arcane command. The human-machine interface was beginning to grow up.
But the early Macintosh had its problems. The power of
personal computers in those days was very limited, and the GUI demanded
a lot of processor and memory performance. As a result, the early GUI machines
tended to be sluggish. This problem was eventually solved as manufacturers
delivered increasingly capable components. By the mid-1990s, the GUI was
king.
It is hard to comprehend how far things have progressed. In the
1990s, huge, multi-ton supercomputers crunched weather models and could
predict, with some luck, whether tomorrow called for sunglasses or a raincoat.
Now, the smartphone in your pocket has far more power than those goliaths.
Which presents us with an opportunity.
The computer in your pocket has a million times the
processing power of the old mainframes. This is mind-boggling. And it gets even
better – our smartphone computers are doubling in power every couple of years.
What to do with all of this power?
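That "million times" figure is roughly what steady doubling predicts. Here is a minimal back-of-the-envelope sketch; the forty-year span and two-year doubling period are illustrative assumptions, not figures from the text:

```python
# A rough sanity check of the "million times" claim (the numbers here
# are illustrative assumptions): if processing power doubles every two
# years, forty years allows twenty doublings.
doubling_period_years = 2        # assumed pace, roughly Moore's law
elapsed_years = 40               # assumed span from old mainframes to now
doublings = elapsed_years // doubling_period_years   # 20 doublings
growth_factor = 2 ** doublings
print(f"{growth_factor:,}")      # 1,048,576 -- about a million
```

Twenty doublings multiply out to 2^20, a little over a million, which is why steady exponential growth so quickly outruns intuition.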
There is a new human-machine interface on the horizon –
augmented reality (AR). AR demands huge amounts of processing power,
impossible to deliver even a few years ago. But its time has
come.
According to Christopher Mims, technology writer for the
Wall Street Journal, AR is “the story of the most exciting technology you’re
ever likely to encounter, which could transform how we interact with computers
in the 21st century.”
The key idea is that computer displays will become uncoupled
from the physical desktop monitor or smartphone screen that you are used to.
Mims expounds further. “To understand AR, imagine a display
that sits, not on your desk or in your hand, but in front of your eyes. Today,
these displays are unwieldy, ranging from bulkier versions of safety glasses to
something akin to a bicycle helmet. But many technologists believe that within
five years, these displays will be able to project a virtual screen on every
surface.”
Envision looking at your hand and seeing your smartphone
screen – but it’s not really there, just a representation of it projected to
your eyes. And as Mims says, “Imagine looking at a wall and, with a gesture,
transforming it into a giant display—your entire workspace, with you wherever
you go. Imagine a world without screens, save the one we bring with us.”
“Mind-boggling” does not begin to describe this next stage of
the human-machine interface. As the song goes, “the future’s so bright, I gotta
wear shades.”