Revolutionizing the desktop since 1975
Spring 2025
In 1968, Douglas Engelbart demonstrated a computer system called the oN-Line System, or NLS. The NLS is the source of a lot of computing firsts: among other things, Engelbart showed video conferencing, collaborative text editing, embedded graphics, copying and pasting, and hypertext – all of it accessible through a mouse and keyboard. The event has since become known as “The Mother of All Demos”.
Recently, Amelia Wattenberger published an article with ideas about a possible future for user interfaces.
In short, she asks for interfaces with more tactile “friction”, praises multi-modality, and suggests variations in both input devices and feedback options. We’ll return to her text, but first, let’s look at some other visions of future user interfaces.
Xerox Alto
Smalltalk-76 (from 1976), displaying a paint program written in the language itself. It looks curiously like Microsoft Paint – or, more correctly, it’s the other way around.
The Xerox Alto wasn’t so much a vision as an actual product. Inspired by Engelbart and others, a team of researchers at Xerox PARC set about creating computers for the office of the future. The result was the Alto, a machine that would probably be recognizable to a lot of present-day computer users.
The Alto was the origin of Smalltalk, a programming language with its own GUI environment – the one that inspired Steve Jobs to commission the Lisa. Looking at that GUI now, almost 50 years later, it’s baffling how little desktop interfaces have changed.
A man and his Alto. The screen was in portrait mode: Xerox was all about paper.
In a commercial for the Alto, we meet a man – some kind of upper middle management, presumably – going about his daily business. He works in a spacious private office and, using the Alto, he can read and send email and produce laser printouts. Eventually, the Alto conjures up a high resolution image of flowers. The man wonders why, and the computer replies – with text on screen – that it’s the man’s wedding anniversary. “I forgot,” says the man, to which the Alto replies, “It’s okay, we’re only human.”
Despite being a very advanced system for its time, the Alto was of course incapable of such banter – and yet, the commercial’s producers saw fit to include it to spice things up.
Sun Starfire
Commercials like that for the Alto always feel a bit stuffy and contrived. Upping the ante in several ways, Sun Microsystems’ 1994 commercial for Starfire (an imaginary future computer) is a cringe-filled orgy of stilted acting and terrible writing. The protagonist once more seems to be upper middle management, and she’s working on a presentation for an electric car. Future! She also engages in a bunch of strange and/or morally questionable activities. We’ll probably never know why the producers decided to give her a cold, or why she spends so much of her time spying on co-workers using the live CCTV function on her expensive computer. But I digress.
The commercial presents several concepts that – as in Engelbart’s demo – are now commonplace: tablet computers, video conferencing, touch screens, AI-augmented image and video editing, and instant scanning (today we’d probably just photograph the document with our smartphone). Granted, imminent mainstream adoption of such functions was fairly obvious in 1994.
A woman and her ginormous Starfire. Note the apparent lack of a keyboard. A mouse is present, however, despite the massive touchscreen.
Our hero is working on her presentation in a grotesquely spacious private office, which is probably necessary considering the sheer size of the Starfire. Actually operating it includes a lot of reaching: imagine having to raise your arm to swipe, pinch and tap across an ultra-wide screen several times per minute. Touch works best on small surfaces, even if it looks impressive on a bigger screen.
8 Comments
tony-allan
I know that most developers prefer keyboard shortcuts when developing software, but I prefer using the mouse, mostly because I cannot remember all of the shortcuts across a range of different environments.
Given my preference, it would be interesting to explore a more tactile interface.
Other thoughts
When watching videos, physical buttons and knobs would be good.
I know professional video and audio engineers already use these technologies, but I've never tried them myself.
DidYaWipe
Pretty superficial.
And no mention of the much-hyped "Minority Report" UI that failed spectacularly for obvious reasons.
furyofantares
This is a lot of fairly interesting build-up and background leading to a very short and shallow takedown of voice control, which is all about audio as I/O for a couple of practical reasons.
No discussion of whether natural language will be a powerful addition to or replacement of current UI, which of course can just be text you type and read.
gyomu
I once worked in a design research lab for a famous company. There was a fairly senior, respected guy there who was determined to kill the keyboard as an input mechanism.
I was there for about a decade and every year he'd have some new take on how he'd take down the keyboard. I eventually heard every argument and strategy against the keyboard you can come up with – the QWERTY layout is over a century old, surely we can do better now. We have touchscreens/voice input/etc., surely we can do better now. Keyboards lead to RSI, surely we can come up with input mechanisms that don't cause RSI. If we design an input mechanism that works really well for children, then they'll grow up not wanting to use keyboards, and that's how we kill the keyboard. Etc etc.
Every time his team would come up with some wacky input demos that were certainly interesting from an academic HCI point of view, and were theoretically so much better than a keyboard on a key dimension or two… but when you actually used them, they sucked way more than a keyboard.
My takeaway from that as an interface designer is that you have to be descriptivist, not prescriptivist, when it comes to interfaces. If people are using something, it's usually not because they're idiots who don't know any better or who haven't seen the Truth, it's because it works for them.
I think the keyboard is here to stay, just as touchscreens are here to stay and yes, even voice input is here to stay. People do lots of different things with computers, it makes sense that we'd have all these different modalities to do these things. Pro video editors want keyboard shortcuts, not voice commands. Illustrators want to draw on touch screens with styluses, not a mouse. People rushing on their way to work with a kid in tow want to quickly dictate a voice message, not type.
The last thing I'll add is that it's also super important, when you're designing interfaces, to actually design prototypes people can try and use to do things. I've encountered way too many "interface designers" in my career who are actually video editors (whether they realize it or not). They'll come up with really slick demo videos that look super cool, but make no sense as an interface because "looking cool in video form" and "being a good interface to use" are just two completely different things. This is why all those sci-fi movies and video commercials should not be used as starting points for interface design.
alt219
> Imagine having to raise your arm to swipe, pinch and tap across an ultra-wide screen several times per minute. Touch works best on small surfaces, even if it looks impressive on a bigger screen.
I regularly find myself wishing pinch zoom were available on my large multi-monitor setup, even if I only used it occasionally, i.e. to augment interactions, not as a replacement for other input methods. As a (poor) substitute, I keep an Apple trackpad handy and switch from the mouse to the trackpad to do zooming. Sadly, I’ve found not all macOS apps respond to Magic Mouse zooming maneuvers.
awesome_dude
I, for one, am glad we didn't end up with monkey-hands/Minority Report screens that we interact with by flailing our hands and arms in front of them.
ojschwa
I'm actually working on a voice-controlled, tldraw-canvas-based UI – and I'm a designer. So I feel quite seen by this article.
For my app, I'm trying to visualise and express the 'context' between the user and the AI assistant. The context can be quite complex! We've got quite a challenge helping humans keep up with the speed and accuracy of reasoning and realtime models.
Having voice input and output (in the form of optional text-to-speech) ups the throughput on understanding and updating the context. The canvas is useful for applying spatial understanding, and since users can screen-share with the assistant, you can transfer understanding that way too.
I'm not reaching for the future, I'm solving a real pain point of a user now.
You can see a demo of it in action here -> https://x.com/ojschwa/status/1901581761827713134
dataviz1000
The next big shift in interfaces is moving from tactile input — keyboard, mouse, touch screen, etc. — and visual screen output to non-tactile input — voice, brain implants, etc. — and non-visual output, mostly automation of multistep tasks. Some attempts so far haven't been successful (e.g. Alexa and Siri), others look promising (OpenAI Operator), and it exists in sci-fi as Iron Man's JARVIS. Nonetheless, it is definitely the future.
I worked on a browser-automation virtual assistant for close to a year — injecting JavaScript into third-party webpages, like a zombie-ant fungus, to control the pages. The idea of tactile input and visual output is so hard-coded into the concept of an internet browser that once you rethink the input and output of the interface between the human and the machine, everything becomes a hack.
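To give a rough idea, here's a minimal sketch of the kind of script that gets injected; the page, the selectors and the fillAndSubmit helper are hypothetical, made up for illustration, not from the actual product:

    // Minimal sketch of injected page automation (hypothetical selectors).
    // In an extension, this would run in the page via something like
    // chrome.scripting.executeScript.

    // Set a field's value the way a user would, firing the events that
    // frameworks such as React listen for, so the page notices the change.
    function setInputValue(input, value) {
      input.focus();
      input.value = value;
      input.dispatchEvent(new Event('input', { bubbles: true }));
      input.dispatchEvent(new Event('change', { bubbles: true }));
    }

    // One automation step: fill the page's search box and submit it.
    function fillAndSubmit(query) {
      const box = document.querySelector('input[type="search"]');
      const button = document.querySelector('button[type="submit"]');
      if (!box || !button) throw new Error('selectors broke (the hack part)');
      setInputValue(box, query);
      button.click();
    }

    fillAndSubmit('quarterly sales report');

Any page redesign silently breaks selectors like these, which is exactly why the whole thing ends up feeling like a hack.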
After a decade working in UI, it was strange to be on a project where the output of the UI wasn't something visual, or occasionally audio, but automation.