
Past and Present Futures of User Interface Design by sedboyz


8 Comments

  • Post Author
    tony-allan
    Posted March 18, 2025 at 12:04 am

I know that most developers prefer keyboard shortcuts when developing software, but I prefer using the mouse, mostly because I cannot remember all of the shortcuts across a range of different environments.

    Given my preference it would be interesting to explore a more tactile interface.

      - a series of physical knobs to skip back and forward by function, variable reference, etc
      - a separate touch screen with haptic feedback for common functions and jump to predefined functions in my code
      - a macro-pad with real buttons to do the above
    
    

    Other thoughts

When watching videos, physical buttons and knobs would be good.
    I know professional video and audio engineers already use these technologies, but I've never tried them myself.

  • Post Author
    DidYaWipe
    Posted March 18, 2025 at 12:16 am

    Pretty superficial.

    And no mention of the much-hyped "Minority Report" UI that failed spectacularly for obvious reasons.

  • Post Author
    furyofantares
    Posted March 18, 2025 at 12:38 am

This is a lot of fairly interesting build-up and background leading to a very short and shallow takedown of voice control, focused entirely on audio as I/O for a couple of practical reasons.

    No discussion of whether natural language will be a powerful addition to or replacement of current UI, which of course can just be text you type and read.

  • Post Author
    gyomu
    Posted March 18, 2025 at 12:56 am

    I once worked in a design research lab for a famous company. There was a fairly senior, respected guy there who was determined to kill the keyboard as an input mechanism.

    I was there for about a decade and every year he'd have some new take on how he'd take down the keyboard. I eventually heard every argument and strategy against the keyboard you can come up with – the QWERTY layout is over a century old, surely we can do better now. We have touchscreens/voice input/etc., surely we can do better now. Keyboards lead to RSI, surely we can come up with input mechanisms that don't cause RSI. If we design an input mechanism that works really well for children, then they'll grow up not wanting to use keyboards, and that's how we kill the keyboard. Etc etc.

    Every time his team would come up with some wacky input demos that were certainly interesting from an academic HCI point of view, and were theoretically so much better than a keyboard on a key dimension or two… but when you actually used them, they sucked way more than a keyboard.

    My takeaway from that as an interface designer is that you have to be descriptivist, not prescriptivist, when it comes to interfaces. If people are using something, it's usually not because they're idiots who don't know any better or who haven't seen the Truth, it's because it works for them.

    I think the keyboard is here to stay, just as touchscreens are here to stay and yes, even voice input is here to stay. People do lots of different things with computers, it makes sense that we'd have all these different modalities to do these things. Pro video editors want keyboard shortcuts, not voice commands. Illustrators want to draw on touch screens with styluses, not a mouse. People rushing on their way to work with a kid in tow want to quickly dictate a voice message, not type.

    The last thing I'll add is that it's also super important, when you're designing interfaces, to actually design prototypes people can try and use to do things. I've encountered way too many "interface designers" in my career who are actually video editors (whether they realize it or not). They'll come up with really slick demo videos that look super cool, but make no sense as an interface because "looking cool in video form" and "being a good interface to use" are just 2 completely different things. This is why all those scifi movies and video commercials should not be used as starting points for interface design.

  • Post Author
    alt219
    Posted March 18, 2025 at 1:56 am

    > Imagine having to raise your arm to swipe, pinch and tap across an ultra-wide screen several times per minute. Touch works best on small surfaces, even if it looks impressive on a bigger screen.

I regularly find myself wishing pinch zoom were available on my large multi-monitor setup, even if I only used it occasionally, i.e. to augment interactions, not as a replacement for other input methods. As a (poor) substitute, I keep an Apple trackpad handy and switch from a mouse to trackpad to do zooming. Sadly, I've found not all macOS apps respond to Magic Mouse zooming maneuvers.

  • Post Author
    awesome_dude
    Posted March 18, 2025 at 2:57 am

I, for one, am glad we didn't end up with monkey hands/Minority Report screens that we interact with by flailing our hands and arms in front of them.

  • Post Author
    ojschwa
    Posted March 18, 2025 at 3:49 am

    I'm actually working on a voice controlled, tldraw canvas based UI – and I'm a designer. So I feel quite seen by this article.

For my app, I'm trying to visualise and express the 'context' between the user and the AI assistant. The context can be quite complex! We've got quite a challenge helping humans keep up with the speed and accuracy of reasoning and realtime models.

    Having voice input and output (in the form of optional text-to-speech) ups the throughput on understanding and updating the context. The canvas is useful for the user to apply spatial understanding, and since users can screen-share with the assistant, you can transfer understanding that way too.

    I'm not reaching for the future, I'm solving a real pain point of a user now.

    You can see a demo of it in action here -> https://x.com/ojschwa/status/1901581761827713134

  • Post Author
    dataviz1000
    Posted March 18, 2025 at 4:04 am

The next biggest shift in interfaces is moving from tactile input — keyboard, mouse, touch screen, etc. — and visual screen output to non-tactile input — voice, brain implants, etc. — and non-visual output, mostly automation of multistep tasks. Some attempts so far haven't been successful (e.g. Alexa and Siri), others look promising (OpenAI Operator), and it exists in sci-fi as Iron Man's JARVIS; nonetheless, it is definitely the future.

I worked on a browser automation virtual assistant for close to a year — injecting JavaScript into third-party webpages like a zombie-ant fungus to control the pages. The idea of tactile input and visual output is so hard-coded into the concept of a web browser that when you rethink the input and output of the interface between the human and the machine, everything becomes a hack.
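(To make the "hack" concrete: automation scripts injected into third-party pages can't assume any element exists yet, so they typically poll the DOM. A minimal sketch — the selector and page structure are hypothetical, and in a real extension this would run as a content script:)

    ```javascript
    // Poll a document for an element matching `selector`, resolving once it
    // appears or rejecting after `tries` attempts. Injected automation code
    // relies on helpers like this because third-party pages load and mutate
    // their DOM on their own schedule.
    function waitFor(doc, selector, { tries = 50, intervalMs = 100 } = {}) {
      return new Promise((resolve, reject) => {
        const poll = (remaining) => {
          const el = doc.querySelector(selector);
          if (el) return resolve(el);
          if (remaining <= 0) return reject(new Error(`timeout: ${selector}`));
          setTimeout(() => poll(remaining - 1), intervalMs);
        };
        poll(tries);
      });
    }

    // Hypothetical usage inside an injected script:
    // const btn = await waitFor(document, 'button.submit');
    // btn.click();
    ```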

    After a decade working in UI, it was strange to be on a project where the output wasn't something visual or occasionally audio, but rather the output of the UI was automation.



© 2025 HackTech.info. All Rights Reserved.
