
We lost the Art of interface building and turned it into a complete science.

I agree, but I think there’s a bit more to it than that.

An extremely condensed history of software UI design might look something like this:

    Programmer UIs
    Designer UIs
    Semi-automatic, data-driven UIs
At first, we didn’t have the same distinctions between roles that many places making software and UIs have today. A UI would be put together by programmers. Those UIs were often powerful, flexible, even logical in their own way, but only if you knew how to use them. For normal people who didn’t think like the programmers or have the same deep knowledge of the system, this generation of UIs often resulted in slow, error-prone, frustrating interaction.

Eventually we responded to that problem by bringing in more expertise in related areas: usability and accessibility, graphic design and typography, and so on. People started thinking more explicitly about information architecture and the flows a user would follow as they navigated an interface, and overall a more task-focussed, user-friendly style of UI emerged. Both the look and feel and the practical operation of systems became much better. IMHO, this was the closest we’ve experienced to a “golden age” of UIs so far.

The big problem with that was that doing those things well did require all those other skills, which weren’t native to software developers and didn’t necessarily translate in an obviously quantifiable way to the financial bottom line. With the arrival of CSS3 on the web and flat design as a trend in desktop and mobile OSes, suddenly programmers could make UIs again. Import some glorified stylesheet that gave you a colour scheme and some basic layout and typography, throw in a few rounded corners or font weights for street cred, and you never need to hire anyone with real design skills again, right?

Around the same time, the use of telemetry in software and tools for testing multiple variants of websites in real time were gaining popularity. Now the programmers didn’t even have to make a subjective decision about what colour to use for their action button, because The Mighty Data would dictate such things.

Somewhere around there, much of the industry lost its soul, and much of the software we produce just became bland, homogeneous, heavily instrumented mediocrity. It didn’t look interesting and, to add insult to injury, caused a regression in ease of use as well, thanks to some glaring usability problems with the popular visual style of the day. And while it’s certainly true that the increased use of hard data rather than subjective personal preference has its advantages, it will only ever tell you some numbers that compare designs you already have. It can never tell you that all of your designs really suck and you should start over with a different concept, only which one sucks 17% less than the others.
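To make that limitation concrete, here is a minimal sketch of the kind of comparison an A/B-testing tool automates: a two-proportion z-test on conversion rates. The function name and the traffic numbers are hypothetical, stdlib only. Note that everything it outputs is relative to the variants you fed in; nothing in the arithmetic can say whether either design was any good to begin with.

```python
import math

def ab_lift(conv_a, n_a, conv_b, n_b):
    """Compare two design variants by conversion rate.

    conv_a/conv_b: number of conversions for variants A and B.
    n_a/n_b: number of visitors shown each variant.
    Returns (relative lift of B over A, z-score of the difference).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis that A and B convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    lift = (p_b - p_a) / p_a
    return lift, z

# Hypothetical numbers: B converts 17% better than A across 10,000 visitors each.
lift, z = ab_lift(400, 10_000, 468, 10_000)
print(f"relative lift: {lift:.0%}, z-score: {z:.2f}")
```

A z-score around 2 or more suggests the difference is unlikely to be noise, so the tool declares B the winner. But "B beats A by 17%" is the strongest statement it can ever make; it is silent on whether a third concept you never built would beat both.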

The most unfortunate thing, to me, is that with the technologies we now have routinely available, we could do so much better, even in a lot of everyday business software. If a picture is worth a thousand words, what is the worth of a system that lets you interactively explore your whole data set, freely swapping between a range of different textual and graphical views that are relevant to whatever problem you’re interested in solving, combining or filtering your data to focus on areas or relationships of interest, highlighting patterns or outliers that might be important, all while the user experiments with different changes to see what the results are, before sharing all of that in real time with colleagues around the world who have been doing the same thing so everyone can decide which ideas are worth acting on next? And sure, go ahead and add some distinctive and pretty graphics to make it enjoyable to use at the same time. If the rest of the system is well-designed, this shouldn’t hurt, and we used to understand that building a brand image and engaging people using our stuff had value of their own.

Of course, no A/B test is ever going to tell you how to do anything like that in any particular application, and your average programming specialist isn’t going to offer the best ideas either. If we want to build UIs that are powerful, easy to use and perhaps even fun, we need those creative types of thinking and those other design skills too.



I don't work in UI design and really only have a layman's perspective. To add to what you and others here said, my view is that most of these fantasy interfaces are built for specialised/professional tasks.

My impression of most of the user interfaces we encounter, on the other hand, is that they are built for the lowest common denominator, and much of what the previous poster called "the science" is about how quickly the "onboarding" works — that is, how quickly someone can do a certain task when they are not familiar with the interface.

What seems to never get tested is how long it takes people to do tasks once they are very proficient. I understand why that is the case: it's much easier to do a quick study with some new users to test out a UI. Designing several UIs and then letting people become very proficient with them first (possibly taking months) before running a study comparing the interfaces is much more involved. So instead we extrapolate from the novice user studies to advanced users.


Completely agree. I would be interested in a modern, advanced-user GUI trend. Most modern UIs optimize for discoverability, but for a sufficiently complex use case (like Photoshop or video/music editing software) where learning the UI is a must, most programs would need a feature-packed, hotkey/gesture-rich one instead.



