Brief comes from the designer; developer puts the brief in the bin and rings the client with the phrase "what do you actually want to achieve?"
Someone who can use Photoshop can be a designer; someone with 20+ years' experience can be a decent developer.
Design director Christie Lenneville tells a story that may be familiar to Reg readers. A design expert was called in to modernise some XP-era enterprise software used by call centre staff: their control panel. She wrote: "The design incorporated white space in all of the usual places – around headers, content blocks, and in …
I'm a developer with 20 years of experience, and this just isn't true. I can use Photoshop/Illustrator/Gimp with the best of 'em, but I'd never call myself a designer.
I thought I was, when I was young and foolish. Now I've worked with professional designers I realise that all I can do, from a frontend point of view, is stack the blocks.
Good design is a real skill that takes years to attain. Sure there are some useless crayon-wielders out there, but a good designer would have asked your question before anyone else did. And they'd have kept asking it in different ways until they got a working design.
Or perhaps I've just been lucky in my colleagues.
Looks like your design friends have modded you up. However, we're not talking about designing a new car or a bridge or even a company web page - it's an application. It has standard pre-coded GUI widgets, and the OP was right - it really isn't rocket science to lay these things out in a neat and usable fashion.

I'm also a developer and I've done front-end coding in my time, and tbh it was considered part of the job, not something that needed to be hived off to some "designer". As developers we use this stuff every day, so we tend to know what works and what doesn't. And the current flavour-of-the-month flat UI bullshit - where it can be impossible to tell widgets from data, and some widgets "hide" until you mouse over them (why??) - was almost certainly designed by a self-styled designer and most certainly DOES NOT work.
I constantly read complaints about the flat UI thing. And I agree: when Firefox came out with it I thought there was a bug that stopped the chrome (as they used to call the decoration) rendering. I don't like it any better now.
But there must be someone, somewhere, who thinks it's a good thing. Speak out, tell us the thinking, or you'll continue to get complaints.
There's an add-on for Firefox called Firefox CSS which can get you shaded tabs again. It's not in the official Add-ons database and needs to be downloaded from GitHub, but it works.
Oh, and guess where you put its files? The "chrome" subdirectory under your profile.
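For anyone curious, the on-disk layout is roughly this - a sketch using a demo path, since a real profile directory has a randomised name, and the add-on's own instructions on GitHub are the authoritative source:

```shell
# Demo path only -- a real profile lives under ~/.mozilla/firefox/ (or the
# platform equivalent) in a directory with a randomised name.
profile="${TMPDIR:-/tmp}/ff-profile-demo"
mkdir -p "$profile/chrome"

# Firefox 69+ also needs toolkit.legacyUserProfileCustomizations.stylesheets
# set to true in about:config before it will load userChrome.css at all.
printf '/* custom tab styling goes here */\n' > "$profile/chrome/userChrome.css"
```

Restart Firefox after creating the file; changes to userChrome.css are only read at startup.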
>I've done front end coding in my time and tbh it was considered part of the job<
So have I, and so it was. And when it still looked unbalanced and confusing, I'd get approval to take it next door to the design firm, and in 15 minutes they'd have something better than what I'd have spent all day on.
The key to design is not about tools. You can have 20 years of experience with Photoshop, know it inside out, and still be an absolute rubbish designer.
I have worked a lot with designers, and the more I've worked with them the less inclined I am to do any of it myself, as I know I will fail.
That link is the link equivalent of a modern UI suffering from all the typical problems we're on about here.
1. It's not actually a link, it needs to be copied and pasted.
2. You don't know what it is beforehand so you don't know if it's relevant or not.
3. Time can kill the link shortener (Google will be killing goo.gl).
4. Link shorteners can take you to malware.
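Point 4 is the sort of thing a pre-click screen can at least partially catch. A minimal sketch, with the caveat that the domain list here is illustrative only - real screening services track thousands of continuously updated domains:

```python
from urllib.parse import urlparse

# Illustrative list only -- real screening services track thousands of domains.
KNOWN_SHORTENERS = {"goo.gl", "bit.ly", "t.co", "tinyurl.com", "ow.ly"}

def is_shortened(url: str) -> bool:
    """True if the URL's host matches a known link-shortener domain."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    return host in KNOWN_SHORTENERS

print(is_shortened("https://goo.gl/abc123"))                        # True
print(is_shortened("https://www.youtube.com/watch?v=oHg5SJYRHA0"))  # False
```

A check like this only tells you a link is opaque, not that it's safe; expanding the link before anyone clicks is the more thorough fix.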
Here you go:
"1. It's not actually a link, it needs to be copied and pasted.
2. You don't know what it is beforehand so you don't know if it's relevant or not."
Copied and pasted is preferable, depending on your level of paranoia. Unless you're in the habit of hovering over a link to check the URL it resolves to, you don't know what it is anyway, so you don't know whether it's safe or not.
Link hovering doesn't work for corporate users of mail filtering services that rewrite URLs to point to their own lookup service. (The advantage being that the lookup happens at click time, not at mail delivery time.) If the destination URL for the original link is judged a risk, you get a block page; if not, you get a 302 to the final destination. (I know of three such services; I suspect they're all either doing it or looking at doing it.)
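The rewrite-at-delivery scheme works roughly as sketched below. The service hostname and query parameter are invented for illustration; each real filtering service uses its own (often signed) encoding:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical lookup service -- real ones use their own, often signed, formats.
LOOKUP = "https://safelinks.example.com/check"

def rewrite(original_url: str) -> str:
    """At mail-delivery time: point the link at the lookup service instead."""
    return LOOKUP + "?" + urlencode({"url": original_url})

def unwrap(rewritten_url: str) -> str:
    """At click time: the service recovers the original URL, judges it, and
    answers with either a block page or a 302 to this destination."""
    return parse_qs(urlparse(rewritten_url).query)["url"][0]

u = "https://example.org/page?a=1"
assert unwrap(rewrite(u)) == u  # round trip recovers the original link
```

This is why hovering shows you the filter's domain rather than the real destination: the original URL only reappears server-side, at click time.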
Oh yeah, and a million other attacks on mouseover/tooltips; here's one
Regardless of checking the resolve, a link of any type should have some context. You shouldn't just say "read this". How about: "Read the book listed at this link for information about blah blah blah". If you don't give some context, we wind up back here... watch this https://www.youtube.com/watch?v=oHg5SJYRHA0
There are far too many people who call themselves designers who are not.
A good designer is an essential part of the product development team.
One example of a good designer is Molly...
Take a look at her talk at FOSDEM from a couple of weeks back.
She gets it...
You can be a designer and suck at it, too.
You can be a developer and suck at it.
Being something doesn't mean that you're good at it. True of basically everything! Bad drivers, poor parents, cops, politicians, teachers, cooks.... Hand in hand with the idea that doing something a long time is not the same as being good at something.
It's a common complaint that users never tell designers what the business application really does, and users complain that designers never watch what they do.
It works like this:
1) User Experience
2) User Interface
3) UI Design
I have close to 30 years' experience of using Adobe Photoshop (since 1991), but ask me to design a website or app using it? No thanks.
Where does workflow fit in? Probably step 0)..
In my experience, people tend to adapt even to bad working practices so it becomes a routine. Sometimes changing a few things around can significantly speed up work and improve efficiency (they're not the same). Once you have that worked out (and accepted - most people *hate* change even if you improve things) and confirmed with tests, the process above applies.
I personally prefer using designers. People equate "design" with graphic design, and it's not the same at all. Good design has a flow, and a good designer can make all the difference in usability (and perception thereof). Making it look nice is, of course, also required but without usability you're merely polishing the proverbial brown stuff.
It works like this:
1) User Experience
2) User Interface
3) UI Design
You left out the User Interaction Model, which arguably is more important than UX. And the whole thing has to be an iterative process if you want a decent chance of producing something usable.
That said, the key phrase in the article, for me, was "users complain that designers never watch what they do". The "never" isn't true, but "rarely" would probably be accurate. There are a number of well-known user research methods that involve looking at what users do, such as user ethnography and contextual inquiry.
When designers fail to do appropriate user research, it may be because they're lousy designers, but it may also be because no one wants to pay for proper design. Proper design involves significant user research involving multiple methods, in each cycle of revision.
I resent that comment I am not a day over 500 years old.
More seriously, though, we are at the moment working our way through an MVC intranet app that will only ever be viewed on laptops and desktops, so why must we have a responsive design and a third of the screen blank??
Troll as there is no goblin icon =>
or much, much worse: anything "designed" in the last 12 years. Case in point: a simple flat-screen TV, just a few controls on the front - power, input and channel up/down. Can you see them? No. Are the glyphs close to standard? No. Is there any feedback, audible or tactile? Absolutely not. Are the controls detectable without a big magnifying glass and bright lights? No. And lastly, the buggers are hyper touch-sensitive, so stray insects and cat tails trigger them. Yet my 13-year-old grandson thinks it's a great-looking design and is baffled by my insistence that the control interface is foolish. "But it looks good," he says, as if mere functionality is irrelevant. I have found some millennials who agree with me that user interface design on most new gadgets sucks, so it is not just old grumpy experience.
My last car was a 2005 Saab, the radio was a wall of a couple of dozen buttons, yet everything made sense, and I had tactile feedback while driving.
Replaced by a 2013 Skoda: the radio is a touchscreen, and it is an absolute danger to try to do anything at speed. Crossed a border and lost the radio station? Want to switch from MPH to KM/H? You'd better find somewhere to park for a minute. (Poverty spec, so no wheel controls, though you can cycle through modes with a control on the wiper stalk, which itself also takes your eye off the road. I prefer keeping it on digital speedometer mode.)
The more I see new cars where the entire dashboard and centre console are taken up by large touchscreens, the more I despair. Window suddenly fogged up? You need to dab at a tablet and hope you pressed the correct function to set the temperature and demist the front window.
Ever tried typing a text message on a smartphone while on a bus on a bumpy / speed humped road? Think that.
I hope that someone recognises the safety implications, and we return at least to a combination of duplicated controls on buttons and screens.
"so why must we have a responsive design and a 1/3 of the screen blank??"
Because humans (y'know, the ones who you expect to be using your system) read by drifting their eyes *downwards*, ****NOT**** by dragging them acroooosssssssssssssss the screeeeeeeennnnn to the far edge then
bang! across back to the left then draaaaaaaaaaaaaaaaggggggggggggggginnngggggg across again then
bang! back to the left then.... etc..
In order for your content to enter your user's brains on a wide screen it ****NEEDS**** to be 1/3 or more blank. cf elReg's crud makeover that does this, forcing me to select 'Thread' every time I attempt to read a thread.
Because humans (y'know, the ones who you expect to be using your system) read by drifting their eyes *downwards*
So what you are saying is that users should be using the old 4:3 monitor and not a wide screen monitor?
People in IT rarely seem to be interested in spending time watching what users actually do. Specs are often thrashed out with managers of the users who don't spend any time with them either. I've winced when seeing people in the call centre trapped between an angry caller (with broken $EXPENSIVE_THING) and shit software that doesn't tell them what they need to know.
The other thing with software for internal use: if users are on it all day, does it make more sense to have something that takes a little longer to learn but once learned is very fast to operate, or to have something "intuitive" that's always slow to operate?
Happy to say that I'm the support guy who got users to explain a workflow so I could automate their 1099 tax returns to 'Murica, from .XLSX to PDF and finally to form-filled print jobs on preprinted American letter-format paper.
The main factor is time. If you have time, you can do anything, IMO. Giving myself a beer, because I can, and it's Friday!
"The main factor is time. If you have time, you can do anything"
It's a matter of how you use that time. Do you use it, and budget for using it, upfront to investigate the requirement? Or do you use it, unbudgeted, later, trying to make the product fit reality and probably throwing in a few bugs because you're under pressure, trying to do things in time you haven't really got?
*Some* People in IT rarely seem to be interested in spending time watching what users actually do.
Too many people in techie jobs seem to be so busy that they cut out the 'waste of time' of talking to users to understand their jobs.
This applies not just to their 'users' but also other depts and their people/roles.
I have worked in IT for 35+ years and, although on many occasions I can anticipate the problem and solution within minutes, I still try to understand the problem properly.
The same people who dismiss Designers tend also to dismiss Project Managers etc. - in short, anyone who does not do *their* job. Apart from huge arrogance, this will also make your job harder!!!
The company you work for employs these other people for a reason; just because you cannot see it does not make them redundant. This is where understanding other people's jobs becomes useful.
If you ever get the chance, work with these people for a few days or a week and understand the issues that impact their working lives and how they reflect on other depts in the company.
Just as you are frustrated by the people who 'obstruct' your job, see who obstructs others, and the interactions/consequences of the 'obvious' solutions that you think will make the world work better!!! :)
The company and its business is not as simple as you may think, no matter how clever you may be.
"*Some* People in IT rarely seem to be interested in spending time watching what users actually do"
Also, some people, even when told what users do believe them wholly, which becomes their undoing!
20 years ago I was a developer at a company with an educational website. Once, while I was on holiday, one of the junior developers was asked by someone else in the company to produce a specific piece of software to run on the site - the "spec" was verbal only.
When I came back from holiday she told me that what the developer (who apparently hadn't questioned a single thing in the verbal "spec") had produced for her was "exactly what I'd asked for" but "didn't do what I wanted". After that she always came to me with requests, so that we could battle our way to a common understanding of what was required and how it would actually look and function.
""*Some* People in IT rarely seem to be interested in spending time watching what users actually do"
Also, some people, even when told what users do believe them wholly, which becomes their undoing!"
I get the point, BUT that is more to do with the lack of experience of the junior developer.
First question that comes to mind is "Why did the junior developer NOT talk to anyone else, including any other more senior developer???"
Second question is "Why did the junior developer run with a 'verbal spec' and not get confirmation at any point that what they intended to do actually matched what was asked for?" i.e. formalise the spec and get sign-off from the user before writing the code etc.
Third question is "Why did you not teach your junior developers to follow the process you followed to 'battle out' the proper solution?" Not only would it be a useful learning experience for them to develop their skills, but it would reduce the pressure on you while making their job a bit more varied and interesting!!!
One thing you learn is that if you ask six people from a department for a description of their job, you will get six different sets of details, and none of them will match the departmental manager's description of their jobs!!!
The other thing you learn is that mis-communication is a given, so even if you think you understand ..... get your understanding confirmed, in writing, before you commit time/effort.
I cannot remember the number of times that a 'user' has asked for something and explained what they wanted, then when you read it back to them they have said "No, that is not exactly what I wanted" or "I forgot something that it must also do" etc. etc.
If it is not formalised into something that the 'user' can sign off on, it will come back to bite you at some point. It is always the developer's fault, after the post mortem, so you need to protect yourself by not being pushed into 'running' with something where your understanding will be brought into doubt later.
Too many people in techie jobs seem to be so busy that they cut out the 'waste of time' of talking to users to understand their jobs.
That may be so, but in my company (which isn't a traditional tech outfit) any software (indeed any business change) is fashionably dubbed MVP by the powers that be. This is a term that they've picked up from tech hipsters and management consultants, and our management curiously believe that it justifies a half-baked spec, an ugly and ineffectual interface, minimal testing, and often broken processes. And the reason they believe that is because they think it means faster development, faster to market, lower cost, and the "opportunity to learn and improve".
So at this company, loads of "change" people, loads of "scrum masters", lots of "agile". But little in the way of real expertise at any stage of software development, and no desire at all to really find out what the needs of the user are.
The thing is, that the UI actually should depend on the task at hand.
Task you do multiple times every day? It should be as quick to use as possible, with keyboard shortcuts out the wazoo.
Task you do once a year? It should have a lot of documentation in the interaction screens themselves with multiple double-checks and confirmations.
> "People in IT rarely seem to be interested in spending time watching what users actually do."
Respectfully, I call bullshit.
It's the Project Managers who have a brain fart about how something should be done, who poorly explain it to the IT manager, who poorly explains it to the IT staff, who then have to explain to the users that it was their PM's decision to do it this way without asking the end users how they think the new application should look/run.
I've lost count of the number of projects rolled out in companies I have worked for that haven't required major rework to make them meet the needs of users, rather than the needs of a PM who wants to generate reports with one click.
Herring: why the false dichotomy of "does it make more sense to have something that takes a little longer to learn but once learned is very fast to operate, or to have something 'intuitive' that's always slow to operate?" In my limited experience, user speed of operation and efficiency of anything go together.
I concur that all too often the end users have no input into UI design or, for that matter, into what the Next Big Thing, all-singing, all-dancing software project does. They then have to dump years of training and experience to do much the same thing, just slower (especially if COTS is involved).
"People in IT rarely seem to be interested in spending time watching what users actually do." - My team have been burnt badly, repeatedly, by this, and with every new project I ask, very specifically, for one thing, and that is "Let me watch users and let me talk to them. Or hire a decent analyst." Sadly, I'm never allowed to do that. And as my role is a developer, my estimate of the money wasted by not spending a day or two with the users (or, for that matter, by ignoring the difference between the customer and the user) is not worth consideration, as it is none of my business to talk money.
Something embarrassingly obvious that took me a while to grasp, when I was beginning my second life, in IT, decades ago.
I don't really need to remind folks here that this question goes to the heart of UI design, especially in the gulf between mass-market software and bespoke corporate stuff.
I'd suggest that while the design of interfaces for look and feel is important, we still often overlook the importance of providing alternatives in the form of shortcuts, key combos and gestures when appropriate - so that as users become adept and experienced, they can leave easy-to-learn (E2L) behind for easy-to-use (E2U).
... but it would be difficult to send you screenshots without hiding almost everything.
Now the application layout is all narrow and vertical, with lots of white space to please phone users - and just a few monitors allow themselves to be turned vertical, so a lot of scrolling is necessary. Each record's data is shown as multiple lines inside each record line, when often you'd like a plain tabular format. In exchange you get monochromatic icons that tell you what "category" your movements are assigned to automatically - because, creepily, now they do that.
The "task bar" once on the left is obviously gone, and you get a "quick access" bar that opens at the top with lots of shortcuts I don't need (but they are there to promote other bank services I don't use), while the mostly useful stuff, once easily accessible from the task bar with one click (i.e. a bank transfer), is now hidden under a "hamburger" menu whose navigation requires a GPS device. Ads are placed among financial records, because, you know, that's where you look to find out more about bank products, and isn't the internet all about ads? Some data is shown directly in the page; other data generates a PDF for no good reason. They evidently think people access the bank site as a "social experience" - and not because they have to manage money and payments...
Everything is terribly slow - on a high-end machine that goes through photo-editing and flight-simulation software without issues...
You forgot the page each employee gets for their own internal company blog where they are encouraged to "share" about their work
So your git status window is mixed in with update announcements from some HR idiot in Des Moines blogging about their cat.
Yep, missed a credit card bill because they changed the big "NOW" button from sending the payment, you know, now - to simply selecting "now" as the date; you then have to swipe the hidden portal three times widdershins to actually make the payment.
I don't think they do that anymore. See Azure DevOps' latest makeover as an example. They took the decent compromise between classic web application design and new-school UX, and went full UX complete with hidden functionality and unintuitive application workflow.
Sit someone in front of a computer and get them to talk about what they are doing and what they are thinking when they click menus and buttons. Let them be honest. Then it will help you design a better UI.
Microsoft as UI design paragon? This was intended to be sarcastic....right?
Windows 8 and the Windows Phone UI were excellent, well-thought-out designs for a tablet and phone UI - much better than Windows 10, and a very good improvement over the very outdated and limited 'Program Manager' UI still used by most phones and tablets.
They should have never, never been used on a desktop or server system which require a totally different UI.
The Windows 10 Mail application is another example of a very bad design that, in trying to be modern, became unusable. The same approach was used for the Outlook.com UI, which is now impractical and very slow.
"Better than that is to actually use the software/website/app for real rather than rely on theory."
You can only do that when it exists, and then it's too late. Best to build prototypes, or follow Brooks' wisdom: plan to throw one away, because you will anyway.
As someone pointed out in the thread under the new Libre Office 6.2 article, it would be nice if the LO developers had not copied the MS Word menu structure.
I wrote my PhD thesis, so very long ago, in WriteNow instead of Word (insert ancient version number) because, having dutifully tried Word, I found digging for common-or-garden formatting options was ridiculous. I fear I would still be trying to write the thing now, 36 years later, had I used Word.
WN even interfaced with Pro-Cite, the proper references DATABASE, which should have beaten Endnote (spit!) into oblivion but lost to it. I still mourn that; Pro-Cite was a wonder.
Just to make everyone feel old: I built my first references database as a custom build in Reflex, an early Mac FileMaker-like database program. Exporting the data tab-delimited into Pro-Cite when I had it was easy as. RIP, Pro-Cite.
For knocking out something really quickly, very little could beat WordPerfect 4.2. Simple function keys to bold, underline etc., and you opened a window to untangle the formatting when it invariably went bad after a bit of changing/moving, when you didn't quite select the hidden formatting character.
I'm 43 and I'm desperately afraid that UX/responsive design is going to be the thing that makes me seem like an old fart.
Do people actually enjoy having to think about what the little gray nub does, or what the line-drawing icon with no explanation text means? Do people like reading light gray text on a bright white background?
I am currently working with a young designer, fairly fresh out of his education, and he said that if they'd ever hand in a design with a hamburger menu he would fail the assignment. Exactly because a hamburger menu does state what it does.
Either it's not designers who are the cause of hamburger menus or we are witnessing a new generation of designers who would agree with you.
"Exactly because a hamburger menu does state what it does."
A hamburger menu allows you to choose hamburgers. That's what it does.
If you mean a menu that's hidden behind three lines being called a hamburger menu because that sounds kool the problem isn't with the menu, the problem is with it being hidden.
Right now I'm looking at a browser that has the classic CUA menus. If I were to click on File I know what to expect because, give or take application-related variations File menus do similar things.
If I use a browser with a "hamburger" menu I've no idea what it will do and, more to the point, if I know what I want to do there's no obvious way of finding how to do it unless I try everything in turn until I find it. I want to bookmark something? My CUA interface has a menu "Bookmark" - I think I've found it. My "hamburger" browser - no, it's not on that menu. Oh, look, it's hidden under an icon of a star. What numpty thought of that?
I was meaning to write "a hamburger menu doesn't state what it does". Hopefully that came across nonetheless.
I think the problem with a hamburger menu is that it only works if people have learned that a bunch of horizontal lines in a corner somewhere means a button that can be clicked to show settings or options. Ideally it should not be necessary to learn it, it should be obvious even for the person that encounters it the first time.
That way, however, lies the contentious stuff called skeuomorphic design. Some times it makes sense to design digital stuff the same as real world stuff to make its function immediately clear. Some times it's downright awful.
it should be obvious
It certainly wasn't to me when I first encountered it - I never got to use a Xerox Star but I can see why it disappeared for 30-odd years.
And apart from the vertically-stacked horizontal lines, there are those vertically- and horizontally-stacked dots. Are they the same or different? More of an IQ test than a UI.
The thing is, these things have caught on at a time when there has been a massive increase in available pixels - even on mobile phones - and we have stupid aspect ratios that make chunks of the screen practically unusable for anything other than fashionable whitespace or pointless edge-to-edge pictures. There's usually plenty of space at the top of a portrait screen or at the side of a landscape screen for an unhidden menu.
I think it's because we're not actually expected to interact, only to gawp - along with the menus, the meaningful content is disappearing, replaced by glossy stock photographs and boilerplate motivational slogans.
"There's usually plenty of space at the top of a portrait screen or at the side of a landscape screen for an unhidden menu."
The biggest problem with the primary app I use every day at work was the wide menu bar down the left-hand side and all the white space at the bottom of the screen, requiring me to constantly side-scroll to get stuff done. This was on a 4:3 screen.
The devs did eventually take our complaints on board, and have since moved all the menu buttons to the top of the screen. Unfortunately, we all have wide screens these days and now are constantly scrolling up and down to get stuff done and marvelling at all the white space at each side.
See icon ---------------->
"I think the problem with a hamburger menu is that it only works if people have learned that a bunch of horizontal lines in a corner somewhere means a button that can be clicked to show settings or options."
It gets out of hand PDQ. Bing on Waterfox. Waterfox has a hamburger menu top right. Bing has its own hamburger menu just below it.
It's hamburger menus all the way down! I look on them as the graphical equivalent of acquired speech impediments, such as scattering "like" into sentences where it doesn't belong, or the introductory "So", which seems to mean "I'm a millennial and I'm starting to speak".
Since it's come up: I'm looking at the odd buttons on my browser's top bar. I think I can see how they came about, and while you wouldn't call it skeuomorphic, it's taken the worst part of that idea and thrown away the rest.
Hamburger: set of horizontal lines that opens a text menu. It's attempting to represent lines of text, click on it and you get the lines of text menu.
Vertical lines with one on the right slanted. On very close inspection they are unequal lengths. This opens bookmarks and history menu. It's a bookshelf! A very small one.
What both have in common is that once you know what they do, you can see what the idea behind them was. Of course, by that point you don't need the obscure visual hint. They replicate real things, but in such an abstract way that they actually obscure their meaning. Funny thing about that skeuomorphic calendar link: both have flaws. The skeuomorphic example doesn't need fussy details like the clips, but it has richer information, using GUI elements to make it clear things can be interacted with and hinting about events, which the 'clean' design lacks.
Right now I'm looking at a browser that has the classic CUA menus. If I were to click on File I know what to expect because, give or take application-related variations File menus do similar things.
To be nit-picky, a 100% CUA-compliant browser ought not call its first menu item "File" but something like "Page" instead. Because, well, a Web page isn't a file, is it? Originally, AFAIK, all CUA apps were intended to have different top level menus to give you a clue to the kind of object the app manipulated: "Document", "Sheet", etc. But it goes to show how quickly foolish consistencies and cargo-culting creep into these things.
Everything in OS/400 was an object.
Nice mix of CLI and 'GUI' - the CLI used simple commands; remember about half a dozen base prefixes and the name of your object and you were good to go.
A simple list of objects, numbers and function keys quickly got you to where you needed to go in the 'GUI'.
To be nit-picky, a 100% CUA-compliant browser ought not call its first menu item "File" but something like "Page" instead.
Oddly enough out of the 14 items on the menu only one is "Open Web Location". The rest wouldn't look out of place on the File menu of any other application. Even Page setup is to configure the printer settings.
Sure, 99 times out of 100 the contents of that menu are going to be the same, because they are all the same basic top-level actions that most applications support on their associated files¹ -- creating a new one, opening an existing one, saving changes, printing, etc. But if you look at the original spec for the CUA (you used to be able to download it from IBM's site, though good luck finding it now), the name of the menu was supposed to be different depending on the kind of file being edited in that window. It was a minor thing, possibly intended to differentiate the CUA from the Mac HIG, which always used "File" as its first menu (though of course the Mac only ever had that one shared menu bar at the top), but I don't think it ever got traction. IIRC, even OS/2's Workplace Shell didn't follow it, and it kept to the CUA better than most.
¹ Using "file" in the loosest possible sense here, given that CUA and GUIs generally were coming about in the 80s, at the height of the object orientation craze, when "everything is a file" was fast being superseded by "everything is an object".
"My CUA interface has a menu "Bookmark" - I think I've found it. My "hamburger" browser - no it's not on that menu. Oh, look, it's hidden under an icon of a star. What numpty thought of that?"
That would have been initiated by the Accounts Numpty who wanted to minimise the language/localisation/translation budget. (note, applies across the board, not just to s/w, hence cartoon "instruction" manuals). The designers were then left with trying to find some sort of universal hieroglyphic symbol that would apply worldwide. Why they came up with three horizontal bars is another matter. I always thought of it as a stack of paper, and so thought that it implied options, choices. It was a long time before I heard it referred to as a "hamburger", which just made me go "WTF?" Even if you assume someone thinks it looks like two halves of a bun with a burger in the middle, why would anyone in their right mind think that a minimalist representation of a hamburger was a menu choice in a computer OS? Did McDonalds or Burger King sponsor it as a subtle form of advertising?
New icons are going to be ambiguous until you learn what they do by trial and error. It may also require context. Few people click a magnifying glass on a website thinking it's going to zoom in. However, on a map, if you saw a magnifying glass overlaying the map, you might think it would zoom in. If it was in a small white box overlaying the map, you'd probably know it's a search function.
On Chrome I can see three vertical dots which, when clicked, open a menu. On Edge, it's three horizontal dots. How does this differ from a hamburger menu? These are just the emperor's new clothes. The first time I saw the three dots, I had to click them to confirm it was a menu - the same process as the first time I interacted with a hamburger menu. I now know that if I see a hamburger or three dots in the corner, they are most likely a menu. If it changes to a cat icon in the corner and then everybody starts using a cat icon, I'll eventually accept that's a menu.
What I'm getting at is: does any icon truly state what it does until you use it for the first time? What we might call 'obviously intuitive' is actually layers of experience.
Responsive design is a design that responds to the format in which it's being viewed. UX is user experience.
Good UX would require responsive design that maximises easy user interaction, minimises confusion, and avoids "mystery meat" interfaces.
What you want is not an end to UX/responsive design, but responsive design that sticks to good UX principles instead of just being a hamburger and an endless trail of whitespace.
<quote>The light grey on a white background paradigm seems to have spread from Apple</quote>
And the UI designers that implement it ought to be:
1) tied to a telephone pole, and whipped,
2) beaten with a baseball bat until most/all of their bones are broken,
3) have their eyes put out,
4) boiled in oil, and finally
5) burnt at the stake.
There are many people who need more contrast in a display screen, and those utilizing such a low contrast UI do not appear to take their needs into consideration. Inconsiderate bastards.
"There are many people who need more contrast in a display screen, and those utilizing such a low contrast UI do not appear to take their needs into consideration"
If it's a UK-based company, start flinging complaints under the Equality Act. If they're a public body, the Public Sector Bodies (Websites and Mobile Applications) Accessibility Regulations 2018 is based on an EU directive and requires WAI-AA compliance for new content from September 2018 (existing content has until 2020).
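The AA contrast requirement mentioned above is mechanically checkable. Here's a minimal sketch of the WCAG 2.x contrast-ratio calculation (the linearisation constants and the 4.5:1 body-text threshold come from the WCAG definitions; the grey value is just an illustrative pick):

```python
def _linearise(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    """WCAG relative luminance of an (r, g, b) tuple of 0-255 values."""
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white: the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Light grey (#999999) on white: well below the 4.5:1 AA threshold for body text.
print(contrast_ratio((153, 153, 153), (255, 255, 255)) >= 4.5)  # False
```

Which is the point: the fashionable grey-on-white scheme fails the very check these regulations require.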
There's a bigger question you need to ask: does it need a manual? If you need a manual then the design can take a step back. Function over form and all that; just try to keep users' steps to a minimum. The poor sod writing the manual will thank you.
No manual needed? Make it intuitive and get people to test it for you. Get them to smash it apart. See if they figure it out, but don't throw conventions out of the window (like removing a footer, or placing the menu in some weird location like two-thirds down the page on the right, unless you're intentionally trying to annoy), as familiarity helps reduce the amount of brain meats required to make things happen.
The poor sod writing the manual will thank you.
It's not just poor sods (like me) who write manuals. Users will thank you, too.
Business users are paid to get shit done. Once they learn the application, speed, efficiency and consistency are far more important than acres of white space*, giant sans-serif text, and a periodic table of indistinguishable flat icons.
* EDIT: make that white and dark-gray space, because the cool kids all use dark mode.
"Does it need a manual? If you need a manual then the design can take a step back. "
Even better, write the manual and call it the design. OK, things get iterated and the draft manual needs to get updated, but another gem from TMMM is that the manual is the first thing that gets started and the last thing that gets finished.
I daresay that many of us who have been around and involved in this would be quite hesitant to do screenshots for either job security reasons or legal reasons. I watched one such adventure that damn near killed the company because the new interface program was a dog and buggy as hell. It was contracted out to off-shore types who did what manglement wanted. They wasted millions and binned it and then spent more doing it properly.
So who got the axe? It wasn't the higher-ups, who all got bonuses for both versions. Yes, it was the staff who protested what was being developed the first time. Not team players, and all that.
When I arrived at my second job, the software I worked on and another set of applications were written for PC using an in-house graphics library wittily called GEMTSC because it was written using GEM and emulated TSC terminals, enormous heavy terminals that took two people to lift. It is possible that people in this company actually wrote the microcode for the terminals; that was before my time and I don't think I ever really knew.
Now, this was in the forgotten days of software-that-was-graphical-but-not-WIMP, and a particular feature of this library was that you could have 50 lines of 80 characters on a PC screen in EGA mode - EGA was lower resolution than VGA, 640 by 350. Now, of course, the only way of getting 50 rows of characters in 350 pixels is to have a 7 x 7 font. Which one of my colleagues drew over a weekend, including lower case. I have to say that it looked rather better than the 7 x 7 fonts I can find on the interwebs, and I think his feat was insufficiently recognised. (*) Each character could have a different foreground and background colour (3 bits each). As you can see from this description, it was rather like Teletext mode and, indeed, on the 'real' TSC terminals characters could be made to flash; fortunately this didn't make it to the PC version. It was vehicle routing software. Each route had a line on the screen and each drop was represented by a 2 character abbreviation, with the background and foreground colours of both characters being significant, so the information was amazingly dense. The users could type in something like M01010503 to move the drop at position 1 on vehicle 1 to position 3 on vehicle 5 and the software would update the display with changed timings and colours to reflect delivery windows etc. The users could do this amazingly fast, far quicker than you could with a mouse.
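For flavour, that terse command grammar might be parsed along these lines - the field layout (vehicle, position, vehicle, position as two-digit fields) is inferred from the single example in the comment, so treat it as a guess:

```python
def parse_move(cmd: str):
    """Parse a move command like 'M01010503': move the drop at
    position 1 on vehicle 1 to position 3 on vehicle 5.
    The field layout is an assumption based on one example."""
    if len(cmd) != 9 or cmd[0] != "M":
        raise ValueError(f"not a move command: {cmd!r}")
    src_vehicle, src_pos = int(cmd[1:3]), int(cmd[3:5])
    dst_vehicle, dst_pos = int(cmd[5:7]), int(cmd[7:9])
    return src_vehicle, src_pos, dst_vehicle, dst_pos

print(parse_move("M01010503"))  # (1, 1, 5, 3)
```

Nine keystrokes, no mouse travel, and the operator's hands never leave home row - which is why the users were so fast with it.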
Meanwhile, with one of the products, not the one I was working on, work was going on on the next generation of that product, which would have a proper WIMP interface, using the full GEM GUI thing, which I think had a silly abbreviation GEM AES - application environment services. Except that it didn't use 'normal' GEM forms as people didn't like them - my boss in all seriousness said that he didn't think we would ever let a windowing system dictate the look and feel of our software. This is not as nutty as it sounds for the late 80s. It would, however, I feel have been less nutty if the colour scheme chosen hadn't been so bizarre, it was very heavy on black, white, grey, light and dark blue and light and dark magenta. A rather gothy look for the software. The text on the forms was either the same font or something very similar, certainly non-proportional, with the buttons having right angle corners. Some time I will have to mock up what it looked like.
The next generation product seemed to me to suffer from the traditional problem of being rewritten by people who didn't quite understand the principles behind the mathematics etc. (I'm sure that if anyone finds this and recognises what I'm talking about they will disagree, and perhaps I am being a bit unfair).
I suspect it didn't help that the person who wrote the original system was not over-communicative and had an ideological objection to meaningful subroutine / function names in the conventional sense of the term. We were, of course, in the land of FORTRAN 6-character names, and his argument was that a name, least of all a 6-character one, would be a misleading simplification. Therefore things were given numbers, e.g. IX8081 - with IX meaning integer function and 81 being the 81st routine in subcategory 0 of category 8 - and you would then look in the documentation to find what the 81st routine in that subcategory was, and of course you would see the hierarchy chart explaining what category 8 and subcategory 0 of it were.
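The scheme is at least mechanical. A sketch of a decoder for it - the exact field widths (two-letter type prefix, one category digit, one subcategory digit, two-digit routine number) are assumed from the single IX8081 example:

```python
def decode_routine(name: str):
    """Decode a numbered FORTRAN routine name like 'IX8081':
    type prefix 'IX' (integer function), category 8, subcategory 0,
    routine number 81. Field widths assumed from the one example."""
    if len(name) != 6:
        raise ValueError(f"expected a 6-character name: {name!r}")
    prefix = name[:2]
    category, subcategory = int(name[2]), int(name[3])
    number = int(name[4:6])
    return prefix, category, subcategory, number

print(decode_routine("IX8081"))  # ('IX', 8, 0, 81)
```

Of course, the decoder only tells you where to look in the documentation - the name itself still tells you nothing, which was rather his point and everyone else's problem.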
It became clear that it wasn't going to fit into 640K and therefore the software was ported to OS/2. And I mean ported rather than rewritten, so we had the strange non-platform look and feel forms ported from GEM to OS/2. (**)
So far, so good. I actually quite liked OS/2, although it suffered from the problem with all OSs except Windows and (in recent years) Linux of only installing and working on PCs under carefully controlled circumstances (***)
At this point, however, (a) people wanted the software to run on varieties of Unix and (b) people wanted the software to run on Windows. Someone called Bill faffed around with a Unix machine to try and work out what we should do. The decision was taken out of his hands and it was decided to use (cue dun dun dun music) a cross platform library.
The cross platform library, which it appears is still on the go, was called XVT. At the time it provided support for the Mac (which we didn't want because there is no call for engineering software on Macs), Windows, OS/2, and a variety of Unixes (and possibly VMS running X). There was also a rather bizarre character mode version which emulated windowing on a character display. The software was therefore ported again, but maintained its lovely home-grown look and feel.
Now, the thing is that lots of people used this XVT thing back in the day for their desktop software on the off chance that they might want to run it on Unix. However, we were the only people I know to use the Unix version. And it didn't work terribly well. Though possibly that was the fault of (a) the wacky non-platform-look-and-feel forms and (b) the fact that it was on its third port by that point. I have no idea what it's like now, and of course now it's competing with free (beer / speech) software.
So we went back to writing our own cross platform library more optimised for what we did - the fourth port. And so it stayed until I left.
(*) Thinks: with a 1280 by 1024 display and the same 7 x 7 font, you could fit 146 rows of 182 characters - wonder what that would look like?
(**) my group, on the other hand, DID actually ship some software written in GEM using this library, the only software written in the GEM version that ever was shipped. My vague recollection is that we shipped the software in order to get paid though we knew that the customer wasn't going to use it due to a change of management / direction. It was a GEM running on DOS front end to an IBM mainframe system written in FORTRAN using a 3270 terminal emulator. Essentially another brilliant but slightly misguided person wrote a layered protocol stack based on the theory of the OSI 7 levels. This entailed gibberish being written to the 3270 terminal which was read using the emulator library and converted to menus / forms etc. We had to get the resident IBM mainframe guru to write us some special assembler to call from our FORTRAN to clear the 3270 screen. The architect of said software took his money from the sale of the company to quit and travel round the world. We also wrote a route finder package we were going to sell for a few thousand pounds, which is what they cost in those days. We had just about finished it when AutoRoute came out for fifty quid. My, how we laughed at that, too.
(***) "We don't get many of those round here." "Vampires?" "No, controlled circumstances."
Cool story. Reminds me of writing Motif code under Windows 3.1 because that's what Bentley Systems Microstation used. I learned a thing or two back then, and never got to use that particular skill again because it was decided we should use the newly fangled MFC from Microsoft for everything else. Which nearly turned me away from programming, but that's another story.
I've had my fair share of complaining about new-fangled crappy UIs. Be it, e.g., buttons that are no longer recognisable because they're almost indistinguishable, flat, white rectangles. Or illogical layouts of entry masks that spread over three and a half screens. And so on. But for once I would like to draw the attention away from software and get down to hardware, down to the kitchen.
Down to modern touch controls on, e.g., a cooker. Touchy and unresponsive crap, that is. What is wrong with old-fashioned knobs?! You just turn them and see (and feel!) immediately which position they are in.
I spent hours trying to find out if anyone out there makes an induction hob with 'real' controls. My research led me to some obsolete models but nothing remotely current.
In the end I settled on a touch control where you can at least slide your finger along a linear scale for each ring, as opposed to needing to repeatedly press +/- buttons. However, my original hunch about touch controls was absolutely right, and whilst I love the induction technology, I utterly loathe the controls... The control panel ceases to function if it gets the slightest bit wet (like when a pan boils over or a spoon drips) - drying the controls with a towel generally causes random button presses and messed-up settings.
The hob has a neat timer function which can time each cooking ring separately and shut it off when finished, however to select which ring you want to time you have to repeatedly press a single button that follows a strict cyclic pattern starting with the top-left of the five rings regardless of which are actually in use. Seriously! How much effort would it have taken to code in a bias for active rings first!?
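Not much effort at all, as it happens. A sketch of the cycle order the commenter is asking for - active rings first, idle ones after, each group keeping the fixed top-left-first order (the ring labels are made up for illustration):

```python
def selection_order(rings, active):
    """Return the ring-selection cycle with in-use rings first.
    'rings' is the fixed physical order; 'active' is the set of
    rings currently switched on. Labels are illustrative only."""
    active = set(active)
    in_use = [r for r in rings if r in active]
    idle = [r for r in rings if r not in active]
    return in_use + idle

rings = ["top-left", "top-right", "middle", "bottom-left", "bottom-right"]
print(selection_order(rings, active={"middle", "bottom-right"}))
# ['middle', 'bottom-right', 'top-left', 'top-right', 'bottom-left']
```

One list comprehension's worth of firmware, and the button would land on a ring you are actually using on the first press.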
It also has a 'Child Lock' - by pressing the padlock icon you can prevent the change of any settings. Great, except that the power button is not protected by this lock. It is infuriatingly easy to accidentally activate the power button (I say button, but it is, of course, just a touch point), which IMMEDIATELY shuts off the hob if touched. Those handy per-ring timers I mentioned? Guess what: they're all lost if powered off mid-cooking, leaving me to only guess how long they might have been on for.
When cooking Christmas dinner this year (the first one in this kitchen) I very nearly took a sledgehammer to the damn thing because it failed to respond and/or inadvertently powered off so many times! I really wish I'd gone with an 'analogue' gas hob and just put up with it being hard to clean :-(
The ovens are almost as bad - one 'Multi-function' button that you have to press something like 5 or 6 times to cycle through to set something simple like the bloody timer, and a UI in the form of a dirt-cheap 7-segment LED display which makes you think you're setting the number of minutes (it says 'min' under the number) when actually you're setting the seconds (like anyone even needs that level of granularity in an oven timer!?).
I'd love to simply blame AEG, as it's their brand stamped on the goods; however, the problem is far more widespread - I've certainly spotted lots of identical controls across numerous brands and they're all jumping on the touch control bandwagon. Sadly (for me at least), most people I ever mention this to don't seem to care - it seems that if a feature is too obscure to be practically useful then they simply ignore it rather than being annoyed that it's badly designed.
Is it really too much to ask for a 'simple' device like a hob to have fool-proof controls which work reliably and consistently for their intended purpose (i.e. not rendered unusable by a water spillage in a known-wet environment)?
The induction hobs in the student digs here are quite nice: you set the temperature, and they're resistant to stuff spilling on them. They turn off if they don't sense a pan, but it's two presses to get one on and set to 120C, and one push for +/- 10C.
I've got gas at home, knobs and all :D
I wish I didn't know what you are talking about. Unfortunately, that is not the case at all. Just to add to this misery, every time I placed a (hot) pan on the touch controls, the cooker would shut off and lock. Being locked means that you have to press two touch controls simultaneously for about 5 seconds. Believe me, 5 seconds is an awfully long and painful time just seconds after lifting a hot pan off those controls.
For now I settled with an induction hob by Gaggenau. Not perfect but at least it's got one (yes: one) physical, multifunction knob to control it. Shift it a bit towards the cooking ring you want to control and then turn the knob. Simple. And not having any kids around also helps with not losing said knob, as it's only magnetically coupled.
Automotive UI design:-
Range Rover (Sport & Full Fat)
Cabin temperature controls are rotary knobs, with the set temperature displayed either in the centre of the rotary knob or on the multi-function screen, depending on model and model year. If you want to change the cabin temperature on either side or both sides of the cabin, you reach out, turn the knob in the 'traditional' increase or decrease direction, there's a visual confirmation and the temperature is changed to the one you set.
This system is 'familiar' to just about everyone, can be operated without taking your eyes off the road and (crucially for a multi-surface vehicle) can be operated no matter how bumpy the road - or lack of it - makes the journey.
Volvo (XC40, 60 & 90, V70 2017 onwards)
Large touchscreen in portrait orientation for all functions; cabin temperature displayed at bottom left & right *most of the time* - if not, press the hard-key 'home' button first. Then press the temperature display for the side you want to change (no option for master/slave config.), and a long vertical bar of temperatures pops up - note not all possible temperatures are displayed, so you may have to scroll... Select the temperature you want, then close the screen by selecting another option or use the hard key - or repeat for the other side... Now try that trundling down your gravel drive in the morning without breaking a nail on the screen, having to look at the screen to find where to touch, steadying your hand against the fascia so you don't accidentally turn the fan to full or off - or, crucially, wiping out the gates at the end of your drive because you were going too fast for active city safety to cut in (actually happened - the log file, accessed with the owner's consent, showed speed increasing due to throttle input despite the driveway only being 230 metres long, with steel gates that would only be fully open in time if the speed passing through the sensor was 7mph or below and didn't increase once the gate opening was triggered; the owner had lived there for twelve years and the electric gates had been installed four years earlier).
The MMI dial control moves the cursor in the opposite direction to that expected - all bets are off.
Sometimes an updated UI isn't a great improvement in UX
Back in the 1980s, SAAB put an awful lot of thought into making their cars - especially the classic 900 - usable and maintainable, using experience from the "human factors" department of their aircraft business. By all accounts, it worked very well - once you got used to the controls all being in different places to *other* cars whose designers weren't so forward-thinking. Unfortunately many commentators (including Top Gear) objected to the nonconformism and failed to see the engineering behind it.
The ignition key was on the centre console, when everyone else put it on the steering column. This was primarily a safety feature; in a crash, it was one less sharp implement attached to a part of the car that might intrude into the survival space, and was very close to the driver's knees. (This in the days before airbags and crumple zones became ubiquitous.)
This position also let them interlock the key with the gearshift, so that you had to put it in Reverse (or Park, for automatics) before it let you take the key out. With the handbrake acting on the rear wheels and the transmission on the front, this meant your car was secured by all four wheels when you left it. The downside? Until you developed the right habits, you would often end up starting the engine while still in Reverse - it took two distinct movements of the key, with a movement of the gearshift in between, to go from "Lock" to "Start/Run" or vice versa.
The radio was mounted right at the top of the centre dashboard, so that you didn't have to look *down* to adjust it, only glance across. It was still usable by the front-seat passenger in that position. The ventilation controls were straightforward and widely emulated by other manufacturers at the time - lessons since forgotten by the industry. Secondary controls needed by the driver, such as headlights, were logically grouped either side of the wheel.
Even the dash lighting colours were chosen to maximise visibility while minimising the effect on the driver's night vision (green flood and orange needles - yes, real needles, not fake ones on a computer screen). The tachometer had a simple arc painted on it to indicate the economical range of engine speeds, as opposed to those developing maximum torque and power - the latter being optimised for easy overtaking with minimal gear changes. The warning annunciators were large backlights giving more information than today's laconic "Service Engine Soon" - and if *any* of them were on, you would notice the fact and know to pay attention to what they were telling you, as soon as you found a safe moment to look more closely.
Then you looked at how the engine was laid out. The battery and fusebox were both under the bonnet, but on the far side of a bulkhead from the engine bay proper, helping them stay cleaner and making them easier to work on when needed. The two dipsticks (one each for the engine and transmission) were next to each other. Everything that commonly needed changing or adjusting could be dealt with without needing a quadruple-jointed driver to reach into some obscure corner of the mechanicals. And the bonnet itself was hinged at the front, in such a way that it would never blow against the windscreen if it came unlatched, nor did it need a strut to keep it raised once deliberately opened.
What General Motors did to that company was nothing short of criminal.
Are you sure that you're not just too lazy to learn how to use current software?
Maybe I'm just not senile enough to understand the difficulty of adjusting myself to the world. It takes half a second of trial and error to figure out what a hamburger menu does, people. Seriously?
Well, guess survival of the fittest applies to the digital realm as well.
It wasn't laziness for the first 4,000 applications but after 40+ years of coping with useless stupid changes it starts to wear. In that time the accelerator pedal has stayed in one place, door knobs tend to turn in the same way, light switches are mostly flipped up and down. And yet every so often some moron with a screen layout program decides to discard everything done before in favor of some different idea, usually a version that takes more steps or is less discoverable. F' it. Just have one button and make the user tap out Morse Code. That's pretty minimalistic. The software will beep back its response.
Quick, without clicking on it, name all the entries under the hamburger menu in your browser. That's right. You only look there because you hope a feature you need might be under Miscellaneous.
All the Big Corps that can't be bothered to think seriously about their products beyond the shareholders' dividends seem to be going the same way on UX/UI. All Google and Microsoft UIs are utter garbage. And I'm guessing they've employed UI/UX experts. From their results, it would appear they're actually just graphic designers trying to be clever, have no experience of life, no empathy wrt the end user, and should never have been engaged in the first place. The whole shit show is going to hell in a handcart. We're just drowning in a sea of incompetence.
As a hardware nerd, my UI's look like a high-school science fair project when I have to release them into the wild (after I get the underlying code working.)
"Why can't it... *meaningless thing*?.." Ungrateful knuckle-dragging morons.
"It's a couple thousand lines of VB code, here's the Github URL. I hate you. But, go Nuts!."
Thank you, Alpine car stereo, for wirelessly connecting Apple CarPlay to my phone every time I get in the car.
If I can't find my phone but the car can, I need to check under the seat or in my gym bag.
I would have given up a cm of screen space width for a normal volume knob.
If one has ever read the original «Inside Macintosh» documentation and considered the research on user behaviour and consistent interface design, the contrast with the horrors of user interface design common in both the Mac and PC world today is stark.
The worst of the common lot today has to be the Office 365 suite. The focus on using buttons and bars for everything means that what was once easy to find in the menus has to be hunted for in a mishmash of button bars, or found with the context help.
Upvoted for the Macintosh Human Interface Guidelines. Essential reading for anyone designing a desktop GUI or application thereof, even if it's not for a Mac, and even if the final result doesn't even *look* like a Mac interface - if it *behaves* like one, you won't go far wrong.
Let me offer you a small example: the Panel in KDE. That's the bar (usually) at the bottom of the screen with the start button at one end, the system tray at the other, and between them the minimised windows and any useful bits you want to put there.
In KDE3 this could be set to auto-hide and then to reappear when the cursor went to a prearranged place. This could be anywhere along the edge or just in the corner.
If you selected the edge, the ballistics of a mouse cursor would ensure you hit it whenever you went for a control at the bottom of a window sitting at the bottom of the screen. You then had to wait a moment for the panel to unhide and rehide.
The better option was to select the bottom left corner as the target. This is where the menu button is which is often what you want from the panel anyway. Even if it isn't it's quick enough to go for the corner and then slide along to whatever you want. Very effective and very unobtrusive.
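The corner trick works because screen edges and corners stop the cursor: you can't overshoot them, so they behave like enormous targets. Under Fitts's law that collapses the pointing difficulty. A sketch using the Shannon formulation, with purely illustrative numbers rather than measured ones:

```python
import math

def fitts_index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's law index of difficulty, in bits:
    ID = log2(D / W + 1). Smaller means an easier target to hit."""
    return math.log2(distance / width + 1)

# A small hotspot floating mid-screen is hard to hit precisely...
print(round(fitts_index_of_difficulty(distance=800, width=4), 2))    # 7.65
# ...but a screen corner stops the cursor, so its effective width is
# huge and the difficulty collapses to a fraction of the above.
print(round(fitts_index_of_difficulty(distance=800, width=400), 2))  # 1.58
```

Which is exactly why flinging the mouse into the bottom-left corner felt "very effective and very unobtrusive": it needs no fine aiming at all.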
In KDE4 the corner option was done away with. Just that. Annoying panel behaviour unavoidable.
In KDE5... no, they haven't seen the light and restored the corner option. They've just added, whether as a bug or by deliberate design, another breakage. The panel now bobs up if a window opens at the bottom of the screen. Not only that, but it stays there until the mouse cursor visits and leaves it.
What's worse, if the panel is hidden and a confirmation dialog is invoked, say to confirm binning something, it's invoked minimised on the panel which, just to be contrary, doesn't unhide. To confirm, you have to bring the mouse cursor down to the bottom of the screen, unhide the panel, click on the minimised dialog to restore it and then click to confirm. In the end the only workable solution is to sacrifice screen real estate to a permanently visible panel - at least that also resolves the bobbing-up problem.
And did I mention that in 5 the background can no longer be a gradient unless you make a gradient as an image and set it as the wallpaper? Or the eye-rattling image that's the default wallpaper? Or the bland scribbles that have become the default icon set?
The only rationale I can think of is that they didn't want Microsoft to be the only ones with crap UI design.
A great specific example: 95% of all the settings you needed to tweak a Windows 7 desktop were only a couple of consistently pathed clicks deep within Control Panel, and anything more exotic you could spin up via MMC as required.
Under Windows 8 (and carried over to 10), the modern UI's approach to organising settings in particular is god-awful. Stuff is spread out in non-intuitive places, with loads of scrolling through radio buttons even though there is more than enough white space to include more information per view. A huge step backwards in usability design.
Dunno about that. I learned to swear worse than a sailor* when they started to arbitrarily group various bits of the control panel under text links (each of which I had to read and mentally process to navigate) under equally arbitrary categories in Windows 7, compared to the simple grid of named icons I could navigate at a glance** or even mostly by muscle memory** in Windows XP after a bare minimum of familiarity.
* I kid, I kid. That actually happens to all of us once we start driving...
** Which is what makes "recent" or "most used" collections of apps useless - because they're inherently dynamic, there is no muscle memory developing on where to reach stuff so you need to actively identify everything on them all the time, whereas I only needed visual clues on my old XP start menu to fine-correct my cursor movement; which region to reach for was pure memory.
It's now much better on mobile though, the old design was terrible, and I often wondered why a tech site couldn't come up with a decent mobile site...
But there are more than a few techies who seem to think a useable decent looking UI is for wimps, and people browsing on mobile are stupid millennial types who want the moon on a stick and they should be using a desktop anyway, or possibly android because all apple users are fashionistas or something.
Biting the hand that feeds IT © 1998–2020