Objectivism Online Forum

Why Is Software So Badly Designed?

DavidV

Recommended Posts

(I posted this on HBlist, but I thought it might make a good discussion topic:)

There are many reasons why most software interfaces are unintuitive and ugly, but here are three of the main ones I've experienced as a user-interface developer and designer.

A concrete-bound mentality. Programmers, managers, and marketers usually focus on the most concrete requirements when they schedule software development – namely, features. They fail to plan for more abstract requirements like reliability, testability, and usability. Instead of beginning with the interface, they expect programmers to work on it "along the way," or they schedule it for the very end of a project. Inevitably, projects run late, and usability is crowded out by the "just one more feature" effect.

A lack of consideration for the user's context. Programmers think of workflow from their own context of knowledge rather than the user's. They don't realize that the user does not know the architecture of their software, so they place functionality where it is quickest for them to implement, and label functions with technobabble rather than terms that are intuitive for the user. I remember attributing complaints about the difficulty of my own software to users' stupidity, since the proper action was so "obvious" to me. Now I screen all new interfaces with several laypeople.

Lack of conceptual integrity. This is the most frustrating problem I deal with. Any non-trivial program will usually be designed by several programmers, each with his own vision of what the software should be. Mix a lack of planning with requests from marketing and sales, successive versions, and new employees, and the result is often a bloated, unwieldy jumble. Such code is very difficult to refactor (essentially because its non-hierarchical nature flouts the crow epistemology), making interface improvements difficult. Such was the fate of Netscape Communicator, which tried to be too many things to too many people and ended up such a mess that the development team had to start over from scratch; the project never recovered from the setback. Like any enterprise, a program must be the product of a single vision – an architect who creates the design and has the power and ability to get it implemented.

Interface design is improving – largely thanks to the book "The Design of Everyday Things" by Donald Norman, which jump-started the "user-centered design" movement. Software from Google and Microsoft (with its upcoming Office 12) reflects the new focus on the user experience and the "less is more" philosophy.


This is a very funny topic to me because I've been thinking about it for a while now. My conclusion a while back was that it was a lack of context consideration on two fronts. First, in my experience, programmers generally don't look at things from a business perspective but rather from their own. A lot of times they are left to their own devices and create products from such a poorly conceived conceptual place that the software is just a mess.

The second front is that, again, they don't consider the UI from the user's perspective but rather their own. So they really have no hierarchy of knowledge and understanding when it comes to actually making a product. Another thing I have seen is a general apathy among developers/programmers about listening to the marketing or business side. A fair number of developers just think they know what is right and that's it (again, this is in my experience).

It's really funny, but I think I came to pretty much the exact same conclusions that you did, Greedy. Your dissection of it was absolutely terrific, though :-). I think a lot of developers would really like to think they are smart (and some certainly are), but unfortunately they spend so much time on concretes that they can't work on an abstract level as well.

Great topic.


This is a very funny topic to me because I've been thinking about it for a while now. My conclusion a while back was that it was a lack of context consideration on two fronts. First, in my experience, programmers generally don't look at things from a business perspective but rather from their own. A lot of times they are left to their own devices and create products from such a poorly conceived conceptual place that the software is just a mess.

I sell Euro-built synthesizers for a living, and a similar point was made by a customer of mine today. To paraphrase him: "The engineers who build products are so satisfied with the technical aspects of their creations that it's all about the status of the engineering, not the user-friendliness of the actual product. There's nothing wrong with engineers, but because they have little business or marketing sense, their products end up being over-priced and confusing to potential buyers."

And he was right. Although the brand I represent isn't difficult to use (in fact, the UIs are quite transparent), the "craftsmanship" – as well as the development time and costs – renders these products 50% more expensive than their Japanese-built counterparts.

Interestingly enough, after 20+ years of mass-manufactured digital synthesizers, the Japanese are finally starting to learn that features don't mean anything if the benefits can't be realized, and they're redesigning their UIs accordingly. I mean, c'mon: an entire recording studio in one box, and I have just four knobs!?

Edited by synthlord

This is a common problem in all engineering fields; it's not unique to software development. My dad has a Ph.D. in Human Factors Engineering, and it's his job to spot and solve human interface problems before they become really bad situations.

I personally think the problem is simply a lack of entity-orientation in design: when you build something, consider the who/what/how/when/where/why of its actual use, not just the specific problem you're solving, and you'll be all right.


A concrete-bound mentality. Programmers, managers, and marketers usually focus on the most concrete requirements when they schedule software development – namely features. They fail to plan for more abstract requirements like reliability, testability, and usability. Instead of beginning with the interface, they expect programmers to work on interfaces "along the way" or schedule time during the very end of a project.
I've encountered this problem a few times in my job. My company does billing for hospitals, each of which has *very* different ways of doing things. My job is to adapt our chart validation software for each hospital. The problem is that sometimes the ED department completely forgets to notify IT of a change until a few days before we go live. So they need something that works in two days, but what they end up with is something that could have been designed better if I'd actually had a reasonable deadline. As long as that feature "works," they don't care whether it could have worked better. Then, after the change goes into production, I'll think of a better way of doing it. The problem then becomes users' resistance to change. Once they know one way of doing something (no matter how bad it is), they don't want to learn a new way (even if it's an improvement).

A lack of consideration for the user's context. Programmers think of workflow from their own context of knowledge rather than the user's. They don't realize that the user does not know the architecture of their software, so they place functionality where it is quickest for them to implement, and label functions with technobabble rather than terms that are intuitive for the user. I remember attributing complaints about the difficulty of my own software to users' stupidity, since the proper action was so "obvious" to me. Now I screen all new interfaces with several laypeople.
This is another problem I've had in the past. When I first started my job, the only GUIs I had ever designed were for my own personal use. Once I started designing things other people use, I would get some negative feedback from users. I generally don't have that problem anymore because I know a lot more about what users are thinking when they use my software. So I think programmers can get better at this after dealing with their users for a while.

There is another roadblock I've encountered to good interface design. In my case, I'm designing a contract management system for our legal department. They've been hounding us over this for over a year. The problem is that when we ask them what they want, they have no clue. That's somewhat understandable, since their job is not interface design or programming. But the programmer needs at least *some* knowledge of their business processes in order to figure out the best possible way to design the software. What I'm getting at is that the programmer needs to know something about who will use the software in order to design it in a way that meets their needs. Sometimes the users aren't willing to give you any input, and they are the ones who suffer in the end.


I'd say that there are two groups of people responsible for this:

The software guys, who engage in self-serving over-engineering and lack a feeling for customer needs, and the marketing guys, who want the software on the market yesterday and don't understand the need for good testing.

And, of course, each group blames it on the others. :)


A concrete-bound mentality. Programmers, managers, and marketers usually focus on the most concrete requirements when they schedule software development – namely, features. They fail to plan for more abstract requirements like reliability, testability, and usability. Instead of beginning with the interface, they expect programmers to work on it "along the way," or they schedule it for the very end of a project. Inevitably, projects run late, and usability is crowded out by the "just one more feature" effect.
I can't speak for commercial programmers, but I would suggest that for free (open source) software this is because graphical interface design is pretty boring (IMO anyway, and this view is shared by several other programmers I've spoken to). Compared with the 'real programming' parts of a project, making an interface is drudgery, so it's not really surprising that it often gets neglected. The 'lack of concern for users' is related – if the program performs a fairly simple task (like unzipping files or playing a DVD), then using it from the command line is easy, probably more so than using a GUI. Windows users have been led to expect a GUI for every minor task whether it is necessary or not, so it's unsurprising that they start complaining when told to work from the command line. But why should any programmer bother going through the boredom of creating a GUI just because users are too lazy to learn the command line? And if they are nice enough to provide a GUI, it may well be half-hearted, since it wasn't really needed in the first place.

It's different with commercial software, obviously, and with software where a GUI is actually necessary or useful (e.g., web browsers, WYSIWYG word processors, etc.). And you are correct that there are a lot of programs out there with terrible, terrible interfaces. There are probably lots of reasons for this, but I think the main one is that most programmers simply haven't been taught the basic principles of GUI design. Perhaps this is because it seems like it should be obvious – I mean, any idiot can throw together a few menus and buttons, right? Why bother studying something so trivial? But of course, this isn't the case – designing a good interface is nowhere near as easy as it sounds, and it is a skill in its own right. And since they don't bother making any effort to learn this skill, it's unsurprising that most programmers lack it.

edit: There's also the fact that good GUIs tend to be unnoticeable – when a program is easy to use, you generally don't notice the interface; it only enters your consciousness when it is poorly designed and getting in your way. Therefore, it may be harder to learn GUI principles 'from examples,' because the best parts of the GUI are 'hidden' despite being (literally) in front of your face. I can tell you many reasons why Opera is a better web browser than Internet Explorer in terms of the features it has, but I would find it a lot harder to point out explicitly why its user interface is better, even though it clearly is. It's even harder to explain why you prefer one program's GUI to another's when both are actually well designed – I would find it very difficult to say why I prefer Opera to Firefox (or KDE to GNOME, or WinRAR to WinZip, or MS Word to OpenOffice/KWord) – there's not much I can explicitly point to; it's just that the interface somehow feels nicer to me. Perhaps if I'd actually studied more GUI design I would be able to explain why I prefer it, just as someone who has studied music theory might be in a better position to explain why he prefers one piece of music to another.

Edited by Hal

In order to better explain what I mean by people not noticing user interface features, I'll give an example of something I find quite important, but which most people will probably never even have thought about unless they'd actually studied interface design.

Look at the top left of the window you are currently using – there will be a menu bar ("File"/"Edit," etc.). Notice where this menu bar is placed – it's actually about 1 cm below the top of your screen (assuming you're in full-screen mode). Now, try to click on a menu and see what happens when you move the mouse towards the bar from the middle of the screen – you jerk your hand to move the pointer in that direction, but unless you have very impressive dexterity, the pointer will almost never land exactly on the menu bar. Normally it'll land a few centimeters away, and you have to make further hand movements until the pointer is where you want it.

Now, try the following: double-click on the program title bar directly above the menu bar (in Windows, this toggles whether the program is maximized to full screen). Notice that this is a _lot_ easier than clicking on the menu bar – you don't actually have to aim the pointer. You just throw it up to the top of the screen and it always lands on the bar – you can't move it too far up. In GUI terminology, we say that the title bar 'bleeds' into the edge of the screen. The bar effectively has infinite height, since the mouse pointer will always stop on it no matter how far or fast you move it. In Windows, this bleeding also occurs at the bottom of the screen – try to click on the Start button, or any of the taskbar buttons for your minimized programs – you'll find that no matter how far down you move the pointer, you'll always land on them. Now, just think how annoying it would be if these bars never bled into the edge of the screen – if there were a small region of 'dead pixels' around the screen that wasn't part of the bars, so you had to 'aim' your pointer just as you did for the menu bar (in fact, some operating systems used to be like this – nothing bled into any edge of the screen). Now, if you were designing an operating system for yourself, would you even have thought about something like this? You have no real reason to have ever noticed it, yet if you found yourself using an OS without bleeding, you'd probably notice straight away, and it would be quite irritating. I would claim that quite a lot of GUI faults are similar to this – there aren't really any philosophical reasons why they occur; they're just things so subtle that 90% of people won't notice them (unless they are missing).
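The effect described here is usually modeled by Fitts's law, which predicts that pointing time grows with the ratio of travel distance to target size. A minimal sketch in Python (the pixel figures are invented for illustration) shows why an edge-bleeding bar is so much easier to hit: the screen edge stops the cursor, so the bar's effective depth along the motion axis is practically unbounded, and its index of difficulty collapses:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts's law: index of difficulty in bits."""
    return math.log2(distance / width + 1)

# A 20-pixel-deep menu bar 400 px from the cursor: the user must
# decelerate and aim, so the index of difficulty is high.
aimed = fitts_id(400, 20)      # about 4.4 bits

# The same bar bled into the screen edge: the cursor is stopped by the
# edge, so the effective target depth is practically unbounded
# (modeled here as 10,000 px) and the difficulty collapses.
edge = fitts_id(400, 10_000)   # about 0.06 bits

print(f"aimed bar: {aimed:.2f} bits, edge bar: {edge:.2f} bits")
```

The specific numbers don't matter; what the model captures is that a target backed by a screen edge behaves as if it were arbitrarily deep, which is exactly the "throw the pointer and it always lands" experience described above.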

On a side note, in MS Windows applications the menu bar isn't part of the title bar, which is why it's awkward to click on (you have to aim the mouse). And this is just obvious bad design – in a perfectly designed operating system, the menu bar would always bleed into the top of the screen just like the title bar does (perhaps the left half of the bar would contain the menus, and the right half would be for minimization). Even better, programs should be able to choose what they put in that bar – it annoys me in Opera that the bar for changing tabs is free-floating and doesn't bleed (this gets progressively more annoying as screen resolution increases, since the bar becomes smaller and hence harder to click on). But the fault here lies with Windows.

edit: 'Mouse gestures' are another fantastic example. Until I started using Opera I would never have realized how useful these are, and now I wish all programs came with them as standard. It's like having a mousewheel on your mouse – it's not till you've actually gotten used to one that you realize what a genius idea it is.

Edited by Hal

For the first year at my current job, a 'design document' was a mythical creature, usually conceived after the product feature was released to production. We are now starting to create design documents beforehand; however, the design process is built around what we want to report first, and how the numbers should look on the report. Then we design the product to meet the needs of the report. Should be an interesting year.


Knowing the context of the user is the critical aspect.

Ideally, one tries to understand as much as one can about how one's product will be used. Some programmers slip up right here.

Secondly, one must consider usability to be a value that goes into the design. Sometimes a screen will be really complicated because the designer is trying to handle all sorts of rare situations. I've seen screens meant to record a customer's name and address that are really complex because they were created to take into account extremely rare conditions – multiple residences, vacation homes that are active for only a certain range of dates, foreign addresses. Of course, if a customer sometimes needs that rare functionality, the software should allow it. However, one should always be aware of what will be used often and what will not, and then calculate whether it is worth doing something extra to streamline the typical case. It's like the express check-out lane in grocery stores.

This applies not just to single screens, but to functionality across the system. I once did a quick analysis of a system and found that 50% of its usage was focused on about 8% of the work-flows/screen-sets. One usually knows the most important parts going in and can be particular about them. It's also important to monitor ongoing usage, since you'll always get some surprises.
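That kind of quick analysis takes only a few lines. A sketch in Python (the event log and screen names are invented for illustration) tallies screen visits from a usage log and reports each screen's share of traffic, which immediately shows where the "express lanes" belong:

```python
from collections import Counter

# Hypothetical clickstream pulled from usage logs: one entry per screen visit.
visits = ["search", "search", "checkout", "search", "profile",
          "search", "checkout", "search", "settings", "search"]

counts = Counter(visits)
total = sum(counts.values())

# Rank screens by traffic and print each one's share of total usage.
for screen, n in counts.most_common():
    print(f"{screen:>10}: {n / total:.0%}")
```

On this made-up log, "search" alone accounts for 60% of all visits – the kind of skew that tells you which few screens deserve most of the usability effort.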

Sometimes the programmer is intentionally isolated from the user's context. A user representative will often speak to an analyst, and that conversation will form the majority of the "requirements." In the worst case, the user representative is whomever the user department could spare for all those pesky meetings, not someone who really knows the job. In really bureaucratic staff structures, there's a further layer between the programmer and the analyst.

Ideally, every programmer should have some real idea of the user's job. This does not require a whole lot of effort. Suppose one is designing a teller system: every programmer should spend a few hours sitting quietly behind a teller and seeing how tellers currently do their job. The totality of what is good and bad about the current system, and the richness of the context in which it is used – the space, the drawers, the customers – cannot be conveyed second-hand. The main architect (and I agree with David, there must be one "go to" person) must spend a lot more time understanding the details of usage, but almost every programmer must have some real experience. If you're developing a check-out system, make every programmer spend a few hours bagging while watching what happens at the checkout.

Conventional system design was very concerned with making systems that could be changed easily. The typical advice was to down-play the real, concrete implementation and attempt to extract the functional principles, or the "essence". Many books were devoted to this topic. Most had a quick last chapter that said, "Oh yes, once you understand the essence, you have to put it together in a new concrete form", but there was little guidance on how to do so. One often ended up with a tool that could do a whole lot of things, but none too easily.

Finally, the folks developing software should care. This is so obvious, but sometimes the problem is not even as specific as not caring about the user. Sometimes it is a general attitude of not caring unless someone can blame you for something.

So, in summary:

  • care about what you do
  • care about usage
  • experience real, live, breathing usage
  • learn from actual usage, and take corrective action [poor usability is just like any other "bug"]


  • 1 month later...

Nice thread. However, I find that most of the suggestions here stand on only one leg – that is, they get one point right while dangerously neglecting another. The issue of bad design is so complex that the usual enumeration of excuses and reasons won't do. It is simply multi-dimensional: stating that software design is bad because of A, B, ..., Z will not be good enough.

And this is why I post this.

The real problem is not reasons A and B, but their relations and connection with other reasons.

What is worse is that it becomes too easy to draw in other topics that are just as large as the topic of bad design. I saw posts above stating that some software is designed badly because the programming wasn't done right, and then going off to show what was bad about the programming (which is where it gets into another large problem), and so on.

Yes, other issues do affect software design, so the question is how far you want to go from bad software design in order to figure out what caused it. What's really bad is that you can't easily figure out the causes and their effects, because their interconnections are so complex. Any programmer who has coded a large project has seen such cases as spaghetti code. In such cases one has to untangle everything before attempting to analyze it. So break the code down into pieces and write unit tests to ensure they do what you want them to do, so that you have an automatic tool for testing your software.
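The untangling step above can be made concrete. A minimal sketch using Python's built-in unittest framework (the parse_price function and its behavior are invented for illustration): extract one small unit from the tangle, pin down what it should do, and you have an automatic regression check before any further refactoring:

```python
import unittest

def parse_price(text):
    """One small unit extracted from the tangle: '$1,234.50' -> integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

class TestParsePrice(unittest.TestCase):
    def test_plain_amount(self):
        self.assertEqual(parse_price("$12.00"), 1200)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.50"), 123450)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_price("  $0.99 "), 99)

if __name__ == "__main__":
    unittest.main()
```

With a handful of such tests in place, you can rearrange the larger mess freely and know immediately when a piece stops doing what you meant it to do.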

This means you have to be careful about introducing new issues here, as they themselves will most likely have problems of their own. For example, I saw some notes about programmers who get tangled up in coding and forget about everything else (design, for example). I do not consider such people programmers. Programming requires using your brain. "I'm gonna forget about everything outside of my code" is not using one's brain, and that will drain one's programming skills in no time. After that, bad design is only a small problem.

As far as I'm concerned, programming is like living a life. Your every move and choice has an effect on your future. Slack off today by writing sloppy code, and you will live with the problems for the rest of the project's lifetime. The bigger the project, the truer this is, as you have to manage more and more. And the only way to survive is to stay true to yourself.

Sounds like Objectivism, eh? :lol:

You bet. As a programmer, I was applying it before I knew what Objectivism was, and I now use it consciously in my daily life. This is why I consider the "locked into one's own mind" programmer not a programmer, but a monkey who types in text.

Alright, now I can tackle the design part of the topic. The posts above correctly identify causes: not listening to users; ignoring the messages, rules, and code that produce your best product; miscommunication on anyone's part on the team. I'll stop counting here.

The good news is that there are people out there who tackle the above problems, and I've seen some links posted in this thread, but not enough. I find it amazing that no one mentioned the "Pragmatic Programmer" book series: http://www.pragmaticprogrammer.com

The amount of work and depth of their views amazes me. They address both user-coder interactions, as well as design, and much more.

I would like to add something of my own, though. Someone posted above that users don't know what they want, or change their goals, or both at the same time. The books from the series I've mentioned address that, but I will add a note: this is not a bad thing, and it's expected in many cases. Imagine yourself going to buy a new car. Do you always know, down to every detail, what car you want and how you want it to look? Most likely not, and most likely you will vary your goal from car to car as you get feedback from the cars you see and how they look to you. The same goes for the user/client of the software. Ways to address this have been devised: many new programming/development approaches have sprung up in the past 5-7 years. "Extreme Programming" and "Agile Development" are just a few.

OK, I'll finish up right here. Books I recommend reading on the issue:

http://www.pragmaticprogrammer.com/titles/pad/index.html

http://www.pragmaticprogrammer.com/ppbook/index.shtml

