31 May 2016
In the wonderful book About Face, Alan Cooper offers an overarching principle: “Software should behave like a considerate human being”. I think that idea is pure genius. Who wouldn't love a product that behaves like that? To make your product behave that way, follow these principles.
For more incredibly good advice on designing great products, read the whole book. Also check out Cooper, the design and business strategy firm that Alan founded.
Source: About Face: The Essentials of Interaction Design (book)
A considerate friend wants to know more about you. He remembers your likes and dislikes so that he can please you in the future. Everyone appreciates being treated according to his or her personal tastes.
Most software, on the other hand, doesn’t know or care who is using it. Little, if any, of the personal software on our personal computers seems to remember anything personal about us, in spite of the fact that we use it constantly, repetitively, and exclusively. A welcome exception is how browsers such as Firefox and Microsoft Internet Explorer remember information that users routinely enter into forms on websites, such as a shipping address or username. Google Chrome even remembers these details across devices and sessions.
Software should work hard to remember our habits and, particularly, everything we tell it. From the perspective of a developer writing an application, it can be tempting to think about gathering a bit of information from a person as being similar to gathering a bit of information from a database. Every time the information is needed, the product asks the user for it. The application then discards that tidbit, assuming that it might change and that it can merely ask for it again if necessary. Not only are digital products better suited to recording things in memory than humans are, but our products also show they are inconsiderate when they forget. Remembering humans’ actions and preferences is one of the best ways to create a positive experience with a software-enabled product.
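To make the principle concrete, here is a minimal sketch of “remember everything the user tells you”, assuming a browser environment with localStorage available. The `RememberedField` helper and the storage key names are my own illustration, not anything from the book:

```ts
// A small helper that recalls and records a single piece of user-entered
// information, so the product never has to ask for it twice.
class RememberedField {
  constructor(private readonly storageKey: string) {}

  // Recall the last value the user entered, if any.
  recall(): string | null {
    return localStorage.getItem(`remembered:${this.storageKey}`);
  }

  // Record what the user typed so we never have to ask again.
  remember(value: string): void {
    localStorage.setItem(`remembered:${this.storageKey}`, value);
  }
}

// Usage: prefill a shipping-address input instead of asking from scratch.
const shippingAddress = new RememberedField("shipping-address");
const input = document.querySelector<HTMLInputElement>("#shipping-address");
if (input) {
  input.value = shippingAddress.recall() ?? "";
  input.addEventListener("change", () => shippingAddress.remember(input.value));
}
```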
A good service provider defers to her client. She understands that the person she is serving is the boss. When a restaurant host shows us to a table in a restaurant, we consider his choice of table to be a suggestion, not an order. If we politely request another table in an otherwise empty restaurant, we expect to be accommodated. If the host refuses, we are likely to choose a different restaurant where our desires take precedence over the host’s.
Inconsiderate products supervise and pass judgment on human actions. Software is within its rights to express its opinion that we are making a mistake, but it is presumptuous for it to judge or limit our actions. Software can suggest that we not “submit” our entry until we’ve typed in our telephone number, and it should explain the consequences of submitting without it, but if we want to “submit” without the number, we expect the software to do as it is told. The very word submit is a reversal of the deferential relationship we should expect from interactive products. Software should submit to users. Any application that proffers a “submit” button is being rude, as well as oblique and confusing.
If you ask a good retail sales associate for help locating an item, he will not only answer your question, but also volunteer useful collateral information. For example, he might tell you that a more expensive, higher-quality item than the one you requested is currently on sale for a similar price.
Most software doesn’t attempt to provide related information. Instead, it narrowly answers the precise questions we ask it and typically is not forthcoming about other information, even if it is clearly related to our goals. When we tell our word processor to print a document, it doesn’t tell us when the paper supply is low, or when 40 other documents are queued before us, or when another nearby printer is free. A helpful human would.
Figuring out the right way to offer potentially useful information can require a delicate touch. Microsoft’s Office Assistant “Clippy” was almost universally despised for his smarty-pants comments like “It looks like you’re writing a letter. Would you like help?” While we applauded his sentiment, we wished he weren’t so obtrusive and could take a hint when it was clear we didn’t want his help. After all, a good waiter doesn’t interrupt your conversation to ask you if you want more water. He just refills your glass when it’s empty, and he knows better than to linger when it’s clear that you’re in the middle of an intimate moment.
Offering inappropriate functions in inappropriate places is a hallmark of poorly designed interactive products. Many interactive products put controls for constantly used functions right next to never-used controls. You can easily find menus offering simple, harmless functions adjacent to irreversible ejector-seat-lever expert functions. It’s like being seated at a dining table right next to an open grill.
Horror stories also abound of customers offended by computer systems that repeatedly sent them checks for $0.00 or bills for $957,142,039.58. One would think that the system might alert a human in the Accounts Receivable or Payable departments when an event like this happens, especially more than once, but common sense remains a rarity in most information systems.
Generally speaking, we want our software to remember what we do and what we tell it. But there are some things that our software probably shouldn’t remember unless we specifically direct it to, such as credit card numbers, tax IDs, bank accounts, and passwords. Furthermore, it should help us protect this kind of private data by helping us choose secure passwords and by reporting any possible improprieties, such as accounts accessed from an unrecognized computer or location.
A human assistant knows that you will require a hotel room when you travel to another city, even when you don’t ask explicitly. She knows the kind of room you like and reserves one without any request on your part. She anticipates your needs.
A web browser spends most of its time idling while we peruse web pages. It could easily anticipate our needs and prepare for them while we are reading. It could use that idle time to preload all the links that are visible. Chances are good that we will soon ask the browser to examine one or more of those links. It is easy to abort an unwanted request, but it is always time-consuming to wait for a request to be filled.
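A sketch of that idle-time anticipation, using two real browser facilities: `requestIdleCallback` to wait until the main thread is free, and `<link rel="prefetch">` hints, which the browser fetches at low priority and can cheaply abandon. As a simplification, “visible links” is approximated here by the first few same-origin links; a real implementation might use an IntersectionObserver instead:

```ts
// Hint to the browser that nearby links are likely next destinations.
function prefetchLikelyLinks(limit = 5): void {
  const links = Array.from(
    document.querySelectorAll<HTMLAnchorElement>("a[href]")
  )
    .filter((a) => a.href.startsWith(location.origin)) // same-origin only
    .slice(0, limit); // crude stand-in for "visible on screen"

  for (const a of links) {
    const hint = document.createElement("link");
    hint.rel = "prefetch"; // low-priority fetch; easy to abandon
    hint.href = a.href;
    document.head.appendChild(hint);
  }
}

// Run only when the browser has nothing better to do.
if ("requestIdleCallback" in window) {
  requestIdleCallback(() => prefetchLikelyLinks());
}
```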
A conscientious person has a larger perspective on what it means to perform a task. Instead of just washing the dishes, for example, a conscientious person also wipes down the counters and empties the trash, because those tasks are also related to the larger goal: cleaning up the kitchen. A conscientious person, when drafting a report, also puts a handsome cover page on it and makes enough photocopies for the entire department.
Here’s an example: If we hand our imaginary assistant, Rodney, a manila folder and tell him to file it, he checks the writing on the folder’s tab—let’s say it reads MicroBlitz Contract—and proceeds to find the correct place for it in the filing cabinet. Under M, he finds, to his surprise, a manila folder already there with the same name. Rodney notices the discrepancy and finds that the existing folder contains a contract for 17 widgets delivered four months ago. The new folder, on the other hand, is for 34 sprockets slated for production and delivery in the next quarter. Conscientious Rodney changes the name on the old folder to read MicroBlitz Widget Contract, 7/13 and then changes the name of the new folder to read MicroBlitz Sprocket Contract, 11/13. This type of initiative is why we think Rodney is conscientious.
Our former imaginary assistant, Elliot, was a complete idiot. He was not conscientious at all. If he were placed in the same situation, he would have dumped the new MicroBlitz Contract folder next to the old MicroBlitz Contract folder without a second thought. Sure, he got it safely filed, but he could have done a better job that would have improved our ability to find the right contract in the future. That’s why Elliot isn’t our imaginary assistant anymore.
If we rely on a typical word processor to draft the new sprocket contract and then try to save it in the MicroBlitz folder, the application offers the choice of either overwriting and destroying the old widget contract or not saving it at all. The application not only isn’t as capable as Rodney, it isn’t even as capable as Elliot. The software is dumb enough to assume that because two folders have the same name, we meant to throw away the old one.
The application should, at the very least, mark the two files with different dates and save them. Even if the application refuses to take this “drastic” action unilaterally, it could at least show us the old file (letting us rename that one) before saving the new one. The application could take numerous actions that would be more conscientious.
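Here is a minimal sketch of that more conscientious save, using standard Node.js APIs: on a name collision it preserves the old file and date-stamps the new one rather than silently overwriting. The naming scheme is illustrative, and a real application would surface the renaming to the user just as Rodney would:

```ts
import { existsSync } from "node:fs";
import { writeFile } from "node:fs/promises";
import { parse, format } from "node:path";

// Save a file without ever destroying an existing one of the same name.
async function saveWithoutDestroying(
  path: string,
  contents: string
): Promise<string> {
  let target = path;
  if (existsSync(path)) {
    // Collision: keep the old file, date-stamp the new one.
    const { dir, name, ext } = parse(path);
    const stamp = new Date().toISOString().slice(0, 10); // e.g. 2016-05-31
    target = format({ dir, name: `${name} ${stamp}`, ext });
  }
  await writeFile(target, contents, "utf8");
  return target; // tell the caller (and the user) where it actually went
}
```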
At a service desk, the agent is expected to keep mum about her problems and to show a reasonable interest in yours. It might not be fair to be so one-sided, but that’s the nature of the service business. An interactive product, too, should keep quiet about its problems and show interest in the people who use it. Because computers don’t have egos or tender sensibilities, they should be perfect in this role, but they typically behave the opposite way.
Software whines at us with error messages, interrupts us with confirmation dialog boxes, and brags to us with unnecessary notifiers (“Document Successfully Saved!” How nice for you, Mr. App: Do you ever unsuccessfully save?). We aren’t interested in the application’s crisis of confidence about whether to purge its Recycle Bin. We don’t want to hear its whining about being unsure where to put a file on disk. We don’t need to see information about the computer’s data transfer rates and its loading sequence, any more than we need information about the customer service agent’s unhappy love affair. Not only should software keep quiet about its problems, but it should also have the intelligence, confidence, and authority to fix its problems on its own.
Although we don’t want our software pestering us incessantly with its little fears and triumphs, we do want to be kept informed about the things that matter to us. We don’t want our local bartender to grouse to us about his recent divorce, but we appreciate it when he posts his prices in plain sight and when he writes what time the pregame party begins on his chalkboard, along with who’s playing and the current Vegas spread. Nobody interrupts us to tell us this information: It’s there in plain view whenever we need it. Software, similarly, can provide us with this kind of rich modeless feedback about what is going on.
Most of our existing software is not very perceptive. It has a very narrow understanding of the scope of most problems. It may willingly perform difficult work, but only when given the precise command at precisely the correct time. For example, if you ask the inventory query system how many widgets are in stock, it will dutifully ask the database and report the number as of the time you ask. But what if, 20 minutes later, someone in the Dallas office cleans out the entire stock of widgets? You are now operating under a potentially embarrassing misconception, while your computer sits there, idling away billions of wasted instructions. It is not being perceptive. If you want to know about widgets once, isn’t that a good clue that you probably will want to know about widgets again? You may not want to hear widget status reports every day for the rest of your life, but maybe you’ll want to get them for the rest of the week. Perceptive software observes what users are doing and uses those observations to offer relevant information.
Products should also watch our preferences and remember them without being asked explicitly to do so. If we always maximize an application to use the entire available screen, the application should get the idea after a few sessions and always launch in that configuration. The same goes for placement of palettes, default tools, frequently used templates, and other useful settings.
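One way this “get the idea after a few sessions” logic might look, assuming a browser app persisting its layout; the keys, the `LayoutPrefs` shape, and the three-session threshold are all illustrative:

```ts
interface LayoutPrefs {
  maximized: boolean;
  consistentSessions: number; // sessions in a row that looked like this
}

const PREFS_KEY = "app:layout-prefs";
const ADOPT_AFTER = 3; // consistent sessions before adopting the habit

// Silently note how the user arranged things this session.
function recordSession(maximizedThisSession: boolean): void {
  const raw = localStorage.getItem(PREFS_KEY);
  const prefs: LayoutPrefs = raw
    ? JSON.parse(raw)
    : { maximized: maximizedThisSession, consistentSessions: 0 };

  prefs.consistentSessions =
    prefs.maximized === maximizedThisSession ? prefs.consistentSessions + 1 : 0;
  prefs.maximized = maximizedThisSession;
  localStorage.setItem(PREFS_KEY, JSON.stringify(prefs));
}

// At launch: adopt the observed habit, without ever asking.
function shouldLaunchMaximized(): boolean {
  const raw = localStorage.getItem(PREFS_KEY);
  if (!raw) return false;
  const prefs: LayoutPrefs = JSON.parse(raw);
  return prefs.maximized && prefs.consistentSessions >= ADOPT_AFTER;
}
```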
Interactive products should stand by their convictions. If we tell the computer to discard a file, it shouldn’t ask, “Are you sure?” Of course we’re sure; otherwise, we wouldn’t have asked. It shouldn’t second-guess us or itself.
On the other hand, if the computer has any suspicion that we might be wrong (which is always), it should anticipate our changing our minds by being prepared to undelete the file upon our request.
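A minimal sketch of that “don’t ask, but be ready to undo” pattern: deletion happens immediately, with no confirmation dialog, but the item is parked where it can be recovered. The `TrashCan` type is my own illustration:

```ts
interface TrashedItem<T> {
  item: T;
  deletedAt: number;
}

class TrashCan<T> {
  private items: TrashedItem<T>[] = [];

  // Delete without asking "Are you sure?" -- we stand by the user's word.
  discard(item: T): void {
    this.items.push({ item, deletedAt: Date.now() });
  }

  // ...but anticipate a change of mind: the deletion is recoverable.
  undelete(): T | undefined {
    return this.items.pop()?.item;
  }
}

// Usage
const trash = new TrashCan<string>();
trash.discard("quarterly-report.doc");
const restored = trash.undelete(); // "quarterly-report.doc"
```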
How often have you clicked the Print button and then gone to get a cup of coffee, only to return to find a fearful dialog box quivering in the middle of the screen, asking, “Are you sure you want to print?” This insecurity is infuriating and the antithesis of considerate human behavior.
Inconsiderate products ask lots of annoying questions. Excessive choices, especially in the form of questions, quickly stop being a benefit and instead become an ordeal.
Asking questions is quite different from providing choices. When browsing on your own in a store, you are presented with choices. When going to a job interview, you are asked questions. Which is the more pleasurable experience? Part of the reason is that the individual asking the questions is understood to be in a position superior to the individual being asked. Those with authority ask questions; subordinates respond. When software asks questions rather than offering choices, users feel disempowered.
Beyond the power dynamics issues, questions also tend to make people feel badgered and harassed. Would you like soup or salad? Salad. Would you like cabbage or spinach? Spinach. Would you like French, Thousand Island, or Italian? French. Would you like lite or regular? Stop! Please, just bring me the soup instead! Would you like corn chowder or chicken noodle?
Users really don’t like to be asked questions by products, especially since most of the questions are stupid or unnecessary. Asking questions tells users that products are ignorant, forgetful, weak, fretful, lacking initiative, and overly demanding.
These are all qualities that we typically dislike in people. Why should we desire them in our products? The application is not asking us our opinion out of intellectual curiosity or a desire to make conversation, the way a friend might over dinner. Rather, it is behaving ignorantly while also representing itself as having false authority. The application isn’t interested in our opinions; it requires information—often information it didn’t really need to ask us in the first place.
Many ATMs continually ask users what language they prefer: “Spanish, English, or Chinese?” This answer is unlikely to change after a person’s first use. Interactive products that ask fewer questions, provide choices without asking questions, and remember information they have already learned appear smarter to users, as well as more polite and considerate.
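A sketch of ask-once-and-remember for the ATM case, assuming the machine can key a small preference record to the inserted card; the names and the in-memory map are illustrative stand-ins for whatever store a real ATM would use:

```ts
type Language = "en" | "es" | "zh";

// cardId -> the language this customer chose on first use
const languagePrefs = new Map<string, Language>();

function sessionLanguage(cardId: string, ask: () => Language): Language {
  const remembered = languagePrefs.get(cardId);
  if (remembered) return remembered; // returning customer: no question asked
  const chosen = ask(); // first visit: ask exactly once
  languagePrefs.set(cardId, chosen);
  return chosen;
}
```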
When your friend commits a serious faux pas, he tries to make amends and undo the damage. When an application discovers a fatal problem, it can take the time and effort to prepare for its failure without hurting the user, or it can simply crash and burn.
Many applications are filled with data and settings. When they crash, that information is too often simply discarded, and the user is left holding the bag. For example, say an application is computing merrily along, downloading your e-mail from a server, when it runs out of memory at some procedure buried deep in the application’s internals. The application, like most desktop software, issues a message that says, in effect, “You are hosed,” and it terminates immediately after you click OK. You restart the application, or sometimes the whole computer, only to find that the application lost your e-mail. When you interrogate the server, you find that it has also erased your e-mail because the e-mail was already handed over to your application. This is not what we should expect of good software.
In this example, the application accepted e-mail from the server—which then erased its copy—but it didn’t ensure that the e-mail was properly recorded locally. If the e-mail application had ensured that those messages were promptly written to the local disk, even before it informed the server that the messages were successfully downloaded, the problem would never have arisen.
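The fix is purely a matter of ordering. Here is a sketch of write-before-acknowledge; the `MailServer` interface and `persistToDisk` callback are hypothetical stand-ins, since the point is the sequence of operations, not any particular mail protocol:

```ts
interface MailServer {
  fetchNext(): Promise<{ id: string; body: string } | null>;
  acknowledge(id: string): Promise<void>; // server may delete its copy after this
}

async function downloadMail(
  server: MailServer,
  persistToDisk: (id: string, body: string) => Promise<void>
): Promise<void> {
  for (let msg = await server.fetchNext(); msg !== null; msg = await server.fetchNext()) {
    // 1. Make the message durable locally FIRST...
    await persistToDisk(msg.id, msg.body);
    // 2. ...and only then tell the server it may discard its copy.
    // A crash between these steps leaves a duplicate, never a lost message.
    await server.acknowledge(msg.id);
  }
}
```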
Some well-designed software products, such as Ableton Live, a brilliant music performance tool, rely on the Undo cache to recover from crashes. This is a great example of how a product that keeps track of user behavior can make it easy to extricate yourself when something goes wrong.
Even when applications don’t crash, inconsiderate behavior is rife, particularly on the web. Users often need to enter information into a set of forms on a page. After filling in 10 or 11 fields, a user might click the Submit button, and, due to some mistake or omission on his part, have the site reject his input and tell him to correct it. The user then clicks the back arrow to return to the page, and lo, the 10 valid entries were inconsiderately discarded along with the single invalid one. Remember your incredibly mean junior high geography teacher who ripped up your report on South America because you wrote it in pencil instead of ink? And as a result, you hate geography to this day? Don’t create products that act like that!
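One simple way a web page could avoid throwing away ten valid fields over one bad one: mirror every field into sessionStorage as the user types, and restore on load, so even a full page reload or the Back button preserves the work. A sketch, with an illustrative form selector:

```ts
const form = document.querySelector<HTMLFormElement>("#checkout-form");

// As the user types, keep a running copy of each field's value.
form?.addEventListener("input", (e) => {
  const field = e.target as HTMLInputElement;
  if (field.name) sessionStorage.setItem(`form:${field.name}`, field.value);
});

// On load, put back whatever the user had already entered.
form?.querySelectorAll<HTMLInputElement>("input[name]").forEach((field) => {
  const saved = sessionStorage.getItem(`form:${field.name}`);
  if (saved !== null) field.value = saved;
});
```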
When manual information-processing systems are translated into computerized systems, something is lost in the process. Although an automated order-entry system can handle millions more orders than a human clerk can, the human clerk can work the system in a way that most automated systems ignore. There is almost never a way to jigger the functioning of an automated system to give or take slight advantages.
In a manual system, when the clerk’s friend from the sales force tells him that getting a particular order processed speedily means additional business, the clerk can expedite that one order. When another order comes in with some critical information missing, the clerk still can process it, remembering to acquire and record the information later. This flexibility usually is absent from automated systems.
Most computerized systems have only two states: nonexistence and full compliance. No intermediate states are recognized or accepted. Every manual system has an important but paradoxical state—unspoken, undocumented, but widely relied upon—suspense. In this state a transaction can be accepted even though it is not fully processed. The human operator creates that state in his head or on his desk or in his back pocket.
For example, suppose a digital system needs both customer and order information before it can post an invoice. Whereas the human clerk can go ahead and post an order before having detailed customer information, the computerized system rejects the transaction, unwilling to allow the invoice to be entered without it.
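A suspense state is cheap to model. Here is a sketch in which an order is accepted and usable immediately, with the missing customer details tracked as a debt to settle later; the types and status names are illustrative:

```ts
interface CustomerInfo {
  name: string;
  address: string;
}

interface Order {
  id: string;
  items: string[];
  customer?: CustomerInfo; // may legitimately be absent for a while
  status: "suspense" | "complete";
}

// Accept the transaction either way; just record how complete it is.
function acceptOrder(id: string, items: string[], customer?: CustomerInfo): Order {
  return { id, items, customer, status: customer ? "complete" : "suspense" };
}

// Later, the clerk "remembers to acquire and record the information".
function attachCustomer(order: Order, customer: CustomerInfo): Order {
  return { ...order, customer, status: "complete" };
}
```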
The characteristic of manual systems that lets humans perform actions out of sequence or before prerequisites are satisfied is called fudgeability. It is one of the first casualties when systems are computerized, and its absence is a key contributor to the inhumanity of digital systems. It is a natural result of the implementation model. Developers don’t always see a reason to create intermediate states, because the computer has no need for them. Yet there are strong human needs to be able to bend the system slightly.
One of the benefits of fudgeable systems is reducing the number of mistakes. By allowing many small, temporary mistakes into the system and entrusting humans to correct them before they cause problems downstream, we can avoid much bigger, more permanent mistakes. Paradoxically, most of the hard-edged rules enforced by computer systems are imposed to prevent just such mistakes. These inflexible rules make the human and the software adversaries. Because the human is prevented from fudging to prevent big mistakes, he soon stops caring about protecting the software from colossal problems. When inflexible rules are imposed on flexible humans, both sides lose. It is bad for business to prevent humans from doing things the way they want, and the computer system usually ends up having to digest invalid data anyway.
In the real world, both missing information and extra information that doesn’t fit into a standard field are important tools for success. For example, suppose a transaction can be completed only if the termination date is extended two weeks beyond the official limit. Most companies would rather fudge on the termination date than see a million-dollar deal go up in smoke. In the real world, limits are fudged all the time. Considerate products need to realize and embrace this fact.
Too many interactive products take the attitude that “It isn’t my responsibility.” When they pass along a job to some hardware device, they wash their hands of the action, leaving the stupid hardware to finish. Any user can see that the software isn’t being considerate or conscientious, that the software isn’t shouldering its part of the burden of helping the user become more effective.
In a typical print operation, for example, an application begins sending a 20-page report to the printer and simultaneously displays a print process dialog box with a Cancel button. If the user quickly realizes that he forgot to make an important change, he clicks the Cancel button just as the first page emerges from the printer. The application immediately cancels the print operation. But unbeknownst to the user, while the printer was beginning to work on page 1, the computer had already sent 15 pages to the printer’s buffer. The application cancels the last five pages, but the printer doesn’t know about the cancellation; it just knows that it was sent 15 pages, so it goes ahead and prints them. Meanwhile, the application smugly tells the user that the function was canceled. The application lies, as the user can plainly see.
The user is unsympathetic to the communication problems between the application and the printer. He doesn’t care that the communications are one-way. All he knows is that he decided not to print the document before the first page appeared in the printer’s output tray, he clicked the Cancel button, and then the stupid application continued printing for 15 pages even after it acknowledged his Cancel command.
Imagine what his experience would be if the application could properly communicate with the print driver. If the software were smart enough, the print job could easily have been abandoned before the second sheet of paper was wasted. The printer has a Cancel function—it’s just that the software was built to be too lazy to use it.
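A sketch of a cancel that actually propagates, using the standard `AbortController` mechanism on the application side; the `PrinterPort` interface, including its device-level `cancelJob` call, is a hypothetical stand-in for whatever the real driver exposes:

```ts
interface PrinterPort {
  sendPage(page: Uint8Array): Promise<void>;
  cancelJob(): Promise<void>; // assumed device-level cancel
}

async function printJob(
  pages: Uint8Array[],
  printer: PrinterPort,
  signal: AbortSignal
): Promise<void> {
  for (const page of pages) {
    if (signal.aborted) {
      // Don't just stop sending: tell the printer to drop its buffer too,
      // so what the app reports matches what the hardware does.
      await printer.cancelJob();
      return;
    }
    await printer.sendPage(page);
  }
}

// Usage: the Cancel button aborts, and the abort reaches the device.
const controller = new AbortController();
// cancelButton.onclick = () => controller.abort();
// void printJob(pages, myPrinter, controller.signal);
```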
If a helpful human companion saw you about to do something that you would almost certainly regret afterwards—like shouting about your personal life in a room full of strangers, or sending an empty envelope in the mail to your boss—they might take you quietly aside and gently alert you to your mistake.
Digital products should similarly help you realize when you are, for example, about to inadvertently send a text to your entire list of contacts instead of the one friend you were intending to confide in, or are about to send an e-mail to the director of your department without the quarterly report you mentioned you were enclosing in the text of your message.
The intervention should not, however, be in the form of a standard modal error message box that stops the action and adds insult to injury, but rather through careful visual and textual feedback that lets you know that you are messaging a group rather than a person, or that you haven’t enclosed any attachments even though you mentioned that you were.
For this latter situation, your e-mail app might even modelessly highlight the drop area for you to drag your attachment to, while at the same time giving you the option of just going ahead and sending the message sans attachments, in case the software got your intentions wrong.
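The check behind that nudge can be very small. A sketch, assuming a simple draft object; the keyword regex is illustrative, and a real client would localize it:

```ts
interface Draft {
  body: string;
  attachments: File[];
}

// Does the message text talk about attaching something?
function mentionsAttachment(draft: Draft): boolean {
  return /\battach(ed|ment|ing)?\b/i.test(draft.body);
}

// Nudge modelessly -- highlight the drop area -- but never block the send.
function shouldNudgeAboutAttachment(draft: Draft): boolean {
  return mentionsAttachment(draft) && draft.attachments.length === 0;
}
```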
Products that go the extra mile in looking out for users by helping them prevent embarrassing mistakes—and not berating them for it—will quickly earn their trust and devotion. All other things equal, considerate product design is one of the things, and perhaps even the thing, that distinguishes an only passable app from a truly great one.