Book summary: Prototyping (II)

This is the second half of my summary of the book “Prototyping” by Todd Zaki Warfel (see the first part on this blog). It covers chapters 4-12, which deal with the guiding principles for prototyping, prototyping tools, and how to test your prototype.


Guiding principles

Most prototyping mistakes come from (1) building too much or too little, (2) prototyping the wrong thing, or (3) not setting expectations about what the prototype will be. The principles:

  1. Understand your audience and intent. This is the most important principle. Once you understand them, you’ll be much better equipped to determine what you need to prototype, set appropriate expectations, determine the right level of fidelity and pick the right tool.

  2. Plan a little, prototype the rest. Software systems change constantly and quickly. Plan a little and prototype the rest, so you can cope with the changing environment by working incrementally and iteratively.

  3. Set expectations. This lets you avoid rabbit-hole discussions on things that aren’t important or haven’t been prototyped yet.

  4. You can sketch. Anyone can draw well enough for the purposes of a prototype.

  5. It’s a prototype, not the Mona Lisa. Don’t lose too much time making it pretty. Not only is polish unnecessary, its absence has advantages: a prototype that clearly isn’t a finished product makes people more likely to give feedback. Spend the least amount of effort needed to communicate your design idea, nothing more.

  6. If you can’t make it, fake it. You can fake many things you can’t make with JPG files, clickable HTML files, PDFs or PowerPoint presentations.

  7. Prototype only what you need. Often prototypes only cover part of a system. Even if your ultimate goal is usability testing, chances are you’ll only test 5 or 6 scenarios, so you only need to build that.

  8. Reduce risk—prototype early and often. Prototyping is about making small investments with a significant return. The return can be positive, in which case you can just go ahead, or negative, in which case your risk is substantially reduced because you identified the problem soon enough. The earlier you catch mistakes, the easier and cheaper it is to fix them.


When choosing the prototyping tool, consider audience, intent, familiarity/learnability, cost, need for collaboration, distribution and throwaway vs. reusable. Notes for specific tools follow:


Paper prototyping

It’s the most versatile method. It’s also fast, cheap and easy; you can manipulate it on the fly (participants can even help); it’s collaborative, not limited by prebuilt widgets or technology, and can be done anywhere and anytime, even without computers. The downsides are that it’s hard to use with geographically distributed teams, requires imagination and lacks visual aesthetics. Tips:

  • Include transparencies (useful for simulating roll-overs and such), post-it notes (for displaying changing states on the page, highlighting elements or dialog windows), coloured pens/markers (sketching in black/blue, errors in red, success messages in green) and scotch tape or glue stick in your kit.

  • Use pre-drawn/printed widgets. The book resources include a (kind of limited) sample Illustrator file with printable widgets for that purpose.

  • You can use transparencies for context, pop-up help (even using a marker to highlight fields).

  • To accomplish a show/hide effect, you can fold/unfold part of the paper.

  • You can simulate slide effects with two pieces of paper: cut a sort of “window” in one so you can see the other through it, then slide the second one back and forth behind it.

Presentation software

It has a low learning curve, it’s available on most computers, you can use master slides to ensure consistency, you can copy, paste and rearrange elements with drag-and-drop, and you can export to HTML or PDF if necessary. However, the drawing tools are limited (so it’s often not good for hi-fi prototypes), the interactivity is limited, and the prototype has no reusable source code whatsoever. The book resources include a sample prototyping kit for PowerPoint and Keynote, and Manuela Hutter (Oslo UX book club organiser) wrote another prototyping kit for OpenOffice.org (see also her whole blog post about prototyping with OO.o). Tip: you can simulate fade effects in presentation software by making two slides (one with the element highlighted, one without) and setting a “dissolve” transition effect between them.


HTML

There are several ways to approach making prototypes with HTML. You can simply slap up a few images and use image maps to link them to each other, you can have HTML exported from some other tool, or you can write “production-level” HTML for a prototype that will contain potentially reusable code. The strengths of the last approach: it’s platform-independent, free, portable and “real” (in case you’re prototyping a web app); it helps gauge feasibility; it’s modular (which helps productivity) and collaborative (if you split the work across different files); and it yields reusable code with unlimited potential. The downsides are that it might take more time and effort to make a prototype like this, and that it’s not easy to make annotations on it.
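The first approach (images linked via image maps) takes only a few lines per screen. A minimal sketch, with assumed details: home.png and search.png stand in for screenshots exported from your sketching tool, and the hotspot coordinates are placeholders you’d adjust to wherever the “Search” button sits in your sketch.

```html
<!-- home.html: one prototype screen, entirely a screenshot -->
<!DOCTYPE html>
<html>
  <head><title>Prototype: home</title></head>
  <body>
    <!-- The full-screen sketch or mockup -->
    <img src="home.png" usemap="#hotspots" alt="Home screen">
    <!-- An invisible clickable hotspot drawn over the "Search"
         button in the image, linking to the next screen -->
    <map name="hotspots">
      <area shape="rect" coords="20,40,120,80"
            href="search.html" alt="Search">
    </map>
  </body>
</html>
```

A matching search.html linking back the same way is enough to click through a two-screen scenario in any browser, with no server required.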

Testing your prototype

Common mistakes

  1. Treating usability testing as an event rather than a process. There’s also planning, analysis and reporting, not just “sitting a person in front of a computer”.

  2. Poor planning. The first question to ask is “why am I doing usability testing?”. Determine who you want to test: who is going to use the product or service, what their behaviours are, and why they would use the product in the first place.

  3. Not recruiting the right participants. The whole point of the testing is seeing how the design works in the eyes of the people who will use it. If you recruit the wrong people, you will get the wrong data.

  4. Poorly-formed research questions. This is one of the biggest challenges: you have to get your answers without asking explicitly. Instead of telling participants to plan a dinner + movie, you can ask them to look for something to do with friends. The point is that we shouldn’t make them use the application the way we want, but let them use it in whatever way they normally would.

  5. Poor test moderation. A good moderator balances being a silent fly on the wall with asking enough questions to keep the test going; knows how to extract just the right level of detail; knows when to let the participant explore and when to pull them back; and knows how to get the answer to a question without asking it directly.

  6. Picking the wrong method to communicate findings and recommendations. Nobody is going to read a 10-20 page report. Short presentations with a summary of the findings typically work well. Including video clips showing the highlights of the test is useful.

Steps to conduct a usability test

  1. Preparation. Decide, with your team, what key characteristics and behaviours you’re looking for in participants, and which ones you want to avoid. If you’re going to record audio or video, have a waiver ready for the participants. Knowing the intent of the test will inform the appropriate scenarios, research questions and prototype. Limit the test to 45-60 minutes: enough time to test 5 or 6 scenarios while not exceeding the participants’ attention span.

  2. Design test scenarios. They can be specific, to determine whether a user can access a concrete feature of the site, or exploratory, to gain insight into a participant’s overall approach to reaching a goal. Focus on the goal, allowing different activities and processes to reach it.

  3. Test the prototype. Getting feedback from participants is easier if they feel comfortable. Once they’re comfortable, ask them about their experiences related to whatever you’re going to test that day. You can use that information to provide context.

  4. Record observations and feedback. Have one person moderating and another taking notes remotely. It’s better to over-record than to under-record. Use a rating scale of, say, 1-5 for each scenario, filled in by both moderator and participant. The moderator’s rating should be based on measurable elements like time and effort; the participant’s is more subjective, focused on their satisfaction with completing the task. Try to filter out any variable not related to the system you’re testing; for example, use the operating system the participant is most used to.

  5. Analyse and determine next steps. When you finish, you typically have a list of the bigger issues in your head; that list is only a starting point. Analyse all your data points and look for themes, considering frequency, severity and learnability. When prioritising fixes, it’s best to use a method that combines significance to the customer, value to the business and the technical feasibility of fixing the issue.

And that’s the end of my summary.