Using Heuristics to Evaluate Products and Features

Evaluating digital products and sites using heuristics

I’ve been in this scenario and you probably have too: there is a new feature on a site or in a product to review, but time is short. What is the fastest way to get quality internal feedback? How many people do you really need to review and help with testing before final sign-off?

I have a quick option that uses a small group of 3-5 testers. It is not a replacement for a QA team, but it is really useful when usability budgets are tight or you need a quick pass before stakeholder sign-off. Let me share how to do it.

 

Grab the Team

If possible, have two teams for two different sets of reviews. Have other designers and technical people review first to spot any big design/dev issues, so that your stakeholder reviewers (or users, if you do any user testing) can focus on the scenarios and on testing the feature.

Every now and again you may find you are the only person available to review something on very short notice. It is always best if you can find at least one more person, ideally two, to help you review. No single evaluator finds everything that could be a problem; one person may spot less than half of the usability problems. With five people you will likely still not catch everything, but you should catch all the showstoppers and close to 80% of the usability problems.

Why not have more evaluators? More is better, right? Well…

Once you have more than five evaluators, each additional person turns up fewer new problems. Twenty is about right once you are ready to move into user testing, but for small internal evaluations, 3-5 people is ideal.
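If you like to see the math behind that rule of thumb, Nielsen and Landauer modeled the curve as problems found ≈ N(1 − (1 − L)^n), where N is the total number of problems, n is the number of evaluators, and L is the share of problems a single evaluator finds (around 31% in their studies). Here is a minimal sketch of that curve in Python; the 31% figure is their published average, not something measured on your product:

```python
# Rough sketch of the diminishing-returns curve behind the "3-5 evaluators" advice.
# Nielsen and Landauer's rule of thumb: problems_found ≈ N * (1 - (1 - L)**n),
# where L is the share of problems a single evaluator finds (about 31% on average
# in their studies - your own number will vary).

def share_of_problems_found(evaluators: int, single_rate: float = 0.31) -> float:
    """Estimated fraction of all usability problems found by a given number of evaluators."""
    return 1 - (1 - single_rate) ** evaluators

for n in (1, 3, 5, 10, 20):
    print(f"{n:>2} evaluators -> ~{share_of_problems_found(n):.0%} of problems")
```

Three people get you roughly two thirds of the problems and five get you roughly 84%, which is why adding a sixth or seventh reviewer rarely pays off for a quick internal pass.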

 

Prep the Team

Prep your team. Everyone should review the feature separately, ideally emailing or sharing recorded feedback. You will want them to step through the design several times. I find it helpful to give them one or two key things to focus on each time.

We want to try to keep everyone separate at first for several reasons.

  1. Everyone is more likely to contribute (in group settings, one person’s feedback can dominate).
  2. Different problems are more likely to be revealed.
  3. Everyone works more quickly without being distracted by group discussion.

In round one we might look at how the feature works in its environment and whether the flow holds together: is there a critical bug at this stage? In round two: are the two most important requirements met? If there is a specific goal related to design or business needs, is it solved? In round three: following a specific use case, can the task be performed?
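To make the rounds concrete, here is a minimal sketch of a three-round plan written out as a simple data structure you could paste into a shared checklist. The focus areas and questions are only examples based on the rounds above; swap in your own requirements, goals, and use cases.

```python
# A hypothetical three-round review plan. Focus areas and questions are examples only.
review_rounds = [
    {
        "round": 1,
        "focus": "How the feature works in its environment",
        "questions": [
            "Does the overall flow hold together?",
            "Is there a critical bug at this stage?",
        ],
    },
    {
        "round": 2,
        "focus": "Requirements and goals",
        "questions": [
            "Are the two most important requirements met?",
            "Is the specific design or business goal solved?",
        ],
    },
    {
        "round": 3,
        "focus": "A specific use case",
        "questions": ["Following the use case, can the task be completed?"],
    },
]

# Print a plain checklist each reviewer can work through on their own.
for r in review_rounds:
    print(f"Round {r['round']}: {r['focus']}")
    for q in r["questions"]:
        print(f"  - {q}")
```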

 

Heuristics: a Fancy Way to Say “Important Stuff to Evaluate”

Many people use Jakob Nielsen’s ten heuristics for evaluating sites and products. They are great for design professionals, but if you are working with marketers or other people not familiar with heuristics, it can be helpful to have a more focused list of requirements and goals and to link those to the heuristics. We can also reframe the heuristics as questions.

Here are Jakob Nielsen’s 10 Heuristics:

  1. Visibility of system status
  2. Match between system & the real world
  3. User control & freedom
  4. Consistency & standards
  5. Error prevention
  6. Recognition rather than recall
  7. Flexibility & efficiency of use
  8. Aesthetic & minimalist design
  9. Help users recognize, diagnose & recover from errors
  10. Help & documentation

You can also rewrite these to fit into a specific scenario or use case for your testers to run through. This also helps later if you have to hand off feedback to a vendor or another group.
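As one example of that reframing, here is a small sketch that pairs a few of the heuristics with plain-language questions a non-specialist reviewer could answer. The question wording is my own paraphrase, not Nielsen’s official phrasing, so adapt it to your scenario or use case.

```python
# A minimal sketch of pairing heuristics with plain-language questions for
# non-specialist reviewers. Question wording is a paraphrase, not Nielsen's.
heuristic_questions = {
    "Visibility of system status": "Can you always tell what the system is doing?",
    "User control & freedom": "Can you easily undo or back out of an action?",
    "Consistency & standards": "Do similar things look and behave the same way?",
    "Error prevention": "Does anything make it easy to make a mistake?",
    "Help & documentation": "Can you find help when you are stuck?",
    # ...continue in the same way for the rest of the ten heuristics above.
}

for heuristic, question in heuristic_questions.items():
    print(f"{heuristic}: {question}")
```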

Gather Feedback

I’ve found it helps to compile everyone’s feedback into a document or spreadsheet. I like to make a bulleted list of each key issue and note important details like browser and device, along with possible solutions where appropriate. I review the feedback against the requirements, goals, and heuristics, step through the site or product myself, and take annotated screenshots where I can. If your other reviewers can also provide screenshots of any issues, that is very helpful. If they are not tech savvy, you may want to sit down with them to go over a feature, or use a tool like Hotjar to capture behavior. For each round of feedback, I try to answer the following (a small logging sketch follows the list):

  1. Browser type and version, and any add-ons? Is it set to incognito? Is the user logged into the system? Are their settings blocking JavaScript, or are there other settings that might interfere with testing?
  2. Is the tester able to complete the scenario? (i.e. Click the download button and fill out the form successfully?)
  3. Are there any obvious errors?
  4. Does the feature meet the requirements and goals?

 

Evaluating Feedback

Once you have everyone’s feedback, review and organize it to see if there are any outstanding big issues. Was the feature tested against the requirements? Were the goals met? Is anything missing? Taking the time to evaluate and then prioritize the severity of any usability problems or errors can be a big help when working to keep a project on track.
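Continuing the hypothetical CSV from the earlier sketch, a quick sort by severity floats the big issues to the top. The high/medium/low scale is just an example; use whatever scale your team has agreed on.

```python
# A minimal sketch of prioritizing compiled feedback by severity, assuming the
# CSV produced in the earlier sketch. The severity scale is an example only.
import csv

SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}

with open("feedback_round_1.csv", newline="") as f:
    issues = list(csv.DictReader(f))

# Unknown severities sort last rather than raising an error.
issues.sort(key=lambda row: SEVERITY_ORDER.get(row["severity"], 99))

for row in issues:
    print(f"[{row['severity']}] {row['issue']} ({row['reviewer']})")
```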

 

I hope you found this internal feedback method helpful. It is not right for everyone in every situation. Do you use a different method that works for you?