Usability testing is testing the user-friendliness of an application. During this process we concentrate on the GUI and on customer globalisation. It is also called cosmetic testing, because we concentrate only on basic characteristics such as look and feel, spelling mistakes, font size, background colour, order of functionalities, ease of use, and so on.
This technique comes under black-box testing and is divided into two parts:
1) User interface testing (UIT): This is the first technique to be followed after sanity testing, to evaluate how well the application supports user-friendliness and cosmetic functions. During this process we neither prepare nor execute any test cases; it is driven entirely by customer interest.
2) Manual support testing (MST): This is followed whenever a customer faces a problem with the designed application; support is provided either by phone or in person, depending on the severity of the problem.
Usability Test Reporting:
You know a profession is mature when the services and products offered by practitioners share a fair amount of consistency. For example, if I commission two different architects to carry out a house survey, their reports should be pretty similar. One may be cheaper than the other, and one may be better able to describe the problems with the roof in terms I will understand, but the problems they find should be consistent.
Usability and variability:
Embarrassingly, we have known for a while now that this doesn’t apply to usability testing. The well-publicised work of Rolf Molich shows us that when different usability groups are asked to carry out a web site evaluation, they find lots of usability issues. The problem is that each group finds only a sub-set of all the usability problems. Just one group (of nine) in Molich’s study found more than 25% of the problems. (More detail can be found at Molich’s web site).
Given that all these people would describe themselves as "usability professionals", it’s hard to blame the findings on different skill sets or competencies. A more likely contributor is that the different groups carried out usability testing in a variety of different ways.
So it’s interesting that, during the period of Molich’s work, the US National Institute of Standards and Technology (NIST) initiated an effort to "Define and validate a Common Industry Format for reporting usability tests and their results". The overall aim of the project was to increase the visibility of software usability.
The Common Industry Format (or ‘CIF’ to its friends) isn’t a visual template that helps make usability reports look the same, nor does it tell you how to run a test. However, the framework of the report defines a consistent method of carrying out usability tests. For example, you can only write a compliant report if you take objective usability measures of effectiveness, efficiency and satisfaction (these definitions come from the international usability standard, ISO 9241-11). The report also requires information such as the design of the test (including information about independent variables), data scoring procedures (including operational definitions of usability measures) and details of the statistical analysis used.

Following this type of guidance will help ensure consistency, and contrasts with the more common approach, where usability tests aren’t "designed", they just happen. If you are interested in seeing a CIF-style report, Andy Edmonds has recently prepared an HTML version. The CIF became an ANSI standard in December 2001 (ANSI/NCITS 354-2001) and became an international standard in 2006 (ISO/IEC 25062:2006 "Common Industry Format (CIF) for usability test reports").
Formative and Summative tests:
The CIF makes a distinction between "formative" and "summative" usability tests. Formative tests are carried out:
During the development of a product;
To mould or improve the product;
Virtually anywhere (you don’t need a lab);
With the test administrator and the participant co-present.
The outputs from a formative test may include:
Participant comments in the form of a "thinking aloud" narrative (for example, attitudes, sources of confusion, reasons for actions);
Photographs and highlights videos;
Usability problems and suggested fixes.
In contrast, summative tests are carried out:
At the end of a development stage;
To measure or validate the usability of a product;
To answer the question: "How usable is this product?";
To compare against competitor products or usability metrics;
To generate data to support marketing claims about usability;
In a usability lab;
With the participant working alone.
The outputs from a summative test may include:
Statistical measures of usability (for example, success rate, average time to complete a task, number of assists);
Reports or white papers.
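The statistical measures listed above are straightforward to compute once task results are recorded. The sketch below is a minimal illustration, not part of the CIF itself; the data values and record layout are invented for the example, assuming each participant's attempt at a task is logged as (completed, time in seconds, number of assists).

```python
from statistics import mean

# Hypothetical results for one task in a summative test:
# each record is (completed_without_failure, time_seconds, assists)
results = [
    (True, 42.0, 0),
    (True, 55.5, 1),
    (False, 90.0, 2),
    (True, 38.2, 0),
    (True, 61.0, 0),
]

# Success rate: fraction of participants who completed the task
success_rate = sum(1 for ok, _, _ in results if ok) / len(results)

# Average time to complete (here, over all attempts)
avg_time = mean(t for _, t, _ in results)

# Total number of assists the administrator had to give
total_assists = sum(a for _, _, a in results)

print(f"Success rate:  {success_rate:.0%}")   # 80%
print(f"Average time:  {avg_time:.1f} s")     # 57.3 s
print(f"Assists given: {total_assists}")      # 3
```

A real summative report would compute these per task, across enough participants to support the statistical analysis the CIF asks you to document.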