Quality Testing

Quality is delighting customers

Hi guys,
If anyone has materials on white box testing, especially statement coverage and decision coverage, please send them to me.

Thanks, guys


Replies to This Discussion

White box testing is very different in nature from black box testing. In black box testing, the focus of all activities is solely on the functionality of the system, not on what is happening inside it.

The purpose of white box testing is to:
a. Verify that the functionality is correct
b. Gather information on code coverage

White box testing is primarily the development team's job, but test engineers have now also started helping the development team in this effort, by contributing to writing unit test cases, generating data for unit test cases, and so on.

White box testing can be performed at various levels, from unit testing up to system testing. The only distinction between black box and white box testing is system knowledge: in white box testing you select and execute your test cases based on knowledge of the code or architecture of the system under test.

Even if you are executing an integration or system test, if the data is chosen in such a way that one particular code path is exercised, it should fall under white box testing.

There are different types of coverage that can be targeted in white box testing:

a. Statement coverage
b. Function coverage
c. Decision coverage
d. Decision and Statement coverage
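To illustrate the difference between statement and decision coverage, here is a minimal sketch in Python. The function and its values are invented for illustration: one test can execute every statement, yet decision coverage still demands a second test for the False outcome of the decision.

```python
# A hypothetical pricing function; names and values are made up.
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9
    return price

# Statement coverage: this single test executes every statement once.
assert apply_discount(100, True) == 90.0

# Decision coverage additionally requires the False outcome of the decision.
assert apply_discount(100, False) == 100
```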

Conditional Testing:

The first improvement to white box techniques is to ensure that the Boolean controlling expressions are adequately tested, a process known as condition testing.

The process of condition testing ensures that a controlling expression has been adequately exercised whilst the software is under test, by constructing a constraint set for every expression and then ensuring that every member of the constraint set is included in the values presented to the expression. This may require additional test runs to be included in the test plan.

To introduce the concept of constraint sets, the simplest possible Boolean condition will be considered: a single Boolean variable, or a negated Boolean variable. These conditions may take forms such as:

if DateValid then
while not DateValid loop

The constraint set for both of these expressions is {t, f}, which indicates that to adequately test these expressions they should each be tested twice, with DateValid taking the values True and False.
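In code, exercising both members of the {t, f} constraint set might look like the following minimal Python sketch; the function is hypothetical and stands in for any code controlled by a single Boolean variable.

```python
# Hypothetical function whose controlling expression is a single
# Boolean variable, as in the 'if DateValid then' form above.
def process_record(date_valid):
    if date_valid:
        return "process"
    return "reject"

# Constraint set {t, f}: the condition is exercised with both values.
assert process_record(True) == "process"
assert process_record(False) == "reject"
```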

Perhaps the next simplest Boolean condition consists of a simple relational expression of the form value operator value, where the operator can be one of: is equal to ( = ), is not equal to ( /= ), is greater than ( > ), is less than ( < ), is greater than or equal to ( >= ) and is less than or equal to ( <= ). It can be noted that the negation of the simple Boolean variable above had no effect upon the constraint set, and that the six relational operators can be divided into three pairs of operators and their negations: is equal to is the negation of is not equal to, is greater than is the negation of is less than or equal to, and is less than is the negation of is greater than or equal to. Thus the constraint set for a relational expression can be expressed as {=, >, <}, which indicates that to adequately test a relational expression it must be tested three times: with values which ensure that the two values are equal, that the first value is less than the second, and that the first value is greater than the second.
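The three runs implied by the {=, >, <} constraint set can be sketched as follows (Python; the function is made up, controlled by the relational expression a <= b):

```python
# Hypothetical function controlled by a relational expression (a <= b).
def in_order(a, b):
    if a <= b:
        return True
    return False

# Constraint set {=, <, >}: three runs, one per relational outcome.
assert in_order(5, 5) is True   # a = b
assert in_order(3, 5) is True   # a < b
assert in_order(7, 5) is False  # a > b
```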

More complex control expressions involve the use of the Boolean operators and, or and xor, which combine two Boolean values. To construct a constraint set for a simple Boolean expression of the form BooleanValue operator BooleanValue, all possible combinations of True and False have to be considered. This gives the constraint set for the expression as {{t,t} {t,f} {f,t} {f,f}}. If both BooleanValues are simple or negated Boolean variables, then no further development of this set is required. However, if one or both of the BooleanValues are relational expressions, then the constraint set for the relational expression will have to be combined with this constraint set. The combination takes the form of noting that the equality condition is equivalent to true and both inequality conditions are equivalent to false. Thus every true in the constraint set is replaced with ' = ', and every false is replaced twice, once with ' > ' and once with ' < '.

Thus, if only the left-hand BooleanValue is a relational expression, the constraint set would be {{=,t} {=,f} {>,t} {<,t} {>,f} {<,f}}. And if both BooleanValues are relational expressions, this becomes {{=,=} {=,>} {=,<} {>,=} {<,=} {>,>} {>,<} {<,>} {<,<}}.
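The substitution rule above can be mechanised; this Python sketch (helper names are invented) derives the nine-term set for the case where both operands are relational expressions:

```python
from itertools import product

# Expanding a constraint-set member: 't' corresponds to '=', while 'f'
# is replaced twice, once with '>' and once with '<'.
def expand(member):
    return ["="] if member == "t" else [">", "<"]

# Constraint set for BooleanValue operator BooleanValue: all t/f pairs.
pairs = list(product("tf", repeat=2))

# Both operands relational: expand each side and combine.
combined = [(l, r) for lt, rt in pairs
            for l in expand(lt) for r in expand(rt)]
assert len(combined) == 9  # the nine-term set listed above
```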

An increase in the complexity of the Boolean expression by the addition of more operators will introduce implicit or explicit bracketing of the order of evaluation, which will be reflected in the constraint set and will increase the number of terms in the set. For example, a Boolean expression of the following form:

BooleanValue1 operator1 BooleanValue2 operator2 BooleanValue3

has the implicit bracketing:

(BooleanValue1 operator1 BooleanValue2) operator2 BooleanValue3

The constraint set for the complete expression would be {{e1,t} {e1,f}}, where e1 is the constraint set of the bracketed sub-expression; using it to expand this constraint set gives {{t,t,t} {t,f,t} {f,t,t} {f,f,t} {t,t,f} {t,f,f} {f,t,f} {f,f,f}}. If any of the BooleanValues are themselves relational expressions, this will increase the number of terms in the constraint set. In this example the worst case would be if all three values were relational expressions, which would produce a total of 27 terms in the constraint set, implying that 27 tests are required to adequately test the expression. As the number of Boolean operators increases, the number of terms in a constraint set increases exponentially, and comprehensive testing of the expression becomes more complicated and less likely. It is this consideration which leads to the advice to keep Boolean control expressions as simple as possible; one way to do this is to use Boolean variables rather than expressions within such control conditions.
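The exponential growth is easy to check: in the worst case each relational operand contributes the three outcomes {=, >, <}, so n operands give 3**n terms. A short Python sketch:

```python
from itertools import product

# Worst case: every operand is a relational expression, contributing
# three outcomes {=, >, <}; n operands give 3**n terms.
def worst_case_terms(n_operands):
    return list(product("=><", repeat=n_operands))

assert len(worst_case_terms(1)) == 3
assert len(worst_case_terms(2)) == 9
assert len(worst_case_terms(3)) == 27  # the 27 tests mentioned above
```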

Data Life Cycle Testing:

Keeping control expressions simple can be argued simply to distribute the complexity from the control expressions into other parts of the subprogram, and an effective testing strategy should recognise this and account for it. One approach to this consideration is known as data lifecycle testing. It is based upon the observation that a variable is at some stage created, and subsequently may have its value changed or used in a controlling expression several times before being destroyed. If only locally declared Boolean variables used in control conditions are considered, then an examination of the source code will indicate the place where the variable is created, the places where it is given a value, the places where the value is used as part of a control expression, and the place where it is destroyed.

This approach to testing requires all feasible lifecycles of the variable to be covered whilst the module is under test. In the case of a Boolean variable, this should include the possibility of its being given the values True and False at each place where it is given a value. A typical outline sketch of a possible lifecycle of a controlling Boolean variable within a subprogram might be as follows:

~~~ SomeSubProgram( ~~~ ) is

   ControlVar : BOOLEAN := FALSE;

begin -- SomeSubProgram

   while not ControlVar loop
      ControlVar := SomeExpression;
   end loop;

end SomeSubProgram;

In this sketch ~~~ indicates the parts of the subprogram which are not relevant to the lifecycle. In this example there are two places where the variable ControlVar is given a value: the location where it is created, and the assignment within the loop. Additionally, there is one place where it is used as a control expression. There are two possible lifecycles to consider. One can be characterised as {f, t}, indicating that the variable is created with the value False and given the value True upon the first iteration of the loop. The other can be characterised as {f, f, ..., t}, which differs from the first by indicating that the variable is given the value False on the first iteration, following which there is the possibility of more iterations where it is also given the value False, before being given the value True on the last iteration.

Other possible lifecycles, such as {f} or {f, t, f, ..., t}, can be shown from a consideration of the source code not to be possible. The first is not possible because the default value ensures that the loop iterates, causing the variable to experience at least two values during its lifecycle. The second is not possible because as soon as the variable is given the value True within the loop, the loop, and subsequently the subprogram, will terminate, causing the variable to be destroyed.
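The two feasible lifecycles can be demonstrated by rendering the outline in Python; here 'events' is an invented stand-in for the successive values of SomeExpression, and the returned history is the lifecycle of the controlling variable:

```python
# Python rendering of the subprogram outline; 'events' supplies the
# successive values of the SomeExpression stand-in.
def lifecycle(events):
    history = []
    control_var = False          # created with the value False
    history.append(control_var)
    i = 0
    while not control_var:
        control_var = events[i]  # SomeExpression stand-in
        history.append(control_var)
        i += 1
    return history               # variable destroyed on return

# Lifecycle {f, t}: True on the first iteration.
assert lifecycle([True]) == [False, True]
# Lifecycle {f, f, ..., t}: several False assignments before the final True.
assert lifecycle([False, False, True]) == [False, False, False, True]
```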

This quick look at the variable lifecycle approach only indicates the basis of this approach to white box testing. It should also indicate that this is one of the most laborious, and thus expensive and difficult, techniques to apply. As it is expensive and does not add a great deal to the testing considerations already discussed, it is not widely used.

Loop Testing:

The final white box consideration which will be introduced is the testing of loops, which have been shown to be the most common cause of faults in subprograms. If a loop, definite or indefinite, is intended to iterate n times, then the test plan should include the following seven considerations:

a. That the loop might iterate zero times
b. That the loop might iterate once
c. That the loop might iterate twice
d. That the loop might iterate several times
e. That the loop might iterate n - 1 times
f. That the loop might iterate n times
g. That the loop might iterate n + 1 times

In addition, there is the possible fault that the loop iterates infinitely.

All feasible possibilities should be exercised whilst the software is under test. The last possibility, an infinite loop, is a very noticeable and common fault. All loops should be constructed in such a way that they are guaranteed to come to an end at some stage. However, this does not necessarily guarantee that they come to an end after the correct number of iterations: a loop which iterates one time too many, or one time too few, is probably the most common loop fault. Of these possibilities, an additional iteration may, with luck, cause a CONSTRAINT_ERROR exception to be raised, announcing its presence. Otherwise, the n - 1 and n + 1 loop faults can be very difficult to detect and correct.

A loop executing zero times may be part of the design, in which case it should be explicitly tested that it does so when required and does not when not required. Otherwise, if the loop should never execute zero times, and it does, this can also be a very subtle fault to locate. The additional considerations (once, twice, and many) are included to increase confidence that the loop is operating correctly.
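The seven considerations above can be sketched as tests around a loop intended to iterate n times; the function, the data and the typical n of 10 are invented for illustration:

```python
# A loop intended to iterate n times over a list of values.
def sum_first(values, n):
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = list(range(1, 12))        # 1 .. 11
assert sum_first(data, 0) == 0   # zero iterations
assert sum_first(data, 1) == 1   # once
assert sum_first(data, 2) == 3   # twice
assert sum_first(data, 5) == 15  # several times
assert sum_first(data, 9) == 45  # n - 1
assert sum_first(data, 10) == 55 # n
assert sum_first(data, 11) == 66 # n + 1
```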

The next consideration is the testing of nested loops. One approach is to combine the test considerations of the innermost loop with those of the outermost loop. As there are seven considerations for a simple loop, this gives 49 considerations for two levels of nesting and 343 for a triple-nested loop; this is clearly not a feasible proposition.

What is possible is to start by testing the innermost loop, with all other loops set to iterate the minimum number of times. Once the innermost loop has been tested, it should be configured to iterate the minimum number of times, and the next outermost loop tested. Testing of nested loops can continue in this manner, effectively testing each nested loop in sequence rather than in combination, which results in the number of required tests increasing arithmetically (7, 14, 21, ...) rather than geometrically (7, 49, 343, ...).
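The arithmetic given above is worth making explicit; a one-line check in Python:

```python
# With seven considerations per loop, testing nesting levels in sequence
# grows arithmetically, while testing all combinations grows geometrically.
CONSIDERATIONS = 7
sequential = [CONSIDERATIONS * depth for depth in (1, 2, 3)]
combined = [CONSIDERATIONS ** depth for depth in (1, 2, 3)]
assert sequential == [7, 14, 21]
assert combined == [7, 49, 343]
```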

What is a White Box Testing Strategy?

The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box or clear box testing. Tests written based on the white box testing strategy incorporate coverage of the code as written: branches, paths, statements and the internal logic of the code.

In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and logic, i.e. the internal working of the code. White box testing also requires the tester to look into the code and find out which unit, statement or chunk of the code is malfunctioning.

Advantages of white box testing:
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
ii) It helps in optimizing the code.
iii) It helps in removing extra lines of code, which can harbour hidden defects.

Disadvantages of white box testing:
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find out every hidden error, which may create problems, resulting in failure of the application.

Types of testing under White/Glass Box Testing Strategy:

Unit Testing:
The developer carries out unit testing in order to check whether a particular module or unit of code is working correctly. Unit testing comes at the very basic level, as it is carried out as and when a unit of the code is developed or a particular piece of functionality is built.

Static and Dynamic Analysis:
Static analysis involves going through the code in order to find any possible defects, without executing it. Dynamic analysis involves executing the code and analyzing the output.
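The distinction can be shown with a small Python sketch: the static half inspects the source (here counting 'if' decisions in a made-up function) without running it, and the dynamic half executes the same source and checks its output.

```python
import ast

# Static analysis: inspect the source without running it; here we count
# the 'if' decisions in a small, made-up function.
source = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"
"""
decisions = sum(isinstance(node, ast.If) for node in ast.walk(ast.parse(source)))
assert decisions == 2

# Dynamic analysis: execute the code and check its observable output.
namespace = {}
exec(source, namespace)
assert namespace["grade"](95) == "A"
assert namespace["grade"](60) == "C"
```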

Statement Coverage:
In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all statements execute without any unintended side effect.

Branch Coverage:
No software application can be written as one continuous flow of code; at some point we need to branch in order to perform a particular piece of functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behaviour of the application.
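A small sketch of why branch coverage matters (Python; the function is made up): a test with b != 0 exercises the main path, but only by forcing both outcomes of the decision is the fallback branch validated as well.

```python
# A made-up function with one branch: exercising only b != 0 leaves the
# fallback untested; branch coverage requires both outcomes of the decision.
def safe_ratio(a, b):
    if b != 0:
        return a / b
    return 0  # fallback: reached only via the False outcome

assert safe_ratio(10, 2) == 5.0  # True outcome of the branch
assert safe_ratio(10, 0) == 0    # False outcome of the branch
```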

Security Testing:
Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, code damage, and so on. This type of testing needs sophisticated techniques.

Mutation Testing:
A kind of testing in which small, deliberate changes (mutants) are introduced into the code, and the existing tests are run against each changed version; if the tests fail, the mutant is said to be killed. Mutants that survive indicate gaps in the test suite. It helps in finding out how effective the tests are at detecting faults in the code.
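A toy mutation-testing sketch in Python (all names invented): the mutant flips '+' to '-', and a useful test suite should kill it by failing on the mutant while passing on the original.

```python
# Original function and a mutant with one deliberate small change.
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # the mutation: '+' flipped to '-'

# A tiny test suite, run against either version.
def suite_passes(f):
    try:
        assert f(2, 3) == 5
        assert f(10, 0) == 10
        return True
    except AssertionError:
        return False

assert suite_passes(original) is True  # original passes the suite
assert suite_passes(mutant) is False   # mutant is detected (killed)
```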
Kiran, thanks for providing this information. This is very useful to all.
Thanks a lot, Kiran, it's useful.
Hi Nagashree,

Check this document; it's very good for learning the concepts of White Box Testing:


Please feel free to ask any doubts regarding Software Testing & ISTQB.


Kapil Samadhiya
Kapil, I have read this material from your Scribd already, but thought of getting more info.
Anyway, thanks for all your materials.
Hi Nagashree,

Please check this document:


Kapil Samadhiya
Thanks, Kapil.


© 2021   Created by Quality Testing.

Badges  |  Report an Issue  |  Terms of Service