Test Design. Test Coverage. Test Design Techniques

Lecture



Test design is the stage of the software testing process at which test cases are designed and created in accordance with previously defined quality criteria and testing objectives.
Test design [ISTQB Glossary of Terms]: the process of transforming general test objectives into tangible test conditions and test cases.

Test design work plan

  • analysis of existing project artifacts: documentation (specifications, requirements, plans), models, executable code, etc.
  • writing a Test Design Specification
  • design and creation of test cases

Roles responsible for test design

  • Test analyst - defines "WHAT to test?"
  • Test designer - defines "HOW to test?"

Simply put, the task of test analysts and test designers is to use different test design strategies and techniques to create a set of test cases that provides the best coverage of the application under test. However, on most projects these roles are not allocated separately but are entrusted to ordinary testers, which does not always have a positive effect on the quality of the tests, of the testing, and, consequently, of the final software product.

Test Coverage

Test Coverage is one of the metrics for assessing the quality of testing; it describes how densely the tests cover the requirements or the executable code.

If we consider testing as "checking the correspondence between the actual and expected behavior of the program, performed on a finite set of tests," then it is this finite test suite that determines the test coverage.

The higher the required level of test coverage, the more tests will be selected to verify the tested requirements or executable code.

The complexity of modern software and infrastructure has made testing with 100% test coverage impossible. Therefore, to develop a test suite that provides an adequate, if not exhaustive, level of coverage, special tools and test design techniques are used.

There are the following approaches to the assessment and measurement of test coverage:

  1. Requirements Coverage - assessment of how well the tests cover the product's functional and non-functional requirements, performed by building a traceability matrix.
  2. Code Coverage - assessment of how well the tests cover the executable code, performed by tracking the parts of the software that were not exercised during testing.
  3. Test coverage based on control flow analysis - assessment based on determining the execution paths through the code of a program module and creating test cases to cover those paths.

Differences:
The requirements coverage method focuses on verifying that the set of tests matches the product requirements; code coverage analysis checks how completely the tests exercise the developed part of the product (the source code); and control flow analysis tracks the paths through the graph, or model, of the functions under test (the Control Flow Graph).

Limitations:
The code coverage method will not reveal unimplemented requirements, since it works not with the final product but with the existing source code.
The requirements coverage method may leave some parts of the code untested, because it does not take the final implementation into account.

Requirements Coverage

Calculation of test coverage relative to the requirements is carried out according to the formula:

Tcov = (Lcov / Ltotal) * 100%

Where:
Tcov - test coverage
Lcov - the number of requirements checked by test cases
Ltotal - the total number of requirements

To measure requirements coverage, you need to analyze the product requirements and break them down into items. Each item is then linked to the test cases that check it. The set of these links forms the traceability matrix. By tracing the links, you can see exactly which requirements a given test case checks.

Tests that do not map to any requirement are meaningless. Requirements that are not covered by any test are blind spots: even after executing all the created test cases, you cannot say whether such a requirement is implemented in the product or not.

To optimize test coverage in requirements-based testing, the best way is to use standard test design techniques. An example of developing test cases from existing requirements is discussed in the section "Practical application of test design techniques when developing test cases".
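
As an illustration, below is a minimal sketch (Python; the requirement and test case identifiers are hypothetical) of how a traceability matrix can be used to compute test coverage by the formula above:

```python
# A minimal traceability matrix: requirement ID -> test cases that check it.
# All identifiers are hypothetical and serve only to illustrate the formula.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # a blind spot: no test case checks this requirement
}

l_cov = sum(1 for tcs in traceability.values() if tcs)  # requirements checked by tests
l_total = len(traceability)                             # total number of requirements
t_cov = l_cov / l_total * 100

print(f"Tcov = {t_cov:.0f}%")  # Tcov = 67%
```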

Code Coverage

Calculation of test coverage relative to the executable software code is carried out according to the formula:

Tcov = (Ltc / Lcode) * 100%

Where:
Tcov - test coverage
Ltc - the number of lines of code executed by the tests
Lcode - the total number of lines of code

Today there are tools (for example, Clover) that let you analyze which lines of code were executed during testing, so you can significantly increase coverage by adding new tests for specific cases, and also get rid of duplicate tests. Performing such code analysis and the subsequent coverage optimization is fairly easy within white-box testing at the unit, integration, and system levels; in black-box testing the task becomes quite expensive, since it requires a lot of time and resources from both testers and developers to install and configure the tooling and to analyze the results.
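
As a sketch, in a Python project the same kind of line-coverage analysis can be done with the coverage.py tool (Clover plays a similar role in the Java world); my_module here is a hypothetical module under test:

```python
# Sketch: measuring line coverage with coverage.py (pip install coverage).
# The tool is more often driven from the command line
# ("coverage run -m pytest", then "coverage report"), but it also has an API.
import coverage

cov = coverage.Coverage()
cov.start()

import my_module          # hypothetical module under test
my_module.do_something()  # exercise the code while measurement is active

cov.stop()
cov.report(show_missing=True)  # per-file coverage plus the line numbers never executed
```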

Test coverage based on control flow analysis

Control Flow Testing is one of the white-box testing techniques; it is based on determining the execution paths through the code of a program module and creating test cases to cover those paths. [1]

The foundation of control flow testing is the construction of control flow graphs, whose main blocks are listed below and annotated in the short example that follows:

  • process block - one entry point and one exit point
  • alternative point - one entry point, two or more exit points
  • junction point - two or more entry points, one exit point
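
The short Python sketch below (a hypothetical function) shows how these blocks map onto ordinary code:

```python
def apply_discount(price: float, is_member: bool) -> float:
    total = price            # process block: one entry point, one exit point
    if is_member:            # alternative point: one entry, two exits (True / False)
        total = total * 0.9  # process block on the True branch
    return total             # junction point: both branches merge here, one exit
```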

Different levels of test coverage are defined for testing control flows:

Level 0 - "Test whatever you test, users will test the rest."
Level 1 - Statement Coverage: each statement is executed at least once.
Level 2 - Branch (Decision) Coverage [2]: each branch (alternative) is executed at least once.
Level 3 - Condition Coverage: each condition that can evaluate to TRUE and FALSE takes each outcome at least once.
Level 4 - Decision/Condition Coverage: test cases are created for each condition and each decision.
Level 5 - Multiple Condition Coverage: coverage of decisions, conditions, and combinations of conditions is achieved (levels 2, 3, and 4 together).
Level 6 - "Coverage of an infinite number of paths": when loops make the number of paths infinite, a substantial reduction of the path set is allowed by limiting the number of loop iterations, so as to reduce the number of test cases.
Level 7 - Path Coverage: all paths must be checked.

Table 1. Test Coverage Levels

Based on this table, you can plan the required level of test coverage, as well as evaluate the level you already have.
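
To make the difference between levels 1 and 2 concrete, here is a small Python sketch: a single test executes every statement (level 1), yet never takes the False branch of the condition, so branch coverage (level 2) is not reached until a second test is added:

```python
def normalize(value: float, limit: float) -> float:
    if value > limit:  # alternative point with TRUE and FALSE outcomes
        value = limit  # executed only on the True branch
    return value       # junction point: reached on both branches

# This one test already yields 100% statement coverage (level 1):
assert normalize(15, 10) == 10
# ...but the False branch of the condition was never taken, so level 2
# (branch coverage) requires one more test:
assert normalize(5, 10) == 5
```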

Literature

[1] Lee Copeland. A Practitioner's Guide to Software Test Design.

[2] Standard Glossary of Terms Used in Software Testing, Version 2.0 (December 4, 2008), prepared by the 'Glossary Working Party' of the International Software Testing Qualifications Board.

Many people test and write test cases, but not many use special test design techniques. Gradually, as they gain experience, they realize that they keep doing the same work over and over, and that it follows specific rules. And then they discover that all these rules have already been described.

I suggest you read the brief description of the most common test design techniques:

  • Equivalence Partitioning (EP). As an example, if you have a range of valid values from 1 to 10, you choose one valid value inside the interval, say 5, and one invalid value outside the interval, say 0.
  • Boundary Value Analysis (BVA). Taking the example above, for positive testing we choose the boundary values (1 and 10), and for negative testing the values just outside the boundaries (0 and 11). Boundary value analysis can be applied to fields, records, files, or any kind of entity with restrictions. (A sketch of automating these two techniques follows this list.)
  • Cause/Effect (CE). As a rule, this means entering combinations of conditions (causes) to receive a response from the system (effect). For example, you check the ability to add a client using a specific screen form. To do this, you fill in several fields, such as "Name", "Address", and "Phone Number", and press the "Add" button - this is the cause. After the "Add" button is pressed, the system adds the client to the database and displays the client's number on the screen - this is the effect.
  • Error Guessing (EG). The test analyst uses knowledge of the system and the ability to interpret the specification to "predict" under what input conditions the system may produce an error. For example, the specification says: "the user must enter the code". The test analyst will think: "What if I don't enter a code?", "What if I enter the wrong code?", and so on. This is error guessing.
  • Exhaustive Testing (ET). This is the extreme case: within this technique you should check all possible combinations of input values, which, in principle, should find all the problems. In practice, applying this method is impossible because of the enormous number of input values.
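
As a minimal sketch, here is how the EP and BVA values from the 1-10 range example above might be turned into automated checks (pytest; validate is a hypothetical stand-in for the code under test):

```python
import pytest

def validate(value: int) -> bool:
    """Hypothetical function under test: accepts integers from 1 to 10."""
    return 1 <= value <= 10

# EP picks one value inside the valid partition (5) and one outside it (0);
# BVA adds the boundaries (1, 10) and the values just beyond them (0, 11).
@pytest.mark.parametrize("value, expected", [
    (5, True),    # EP: inside the valid interval
    (0, False),   # EP/BVA: just below the lower boundary
    (1, True),    # BVA: lower boundary
    (10, True),   # BVA: upper boundary
    (11, False),  # BVA: just above the upper boundary
])
def test_valid_range(value, expected):
    assert validate(value) is expected
```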

Practical application of test design techniques when developing test cases

Many people know what test design is, but not everyone knows how to apply it. To clarify the situation a bit, we offer a consistent approach to developing test cases using the simplest test design techniques:

  • Equivalence Partitioning, hereinafter referred to as EP
  • Boundary Value Analysis, hereinafter referred to as BVA
  • Error Guessing, hereinafter referred to as EG
  • Cause/Effect, hereinafter referred to as CE

The proposed test case development plan is as follows:

  1. Requirements analysis.
  2. Defining the test data set based on EP, BVA, EG.
  3. Developing a test case template based on CE.
  4. Writing test cases based on the initial requirements, test data, and the test case template.

Below, we illustrate the proposed approach with an example.

Example:

Test the functionality of the application form whose requirements are given in the following table:

Element: Request type
Item type: combobox
Requirements:

  Data set:
    1. Consultation
    2. Testing
    3. Advertising placement
    4. Site error
  * - the selected value does not affect the processing of the submitted request.

Element: Contact person
Item type: editbox
Requirements:

  1. Required
  2. Maximum 25 characters
  3. Digits and special characters are not allowed

Element: Contact phone
Item type: editbox
Requirements:

  1. Required
  2. Only the "+" character and digits are allowed
  3. "+" may be used only at the beginning of the number
  4. Acceptable formats:

    • starts with a plus - 11-15 digits, for example:
      +31612361264
      +375291438884
    • no plus - 5-10 digits, for example:
      0613261264
      2925167

Element: Message
Item type: text area
Requirements:

  1. Required
  2. Maximum length 1024 characters

Element: Send
Item type: button
State:

  1. Inactive (Disabled) by default
  2. Becomes active (Enabled) once the required fields are filled in

Actions after pressing:

  1. If the entered data is correct - the message is sent
  2. If the entered data is NOT correct - a validation message is displayed

Use case (sometimes there may not be one):

  [Use case diagram for the application form]

1. Analysis of requirements

We read and analyze the requirements, noting the following nuances:

  • Which fields are required?
  • Are the fields limited in length or value range (boundaries)?
  • Which fields have special formats?

2. Determination of the test data set

Based on the requirements for the fields, and using the test design techniques, we begin defining the test data set:

  • depending on whether a field is required or not, we determine which fields need to be checked against an empty value, since an empty value may cause an error (orange in the resulting table);
  • since exhaustive testing is impossible due to the huge number of possible combinations of values, we must first determine the minimum data set. This can be done using techniques such as EP and BVA (blue in the resulting table);
  • the form contains a field of a composite type (digits used together with symbols) with a special data format, so selecting test data for it is rather laborious. Within this article we confine ourselves to a simple check of the formats and the basic requirements described for the application form;
  • once data generation with the standard techniques is complete, you can add a number of values based on personal experience (the EG technique): special characters, very long strings, different data formats, letter case variations (upper, lower, mixed), negative and zero values, the keywords Null, NaN, Infinity, and so on. Here you can include everything that you believe could break the application (purple in the resulting table).

Note:

Note that even with special test design techniques, the amount of test data after final generation will be quite large. We therefore confine ourselves to only a few values for each field, since the purpose of this article is to show the process of creating test cases, not the process of obtaining exhaustive test data.

2.1 Selection of test data for each individual field

  • Request type field. Since all the values belong to a single equivalence class (they do not change the request processing), we could take any one item, say the first, with expected result OK. But since the field is implemented as a list, it also makes sense to consider the boundary conditions (the BVA technique), i.e. to take the first and the last elements. Total: the first and last items in the list; expected result - OK.
  • Contact person field. This is a required field of 1 to 25 characters (boundaries included). The mandatory-field check adds an empty value to the test data. Analyzing the boundary conditions (BVA) gives the set: 0, 1, 2, 24, 25, and 26 characters. The empty value (0 characters) has already been added by the mandatory-field analysis, so BVA does not add it again (adding it a second time would duplicate test data, which would find no new defects, so re-adding it to the set makes no sense). Since strings of 2 and 24 characters are, in our view, non-critical, we can omit them. As a result, the minimum data set for this field is strings of 1 and 25 characters - OK, and of 0 (empty value) and 26 characters - NOK.
  • Contact phone field. The phone number consists of several parts: a country code, an operator code, and the number itself (which can be composite and separated by hyphens). To determine the correct test data set, each component should be considered separately. Applying BVA and EP, we get the following (a validation sketch follows this list):

    • for numbers with a plus:
      By BVA we get numbers of 10, 11, 12 and 14, 15, 16 digits, where 10 and 16 are NOK, and 11, 12, 14, 15 are OK.
      Looking at this data from the EP standpoint, 11, 12, 14, and 15 all belong to the same equivalence class, so any of them could be used in testing; but since 11 and 15 are the interval boundaries, in our opinion they cannot be omitted. We can therefore reduce the set to two values, dropping 12 and 14 and keeping 11 and 15 to check the boundary conditions.
      In total:
      11 and 15 digits - OK (+12345678901, +123456789012345)
      10 and 16 digits - NOK (+1234567890, +1234567890123456)
    • for numbers without a plus:
      By BVA we get numbers of 4, 5, 6 and 9, 10, 11 digits.
      Acting as in the example for numbers with a plus, we exclude 6 and 9, keeping 5 and 10.
      In total:
      5 and 10 digits - OK (12345, 1234567890)
      4 and 11 digits - NOK (1234, 12345678901)
  • Message field. Data selection is done by analogy with the Contact person field. The resulting values are: strings of 1 and 1024 characters - OK, and an empty value and a string of 1025 characters - NOK.
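
As a sketch, the phone number formats and the OK/NOK values derived above can be expressed as a regular expression and checked directly (Python; the regex is our reading of the stated requirements, not a production validator):

```python
import re

# "+" followed by 11-15 digits, or 5-10 digits without a plus.
PHONE_RE = re.compile(r"\+\d{11,15}|\d{5,10}")

OK = ["+12345678901", "+123456789012345", "12345", "1234567890"]
NOK = ["", "+1234567890", "+1234567890123456", "1234", "12345678901"]

for number in OK:
    assert PHONE_RE.fullmatch(number), f"expected OK: {number}"
for number in NOK:
    assert not PHONE_RE.fullmatch(number), f"expected NOK: {number}"
print("all boundary values behave as expected")
```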

The resulting data table, used in the subsequent preparation of test cases:

Field: Request type
  OK:
    • Consultation - first item in the list
    • Site error - last item in the list
  NOK:
    • (none - all values in the list are valid)

Field: Contact person
  OK:
    • ytsukengschschytsukengshshtsytsuke - 25 characters, lower case
    • a - 1 character
    • YTSUKENGSCHSCHFYVAPRODJYACHSMI - 25 characters, upper case
    • ITSUKENGSHSCHZfryproljYaSMI - 25 characters, mixed case
  NOK:
    • empty value
    • ytsukengshschytsukengshshtsytsukey - 26 characters, greater than the maximum length
    • @#$%^&;.?,>|\/№"!()_{}[<~ - special characters (ASCII)
    • 1234567890123456789012345 - digits only
    • adsadasdasdas dasdasd asasdsads (...) sas - very long string (~1 MB)

Field: Contact phone
  OK:
    • +12345678901 - with plus, minimum length
    • +123456789012345 - with plus, maximum length
    • 12345 - no plus, minimum length
    • 1234567890 - no plus, maximum length
  NOK:
    • empty value
    • +1234567890 - with plus, shorter than the minimum length
    • +1234567890123456 - with plus, longer than the maximum length
    • 1234 - no plus, shorter than the minimum length
    • 12345678901 - no plus, longer than the maximum length
    • +YYYXXXyyyxxzz - with plus, letters instead of digits
    • yyyxxxxzz - no plus, letters instead of digits
    • +###-$$$-%^-&^-&! - special characters (ASCII)
    • 1232312323123213231232 (...) 99 - very long string (~1 MB)

Field: Message
  OK:
    • ytsuuyuts (...) ytsu - maximum length (1024 characters)
  NOK:
    • empty value
    • ytsuysuits (...) yutsuts - longer than the maximum length (1025 characters)
    • adsadasdasdas dasdasd asasdsads (...) sas - very long string (~1 MB)
    • @##$$$%^&^& - special characters only (ASCII)

3. Developing the test case template

Based on the CE technique and, where they exist, the use cases, we create a template for the planned test. This document contains the steps and expected results of the test, but without the specific data, which is substituted at the next stage of test case development.

Sample test case template

Step 1. Open the form to send a message.
Expected result:
  • The form is open
  • All fields are blank by default
  • Required fields are marked with *
  • The "Send" button is inactive (Disabled)

Step 2. Fill in the form fields: Request type, Contact person, Contact phone, Message.
Expected result:
  • The fields are filled
  • The "Send" button is active (Enabled)

Step 3. Press the "Send" button.
Expected result:
  • If the entered data is correct:
    • The message "Application sent" is displayed
    • The new application appears in the list on the "Applications" page
  • If the entered data is NOT correct:
    • A validation message listing all the errors is displayed
    • The application does NOT appear in the list on the "Applications" page


4. Writing test cases based on the initial requirements, test data, and the test case template

Once the test data and the test case template are ready, we can proceed directly to developing the test cases. The following combination methods can help here:

  • Sequential enumeration. This is an enumeration of all possible combinations of the available values, so the number of test cases equals the product of the number of test data options for each field. For our specific example this gives 1170 test cases (verified in the sketch after this list).
  • Pairwise Testing. Failures are often caused not by a complex combination of all parameters but by a combination of just two of them. The pairwise technique creates test cases that combine the values of every pair of fields, so the number of resulting test cases is several times smaller than with sequential enumeration of the same data set. Note also that several algorithms exist for generating pairwise combinations: Orthogonal Arrays Testing, All Pairs, and IPO (In-Parameter-Order). For example, using the All Pairs technique in our particular case yields only 118 test cases.
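
The figure of 1170 can be checked directly: it is simply the product of the number of test values per field in the resulting data table, as the Python sketch below shows (the pairwise comment refers to third-party generators such as allpairspy and is illustrative only):

```python
from itertools import product

# Test values per field, counted from the resulting data table:
request_type   = 2   # first and last list items
contact_person = 9   # 4 OK + 5 NOK values
contact_phone  = 13  # 4 OK + 9 NOK values
message        = 5   # 1 OK + 4 NOK values

combos = list(product(range(request_type), range(contact_person),
                      range(contact_phone), range(message)))
print(len(combos))  # 1170 = 2 * 9 * 13 * 5 test cases for sequential enumeration

# A pairwise generator (e.g. the third-party allpairspy package) would instead
# produce only enough rows to cover every pair of values - on the order of a
# hundred cases instead of more than a thousand.
```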

Upon completion of preparing the data combinations, we substitute them into the test case template, and as a result we get a set of test cases covering the requirements for the application form under test.

Note :

Recall that test cases are divided by expected result into positive and negative test cases.

An example of a positive test case (all fields are OK):

Step 1. Open the form to send a message.
Expected result:
  • The form is open
  • All fields are blank by default
  • Required fields are marked with *
  • The "Send" button is inactive (Disabled)

Step 2. Fill in the form fields:
  • Request type = Consultation
  • Contact person = Yukukengschschitsukengshshzjtsuke
  • Contact phone = +7-916-111-11-11
  • Message = ytsuuyuts (...) ytsu (1024 characters)
Expected result:
  • The fields are filled
  • The "Send" button is active (Enabled)

Step 3. Press the "Send" button.
Expected result:
  • The message "Application sent" is displayed
  • The new application appears in the list on the "Applications" page


An example of a negative test case (Contact person field - NOK):

Step 1. Open the form to send a message.
Expected result:
  • The form is open
  • All fields are blank by default
  • Required fields are marked with *
  • The "Send" button is inactive (Disabled)

