Testing Fundamentals

What is testing?

  • Software testing is the process of evaluating software to detect differences between the actual output and the expected output for a given input.
  • It also assesses the features of the software.
  • Testing assesses the quality of the product.
  • Software testing is an activity that should be carried out throughout the development process.
  • In other words, software testing is a verification and validation process.
  • Testing is performed for the following purposes:
    1. To improve quality
    2. For verification and validation (V&V)
    3. For reliability estimation

Testing terminology

Testing:

The execution of a program to find its faults.

Verification:

The process of proving the program's correctness.

Verification will help to determine whether the software is of high quality, but it will not ensure that the system is useful. Verification is concerned with whether the system is well-engineered and error-free.

Methods of Verification: Static testing

  • Walk-through
  • Inspection
  • Review

Validation:

The process of finding errors by executing the program in a real environment.

Validation is the process of evaluating the final product to check whether the software meets the customer expectations and requirements. It is a dynamic mechanism of validating and testing the actual product.

Methods of Validation: Dynamic testing

  • Testing
  • End Users

Debugging:

Diagnosing the error and correcting it.

Difference between defect, error, bug, failure and fault

Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

This can be a misunderstanding of the internal state of the software, an oversight in terms of memory management, confusion about the proper way to calculate a value, etc.

FAULT: An incorrect step, process, or data definition in a computer program that causes the program to perform in an unintended or unanticipated manner. A fault is introduced into the software as the result of an error.

It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification. It is the result of the error.

BUG: A bug is the result of a coding error: an error found in the development environment before the product is shipped to the customer.

A programming error that causes a program to work poorly, produce incorrect results, or crash. An error in software or hardware that causes a program to malfunction. "Bug" is the term typically used by testers.

FAILURE: A failure is the inability of a software system or component to perform its required functions within specified performance requirements.

When a defect reaches the end customer, it is called a failure. During development, failures are usually observed by testers.

DEFECT: A Software Defect / Bug is a condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations (which may not be specified but are reasonable).

In other words, a defect is an error in coding or logic that causes a program to malfunction or to produce incorrect/unexpected results.

Testing pieces

Test case: A test case is a document, which has a set of test data, preconditions, expected results, and post-conditions, developed for a particular test scenario in order to verify compliance against a specific requirement.

A test case acts as the starting point for test execution; after a set of input values is applied, the application has a definitive outcome and leaves the system at some end point, also known as the execution post-condition.

Test script: Test Script is a set of instructions (written using a scripting/programming language) that is performed on a system under test to verify that the system performs as expected. Test scripts are used in automated testing.
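
For illustration, a minimal automated test script might look like the sketch below. The discount() function is a made-up system under test (not something from the text), and pytest is assumed as the test runner.

```python
# test_discount.py -- a minimal automated test script (pytest style).
# discount() is an invented "system under test" used only for illustration.

def discount(price: float, is_member: bool) -> float:
    """Members get 10% off; non-members pay full price."""
    return round(price * 0.9, 2) if is_member else price

def test_member_gets_ten_percent_off():
    assert discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert discount(100.0, is_member=False) == 100.0
```

Running pytest against this file executes both checks and reports a pass/fail verdict for each.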

Test scenario: A test scenario is a description of an objective a user might face when using the program. An example might be “Test that the user can successfully log out by closing the program.”

Typically, a test scenario will require testing in a few different ways to ensure the scenario has been satisfactorily covered.
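
As a sketch, the log-out scenario above could be exercised in a few different ways with a single parametrized test; the logout() helper below is purely illustrative and pytest is assumed.

```python
# One scenario ("the user can log out"), covered through several user actions.
# logout() is a stand-in for the real application code.
import pytest

def logout(method: str) -> bool:
    """Pretend application hook: any supported method ends the session."""
    return method in {"menu", "keyboard_shortcut", "close_window"}

@pytest.mark.parametrize("method", ["menu", "keyboard_shortcut", "close_window"])
def test_user_can_log_out(method):
    assert logout(method) is True
```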

Test plan: The test plan is both a term and a deliverable. It is a document that lists all the activities in a QA project, schedules them, and defines the scope of the project, roles & responsibilities, risks, entry & exit criteria, the test objectives, and any other relevant planning details.

Test harness: A test harness enables the automation of tests. It refers to the test drivers and other supporting tools that are required to execute tests.

It provides stubs and drivers, which are small programs that interact with the software under test. A test harness executes tests by using a test library and generates a report.

It requires that your test scripts are designed to handle different test scenarios and test data.
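
The sketch below illustrates the idea with invented names: a driver feeds test data to the unit under test and reports results, while a stub stands in for an external exchange-rate service that is not available during testing.

```python
# Harness sketch: driver + stub around a small unit under test.

def rate_stub(currency: str) -> float:
    """Stub: returns canned data instead of calling a real rate service."""
    return {"EUR": 1.1, "GBP": 1.3}.get(currency, 1.0)

def convert(amount: float, currency: str, rate_source) -> float:
    """Unit under test: converts an amount using an injected rate source."""
    return round(amount * rate_source(currency), 2)

def driver() -> None:
    """Driver: applies the test data and reports pass/fail for each case."""
    cases = [((100.0, "EUR"), 110.0), ((10.0, "GBP"), 13.0)]
    for (amount, currency), expected in cases:
        actual = convert(amount, currency, rate_stub)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: convert({amount}, {currency!r}) = {actual}, expected {expected}")

if __name__ == "__main__":
    driver()
```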

Test suites: A test suite is a collection of test cases intended to be executed together. It often also contains more detailed instructions or goals for each collection of test cases, as well as a section where the tester identifies the system configuration used during testing.

A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
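
A minimal sketch using Python's standard unittest module shows one way to group test cases into a suite; the Account class is an invented system under test.

```python
# Grouping related test cases into a hand-picked suite with unittest.
import unittest

class Account:
    """Invented system under test."""
    def __init__(self, balance: float) -> None:
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalTests(unittest.TestCase):
    def test_withdraw_within_balance(self):
        account = Account(100)
        account.withdraw(40)
        self.assertEqual(account.balance, 60)

    def test_overdraw_is_rejected(self):
        account = Account(100)
        with self.assertRaises(ValueError):
            account.withdraw(120)

def smoke_suite() -> unittest.TestSuite:
    """A suite: an explicit collection of test cases run together."""
    suite = unittest.TestSuite()
    suite.addTest(WithdrawalTests("test_withdraw_within_balance"))
    suite.addTest(WithdrawalTests("test_overdraw_is_rejected"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(smoke_suite())
```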

Testing data preparation

  1. Verification: The process of proving the correctness of the program. (Are we building the product right?)
  2. Validation: The process of finding errors by executing the program in a real environment. (Are we building the right product?)
  3. Testing: Testing assesses the quality of the product.
  4. Certification: Certification is to provide authenticity of the correctness of the program.

Testing Principles

All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer’s point of view) are those that cause the program to fail to meet its requirements.

Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete. All tests can be planned and designed before any code has been generated.

The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components.

The problem, of course, is to isolate [separate] these suspect components and thoroughly test them. For example, 20 percent of software bugs may cause 80 percent of the software’s failures.

Testing should begin “in the small” and progress toward testing “in the large.” The first tests planned and executed generally focus on individual components.

As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.

Exhaustive testing is not possible. The number of path permutations [combinations] for even a moderately sized program is exceptionally large.

For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately [satisfactorily] cover program logic and to ensure that all conditions in the component-level design have been exercised.
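
A rough back-of-the-envelope calculation illustrates why; the execution rate below is an optimistic assumption.

```python
# Even a tiny function taking two 32-bit integers has far too many input
# combinations to test exhaustively.
combinations = (2 ** 32) ** 2                 # two 32-bit arguments
tests_per_second = 1_000_000_000              # optimistic: one billion tests/s
seconds_per_year = 60 * 60 * 24 * 365

years_needed = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations:.3e} combinations ≈ {years_needed:,.0f} years of testing")
# Roughly 1.8e19 combinations, i.e. about 585 years at a billion tests per second.
```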

To be most effective, testing should be conducted by an independent third party. By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing).

Testing Fundamentals

  1. Error, Fault, and Failure
  2. Test oracle
  3. Test Plan
  4. Test Case
  5. Defect logging and tracking
  6. Defect analysis and prevention
  7. Metrics – Reliability Estimation

Error, fault, and failure

An error can be identified as the difference between a calculated, observed, or measured value and the actual or theoretically correct value.

A fault can be characterized as a circumstance that causes the software to fail to execute its required function.

A failure is the inability of the software or a module to execute its required function according to its specification.
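
A small made-up example shows how the three terms relate: the programmer's mistake is the error, the wrong operator left in the code is the fault, and the wrong output observed for some inputs is the failure.

```python
def average(a: float, b: float) -> float:
    # Fault: '*' was written instead of '+' (the human mistake behind it is
    # the error).
    return (a * b) / 2

print(average(2, 2))   # 2.0 -- happens to look correct, so the fault stays hidden
print(average(4, 2))   # 4.0 -- expected 3.0: for this input the fault causes a failure
```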

Test oracle

A test oracle is a method, separate from the program itself, that is used to check the output produced by a program or module for given test cases. Test oracles are essential for testing.

When testing a program or module, each test case is submitted both to the oracle and to the program under test.

The two outputs are then compared.
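
A minimal sketch, assuming Python's built-in sorted() can serve as an independent oracle for a hand-written sorting routine:

```python
# Test oracle sketch: the same inputs go to the program under test and to a
# trusted reference implementation, and the two outputs are compared.

def insertion_sort(values):
    """Program under test: a hand-written insertion sort."""
    result = []
    for v in values:
        i = 0
        while i < len(result) and result[i] < v:
            i += 1
        result.insert(i, v)
    return result

def oracle(values):
    """Independent oracle: a trusted reference implementation."""
    return sorted(values)

for case in ([3, 1, 2], [], [5, 5, 1], [-1, 0, -2]):
    expected, actual = oracle(case), insertion_sort(case)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{verdict}: input={case} expected={expected} actual={actual}")
```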

Test plan

It is a document that describes the objectives, scope, approach, and focus of the software testing effort.

A test plan is formed using the following inputs:

  • Project plan
  • Requirement Document
  • Software Design Document

Test Case

IEEE defines a test case as a set of input values, execution precondition, expected result, and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

Essential parts of a test case specification

  1. Test case specification identifier: Unique identifier of the document
  2. Test item: Identifies item and feature to be tested
  3. Input specification: Details of each input to test case
  4. Output specification: Expected output specification
  5. Environmental needs: Hardware and software required for the execution of the particular test case.
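
As an illustration only, the essential parts listed above could be captured in a small data structure; every field value below is invented.

```python
# A test case specification sketched as a data structure.
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    identifier: str                       # unique identifier of the document
    test_item: str                        # item / feature to be tested
    input_spec: dict                      # details of each input
    output_spec: dict                     # expected output
    environmental_needs: list = field(default_factory=list)  # required hardware/software

tc_login_001 = TestCaseSpec(
    identifier="TC-LOGIN-001",
    test_item="Login feature of the web client",
    input_spec={"username": "alice", "password": "correct-horse"},
    output_spec={"status": "success", "redirect": "/dashboard"},
    environmental_needs=["Chrome 120+", "Test server seeded with user 'alice'"],
)
print(tc_login_001)
```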

Defect logging and tracking

The defect life cycle consists of three stages:

Submitted: When a defect is found, the first action is to log the defect along with sufficient information about it. This is the submitted stage.

Fixed: The job of fixing the defect is assigned to someone. That person debugs and fixes it; this is how the defect enters the fixed state.

Closed: Once the fix is verified, the defect can be marked as closed.
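
The three stages can be sketched as a tiny state machine; the allowed transitions below are one reading of the description above, not a standard workflow.

```python
from enum import Enum

class DefectState(Enum):
    SUBMITTED = "submitted"   # defect logged with sufficient information
    FIXED = "fixed"           # the assignee has debugged and fixed it
    CLOSED = "closed"         # the fix has been verified

ALLOWED = {
    DefectState.SUBMITTED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.CLOSED},
    DefectState.CLOSED: set(),
}

class Defect:
    def __init__(self, summary: str) -> None:
        self.summary = summary
        self.state = DefectState.SUBMITTED

    def move_to(self, new_state: DefectState) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot move from {self.state.value} to {new_state.value}")
        self.state = new_state

bug = Defect("Logout button does nothing on Firefox")
bug.move_to(DefectState.FIXED)    # fix delivered
bug.move_to(DefectState.CLOSED)   # fix verified
print(bug.summary, "->", bug.state.value)
```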

Defect analysis and prevention

  • Defect analysis is used to improve quality and productivity.
  • Pareto analysis (see the sketch below)
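
A minimal Pareto-analysis sketch with made-up defect counts shows how a few modules typically dominate the totals:

```python
from collections import Counter

# Invented defect counts per module.
defects_by_module = Counter({
    "payments": 42, "auth": 25, "reports": 9,
    "search": 6, "settings": 4, "help": 2,
})

total = sum(defects_by_module.values())
running = 0
for module, count in defects_by_module.most_common():
    running += count
    print(f"{module:10s} {count:3d} defects  cumulative {running / total:.0%}")
# Here the top two modules already account for roughly 76% of all defects,
# so they are the prime candidates for focused testing and prevention work.
```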

Metrics: Reliability estimation

MTBF: mean time between failures, i.e. the average operating time between two successive failures.
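
A simple sketch of how the estimate might be computed from invented failure timestamps (hours of operation):

```python
failure_times_h = [120.0, 310.0, 430.0, 700.0, 950.0]   # made-up observations

gaps = [b - a for a, b in zip(failure_times_h, failure_times_h[1:])]
mtbf = sum(gaps) / len(gaps)
print(f"Estimated MTBF ≈ {mtbf:.1f} hours")   # (950 - 120) / 4 = 207.5 hours
```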

Testing Process

[Image: diagram of the testing process]

Testing strategies

[Image: overview of testing strategies]

Levels of testing

[Image: levels of testing]

Types of Testing

The various types of testing which are often used are listed below:

  1. White Box Testing
  2. Black Box Testing
  3. Integration Testing
  4. System testing
  5. Unit testing
  6. Acceptance Testing
  7. Performance Testing
  8. Regression testing
  9. Ad hoc Testing
