Monday, July 23, 2007
TEAM II-Software Testing
Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; it furnishes a criticism or comparison of the state and behaviour of the product against a specification.
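To make "comparison against a specification" concrete, here is a minimal sketch in Python; the discount_price function and its rules are invented for this post, not taken from any real product. Each test encodes one clause of a hypothetical spec, and a passing run shows agreement with those clauses, nothing more.

```python
# A tiny illustration of "comparison against a specification".
# discount_price and its rules are hypothetical, invented for this post.

def discount_price(total):
    """Apply a (hypothetical) spec: 10% off orders of 100.00 or more."""
    if total >= 100.00:
        return round(total * 0.90, 2)
    return total

# Each test encodes one clause of the spec and compares observed
# behaviour against it; passing shows agreement, not correctness.
def test_discount_applies_at_threshold():
    assert discount_price(100.00) == 90.00

def test_no_discount_below_threshold():
    assert discount_price(99.99) == 99.99

if __name__ == "__main__":
    test_discount_applies_at_threshold()
    test_no_discount_below_threshold()
    print("behaviour matches the specification clauses tested")
```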
In general, software engineers distinguish software faults from software failures. In the case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure; it can also be described as an error in the semantics of a computer program. A fault becomes a failure only when the exact computation conditions are met, one of them being that the faulty portion of the software actually executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or compiler, or when the software is extended.
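As a small invented illustration of the distinction, consider the Python sketch below: the off-by-one fault is present from the moment the code is written, but no failure is observed until the faulty index is exercised with a boundary value.

```python
# Hypothetical illustration of a fault that lies dormant until the
# faulty code actually executes under the right conditions.

def slot_for_day(day_of_month, slots):
    # Fault: the index should be day_of_month - 1. For days 1-30 the
    # wrong slot is read silently; for day 31 the read fails outright.
    return slots[day_of_month]

slots = [f"record-{d}" for d in range(1, 32)]   # 31 slots for days 1-31

print(slot_for_day(1, slots))    # "record-2": wrong answer, no crash
print(slot_for_day(31, slots))   # IndexError: the fault becomes a failure
```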
Software testing may be viewed as a sub-field of Software Quality Assurance (SQA), but it typically exists independently, and some companies have no SQA function at all. In SQA, software process specialists and auditors take a broader view of software and its development: they examine and change the software engineering process itself to reduce the number of faults that end up in the code, or to deliver software faster.
A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system expected to function without faults for a certain length of time must already have been tested for at least that length of time. This has severe consequences for projects that aim to produce long-lived, reliable software, since it is not usually commercially viable to test over the proposed lifetime unless that period is relatively short. A few days or a week of real-time testing would normally be acceptable, but any longer period usually has to be simulated according to carefully prescribed start and end conditions.
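As a rough sketch of how such simulation might look (StubSystem and all figures below are invented for illustration), a month of operation can be compressed by replaying a day's worth of traffic per iteration, as fast as the machine allows, between a prescribed start state and a prescribed end check:

```python
# Hypothetical sketch of a time-compressed soak test. Instead of running
# for 30 real days, a day's worth of traffic is replayed per iteration,
# as fast as the machine allows, between prescribed start and end states.

class StubSystem:
    """Trivial stand-in for the system under test."""
    def __init__(self):
        self.events = 0
    def reset(self):
        self.events = 0          # prescribed start condition
    def handle_event(self):
        self.events += 1         # a real system would do real work here

TARGET_DAYS = 30                 # required fault-free period
EVENTS_PER_DAY = 86_400          # assumed load: one event per second

def soak_test(system):
    system.reset()
    for _ in range(TARGET_DAYS):
        for _ in range(EVENTS_PER_DAY):
            system.handle_event()          # any exception fails the run
    # prescribed end condition: every simulated event was processed
    assert system.events == TARGET_DAYS * EVENTS_PER_DAY

soak_test(StubSystem())
print(f"simulated {TARGET_DAYS} days of operation")
```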
A common practice is for software testing to be performed by an independent group of testers after the functionality is developed but before the product is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for earlier delays, thereby compromising the time devoted to testing. Another practice is to start testing at the same moment the project starts and to continue it as a continuous process until the project finishes.
This is highly problematic in terms of controlling changes to the software: if faults or failures are found partway into the project, the decision to correct the software has to weigh whether fixing those defects will delay the remainder of the project. If the software does need correction, the change must be rigorously controlled using a version numbering system; testers need to know exactly which version they are testing, must re-test the part of the software in which the defects were found, and must identify the correct starting point for retesting. There are added risks: new defects may be introduced as part of the corrections, and the original requirement can change partway through, in which case previously successful tests may no longer meet the requirement and will need to be re-specified and redone (part of regression testing). Clearly, the possibilities for projects being delayed and running over budget are significant.
Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in regression testing suites to ensure that future updates to the software don't repeat any of the known mistakes.
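A sketch of what such an escalation-derived test might look like, with a hypothetical ticket scenario, function, and data: the failing case reported by support is reproduced verbatim as a test and then left in the regression suite permanently.

```python
# Hypothetical regression test captured from a support escalation.
# The ticket, function, and data are invented; the pattern is the point:
# reproduce the reported failure exactly, then keep the test forever.

def parse_quantity(text):
    """Parse an order-quantity field; the original code crashed on blanks."""
    text = text.strip()
    return int(text) if text else 0    # the fix: treat blank input as zero

def test_escalation_blank_quantity_field():
    # Support escalation: order import crashed when the quantity column
    # was blank. This test pins the fixed behaviour permanently.
    assert parse_quantity("   ") == 0

def test_normal_quantity_still_parses():
    # Guard against the fix breaking the common case.
    assert parse_quantity(" 12 ") == 12

test_escalation_blank_quantity_field()
test_normal_quantity_still_parses()
```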
It is commonly believed that the earlier a defect is found, the cheaper it is to fix. This is reasonable given the risk that any defect may contribute to, or be confused with, further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable, and therefore any testing after that point cannot be relied on, even if no further software defects are present.
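A small invented example of that effect: once a defect has silently mangled shared data, later checks can look healthy in isolation while the baseline they run against is already wrong.

```python
# Invented illustration: a defect that corrupts shared state poisons
# every check that runs after it, even checks that are themselves sound.

balances = {"alice": 100, "bob": 50}

def transfer(src, dst, amount):
    balances[src] -= amount
    balances[dst] += amount * 2      # defect: credits double the amount

transfer("alice", "bob", 10)

print("alice:", balances["alice"])                    # 90: looks healthy alone
print("conserved:", sum(balances.values()) == 150)    # False: state corrupted
```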