Sunday 29 January 2012

Manual Testing



  Testing is the process of executing an application in a controlled manner with the intent of
finding errors. It is nothing but “Detection”.

Quality Assurance runs throughout the software development process: it monitors and
improves the process, makes sure that agreed-upon standards and procedures are followed,
and ensures that problems are found and dealt with. It is oriented towards “Prevention”.

Solving problems is a high-visibility process; preventing problems is low-visibility.
This is illustrated by an old parable.

Software Industry
In India alone, software industry growth has been phenomenal.
The IT field has grown enormously in the past 50 years.
The IT industry in India is expected to touch 10,000 crores, of which the software share is
dramatically increasing.

Software Crisis
Software cost and schedule estimates are grossly inaccurate.
Cost overruns of several times and schedule slippages of months, or even years, are
common.
Productivity of people has not kept pace with demand.
Added to this is the shortage of skilled people.
Quality of software is less than desired.
Error rates of released software leave customers dissatisfied, threatening the very
business.
Software Myths
Management Myths
Software Management is different.
Why change our approach to development?
We have provided state-of-the-art hardware.
Problems are technical.
If the project is late, add more engineers.
We need better people.
Developer's Myths
We must start with firm requirements.
Why bother about Software Engineering techniques? I will go to the terminal and
code it.
Once coding is complete, my job is done.
How can you measure quality? It is so intangible.
Customer's Myths
A general statement of objectives is good enough to produce software.
Anyway, software is “Flexware”; it can accommodate my changing needs.
What do we do?
Use Software Engineering techniques/processes.
Institutionalize them and make them part of your development culture.
Adopt Quality Assurance frameworks: ISO, CMM.
Choose the one that meets your requirements and adapt where necessary.

Software Engineering:
[Prototyping cycle (diagram): Requirements Gathering → Quick Design → Build Prototype → Customer Evaluation of the Prototype → Refine Requirements, then Design → Implement → Test → Maintain.]
 Software Engineering is an engineering discipline concerned with the
practical problems of developing large software.
 Software Engineering discipline tracks both technical & non-technical
problems associated with software development.
 Challenge for Software Engineers is to produce high quality software
with finite amount of resources & within a predicted schedule.
 Apply Engineering Concepts to developing Software
 Apply Engineering Concepts to removing crisis.
Software Engineering Process
Consists of Three generic Phases:
 Definition, Development, and Maintenance.
Definition (What)
Customer Contact, Planning, Requirement Analysis.
Development Phase (How)
Design, Coding, Testing.
Maintenance Phase (Change)
Correction, Adaptation, Enhancement, Reengineering.
Support Activities
Quality Assurance, Configuration Management.
Software Life Cycle Models
 Prototyping Model
 Waterfall Model – Sequential
 Spiral Model
 V Model - Sequential
Prototyping Model of Software Development
A prototype is a toy implementation of a system; usually exhibiting limited functional capabilities,
low reliability, and inefficient performance. There are several reasons for developing a prototype.
An important purpose is to illustrate the input data formats, messages, reports and the interactive
dialogues to the customer. This is a valuable mechanism for gaining a better understanding of the
customer’s needs. Another important use of the prototyping model is that it helps to critically
examine the technical issues associated with product development.
[Life cycle diagram: Planning → System Design → Detailed Design → Coding → Testing and Integration → Installation → Operation and Maintenance, with verification/validation at each phase and customer suggestions feeding back until acceptance by the customer.]
Classic Waterfall Model
In a typical model, a project begins with feasibility analysis. On successfully demonstrating the
feasibility of a project, the requirements analysis and project planning begins. The design
starts after the requirements analysis is complete, and coding begins after the design is
complete. Once the programming is completed, the code is integrated and testing is done. On
successful completion of testing, the system is installed. After this, the regular operation and
maintenance of the system takes place.
[Waterfall phase outputs (diagram): Feasibility Report; Requirement Document and Project Plan; System Design Document; Detailed Design Document; Programs; Test Plan, Test Report and Manuals; Installation Report.]
[Spiral process diagram: each level (proposal study, requirement study, system/functional analysis, design, detailed design, coding) has entry criteria (the approved proposal, S.R.S, S.D.S, or S.P.S), a task, and a validation step (proposal review, requirement review, specification review, design review, code inspection, test audit), producing unit tested code, integrated tested code, a certified deliverable system, and finally the customer-accepted working system.]
Typical Spiral Model
Developed by Barry Boehm in 1988, the spiral model provides the potential for rapid development of
incremental versions of the software. In the spiral model, software is developed in a series of
incremental releases. During early iterations, the incremental release might be a paper
model or prototype.
Each iteration consists of:
Customer Communication, Planning, Risk Analysis, Engineering, Construction & Release, and Customer Evaluation.
Customer Communication: Tasks required to establish effective communication between
developer and customer.
Planning: Tasks required to define resources, timelines, and other project-related information.
Risk Analysis: Tasks required to assess both technical and management risks.
Engineering: Tasks required to build one or more representations of the application.
Construction & Release: Tasks required to construct, test, install and provide user support
(e.g., documentation and training).
Customer Evaluation: Tasks required to obtain customer feedback based on evaluation of the
software representations created during the engineering stage and implemented during the
installation stage.
[Work products at each level of the model (exit criteria): accepted proposal, requirement specification, functional specification, design specification, code, unit tested code, executable system, delivered system, and project closure report.]
V – Process Model
[V-model diagram: the project process (project plan) and each level of specification are paired with a corresponding test plan: acceptance test plan, system test plan, integration test plan, and unit test plan.]
Chaos model
From Wikipedia, the free encyclopedia.
In computing, the Chaos model is a structure of software development that extends the spiral
model and waterfall model. Raccoon defined the chaos model.
The chaos model notes that the phases of the life cycle apply to all levels of a project, from
the whole project to individual lines of code.
The whole project must be defined, implemented, and integrated.
Systems must be defined, implemented, and integrated.
Modules must be defined, implemented, and integrated.
Functions must be defined, implemented, and integrated.
Lines of code are defined, implemented, and integrated.
There are several tie-ins with chaos theory.
 The chaos model may help explain why software is so unpredictable.
 It explains why high-level concepts like architecture cannot be treated independently
of low-level lines of code.
It provides a hook for explaining what to do next, in terms of the chaos strategy.
Chaos strategy
From Wikipedia, the free encyclopedia.
The chaos strategy is an approach to software development that extends other strategies
(such as step-wise refinement), and it works with the chaos model.
The main rule is always resolve the most important issue first.
 An issue is an incomplete programming task.
 The most important issue is a combination of big, urgent, and robust.
• Big issues provide value to users as working functionality.
• Urgent issues are so timely that they would otherwise hold up other work.
• Robust issues are trusted and tested. Developers can then safely focus their
attention elsewhere.
 To resolve means to bring it to a point of stability.
The chaos strategy resembles the way that programmers work toward the end of a project,
when they have a list of bugs to fix and features to create. Usually someone prioritizes the
remaining tasks, and the programmers fix them one at a time. The chaos strategy states that
this is the only valid way to do the work.
The chaos strategy was inspired by Go strategy.
Top-down and bottom-up design
Top-down and Bottom-up are approaches to the software development process, and by
extension to other procedures, mostly involving software.
In the Top-Down Model an overview of the system is formulated, without going into detail for
any part of it. Each part of the system is then refined by designing it in more detail. Each new part
may then be refined again, defining it in yet more detail until the entire specification is
detailed enough to begin development.
By contrast in bottom-up design individual parts of the system are specified in detail, and may
even be coded. The parts are then linked together to form larger components, which are in
turn linked until a complete system is arrived at.
Top down approaches emphasise planning, and a complete understanding of the system. It is
inherent that no coding can begin until a sufficient level of detail has been reached on at least
some part of the system. Bottom up emphasises coding, which can begin as soon as the first
module has been specified. However bottom-up coding runs the risk that modules may be
coded without having a clear idea of how they link to other parts of the system, and that such
linking may not be as easy as first thought.
Modern software design approaches usually combine both of these approaches. Although an
understanding of the complete system is usually considered necessary for good design,
leading theoretically to a top-down approach, most software projects attempt to make use of
existing code to some degree. Pre-existing modules give designs a 'bottom-up' flavor. Some
design approaches also use an approach where a partially functional system is designed and
coded to completion, and this system is then expanded to fulfill all the requirements for the
project.
Iterative and Incremental development
From Wikipedia, the free encyclopedia.
Iterative and Incremental development is a software development process, one of the
practices used in Extreme programming.
The basic idea behind iterative enhancement is to develop a Software system incrementally,
allowing the Developer to take advantage of what was being learned during the development
of earlier, incremental, deliverable versions of the system. Learning comes from both the
development and use of the system, where possible. Key steps in the process were to start
with a simple implementation of a subset of the software requirements and iteratively enhance
the evolving sequence of versions until the full system is implemented. At each iteration,
design modifications are made along with the addition of new functional capabilities.
The Procedure itself consists of the Initialization step, the Iteration step, and the Project
Control List. The initialization step creates a base version of the system. The goal for this
initial implementation is to create a product to which the user can react. It should offer a
sampling of the key aspects of the problem and provide a solution that is simple enough to
understand and implement easily. To guide the iteration process, a project control list is
created that contains a record of all tasks that need to be performed. It includes such items as
new features to be implemented and areas of redesign of the existing solution. The control list
is constantly being revised as a result of the analysis phase.
The iteration step involves the redesign and implementation of a task from the project control list,
and the analysis of the current version of the system. The goal for the design and
implementation of any iteration is to be simple, straightforward, and modular, supporting
redesign at that stage or as a task added to the project control list. The code represents the
major source of documentation of the system. The analysis of an iteration is based upon user
feedback and the program analysis facilities available. It involves analysis of the structure,
modularity, usability, reliability, efficiency, and achievement of goals. The project control list is
modified in light of the analysis results.
Guidelines that drive the implementation and analysis include:
Any difficulty in design, coding, or testing a modification should signal the need for
redesign or re-coding.
Modifications should fit easily into isolated and easy-to-find modules. If they do not,
some redesign is needed.
Modifications to tables should be especially easy to make. If any table modification is
not quickly and easily done, redesign is indicated.
Modifications should become easier to make as the iterations progress. If they are
not, there is a basic problem such as a design flaw or a proliferation of patches.
 Patches should normally be allowed to exist for only one or two iterations. Patches
may be necessary to avoid redesigning during an implementation phase.
 The existing implementation should be analyzed frequently to determine how well it
measures up to project goals.
 Program analysis facilities should be used whenever available to aid in the analysis
of partial implementations
 User reaction should be solicited and analyzed for indications of deficiencies in the
current implementation.
Iterative Enhancement was successfully applied to the development of an extendable family of
compilers for a family of programming languages on a variety of hardware architectures. A set
of 17 versions of the system was developed at one site generating 17 thousand source lines of
high-level language (6500 lines of executable code). The system was further developed at two
different sites, leading to two different versions of the base language: one version essentially
focused on mathematical applications, adding real numbers and various mathematical
functions, and the other adding compiler writing capabilities. Each iteration was analyzed from
the user's point of view (the language capabilities were determined in part by the user's
needs) and the developer's point of view (the compiler design evolved to be more easily
modified for characteristics like adding new data types). Measurements such as coupling and
modularization were tracked over multiple versions.
Using analysis and measurement as drivers of the enhancement process is one major
difference between iterative enhancement and the current Agile processes. It provides support
for determining the effectiveness of the processes and the quality of the product. It allows one to
study, and therefore improve and tailor, the processes for the particular environment. This
measurement and analysis activity can be added to existing agile development methods.
In fact, the context of multiple iterations provides advantages in the use of measurement.
Measures are sometimes difficult to understand in the absolute but the relative changes in
measures over the evolution of the system can be very informative as they provide a basis for
comparison. For example, a vector of measures, m1, m2, ... mn, can be defined to
characterize various aspects of the product at some point in time, e.g., effort to date, changes,
defects, logical, physical, and dynamic attributes, environmental considerations. Thus an
observer can tell how product characteristics like size, complexity, coupling, and cohesion are
increasing or decreasing over time. One can monitor the relative change of the various aspects
of the product or can provide bounds for the measures to signal potential problems and
anomalies.
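As a rough, hypothetical sketch of this idea (the metric names and numbers below are invented for illustration and are not taken from the compiler study above), relative changes in a small vector of measures can be reported across successive versions:

#include <cstdio>
#include <string>
#include <vector>

// Illustrative only: track a small vector of product measures across versions
// and report the relative change between consecutive versions.
struct VersionMetrics {
    std::string version;
    std::vector<double> measures;   // e.g. size, complexity, coupling, defects
};

int main() {
    const std::vector<std::string> names = {"size (KLOC)", "complexity", "coupling", "defects"};
    const std::vector<VersionMetrics> history = {
        {"v1", {6.5, 120.0, 0.42, 35.0}},
        {"v2", {7.8, 150.0, 0.40, 28.0}},
        {"v3", {8.1, 160.0, 0.45, 40.0}},   // rising coupling/defects may signal a problem
    };
    for (size_t i = 1; i < history.size(); ++i) {
        std::printf("%s -> %s\n", history[i - 1].version.c_str(), history[i].version.c_str());
        for (size_t m = 0; m < names.size(); ++m) {
            double previous = history[i - 1].measures[m];
            double current = history[i].measures[m];
            std::printf("  %-12s relative change: %+.1f%%\n",
                        names[m].c_str(), 100.0 * (current - previous) / previous);
        }
    }
    return 0;
}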
Testing
An examination of the behavior of a program by executing it on sample data sets.
Testing comprises a set of activities to detect defects in the produced material.
 To unearth & correct defects
 To detect defects early & to reduce cost of defect fixing
 To avoid user detecting problems
 To ensure that product works as users expected it to.
Why Testing?
• To unearth and correct defects.
• To detect defects early and to reduce cost of defect fixing.
• To ensure that product works as user expected it to.
• To avoid user detecting problems.
What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it is no
longer in use. It includes aspects such as the initial concept, requirements analysis,
functional design, internal design, documentation planning, test planning, coding,
document preparation, integration, testing, maintenance, updates, retesting,
phase-out, and other aspects.
Test Life Cycle
1. Identify Test Candidates
2. Test Plan
3. Design Test Cases
4. Execute Tests
5. Evaluate Results
6. Document Test Results
7. Causal Analysis / Preparation of Validation Reports
8. Regression Testing / Follow up on reported bugs
Types of Tests
Contract - Other Tests
System Req. Spec. - System Tests
Functional Spec. - Functional Tests
H.L.D. - Integration Testing
L.L.D. - Unit Testing
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the
scheme of things. A wide-angle view of the 'customers' of a software development project
might include end-users, customer acceptance testers, customer contract officers, customer
management, the development organization's management/accountants/testers/salespeople,
future software maintenance engineers, stockholders, magazine columnists, etc. Each type of
'customer' will have their own slant on 'quality' - the accounting department might define
quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS,
monitoring and improving the process,
making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to 'prevention'. ( OR )
The purpose of Software Quality Assurance is to provide management with appropriate
visibility into the process being used by the software project and of the products being built.
Software Quality Assurance involves reviewing and auditing the software products and
activities to verify that they comply with the applicable procedures and standards and
providing the software project and other appropriate managers with the results of these
reviews and audits.
What is Quality Control (QC)?
QC is the series of inspections, reviews, and tests used throughout the development cycle to
ensure that each work product meets the requirements placed upon it. QC includes a feedback
loop to the process that created the work product. The combination of measurement and
feedback allows us to tune the process when the work products created fail to meet their
specifications. This approach views QC as part of the manufacturing process. QC activities may
be fully automated, manual, or a combination of automated tools and human interaction. An
essential concept of QC is that all work products have defined and measurable specifications to
which we may compare the outputs of each process; the feedback loop is essential to minimize
the defects produced.
What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and
evaluating the results (e.g., 'if the user is in interface A of the application while using hardware
B, and does C, then D should happen'). The controlled conditions should include both normal
and abnormal conditions. Testing should intentionally attempt to make things go wrong to
determine if things happen when they shouldn't or things don't happen when they should. It is
oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common are
project teams that include a mix of testers and developers who work closely together, with
overall QA processes monitored by project managers. It will depend on what best fits an
organization's size and business structure.
Principles of Good Testing:
COMPLETE TESTING ISN'T POSSIBLE
 No matter how much you test, it is impossible to achieve total confidence
 The only exhaustive test is one that leaves the tester exhausted!
TEST WORK IS CREATIVE AND DIFFICULT
 Understand and probe what the system is supposed to do
 Understand and stress the limitations and constraints
 Understand the domain and application in depth.
TESTING IS RISK-BASED
 We can't identify all risks of failure.
 Risk assessments indicate how much to test and what to focus on.
ANALYSIS, PLANNING, AND DESIGN ARE IMPORTANT
 Test objectives must be identified and understood
 Tests must be planned and designed systematically
 Without a road map, you will get lost
MOTIVATION IS IMPORTANT
 You cannot be effective if you don't care about the job
 You must want to find problems and enjoy trying to break the system
TIME AND RESOURCES ARE IMPORTANT
 You can't be effective if you don't have the time or resources to do the job
TIMING OF TEST PREPARATION MATTERS A LOT
 Early test preparation leads to an understanding of project requirements and
design.
 Early test preparation uncovers and prevents problems.
 Early tests improve the effectiveness of subsequent reviews and inspections
MEASURING AND TRACKING COVERAGE IS ESSENTIAL
 You need to know what requirements, design, and code have and have not been
covered
 Complex software is too difficult to cover without systematic measurement
Three Major Concerns in Multiplatform Testing:
The platform in the test lab will not be representative of the platform in the
real world. This can happen because the platform in the test lab may not be
updated to the current specifications or may be configured in a manner that
is not representative of the typical configuration for the platform.
The software will be expected to work on platforms not included in the test
lab. By implication, users may expect the software to work on platforms that
have not been included in testing.
The supporting software on the various platforms is not comprehensive. A
user's platform may contain software that is not the same as that used in the
test lab, for example a different database management system and so
forth.
What are some recent major computers system failures caused by software bugs?
In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would not run
due to their inability to recognize the date '31/12/2000'; altering the control system’s date
settings started the trains.
News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage processing
system that did not meet specifications, was delivered late, and didn't work.
In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old system
for at least a year until the bugs were worked out of the new system by the software vendors.
In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined that
spacecraft software used certain data in English units that should have been in metric units.
Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar
Lander mission, which failed for unknown reasons in December 1999. Several investigating
panels were convened to determine the process failures that allowed the error to go
undetected.
Bugs in software supporting a large commercial high-speed data network affected
70,000 business customers over a period of 8 days in August of 1999. Among those affected
was the electronic trading system of the largest U.S. futures exchange, which was shut down
for most of a week as a result of the outages.
In April of 1999 a software bug caused the failure of a $1.2 billion military satellite
launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure
was the latest in a string of launch failures, triggering a complete military and industry review
of U.S. space launch programs, including software integration and testing processes.
Congressional oversight hearings were requested. A small town in Illinois received an
unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times
larger than its normal bill. It turned out to be due to bugs in new software that had been
purchased by the local Power Company to deal with Y2K software issues.
In early 1999 a major computer game company recalled all copies of a popular new
product due to software problems. The company made a public apology for releasing a product
before it was ready. The computer system of a major online U.S. stock trading service failed
during trading hours several times over a period of days in February of 1999 according to
nationwide news reports. The problem was reportedly due to bugs in a software upgrade
intended to speed online trade confirmations. In April of 1998 a major U.S. data
communications network failed for 24 hours, crippling a large part of some U.S. credit card
transaction authorization systems as well as other large U.S. bank, retail, and government
data systems. The cause was eventually traced to a software bug.
January 1998 news reports told of software problems at a major U.S.
telecommunications company that resulted in no charges for long distance calls for a month
for 400,000 customers. The problem went undetected until customers called up with questions
about their bills.
In November of 1997 the stock of a major health industry company dropped 60% due
to reports of failures in computer billing systems, problems with a large database conversion,
and inadequate software testing. It was reported that more than $100,000,000 in receivables
had to be written off and that multi-million dollar fines were levied on the company by
government agencies.
A retail store chain filed suit in August of 1997 against a transaction processing system
vendor (not a credit card company) due to the software's inability to handle credit cards with
year 2000 expiration dates.
In August of 1997 one of the leading consumer credit reporting companies reportedly
shut down their new public web site after less than two days of operation due to software
problems. The new site allowed web site visitors instant access, for a small fee, to their
personal credit reports. However, a number of initial users ended up viewing each other’s
reports instead of their own, resulting in irate customers and nationwide publicity. The
problem was attributed to "...unexpectedly high demand from consumers and faulty
software that routed the files to the wrong computers."
In November of 1996, newspapers reported that software bugs caused the 411-
telephone information system of one of the U.S. RBOC's to fail for most of a day. Most of the
2000 operators had to search through phone books instead of using their 13,000,000-listing
database. The bugs were introduced by new software modifications and the problem software
had been installed on both the production and backup systems. A spokesman for the software
vendor reportedly stated that 'It had nothing to do with the integrity of the software.
It was human error.' On June 4 1996 the first flight of the European Space Agency's
new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a
half billion dollars. It was reportedly due to the lack of exception handling of a floating-point
error in a conversion from a 64-bit floating-point value to a 16-bit signed integer.
Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be
credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The
American Bankers Association claimed it was the largest such error in banking history. A bank
spokesman said the programming errors were corrected and all funds were recovered.
Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear
war in 1983, according to news reports in early 1999. The software was supposed to filter out
false missile detections caused by Soviet satellites picking up sunlight reflections off cloudtops,
but failed to do so. Disaster was averted when a Soviet commander, based on what he
said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm.
The filtering software code was rewritten.
Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is
illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the
land and employed as a physician to a great lord. The physician was asked which of his family
was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic
treatments, and on occasion someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known
among the local peasants and neighbors." "My eldest brother is able to sense the spirit of
sickness and eradicate it before it takes form. His name is unknown outside our home."
How can new Software QA processes be introduced in an existing organization?
A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management buy-in
is required and a formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with productivity
so as to keep bureaucracy from getting out of hand. For small groups or projects, a more ad-hoc
process may be appropriate, depending on the type of customers and projects. A lot will
depend on team leads or managers, feedback to developers, and ensuring adequate
communications among customers, managers, developers, and testers. In all cases the most
value for effort will be in requirements management processes, with a goal of clear, complete,
testable requirement specifications or expectations.
What are 5 common problems in the software development process?
Poor requirements - if requirements are unclear, incomplete, too general, or not testable,
there will be problems.
Unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
Inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.
Featuritis - requests to pile on new features after development is underway; extremely
common.
Miscommunication - if developers don't know what's needed or customers have erroneous
expectations, problems are guaranteed.
What are 5 common solutions to software development problems?
Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements
that are agreed to by all players. Use prototypes to help nail down requirements.
Realistic schedules - allow adequate time for planning, design, testing, bug fixing, retesting,
changes, and documentation; personnel should be able to complete the project
without burning out.
Adequate testing - start testing early on, re-test after fixes or changes, and plan adequate
time for testing and bug fixing.
Stick to initial requirements as much as possible - be prepared to defend against
changes and additions once development has begun, and be prepared to explain
consequences. If changes are necessary, they should be adequately reflected in related
schedule changes. If possible, use rapid prototyping during the design phase so that customers
can see what to expect. This will provide them with a higher comfort level with their requirements
decisions and minimize changes later on.
Communication - require walkthroughs and inspections when appropriate; make extensive
use of group communication tools such as e-mail, groupware, networked bug-tracking and change
management tools, and intranet capabilities; ensure that documentation is available and up-to-
date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes
early on so that customers' expectations are clarified.
Why does software have bugs?
Miscommunication or no communication
Software complexity
Programming errors
Changing requirements
Time pressures
Poorly documented code
Software development tools
What is Verification? Validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists, walkthroughs,
and inspection meetings. Validation typically involves actual testing and takes place after
verifications are completed.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems and
see what's missing, not to fix anything.
What kinds of testing should be considered?
Black box testing - not based on any knowledge of internal designs or code. Tests are based
on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths, and conditions.
Unit testing - the most 'micro' scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a well-designed
architecture with tight code; may require developing test driver modules or test
harnesses (a minimal hand-rolled sketch appears after this list of test types).
Incremental integration testing - continuous testing of an application as new functionality
is added; requires that various aspects of an application's functionality be independent enough
to work separately before all parts of the program are completed, or that test drivers be
developed as needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications, client and
server applications on a network, etc. This type of testing is especially relevant to client/server
and distributed systems.
Functional testing - black box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it.
System testing - black box type testing that is based on overall requirements specifications;
covers all combined parts of a system.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use, such
as interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.
Sanity testing - Typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying
databases, the software may not be in a 'sane' enough condition to warrant further testing in
its current state.
Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially near
the end of the development cycle. Automated testing tools can be especially useful for this
type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system's response time degrades or fails.
Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also
used to describe such tests as system functional testing while under unusually heavy loads,
heavy repetition of certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.
Performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on
the targeted end-user or customer. User interviews, surveys, video recording of user sessions,
and other techniques can be used. Programmers and testers are usually not appropriate as
usability testers.
Install / uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.
User acceptance testing - determining if software is satisfactory to an end-user or
customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users or
others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
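To make the 'unit testing' and 'mutation testing' entries above concrete, here is a minimal hand-rolled sketch (no test framework is assumed, and the function under test and its test values are hypothetical). A tiny test driver exercises a small function, and the same test data is run against a deliberately 'mutated' copy to check whether the data is good enough to detect the introduced bug:

#include <cstdio>

// Unit under test: is the given year a leap year?
static bool isLeapYear(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

// A "mutant" with a deliberately introduced bug (|| changed to &&);
// adequate test data should detect this change.
static bool isLeapYearMutant(int year) {
    return (year % 4 == 0 && year % 100 != 0) && (year % 400 == 0);
}

int main() {
    struct Case { int year; bool expected; };
    const Case cases[] = { {2000, true}, {1900, false}, {1996, true}, {2001, false} };

    int failures = 0;
    int mutantsDetected = 0;
    for (const Case& c : cases) {
        if (isLeapYear(c.year) != c.expected) {        // the unit test proper
            std::printf("FAIL: isLeapYear(%d)\n", c.year);
            ++failures;
        }
        if (isLeapYearMutant(c.year) != c.expected) {  // does the data 'kill' the mutant?
            ++mutantsDetected;
        }
    }
    std::printf("%d unit test failure(s); mutant %s by this test data\n",
                failures, mutantsDetected > 0 ? "detected" : "NOT detected");
    return failures;
}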
What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules. There
are also various theories and metrics, such as McCabe Complexity metrics. It should be kept
in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer
reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and
enforce standards. For C and C++ coding, here are some typical ideas to consider in setting
rules/standards (a brief illustrative fragment follows the list); these may or may not apply to a particular situation:
 Minimize or eliminate use of global variables.
 Use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive
(use of more than 20 characters is not out of line); be consistent in naming
conventions.
 Use descriptive variable names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive
(use of more than 20 characters is not out of line); be consistent in naming
conventions.
 Function and method sizes should be minimized; less than 100 lines of code is good,
less than 50 lines is preferable.
 Function descriptions should be clearly spelled out in comments preceding a
function's code.
 Organize code for readability.
 Use white space generously - vertically and horizontally
 Each line of code should contain 70 characters max.
 One code statement per line.
Coding style should be consistent throughout a program (e.g., use of brackets,
indentation, naming conventions, etc.)
 In adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
 No matter how small, an application should include documentation of the overall
program function and flow (even a few paragraphs is better than nothing); or if
possible a separate flow chart and detailed program documentation.
 Make extensive use of error handling procedures and status and error logging.
For C++, to minimize complexity and increase maintainability, avoid too many levels
of inheritance in class hierarchies (relative to the size and complexity of the
application). Minimize use of multiple inheritance, and minimize use of operator
overloading (note that the Java programming language eliminates multiple
inheritance and operator overloading).
 For C++, keep class methods small, less than 50 lines of code per method is
preferable.
 For C++, make liberal use of exception handlers
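The fragment below is a small, hedged illustration of a few of the conventions listed above (descriptive names, a short commented function, explicit error handling, and error logging); the function itself is hypothetical and exists only to show the style:

#include <cstdio>
#include <stdexcept>

// Returns the average of 'sampleCount' values in 'sensorReadings'.
// Throws std::invalid_argument for an empty sample rather than silently
// returning a misleading value.
double computeAverageReading(const double sensorReadings[], int sampleCount) {
    if (sampleCount <= 0) {
        throw std::invalid_argument("computeAverageReading: sampleCount must be > 0");
    }
    double runningTotal = 0.0;
    for (int index = 0; index < sampleCount; ++index) {
        runningTotal += sensorReadings[index];
    }
    return runningTotal / sampleCount;
}

int main() {
    const double readings[] = {12.5, 13.0, 12.8};
    try {
        std::printf("average reading = %.2f\n", computeAverageReading(readings, 3));
    } catch (const std::exception& problem) {
        std::fprintf(stderr, "error: %s\n", problem.what());   // status/error logging
        return 1;
    }
    return 0;
}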
What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is clear,
understandable, easily modifiable, and maintainable; is robust with sufficient error handling
and status logging capability; and works correctly when implemented. Good functional design
is indicated by an application whose functionality can be traced back to customer and end-user
requirements. (See further discussion of functional and internal design in 'What's the big deal
about requirements?' below.) For programs that have a user interface, it's often a good
idea to assume that the end user will have little computer knowledge and may not read a user
manual or even the on-line help; some common rules of thumb include: the program should
act in a way that least surprises the user; it should always be evident to the user what can be
done next and how to exit; and the program shouldn't let the users do something stupid without
warning them.
What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S.
Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of
organizational 'maturity' that determine effectiveness in delivering quality software. It is
geared to large organizations such as large U.S. Defense Department contractors. However,
many of the QA processes involved are appropriate to any organization, and if reasonably
applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by
qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals
to successfully complete projects. Few if any processes in place; successes may not
be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be
repeated.
Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to
oversee software processes, and training programs are used to ensure
understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and
technologies can be predicted and effectively implemented when required.
(Perspective on CMM ratings: During 1992-1996 533 organizations were assessed. Of
those, 62% were rated at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5. The median
size of organizations was 100 software engineering/maintenance personnel; 31% of
organizations were U.S. federal contractors. For those rated at Level 1, the most
problematical key process area was in Software Quality Assurance.)
ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which
replaces the previous standard of 1994) concerns quality systems that are assessed by outside
auditors, and it applies to many kinds of production and manufacturing organizations, not just
software. It covers documentation, design, development, production, testing, installation,
servicing, and other processes.
ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software
development organizations. The U.S. version of the ISO 9000 series standards is exactly the
same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version
can be purchased directly from the ASQ (American Society for Quality) or the ANSI
organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and
Certification is typically good for about 3 years, after which a complete reassessment is
required. Note that ISO 9000 certification does not necessarily indicate quality products - it
indicates only that documented processes are followed.
(Publication of revised ISO standards is expected in late 2000; see http://www.iso.ch/ for
latest info.)
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard
829), 'IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), 'IEEE Standard for
Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
ANSI = 'American National Standards Institute', the primary industrial standards body in the
U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ
(American Society for Quality). Other software development process assessment methods
besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap.
Software QA and Testing-related Organizations and Certifications
SEI - Software Engineering Institute web site; info about SEI technical programs, publications,
bibliographies, some online documents, SEI courses and training, links to related sites.
IEEE Standards - IEEE web site; has Software Engineering Standards titles and prices; the
topical areas for publications of interest would include listings in the categories of Software
Design/Development and Software Quality and Management.
American Society for Quality - American Society for Quality (formerly the American Society
for Quality Control) web site; geared to quality issues in general, not just Software QA. ASQ is
the largest quality organization in the world, with more than 100,000 members. Also see the
ASQ Software Division web site for information related to Software QA and the Certified
Software Quality Engineer (CSQE) certification program.
Society for Software Quality - Has chapters in San Diego, Delaware, and Washington DC
area; each with monthly meetings.
QAI - Quality Assurance Institute
Certification Information for Software QA and Test Engineers:
CSQE - ASQ (American Society for Quality) CSQE (Certified Software Quality Engineer)
program - information on requirements, outline of required 'Body of Knowledge', listing of
study references and more.
CSQA/CSTE - QAI (Quality Assurance Institute)'s program for CSQA (Certified Software
Quality Analyst) and CSTE (Certified Software Test Engineer) certifications.
ISEB Software Testing Certifications - The British Computer Society maintains a program
of 2 levels of certifications - ISEB Foundation Certificate, Practitioner Certificate.
Will automated testing tools make testing easier?
 Possibly. For small projects, the time needed to learn and implement them may not
be worth it. For larger projects, or on-going long-term projects they can be valuable.
 A common type of automated tool is the 'record/playback' type. For example, a
tester could click through all combinations of menu choices, dialog box choices,
buttons, etc. in an application GUI and have them 'recorded' and the results logged
by a tool. The 'recording' is typically in the form of text based on a scripting
language that is interpretable by the testing tool. If new buttons are added, or some
underlying code in the application is changed, etc. the application can then be
retested by just 'playing back' the 'recorded' actions, and comparing the logging
results to check effects of the changes. The problem with such tools is that if there
are continual changes to the system being tested, the 'recordings' may have to be
changed so much that it becomes very time-consuming to continuously update the
scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a
difficult task. Note that there are record/playback tools for text-based interfaces
also, and for all types of platforms (a simplified playback-and-compare sketch appears after this list of tools).
 Other automated tools can include:
 Code analyzers - monitor code complexity, adherence to standards, etc.
Coverage analyzers - these tools check which parts of the code have been
exercised by a test, and may be oriented to code statement coverage,
condition coverage, path coverage, etc.
 Memory analyzers - such as bounds-checkers and leak detectors.
 Load/performance test tools - for testing client/server and web applications under
various load levels.
 Web test tools - to check that links are valid, HTML code usage is correct, client-side
and server-side programs work, a web site's interactions are secure.
 Other tools - for test case management, documentation management, bug
reporting, and configuration management.
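As a rough illustration of the record/playback-and-compare idea only (the action names and log messages are hypothetical and do not represent any real tool's script format), the sketch below replays a recorded list of actions through a stand-in for the application under test and flags any results that differ from the log captured at recording time:

#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Stand-in for driving the application under test; a real tool would send the
// action to the GUI and capture the resulting screen/log output.
static std::string performAction(const std::string& action) {
    if (action == "click LoginButton")   return "login dialog shown";
    if (action == "type UserName=guest") return "user name accepted";
    if (action == "click OK")            return "main window shown";
    return "unknown action";
}

int main() {
    // The "recording": each action plus the result logged when it was recorded.
    const std::vector<std::pair<std::string, std::string>> recording = {
        {"click LoginButton",   "login dialog shown"},
        {"type UserName=guest", "user name accepted"},
        {"click OK",            "main window shown"},
    };

    int mismatches = 0;
    for (const auto& step : recording) {
        const std::string result = performAction(step.first);   // playback
        if (result != step.second) {                             // compare with recorded log
            std::printf("MISMATCH on '%s': got '%s', expected '%s'\n",
                        step.first.c_str(), result.c_str(), step.second.c_str());
            ++mismatches;
        }
    }
    std::printf("%d mismatch(es)\n", mismatches);
    return mismatches;
}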
What is the use of Automation?
Record and replay.
What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful
in maintaining a cooperative relationship with developers, and an ability to communicate with
both technical (developers) and non-technical (customers, management) people is useful.
Previous software development experience can be helpful as it provides a deeper
understanding of the software development process, gives the tester an appreciation for the
developers' point of view, and reduces the learning curve in automated test tool programming.
Judgment skills are needed to assess high-risk areas of an application on which to focus
testing efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be
able to understand the entire software development process and how it can fit into the
business approach and goals of the organization. Communication skills and the ability to
understand various sides of issues are important. In organizations in the early stages of
implementing QA processes, patience and diplomacy are especially needed. An ability to find
problems as well as to see 'what's missing' is important for inspections and reviews.
What makes a good QA or Test manager?
A good QA, test, or QA / Test (combined) manager should:
 Be familiar with the software development process
 Be able to maintain enthusiasm of their team and promote a positive atmosphere,
despite what is a somewhat 'negative' process (e.g., looking for or preventing
problems)
 Be able to promote teamwork to increase productivity
 Be able to promote cooperation between software, test, and QA engineers
 Have the diplomatic skills needed to promote improvements in QA processes
 Have the ability to withstand pressures and say 'no' to other managers when quality
is insufficient or QA processes are not being adhered to
 Have people judgment skills for hiring and keeping skilled personnel
 Be able to communicate with technical and non-technical people, engineers,
managers, and customers.
 Be able to run meetings and keep them focused
What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices
should be documented such that they are repeatable. Specifications, designs, business rules,
inspection reports, configurations, code changes, test plans, test cases, bug reports, user
manuals, etc. should all be documented. There should ideally be a system for easily finding
and obtaining documents and determining what documentation will have a particular piece of
information. Change management for documentation should be used if possible.
What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a complex software
project is to have poorly documented requirements specifications. Requirements are the
details describing an application's externally perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and
testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A
testable requirement would be something like 'the user must enter their previously-assigned
password to access the application'. Determining and organizing requirements details in a
useful and efficient way can be a difficult effort; different methods are available depending on
the particular project. Many books are available that describe various approaches to this task.
Care should be taken to involve ALL of a project's significant 'customers' in the requirement
process. 'Customers' could be in-house or external personnel, and could include end-users,
customer acceptance testers, customer contract officers, customer management, future
software maintenance engineers, salespeople, etc. Anyone who could later derail the project if
their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the
requirements are spelled out in a document with statements such as 'The product shall.....'.
'Design' specifications should not be confused with 'requirements'; design specifications should
be traceable back to the requirements.
In some organizations requirements may end up in high-level project plans, functional
specification documents, in design documents, or in other documents at various levels of
detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by testers in order to properly plan and execute tests. Without
such documentation, there will be no clear-cut way to determine if a software application is
performing correctly.
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
 Obtain requirements, functional design, and internal design specifications and other necessary documents
 Obtain budget and schedule requirements
 Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes, change
processes, etc.)
 Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
 Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
 Determine test environment requirements (hardware, software, communications,
etc.)
 Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
 Determine test input data requirements
 Identify tasks, those responsible for tasks, and labor requirements
 Set schedule estimates, timelines, milestones
 Determine input equivalence classes, boundary value analyses, error classes (a brief sketch follows this list)
 Prepare test plan document and have needed reviews/approvals
 Write test cases
 Have needed reviews/inspections/approvals of test cases
 Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
 Obtain and install software releases
 Perform tests
 Evaluate and report results
 Track problems/bugs and fixes
 Retest as needed
 Maintain and update test plans, test cases, test environment, and testware through
life cycle
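As referenced in the list above, the following is a minimal Python sketch of how boundary values and equivalence classes might be derived for a hypothetical numeric field that must accept whole numbers from 1 to 100; the field and its range are assumptions used purely for illustration.

# Minimal sketch: boundary values and equivalence classes for a hypothetical
# numeric field that must accept whole numbers from 1 to 100 inclusive.

def boundary_values(low, high):
    # Classic boundary-value candidates around an inclusive range.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    # One representative value per class: below range, in range, above range.
    return {"below_range": low - 1, "valid": (low + high) // 2, "above_range": high + 1}

if __name__ == "__main__":
    print(boundary_values(1, 100))      # [0, 1, 2, 99, 100, 101]
    print(equivalence_classes(1, 100))  # {'below_range': 0, 'valid': 50, 'above_range': 101}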
What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that no one outside
the test group will read it. The following are some of the items that might be included in a test
plan, depending on the particular project:
 Title
 Identification of software including version/release numbers
 Revision history of document including authors, dates, approvals
 Table of Contents
 Purpose of document, intended audience
 Objective of testing effort
 Software product overview
 Relevant related document list, such as requirements, design documents, other test
plans, etc.
 Relevant standards or legal requirements
 Traceability requirements
 Relevant naming conventions and identifier conventions
 Overall software project organization and personnel/contact-info/responsibilities
 Test organization and personnel/contact-info/responsibilities
 Assumptions and dependencies
 Project risk analysis
 Testing priorities and focus
 Scope and limitations of testing
 Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
 Outline of data input equivalence classes, boundary value analysis, error classes
 Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
 Test environment validity analysis - differences between the test and production
systems and their impact on test validity.
 Test environment setup and configuration issues
 Software migration processes
 Software CM processes
 Test data setup requirements
 Database setup requirements
 Outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
 Discussion of any specialized software or hardware tools that will be used by testers
to help track the cause or source of bugs
 Test automation - justification and overview
 Test tools to be used, including versions, patches, etc.
 Test script/test code maintenance processes and version control
 Problem tracking and resolution - tools and processes
 Project test metrics to be used
 Reporting requirements and testing deliverables
 Software entrance and exit criteria
 Initial sanity testing period and criteria
 Test suspension and restart criteria
 Personnel allocation
 Personnel pre-training needs
 Test site/location
 Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
 Relevant proprietary, classified, security, and licensing issues.
 Open issues
What's a 'test case'?
 A test case is a document that describes an input, action, or event and an
expected response, to determine if a feature of an application is working
correctly. A test case should contain particulars such as test case identifier,
test case name, objective, test conditions/setup, input data requirements,
steps, and expected results.
 Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely
thinking through the operation of the application. For this reason, it's useful
to prepare test cases early in the development cycle if possible.
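To make the particulars listed above concrete, here is a minimal sketch of a test case expressed as a Python dictionary; the identifiers, steps, and expected result are hypothetical examples, not taken from any real project.

# Minimal sketch of a test case record; field names follow the particulars
# listed above, and all values are hypothetical.
test_case = {
    "id": "TC-042",
    "name": "Login with previously-assigned password",
    "objective": "Verify that a valid password grants access to the application",
    "conditions_setup": "User 'alice' exists and has an assigned password",
    "input_data": {"username": "alice", "password": "s3cret"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click 'Sign in'",
    ],
    "expected_result": "The user is taken to the application home page",
}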
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available (see the 'Tools' section
for web resources with listings of such tools). The following are items to consider in the
tracking process:
 Complete information such that developers can understand the bug, get an idea of
its severity, and reproduce it if necessary.
 Bug identifier (number, ID, etc.)
 Current bug status (e.g., 'Released for Retest', 'New', etc.)
 The application name or identifier and version
 The function, module, feature, object, screen, etc. where the bug occurred
 Environment specifics, system, platform, relevant hardware specifics
 Test case name/number/identifier
 One-line bug description
 Full bug description
 Description of steps needed to reproduce the bug if not covered by a test case or if
the developer doesn't have easy access to the test case/test script/test tool
 Names and/or descriptions of file/data/messages/etc. used in test
 File excerpts/error messages/log file excerpts/screen shots/test tool logs that would
be helpful in finding the cause of the problem
 Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
 Was the bug reproducible?
 Tester name
 Test date
 Bug reporting date
 Name of developer/group/organization the problem is assigned to
 Description of problem cause
 Description of fix
 Code section/file/module/class/method that was fixed
 Date of fix
 Application version that contains the fix
 Tester responsible for retest
 Retest date
 Retest results
 Regression testing requirements
 Tester responsible for regression tests
 Regression testing results
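A minimal sketch of how a subset of the items above could be captured in a structured bug record follows; the field names and values are illustrative assumptions, not a prescription for any particular tracking tool.

# Minimal sketch of a bug record capturing a subset of the items above;
# all field names and values are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    bug_id: str
    status: str                # e.g. 'New', 'Released for Retest'
    application: str
    version: str
    summary: str               # one-line description
    description: str           # full description
    steps_to_reproduce: List[str] = field(default_factory=list)
    severity: int = 3          # 1 (critical) to 5 (low)
    reproducible: bool = True
    tester: str = ""
    assigned_to: str = ""

bug = BugReport(
    bug_id="BUG-1017",
    status="New",
    application="Payments",
    version="2.4.1",
    summary="Server error when submitting a refund over 10,000",
    description="Submitting a refund above 10,000 returns an error page.",
    steps_to_reproduce=["Log in", "Open Refunds", "Enter 10001", "Submit"],
    severity=2,
    tester="A. Tester",
    assigned_to="payments-dev",
)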
A reporting or tracking process should enable notification of appropriate personnel at various
stages. For instance, testers need to know when retesting is needed, developers need to know
when bugs are found and how to get the needed information, and reporting/summary
capabilities are needed for managers.
What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting
whatever bugs or blocking-type problems initially show up, with the focus being on critical
bugs. Since this type of problem can severely affect schedules, and indicates deeper problems
in the software development process (such as insufficient unit testing or insufficient integration
testing, poor design, improper build or release procedures, etc.) managers should be notified,
and provided with some documentation as evidence of the problem.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run
in such an interdependent environment, that complete testing can never be done. Common
factors in deciding when to stop are:
 Deadlines (release deadlines, testing deadlines, etc.)
 Test cases completed with certain percentage passed
 Test budget depleted
 Coverage of code/functionality/requirements reaches a specified point
 Bug rate falls below a certain level
 Beta or alpha testing period ends
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgment skills, common
sense, and experience. (If warranted, formal methods are also available.) Considerations can
include:
 Which functionality is most important to the project's intended purpose?
 Which functionality is most visible to the user?
 Which functionality has the largest safety impact?
 Which functionality has the largest financial impact on users?
 Which aspects of the application are most important to the customer?
 Which aspects of the application can be tested early in the development cycle?
 Which parts of the code are most complex, and thus most subject to errors?
 Which parts of the application were developed in rush or panic mode?
 Which aspects of similar/related previous projects caused problems?
 Which aspects of similar/related previous projects had large maintenance expenses?
 Which parts of the requirements and design are unclear or poorly thought out?
 What do the developers think are the highest-risk aspects of the application?
 What kinds of problems would cause the worst publicity?
 What kinds of problems would cause the most customer service complaints?
 What kinds of tests could easily cover multiple functionalities?
 Which tests will have the best high-risk-coverage to time-required ratio?
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing
is still not justified, risk analysis is again needed and the same considerations as described
previously in 'What if there isn't enough time for thorough testing?' apply. The tester might
then do ad hoc testing, or write up a limited test plan based on the risk analysis.
What can be done if requirements are changing continuously?
A common problem and a major headache.
 Work with the project's stakeholders early on to understand how requirements might
change so that alternate test plans and strategies can be worked out in advance, if
possible.
 It's helpful if the application's initial design allows for some adaptability so that later
changes do not require redoing the application from scratch.
 If the code is well commented and well documented this makes changes easier for
the developers.
 Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
 The project's initial schedule should allow for some extra time commensurate with
the possibility of changes.
 Try to move new requirements to a 'Phase 2' version of an application, while using
the original requirements for the 'Phase 1' version.
 Negotiate to allow only easily implemented new requirements into the project, while
moving more difficult new requirements into future versions of the application.
 Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted; after all, that's their job.
 Balance the effort put into setting up automated testing with the expected effort
required to re-do them to deal with changes.
 Try to design some flexibility into automated test scripts.
 Focus initial automated testing on application aspects that are most likely to remain
unchanged.
 Devote appropriate effort to risk analysis of changes to minimize regression-testing
needs.
 Design some flexibility into test cases (this is not easily done; the best bet might be
to minimize the detail in the test cases, or set up only higher-level generic-type test
plans)
 Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If
the functionality isn't necessary to the purpose of the application, it should be removed, as it
may have unknown impacts or dependencies that were not taken into account by the designer
or the customer. If not removed, design information will be needed to determine added testing
needs or regression testing needs. Management should be made aware of any significant
added risks as a result of the unexpected functionality. If the functionality only affects areas
such as minor improvements in the user interface, for example, it may not be a significant
risk.
How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures,
productivity will be improved instead of stifled. Problem prevention will lessen the need for
problem detection, panics and burnout will decrease, and there will be improved focus and less
wasted effort. At the same time, attempts should be made to keep processes simple and
efficient, minimize paperwork, promote computer-based processes and automated tracking
and reporting, minimize time required in meetings, and promote training as part of the QA
process. However, no one - especially talented technical types - likes rules or bureaucracy, and
in the short run things may slow down a bit. A typical scenario would be that more days of
planning and development will be needed, but less time will be required for late-night bug
fixing and calming of irate customers.
(See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management'
categories for useful books with more information.)
What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There
is no easy solution in this situation, other than:
 Hire good people
 Management should 'ruthlessly prioritize' quality issues and maintain focus on the
customer
 Everyone in the organization should be clear on what 'quality' means to the
customer
How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among
clients, data communications, hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the focus should be on integration and system
testing. Additionally, load/stress/performance testing may be useful in determining
client/server application limitations and capabilities. There are commercial tools to assist with
such testing.
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages (such as
applets, JavaScript, plug-in applications), and applications that run on the server side (such as
cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.).
Additionally, there are a wide variety of servers and browsers, various versions of each, small
but sometimes significant differences between them, variations in connection speeds, rapidly
changing technologies, and multiple standards and protocols. The end result is that testing for
web sites can become a major ongoing effort. Other considerations might include:
 What are the expected loads on the server (e.g., number of hits per unit time?), and
what kind of performance is required under such loads (such as web server response
time, database query response times). What kinds of tools will be needed for
performance testing (such as web load testing tools, other tools already in house
that can be adapted, web robot downloading tools, etc.)?
 Who is the target audience? What kind of browsers will they be using? What kinds of
connection speeds will they be using? Are they intra-organization (thus with likely
high connection speeds and similar browsers) or Internet-wide (thus with a wide
variety of connection speeds and browser types)?
 What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
 Will down time for server and content maintenance/upgrades be allowed? How
much?
 What kinds of security (firewalls, encryptions, passwords, etc.) will be required and
what is it expected to do? How can it be tested?
 How reliable are the site's Internet connections required to be? And how does that
affect backup system or redundant connection requirements and testing?
 What processes will be required to manage updates to the web site's content, and
what are the requirements for maintaining, tracking, and controlling page content,
graphics, links, etc.?
 Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
 Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
 How will internal and external links be validated and updated? How often?
 Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variability, and real-world Internet 'traffic congestion' problems to be
accounted for in testing?
 How extensive or customized are the server logging and reporting requirements; are
they considered an integral part of the system and do they require testing?
 How are cgi programs, applets, java scripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup
'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Usability guidelines are largely subjective and may or may not apply to a given situation; more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section.
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application's objects. If the application was well
designed this can simplify test design.
What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck, who described the approach in his
book ‘Extreme Programming Explained’ (see the Softwareqatest.com Books page).
Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are
expected to write unit and functional test code first - before the application is developed. Test
code is under source control along with the rest of the code. Customers are expected to be an
integral part of the project team and to help develop scenarios for acceptance/black box
testing. Acceptance tests are preferably automated, and are modified and rerun for each of
the frequent development iterations. QA and test personnel are also required to be an integral
part of the project team. Detailed requirements documentation is not used, and frequent rescheduling,
re-estimating, and re-prioritizing are expected. For more info see the XP-related
listings in the Softwareqatest.com 'Other Resources' section.
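As an illustration of the 'test first' idea described above, here is a minimal Python sketch in the pytest style; the calculate_discount function and its discount rule are hypothetical, and the point is only that the test exists before, and drives, the production code.

# Minimal sketch of test-first development: the test below is written before
# the code it exercises; 'calculate_discount' and its rule are hypothetical.

def calculate_discount(order_total):
    # Production code written afterwards, just enough to make the test pass.
    return order_total * 0.10 if order_total >= 100 else 0.0

def test_discount_applies_from_100_upwards():
    assert calculate_discount(99) == 0.0
    assert calculate_discount(100) == 10.0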
Web Functional/Regression Test Tools
Rational Suite Test Studio - Rational's functional testing tool; includes Rational
Robot object testing automation tool, which recognizes objects in Java, HTML, ActiveX, Visual
Basic, Visual C/C++, Power Builder, Oracle Developer/2000. For 95/98/NT.
Silk Test - Segue's web testing tool for functional and regression testing; includes
capabilities for testing Java applets, HTML, ActiveX, images; works with MSIE, Netscape,
includes capture/playback capabilities.
Web Site Security Test Tools
HostCheck - Suite of security test and management tools from DMW Worldwide. For UNIX
platforms.
Web Trends Security Analyzer - Web site tool to detect and fix security problems.
Includes a periodically-updated expert knowledge base. For Win95/98/2000/NT.
Secure Scanner - Cisco's product for detecting and reporting on Internet server and network
vulnerabilities; risk management; network mapping. For NT or Solaris.
Web Site Management Tools
JetStream - Site management suite for web server monitoring.
A1Monitor - Utility from A1Tech for monitoring availability of web servers.
Capabilities include notification by email and automatic reboot of web server. For Win95/NT
Regression Testing
Regression testing is re-testing of a previously tested program following modification to
ensure that faults have not been introduced or uncovered as a result of the changes made.
Regression tests are designed for repeatability, and are often used when testing a second or
later version of the System Under Test (SUT). Automated regression testing is a benefit of test
automation. Building an automated test system is in fact a software development process. SIM
works alongside the client to select suitable cases for regression testing, such as:
 Tests that cover business critical functions
 Tests that are repetitive
 Tests that need accurate data
 Tests for areas that change regularly.
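One way such a selected subset can be kept repeatable in an automated suite is to tag the chosen tests; the sketch below uses pytest markers, and the test itself is a hypothetical business-critical check.

# Minimal sketch: tagging tests so that a regression subset can be selected
# and re-run after each modification. The test content is hypothetical.
import pytest

@pytest.mark.regression
def test_order_total_is_unchanged_after_modification():
    items = [100, 250, 49]
    assert sum(items) == 399

Running pytest -m regression then executes only the marked tests; registering the marker in pytest.ini avoids warnings about unknown marks.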
Functional Testing
Functional testing is testing that operations perform as expected. Functionality is assessed
from two perspectives - the first is to prove and accept the product and the second is to test
the business acceptance. Functional tests are often based on an external requirements
definition.
SIM offers a full functional testing service, and has expertise in all aspects of functional
testing, both manual and automated. SIM will provide the following functional testing services:
 Identify functions to be tested for both online and cyclic batch processes
 Liaise with the business representatives to prioritise the tests
 Identify data to exercise fully the functions to be tested
 Specify the results expected from every test
 Produce automated or manual test scripts that will apply the test data.
Web Test Plan
I am working with a company that operates as a payment gateway, and I am the first and only QA person there.
So I decided to prepare a testing policy and then write test cases for each type. I need to verify it
with web testing gurus; please provide your comments. The testing policy is divided into four sections:
Static Testing,
Test Browsing,
Functional Testing, and
Non Functional Testing
Static Testing (Auto / Manual, Tool: WebKing)
Static testing is the testing of the objects in a web browser that do not change, or are not
transaction based. This type of testing is done on a web page that has already been loaded
into a Web browser.
Content Checking (Auto, Tool: WebKing)
The web page has to be tested for accuracy, completeness, consistency, spelling and accessibility.
These tests sound elementary; however, it is in areas like these that the site is first judged
by the website visitor.
Accessibility - Code and content that violates Web accessibility guidelines (Section 508 guidelines and W3C WAI guidelines).
Spelling Check - Content that contains misspellings and typos.
Other - Accuracy, Web Standards, Completeness, Consistency.
Browser Syntax Compatibility (Auto, Tool: WebKing, JTest)
It is the technology of how to represent the content, whether that content consists of text,
graphics, or other web objects. This is an important test as it determines whether or not the
page under test works in various browsers.
1.2.1> Syntax Check
HTML, CSS, JavaScript, and VBScript/ASP coding problems that affect presentation, execution,
dynamic content, performance, transformation, display in non-traditional browsers, etc.
XML problems that affect transformations and data retrieval.
Visual Browser Validation (Manual/Auto, Tool: VMWare, BaselinkII, BrowserCam)
Does the content look the same, regardless of the supported browser used? Pages should be
visually checked to see if there are any differences in the physical appearance of the objects in
the page, such as the centering of objects, table layouts, etc. The differences should be
reviewed to see if there is any need to change the page so that it appears exactly the same (if
possible) in all of the supported browsers.
Test Browsing
Test browsing tests aim to find the defects regarding navigation through web pages, including
the availability of linked pages, and other objects, as well as the download speed of the
individual page under test. The integration of web pages to server-based components is
tested, to ensure that the correct components are called from the correct pages.
2.1> Browsing the Site
When traversing links and opening new pages, several questions should be addressed on each
and every page the system links to.
2.1.1> Link Checking (Method: Auto, Tool: WebKing)
Do all of the text and graphical links work? This test exposes navigational problems such as broken links,
actions that lead to designated error pages, anchor problems, non-clickable links, and so
forth.
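A minimal sketch of automated link checking for a single page, using only the Python standard library, is shown below; a real tool such as WebKing would also follow links recursively and handle many more cases.

# Minimal sketch of automated link checking for one page, standard library only.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    # Collects the href value of every <a> tag on the page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def broken_links(page_url):
    html = urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for href in collector.links:
        target = urljoin(page_url, href)  # resolve relative links
        if not target.startswith(("http://", "https://")):
            continue  # skip mailto:, javascript:, etc.
        try:
            urlopen(Request(target, method="HEAD"), timeout=10)
        except (HTTPError, URLError) as error:
            broken.append((target, str(error)))
    return broken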
2.1.2> Object Load and Timing (Method: Auto/ Manual, Tool: Astra Site Manager, WAST)
Can the page be downloaded and displayed? Do all objects load in an acceptable time
(“acceptable” being based on the business requirements)? When the user turns the browser's
image-loading option off, does the page still work? Other issues to validate are
whether the site still works if JavaScript or Java is disabled, or if a certain plug-in is not
loaded or is disabled.
Functional Testing
3.1> Browser Page Test (Auto, Tool: QAWizard/Winrunner/WebKing)
This type of test covers the objects and code that executes within the browser, but does not
execute the server-based components. For example, JavaScript and VBScript code within
HTML that does rollovers, and other special effects. This type of test also includes field
validations that are done at the HTML level. Additionally, browser-page tests include Java
applets that implement screen functionality or graphical output. Problems exposed include
JavaScript runtime errors; pop-up windows, page changes, and other effects that do not work
as expected; frames that do not load correctly; server-side program crashes and exceptions;
server errors and failures; unexpected page content changes; and unexpected click path flow
changes.
3.2> Transaction Testing (Manual/Auto, Tool: QAWizard/Winrunner)
This type of test is designed to force the application to invoke the various components as a
complete set and to determine whether the direct and indirect interfaces work correctly. These
interfaces are: Transfer of control between components, Transfer of data between components
(both directions), and consistency of data across components. Problems exposed include server-side
program crashes and exceptions, and server errors and failures.
Non Functional Testing
4.1> Configuration Testing
Beyond the browser validation, this type of test takes into consideration the operating system
platforms used, the type of network connection, Internet service provider type, and browser
used (including version). The real work for this type of test is ensuring that the requirements
and assumptions are understood by the development team, and that a test environment with
those choices is put in place to properly test it.
4.2> Usability (Method: Manual)
Usability is the measure of the quality of a user's experience when interacting with a web site.
Tests for usability can be subjective; guidelines from http://usability.gov/ can be
used.
4.3> Performance (Method: Auto, Tool: LoadRunner)
Performance testing is the validation that the system meets performance requirements. This
can be as simple as ensuring that a web page loads in less than eight seconds, or as
complex as requiring the system to handle 10,000 transactions per minute while still being
able to load a web page within eight seconds.
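As a minimal sketch of the eight-second example above, the following Python script times a single page download and checks it against the limit; the URL is a placeholder, and note that this measures only the HTML download, not full rendering in a browser.

# Minimal sketch: time one page download and compare it to the requirement.
# The URL is a placeholder; only the HTML download is measured, not rendering.
import time
from urllib.request import urlopen

def page_load_seconds(url):
    start = time.monotonic()
    urlopen(url, timeout=30).read()
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = page_load_seconds("http://example.com/")
    assert elapsed < 8.0, f"Page took {elapsed:.2f}s; requirement is under 8 seconds"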
4.4> Load Testing (Auto, Tool: LoadRunner)
Load testing identifies the volume of traffic accessing a particular application. It measures the
number of simultaneous users that can successfully access the application. Load testing
determines an optimum number of simultaneous users.
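A dedicated tool such as LoadRunner measures far more, but the basic shape of a load test can be sketched with threads; the URL and user count below are placeholder assumptions.

# Minimal sketch: simulate a number of simultaneous users hitting one URL
# and report how many requests succeeded. Values are placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def one_user(url):
    try:
        urlopen(url, timeout=30).read()
        return True
    except Exception:
        return False

def run_load(url, simultaneous_users):
    with ThreadPoolExecutor(max_workers=simultaneous_users) as pool:
        results = list(pool.map(one_user, [url] * simultaneous_users))
    return sum(results), len(results)

if __name__ == "__main__":
    passed, total = run_load("http://example.com/", simultaneous_users=25)
    print(f"{passed}/{total} simulated users succeeded")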
4.5> Stress Testing (Auto, Tool: LoadRunner)
Stress testing usually coincides with load testing. Stress testing steadily increases the load on
the site beyond the maximum design load until the site performance degrades to an
unacceptable level or crashes. The benefits of this type of testing are that it tests the failure
behavior of the system, it determines whether system overload results in loss of data or service,
and it stresses the system in a way that may cause defects to arise which would not normally be
detected.
4.6> Security Testing (Manual)
There are several areas of security, and below them are questions or issues that should be
answered for each section.
4.6.1> Data Collection: The web server should be setup so that users cannot browse
directories and obtain file names.
4.6.2> Get vs. Post: When testing, check URLs to ensure that there are no “information
leaks” due to sensitive information being placed in the URL while using a GET command.
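A minimal sketch of such a check is shown below: it scans a list of visited URLs for sensitive parameter names in the query string; the URL list and the set of sensitive keys are assumptions that would in practice come from the application's own logs or a crawl.

# Minimal sketch: flag GET URLs whose query strings carry sensitive parameters.
# The URL list and the sensitive key names are illustrative assumptions.
from urllib.parse import parse_qs, urlparse

SENSITIVE_KEYS = {"password", "pwd", "pin", "cardnumber", "ssn"}

def leaking_parameters(url):
    query = parse_qs(urlparse(url).query)
    return [key for key in query if key.lower() in SENSITIVE_KEYS]

if __name__ == "__main__":
    visited_urls = ["http://example.com/pay?amount=10&password=s3cret"]
    for url in visited_urls:
        leaks = leaking_parameters(url)
        if leaks:
            print(f"Possible information leak in {url}: {leaks}")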
4.6.3> Cookies: Testing of Application behavior by disabling or corrupting cookies
In addition to the above, the testing policy should contain:
User acceptance testing - determining if software is satisfactory to an end-user or customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users or
others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.
Code review and document review
Add the testing life cycle to your policy.
Defect Tracking Process
The following are the fields that a bug record consists of.
SCR Id : ______________
Title : ______________ (Form Name)
Version : ______________ (Version of the build)
Description : ______________ (With sequence to reproduce the bug)
Status : Open (Default) (This is the bug record status; it indicates whether the bug still exists)
Close, Re-open
Severity : Critical, Major, Minor, Observation/Suggestion
Resolution : Open (This field gives the status with respect to the developers)
Fixed, Not a Bug, Fixing In Progress, Postpone, Duplicate, Not Reproducible
Submit Date/Time : (System Date)
Submitter : By default Current User
Assign To : _______________ (The Developer who is responsible)
Apart from this, notes can be added and related files can be attached to the record.
Submitters:
All the fields will be enabled if the user is a submitter. While posting a bug, the submitter needs to
give a title in the format 'Menu Items > Form Name', describe the bug with the sequence of
steps needed to reproduce it, and assign a Severity of Critical, Major, Minor, or
Observation/Suggestion. To attach a screen shot of the bug, submit the bug, select the record
again, and click 'Update Files' to add a file containing the screen shot. Once the bug is entered,
the Manager assigns it to the corresponding Developer.
Manager:
All the fields will be enabled if the user is a Manager. Once a new bug is posted, the
Manager has to track it and assign it to the corresponding Developer.
Developers:
For Developers, the only enabled field is 'Resolution'; all other fields are
disabled. The Resolution field is by default 'Open' when a bug record is posted.
Developers track records posted against them, work on them, and accordingly
set the Resolution to one of the following.
Fixed -- If the bug is rectified.
Not a Bug -- If the Developer believes the report is not a bug. A proper explanation needs to
be given for it.
Fixing In Progress -- Need to give proper explanation for it.
Postpone -- Need to give proper explanation for it.
Duplicate -- If the same bug is posted more than once.
Not Reproducible -- If the bug is not reproducible.
Submitter:
After the resolution is changed from 'Open', the submitter tracks the bug, rechecks it, and decides
whether to close the 'Status' accordingly.
That is the basic flow of the bug cycle with respect to users.
Bug Impacts
Low impact
This is for Minor problems, such as failures at extreme boundary conditions that are unlikely to
occur in normal use, or minor errors in layout/formatting. These problems do not impact use
of the product in any substantive way.
Medium impact
This is a problem that a) affects a more isolated piece of functionality, b) occurs only at
certain boundary conditions, c) has a workaround (where "don't do that" might be an
acceptable answer to the user), d) occurs only at one or two customer sites, or e) is very
intermittent.
High impact
This should be used only for serious problems, affecting many sites, with no workaround.
Frequent or reproducible crashes/core dumps would fall in this category, as would major
functionality not working.
Urgent impact
This should be reserved for only the most catastrophic of problems. Data corruption, complete
inability to use the product at almost any site, etc. For released products, an urgent bug would
imply that shipping of the product should stop immediately, until the problem is resolved.
2. SCOPE AND OBJECTIVES
2.1. Scope of Test Approach - System Functions
+ 2.1.1. Inclusions
+ 2.1.2. Exclusions
2.2. Testing Process
2.3. Testing Scope
+ 2.3.1. Functional Testing
+ 2.3.2. Integration Testing
+ 2.3.3. Business (User) Acceptance Test
+ 2.3.4. Performance Testing
+ 2.3.5. Regression Testing
+ 2.3.6. Bash & Multi-User Testing
+ 2.3.7. Technical Testing
+ 2.3.8. Operations Acceptance Testing (OAT)
2.4. System Test Entrance/Exit Criteria
+ Entrance Criteria
+ Exit Criteria
SCOPE AND OBJECTIVES
2.1. Scope of Test Approach - System Functions
2.1.1. INCLUSIONS
The contents of this release are as follows:
Phase 1 Deliverables
New & revised Transaction Processing with automated support
New Customer Query Processes and systems
Revised Inter-Office Audit process
Relocate Exceptions to Head Office
New centralised Agency Management system
Revised Query Management process
Revised Retrievals process
New International Reconciliation process
New Account Reconciliation process
2.1.2. EXCLUSIONS
When the scope of each Phase has been agreed and signed off, no further items will be
considered for inclusion in this release, except:
Where there is the express permission and agreement of the Business Analyst and the
System Test Controller;
Where the changes/inclusions will not require significant effort on the part of the test
team (i.e. requiring extra preparation - new test conditions etc.) and will not adversely affect
the test schedule.
2.1.3. SPECIFIC EXCLUSIONS
Cash management is not included in this phase
Sign On/Sign Off functions are excluded - this will be addressed by existing processes
The existing Special Order facility will not be replaced
Foreign Currency Transactions
International Data Exchanges
Accounting or reporting of Euro transactions
Reference & Source Documentation:
Business Processes Design Document - Document Ref: BPD-1011
Transaction Requirements for Phase 1 - Document Ref: TR_PHASE1-4032
Project Issues & Risks Database - T:\Data\Project\PROJECT.MDB
The System Development Standards - Document Ref: DEVSTD-1098-2
System Development Lifecycle - Document Ref: SDLC-301
2.2. Testing Process
The test process approach that will be followed is outlined below.
Organise Project involves creating a System Test Plan, Schedule & Test Approach, and
requesting/assigning resources.
Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit
Criteria, Expected Results, etc. In general, test conditions/expected results will be
identified by the Test Team in conjunction with the Project Business Analyst or Business
Expert. The Test Team will then identify Test Cases and the Data required. The Test
conditions are derived from the Business Design and the Transaction Requirements
Documents
Design/Build Test Procedures includes setting up procedures such as Error Management
systems and Status reporting, and setting up the data tables for the Automated Testing
Tool.
Build Test Environment includes requesting/building hardware, software and data setups.
Execute Project Integration Test - See Section 3 - Test Phases & Cycles
Execute Operations Acceptance Test - See Section 3 - Test Phases & Cycles
Signoff - Signoff happens when all pre-defined exit criteria have been achieved.
2.2.1. Exclusions
SQA will not deal directly with the business design regarding any design / functional issues /
queries.
The development team is the supplier to SQA - if design / functional issues arise they should
be resolved by the development team and its suppliers.
2.3. Testing Scope
Outlined below are the main test types that will be performed for this release. All system test
plans and conditions will be developed from the functional specification and the requirements
catalogue.
2.3.1. Functional Testing
The objective of this test is to ensure that each element of the application meets the functional
requirements of the business as outlined in the:
Requirements Catalogue
Business Design Specification
Year 2000 Development Standards
Other functional documents produced during the course of the project i.e. resolution to
issues/change requests/feedback.
This stage will also include Validation Testing, which is intensive testing of the new front-end
fields and screens: Windows GUI standards; valid, invalid and limit data input; screen and field
look and appearance; and overall consistency with the rest of the application.
The third stage includes Specific Functional testing - these are low-level tests, which aim to
test the individual processes and data flows.
2.3.2. Integration Testing
This test proves that all areas of the system interface with each other correctly and that there
are no gaps in the data flow. The Final Integration Test proves that the system works as an
integrated unit when all the fixes are complete.
2.3.3. Business (User) Acceptance Test
This test, which is planned and executed by the Business Representative(s), ensures that the
system operates in the manner expected, and any supporting material such as procedures,
forms etc. are accurate and suitable for the purpose intended. It is high level testing, ensuring
that there are no gaps in functionality.
2.3.4. Performance Testing
These tests ensure that the system provides acceptable response times (which should not
exceed 4 seconds).
2.3.5. Regression Testing
A Regression test will be performed after the release of each Phase to ensure that:
there is no impact on previously released software, and
there is an increase in the functionality and stability of the software.
The regression testing will be automated using the automated testing tool.
2.3.6. Bash & Multi-User Testing
Multi-user testing will attempt to prove that it is possible for an acceptable number of users to
work with the system at the same time. The object of Bash testing is an ad-hoc attempt to
break the system.
2.3.7. Technical Testing
Technical Testing will be the responsibility of the Development Team.
2.3.8. Operations Acceptance Testing (OAT)
This phase of testing is to be performed by the Systems Installation and Support group, prior
to implementing the system in a live site. The SIS team will define their own testing criteria,
and carry out the tests.
2.4. System Test Entrance/Exit Criteria
2.4.1. Entrance Criteria
The Entrance Criteria specified by the System Test Controller should be fulfilled before System
Test can commence. In the event that any criterion has not been achieved, the System Test
may commence if the Business Team and Test Controller are in full agreement that the risk is
manageable.
All developed code must be unit tested. Unit and Link Testing must be completed and
signed off by development team.
System Test plans must be signed off by Business Analyst and Test Controller.
All human resources must be assigned and in place.
All test hardware and environments must be in place, and free for System test use.
The Acceptance Tests must be completed, with a pass rate of not less than 80%.
Acceptance Tests:
25 test cases will be performed for the acceptance tests. To achieve the acceptance criteria 20
of the 25 cases should be completed successfully - i.e. a pass rate of 80% must be achieved
before the software will be accepted for System Test proper to start. This means that any
errors found during acceptance testing should not prevent the completion of 80% of the
acceptance test applications.
Note: These tests are not intended to perform in depth testing of the software.
[For details of the acceptance tests to be performed see
X:\Testing\Phase_1\Testcond\Criteria.doc]
Resumption Criteria
In the event that system testing is suspended resumption criteria will be specified and testing
will not re-commence until the software reaches these criteria.
2.4.2. Exit Criteria
The Exit Criteria detailed below must be achieved before the Phase 1 software can be
recommended for promotion to Operations Acceptance status. Furthermore, I recommend a
minimum of 2 days' effort of Final Integration testing AFTER the final fix/change has been
retested.
All High Priority errors from System Test must be fixed and tested
If any medium or low-priority errors are outstanding - the implementation risk must be
signed off as acceptable by Business Analyst and Business Expert
Project Integration Test must be signed off by Test Controller and Business Analyst.
Business Acceptance Test must be signed off by Business Expert.
What are Use Cases?
Use cases are a relatively new method of documenting a software program’s actions. It’s a
style of functional requirement document - an organized list of scenarios that a user or system
might perform while navigating through an application. According to the Rational Unified
Process,
“A use case defines a set of use-case instances, where each instance is a sequence of actions
a system performs that yields an observable result of value to a particular actor”.
What’s so good about Use Cases?
Use Cases have gained popularity over the last few years as a method of organizing and
documenting a software system’s functions from the user perspective.
What are some of their problems?
There are problems inherent in any documentation method (including traditional functional
requirements documents), and use cases are no different. Some general problems to be aware
of include:
 They might be incomplete
 Each case not describing enough detail of use
 Not enough of them, missing entire areas of functionality
 They might be inaccurate
 They might not have been reviewed
 They might not be updated when requirements change
 They might be ambiguous
Requirement Management:
Requirements are capabilities and objectives to which any product or service must conform
and are common to all development and other engineering activities. Requirements
management is the process of eliciting, documenting, organizing, and tracking requirements
and communicating this information across the various stakeholders and the project team. It
ensures that iterative refinements and unanticipated changes are dealt with during the project
life cycle, with a view towards the overall quality of the resultant service or product.
Requirements management is concerned with understanding the goals of the organization and
its customers and the transformation of these goals into potential functions and constraints
applicable to the development and evolution of products and services. It involves
understanding the relationship between goals, functions and constraints in terms of the
specification of products, including systems behavior, and service definition.
The goals provide the motivation for programmes and projects and represent the 'why' and to
a certain extent the 'what' in development terms. The specification provides the basis for
analyzing requirements, validating that they are indeed what stakeholders want, defining what
needs to be delivered, and verifying the resultant developed product or service.
Requirements management aims to establish a common understanding between the customer
and other stakeholders and the project team(s) that will be addressing the requirements at an
early stage in the project life-cycle and maintain control by establishing suitable base-lines for
both development and management use.
Why are test requirements so important to the testing process?
A test requirement is a testing "goal." It is a statement of what the test engineer wants to
accomplish when implementing a specific testing activity. More than this, it is a goal that is
defined to reflect against an AUT feature as documented in the software requirements
specification.
A test requirement is a step down from the software requirement. It must be "measurable" in
that it can be proved.
Measurable means that the test engineers can qualitatively or quantitatively verify the test
results against the test requirement's expected result.
In order to achieve this:
Test requirements must be broken down into test conditions that contain much more detail
than the software requirements specification and the test requirement allow.
The relationship from software requirement to test requirement can be one-to-one (one test
requirement per software requirement), one-to-many (one software requirement results in
many test requirements), or many-to-one (more than one software requirement relates to one
test requirement).
Using the same line of thinking, the relationship of test requirement to test condition can be
one-to-one (one test condition per test requirement), one-to-many (one test requirement
results in many test conditions), or many-to-one (more than one test requirement relates to
one test condition).
In both instances, many-to-many relationships are also possible, but they make testing so
complex that the results are difficult to interpret, so this type of relationship should be
avoided. When it occurs, consider using a decomposition approach to split the test
requirement into one or more, less complex requirements.
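To make the one-to-many case concrete, here is a minimal sketch of a traceability map from one software requirement down to test requirements and then test conditions; all identifiers and wording are hypothetical.

# Minimal sketch of traceability from a software requirement to test
# requirements and then to test conditions; identifiers are hypothetical.
traceability = {
    "SR-12: The system shall accept payment amounts from 1 to 10,000": {
        "TR-12.1: Valid payment amounts are accepted": [
            "TC-12.1.1: amount = 1 (lower boundary)",
            "TC-12.1.2: amount = 10,000 (upper boundary)",
        ],
        "TR-12.2: Invalid payment amounts are rejected": [
            "TC-12.2.1: amount = 0",
            "TC-12.2.2: amount = 10,001",
        ],
    },
}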
Reliability Factor:
Assurance that the application will perform its intended function with required precision over
an intended period of time. The correctness of processing deals with the ability of the system
to process valid transactions correctly, while reliability relates to the systems being able to
perform correctly over an extended period of time when placed into production. Reliability is
one of the most important test factors to be considered.
Recovery Testing Technique:
Recovery is the ability to restart operations after the integrity of the application has been
lost. The process normally involves reverting to a point where the integrity of the system is
known, and then reprocessing transactions up until the point of failure. The time required to
recover operations is affected by the number of restart points, the volume of applications run
on the computer center, the training and skill of the people conducting the recovery operation
and the tools available for recovery. The importance of recovery will vary from application to
application.