Software Engineering and CASE Tools



Notes for Software Engineering and CASE Tools

Software Product

Software is
(1) instructions (computer programs) that when executed provide desired function and performance,
(2) data structures that enable the programs to adequately manipulate information, and (3) documents that describe the operation and use of the programs.
From the point of view of a software engineer, the work product is the programs, documents, and data that are computer software. But from the user’s viewpoint, the work product is the resultant information that somehow makes the user’s world better.
Software Characteristics

1.       Software is developed or engineered; it is not manufactured in the classical sense.
2.       Software doesn't wear out: The failure rate as a function of time for hardware, often called the "bathtub curve," indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental maladies. Stated simply, the hardware begins to wear out.
Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the “idealized curve”. Undiscovered defects will cause high failure rates early in the life of a program. However, these are corrected (ideally, without introducing other errors) and the curve flattens.
However, the implication is clear—software doesn't wear out. But it does deteriorate!
During its life, software will undergo change (maintenance). As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve to spike. Before the curve can return to the original steady-state failure rate, another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise—the software is deteriorating due to change.
Another difference is that when a hardware component wears out, it is replaced by a spare part. There are no software spare parts. Every software failure indicates an error in design or in the process through which design was translated into machine-executable code. Therefore, software maintenance involves considerably more complexity than hardware maintenance.

3.       Although the industry is moving toward component-based assembly, most software continues to be custom built: As an

engineering discipline evolves, a collection of standard design components is created. A software component should be

designed and implemented so that it can be reused in many different programs. For example, today's graphical user

interfaces are built using reusable components that enable the creation of graphics windows, pull-down menus, and a wide

variety of interaction mechanisms.
The Software Process

A software process can be characterized by a process framework. A common process framework is established by defining a

small number of framework activities that are applicable to all software projects, regardless of their size or complexity.

A number of task sets—each a collection of software engineering work tasks, project milestones, work products, and quality

assurance points—enable the framework activities to be adapted to the characteristics of the software project and the

requirements of the project team. Finally, umbrella activities—such as software quality assurance, software configuration

management, and measurement—overlay the process model. Umbrella activities are independent of any one framework activity

and occur throughout the process.
In recent years, there has been a significant emphasis on “process maturity.” The Software Engineering Institute (SEI) has

developed a comprehensive model predicated on a set of software engineering capabilities that should be present as

organizations reach different levels of process maturity. To determine an organization’s current state of process maturity,

the SEI uses an assessment that results in a five-point grading scheme. The grading scheme determines compliance with a capability maturity model (CMM) that defines key activities required at different levels of process maturity (Initial, Repeatable, Defined, Managed, and Optimizing).
The SEI has associated key process areas (KPAs) with each of the maturity levels. The KPAs describe those software

engineering functions (e.g., software project planning, requirements management) that must be present to satisfy good

practice at a particular level. Each KPA is described by identifying the following characteristics (a small illustrative sketch follows the list):
1.       Goals: The overall objectives that the KPA must achieve.
2.       Commitments: Requirements (imposed on the organization) that must be met to achieve the goals or provide proof of

intent to comply with the goals.
3.       Abilities: Those things that must be in place (organizationally and technically) to enable the organization to

meet the commitments.
4.       Activities: The specific tasks required to achieve the KPA function.
5.       Methods for monitoring implementation: The manner in which the activities are monitored as they are put into

place.
6.       Methods for verifying implementation: The manner in which proper practice for the KPA can be verified.
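
The six characteristics listed above amount to a small record that an assessor fills in for each KPA. A minimal Python sketch of such a record, assuming illustrative field contents for one Level 2 KPA (the example entries are paraphrased for illustration, not quoted from the CMM itself):

    # Minimal sketch: one KPA assessment kept as a structured record.
    # Field names mirror the six characteristics listed above; the example
    # KPA and its entries are purely illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class KeyProcessArea:
        name: str
        goals: List[str] = field(default_factory=list)
        commitments: List[str] = field(default_factory=list)
        abilities: List[str] = field(default_factory=list)
        activities: List[str] = field(default_factory=list)
        monitoring_methods: List[str] = field(default_factory=list)
        verification_methods: List[str] = field(default_factory=list)

    if __name__ == "__main__":
        planning = KeyProcessArea(
            name="Software Project Planning",
            goals=["Estimates are documented for planning and tracking"],
            commitments=["A project software manager is designated"],
            abilities=["A documented statement of work exists"],
            activities=["The project's software life cycle is identified"],
            monitoring_methods=["Planning status measurements are collected"],
            verification_methods=["Planning activities are reviewed by SQA"],
        )
        print(planning.name, "-", len(planning.goals), "goal(s) recorded")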
Software Applications

Software may be applied in any situation for which a prespecified set of procedural steps (i.e., an algorithm) has been

defined. Information content and determinacy are important factors in determining the nature of a software application.

Content refers to the meaning and form of incoming and outgoing information. Information determinacy refers to the

predictability of the order and timing of information. An engineering analysis program accepts data that have a predefined

order, executes the analysis algorithm(s) without interruption, and produces resultant data in report or graphical format.
The following software areas indicate the breadth of potential applications:
*      System software: System software is a collection of programs written to service other programs. Some system software processes less complex information structures (e.g., compilers, editors, and file management utilities); other system applications process more complex information structures (e.g., operating system components, drivers, telecommunications processors). In either case, the system software area is characterized by heavy interaction with computer hardware; heavy usage by multiple users; concurrent operation that requires scheduling, resource sharing, and sophisticated process management; complex data structures; and multiple external interfaces.
*      Real-time software: Software that monitors/analyzes/controls real-world events as they occur is called real time. Elements of real-time software include a data gathering component that collects and formats information from an external environment, an analysis component that transforms information as required by the application, a control/output component that responds to the external environment, and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained. A minimal sketch of these components appears after this list.
*      Business software: Business information processing is the largest single software application area. Discrete

"systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS)

software that accesses one or more large databases containing business information. Applications in this area restructure

existing data in a way that facilitates business operations or management decision making. In addition to conventional data

processing applications, business software applications also encompass interactive computing (e.g., point-of-sale

transaction processing).
*      Engineering and scientific software: Engineering and scientific software have been characterized by "number

crunching" algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle

orbital dynamics, and from molecular biology to automated manufacturing. However, modern applications within the

engineering/scientific area are moving away from conventional numerical algorithms. Computer-aided design, system

simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
*      Embedded software: Intelligent products have become commonplace in nearly every consumer and industrial market.

Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial

markets. Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or

provide significant function and control capability (e.g., digital functions in an automobile such as fuel control,

dashboard displays, and braking systems).
*      Personal computer software: The personal computer software market has burgeoned over the past two decades. Word

processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business

financial applications, external network, and database access are only a few of hundreds of applications.
*      Web-based software: The Web pages retrieved by a browser are software that incorporates executable instructions

(e.g., CGI, HTML, Perl, or Java), and data (e.g., hypertext and a variety of visual and audio formats). In essence, the

network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a

modem.
*      Artificial intelligence software: Artificial intelligence (AI) software makes use of nonnumerical algorithms to

solve complex problems that are not amenable to computation or straightforward analysis. Expert systems, also called

knowledge based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game

playing are representative of applications within this category.
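
To make the real-time software elements described in the list above concrete, here is a minimal Python sketch of one monitoring cycle. The sensor reading, the threshold, and the component function names are illustrative assumptions, not part of any particular system:

    # Illustrative sketch of the four real-time elements described above:
    # data gathering, analysis, control/output, and monitoring (the coordinator).
    import random
    import time

    def gather():                      # data gathering component
        return {"temperature": 20 + random.random() * 10}

    def analyze(sample):               # analysis component
        return "too_hot" if sample["temperature"] > 25 else "ok"

    def control(state):                # control/output component
        if state == "too_hot":
            print("actuator: cooling on")
        else:
            print("actuator: idle")

    def monitor(cycles=5, period_s=0.1):   # monitoring component coordinates the rest
        for _ in range(cycles):
            start = time.monotonic()
            control(analyze(gather()))
            elapsed = time.monotonic() - start
            time.sleep(max(0.0, period_s - elapsed))   # keep the response period bounded

    if __name__ == "__main__":
        monitor()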
Layered Technology

Software engineering is a layered technology. The foundation for software engineering is the process layer. Software

engineering process is the glue that holds the technology layers together and enables rational and timely development of

computer software. Process defines a framework for a set of key process areas that must be established for effective

delivery of software engineering technology. The key process areas form the basis for management control of software

projects and establish the context in which technical methods are applied, work products (models, documents, data, reports,

forms, etc.) are produced, milestones are established, quality is ensured, and change is properly managed.
Software engineering methods provide the technical how-to's for building software. Methods encompass a broad array of tasks

that include requirements analysis, design, program construction, testing, and support. Software engineering methods rely

on a set of basic principles that govern each area of the technology and include modeling activities and other descriptive

techniques.
Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are

integrated so that information created by one tool can be used by another, a system for the support of software

development, called computer-aided software engineering, is established. CASE combines software, hardware, and a software

engineering database (a repository containing important information about analysis, design, program construction, and

testing) to create a software engineering environment analogous to CAD/CAE (computer-aided design/engineering) for

hardware.
Software Engineering

Engineering is the analysis, design, construction, verification, and management of technical (or social) entities. The work

associated with software engineering can be categorized into three generic phases, regardless of application area, project

size, or complexity.
The definition phase focuses on what: That is, during definition, the software engineer attempts to identify what

information is to be processed, what function and performance are desired, what system behavior can be expected, what

interfaces are to be established, what design constraints exist, and what validation criteria are required to define a

successful system. The key requirements of the system and the software are identified. Three major tasks will occur in some

form: system or information engineering, software project planning, and requirements analysis.
The development phase focuses on how: That is, during development a software engineer attempts to define how data are to be

structured, how function is to be implemented within a software architecture, how procedural details are to be implemented,

how interfaces are to be characterized, how the design will be translated into a programming language (or nonprocedural

language), and how testing will be performed. The methods applied during the development phase will vary, but three

specific technical tasks should always occur: software design, code generation, and software testing.
The support phase focuses on change associated with error correction, adaptations required as the software's environment

evolves, and changes due to enhancements brought about by changing customer requirements. The support phase reapplies the

steps of the definition and development phases but does so in the context of existing software. Four types of change are

encountered during the support phase:
*      Correction: Even with the best quality assurance activities, it is likely that the customer will uncover defects in

the software. Corrective maintenance changes the software to correct defects.
*      Adaptation: Over time, the original environment (e.g., CPU, operating system, business rules, external product

characteristics) for which the software was developed is likely to change. Adaptive maintenance results in modification to

the software to accommodate changes to its external environment.
*      Enhancement: As software is used, the customer/user will recognize additional functions that will provide benefit.

Perfective maintenance extends the software beyond its original functional requirements.
*      Prevention: Computer software deteriorates due to change, and because of this, preventive maintenance, often called software reengineering, must be conducted to enable the software to serve the needs of its end users. In essence, preventive maintenance makes changes to computer programs so that they can be more easily corrected, adapted, and enhanced.
Software Crisis

Many industry observers have characterized the problems associated with software development as a "crisis."
The word crisis is defined in Webster's Dictionary as “a turning point in the course of anything; decisive or crucial time,

stage or event.” Yet, in terms of overall software quality and the speed with which computer-based systems and products are

developed, there has been no "turning point," no "decisive time," only slow, evolutionary change, punctuated by explosive

technological changes in disciplines associated with software.
The word crisis has another definition: "the turning point in the course of a disease, when it becomes clear whether the

patient will live or die." This definition may give us a clue about the real nature of the problems that have plagued

software development.
What we really have might be better characterized as a chronic affliction. The word affliction is defined as "anything

causing pain or distress." But the definition of the adjective chronic is the key to our argument: "lasting a long time or

recurring often; continuing indefinitely." It is far more accurate to describe the problems we have endured in the software

business as a chronic affliction than a crisis.
Regardless of what we call it, the set of problems that are encountered in the development of computer software is not

limited to software that "doesn't function properly." Rather, the affliction encompasses problems associated with how we

develop software, how we support a growing volume of existing software, and how we can expect to keep pace with a growing

demand for more software.
Software Development Paradigms

To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a development

strategy that encompasses the process, methods, and tools layers. This strategy is often referred to as a process model or a

software engineering paradigm. A process model for software engineering is chosen based on the nature of the project and

application, the methods and tools to be used, and the controls and deliverables that are required.
All software development can be characterized as a problem solving loop in which four distinct stages are encountered:

status quo, problem definition, technical development, and solution integration. Status quo “represents the current state

of affairs”; problem definition identifies the specific problem to be solved; technical development solves the problem

through the application of some technology, and solution integration delivers the results (e.g., documents, programs, data,

new business function, new product) to those who requested the solution in the first place.

The Build and Fix Process Life Cycle Model

In this simplest model of software development, the product is constructed from minimal requirements, with no specifications and without any real attempt at design or systematic testing. This is a fair representation of what happens in many software development projects. Note that this way of working is not only a counter-example; it has benefits in some situations.
Advantages

*      Cost efficient for very small projects of limited complexity.
Disadvantages

*      An unsatisfactory approach for products of reasonable size.
*      Cost is higher for larger projects.
*      The product is usually not delivered on time.
*      Often results in a product of overall low quality.
*      No documentation is produced.
*      Maintenance can be extremely difficult without specification and design document.
The Waterfall Process Life Cycle Model

Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic,

sequential approach to software development that begins at the system level and progresses through analysis, design,

coding, testing, and support. The linear sequential model encompasses the following activities:
1.       System/information engineering and modeling: Because software is always part of a larger system (or business),

work begins by establishing requirements for all system elements and then allocating some subset of these requirements to

software. Information engineering encompasses requirements gathering at the strategic business level and at the business

area level.
2.       Software requirements analysis: The requirements gathering process is intensified and focused specifically on

software. To understand the nature of the programs to be built, the software engineer must understand the information

domain for the software, as well as required function, behavior, performance, and interface.
3.       Design: Software design is actually a multistep process that focuses on four distinct attributes of a program:

data structure, software architecture, interface representations, and procedural (algorithmic) detail. The design process

translates requirements into a representation of the software that can be assessed for quality before coding begins. Like

requirements, the design is documented and becomes part of the software configuration.
4.       Code generation: The design must be translated into a machine-readable form. The code generation step performs

this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically.
5.       Testing: Once code has been generated, program testing begins. The testing process focuses on the logical

internals of the software, ensuring that all statements have been tested, and on the functional externals; that is,

conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required

results.
6.       Support: Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is

embedded software). Change will occur because errors have been encountered, because the software must be adapted to

accommodate changes in its external environment (e.g., a change required because of a new operating system or peripheral

device), or because the customer requires functional or performance enhancements. Software support/maintenance reapplies

each of the preceding phases to an existing program rather than a new one.
Among the problems that are sometimes encountered when the linear sequential model is applied are:

*      Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate

iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
*      It is often difficult for the customer to state all requirements explicitly. The linear sequential model requires

this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
*      The customer must have patience. A working version of the programs will not be available until late in the project

time-span. A major blunder, if undetected until the working program is reviewed, can be disastrous.
The Prototyping Process Life Cycle Model

The basic idea here is that instead of freezing the requirements before a design or coding can proceed, a throwaway

prototype is built to understand the requirements. This prototype is developed based on the currently known requirements.

Development of the prototype obviously undergoes design, coding and testing. But each of these phases is not done very

formally or thoroughly. By using this prototype, the client can get an "actual feel" of the system, since the interactions with the prototype can enable the client to better understand the requirements of the desired system.
Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. In such situations, letting the client "play" with the prototype provides invaluable and intangible inputs that help in determining the requirements for the system. It is also an effective method to demonstrate the feasibility of a certain approach. This might be needed for novel systems where it is not clear that the constraints can be met or that algorithms can be developed to implement the requirements. The process model of the prototyping approach is shown in the figure below.
The basic reason for the limited use of prototyping is the cost involved in this build-it-twice approach. However, some argue that prototyping need not be very costly and can actually reduce the overall development cost. The prototype is usually not a complete system, and many of the details are not built into the prototype. The goal is to provide a system with overall functionality. In addition, the cost of testing and writing detailed documents is reduced. These factors help to reduce the cost of developing the prototype. On the other hand, the experience of developing the prototype will be very useful for developers when developing the final system. This experience helps to reduce the cost of development of the final system and results in a more reliable and better designed system.
Advantages of Prototyping

*      Users are actively involved in the development
*      It provides a better system to users, as users have a natural tendency to change their minds when specifying requirements, and this method of developing systems supports that tendency.
*      Since in this methodology a working model of the system is provided, the users get a better understanding of the system being developed.
*      Errors can be detected much earlier, as the system is made side by side.
*      Quicker user feedback is available leading to better solutions.
Disadvantages

*      Leads to an implement-and-then-repair way of building systems.
*      Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond the original plans.
The Iterative Enhancement Process Life Cycle Model

The iterative enhancement life cycle model tries to combine the benefits of both prototyping and the waterfall model. The

basic idea is that the software should be developed in increments, where each increment adds some functional capability to

the system until the full system is implemented.
A common mistake is to consider "iterative" and "incremental" as synonyms, which they are not. In software/systems

development, however, they typically go hand in hand. The basic idea is to develop a system through repeated cycles

(iterative) and in smaller portions at a time (incremental), allowing the developer to take advantage of what was learned

during the development of earlier portions or versions of the system.
At each step, extensions and design modifications can be made. An advantage of this approach is that it can result in better testing, since testing each increment is likely to be easier than testing the entire system, as in the waterfall model. Furthermore, as in prototyping, the increments provide feedback to the client, which is useful for determining the final requirements of the system.
In the first step of the iterative enhancement model, a simple initial implementation is done for a subset of the overall problem. This subset is the one that contains some of the key aspects of the problem, which are easy to understand and implement, and which forms a useful and usable system. A project control list is created which contains, in order, all the tasks that must be performed to obtain the final implementation. This project control list gives an idea of how far the project is at any given step from the final system.

Each step consists of removing the next task from the list. The process is iterated until the project control list is empty, at which time the final implementation of the system will be available. The process involved in the iterative enhancement model is shown in the figure below.

The project control list guides the iteration steps and keeps track of all tasks that must be done. The tasks in the list can include redesign of defective components found during analysis. Each entry in that list is a task that should be performed in one step of the iterative enhancement process, and should be simple enough to be completely understood. Selecting tasks in this manner will minimize the chances of errors and reduce the redesign work.
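
A minimal Python sketch of the project control list mechanics described above; the task names and the work done per iteration are illustrative assumptions:

    # Iterative enhancement sketch: work through an ordered project control list,
    # one task per iteration, until the list is empty.
    from collections import deque

    project_control_list = deque([
        "implement core inventory lookup",       # illustrative tasks only
        "add daily usage report",
        "redesign defective report formatter",   # redesign work can be queued too
    ])

    def perform(task):
        # stand-in for the design/implement/analyze work done in one iteration
        print("iteration completed:", task)

    while project_control_list:                  # iterate until the list is empty
        perform(project_control_list.popleft())  # remove the next task from the list

    print("final implementation available")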
The Evolutionary Process Life Cycle Models

The linear sequential model is designed for straight-line development. In essence, this waterfall approach assumes that a

complete system will be delivered after the linear sequence is completed. The prototyping model is designed to assist the

customer (or developer) in understanding requirements. In general, it is not designed to deliver a production system.
The evolutionary nature of software is not considered in either of these classic software engineering paradigms.

Evolutionary models are iterative. They are characterized in a manner that enables software engineers to develop

increasingly more complete versions of the software.
Many models have been proposed in the evolutionary category, such as the iterative model, the incremental model, and the spiral model.
The Incremental Process Life Cycle Model

The incremental model combines elements of the linear sequential model with the iterative philosophy of prototyping. Each

linear sequence produces a deliverable “increment” of the software.
When an incremental model is used, the first increment is often a core product. That is, basic requirements are addressed,

but many supplementary features remain undelivered. The core product is used by the customer. As a result of evaluation, a

plan is developed for the next increment. The plan addresses the modification of the core product to better meet the needs

of the customer and the delivery of additional features and functionality. This process is repeated following the delivery

of each increment, until the complete product is produced.
The incremental process model, like prototyping and other evolutionary approaches, is iterative in nature. But unlike

prototyping, the incremental model focuses on the delivery of an operational product with each increment. Early increments

are stripped down versions of the final product, but they do provide capability that serves the user and also provide a

platform for evaluation by the user.
Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business

deadline that has been established for the project. Early increments can be implemented with fewer people. If the core

product is well received, then additional staff (if required) can be added to implement the next increment. In addition,

increments can be planned to manage technical risks. For example, a major system might require the availability of new

hardware that is under development and whose delivery date is uncertain. It might be possible to plan early increments in a

way that avoids the use of this hardware, thereby enabling partial functionality to be delivered to end-users without

inordinate delay.

Incremental Development

Incremental development slices the system functionality into increments (portions). In each increment, a slice of

functionality is delivered through cross-discipline work, from the requirements to the deployment. The unified process

groups increments/iterations into phases: inception, elaboration, construction, and transition.
*      Inception: identifies project scope, risks, and requirements (functional and non-functional) at a high level but in

enough detail that work can be estimated.
*      Elaboration: delivers a working architecture that mitigates the top risks and fulfills the non-functional

requirements.
*      Construction: incrementally fills in the architecture with production-ready code produced from analysis, design,

implementation, and testing of the functional requirements.
*      Transition: delivers the system into the production operating environment.
The Spiral Process Life Cycle Model

This is a recent model that has been proposed by Boehm. As the name suggests, the activities in this model can be organized

like a spiral. The spiral has many cycles. The radial dimension represents the cumulative cost incurred in accomplishing

the steps done so far and the angular dimension represents the progress made in completing each cycle of the spiral. The

structure of the spiral model is shown in the figure given below. Each cycle in the spiral begins with the identification of the objectives for that cycle, the different alternatives that are possible for achieving those objectives, and the constraints imposed.
The next step in the spiral life cycle model is to evaluate these different alternatives based on the objectives and constraints. This also involves identifying the uncertainties and risks involved. The next step is to develop strategies that resolve the uncertainties and risks; this step may involve activities such as benchmarking, simulation, and prototyping. Next, the software is developed, keeping in mind the risks. Finally, the next stage is planned.
The next step is determined by the remaining risks. For example, if performance or user-interface risks are considered more important than program development risks, the next step may be evolutionary development that involves developing a more detailed prototype for resolving the risks. On the other hand, if the program development risks dominate and previous prototypes have resolved all the user-interface and performance risks, the next step will follow the basic waterfall approach.
The risk driven nature of the spiral model allows it to accommodate any mixture of specification-oriented,

prototype-oriented, simulation-oriented or some other approach. An important feature of the model is that each cycle of the

spiral is completed by a review, which covers all the products developed during that cycle, including plans for the next

cycle. The spiral model works for development as well as enhancement projects.
Spiral Model Description

The development spiral consists of four quadrants as shown in the figure above:
Quadrant 1: Determine Objectives, Alternatives, and Constraints

Activities performed in this quadrant include:
1.       Establish an understanding of the system or product objectives: namely performance, functionality, and ability to

accommodate change.
2.       Investigate implementation alternatives: namely design, reuse, procure, and procure/modify.
3.       Investigate constraints imposed on the alternatives: namely technology, cost, schedule, support, and risk. Once

the system or product’s objectives, alternatives, and constraints are understood, Quadrant 2 (Evaluate alternatives,

identify, and resolve risks) is performed.
Quadrant 2: Evaluate Alternatives, Identify, Resolve Risks

Engineering activities performed in this quadrant select an alternative approach that best satisfies technical, technology,

cost, schedule, support, and risk constraints. The focus here is on risk mitigation. Each alternative is investigated and

prototyped to reduce the risk associated with the development decisions. Boehm describes these activities as follows:
This may involve prototyping, simulation, benchmarking, reference checking, administering user questionnaires, analytic

modeling, or combinations of these and other risk resolution techniques.
The outcome of the evaluation determines the next course of action. If critical operational and/or technical issues

(COIs/CTIs) such as performance and interoperability (i.e., external and internal) risks remain, more detailed prototyping

may need to be added before progressing to the next quadrant. Dr. Boehm notes that if the alternative chosen is

“operationally useful and robust enough to serve as a low-risk base for future product evolution, the subsequent

risk-driven steps would be the evolving series of evolutionary prototypes going toward the right (hand side of the graphic); the option of writing specifications would be addressed but not exercised.” This brings us to Quadrant 3.
Quadrant 3: Develop, Verify, Next-Level Product

If a determination is made that the previous prototyping efforts have resolved the COIs/CTIs, activities to develop,

verify, next-level product are performed. As a result, the basic “waterfall” approach may be employed—meaning concept of

operations, design, development, integration, and test of the next system or product iteration. If appropriate, incremental

development approaches may also be applicable.
Quadrant 4: Plan Next Phases

The spiral development model has one characteristic that is common to all models—the need for advanced technical planning

and multidisciplinary reviews at critical staging or control points. Each cycle of the model culminates with a technical

review that assesses the status, progress, maturity, merits, and risk of development efforts to date; resolves critical

operational and/or technical issues (COIs/CTIs); and reviews plans and identifies COIs/CTIs to be resolved for the next

iteration of the spiral.
Subsequent implementations of the spiral may involve lower level spirals that follow the same quadrant paths and decision

considerations.
The Concurrent Process Development Life Cycle Model

The concurrent process model can be represented schematically as a series of major technical activities, tasks, and their

associated states. For example, the engineering activity defined for the spiral model (Section 2.7.2) is accomplished by

invoking the following tasks: prototyping and/or analysis modeling, requirements specification, and design.
Figure below provides a schematic representation of one activity with the concurrent process model. The

activity—analysis—may be in any one of the states noted at any given time. Similarly, other activities (e.g., design or

customer communication) can be represented in an analogous manner. All activities exist concurrently but reside in

different states.
For example, early in a project the customer communication activity has completed its first iteration and exists in the

awaiting changes state. The analysis activity now makes a transition into the under development state. If, however, the

customer indicates that changes in requirements must be made, the analysis activity moves from the under development state

into the awaiting changes state.
The concurrent process model defines a series of events that will trigger transitions from state to state for each of the

software engineering activities. For example, during early stages of design, an inconsistency in the analysis model is

uncovered. This generates the event analysis model correction which will trigger the analysis activity from the done state

into the awaiting changes state. The concurrent process model is often used as the paradigm for the development of

client/server applications.
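
The state/event behavior described above can be viewed as a small state machine per activity. A minimal Python sketch using the state names from the text; the set of events and allowed transitions shown here is an illustrative assumption, not the complete model:

    # Sketch of one concurrent-model activity (e.g., analysis) moving between
    # states in response to events; only a few states/events are shown.
    TRANSITIONS = {
        ("under development", "analysis complete"): "done",
        ("done", "analysis model correction"): "awaiting changes",
        ("awaiting changes", "change accepted"): "under development",
    }

    class Activity:
        def __init__(self, name, state="under development"):
            self.name, self.state = name, state

        def on_event(self, event):
            new_state = TRANSITIONS.get((self.state, event))
            if new_state:
                print(f"{self.name}: {self.state} -> {new_state} on '{event}'")
                self.state = new_state

    analysis = Activity("analysis")
    analysis.on_event("analysis complete")           # moves to the done state
    analysis.on_event("analysis model correction")   # back to awaiting changes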
When applied to client/server, the concurrent process model defines activities in two dimensions: a system dimension and a

component dimension. System level issues are addressed using three activities: design, assembly, and use. The component

dimension is addressed with two activities: design and realization. Concurrency is achieved in two ways: (1) system and

component activities occur simultaneously and can be modeled using the state-oriented approach described previously; (2) a

typical client/server application is implemented with many components, each of which can be designed and realized

concurrently.
In reality, the concurrent process model is applicable to all types of software development and provides an accurate

picture of the current state of a project. Rather than confining software engineering activities to a sequence of events,

it defines a network of activities. Each activity on the network exists simultaneously with other activities. Events

generated within a given activity or at some other place in the activity network trigger transitions among the states of an

activity.
The System Specification

In the context of computer-based systems (and software), the term specification means different things to different people.

A specification can be a written document, a graphical model, a formal mathematical model, a collection of usage scenarios,

a prototype, or any combination of these.
Some suggest that a “standard template” should be developed and used for a system specification, arguing that this leads to

requirements that are presented in a consistent and therefore more understandable manner. However, it is sometimes

necessary to remain flexible when a specification is to be developed. For large systems, a written document, combining

natural language descriptions and graphical models may be the best approach. However, usage scenarios may be all that are

required for smaller products or systems that reside within well-understood technical environments.
The System Specification is the final work product produced by the system and requirements engineer. It serves as the

foundation for hardware engineering, software engineering, database engineering, and human engineering. It describes the

function and performance of a computer-based system and the constraints that will govern its development. The specification

bounds each allocated system element. The System Specification also describes the information (data and control) that is

input to and output from the system.
The Software Requirements Specification (SRS)

The Software Requirements Specification is produced at the culmination of the analysis task. The function and performance

allocated to software as part of system engineering are refined by establishing a complete information description, a

detailed functional description, a representation of system behavior, an indication of performance requirements and design

constraints, appropriate validation criteria, and other information pertinent to requirements. The National Bureau of

Standards, IEEE (Standard No. 830-1984), and the U.S. Department of Defense have all proposed candidate formats for

software requirements specifications (as well as other software engineering documentation).
The Introduction of the software requirements specification states the goals and objectives of the software, describing it

in the context of the computer-based system. Actually, the Introduction may be nothing more than the software scope of the

planning document.
The Information Description provides a detailed description of the problem that the software must solve. Information

content, flow, and structure are documented. Hardware, software, and human interfaces are described for external system

elements and internal software functions.
A description of each function required to solve the problem is presented in the Functional Description. A processing

narrative is provided for each function, design constraints are stated and justified, performance characteristics are

stated, and one or more diagrams are included to graphically represent the overall structure of the software and interplay

among software functions and other system elements. The Behavioral Description section of the specification examines the

operation of the software as a consequence of external events and internally generated control characteristics.
Validation Criteria is probably the most important and, ironically, the most often neglected section of the Software

Requirements Specification. How do we recognize a successful implementation? What classes of tests must be conducted to

validate function, performance, and constraints? We neglect this section because completing it demands a thorough

understanding of software requirements—something that we often do not have at this stage. Yet, specification of validation

criteria acts as an implicit review of all other requirements. It is essential that time and attention be given to this

section.
Finally, the specification includes a Bibliography and Appendix. The bibliography contains references to all documents that

relate to the software. These include other software engineering documentation, technical references, vendor literature,

and standards. The appendix contains information that supplements the specifications. Tabular data, detailed description of

algorithms, charts, graphs, and other material are presented as appendixes.
Requirements Validation

The work products produced as a consequence of requirements engineering (a system specification and related information)

are assessed for quality during a validation step. Requirements validation examines the specification to ensure that all

system requirements have been stated unambiguously; that inconsistencies, omissions, and errors have been detected and

corrected; and that the work products conform to the standards established for the process, the project, and the product.
The primary requirements validation mechanism is the formal technical review. The review team includes system engineers,

customers, users, and other stakeholders who examine the system specification looking for errors in content or

interpretation, areas where clarification may be required, missing information, inconsistencies (a major problem when large

products or systems are engineered), conflicting requirements, or unrealistic (unachievable) requirements.
Although the requirements validation review can be conducted in any manner that results in the discovery of requirements

errors, it is useful to examine each requirement against a set of checklist questions.
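
A minimal Python sketch of checklist-driven review of requirements; the two sample questions and the requirement records are illustrative assumptions only:

    # Review each requirement against a small set of checklist questions and
    # report the ones that need clarification. Questions and data are illustrative.
    requirements = [
        {"id": "F09", "text": "The system shall produce a daily inventory report.", "source": "customer"},
        {"id": "F10", "text": "The system shall be fast.", "source": ""},
    ]

    checklist = [
        ("Is the source of the requirement identified?", lambda r: bool(r["source"])),
        ("Is the requirement stated without vague terms?", lambda r: "fast" not in r["text"].lower()),
    ]

    for req in requirements:
        for question, check in checklist:
            if not check(req):
                print(f"{req['id']}: fails check -> {question}")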
Requirements Management

Requirements management is a set of activities that help the project team to identify, control, and track requirements and

changes to requirements at any time as the project proceeds. Many of these activities are identical to software

configuration management techniques.
Requirements management begins with identification. Each requirement is assigned a unique identifier that might take the form <requirement type><requirement #>, where requirement type takes on values such as F = functional requirement, D = data requirement, B = behavioral requirement, I = interface requirement, and P = output requirement. Hence, a requirement identified as F09 indicates a functional requirement assigned requirement number 9.
Once requirements have been identified, traceability tables are developed. Each traceability table relates identified

requirements to one or more aspects of the system or its environment. Among many possible traceability tables are the

following:
*      Features traceability table: Shows how requirements relate to important customer observable system/product features.
*      Source traceability table: Identifies the source of each requirement.
*      Dependency traceability table: Indicates how requirements are related to one another.
*      Subsystem traceability table: Categorizes requirements by the subsystem(s) that they govern.
*      Interface traceability table: Shows how requirements relate to both internal and external system interfaces.
In many cases, these traceability tables are maintained as part of a requirements database so that they may be quickly

searched to understand how a change in one requirement will affect different aspects of the system to be built.
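
A minimal Python sketch of the identification and traceability ideas above: parsing an identifier such as F09 and querying a dependency traceability table kept as a small in-memory requirements database. The stored requirements and table contents are illustrative assumptions:

    # Requirement identifiers of the form <requirement type><requirement #>,
    # plus a tiny dependency traceability table; all entries are illustrative.
    REQUIREMENT_TYPES = {"F": "functional", "D": "data", "B": "behavioral",
                         "I": "interface", "P": "output"}

    def parse_identifier(req_id):
        """F09 -> ('functional', 9)"""
        return REQUIREMENT_TYPES[req_id[0]], int(req_id[1:])

    # dependency traceability table: requirement -> requirements it depends on
    dependency_table = {
        "F09": ["D02", "I01"],
        "F10": ["F09"],
    }

    def impacted_by(req_id):
        """Requirements whose dependencies include req_id (impact of a change)."""
        return [r for r, deps in dependency_table.items() if req_id in deps]

    print(parse_identifier("F09"))   # ('functional', 9)
    print(impacted_by("F09"))        # ['F10']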
Requirements Analysis

Software requirements analysis may be divided into five areas of effort: (1) problem recognition, (2) evaluation and

synthesis, (3) modeling, (4) specification, and (5) review.
It is important to understand software in a system context and to review the software scope that was used to generate

planning estimates. Next, communication for analysis must be established so that problem recognition is ensured. The goal

is recognition of the basic problem elements as perceived by the customer/users.
Problem evaluation and solution synthesis is the next major area of effort for analysis. The analyst must define all

externally observable data objects, evaluate the flow and content of information, define and elaborate all software

functions, understand software behavior in the context of events that affect the system, establish system interface

characteristics, and uncover additional design constraints. Each of these tasks serves to describe the problem so that an

overall approach or solution may be synthesized.
Once problems have been identified, the analyst determines what information is to be produced by the new system and what

data will be provided to the system. For instance, the customer desires a daily report that indicates what parts have been

taken from inventory and how many similar parts remain. The customer indicates that inventory clerks will log the

identification number of each part as it leaves the inventory area.
Upon evaluating current problems and desired information (input and output), the analyst begins to synthesize one or more

solutions. To begin, the data objects, processing functions, and behavior of the system are defined in detail. Once this

information has been established, basic architectures for implementation are considered.
A client/server approach would seem to be appropriate, but does the software to support this architecture fall within the

scope outlined in the Software Plan? A database management system would seem to be required, but is the user/customer's

need for associativity justified? The process of evaluation and synthesis continues until both analyst and customer feel

confident that software can be adequately specified for subsequent development steps.
Throughout evaluation and solution synthesis, the analyst's primary focus is on "what," not "how." What data does the

system produce and consume, what functions must the system perform, what behaviors does the system exhibit, what interfaces

are defined and what constraints apply?
During the evaluation and solution synthesis activity, the analyst creates models of the system in an effort to better

understand data and control flow, functional processing, operational behavior, and information content. The model serves as

a foundation for software design and as the basis for the creation of specifications for the software.
Analysis Modeling

The analysis model must achieve three primary objectives: (1) to describe what the customer requires, (2) to establish a

basis for the creation of a software design, and (3) to define a set of requirements that can be validated once the

software is built.
At the core of the model lies the data dictionary—a repository that contains descriptions of all data objects consumed or

produced by the software. Three different diagrams surround the core. The entity relation diagram (ERD) depicts

relationships between data objects. The ERD is the notation that is used to conduct the data modeling activity. The

attributes of each data object noted in the ERD can be described using a data object description.
The data flow diagram (DFD) serves two purposes: (1) to provide an indication of how data are transformed as they move

through the system and (2) to depict the functions (and subfunctions) that transform the data flow. The DFD provides

additional information that is used during the analysis of the information domain and serves as a basis for the modeling of

function. A description of each function presented in the DFD is contained in a process specification (PSPEC).
The state transition diagram (STD) indicates how the system behaves as a consequence of external events. To accomplish

this, the STD represents the various modes of behavior (called states) of the system and the manner in which transitions

are made from state to state. The STD serves as the basis for behavioral modeling. Additional information about the control

aspects of the software is contained in the control specification (CSPEC).
Entity/Relationship Diagrams

The object/relationship pair can be represented graphically using the entity/relationship diagram. A set of primary

components are identified for the ERD: data objects, attributes, relationships, and various type indicators. The primary

purpose of the ERD is to represent data objects and their relationships.
Data objects are represented by a labeled rectangle. Relationships are indicated with a labeled line connecting objects. In

some variations of the ERD, the connecting line contains a diamond that is labeled with the relationship. Connections

between data objects and relationships are established using a variety of special symbols that indicate cardinality and

modality. In addition to the basic ERD notation the analyst can represent data object type hierarchies. ERD notation also

provides a mechanism that represents the associativity between objects. Data modeling and the entity relationship diagram

provide the analyst with a concise notation for examining data within the context of a software application. In most cases,

the data modeling approach is used to create one piece of the analysis model, but it can also be used for database design

and to support any other requirements analysis methods.
*      Data objects: A data object is a representation of almost any composite information that must be understood by

software. By composite information, we mean something that has a number of different properties or attributes.
*      Attributes: Attributes define the properties of a data object and take on one of three different characteristics.

They can be used to (1) name an instance of the data object, (2) describe the instance, or (3) make reference to another

instance in another table.
*      Relationships: Data objects are connected to one another in different ways.
*      Cardinality: The data model must be capable of representing the number of occurrences of objects in a given relationship. Tillmann defines the cardinality of an object/relationship pair in the following manner:
Cardinality is the specification of the number of occurrences of one [object] that can be related to the number of occurrences of another [object]. Cardinality is usually expressed as simply 'one' or 'many.' Taking into consideration all combinations of 'one' and 'many,' two [objects] can be related as:
One-to-one (1:1): An occurrence of [object] 'A' can relate to one and only one occurrence of [object] 'B,' and an occurrence of 'B' can relate to only one occurrence of 'A.'
One-to-many (1:M): One occurrence of [object] 'A' can relate to one or many occurrences of [object] 'B,' but an occurrence of 'B' can relate to only one occurrence of 'A.'
Many-to-many (M:M): An occurrence of [object] 'A' can relate to one or more occurrences of 'B,' while an occurrence of 'B' can relate to one or more occurrences of 'A.'
*      Modality: The modality of a relationship is 0 if there is no explicit need for the relationship to occur or the

relationship is optional. The modality is 1 if an occurrence of the relationship is mandatory.
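
A minimal Python sketch of how the ERD notions above (data objects, a relationship, cardinality, and modality) might be recorded as plain data structures; the customer/order example is an illustrative assumption:

    # Tiny ERD-as-data sketch: data objects with attributes, and one relationship
    # annotated with cardinality and modality. The example content is illustrative.
    from dataclasses import dataclass

    @dataclass
    class DataObject:
        name: str
        attributes: tuple          # names, descriptions, or references

    @dataclass
    class Relationship:
        label: str
        source: DataObject
        target: DataObject
        cardinality: str           # "1:1", "1:M", or "M:M"
        modality: int              # 0 = optional, 1 = mandatory

    customer = DataObject("customer", ("customer_id", "name"))
    order = DataObject("order", ("order_id", "date"))

    places = Relationship("places", customer, order, cardinality="1:M", modality=0)
    print(f"{places.source.name} {places.label} {places.target.name} "
          f"[{places.cardinality}, modality={places.modality}]")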

Data Flow Model

In Data Flow Diagrams a rectangle is used to represent an external entity; that is, a system element (e.g., hardware, a

person, and another program) or another system that produces information for transformation by the software or receives

information produced by the software. A circle (sometimes called a bubble) represents a process or transform that is

applied to data (or control) and changes it in some way. An arrow represents one or more data items (data objects). All

arrows on a data flow diagram should be labeled.
The double line represents a data store—stored information that is used by the software. As information moves through

software, it is modified by a series of transformations. A data flow diagram is a graphical representation that depicts

information flow and the transforms that are applied as data move from input to output. The basic form of a data flow

diagram, also known as a data flow graph or a bubble chart, is illustrated in Figure below.
A fundamental model for system F indicates the primary input is A and ultimate output is B. We refine the F model into

transforms f1 to f7. Note that information flow continuity must be maintained; that is, input and output to each refinement

must remain the same. This concept, sometimes called balancing, is essential for the development of consistent models.

Further refinement of f4 depicts detail in the form of transforms f41 to f45. Again, the input (X, Y) and output (Z) remain

unchanged.
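
The balancing rule (information flow continuity) can be checked mechanically. A minimal Python sketch using the A/B labels from the text; the particular three-bubble refinement recorded here is a simplified, illustrative assumption:

    # Sketch of information-flow continuity ("balancing"): the external inputs and
    # outputs of a refinement must match those of the bubble it refines.
    def external_flows(processes):
        """Given {process: (inputs, outputs)}, return flows crossing the boundary."""
        produced = {o for _, outs in processes.values() for o in outs}
        consumed = {i for ins, _ in processes.values() for i in ins}
        return consumed - produced, produced - consumed   # (external inputs, outputs)

    # Level 0: system F transforms A into B.
    f = {"F": ({"A"}, {"B"})}

    # Simplified level 1 refinement of F; A and B still cross the boundary.
    f_refined = {
        "f1": ({"A"}, {"X"}),
        "f2": ({"X"}, {"Y"}),
        "f3": ({"Y"}, {"B"}),
    }

    print(external_flows(f) == external_flows(f_refined))   # True -> balanced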
Control Flow Model

The CFD contains the same processes as the DFD, but shows control flow, rather than data flow. Instead of representing

control processes directly within the flow model, a notational reference (a solid bar) to a control specification (CSPEC)

is used. In essence, the solid bar can be viewed as a "window" into an "executive" (the CSPEC) that controls the processes

(functions) represented in the DFD based on the event that is passed through the window. The CSPEC is used to indicate (1)

how the software behaves when an event or control signal is sensed and (2) which processes are invoked as a consequence of

the occurrence of the event. A process specification is used to describe the inner workings of a process represented in a

flow diagram.
Data flow diagrams are used to represent data and the processes that manipulate it. Control flow diagrams show how events

flow among processes and illustrate those external events that cause various processes to be activated. The

interrelationship between the process and control models is shown schematically in Figure below. The process model is

"connected" to the control model through data conditions. The control model is "connected" to the process model through

process activation information contained in the CSPEC.
The Control Specification

The control specification (CSPEC) represents the behavior of the system (at the level from which it has been referenced) in two different ways. The CSPEC contains a state transition diagram that is a sequential specification of behavior. It can also contain a program activation table—a combinatorial specification of behavior.
The CSPEC describes the behavior of the system, but it gives us no information about the inner working of the processes that are activated as a result of this behavior.
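A minimal sketch of the two CSPEC views, using hypothetical events and process names (not taken from the text): the state transition diagram captures sequential behavior, while the activation table maps each event to the processes it invokes.

# State transition diagram: (current_state, event) -> next_state
STD = {
    ("idle",       "start_pressed"): "monitoring",
    ("monitoring", "alarm_event"):   "alarm",
    ("alarm",      "reset_pressed"): "idle",
}

# Process activation table: which DFD processes are invoked when an event occurs
PAT = {
    "start_pressed": ["monitor_sensors"],
    "alarm_event":   ["sound_alarm", "notify_operator"],
    "reset_pressed": ["reset_system"],
}

def handle_event(state, event):
    next_state = STD.get((state, event), state)   # sequential behavior
    activated = PAT.get(event, [])                 # combinatorial behavior
    return next_state, activated

print(handle_event("monitoring", "alarm_event"))
# ('alarm', ['sound_alarm', 'notify_operator'])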
The Process Specification

The process specification (PSPEC) is used to describe all flow model processes that appear at the final level of refinement. The content of the process specification can include narrative text, a program design language (PDL) description of the process algorithm, mathematical equations, tables, diagrams, or charts. By providing a PSPEC to accompany each bubble in the flow model, the software engineer creates a "minispec" that can serve as a first step in the creation of the Software Requirements Specification and as a guide for design of the software component that will implement the process.
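As an illustration, a PSPEC "minispec" for a hypothetical "validate password" bubble (the process and its rules are assumptions, not from the text) might combine narrative and a PDL-like algorithm; expressed in Python:

def validate_password(entered: str, stored: str) -> bool:
    """PSPEC (minispec) for the hypothetical 'validate password' process:
    reject passwords shorter than four characters, otherwise compare the
    entered value with the stored value and report valid/invalid."""
    if len(entered) < 4:          # restriction that would be recorded in the data dictionary
        return False
    return entered == stored

print(validate_password("secret", "secret"))   # True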


The Data Dictionary

The data dictionary is an organized listing of all data elements that are pertinent to the system, with precise, rigorous definitions so that both the user and the system analyst will have a common understanding of inputs, outputs, components of stores, and [even] intermediate calculations.
Today, the data dictionary is always implemented as part of a CASE "structured analysis and design tool." Although the format of dictionaries varies from tool to tool, most contain the following information:
*      Name: the primary name of the data or control item, the data store, or an external entity.
*      Alias: other names used for the first entry.
*      Where-used/how-used: a listing of the processes that use the data or control item and how it is used (e.g., input to the process, output from the process, as a store, as an external entity).
*      Content description: a notation for representing content.
*      Supplementary information: other information about data types, preset values (if known), restrictions or limitations, and so forth.
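A single entry might therefore look like the following sketch (the "password" item and its content notation are illustrative assumptions, not drawn from the text):

# Hypothetical data dictionary entry, held as a simple record.
password_entry = {
    "name": "password",
    "alias": "user code",
    "where_used": "validate-password (input); user-record (component of store)",
    "content": "password = 4{alphanumeric character}8",   # common notation: {...} denotes repetition
    "supplementary": "4 to 8 characters, no blanks, case sensitive",
}
print(password_entry["content"])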
Software Architecture

Software architecture means "the overall structure of the software and the ways in which that structure provides conceptual integrity for a system". In its simplest form, architecture is the hierarchical structure of program components, the manner in which these components interact, and the structure of data that are used by the components. In a broader sense, however, components can be generalized to represent major system elements and their interactions.
Shaw and Garlan describe a set of properties that should be specified as part of an architectural design:
Structural properties: This aspect of the architectural design representation defines the components of a system (e.g., modules, objects, filters) and the manner in which those components are packaged and interact with one another. For example, objects are packaged to encapsulate both data and the processing that manipulates the data and interact via the invocation of methods.
Extra-functional properties: The architectural design description should address how the design architecture achieves requirements for performance, capacity, reliability, security, adaptability, and other system characteristics.
Families of related systems: The architectural design should draw upon repeatable patterns that are commonly encountered in the design of families of similar systems. In essence, the design should have the ability to reuse architectural building blocks.
Given the specification of these properties, the architectural design can be represented using one or more of a number of different models. Structural models represent architecture as an organized collection of program components. Framework models increase the level of design abstraction by attempting to identify repeatable architectural design frameworks (patterns) that are encountered in similar types of applications. Dynamic models address the behavioral aspects of the program architecture, indicating how the structure or system configuration may change as a function of external events. Process models focus on the design of the business or technical process that the system must accommodate. Finally, functional models can be used to represent the functional hierarchy of a system.
A number of different architectural description languages (ADLs) have been developed to represent these models. Although many different ADLs have been proposed, the majority provide mechanisms for describing system components and the manner in which they are connected to one another.
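Although no particular ADL is prescribed here, the common idea (components plus the connectors that bind them) can be sketched as plain data; the pipe-and-filter names below are hypothetical:

# Sketch of what an ADL records: components and connectors (not real ADL syntax).
components = {"reader": "filter", "sorter": "filter", "writer": "filter"}
connectors = [("reader", "sorter", "pipe"), ("sorter", "writer", "pipe")]

def downstream(component):
    """Return the components this one is connected to."""
    return [dst for src, dst, kind in connectors if src == component]

print(downstream("reader"))   # ['sorter']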
Modular Design - Cohesion

Cohesion is a natural extension of the information hiding concept. A cohesive module performs a single task within a software procedure, requiring little interaction with procedures being performed in other parts of a program.
Cohesion may be represented as a "spectrum." We always strive for high cohesion, although the mid-range of the spectrum is often acceptable. The scale for cohesion is nonlinear. That is, low-end cohesiveness is much "worse" than middle range, which is nearly as "good" as high-end cohesion. In practice, a designer need not be concerned with categorizing cohesion in a specific module. Rather, the overall concept should be understood and low levels of cohesion should be avoided when modules are designed.
At the low (undesirable) end of the spectrum, we encounter a module that performs a set of tasks that relate to each other loosely, if at all. Such modules are termed coincidentally cohesive. A module that performs tasks that are related logically (e.g., a module that produces all output regardless of type) is logically cohesive. When a module contains tasks that are related by the fact that all must be executed within the same span of time, the module exhibits temporal cohesion.
Moderate levels of cohesion are relatively close to one another in the degree of module independence. When processing elements of a module are related and must be executed in a specific order, procedural cohesion exists. When all processing elements concentrate on one area of a data structure, communicational cohesion is present. High cohesion is characterized by a module that performs one distinct procedural task.
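As a small illustration (the functions are hypothetical), a coincidentally cohesive module lumps unrelated tasks behind one interface, while a highly cohesive module does one distinct task per unit:

# Low cohesion: one "do anything" routine handling unrelated tasks.
def misc(task, data):
    if task == "square_root":
        return data ** 0.5
    elif task == "print_report":
        print(data)
    elif task == "open_log":
        return open(data)

# High cohesion: each routine performs a single, well-defined task.
def square_root(x: float) -> float:
    return x ** 0.5

def print_report(report: str) -> None:
    print(report)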
Modular Design - Coupling

Coupling is a measure of interconnection among modules in a software structure. Coupling depends on the interface complexity between modules and on what data pass across the interface.
In software design, we strive for the lowest possible coupling. Simple connectivity among modules results in software that is easier to understand and less prone to a "ripple effect," caused when errors occur at one location and propagate through a system.
Figure below provides examples of different types of module coupling.
*      Modules A and D are subordinate to different modules. Each is unrelated and therefore no direct coupling occurs. Module C is subordinate to module A and is accessed via a conventional argument list, through which data are passed. As long as a simple argument list is present, low coupling (called data coupling) is exhibited in this portion of the structure.
*      A variation of data coupling, called stamp coupling, is found when a portion of a data structure (rather than simple arguments) is passed via a module interface. This occurs between modules B and A.
*      At moderate levels, coupling is characterized by the passage of control between modules. Control coupling is very common in most software designs and is shown in Figure below, where a "control flag" (a variable that controls decisions in a subordinate or superordinate module) is passed between modules D and E.
*      Relatively high levels of coupling occur when modules are tied to an environment external to software. For example, I/O couples a module to specific devices, formats, and communication protocols. External coupling is essential, but should be limited to a small number of modules within a structure.
*      High coupling also occurs when a number of modules reference a global data area. This is called common coupling. Modules C, G, and K each access a data item in a global data area (e.g., a disk file or a globally accessible memory area). Module C initializes the item. Later, module G recomputes and updates the item.
*      The highest degree of coupling, content coupling, occurs when one module makes use of data or control information maintained within the boundary of another module. Secondarily, content coupling occurs when branches are made into the middle of a module. This mode of coupling can and should be avoided.
*      The coupling modes just discussed occur because of design decisions made when the structure was developed. Variants of external coupling, however, may be introduced during coding. For example, compiler coupling ties source code to specific (and often nonstandard) attributes of a compiler; operating system (OS) coupling ties design and resultant code to operating system "hooks" that can create havoc when OS changes occur.
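The main coupling levels can be illustrated with short hypothetical fragments (the names are assumptions made for the sketch):

# Data coupling: only simple arguments cross the interface.
def compute_tax(amount: float, rate: float) -> float:
    return amount * rate

# Control coupling: a flag passed from the caller steers the callee's logic.
def print_report(records, summary_only: bool) -> None:
    if summary_only:
        print(len(records), "records")
    else:
        for r in records:
            print(r)

# Common coupling: separate routines communicate through a global data area.
SHARED = {"mode": None}

def initialize():            # like module C initializing the shared item
    SHARED["mode"] = "batch"

def recompute():             # like module G later updating the same item
    SHARED["mode"] = "interactive"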


Computer Aided Software Engineering

A good workshop for any software engineer has three primary characteristics: (1) a collection of useful tools that will help in every step of building a product, (2) an organized layout that enables tools to be found quickly and used efficiently, and (3) a skilled artisan who understands how to use the tools in an effective manner. Software engineers now recognize that they need more and varied tools along with an organized and efficient workshop in which to place the tools.
The workshop for software engineering has been called an integrated project support environment, and the tools that fill the workshop are collectively called computer-aided software engineering (CASE) tools.
CASE provides the software engineer with the ability to automate manual activities and to improve engineering insight. Like the computer-aided engineering and design tools that are used by engineers in other disciplines, CASE tools help to ensure that quality is designed in before the product is built.
Building Blocks of CASE

Computer-aided software engineering can be as simple as a single tool that supports a specific software engineering activity or as complex as a complete "environment" that encompasses tools, a database, people, hardware, a network, operating systems, standards, and myriad other components.
The building blocks for CASE are illustrated in Figure below. Each building block forms a foundation for the next, with tools sitting at the top of the heap.
The environment architecture, composed of the hardware platform and system support (including networking software, database management, and object management services), lays the groundwork for CASE. But the CASE environment itself demands other building blocks.
A set of portability services provides a bridge between CASE tools and their integration framework and the environment architecture. The integration framework is a collection of specialized programs that enables individual CASE tools to communicate with one another, to create a project database, and to exhibit the same look and feel to the end-user (the software engineer). Portability services allow CASE tools and their integration framework to migrate across different hardware platforms and operating systems without significant adaptive maintenance.
The building blocks depicted in Figure above represent a comprehensive foundation for the integration of CASE tools. However, most CASE tools in use today have not been constructed using all these building blocks. In fact, some CASE tools remain "point solutions." That is, a tool is used to assist in a particular software engineering activity (e.g., analysis modeling) but does not directly communicate with other tools, is not tied into a project database, and is not part of an integrated CASE environment (I-CASE). Although this situation is not ideal, a CASE tool can be used quite effectively, even if it is a point solution.
The relative levels of CASE integration are shown in Figure above. At the low end of the integration spectrum is the individual (point solution) tool. When individual tools provide facilities for data exchange (most do), the integration level is improved slightly. Such tools produce output in a standard format that should be compatible with other tools that can read the format. In some cases, the builders of complementary CASE tools work together to form a bridge between the tools. Using this approach, the synergy between the tools can produce end products that would be difficult to create using either tool separately. Single-source integration occurs when a single CASE tools vendor integrates a number of different tools and sells them as a package.
At the high end of the integration spectrum is the integrated project support environment (IPSE). Standards for each of the building blocks described previously have been created. CASE tool vendors use IPSE standards to build tools that will be compatible with the IPSE and therefore compatible with one another.
High-end and low-end CASE tools

Upper CASE tools support strategic planning and construction of concept-level products and ignore the design aspect. They support traditional diagrammatic languages such as ER diagrams, data flow diagrams, structure charts, decision trees, decision tables, etc.
Lower CASE tools concentrate on the back-end activities of the software life cycle, such as physical design, debugging, construction, testing, component integration, maintenance, reengineering, and reverse engineering.

Integrated CASE Environments

The benefits of integrated CASE (I-CASE) include (1) smooth transfer of information (models, programs, documents, data) from one tool to another and from one software engineering step to the next; (2) a reduction in the effort required to perform umbrella activities such as software configuration management, quality assurance, and document production; (3) an increase in project control that is achieved through better planning, monitoring, and communication; and (4) improved coordination among staff members who are working on a large software project.
But I-CASE also poses significant challenges. Integration demands consistent representations of software engineering information, standardized interfaces between tools, a homogeneous mechanism for communication between the software engineer and each tool, and an effective approach that will enable I-CASE to move among various hardware platforms and operating systems.
A software engineering team uses CASE tools, corresponding methods, and a process framework to create a pool of software engineering information. The integration framework facilitates transfer of information into and out of the pool. To accomplish this, the following architectural components must exist: a database must be created (to store the information); an object management system must be built (to manage changes to the information); a tools control mechanism must be constructed (to coordinate the use of CASE tools); and a user interface must provide a consistent pathway between actions made by the user and the tools contained in the environment.
A simple model of the framework is shown in Figure below:
*      The user interface layer incorporates a standardized interface tool kit with a common presentation protocol. The interface tool kit contains software for human/computer interface management and a library of display objects. Both provide a consistent mechanism for communication between the interface and individual CASE tools.
*      The presentation protocol is the set of guidelines that gives all CASE tools the same look and feel. Screen layout conventions, menu names and organization, icons, object names, the use of the keyboard and mouse, and the mechanism for tools access are all defined as part of the presentation protocol.
*      The tools layer incorporates a set of tools management services with the CASE tools themselves. Tools management services (TMS) control the behavior of tools within the environment. If multitasking is used during the execution of one or more tools, TMS performs multitask synchronization and communication, coordinates the flow of information from the repository and object management system into the tools, accomplishes security and auditing functions, and collects metrics on tool usage.
*      The object management layer (OML) performs the configuration management functions. In essence, software in this layer of the framework architecture provides the mechanism for tools integration. Every CASE tool is "plugged into" the object management layer. Working in conjunction with the CASE repository, the OML provides integration services—a set of standard modules that couple tools with the repository. In addition, the OML provides configuration management services by enabling the identification of all configuration objects, performing version control, and providing support for change control, audits, and status accounting.
*      The shared repository layer is the CASE database and the access control functions that enable the object management layer to interact with the database. Data integration is achieved by the object management and shared repository layers.
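A minimal sketch of how these layers stack (the class and method names are assumptions made for illustration, not a real framework API):

class SharedRepository:                 # CASE database plus access control
    def __init__(self):
        self._objects = {}
    def put(self, key, obj):
        self._objects[key] = obj
    def get(self, key):
        return self._objects.get(key)

class ObjectManagementLayer:            # configuration management / integration services
    def __init__(self, repo: SharedRepository):
        self.repo = repo
        self.versions = {}
    def check_in(self, key, obj):
        self.versions[key] = self.versions.get(key, 0) + 1
        self.repo.put((key, self.versions[key]), obj)   # version-controlled storage

class ToolsLayer:                       # tools management services plus the CASE tools
    def __init__(self, oml: ObjectManagementLayer):
        self.oml = oml
    def run_tool(self, name, artifact):
        self.oml.check_in(name, artifact)   # results flow through the OML into the repository

# The user interface layer would sit on top, invoking ToolsLayer.run_tool(...).
tools = ToolsLayer(ObjectManagementLayer(SharedRepository()))
tools.run_tool("analysis-model", {"dfd": "level-0"})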
Workbenches

Workbenches integrate several CASE tools into one application to support specific software-process activities. Hence they achieve:
*      A homogeneous and consistent interface (presentation integration).
*      Easy invocation of tools and tool chains (control integration).
*      Access to a common data set managed in a centralized way (data integration).
CASE workbenches can be further classified into the following eight classes:
1.       Business planning and modeling
2.       Analysis and design
3.       User-interface development
4.       Programming
5.       Verification and validation
6.       Maintenance and reverse engineering
7.       Configuration management
8.       Project management
Introduction to Testing Process

Testing is a process used to help identify the correctness, completeness, and quality of developed computer software. With that in mind, testing can never completely establish the correctness of computer software.
One definition of testing is "the process of questioning a product in order to evaluate it," where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product—putting the product through its paces.
Testing objectives include
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.
Structural and Functional Testing

Any engineered product (and most other things) can be tested in one of two ways: (1) Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function; (2) knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, internal operations are performed according to specifications and all internal components have been adequately exercised. The first test approach is called black-box testing and the second, white-box testing.
When computer software is considered, black-box testing alludes to tests that are conducted at the software interface. Although they are designed to uncover errors, black-box tests are used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., a database) is maintained. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. The "status of the program" may be examined at various points to determine if the expected or asserted status corresponds to the actual status.
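The contrast can be shown with a tiny hypothetical function: black-box cases come from the stated specification, white-box cases from the code's decision structure.

def absolute(x: int) -> int:
    if x < 0:
        return -x
    return x

# Black-box: derived from the specification only ("returns the magnitude of x").
assert absolute(5) == 5
assert absolute(-5) == 5
assert absolute(0) == 0

# White-box: derived from the internal structure, exercising the decision on
# both its true and false sides.
assert absolute(-1) == 1     # path where the condition x < 0 is true
assert absolute(1) == 1      # path where the condition x < 0 is false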
WHITE-BOX TESTING

White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that (1) guarantee that all independent paths within a module have been exercised at least once, (2) exercise all logical decisions on their true and false sides, (3) execute all loops at their boundaries and within their operational bounds, and (4) exercise internal data structures to ensure their validity.
Why white-box testing is needed in addition to black-box testing:
• Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or control that are out of the mainstream. Everyday processing tends to be well understood (and well scrutinized), while "special case" processing tends to fall into the cracks.
• We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counterintuitive, meaning that our unconscious assumptions about flow of control and data may lead us to make design errors that are uncovered only once path testing commences.
• Typographical errors are random. When a program is translated into programming language source code, it is likely that some typing errors will occur. Many will be uncovered by syntax and type-checking mechanisms, but others may go undetected until testing begins. It is as likely that a typo will exist on an obscure logical path as on a mainstream path.
BASIS PATH TESTING

Basis path testing is a white-box testing technique. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
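A small worked example (the function is hypothetical): with two decision (predicate) nodes, the cyclomatic complexity is V(G) = 2 + 1 = 3, so the basis set contains three independent paths and three test cases suffice to execute every statement at least once.

def classify(x: int) -> str:
    if x < 0:            # decision 1
        label = "negative"
    elif x == 0:         # decision 2
        label = "zero"
    else:
        label = "positive"
    return label

# One test case per basis path:
assert classify(-3) == "negative"   # decision 1 true
assert classify(0) == "zero"        # decision 1 false, decision 2 true
assert classify(7) == "positive"    # both decisions false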

Flow Graph Notation

