THE LINEAR SEQUENTIAL MODEL
- Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support.
- Although the original waterfall model proposed by Winston Royce made provision for feedback loops between phases, the vast majority of organizations that apply this process model treat it as if it were strictly linear.
- Modeled after a conventional engineering cycle, the linear sequential model encompasses the following activities:
- System/information engineering and modeling: Because software is always part of a larger system, work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software.
- This system view is essential when software must interact with other elements such as hardware, people, and databases. System engineering and analysis encompass requirements gathering at the system level with a small amount of top-level design and analysis.
- Information engineering encompasses requirements gathering at the strategic business level and at the business area level.
Software requirements analysis: The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer (“analyst”) must understand the information domain for the software, as well as required function, behavior, performance, and interface. Requirements for both the system and the software are documented and reviewed with the customer.
Design: Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural (algorithmic) detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding begins. Like requirements, the design is documented and becomes part of the software configuration.
Code generation: The design must be translated into a machine-readable form. The code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically.
Testing: Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.
Support: Software will undoubtedly undergo change after it is delivered to the customer. Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment, or because the customer requires functional or performance enhancements. Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.
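The testing activity described above can be illustrated with a minimal sketch. The function and its required results are hypothetical, chosen only to show the two concerns named in the text: exercising the logical internals (every branch) and checking the functional externals (defined input produces the required result).

```python
# Minimal sketch of the testing activity. The function and its
# required results are hypothetical, for illustration only.

def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Functional externals: defined inputs must produce required results.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 4) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"

# Logical internals: make sure the error branch is also executed.
try:
    classify_triangle(0, 1, 1)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for non-positive side")
```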
Type of project in which this model is used: This model is suited to well-understood problems, short-duration projects, and the automation of existing manual systems.
Advantages: It is simple, easy to execute, and intuitive and logical.
Limitations of the linear sequential model:
- Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly.
- It is often difficult for the customer to state all requirements explicitly. The linear sequential model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
- The customer must have patience. A working version of the program(s) will not be available until late in the project time-span.
- It is a document driven process that requires formal documents at the end of each phase.
- It follows the “big-bang” approach: the entire software is delivered at the end, which entails heavy risk, as the user does not know until the very end what they are getting.
- It assumes the requirements of a system can be frozen before the design begins. This is possible for systems designed to automate an existing manual system, but for new systems, determining the requirements is difficult.
- Freezing the requirements usually also requires choosing the hardware early; since a large project may take years to complete, the chosen hardware may be obsolete by the time the system is delivered.
THE PROTOTYPING MODEL
Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In such situations, a prototyping paradigm may offer the best approach.
The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A “quick design” then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.
Ideally, the prototype serves as a mechanism for identifying software requirements.
The basic idea here is that instead of freezing the requirements before any design or coding can proceed, a throwaway prototype is built to help understand the requirements. This prototype is based on the currently specified requirements.
In most projects, the first system built is barely usable (a throwaway). It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again and build a redesigned version in which these problems are solved. The prototype can serve as this “first system”; it is known as a throwaway prototype.
The development process using throwaway prototyping typically starts once a preliminary requirements specification has been developed, giving a reasonable understanding of the system, its needs, and which requirements are clear or likely to change. After the prototype has been developed, end users and clients are given an opportunity to use it. Based on their experience, they provide feedback to the developers; if any changes are needed, the prototype is modified and the clients use it again. This cycle repeats until the client is satisfied.
Type of project in which this model is used: systems with novice (beginner) users, when there are uncertainties in the requirements, or when the user interface is very important.
Advantages: helps in requirements elicitation, reduces risk, and leads to a better system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements.
Limitations of prototype model
- The customer sees what appears to be a working version of the software. When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that “a few fixes” be applied to make the prototype a working product.
- The developer often makes implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability.
- Although problems can occur, prototyping can be an effective paradigm for software engineering. The customer and developer must both agree that the prototype is built to serve as a mechanism for defining requirements. It is then discarded and the actual software is engineered with an eye toward quality and maintainability.
FOURTH GENERATION TECHNIQUES (4GT)
Fourth generation techniques (4GT) encompass a broad array of software tools that enable the software engineer to specify some characteristic of software at a high level.
- The tool then automatically generates source code based on the developer’s specification.
- The 4GT paradigm for software engineering focuses on the ability to specify software using specialized language forms or a graphic notation that describes the problem to be solved in terms that the customer can understand.
- A software development environment that supports the 4GT paradigm includes some or all of the following tools:
- nonprocedural languages for database query,
- report generation,
- data manipulation,
- screen interaction and definition,
- code generation,
- high-level graphics capability,
- spreadsheet capability,
- Automated generation of HTML and similar languages used for Web-site creation using advanced software tools.
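The contrast between nonprocedural 4GL-style specification and conventional procedural code can be sketched with Python's built-in sqlite3 module. The table and data below are invented for illustration; the point is that the SQL query states *what* result is wanted, while the procedural version must spell out *how* to compute it.

```python
# Sketch of the 4GT idea: a nonprocedural query declares the desired
# result; the tool works out the steps. Table and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 50.0)])

# Nonprocedural: declare the result; no loops, no access paths.
total_east = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'east'").fetchone()[0]

# Procedural equivalent: the programmer spells out every step.
total_east_manual = 0.0
for _id, region, amount in conn.execute("SELECT * FROM orders"):
    if region == "east":
        total_east_manual += amount

assert total_east == total_east_manual == 170.0
```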
- 4GT begins with a requirements gathering step. Ideally, the customer would describe requirements and these would be directly translated into an operational prototype. But this is unworkable. The customer may be unsure of what is required, may be ambiguous in specifying facts that are known, and may be unable or unwilling to specify information in a manner that a 4GT tool can consume. Therefore, the customer/developer dialog described for other process models remains an essential part of the 4GT approach.
- For small applications, it may be possible to move directly from the requirements gathering step to implementation using a nonprocedural fourth generation language (4GL) or a model composed of a network of graphical icons.
- However, for larger efforts, it is necessary to develop a design strategy for the system, even if a 4GL is to be used. The use of 4GT without design (for large projects) will cause the same difficulties (poor quality, poor maintainability, poor customer acceptance) that have been encountered when developing software using conventional approaches.
- Implementation using a 4GL enables the software developer to represent desired results in a manner that leads to automatic generation of code to create those results.
- Obviously, a data structure with relevant information must exist and be readily accessible by the 4GL.
- To transform a 4GT implementation into a product, the developer must conduct thorough testing, develop meaningful documentation, and perform all other solution integration activities that are required in other software engineering paradigms.
- In addition, the 4GT developed software must be built in a manner that enables maintenance to be performed expeditiously.
Like all software engineering paradigms, the 4GT model has advantages and disadvantages. They are:
- Reduction in software development time.
- Greatly improved productivity for people who build software.
- Opponents claim that current 4GT tools are not all that much easier to use than programming languages, that the resultant source code produced by such tools is “inefficient,” and that the maintainability of large software systems developed using 4GT is open to question.
Merits of 4GT approaches:
- The use of 4GT is a feasible approach for many different application areas. Coupled with computer-aided software engineering tools and code generators, 4GT offers a credible solution to many software problems.
- Data collected from companies that use 4GT indicates that the time required to produce software is greatly reduced for small and intermediate applications and that the amount of design and analysis for small applications is also reduced.
- However, the use of 4GT for large software development efforts demands as much or more analysis, design, and testing (software engineering activities) to achieve substantial time savings that result from the elimination of coding.
EFFORT DISTRIBUTION
- Each of the software project estimation techniques leads to estimates of the work units (e.g., person-months) required to complete software development.
- A recommended distribution of effort across the definition and development phases is often referred to as the 40–20–40 rule.
- Forty percent of all effort is allocated to front-end analysis and design.
- A similar percentage is applied to back-end testing.
- Only twenty percent of effort is allocated to coding.
- You can correctly infer that coding (20 percent of effort) is de-emphasized. This effort distribution should be used as a guideline only.
- The characteristics of each project must dictate the distribution of effort.
- Work expended on project planning rarely accounts for more than 2–3 percent of effort, unless the plan commits an organization to large expenditures with high risk.
- Requirements analysis may comprise 10–25 percent of project effort. Effort expended on analysis or prototyping should increase in direct proportion with project size and complexity.
- A range of 20 to 25 percent of effort is normally applied to software design. Time expended for design review and subsequent iteration must also be considered.
- Because of the effort applied to software design, code should follow with relatively little difficulty. A range of 15–20 percent of overall effort can be achieved.
- Testing and subsequent debugging can account for 30–40 percent of software development effort.
- The criticality of the software often dictates the amount of testing that is required. If software is human rated (i.e., software failure can result in loss of life), even higher percentages are typical.
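The 40–20–40 guideline above is simple arithmetic; a minimal sketch makes the split concrete. The 50 person-month total is a hypothetical figure, not project data, and the guideline should be adjusted per project as the text notes.

```python
# Sketch of the 40-20-40 guideline applied to a total effort
# estimate. The numbers are illustrative, not real project data.
def distribute_effort(total_person_months):
    """Split an effort estimate per the 40-20-40 rule."""
    return {
        "analysis_and_design": 0.40 * total_person_months,
        "coding":              0.20 * total_person_months,
        "testing":             0.40 * total_person_months,
    }

effort = distribute_effort(50)  # a hypothetical 50 person-month project
assert effort["analysis_and_design"] == 20.0
assert effort["coding"] == 10.0
assert sum(effort.values()) == 50.0
```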
Software Engineering Layers
A quality focus: Software engineering is a layered technology. Any engineering approach (including software engineering) must rest on an organizational commitment to quality. Total quality management, Six Sigma, and similar philosophies foster a continuous process improvement culture, and it is this culture that ultimately leads to the development of increasingly more effective approaches to software engineering. The bedrock that supports software engineering is a quality focus.
Process: The foundation for software engineering is the process layer. The software engineering process is the glue that holds the technology layers together and enables rational and timely development of computer software. Process defines a framework that must be established for effective delivery of software engineering technology. The software process forms the basis for management control of software projects and establishes the context in which technical methods are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are established, quality is ensured, and change is properly managed.
Methods: Software engineering methods provide the technical how-to’s for building software. Methods encompass a broad array of tasks that include communication, requirements analysis, design modeling, program construction, testing, and support. Software engineering methods rely on a set of basic principles that govern each area of the technology and include modeling activities and other descriptive techniques.
Tools: Software engineering tools provide automated or semiautomated support for the process and the methods. When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established.
Quality: Software quality is checked by testing the software at different levels, such as alpha testing, beta testing, system testing, and acceptance testing.
Software Development Life Cycle (SDLC)
- Requirement Gathering
- Requirement Analysis
- Design/Plan Solution
- Develop Solution
- System Integration and Testing
- Implementation and Customer Acceptance
- Support and Maintenance
What is the “bathtub” curve?
In the 1950s, a group known as AGREE (Advisory Group for the Reliability of Electronic Equipment) discovered that the failure rate of electronic equipment had a pattern similar to the death rate of people in a closed system. Specifically, they noted that the failure rate of electronic components and systems follows the classical “bathtub” curve. Reliability specialists often describe the lifetime of electronic products using a graphical representation called the bathtub curve. This curve is shown below and has three distinctive phases:
- An “infant mortality” early life phase characterized by a decreasing failure rate (Phase 1). Failure occurrence during this period is not random in time but rather the result of substandard components with gross defects and the lack of adequate controls in the manufacturing process. Parts fail at a high but decreasing rate.
- A “useful life” period where electronics have a relatively constant failure rate caused by randomly occurring defects and stresses (Phase 2). This corresponds to a normal wear-and-tear period where failures are caused by unexpected and sudden overstress conditions. Most reliability analyses pertaining to electronic systems are concerned with lowering the failure frequency (i.e., λconst shown in the figure) during this period.
- A “wear-out” period where the failure rate increases due to critical parts wearing out (Phase 3). As they wear out, it takes less stress to cause failure and the overall system failure rate increases; accordingly, failures no longer occur randomly in time.
The failure rate is represented by the height of the curve (λconst in the figure) and is not related to the length of the curve (i.e., longevity). It is therefore possible to have a long or short useful-life period for a given failure rate.
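The three phases can be sketched numerically by summing Weibull hazard rates, a common way reliability texts model the bathtub shape: a shape parameter β < 1 gives a decreasing (infant-mortality) rate, β = 1 a constant rate, and β > 1 an increasing (wear-out) rate. The parameter values below are illustrative assumptions, not data for any real component.

```python
# Sketch of the bathtub curve as a sum of three Weibull hazard
# rates: beta < 1 (infant mortality), beta = 1 (constant, useful
# life), beta > 1 (wear-out). All parameters are illustrative.
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t):
    return (weibull_hazard(t, beta=0.5, eta=100.0)    # early failures
            + weibull_hazard(t, beta=1.0, eta=50.0)   # random failures
            + weibull_hazard(t, beta=4.0, eta=200.0)) # wear-out

early, mid, late = bathtub_hazard(1), bathtub_hazard(50), bathtub_hazard(400)
assert early > mid   # Phase 1: failure rate is decreasing
assert late > mid    # Phase 3: failure rate is increasing
```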
Electronic systems reliability engineering theory is usually most concerned with the height of the failure rate curve during the useful life of a system (i.e., the Phase 2 portion of the curve). Many studies have shown that the height of the curve (the magnitude of the system failure rate) is directly proportional to applied stress. In the general design of electronic systems, the stresses most influential to reliability include electrical (voltage and current), thermal, vibration, and humidity. Every effort is made during the design process to mitigate these stresses through steps such as device derating, good thermal design, damping vibration, and hermetic sealing. Derating is the practice of operating devices significantly below their electrical and thermal ratings to reduce the probability that marginal components will fail due to transient overstress conditions during the useful life of a system.