A View of 20th and 21st Century Software Engineering
ABSTRACT

George Santayana's statement, "Those who cannot remember the past are condemned to repeat it," is only half true. The past also includes successful histories. If you haven't been made aware of them, you're often condemned not to repeat their successes. In a rapidly expanding field such as software engineering, this happens a lot. Extensive studies of many software projects such as the Standish Reports offer convincing evidence that many projects fail to repeat past successes. This paper tries to identify at least some of the major past software experiences that were well worth repeating, and some that were not. It also tries to identify underlying phenomena influencing the evolution of software engineering practices that have at least helped the author appreciate how our field has gotten to where it has been and where it is. A counterpart Santayana-like statement about the past and future might say, "In an era of rapid change, those who repeat the past are condemned to a bleak future." (Think about the dinosaurs, and think carefully about software engineering maturity models that emphasize repeatability.) This paper also tries to identify some of the major sources of change that will affect software engineering practices in the next couple of decades, and identifies some strategies for assessing and adapting to these sources of change. It also makes some first steps towards distinguishing relatively timeless software engineering principles that are risky not to repeat, and conditions of change under which aging practices will become increasingly risky to repeat.
1. INTRODUCTION

One has to be a bit presumptuous to try to characterize both the past and future of software engineering in a few pages. For one thing, there are many types of software engineering: large or small; commodity or custom; embedded or user-intensive; greenfield or legacy/COTS/reuse-driven; homebrew, outsourced, or both; casual-use or mission-critical. For another thing, unlike the engineering of electrons, materials, or chemicals, the basic software elements we engineer tend to change significantly from one decade to the next.

Fortunately, I've been able to work on many types and generations of software engineering since starting as a programmer in 1955. I've made a good many mistakes in developing, managing, and acquiring software, and hopefully learned from them. I've been able to learn from many insightful and experienced software engineers, and to interact with many thoughtful people who have analyzed trends and practices in software engineering. These learning experiences have helped me a good deal in trying to understand how software engineering got to where it is and where it is likely to go. They have also helped me in trying to distinguish between timeless principles and obsolete practices for developing successful software-intensive systems.

In this regard, I am adapting the definition of "engineering" in [147] to define engineering as "the application of science and mathematics by which the properties of software are made useful to people." The phrase "useful to people" implies that the relevant sciences include the behavioral sciences, management sciences, and economics, as well as computer science.

In this paper, I'll begin with a simple hypothesis: software people don't like to see software engineering done unsuccessfully, and try to make things better. I'll try to elaborate this into a high-level decade-by-decade explanation of software engineering's past. I'll then identify some trends affecting future software engineering practices, and summarize some implications for future software engineering researchers, practitioners, and educators.
2. A Hegelian View of Software Engineering's Past

The philosopher Hegel hypothesized that increased human understanding follows a path of thesis (this is why things happen the way they do); antithesis (the thesis fails in some important ways; here is a better explanation); and synthesis (the antithesis rejected too much of the original thesis; here is a hybrid that captures the best of both while avoiding their defects). Below I'll try to apply this hypothesis to explaining the evolution of software engineering from the 1950's to the present.
2.1 1950's Thesis: Software Engineering Is Like Hardware Engineering

When I entered the software field in 1955 at General Dynamics, the prevailing thesis was, "Engineer software like you engineer hardware." Everyone in the GD software organization was either a hardware engineer or a mathematician, and the software being developed supported aircraft or rocket engineering. People kept engineering notebooks and practiced such hardware precepts as "measure twice, cut once" before running their code on the computer.

This behavior was also consistent with 1950's computing economics. On my first day on the job, my supervisor showed me the GD ERA 1103 computer, which filled a large room. He said, "Now listen. We are paying $600 an hour for this computer and $2 an hour for you, and I want you to act accordingly." This instilled in me a number of good practices such as desk checking, buddy checking, and manually executing my programs before running them. But it also left me with a bias toward saving microseconds when the economic balance started going the other way.

The most ambitious information processing project of the 1950's was the development of the Semi-Automatic Ground Environment (SAGE) for U.S. and Canadian air defense. It brought together leading radar engineers, communications engineers, computer engineers, and nascent software engineers to develop a system that would detect and track enemy aircraft and prevent them from bombing the U.S. and Canadian homelands.

Figure 1 shows the software development process developed by the hardware engineers for use in SAGE [1]. It shows that sequential waterfall-type models have been used in software development for a long time. Further, if one arranges the steps in a V form with Coding at the bottom, this 1956 process is equivalent to the V-model for software development. SAGE also developed the Lincoln Labs Utility System to aid the thousands of programmers participating in SAGE software development. It included an assembler, a library and build management system, a number of utility programs, and aids to testing and debugging.

The resulting SAGE system successfully met its specifications with about a one-year schedule slip. Benington's bottom-line comment on this success was, "It is easy for me to single out the one factor that led to our relative success: we were all engineers and had been trained to organize our efforts along engineering lines."

Another indication of the hardware engineering orientation of the 1950's is in the names of the leading professional societies for software professionals: the Association for Computing Machinery and the IEEE Computer Society.
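To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. The 1955 rates are the ones quoted above; the later-era rates are purely hypothetical, chosen only to show the trade inverting once machine time becomes cheap relative to people.

```python
# Minimal sketch, not from the paper: when is it worth spending human
# hours (desk checking, hand-optimizing) to save machine hours?
def worth_hand_optimizing(machine_rate, person_rate,
                          machine_hours_saved, person_hours_spent):
    """True if the machine time saved costs more than the human time spent."""
    return machine_hours_saved * machine_rate > person_hours_spent * person_rate

# 1955: 40 hours of desk work to save one hour of ERA 1103 time pays off.
print(worth_hand_optimizing(600, 2, machine_hours_saved=1,
                            person_hours_spent=40))   # True: $600 > $80

# Hypothetical later era: cheap cycles, expensive engineers; it no longer does.
print(worth_hand_optimizing(1, 50, machine_hours_saved=1,
                            person_hours_spent=40))   # False: $1 < $2,000
```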
2.2 1960's Antithesis: Software Crafting

By the 1960's, however, people were finding out that software phenomenology differed from hardware phenomenology in significant ways. First, software was much easier to modify than was hardware, and it did not require expensive production lines to make product copies. One changed the program once, and then reloaded the same bit pattern onto another computer, rather than having to individually change the configuration of each copy of the hardware. This ease of modification led many people and organizations to adopt a "code and fix" approach to software development, as compared to the exhaustive Critical Design Reviews that hardware engineers performed before committing to production lines and bending metal (measure twice, cut once). Many software applications became more people-intensive than hardware-intensive; even SAGE became more dominated by psychologists addressing human-computer interaction issues than by radar engineers.
[Figure 1. The SAGE Software Development Process (1956). Steps: Operational Plan → Machine Specifications → Operational Specifications → Program Specifications → Coding Specifications → Coding → Parameter Testing (Specifications) → Assembly Testing (Specifications) → Shakedown → System Evaluation.]
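As a rough illustration of the V-form arrangement mentioned above, here is one plausible pairing of the 1956 SAGE phases (my reading of Figure 1, not a pairing given in the paper): specification phases descend the left leg, each matched with the test phase on the right leg that verifies it, with Coding at the bottom of the V.

```python
# A plausible V-form pairing of the SAGE phases (illustrative only;
# "Machine Specifications" is omitted here for left/right symmetry).
LEFT = ["Operational Plan", "Operational Specifications",
        "Program Specifications", "Coding Specifications"]
RIGHT = ["System Evaluation", "Shakedown",
         "Assembly Testing", "Parameter Testing"]

for depth, (spec, test) in enumerate(zip(LEFT, RIGHT)):
    print("  " * depth + f"{spec}  <->  {test}")
print("  " * len(LEFT) + "Coding")   # the bottom of the V
```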
Another software difference was that software did not wear out. Thus, software reliability could only imperfectly be estimated by hardware reliability models, and "software maintenance" was a much different activity than hardware maintenance.

Software was invisible; it didn't weigh anything, but it cost a lot. It was hard to tell whether it was on schedule or not, and if you added more people to bring it back on schedule, it just got later, as Fred Brooks explained in The Mythical Man-Month [42]. Software generally had many more states, modes, and paths to test, making its specifications much more difficult. Winston Royce, in his classic 1970 paper, said, "In order to procure a $5 million hardware device, I would expect a 30-page specification would provide adequate detail to control the procurement. In order to procure $5 million worth of software, a 1500-page specification is about right in order to achieve comparable control." [132]

Another problem with the hardware engineering approach was that the rapid expansion of demand for software outstripped the supply of engineers and mathematicians. The SAGE program began hiring and training humanities, social sciences, foreign language, and fine arts majors to develop software. Similar non-engineering people flooded into software development positions for business, government, and services data processing. These people were much more comfortable with the code-and-fix approach. They were often very creative, but their fixes often led to heavily patched spaghetti code. Many of them were heavily influenced by 1960's "question authority" attitudes and tended to march to their own drummers rather than those of the organization employing them. A significant subculture in this regard was the "hacker culture" of very bright free spirits clustering around major university computer science departments.
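Brooks's point above has a simple quantitative core worth making explicit: pairwise communication paths among n people grow as n(n-1)/2, a relation Brooks himself uses, so each added person adds coordination overhead faster than capacity. A minimal sketch with illustrative team sizes (the numbers are mine, not Brooks's):

```python
def comm_paths(n: int) -> int:
    """Pairwise communication paths among n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(f"{n:>2} people -> {comm_paths(n):>3} paths")
# 5 -> 10, 10 -> 45, 20 -> 190: overhead grows quadratically
# while raw capacity grows only linearly.
```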
