No Silver Bullet
Warning: Heavy geek content. If you’re not a developer, you probably won’t find this too interesting.
A friend of mine sent me some articles on Knowledge Management, including No Silver Bullet: Essence and Accidents of Software Engineering, by Frederick P. Brooks, Jr., which argues that “there will be no more technologies or practices that will serve as ‘silver bullets’ and create a 10-fold improvement in programmer productivity over 10 years” (to quote the Wikipedia article on the paper).
In the article, Mr. Brooks breaks software complexity into two categories: accidental complexity and essential complexity. Accidental complexity is unrelated to the problem the programmer is trying to solve; it comes from the realities of dealing with a computer: things like “bits” and “bytes” and “registers”. Essential complexity, on the other hand, is the complexity that comes with trying to solve real-world problems. The article makes the case that better tools—good Integrated Development Environments (IDEs), faster workstations with more memory, good tool libraries, etc.—have largely solved the problems of accidental complexity, whereas tools cannot solve essential complexity. So although better tools will keep delivering incremental gains in productivity as accidental complexity is further reduced, there will be no more order-of-magnitude jumps.
In other words, the article did a good job of articulating something that all programmers instinctively know: you can improve and improve and improve the tools that are available to developers, but the real complexity in software is something that tools can’t solve. It comes from the business requirements the software is trying to implement—which are always a moving target—and from the nature of requirements gathering, which is an imperfect science, no matter how good we get at it.
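To put the distinction in code terms, here is a contrived sketch of my own (the discount rule and all the names are made up, not from the article): the business rule is essential complexity, while the byte-shuffling below it is accidental complexity that better tools and libraries can absorb.

// A contrived sketch (my example, not Brooks's) of the two kinds of complexity.
public class ComplexityDemo {

    // Essential complexity: the business rule itself. No tool can decide
    // what the discount policy should be; that comes from the problem domain.
    static double discountFor(double orderTotal, boolean repeatCustomer) {
        if (repeatCustomer && orderTotal > 100.0) {
            return orderTotal * 0.10; // 10% off large repeat orders
        }
        return 0.0;
    }

    // Accidental complexity: shuffling the result into big-endian bytes for
    // some wire format. Tools and libraries can (and largely do) absorb this.
    static byte[] encode(double value) {
        long bits = Double.doubleToLongBits(value);
        byte[] out = new byte[8];
        for (int i = 0; i < 8; i++) {
            out[i] = (byte) (bits >>> (56 - 8 * i));
        }
        return out;
    }

    public static void main(String[] args) {
        double discount = discountFor(250.0, true);
        System.out.println("Discount: " + discount);                     // 25.0
        System.out.println("Encoded bytes: " + encode(discount).length); // 8
    }
}

A serialization library could eliminate encode() entirely; no library can decide whether repeat customers deserve ten percent off.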
The author argues that the move from assembly programming to higher-level programming languages was an order-of-magnitude improvement:

High-level languages. Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.
What does a high-level language accomplish? It frees a program from much of its accidental complexity. An abstract program consists of conceptual constructs: operations, data types, sequences, and communication. The concrete machine program is concerned with bits, registers, conditions, branches, channels, disks, and such. To the extent that the high-level language embodies the constructs one wants in the abstract program and avoids all lower ones, it eliminates a whole level of complexity that was never inherent in the program at all.
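To make that contrast concrete, here is a small illustration of my own (contrived, and not from the article): the same conceptual operation, “count the even numbers”, written the way the abstract program states it, and again in a style closer to the concrete machine program.

import java.util.List;

// My own illustration of Brooks's point (not from the article): the same
// conceptual operation expressed with and without the accidental detail.
public class LevelsDemo {

    // Closer to the "abstract program": the code states the concept directly.
    static long countEvens(List<Integer> numbers) {
        return numbers.stream().filter(n -> n % 2 == 0).count();
    }

    // Closer to the "concrete machine program": indices, bounds checks, and a
    // test of the low-order bit stand in for the concept.
    static int countEvensLowLevel(int[] numbers) {
        int count = 0;
        for (int i = 0; i < numbers.length; i++) {
            if ((numbers[i] & 1) == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countEvens(List.of(1, 2, 3, 4)));            // 2
        System.out.println(countEvensLowLevel(new int[] {1, 2, 3, 4})); // 2
    }
}

Both compute the same thing; the difference is how much machine-level detail the reader has to wade through to see the concept.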
However, Brooks continues:

The most a high-level language can do is to furnish all the constructs that the programmer imagines in the abstract program. To be sure, the level of our thinking about data structures, data types, and operations is steadily rising, but at an ever decreasing rate. And language development approaches closer and closer to the sophistication of users.

Moreover, at some point the elaboration of a high-level language creates a tool-mastery burden that increases, not reduces, the intellectual task of the user who rarely uses the esoteric constructs.

There were times, as I was reading the article, when I was shaking my head, not agreeing with Mr. Brooks. But these were all areas where he predicted how software development would evolve, and things didn’t turn out quite that way. (For example, he discounted the benefit of improving the memory and processing power available on developers’ workstations; he argued that machines were already so fast that the human was rarely waiting for the machine, and the extra power was going unused. But it turns out that the incredible leaps in processing power and memory now available have made possible great advances in IDEs—such as auto-completion—which require the tool to keep a lot of data in memory, immediately available to the developer.) Considering that the article was written in 1986, it’s not surprising that certain assumptions about how software development would evolve turned out to be incorrect; he can hardly be faulted for the times he wasn’t quite right. And, frankly, anything I did disagree with was trivial; his main points are completely valid. (Using the memory example, even things like auto-complete are still just incremental gains in reducing accidental complexity. And there are those who would argue that it’s actually harmful; I love that article, even if I don’t completely agree with all of it.)
To be clear, as tools evolve, they are making developers more productive, and Mr. Brooks doesn’t claim otherwise. (He’s not against tools, nor the productivity that they add!) His argument is about scale, not absolutes: the tools will make people more productive, but the gains will be incremental, not 5-fold leaps like the move from assembly to higher-level languages. And there is no denying that the tools are getting better and better; once you see how much time an IDE like WebLogic Workshop or Eclipse can save you in J2EE development, you’d be a fool not to use these types of tools. (Mr. Brooks also touches on Artificial Intelligence (AI) in the article, and I would argue that the tools are getting better at becoming expert systems. If the developer wants to perform a common task, the IDE can probably generate a good portion of the code, and do it using best practices to boot; see the sketch below.) But, again, these improvements all reduce accidental complexity, while the developers—and business analysts and systems analysts and project managers—are still left to figure out the essential complexity of solving the business problem at hand.
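For instance, given nothing more than a class’s fields, most Java IDEs can generate routine code along these lines (my illustrative example; the exact output varies by tool):

import java.util.Objects;

// Typical IDE-generated boilerplate: accessors, equals(), and hashCode()
// produced mechanically from the two fields, with no business logic involved.
public class Customer {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Customer)) return false;
        Customer other = (Customer) o;
        return Objects.equals(name, other.name)
                && Objects.equals(email, other.email);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, email);
    }
}

Everything here is mechanical; what a Customer is, and what the system should do with one, is still the team’s problem.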
In fact, I would argue that, since the tools have done such a good job of reducing or eliminating accidental complexity, developers have gotten more productive, and business users/clients/customers have gotten used to software being delivered faster and more reliably—and have therefore started to demand ever more complex software!
The lesson to take from this article is nothing new; it just re-emphasizes the fact that software development is complex, and developers and designers need to be aware of that. Having a good design—and architecture—before you begin coding is essential, as is understanding the business requirements before you start designing. Building software iteratively will always win over the “waterfall” approach, because the needs of the users are constantly evolving, and constantly being clarified.

I’ll end with another good quote from the article:

Conformity. Software people are not alone in facing complexity. Physics deals with terribly complex objects even at the “fundamental” particle level. The physicist labors on, however, in a firm faith that there are unifying principles to be found, whether in quarks or in unified field theories. Einstein argued that there must be simplified explanations of nature, because God is not capricious or arbitrary.
No such faith comforts the software engineer. Much of the complexity that he must master is arbitrary complexity, forced without rhyme or reason by the many human institutions and systems to which his interfaces must conform. These differ from interface to interface, and from time to time, not because of necessity but only because they were designed by different people, rather than by God.