The use of computers in analyzing building structures is undeniably a great step forward in our profession. When I trained as a structural engineer in the 1950s, computers were a brand new wonder, and there were no packaged programs available. If you wanted to use a computer, you had to write the program yourself.

Our firm, Silman, founded in 1966, was one of the first to write its own structural-analysis and design programs. In 1970, we took our successful composite-steel-beam design program to the New York City Department of Buildings and asked them how we should file calculations. Fortunately, they realized that this was the wave of the future and suggested that we develop prototype calculations by hand in the conventional way and then submit parallel results performed by the computer, illustrating that the solutions were the same. To do so, we rented an IBM 1130 with 8k capacity, which was fed by decks of punch cards grinding away for many minutes on fairly simple problems. This became standard protocol for the Department of Buildings, and the first nine programs filed were from our office.

So I am a great advocate of the use of computers for structural analysis and design, and I always have been. But there are drawbacks. When I was studying structural engineering, I used a slide rule, a wonderful apparatus and now an archaeological artifact. A slide rule helps you multiply and divide and handles exponents, logarithms, and trigonometry. But it does not tell you where to place the decimal point. Is the answer 10.00 or 100.00 or 1,000.00?

So most of us, before we even started to fiddle with the slider and the cursor window, estimated the answer in advance. We learned to think in approximations. I can remember designing flat-plate concrete buildings with completely irregular column layouts. We used Hardy Cross’s method of moment distribution and generated pages of incredible calculations for different column configurations. The process became repetitive, and we could guess the required reinforcing pretty accurately before putting pen to paper.
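For readers who never worked the method by hand, the iteration Hardy Cross prescribed can be sketched in a few lines. This is a minimal, hypothetical example, not anything from our office files: a symmetric two-span continuous beam of span L under uniform load w, pinned at the outer supports, balanced and carried over until the end moments settle.

```python
# Sketch of Hardy Cross moment distribution for a symmetric two-span
# continuous beam (each span L, uniform load w, pins at A and C,
# continuous over B). Sign convention: clockwise end moments positive.

def moment_distribution(w, L, cycles=30):
    fem = w * L**2 / 12.0            # fixed-end moment for a uniform load
    M = {"AB": -fem, "BA": +fem, "BC": -fem, "CB": +fem}
    for _ in range(cycles):
        # Balance pin A: release the full unbalance, carry half to B
        bal = -M["AB"]
        M["AB"] += bal
        M["BA"] += bal / 2.0
        # Balance pin C likewise
        bal = -M["CB"]
        M["CB"] += bal
        M["BC"] += bal / 2.0
        # Balance joint B: two identical spans share the unbalance equally,
        # and half of each correction carries over to the far ends
        unb = M["BA"] + M["BC"]
        M["BA"] -= unb / 2.0
        M["BC"] -= unb / 2.0
        M["AB"] -= unb / 4.0
        M["CB"] -= unb / 4.0
    return M

M = moment_distribution(w=2.0, L=10.0)
# The support moment converges to the textbook wL^2/8 = 25.0
print(round(M["BA"], 3))
```

For this symmetric case the distribution settles almost immediately; irregular column layouts like the ones described above took pages because every joint had its own distribution factors and the carry-overs rippled back and forth.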

This arcane process gave us a “feel” for the buildings that we were designing. They were not some abstract product of machine technology but were rather tactile creations of our very selves. We had used our intuition, which became sharper with experience. There was no way that a large-scale mistake would find its way into the work–we would notice it as a glaring intruder on our orderly process.

In my present role, I review drawings produced by the engineering staff. When I spot an error, the young engineer inevitably will say, “How did you see that so quickly?” I shrug and reply that it was how I was trained: to think about the approximate answer before calculating the exact one. When skipping that intuitive step, one can be easily seduced by computer results that look so neat and orderly.

I am not a Luddite: Our early design methods had enormous shortcomings. Perhaps two of the most grievous were the inability to model the building in three dimensions, as a whole entity, as well as the difficulty in computing building movements. Even structural analysis problems of modest indeterminacy were often impossible to solve. Anyone could write the compatibility equations, but as the unknowns grew beyond four or five, finding solutions loomed as a lifetime chore.

So we developed neat techniques called approximate methods. Large mathematical matrices of the compatibility equations could be partitioned and manipulated with all sorts of tricks. Indeed, some very complicated buildings were analyzed using tricks, and they have behaved beautifully over their lifespans, much to the credit of their designers.
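What once loomed as a lifetime chore is now a one-line solve. As an illustration only, with made-up flexibility coefficients: in the flexibility (force) method, compatibility requires that the redundant forces X satisfy F·X = −D0, where F is the flexibility matrix and D0 the displacements of the released structure at the redundants.

```python
# Compatibility equations of the flexibility method, solved directly.
# All numbers are hypothetical, chosen only to show the mechanics.
import numpy as np

F = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])    # flexibility coefficients (illustrative)
D0 = np.array([1.2, -0.8, 0.4])   # released-structure displacements (illustrative)

X = np.linalg.solve(F, -D0)       # redundants that restore compatibility
print(np.allclose(F @ X + D0, 0)) # True: the releases close up exactly
```

With three unknowns this was tedious by hand; with thirty it was the partitioning tricks or nothing. The computer does not care how many unknowns there are.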

For sure, the complicated geometries and configurations of buildings today could never have been analyzed with any degree of confidence using some of these approximate techniques. Computer analysis provides a higher level of mathematical certainty about the behavior of a structure—advantageous in new construction as well as in the renovation of historic buildings. One example is Fallingwater, which we helped renovate in 2002. To fix the sagging cantilevers, we needed to determine the stresses in the main cantilever girders that support the house. We knew the building geometry and the reinforcing in the girders accurately, as well as the actual deflections that had occurred over the first 60 years. By performing a three-dimensional computer analysis that accounted for the participation of the slabs in two-way action, we were able to manipulate various stiffness factors until the calculated deflections of every cantilever matched the actual measured deflections. With this information we could then design the repair, placing the right amount of post-tensioning where needed. Approximate methods would not have provided the precise answer required.
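The calibration idea described above can be reduced to a toy sketch. Everything here is hypothetical: the numbers are invented, and the model is a textbook uniformly loaded cantilever (tip deflection δ = wL⁴/8EI), not the actual Fallingwater analysis, where the stiffness factors lived inside a three-dimensional model.

```python
# Tune an effective stiffness EI until the model's computed deflection
# matches a measured one, here by simple bisection. Illustrative only.

def tip_deflection(w, L, EI):
    # Uniformly loaded cantilever, tip deflection = w L^4 / (8 EI)
    return w * L**4 / (8.0 * EI)

def calibrate_EI(w, L, measured, lo=1e3, hi=1e9, tol=1e-9):
    # Deflection decreases monotonically as EI grows, so bisection works
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tip_deflection(w, L, mid) > measured:
            lo = mid          # still too flexible: raise the stiffness
        else:
            hi = mid
        if hi - lo < tol * hi:
            break
    return 0.5 * (lo + hi)

w, L, measured = 2.0, 15.0, 0.18        # hypothetical load, span, and sag
EI = calibrate_EI(w, L, measured)
print(abs(tip_deflection(w, L, EI) - measured) < 1e-6)  # True
```

In the real renovation the "measured" side of the match came from 60 years of recorded sag, and the calibrated model could then tell us how much post-tensioning to place and where.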

So how do we train ourselves to get the utmost out of computer analysis without losing an intuitive sense of how a building should behave and what its constituent members should look like? And, as our buildings become more complicated, is it really possible to develop that sort of grasp of their structural elements? We should at least start with some training in approximate analysis of simple structures. Like my professor in my first graduate course in indeterminate structures, instructors should demand that, for the first four weeks of the class, students not be allowed to use any mechanical aids–no calculator, no slide rule, and certainly no computer. Professors should encourage them to sketch the shear and moment diagrams and the shape of the deflected structure; they should thus be able to determine the critical points and quantify them within 15 percent accuracy.
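A minimal instance of that classroom exercise, with assumed numbers: estimate the midspan moment of a fixed-fixed beam under uniform load by sketching the deflected shape, placing the inflection points roughly 0.21L in from each support, and treating the middle 0.58L as simply supported. The exact answer is wL²/24.

```python
# Estimate before you compute: fixed-fixed beam, uniform load w, span L.
w, L = 1.0, 10.0

# Quick sketch-based estimate: the middle segment between the assumed
# inflection points behaves like a simply supported span of 0.58L
estimate = w * (0.58 * L) ** 2 / 8.0

exact = w * L**2 / 24.0          # closed-form midspan moment

error = abs(estimate - exact) / exact
print(error < 0.15)  # True: comfortably within the 15 percent target
```

A student who can produce that estimate from the deflected shape will also spot the computer output that is off by a factor of ten.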

It seems to me that we cannot depend wholly on the answers high technology can give us. Rather we must develop a feel for structures by using some of the educational techniques of the past—fostering the ability to see the whole, which technology supports but cannot replace.
