DECLASSIFICATION ANALYSIS: THE METHOD, AND SOME EXAMPLES


Declassification is inevitably a political process: when the government decides what evidence to release, that decision is necessarily determined in large part by political considerations. Quite often, material is withheld because it is considered politically sensitive. But this is generally the sort of material the historian would most like to see. There is thus a built-in conflict between the consumer and the supplier of historical evidence: we historians want to see the "dirt," but those responsible for the release of documents want to make sure that the material released does not damage the political interests they are responsible for protecting.

This is why in the United States important evidence is often released in what is called "sanitized" form--that is, with the "dirt" taken out. Words or phrases, whole sentences or even longer passages, are deleted from the version made available to the public. Sometimes--for example, in many documents relating to the question of command and control of nuclear weapons--the material is so heavily sanitized that very little of interest has been left in. But generally the censor has a much lighter hand. In no other country, to my knowledge at least, is historical evidence made available in this way. In Britain, for example, either a document is released or it is not; I never saw a redacted version of a document at the PRO or in the Documents on British Policy Overseas.

How, in such circumstances, is the historian to proceed? A number of years ago, I read a wonderful little book called How to Beat the S.A.T. by Michael Donner, the former editor of Games magazine. It made a big impression on me, because the method Donner outlined can be used not just with the S.A.T. but in many other contexts--and in particular in the context I'm concerned with here. What was Donner's method? Unlike the authors of many books dealing with standardized tests, Donner wasn't the slightest bit concerned with teaching substantive material that would help test takers answer specific questions. His basic point was that the S.A.T. is to be understood as a game between test taker and test maker. In this game, the test maker needs a strategy. For example, the array of possible answers on a multiple-choice test cannot be constructed in a totally random way, because if it were, the correct answer would be obvious. So the wrong answers tend to cluster around the right answer. But figuring out what the test maker's strategy is--and in some cases has to be--gives the savvy test taker a major advantage. He or she can develop a strategy which, in a sense, turns the test maker's strategy against him (or her), jiu-jitsu style--in this case, based on the rule that one should choose the "nuclear answer," the one the other choices in the array cluster around. The basic technique seems to work: Donner took the math S.A.T. six times, and "without even looking at the questions, but only at the answer choices," his average score was "140 points higher than random guessing would have produced," and higher "than what one-quarter of all test takers (test takers who benefited by looking at the questions) actually do score."
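To make the "nuclear answer" rule a bit more concrete, here is a minimal sketch in Python of how one might formalize it for numeric answer choices. The scoring rule (pick the choice with the smallest total distance to the other choices) is my own rough reconstruction of the heuristic, purely for illustration, not Donner's own formula.

    def nuclear_answer(choices):
        """Pick the 'nuclear answer': the choice the other choices cluster around.

        A rough formalization of the heuristic: score each numeric answer
        choice by its total distance to the other choices, and take the one
        with the smallest total (the center of the cluster)."""
        return min(choices, key=lambda c: sum(abs(c - other) for other in choices))

    # The wrong answers (10, 14, 18) cluster around 12, so the heuristic
    # picks 12 without looking at the question itself.
    print(nuclear_answer([3, 10, 12, 14, 18]))  # -> 12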

I think we historians can, and should, take this same basic approach. We should think of ourselves, in a sense, as engaged in a game of strategy. We want to get at the truth; the declassifiers want to keep us from seeing certain things--that is, they want to keep some of the truth from us. If we want to "win," what sort of strategy should we adopt? As in any game of this sort, our strategy has to be a function of their strategy. We know that when material is deleted, it is not being sanitized out in a totally random way--there's a logic to what the declassifiers do. It's not a perfect logic (and it's a good thing for us that it's not, because if it were, the method to be outlined here would be a lot less effective). But if we can get even a rough sense for what it is, we can turn that understanding to our advantage.

The basic point here is simple: if we can identify the bias introduced into the corpus of available evidence by the fact that declassification is a politicized process, we can control for it. We can discount for it: we can factor our understanding that the body of available evidence is skewed in certain ways into the interpretation we are developing. Doing that brings us closer to the truth, and in particular helps us overcome the very obstacles to the truth that the declassifier has sought to erect.

But how can this be done--that is, how can we go about identifying that bias? The answer has to do with the fact that the historian is not dealing with a single, highly efficient adversary who always does things in exactly the same way. A particular document is often found in more than one file, and often in more than one repository. Those different copies are sometimes declassified differently, by different people working on them at different times. We thus often get variant versions of the same document. One might think that newer versions are invariably more complete than older versions, but surprisingly often this is not the case. See, for example, documents 3 and 4 below, where recently released documents are more heavily sanitized than versions of the same documents released years earlier. What this means is that there is a big payoff, from the point of view of this method, to very extensive archival research, the kind of research that can hope to turn up variant versions of particular documents.

In any event, the existence of variant versions is what makes the method of declassification analysis possible. Different versions can be compared with each other; we can thus see the kinds of passages that tend to get deleted, at least by some declassifiers at particular points in time. For certain types of records, foreign sources can be of great value in this context. The U.S. record of a meeting with British or French officials might be sanitized, but you can often find the British or French record of that same meeting, and compare the two records closely (as, for example, in document 6 below). (This, incidentally, has other advantages, such as enabling you to gauge the quality of notetaking, and giving you a certain insight into differences in national style and sensibility in this area.)
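The comparison step itself is mechanical enough that it can be automated once both versions of a document have been transcribed into plain text. The following is a minimal sketch in Python, purely for illustration; the function name, the line-by-line comparison, and the file names in the usage note are my own assumptions about how one might set this up, not a description of any existing tool.

    import difflib

    def deleted_passages(full_text, sanitized_text):
        """Return the passages present in the fuller version of a document
        but missing (or altered) in the sanitized version--a rough proxy
        for the material the declassifier cut."""
        full_lines = full_text.splitlines()
        sanitized_lines = sanitized_text.splitlines()
        matcher = difflib.SequenceMatcher(None, full_lines, sanitized_lines)
        cuts = []
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            # 'delete' and 'replace' spans are in the full text but not the sanitized one.
            if tag in ("delete", "replace"):
                cuts.append("\n".join(full_lines[i1:i2]))
        return cuts

    # Hypothetical usage, with made-up file names:
    # full = open("memcon_full.txt").read()
    # sanitized = open("memcon_sanitized.txt").read()
    # for passage in deleted_passages(full, sanitized):
    #     print("SANITIZED OUT:\n" + passage)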

If you do this sort of analysis enough, you are often able to detect a pattern--that is, you get a sense for the general policy governing what gets suppressed. The plausibility of such conjectures about general declassification policy can also be assessed in political terms: one can ask whether it serves a rational political purpose to suppress historical evidence of a particular sort. If there is one, that increases your confidence that the historical record is being skewed in a particular direction. The six examples that follow show in a more practical way how all this can be done.

This is the basic reason for being interested in declassification analysis, but there are a couple of other reasons why this method is of value. The first has to do with the fact that material that has been singled out for deletion is likely to be considered sensitive, and therefore important. A sanitized passage, assuming it can be identified, is thus worth paying special attention to. Look, for example, at the Kennedy passage referred to in document 5 below. When this became available, its importance in my eyes was dramatized by the fact that it had been sanitized out of the version of the document released a few years earlier. The judgment that it was important was to my mind confirmed, to a certain extent, by the fact that the declassifier had considered it sensitive enough to delete.

The second reason has to do with the way the sanitization issue generates a kind of "guessing game." One tries to "fill in the blanks" and figure out what has been sanitized out. Often, depending on paragraph structure and the length and position of the deletion, one can practically feel a "but" coming on--that is, an "on the other hand" passage that balances the passage left in and totally changes the overall meaning of the document. The first sanitized passage in document 2 is a good case in point. For another example--where I don't know what the sanitized passage says, but where I would bet that it substantially qualifies the point that precedes it--see FRUS 1958-60, vol. 7, part 1, p. 609, at the very bottom of the page.

Getting a feel for this--coming to suspect that this is one of the things the censors often do--helps you avoid the pitfall of taking the sanitized document at face value. And doing this kind of guesswork also helps you get a certain sense for how solid your interpretation is: when the full document is released, you get to see whether your guess was correct. If you've guessed correctly, you feel good about your interpretation; you have the sense that you understand what was going on. If not, then you know, in certain cases, that you might have to rethink things.

Doing all this is actually quite enjoyable. Before I understood the value of this method--say, about fifteen years ago--I didn't pay much attention to a new version of a document I had already seen if it was only slightly more complete than an older version. After all, I thought, how much more could it tell you? But now one of the first things I do when a new version is released is to compare it with an older version (if I have one) and focus on the previously sanitized passages. For all the reasons noted above, these are almost invariably of considerable interest.

 

DOCUMENTS:

(1) Extract from Forrestal Diary, July 28, 1945

(2) Extract from the notes of a Dulles-Brentano meeting, November 21, 1957

(3) Extract from the Bowie Report of August 1960

(4) Extract from the notes of a meeting between Eisenhower and Spaak, October 4, 1960

(5) Extract from the notes of a Kennedy-Khrushchev meeting, June 4, 1961

(6) Extract from the notes of a meeting between Kennedy, Macmillan and other officials, December 19, 1962