
process to see that their relationships remain intact. In 
verifying the model itself a special check should be made 
on all fitted parameters and on the method of selecting 
typical values. 
Validation is the trying out or testing of the abstract 
system of a model to see if it yields reasonable and satis- 
factory approximations to the real world. The validity of 
an abstract system is determined by its ability to predict 
the behavior of the real world system, and this can best 
be done by comparing its output with real world data. 
Since no model can be perfectly valid, the essential 
question becomes how close to reality must the output be 
to have practical usefulness in planning. 
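
By way of illustration only, a minimal sketch in present-day Python shows one simple way of expressing that closeness; the household figures and the use of a mean relative deviation are invented for this example and are not taken from TOMM:

    # Compare simulated output with observed real-world data and report
    # the average relative deviation between the two series.
    def mean_relative_deviation(simulated, observed):
        """Average of |simulated - observed| / |observed| over all periods."""
        pairs = list(zip(simulated, observed))
        return sum(abs(s - o) / abs(o) for s, o in pairs) / len(pairs)

    # Hypothetical household totals, simulated vs. census counts.
    simulated_households = [10200, 10850, 11300, 12050]
    observed_households = [10000, 10600, 11500, 12300]

    deviation = mean_relative_deviation(simulated_households, observed_households)
    # Whether a 2 % or a 10 % deviation is close enough is a planning
    # judgement, not a property of the model itself.
    print(f"mean relative deviation: {deviation:.1%}")
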
The output of an urban simulation model usually is in the 
form either of time-series data, that is, developments 
over time, or patterned data, which is the distribution of 
a phenomenon at a point in time. For the validation of 
time-series data the analyst can apply various statistical 
tests directly, focusing on the number of turning points 
and their timing and direction, and the characteristics 
of the distribution of output data, their amplitude, var- 
iation about the mean, and other distinctive features. 
Appropriate statistical methods include the chi-square 
test, analysis of variance, regression, factor and spectral 
analysis and nonparametric tests. 
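
As a hedged sketch of two such checks, the following present-day Python fragment counts the turning points in each series and applies one nonparametric test, the two-sample Kolmogorov-Smirnov test from SciPy; the employment figures are invented for illustration:

    # Two checks on time-series output: the number of turning points,
    # and a nonparametric comparison of the two value distributions.
    from scipy import stats

    def turning_points(series):
        """Count local maxima and minima (changes of direction)."""
        return sum(
            1
            for prev, cur, nxt in zip(series, series[1:], series[2:])
            if (cur - prev) * (nxt - cur) < 0
        )

    # Hypothetical yearly employment totals for one zone.
    simulated = [5200, 5350, 5300, 5500, 5650, 5600, 5800]
    observed = [5180, 5400, 5380, 5490, 5700, 5640, 5820]

    print("turning points, simulated:", turning_points(simulated))
    print("turning points, observed: ", turning_points(observed))

    # Are the two sets of values plausibly drawn from the same distribution?
    statistic, p_value = stats.ks_2samp(simulated, observed)
    print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
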
More elaborate procedures are required for the validation 
of patterned data. A technique for describing spatial 
phenomena must be worked out before the analyst can 
apply statistical techniques. The fields of animal and 
plant ecology suggest some methods for this. A typical 
approach is to plot the phenomena as a series of dots on 
a map, both the simulated and the corresponding real 
world data, and to superimpose a regular grid over the 
map. Three basic techniques can then be used to describe 
the distribution of phenomena. Counting the number of 
data points in a bounded area is the simplest one. The 
nearest-neighbor technique and contiguity analysis are 
more sophisticated methods. After the distributions are 
described by one of these methods, the statistical tests 
listed in the previous paragraph can be applied. 
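
A minimal Python sketch of the first two of these techniques, grid counts and nearest-neighbor distances, might look as follows; the point coordinates and the one-kilometre cell size are invented for illustration:

    # Describe a point pattern by overlaying a regular grid (cell counts)
    # and by the mean distance from each point to its nearest neighbour.
    import math
    from collections import Counter

    def cell_counts(points, cell_size):
        """Count the points falling into each grid cell of the given size."""
        return Counter(
            (int(x // cell_size), int(y // cell_size)) for x, y in points
        )

    def mean_nearest_neighbour(points):
        """Average distance from each point to its nearest neighbour."""
        return sum(
            min(math.dist(p, q) for q in points if q is not p) for p in points
        ) / len(points)

    # Hypothetical dwelling locations (map coordinates in kilometres).
    simulated_points = [(0.4, 1.2), (1.1, 0.9), (2.3, 2.8), (2.7, 2.5), (0.6, 2.1)]
    observed_points = [(0.5, 1.0), (1.3, 1.1), (2.1, 2.9), (2.9, 2.2), (0.8, 2.4)]

    print("simulated cell counts:", dict(cell_counts(simulated_points, 1.0)))
    print("observed cell counts: ", dict(cell_counts(observed_points, 1.0)))
    print("mean NN distance, simulated:", round(mean_nearest_neighbour(simulated_points), 2))
    print("mean NN distance, observed: ", round(mean_nearest_neighbour(observed_points), 2))

Once the two patterns are reduced to comparable counts or distances in this way, the statistical tests named above can be brought to bear on them.
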
The following diagram (Figure 4) illustrates the steps or points at 
which verification and validation theoretically can take 
place. At least in principle, verification can be at- 
tempted between each and every level of abstraction, 
working from the top down, as it were, while validation 
is essentially a comparison of the final product with the 
real world. 
Together verification and validation tell the analyst how 
well designed and useful the simulation model is. Only 
after knowing this can it be properly used as a pseudo- 
laboratory for urban experiments. 
ILLUSTRATIONS OF VERIFICATION AND VALIDATION 
OF THE TOMM MODEL 
It is practically impossible, of course, to document a 
complete verification and validation of a simulation 
model representing so complex a system of population, 
employment and land use as does TOMM. The process of 
verification goes on continuously from the inception of 
the modeling effort, through each step of abstraction, 
and throughout the time of the model’s use. Validation 
usually is attempted only after the model is programmed 
and by its nature can never be complete. There is no such 
thing as a perfectly valid model, and every model becomes 
less valid with the passage of time, as the structure and 
phenomena of the real world change. What follows below, 
then, are partial and illustrative examples of verification 
and validation, with reference to the TOMM model. 

[Figure 4: Verification and Validation Work Down the Levels of 
Abstraction. Diagram levels, top to bottom: Computer Program, Model, 
Manageable Set of Relationships, General Conceptual Scheme, Real World.] 
Verification 
The process of verification can be conceived of as walk- 
ing down the staircase of abstraction and comparing each 
level for veracity with the one below. We start by de- 
bugging the model. One may look first for errors in the 
computer program itself, such as variables that might be 
added twice, or the use of improper subscripts. Most such 
errors reveal themselves during the program run, when it 
attempts to divide by zero, or generates obviously un- 
realistic numbers, or simply falters and stops. 
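
A sketch of such run-time guards, in Python with invented names and bounds, makes these failures announce themselves at the point where they occur rather than quietly corrupting later results:

    # Run-time guards that surface programming errors as soon as they occur.
    import math

    def density(population, area):
        """Guarded division: a zero or negative zone area is a data error."""
        if area <= 0:
            raise ZeroDivisionError(f"zone area {area} is not positive")
        return population / area

    def check_zone_totals(step, populations, upper_bound=10_000_000):
        """Halt the run if any zone total is impossible after this step.

        The upper bound is an invented plausibility limit, not a TOMM value.
        """
        for zone, pop in enumerate(populations):
            if not math.isfinite(pop) or pop < 0 or pop > upper_bound:
                raise ValueError(f"step {step}, zone {zone}: unrealistic total {pop}")
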
More subtle errors may have been made in translating the 
model to a program. Failing to place limiting conditions 
to prevent the number of households from becoming ne- 
gative, failing to distinguish between continuous and 
discrete variables, or forgetting to establish initial con- 
ditions for a variable, are examples of such errors. These 
are oversights, details the analyst normally takes as given 
in his own thinking, but that the unimaginative computer 
does not understand. It cannot appreciate the ridiculous- 
ness of a tract of land populated by a negative number of 
households. 
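
A short Python sketch, with invented names, shows how such implicit assumptions can be written out explicitly when the model is translated into a program:

    # Make the analyst's implicit assumptions explicit in the program.
    def update_households(current, net_change):
        """Advance a tract's household count by one period's net change."""
        if current is None:
            # Forgotten initial condition: fail loudly instead of letting
            # an undefined value propagate through the run.
            raise ValueError("household count has no initial value")
        # Households are a discrete quantity, not a continuous one.
        updated = round(current + net_change)
        # Limiting condition: a tract can never hold a negative number
        # of households, however large the implied out-migration.
        return max(updated, 0)

    print(update_households(120, -150.4))  # clipped to 0, not -30
    print(update_households(120, 30.6))    # rounded to 151
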
Coming down the levels of abstraction, we next look for 
errors in the logic of the modeling process itself. When 
the manageable theory, or set of relationships, was con- 
verted to a model the analyst may have failed to capture 
a subtle or apparently minor but essential aspect of the 
system. Such errors may reveal themselves only in special 
cases, or for a very few combinations of values of the 
variables, and thus pass undetected through many runs of 
the model that appear reasonable. Such a logical error 
was found in the TOMM model only after it had been run 
repeatedly. The relationship of population density to 
distance from the city’s center had not been properly in- 
corporated into the model, and, under certain circum- 