Science in the Eyes of a Scientist
By Mark Perakh
Posted on November 3, 2001
Updated August 11, 2003
Contents
Introduction
Features of good, bad and pseudo-science
The language of science
Definitions
The building blocks of science
Methods of experimentation
Data
General discussion of data
Reproducibility and causality
Ceteris Paribus
Errors
Complex systems
Bridging hypotheses
Laws
Models
Cognitive hypotheses
Theories
Science and technology
Science and the supernatural
Conclusion
1. Introduction
In many articles on this site, especially those discussing books and papers devoted to the most recent incarnation of creationism, disguised as the so-called "intelligent design theory," or purportedly asserting harmony between science and the Bible, the question repeatedly arises of what constitutes legitimate science and how to distinguish it from either bad science or pseudo-science. In this paper I intend to discuss that question in general terms, thus laying a foundation for its subsequent discussion in connection with particular books and articles.
Obviously, this distinction is
possible only if a certain definition of science has first been agreed upon. Unfortunately, it seems that a rigorous definition of science is hard to
offer, and perhaps even impossible, for reasons which may become clear from the
following discourse. In a
non-rigorous and most general way, science can perhaps be defined as a
human endeavor consciously aimed at acquiring knowledge about the world in a systematic and
logically consistent manner, based on factual evidence obtained by observation
and experimentation.
As should become clear from the following discourse, it is hard to adhere consistently to the above definition, because science itself seems inevitably to encompass elements that lie beyond its limits.
The subject of this article is traditionally discussed within
the framework of the philosophy of science. My intention is to look at the
problem from a different standpoint – that of a scientist rather than of a
philosopher. I envision this as a
glance from within science rather than from a philosophical outside. Such an approach entails considerable difficulty. Whereas philosophers may legitimately avoid discussing the specific nuts
and bolts of scientific work, a glance from within requires analyzing specific
details of the process of scientific inquiry and this may pose a serious
obstacle to non-scientists trying to understand my thesis. I will have to avoid, on the one hand, the Scylla of engaging in the arcane specifics of modern, highly sophisticated scientific methodology and, on the other hand, the Charybdis of oversimplification, that is, of limiting myself to the most general notions and thus effectively becoming a dilettante philosopher. I do not know whether I can succeed in such a balancing act between the two approaches, and I will probably be unable to eschew the trap of becoming an amateur philosopher anyway.
I also realize that it may be
simply impossible to analyze the essence of science as though all of its
branches were characterized by the same set of features. Science is very complex and multifaceted, so an analysis which is quite adequate for some of its fields may be irrelevant in others. My approach will be relevant first and foremost to such fields as physics, chemistry, the engineering disciplines, and biology, while many aspects of my discourse may be less applicable to medical science, and even less to history, esthetics, literary criticism, and the like.
One of the first questions in any discussion of science is what its driving force is. While
different scientists may be driven by different motivations, I submit that the
main reason for the existence of scientific research is the curiosity which
seems to be inherent in human
nature. This statement does not
imply that every man and woman necessarily possesses such curiosity. On the
contrary, the vast majority of people lack it. As the great Russian poet Aleksandr Pushkin said (he meant only the
Russian people, but his words can probably be applied to humanity as a whole)
"We are lazy and not curious." However,
there has always existed a fraction of people differing from others in that they
are driven by insatiable curiosity. Apparently,
some ethnic groups, at different historical periods and for unknown reasons, had
a larger fraction of such people than others.
It seems that historically attempts to satisfy the curiosity in
question were made in three distinctive ways. These are religion, philosophy and
science.
(Like any simple scheme, the above classification does not fully represent all aspects of the human drive to understand the world. For example, the three-part scheme leaves out such important forms of human endeavor as art. However, it is convenient for the purposes of the main subject of this article.)
At the dawn of civilization, when humankind's knowledge of the
world was extremely limited by the narrow experience within a small locality,
the curiosity about the surrounding world could be satisfied only by means of
religion. For
example, until recently, the Indian tribes in America, as well as native tribes
in Africa and Polynesia, all had certain religious systems, but none of them had
developed either philosophy or science.
Religion is a way of
explaining the world which is not based on evidence but largely on imagination
and fantasy. The religious explanation of the world does not require either
factual evidence or logic. The holy books of the world's largest religions are not only incompatible with each other but are also each full of internal inconsistencies and descriptions of improbable events, yet this has in no way impaired their impact on large masses of people. Although religions often borrowed concepts from each other, most of them claim a monopoly on truth. They have an enormous power of survival despite any evidence against their tenets.
As the boundaries of the explored world expanded, the next steps in the
attempts at explaining the world were philosophy and science. It is hard to
establish a definite chronological sequence of their emergence. In some cases
science preceded philosophy. Indeed, well-developed science is known to have existed in ancient Babylon and Egypt, while philosophy does not seem to have left traces of its existence in that remote past. In other cases philosophy and science seemed to emerge simultaneously, often being for a while indistinguishable from each other. Whatever simple scheme is suggested, it will necessarily be incomplete and will contradict at least some features of the complex relationship among the three ways of comprehending the world listed above.
Like religion, philosophy does not necessarily require factual evidence (although it may use it), but, unlike religion, philosophy requires logic. Unlike religion, philosophy rarely claims a monopoly on truth (although some philosophical theories did just that; an example is Marxist philosophy).
Finally, as more factual evidence about the world accumulated, science took over as a way to gain knowledge in a systematic manner, trying to base its conclusions on both factual evidence and logic. In the course of millennia science underwent multiple modifications, for many centuries remaining indistinguishable from philosophy. Modern science, which is quite distinct from philosophy, developed largely during the most recent few centuries.
In a certain sense, the relation between science, philosophy and religion can be
represented, necessarily simplifying the actual situation, by three concentric circles. The inner circle envelops scientific knowledge. Its radius is constantly increasing as more and more knowledge is accumulated. Beyond that inner circle of scientific knowledge lies a concentric circle of larger radius, the area between the two circles representing the domain of philosophy. Philosophy reigns where science has not yet acquired sufficient knowledge, thus leaving room for speculation limited only by logic. Finally, an even larger concentric circle surrounds the two mentioned circles, those of science and philosophy. This is the domain of religion, where neither science nor philosophy has so far managed to offer good explanations of the world. The inner circle, that of science, is continually expanding at an ever increasing rate as science accumulates more and more knowledge. This expansion of science pushes philosophy into a gradually narrowing domain where it still has the freedom of logical explanation not based on rigorously collected factual data. This, in turn, pushes religion even further toward the margins of the comprehension of the world. The enormous power of survival obviously inherent in religions is due not to their gradually shrinking explanatory abilities but rather to their ability to satisfy the emotional needs of man.
2. Features of good, bad, and pseudo-science
We have to distinguish between
"good" and "bad" science as well as between genuine and pseudo-science. While these two dichotomies are of different nature, sometimes it is
difficult to distinguish between bad science and pseudo-science.
Bad science differs from good
science in that it employs the same legitimate approach to inquiry as good
science but at some stage of that inquiry fails, often inadvertently, to follow
the path of an objective and/or logical process and goes astray in its
conclusions. Generally speaking,
good science serves the progress of science itself and of its derivatives in
engineering, medicine, and technologies of various kinds whereas bad science is
rather promptly discarded. Since bad science is usually recognized rather
swiftly, it is often just a nuisance and is rarely so harmful as to seriously
impede the progress of good science.
A case of bad science can be
exemplified by the story of so-called cold fusion. The discovery of cold fusion was announced by two groups of scientists, one of which consisted of the professors of electrochemistry Martin Fleischmann and B. Stanley Pons and their associate M. Hawkins, while the other was a group headed by Professor Steven Jones.
Whereas I had never heard of Pons, Hawkins, and Jones before they announced their discovery, I was acquainted with Professor Fleischmann. In the seventies, we both served as members of the Council of the International Society of Electrochemistry (of which Fleischmann later served as president). I also twice visited the University of Southampton, where Fleischmann was a professor of electrochemistry, and gave a talk there. Since Fleischmann's research was in a field rather remote from my own, I was not really familiar with his scientific output, but I knew of his reputation as a serious electrochemist. Therefore, when I first heard about cold fusion I was both greatly impressed and puzzled. I was impressed by the mere claim that two electrochemists, using an experimental set-up which could only be characterized as primitive, had reportedly achieved a revolutionary result of immense importance. I was puzzled because that result seemed rather unlikely (in particular because these researchers did not observe the emission of neutrons expected from a nuclear fusion process, and, moreover, their description of the experimental procedure seemed too vague). On the one hand, it seemed hardly probable that experienced scientists would have made such an extravagant claim without having meticulously studied the claimed phenomenon; on the other hand, it seemed equally improbable (although not impossible) that their claim was true. Soon, several
groups of researchers were busy with verification of the sensational cold fusion
reports. Most of them rather rapidly came
to the conclusion that the cold fusion effect could not be reproduced. The sensation died and most of the researchers discarded the cold fusion
report as bad science. (Despite the disappointment over the evident irreproducibility of the initial results of Fleischmann and Pons, some research in that direction is still going on, and regular conferences devoted to the subject are being held; although no actual cold fusion has been reported so far, these efforts may be useful if we remember a saying attributed to Rutherford. This famous physicist was once asked why he allowed one of the researchers in his lab to conduct an obviously hopeless and futile study. Reportedly, Rutherford answered: "Let him do it. He will hardly find what he is looking for, but maybe he'll find something else which happens to be useful.")
Pseudo-science,
on the other hand, is an endeavor which essentially is not real science but
something else disguised in quasi-scientific clothes for the sake of a certain
agenda, usually having nothing to do with science and often hostile to it. Pseudo-science may be quite detrimental to genuine science because of its ability to disguise itself successfully as genuine science and thereby force on science an unnecessary defensive effort, diverting valuable human resources needed for the progress of genuine science. A little later in this article I will suggest some criteria for
discriminating between genuine and pseudo-science and suggest appropriate
examples.
It seems useful to distinguish
between the language of science (and definitions
as a part of it) and its building blocks.
3. The language of science
The language of science is
overwhelmingly mathematical. First,
this statement can justifiably be viewed as a simplification. The
interaction between science and mathematics is neither straightforward nor
one-dimensional. Second, an adequate analysis of mathematics as the predominant language of science would require a preliminary analysis of what a language is. This, though, would lead well beyond the scope of my topic. Third, this
statement may be disparaged as allegedly dismissing non-mathematically expressed
knowledge as non-science. Of
course, it is just a question of semantics. Is psychology a science? Is history a science? Surely, these and many
other fields of inquiry are legitimately listed among sciences even though
mathematics has so far found only limited room for itself within those
sciences. (A different story is that mathematics finds its way into
those sciences via the ever increasing penetration of computers and artificial
intelligence.) What is more
important, the path these sciences follow seems to repeat that traveled by
physics, chemistry, and biology. Centuries
ago, actually before Johannes Kepler and Galileo
Galilei, physics used little mathematics and was largely a descriptive science,
much like psychology or history are today. It took time for the marriage of physics and mathematics to occur, but
when it happened, the impetus for the development of powerful theories of
physics was enormous. "Mathematics
is a language," said the great American physicist Josiah Willard Gibbs. Modern mathematics is so multifaceted that it is probably more proper to
speak about mathematical languages in plural.
Mathematics is, though, more than a mere language (or languages). It is a compressed logic immensely facilitating a theoretical
comprehension of experimental facts. Without
mathematics, contemporary physics would not exist. Hence the statement that the language of science is mathematics is fully applicable to the physics of today, and probably also to the psychology or history of tomorrow.
The mathematical language is overwhelmingly used in science not
because it is more powerful than the polymorphous natural vernacular. The opposite is true – the polymorphous natural language
is, generally speaking, more
powerful. The use of mathematical
language in science is dictated not by its power but by necessity, as will be
discussed in more detail when we turn to the concept of definition. Actually, whereas mathematical language immensely facilitated
and accelerated the development of science, toward the middle of the 20th
century it has become evident that the scientific study of the so-called large
or poorly-organized systems (which will be discussed in subsequent sections) is
sometimes impeded by a strict adherence to the rigorously enforced use of a
uniform mathematical language. The
sharp demarcation between natural language with its inherent ambiguity of terms
and the rigorous uniform mathematical language started gradually diffusing.
Hence, although the mathematical language is still the prevalent,
largely preferred and often simply necessary way of scientific discourse,
polymorphous language gradually finds its way into that discourse, a phenomenon
which is often viewed simply as a modification of the mathematical language
itself. Indeed, mathematics constantly develops new tools (such as chaos theory, non-standard logic, and the like) which help it catch up with the needs of physics and other disciplines in refining their mathematical representation.
The question of why mathematics is so applicable in physics has puzzled many scientists and philosophers alike.
Physics is a discipline in
which mathematics is so intertwined with the body of that science that often it
seems hard to see where physics ends and mathematics starts. Physics and mathematics have developed historically in tandem, constantly exchanging ideas and methods. The greatest mathematicians often happened also to be outstanding physicists, and vice versa. Isaac Newton developed some of the most fundamental theories in physics and also invented a revolutionary part of mathematics, the differential calculus. Leonhard Euler made enormous contributions both to mathematics and to physics. New mathematical methods which seemed to be of a highly abstract nature
(like functional analysis) swiftly found their way into specific fields of
physics while methods developed to tackle specific problems of physics (like
vector algebra and then vector calculus originally invented in hydrodynamics)
very soon were generalized to become chapters of abstract mathematics.
There is something fascinating
in that mysterious usefulness of mathematics in physics despite the obvious
difference in their treatment of objects. Here
is a simple example. We all know that matter has a discrete structure. It
consists of tiny particles. It is impossible to visualize these particles since
their behavior is enormously different from the behavior of the macroscopic
bodies we see, hear, taste, smell and touch. Whatever these particles really are like, we know that matter is not
continuous. However, we routinely
apply in physics an abstract mathematical concept of infinitesimally small
quantities, like infinitesimally small electric charge and the like. This mathematical trick works very effectively despite its
obvious contradiction of our knowledge of the discrete structure of matter. The concept of an infinitesimally small charge is contrary to the
reliably established facts of the discrete nature of electric charge.
I believe the described
discrepancy between mathematics and physics does not undermine the extreme
usefulness of mathematics in physics for the same reason that the use of
physical models does not undermine the study of real objects. One of the following sections will discuss the use of models in science
in detail. Preempting that
discussion, we may note that models differ from the real objects in that the
models share with the real objects they represent only a small fraction of
properties while ignoring the overwhelming majority of the latter. A good model shares with the real object only those few properties which
are crucial for the problem at hand. For example, in Newton's celestial mechanics the model of a planet is a point mass. It has only one property in common with a real planet, namely its mass. Replacing a real object with its model, while introducing a certain, usually insignificant imprecision, enables scientists to tackle otherwise enormously complex problems. Applying an abstract mathematical concept such as an infinitesimally small charge is likewise similar to using a model in which the discrete structure of the real electric charge is ignored as inconsequential for the problem at hand. Since those properties of an electric charge which are crucial for the problem at hand are accounted for in the theory that assumes a continuous structure of the electric charge, the powerful mathematical machinery works very well, immensely facilitating the analysis of the behavior of real charges.
Therefore the contradiction between abstract mathematical concepts and the real properties of the objects of study in physics is of more philosophical than practical significance.
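To see why this contradiction is practically harmless, consider a minimal numerical sketch (the one-microcoulomb charge below is an arbitrary, illustrative value): the "granularity" introduced by the discreteness of charge is smaller than any realistic measurement error by many orders of magnitude.

```python
# Minimal illustration (illustrative numbers): how coarse is the "granularity"
# of electric charge compared with a typical macroscopic charge?

ELEMENTARY_CHARGE = 1.602176634e-19  # charge of one electron, in coulombs

def relative_granularity(total_charge_coulombs: float) -> float:
    """Ratio of one elementary charge to the total charge.

    This is the largest relative 'error' one can make by pretending the
    charge is continuous, i.e. by ignoring that it comes in whole electrons.
    """
    return ELEMENTARY_CHARGE / total_charge_coulombs

if __name__ == "__main__":
    q = 1e-6  # a very modest macroscopic charge of one microcoulomb
    print(f"Relative granularity: {relative_granularity(q):.2e}")
    # Prints roughly 1.6e-13 -- far below any practical measurement precision,
    # which is why the continuous (infinitesimal) model works so well.
```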
Still, the amazing usefulness
of mathematics in physics seems to be a vexing problem for curious minds. It has
so far not received a commonly accepted interpretation, although more than once
various explanations have been suggested.
I believe that the fact of
mathematics being the language of science can be understood through a rather
simple and general idea. One of the
requirements for an approach to be genuinely scientific, which is not limited to
science alone, but is equally valid for any discourse which may be viewed as
reasonable, is to be logical. Logic
itself cannot serve to distinguish between correct and incorrect statements,
because logic is simply the proper path from a premise (or premises) to a
conclusion. The logical conclusion
is only as good as the premise. However,
if the premise is correct the conclusion reached via a logical procedure is
reliable. Therefore it is
imperative to use logic at every stage of a scientific inquiry. Mathematics is the most powerful and the most concise form of logic. Naturally, science had to resort to mathematics as an extremely powerful
tool of logical discourse.
Leaving this philosophical problem aside, we may assert that, generally speaking, mathematics is the proper language for science.
4. Definitions
In
many of the following articles, the question will arise of what constitutes a
proper definition of the concepts under discussion (see, for example, A Consistent Inconsistency,
sections on probability and complexity). Therefore it seems appropriate to
provide first a general discussion of what the features of proper definitions
are and which types of definitions can be acceptable in a scientific discourse.
Obviously,
a fruitful scientific discourse is possible only if certain definitions are
agreed upon. Proper definitions are therefore of the utmost importance and
introducing them requires the maximum possible care.
It
seems a platitude to say that a definition is essentially an
agreement regarding the meaning of a term.
In everyday discourse, we are rarely concerned with the precise definition of the words used in the vernacular. The vagueness of the vernacular is actually its advantage, supplying flexibility to the discourse and enlivening it. The vagueness of the vernacular is also connected to its redundancy, which facilitates communication. Redundancy, by providing a certain amount of excess information (in the layman's sense of that term), helps to compensate for the possible distortion of a message due to the unavoidable noise accompanying its transmission.
Scientific terms, on the other
hand, usually require precise definitions and this is one of the features which
distinguish scientific discourse from an everyday chat.
A term may have quite different
definitions depending on the branch of science in which it is being utilized.
For example, the term vector in mathematics and its applications in physics means a quantity fully determined by its magnitude and direction (as opposed to a scalar, which is determined by its magnitude alone, without any reference to direction). On the other hand, in pathology the same term, vector,
means an organism, such as a mosquito or tick, that carries disease-causing
microorganisms from one host to another. This example illustrates the utmost
importance of rigorous definitions of concepts used in each field of science.
Science is a collective
enterprise. It is impossible without communication within the community of
participants in that enterprise. Communication
is predicated on the acceptance of certain common definitions of concepts of
which the edifice of a particular science is composed.
Formulating a definition of a
concept is a logical procedure which, although it may vary in particular
details, is subordinated to certain general rules common for all the variations
of definitions. One of the most
salient features of a definition is that it presupposes a certain preliminary
knowledge relevant to the subject of definition. No concept can be defined
unless the meaning of certain underlying concepts has already been agreed upon. If the definition of a concept is attempted in the absence of some
underlying relevant knowledge, such an attempt will not provide a legitimate
definition, but at best a quasi-definition which will necessarily be ambiguous
and logically deficient.
A corollary to the last
statement is that every field of science starts by introducing a seminal concept
for which no real definition can be given. The word "starts" in that sentence implies a logical rather than
chronological order of steps in building up a field of science.
There are several possible
classifications of definitions.
One such classification is
based on the logical structure of a definition. According to that
classification, three types of definition can be distinguished. They are a) Deductive definition; b) Inductive definition, and c)
Descriptive definition.
A deductive definition has the
form of a triad. It necessarily
comprises three elements which are: a) a general
concept which encompasses a certain set A of elements and which is assumed to be
commonly understood within the appropriate circle of the participants of a
discourse; b) a particular concept
which is the target of the definition and which encompasses a certain subset B
of elements of the set A, and c) qualifiers
which specify those features of the elements of subset B distinguishing the
latter from all other elements of set A not included in subset B. Obviously, for a deductive definition to be meaningful, the participants
of the discourse must have a preliminary understanding, assumed to be identical
for all participants of the discourse, first of the general concept which
determines set A and second of the qualifiers characterizing subset B.
A deductive definition also usually contains sentential elements, either explicit or implicit, of the form "such ... that," "that (those) ... which (who)," and the like. These sentential elements can be omitted, but they are nevertheless implied.
For example, here is a deductive definition of the term "colt."
A
colt is such a
horse that is a)
young, and b) male.
In this definition the general concept is that of
"horse." It is assumed to be
known to all participants of the discourse. If it were not known, the definition would make no sense, as it would define the target, "colt," through an unknown concept requiring its own preceding definition. By the same token, the qualifiers, of which there are two in this case (young and male), are assumed to be commonly understood by all participants of the discourse; otherwise the definition of a colt would be useless. All horses,
regardless of their being colts or not colts (i.e. fillies, geldings, stallions
and mares) are elements of set A, while only those horses which meet the
qualifiers (i.e. are male and young) are
elements of subset B.
Here is another example.
Daughters
of Ann and Andrew are those of their
kids who are female.
I have intentionally chosen an
example which sounds primitive in the extreme because, despite its simplicity,
it is representative of a deductive definition.
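The logical skeleton of a deductive definition can also be rendered as a short sketch in code. The example below is purely illustrative: the class `Horse` and the predicates `is_young` and `is_male` are hypothetical stand-ins for the "commonly understood" general concept and qualifiers, and the defined subset B is simply whatever passes both qualifiers.

```python
# A deductive definition as a filter: subset B = elements of set A
# that satisfy all qualifiers. Names and data here are hypothetical.

from dataclasses import dataclass

@dataclass
class Horse:               # the general concept: set A is "all horses"
    name: str
    age_years: float
    sex: str               # "male" or "female"

def is_young(h: Horse) -> bool:   # qualifier (a): young
    return h.age_years < 4

def is_male(h: Horse) -> bool:    # qualifier (b): male
    return h.sex == "male"

def colts(horses: list[Horse]) -> list[Horse]:
    """Subset B: the horses singled out by the definition of 'colt'."""
    return [h for h in horses if is_young(h) and is_male(h)]

herd = [Horse("Star", 2, "male"), Horse("Bella", 3, "female"),
        Horse("Duke", 9, "male")]
print([h.name for h in colts(herd)])   # ['Star']
```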
According to the goal of a definition, deductive definitions can be of three basic types: a) phenomenological, b) taxonomic, and c) explanatory. The distinction between these three types may sometimes be quite clear, while in other cases it may be blurred. For example, a taxonomic definition may at the same time be either phenomenological or explanatory. Also, a definition may be viewed as either phenomenological or explanatory depending on the requirements placed on the degree of its explanatory power. The distinction in question is determined not so much by the content of a definition as by its goal.
A taxonomic definition is
always a member of a set of "parallel" definitions all of which contain the
same general concept but differ in qualifiers. For example, the definition of a colt
discussed above is taxonomic, as it is a member of a set, including also
definitions of filly, mare, stallion
and gelding.
From another standpoint,
deductive definitions can be either qualitative
or quantitative.
Let us review an example of a
definition taken from the elementary course of physics: Semiconductors are those
materials whose electric conductivity
increases with temperature.
In this case set A comprises all materials whereas subset B comprises only semiconductors, while the qualifier is a
certain feature of the behavior of the elements of subset B which distinguishes
them from all other materials – namely the temperature dependence of their
electric conductivity. This definition belongs to the phenomenological type. It determines the target of definition by specifying a certain behavior
of elements of subset B without providing an explanation of why they behave as
they do. Assume now that we
need a definition of "semiconductor" which is of explanatory type, i.e.
containing a reference to the mechanism underlying the behavior of conductivity
described in the phenomenological definition. We discover that the mechanism in question is not the same for all
semiconductors, so that there are three distinctive classes of semiconductors,
in all of which electric conductivity increases with temperature, but the
underlying mechanisms of that phenomenon are different. Therefore, in order to
provide an explanatory rather than a
phenomenological definition of a semiconductor, we have to first introduce
taxonomic definitions for each of the three classes of semiconductors. One such class is the so-called intrinsic semiconductors (the two other
classes are the donor-type or n-type semiconductors, and the acceptor-type or
p-type semiconductors). A taxonomic
definition of the intrinsic semiconductors can be given as:
An intrinsic semiconductor is such a semiconductor that
is not doped with either donor or acceptor elements. (The term doping means deliberately inserting into a material, A, small amounts of another material, B, which alters the properties of A in a desired manner.) Now,
as the concept of an intrinsic
semiconductor has been taxonomically defined, we can give an explanatory definition
of that concept: Intrinsic semiconductors are such
materials in which the energy gap between the conduction and the valence energy bands is narrow.
This
definition is better in that it specifies the intrinsic property of elements of
subset B which predetermines their behavior. However, this definition, while explanatory, is only qualitative. A
still better scientific definition would be quantitative. It would numerically specify
the meaning of the term "narrow."
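A quantitative form of that definition can be pictured as a numeric predicate. In the sketch below the cutoff of 3 electron-volts for a "narrow" band gap is an assumed, illustrative threshold rather than an established value.

```python
# Hypothetical quantitative form of the explanatory definition:
# "narrow" band gap is pinned down by a numeric threshold (assumed here).

NARROW_GAP_MAX_EV = 3.0   # assumed cutoff for "narrow", in electron-volts

def is_intrinsic_semiconductor(band_gap_ev: float, doped: bool) -> bool:
    """True if the material is undoped and its band gap is 'narrow'."""
    return (not doped) and band_gap_ev < NARROW_GAP_MAX_EV

# Illustrative materials (band gaps are approximate textbook values):
print(is_intrinsic_semiconductor(1.12, doped=False))  # silicon -> True
print(is_intrinsic_semiconductor(9.0,  doped=False))  # SiO2 insulator -> False
```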
If either a general concept
encompassing set A or the qualifiers distinguishing subset B from the rest of
set A are unknown, the triad-like deductive definition is impossible.
The first type of situation is common when the seminal concept of a science is to be chosen. Since at the starting point of a new field of science there are as yet no defined concepts in the arsenal of that science, its starting concept cannot be deductively defined. In this case the first element of the triad is absent, hence
the triad cannot be logically constructed. A good example is the concept of energy in physics.
Energy is the most fundamental concept in physics. It is also its
most seminal concept (not in a chronological but in a logical sense). In a
certain sense, physics is a science about energy transformations. The law of
energy conservation is the most fundamental basis of the science of physics.
However, since physics has no concept which is more general than energy, the
latter cannot be deductively defined.
Of course, one can find in textbooks on physics, especially those
published many years ago, any number of alleged definitions of energy, but most
of them are logically deficient and are not real rigorous definitions but rather
attempts at explaining the meaning of that concept without really defining it in
a logically consistent way. Some of
those quasi-definitions are plainly absurd. One can find in some old textbooks such a quasi-definition of energy as the assertion that energy is "the ability of a body to perform work," which is a meaningless statement appealing to the layman's understanding of the term "work."
The famous physicist Richard
Feynman, in his excellent course "The Feynman Lectures on Physics,"
prudently avoided attempts to define energy in a rigorous way, providing instead
an explanation of the term energy as
"a certain quantity, which we call
energy, that does not change in the manifold changes which nature undergoes."
(R. P. Feynman, R. B. Leighton and M. Sands, "The Feynman Lectures on Physics," Addison-Wesley Publishing Co., 1963, v. 1, p. 4-1). This is a descriptive
definition which cannot be replaced by a rigorous triad because we lack any
concept which is more general than energy, so the vague term a
certain quantity is utilized to avoid the unsolvable problem of defining the
seminal concept.
Another case when the rigorous triad-like deductive
definition is impossible is when no qualifiers are known.
In such cases we know the general concept and know that, in principle,
the target of the definition is a sub-category within that general concept, but
we don't know what distinguishes that sub-category from other sub-categories within the same general concept. For
example, let us say John and Mike
are identical twins. Obviously they are two different persons, both being,
though, sons of Andrew and Mary. However, we don't know how to distinguish
between John and Mike so we can't provide a triad forming the proper
definition of either who Mike is or who John is. Any attempt to provide a
definition, say, of Mike, will result in a tautology which, although necessarily
true, is of no substance (for example, an assertion that Mike is he of the twins
whose name is Mike). At best, such
a definition can be of what can be called "accidental" type, which is not
really useful because it refers to some irreproducible (accidental) qualifiers
and therefore cannot be applied under conditions which are in any way different
from the specific situation used for generating such qualifiers. For example,
such a definition can sound like: Mike is that
son of Andrew and Mary who on
Tuesday, January 3, 1978 opened the door of their house when we rang the bell. Although
that definition formally is in the form of a deductive triad, its qualifier is
of an accidental type and cannot be used to distinguish between the twins at any time other than Tuesday, January 3, 1978.
While a deductive quantitative
explanatory definition is the most desirable scientific definition, quite often
we have to satisfy ourselves with different types of definitions. The reason for that is not necessarily the impossibility in principle of formulating a deductive triad, but often simply the history of the development of a certain range of concepts. If experimental work generates a set of data, this often provides the basis for
formulating an inductive definition of a new concept which becomes
indistinguishable from a specific type of law
of science. (We will discuss
the concept of laws of science in detail a little later, but at this point we
can note that a law of science which results from a process of induction is not
equivalent to those laws which constitute parts of a scientific theory. Inductive laws, which are often no more than inductive definitions, belong to the phenomenological type. They state facts without explaining them.)
Inductive definitions which
spell out a law can often be characterized also as relational definitions – they state a certain relation between two
or more concepts. For example,
assume a scientist studied the behavior of the metal Gallium at various
temperatures. Repeating the tests many times, she found that at a pressure of 10⁵ Pascal, pure Gallium melts at 302.5 Kelvin. She postulates that her data, consistently showing that result, are not accidental but reflect a law. She formulates a law: "Pure Gallium melts at 302.5 K if the pressure is 10⁵ Pascal."
Her postulate resulted from the procedure of induction wherein she proceeded
from a set of individual events to generalizing it as a law. Her formulation defines a law
and therefore is also a definition. It
is a relational definition as it establishes a relation between a certain event
(the melting of Gallium) and two factors, temperature and pressure. It is also a phenomenological definition, since it states a fact without explaining its nature.
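The inductive step itself can be caricatured in a few lines of code; the measurement values below are invented for illustration, and the acceptance threshold for the scatter is likewise an arbitrary choice.

```python
# Toy sketch of induction: repeated measurements of the melting point of
# gallium at 1e5 Pa (the values below are invented for illustration).

from statistics import mean, stdev

measurements_kelvin = [302.6, 302.4, 302.5, 302.5, 302.6, 302.4]

avg = mean(measurements_kelvin)
spread = stdev(measurements_kelvin)

# The scientist postulates a law only because the results keep agreeing
# within the estimated error, not because any single run "proves" it.
if spread < 0.2:
    print(f"Postulated law: pure gallium melts at {avg:.1f} K at 1e5 Pa")
else:
    print("Data too scattered to postulate a law; more measurements needed")
```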
Even if an inductive definition
is not stating a law, the path to such a definition is similar to the procedure
of scientific induction wherein the results of observations or experiments
repeated many times are generalized, postulating that the data in question are
the manifestation of a law of nature. We will discuss the transition from a set
of observational or experimental data to a postulate of a law in detail in a
subsequent section. At this point,
let us simply illustrate the idea of inductive definition by an example.
Imagine that a zoologist whose name is Johns went to explore
jungles in, say, some part of Africa, and discovered there a number of animals
hitherto unknown. Let's say all these newly discovered animals happened to be
various kinds of cats – cat M, cat N, cat L, etc. Each of these cats differs
in certain characteristics from other cats, but all of these cats have certain
features in common distinguishing them from cats found in other parts of the
world. The zoologist suggests a name for this newly discovered subfamily of
cats, say Johns's cats. Johns provides a detailed description of each of the kinds of Johns's cats: M, N, L, etc. He then has to provide a
definition of the new subfamily of cats as a whole. He defines them as follows:
"Johns's cats is a subfamily of cats comprising cats M, N and L." He may also provide a description of the features which are common for
all Johns's cats, but are absent in all other cats (for example, all Johns's
cats may have a peculiar shape of their ears, which is not observed in any other
cats). This is an inductive definition, generalizing subcategories
into a broader super-category.
Inductive
definitions, by their nature, are open
definitions, in that the list of their elements can be expanded if new elements are discovered which can be included in the definition. On the other hand, deductive definitions are closed
in that they encompass all possible elements which can be included in the
definition.
The
lack of a rigorous definition, agreed upon by the participants of a discourse,
may render an otherwise sophisticated discourse too vague to be genuinely
scientific and may deprive even a very ingenious argument of its evidentiary value.
5. The building blocks of science
The principal building blocks of science are
1) Methods of
experimentation,
2) Data,
3) Bridging hypotheses,
4) Laws,
5) Models,
6) Cognitive hypotheses,
7) Theories.
A warning seems to be in order.
The above list may create the impression that science can be presented by a neat
straightforward scheme, as a combination of clearly distinguishable building
blocks. No such neat scheme exists. The above listed building blocks of science can overlap, intersect and
emerge in an order different from the above list. However, the division of the body of science into supposedly separate
building blocks, besides providing convenience in analyzing the extremely
complex and multifaceted form of human endeavor we call science, also reflects
the real composition of science, even though the boundaries between its
constituents may sometimes be diffuse.
We will discuss the above
concepts in detail. Before doing that, note that for each of the above elements
of good science there is a corresponding element of bad science, i.e. bad
methods, bad data, bad hypotheses, bad laws, bad models, and bad theories. On
the other hand, pseudo-science differs from genuine science not just because one
or several of the above building blocks can be characterized by adding the term pseudo
(which, of course, is possible in principle) but because pseudo-science
typically lacks legitimate data. Since data are that part of science which provides evidence, the absence
of real data means the absence of evidence which would support the hypotheses,
laws, models and theories. A
hypothesis, law, model, or theory suggested without the real supporting data is
typical of pseudo-science.
The statements in the preceding
paragraph require clarification of what distinguishes good data from bad data
and from wrong data. This question
will be discussed in the section on data.
Whereas bad science is usually
swiftly recognized as such, pseudo-science sometimes features a strong power of mimicry, disguising itself as genuine science. The litmus test enabling one to distinguish genuine science from
pseudo-science is in looking for the data underlying the hypotheses, laws,
models, and theories of the supposed science. Quite often, discovering no data, i.e. no evidence supporting those
hypotheses, laws, models and theories, serves as an indication that we are
dealing with pseudo-science, regardless of how sophisticated the hypotheses,
laws, models and theories in question may seem to be and regardless of how
eloquently they are presented.
Let us review a few examples of pseudo-science. Probably the most egregious example of a pseudo-science with tragic consequences on an enormous scale is Marxism. Created in 19th-century Europe mainly by Karl Marx and Friedrich Engels, it so successfully disguised itself as science that it won a large number of adherents despite its obvious contradiction of facts. Marxism
comprised several parts which can be roughly defined first as a philosophy of
the so-called dialectical materialism, second as the alleged historical analysis
of society's development (the so-called historical materialism) and third, as
the Marxist political economy. Dialectical materialism was an eclectic philosophical theory combining elements of Feuerbach's materialism and Hegel's dialectics. Neither Marx nor Engels contributed much to the development of their predecessors' philosophical ideas, but they reformulated those ideas in the simplified form of a neat set of simple statements which could be much more easily comprehended by non-philosophers (such as a vulgarized version of Hegel's principles of the transition of quantity into quality, the negation of the negation, etc.). (It is
interesting to note that a frequent feature of pseudo-scientific theories is
that they often suggest a neat and simple scheme allegedly representing complex
reality.) The core of Marxism, though, was its theory of class struggle, whose essence Marx and Engels themselves succinctly expressed by the maxim that the history of mankind is the history of the struggle of classes. That theory reduced all the complexity of human history to one factor, the struggle of economic classes. Marx and Engels did not seem to be worried by the fact that their representation of history was completely contrary to the histories of such countries as, for example, China or India, and reflected even the history of Europe only in a rather strained way. Such absurdities as interpreting religious wars as the result of the struggle of economic classes did not seem to make the creators of "scientific Marxism" pause and allow for the role of any factors besides class struggle in the history of mankind. Their extremely narrow concept ignored a
multitude of various factors that affected the history of the human society. In other words, they arbitrarily chose a very narrow subset of data from
the much wider set of available data in order to fit the data to their
preconceived theory. This is a
classic example of pseudo-science, whose main fault is the absence of a
sufficient scope of data so that the theory is built accounting for only a
deliberately selected tiny fraction of factual evidence. Since, however, Marxism
had all the appearance of a scientific theory, its predictions won wide
popularity, with one of the consequences being the bloody revolution in Russia. Although the actual revolution and its consequences quite obviously did
not fit the blueprint predicted by Marxism, the latter acquired the status of a
godless religion, believed in with a fanatical stubbornness despite its obvious
futility and the bloodbath it caused. Indeed, as in religions, one of the persistent claims of Marxism was that it was "omnipotent because it was true."
Another example of a
pseudo-science is Lysenko's pseudo-biology which was imposed by decree of the
Soviet communist rulers as the only true biology compatible with
Marxism-Leninism. Lysenko's theory was complete hogwash with no basis in factual data whatsoever. Its
imposition also had tragic consequences, as many genuine scientists were
arrested and perished in the Gulag because they tried to refute Lysenko's
pseudo-science by pointing to the numerous data which it plainly contradicted.
One more, this time rather
comic, example was the allegedly revolutionary theory of viruses suggested in the late forties in the USSR by a semi-literate veterinary physician, Boshian. According to that theory, which was officially approved by the communist
party, viruses constantly convert into bacteria and vice versa. Of course,
Boshian's theory was based on fictional data. When,
after many unsuccessful attempts, a commission of scientists finally gained
access to Boshian's secret laboratory, all they found, instead of samples of
the alleged virus-bacteria colonies, was plain dirt.
Other examples of
pseudo-science include the so-called "creation science," with its arrogant
distortion and misuse of facts, as well as its more recent and more refined
reincarnation under the label of the "intelligent design theory."
This theory, promoted by a large group of writers,
including many with scientific degrees from prestigious universities and with
long lists of publications, and propagated on various levels of sophistication,
has all the appearance of scientific research, as it offers definitions,
hypotheses, laws, models, and theories like a genuine science. What is absent from the "intelligent design theory," though, is evidence. No relevant data which
support hypotheses, laws, models, and theories could be found in the papers and
books by proponents of intelligent
design, only unsubstantiated assumptions. Therefore
it can justifiably be viewed as pseudo-science. I will also touch on this
subject in the section on the admissibility of the supernatural into science, as
well as in other articles on this website.
6. Methods of experimentation
The statement that methods of experimentation are the engine of the progress of science seems quite trivial. Whereas this statement is especially transparent in physics, chemistry, biology, and the engineering sciences, it is valid as well for any other science, even if it may sometimes be less obvious than it is for the listed fields of inquiry.
When Galileo built his telescope, this provided an enormous
impetus to the development of astronomy. When
the Hubble telescope was put in orbit, astronomy underwent another powerful push
forward. When Antonie van Leeuwenhoek built his microscope, it revolutionized biological science. When the first electron microscope was built, it again revolutionized several fields of science, leading to such amazing modern achievements as actually seeing and even manipulating individual atoms.
These well known examples are only the tip of the iceberg, because
the progress of science is pushed forward by the multiple everyday inventions of
various ingenious methods of experiment or observation.
In thousands of research laboratories scientists whose names are
usually unknown to anybody except for the narrow circle of their colleagues,
every day apply their ingenuity and inventiveness to creating new, ever more
subtle methods to question nature. Without
these often extremely ingenious experimental methods and highly sophisticated
experimental set-ups there would be no progress of scientific penetration into
objective reality, which often guards its secrets very jealously.
There are two paths experimental science takes to progress.
One path is the ever increasing capabilities of devices designed
as versatile tools of scientific inquiry and of engineering or medical
development, such as electron microscopes, telescopes, and electronic devices
(signal generators, potentiometers, recorders, oscilloscopes, etc).
The other path is the design of unique experimental contraptions
designed for specific experiments and measurements.
These two paths intersect at many points, as the design of a specific new experimental contraption is often mightily assisted by the availability of more advanced tools of the versatile type.
Whereas inventions such as a microscope or radio transmitters and
receivers justifiably gain wide publicity, many amazingly ingenious methods
invented by scores of not-very-famous scientists usually remain unknown except
for a narrow circle of researchers working in that particular field of science.
Here is a typical example. In the late seventies, at one of the
foremost research institutions in the USA, a group of highly qualified and
trained scientists was developing certain types of magnetic memory for
computers. In one part of that
study a magnetic wire was pulled at a certain speed through a tube filled with
an electrolyte, and certain electrochemical processes on the surface of the wire
were investigated. The researchers
encountered a problem: as the electrolyte was pumped through the tube, the friction between the electrolyte and the walls of the tube caused a pressure drop along the wire, and this distorted the data. Considerable effort was invested in order to somehow eliminate or at least alleviate the described effect, but it persisted. Then a guest scientist arrived who had never before dealt with the experiment in question. At the very first meeting where the detrimental effect of the pressure
drop was discussed, the guest scientist looked at the diagram of the tube and
the wire moving along the latter, and said: "Why, instead of pumping the
electrolyte through the tube, just eliminate the pump, make a rifle-type spiral
groove on the inner surface of the tube and put the tube into rotation. The groove will push the liquid with a pressure which will be
automatically uniform all along the wire."
This is a very simple example of how a fresh look may suddenly solve a seemingly difficult problem in a very simple way, but it is typical and exemplifies the thousands upon thousands of small and sometimes not so small inventions and innovations which occur every day in numerous research laboratories.
The unstoppable development of experimental methods, from designing giant
accelerators of elementary particles to small improvements in measuring and
observational methods, is what underlies the progress of science.
Therefore, the statement that a revolution in science starts when the
experimental technique proceeds to the next digit after the decimal point
reflects an important aspect of the progress of science.
7. Data
a) General
discussion of data
In the words of the famous Russian physiologist Ivan Pavlov, facts are
the bread of science. The term data is
synonymous with facts. It is also
synonymous, in a certain sense, with evidence.
Without data there is no science, only pseudo-science. In order to function as evidence, data must be reliable. There are two main types of data, observational
and experimental. Both are
equally legitimate, the difference being in that observational data are usually
obtained in a passive manner, by merely recording whatever nature deigns to
display by itself, whereas experimental data are gained as the result of a
deliberate procedure of a planned research in which specific conditions are
artificially created to force nature to display answers to specific questions. The demarcation between observation and experiment is not quite sharply defined, since observation may be a natural part of a designed experiment.
Observation, and even some primitive form of experimentation, may happen to occur in completely non-scientific activities. In a popular comic Russian poem for children, a Russian Orthodox priest
counts ravens sitting on trees, just for the heck of it. It is an observation and it is accompanied by a measurement, but this
does not make the priest's activity scientific. Scientific observation or experimentation, while driven mainly by curiosity, always has a purpose, whether conscious or subconscious: to establish facts which may shed light on the intrinsic structure and functioning of the real world.
A crucial element of a scientific experiment is measurement. Measurement makes data quantitative, thus enormously
enhancing the data's cognitive value. There is a theory of measurement which
teaches us about the precautions necessary to ensure the reliability of the
measured quantities and about the proper estimate of unavoidable errors of
measurement. This theory is beyond the scope of this essay, but we have to
realize that, on the one hand, the data obtained via a properly conducted
experimental procedure are reasonably reliable, but on the other hand they are
always true only approximately and cannot be relied upon to an extent exceeding
that determined by the margin of a properly estimated error.
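As a minimal illustration of what "a properly estimated error" means in practice (with invented readings), a measured quantity is commonly reported as a mean together with the standard error of that mean, and no conclusion should rely on digits finer than that margin.

```python
# Minimal sketch: report a measured quantity as mean +/- standard error.
# The raw readings below are invented for illustration.

import math
from statistics import mean, stdev

readings = [9.79, 9.82, 9.80, 9.81, 9.78, 9.83]   # e.g. repeated measurements

m = mean(readings)
standard_error = stdev(readings) / math.sqrt(len(readings))

print(f"Result: {m:.3f} +/- {standard_error:.3f}")
# Any digit smaller than the quoted error carries no reliable information.
```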
Consider some examples.
Astronomy is a science wherein observation is by far more
prevalent than experimentation. A
classic example of highly valuable observational data is the tables of the planets' positions compiled by Tycho Brahe over the course of many years of painstaking observations accompanied by meticulous measurements. These tables, after they wound up in the possession of Johannes Kepler, served as the data the latter used to derive his famous three
laws of planetary motion. We will
discuss later the transition from data via hypotheses to laws, and from laws to
theories. My point now is to
illustrate how reliable data, gained via a proper observational procedure accompanied by measurement, became a legitimate part of the scientific arsenal.
A classic example of the experimental acquisition of data is the
discovery, in the early years of the 20th century, of
superconductivity by a group of researchers in Leyden, the Netherlands, headed
by Heike Kamerlingh Onnes. These
researchers systematically measured the electric resistivity of various
materials at ever lower temperatures. In
a specially designed experiment, samples of material were gradually inserted
deeper and deeper into a Dewar flask on whose bottom there was a puddle of
liquid helium, while their electric resistance was measured. Analysis of the obtained data led to the conclusion that at certain very
low temperatures the electric resistivity of certain materials dropped to zero. This unexpected result seemed contrary to the common understanding of
electric resistance prevalent at that time. However, data take precedence over theories. The scientists of Leyden, however puzzled and astonished by their data,
came to believe in those data's reliability and thus announced the discovery
of superconductivity. Their data
were then reproduced by other scientists and a new law asserting the fact of
superconductivity was formulated. The formulation of the law did not mean that a theory of superconductivity was forthcoming; it took half a century before such a theory was developed. Neither the law in question nor the theory of that phenomenon would have been
possible without first acquiring reliable data on the behavior of
superconducting materials, these data acquired in a deliberately designed
specific experiment.
As we will see, the path from data acquisition to a law can be
quite arduous and prolonged. It involves steps necessarily requiring imagination
and inventiveness, because no law automatically follows from data. This is even
more true for the path from data, via law, to theory. However, only those laws and theories which stem from
reliable data are constituents of genuine science. (The question of what constitutes reliable
data will be the subject of subsequent sections wherein the reproducibility of
data and the problem of errors will be discussed.)
I would like to illustrate the last statement by an example. In
1994, three Israelis, Witztum, Rips, and Rosenberg (WRR), published a paper in
the mathematical journal "Statistical Science." In that paper they claimed to have discovered in the Hebrew text of the
book of Genesis a statistically significant effect of the so-called "codes." According to WRR, the codes in question contained information about the
names and dates of birth and death of many famous rabbis who lived thousands of
years after the book of Genesis was written. If WRR's claim were true, its only explanation could be that the author
of the book of Genesis knew the future. This
would be a powerful argument for the divine origin of that book. The paper by WRR caused a prolonged and heated discussion. As a result of a thorough analysis of WRR's methodology, the
overwhelming majority of experts in mathematical statistics concluded that
WRR's data were obtained by a procedure which in many respects was contrary to
the rules of a proper statistical analysis. In other words, the community of
experts concluded that WRR's claim stemmed from bad
data. Therefore, WRR's work was rejected as bad science. It was
not, though, originally rejected as pseudo-science,
because WRR at least based their claim on certain data to which they applied a statistical
treatment (although the latter was partially flawed as well). However, despite the scientific community's almost unanimous rejection
of WRR's work as based on unreliable data, two of them (Witztum and Rips), as
well as a small circle of their adherents, stubbornly continued to insist that
they had made a genuine discovery, thus converting their theory from bad science to
pseudo-science. Because the story
of the alleged Bible code is quite educational with regard to how pseudo-science
appears and persists, it is discussed at length in a separate article (see B-Codes Page).
On the other hand, claims such as those by Immanuel Velikovsky
have been rejected from the very beginning not as bad science but rather as
pseudo-science, because his claims did not stem from any data but only from
arbitrary assumptions. In a book published in 1950 and titled "Worlds in
Collision," Velikovsky offered a whole bunch of wild theories purporting to
explain many mysteries that contemporary science could not account for. For
example, one of his suggestions was that when, according to the biblical story,
Yehoshua (Joshua) stopped the sun in the sky, the earth indeed stopped its
rotation. Another hypothesis by Velikovsky postulated a near-collision of Venus
and Mars with Earth, thus allegedly explaining numerous biblical miracles. Of
course, there are no data whatsoever which would serve as evidence for such
theories; therefore they were justifiably relegated to pseudo-science. While
Velikovsky acquired substantial notoriety and was compared in non-scientific
publications to Newton, Einstein, and other great scientists, no scientific
journal accepted his papers because they, while plainly contradicting Newtonian
mechanics, did not offer a shred of evidence which would support his claims.
b) Reproducibility and causality
To be legitimately useful in science, data must meet several requirements, one of which is reproducibility. Neither the reputation of the scientists claiming certain experimental
results nor the impressive appearance of their data seemingly conforming to the
strict requirements of a properly conducted experiment are sufficient for the
data to be accepted as a contribution to science. Data become a part of science only after they have been reproduced by
other scientists. Indeed, the demise of cold fusion (at least as matters stand
now) was due precisely to the fact that other groups of researchers could not
reproduce the data claimed by Fleischmann and Pons and by Jones. Likewise, the
data claimed by Witztum and Rips could not be reproduced by other scientists,
which was the main reason their theory was rejected by the scientific community.
The requirement of reproducibility is based on the assumption of
causality as a universal law of nature. This
assumption presupposes that reproducing certain experimental conditions must
necessarily lead to reproducing the outcome of the experiment. This supposition is of course a philosophical principle to be accepted a
priori. This principle has an ancient origin. It was already discussed in detail
by Aristotle, who introduced the concept of a hierarchy of four causes: the
so-called material, efficient, formal, and final causes. In more recent times,
the principle of causality known as the principle of determinism was formulated
by Laplace, and has been universally accepted in science for at least two
centuries.
The advent of quantum mechanics seemed to have shattered that
principle. The very fact that many
outstanding scientists were prepared to abolish a principle that had been a
foundation of experimental science for so long testifies against the claims by adherents of the so-called intelligent design theory who assert that
scientists are dogmatically adhering to "icons" of metaphysical concepts
rather than keeping open minds.
However, the interpretation of quantum-mechanical effects as a
breach of causality is by no means unavoidable. Rather, it testifies to the insufficient understanding of submicroscopic
processes wherein causality is allegedly absent.
First of all, even in the case of quantum-mechanical events,
causality is obviously present as long as the macroscopic manifestations of
those events are observed. Recall
the example of the alleged absence
of causality, namely experiments with microscopic particles passing through
slits in a partition. If only one
slit is open, on the screen behind the partition a diffuse image of the slit is
observed. If, though, two slits are
open, the image on the screen is a set of fringes. On the macroscopic level, the outcome of the experiment is reliably
predictable and fully consistent with the principle of causality. Indeed, the image on the screen is always the same if only
one slit is open, and can be easily reproduced at any time, any number of times,
anywhere in the world. If two slits
are open, the picture on the screen is different from the case of only one
open slit, but it is reliably predictable and can be easily reproduced at any
time anywhere in the world. Hence,
causality is present as long as the macroscopic effects of the experiment are
the issue. A question about the validity of causality is raised when the details
of the event on a microscopic level are considered.
If an electron is a particle,
it cannot pass simultaneously through two slits. Therefore we could expect that in the case of two opened slits, the image
on the screen would be two separate diffuse images of slits rather than a whole
set of fringes. Indeed, how can an electron passing a particular slit "know"
whether the other slit is open or not? However,
the behavior of individual electrons is different, depending on whether the
other slit is open or not.
Thus, the results of the
described experiments are sometimes interpreted as indicating that the same microscopic conditions lead to different
outcomes, depending on the variations in macroscopic conditions. On the microscopic level, as this argument goes, the same conditions may
result in different outcomes, and thus causality is absent.
Quantum mechanics tells us that
the explanation of the described phenomena is that electrons are not
particles like those we can see, touch, etc. They are very different entities, which under certain conditions behave
like waves rather than like particles. It
does not seem possible to visualize
an electron, because it is unlike anything we can interact with by means of our
senses in our macroscopic world. Of
course, it is well known that macroscopic waves (for example, sea waves) can
very well pass several openings simultaneously. Therefore, the two-slit experiment is not really an
indication of the absence of causality on a microscopic level, but rather an
indication of our inability to visualize a subatomic particle.
Richard Feynman said that
nobody understands quantum mechanics. To interpret this statement, we have first
to agree what the term "understanding" means. We can't visualize an electron, and in this sense we may say that we
don't understand its behavior. However,
we can reasonably well describe the electron's behavior using the mathematical
apparatus of quantum mechanics, and can successfully predict many features of
that behavior. In that sense, we
can say that we indeed do understand quantum mechanics.
Since we can't visualize the
intrinsic details of the electron's behavior, we are actually uncertain of
whether, on the microscopic level, its behavior is indeed non-deterministic, or
we simply don't know what causes this or that seemingly random path of the
electron. And since we are
uncertain, there is no reason to
assert that the principle of causality breaks down at the microscopic level.
Review another situation
wherein causality is often claimed to be absent. If we have a lump of a
radioactive material, we can easily measure the rate of its atoms' decay. This rate is a constant for a specific material. For example, we can
experimentally find that, in the course of a prolonged experiment, the fraction
of the atoms which decayed every second was, on the average, 0.01. Thus, if
initially the lump consisted of N0 atoms, one second later the number of atoms
of that material was N1 < N0; one more second later the number of atoms was
N2 < N1, and so on, whereas the ratios N1/N0, N2/N1, etc., averaged over the
duration of the experiment, were 0.99. This result is well reproducible; thus, again, on the macroscopic level
causality seems to be intact. However, if we discuss the phenomenon on the
microscopic level, the question arises of why at
any particular moment of time a particular atom decays while other atoms do not. All atoms within the lump of material are, from a macroscopic
viewpoint, in the same situation. There
is a certain probability of an atom's decay, which is exactly the same for
every atom in that lump of material. However,
at every moment some atoms decay whereas others do not. Therefore, the argument goes, causality does not seem to be at work in
that process. All atoms are in the
same situation, with the same probability of decay, but only a certain fraction
of them decays without any evident reason which would distinguish the decaying
atoms from those still waiting their turn.
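The arithmetic of the preceding paragraph can be imitated in a short simulation. The sketch below (Python; the decay probability of 0.01 per atom per second is the figure used above, and the initial number of atoms is an arbitrary choice) shows that the macroscopic ratio N1/N0 comes out close to 0.99 in every run, even though which particular atoms decay is unpredictable.

    import random

    DECAY_PROB = 0.01     # assumed probability that any given atom decays within one second
    n = 1_000_000         # initial number of atoms, N0 (arbitrary)

    for second in range(1, 6):
        # Each remaining atom decays independently with the same probability.
        decayed = sum(1 for _ in range(n) if random.random() < DECAY_PROB)
        print(f"second {second}: N(t+1)/N(t) = {(n - decayed) / n:.4f}")   # close to 0.99
        n -= decayed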
The
described problem of the possible "hidden parameters" affecting microscopic
processes is the subject of a rather heated discussion among scientists, wherein
no consensus has so far been reached. For example, Feynman maintained that an
explanation referring to "hidden parameters" is impossible, because nature
itself "does not know in advance" which atom will decay at which moment, or
which path a particular electron will choose at a given moment. With all due
respect to the views of the brilliant scientist Feynman, I find it hard to
accept his position. I express here my personal view of the problem.
In my view, when we assert that all atoms are in the same situation, this is
true on the macroscopic level. We have
insufficient knowledge of the situation of each individual atom on the
microscopic level. The deficit of
knowledge forces us to discuss the situation in probabilistic terms. Probability
is a quantity whose cognitive value is determined by the amount of knowledge
about the situation at hand. If we knew exactly the intrinsic details of the
situation each atom is in, we could, instead of probabilities, discuss the
matter in terms of certainties. Rather than abolishing the principle of
causality, a much more natural interpretation of the described process of
atomic decay is that there are definite causes for individual atoms' decaying
at this or that moment of time, but we simply do not
possess sufficient knowledge of the details of the process and therefore cannot
predict which atom will decay at which moment. Our lack of knowledge does not necessarily mean the decay of this or that
particular atom occurs without a specific cause.
Actually, an analogous
situation can be observed on a macroscopic level as well. This happens, for example, in the so-called Keno game in Las
Vegas casinos. In that game, players choose a set of several numbers out of a
table containing, say, 49 numbers. The winning set is determined by a machine in
which a pile of small balls is constantly mechanically shuffled in a shaking
plastic vessel. Each ball bears a number. Balls move up and down the pile,
constantly exchanging their positions, some of them pushed up by the rest of the
balls, some others sinking deeper into the pile. Every couple of minutes, one of
the balls is pushed high enough up from the pile and rolls out of the vessel. The number on that ball becomes part of the winning set. This is a chance process in which each ball has the same probability of
rolling out at any given minute. This
situation, albeit on a macroscopic level, is rather similar to the atoms'
decay. At any moment of time, a
particular ball rolls out while the rest of the balls remain within the vessel,
waiting for their turn to get out. Why does a particular ball happen to get
out, and not any other ball, despite all of them having the same probability of
being pushed out? We don't have sufficient knowledge of the intricate web of
interactions between the balls in the constantly shuffled pile; therefore we
resort to a probabilistic approach. However, if we knew precisely all the intricate details of the multiple
encounters of those fifty or so balls in the shuffling machine, we would be
able, instead of a probabilistic estimate, to predict precisely which ball will
roll out and when. The lack of detailed knowledge, while forcing a probabilistic
approach, does not mean that the observed occurrence (the choice of a
particular ball through a chance procedure) had no definite cause, a cause
which made the ejection of that particular ball at that particular moment
inevitable, while for no other ball did a similar cause exist.
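The point of the Keno analogy can be illustrated by a small sketch (Python). A pseudo-random number generator stands in, very loosely, for the "intricate web of interactions" between the balls: to an observer who does not know the generator's internal state the draw looks like pure chance, while to an observer who knows that state the very same procedure is completely determined.

    import random

    # Without knowledge of the internal state, the draw looks like pure chance.
    blind_draw = random.Random().sample(range(1, 50), 6)

    # With full knowledge of the "hidden" state (here, a seed), the same procedure
    # is completely determined and exactly reproducible.
    informed_draw_1 = random.Random(42).sample(range(1, 50), 6)
    informed_draw_2 = random.Random(42).sample(range(1, 50), 6)

    print(sorted(blind_draw))
    print(informed_draw_1 == informed_draw_2)   # True: same state, same outcome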
The atoms' decay occurs on a
microscopic level, and we have much less knowledge about the intrinsic
conditions within and between the decaying atoms than we have in the case of the
balls in a Keno game. This by no
means must be interpreted as an indication that the acts of decay have no
definite causes. The principle of
causality can be preserved, even accounting for the quantum-mechanical effects.
Whatever interpretation of
causality is preferred, if there is no causality, there is no science.
c) Ceteris paribus
The next question to be
discussed in relation to data as a foundation of science is the so-called
principle of ceteris paribus. This
Latin expression means "everything else equal." The idea of that principle is as follows: Imagine that in a certain
experiment the behavior of a quantity X is studied. This quantity is affected by a number of factors, referred to as factors
A, B, C, D, etc. Some of those factors may be known while others may not be. From a philosophical standpoint, everything in nature is
interconnected and therefore the number of factors affecting X is immensely
large, so we never can account for all of them. However, an overwhelming majority of that multitude of
factors have only a very minor effect on X and therefore, in practical terms,
most of them are of no consequence for the study at hand. Still, there is a set of factors whose effect on X cannot be
ignored if we want to gain meaningful data from our study.
The principle of ceteris
paribus is a prescription regarding how to conduct the study. Namely,
according to that principle, the process of research must be divided into a
number of independent series of experiments. In each series only one of the
factors - either A, or B, or C, etc., has to be controllably changed while the
rest of the factors must be kept unchanged. For example, if A is being changed,
B, C, and D have to be kept constant, and the behavior of X has to be recorded
for each value of A. Then another series of experiments has to be conducted
wherein only factor B is being changed and X recorded while A, C, D, etc., are
kept constant.
Of course, the application of
the described procedure is based on the assumption that factors A, B, C, etc.,
are independent of each other, so that it is possible to change at will any one
of those factors without causing all the rest of those factors to change
simultaneously.
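The one-factor-at-a-time procedure described above can be sketched in a few lines of Python. The function measure_x below is merely a placeholder for the real experiment, and the particular values given to A, B, and C are invented; the point is only the structure of the two series.

    # Placeholder for the real experiment: the response of X to factors A, B, C.
    def measure_x(a, b, c):
        return 2.0 * a + 0.5 * b - 0.1 * c

    B0, C0 = 1.0, 3.0     # values at which B and C are held constant

    # Series 1: only A is varied; B and C are kept fixed (ceteris paribus).
    series_a = [(a, measure_x(a, B0, C0)) for a in (0.0, 1.0, 2.0, 3.0)]

    # Series 2: only B is varied; A and C are kept fixed.
    A0 = 1.0
    series_b = [(b, measure_x(A0, b, C0)) for b in (0.0, 1.0, 2.0, 3.0)]

    print(series_a)
    print(series_b)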
There are two problems
inseparable from the above approach. One
is related to the errors of measurement and the other to the very core of the ceteris
paribus principle, i.e. to the above mentioned assumption of the factors'
independence.
The problem of errors stems
from the fact that no measurement can be absolutely precise and accurate (the
difference between precision and accuracy is defined in the theory of
measurement and is beyond the scope of our discourse). In particular, even if factors A, B, C, etc., are independent of each
other, it is impossible to guarantee that while one of them is being changed,
the rest of them remain indeed absolutely constant, because the values of B, C,
D, etc., while in principle they are supposed to remain constant, cannot be
measured with absolute precision and accuracy. Moreover, the values of factor A itself, which are supposed to be changed
in a controlled way, are never guaranteed to have precisely the desired values,
and, finally, the target of the study, X itself, cannot be measured with absolute precision and accuracy.
The problem with the principal foundation of the ceteris paribus
method stems from the fact that in many systems, factors A, B, C, etc., are in
principle not independent of each other. Hence, as factor A is being
changed, it may be impossible in
principle to suppress the simultaneous changes of factors B, C, D etc, which
are supposed to remain constant. Such
systems are sometimes referred to as "large" or "complex," or
"diffuse," or "poorly organized" systems and their study necessarily can
be conducted only beyond the confines of ceteris
paribus.
Philosophically speaking, all
systems are complex, so that factors A, B, C, etc., are always to a certain
extent interdependent. However, the extent of their interdependence may be
insignificant to a degree which makes the assumption of ceteris paribus reasonably substantiated.
For example, assume we study
the behavior of ice/water in the vicinity of 273 K, which is, of course, 0° C or
32° F. The factor we change is
temperature. Besides temperature,
the behavior of ice/water is affected by a multitude of other factors. Most of them are insignificant and can be ignored because they have only
a very minor effect. However, at least two factors have a pronounced effect –
pressure and the purity of the water. In this case it is relatively easy to
ensure the condition of ceteris paribus. We can keep a sample of ice in a vessel where pressure is automatically
controlled and kept at a predetermined level, regardless of changes in
temperature. Likewise, the purity
of ice can be maintained at a constant level at all values of temperature
investigated. In such a study we
find that ice containing, say, not more than 0.01% impurities, at a pressure of
about 10^5 Pascal, melts at
273 K.
d) Errors
We have to distinguish between
various types of errors. Sometimes
errors stem from a basically wrong approach to scientific inquiry. It may be due
to an insufficient understanding of the subject by a researcher, which results
in an improperly designed experiment. It may be due to a preconceived view or
belief, or to a strong desire on the part of a researcher to confirm his
preferred theory. The researcher
may subconsciously ignore results which are contrary to his expectations and
inadvertently choose from the set of measured data only those which jibe with
his/her expectations. Errors of this type, while by no means uncommon, should
be viewed as pathological; they produce wrong
data, which are a version of bad
data, and are one of the constituents of bad science. An example of such bad
data is the already discussed case of the alleged Bible codes.
However, even in the most
thoroughly designed experiments, errors inevitably occur despite the most
strenuous effort on the part of a researcher to avoid or at least to minimize
them. These honest errors are due to
the inevitable imperfection of experimental setups and to the equally inevitable
imperfection of the researcher's performance. Let us discuss these honest errors
in experimental studies.
Until recently, a common notion
in the theory of experiment was the distinction between systematic and random
errors. While this distinction is certainly logically meaningful and often can
be made quite rigorously, lately it has become evident that on a deeper level
the demarcation between these two types of error is rather diffuse. Moreover, the error can often be viewed as either systematic
or random depending on the formulation of the problem. For example, an error can be viewed as systematic if it
occurs systematically provided we always conduct measurements in a specific
laboratory A. However, the same error can be viewed as random if we have a
choice among many laboratories A, B, C, etc., and happen, by chance, to choose
laboratory A to conduct the measurement on a particular day. The deeper analysis
of that question, as well as of the distinction between precision and accuracy
of measurements, which is a proper subject of the theory of experiment, is
beyond the scope of this paper, whose subject is the general discussion of the
nature of science.
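The point that the same error may count as systematic or as random, depending on how the measurement campaign is framed, can be sketched as follows (Python; the laboratory biases and the scatter are invented numbers).

    import random

    TRUE_VALUE = 10.0
    # Hypothetical fixed biases of several laboratories (systematic within each laboratory).
    lab_bias = {"A": 0.30, "B": -0.10, "C": 0.05}

    def measure(lab):
        # Within one laboratory the bias is systematic; the remaining scatter is random.
        return TRUE_VALUE + lab_bias[lab] + random.gauss(0.0, 0.05)

    # Framing 1: the measurement is always done in laboratory A,
    # so the +0.30 offset acts as a systematic error.
    always_in_a = [measure("A") for _ in range(5)]

    # Framing 2: the laboratory is picked by chance each time,
    # so the same offsets now behave as one more random contribution.
    picked_by_chance = [measure(random.choice(list(lab_bias))) for _ in range(5)]

    print(always_in_a)
    print(picked_by_chance)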
The question of experimental
errors is, though, related to the problem of causality. If data are a legitimate component of science only if
they are reproducible, the immediate question is: What are the legitimate
boundaries of reproducibility? It
is an indisputable fact that the expression "precise data," if interpreted
literally, is an oxymoron. If an
experiment is repeated many times, in each run the measured data will be to a
certain extent different. There is
no way to ensure the absolute reproducibility of data. The conventional interpretation of that fact is not that it negates the
principle of causality but rather that in each experiment there are
uncontrollable factors which change despite the most strenuous effort on the
part of the experimenter to keep all conditions of the experiment under control.
Therefore the principle of
causality is not a direct product of the experimental evidence but a
metaphysical principle borne out by the total body of science. Science is based on the assumption of causality without which it simply
would make no sense.
Therefore the inevitable errors
of measurement are something science has to live with, making, though, a very
strong effort to distill objective truth from the chaos of measured numbers. Data which are a legitimate foundation of laws and theories are therefore
the result of a complex process in which wheat is to be separated from chaff and
regularities have to be extracted from the error-laden experimental numbers thus
building a bridge from data to laws. The bridge from data to laws will be
discussed in subsequent sections.
It seems appropriate to discuss
at this juncture of the discourse two points related to the manner in which an
experiment is conducted.
One of these points is
calibration. Calibration is a very
powerful tool for ensuring the meaningfulness of an experiment, wherein the
effects of many factors, which are extraneous to the subject of the study and
whose contribution can only mask the phenomenon under study, are summarily
neutralized in one step. For
example, when Cavendish performed his famous experiment in which he, in his
words, "weighed the earth," he needed to measure the attraction between lead
balls. The force of attraction to
be measured was determined by the angle of rotation of a rod to whose ends the
lead balls were attached. The rod
hung on a string, so that the elastic forces in the string created a torque
which counterbalanced the forces of gravitational attraction. That torque was
approximately proportional to the angle of the rod's rotation. Instead of
calculating the forces which would cause a certain angle of rotation, Cavendish
simply calibrated his contraption by applying known forces to the rod and
measuring the corresponding angle of rotation. He obtained the "calibration
curve," which incorporated all known and unknown factors that could affect the
angle of rotation thus eliminating in one step – calibration – most of the
sources of error. Calibration in
various forms is routinely used in scientific experiments whenever possible,
thus substantially eliminating many sources of error.
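The logic of calibration can be sketched in a few lines (Python). Known inputs are applied, the instrument's responses are recorded, and an unknown input is then read off the resulting calibration curve; simple linear interpolation is used here, and all the numbers are invented.

    # Calibration: known forces (arbitrary units) applied to the apparatus
    # and the rotation angles (degrees) they produced.
    known_forces = [0.0, 1.0, 2.0, 3.0, 4.0]
    observed_angles = [0.0, 2.1, 4.0, 6.2, 8.1]

    def force_from_angle(angle):
        """Read an unknown force off the calibration curve by linear interpolation."""
        for i in range(len(known_forces) - 1):
            a1, a2 = observed_angles[i], observed_angles[i + 1]
            if a1 <= angle <= a2:
                f1, f2 = known_forces[i], known_forces[i + 1]
                return f1 + (f2 - f1) * (angle - a1) / (a2 - a1)
        raise ValueError("angle outside the calibrated range")

    # An angle measured in the actual experiment is converted directly to a force,
    # with all known and unknown instrumental factors absorbed by the calibration.
    print(force_from_angle(5.0))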
Another point to discuss is the
difference between the discrete and the continual methods of recording
experimental data. Each of these two methods has both advantages and
shortcomings.
If a discrete recording is
employed, the target of study - X - is measured for a set of discrete values of
the controlled parameter A, while all other parameters affecting X are kept
constant. Factor A is given values A1, A2, A3, etc., and at each of those
values of A, X is measured, yielding a set X1, X2, X3, etc.
As a shortcoming of such a method, it is often asserted that there is never
certainty regarding the values of X between the measured points. However close
the values of A at which X is measured are, there are always gaps between the
measured points, and to fill those gaps the researcher has an infinitely large
number of choices. It is often
expressed as an assertion that there are infinitely many curves which can be
drawn through the same set of experimental points, regardless of how close those points are to each other. (Of
course, a more accurate assertion should be that the curves may be drawn through
the margins of error surrounding each experimental point).
While the above assertion is
certainly correct in an abstract philosophical way, it rarely has serious
practical implications, as will be discussed in the section on the bridging
hypotheses.
The mentioned shortcoming of
the discrete method of experiment seems to be eliminated if a continual method
is employed instead. In this
method, the controlled factor A (or often several controlled factors A, B, C,
etc.) are being continually changed at a certain rate, while the values of the
target of the study, X, are simultaneously and continually recorded.
An obvious advantage of this
method is the substantial saving of time and effort necessary to accumulate
information about the phenomenon under study. The wide availability of sophisticated recording equipment in the 20th
century made continual recording the favorite method of experimental work in
physics, chemistry and other fields of study.
Since the method of continual
recording generates continuous curves rather than sets of experimental points,
it may seem to eliminate the arbitrariness involved in drawing curves through
discrete experimental points. Unfortunately,
this supposed advantage of the continual method is illusory. The illusion is related to the problem of a system's response
time to a perturbation, i.e. to an abrupt change of parameters. If any parameter affecting the system's behavior is
abruptly changed, it sets in motion a process of the system's transition to a
new state, this process proceeding with a certain finite speed. If a system was in a state wherein X had a certain value X1
while factor A had the value A1, and at some moment of time t1
parameter A is changed to A2, the
system's measured property X does not instantly jump to a new value X2,
but rather undergoes a transition to the new value which takes a certain time
("relaxation period"). Therefore
the experimental curves reflecting the functional dependence of X on A will have
different shapes depending on the rate at which A is being altered by the experimental
set-up. In a discrete method of
experiment normally the time interval between consecutive measurements of X is
sufficiently large for X to complete the transition to the new equilibrium
value. In continual recording, depending on the rate of change of A, the
transition of X to the new equilibrium value is often not yet completed as A
already shifts to new values. It is well known that, for example, the loops of
magnetic hysteresis are very different depending on the rate of the magnetizing
field's change.
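A minimal numerical sketch of this effect (Python; the equilibrium dependence X_eq = 2A, the relaxation time, and the sweep rates are all invented) shows how the recorded curve depends on how fast the controlled factor A is swept.

    def recorded_curve(sweep_rate, tau=1.0, dt=0.01, a_max=5.0):
        """Simulate continual recording of X while factor A is swept at a given rate.

        X relaxes toward its equilibrium value X_eq(A) = 2*A with time constant tau,
        so the recorded value lags behind; the lag grows with the sweep rate.
        """
        a, x, curve = 0.0, 0.0, []
        while a < a_max:
            x_eq = 2.0 * a                     # assumed equilibrium dependence of X on A
            x += (x_eq - x) * dt / tau         # first-order relaxation step
            a += sweep_rate * dt
            curve.append((a, x))
        return curve

    slow = recorded_curve(sweep_rate=0.1)      # nearly the equilibrium curve
    fast = recorded_curve(sweep_rate=5.0)      # visibly distorted by the relaxation lag
    print(slow[-1], fast[-1])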
In connection with the question
of experimental errors, the notion of noise seems to be relevant. The term noise
entered science from information/communication theory. That branch of science studies the transmission of information (of
"signals") which is inevitably accompanied by noise in the elements of the transmission channel. The noise is
superimposed on the useful signal and has to be filtered out if the signal is to
be reliably interpreted. The term noise gradually gained a wider use,
referring to any unintended extraneous input to the experimental setup. Normally, to extract useful data from the experimental output the desired
data have to be distilled from the overall output by excising noise. However, more than once noise happened to be the source of scientific
discoveries. For example,
when, at the end of the 19th century, physicists studied the
processes in a cathode ray tube, these processes were always accompanied by
emission of a certain radiation from the walls of the tube. Some researchers did
not notice these rays, others did but viewed them just as noise and
ignored them. Then Wilhelm Conrad Roentgen, who specialized in the study of dielectrics, and was generally highly
meticulous in his research, looked at the "noise" as a phenomenon
interesting in itself. He very thoroughly studied the phenomenon which his
predecessors dismissed as just a nuisance and thus became the discoverer of that
important type of radiation which he called X-rays.
Another well known example of how a phenomenon which was viewed as
annoying noise led to an important discovery is that of the cosmic microwave
background radiation discovered in 1965 by Arno Penzias and Robert Wilson.
Generally speaking, if an experiment yields results which could be
predicted, this adds little to the progress of science. The matter becomes really interesting if the results of an experiment
look absurd. If a researcher
encounters an absurd result, which persists despite efforts to clean up the
experiment and to eliminate possible sources of error, there is a good chance
the researcher has come across something novel and therefore interesting. The absurd outcomes may often be attributed to noise, but not
infrequently that noise carries information about a hitherto unknown effect or can
be utilized in an unconventional way for a deeper study of the phenomenon.
Let me give an example from my personal experience.
Many years ago I studied the process of adsorption of various
molecules on the surface of metallic and semiconductor electrodes. One of the methods of that study was the measurement of the so-called
differential capacity of the electric double layer on the electrode surface. A couple of my students participating in that research built a setup
enabling us to measure the differential capacity at various values of the
electric potential imposed on the electrode. After having worked with that setup for several weeks, they became
grossly frustrated by their inability to eliminate the instability of the
measured capacity. Each time they
changed the value of the electrode potential, the differential capacity started
shifting and they had to wait for many hours and sometimes days until a new,
uncertain equilibrium seemed to set in. They
viewed this unpredicted "instability" of the measured differential capacity
as annoying noise preventing them from performing the desired measurement. They complained to me about their frustrating experience and asked for
advice on how to get rid of it. I remember the moment when a kind of sudden
light exploded in my mind and I shouted, "Lads, it is wonderful! We got an
unexpected result. The capacity creeps – and this means that if you accurately
measure the curve of capacity versus time, this
curve will contain plenty of information about the kinetics of the adsorption
process!" I set out to develop
equations reflecting the relaxation process of the capacity. A new method of scientific study we named potentiostatic
chronofaradometry was thus born. Instead
of trying to eliminate the supposed noise and routinely measure the supposed
equilibrium values of the differential capacity, now my students concentrated on
measuring those very relaxation curves which they angrily considered to be
annoying noise. Plenty of information about the kinetics of the adsorption/desorption
process was extracted from that "noise," assisted by the equations derived
for that process.
e) Complex systems
For over 200 years, one of the
underlying hypotheses of science, often unspoken, was that every system which is
a subject of a scientific study is "well organized," in the sense that it
was always possible to separate a few phenomena or processes of similar nature,
dependent on a limited set of important factors, in such a way that the
system's behavior could be studied by isolating factors one by one, finding
the functional dependencies between pairs of factors, and attributing to those
functional dependencies the status of laws. In the 20th century the described approach started
encountering serious difficulties. In
many systems the separation of various factors turned out to be impossible. Science encountered an ever increasing number of what are referred to as
"large" systems, or "poorly organized systems," or "diffuse
systems," etc., these terms reflecting various aspects of such systems' behavior. Such systems cannot be legitimately studied under the ceteris
paribus approach because it is impossible to separately and controllably
vary any one parameter affecting the system without causing a simultaneous
variation of other parameters.
It should be noted that the
concept of a "large" or "poorly organized" system is not equivalent to
the concept of stochastic systems. The
latter were well known and successfully dealt with in 19th century
physics. An example of a stochastic system is a gas occupying a certain volume. It consists of an enormous number of molecules (for example, at
atmospheric pressure and room temperature 1 cubic centimeter of gas contains
about 10^19 molecules). The
behavior of each molecule is determined by the laws of Newton's mechanics.
However, because of the immense number of molecules, it is impossible to analyze
the behavior of gas by solving the equations of mechanical motion for each
molecule. Therefore, 19th century science came up with very
powerful statistical methods which enabled it to analyze the behavior of a gas
without a detailed analysis of the motion of each individual molecule. (There are theories according to which a system already becomes
stochastic, i.e., not treatable by studying the behavior of its individual
elements, if the number of these elements exceeds about 30.)
However, stochastic systems
such as gases behave stochastically only on a microscopic level. On a macroscopic level they can be easily treated using the ceteris
paribus approach. Macroscopic
properties of a gas, such as pressure, temperature, volume, etc., can be very
well studied by isolating and controllably changing these properties one by one
while keeping the rest of the properties constant. This is possible because the immense number of elements of such systems
does not translate into a large number of macroscopic properties. A gas containing quadrillion quadrillions of microscopic molecules, all
of them identical, macroscopically is characterized only by a few parameters,
such as pressure, temperature and volume.
On the other hand, the
"large," or "poorly organized" systems are characterized by a very large
number of macroscopic parameters, many
of which cannot be individually varied without simultaneously changing some
other parameters affecting the system.
Historically, one of the situations whose study contributed mightily to the
realization of the ineliminable interdependence of various parameters in a
"poorly organized" system is emission spectral analysis. A good description of the problems
encountered when trying to interpret the data obtained via emission spectral
analysis was given in a book "Teoriya Eksperimenta" (The Theory of
Experiment) by Vasily V. Nalimov (Nauka Publishers, Moscow 1971; in
Russian).
In that process, a sample of
material to be studied is placed between two electrodes into which a high
voltage is fed. The breakdown of the gap between the electrodes causes a spark
discharge wherein the temperature reaches tens of thousands of Kelvin. The
density of energy at certain locations on the electrodes reaches a huge level.
An explosive evaporation occurs, followed by an equilibrium evaporation. A cloud
of evaporated matter is created between the electrodes in which simultaneous
processes of diffusion, excitation and radiation take place. Besides,
oxidation/reduction processes occur on the electrodes' surface, and diffusion
of elements occurs in their bulk, dependent on the temperature gradient, phase
composition of electrode material, defects etc. All these processes are of a
periodic nature, on which a one-directional drift is imposed by the
electrodes' erosion. Attempts to analyze the described extremely complex
combination of phenomena by means of separating various factors one by one
turned out to be in vain. The system is too complex, and its complexity forced
scientists to give up attempts to proceed using the ceteris
paribus approach.
To study the "large" or
"complex" systems, the science of the 20th century developed two
main approaches. One is the multi-dimensional statistical approach and the other
is the cybernetic (or computer-modeling) approach.
Since this article is not about
the theory of experiment but about a general discussion of the nature of
science, I will discuss the above two approaches very briefly and only in rather
general terms.
The statistical approach was to a large extent initiated by the British mathematician R. A. Fisher. It is often referred to as multi-dimensional mathematical statistics (MDMS). Essentially, MDMS is a logically substantiated formalization of such an
approach to the study of "large" systems wherein the researcher deliberately
avoids a detailed penetration into intricate mechanisms of a complex phenomenon,
resorting instead to its statistical analysis using multiple variables.
The application of MDMS
requires solving a number of problems involving the appropriate strategy of an
experiment - the proper choice of the essential parameters - since accounting
for too many parameters may make the task beyond the available intellectual and
computational resources, while accounting for too few parameters may reduce the
solution to a triviality. The
effectiveness of MDMS has been drastically improving along with the availability
of ever more powerful computers and software. Statistical models of complex systems have been successfully employed for
a multitude of problems which could not be tackled by the traditional methods of
direct measurement.
The cybernetic approach, whose
origin can be attributed to Norbert Wiener, is in some respects very different
from the statistical one, although it also has become really useful only with
the advent of powerful computers. While the method of multi-dimensional
statistics and that of computer modeling are different in principle, both are
successfully applied, sometimes to the study of the same "complex" system.
The cybernetic approach makes
no sense as long as only simple "well organized" systems are studied. In
such systems one-dimensional functional connections fully determine the
system's behavior and the problem of control (management) is moot. For example, the motions of planets are fully determined by Kepler's
laws. On the other hand, control
of the behavior of a "poorly organized," or "large" system, which is
quite challenging, can be fruitfully approached via a cybernetic model. (Various
meanings of the term model will be
discussed in the section on models in science).
One of the frontiers of modern
science is the field of artificial intelligence which exemplifies the cybernetic
approach to a "complex" system (human intellect) by using a computer model
of the latter.
Biological systems are mostly
"large" or "poorly organized" in the above formulated sense.
Another example of a
"large" system is the economy of any country.
In every country there is a multitude of factories, farms, companies and
individuals, each pursuing its own particular economic interests. All
these elements of the economic system interact among themselves in an immensely
complicated web of transactions, changing every moment so that this enormously
complex system is in a constant flux whose trends are usually not obvious. It is impossible to account for each and all features of that immensely
complex game, not only because of its sheer size and the huge number of its
constituent elements, but also because of its unstable character. Before the available data about the state of economy have been digested
and interpreted, they have already changed. No computer, however powerful, can follow all the nuances of the economic
game. Therefore the economy cannot be in principle "scientifically managed"
as it supposedly was in the allegedly "socialist planned economy" of the
former USSR. This was one of the
reasons for the abysmal failure of the "socialist economy," which was no
more scientific than it was really socialist. The actual economic system in the former USSR was "state capitalism,"
with all the drawbacks of an extreme form of a monopolistic capitalist system
but without the advantages of a free enterprise. The attempts to sustain a
planned economy allegedly based on a scientific analysis of the resources,
demands and supplies, resulted in a seemingly controlled but actually chaotic
economy wherein stealing and cheating became the only possible means of survival
for the powerless slaves of the state.
Pretending to maintain a
scientifically planned economy, as Marxist theory required, actually resulted in
a system that was contrary to elementary basics of science.
The economy of the modern world
functions in a cyclic manner, following its own poorly understood complex
statistical laws wherein its oscillatory motions have an immense inertia, so
that attempts to steer it in this or that direction (for example, by regulating
the interest rate or changing the tax laws) have only a marginal effect on the
waves of prosperity and recession replacing each other as if by their own will.
Large (or diffuse) systems
require for their analysis specific approaches, and 20th century
science has provided these approaches in the two above mentioned ways.
Whatever method is used to
acquire data, either the traditional multi-step measurement under the ceteris
paribus principle, or the application of multi-dimensional statistics, or a
computer model, or any other approach, the acquisition of reliable data is an
ineliminable step in the scientific method. It is possible to apply the
scientific approach without strict definitions or without good hypotheses or
without good models or without good theories (although omitting any of the
listed components would seriously impair the quality of the scientific study,
often making it bad science). It is
impossible to have any science, good or bad, without reliable data. In the absence of data, the endeavor, however cleverly disguised and
eloquently offered, can only be pseudo-science.
8. Bridging hypotheses
Perhaps, the title of this section should more appropriately be
"Bridging Hypothesis," in the singular, because this hypothesis is
essentially always the same. This hypothesis postulates the objective existence
of a law. It constitutes a
"bridge" from the raw data to the postulated law.
In the case of "simple" or "well organized" systems which
can be studied under the ceteris paribus
condition, the bridging hypothesis appears in its most explicit form, while in
the case of "large" or
"poorly organized" systems it can often be hidden within the complex web of
the data themselves.
Imagine an experiment under the ceteris
paribus assumption wherein a set of values of a target quantity X has been
measured for a set of values of a parameter A, so for each value A1, A2, A3,
etc., the corresponding values X1, X2, X3, etc., have been measured, and the
margin of error ±Δ has also been estimated, so that every value Xi is believed
to lie within the margin Xi ± Δ.
For the sake of example, assume that the measured values of X are
found to increase along with the increase of the corresponding values of A. Every
set of X and A, however extensive, is always still only a selected subset of all
possible values of X and A.
Reviewing the sets of measured
numbers, we try to discern a
regularity connecting A and X. To
do so we have necessarily to assume
that the set of actually measured numbers indeed reflects an objective
regularity. There is never unequivocal proof that such a regularity
indeed objectively exists; we necessarily have to postulate it before formulating a law.
More often than not such a hypothesis is implicitly present
without being explicitly spelled out. In scientific papers we quite usually see statements introducing a law without any mention of the
bridging hypothesis. The
researchers reporting their results routinely assert that, for example, "as
our data show, in the interval between A1 and Ak, X
increases proportionally to A." This
statement is that of a law. In
fact, though, such a statement of a law might not be legitimately made without
first postulating the very objective existence of a law connecting X to A. The bridging hypothesis, according to which the observed sets of values of
X correspond to the selected sets of values of A, not as an accidental result of
an experiment conducted under a limited set of conditions but because of an
objectively functioning law, must necessarily precede, if often subconsciously,
the statement of a law.
The hypothesis in question, which is necessarily present, most
often implicitly, in the procedure of claiming a law, is never more than a
hypothesis and cannot be proven. Fortunately, it happens to be true, at least
as an approximation, in the vast majority of situations, thus constituting a
reliable building block of science. In fact, if sets of As and of Xs seem to
match each other according to some regularity, more often than not such a
regularity does indeed exist, although sometimes it turns out to be illusory.
Of course, the bridging hypothesis is not applied if no law is
postulated. A researcher may obtain
sets of values of the measured quantity X corresponding to a set of values of a
parameter A which may look like a display of a regularity, but for various
reasons she may reject the bridging hypothesis and avoid an attempt to spell out
a pertinent law. The reasons for that may be, for example, some firmly
established theoretical concepts making the law in question very unlikely, or
serious doubts regarding the elimination of some extraneous factors which could
distort the data thus creating the false appearance of a regularity, or any
number of other reasons.
More often, though, rather than flatly rejecting the bridging hypothesis, a
scientist may simply be uncertain about the choice between accepting and
rejecting it. Often such uncertainty is based simply on insufficiently
complete data. The researcher is
uncertain whether or not the available sets of data do indeed properly represent
the entire multiplicity of possible data.
If that is the case, the proper remedy is to resort to a statistical
study of the phenomenon in question. The proper tool for such a study is the part of mathematical statistics
called "hypotheses testing."
The procedure involves introducing two competing hypotheses, one called
the "null hypothesis," and the other the "alternative hypothesis." The null hypothesis is the assumption that the available set of data does
not represent a law. The alternative hypothesis is in this case a synonym for what
I called the bridging hypothesis, i.e. the assumption that the available set of
data reflects a law. As a result of
a proper statistical test, the researcher compares the likelihood of the null
hypothesis vs. the likelihood of the alternative hypothesis. The hypothesis whose likelihood is larger is accepted, while the other
hypothesis is rejected. This choice is never assured to be ultimately correct, since
discovery of additional data may change the likelihood of the competing
hypotheses. In this sense, laws of
science are usually referred to as tentative.
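As an illustration, one standard formalization of this choice is the significance test for a linear regression, sketched below with invented numbers (Python, using scipy). The p-value is the probability of obtaining a correlation at least as strong as the observed one if the null hypothesis (no dependence of X on A) were true; a small p-value leads to accepting the alternative, "bridging" hypothesis.

    from scipy import stats

    # Hypothetical measurements: values of the controlled factor A and the measured X.
    A = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    X = [2.1, 3.9, 6.2, 7.8, 10.3, 11.9]

    result = stats.linregress(A, X)

    # Null hypothesis: the slope is zero, i.e. the data reflect no law.
    # Alternative ("bridging") hypothesis: X depends linearly on A.
    if result.pvalue < 0.05:
        print(f"reject the null hypothesis: slope = {result.slope:.2f}, p = {result.pvalue:.2g}")
    else:
        print("the data do not warrant postulating a law")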
Fortunately, the rigorous self-verification inherent in a proper
scientific procedure makes a vast number of laws of science work very well
despite their "tentative" character. In
the next section we will discuss the laws more in detail.
9. Laws
Whereas the bridging hypothesis discussed in the previous section is
simple and uniform in that its gist is simply in assuming that the set of
experimental data reflects a law, the specific formulation of a law itself is
quite far from being simple and uniform. While
in every scientific procedure the same bridging hypothesis is present, this in
itself contains no indication of how the particular law has to be spelled out.
Formulating a law is not a mechanical process of stating the evident
behavior of the studied quantities. It
necessarily involves an interpretive effort on the part of the scientist, who
makes choices among various alternative interpretations of the assumed
regularity which have to be extracted from arrays of numbers defined within
certain margins of error.
In the simple case of a functional dependence between two
variables, the scientist is confronted with a set of experimental points which
seem to display a certain regularity, often rather nebulous because of
inevitable experimental errors. Hence,
besides the bridging hypothesis which simply postulates the very existence of a
law, the scientist must then postulate the specific law itself.
Consider an example. In Coulomb's well known law of interaction between
point electric charges, the force of interaction is presented as being inversely
proportional to the squared distance between the charges.
The inverse proportionality of the force of electric interaction
to the squared distance between point charges is, however, not a direct
conclusion from the experimental data. One
of the reasons for that is the inevitability of errors occurring in every
procedure of measurement. If the
dependence of Coulomb's force on the distance between the point charges is
measured, it is assumed that a) while the distance is being changed, the
interacting charges are indeed being kept constant, b) that no other charges
which could distort the results are anywhere close to the measuring contraption,
c) that the distance itself is measured with sufficient accuracy, etc. All of
this is, of course, just an approximation. Let us imagine that in the course of an experiment wherein Coulomb's
force was measured, the distance between the supposedly point charges was given
the following values (in whatever units of length): 1, 2, 3, 4, 5, 6, the total
of six points at which the above values were determined with an error not
exceeding, say, ±5%. This means
that when the distance was assumed to be 2, it could actually be anything
between 1.9 and about 2.1, and similarly for every other value of the distance.
Imagine further that the measured Coulomb's force (in whatever units of force)
happened to have the following set of values, averaged over many repeated
measurements: 1097, 273.87, 121.64, 68.97, 43.12 and 31.01. Repeating the measurement many times, the experimentalist evaluated the
margin of error for the force to be ± 10%. Reviewing the set of numbers for the
values of the measured force, the researcher notices that these numbers are
rather close to a set of numbers obtained by dividing the maximum force of 1097
(measured for the distance of 1) by the squared distances. Indeed, dividing 1097
by the squared values of distances 2, 3, 4, 5, and 6, one obtains the following
set: 274.25, 121.89, 68.56, 43.88, 30.47. Every number in the experimentally
measured set differs from the corresponding number in the second, calculated
set by not more than 10%, which is within the margin of error of that
experiment. Although none of the measured numbers exactly equals the numbers
calculated upon the assumption that the force drops inversely proportionally to
the squared distance, the scientist postulates
that the difference between the measured and calculated sets is due to
experimental error and that the real law, hidden behind the measured set of
numbers, is indeed the inverse proportionality between the force and the squared
distance.
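The arithmetic just described is easy to reproduce, and a least-squares fit of the same six numbers shows why the exact power of 2 is a postulate rather than a direct reading of the data: the fitted exponent comes out close to 2, but not exactly 2. The sketch below (Python) uses the values quoted in the text.

    import math

    distances = [1, 2, 3, 4, 5, 6]
    forces = [1097.0, 273.87, 121.64, 68.97, 43.12, 31.01]   # values quoted above

    # Compare the measured forces with 1097 / r^2, as in the discussion above.
    for r, f in zip(distances, forces):
        predicted = 1097.0 / r ** 2
        deviation = 100.0 * (f - predicted) / predicted
        print(f"r = {r}: measured {f:7.2f}, 1097/r^2 = {predicted:7.2f}, deviation {deviation:+.1f}%")

    # Least-squares fit of log F = log k - n * log r: the estimated exponent n
    # comes out close to 2, but never exactly 2; the value 2 itself is a postulate.
    xs = [math.log(r) for r in distances]
    ys = [math.log(f) for f in forces]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    print(f"fitted exponent: {-slope:.4f}")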
What is the foundation of that
postulate? If instead of the power
of 2, the distance in the same formula appeared with the power of, say, 2.008 or
1.9986, such a formula would describe the experimental data as well as
Coulomb's law, where the power is exactly 2. Of course, the precision and
accuracy of the measurement can be improved so that the margin of error is
substantially reduced. However, it can never be reduced to zero. Therefore, the best experimental data cannot provide more than is
inherent in them. With a much
improved technique we may be able to assert that Coulomb's force changes
inversely proportional to the distance to the power being, say, between
1.9999999987 and 2.0000000035, but we can never assert, based just on the
experimental data, that the power is exactly 2. We have to postulate
that the power is exactly 2 rather than any other number between 1.9999999987
and 2.0000000035.
Such
a postulate is often made on non-scientific grounds. It is often based on some metaphysical consideration, some philosophical
principle, or such criteria as simplicity, elegance and the like. We actually have no scientific grounds to prefer the power of 2 to, say,
the power of 2.0000986345, since both numbers equally well fit the experimental
data. We choose 2 not because the
data directly point to that choice but because this choice seems to be either
simpler (i.e., using Occam's razor), or more elegant, or more convenient, or
maybe just favored by a particular scientist for purely personal reasons.
Of course, the difference between 2 and, say, 2.0000897 has no
practical consequences, so that we can safely use the formula wherein the exact
power of 2 has been postulated, even
though we may not be absolutely confident that it is indeed the exactly correct
value.
Furthermore, even the principal form of the law – the power
function - is not logically predetermined by the data. There are many formulas
differing from the simple power function which would yield numbers within the
margin of error of the experimental data. For
example, it is always possible to construct a polynomial expression whose
coefficients are chosen in such a way as to make the numbers calculated by that
expression match the measured data within the margin of the experimental error. Hence, not only the value of 2 in the formula of Coulomb's law, but
even the form of the law itself is the result of a scientist's assumption. From
the above discussion it follows that the laws of science are necessarily
postulates. We don't know
that the distance in Coulomb's law must be squared, we postulate
it.
The
choice of a postulate is limited by two considerations. One is the requirement
not to contradict experimental data, so we may not arbitrarily assume that the
distance in the formula in question must have the power of, say, 2.75, because
it is contrary to evidence. The other requirement is not to contradict the
overall body of scientific knowledge. Otherwise
any number which is within the margin of the experimental error has, in
principle, the same right to appear in the formula of law, and the choice of 2
instead of, say, 1.99999867 is a postulate based on metaphysical grounds.
The application of some mathematical apparatus, such as, for example, a
least-squares fit, does not change the situation in principle, because the
output of the mathematical machine is only as good as the input.
The above discourse may create an impression of arbitrariness of
the laws of science. Some philosophers of science think so. Are the laws of
science indeed arbitrary?
They are not.
The fault of the above discourse is that it concentrated on the
data obtained in a particular experiment as though these data existed in a vacuum. In fact, every experiment adds only a very small chunk of new information
to the vast arsenal of scientific knowledge and the interpretation of any sets
of data must necessarily fit in with the entire wealth of science.
If we review the example with Coulomb's law, we will see that actually the choice of both the power function for
distance between the interacting point charges and of the exact value of the
power as 2, although resulting from a postulate, was not really arbitrary if the
results of Coulomb's experiment are viewed from a broader perspective,
accounting for information stemming from outside the experiment itself. Support for the choice of that particular form of Coulomb's law
historically came only after the law itself had been suggested. Hence Coulomb indeed
had to postulate the form of the law without having support from any independent
source. However, the chronological
order in which different steps of scientific inquiry may occur is of secondary
importance, since science develops in leaps and bounds, so that its
chronological sequence does not necessarily
coincide with a logical sequence. Eventually, though, various components of the
scientific insight into reality usually come together thus binding the edifice
of science into a strong monolith.
Coulomb's
postulate, seemingly based only on the simplicity and elegance of his law, got
strong independent support after the concept of the electric field entered
science. At that stage of the
development of the theory of electricity the force of electric interaction was
redefined as the effect of an electric field on an electric charge. A simple
geometric consideration shows that the electric field of a lone electric point
charge drops in inverse proportion to the squared distance from that charge,
simply because the area over which the field spreads increases proportionally to
the squared distance. This
conclusion is not the result of a postulate stemming from experimental data but
a mathematical certainty. (Of course, the mathematical certainty in itself is
based on the postulate of space being Euclidean.)
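The geometric argument can be written in one line: the total flux of the field of a point charge spreads over a sphere whose area grows as the square of its radius, so that

E(r) \cdot 4\pi r^{2} = \text{const} \quad \Longrightarrow \quad E(r) \propto \frac{1}{r^{2}}, \qquad F = qE \propto \frac{1}{r^{2}}.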
The convergence of
the indisputable geometric facts with Coulomb's experimental data provides an
extremely reliable conclusion ascertaining the validity of Coulomb's law in
its originally postulated form.
Therefore the notion, often
discussed in the philosophy of science, according to which researchers have
unlimited flexibility in drawing an infinite number of curves through any number
of experimental points (or, more precisely, through the margin of error
surrounding each such point), while correct in an abstractly philosophical way,
is of little significance for the actual scientific procedure of stating a law. Whereas theoretically one can indeed draw any number of
fanciful curves through any set of points on a graph, in fact a scientist is
severely limited in his choice of legitimate curves connecting his experimental
points. This limitation is based on
at least two factors. One is the requirement that the curve chosen for the
particular data set in question must be also compatible with the rest of the
data collected in the entire study in question. The other is the requirement
that the curve must also be compatible with the enormous wealth of knowledge
accumulated in science, including both established theoretical principles and
experimental data obtained in different versions of related experiments.
The body of contemporary science is immense. Despite many uncertainties which have not been resolved by science, its
achievements are indisputable and ensure the high reliability of science as a
whole. Therefore, when facing the
choice between various ways to connect experimental points by a continuous
curve, the researcher not only matches the particular set of data to other sets
of related data, but is also guided by highly reliable theoretical concepts.
To summarize this section, we can state that laws of science are both
postulates and approximations. Fortunately, as the great successes of science
and the technology based on it prove, those laws which have become firmly
accepted as part of science are, without exception, good
postulates and good approximations.
10. Models
The term model may have various
meanings. A model
husband is not the same as a model of
an airplane. Even in science,
that term may be used for rather different concepts. For example, a mathematical
model of a phenomenon is a term denoting a set of equations (often
differential equations) reflecting the interconnections among various parameters
affecting the system's behavior. A computer model is a program designed to simulate a certain situation or process.
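As a generic illustration of my own (not an example from the original discussion), a mathematical model of a weight oscillating on a spring might consist of a single differential equation,

m\ddot{x} + c\dot{x} + kx = 0,

which retains only the displacement x, the mass m, the damping c, and the stiffness k, while ignoring every other property of the real weight and the real spring.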
In this section, I will discuss what is sometimes referred to as a
physical model. I will refer to it simply as model. To explain the meaning of the term model as it will be used in this section, we first have to state
that every model is, in a certain limited sense, always assumed to represent a
real object. In this sense, a model is an imaginary object which has a
limited number of properties in common with a real object, while the real object
represented by the model in question also has many other properties which the
model does not possess.
It can be said that at the level of explanation, i.e. on the level
of scientific theories, science actually deals only with models rather than with
the real objects these models represent. The
reason for the replacement of real objects with models is the simple fact that
real objects are usually so immensely complex that they cannot be comprehended
in all of their complexity by means of a human theoretical analysis. Out of
necessity, to build theories science resorts to simplifications in its effort to
study and understand the behavior of real systems. To do so science replaces, in the process of a theoretical
interpretation of laws, real objects with much simpler representations of the
latter which we call models.
There are many classical examples of the use of models. Recall again the story of Tycho Brahe and Kepler. The former had
meticulously accumulated numerous data regarding the positions of the planets of
the solar system. The latter
reviewed these data, postulated that these data reflect a law (i.e. introduced a
bridging hypothesis) and further postulated the three specific laws of planetary
motion.
The next step in science was to offer a theory explaining
Kepler's laws, i.e. to develop a new science – celestial mechanics. This
task was brilliantly performed by Newton, who derived theoretical equations
explaining Kepler's laws, based on Newton's theory of gravitation and his
general laws of mechanics. In doing so, Newton wrote equations in which planets
were assumed to be point masses. A point
mass is a model of a planet. It
shares with the real object it represents – a planet – only one property,
the planet's mass. A real object – a planet – has an enormous number of
properties other than its mass. The
model of a planet in Newton's theory of planetary motion – a point mass –
does not have any of those numerous properties the real planets have. Replacing a real object by its substantially simplified model, Newton
made it possible to derive a set of elegant equations which provided an
excellent approximation of the actual planetary motion.
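Here is a minimal sketch of the point-mass model in action (my own illustrative code, with the Sun held fixed and units chosen so that G times the Sun's mass equals 4π² in astronomical units and years); even this drastically simplified model reproduces orbital motion:

```python
# A minimal sketch of the point-mass model: one planet treated as a point mass
# orbiting a fixed Sun, integrated with a simple leapfrog (kick-drift-kick) scheme.
# Units: AU, years, solar masses, so that G*M_sun = 4*pi^2.
import numpy as np

GM = 4 * np.pi**2
dt = 0.001                                  # time step, in years

pos = np.array([1.0, 0.0])                  # start 1 AU from the Sun
vel = np.array([0.0, 2 * np.pi])            # roughly the circular-orbit speed

def accel(p):
    r = np.linalg.norm(p)
    return -GM * p / r**3                   # inverse-square attraction

for _ in range(int(1.0 / dt)):              # integrate for one year
    vel += 0.5 * dt * accel(pos)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos)

print(pos)   # after one year the point-mass "planet" is back near (1, 0)
```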
Recall now another example. There is a branch of applied science
called seismometry. It studies the
propagation of seismic waves in the crust of the earth. The real object of study in seismometry is the same as in celestial
mechanics – the planet. However,
obviously, using a point mass as a model would be useless for the purposes of
seismometry. The model used in that
science is the so-called elastic half-space. In that model, only three
properties are common with the real object – the modulus of elasticity,
Poisson's coefficient and the shear modulus. The mass of the planet, which is the property of the model of the earth in
celestial mechanics, is absent in the model of the same earth in seismometry,
whereas the moduli and coefficients of the seismometric model are absent in the
model of the same earth in celestial mechanics. The two models of the same real object are so vastly different in the two
mentioned branches of science that it is hard to find anything
in common between them, even though both represent the same object, albeit with
different theoretical goals.
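For illustration, in the elastic half-space model the listed quantities, together with the rock's density ρ (which also enters the standard treatment), determine such observable things as the speeds of the two kinds of body waves; in textbook form,

v_{s} = \sqrt{G/\rho}, \qquad v_{p} = \sqrt{\frac{E\,(1-\nu)}{\rho\,(1+\nu)(1-2\nu)}},

where G is the shear modulus, E the modulus of elasticity, and ν Poisson's coefficient.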
Both models are highly successful. Choosing these models enabled scientists to successfully build good
theories, in one case of planetary motion, and in the other, of the propagation
of mechanical waves in the crust of a planet.
How is a model chosen? There
is no known prescription for choosing a good model. The model must be endowed with those properties of the real object which are
of primary importance for the studied behavior of the system in question, while
all other properties of the real object are deemed to be of negligible
significance and therefore are not attributed to the model. There are no known specific criteria telling the scientist how to
distinguish between those properties of the real object which are crucial for
the problem at hand and those which are insignificant for that particular
theory. The scientist must make the
choice based on his intuition, on his experience, and on his imagination. If the model has been chosen successfully, this paves the way to
developing a good theory of the phenomenon. If some crucial properties of the real object are ignored in the model,
the latter would lead to a defective theory. If some insignificant properties of the real object are attributed to the
model, its theoretical treatment may turn out to be too unwieldy and the ensuing
theory too awkward for its acceptance in science. The proper choice of a model,
which sometimes is made subconsciously, is an extremely important step in a
scientific endeavor. This is the
step where the deviation from good to bad science occurs most often.
Let us now review an example of a bad model. A posting at Irreducible Contradiction
contains a detailed critical review of a book titled "Darwin's
Black Box" by Michael Behe. In
that review, many faults of Behe's concept of the so-called "irreducible
complexity" of biomolecular systems have been pointed out. Now we will look at Behe's concept from the viewpoint of his choice of
a model. A favorite example of
"irreducible complexity," used by Behe in several of his publications, is a
mousetrap. This device, which,
unlike a biological cell, is quite simple, consists of five parts, each of them,
says Behe, necessary for the proper functioning of the mousetrap. If any of these parts were absent, says Behe, the mousetrap would not
work. Removing any single part of
the mousetrap, Behe tells us, would render it useless. Therefore it must have
been designed in a one-step process, as a whole rather than having been
developed in stages from simpler versions. Behe asserts that a similar situation exists in biological cells, which
contain a very large number of proteins all working together so that the removal
of even a single protein would render the cell incapable of performing a certain
function necessary for the organism's well-being. Since all these proteins are
necessary for the proper functioning of a cell, argues Behe, the cell could not
evolve from some simpler structures because such simpler structures would be
useless and hence could not be steps of evolution.
It is clear that Behe uses a mousetrap as a model of a cell. This
is an amazing example of how bad a model may happen to be.
Assume first that Behe is
correct in stating that removing any part of the mousetrap would render it
dysfunctional. Even with this
assumption, the fundamental fault of that model is in that it ignores some
crucial properties of cells and therefore is inadequate for the discussion in
question. The first fault of
Behe's model is that his model (the mousetrap), unlike a cell, cannot
replicate. The mousetrap (like
Paley's watch) has no ancestors and will have no descendants. Therefore the mousetrap certainly could not evolve from a
more primitive ancestor.
Biological systems replicate and therefore are capable of evolving via descent
with modification. The second fault of Behe's model is that it ignores the redundancy of biological structures. For
example, the DNA molecule contains many copies of identical formations, some of
which serve as genes and can continue to perform a certain useful function, while
their copies are free to undergo mutations and thus lead to new functions. The mousetrap lacks such redundancy and therefore it is a bad model which
does not at all illustrate the alleged irreducible complexity of microbiological
structures.
The situation with the mousetrap as an alleged model of
irreducible complexity is actually much worse. In fact, Behe's five-part mousetrap is not irreducibly complex at all. Out of its five parts, four can be removed one by one and the remaining
four-part, then three-part, then two-part, and finally one-part contraptions
would still be usable for catching mice. Anybody
with engineering experience can see that. Alternatively, the five-part mousetrap in Behe's model can
be built up step by step from a one-part contraption, making it gradually
two-part, three-part, four-part, and finally five-part contraptions, each of
them capable of performing the job of catching mice, this capability slightly
improving with each added part. (Nice
versions of simpler mousetraps with one, two, three, and four parts were
demonstrated by Dr. John McDonald at A reducibly complex mousetrap.)
If the mousetrap were capable of reproduction, it
certainly could have evolved from a primitive but still usable one-part device
via descent with modification governed by natural selection. Hence, the model chosen by Behe not only does not support his thesis but
actually can serve to argue against the latter.
Behe's supporters often refer to his mousetrap example as a good
illustration of irreducible complexity, obviously not noticing the fallacy
of this bad model.
In his book, Behe claims that his thesis of irreducible complexity
is on a par with the greatest achievements of science such as those by Newton,
Einstein, Lavoisier, Schroedinger, Pasteur, and Darwin (page 233 in his book). This claim, besides being a display of self-aggrandizing exaggeration,
runs contrary to Behe's suggestion of what can perhaps be referred to as an
anti-model.
11. Cognitive hypotheses
The choice of a cognitive hypothesis is closely related to the choice of
a model. Sometimes these two steps merge into one. In other cases, though, the choice of a model is a step separate from the
choice of a cognitive hypothesis. Whatever
the case, a hypothesis cannot be defined unless a definite model of the
phenomenon in question, consciously or subconsciously, has been selected, either
in advance or simultaneously with the hypothesis.
A cognitive hypothesis is the necessary first step toward
developing a theory which would strive to explain a phenomenon. A law postulated on the basis of experimental data is always of a
phenomenological character. It describes the regularity which has been assumed
to govern the observed data, without explaining the mechanism of the process
which results in those data. A
theory of a phenomenon, on the other hand, has to be of an explanatory nature, i.e.
it must suggest a logically consistent and plausible explanation of the
mechanism in question. In order to
develop a theory, a cognitive hypothesis must be first chosen. The cognitive
hypothesis is a basic idea serving as a foundation for the theory's
development.
Choosing a cognitive hypothesis is often a subconscious procedure,
based essentially on the scientist's imagination. Very often scientists who have come up with a cognitive hypothesis cannot
themselves explain how that hypothesis emerged in their minds. Experience certainly may play a significant role in a
scientist's ability to come up with a hypothesis which would provide a basis
for the development of theory. There
are known, however, many cases when a young scientist with little or no
experience suggested ideas which escaped the minds of much more experienced
colleagues. One of the most famous
examples is Einstein's theory of the photoelectric effect, for which he was
awarded the Nobel prize. Einstein
was only 26 at that time and had no record of major preceding scientific achievements. The photoelectric effect, which had been discovered by Heinrich Hertz
nearly twenty years earlier, had a number of puzzling features, and none of the leading
experienced scientists could offer any reasonable explanation of those features. In a brief article Einstein offered a brilliant interpretation of the
data obtained in the experiments, which immediately solved the puzzle in a
consistent and transparent way. The
cognitive hypothesis offered by Einstein maintained that electromagnetic energy
is emitted, propagates, and is absorbed by solids in discrete portions (later
named photons). Based on that
hypothesis, Einstein developed a consistent and highly logical theory which
solved the puzzle of the photoelectric effect in a very plausible way.
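In equation form, Einstein's hypothesis states that light of frequency ν is absorbed in portions of energy hν, so that the maximum kinetic energy of an electron ejected from a solid is

K_{\max} = h\nu - W,

where W is the work function of the solid. This at once explains why the electrons' energy depends on the light's frequency rather than on its intensity, the very feature that had puzzled Einstein's contemporaries.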
Many scientists who have suggested various hypotheses assert that
they cannot explain how those hypotheses formed in their minds.
I take the liberty of giving an example from my own experience. Many
years ago, I studied internal stress in metallic films. At an early stage of that study I had already found that in the process
of film growth two types of stress, tensile and compressive, emerge
simultaneously, each apparently due to a different type of mechanism. Reviewing the multitude of experimental data, I tried very hard to
imagine what the mechanisms for the emergence of each of the two types of stress
could be. I will refer now only to
the hypothesis I offered for tensile stress (although I also offered another
hypothesis for compressive stress). I
cannot give any explanation of why and how I came up with the hypothesis
according to which tensile stress in growing films appeared because of the
egress of a certain type of crystalline defects, the so-called dislocations, to
the surface of the crystallites. At
that time, the very existence of dislocations was disputed by some scientists
(although a well-developed theoretical model of dislocations already
existed). One day, without any
consciously realized reason, a vivid picture
of dislocations moving away from each other until they reach the surface of
crystallites emerged in my mind as if projected on a screen and I had a feeling
that I could somehow see the microscopic world of crystals.
The model I had in mind was a
crystal containing a certain concentration of dislocations, whereas the
existence of all other types of defects, always present in real crystals, was
ignored. The vision of the dislocations' egress as a source of tensile stress
in films originated purely in my imagination. On the basis of that vision I set out to derive formulas which would
enable me to calculate stress originating from the dislocations' egress, i.e.,
I proceeded from the cognitive hypothesis to a theory. Years later, when improved methods of electron microscopy were developed, not
only were dislocations directly observed, thus vindicating the dislocation
theory itself, but also, a little later, the very egress of dislocations, which
I hypothesized earlier as a source of tensile stress, was observed
experimentally, thus justifying the hypothesis which originally was born simply
out of my imagination.
Although my theory of tensile stress in films was, of course, only
a very small contribution to science, I believe that the process is similar also
in the cases of much more important scientific theories.
Quite often more than one hypothesis can be offered for the same data,
i.e. different models of the same object or phenomenon can compete with each
other. The competition between
hypotheses extends into a competition between the theories developed on the
basis of those hypotheses. We will
discuss theories in the next section.
12. Theories
Although the development of science is a very complex process, which
occurs in many different ways and manners, so that no simple
straightforward scheme can describe it, it
is a more or less common situation wherein the following steps are present in the
development of a particular chunk of science: accumulation of data, adopting a
bridging hypothesis (either directly or as a result of the procedure of
statistical hypotheses testing), formulating a law, choosing a model, suggesting
a cognitive hypothesis, and, finally, developing a theory, although these steps
may partially merge, be taken subconsciously and occur not in the above
chronological order.
Although these steps often appear not in a clearly distinctive form, they
are commonly present and each of them requires specific qualities from a
scientist to be successful. For
example, an accomplished experimentalist may happen to be a poor theoretician,
and vice versa.
Offering a cognitive hypothesis requires imagination,
and different scientists may possess it to a different degree. Development of a theory, on the other hand, requires the pertinent skill
and is facilitated by experience.
There is a distinction between particular theories designed to
explain the results of a specific experiment, and theories of a general
character, embracing a whole branch of science. Theories of the first type are often suggested by the same researchers
who have conducted the experiment. Theories
of the second type usually are developed by theoreticians who rarely or never
conduct any experiments. This
division of tasks between participants of the scientific process is necessary
because, first, modern science has reached a very high level of complexity and
sophistication, thus necessitating a specialization of scientists in narrow
fields, and second, because each of the steps of scientific
inquiry requires a different type of talent and skill.
A scientific theory is an explanation of a phenomenon, which is
systematic, plausible, and compatible with the entirety of the experimental or
observational evidence.
As Einstein said, data remain while theories change with time.
Each theory is only as good as the body of data it is compatible with. If new data are discovered, this may require reconsideration of a theory or
even its complete replacement with another theory. Therefore it is often said that every theory can only make
one of two statements: either "No" or "Maybe," but never an unequivocal "Yes." In other words,
all scientific theories are tentative because there is always the possibility
that new facts will be discovered which would disprove the theory.
While the above assertion certainly contains much truth, I believe
that it requires amendment. Reviewing the theories which are accepted in science, it is easy to see
that there are two types of theories. Most
theories are indeed only tentative, because, as is often stated, the
experimental or observational data "underdetermine" the theory. However, there are certain theories whose reliability is so firmly
established that the chance of their ever being disproved is negligible.
An example of such a theory is Mendeleev's periodic system of
elements. There is hardly a chance
it will ever be disproved. This statement relates not only to the
phenomenological form of the periodic system as suggested by Mendeleev, but
also to the explanatory theory of the periodic system of elements, based on
quantum-mechanical concepts.
In this respect, it seems appropriate to discuss the idea of the
so-called paradigms which was suggested by Kuhn. There is a vast literature
wherein Kuhn's ideas are discussed. According
to Kuhn's idea, science develops within the framework of a system of commonly
accepted concepts, which he called the paradigm. As long as a certain paradigm
is in power, theories are accepted by the scientific community only if they fall
within the framework of the reigning paradigm. If some data seem to contradict
the reigning paradigm, they are either ignored or downplayed. According to the extreme form of that view, truth is irrelevant in
science, which is only looking for ways to conform to the reigning paradigm.
When the data contradicting the reigning paradigm accumulate, at some
moment the reigning paradigm can no longer withstand the pressure of the
contradicting evidence and collapses, being replaced by another paradigm, and
this is referred to as a revolution in science. According to Kuhn's view, a new paradigm can completely eradicate the
system of concepts which characterized the preceding paradigm, and each paradigm
is a product of human imagination having little to do with the search for truth.
I believe that Kuhn's scheme does not really reflect the nature
of scientific revolutions. Reviewing the history of science shows that Kuhn's
scheme is of purely philosophical origin with little foundation in facts. As the history of science teaches us, a revolution in science normally
does not eradicate the preceding set of adopted concepts in science but rather
reveals the limits of its validity. In that sense, a scientific revolution not only
does not abolish the preceding "paradigm," but, on the contrary, provides
stronger support for it by clarifying the boundaries of its applicability.
It is hard to indicate a revolution in science which would be more
dramatic than the revolution in physics which occurred at the beginning of the
20th century. The
emergence of the theory of relativity and of quantum mechanics shattered the
science of physics to its very core and seemed to drastically reform the entire
set of concepts on which physics had been so firmly established. However, even that monumental scientific revolution did not in the least
abolish the constituents of pre-Einstein and pre-Planck physics. Newton's
mechanics, the Faraday-Maxwell theory of electromagnetism, the Clausius-Nernst
thermodynamics, the Boltzmann-Gibbs statistical physics, and many
other fundamental insights into reality of the pre-20th century
physics remained intact as parts of science believed to deeply reflect the
features of the real world, whereas the boundaries of their applicability became
much better understood.
Of course, many scientific theories did not stand up to the
subsequent scrutiny and collapsed under the weight of new evidence. The theories of the caloric fluid and of phlogiston in physics and chemistry, the theory of
vitalism in biology, and many lesser-known theories died either quietly or after a
desperate fight for survival. However,
this normal process of a natural selection of scientific theories had little in
common with the picture of Kuhn's paradigms, encompassing the entire set of
conceptions and reigning over the scientific endeavors, and allegedly
periodically replacing one another in revolutions which eradicate the preceding
paradigms in their entirety. Kuhn's
idea, in its ultimate form denying that science
is about truth, is fortunately not supported by evidence. With all of its shortcomings, and with indeed the largely tentative
nature of scientific laws and theories, there is no other area of the human
endeavor as successful as science. Its
achievements are amazing and indisputable.
Another point that seems related to the above discussion is the assertion
by the philosopher of science
Karl Popper that a necessary feature of genuine science is its being
falsifiable. Popper's ideas have been discussed in numerous publications and
have even been incorporated into decisions by courts regarding the definition of
science. If a theory is not falsifiable, says Popper, it is not genuinely
scientific. Another way to say this is to assert that every scientific theory
must be "at risk" of being disproved. Without
delving into a detailed discussion of Popper's idea, I must say that I am
uncertain about its validity. I
don't think the periodic system of elements is at risk of being disproved but
it certainly is a legitimate part of science. On the other hand, there are plenty of easily falsifiable theories which
are pseudo-scientific. Therefore I
have some doubts regarding the universal applicability of Popper's criterion. I will not discuss this question further.
Scientific inquiry continues unabated, with an ever-increasing rate of
unearthing new knowledge of the real world. Probably the most visible results of the progress of science are its
offspring – technology and medicine. Let us talk a little about the
relationship between science and technology.
13. Science and technology
The amazing progress of various modern technologies is commonly
attributed to input from science. Whereas this attribution is indeed well justified, the
relation between science and technology is not unidirectional. Historically, whereas many innovations in technology resulted
from direct applications of scientific discoveries, some branches of science themselves received a strong impetus
from technical inventions which sometimes were products of an inventor's
intuition. Before
illustrating the above processes of the mutual fertilization of science and
technology, it seems proper to note that even in those cases when an important
invention was made without an apparent direct connection to a specific
scientific discovery or theory, nevertheless the inventor certainly has the
advantage of standing on a firm foundation of the entire body of contemporary
science. Therefore it can be
confidently said that the progress of technology, directly or indirectly, has
been ensured by the progress of science.
A good example of the emergence of a new technology as a direct
result of the progress of science is the explosive use of semiconductors which
started in the late 1940s. Until
the mid-thirties the very term semiconductor
was almost completely absent from the scientific literature. Solid materials were usually divided into only two classes –
conductors and dielectrics. The materials which eventually were classified as
the third separate type of solids, named semiconductors, had been viewed simply
either as bad conductors or as bad dielectrics. In the thirties, the quantum-mechanical theory of
conductivity was developed (the so-called energy band theory), which identified
three distinctive classes of solids from the standpoint of the behavior of their
electrons. This theory predicted
the peculiar features of the behavior of semiconductors, as a separate class
distinct from conductors and dielectrics.
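A rough numerical sketch (with illustrative textbook band gaps, not a device calculation) shows what this classification amounts to: the population of thermally excited charge carriers falls off roughly as exp(-Eg/2kT), so a gap of about one electron-volt still conducts measurably at room temperature, while a gap of several electron-volts effectively does not:

```python
# A rough illustrative sketch: the density of thermally excited carriers scales
# roughly as exp(-Eg / 2kT), which separates semiconductors from insulators.
import math

kT = 0.0259   # eV at room temperature (~300 K)
gaps = {"metal (no gap)": 0.0, "silicon": 1.12, "diamond (insulator)": 5.5}

for name, Eg in gaps.items():
    factor = math.exp(-Eg / (2 * kT))
    print(f"{name:22s} Eg = {Eg:4.2f} eV   carrier factor ~ {factor:.2e}")
```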
Semiconductor technology was practically non-existent at that
time, although some rather primitive devices were already in use, like electric
current rectifiers in which the semiconductor copper
sulfide was utilized. The
important step in science – the explanation of the mechanism of electric
conductivity – at once laid the foundation for an immediate development of
various semiconductor devices. It
did not happen, though. It seems that whereas science in the thirties had
prepared conditions for a revolution in technology, the latter simply was not
yet prepared for digesting the scientific discovery. It took about fifteen years
before the semiconductor technology for which the scientific basis already
existed started taking off. Since
then, however, the development of semiconductor technology directly stemming
from the progress of physics of semiconductors has been proceeding at an
amazing, ever-increasing rate, resulting in a real revolution in communication,
automation and many other fields of engineering and technology.
An example of the opposite sequence of the science-technology
interaction is the story of the Swedish engineer Carl Gustaf Patrik de Laval
(1846-1913), the inventor of the steam turbine. In the process of designing his famous turbine, Laval encountered many
difficult problems. Laval possessed
an extraordinary engineering intuition and inventiveness. Among several highly impressive inventions Laval made on his way to
building a workable high-speed turbine, two stand out as amazing insights into
phenomena which had not yet been studied and understood by science at that time.
One such invention was the so-called Laval flexible shaft, and the other
the Laval nozzle.
Laval's turbine was designed to rotate at a very high speed. The turbine's wheel sits
on a rapidly rotating shaft. At
certain values of the shaft's rotational speed, which are referred to as
critical speeds, a phenomenon called resonance occurs when the shaft experiences
sharply increasing bending forces. This
may easily cause the breakdown of the shaft. According to common sense, it seems reasonable to increase the shaft's
strength by increasing its diameter. However,
increasing the shaft's diameter also increases the bending force
at the critical speed of rotation. It seemed to be a vicious circle. Laval, instead of increasing the shaft's diameter, made his
shafts very thin, and hence very flexible. Amazingly, instead of making the thin
rapidly rotating shafts weaker, the decrease of their diameters led to a
peculiar phenomenon, not foreseen by the science of mechanics of rotating
bodies. As the speed of rotation
passed the critical value, the bending force reversed its direction,
straightening the shaft instead of bending it. Nobody knows why and how the idea of a flexible shaft occurred to Laval. The scientific explanation of the behavior of flexible shafts at
critical speeds was developed years after Laval's turbine was introduced,
becoming a separate chapter in the branch of science called technical dynamics. Laval's insight provided a strong impetus for the development of that
science, which, in turn, later provided powerful inputs into various
technologies.
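The later textbook treatment (the so-called Jeffcott rotor model, developed well after Laval's invention and given here only as an illustration) captures the effect in a single formula: a disk with mass eccentricity e on a shaft with natural frequency ω_n whirls with a deflection

r = \frac{e\,(\omega/\omega_{n})^{2}}{1-(\omega/\omega_{n})^{2}},

which grows without bound as the rotation speed ω approaches the critical value ω_n, but changes sign above it, with |r| approaching e, so that the center of mass settles onto the axis of rotation and the thin, flexible shaft runs smoothly.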
Laval's invention of a flexible shaft was by no means
exceptional for that great inventor. His genius and intuition were equally
amazing in his invention of Laval's nozzle. Without going into the details of
that extraordinary invention, let us simply state that the subsequent
development of a theory explaining the behavior of Laval's nozzle gave a
mighty push to the emergence of a new scientific field – gas dynamics – which in
turn has become a foundation of important modern technologies.
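The central result of that later theory (standard one-dimensional gas dynamics, not Laval's own derivation) can be stated as a single relation between the duct's cross-section A, the flow velocity v, and the Mach number M:

\frac{dA}{A} = \left(M^{2}-1\right)\frac{dv}{v},

so that a subsonic flow accelerates in a converging duct while a supersonic flow accelerates in a diverging one; a converging-diverging (Laval) nozzle therefore drives the flow through the speed of sound at its throat.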
While paying proper tribute to Laval's genius and recognizing
that his amazing insights were not based on any specific scientific discoveries
of his time, we must also realize that Laval, however inventive and extraordinarily
talented he was, could not have made his inventions had he lived, say, a hundred
years earlier. Whereas he did not derive the ideas of the flexible shaft and a
novel type of a nozzle from any specific scientific discovery or theory, his
entire mental makeup and way of thinking were formed as products of his
time, with its already high level of scientific knowledge in physics, chemistry
and many other disciplines.
One more feature of the interaction between science and technology
is that it often becomes hard to distinguish where science ends and technology
development starts. This
convergence of scientific research and the development of new technologies is
often referred to by the not precisely defined term applied science.
Whereas the goal of pure (or basic)
science is viewed as a search for truth, regardless of whether and how the unearthed
secrets of nature can be utilized towards a gain in human well-being, the
goal of applied science is viewed as a search for better technologies or for
advances in medicine or pharmacology.
Research divisions of large
hi-tech and pharmaceutical companies as well as university departments financed
by industry are engaged in intensive studies wherein the distinction between
pure and applied science is sometimes hard to see.
For example, two foremost
research institutions – Bell Labs and the IBM research division – have
recently and independently announced the successful development of new types of
non-silicon-based transistors of molecular-size dimensions. This impressive breakthrough in nanotechnology was certainly
made possible only because in both institutions research was not limited to the
narrow task of technological advance but involved intensive research of the type
traditionally considered a part of pure science.
I am again taking the liberty of referring to an example from my own experience (of course, a much more modest one than the examples discussed so far). In 1973 I had a student working toward an MSc degree who studied the effect of illumination on the adsorption behavior of semiconductor surfaces. When I was preparing instructions for that student, I had a hunch that the effect of the so-called photo-adsorption, known to exist on solid semiconductor surfaces, must exist as well for small semiconductor particles if the latter were suspended in water. Together with a chemist who worked in my lab, we prepared a colloid solution containing small spherical particles of the semiconductor selenium. I placed a transparent Plexiglas plate on the solution's surface, ran to the building's roof and exposed the solution to the rays of the sun. To my excitement, a smooth film of amorphous selenium grew rapidly on the Plexiglas plate right in front of my eyes, thus confirming my hunch that high-energy photons must lower the potential barrier for electrons on suspended semiconductor particles. Soon a few students joined the project. Besides the theory I developed for what was labeled "inverse photo-adsorption," the project also resulted in a technology for the photodeposition of semiconductor films. (For details of that work see, for example, Mark Perakh, Aaron Peled and Zeev Feit, "Photodeposition of Amorphous Selenium Films by the Selor Process. 1. Main Features of the Process; Films Structure," Thin Solid Films, 50, 1978: 273-282; Mark Perakh and Aaron Peled, idem, "2. Kinetics Studies," Thin Solid Films, 50, 1978: 283-292; idem, "3. Interpretation of the Structural and Kinetic Studies," Thin Solid Films, 50, 1978: 293-302; "Light-Temperature Interference Governing the Inverse/Combined Photoadsorption and Photodeposition of a-Selenium Films," Surface Science, 80, 1979: 430-438.)
14. Science and the supernatural
In this section I will discuss the question of whether or not the
attribution of a phenomenon to a supernatural source may be legitimate in
science.
This question is separate from the problem of the relationship between
science and religion. The latter problem is multi-faceted and includes a number
of questions such as the compatibility of scientific data with religious dogma
(in particular with the biblical story of creation), the roles of science and
religion for society as a whole and for individuals, the impact of
science and religion on the moral fiber of society, and many other aspects,
each deserving a detailed consideration. In this section, however, my topic is much narrower, and
relates only to the question of whether or not science may legitimately consider
possible supernatural explanations of the laws of nature and of specific events.
Obviously, the answer to that question depends on the definition
of science. If science is defined as being limited only to "natural"
phenomena, then obviously anything which is supernatural is beyond scientific
discourse. Such definitions of science have been suggested and even
produced as outcomes of legal procedures.
While I understand the reasons for the limitations of legitimate
subjects for scientific exploration only to "natural causes" and, moreover,
emotionally am sympathetic to them, I think there is no real need to apply such
limitations, and a definition of science should not put any limits whatsoever on
legitimate subjects for the scientific exploration of the world. My opinion is based on a simple fact: although science,
which has so far not invoked anything that could be attributed to a supernatural
cause, has achieved staggering successes, there still remain
unanswered many fundamental questions about nature. Possibly such answers will be found some day, if science
proceeds on the path of only
"natural" explanations. Until
such answers are found, nothing should be prohibited as a legitimate subject of
science, and excluding the supernatural out of hand serves no useful purpose.
Moreover, it does not seem a simple task to offer a satisfactory
definition of the difference between "natural" and "supernatural." A phenomenon which seems to be contrary to known theories and
therefore appears to be a miracle, and, hence, to meet the concept of the
supernatural, may find a "natural" explanation in the course of subsequent
research. The distinction between
natural and supernatural belongs more to philosophy than to science.
A point which seems relevant for the discussion at hand is
the distinction between rational and irrational approaches to problems. It
seems a platitude that to be legitimately scientific, a discourse must
necessarily be rational, whereas religious faith does not need to be based on a
rational foundation. If this is so,
it seems necessary to first define what makes a discourse rational.
For the purpose of this
discussion, I will suggest the following definition of a rational attitude to
controversial issues. In order to
be viewed as rational, the attitude must meet the following requirements: 1) It
must clearly distinguish between facts and the interpretation of facts. 2) It
must clearly distinguish between a) undeniable facts proven by uncontroversial
direct evidence, b) plausible but unproven notions, c) notions agreed upon for
the sake of discussion, d) notions that are possible but not supported by
evidence, and e) notions contradicting evidence.
If we adopt the premise that
the listed situations must be clearly distinguished in a rational discourse, we
must also accept that a rational conclusion
is one that is based solely on facts
proven by uncontroversial direct evidence. Statements and notions arrived at
without meeting the above criteria I will view as irrational, regardless of how
many adherents they may have. A
conclusion or view can be accepted as rational only if it is based on a careful
analysis of the matter with regard to its meeting the above criteria.
Note that my definition does not contain any criteria designed to assert
that a notion, conclusion, or view in question is true. In other words, my definition of rational
does not necessarily coincide with a definition of what is true.
If science is defined simply as a human endeavor aimed at
acquiring knowledge about the world in a systematic and logically consistent
way, based on factual evidence
obtained by observation and
experimentation, then there is no need to exclude anything from the scope of scientific exploration simply because it does not
belong in science by definition.
However, removing the prohibitions which limit the legitimate
boundaries of science to whatever can be defined as "natural," does not mean
that in science anything goes. To
be a proper part of science, the process of developing an explanation of
phenomena must necessarily be rationally based on factual evidence, i.e. on
reliable data.
So far, no attribution of any event to a supernatural source has
ever been based on factual evidence.
To clarify my thesis by example, let us return to the discussion
of the so-called Bible code, mentioned in one of the preceding sections. Recall that in a paper by the three Israeli authors,
published in 1994 in the Statistical
Science journal, a claim was made, based on a statistical study of the text
of the book of Genesis, that within that text there exists a complex "code"
predicting the names and the dates of birth and death of many famous rabbis who
lived thousands of years after the text in question
was written. The three authors used
certain moderately sophisticated methods of statistical analysis of texts in
order to prove their claim.
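The method at the heart of such claims centers on so-called equidistant letter sequences (ELS): words spelled out by every d-th letter of the text. The toy sketch below (my own illustration; the authors' actual test, which measured the statistical "proximity" of related sequences, was considerably more elaborate) shows the basic idea:

```python
# A toy sketch of an equidistant-letter-sequence (ELS) search of the kind the
# Bible-code claims are built on: read every d-th letter and look for a word.
def find_els(text: str, word: str, max_skip: int = 50):
    """Return (start, skip) pairs at which `word` appears as an ELS."""
    letters = "".join(ch for ch in text.lower() if ch.isalpha())
    hits = []
    for skip in range(1, max_skip + 1):
        for start in range(len(letters)):
            if letters[start::skip][:len(word)] == word:
                hits.append((start, skip))
    return hits

sample = "In the beginning God created the heaven and the earth"
print(find_els(sample, "tag"))   # any hits here are, of course, pure chance
```

Words of modest length can be found as ELSs in almost any sufficiently long text, which is why the whole argument hinges on the statistics of such findings and not on their mere existence.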
Obviously, if the claim of the three authors were true, it would mean
that the author of the book of Genesis knew about events which would occur
thousands of years after he was compiling the text in question. This would constitute a miracle and hence must have a supernatural
explanation (although at least one writer suggested that the alleged code was
inserted into the text of the Bible by visitors from other planets). Despite the obvious religious implications of the claim made by the three
authors, the editorial board of the journal, which comprised 22 members,
approved the publication of the article by the three Israelis. This was a convincing display of scientists keeping an open mind, contrary
to the persistent assertions by the adherents of "intelligent design,"
such as Phillip Johnson, accusing scientists of having "closed minds."
There was nothing
non-scientific in the three Israelis' approach itself, hence their claim
became a legitimate subject of normal scientific discussion, despite the obvious
implications of a supernatural origin of the text in question.
If the claim by the three
Israelis were confirmed and hence accepted by the scientific community as a
result of properly collected data and of their proper analysis, their results
would have become part of science, despite their religious implications.
Moreover, the proof of the existence of a real code in the Bible would
constitute a strong scientific argument for the divine origin of the Bible and
hence support the foundation of the Jewish and Christian religions. This is a way the supernatural may legitimately enter the
realm of science. It requires
reliable factual evidence.
The Bible code, however, was rather
swiftly shown to be far from proven. It was convincingly demonstrated that the claim of the three Israelis was based
on bad data. Moreover,
a number of serious faults were identified in the statistical procedure used by
the three Israelis. Hence the story of the alleged Bible code, rejected by the
scientific community, did not change the fact that so far no miracles have ever
been proven by scientific methods.
Therefore, although in my view
the reference to a supernatural source of events should not be excluded from
science out of hand simply by definition, accepting the claim of a miracle
into science can only take place if it is based on reliable evidence meeting
the requirements of scientific rigor.
If the past is any indication
of the future, science has nothing to be afraid of in not excluding supernatural
explanations if the latter are required to compete with the natural explanations
on equal terms. The chance that a certain actually observed phenomenon will be
successfully attributed to a supernatural source in a scientifically rigorous
way is very slim indeed.
15. Conclusion
In the late forties,
a meeting of biologists took place at the Academy of Agricultural Sciences in
Moscow, where the scientists who objected to Lysenko's pseudo-science were
verbally abused, derided, and insulted, as a prelude to their subsequent
dismissal from all universities and research institutions, arrest, and
imprisonment. Officially the
meeting was supposed to be devoted to a scientific discussion of the problems of
modern biology and agriculture. However,
at the last session of that meeting, Lysenko's speech included the lines "The question is asked in one of the notes handed to
me, 'what is the attitude of the Central Committee of the Communist Party to
my report?' I answer: the Central Committee of the Party examined my report
and approved it."
Everybody in the audience stood
up and lengthy applause followed.
So, we see that sometimes truth
in science was decreed by the ruler of a country. Is this a normal process of establishing the validity of a scientific
theory?
Of course this was an example
of a pathological usurpation of the power of science by dictatorial rulers of a
totalitarian state.
How, then, is the validity of a
scientific theory established? There
is no central committee which is empowered to decide which theory is correct.
Of course, there are numerous organizations, such as the Royal Society of
London, academies of science in various countries, scientific societies, both
national and international, which award prizes and honorary fellowships for
achievements in science thus bestowing an imprimatur of approval on certain
theories and discoveries, but these official or semi-official signs of
recognition serve more as tributes to human vanity than as real proofs of the
validity of scientific theories. The
most prestigious of all these prizes is perhaps the Nobel prize, awarded by the
Royal Swedish Academy of Sciences. Whereas
the majority of the Nobel prize winners are indeed outstanding contributors to
science, even this enormously prestigious prize could in certain cases invoke
the following line from a play by Aleksandr Griboedov, very popular in Russia,
titled The Grief because of Wisdom:
"Ranks are given by men and men are prone to err." Of course, the fact that many brilliant scientists whose contributions to
science were among the most important in the 20th century have never
been awarded the Nobel prize is understandable. The number of Nobel prizes is
very limited and the Swedish Academy has to make hard choices. However, even
among the Nobel laureates in physics there are certain personalities who were
simply lucky enough to come across an important discovery by sheer chance, some
of them not even capable of realizing the meaning of the observed
effect. The names of some of these prize winners are an open secret in the
scientific community.
Therefore prizes and fellowships, however prestigious, are not a
legitimate means for establishing the validity and importance of this or that
theory.
Is, then, the validity of a
theory established by a vote of scientists, so that the choice between theories
is made based on the majority of votes?
However ridiculous this picture
may seem, and despite the indisputable fact that often a single scientist may
turn out to be right even if all other scientists think otherwise,
the acceptance of theories is indeed determined by an unwritten
consensus among scientists. Such
a consensus is usually not formed by a vote at a single meeting of the experts
in a given field but is, rather, achieved in a haphazard way through a piecemeal
exchange of information among the specialists in that field. Sometimes it happens swiftly, when scientists working in a
given area of research overwhelmingly come to believe in a theory because of its
strong explanatory power and solid foundation in data. An example of such a theory is the theory of quarks suggested by Murray
Gell-Mann. In some other cases a
theory may wait for a prolonged time before becoming generally accepted. An
example of such a situation is the theory of dislocations. Suggested in the early
twenties (first as a hypothesis) independently by three scientists (in England,
Russia, and Japan) and rather soon developed as a mathematically sophisticated
branch of physics of crystals, it had indisputable explanatory power and
elucidated in a very logical way many effects which seemed puzzling at that
time. Nevertheless, for many years it could not win universal
acceptance because the very existence of dislocations assumed by the theory was
not experimentally proven. Only
with drastic improvements in electron microscopy were dislocations
experimentally observed, and thus the theory vindicated and universally accepted.
In that sense, it can be said
that scientific theories gain the imprimatur of approval through a vote by
scientists. They fall by the same
procedure.
This means that science is largely immune to attacks by amateurs and pseudo-scientists, regardless of whether they try to undermine science under the banner of undisguised creationism or by masquerading as a legitimate alternative to mainstream science, as "intelligent design" proponents are doing nowadays. I believe that their attack is doomed to fail, although they can be expected to continue generating annoying noise, thus distracting scientists from fruitful research.
In particular, the frequent claims by neo-creationists like Johnson or Dembski that Darwinism is a "dying theory," to be imminently replaced by "intelligent design," are all too obviously displays of what they wish to be the case rather than of the actual situation. Darwinism is part of science, and science is robust enough to withstand the attack by pseudo-scientists of the kind represented by Dembski, Johnson, and their cohorts.
One thing, however, seems quite
certain. Whatever the definition of science may be, and despite the misuse of
its achievements by all kinds of miscreants, and despite its multiple
shortcomings and failures, it is probably the most magnificent of the human
endeavors.