

© Avi Sion, 1990 (Rev. ed. 1996) All rights reserved.



1. Degrees of Being.

2. Induction from Logical Possibility.

3. History of Inductive Logic.

1. Degrees of Being.

Before determining where the philosophy of science stands today, I would like to highlight and review some of the crucial findings of our own research in this volume.

The first things to note are the implications of certain of our findings in modal logic. We saw (ch. 17) that, contrary to what has been assumed throughout the history of logic, the premises:

All M can be P

This S can be M (or: This S is M, or: must be M)

…do not yield the conclusion ‘therefore, This S can be P’, but a more disjunctive result, namely:

therefore, This S can (get to) be or become P.

Thus, the mode ppp is valid, but only provided we take transitive propositions into consideration. Past logicians, including moderns, failed to take the existence of change into account, in their analysis of modal logic, and for this reason did not spot this important alternative conclusion from a merely potential first-figure major premise. It is true that Aristotle analyzed change with great perspicacity in his ontological works — and indeed, my own formalization of change is based on his insights — but even he did not integrate this relation into his formal logic.

The immediate formal significance of this finding is that natural modality is not permutable. Although in common discourse we rephrase ‘S can be P’ as ‘S is {capable of being P}’ or as ‘S is {potentially P}’, in strict terms, we may not do so — we may not enclose the modality within the predicate, and consider these ‘is’ copulae as having the same meaning as that in an actual ‘S is P’. If this is true of potentiality, it has to be equally true of natural necessity, since the oppositional relations between modal forms have to be maintained. By similar argument, we can show that temporal modality is impermutable.

These formal findings force upon us certain ontological inferences of the highest import. I was myself surprised by the conclusions; I had not intentionally ‘built them into’ my system. The implication is, that we may not regard a potential relation as signifying the presence of an actual ‘mark’ in the subject; the subject contains, within its ‘identity’, the potentiality as such, and not by virtue of some actuality. Thus, there really are ‘degrees of being’. We may not reduce all being to the actual; there are lesser degrees of being, called potentialities, and (by extension) higher degrees of being called natural necessities.

In between these extremes, therefore, the degrees of natural probability are also different degrees of being. And likewise, temporal modalities have to be so interpreted. Note well, none of this is speculative: these positions are imposed upon us by formal logic, by the requirement of impermutability (which, incidentally, was also useful in understanding the Russell Paradox — see ch. 45.3). Thus, we are not making a vague metaphysical statement, but referring to precise technical properties which reveal and demonstrate the ‘self-evidence’ (in the formal sense, of logical necessity) of the concept of degrees of being.

Thus, although the concepts of modality are at first presented as purely statistical characterizations of relations, we come to the final conclusion (on formal grounds) that this numerical aspect is merely a symptom of a real ontological variation in the meaning of ‘is’. Aristotle left us with a limited vision of the scope of the copula ‘is’, because of the restrictions of his nonmodal logic; but now we see that there are real nuances in the sense of that copula, which only a modal logic can bring out into the open for our consideration.

We see, in this way, the impact modal deductive logic may have on ontology. But, as we shall see, the ramifications in modal inductive logic are even more significant, for epistemology. However, beforehand, I would like to make some incidental remarks.

Until now, the formal theory of classification, or class logic, has been notoriously simplistic. No one can deny how valuable it has been to science: for instance, Aristotle, and in modern times the Swedish biologist Carolus Linnaeus, have used it extensively in constructing their taxonomies of plant and animal life, and indeed every systematization involves reference to genus-species-individual relations. However, this approach has always seemed somewhat rigid and static.

Our world is conspicuously a world of change. Things come and go, there is generation and corruption, alteration, development, and evolution. What was yesterday a member of one class, may tomorrow be a member of another instead. Something may belong to a class only conditionally. And so forth. Only a modal class logic can assimilate such dynamic relations. Science needs this methodological tool, to fully depict the world of flux it faces.

Instant ‘state of affairs’ pictures are not enough; there is need to specify the avenues and modalities of transition (or absence of transition) from one state to another, as well as the causal relations involved. It is not enough to say vaguely what things ‘are’: we have to specify what they ‘must be’, what they ‘can be’, and from what to what and via what, and in which circumstances, they go: only thus can science fulfill its responsibilities.

For this reason, formal logic is obligated to study transitive categoricals and de-re conditioning of all types, in great detail. Without such a prolegomenon, many philosophical and scientific controversies will remain alive indefinitely. Right now, there is no formal logic (other than the one here proposed) which provides a language and neutral standards of judgment for, say, Darwin’s evolutionary theory or Hegel’s dialectic of history.

It is surely obvious that someone who is aware of the complexities of dynamic relations is more likely to construct interesting and coherent theories on whatever subject-matter.

Returning now to modality. You will recall that we distinguished between types of modality and categories of modality, and we said that a modality is ‘fully’ specified only when both its type and its category are specified. Upon reflection, now, we can say that even then, the modality is not quite fully specified: to do so, we would still need to pinpoint the exact compound of modality it is an expression of, and indeed, we must do this in both directions of the categorical relations (see ch. 51, 52). Furthermore, to complete our description of the relation, we would need to specify the precise de-re conditions of its actualization (see part IV).

Now, just as natural necessity, actuality, and potentiality form a continuum of ‘degrees of being’, and likewise for temporal modalities — so all the subdivisions of these modalities implied in the previous paragraph clarify the various degrees of being. That is, once we grasp the ontological significance of modality, as we did, then by extrapolation all the other formal distinctions, which occur within the types of modality in question, acquire a real dimension (of which we were originally unaware).

Moreover, the very concept of ‘degrees of being’ can be carried over into the field of extensional modality, in view of the powerful analogies which exist between it and the natural and temporal fields. This is not a mere generalization, because from the start we understood extensional modality as more than mere statistics; it relates to the possibilities inherent in ‘universals’ as units. Thus, ‘Some S are P’ and ‘All S are P’ are different degrees in which S-ness as such may ‘be’ related to P-ness as such. Thus, the quantifier is not essentially something standing outside the relation, but is ultimately a modification of the copula of being.

Going yet further, the valid modes of the syllogism, and indeed all argument, like nnn or npp for instance — they too may be viewed as informing us of the inherent complexities of modal relations. That ‘All S must be P’ implies only ‘some P can be S’ tells us something about being ‘in rotation’, as it were. That premises np yield conclusion p (rather than n or a) tells us something about the causal interactions of these different degrees of being. Likewise, for all types and mixtures of modality. All these so-called processes, therefore, serve to define for us the properties of different types and measures of being, giving us a fuller sense of their connotations.

Which brings us, at last, to the most radical extrapolation of all, and the most relevant to induction theory. Since, as we saw, in principle, logical necessity implies (though it is not implied by) natural necessity, and logical possibility is implied by (though it does not imply) potentiality — we may interpret these logical modalities as, in turn, themselves stronger or weaker degrees of being. The inference is not as far-fetched as it may at first seem. That something is such that its negation is ‘inconceivable’ or such as to be itself ‘conceivable’ is a measure of its belonging in the world as a whole (including the ‘mental’ aspects thereof).

Between minimal logical possibility (which simply means, you will recall, having at all appeared in the way of a phenomenon, with any degree of credibility) and logical necessity (which means that the negation has not even a fictional, imaginary place in the world), are any number of different degrees of logical probability. If our extrapolation is accepted, then high and low logical probability are measures of ‘being’, not merely in a loose epistemological sense, but in a frankly ontological one. This continuum overlaps with but is not limited to the continua of being in a natural, temporal or extensional sense.

‘Truth’, the de-dicta sense of ‘realization’, and ‘singular actuality’ in the natural/temporal and extensional sense, become one and the same in (concrete or abstract) phenomena. The really here and now is the level of experience of phenomenal appearances (in the most open senses of those terms); we might even say of concretes and abstracts that they are also different degrees of presence, in their own way. Beyond that level of the present in every respect, ‘existence’ fans out into various ways of stronger and weaker being. Thus, logical probability may be viewed as in itself informative concerning the object, and not merely a somehow ‘external’ characterization of the object.

This suggestion is ultimately made to us by formal logic itself, remember; it is rooted in the concept of impermutability. Thus, the contention by some that Werner Heisenberg’s Principle of Uncertainty signifies an objective indeterminism, rather than merely an impossibility to measure — may well have significance. I am myself surprised by this possible conclusion, but suddenly find it no longer unthinkable and shocking: once one accepts that there are ‘degrees of being’ in a real sense, then anything goes.

Thus, we may also view the mental and the physical, the conceptual and the perceptual, the ‘universal’ and the individual, the ideal and the real, knowledge and fact, and why not even the absolute and the relative — as different types and degrees of being. Being extends into a large variety of intersecting continua. In this way, all the distinct, and seemingly dichotomous, domains of our world-view are reconcilable.

2. Induction from Logical Possibility.

Let us now return to the main topic, that of induction, and consider the impact of what has been so far said. We acquainted ourselves with two major processes of induction: adduction (see ch. 20-22 and 46-48) and factorial induction (see part VI).

Adduction concerns theory formation and selection. The logical relation between postulates and predictions, consists of a probabilistic implication of some degree, conditioned by the whole context of available information. The postulates logically imply, with more or less probability (hopefully, lots of it), the predictions; and the latter in turn logically imply with more or less probability (anything from minimal possibility, even to logical necessity) the postulates. The logical relations, note well, are mutual, though to different degrees, and in flux, since they depend on a mass of surrounding data.

Thus, the adduced probability, in any given context, of any single proposition, be it frankly theoretical or seemingly empirical, is the present result of a large syndrome of forces, which impact on each other too. Theories are formed (appear to us), and are selected (by comparison of their overall-considered probabilities, to those of any modifications or alternative theories), with reference to the totality of our experiences.

Concrete experience, note, is by itself informative, even when it is not understood; abstract theories are also in a sense experiences, to be taken into account. Empirical phenomena determine our theories, and they in turn may affect our particular interpretations of empirical phenomena. There is a symbiotic give and take between them, which follows from the holistic, organic, nature of their logical relation.

Thus, adduction may be viewed as the way we generally identify the degree of being of any object, relative to the database present to our consciousness. Within the domain delimited by our attention, each object has a certain degree of being; and this degree is objective, in the sense that from the present perspective the object indeed appears thus and thus. The appearance may not be the central ‘essence’ of that object, but it is in a real sense a facet of it, a projection of it at the level concerned. In that way, we see that logical probabilities, and logical modality in general, ultimately have a de-re status too: their way of ‘being’ may be more remote, but it is still a measure of existence.

Deduction is merely one tool, within the larger arsenal of adductive techniques. Deductive processes are, apart from very rare exceptions of self-evidence (in the formal sense), always contextual, always subject to adductive control in a wider perspective. Modern logicians, so-called Rationalists, who attempt to reduce knowledge to deductive processes, fail to grasp the aspect of holistic probability. Our knowledge is not, and can never be made to be, a static finality; the empirical reality of process must be taken into account for a truly broad-based logic. Likewise, the opposite extreme of Empiricism is untenable, because it fails to explain how it allowed itself to be formulated in terms that are clearly far from purely empirical.

Now, factorial induction is another major tool at our disposal in the overall process of induction. In fact, we may view all induction as essentially adductive, and say that deduction and factorial induction are specific forms or methods of adduction. Essentially, factorial induction is built on the adductive method of listing all the alternative ‘explanations’ about a ‘given datum’ — in our case, the given datum is the gross element or compound, and the list of eventual explanations is the factorial formula; that is, the formally exhaustive series of integers compatible with the gross formula, and therefore constituting logically possible outcomes of it.

In the general adductive relation, the hypothetical proposition ‘these predictions probably imply those postulates (and thus the theory as a whole)’, the terms of the antecedent categorical need not be the same as the terms of the consequent categorical. Thus, the terms of the hypothesis may be mere constructs, of broader meaning and application than the more singular, actual and real terms of the allegedly empirical ground. That there are degrees of being, implies not only that there are degrees of truth (as explained, logical modality has a de-re status too), but also that there are degrees of meaning (again, in the objective sense that something has at least appeared).

The terms of a theory may be at first vague, almost meaningless concepts, but gradually solidify, gaining more and more definition, as well as credence. This evolution of meaning and credibility, as we look at the apparent object every which way, may be viewed as a change in the degree of ‘being’; as long as the apparent object does not dissolve under scrutiny, it carries some weight, some ‘reality’, however weak. It remains true that any alternative with apparently more weight of credibility and meaning, has a ‘fuller’ reality, more ‘being’. Thus, even though ‘truth’ is a comparative status, it may still be regarded as an objective rendering of the ‘world’ of our context.

In contrast, factorial induction deals with generalization and particularization of information. What distinguishes it from adduction (in a generic sense) is the uniformity of the terms in its processes. Factorial induction concerns the selection and revision of ‘laws’. We generalize ‘this S is P’ to ‘all S must be P’ or some less powerful compound (some other integer), with reference to precise rules. Here, note well, the terms are the same. This sameness is at least nominal; for it is true that by generalizing the singular actual to a general natural necessity (or whatever), we modify the degree of being and meaning of the terms somewhat. This modification is not arbitrary, but is determined by the whole context, including the rules followed.

But anyway, factorial induction is obviously a case of adduction (though a special case because of the continuity of terms). That means that the terms themselves may well be more or less theoretical, in the sense of having lower degrees of meaning. Also, the seeming empiricism of their singular actual relation may or may not be true; that is, it too has degrees of credibility and truth, determined by the overall context. At all levels, from the seemingly empirical, through factorial induction, to the adduction of overt constructs — there is some interactive reference to overall context.

Thus, the rules of factorial induction remain the same, however meaningful or true the terms appear at a given stage: they are formal rules, which continue to apply all along the development of knowledge. At each stage, they determine a certain answer, or a range of answers, depending on how definite and credible the terms and relations involved appear to us at the time, taking into consideration all available information. The factorial approach to induction is distinguished by its utter formalism, and independence from specific contents.

I want to stress here the profound importance of such an integrated theory of modal induction. Through it we see graphically that there is no essential discontinuity between logical (de-dicta) modality and the de-re modalities. The modality of a thing’s being, is the meeting point of all these aspects: on the outer edge, its logical meaning and truth, ranging from logical necessity to extremely dilute conceivability; closer to the center, the de-re modalities at play; at the very center, the empirical realization of the essence, towards which we try to tend.

Truth and full definition are approached in a spiral motion, as it were. We can tell that we are closer, but there is always some amount of extrapolation toward some presumed center. Our position at any stage, however composed of theoretical constructs and generalizations, always has some reality, some credibility, some meaning — it just may not be as advanced as that which someone else has encountered or which we will ourselves encounter later. But it is still a product of the Object, the whole world of appearances, and as such may well be acknowledged to have some degree of objective being in any case.

Another way to view inductive processes is as follows. Since logical possibility is a subaltern of natural possibility (potentiality), we can generalize (subject to appropriate rules of corrective particularization) from logical possibility to natural possibility, just as we generalize (under particularizing restrictions) from, say, natural possibility (potentiality) to temporal possibility (temporariness). This means that adduction in general (that is, even with imaginary terms) is a species of factorial induction.

We have already developed a definitive inductive logic for the de-re modalities (with the example of categoricals — de-re conditionals can similarly be dealt with, almost entirely by a computer: we know the way). This de-re inductive logic can now be extended further to de-dicta aspects, simply by introducing more factors into our formulas. We saw that the combinations of the natural and extensional types of modality gave rise to 12 plural elements, and thence to 15 factors. When temporal modality is additionally taken into consideration, the result is 20 plural elements and 63 factors. It is easy (though a big job) to extend the analysis further, with reference to the fourth type of modality, namely the logical.

Roughly speaking (I have not worked out all the details), we proceed as follows. Each previously considered element becomes three elements: a logically necessary version (say, prefixed by an N), a just-true version (unprefixed), and a logically possible version (say, prefix P). These more complex elements are then combined into fractions, and thence into integers; the resulting number of integers is the new maximum number of factors a formula may consist of.
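The tripling step just described can be sketched in code. This is an illustrative sketch only: the element names below are hypothetical placeholders, not the notation of this volume; only the N-prefixed / unprefixed / P-prefixed scheme is taken from the text.

```python
# Illustrative sketch of the tripling of elements described above.
# The element names "E1", "E2", ... are hypothetical stand-ins for
# any previously considered list of elements.

def triple_with_logical_modality(elements):
    """Expand each element into its logically necessary (N-prefixed),
    just-true (unprefixed), and logically possible (P-prefixed) versions."""
    expanded = []
    for e in elements:
        expanded.extend(["N" + e, e, "P" + e])
    return expanded

base_elements = ["E1", "E2", "E3"]
expanded = triple_with_logical_modality(base_elements)
print(expanded)       # ['NE1', 'E1', 'PE1', 'NE2', 'E2', 'PE2', 'NE3', 'E3', 'PE3']
print(len(expanded))  # 9, i.e. three versions per original element
```

The expanded elements would then have to be combined into fractions and thence into integers, as described; that combinatorial step, whose details the text leaves open, is not attempted here.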

Every gross formula is then given a factorial interpretation, comprising a disjunction of one to all the available factors. The factors must of course be ordered by modal ‘strength’, to allow for easy application of the law of generalization. Logical necessity or impossibility are ‘stronger’ than logical contingency coupled with truth or falsehood. The overall factorial formula for any event is accordingly much longer, but with the factors ordered by ‘strength’, factor selection or formula revision proceeds in accordance with exactly the same unique law of generalization.
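The selection procedure above can be sketched abstractly. The factor labels and the `compatible` predicate below are hypothetical stand-ins: the actual factorial formulas and compatibility tests are developed elsewhere in this volume; only the principle of taking the strongest still-compatible factor is illustrated.

```python
# A minimal sketch, under assumptions: factors are abstract labels ordered
# from strongest to weakest, and `compatible` is a stand-in predicate
# (supplied by the factorial analysis) telling whether a factor fits
# all the information available so far.

def select_factor(factors_by_strength, compatible):
    """Return the strongest factor compatible with current information.

    factors_by_strength: list of factor labels, strongest first.
    compatible: predicate testing a factor against all available data."""
    for factor in factors_by_strength:
        if compatible(factor):
            return factor
    return None  # no factor fits: the gross formula itself needs revision

# Hypothetical usage: generalization selects the strongest factor; when new
# data rules it out, selection falls back to the next-strongest, which is
# the particularizing revision of the formula.
factors = ["n", "a", "p"]  # e.g. necessity > actuality > potentiality
ruled_out = {"n"}          # suppose new evidence eliminates the strongest
print(select_factor(factors, lambda f: f not in ruled_out))  # a
```

The point of the ordering is that generalization and particularization then obey one and the same rule: always take the strongest factor the context allows.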

Thus, our manifesto for modal induction is not limited to the special field of de-re categoricals (and eventually de-re conditionals), but is capable of coherently and cohesively encompassing even logical modalities (applied categorically, or eventually hypothetically). We have therefore discovered the precise mechanics of all adduction. At any stage in knowledge, it should henceforth be possible to characterize any apparent proposition with reference to a precise integer, the strongest allowed by the context.

This refers, not only to simple generalization of ‘laws’ (observed regularities), but to determining the status as well as scope of any complex ‘theory’ whatever (however abstract or even constructed be its terms, even if their definitions are still notional and their truths still hypothetical). Of course, the terms still have to be at least minimally intuitively meaningful and credible. But the selection (subject to revision) of the strongest available factor precisely determines a proposition (or its negation) as true. There is no appeal to some rough extrapolation on vague grounds, toward the central ‘truth’; we now have a formal depiction of the process of pin-pointing the truth at any time.

3. History of Inductive Logic.

I want to now refer the reader to the article on philosophy of science in the New Encyclopaedia Britannica (25:660-678). Written by Stephen Toulmin, of Chicago University, this paper is the most refreshingly balanced of all those referred to so far. The impression it gives is that current understanding in inductive logic, is by far superior in quality to modern trends in deductive logic. This is no doubt to a great extent the author’s achievement, his ability to avoid extreme positions, his awareness of all the nuances in the matter at hand.

My task is therefore much facilitated. It is to follow the history of inductive logic, and determine where I agree or disagree, or what I may add in the way of comment. By comparison and contrast, the distinctive and original aspects of my own contributions will be highlighted, and further defined and defended.

(Although I do not here review them, the interested reader might consider studying, in addition to the said article, the rest of the entry on ‘Philosophies of the Branches of Knowledge’ of which it is part, as well as the NEB article ‘Epistemology’ (18:601-623)).

One thing is clear at the outset: no one has to date formulated any theory remotely resembling factorial induction. Adduction is well known — it is the hypothetico-deductive method, attributed to Bacon and Newton; actual induction may, I believe, be attributed to Aristotle (I certainly learned it from his work); but factorization, factor selection and formula revision (not to mention the prior logics of transition and of de-re modal conditioning) are completely without precedent.

These constitute, I am happy to report, a quantum leap in formal logic. I stress this not to boast, but to draw attention to it. It was the most difficult piece of intellectual problem-solving (it took 2 or 3 months) this logician has been faced with, and the most rewarding. The problem was finding a systematic way to predict and interpret all consistent compounds of (categorical) modal propositions; many solutions were unsuccessfully attempted, until the ideas of fractions and integers, and of factorial analysis, presented themselves, thank G-d.

The historical absence of a formal approach to induction, or even the idea of searching for such an approach, is the source of many enduring controversies, as we shall see. Once a formal logic of induction exists, as it now does, many doubts and differences become passé. Just as formal deductive logic set standards which precluded certain views from the realm of the seriously debatable, so precisely the formal inductive logic made possible by factorial analysis of modal propositions simply changes the whole ball game.

Toulmin discusses inductive logic under the name of Philosophy of Science. This reflects the fact that it is currently with reference to the examples provided by modern science that philosophers try to understand induction. Which is as it should be, but implicit in the name of the research is the lack of a sufficiently formal approach. Induction is first of all an issue for Logic to sort out. However, Toulmin does mention ‘the formal study of inductive logic (which reasons from facts to general principles)’.

In any case, the research in question has both ontological and epistemological ‘preoccupations’, reflecting larger subject-object issues. ‘Any hard-and-fast distinction between the knower and the known or between the observer and his observation’ is alleged to be ‘discredited’ by modern discoveries in Physics, like relativity and quantum mechanics (that is to some extent true, but not itself as hard-and-fast as suggested, in my opinion).

‘Ontological preoccupations… have frequently overlapped into the substantive areas of the sciences’, with reference to ‘the existence and reality’ of their theoretical entities. For example, the atom was debated early in this century by the likes of physicists Ernst Mach and Ludwig Boltzmann; similarly also, in biology and sociology. ‘Epistemic concerns’ have also somewhat been affected by psychological research into cognitive processes.

In any case, philosophy of science has tried to analyze and evaluate ‘both the general concepts and methods characteristic of all scientific inquiries and also the more particular ones that distinguish the subject matters and problems of different special sciences’. I agree that all input from special sciences is valuable, and helps to define and test any formal theory of induction. Toulmin, as already said, surveys the field very openly, with ‘no effort to prejudge’.

What becomes apparent is an enduring division, across the centuries, into roughly three camps: the first two are opposite extremes, in a spectrum of proposed answers to any question; and in between them, in the middle, lies any number of attempts at reconciliation between the extremes. These are all known historical divisions. They do not form a uniform vertical continuum, because the problems shifted in emphasis across time. Thus, for the main periods of Ancient Greece we have, briefly put:

Early Antiquity — Parmenides (unity, permanence) vs. Heraclitus (plurality, flux)

Classical Greece — Plato (transcendent universals) vs. Aristotle (immanent universals)

Later Antiquity — Stoics (regularity) vs. Epicureans (spontaneity; Materialism, say)
Parmenides and Heraclitus were concerned with the issue of unity and reality, versus plurality and transitoriness, of appearances. Plato and Aristotle were more focused on the issue of transcendence versus immanence of ‘universals’, both more or less acknowledging particulars. The Stoics and Epicureans, in contrast, functioned in the more limited domain of the material world, debating the regularity or spontaneity of its bodies’ movements.

Let us note that Plato was methodologically more committed to axiomatization and less empirical-minded, whereas Aristotle was both a biologist and also the founder of formal logic; so with regard to rationalism, they differed only in degree. Similarly, throughout history, the common ground is as significant as the differences. Therefore, in spite of seeming repetitiveness in the divisions, their frames of reference do change a bit and become more defined.

‘The ensuing Hellenistic, Islamic, and medieval periods added little to the understanding of scientific methodology and explanation’.

As of the Renaissance, Empiricists faced off with Rationalists. Francis Bacon insisted on use of ‘empirically observed fact’ (similarly, Locke, Hume), from which theoretical propositions would be ‘formally deduced’ (by ‘exhaustive enumeration’) or eliminated. René Descartes, in contrast, looked to the model of Euclidean geometry, and considered that comprehensive scientific principles should be deducible from a structured set of ‘self-evident axioms [and] definitions’; similarly, Leibniz, Bishop George Berkeley.

Both these tendencies found expression in the practical scientific work of Isaac Newton, which referred both to observation and experiment, and to theoretical tools like the mathematical calculus. He thus gave birth to the ‘hypothetico-deductive method’: a working hypothesis is assumed, its specific implications are deduced, and these are compared to empirical evidence; so long as harmony prevails, well and good; otherwise another hypothesis must be found. Thus, Newton was neither as ‘enumerative’ as Bacon (though they agreed on ‘elimination’), nor as ‘self-evident’ as Descartes, but managed to find a harmonizing middle way, satisfying the concerns of both sides to some extent.

In 1733 and 1766, consistent alternative geometries to Euclid’s were developed, showing that the latter ‘could no longer claim a formal uniqueness’. Also, since all the alternatives were presumably compatible with empirical evidence, a decision between them became seemingly impossible. It therefore became imperative to justify our preference for one of them, in some (perhaps new) way.

Immanuel Kant initially subscribed to the Cartesian ideal, believing that ‘Newton’s physical principles would eventually be put on a fully demonstrative… basis’, but later developed a more ‘critical philosophy’. He advocated a ‘transcendental method’, which would refer to the mind’s structure to explain our adherence to certain categories and axioms of knowledge. Effectively, Kant was saying, we think in such and such a way (for instance, with respect to space and time), because our minds are so structured that we have to.

In my view, funnily enough, that interposition was totally irrelevant and inconsistent. It claimed for itself a transcendental status, and tried to skirt the issue as to whether it had itself been cognized rationally or empirically or in a combination of both ways. It seems ingenious, only because it is laden with paradox. On the one hand, its intent was to impose some certainties into knowledge; but on the other hand, the implication was that our knowledge is rather accidental (that is, it could have been otherwise, were the mind differently structured), and therefore conceivably incorrect (and therefore uncertain).

However, historically, Kant’s influence has been grave, because he effectively equated the concept of rational ‘self-evidence’ with mere tautology, making it lose all content. Simultaneously, Kant put even experience in doubt, since there was a possibility of it having been conditioned (read: distorted) by the mental prism. We can also view this influence in a more positive light: Kant revealed for us the weaknesses of extreme rationalism and extreme empiricism, and forced us to take these problems into account when formulating any subsequent theory.

But in my view, the solution of Kant’s dilemmas is simply to apply Newton’s adductive method to the whole enterprise of knowledge, including philosophy itself. Every item must be equally self-consistent, and consistent with experience — or at least seemingly so. The preferred alternative is that with more such cumulative credibility than all its rivals. These tests apply equally to epistemological and ontological theories: they cannot exempt themselves from the same scrutiny as they apply to the special sciences.

In that case, Kant’s insinuations that the self-evident is contentless and that the mind distorts experience, are merely internal difficulties within his theories, and only serve to prove that they themselves were not thoughtfully constructed. The only self-consistent position is that the intuitively evident has meaning and credibility (subject to ongoing confirmation), and that the mind’s conditioning of experience need not be distortive (though in specific cases it might be so judged, on the basis of other experiences or logical considerations). It suffices to develop a broad-based theory consistent with these prime logical requirements. Kant’s simply does not fit the bill; he did not understand Newton’s method.

Nevertheless, the intervention of Kant’s Idealism, suggesting that consciousness imposes (rather than merely discerns) a structure on its objects, was historically valuable. It stimulated a healthy and fruitful interest in the mechanisms of sensation, stretching from research by Hermann von Helmholtz in the mid-19th century to wide-ranging present-day efforts by biologists and psychologists. It also helped, by negation, to better define some of the conditions for a rival theory of knowledge. Thus arose, for instance, what Toulmin calls the ‘epiphenomenal view of experience — as a kind of mental froth without causal influence on the underlying physical mechanisms’.

With regard to physiological aspects, the old debate between vitalists and mechanists, as to whether life processes were or were not radically different from other physical phenomena, gained relevance in epistemological discussion. Note however that Kantianism damns you if you do and damns you if you don’t; for free will seems to imply intentional arbitrariness, and mechanical determinism or causelessness seems to imply accidental deviations. As far as I am concerned, objective consciousness is conceivable whatever our aetiological presuppositions.

Continuing our survey, in the 19th century, William Whewell made the important contribution of stressing the temporal dimension of Newton’s method: ‘it was only by a progressive approach that physicists arrived at the most coherent and comprehensive systems’. John Stuart Mill’s practical rules for experimental inquiry and causal reasoning (the methods of agreement, difference, residues, and concomitant variations) were also crucial contributions to scientific method (as well as formal aetiology). It seems to me that, in spite of their rivalry, these two men were essentially on compatible courses.
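Mill’s first two methods, at least, admit of an almost mechanical statement. The following Python fragment is a rough sketch on invented cases and circumstances: the method of agreement retains the circumstance common to every case in which the effect occurs, and the method of difference compares a positive case with an otherwise-identical negative one:

```python
# Toy illustration of Mill's methods of agreement and difference.
# Each case is the set of circumstances present; the data are invented.

cases_with_effect = [
    {"A", "B", "C"},
    {"A", "C", "D"},
    {"A", "E"},
]

# Method of agreement: the circumstance common to all cases with the effect.
agreement = set.intersection(*cases_with_effect)
print(agreement)  # {'A'}

# Method of difference: a case with the effect vs. an otherwise-identical
# case without it; the differing circumstance is the candidate cause.
positive = {"A", "B", "C"}
negative = {"B", "C"}
difference = positive - negative
print(difference)  # {'A'}
```

The residues and concomitant-variations methods resist so simple a set-theoretic statement, since they involve quantities; but the eliminative spirit is the same.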

At the turn of the 20th century, a modern ‘critical reanalysis’ of science and its philosophy began. As science appealed to more and more abstract, and indirectly arrived at, concepts — ‘Kant’s lesson about the constructive character of formal theories’ gained credence. At the same time, modern science remained of course deeply committed to referral to empirical data. Thus, Ernst Mach and Richard Avenarius saw ‘theoretical concepts [as] intellectual fictions, introduced to achieve economy’ in the ‘organization of sensory impressions’; such constructs could be useful tools without needing to be claimed to correspond to any reality.

‘As against this instrumentalist or reductionist position, Max Planck, author of the quantum theory, defended a qualified Realism’. Henri Poincaré and Pierre Duhem adopted ‘intermediate, so-called conventionalist positions’. These ‘attempted to do justice to the arbitrary elements in theory construction while avoiding… radical doubt about the ontal status of theoretical entities’. Norman Campbell responded by ‘sharpening the distinction between laws and theories’; the former are concerned with ‘cataloging and describing’, the latter with ‘making intelligible… compact[ing] and organiz[ing]’.

My own position on the issue at hand is simply open and formal (noncommittal, without contentual prejudice): there is a generalization from logical possibility to natural actuality; so long as no empirical finding or logical insight arises which effectively, by the rules of induction, requires us to revise our position (either totally abandoning the proposed integer or granting equal credence to an alternative integer), it remains true. ‘Particular observations’, ‘laws’ and ‘theories’, all fall under the same rules; there is no pressing need to distinguish them.

It is only ex-post-facto, with regard to demoted ideas, that we can credibly say ‘ah, yes, that one turned out to be a fiction’. If two or more ideas are equally conceivable, we might well adopt one as a mere ‘working hypothesis’ (which may turn out to be fictional). But if only one is predominant, and so long as it stays that way, it cannot consistently be characterized in a skeptical fashion, but must be acknowledged as a reality. To claim all concepts fictional, implies that very claim itself, which is also conceptual, to be fictional (that is, false).

The imaginariness or remoteness of a construct may affect our assessment of credibility in specific cases, but cannot be viewed as having any relevance in principle, since induction is not arbitrary but subject to rules. Any claim that a specific construct is fictional, implies a claim to knowing that there is something else which is real and different from that construct; a general accusation is disqualified in advance. The distinction between fiction and reality presupposes some standards of judgment; it cannot therefore be meaningfully applied without tacit appeal to and acknowledgement of those standards.

However, it must be admitted that just as Kant’s insights, though logically untenable, had a creative influence on subsequent philosophy, Mach’s view of scientific theories as mere flights of fancy, in spite of its internal inconsistency, had a positive effect on scientific thinking. What it did was to psychologically liberate scientists, to give more rein to their imaginations, at a time when science was in full expansion and needed new ideas, new constructs with which to assimilate new empirical findings.

Philosophy had come to a clearer realization of the crucial role of imagination in theorizing. It called for a less pedestrian, richer science. It is noteworthy that this newfound freedom was explicitly used and hailed by the likes of Albert Einstein, who talked of scientific theories as ‘free creations of the human mind’. Relativity and Heisenberg’s Indeterminacy were distinguished by their philosophical daring.

I also agree with Mach that ‘submicroscopic atoms… derived their scientific meaning entirely from the macroscopic sense experiences that they are used to explain’. For me, this is an important point, because it illustrates how our theoretical constructs often refer to mental images of physical objects and events. The conclusion to draw is not however that they are all fictions (that is for inductive logic to determine, case by case); rather, we should notice that this gives initial meaning to the words used, and it is significant that it refers back to causally related experiences. In this context, Leibniz’s idea of worlds within worlds, reflecting each other at all levels to some extent, is very pertinent.

In the period between the two World Wars, Mach’s attempt ‘to reduce all knowledge to statements about sensations’, and the modern symbolic logic of Russell and Whitehead, and other similar strands, coalesced in the Vienna Circle of Logical Positivists, which still has a considerable influence today (though less than then).

It is interesting to note that this philosophy was composed of two somewhat contradictory extremes. On the one hand, a neo-Humean focus on only the most concrete of sense impressions; on the other hand (as we saw in the previous chapter, with reference to Carnap), a narrowly ‘linguistic’ analysis of conceptual knowledge. Therefore, in traditional philosophical terminology, they were both extreme empiricists and extreme rationalists. The method advocated by logical positivists was thus, strictly speaking, ‘hypothetico-deductive’ only in name.

They were hedging their bets: pursuing the Cartesian programme of an axiomatic system of science, derived from some most-general postulates ‘posited without proof’, yet at the same time claiming for those first principles, ‘by comparing them with actual experience’, a measure of ‘substantiation’. Still, in that mixed-up context, valuable concepts like ‘verification, confirmation, or corroboration’ (and their negative equivalents), became more common currency and were better understood.

Concurrently, a school of Neo-Kantians ‘questioned the very possibility of identifying the pool of theoretically neutral observations’. Heinrich Hertz advanced the idea that, in a theory like Newton’s dynamics, the logical relations linking postulates and phenomena were themselves also part of the theory. Wittgenstein developed this further with reference to a philosophy of language, and his successors added, to the concern with the ‘structure’ of theories, a concern with ‘the manner in which [they] succeed one another’.

These issues are dealt with in my own theories, as follows. Regarding primary observations, it does not matter, within a formalized inductive logic, how ‘neutral’ they are, because they are themselves propositions, subject to the same controls and rectifications as the more abstract components of theories. With regard to Hertz’s contention, it effectively denies the existence of a deductive logic and mathematics which is truly formal, that is, independent of any particular terms; it has an appearance of credibility only because it is true that the contents of conclusions depend on the contents of premises, but there remains nevertheless a formal continuity. As for the issues of theory structure and change, they are discussed in the chapters on theory formation and selection (ch. 47-48).

Toulmin goes on to describe controversies which developed among scientists. For instance, ‘about the legitimacy of extending the methods and categories of physical science to the sphere of the higher, distinctively human mental processes’. He mentions B.F. Skinner, who rejected ‘any distinctive class of mental laws and processes’, and Noam Chomsky, who argued that ‘linguistic activities are creative and rule-conforming in respects that no behaviorist can explain’. Or again, conflicts in sociology and anthropology ‘to do with the significance of history in the explanation of collective human behavior’. Marxists emphasized the ‘dynamic, developing character of social structures and relationships’.

There is still today ‘deep disagreement’ about ‘the relation of theory and observation’. For the extreme Empiricist, ‘general theoretical principles have authentic scientific content only when interpreted as empirical generalizations about directly grasped empirical data’. For the rest, they ‘suggest that theory construction is totally arbitrary or unconstrained’ — surely, Toulmin says, an exaggeration. (I have shown that generalizations, whether from the particular to the general, from the potential to the necessary, or from conceivability to existence, are all identical in formal process.)

At the other end of the spectrum, extreme Rationalists ‘reject the idea that raw empirical facts… display any intelligible or law-governed relationships whatever — and still less any necessary ones’. Thus, they ask ‘can one, after all, speak of natural events themselves as happening “of necessity”?’ Carnap even criticized ‘empirical generalizations’. (It is interesting to note that this position is crypto-Heraclitean; it is, of course, logically untenable, since it purports to formulate just such a lawful and even necessary relationship, itself. For me, once we have clearly defined necessity, and determined the rules for its induction, the question loses its credibility; note also that to deny necessity implies denial of possibility, too.)

Toulmin very reasonably points out that all the above approaches ‘emphasize valid and important points; but, in their extreme forms, they give rise to difficulties’. The task ‘is, accordingly, to find an acceptable middle way’. The philosopher has to ‘come to grips with the full complexity of the scientific enterprise’, without however ‘taking too dogmatic a stand’. The philosophy of science has certain recurring themes and issues to deal with, notably (following Toulmin) the procedural, the structural, and the developmental.

a. Procedure. There have been efforts of ‘careful analysis of the procedures by which empirical data are actually handled’ by science. These include observation, design of experiments, measurement, statistical analysis to deal with large numbers of variables, and systematic classification. These procedures, as well as being ‘necessary preconditions for effective theorizing’ are ‘themselves, in turn, subject to revision and refinement in the light of subsequent theoretical considerations’.

It is true that the scientist is often selective in his observations and that the experiments he designs are expressions of his theoretical assumptions. Kant called it ‘putting Nature to the question’. But this is only a reflection of our limits in time and financial resources, not to mention intelligence. We are obliged to search for short-cuts, but we must also be careful. Selectivity often enough leads to erroneous inductions and narrow views, and many experiments fail or give distorted results.

b. Structure. ‘The formal structure of science’ has been studied. This refers to ‘the straightforward extension of methods already familiar in deductive logic’, and the more inductive goals of finding ‘rigorous formal definitions of… probability, degree of confirmation, and all the other evidential relations’. This is precisely what I have tried to do, through my theory of modal induction.

Modern logicians, Toulmin suggests, are tempted ‘to play down important differences between mere descriptive generalizations… and the explanatory theories’. I have shown the difference to be as follows: for the former, there is a movement within de-re modality; for the latter, the movement is from logical (de-dicta) to de-re modality. However, I have also shown the structural similarity, and single common source of certification (the law of generalization; see ch. 55).

More important, in my view, are modern logic’s confusions concerning the relations between deductive and inductive logic. The former is formally recognized by moderns, ad nauseam; but the latter is only discussed by them in very nonformal ways. ‘It has not been easy’, Toulmin admits, ‘to analyze the formal structure of the sciences’ and give them a ‘working language’. An attempt was made by R.G. Collingwood in 1940, with reference to ‘mutual presuppositions between more or less general concepts’ instead of ‘direct entailments’.

I think the best way to overcome the difficulty is to view the task as one of developing a formal logic of ‘inductive implication’, as an extension of the concept of ‘deductive implication’. There has, supposedly, been some work done in this direction; perhaps Hans Reichenbach’s ‘analysis of probabilistic argument’ falls in this category. But the dilemma presented by Carl Hempel, who found it hard to understand the ‘logical link’ between hypotheses and phenomena, seems to belie this supposition.

There is always, admittedly, some ‘reinterpreting’ of nature — and the terms of all propositions, as well as the relations between terms, are to varying extents hypothetical. But the thing is to keep in mind the fine thread of referral involved, which gives meaning to the whole; all constructs, however abstract, have some concrete building blocks. Interpretation presupposes something to interpret and something to interpret with, and therefore cannot be wholly divorced from reality. Abstract theories are just more general and theoretical than concrete generalizations, but not essentially different.

Toulmin very responsibly rejects excessive ‘relativism’, which would destroy ‘the objectivity of scientific knowledge’, and give ‘the impression that the conceptual structures of science are imposed on phenomena by the arbitrary choice of the scientific theorist himself’.

I too vote against sheer Relativism, of course; but I do also recognize that there is some relativity in existence. We have to admit there are relative appearances, in the simple sense that an object is different-looking from different angles or at different times; this is not in itself a major threat to objectivism, but merely an acknowledgement of the complexity of our phenomenal world. Relativity is one of the relations found in our world. But admitting this relation does not prevent us from making distinctions. It just does not follow that all imaginations are realistic or create realities, or that all appearances have equal status so that contradictions may exist, or anything of the sort. Only through a both holistic and case-by-case consideration, can such judgements be made.

c. Development. The reaction to relativism in the late 1960s took the form ‘of questioning… that the entire intellectual content of a science can be captured in a propositional or presuppositional system’. Charles Peirce noted that ‘the logical status of the theoretical terms and statements in a science is… subject to historical change’. More recently, Quine rejected ‘any attempt to classify statements… using the traditional hard-and-fast dichotomies — contingent-necessary and synthetic-analytic — as fallacious and dogmatic’. Thus, a shift developed, away from ‘analyzing a science in static logical terms’ towards ‘analyzing the dynamic processes’.

For me, abandoning the goal of a formal inductive logic is an excessive and defeatist reaction. It is indeed very important to keep in mind, like Peirce, the changing and adaptive character of theorizing and scientific belief. However, that is part of the challenge: to develop a formal logic which is sensitive to the flux of knowledge. I believe the modal inductive logic presented in this volume fulfills these conditions. Quine’s rejection of modality is not self-consistent, and therefore it is without credibility; factorial induction shows clearly that formalism and flexibility are not at odds.

‘The crucial question … “What is a concept?”… had been largely disregarded’ by Logical Empiricists. Viennese Positivists, following Frege, viewed it as ‘a matter for psychologists’ — with reference, for instance, to the equation of the concept of ‘force’ with ‘a feeling of effort or a mental image’. It is clear that the symbolism of modern deductive logic has had a devastating effect on such thinkers. It produced in them a rigidity, filled with preconceptions. There is no reason why the notion of force should not serve as a springboard for a more defined concept of it; why a closed-minded prejudice against intuition?

Reality is infinitely nuanced and varied, and should be categorized only with a very open and nimble attitude. For instance, consciousness and volition range widely in stature, from the insect’s level to the much broader and freer genius and heroism possible to humans. Even inanimate matter and plants may, for all we know, be to varying extents less mechanistic and mindless than we presume. We must remain aware of both the continuities and the differences in degree within that broad range. If one starts with rigidly limited definitions of those concepts, one is bound to disbelieve any manifestations which do not match our simplistic expectations.

The beautiful mystery of existence is the mutual reflection and interconnection of everything, and this must be taken in stride. An honestly universal logic is one which is capable of handling, not only the ‘square’ outlook of science, but the full range of thought, from the notional and vague to very clear concepts. Precisely the role of logic is to help us to gradually move from the former to the latter. A logic which is only capable of dealing with the end-product of this process is useless, since we are ever far from that ideal. A purely ‘linguistic’ and non-‘substantive’ logic is meaningless, and is in any case impossible to build without secretly using and trusting intuition.

There are in fact no propositions without concepts, and no concepts which do not appeal to intuitive notions. This is not a problem, it is a solution. What matters is to take as much as we can of the whole of experience, concrete, abstract and logical, into consideration in constructing both our methodological ‘standards’ and the substantive ‘interpretations’ of the sciences. Toulmin rightly points out how ‘methodological clarification’ and ‘creative advance in science itself’, develop hand in hand. They have a symbiotic relationship, implying a dynamic give-and-take or feedback. ‘It is questionable whether any change, however drastic… is ever as discontinuous or revolutionary’ as rigid logicians or scientists would have us believe.

Toulmin describes the ideal inductive logic. It acknowledges ‘the parts played by intuition, guesswork and chance in scientific investigation’, which Michael Polanyi and Arthur Koestler emphasized; the ‘creativity’ of intellection. It avoids the ‘pedestrian desire to clip the wings of imagination and confine the scientist to stereotyped procedures’, and to a ‘barren… accountancy’. But it also avoids ‘a romantic anti-rationalism’. ‘Chance’, he remarks, ‘favors the prepared mind’; the ‘best trained mind’ is ‘best qualified to appraise… current problems and recognize significant clues [and] promising lines of analysis’.

There is, in my view, a ‘logic of discovery’ which satisfies those criteria. It is, first of all, modal — it acknowledges the gradual clarification of meanings, the gradual certification of truths. Because it is modal, it avoids the sweeping rationalistic and empiricist generalizations concerning the content and validity of knowledge, which narrow-based modern logic has engendered in legions. Raw data and its interpretation form a continuum; logical modality is itself a de-re aspect of the world, an extension and manifestation of the central object. The chasm between them is merely an illusion produced by naive and rigid symbolism and axiomatism.

Without compromising the ‘to be or not to be’ and quantitative requirements of two-valued logic, a multi-valued logic emerges, in which things ‘are’ in some or all similar instances, sometimes or always, in some or all circumstances, in some or all perspectives. Logical necessity claims the very core of being, the esse, the essence. Natural necessity is a slightly broader sphere around that, and temporal necessity yet broader. Still further removed and superficial are the spheres of the temporary, the potential and the conceivable. Extensional modality operates at all these levels, strengthening or weakening the intensity of the other modalities.

The further from the center, the lesser the degree of being. Logical possibility is the outermost wave of an emanation from the core Object, which is part of the Object in its fullest sense. It is not ‘in the mind’ but just closest to the Subject. In some cases, all the Subject is able to penetrate with his consciousness is that superficial level of being; in other cases, it goes deeper. Some appearances are empty shells of possibility, illusions; others are more strongly affirmed, closer to a central reality, more necessary.

The Subject’s position relative to the Object affects his insight in some cases; in some cases, the Subject ‘makes waves’, in the Ether as it were, which blur the Object. Only through a global perspective, by a consideration of the whole field of experience, can these specific relativities and contingencies be assessed for what they are; and that is an ongoing process. There is no case for ab-initio rejecting the appearance of any facet of being, and like the empiricists accepting only concrete surface impressions or like the rationalists only the most enduring abstractions.

A truly broad-based theory of knowledge accepts both that not everything is contingent, and that not everything is necessary. We may, within limits, aspire to a science which is ‘an accurate, objective mirror’ of reality — for phenomenalist (in the sense above described) reasons. What Toulmin calls ‘the strict Realist position’ is an impatient or conceited claim for only absolutes; the ‘strict conventionalist’ or ‘constructivist’ position claims everything relative. None of them recognizes the full range of probabilities, and it is for this reason that they are formally biased. We may grant ontal credibility to some theoretical entities, without having to grant it to all. The test of ‘truth’ is always particular to a given proposition and context; it is a vain prejudice to lay claim to a single, sweeping qualification, which ignores all nuances.

Kant’s ‘attack on things-in-themselves’, and Mach’s later ‘operationalist’ dismissal of ‘all debates about reality and objectivity as inescapably barren and empty’, suffer incurably from self-negation. How do they know enough about that ‘external’ world to be able to deny it? Is not that very denial itself a claim to having information which is in every sense real and absolute?

If the meaning and truth of a proposition derive only from the ‘scientific operations’ surrounding it, and ‘scientists are not to be understood as claiming or disclaiming anything’ — then what about these very statements themselves? Are they meaningful or not? Are they true or not? Are they purely ‘operational’ too, and are they ‘claiming or disclaiming’ nothing? How are those very ‘operations’ known (known to have occurred as described, known to be real or valid), let alone anything else?

These philosophers did not ask themselves such obvious questions. Worse still, their positions are still today considered respectable by many. But in each case, we find them to be limited in perspective and inattentive to the variegated nature of being and knowing. Above all, they are inconsistent with themselves, when applied to themselves.

In this context, I think the philosopher Ayn Rand deserves attention and respect. Her writings, including Atlas Shrugged and The Objectivist Newsletter, were apparently received with an embarrassed silence by most of the academic community. But, in view of the confusions reigning in epistemology and ontology, I do not see why. One may well not endorse all her pronouncements on every subject — I certainly do not[1] — but one is obliged to recognize what is evidently a valuable contribution to these discussions. She wrote:

An axiom is [not] a matter of arbitrary choice… An axiom is a statement that identifies the base of knowledge and of any further statement pertaining to that knowledge, a statement necessarily contained in all others, whether any particular speaker chooses to identify it or not. An axiom is a proposition that defeats its opponents by the fact that they have to accept it and use it in the process of any attempt to deny it (965).

As the saying goes, ‘one cannot have one’s cake and eat it too’. She thus proposed a radical standard of judgement for all epistemological and ontological theorizing: philosophers must test their pronouncements on themselves. A simple test: if the philosopher is effectively denying his or her structural ability to make that very pronouncement, or that it has truth or meaning, then that statement is false, null and void, untenable. End of discussion. There is no escape from this logic, no convoluted way to claim a transcendent insight, which bypasses this obvious test.

Note the new twist. It is not contradiction between the terms of a categorical, or the elements of a compound (‘self-contradiction’ in the more Aristotelean sense); nor is it simply a proposition logically implying an opposite proposition or a self-contradiction (‘internal inconsistency’ in the more modern sense). It refers more specifically to the ramifications of the act of formulating the proposition: the acknowledgment of the act implies certain strictures on the content (she called it ‘concept-stealing’).

As a logician, I have found this ingenious test repeatedly valuable; I acknowledge the debt. In all fairness, this contribution by itself classes Rand as a major player, a logician of the highest order. Certainly, this does not solve every problem, but it considerably narrows down the field as to what is acceptable.

Toulmin’s article contains many other valuable insights. I will now very briefly note some of these, and point out the parallelisms in my own work.

He raises a question concerning the significance of the ‘statistical character’ of scientific probabilities; I have described modalities as degrees of being, signifying different tendencies towards full realization. He suggests a ‘reappraisal of traditional taxonomy — in the light of evolution theory, genetics, and population dynamics’; this is dealt with in my theory of transitive propositions and modal classification.

He calls for a ‘framing of authentically empirical questions about perception and cognition’; my direct-relation logical criterion is, I believe, very relevant to any such investigation. He points out the ‘variety of perceptual systems’; I have described some features of a logic of the sense-modalities, taken separately and in their interactions. (See ch. 60-62.)

Discussing the relationship of natural science to ethics and religion, he wisely gives credit to theists who ‘deliberately limit the claims of science so as to preserve a freedom of maneuver for ethics, for example, or theology’. Many thinkers agree that scientists must be socially responsible, and learn to balance ‘a whole range of diverse considerations — economic and aesthetic, environmental and human, as well as merely technical’. In some cases, ‘a moratorium on further scientific research’ may be called for.

I whole-heartedly agree with such views. Science is not an end in itself; it is only justifiable as an instrument of human welfare. If science expands in ways which harm or destroy mankind, who will be left to know anything? Knowledge presupposes someone alive enough, and even healthy and happy enough, to know. These issues are particularly important in this age of genetic engineering, nuclear weapons and industries which endanger our whole environment. Morality is the mainstay of all science.

[1] I dearly hope my mention of Ayn Rand in this volume does not cause me to be viewed or labeled as a ‘disciple’ of hers, or Randite. It would be quite unfair. While I freely admit having been influenced by her writings in my youth, I have long ago dissociated myself from the large majority of her ideas (except for those mentioned herein in her name, out of honesty). Her approach to most issues is far too loosely-reasoned and doctrinaire for my taste.