Chapter Two: Sentence Logic (Part I)
Introduction
2.1. Problems for Informal Logic
2.2. Logical Form
The Language of Sentence Logic
2.3. 'And', 'Or', and 'Not': Introduction to the Formal Language.
2.4. Recursive Definitions
2.5. Formalizing the Language: Logical Grammar
2.6. Translation Techniques
Semantic Methods
2.7. Formal Semantics: Truth Tables
2.8. Tautologies and Contradictions
2.9. Testing for Validity
Semantic Methods Extended
2.10. Reductio ad Absurdum Arguments
2.11. Semantic Tableaux: A Semantic Reductio Strategy
Syntactic Methods
2.12. Inference Rules and Direct Deductions
2.13. A Syntactic Reductio Strategy: Indirect Deductions
2.14. Deductive Strategies
2.15. Proofs and Theorems
2.16. Natural Language Arguments
Appendix I: Semantic Adequacy
Introduction.
2.1. Problems for Informal Logic. The informal account of validity set out in the previous chapter has several agreeable features. The most obvious of these is the very fact that it's informal: since it requires only an ability to understand natural language, and to imagine different possible situations, the test is quite intuitive and easy to use. And as we've seen, it is capable of revealing the invalidity of many natural language arguments. So why should we want to move beyond it, in the direction of something more technical and demanding?
The main reason is the serious limitations on the informal test, which are briefly surveyed here:
First Problem: The Infinity of Possible Situations. It only takes one possible situation to reveal the invalidity of an argument; but establishing validity through this informal approach -- showing that there are no invalidating situations -- is not so simple. Consider the following argument:
(1) Either the penny is in my left hand, or the penny is in my right hand.
The penny is not in my left hand.
---------------------------------
(therefore) The penny is in my right hand.
To rule out any invalidating situations, we imagine all the different possible situations where the premises are true. But how many possible situations is that? Consider just the second premise: how many possible situations are there where it's true? There's the possible situation where the penny's not in your left hand, and you're wearing a red bowtie; the possible situation where the penny's not in your left hand, and you have a Turkish grandmother; the possible situation where the penny's not in your left hand, and you've skipped logic class; and so on. There seems to be no end to all the different possible situations where a particular sentence is true. But then it would take forever to check all those situations, one by one, in our imagination. And if we don't go through all of them, we can't be certain that the argument is valid. So searching through possible situations in the imagination doesn't look like a practical way to establish the validity of an argument. But that's all the informal approach offered us. So a new and improved test would be useful here.
Second Problem: Psychological Limits. A method that detects possible situations with the imagination is only as reliable as the imagination itself. We've been assuming that imaginability is a reliable measure of what is or isn't possible; and sometimes that's a legitimate assumption. But sometimes our powers of conception break down, for one or another reason. We consider two reasons here.
(1) For one thing, what we can imagine may be unnaturally constrained by our culture, or our current beliefs. Sometimes things that originally seemed unimaginable turn out to be imaginable after all. When the special theory of relativity was first stated, lots of people (philosophers included) thought what it was claiming was just impossible; but as the evidence accumulated, confirming what the theory predicted, its critics reconsidered. Over time, people learned to live with the unusual claims it made, and now it's accepted without much controversy. What seemed unimaginable yesterday seems perfectly imaginable today. As our beliefs change, the limits of our imagination change with them.
(2) Another limit on the imagination concerns the 'size' and complexity of the situation we try to imagine. It's probably not very difficult to imagine a situation where the premises of Argument (1), above, are both true. But now imagine a situation where all of the following premises are true:
(2)
Here the imagination just boggles in the face of so much complexity. There may very well be possible situations where all the premises are true (indeed, we will show later that there are), but the imagination draws a blank. In the search for even a single possible situation (never mind an infinity of them), the imagination is not always a useful tool.
Note that these problems are not just trouble for demonstrating validity, but for demonstrating invalidity as well. If we don't come up with an invalidating situation after 10 tries, does that mean there isn't one? Perhaps it just means that we haven't sifted long enough through the infinity of situations, or that our imaginations are unnaturally fenced in. The moral is that, even in detecting invalidity, the informal test is not foolproof.
Clearly a better test of validity is called for: a test that can say conclusively, in a finite amount of time, whether an argument is valid or invalid; a test that isn't swayed by the passing limits of what we can imagine today; a test that works just as well on very complicated arguments as it does on simple ones.
2.2. Logical Form. Happily, logicians have developed tests of validity which meet all of these requirements. As an example of how such a test works, consider Argument (1) again:
(1) Either the penny is in my left hand, or the penny is in my right hand.
The penny is not in my left hand.
---------------------------------
(therefore) The penny is in my right hand.
Ask yourself quickly, without thinking it over: is Argument (1) valid?
Most people who understand the word "valid" will answer yes: if the premises are true, the conclusion must be true. (Indeed, even very young children playing 'find the penny' recognize the validity of this argument.) Now, in light of what was said in the last section, this is surprising. It looked as though it would take forever to be sure an argument is valid, since it would involve imagining an infinity of situations. But people can tell that Argument (1) is valid in a finite amount of time -- a matter of a few seconds. In fact, it seems people can just look at Argument (1), and see that it's valid. This is surprising, because it suggests that there is another way of detecting validity -- a way that doesn't involve imagining possible situations, but rather something that can (at least sometimes) be seen in a glance.
Consider another example: is Argument (3) valid?
(3) Either the quiz will be today, or the quiz will be tomorrow.
The quiz will not be today.
---------------------------------
(therefore) The quiz will be tomorrow.
That question seems silly. Argument (3) is valid just like Argument (1), and what makes it so obvious is that the two arguments are so much alike. But how are they alike? Not in terms of their subject matter, clearly: Arguments (1) and (3) are about completely different things (pennies and hands, vs. quizzes and days). In fact, the topic of the argument is so irrelevant to its validity that it seems we could pick any topic and construct a valid argument like this about it. One more example:
(4) Either Leibniz was the greatest post-Cartesian rationalist, or Spinoza was the greatest post-Cartesian rationalist.
Leibniz was not the greatest post-Cartesian rationalist.
------------------------------------------------
(therefore) Spinoza was the greatest post-Cartesian rationalist.
You may not even know what a 'post-Cartesian rationalist' is -- so you may not have the slightest idea which situations would make these sentences true or false -- but Argument (4) is still just as obviously valid as the previous two. And we could trivially spin out as many more arguments of this type as we liked. Clearly, what these arguments have in common is not subject matter, but a common skeleton, or structure. We can capture that common structure in the following diagram:
(5) Either () or [].
Not ()
--------------
(therefore) []
And the point is: any argument with this logical skeleton would have to be valid, no matter which sentences are put in the blanks.
This point can be phrased in terms of an extremely old distinction, drawn by the very philosopher who started formal logic over two thousand years ago. That philosopher was Aristotle, and the distinction he made throughout his philosophy was between form and matter. Take a simple example: a piece of chalk and a small iron bar, of the same length and radius. Using Aristotle's terminology, we can say the two objects have the same geometrical form (they're both cylinders), while being made up of different matter (chalk vs. iron). The same point applies to Arguments (1), (3), and (4): while they have different subject matter, they share the same logical form.
And of course, the point about validity is not restricted to just logical form (5). Consider another kind of argument:
(6) I've got rhythm, and I've got the blues.
-------------------------------------------
(so) I've got rhythm.
True, the usefulness of this argument might not be apparent -- it's unlikely anyone would bother to draw the conclusion from the premise in a real conversation. But all the same, the validity of the argument is apparent: if the premise of Argument (6) is true, the conclusion is as well. And once again, lots of similar arguments are equally valid:
(7) I've fallen, and I can't get up.
----------------------------------------
(so) I've fallen.
(8) Jimmy cracked corn, and I don't care.
--------------------------------------
(so) Jimmy cracked corn.
As with Arguments (1), (3), and (4), all of these arguments share a common form, which can be depicted in diagram (9):
(9) [] and ().
--------------
(so) [].
And once again, Arguments (6) through (8) are valid because they all have logical form (9), since any argument with that form would have to be valid.
In light of cases like argument forms (5) and (9), logicians have proposed the following bold hypothesis:
The validity of an argument depends entirely on its logical form, and not at all on its subject matter.
In fact, this can be considered the guiding assumption of all formal logic -- that validity depends only on form. (Hence the name "formal logic".)
Guided by this hypothesis, we can picture a new method of testing arguments for validity. While it might indeed take forever to decide validity by sifting through possible situations in the imagination, it need not take forever -- or even very long -- to examine the logical form of an argument, and decide its validity that way. After all, the logical form of these arguments is simple enough to detect in a glance. (And notice: if validity really is a matter of form, it would explain how we can see the validity of these arguments -- since their form really is something we can see.)
Specifically, there would be two requirements for such a new method: first, a procedure for isolating, or "extracting," the logical form of an argument; second, a test of validity that considers only this logical form. Moreover, we need to ensure that both of these components are foolproof -- that complicated logical forms can be extracted just as well as simple ones, and that the test of validity works just as well on those complicated forms as on simple ones. As we will see, the methods developed in the following sections are made up of just these two parts, and are indeed foolproof in this way.
The Language of Sentence Logic
2.3. 'And', 'Or', and 'Not': Introduction to the Formal Language.
We tackle the two requirements in order. First we develop the tools necessary to 'extract' logical form from ordinary-language arguments; after that, we construct foolproof tests of validity that look only at this logical form.
While we will take our time setting out the whole method of 'form-extraction', it is possible to get a quick overview of this method from the start. Very briefly: sentence logic will amount to a kind of 'science of validity'; and like any respectable science, it will use a special, technical language, with special symbols and jargon. The language of sentence logic will be a language of pure logical form -- specially designed, in fact, to represent nothing but logical form. By translating the sentences of an argument into this language, focus is placed exclusively on the form of that argument. So translating into this specialized logical language will itself amount to a method of extracting logical form.
Clearly, such a language will differ from a natural language like English. For as the arguments of the last section illustrated, English communicates both logical form and subject matter. A language that filters out the subject matter would be more like the logical skeletons depicted in (5) and (9):
(5) Either () or [].
Not ()
--------------
(therefore) []
(9) [] and ().
--------------
(so) [].
Notice, first, that while lots of words have been filtered out -- the subject-matter words about pennies, Spinoza, rhythm, and so on -- some words of English remain: "and," "either... or," and "not".
Certain words and phrases of English express logical form rather than subject matter. These are the "form phrases" of English. And since they contribute to logical form, a specialized language of logical form will need to represent them. Moreover, each phrase makes its own unique contribution, and will need to be kept distinct from the others -- for instance, changing the "and" in (9) to "or" would make the argument invalid. So "and" must be translated differently from "or" or "not". This is an obvious requirement for correctly depicting an argument's logical form, and the language of logic built here will meet that requirement.
Notice as well the little blanks -- the () and [] -- where the subject matter used to be. As noted, which sentences go in these blanks isn't important to the form -- that's a matter of matter. But obviously it is important that sentences go there; putting a name into a blank in (5) yields gibberish:
(10) Either the quiz will be today, or Zeke.
The quiz will not be today.
---------------------------------------
(therefore,) Zeke.
It's also important to know whether or not the same sentence shows up in different places. Consider Argument (3) again:
(3) Either the quiz will be today, or the quiz will be tomorrow.
The quiz will not be today.
---------------------------------
(therefore) The quiz will be tomorrow.
Between "either" and "or" in the first premise is a sentence -- "the quiz will be today" -- which is exactly the same sentence being denied in the second premise. It's absolutely crucial to the argument's validity that the same sentence appear in both these places. Putting in two different sentences yields an invalid argument, as (11) illustrates:
(11) Either the quiz will be today, or the quiz will be tomorrow.
The penny is not in my left hand.
--------------------------------------------------
(therefore) The quiz will be tomorrow.
Likewise, the sentence in (3) after the "or" -- "the quiz will be tomorrow" -- is exactly the same as the conclusion sentence in (3). Changing this would also lead to invalidity, as (12) shows:
(12) Either the quiz will be today, or the quiz will be tomorrow.
The quiz will not be today.
--------------------------------------------------
(therefore) The penny is in my right hand.
The moral: while a language of pure logical form needn't record the subject matter of the sentences, it should definitely mark when the same sentence (whatever it's about) shows up in different places -- that is part of the argument's logical form, just as much as form words like "and," "or," and "not".
Small sentences like these, which fill the blanks, are called "atomic sentences," because they have no smaller sentences as parts. Compare: an "and"-sentence like (13) contains two smaller whole sentences as parts:
(13) It's raining and it's cold.
So "and"-sentences (like "or"- and "not"-sentences), are molecular sentences -- molecules built out of smaller parts. The smaller parts -- the sentences "it's raining" and "it's cold" -- are, by contrast, the atoms of the sentence, since they can't be split into yet smaller sentences.
To track such atomic sentences in the logical language, we could use the above symbols -- (), [], and so on. But a complicated argument might have many different atomic sentences to keep track of, and that would require just as many different symbols -- so many that it might become difficult to keep coming up with new ones. (Also, 'artistically challenged' people might have trouble drawing so many different symbols.) Instead, the formal language will mark the atomic sentences of an argument with letters -- capital letters K through Z -- which will be called "sentence letters" (for obvious reasons). So Argument Form (5), for instance, can be rewritten as (14):
(14) Either P or Q
Not P
-------------
Q
K through Z yields 16 sentence letters; but what if an argument is so complicated that it contains more than 16 atomic sentences? In that case, numerals can be added to the sentence letters:
P
P1
P2 (etc.)
With the addition of numerals, there will be an infinity of sentence letters (since there is an infinity of numerals). In practice, numerals won't usually be necessary; but the formal language can use them if the occasion arises.
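To make the convention vivid, here is a minimal sketch in Python (our own illustration, not part of the official symbolism) of the endless supply of sentence letters that a single capital letter provides; numerals are written on the line, standing in for subscripts.

from itertools import count

def sentence_letters(letter="P"):
    """Yield the unending series of sentence letters built from one capital
    letter: P, P1, P2, P3, ... (numerals on the line stand in for subscripts)."""
    yield letter
    for n in count(1):
        yield letter + str(n)

gen = sentence_letters()
print([next(gen) for _ in range(5)])   # ['P', 'P1', 'P2', 'P3', 'P4']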
This language obviously can't stop with sentence letters -- it also needs to mark form words like "and," "or," and "not". To accomplish this, we introduce some simple symbolism, and some technical jargon to go along with it.
"And"-sentences are called "conjunctions"; and the English word "and" will be translated in logic by the symbol " ". Peculiarly, of the three logical symbols introduced here, only " " lacks a special, traditional name; it will simply be called "the conjunction sign". Using the conjunction sign (along with parentheses, for punctuation) an argument like (6) can be represented in symbolic notation (translating the atomic sentences of English as shown):
(6) I've got rhythm, and I've got the blues.
-------------------------------------------
(so) I've got rhythm.
(15)
P: I've got rhythm
Q: I've got the blues
(P ∧ Q)
-------
P
"Or"-sentences are called "disjunctions". The English word "or" will be translated in logical notation by the symbol "v" -- referred to as the vel. Translating the atomic sentences the same way as in (15) (and again adding parentheses for punctuation), the vel can be used to construct the symbolic sentence (16),
(16) (P v Q)
which would mean:
(17) Either I've got rhythm, or I've got the blues.
Finally, "not"-sentences are (naturally) called "negations". The English word "not" will be translated by the symbol "~" -- referred to as the tilde. (Since the tilde is attached to the left of a single sentence, rather than connecting two different sentences together, parentheses are not needed to group two sentences together as they were with conjunctions and disjunctions.) Using the tilde, along with the vel, Argument (1) can be translated into this symbolic notation (translating the atomic sentences as shown):
(1) Either the penny is in my left hand, or the penny is in my right hand.
The penny is not in my left hand.
---------------------------------
(therefore) The penny is in my right hand.
(18)
L: The penny is in my left hand
R: The penny is in my right hand
(L v R)
~L
---------------------------------
R
One more piece of jargon concludes this presentation: these symbols -- tilde, vel, and the conjunction sign -- are called connectives.
This completes the specialized language of logic, which will be used throughout the rest of the chapter (indeed, throughout the rest of the book) to represent the logical form of arguments. While new languages typically take years to learn, anyone who has read this far now understands this symbolic language.
We summarize the symbolism set out above:
Connective | Name | English-Language Counterpart
∧ | Conjunction Sign | "And"
v | Vel (Disjunction Sign) | "Or"
~ | Tilde (Negation Sign) | "Not"
The next two sections will not add any new symbols or letters to this language. They will, however, introduce techniques for putting the material of this section into a more systematic and mechanical format. Once the symbolic language is recast in that more mechanical fashion, even a computer will be able to learn it.
Finally, it should be clear by now why this is called the language of sentence logic: the atomic parts here (the sentence letters) represent whole sentences, and the 'molecules' are larger sentences built out of these (with the help of connectives).
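For readers who like to see the atom/molecule picture made fully concrete, here is a minimal sketch in Python (our own illustration; the names Atom, Not, And, and Or are hypothetical labels, not part of the official notation) representing symbolic sentences as structured objects and building the translation of Argument (1) given in (18).

from dataclasses import dataclass
from typing import Union

# One class per kind of sentence: atomic sentences (sentence letters),
# plus the three kinds of molecular sentence.
@dataclass
class Atom:
    letter: str            # a sentence letter, e.g. "L"

@dataclass
class Not:
    part: "Sentence"       # the sentence being denied

@dataclass
class And:
    left: "Sentence"
    right: "Sentence"

@dataclass
class Or:
    left: "Sentence"
    right: "Sentence"

Sentence = Union[Atom, Not, And, Or]   # any sentence of the language

# The translation of Argument (1), as in (18): premises (L v R) and ~L,
# conclusion R.
L, R = Atom("L"), Atom("R")
premises = [Or(L, R), Not(L)]
conclusion = R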
2.4. Recursive Definitions.
The logical language just set out seems fine as it is. And since it's intuitively obvious how to build sentences in this language -- and how to translate from English into symbolic notation -- why should we bother stating the language in a more technical and mechanical way?
Intuitions -- gut convictions about which sentences belong in our logic language, or which arguments are valid -- can be wonderful and useful things. But as we've seen, our intuitions, like our imaginations, tend to break down in the face of complexity. Since we're after a test of validity that works even in very complex cases, and since correctly extracting logical form is crucial to this test, we need to be just as clear about very complicated forms as we are now about simple, intuitive ones. Developing a purely mechanical statement of our symbolic language will guarantee that we can always tell whether a certain bunch of symbols is or isn't a proper sentence of our logical language, how the different parts of a sentence fit together, and so on. As we will see, having such a completely technical, automatic grasp of the language will be essential for building a truly general, foolproof test of validity.
So we set out to develop a more official and mechanical version of logical grammar. In order to do this, we appeal to a very useful device known as the recursive definition. While many speakers of English are not familiar with recursive definitions, they unconsciously use them all the time to speak and understand their language. For linguists have found that recursive definitions are just as important for stating rules of English grammar as they are for rules of logical grammar.
Consider a (slightly over-simplified) example: suppose that there were two original humans -- call them Adam and Eve; and suppose further that every human after Adam and Eve is an offspring of two humans (and, of course, that every such offspring is human). Then, through recursive definition (19), we can specify very simply which things in the world count as humans:
(19) Recursive definition of "human":
(i) Adam and Eve are humans;
(ii) If x and y are humans, and z is the offspring of x and y, then z is human.
One significant feature of recursive definitions like (19) is that they appear, at first glance, to be circular definitions. A definition is circular if it uses, in the definition, the very word it's defining. For example, suppose I gave the following definition of "dog":
(20) Dog: a dog is an animal which has four feet, is furry, is a dog, and....
This definition is no good, no matter how much further description I add, since it uses the word -- "dog" -- that it's supposed to define. Obviously such circularity is bad: a definition is meant to explain the meaning of a word, but to understand this definition, I already have to understand the word it's defining.
Now, definition (19) seems to suffer the same circularity in its second clause: clause (ii) only tells us that z is human if we already know that x and y, z's parents, are humans. But then, we already need to know what humans are.
What saves the day in definition (19) is clause (i): it hands us, in effect, a "start-up set" of humans, out of which we can build all the others through clause (ii). If definition (19) had only clause (ii), then it really would be circular, just like definition (20); but with (i), it only looks circular. A clause in a recursive definition which simply sets out some basic examples of the thing being defined (the way clause (i) does in (19)) is called a basis clause. On the other hand, a clause in a recursive definition which exhibits an apparent circularity (the way clause (ii) does in (19)) is called a recursive clause. Every recursive definition has at least one recursive clause (hence the name), and one basis clause (in order to break the circle).
Another characteristic property of recursive definitions is their ability to "cycle" repeatedly on the recursive clause(s) -- indeed, the apparent circularity of recursive clauses is present precisely to allow this repeated "cycling". For example, suppose Adam and Eve are the "input" for (19ii) -- that is, they play the role of x and y. Then Cain, being the offspring of Adam and Eve, can stand in the z spot, and so will also count as human. Likewise, so will Cain's wife. But now, since they count as human, they can, in turn, stand in the x and y places of the definition, and their offspring, Enoch, will count as human (in the z place). And then, being human, Enoch can stand in the x place in turn; and so on. Because (19ii) used the word "human" in both the "if" and "then" parts, it appeared circular; but precisely because of this apparent circularity, clause (ii) can be used over and over, taking its outputs as new inputs, to get new outputs, which can in turn act as new inputs, and so on.
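To watch the cycling happen mechanically, here is a minimal sketch in Python (our own illustration, using only the family facts mentioned in the example above, with Cain's wife treated, as in that simplified story, as another offspring of Adam and Eve).

# Offspring facts from the example: Cain and Cain's wife are offspring of
# Adam and Eve; Enoch is the offspring of Cain and Cain's wife.
offspring_of = {
    "Cain": ("Adam", "Eve"),
    "Cain's wife": ("Adam", "Eve"),
    "Enoch": ("Cain", "Cain's wife"),
}

def humans():
    """Apply definition (19): start with the basis clause, then cycle on the
    recursive clause until no new humans turn up."""
    known = {"Adam", "Eve"}                      # clause (i): the basis
    changed = True
    while changed:                               # clause (ii), applied repeatedly
        changed = False
        for child, (x, y) in offspring_of.items():
            if x in known and y in known and child not in known:
                known.add(child)
                changed = True
    return known

print(sorted(humans()))   # all five family members count as human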
2.5. Formalizing the Language: Logical Grammar.
Having worked through these preliminary ideas, we can state the grammar for the language of logic fairly simply. We use a recursive definition to define which strings of symbols will count as grammatical in the language of logic. A grammatical sentence in this language will be called a well-formed sentence. So our grammar will define what it is to be a well-formed sentence.
There is one basis clause in the grammar:
1. Sentence letters are well-formed sentences.
And, for the record, here is the official definition of "sentence letter":
(21) Sentence letters are the capital letters K through Z, with or without numerical subscripts. (A numerical subscript is a slightly lowered numeral, next to the letter.)
So by (21), all of the following are sentence letters:
K
K1
Q
Q12
Z256 .
There are three recursive clauses in this logical language, which build 'molecules' out of sentence letter 'atoms'.
2. If A is a well-formed sentence, then ~A is a well-formed sentence.
3. If A and B are well-formed sentences, then (A ∧ B) is a well-formed sentence.
4. If A and B are well-formed sentences, then (A v B) is a well-formed sentence.
(Note that the "A" and "B" here are not sentence letters -- since they're not letters between K and Z. Instead, "A" and "B" are blanks, that get filled in by something. They play the same role as "x", "y", and "z" did in definition (19), above; but since we're saving "x", "y", and "z" for later, we use the special letters A and B instead.)
Clause 2 says: take any string of symbols -- call it "A"; if that string of symbols is a well-formed sentence, then so is that string with a tilde to the left. By Clause 1, "P" is a well-formed sentence (it's a sentence letter); and by Clause 2, "P" with a tilde to the left also counts as a well-formed sentence:
~P .
Likewise, all of the following will be well-formed sentences, by Clauses 1 and 2:
~P1
~Q
~Q12
~Z256 .
But Clause 2 does more than place a tilde next to sentence letters -- and here is where the cyclical power of a recursive clause comes into play. Consider, for instance, "~P," which (according to Clause 2) is a well-formed sentence. If "~P" is now put in the A place, then Clause 2 would say:
If ~P is a well-formed sentence, then ~~P is a well-formed sentence.
But "~P" is a well-formed sentence; so "~~P" is also a well-formed sentence.
Now we see why the recursive, "cyclical" nature of Clause 2 is so powerful: it doesn't simply permit a single tilde to the left of a sentence letter -- it allows any finite number of tildes to the left of sentence letters. (And, as we will see, it does even more when combined with the following two clauses.) So, even if we had only one sentence letter to start out with, Clause 2 could generate an infinity of well-formed sentences:
~P
~~P
~~~P
~~~~P (etc.)
Any sentence which is the output of Clause 2 is called a negation. So Clause 2 says: negations are well-formed sentences.
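A short sketch in Python (our own illustration) shows the cycling at work: each output of Clause 2 is fed back in as the next input.

sentence = "P"                      # Clause 1: a sentence letter
for _ in range(4):
    sentence = "~" + sentence       # Clause 2: a tilde to the left
    print(sentence)                 # prints ~P, ~~P, ~~~P, ~~~~P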
Now consider Clause 3. It says, in effect: take any two strings of symbols -- call them "A" and "B"; if both A and B are well-formed sentences, then so is A and B with the conjunction sign placed between, and the whole thing wrapped in parentheses. So, for instance, since "K" and "L" are well-formed sentences (by Clause 1), so is the result of putting the conjunction sign between them, and wrapping the whole thing in parentheses:
(K ∧ L) .
By the same token, by Clauses 1 and 3, all of the following are also well-formed sentences:
(R ∧ S)
(Z256 ∧ X)
(P ∧ P) .
(This last example shows that the same sentence can stand in both the A and B places in Clause 3.) We call any output of Clause 3 a conjunction. So Clause 3 says: conjunctions are well-formed sentences.
As we have already noted, conjunctions, unlike negations, must always be wrapped in parentheses. This avoids ambiguity when conjunctions get more complicated. How would they ever get more complicated? Through the marvelous "cyclical" nature of Clause 3 -- which, like Clause 2, is a recursive clause. For instance: by Clause 3, both "(K ∧ Q)" and "(R ∧ S)" are well-formed sentences. But then, by putting these sentences in the A and B places, Clause 3 also says the following string is a well-formed sentence:
((K ∧ Q) ∧ (R ∧ S)) .
Once again, a recursive clause can take its outputs as new inputs, to create new outputs, etc.
And once there are two recursive clauses, we can mix and match them to create new combinations -- since either clause can take the output of the other one as a new input. Consider: from Clauses 1 and 2, the following negations are well-formed sentences:
~O     ~Z .
Then, taking these two negations as inputs for Clause 3, the following conjunction is also a well-formed sentence:
(~O ∧ ~Z) .
And then, taking this conjunction as fresh input for Clause 2, the following negation is also a well-formed sentence:
~(~O ∧ ~Z) .
And so on.
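The same mixing and matching can be spelled out step by step in a short Python sketch (our own illustration), building the last example from its sentence letters.

O, Z = "O", "Z"                            # Clause 1: sentence letters
neg_O, neg_Z = "~" + O, "~" + Z            # Clause 2: the negations ~O and ~Z
conj = "(" + neg_O + " ∧ " + neg_Z + ")"   # Clause 3: the conjunction (~O ∧ ~Z)
neg_conj = "~" + conj                      # Clause 2 again, on that conjunction
print(neg_conj)                            # ~(~O ∧ ~Z)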
Once this much of the logical grammar is clear, the final recursive clause, for disjunctions, is fairly simple. Clause 4 says, in effect: take any two well-formed sentences A and B; then the result of placing the vel between them, and wrapping the whole thing in parentheses, is also a well-formed sentence. As with conjunctions, the parentheses are required to avoid ambiguity once disjunctions get combined with other sentences. Any output of Clause 4 is called a disjunction; so Clause 4 says: disjunctions are well-formed sentences.
That's all there is to speaking this logical language properly. As long as a speaker utters only sentences built according to these four rules, she is guaranteed to be speaking grammatically in the language of logic.
Finally, the grammar can also be run in reverse, to analyze a given sentence rather than to build one. Since every sentence is constructed, ultimately, out of sentence letters (supplied by the basis clause), any well-formed sentence can be traced back, step by step, to the sentence letters it contains; this reverse procedure amounts to a kind of "genealogy" of the sentence.
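As a hint of how mechanical this grammar is (mechanical enough, as promised, for a computer to learn), here is a minimal well-formedness checker sketched in Python. It is our own illustration, not part of the book's apparatus: it runs Clauses 1 through 4 in reverse, tracing a string back to its sentence letters, and it assumes the symbols "~", "∧", and "v" as printed above, with numerals on the line standing in for subscripts.

import re

SENTENCE_LETTER = re.compile(r"[K-Z][0-9]*\Z")   # capital K-Z, optional numeral

def is_well_formed(s):
    """Run Clauses 1-4 in reverse, tracing s back to sentence letters."""
    s = s.strip()
    # Clause 1: sentence letters are well-formed sentences.
    if SENTENCE_LETTER.match(s):
        return True
    # Clause 2: ~A is well-formed if A is.
    if s.startswith("~"):
        return is_well_formed(s[1:])
    # Clauses 3 and 4: (A ∧ B) and (A v B) are well-formed if A and B are.
    if s.startswith("(") and s.endswith(")"):
        inner, depth = s[1:-1], 0
        for i, ch in enumerate(inner):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch in ("∧", "v") and depth == 0:
                return is_well_formed(inner[:i]) and is_well_formed(inner[i + 1:])
    return False

print(is_well_formed("~(~O ∧ ~Z)"))            # True
print(is_well_formed("((K ∧ Q) ∧ (R ∧ S))"))   # True
print(is_well_formed("(P v Zeke)"))            # False: "Zeke" is not a sentence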
2.6. Translation Techniques.
Notes
1. As we know from Chapter One, words like "so" and "therefore" are just indicator phrases; while they help to point out the conclusion, the argument would work just as well without them. By contrast, removing words like "and" or "not" would drastically affect the validity of the argument. So indicator phrases are ignored in the discussion of "form phrases".
2. Though we ordinarily think of definitions as stating the dictionary-type meaning of a word or phrase, example (19) exhibits a broader sense of the word "definition": while (19) doesn't state the characteristic properties of humans, it does specify exactly which things will count as humans. In this sense of the word "definition", we can use recursive definitions to specify the grammar of a language, by 'defining' exactly which things count as grammatical sentences of that language.
3. In the technical jargon of metalogic, A and B are called meta-variables.