Why use incorrect rules of inference?

Call for logicians: I have to convince Prof. Beckermann to drop an incorrect rule of inference from the axiomatic system for predicate logic used in his logic book. The incorrect rule is that from

A → B

one may derive

A → ∀x B(t/x)

provided that t does not occur in A.

This is correct in proofs (without premises) but not in deductions (from premises). For example, it allows one to infer

Fa ∧ ¬Fa

from

Fa ∧ ¬Fb.
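
Spelled out, one way the derivation could go in a deduction from the premise Fa ∧ ¬Fb (a sketch assuming standard background machinery: ∧-elimination and ∧-introduction, the axiom B → (A → B), modus ponens, and ∀-instantiation; the book's exact axioms may differ):

1. Fa ∧ ¬Fb       (premise)
2. Fa             (1, ∧-elimination)
3. ¬Fb            (1, ∧-elimination)
4. Fa → ¬Fb       (3, axiom B → (A → B), modus ponens)
5. Fa → ∀x ¬Fx    (4, the faulty rule with t = b; b does not occur in Fa)
6. ∀x ¬Fx         (2, 5, modus ponens)
7. ¬Fa            (6, ∀-instantiation)
8. Fa ∧ ¬Fa       (2, 7, ∧-introduction)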

The rule can be made correct by adding another proviso: "...provided that t occurs neither in A nor in any assumption of the current deduction".
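
With that additional proviso the sketch above is blocked at step 5, since b does occur in the assumption Fa ∧ ¬Fb.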

The problem is that, oddly, some other authors also use incorrect rules, apparently including those from whom Prof. Beckermann learned logic: Kutschera/Breitkopf and Mendelson. So he's afraid that these incorrect rules might after all be essential for something we just don't see. I think they aren't. But then why do these authors use them? (If I understand him right, Hodges, in the appendix to his chapter in the Handbook of Philosophical Logic, even complains that this kind of incorrect rule is an inevitable feature of axiomatic systems.)


[Update: Since probably very few logicians read my humble blog, I've posted the question to sci.logic. However, I'm afraid that not many logicians read that group either, given the enormous amount of noise it contains. (Like any other part of Usenet these days.)]


[Another Update: I've tried to reformulate my question in a more intelligible way in this posting.]


[Yet Another Update: I think I've found the answer now.]
