Graham White: Simulation, Ramification, and Linear Logic.
N:o | Question | Answer(s) | Continued discussion
---|---|---|---
1 | 1.10 Tom Costello | 3.10 Graham White |
2 | 3.10 Andreas Herzig | 28.10 The author |
Q1. Tom Costello (1.10):
Dear Graham
On page 2 of your article, "Simulation, Ramification, and Linear
Logic" you suggest that effect axioms should have
It seems to me that many effect axioms will have preconditions that
say that the effect will occur in certain circumstances, unless there
is something that prevents it. This is naturally written as a
∀b, l, s [ ∀b' [¬On(b', top(b), s)] ∧
           ∀b' [¬On(b', l, s)] ∧ top(b) ≠ l →
               On(b, l, move(b, l, s)) ]
The left-hand side of this effect axiom is clearly equivalent to a
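For concreteness, the effect axiom above can be read operationally. The following is only an illustrative sketch, not anything from the paper: the set-of-pairs state encoding and the helper name `clear` are mine, I read On(x, y, s) as "x sits directly on y in situation s", and I flatten top(b) to b itself for brevity.

```python
def clear(x, s):
    """No block sits on x in situation s: encodes the quantified negation above."""
    return all(under != x for (_, under) in s)

def move(b, l, s):
    """Effect axiom: if b and l are clear and distinct, b ends up on l."""
    if clear(b, s) and clear(l, s) and b != l:
        return {(x, y) for (x, y) in s if x != b} | {(b, l)}
    return s  # preconditions fail; one possible convention is that nothing changes

s0 = {("a", "table"), ("b", "table")}
s1 = move("a", "b", s0)  # On(a, b) now holds, On(a, table) does not
```

The unguarded `return s` branch is exactly the kind of convention the discussion below questions: what an action "does" when its preconditions fail is a modelling choice.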
Yours,
Tom Costello
A1. Graham White (3.10):
Dear Tom,
Thanks for your question. It seems to me that I would want first of all
to differentiate between your two formulations of the basic objection:
i) "some local properties ... are expressed as
The first formulation has to do with local properties, which I take to be a fairly general, phenomenological notion; the second is to do with action preconditions, which is a notion internal to certain treatments of the frame problem. I'll deal with the second notion first.
My approach doesn't really use action preconditions per se. I don't have anything like a result function: given linear logic terms s and a representing a situation and an action, there may or may not be a valid inference of the form
s, a |- s'

(on(b, l) × on(b', l) × good) ~> (on(b, l) × on(b', l) × bad)
We now apply the ramification machinery that I described: that is,
we look for sequents corresponding to performing the action,
followed by a terminating sequence of ramification transitions,
such that we eventually end up with a situation of the
form
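The ramification machinery just described can be sketched as multiset rewriting: a situation is a multiset of atoms, and each transition consumes its left-hand side and produces its right-hand side. This is a hand-rolled illustration under my own assumptions (the Counter encoding and the rule format are not the paper's machinery), using the good/bad transition quoted earlier.

```python
from collections import Counter

def applicable(rule, state):
    """A rule fires when the state contains its whole left-hand side."""
    lhs, _ = rule
    return all(state[f] >= n for f, n in lhs.items())

def apply_rule(rule, state):
    """Consume the left-hand side, produce the right-hand side."""
    lhs, rhs = rule
    out = state.copy()
    out.subtract(lhs)
    out.update(rhs)
    return +out  # unary + drops zero/negative counts

def normalise(state, rules):
    """Chain ramification transitions until none applies (assumes termination)."""
    while True:
        fireable = [r for r in rules if applicable(r, state)]
        if not fireable:
            return state
        state = apply_rule(fireable[0], state)

# the transition (on(b,l) x on(b',l) x good) ~> (on(b,l) x on(b',l) x bad)
rule = (Counter({"on(b,l)": 1, "on(b',l)": 1, "good": 1}),
        Counter({"on(b,l)": 1, "on(b',l)": 1, "bad": 1}))
final = normalise(Counter({"on(b,l)": 1, "on(b',l)": 1, "good": 1}), [rule])
```

Here the rule consumes `good`, so it cannot fire twice and the rewrite sequence terminates with `bad` in the final situation.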
Incidentally, I'm not at all sure about the phenomenological motivation of the concept of precondition. It seems to me that it conflates two quite distinct phenomena: the first is of actions (such as "opening the door") described by what Ryle called "success words", where it doesn't make sense to talk of performing them when they don't succeed. The other is of action descriptions such as "trying to open the door", where the possibility of failure is allowed for, and where we usually also have some intuitions about what happens in the case of failure. As Ryle (and I suppose also Wittgenstein) point out, the second case is rather more complex than the first, and probably needs a more elaborate description than simply giving preconditions. The first is the usual case in a great deal of everyday language, but, by its nature, does not need any explicit talk of preconditions (although one could, presumably, in many cases find implicit preconditions).
Your first objection is that many local concepts are in fact
I hope this helps, anyway.
Graham
Q2. Andreas Herzig (3.10):
Dear Graham,
I appreciate the aims of the paper, viz. the discussion about the logical form and the proof-theoretic account of reasoning about actions as given in sections 1, 2.1.1, 2.4. I think that too little work has been done in the community on such fundamental issues.
Basically you propose Linear Logic (LL) as a proof theory to reason about actions, following Girard who claimed from the beginning that LL would be much better suited for reasoning about actions and change than any other logical system. LL people have tried to support that claim, such as Masseron et al. (LNCS 427, 1990) or Bidoit et al. (JLC, 1996). The same has been tried also by people coming from Bibel's linear connection method - see e.g. Bibel's IJCAI'97 invited lecture, as well as work of Fronhofer and Grosse et al. that Bibel cites; see also the subsequent discussion at
http://www.ida.liu.se/ext/etai/actions/deb/bibel/index.html
In my opinion all these attempts have failed in the sense that the only thing they can handle within LL is the simplest case: deterministic actions, no state constraints, knowledge about the initial situation only. They didn't succeed in going beyond that without violating the spirit of LL: they all introduced extra concepts that were used to constrain deduction in one way or another. In the present paper it is time which is made explicit in the encoding of reasoning about action problems (section 3, 3rd paragraph). But I would expect time to be implicit in a LL account of reasoning about actions: it is the implication of LL which - if we take Girard seriously - should be able to capture the dynamics, and not formulas such as
This is my main criticism. What follows are more detailed comments.
Sections 2.2. The two-blocks Example 1 is similar to Lifschitz' two
switches example, I suppose. (Btw. on p.4, line 8 should be dropped.)
I guess circumscription doesn't give the solution, but several ones,
one of which you give in the table. (Btw. one
Sections 2.3. I don't understand the difference between Example 1 and Examples 2 and 3. Where does the `support' concept come from? I need more comments here.
Section 2.4. The expression "equivalent in first order logic to theory
Section 3.2. You claim your rewrite rules for the two blocks theory is `local data', in the sense that they remain valid `if we introduce new types of cause'. But what about introducing an action such as cutting or removing the string between the blocks? Then the rewrite rules `linking' the blocks do not hold any longer.
Section 4.1. Why don't you mention the cut rule here? (You talk about its elimination in the proof of Proposition 1.) In the statement of Proposition 1, you don't say what ACI is.
If you have cut, how can composition of rewrites be a problem? Cut is just doing that job, viz. chaining sequents. I guess this relates to my main criticism: in order to identify actions in proofs, you must group together sequences of rules related to the same action. This means that there is a new concept that is exterior to LL. (I think the problem of termination of sequences is similar.)
Section 4.2. I don't understand how the introduction of modal operators solves the above problem.
In the basic rewrite rule of Definition 1, I suppose the LL `times' operator is rather an ACI bullet.
I found it difficult to understand the exposition on termination. I suppose
termination refers to termination of a rewrite sequence corresponding to an
action. In the rule, the notation
A2. Graham White (28.10):
Dear Andreas,
Thank you for your comments on my paper. I'll try to respond in order.
> In my opinion all these attempts have failed in the sense that the only
> thing they can handle within LL is the simplest case: deterministic actions,
> no state constraints,
However, if you do want state constraints (if, for example, you want to do other things than simply generate the ramification behaviour from them), they can be added to my approach, and I've shown how to treat a simple case in my reply to Tom Costello. There are important conceptual (not mathematical) issues to be resolved first, though: in particular, there is an obvious distinction between state constraints which are required to hold during the ramification process and those which need only hold at the end of it.
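The distinction just drawn can be made concrete with a toy check. This is purely illustrative (the function names and the set-of-fluents trace encoding are mine, not the paper's): given the sequence of intermediate situations produced by ramification, a state constraint can be enforced throughout, or only once ramification has terminated.

```python
def holds_throughout(constraint, trace):
    """The constraint must hold in every intermediate situation."""
    return all(constraint(s) for s in trace)

def holds_at_end(constraint, trace):
    """The constraint is only checked in the terminal situation."""
    return constraint(trace[-1])

# e.g. "never both good and bad at once"
consistent = lambda s: not {"good", "bad"} <= s

# a ramification trace that passes through an inconsistent intermediate state
trace = [{"good"}, {"good", "bad"}, {"bad"}]
end_ok = holds_at_end(consistent, trace)         # satisfied at the end
during_ok = holds_throughout(consistent, trace)  # violated en route
```

The two readings disagree on this trace, which is why the conceptual question has to be settled before the constraint can be formalised.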
> knowledge about the initial situation only.
>
> They didn't succeed in going beyond that without violating the spirit of LL:
> they all introduced extra concepts that were used to constrain deduction in
> one way or another.
>
> In the present paper it is time which is made explicit in the encoding
> of reasoning about action problems (section 3, 3rd paragraph).
>
> But I would expect time to be implicit in a LL account of reasoning about
> actions: it is the implication of LL which - if we take Girard seriously -
> should be able to capture the dynamics, and not formulas such as
> Section 4.1. Why don't you mention the cut rule here? (You talk about its
> elimination in the proof of Proposition 1.) In the statement of Proposition 1,
> you don't say what ACI is.
Cut elimination is important, because, if you have it, you can show that the only proofs of the sequents that encode actions correspond to the changes that those actions produce; that is, it says that the linear logic connectives have their intuitive meaning. This is a theme in a tradition in the proof theory of intuitionism, starting with Martin-Löf; the question is, can you understand the logical connectives simply by their role in proofs? And the answer is, yes you can, if the system satisfies cut elimination: because then we only need finite data to decide whether we can apply a rule or not. But the cut rule breaks this, because we can apply it with any cut formula, which is an unbounded totality. So systems without cut can be understood, in some way, finitely, whereas systems with cut can't.
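For reference, the cut rule in its usual two-sided sequent form is the following; the point of the argument above is that the cut formula A is arbitrary, so it is not bounded by the sequent being proved:

```latex
\frac{\Gamma \vdash A,\ \Delta \qquad \Gamma',\ A \vdash \Delta'}
     {\Gamma,\ \Gamma' \vdash \Delta,\ \Delta'}\;(\mathrm{cut})
```

In a cut-free proof, by contrast, every formula occurring anywhere is a subformula of the end-sequent, which is what makes proof search finitary.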
Anyway, I hope this helps. Thanks again for your questions.
Graham White
13-Jun-98 20:28