In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: “it thinks it should get its queen out early.” This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality.” I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. (Dennett 1981: 107)

Here the designer ascribes a propositional attitude to the rival program: namely, that it thinks it should move its queen out into play early in the game. Such an ascription is both useful and predictive. For example, when we program our own chess computer to play against the rival program, we may take into account the fact that the other program thinks it should get its queen out early, and make sure we have an adequate defence against such an event. But if we understand how chess programs operate, we know that there is no internal representation within the code of the program which explicitly tokens the propositional attitude that it 'should get its queen out early'. Dennett sees no reason why the relation between our standard everyday belief talk and talk of psychological processes will be any more direct than in this chess program example.
The chess program does not merely have a dispositional, or potential, belief regarding the early movement of its queen; rather, it operates with the belief that it ought to get its queen out early in the game. There appear to be many everyday examples in which we reason using certain rules of inference without directly or explicitly representing those rules.
This objection from Dennett has not been particularly well received, and it is widely held that Language of Thought theorists can provide a more than adequate reply to it. The standard reply involves distinguishing between the rules by which Mentalese data structures are manipulated and the data structures themselves. The Language of Thought hypothesis is not committed to every rule being explicitly represented. It is a nomological fact that not every rule in a computational device can be explicitly represented: some have to be hard-wired into the system. Language of Thought theorists therefore need not contend that the rules are explicitly represented; it is the data structures that must be explicitly represented. Since these data structures are manipulated formally by rules, such causal manipulation is not possible without the explicit tokening of those representations. Dispositional propositional attitudes can then be accounted for in terms of an appropriate principle of inferential closure over explicitly represented propositional attitudes.
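The distinction can be made concrete with a minimal sketch (hypothetical, not drawn from any actual cognitive model): the beliefs below are explicitly tokened as data structures, while the rule of inference that manipulates them, modus ponens, exists only as program code and is never itself stored as a belief. Dispositional beliefs then fall out as the inferential closure of the explicit ones.

```python
# Explicitly tokened data structures: atomic beliefs and conditionals.
beliefs = {"it is raining"}
conditionals = {("it is raining", "the ground is wet"),
                ("the ground is wet", "the grass is slippery")}

def close_under_modus_ponens(beliefs, conditionals):
    """Return the inferential closure of the explicit beliefs.

    Modus ponens is implemented by this loop; it is hard-wired into
    the program and represented nowhere in the belief store.
    """
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

closure = close_under_modus_ponens(beliefs, conditionals)
# "the grass is slippery" is a dispositional belief on this picture:
# never explicitly tokened at the start, but recoverable by closing
# the explicit representations under the hard-wired rule.
```

The system manipulates explicit representations without any explicit representation of the rule doing the manipulating, which is exactly the division of labour the standard reply appeals to.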
A chess program involves at least some explicit representations (for example, the board, the pieces, and some of the rules). Which of the program's rules are explicit and which are implicit is an empirical matter of fact. What Dennett's objection essentially does is point to the fact that some rules may emerge out of the implementation of explicit rules and data structures. This does not undermine the Language of Thought hypothesis, as it is possible to account for these emergent rules in terms of data structures and explicit representations.
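How a rule can be emergent rather than tokened can be illustrated with a toy sketch (the piece values and one-ply "search" are invented for illustration, not taken from any real engine): the evaluator scores positions purely by piece mobility, with no mention of the queen anywhere in its rules, yet it reliably prefers queen-developing moves simply because the queen gains the most mobility when developed.

```python
# Mobility gained by developing each piece from its home square
# (illustrative numbers only).
MOBILITY_GAIN = {"pawn": 1, "knight": 5, "bishop": 6, "rook": 3, "queen": 14}

def evaluate(developed):
    """Score a position as the total mobility of the developed pieces.

    Note: no rule here mentions the queen or early development.
    """
    return sum(MOBILITY_GAIN[piece] for piece in developed)

def choose_move(developed, candidates):
    """Greedy one-ply chooser: pick whichever development move
    maximises the mobility score."""
    return max(candidates, key=lambda piece: evaluate(developed | {piece}))

best = choose_move(set(), ["pawn", "knight", "bishop", "queen"])
# The program behaves as if it "thinks it should get its queen out
# early", although nothing synonymous with that rule is tokened in it.
```

The disposition to develop the queen early is an emergent property of the explicit data structures (the mobility table) and the hard-wired choice procedure, just as in Dennett's example.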
Dennett, D. (1981). Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: MIT Press.