""http://www.w3.org/TR/html4/loose.dtd" The Philosopher's Annual



Volume XXXI
Introduction


The goal of The Philosopher’s Annual is to gather together the ten best philosophical articles of the last year. Articles of exceptional merit are put forward by the Nominating Editors, and those articles are then collected for a second review by the Nominating Editors as a whole. We, the four Editors, read the top-ranked thirty-or-so articles and, after several days of extended argument, decide which ten articles stand out as the best of the bunch.

It would be hubris to suppose that our method is foolproof or that we have achieved our goal of ferreting out the ten best philosophical articles of 2011. A project this ambitious requires a bit more humility than that. Some excellent papers undoubtedly slipped past the nomination process, and there were far more than ten or even thirty papers of special note in the wide range of articles received from the Nominating Editors.

What we can certify, however, is that the resulting collection of papers features innovative, ambitious, carefully articulated, rigorously defended, and in some cases underrepresented philosophical views. Examining these papers inspired a great deal of debate and reflection on our part. We pass them on to you for consideration and enjoyment, confident that they are worthy of reflection and debate on your end as well.

Two of the articles offer contemporarily relevant discussions of philosophical concepts in their respective historical contexts. In “Statistical mechanics and thermodynamics: A Maxwellian view” Wayne C. Myrvold explores the Maxwellian conception of heat and work as means-relative. On this conception, the distinction between heat and work depends upon the limits of agents’ abilities to observe and intervene in molecular processes. Means-relativity, interesting in its own right, is formative in Maxwell’s thinking about violations of the second law of thermodynamics (which relates the transfer of heat to the expenditure of work). Myrvold offers a compelling explanation of how Maxwell comes to formulate the second law of thermodynamics as “statistical,” and brings this formulation to bear on contemporary attempts to recover the laws of thermodynamics from statistical mechanics.

In “The Concept of Unified Agency in Nietzsche, Plato, and Schiller” Paul Katsafanas presses us to reconsider the character and significance of psychological unity in Nietzsche’s work. In doing so Katsafanas urges us away from interpretations that treat unified agency in terms of one dominant psychological state dictating an agent’s actions. Drawing on the work of Schiller and treating the priests of Nietzsche’s Genealogy as an instructive example, he develops a notion of unity founded on the stability of an agent’s attitude towards his or her own action when confronted with the psychological states that produced the action. Katsafanas clarifies the roles of genuine agency and psychological conflict in Nietzsche’s work with a keen eye toward textual and philosophical considerations.

The pieces by Mark Schroeder and David Liebesman use careful investigation of the English language to arrive at philosophically insightful conclusions. In “Ought, Agents, and Actions” Mark Schroeder uses tools from linguistics to argue that the normative 'ought' has two distinct senses, which can be distinguished by their syntactic properties. Schroeder goes on to claim that this conclusion is not merely of grammatical interest; the two distinct senses correspond to two separate and philosophically interesting concepts. The first is a “deliberative” sense that has important consequences for what particular agents ought to do; the other is an “evaluative” sense that bears, in the first instance, on how the world ought to be. In closing, Schroeder describes a number of metaethical debates that have proceeded under the assumption that 'ought' is semantically uniform, and suggests ways in which marking the distinction between the two senses will have consequences for these debates. We think Schroeder's paper constitutes an admirable example of informed and substantive interdisciplinary theorizing.

David Liebesman argues in “Simple Generics” that a founding assumption of the current generics literature is mistaken. 'Birds fly,' Liebesman claims, does not have the syntactic structure of quantificational sentences like ‘some grapes are green’; it is rather an instance of the familiar subject-predicate construction. In particular, 'birds' is a referential term denoting a kind, which serves as an argument for the one-place predicate 'fly.' Many of the familiar (and puzzling) features of generics, Liebesman argues, follow from independently motivated considerations about the manner in which a kind's properties depend on the properties of its instances. This opens a novel and potentially fruitful avenue for research in the generics literature.

Two of the papers in the collection trace the limitations of our justification in inductive and deductive inferences respectively. Helen Beebee's “Necessary Connections and the Problem of Induction” argues that we get no purchase on the problem of induction by adopting metaphysical claims about the nature of laws. Beebee's first target is the necessitarianism of David Armstrong: Beebee argues that the necessitarian position that timeless laws ground regularities presupposes that inductive inferences are justified. The necessitarian's timeless laws are the best explanation for timeless regularities, not for merely observed regularities, and the inference from observed regularities to timeless ones is itself inductive. Beebee next shows that the “scientific essentialism” developed by Brian Ellis cannot justify inductive inferences for the same reason. The end result is an insightful and clear articulation of a fundamental and general problem for attempts to ground a solution to inductive skepticism in the metaphysics of laws.

In “Rational Self-doubt and the Failure of Closure,” Joshua Schechter’s topic is epistemic justification in cases of deductive inference from a single premise. We can rationally suspect that we have made a mistake somewhere in the course of a long deduction. Despite rational belief in our starting point, therefore, there will be cases where we should not believe the conclusion of a competent deduction: not only in the multiple-premise cases of the lottery and preface paradoxes, but also in many-step deductions from a single premise. Addressing a range of objections, Schechter concludes that there will be no simple and precise closure principle regarding justification under deductive inference, and that no deduction is fully epistemically secure.
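The worry behind rational self-doubt can be made vivid with a toy calculation of our own (not Schechter’s): even a tiny, independent chance of error at each step compounds over a long derivation.

\[
\Pr(\text{no error in } n \text{ steps}) = (1-\epsilon)^n, \qquad (1-0.001)^{1000} \approx e^{-1} \approx 0.37 .
\]

A reasoner rationally 99.9% confident in each individual step may thus, quite rationally, withhold belief in the conclusion of a thousand-step proof.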

In the first two sentences of “Ontological Nihilism,” Jason Turner declares that he is attacking a straw man: nobody out there is really defending the view that nothing exists. But, he promises, we stand to gain valuable philosophical insight by understanding just why it is so indefensible. Turner lays down a challenge for the wholesale nihilist, who must provide a way of paraphrasing our language into an “ontologically innocent” one that doesn’t commit us to the existence of things, while still preserving distinctions between different sentences and common patterns of inference. He considers three strategies: replacing all existential quantifiers with stipulated ontologically innocent quantifiers, replacing every sentence with some stipulated ontologically innocent atomic sentence, and constructing a language containing only “feature-placing” sentences like ‘It is raining,’ where the ‘it’ is semantically empty. Turner argues that all three strategies fail either because they are not genuinely ontologically innocent or because they commit us to an excessive number of primitives and brute facts. We do much better, Turner concludes, by accepting a world whose complex structure is built up out of individual things.

Three of the articles formalize philosophical views in order to highlight instructive properties of the resulting models. In “Rules and Reasons in the Theory of Precedent,” John Horty unifies three existing theories in the literature and provides a formal model of the mechanisms by which precedent constrains future court decisions. Horty treats each court case as generating a rule that maps a set of legal factors to the outcome of the case. This set of factors, whenever it appears in a new case, is interpreted as providing a sufficient legal reason for making the same decision—sufficient, but defeasible. Each rule then contributes to an ordering of legal reasons, and precedential constraint consists in prohibiting any future decisions that lead to inconsistency in the ordering. Horty’s theory thus combines the rule and reason models of precedent and is consistent with the result model. It also allows for the possibility of decisions that are not clearly cases of following or distinguishing precedent, and for an elegant way of understanding how new court decisions—whether following or distinguishing—can still change the law.
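To give a feel for how such a model runs, here is a minimal sketch in Python of the a fortiori core of precedential constraint, loosely in the spirit of Horty’s framework; the factor names and the helper function are hypothetical illustrations of ours, not Horty’s own formalism.

from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    plaintiff_factors: frozenset  # factors present that favor the plaintiff
    defendant_factors: frozenset  # factors present that favor the defendant
    outcome: str                  # "plaintiff" or "defendant"

def forced_outcome(new_p, new_d, case_base):
    """Return the outcome the new case is constrained to reach, if any.

    A precedent decided for one side binds a new case that is at least
    as strong for that side: the new case shares all of the precedent's
    winning-side factors and adds no losing-side factors beyond those
    the precedent already overcame."""
    for c in case_base:
        if c.outcome == "plaintiff":
            if c.plaintiff_factors <= new_p and new_d <= c.defendant_factors:
                return "plaintiff"
        elif c.defendant_factors <= new_d and new_p <= c.plaintiff_factors:
            return "defendant"
    return None  # unconstrained: the court may follow or distinguish

# Hypothetical example: a precedent for the plaintiff, decided despite one
# defendant-favoring factor, binds a later case that is even stronger for
# the plaintiff on the same defendant-side facts.
precedent = Case(frozenset({"p1", "p2"}), frozenset({"d1"}), "plaintiff")
print(forced_outcome(frozenset({"p1", "p2", "p3"}), frozenset({"d1"}),
                     [precedent]))  # -> plaintiff

A case on which forced_outcome returns None is exactly the kind Horty’s theory leaves open: the court may follow or distinguish, and either decision adds a new constraint to the ordering.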

In “Theory Choice and Social Choice: Kuhn versus Arrow,” Samir Okasha proposes a new framework for understanding Kuhn’s claim that there is no unique algorithm for choosing between theories. Okasha draws an analogy between a standard social choice problem—how to aggregate individuals’ preferences over a set of alternatives—and the problem of theory choice. He treats each criterion of theory choice (simplicity, accuracy, scope, and the like) as an individual with a preference ranking over a set of alternative theories. By Arrow’s theorem, given at least three alternatives, no aggregation algorithm can satisfy a handful of seemingly reasonable conditions. Okasha then considers Sen’s solution to Arrow’s problem—enriching the informational base by allowing more than ordinal rankings over alternatives—and shows how Bayesian and statistical modeling approaches to theory or hypothesis choice can be understood as doing exactly that. But when this is applied to theory choice, Okasha concludes, it is unclear whether there is a uniquely rationally acceptable way of enriching the informational base, and thus whether Kuhn can ultimately be vindicated.
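For readers who want the formal result in view, here is a standard statement of Arrow’s theorem transposed to Okasha’s setting, with the n “voters” read as criteria of theory choice; the notation is ours, not Okasha’s. An aggregation rule is a function

\[
f : \mathcal{O}(T)^{\,n} \longrightarrow \mathcal{O}(T),
\]

where \(\mathcal{O}(T)\) is the set of weak orderings of the set of rival theories \(T\). Arrow’s theorem says that if \(|T| \geq 3\), no such \(f\) satisfies all of: unrestricted domain (\(f\) is defined on every profile of rankings); weak Pareto (if every criterion ranks \(T_1\) above \(T_2\), so does the aggregate); independence of irrelevant alternatives (the aggregate ranking of \(T_1\) and \(T_2\) depends only on how the criteria rank \(T_1\) and \(T_2\)); and non-dictatorship (no single criterion’s ranking prevails in every profile).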

Can every ethical theory be converted to a form of consequentialism? In “Consequentialize This,” Campbell Brown takes on this question by defining consequentialism within a formal model and tracing its implications. What consequentialism demands, Brown maintains, is a rightness function R that determines exactly one complete ordering of alternatives for every choice situation. He demonstrates the equivalence of that definition to the conjunction of three formally defined conditions regarding consistent dominance of worlds across choice situations, no moral dilemmas, and agent neutrality. For each condition, Brown is then able to exhibit a form of ethical theory that violates it and is therefore inconsistent with consequentialism as defined—a form of ethical theory that can’t be “consequentialized.”
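In rough outline (the notation is ours, not Brown’s), the core requirement can be put as follows: a theory with rightness function \(R\) is consequentialist just in case, for every choice situation \(S\), there is exactly one complete ordering \(\succeq_S\) of the available alternatives such that

\[
R(S) \;=\; \{\, x \in S : x \succeq_S y \ \text{for all } y \in S \,\}.
\]

The three conditions then say, roughly, that these orderings must cohere across choice situations (dominance), must leave at least one alternative right in every situation (no moral dilemmas), and must not vary with the identity of the agent (agent neutrality).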

In our judgment, each of these articles constitutes an exemplary contribution to philosophy, worthy of being read and discussed by a wide range of philosophers. Were one able to read only ten articles from the philosophical literature of 2011, we think these would be a very good ten to read.

Patrick Grim
Chloe Armstrong
Billy Dunaway
Robin Zheng




