""http://www.w3.org/TR/html4/loose.dtd" The Philosopher's Annual



Volume XXIX
Introduction


The papers collected here represent our effort to showcase ten of the best philosophy articles from the literature of 2009. Any claim regarding a selection of the ‘best,’ of course, will carry immediate caveats and qualifications.

Each year we solicit nominations from a committee of nominating editors representing a diverse sample of philosophical sub-fields. The list of articles nominated is then returned to all nominating editors for secondary review. On the basis of nominations and reflections we then narrow a larger pool down to thirty articles. We as editors read all thirty articles, and over the course of several argument-filled days arrive at a selection of ten—the selection you see before you.

We would be at a loss to articulate any precise criteria that gave us these ten. In a field as diverse in focus, approach, and method as philosophy, we doubt that any precise criteria could be formulated. Certainly strength and originality of argument, new approaches and novel techniques, fairness to the wider literature, and clarity and cogency of presentation all count as virtues. Quality is to count for everything; specific area of philosophy, prestige of author, and original place of publication are to count for nothing. Our aim is simply to demonstrate philosophy at its manifold best. This year's articles are drawn from meta-ethics and philosophy of mind, from logic and philosophy of mathematics, from history of philosophy and philosophy of science, and from metaphysics and metaphilosophy. Though any claims to have selected a unique ‘ten best’ must be taken with generous grains of salt, we think we can claim to have identified ten outstanding articles worthy of particular attention.

In “The Normative Insignificance of Neuroscience,” Selim Berker responds to claims regarding the alleged normative implications of recent experimental findings which suggest a link between characteristically deontological moral judgments and emotional neural processes, on the one hand, and characteristically consequentialist moral judgments and “cognitive” neural processes, on the other. Berker argues that, on their own, these empirical findings have no normative significance at all; contrary to widespread claims of others, the findings provide no direct support for consequentialist moral theories over their deontological rivals.

Two of the papers make innovative use of tools from the theory of probability to investigate and clarify the evidential import of various kinds of data. In “Absence of Evidence and Evidence of Absence: Evidential Transitivity in Connection with Fossils, Fishing, Fine-Tuning and Firing Squads,” Elliott Sober appeals to the likelihood measure of evidential support to give a novel account of what is right and what is wrong in the well-known aphorism that ‘absence of evidence is not evidence of absence.’ Along the way, he turns up distinctions relevant to the general validity of the anthropic principle and of fine-tuning arguments for the existence of a designer. In a related vein, Rachael Briggs approaches the question of how rational agents should adjust their current beliefs in light of information about their future beliefs. Her “Distorted Reflection” provides a careful and persuasive account of what is right and what is wrong in the principle known as Reflection: that, when you learn that you will in the future believe p to degree x, you should now believe p to degree x (rendered as a formula below). Her discussion also provides an intriguing and insightful diagnosis of the failure of so-called Diachronic Dutch Book arguments to establish the rational incoherence of agents who do not always abide by Reflection. By rigorously investigating the foundations undergirding these principles of epistemic rationality, Briggs and Sober do much to elucidate central issues and advance the state of the debate surrounding the evidential support of (1) lacunae in the data, (2) data that were guaranteed to be observed (because one would not have been around to collect them had things been otherwise), and (3) data about one’s future doxastic states.
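For readers who want the principle in symbols, Reflection is standardly given a credal formulation along the following lines (the familiar rendering due to van Fraassen, offered here as our gloss rather than as a quotation from Briggs’s paper):

\[
\mathrm{Cr}_{t}\big(p \mid \mathrm{Cr}_{t'}(p) = x\big) = x \qquad \text{for any later time } t' > t,
\]

where Cr_t is the agent’s credence function at time t. Briggs’s question is when, and why, rationality genuinely demands conformity to this constraint.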

Mark Schroeder and James Dreier both address central debates in foundational meta-ethics. Schroeder’s “Hybrid Expressivism: Virtues and Vices,” impressive in scope, offers a comprehensive and critical evaluation of a meta-ethical position, defended by a number of philosophers, that blends a cognitivist treatment of the semantics of normative language with a non-cognitivist treatment of its use. In “Relativism (and Expressivism) and the Problem of Disagreement,” Dreier offers an insightful look at the most troubling issue for contemporary proponents of expressivism about normative language—the problem of giving an account of disagreement and of the semantics of negation. He synthesizes some of his earlier work on these issues to paint a powerful picture of the problem of disagreement as an issue not just for expressivism but for a form of relativism recently defended by John MacFarlane, Andy Egan, and others. Disagreement has received an enormous amount of attention in many different philosophical areas, and Dreier’s paper appears to be an important contribution to many of them.

What is the epistemic status of axioms in formal proof? In “We Hold These Truths To Be Self-Evident: But What Do We Mean By That?,” Stewart Shapiro offers an examination that is both critically focused on that central issue and sweeping in its historical scope. Shapiro carefully examines both claims made on behalf of axioms and the actual use of axioms in practice across a century and a half of logic, with particular attention to Frege and Zermelo. Despite recurring claims of stand-alone self-evidence, Shapiro argues, axioms have characteristically been supported holistically or even pragmatically in the context and practice of systematization. Shapiro also suggests that this is precisely as it should be.

In “Akrasia and Perceptual Illusion,” a similarly impressive display of scholarly ability, Jessica Moss weaves together arguments from three different Aristotelian texts in an effort to clarify Aristotle’s account of akrasia and resolve an apparent inconsistency in his view. Drawing on material from Aristotle’s de Insomniis—a text which has been largely ignored in the literature—Moss argues that Aristotle conceives of akrasia as analogous to certain cases of perceptual illusion. She then demonstrates how a recognition of this parallel can facilitate a reconciliation of Aristotle’s account of akrasia as a conflict between phantasia and rational cognition, as presented in de Anima, and his account of akrasia as a form of ignorance, as presented in the Nicomachean Ethics.

Johan van Benthem, Patrick Girard, and Olivier Roy’s “Everything Else Being Equal: A Modal Logic for Ceteris Paribus Preferences” is perhaps the most technically challenging article we have included. The authors offer a full semantics and several completeness theorems for a new modal logic for preferences ‘all other things being equal,’ where the latter is carefully distinguished from the very different logic that would be required for ‘all other things being normal’ (a simplified version of their central modality appears below). More importantly, perhaps, they are careful to link their work both to what has gone before and to contemporary applications. In the former regard, they build on an exposition of von Wright’s work on preference logic from the 1960s. In the latter regard, they trace implications for the formal definition of solution concepts in game theory, such as best response and dominant strategy. They also extend their work to a dynamic setting, giving a formal treatment of preference change under public announcement.
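To give a rough flavor of the formalism (our simplification, not the authors’ exact notation): for a set Γ of formulas, call two states Γ-equivalent when they assign the same truth value to every formula in Γ. A ceteris paribus preference modality ⟨Γ⟩φ can then be evaluated by a clause of the form

\[
s \models \langle \Gamma \rangle \varphi \iff \exists t \, \big( s \equiv_{\Gamma} t \;\wedge\; s \preceq t \;\wedge\; t \models \varphi \big),
\]

so that the modality looks for a weakly preferred state satisfying φ while everything in Γ is held equal.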

Jonathan Schaffer’s “On What Grounds What” offers an original, engaging, and lively essay in meta-ontology. Schaffer sets his sights on a pervasive presupposition about ontology’s subject matter and method: that the task of ontology is to determine what exists, and that this is to be done by examining the existential commitments of our best theories of the world. Schaffer offers a battery of arguments for the thesis that such an understanding of ontology would make ontology trivial, and that genuinely interesting ontological questions—questions of the sort that have motivated debates between nominalists and Platonists, for example, or modal realists and actualists—are properly understood as concerned not with what exists but with what is ontologically fundamental.

In “A Tale of Two Vectors,” Marc Lange resurrects a forgotten debate in the history of physics, arguing that it holds important lessons for contemporary philosophical accounts of natural lawhood. As Lange explains, the historical debate centered on the proper explanation of the law of Newtonian mechanics that force vectors compose parallelogram-wise (illustrated below). Without taking sides in that debate, Lange contends that an adequate account of lawhood should be able to make sense of it. In a striking and innovative conclusion, he provides a multi-tiered account of lawhood designed to do precisely this. In addition to articulating a new source of worries for existing analyses of natural law, Lange’s paper introduces a fresh alternative.
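For concreteness, the parallelogram law in question says that two forces acting at a point combine as their vector sum; in magnitude terms (a textbook statement, not Lange’s own formulation), forces F₁ and F₂ meeting at angle θ yield a resultant

\[
|\mathbf{F}| = \sqrt{|\mathbf{F}_1|^2 + |\mathbf{F}_2|^2 + 2\,|\mathbf{F}_1|\,|\mathbf{F}_2|\cos\theta}.
\]

The debate Lange revives concerned how this composition rule is properly explained.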

We hope you enjoy perusing this collection as much as we enjoyed our days of argument in putting it together. Many thanks to our nominating editors for their fine selection of candidates and to the Philosophy Department of the University of Michigan for their continued support.



Patrick Grim
Nate Charlow
Dmitri Gallow
Warren Herold



