The following papers were presented at AAP2009.
Mr S. M. Hassan A. Shirazi (University of Brussels (Belgium)), A Truthmaker For Necessary Truths
In the course of his works on color, David Armstrong upholds the following three statements: (i) Relational truthmaker: the property red and the property blue fall under color in virtue of a certain (internal) relation holding between them; (ii) Rigidity: the color terms 'red' and 'blue' are rigid terms; and (iii) Physicalism: the property red, for instance, is identical with a certain type of molecular structure instantiated by the surface of a red object. Among his critics, almost all texts are concerned with rejecting his physicalism. However, even if his physicalism is well established, I will argue that his approach suffers from a more fundamental problem: it is circular. To reveal this problem, I use Salmon's formalization of rigidity and a posteriori identity. Instead of denying his approach entirely, my ambition is to diagnose the problem from an Armstrongian point of view. From this perspective, I see that it would be less costly to give up (i). Moreover, there are also some independent reasons to drop (i). For instance, it sounds more intuitive to maintain the colorness of the property red apart from any (potential) relation it might have with the property blue. In place of (i), I suggest a non-relational truthmaker: the symmetrical demonstration of the physical properties which determine color properties. Since 'red is a color' is a necessary truth, whatever is the truthmaker for red's falling under color will be a truthmaker of a necessary truth as well. Therefore, my suggestion points to a helpful idea for the notorious topic of 'truthmakers for necessary truths'.
Dr. Marshall Abrams (University of Alabama at Birmingham), Borrowing Laplacean Children's Toys: On the Utility of Largely Unknowable Models
Using examples from evolutionary biology, I discuss ways in which it is useful to assume the existence of what I'll call "ideal models": hypothetical models which can reasonably be thought to exist in principle but which are too complex for analysis or simulation or empirical testing or even formulation. I'll argue that reference to such ideal models can and does play a useful role in science, and that ideal models, if they do exist, may often be literally true of systems in the world. I'll also argue that the notion of an ideal model can help us understand the relationship between commonplace models: commonplace models approximate an ideal model which is literally true of the world.
Dr Miri Albahari (University of Western Australia), Does the sense of self weaken knowledge that there is no self?
Anna is anxious about her talk on no-self. Gripping her sweaty palms, she climbs the podium trembling so violently that she trips, breaks the PowerPoint projector, and has to give her talk on the fly. Suppose Anna's anxiety betrays a strong sense of self -- of exactly the sort whose existence she denies. Is she harbouring inconsistent beliefs? And if Anna knows that there is no self, could her knowledge of that fact be made defective by such inconsistency? Conversely, could her knowledge of no-self be improved by losing the sense of self? It depends partly on whether the sense of self involves a belief -- and of the right sort. It also depends on whether knowledge can vary in its quality. In this paper I discuss a number of points that arise in relation to this case, drawing on the parallel debate over whether one can be a consistent determinist while having a sense of libertarian freewill.
Prof. Erik Anderson (Drew University), Sailing the Seas of Cheese
Cheesiness abounds in popular culture. Consider a few obvious examples: Celine Dion's over the top big-tent Vegas act that sold out nightly for over three years from 2003-7; much of what appears on American Idol, the most popular show on American television; as well as just about anything by Barry Manilow, Pat Boone, Michael Bolton, and Kenny G, just to name a few. Other kinds of examples might include a pandering political speech, a gold chain on a hairy chest, the Rock and Roll McDonald's in Chicago, some Anne Geddes works, many Hallmark greeting cards, special effects in some movies, precious photos of cute little baby tigers wearing hats, and so on and so forth. It would be difficult to understand many aesthetic assessments in popular culture these days without a good grasp of the concepts of cheese, cheesy and cheesiness. Part of the reason is that the high art/low art distinction upon which aesthetic assessments in the modern tradition following Hume and Kant depend is not operative within contemporary popular culture. It would be a bit too strong to assert that there are no sets of disproportionate pairs of artworks at all to serve as standards by which to orient our aesthetic assessments. But this is precisely what makes the concept of cheesiness useful, and that is perhaps what explains its ubiquity. Cheesiness is relative, and the conditions of application of the term are subjective in just the right way to make it useful in a sea of relativity. My hope is that by shedding light on the nature of cheesiness, we will, indirectly, shed light on what it is for a work to be good art in contemporary popular culture.
Mr. Jamin Asay (The University of North Carolina at Chapel Hill), Truthmaker Gaps
In this paper, I take up the topic of truthmaker gaps: truths that are true in spite of having no truthmaker. David Armstrong has charged those who believe in truthmaker gaps with being dualists about truth, and further suggests that we might need to be minimalists about truth for any truth we take to be a truthmaker gap. I argue that Armstrong's charge relies upon some false assumptions about the nature of truthmaking. In particular, I argue that truthmaker theory does not by itself offer a theory of truth, and that truthmaker theory is perfectly consistent with minimalism about truth. Still, I believe that Armstrong's demand for a systematic account of truthmaker gaps has yet to be met, and I take this paper to be a first step toward offering a defensible metaphysics of truthmaker gaps.
Dr Conrad Asmus (University of Melbourne), Expression, truth and use in a trivial language.
Doesn't triviality ensue? Isn't every sentence (therefore) both true and false? Yes. (Azzouni, Tracking Reason, 2006, pp. 101-102) Prima facie, natural language appears inconsistent, and tradition says that a contradiction entails any sentence. Most people take this as reason either to reject the inconsistency of natural language or to employ a paraconsistent logic. Jody Azzouni takes the road less trodden; he accepts the consequence that every sentence is true and false. Nonetheless, Azzouni argues, natural language remains useful for our purposes. In this presentation I will use Azzouni's position to throw further light on the connections between meaning, use and truth. I will argue that the very principles which Azzouni uses to rescue his position from absurdity should force him to recognise that the arguments leading to his position are mistaken.
Dr Greg Bamford (The University of Queensland), Design and Designing
Design may seem to have acquired a new-found significance in philosophy with the recent interest in, on the one hand, intelligent design, and on the other, natural design or design in nature, allied with the ongoing interest in function. Thinking about the world as exhibiting design features, however, with or without the services of a designer to thank for those features, hasn't led to much thought about what it is to design something. Attention has focussed largely on artifacts, the objects of design, probably because 'the artifact model' is at the heart of each of these concerns. Intelligent design and natural design are analogies with design as a human activity or practice, so what are the philosophically interesting or relevant features of this activity, of designing, that cutting to the chase might have led us to overlook? I consider what it is to design something, like the Sydney Opera House or a paperweight that appears to be nothing more than a found object. And I sketch some features of designing that I think we need to recognize or account for, set against intuitions about good knives, makeshift crowbars, scary stuffed bears, and god-like designers.
Dr Jc Beall (University of Connecticut), Truth, necessity, and abnormal worlds
A theory of truth answers both `nature' and `logic' questions. On the former front, questions concern the `nature', if any, of truth. On the latter front, questions---with truth-theoretic paradoxes in the forefront---concern the logic of `true'. In /Spandrels of Truth/ (Oxford, 2009), I answer the former question along deflationary lines, and the latter question along `dialetheic' lines. In short: `true' is a see-through device introduced for expressive purposes; and liar-like sentences are spandrels of the device that give us true sentences with true negations. A suitable paraconsistent logic (in the B-ish vicinity) keeps us from absurdity; and a suitable (i.e., suitably deflationary) philosophy of truth keeps us from grimacing at contradictions. Work is not done after answering both the `nature' and `logic' questions. Room must be made for other philosophically important notions. One such notion is alethic modality, and in particular /alethic necessity/ (and, derivatively, possibility). This is particularly pressing in light of the worlds-involving---and, in particular, /abnormal/-worlds-involving---formal semantics of the underlying (B-vicinity) logic of the target truth theory (-ies). My task, in this talk, is to add a plausible (say, S5-ish) necessity operator to the target truth theory (-ies). While the solution is relatively straightforward, there are a few surprising obstacles along the way. This talk records some of the difficulties and advances a solution.
Dr Simon Beck (University of KwaZulu-Natal), Misunderstanding Ourselves
Marya Schechtman has argued that contemporary attempts to save Locke's account of personal identity suffer the same faults that are to be found in Locke. To avoid these problems, she advocates giving up the mainstream Psychological View and adopting a narrative account like her Self-Understanding View, which has the further virtue of maintaining important insights from Locke. My paper argues that it is misleading to understand the Psychological View as sharing Locke's commitments and that (partly as a result) Schechtman has not isolated a problem that needs fixing or a reason for going narrative. It further argues that the Self-Understanding view is a great deal more at odds with Locke's view than Schechtman cares to acknowledge.
Prof. Helen Beebee (University of Birmingham), Is there any evidence for libertarianism?
Libertarians claim not only that free will requires that (at least some) decisions are metaphysically undetermined right up to the moment of choice, but that this condition is (at least sometimes) actually met -- for example in what Robert Kane calls 'self-forming actions'. This paper concentrates on the second claim, and argues that there is, in fact, no evidence to support this empirical claim about the causal history of decisions. Even if we grant that there is empirical evidence for indeterminism in general, this does not license the claim that all our decisions are indeterministically caused; and libertarians have no plausible story to tell about how we might have epistemic access to which of our decisions are indeterministically caused (and hence free) and which are not.
Prof. John Bigelow (Monash University), Tensed instantiation: joint presenter with Neil McKinnon
abstract submitted by lead author Neil McKinnon
Professor Alexander Bird (University of Bristol (UK) and Monash University), Can Dispositions Have Intrinsic Finks and Antidotes?
One might suppose that dispositions cannot have intrinsic finks and antidotes (masks). For what would then be the difference between having a disposition that for intrinsic reasons does not yield its manifestation, and not having that disposition at all? If that is right, then standard answers to certain important problems fail, for example the dispositional accounts of rule following or of intentional action, which require intrinsic finks or antidotes to respond to standard objections. In this paper I examine whether the dismissal of intrinsic finks and antidotes just given stands up, and if not, what does make the difference between possessing such a disposition and not possessing it. I suggest that there is a difference, and that there can be intrinsic interference with a disposition when that interference does not originate in a design feature (artificial entities) or a natural function (natural entities).
Dr Martin Black (Boston University), The Socratic Turn in Plato's Phaedo, Parmenides, and Symposium
Recently, more studies of Plato have paid attention to the dialogue form as an aspect of his comprehensive intention. This procedure implies that rather than focus on Plato's ostensible development, we need to account for Plato's depiction of Socrates' development, or the Socratic turn. This term denotes Socrates' criticism of the inquiry into nature and his turn to an inquiry orientated by dialogue, the forms, and erōs. The Socratic turn is shown in three stages through Socrates' intellectual autobiography in the Phaedo, the first part of the Parmenides, and Socrates' instruction in erōs in the Symposium. The Phaedo passage shows Socrates' criticism of versions of materialism and teleology for abstracting from our incorrigible experience of the unity of things and from our experience that it is our opinions of what it is better to do that are the true cause of our actions. Socrates' hypothesis of the forms is intended to furnish the ground for his return of philosophy to its origins in the comprehensive horizon of opinion. Parmenides criticizes this hypothesis for effectively turning the unity or form of things into a thing, but also asserts that some such hypothesis is necessary for philosophy. Socrates' instruction in erōs in the Symposium is intended to vindicate the philosophical life against its poetic and political alternatives, by demonstrating that the forms are inherent in human experience, which experience we normally misunderstand. The interpretation of the stages of the Socratic turn provides a plausible basis for the mix of wisdom and ignorance Socrates claims generally in the dialogues. It also shows Plato's concentration upon the problem of theory and practice: the broadest perspective on practical concerns is motivated by theoretical and not ethical demands.
Dr Russell Blackford (Monash University), NOMA No More
Attempts are sometimes made to take the sting out of the science/religion debate by invoking the idea of Non-Overlapping Magisteria ("NOMA") or similar ideas. If we accept NOMA, science and religion do not overlap but are complementary. Science asks questions about the workings of the natural world, while religion asks questions about how we should live, find a sense of meaning in our lives, and so on. Sometimes this is portrayed as a difference between "how" and "why" questions, or between "is" and "ought" questions. However, the distinctions on which NOMA and similar ideas depend are at best simplistic and dubious. NOMA is probably a false account of the distinction between science and religion, and is at the least highly controversial. Although it may have some political value as a way of smoothing passionate disagreements, it is an inadequate solution to the perennial conflict between religion and science.
Dr Jennifer Bleazby (St Leonards College), The Development of Imagination in Classroom Philosophical Inquiries
The imagination has traditionally been thought of as the antithesis of reason. As such, education, which has traditionally focused on the cultivation of reason, has devalued the imagination and encouraged children to transcend their imaginative natures. When the imagination has been considered important, it has normally been thought of as a distinct form of creative thinking that complements critical thinking. In this paper I will draw on the work of John Dewey to argue that imagination is actually integral to all thinking. Dewey describes thinking as the reconstruction of problematic experiences. Problematic experiences evoke imagination, because they compel us to imagine alternative possibilities, in which a fragmented, incomplete situation is a coherent, meaningful whole. Without the capacity to imagine problematic situations as other than they are, there would be no need for thinking because there would be no need, or means, for reconstructing experience. Thus, imagination enables us to interact with reality in a meaningful, transformative manner. I will then address how, in contrast to traditional pedagogies, Philosophy for Children (P4C) facilitates this Deweyian ideal of imagination. P4C's classroom community of inquiry involves the imaginative construction of alternative possibilities as a means to reconstructing philosophical problems. The communal nature of the classroom also facilitates imagination by exposing children to the alternative perspectives of others, which requires the use of the sympathetic imagination. Furthermore, I will explore how the imaginary, as well as the fantastical, can help children develop philosophical ability and understanding, especially in logic, critical thinking, metaphysics and ethics. Finally I will briefly address the importance of the teacher's imagination.
Ms Kylie Bourne (University of Wollongong), Crowds and Collective Moral Responsibility
This paper explains and defends the notion that collective moral responsibility can be ascribed to crowds. It examines the question of whether crowds can be the object of moral judgements such that they can bear ascriptions of praise and blame. In general, crowds have been overlooked in the philosophical debate regarding collective moral responsibility. Crowds have tended to be conceptualised atomistically such that intention, action and responsibility are not seen to exist at the level of the collective but are instead fully disaggregated to the individual crowd members. This paper examines May's (1987) account of the mob that stormed the Bastille and Held's (1970) account of a random collection of bystanders and uses them as starting points for the construction of a taxonomy of varieties of crowds. This taxonomy then informs a model of how collective moral responsibility can be attributed to different types of crowds. The paper concludes by saying that some crowds do have a capacity to form a collective intention and then to direct action according to this intention. In such cases both the intention and the action may be legitimate objects of moral judgement.
Mr Hugh Breakey (University of Queensland), Two Concepts of Property: Ownership of Things. Property in Activities.
All current theoretical understandings of the concept 'property' are flawed, and all for the same reason. They are flawed because they hold that property is (at least in the limit case) a determinate ethico-political relation to some thing. Such ownership-of-things is, indeed, one of the two important senses of the concept 'property'. But there is another sense, conceptually and normatively distinct: property-in-activities. In this sense the concept 'property' describes a determinate ethico-political relation to some activity -- a relation that may (but equally may not) subsequently effect a wide variety of relations to some thing. In such cases the relation with the activity is essential, fixed and primary, and the ensuing relations with tangible or intangible things are contingent, variable and derivative. Appreciation of property-in-activities illuminates much of the substance of communal, intellectual and resource property rights as well as the more obvious cases of customary, recreation, riparian, hunting and easement property rights. Further, it allows us increased understanding of important philosophical applications of the concept 'property': ranging from Locke's property in life and labour to recent analyses of the hacker ethos. And historically, property-in-activities bridges the conceptual gap between the 'propriety' that was a perennial normative concern up to the seventeenth century and the full-blown ownership-of-things that had achieved dominance by the end of the eighteenth. I argue that while intuitive, legal and philosophical use of property-in-activities remains widespread, serious misunderstandings and flawed policy arise from interpreting such use as referring to ownership-of-things.
Dr Rachael Briggs (The University of Sydney), Decision Rules and Voting Rules
Evidential decision theory (henceforth EDT) and causal decision theory (henceforth CDT) both advise agents to maximize expected value. The two theories give different definitions of expected value, so their advice sometimes conflicts. In certain famous cases of conflict -- medical Newcomb problems -- CDT seems to get things right. In other cases of conflict, including some recent examples suggested by Andy Egan, EDT seems to get things right. Ratificationism looks like a promising way of combining the theories' insights, and a refined ratificationist proposal by Ralph Wedgwood, which I call Benchmark Theory or BT, gets things right in both the medical Newcomb problems and the Egan examples. Unfortunately, there are other examples where both CDT and EDT get things right, while BT gets things wrong. It's no accident, I claim, that all three decision theories fail. Decision rules are analogous to voting rules, and the problematic examples have the structure of voting paradoxes. The upshot of voting paradoxes is that no voting rule can do everything we want. Likewise, the upshot of the decision-theoretic paradoxes is that no decision rule can do everything we want in every situation. Luckily, the so-called `tickle defense' establishes that EDT, CDT, and BT will do everything we want in a wide range of situations.
Dr Stuart Brock (Victoria University of Wellington), What Fictional Characters Could Not Be: The Creationist Fiction
In this paper I explain why creationism about fictional characters is an abject failure. It suffers from the same problem as theological creationism: the purported explanation is more mysterious than the data it seeks to explain. Unlike theological creationism, though, the phenomenon to be accounted for is not particularly mysterious in the first place. This uniquely philosophical variety of creationism does not explain why there is something rather than nothing, or why the universe and elements within it have the appearance of design, or why some people have apparent experiences of a creator. Instead, creationism about fictional characters is put forward as the best explanation for why people occasionally say things that, if taken at face value, seem to entail that fictional characters exist and are created by their authors. One might wonder if taking the folk at their word in this way is appropriate, particularly when the same individuals deny these entailments when asked explicitly about them. One might suspect that a better explanation, then, is that the folk are mistaken, or pretending, or speaking metaphorically, or speaking elliptically. I will not be exploring the merits of these alternative explanations here, however. Instead I will attempt to show that when the details of creationism about fictional characters are filled in, the hypothesis becomes far more puzzling than the linguistic data it is used to explain. The basic idea is that no matter how the creationist identifies where, when and how fictional objects are created, the proposal conflicts with other strong intuitions we have about fictional characters.
Dr. Berit Brogaard (University of Missouri), Some Kind of Seeing
I offer a simple argument against the thesis that natural kind properties sometimes occur in the phenomenal content of visual experience which rests on reflections on what the phenomenal content of an experience is. I then respond to three arguments aimed at establishing that natural kind properties do occur in the phenomenal content of experience: the argument from phenomenal difference, the argument from mandatory seeing, and the argument from associative agnosia. Finally, I offer criteria for when a natural kind property is visually detectable and use these criteria to formulate a new argument for the thesis that natural kind properties sometimes occur in the non-phenomenal content of visual experience.
Assistant Professor Chris Brown (National University of Singapore), What Tree Huggers and Animal Lovers Should Do For Meat Eaters
My aim is to describe and advocate a much neglected form of activism, one which should be of particular interest to anyone with serious, motivating concerns about the way we treat the environment and/or the way we treat (non-human) animals. I will start by evaluating the typical forms in terms of effectiveness, rather narrowly construed, and moral permissibility. Although several of these do fairly well on both counts, I argue that not enough is being done. The problem is not simply that too few people are adequately motivated, but also that too few of the available approaches have been recognized. One additional approach reveals itself, however, once we fully appreciate the fact that many of the harms done to animals and the environment are effects of practices that bring products to the market. Conscientious consumption is an admirable response, but competing on the market with the industries that fuel the relevant practices is a much more effective way of diminishing the relevant harms. Using meat as an example, I will argue that, for many existing products, green and humane practices can produce alternatives which are more appealing, in all respects, even to consumers who do not care about the environment or animals. Non-profit organizations that make this their business could be uniquely effective, if run by the right people.
Dr Deborah Brown (University of Queensland), Descartes' Secular Biology: Functions without Final Causes
Recent debates about functions have been largely dominated by the question of whether functions are to be characterised etiologically or in causal terms, or whether both approaches are required to understand contemporary biological practices of classification and explanation. Interestingly, these debates have precursors in the early modern period. The teleological conception of function used by Aristotle and his followers relies explicitly on the assumption that functions make sense only in relation to the ends or purposes which they serve. Even though they typically rejected final causality in natural philosophy, Descartes and other mechanists frequently resorted to functional explanations, particularly when discussing the organization and behaviour of animals and plants. This raises the question of what they thought was essential to an organism's exhibiting functionality. An increasingly popular approach in Cartesian studies is to read a non-intentional teleology back into the corpus as the context in which references to functions are to be understood. I am sceptical of the soundness of this strategy but do not think that Descartes is working with a purely causal notion of function either (as causal functions are generally understood nowadays). A tour of Descartes' fanciful account of embryogenesis reveals a commitment to a notion of function neither etiological nor causal but grounded rather in an understanding of the complex interdependence of the parts of organisms. One of the advantages of the Cartesian approach to functional explanation is that it offers some relief from what Dennis Des Chene calls the 'boundary problem' for mechanistic approaches in the life sciences of the seventeenth century, the problem of specifying what does and doesn't count as belonging within an organic system when many things, both internal and external, may contribute to fitness in some direct or indirect way.
Dr. Paolo Diego Bubbio (University of Sydney), Sacrifice in Hegel's Phenomenology of Spirit
In this paper I apply the post-Kantian revisionist interpretation of Hegel, and specifically the recognition-theoretic approach, to the notion of sacrifice in the Phenomenology of Spirit. Firstly, I conduct a preliminary analysis by examining the general meaning of sacrifice as a form of determinate negation. Secondly, I focus on two phenomenological moments (the struggle between faith and pure insight, and the cult) in order to answer the question, Is a real (effective and unselfish) sacrifice possible? Finally, I argue that sacrifice should be considered as a Darstellung, and I explain the twofold connection between sacrifice and recognition. I conclude that there is no sacrifice without recognition, and the process of recognition is intrinsically sacrificial.
Dr. Mary Buck (University of New England), A Spatial Approach to Hearing Absolute Music
Hearing classical music in the Western diatonic tradition is commonly regarded as a subjective, emotional experience for listeners. Theorists suggest that hearing Western classical music foregrounds our emotions and expectations. Ordinarily, we have emotions concerning an object. In music, the title of a musical work may assist the listener in discerning the object the composer has in mind, such as Smetana's orchestral work, 'Die Moldau'. It is also suggested that the music arouses in the listener memories of past experiences of an emotion. The composer provides groups of tones, themes, and structures that lead the listener to organise and re-organise his perception that becomes familiar to him over the course of the music. Fugues are an example of this method of composition. My philosophical project is focussed upon the experience of hearing 'absolute' music in the Western diatonic tradition. Absolute music is solely instrumental music, without reference to a narrative or drama external to the assembly of tones. Usually there is no title that offers a reference for a listener's experience. As such, without a text or title to relate to, the listener is mistaken if he expects an emotional object in hearing absolute music. We may doubt that an experience of 'absolute' music is a requisite subject for a psychology of emotions and expectations. I will consider an alternative to a psychology of emotion. I affirm that ordering space is a valuable method of perceiving absolute music. I support this approach by showing that an experience of kinematic and geometric orderings of space external to the listener is intrinsic to the musical scale. I suggest that David Marr's study of the spatial parameters of vision in pursuit tracking enhances a spatial theory of musical perception.
Dr Grzegorz Bugajak (Cardinal Stefan Wyszynski University in Warsaw and Medical University of Lodz), On the types of observability and a 'criterion' of existence
Old definitions of materiality, formulated in various philosophical schools, were usually given in the form of a list of properties, which all material objects, and only such objects, should share. It can be shown that in the light of now well-established concepts in physics, nearly none of these properties can be rightly attributed to all material objects. What seems to be left from these lists is an epistemological property of 'being observable', or 'observability'. What is material has to be -- in a wide sense -- observable. But what does it mean to be observable? For example, atoms are not observable in the same sense as are tennis balls. Or are they? When the notion of 'atom' appeared in modern chemistry some 200 years ago, it was just a conventional 'invention' which made it possible to explain certain phenomenological laws. Subsequent development in physics proved it to be a very useful notion indeed. But were atoms observable in the times of Dalton or Rutherford? Until fairly recently, when nanotechnology enabled us to manipulate single atoms, their observability was certainly of a different kind than the observability of more 'common' physical objects. Other entities of contemporary physics, quarks, are not, and -- according to quantum chromodynamics -- will never be subject to similar manipulations. However, being material, they have to be observable, and if so, it would be yet another type of observability. One of the profound questions in the philosophy of nature is the following: under what conditions is it justified to say that some objects, to which certain theoretical scientific notions [seem to] refer, really exist? Perhaps, given the above example, one possible answer, a possible criterion of existence, is this: to exist means to be observable, but in a special way: to be subject to manipulation.
Mr Jan-Willem Burgers (ANU (RSSS)), A Reason for Weighted Probabilistic Allocation
Probabilistic allocation is frequently defended as being the best mechanism for allocating goods in some contexts. Those who advertise the virtues of this practice, however, almost exclusively tend to extol equiprobabilistic allocation. Rarely are the virtues of weighted probabilistic allocation discussed. Are there any virtues to the latter, and, if so, what are they? This paper investigates the virtues of weighted probabilistic allocation from the perspective of one important general justification for probabilistic allocation: its (net) positive behavioural effects. Specifically, I address the following question: do behavioural effects as a reason for probabilistic allocation ever warrant weighted as opposed to equiprobabilistic allocation? I believe there are two necessary conditions for behavioural effects to do so: 1) the probabilistic allocation must be primarily intended to address the possible disadvantageous behavioural responses of potential recipients, as resulting from their predictive capacities; and 2) the recipients must vary in the amount of 'harm' they can potentially cause through their behaviour. The paper proceeds in three parts. In Part I, I briefly attempt to justify the claim that, from the argument for probabilistic allocation due to (net) positive behavioural effects, weighted probabilistic allocation is only warranted if the purpose is to address the predictive opportunities of the potential recipients. In Part II, I attempt to illustrate how weighted probabilistic allocation can address these with my case of a Tax Authority's Monitoring Policy. I first lay out the ideal conditions for weighted probabilistic allocation, and then speculate on the actual importance of this under more realistic empirical conditions.
Although this case illustrates well why weighted probabilistic allocation should, in principle, be an important consideration, I contend in Part III that this case is rare and we should not often expect the right conditions for weighting to arise in practice.
Dr Simon Burgess (CQUniversity), Moral judgement in professional counselling: both legitimate and important
Many theorists and practitioners have stressed that counsellors should refrain from forming or expressing moral judgements about their clients. The idea gains intellectual sustenance from various sources, including 'Person-Centered Therapy' (sometimes known as 'client-centered therapy' or 'Rogerian psychotherapy'). It cannot be denied that such 'nonjudgementalism' makes counselling easier, and generally makes good business sense too. But there are also important questions to raise about its effectiveness, its effect on social norms, and its moral justifiability. In this paper I argue that there are certain cases in relation to which counsellors should form certain moral judgements of their clients and subtly encourage those clients to adopt those moral judgements as their own. I also discuss the idea of counsellors forming and explicitly expressing moral judgements in certain cases. While some of the cases raised involve criminal actions or habits, others do not. Some of the most philosophically intricate issues arise through consideration of clients who have developed their habits in households and social milieux that are exceptionally violent and dysfunctional. The complexity is due to the fact that such cases raise the issue of whether, to some extent, certain causal explanations of behaviour can excuse such behaviour, and if so, precisely how such excuses may influence the nature of any relevant moral judgements.
Mr James Burrowes (University of Auckland), Your Symbols are Finite: Cassirer, Heidegger and Lask and the Kantian Tradition
Heidegger and Cassirer, and their philosophical systems, were brought into direct confrontation at a debate in Davos, Switzerland in 1929. The debate was structured on their respective readings of Kant; the influence of these interpretations on their wider philosophy is telling. Both Heidegger and Cassirer were initially trained within the Neo-Kantian tradition and, more importantly, under its third phase. It was at this stage that the ontological issues arising from Lebensphilosophie were beginning to challenge the epistemological framework of transcendental logic at the basis of Neo-Kantianism. Emil Lask most clearly uncovered the contradictions within the Southwest School and integrated the concepts of Lebensphilosophie into a Kantian framework, thereby anticipating the final phase of Neo-Kantianism. In this paper I intend to compare the main elements of Heidegger's existential analytic of Dasein with Cassirer's conception of man as a Symbolic Animal and how these are inherent to their wider philosophical systems. Also, I will show how important Emil Lask's theory of Kantian logic is to our understanding of both Heidegger and Cassirer. In this respect, we will investigate the effect of Lebensphilosophie on Kantian thought and put into perspective the elements of Kantian philosophy which remain within Heidegger's thought. Specifically, we will look at each of these philosophers' interpretations of Intuition and Imagination and the ontological conditions of Logic, Mathematics and the Transcendental Aesthetic. I will argue that the correct interpretations will need to rest on a clarified notion of finitude. I will also argue that, in line with the likes of Theodore Kisiel and Steven Galt Crowell, comparing Lask and Heidegger provides fundamental insights into Heidegger's Philosophy, and will make further clarifications to the relationship between Cassirer and Lask and the strands of Neo-Kantianism that they each represent.
Dr Sam Butchart (Monash), Why we need a theory of mathematical explanation
Explanation and justification in mathematics are intimately connected, in much the same way as they are in the physical sciences. Understanding mathematical explanation is therefore vital to the project of providing an adequate epistemology of mathematics - an account of the ways in which mathematics is justified. In this paper I will discuss the relevance of an account of mathematical explanation to a wide variety of problems in the epistemology of mathematics. In particular, I argue that understanding the concept of mathematical explanation is crucial to issues such as the status of indispensability arguments, the justification of axioms, the role of non-deductive evidence in mathematics and the nature of proof.
Dr Jeremy Butterfield (Trinity College, University of Cambridge, UK), On Discerning Quantum Particles
In several papers, Saunders, Muller and Seevinck have recently argued that quantum particles---both bosons and fermions---obey the principle of the identity of indiscernibles (contrary to most previous authors). Their position depends on two key ideas. (1): Two objects can be discerned without differing in their intrinsic properties, or in their relations to yet other objects, merely by a symmetric irreflexive relation between them. (This idea goes back to Hilbert, Bernays and Quine.) (2): Appropriate symmetric irreflexive relations can be found in the formalism of quantum theory. This paper assesses their position: in part developing it, and in part criticizing it---and so allowing objects, in particular quantum particles, to be merely numerically distinct. (This is joint work with Adam Caulton, of Cambridge.)
Dr Philip Catton (University of Canterbury), Relationship, symbol, science
In the human use of symbolic forms just as in the human adventure of an interpersonal relationship, there is an open-texturedness, an incapability of any individual to be in total control, and a potentiality for returns well beyond or well below what parties purposed at the outset the symbol or the relationship to produce. Whether a symbolism that persons use, or the interpersonal relationships to which those persons are assimilated, truly contribute to flourishing, depends upon continual creative endeavour to which multiple persons contribute and of which no individual person is all that fully in control. We might be pressed into thinking of the one set of creative demands epistemically and of the other morally, but the connection between the two is nevertheless complete. In this paper I explore Kant's understanding of these points and address in this light the possibility of science.
Prof David Chalmers (Australian National University), Kaplan's Paradox and Epistemically Possible Worlds
Kaplan's paradox suggests that there is a possible world for every set of possible worlds, so that the possible worlds cannot comprise a set with a cardinality, and so that (arguably) there is something incoherent about the very notion of a possible world. Some (e.g. Lewis) have responded by denying the premise, holding that some apparent possibilities here are not possible. But this move is harder to maintain for those (like me) who think that there is a possible world for every epistemically possible scenario (one that cannot be ruled out a priori). And the problem arises in any case for the framework for epistemically possible worlds, or scenarios, which is central to epistemic two-dimensionalism among other applications. In this paper, I attempt to respond to Kaplan's paradox by developing a stratified system of epistemically possible worlds, with different sets of worlds corresponding to different cardinalities, and I argue that this system can do most of the work that we need possible worlds to do.
Dr Colin Cheyne (University of Otago), Emotion, Fiction and Rationality
Our emotional responses to fiction, in particular our responses to fictional characters, apparently give rise to a paradox. We emotionally respond to fictional characters that we do not believe to exist, although rational emotional responses to objects presuppose belief in the existence of those objects. I argue that when we bring together recent work in evolutionary psychology, naturalised epistemology and cognitive science, we see that such responses are neither surprising nor irrational. We have both the capacity to hold contradictory beliefs and to respond emotionally when reasoning hypothetically and contemplating imaginary scenarios. These capacities, properly deployed, are useful and rational. We cannot avoid their coming into play when we consume fiction, and thank goodness for that.
Mr Sungho Choi (Kyung Hee University), What is a dispositional masker?
Manley and Wasserman put forward an apparently strong objection to the conditional analysis of dispositions and propose an alternative account of the link between dispositional ascriptions and counterfactual conditionals. But I will argue that their discussion rests on a fundamentally wrong understanding of the phenomenon of masking. The key idea is that they neglect a crucial difference between cases of masking, where the disposition is not manifested because, although the appropriate stimulus conditions are present, a masker prevents the manifestation, on the one hand, and plain cases, where the disposition is not manifested simply because the appropriate stimulus conditions are not present, on the other. To develop this idea with rigour and clarity, however, it will be necessary to look closely into the context-dependence of dispositional ascriptions and the incompleteness of dispositional predicates.
Dr Wayne Christensen (Macquarie University), Agency in skilled action
Recently Pacherie (2005, 2008) has proposed a 3-level framework for understanding the intentionality of action control. Pacherie's account distinguishes distal intentions from proximal and motor intentions, and argues that there is a control cascade from distal to proximal to motor intentions. Here I set out to extend Pacherie's framework by clarifying the nature of motor intentions, the nature of relations between proximal and motor intentions, and the conditions under which higher intentional control can be effective. Common views of skilled performance see higher intentional control as only impairing action control. That is, skilled action is only skilled to the extent that it is fully automated. I argue to the contrary that there are conditions in which higher intentional control can make a positive contribution to skilled action, and I distinguish several forms of agentic control that may occur in skilled action. Since much human action involves skill these are important cases for understanding the nature and scope of agentic control.
Dr Stephen Clarke (University of Oxford), Governance and the Yuck Factor
Steve Clarke and Rebecca Roache (Oxford) Throughout his election campaign, and in his inaugural address, President Barack Obama expressed an ambition to bridge the divide between predominantly conservative 'red states' and predominantly liberal 'blue states' (Haidt 2009; Loven 2008; Obama 2009), and to unite all Americans in a 'common purpose of remaking th[e] Nation for our new century' (Obama 2009). We consider the difficulty of meeting this objective given the prima facie evidence that conservatives and liberals not only hold very different moral views but also that the two respective groups think very differently about morality. We consider recent work in the psychology of morality which shows how moral judgments tend to change in response to changing social circumstances, such as the introduction of transformative technologies. We argue that governments can utilise these findings in order to plan changes in societies that will have the long term effect of reducing the gap between liberal and conservative moral thought. Some attempts to implement such a policy would be condemned as unacceptably paternalistic. We argue, however, that such a policy can be conducted in a way that is consistent with Thaler and Sunstein's (2008) unobjectionable 'libertarian paternalism'.
Ms. Jacklyn Cleofas (National University of Singapore), Fallible Omniscience: Wittgenstein's Argument Against Moral Naturalism
In A Lecture on Ethics, Wittgenstein explicitly adopts some of Moore's ideas: 'My subject, as you know, is Ethics, and I will adopt the explanation of that term which Professor Moore has given in his book Principia Ethica.' Despite this reference, Wittgenstein's and Moore's similar views on ethics have not been explored much. Darwall, Gibbard and Railton note that unlike Moore, Wittgenstein recognized that attributing moral goodness to something cannot be captured by a complete description of that thing in terms of its natural properties because action-guidingness is semantically built into the former but not the latter. Nevertheless, nobody saw that Wittgenstein also presents an improved version of Moore's argument. This paper seeks not only to demonstrate that Wittgenstein's argument against moral naturalism is similar to Moore's well-known open question argument; it also argues that Wittgenstein's argument is better. This argument is based on the possibility that someone who knows all natural facts could fail to know whether something is morally good. Wittgenstein's argument anticipates and even improves on contemporary versions of Moore's argument by highlighting something that any metaethical theory has to account for: the disparity between knowledge of the natural features of something and knowledge of that thing's moral worth. Unlike Moore, who tried to establish that moral properties are not natural by focusing on the semantic incongruity between being morally good and having some natural property, Wittgenstein uses the epistemic discrepancy between moral and natural knowledge to show that moral properties are either non-natural or non-existent.
Dr David Coady (University of Tasmania), The Epistemology of the Blogosphere
Blogging has changed the way in which people acquire knowledge and justify their beliefs. But are these changes good or bad? In particular, are we epistemically better off as a result of blogging, or is it the case, as Alvin Goldman has argued, that the blogosphere's emergence as an alternative to the conventional media is 'bad news for the epistemic prospects of the voting public'? Come along and find out.
Dr Daniel Cohen (Charles Sturt University), The Puzzle of the Self-Torturer and Newcomb's Problem
Attached to your body is a shock generator with 1001 settings, ranging from no pain to excruciating agony. While you are barely able to distinguish adjacent settings, distant settings are easily distinguishable. Every day you are offered $10,000 in return for permanently raising the settings by 1. The puzzle is that while you will clearly be tempted, each day, to advance, you will nevertheless regret advancing beyond a certain point. So what should you do? Is there some point beyond which it is irrational to advance, despite the temptation, or are rational agents committed to advancing all the way to 1000? I will argue that we can better understand this puzzle by seeing an analogy with Newcomb's problem. According to causal decision theory you ought, every day, to advance, while according to evidential decision theory there is some point beyond which advancing is irrational.
Professor Mark Colyvan (University of Sydney), A Ricci Curvature Tensor By Any Other Name
A common view of mathematics is that it is 'the language of science'. Although intended as a compliment, this slogan seriously understates the role mathematics plays in science. I will start by saying a little about what is right about the slogan. Thinking of mathematics as a language is useful in appreciating the significance of, and the difficulties encountered in arriving at, a good notational system. Good notation is far from trivial. The development of differential geometry, for example, with its Ricci curvature tensors and the like, is intimately connected with the notation employed. Next I turn to what is wrong about the slogan. Thinking of mathematics as a mere language is to ignore the role of mathematics as an explanatory tool. I will look at the recent work on mathematical explanation and argue that there are genuinely mathematical explanations of empirical facts and, moreover, that the transparency of some of these explanations is dependent upon the mathematical notation used. While a Ricci curvature tensor represented via different notation would still be a Ricci curvature tensor, it may not live up to its full potential and deliver the kinds of explanations it is capable of.
Mr Wilson Cooper (Macquarie University), Can Functional Reduction Close the Explanatory Gap?
Reductively explaining the mental in terms of the physical has been an enterprise attracting scant support recently. However, Jaegwon Kim has argued that this is because of a flaw in the most prominent method of reduction employed to date. Bridge law reduction seeks laws that connect higher-level descriptions with lower-level descriptions to allow a derivation of the higher-level laws from lower-level laws. The main criticism Kim makes against bridge law reduction is that the bridge laws cannot reductively explain the higher-level property, since the correlation itself needs an explanation. In order to explain why a lower-level property is correlated with a higher-level property, such explanations need to respect a constraint that the explanatory premises contained in the deductive-nomological argument do not refer to the property being explained, or to any other properties at that higher level or above. Abiding by this constraint results in explanatory ascent and thus the closure of explanatory gaps. Kim argues that functional reduction can deliver explanatory ascent, and thus close explanatory gaps, by functionally defining higher-level properties, like pain, in terms of their causal roles. Since definitions are not explanatory premises in a D-N explanation, if the defined causal role is found at a lower level, say neurophysiology, as a law about a neural state, then we have an explanation of why pain defined by causal role C is correlated with neural state N. N satisfies causal role C and pain is nothing more than having causal role C. In this paper, I argue that Kim's method of functional reduction is unsuccessful in closing explanatory gaps between the non-reducible and the physical because of two assumptions that beg the question against opponents of reductive physicalism: the Principle of Physical Causal Closure and Causation as Generation.
Dr Richard Corry (University of Tasmania), Can Dispositional Essences Ground the Laws of Nature?
A dispositional property is a tendency, or potency, to manifest some characteristic behaviour in some appropriate context. The mainstream view in the 20th Century was that such properties are to be explained in terms of more fundamental non-dispositional properties, together with the laws of nature. In the last few decades, however, a rival view has become popular. According to the rival view, some properties are essentially dispositional in nature, and the laws of nature are to be explained in terms of these fundamental dispositions. Indeed the supposed ability of fundamental dispositions to ground natural laws is the strongest reason to believe that some fundamental properties have a dispositional essence. I am sympathetic to the dispositional essentialist position, but in this paper I point out a serious obstacle to the claim that the laws of nature can be grounded in dispositional essences.
Mr Tama Coutts (The University of Melbourne), Mass Terms and Absolute Truth Definitions
Many philosophers, most obviously Donald Davidson, think that there is something philosophically significant about absolute truth definitions; that is to say truth definitions of the kind Tarski showed how to construct. Let us suppose so, and suppose so for Davidson's reasons; namely that the possibility of constructing such truth definitions, together with the argumentation he provides, give an insight into the nature of content and the structure of agency. It turns out that the argumentation requires one to be able to deal with recalcitrant stretches of natural languages. Davidson lists many such stretches. Of these I was unable to understand in what the recalcitrance of two consisted: claims involving probability and mass terms. In this paper I attempt to work out just what the problem is with mass terms, and insofar as this is a problem to solve it. The view that emerges is roughly like that of Terrence Parsons, involving a commitment to an ontology of (what Parsons calls) substances, although we shall see that his view is perhaps somewhat inadequate.
Mr Alexander Cox (University at Buffalo (SUNY)), Against Subject-Sensitive Invariantism
In their recent books, John Hawthorne and Jason Stanley each present a version of subject-sensitive invariantism (SSI). SSI is an epistemic thesis that claims that there is a single semantic value of 'knows'. It is distinguished from other invariantist theories by its claim that whether an instance of 'S knows p' is true or not depends in part upon the context and interests of the subject. SSI is opposed to contextualism, which claims that the semantic value of 'knows' varies across contexts. According to contextualism, the context and interests of the attributor determine which meaning of 'knows' applies. Whether an instance of 'S knows p' is true or not depends in part upon this semantic value. Hawthorne and Stanley each attempt to motivate subject-sensitive invariantism by applying it to an epistemic puzzle and arguing that it handles these puzzles at least as well as, if not better than, contextualism does. In this paper, I introduce these epistemic puzzles--lottery cases and high/low stakes cases--and the contextualist and SSI solutions. After briefly motivating SSI over contextualism, I present three criticisms of SSI. First, I question whether the truth of knowledge ascriptions is sensitive to practical facts as SSI claims. Second, I argue that SSI is consistent with the claim that knowledge vacillates. That is, on this view it is easy for one to lose and regain knowledge simply by changing one's context or interests. Third, I argue that SSI has difficulty accounting for the ubiquity of third-person knowledge ascriptions. In particular, it faces a dilemma between claiming that most third-person knowledge ascriptions are inappropriate and claiming that many such ascriptions are only true as the result of luck. I conclude by briefly showing that contextualism is not susceptible to these charges.
Prof Max Cresswell (Victoria University of Wellington), Are Contingent Facts a Myth?
In pp. 78-80 of Real Time II, Hugh Mellor presents a 'truthmaker' version of McTaggart's argument, which is designed to establish that there are no tensed facts. In this paper I consider the modal analogue of this argument, and shew first that, while there is a sense in which untensed facts might be held to make utterances of tensed sentences true, in that same sense non-contingent facts can make utterances of contingent sentences true. I then shew that, while there is a sense in which tensed facts can be held to be contradictory, in that same sense contingent facts are equally contradictory.
Prof. Garrett Cullity (University of Adelaide), Loving the Bad
Loving the bad is bad, according to Brentano and Chisholm. They add that loving the good is good, hating the good is bad, and (with qualifications) hating the bad is good. However, these claims (even with the qualifications) are too crude to be true. Sometimes, loving the bad is good. There are some forms of loving the bad which we have reason to promote and celebrate. This paper offers a framework for understanding and explaining the differences between these cases. Its main focus is on understanding the way in which someone's enjoyment can provide us with reasons to promote it. Often, the fact that you will enjoy something is a reason for me to help you to get it. Brentano and Chisholm seem right that whether this is true depends on the object of your enjoyment. The fact that you will enjoy seeing someone suffer gives me no reason to procure that for you. However, their attempted explanation of this fails. It is not because malicious enjoyment is a case of loving the bad that I have no reason to promote it. So what explanation should we offer instead?
Mr. William Cunningham (University of New South Wales), The Quest for Unity in Ethics
The recent prevalence of discussions regarding moral dilemmas has served to emphasize what is a fundamental necessity in moral philosophy: theoretical and practical unity. There are several forms of moral dilemma, each arising out of an inconsistency inherent in a particular moral theory. Varying theories fall victim to varying forms of dilemma but most do face one or another. This theoretical vulnerability forces us to revise or reject our theories. If one is to escape this type of theoretical weakness it is necessary to hold to a theory that presents the moral life as a unified whole. Inasmuch as morality is both a theoretical and a practical endeavour, the adequate theory must both be theoretically coherent and able to unite personal and social beliefs and desires. In this paper I argue that there is one such theory that is capable of escaping moral dilemmas and providing the above type of unity. I present a teleological approach that draws on Aristotelian and Judeo-Christian traditions. I argue that this theory is able to unite the obligations and desires of the individual and of rational beings as a whole, in such a way as to escape theoretical and practical inconsistency.
Mr Adrian Currie (Victoria University of Wellington), Towards a Cladistics of Analogy
Sometimes in biology, particularly in cases where we wish to reconstruct past species, we rely on data from a range of other species -- a set of inferential tools called 'the comparative method'. It has been suggested that in using analogous cases (cases where similarity of biological traits is due to independent evolution as opposed to descent) to construct retrodictive models of species we ought to constrain our area of interest using either homologies (restriction to clades) or parallel cases (restriction to developmental resources). I will examine the motivations behind this and consider whether a different approach that relies on classifying animals by trait, rather than by descent, might have use in the comparative method.
Dr Laura D'Olimpio (The University of Western Australia), What Aestheticism is Really About
Aestheticism denies that the ethical value of an artwork can be taken into consideration when judging the work's overall aesthetic value. Why is this question even of concern? It seems clear that at least sometimes the ethical component of a work of art can impact on its overall (aesthetic) value. The arguments about the aesthetic and ethical evaluation of artworks that are made on definitional and theoretical grounds only make sense when we examine the use and effect of artworks in society as people interact with them and are influenced and affected by these interactions. Some artworks are intended to produce an aesthetic effect and make a moral, social or political point, enhanced by the overall impact of the work. The autonomist, formalist and essentialist all object to such artworks or the use of art in this way as, they claim, the primary purpose of art is the aesthetic. Aestheticism should be viewed as largely a political or social or moral claim itself. Aestheticism seeks to liberate art and artists in order for them to be able to perform such roles (i.e. social commentary) without risk of censure or condemnation precisely by arguing that art is only to be judged by its aesthetic element(s). By acknowledging this, we can conclude that ethical evaluations of art are appropriate and necessary when required by the artwork in question.
Mr Andrew Donnelly (University of Otago), Epistemic Uncertainty and Clinical Trials
It is often thought that there is an 'equipoise' or 'uncertainty' constraint on a clinician's offering a patient entry into a randomised controlled trial (RCT). In recent years there has been much discussion as to what this constraint might involve. Some writers maintain that equipoise exists where a clinician is indifferent as to whether one treatment offered in an RCT is superior to the others. Others suggest equipoise exists when the clinical community is in a state of collective uncertainty. I maintain that these accounts are hopeless because they make equipoise a matter of individual or group beliefs. Instead I advocate a re