Tuesday, September 19, 2017

New Paper in Draft: The Insularity of Anglophone Philosophy: Quantitative Analyses

by Eric Schwitzgebel, Linus Ta-Lun Huang, Andrew Higgins, and Ivan Gonzalez-Cabrera

Abstract:

We present evidence that mainstream Anglophone philosophy is insular in the sense that participants in this academic tradition tend mostly to cite or interact with other participants in this academic tradition, while having little academic interaction with philosophers writing in other languages. Among our evidence: In a sample of articles from elite Anglophone philosophy journals, 97% of citations are citations of work originally written in English; 96% of members of editorial boards of elite Anglophone philosophy journals are housed in majority-Anglophone countries; and only one of the 100 most-cited recent authors in the Stanford Encyclopedia of Philosophy spent most of his career in non-Anglophone countries writing primarily in a language other than English. In contrast, philosophy articles published in elite Chinese-language and Spanish-language journals cite from a range of linguistic traditions, as do non-English-language articles in a convenience sample of established European-language journals. We also find evidence that work in English has more influence on work in other languages than vice versa and that when non-Anglophone philosophers cite recent work outside of their own linguistic tradition it tends to be work in English.

Full version here.

Comments and criticisms welcome, either by email to my academic address or as comments on this post. By the way, I'm traveling (currently in Paris, heading to Atlanta tomorrow), so replies and comments approvals might be a bit slower than usual.

Thursday, September 14, 2017

What would it take for humanity to survive? (And does it matter if we do?) (guest post by Henry Shevlin)

guest post by Henry Shevlin

The Doctor: You lot, you spend all your time thinking about dying, like you're gonna get killed by eggs, or beef, or global warming, or asteroids. But you never take time to imagine the impossible. Like maybe you survive. (Doctor Who, “The End of the World”)

It’s tempting to think that humanity is doomed: environmental catastrophe, nuclear war, and pandemics all seem capable of wiping us out, and that’s without imagining all of the exciting new technologies that might be lying in wait just over the horizon, ready to devour us. However, I’m an optimist. I think there’s an excellent chance humanity will see this century out. And if we eventually become a multi-planetary species, the odds start looking really quite good for us. Nonetheless, in thinking about the potential value in human survival (or the potential loss from human extinction), I think we could do more first to pin down whether (and why) we should care about our survival, and exactly what would be required for us to survive.

For many hardnosed people, I imagine there’s an obvious answer to both questions: there is no special value in human survival, and in fact, the universe may be a better place for everyone (including perhaps us) if we were to all quietly go extinct. This is a position I’ve heard from ecologists and antinatalists, and while I won’t debate it here, I find it deeply unpersuasive. As far as we know, humanity is the only truly intelligent species in the universe – the only species that is capable of great works of art, philosophy, and technological development. And while we may not be the only conscious species on earth, we are likely the only species capable of the more rarefied forms of happiness and value. Further to that, even though there are surely other conscious species on earth worth caring about, our sun will finish them off in a few billion years, and they’re not getting off this planet without our help (in other words: no dogs on Mars unless we put them there).

However, even if you’re sympathetic to this line of response, it admittedly doesn’t show there’s any value in specifically human survival. Even if we grant that humans are an important source of utility worth protecting, surely there are intelligent aliens somewhere out there in the cosmos capable of enjoying pleasures just as rarefied as those we experience. Insofar as we’re concerned with human survival at all, then, maybe it should just be in virtue of our general high capacity for well-being?

Again, I’m not particularly convinced by this. Leaving aside the fact that we may be alone in the universe, I can’t shake the deep intuition that there’s some special value in the thriving of humanity, even if only for us. To illustrate the point, imagine that one day a group of tiny aliens show up in orbit and politely ask if they can terraform Earth to be more amenable to them, specifically replacing our atmosphere with one composed of sulphur dioxide. The downside of this will be that humanity and all of the life on Earth will die out. On the upside, however, the aliens’ tiny size means that Earth could sustain trillions of them. “You’re rational ethical beings,” they say. “Surely, you can appreciate that it’s a better use of resources to give us your planet? Think of all the utility we’d generate! And if you’re really worried, we can keep a few organisms from every species alive in one of our alien zoos.”

Maybe I’m parochial and selfish, but the idea that we should go along with the aliens’ wishes seems absurd to me (well, maybe they can have Mars). One of my deepest moral intuitions is that there is some special good that we are rationally allowed – if not obliged – to pursue in ensuring the continuation and thriving of humanity.

Let’s just say you agree with me. We now face a further question: what would it take for humanity to survive in this ethically relevant sense? It’s a surprisingly hard question to answer. One simple option would be that we survive as long as the species Homo sapiens is still kicking around. Without getting too deeply into the semantics of “humanity”, it seems like this misses the morally interesting dimensions of survival. For example, imagine that in the medium term future, beneficial gene-modding becomes ubiquitous, to the point where all our descendants would be reproductively cut off from breeding with the likes of us. While that would mean the end of Homo sapiens (at least by standard definitions of species), it wouldn’t, to my mind, mean the end of humanity in the broader and more ethically meaningful sense.

A trickier scenario would involve the idea that one day we may cease to be biological organisms, having all uploaded ourselves to computers or robot bodies. Could humanity still exist in this scenario? My intuition is that we might well survive this. Imagine a civilization of robots who counted biological humans among their ancestors, and went around quoting Shakespeare to each other, discussing the causes of the Napoleonic Wars, and debating whether the great television epic Game of Thrones was a satisfactory adaptation of the books. In that scenario, I feel that humanity in the broader sense could well be thriving, even if we no longer have biological bodies.

This leads me to a final possibility: maybe what’s ethically relevant in our survival is really the survival of our culture and values: that what matters is really that beings relevantly like us are partaking in the artistic and cultural fruits of our civilization.

While I’m tempted by this view, I think it’s just a little bit too liberal. Imagine we wipe ourselves out next year in a war involving devastating bioweapons, and then a few centuries later, a group of aliens show up on Earth to find that nobody’s home. Though they’re disappointed that there are no living humans, they are delighted by the cultural treasure trove they’ve found. Soon, alien scholars are quoting Shakespeare and George R.R. Martin and figuring out how to cook pasta al dente. Earth becomes to the aliens what Pompeii is to us: a fantastic tourist destination, a cultural theme park.

In that scenario, my gut says we still lose. Even though there are beings that are (let’s assume) relevantly like us that are enjoying our culture, humanity did not survive in the ethically relevant sense.

So what’s missing? What is it that’s preserved in the robot descendant scenario that’s missing in the alien tourist one? My only answer is that some kind of appropriate causal continuity must be what makes the difference. Perhaps it’s that we choose, through a series of voluntary, purposive actions, to bring about the robot scenario, whereas the alien theme park is a mere accident. Or perhaps it’s the fact that I’m assuming there’s a gradual transition from us to the robots, rather than the eschatological lacuna of the theme park case.

I have some more thought experiments that might help us decide between these alternatives, but that would be taking us beyond the scope of a blogpost. And perhaps my intuitions that got us this far are already radically at odds with yours. But in any case, as we take our steps into the next stage of human development, I think it’s important for us to figure out what it is about us (if anything) that makes humanity valuable.

[image source]

Tuesday, September 12, 2017

Writing for the 10%

[The following is adapted from my advice to aspiring writers of philosophical fiction at the Philosophy Through Fiction workshop at Oxford Brookes last June.]

I have a new science fiction story out this month in Clarkesworld. I'm delighted! Clarkesworld is one of my favorite magazines and a terrific location for thoughtful speculative fiction.

However, I doubt that you'll like my story. I don't say this out of modesty or because I think this story is especially unlikable. I say it partly to help defuse expectations: Please feel free not to like my story! I won't be offended. But I say it too, in this context, because I think it's important for writers to remind themselves regularly of one possibly somewhat disappointing fact: Most people don't like most fiction. So most people are probably not going to like your fiction -- no matter how wonderful it is.

In fiction, so much depends on taste. Even the very best, most famous fiction in the world is disliked by most people. I can't stand Ernest Hemingway or George Eliot. I don't dispute that they were great writers -- just not my taste, and there's nothing wrong with that. Similarly, most people don't like most poetry, no matter how famous or awesome it is. And most people don't like most music, when it's not in a style that suits them.

A few stories do appear to be enjoyed by almost everyone who reads them ("Flowers for Algernon"? "The Paper Menagerie"?), but those are peak stories of great writers' careers. To expect even a very good story by an excellent writer to achieve almost universal likability is like hearing that a philosopher has just put out a new book and then expecting it to be as beloved and influential as Naming and Necessity.

Even if someone likes your expository philosophy, they probably won't like your fiction. The two types of writing are so different! Even someone who enjoys philosophically-inspired fiction probably won't like your fiction in particular. Too many other parameters of taste also need to align. They'll find your prose style too flowery or too dry, your characters too flat or too cartoonishly clever, your plot too predictable or too confusing, your philosophical elements too heavy-handed or too understated....

I draw two lessons.

First lesson: Although you probably want your friends, family, and colleagues to enjoy your work, and some secret inner part of you might expect them to enjoy it (because it's so wonderful!), it's best to suppress that desire and expectation. You need to learn to expect indifference without feeling disappointed. It's like expecting your friends and family and colleagues to like your favorite band. Almost none of them will -- even if some part of you screams out "of course everyone should love this song it's so great!" Aesthetic taste doesn't work like that. It's perfectly fine if almost no one you know likes your writing. They shouldn't feel bad about that, and you shouldn't feel bad about that.

Second lesson: Write for the people who will like it. Sometimes one hears the advice that you should "just write for yourself" and forget the potential audience. I can see how this might be good advice if the alternative is to try to please everyone, which will never succeed and might along the way destroy what is most distinctive about your voice and style. However, I don't think that advice is quite right, for most writers. If you really are just writing for yourself -- well, isn't that what diaries are for? If you're only writing for yourself you needn't think about comprehensibility, since of course you understand everything. If you're only writing for yourself, you needn't think about suspense, since of course you know what's going to happen. And so forth. The better advice here is to write for the 10%. Maybe 10% of the people around you have tastes similar enough to your own that there's a chance that your story will please them. They are your target audience. Your story needn't be comprehensible to everyone, but it should be comprehensible to them. Your story needn't work intellectually and emotionally for everyone, but you should try to make it work intellectually and emotionally for them.

When sending your story out for feedback, ignore the feedback of the 90%, and treasure the feedback of the 10%. Don't try to implement every change that everyone recommends, or even the majority of changes. Most people will never like the story that you would write. You wouldn't want your favorite punk band taking aesthetic advice from your country-music loving uncle. But listen intently to the 10%, to the readers who are almost there, the ones who have the potential to love your story but don't quite love it yet. They are the ones to listen to. Make it great for them, and forget everyone else.

[Cross-posted at The Blog of the APA]

Tuesday, September 05, 2017

The Gamer's Dilemma (guest post by Henry Shevlin)

guest post by Henry Shevlin

As an avid gamer, I’m pleased to find that philosophers are increasingly engaging with the rich aesthetic and ethical issues presented by videogames, including questions about whether videogames can be a form of art and the moral complexities of virtual violence.

One of the most disturbing ethical questions I’ve encountered in relation to videogames, though, is Morgan Luck’s so-called “Gamer’s Dilemma”. The puzzle it poses is roughly as follows. On the one hand, we don’t tend to regard people committing virtual murders as particularly ethically problematic: whether I’m leading a Mongol horde and slaughtering European peasants or assassinating clients as a killer for hire, it seems that, since no-one really gets hurt, my actions are not particularly morally troubling (there are exceptions to this of course). On the other hand, however, there are still some actions that I could perform in a videogame that we’re much less sanguine about: if we found out that a friend enjoyed playing games involving virtual child abuse or torture of animals, for example, we would doubtless judge them harshly for it.

The gamer’s dilemma concerns how we can explain or rationalize this disparity in our responses. After all, the disparity doesn’t seem to track any actual harm – there’s no obvious harm done in either case – or even the quantity of simulated harm (nuclear war simulations in which players virtually incinerate billions don’t strike me as unusually repugnant, for example). And while it might be that some forms of simulated violence can lead to actual violence, this remains controversial, and again, it’s unlikely that any such causal connections between simulated harm and actual harm would appropriately track our different intuitions about the different kinds of potentially problematic actions we might take in video games.

However, while the Gamer’s Dilemma is an interesting puzzle in itself, I think we can broaden the focus to include other artforms besides videogames. Many of us have passions for genres like murder mystery stories, serial killer movies, or apocalyptic novels, all of which involve extreme violence but fall well within the bounds of ordinary taste. However, someone who had a particular penchant for stories about incest, necrophilia, or animal abuse might strike us as, well, more than a little disturbed. Note that this is true even when we focus just on obsessive cases: someone with an obsession for serial killer movies might strike us as eccentric, but we’d probably be far more disturbed by someone whose entire library consisted of books about animal abuse.

Call this the puzzle of disturbing aesthetic tastes. What makes it the case that some tastes are disturbing and others not, even when both involve fictional harm? Is our tendency to form negative moral judgments about those with disturbing tastes rationally justified? While I’m not entirely sure what to think about this case, I am inclined to think that disturbing aesthetic tastes might reasonably guide our moral judgment of a person insofar as they suggest that that person’s broader moral emotions may be, well, a little out of the ordinary. Most of us feel revulsion toward, rather than fascination with, even the fictional torture of animals, for example, and if someone doesn’t share this revulsion in fictional cases, that might be evidence that they are ethically deviant in other ways. Crucially, this doesn’t apply to depictions of things like fictional murder, since almost all of us have enjoyed a crime drama at some point in our lives, and it's well within the boundaries of normal taste.

Note that there’s a parallel here with one possible response to Bernard Williams’s famous example of the truck driver who – through no fault of his own – kills a child who runs into the road, and subsequently feels no regret or remorse. As Williams points out, there’s no rational reason for the driver to feel regret – ex hypothesi, he did everything he could – yet we’d think poorly of him were he just to shrug the incident off (interestingly paralleled by the recent public outcry in the UK following a similar incident involving an unremorseful cyclist). I think what’s partly driving our intuition in such cases is the fact that a certain amount of irrational guilt and regret even for actions outside our control is to be expected as part of normal human moral psychology. When such regret is absent, it’s an indicator that a person is lacking at least some typical moral emotions. In much the same way, even if there is nothing intrinsically wrong about enjoying videogames or movies about animal torture, the fact that it constitutes a deviance from normal human moral attitudes might make us reasonably suspicious of such people’s broader moral emotions.

I think this is a promising line to take with regard to both the gamer’s dilemma and the puzzle of disturbing tastes. One consequence, however, would be that as society’s norms and standards change, certain tastes may no longer be indicative of more general moral deviancy. For example, in a society with a long history of cannibal fiction, people in general might lack the intense disgust reactions that we ourselves display, despite being in all respects morally upstanding. In such a society, then, the fact that someone was fascinated with cannibalism might not be a useful indicator as to their broader moral attitudes. I’m inclined to regard this as a reasonable rather than counterintuitive consequence of the view, reflecting the rich diversity in societal taboos and fascinations. Nonetheless, no matter what culture I was visiting, I doubt I’d trust anyone who enjoyed fictional animal torture with watching my dog for the weekend.

[image source]

Friday, September 01, 2017

How Often Do European-Language Journals Cite English-Language vs. Same-Language Work?

By Eric Schwitzgebel and Ivan Gonzalez-Cabrera

Elite English-language philosophy journals cite almost exclusively English-language sources, while elite Chinese-language philosophy journals cite from a range of linguistic traditions.

How about other European-language journals? To what extent do articles in languages like French, German, and Spanish cite works originally written in the same language vs. works originally written in other languages?

To examine this question, we looked at a convenience sample of established journals that publish primarily or exclusively in European languages other than English -- journals catalogued in the Philosophy section of JSTOR with available records running at least from 1999 through 2010. [note 1] We downloaded the most recently available JSTOR-archived issue of each of these journals and examined the references of every research article in those issues (excluding reviews, discussion notes, editors' introductions, etc.). This gave us a total of 96 articles to examine: 41 in French, 23 in German, 14 in Italian, 8 in Portuguese, 6 in Spanish, and 4 in Polish.

Although this is not a systematic or proportionate sample of non-English European-language journal articles, we believe it is broad and representative enough to provide a preliminary test of our hypothesis. Are citation patterns in these journals broadly similar to the citation patterns of elite Anglophone journals (where 97% of citations are to same-language sources)? Or are they closer to the patterns of elite Chinese-language journals (51% of citations to same-language sources)?

In all, we had 2883 citations for analysis. For each citation, we noted the language of the citing article, whether the cited source had originally been published in the same language as the citing article or in a different language, and, if in a different language, whether that language was English. As in our previous studies, sources in translation were coded based on the original language of publication rather than the language into which they had been translated (e.g., a translation of Plato into German would be coded as ancient Greek rather than German). We also noted the original year of publication of the cited source, sorting it into one of four categories: ancient to 1849, 1850 to 1945, 1946 to 1999, or 2000 to present. [note 2]
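For readers who want to see the mechanics, here is a minimal sketch in Python of the tallying step. (The rows are invented placeholders for illustration, not our actual data, and this is just a sketch of the logic, not the scripts we actually used.)

    from collections import Counter

    # Each coded citation: (language of citing article, original language of
    # cited source, year category). Placeholder rows, not our actual data.
    citations = [
        ("French", "French", "1946 to 1999"),
        ("French", "English", "2000 to present"),
        ("German", "Ancient Greek", "ancient to 1849"),
    ]

    def bucket(citing_lang, cited_lang):
        # Classify a citation as same-language, English, or other.
        if cited_lang == citing_lang:
            return "same language"
        return "English" if cited_lang == "English" else "other language"

    tally = Counter(bucket(citing, cited) for citing, cited, _ in citations)
    total = sum(tally.values())
    for category, n in tally.items():
        print(f"{category}: {n}/{total} ({100 * n / total:.0f}%)")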

In our sample, 44% of citations (1270/2883) were to same-language sources, 30% were to sources originally published in English (some translated into the language of the citing article), and 26% (749/2883) were to all other languages combined. These results are much closer to the Chinese-language pattern of drawing broadly from a variety of language traditions than they are to the English-language pattern of citing almost exclusively from the same linguistic tradition.

French- and German-language articles showed more same-language citation than did articles in other languages (51% and 71% respectively, compared to an average of 20% for the other sampled languages), but we interpret this result cautiously due to the small and possibly unrepresentative samples of articles in each language.

Breaking the results down by year category, we found the following: [if blurry, click for clearer display]

Thus, in this sample, cited sources originally published between 1946 and 1999 were just about as likely to have been originally written in English as to have been written in the language of the citing article. When the cited source was published before 1946 or after 1999, it was less likely to be in English.

Looking article by article, we found that only 5% of articles (5/96) cited exclusively same-language sources. This contrasts sharply with our study of articles in Anglophone journals, 73% of which cited exclusively English-language sources.

We conclude that non-English European-language philosophy articles cite work from a broad range of linguistic traditions, unlike articles in elite Anglophone philosophy journals, which cite almost exclusively from English-language sources.

One weakness of this research design is the unsystematic sampling of journals and languages. Therefore, we hope to follow up with at least one more study, focused on a more carefully chosen set of journals from a single European language. Stay tuned!

----------------------------------------------

note 1: Included journals were Archives de Philosophie, Archiv für Rechts- und Sozialphilosophie, Crítica: Revista Hispanoamericana de Filosofía, Gregorianum, Jahrbuch für Recht und Ethik, Les Études Philosophiques, Revista Portuguesa de Filosofia, Revue de Métaphysique et de Morale, Revue de Philosophie Ancienne, Revue Internationale de Philosophie, Revue Philosophique de la France et de l'Étranger, Rivista di Filosofia Neo-Scolastica, Rivista di Storia della Filosofia, Roczniki Filozoficzne, Rue Descartes, Sartre Studies International, Studi Kantiani, and Studia Leibnitiana. We excluded journals for which substantially more than half of recent articles were in English, as well as journals not listed as philosophy journals on the PhilPapers journals list.

note 2: Coding was done by two expert coders, each with a PhD in philosophy. One coder was fluent only in English but had some reading knowledge of German, French, and Spanish. The other coder was fluent in Spanish and English, had excellent reading knowledge of German and Portuguese, and had some reading knowledge of French and Italian. The coding task was somewhat difficult, especially for journals using footnote format. Expertise was required to recognize, for example, the original language and publication period of translated works, which was not always immediately evident from the citation information. We randomly selected 10 articles to code for inter-rater reliability, and in 91% of cases (235 of 258 citations) the coders agreed on both the original language and the year category of original publication. Errors involved missing or double-counting some footnoted citations, typographical errors, or mistakes in language or year category. Errors did not fall into any notable pattern, and in our view the error rate is acceptable given the difficulty of the coding task and the nature of our hypothesis.

Monday, August 28, 2017

How Often Do Chinese Philosophy Journals Cite English-Language Work?

By Linus Huang and Eric Schwitzgebel

In a sample of elite Anglophone philosophy journals, only 3% of citations are to works that were originally written in a language other than English. Are philosophy journals in other languages similar? Do they mostly cite sources from their own linguistic tradition? Or do they cite more broadly?

We will examine this question by looking at citation patterns from several non-English languages. Today we start by examining a sample of 208 articles published in fifteen elite Chinese-language journals from 1996 to 2016. [See Note 1 for methodological details.]

In our sample of 208 Chinese-language articles, 49% (1422/2929) of citations are to works originally written in languages other than the language of the citing article, in stark contrast with our results for Anglophone philosophy journals.

English is the most frequently cited foreign language, constituting 31% (915/2929) of all citations (compared to 17% for all other languages combined). Other cited languages are German, French, Russian, Japanese, Latin, Greek, Korean, Sanskrit, Spanish, Italian, Polish, Dutch, and Tibetan.

Our sample of elite Anglophone journals contained no journals focused on the history of philosophy. In contrast, our sample of elite Chinese-language journals contains three that focus on the history of Chinese philosophy. Excluding the Chinese-history journals from the analysis, we found that the plurality of citations (44%, 907/2047) are to works originally written in English (often in Chinese translation for the older works). Only 32% (647/2047) of citations are to works originally written in Chinese (leaving 24% for all other languages combined).

Looking just at the journals specializing in the history of Chinese philosophy, 98% (860/882) of citations are to works originally written in Chinese – a percentage comparable to the rate of same-language citation in the non-historical elite Anglophone journals of our earlier analysis. In other words, Chinese journals specifically discussing the history of Chinese philosophy cite Chinese sources at about the same rate as Anglophone journals cite Anglophone sources when discussing general philosophy.

We were not able to determine the original publication date of all of the cited works. However, we thought it worth seeing whether the English-language citations are mostly of classic historical philosophers like Locke, Hume, and Mill, or whether instead they are mostly of contemporary writers. Thus, we randomly sampled 100 of the English-language citations. Of these 100, 68 (68%) were published between 1946 and 1999, and 19 (19%) were published from 2000 to the present.

Finally, we broke the results down by year of publication of the citing article (excluding the three history journals). This graph shows the results.

Point-biserial correlation analysis shows a significant increase in rates of citation of English-language sources from 1996 to 2016 (34% to 49%, r = .11, p < .001). Citation of both Chinese and other-language sources may also be decreasing (r = -.05, p = .03; r = -.08, p = .001), but we would interpret these trends cautiously due to the apparent U-shape of the curves and the possibility of article-level effects that would compromise the statistical independence of the trials.
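For the statistically curious: a point-biserial correlation is simply a Pearson correlation in which one of the two variables is dichotomous. Here is a minimal sketch of that analysis in Python, using SciPy's pointbiserialr and invented placeholder values rather than our actual citation data:

    from scipy import stats

    # One row per citation: the year of the citing article, and 1 if the
    # cited source was originally in English, else 0. Placeholder values.
    years = [1996, 1996, 2001, 2001, 2006, 2011, 2016, 2016]
    is_english = [0, 1, 0, 0, 1, 1, 1, 1]

    # Correlation between the binary English-citation variable and year.
    r, p = stats.pointbiserialr(is_english, years)
    print(f"r = {r:.2f}, p = {p:.3f}")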

Citation patterns in elite Chinese-language philosophy journals thus appear to be very different from citation patterns in elite Anglophone philosophy journals. The Anglophone journals cite almost exclusively works that were originally written in English. The Chinese journals cite about half Chinese sources and about half foreign language sources (mostly European languages), with English being the dominant language in the foreign language group, and increasingly so in recent years.

We leave for later discussion the question of causes, as well as normative questions, such as the extent to which elite journals in various languages should cite mostly from their own linguistic tradition versus aiming instead to cite more broadly from work written in a range of languages.

Stay tuned for some similar analyses of journals in other languages!

------------------------------------------

Note 1: The journals are: 臺灣大學哲學論評 (National Taiwan University Philosophical Review), 政治大學哲學學報 (NCCU Philosophical Journal), and 東吳哲學學報 (Soochow Journal of Philosophical Studies), which are ranked as the Tier I philosophy journals by the Research Institute for the Humanities and Social Sciences, Ministry of Science and Technology, Taiwan; and 哲学研究 (Philosophical Researches), 哲学动态 (Philosophical Trends), 自然辩证法研究 (Studies in Dialectics of Nature), 道德与文明 (Morality and Civilization), 世界哲学 (World Philosophy), 自然辩证法通讯 (Journal of Dialectics of Nature), 伦理学研究 (Studies in Ethics), 现代哲学 (Modern Philosophy), 周易研究 (Studies of Zhouyi), 孔子研究 (Confucius Studies), 中国哲学史 (History of Chinese Philosophy), and 科学技术哲学研究 (Studies in Philosophy of Science and Technology), which are ranked as core philosophy journals in the Chinese Social Sciences Citation Index by the Institute for Chinese Social Sciences Research and Assessment, Nanjing University, China. We sampled the research articles of their first issues in 1996, 2001, 2006, 2011, and 2016, generating a list of 208 articles. A coder fluent in both Chinese and English and with a PhD in philosophy (Linus Huang) coded the references of these articles, generating a list of 2952 citations to examine. For each reference, we noted its original publication language. Translated works were coded based on the original language in which they were written rather than the language into which they had been translated. If that information was not available in the reference, Linus hand-coded the citation by searching online or drawing on his knowledge of the history of philosophy. The original language was determinable in 2929 of the 2952 citations.

Thursday, August 24, 2017

Am I a Type or a Token? (guest post by Henry Shevlin)

guest post by Henry Shevlin

Eric has previously argued that almost any answer to the problem of consciousness involves “crazyism” – that is, a commitment to one or another hypothesis that might reasonably be considered bizarre. So it’s in this spirit of openness to wild ideas that I’d like to throw out one of my own longstanding “crazy” ideas concerning our identity as conscious subjects.

To set the scene, imagine that we have one hundred supercomputers, each separately running a conscious simulation of the same human life. We’re also going to assume that these simulations are all causally coupled together so that they’re in identical functional states at any one time – if a particular mental state type is being realized in one at a given time, it’s also being realized in all the others.

The question I want to ask now is: how many conscious subjects – subjective points of view – exist in this setup? A natural response is “one hundred, obviously!” After all, there are one hundred computers all running their own simulations. But the alternate crazy hypothesis I’d like to suggest is that there’s just one subject in this scenario. Specifically, I want to claim that insofar as two physical realizations of consciousness give rise to a qualitatively identical sequence of experiences, they give rise to a single numerically identical subject of experience.

Call this hypothesis the Identity of Phenomenal Duplicates, or IPD for short. Why would anyone think such a crazy thing? In short, I’m attracted by the idea that the only factors relevant to the identity and individuation of a conscious subject are subjective: crudely, what makes me me is just the way the world seems to me and my conscious reactions to it. As a subject of phenomenal experience, in other words, my numerical identity is fixed just by those factors that are part of my experience, and factors that lie outside my phenomenal awareness (for example, which of many possible computers are running the simulation that underpins my consciousness) are thus irrelevant to my identity.

Putting things another way, I’d suggest that maybe my identity qua conscious subject is more like a type than a token, meaning that a single conscious subject could be multiply instantiated. As a helpful analogy, think about the ontology of something like a song, a novel, or a movie. The Empire Strikes Back has been screened billions of times over the years, but all of these were instantiations of one individual thing, namely the movie itself. If the IPD thesis is correct, then the same might be true for a conscious subject – that I myself (not merely duplicates of me!) could be multiply instantiated across a host of different biological or artificial bodies, even at a single moment. What *I* am, then, on this view, is a kind of subjective pattern or concatenation of such patterns, rather than a single spatiotemporally located object.

Here’s an example that might make the view seem (marginally!) more plausible. Thinking back to the one hundred simulations scenario above, imagine that we pick one simulation at random to be hooked up to a robot body, so that it can send motor outputs to the body and receive its sensory inputs. (Note that because we’re keeping all the simulations coupled together, they’ll remain in ‘phenomenal sync’ with whichever sim we choose to be embodied as a robot). The robot wakes up, looks around, and is fascinated to learn it’s suddenly in the real world, having previously spent its life in a simulation. But now it asks us: which of the Sims am I? Am I the Sim running on the mainframe in Tokyo, or the one in London, or the one in São Paulo?

One natural response would be that it was identical to whichever Sim we uploaded the relevant data from. But I think this neglects the fact that all one hundred Sims are causally coupled with one another, so in a sense, we uploaded the data from all of them – we just used one specific access point to get to it. To illustrate this, note that in transferring the relevant information from our Sims to the robot, we might wish (perhaps for reasons of efficiency) to grab the data from all over the place – there’s no reason we’d have to confine ourselves to copying the data over from just one Sim. So here’s an alternate hypothesis: the robot was identical to all of them, because they were all identical to one another – there was just one conscious subject all along! (Readers familiar with Dennett’s Where Am I? may see clear parallels here.)

I find something very intuitive about the response IPD provides in this case. I realize, though, that what I’ve provided here isn’t much of an argument, and invites a slew of questions and objections. For example, even if you’re sympathetic to the reading of the example above, I haven’t established the stronger claim of IPD, which makes no reference to causal coupling. This leaves it open to say, for example, that had the simulations been qualitatively identical by coincidence (for example, via being a cluster of Boltzmann brains) rather than being causally coupled, their subjects wouldn’t have been numerically identical. We might also wonder about beings whose lives are subjectively identical up to a particular point in time, and afterwards diverge. Are they the same conscious subject up until the point of divergence, or were they distinct all along? Finally, there are also some tricky issues concerning what it means for me to survive in this framework – if I’m a phenomenal type rather than a particular token instantiation of that type, it might seem like I could still exist in some sense even if all my token instances were destroyed (although would Star Wars still exist in some relevant sense if every copy of it were lost?).

Setting aside these worries for now, I’d like to quickly explore how the truth or falsity of IPD might actually matter – in fact, might matter a great deal! Consider a scenario in which some future utilitarian society decides that the best way to maximize happiness in the universe is by running a bunch of simulations of perfectly happy lives. Further, let’s imagine that their strategy for doing this involves simulating the same single exquisitely happy life a billion times over.

If IPD is right, then they’ve just made a terrible mistake: rather than creating a billion happy conscious subjects, they’ve just made one exquisitely happy subject with a billion (hedonically redundant) instantiations! To rectify this situation, however, all they’d need to do would be to introduce an element of variation into their Sims – some small phenomenal or psychological difference that meant that each of the billion simulations was subjectively unique. If IPD is right, this simple change would increase the happiness in the universe a billion-fold.

There are other potentially interesting applications of IPD. For example, coupled with a multiverse theory, it might have the consequence that you currently inhabit multiple distinct worlds, namely all those in which there exist entities that realize subjectively and psychologically identical mental states. Similarly, it might mean that you straddle multiple non-contiguous regions of space and time: if the very same simulation is run at time t1 and at time t2 a billion years apart, then IPD would suggest that a single subject cohabits both instantiations.

Anyway, while I doubt I’ve convinced anyone (yet!) of this particular crazyism of mine, I hope at least it might provide the basis for some interesting metaphysical arguments and speculations.

[Image credit: Paolo Tonon]

Wednesday, August 16, 2017

What Would (or Should) You Do With Administrator Access to Your Mind? (guest post by Henry Shevlin)

guest post by
Henry Shevlin

'Dial 888,' Rick said as the set warmed. 'The desire to watch TV, no matter what's on it.'

'I don't feel like dialling anything at all now,' Iran said.

'Then dial 3,' he said.

'I can't dial a setting that stimulates my cerebral cortex into wanting to dial! If I don't want to dial, I don't want to dial that most of all, because then I will want to dial, and wanting to dial is right now the most alien drive I can imagine.’

(PHILIP K. DICK, Do Androids Dream of Electric Sheep?)

--------------------------

We don’t have direct control over most of our beliefs and attitudes, let alone most of our drives and desires. No matter how much money was offered as an incentive, for example, I couldn’t will myself to believe in fairies by this evening. Similarly, figuring out how to rid ourselves of our involuntary prejudices and biases is tricky (see here for an attempt), and changing our basic drives (such as our sexual orientation) is almost certainly impossible.

That’s not to say that we have zero control over any of these things. If I wanted to increase the likelihood of having religious beliefs, for example, I might decide to start hanging out with religious people, or attending services. But it’s a messy and indirect path to acquiring new beliefs and values.

Imagine, then, how useful it would be if we had some kind of more direct ability to control our minds. In thinking about this possibility, a useful analogy comes from the idea of Administrator access on a computer. What if – perhaps for just a few hours a month – you could delve into your beliefs, your values, and your drives, and reconfigure them to your heart’s content, before ‘logging back in’ as your (now modified) self?

Some immediately tempting applications of this possibility are fairly clear. For one, we’d perhaps want to eliminate or tone down our most egregious cognitive biases: confirmation bias, post-purchase rationalization, the sunk cost fallacy, and so on. Similarly, we might want to rid ourselves of implicit prejudices that we may have against groups or individuals. Prejudiced against elderly people? Just go into the Settings menu and adjust the relevant slider to correct it. Irrationally resentful of a colleague who accidentally slighted you? A quick fix to remove the relevant emotion and you’re sorted.

Another attractive application might be to bring our immediate desires into line with our higher-order desires. Crave cigarettes and wish you didn’t? Tamp down the relevant first-order desire and you’re sorted. Wish you had the motivation to run in the mornings? Then ramp up the slider for “desire to go jogging”. We might even want to give ourselves some helpful false beliefs or ‘constructive myths’. Disheartened by the fact that you as an individual can do little to prevent climate change? Maybe a false belief that you can be a powerful agent for change will help you do good.

Finally, we come to the most controversial stuff, like values, drives, and memories. Take values first. Imagine that you find yourself trapped in a small town where you’re ostracised for your deviant political beliefs. One easy option might be to simply tweak your values to come into line with your community. Or imagine if you could adjust things like your sexual drives and orientation. Certainly, some people might feel relief at ridding themselves of certain kinks or fetishes that they found oppressive, while others might enjoy experimenting with recalibrating their sexuality. But we could also find that people were pressured or tempted to adjust their sexuality to bring it into line with the bigoted social expectations of their community, and it’s hard not to find that a morally troubling idea. Finally, imagine if we could wipe away unpleasant memories at will – the bad relationships, social gaffes, and painful insults could be gone in a moment. What could possibly go wrong with that?

As much as I like the idea of tweaking my mind, I feel uncomfortable about a lot of these possibilities. First, at the risk of sounding clichéd, it seems like the gains of personal growth are often as much in the journey as in the destination. So, take someone who learns to become more patient with others’ failings. Along the way, she’s probably going to pick up a bunch of other important realizations – of her own fallibility, perhaps, or of the distress she’s caused in the past by dismissing people. Skipping straight to the outcome threatens to cheat her, and us, of something valuable. Similarly, sometimes along the road of personal change, we realize that we’ve been aiming for the wrong thing. Someone who desperately wants to fit in with their peer group, for example, might slowly and painfully realize that they don’t like their peers as much as they thought. Skipping out the journey, then, not only robs us of potential goods we might find along the way, but also of the capacity to change our mind about where we’re going.

There might also be some kinds of extrinsic goods that would be lost if we could all tweak our minds so effortlessly. Take the example of someone who wishes he could fit in with his more conservative community. Even though he might relish not having values that are different from those around him, by holding onto them, he could be providing encouragement and cover for other political deviants in his town. In much the same way, diversity of opinion, outlook, and motivation may be valuable for the community at large, despite not always being pleasant for those in the minority. This can be true even if the majority perspective in the community is in the right: dissenters can helpfully force the dominant voices to articulate and justify their views.

Finally, we could run into serious unexpected consequences – maybe getting rid of the availability heuristic would turn out to drastically slow down my reasoning, for example, or perhaps making myself more prosocial could backfire on me if I live in an antisocial community. Still more catastrophic consequences might involve deviant paths to the fulfilment of desires. If (in Administrator mode) I give myself an overriding desire to be “fitter than the average person in my town”, for example, I might (as a normal user) go on to decide that the fastest way to achieve that goal is to kill all the healthy people in my community! More prosaically, it’s also easy to imagine people being tempted to abandon their difficult-to-achieve desires (like becoming rich and famous), replacing them instead with stuff that’s easy to achieve (collecting paperclips, say, or counting blades of grass). Perhaps they would be well advised to do so, but this is philosophically controversial to say the least!

While Administrator access to our own minds is of course just science fiction for now, I think it’s a useful tool for probing our intuitions about well-being, rationality, and personal change. It could also potentially guide us in situations where we do have more powerful ways of influencing the development of minds. This may be a big deal in the development of future forms of artificial intelligence, for example, but something similar arguably applies even when we’re deciding how to raise our children (should we encourage them to believe in Santa Claus?).

For my part, I doubt I could resist making a few tweaks to myself (maybe I’d finally get to make good use of that gym membership). But I’d do so carefully... and likely with a sense of trepidation and unease.

[image source]

Tuesday, August 08, 2017

The Ethical Significance of Toddler Tantrums (guest post by Henry Shevlin)

guest post by
Henry Shevlin

As any parent can readily testify, little kids get upset. A lot. Sometimes it’s for broadly comprehensible stuff – because they have to go to bed or to daycare, for example. Sometimes it’s for more bizarre and idiosyncratic reasons – because their banana has broken, perhaps, or because the Velcro on their shoes makes a funny noise.

For most parents, these episodes are regrettable, exasperating, and occasionally, a little funny. We rarely if ever consider them tragic or of serious moral consequence. We certainly feel some empathy for our children’s intense anger, sadness, or frustration, but we generally don’t make a huge deal about these episodes. That’s not because we don’t care about toddlers, of course – if they were sick or in pain we’d be really concerned. But we usually treat these intense emotional outbursts as just a part of growing up.

Nonetheless, I think if we saw an adult undergoing extremes of negative emotion of the kind that toddlers go through on a daily or weekly basis, we’d be pretty affected by it, and regard it as something to be taken seriously. Imagine you’d visited a friend for dinner, and upon announcing you were leaving, he broke down in floods of tears, beating on the ground and begging you not to go. Most of us wouldn’t think twice about sticking around until he felt better. Yet when a toddler pulls the same move (say, when we’re dropping them off with a grandparent), most parents remain, if not unmoved, then at least resolute.

What’s the difference between our reactions in these cases? In large part, I think it’s because we assume that when adults get upset, they have good reasons for it – if an adult friend starts sobbing uncontrollably, then our first thought is going to be that they’re facing real problems. For a toddler, by contrast – well, they can get upset about almost anything.

This makes a fair amount of sense as far as it goes. But it also seems to require that our moral reactions to apparent distress should be sensitive not just to the degree of unhappiness involved, but also to the reasons for it. In other words, we’re not discounting toddler tantrums because we think little kids aren’t genuinely upset, or are faking, but because the tantrums aren’t reflective of any concerns worth taking too seriously.

Interestingly, this idea seems at least prima facie in tension with some major philosophical accounts of happiness and well-being, notably hedonism and desire-satisfaction theory. By the lights of these approaches, it’s hard to see why toddler emotions and desires shouldn’t be taken just as seriously as adult ones. These episodes do seem like bona fide intensely negative experiences, so for utilitarians, every toddler could turn out to be a kind of negative utility monster! Similarly, if we adopt a form of consequentialism that aims at maximizing the number of satisfied desires, toddlers might be an outsize presence – as indicated by their tantrums, they have a lot of seemingly big, powerful, intense desires all the time (for, e.g., a Kinder Egg, another episode of Ben and Holly, or that one toy their older sibling is playing with).

One possibility I haven’t so far discussed is the idea that toddlers’ emotional behavior might be deceptive: perhaps the wailing toddler, contrary to appearances, is only mildly peeved that a sticker peeled off his toy. There may be something to this idea: certainly, toddlers have very poor inhibitory control, so we might naturally expect them to be more demonstrative about negative emotions than adults. That said, I find it hard to believe that toddlers really aren’t all that bothered by whatever it is that’s caused their latest tantrum. As much as I may be annoyed at having to leave a party early, for example, it’s almost inconceivable to me that it could ever trigger floods of tears and wailing, no matter how badly my inhibitory control had been impaired by the host’s martinis. (Nonetheless, I’d grant this is an area where psychology or neuroscience could be potentially informative, so that we might gain evidence that toddlers’ apparent distress behavior was misleading).

But if we do grant that toddlers really get very upset all the time, is it a serious moral problem? Or just an argument against theories that take things like emotions and desires to be morally significant in their own right, without being justified by good reasons? As someone sympathetic to both hedonism about well-being and utilitarianism as a normative ethical theory, I’m not sure what to think. Certainly, it’s made me consider whether, as a parent, I should take my son’s tantrums more seriously. For example, if we’re at the park, and I know he’ll have a tantrum if we leave early, should I prioritize his emotions above, e.g., my desire to get home and grade student papers? Perhaps you’ll think that in reacting like this, I’m just being overly sentimental or sappy – come on, what could be more normal than toddler tantrums! – but it’s worth being conscious of the fact that previous societies normalized ways of treating children that we nowadays would regard as brutal.

There’s also, of course, the developmental question: toddlers aren’t stupid, and if they realize that we’ll do anything to avoid them having tantrums, then they’ll exploit that to their own (dis)advantage. Learning that you can’t always get what you want is certainly part of growing up. But thinking about this issue has certainly made me take another look at how I think about and respond to my son’s outbursts, even if I can’t fix his broken bananas.

Note: this blogpost is an extended exploration of ideas I earlier discussed here.

[image: Angelina Koh]

Thursday, August 03, 2017

Top Science Fiction and Fantasy Magazines 2017

In 2014, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on awards and "best of" placements in the previous ten years. Since people seemed to find it useful or interesting, I've been updating it annually. Below is my list for 2017.

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies or standalones.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Eugie, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, or Adams "Year's Best" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list (see the code sketch after this list for how the tally works).

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) Lists of this sort do tend to reinforce the prestige hierarchy. I have mixed feelings about that. But since the prestige hierarchy is socially real, I think it's in people's best interest -- especially the best interest of outsiders and newcomers -- if it is common knowledge.

(8.) I take the list down to 1.5 points.

(9.) I welcome corrections.
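For concreteness, here is a minimal sketch in Python of the scoring rule from point (2.) above, run on invented placeholder records rather than the real award and anthology data:

    from collections import defaultdict

    # Points per story appearance, per point (2.) above.
    POINTS = {"award": 1.0, "years_best": 1.0, "locus": 0.5}

    # Placeholder records: (magazine, credit type), one per story appearance.
    appearances = [
        ("Asimov's", "award"),
        ("Asimov's", "years_best"),
        ("Clarkesworld", "locus"),
    ]

    scores = defaultdict(float)
    for magazine, credit in appearances:
        scores[magazine] += POINTS[credit]

    # Rank by total points, cutting the list off at 1.5 (see point (8.)).
    for magazine, pts in sorted(scores.items(), key=lambda kv: -kv[1]):
        if pts >= 1.5:
            print(f"{magazine}: {pts}")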

Results:

1. Asimov's (244.5 points)
2. Fantasy & Science Fiction (182)
3. Clarkesworld (129.5)
4. Tor.com (120) (started 2008)
5. Lightspeed (83.5) (started 2010)
6. Subterranean (79.5) (ceased 2014)
7. Strange Horizons (48)
8. Analog (47.5)
9. Interzone (45.5)
10. Beneath Ceaseless Skies (30.5) (started 2008)
11. Fantasy Magazine (27.5) (merged into Lightspeed 2012, occasional special issues thereafter)
12. Uncanny (19) (started 2014)
13. Apex (15.5)
14. Jim Baen's Universe (11.5) (ceased 2010)
14. Postscripts (11.5) (ceased short fiction in 2014)
14. Realms of Fantasy (11.5) (ceased 2011)
17. Nightmare (10) (started 2012)
18. The New Yorker (8)
19. Black Static (7)
20. Intergalactic Medicine Show (6)
21. Electric Velocipede (5.5) (ceased 2013)
22. Helix SF (5) (ceased 2008)
22. Tin House (5)
24. McSweeney's (4.5)
24. Sirenia Digest (4.5)
26. Conjunctions (4)
26. The Dark (4) (started 2013)
28. Black Gate (3.5)
28. Flurb (3.5) (ceased 2012)
30. Cosmos (3)
30. GigaNotoSaurus (3) (started 2010)
30. Harper's (3)
30. Shimmer (3)
30. Terraform (3) (started 2014)
35. Lady Churchill's Rosebud Wristlet (2.5)
35. Lone Star Stories (2.5) (ceased 2009)
35. Matter (2.5) (started 2011)
35. Slate (2.5)
35. Weird Tales (2.5) (off and on throughout period)
40. Aeon Speculative Fiction (2) (ceased 2008)
40. Futurismic (2) (ceased 2010)
42. Abyss & Apex (1.5)
42. Beloit Fiction Journal (1.5)
42. Buzzfeed (1.5)
42. Daily Science Fiction (1.5) (started 2010)
42. e-flux journal (1.5) (started 2008)
--------------------------------------------------

Comments:

(1.) The New Yorker, Tin House, McSweeney's, Conjunctions, Harper's, and Beloit Fiction Journal are prominent literary magazines that occasionally publish science fiction or fantasy. Cosmos, Slate, and Buzzfeed are popular magazines that have published a little bit of science fiction on the side. e-flux is a wide-ranging arts journal. The remaining magazines focus on the F/SF genre.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Clarkesworld (66.5)
2. Tor.com (61)
3. Asimov's (59)
4. Lightspeed (49.5)
5. F&SF (37.5)
6. Analog (21)
7. Beneath Ceaseless Skies (20)
8. Uncanny (19)
9. Subterranean (16)
10. Interzone (11.5)
11. Strange Horizons (11)
12. Nightmare (9)

(3.) One important thing left out of these numbers is the rise of good podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. None of these qualify for my list under the existing criteria, but podcasts are an increasingly important venue. Some text-based magazines, like Clarkesworld, Lightspeed, and Strange Horizons, also regularly podcast their stories.

(4.) Philosophers interested in science fiction might also want to look at Sci Phi Journal, which publishes both science fiction with philosophical discussion notes and philosophical essays about science fiction.

(5.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

(6.) The "Sad Puppy" kerfuffle threatens to damage the once-sterling reputation of the Hugos, but the Hugos are a small part of my calculation and the results are pretty much the same either way.

[image source; admittedly, it's not the latest issue!]

Wednesday, August 02, 2017

Welcome to the Blogosphere, Nomy Arpaly

One of my favorite living philosophers, Nomy Arpaly, has a new blog, The View from the Owl's Roost!

It's off to a great start, with a fun, insightful post about our excessive confidence in our limited imaginations.

Tuesday, August 01, 2017

Why Was Sci-Fi So Slow to Discover Time Travel? (Guest Post by Henry Shevlin)

guest post by
Henry Shevlin

Time travel is a more or less ubiquitous feature of modern sci-fi. Almost every long-running SF show – Star Trek, Futurama, The X-Files – will have a time travel episode sooner or later, and some, like Doctor Who, use time travel as their main narrative device. The same applies in Hollywood – blockbuster SF franchises like Terminator and Back to the Future employ it, as do quirkier pictures like Midnight in Paris. And there’s no shortage of time travel novels, from old favorites like A Christmas Carol to, perhaps most influentially, H. G. Wells’s wonderful social sci-fi novella The Time Machine.

I don’t find it particularly surprising that we’re so interested in time travel. We all engage in so-called ‘mental time travel’ (or chronesthesia) all the time, reviewing past experiences and imagining possible futures, and the psychological capacities involved are the subject of intense scientific and philosophical interest.

Admittedly, the label “mental time travel” may be a bit misleading here; most of what gets labelled mental time travel is quite different from the SF variant, consisting in episodic recall of the past or projection into the future rather than imagining our present selves thrown back in time. But I think we also do this latter thing quite a lot. To give a commonplace example, we’re all prone to engage in “coulda woulda shoulda” thinking: if only I hadn’t parked the car under that tree branch in a storm, if only I hadn’t forgotten my wedding anniversary, if only I hadn’t fumbled that one interview question. Frequently when we do this, we even elaborate how the present might have been different if we’d just done something a bit differently in the past. This looks a lot like the plots of some famous science fiction stories! Similarly, I’m sure we’ve all pondered what it would be like to experience different historical periods like the American Revolution, the Roman Empire, or the age of dinosaurs (you can even buy a handy t-shirt). More prosaically, I imagine many of us have also reflected on how satisfying it would be to relive some of our earlier life experiences and do things differently the second time round – standing up to the high school bullies, or studying harder (again, a staple of light entertainment).

Given the above, I had always assumed that time travel was part of fiction because it was simply part of us. Time travel narratives, in other words, were borrowed from the kind of imaginings we all do all the time. It was with huge surprise, then, that I discovered (while teaching a course on philosophy and science fiction) that time travel doesn’t appear in fiction until the 18th century, in Samuel Madden’s 1733 short novel “Memoirs of the Twentieth Century”. Specifically, this story imagines letters from the future being brought back to 1728. The first story of any kind (as far as I’ve been able to find) that features humans being physically transported back into the past doesn’t come until 1881, in Edward Page Mitchell’s short story “The Clock That Went Backward”.

Maybe this doesn’t seem so surprising – isn’t science fiction in the business of coming up with bizarre, never-before-seen plot devices? But in fact, it’s pretty rare for genuinely new ideas to show up in science fiction. Long before we had stories about artificial intelligence, we had the tales of Pinocchio and the Golem of Prague. Creatures on other planets? Lucian’s True History had beings living on the moon and sun back in the 2nd century AD. For huge spaceships, witness the mind-controlled Vimanas of the Sanskrit epics. And so on. And yet, for all the inventiveness of folklore and mythology, there’s very little in the way of time travel to be found. The best I’ve come up with so far is some time dilation in the story of Kakudmi in the Ramayana, and visions of the past in the Book of Enoch. But as far as I can tell, there’s nothing that fits the conventional time travel narratives we’re used to, namely physically travelling to ages past or future, let alone any idea that we might alter history.

What’s going on here? One possibility is that something changed in science or society in the 18th century that paved the way for stories about time travel. But what would that be, and how would it lead to time travel seeming more plausible? For example, if the first time travel literature had accompanied the emergence of general relativity (with all its assorted time-related weirdness), then that would offer a satisfying answer. However, Newtonian physics was already in place by the late 17th century, and it’s not clear which of Newton’s principles might pave the way for time travel narratives.

I’m very open to suggestions, but let me throw out one final idea: time travel narratives don’t show up in earlier fiction because they’re weird, unnatural, and counterintuitive – even weirder than the staples of folklore and mythology, like people being turned into animals. Time travel is just not the kind of thing that naturally occurs to humans to think about at all, and it’s only via a few fateful books in the 18th and 19th centuries, and the trope’s subsequent canonisation in The Time Machine, that it’s become established as a central plot device in science fiction.

But doesn’t that contradict what I said earlier about how we all often naturally think about time travel related scenarios, like changing the past, or witnessing historical events firsthand? Not necessarily. Maybe these kinds of thought patterns are actually inspired by time-travel science fiction. In other words, prior to the emergence of time travel as a trope, maybe people really didn’t daydream about meeting Julius Caesar or going back and changing history. Perhaps the past was seen simply as a closed book, rather than (in the memorable words of L. P. Hartley) just “a foreign country”. That’s not to suggest, of course, that people didn’t experience memories and regrets, but maybe they experienced them a little differently, with the past seeming simply an immutable background to the present.

I’m excited by the idea that a science fiction trope might have birthed a new and widespread form of thinking. Partly that’s because it suggests that science fiction may be more influential than we realize, and partly it’s because, as a philosopher, I’m interested in where patterns of thought come from. However, I’m very happy to be proven wrong in this conjecture – perhaps there are letters from the Middle Ages in which writers engage in precisely this kind of speculation. Or perhaps the emergence of time travel fiction in the 18th century can be explained in terms of some historical event I’ve missed. Or who knows: maybe there’s an untranslated gnostic manuscript out there where Jesus has a time machine....

[image source]

Thursday, July 27, 2017

How Everyone Might Reasonably Believe They Are Much Better Than Average

In a classic study, Ola Svenson (1981) found that about 80% of U.S. and Swedish college students rated themselves as being both safer and more skilled as drivers than other students in the room answering the same questionnaire. (See also Warner and Aberg 2014.) Similarly, most respondents tend to report being less susceptible to cognitive biases and sexist bias than their peers, as well as more honest and trustworthy -- and so on for a wide variety of positive traits: the "Better-Than-Average Effect".

The standard view is that this is largely irrational. Of course most people can't be justified in thinking that they are better than most people. The average person is just average! (Just between you and me, don't you kind of wish that all those dumb schmoes out there would have a more realistic appreciation of their incompetence and mediocrity? [note 1])

Particularly interesting are explanations of the Better-Than-Average Effect that appeal to people's idiosyncratic standards. What constitutes skillful driving? Person A might think that skillful driving is mostly a matter of getting there quickly and assertively while still being safe, whereas Person B might think it is more a matter of being calm, predictable, and within the law. Each person might then prefer the standard that best reflects their own manner of driving, and in that way justify viewing themselves as above average (e.g., Dunning et al. 1991; Chambers and Windschitl 2004).

In some cases, this seems likely to be just typical self-enhancement bias: because you want to think well of yourself, in cases where the standards are ambiguous, you choose the standards that make you look good. To change the example: if you want to think of yourself as intelligent and you're good at math, you might choose to think of mathematical skill as central to intelligence, while if you're good at practical know-how in managing people, you might choose to think of intelligence more in terms of social skills.

But in other cases of the Better-Than-Average Effect, the causal story might be much more innocent. There may be no self-flattery or self-enhancement at all, except for the good kind of self-enhancement!

Consider the matter abstractly first. Kevin, Nicholas, and Ana [note 2] all value Trait A. However, as people will, they have different sets of evidence about what is most important to Trait A. Based on this differing evidence, Kevin thinks that Trait A is 70% Property 1, 15% Property 2, and 15% Property 3. Nicholas thinks Trait A is 15% Property 1, 70% Property 2, and 15% Property 3. Ana thinks that Trait A is 15% Property 1, 15% Property 2, and 70% Property 3. In light of these rational conclusions from differing evidence, Kevin, Nicholas, and Ana engage in different self-improvement programs, focused on maximizing, in themselves, Properties 1, 2, and 3 respectively. In this, they succeed. At the end of their training, Kevin has the most Property 1, Nicholas the most Property 2, and Ana the most Property 3. No important new evidence arrives in the meantime that requires them to change their views about what constitutes Trait A.

Now when they are asked which of them has the most of Trait A, all three reasonably conclude that they themselves have the most of Trait A -- all perfectly rationally and with no "self-enhancement" required! All of them can reasonably believe that they are better than average.
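
To make the structure concrete, here is a minimal sketch in Python. The weights are the ones from the abstract example above; the property levels (0.9 after training, 0.3 otherwise) are just illustrative numbers I've made up.

```python
# Toy model of the abstract example: three agents, three properties.
# Each agent weights the properties according to their view of Trait A,
# and each has trained up the property they take to be most important.

weights = {  # each agent's view of Trait A: (Property 1, Property 2, Property 3)
    "Kevin":    (0.70, 0.15, 0.15),
    "Nicholas": (0.15, 0.70, 0.15),
    "Ana":      (0.15, 0.15, 0.70),
}

levels = {  # how much of each property each agent actually has (illustrative)
    "Kevin":    (0.9, 0.3, 0.3),  # maximized Property 1
    "Nicholas": (0.3, 0.9, 0.3),  # maximized Property 2
    "Ana":      (0.3, 0.3, 0.9),  # maximized Property 3
}

def trait_a(judge, person):
    """How much of Trait A `person` has, by `judge`'s standards."""
    return sum(w * l for w, l in zip(weights[judge], levels[person]))

for judge in weights:
    ranking = sorted(levels, key=lambda p: trait_a(judge, p), reverse=True)
    print(judge, "ranks:", ranking)
# Each judge ranks themselves first (0.72 for self vs. 0.39 for the others),
# with no self-enhancement bias anywhere in the model.
```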

Real-life cases won't perfectly match that abstract example, of course, but many skills and traits might show some of that structure. Consider skill as a historian of philosophy. Some people, as a result of their training and experience, might reasonably come to view deep knowledge of the original language of the text as most important, while others might view deep knowledge of the historical context as most important, and still others might view deep knowledge of the secondary literature as most important. Of course all three are important and interrelated, but historians reasonably disagree substantially in their comparative weighting of these types of knowledge -- and, I think, not always for self-serving or biased reasons. It's a difficult matter of judgment. Someone committed to the first view might then invest a lot of energy in mastering the details of the language, someone committed to the second view might invest a lot of energy in learning the broader historical context, and someone committed to the third view might invest a lot of energy in mastering a vast secondary literature. Along the way, they might not encounter evidence that requires them to change their visions of what makes for a good historian. Indeed, they might quite reasonably continue to be struck by the interpretative power they are gaining by close examination of language, historical context, or the secondary literature, respectively. Eventually, each of the three might very reasonably regard themselves as a much better historian of philosophy than the other two, without any irrationality, self-flattery, or self-enhancing bias.

I think this might be especially true in ethics. A conservative Christian, for example, might have a very different ethical vision than a liberal atheist. Each might then shape their behavior according to this vision. If both have reasonable ethical starting points, then at the end of the process, each person might reasonably regard themselves as morally better than the other, with no irrational self-enhancing bias. And of course, this generalizes across groups.

I find this to be a very convenient and appealing view of the Better-Than-Average Effect, quite comforting to my self-image. Of course, I would never accept it on those grounds! ;-)

Friday, July 21, 2017

New Journal! The Journal of Science Fiction and Philosophy

This looks very cool:

Call for Papers

General Theme

The Journal of Science Fiction and Philosophy, a peer-reviewed, open access publication, is dedicated to the analysis of philosophical themes present in science fiction stories in all formats, with a view to their use in the discussion, teaching, and narrative modeling of philosophical ideas. It aims at highlighting the role of science fiction as a medium for philosophical reflection.

The Journal is currently accepting papers and paper proposals. Because this is the Journal’s first issue, papers specifically reflecting on the relationship between philosophy and science fiction are especially encouraged, but all areas of philosophy are welcome. Any format of SF story (short story, novel, movie, TV series, interactive) may be addressed.

We welcome papers written with teaching in mind! Have you used an SF story to teach a particular item in your curriculum (e.g., using the movie Gattaca to introduce the ethics of genetic technologies, or The Island of Dr. Moreau to discuss personhood)? Turn that class into a paper!

Yearly Theme

Every year the Journal selects a Yearly Theme. Papers addressing the Yearly Theme are collected in a special section of the Journal. The Yearly Theme for 2017 is All Persons Great and Small: The Notion of Personhood in Science Fiction Stories.

SF stories are in a unique position to help us examine the concept of personhood, by making the human world engage with a bewildering variety of beings with person-like qualities – aliens of bizarre shapes and customs, artificial constructs conflicted about their artificiality, planetary-wide intelligences, collective minds, and the list goes on. Every one of these instances provides the opportunity to reflect on specific aspects of the notion of personhood, such as, for example: What is a person? What are its defining qualities? What is the connection between personhood and morality, identity, rationality, basic (“human?”) rights? What patterns do SF authors identify when describing the oppression of one group of persons by another, and how do they reflect past and present human history?

The Journal accepts papers year-round. The deadline for the first round of reviews, both for its general and yearly theme, is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

Wednesday, July 19, 2017

Why I Evince No Worry about Super-Spartans

I'm a dispositionalist about belief. To believe that there is beer in the fridge is nothing more or less than to have a particular suite of dispositions. It is to be disposed, ceteris paribus (all else being equal, or normal, or absent countervailing forces), to behave in certain ways, to have certain conscious experiences, and to transition to related mental states. It is to be disposed, ceteris paribus, to go to the fridge if one wants a beer, and to say yes if someone asks whether there is beer in the fridge; to feel surprise should one open the fridge and find no beer, and to visually imagine one's beer-filled fridge when trying to remember the contents of the kitchen; to be ready to infer that one's Temperance League grandmother would be disappointed, and to see nothing wrong with plans that will succeed only if there is beer in the fridge. If one has enough dispositions of this sort, one believes that there is beer in the fridge. There's nothing more to believing than that. (Probably some sort of brain is required, but that's an implementational detail.)

To some people, this sounds uncomfortably close to logical behaviorism, a view according to which all mental states can be analyzed in terms of behavioral dispositions. On such a view, to be in pain, for example, just is, logically or metaphysically, to be disposed to wince, groan, avoid the stimulus, and say things like "I'm in pain". There's nothing more to pain than that.

It is unclear whether any well-known philosopher was a logical behaviorist in this sense. (Gilbert Ryle, the most cited example, was clearly not a logical behaviorist. In fact, the concluding section of his seminal book The Concept of Mind is a critique of behaviorism.)

Part of the semi-mythical history of philosophy of mind is that in the bad old days of the 1940s and 1950s, some philosophers were logical behaviorists of this sort; and that logical behaviorism was abandoned due to several fatal objections that were advanced in the 1950s and 1960s, including one objection by Hilary Putnam that turned on the idea of super-spartans. Some people have suggested that 21st-century dispositionalism about belief is subject to the same concerns.

Putnam asks us to "engage in a little science fiction":

Imagine a community of 'super-spartans' or 'super-stoics' -- a community in which the adults have the ability to successfully suppress all involuntary pain behavior. They may, on occasion, admit that they feel pain, but always in pleasant well-modulated voices -- even if they are undergoing the agonies of the damned. They do not wince, scream, flinch, sob, grit their teeth, clench their fists, exhibit beads of sweat, or otherwise act like people in pain or people suppressing their unconditioned responses associated with pain. However, they do feel pain, and they dislike it (just as we do) ("Brains and Behavior", 1965, p. 9).


A couple of pages later, Putnam expands the thought experiment:

[L]et us undertake the task of trying to imagine a world in which there are not even pain reports. I will call this world the 'X-world'. In the X-world we have to deal with 'super-super-spartans'. These have been super-spartans for so long, that they have begun to suppress even talk of pain. Of course, each individual X-worlder may have his private way of thinking about pain.... He may think to himself: 'This pain is intolerable. If it goes on one minute longer I shall scream. Oh No! I mustn't do that! That would disgrace my whole family...' But X-worlders do not even admit to having pains (p. 11).

Putnam concludes:

"If this last fantasy is not, in some disguised way, self-contradictory, then logical behaviourism is simply a mistake.... From the statement 'X has a pain' by itself no behavioral statement follows -- not even a behavioural statement with a 'normally' or 'probably' in it. (p. 11)

Putnam's basic idea is pretty simple: If you're a good enough actor, you can behave as though you lack mental state X even if you have mental state X, and therefore any analysis of mental state X that posits a necessary connection between mentality and behavior is doomed.

Now I don't think this objection should have particularly worried any logical behaviorists (if any existed), much less actual philosophers sometimes falsely called behaviorists such as Ryle, and still less 21st-century dispositionalists like me. Its influence, I suspect, has more to do with how it conveniently disposes of what was, even in 1965, only a straw man.

We can see the flaw in the argument by considering parallel cases of other types of properties for which a dispositional analysis is highly plausible, and noting how Putnam's argument seems to apply equally well to them. Consider solubility in water. To say of an object that it is soluble in water is to say that it is apt to dissolve when immersed in water. Being water-soluble is a dispositional property, if anything is.

Imagine now a planet on which there is only one small patch of water. The inhabitants of that planet -- call it PureWater -- guard that patch jealously with the aim of keeping it pure. Toward this end, they have invented technologies so that normally soluble objects like sugar cubes will not dissolve when immersed in the water. Some of these technologies are moderately low-tech membranes which automatically enclose objects as soon as they are immersed; others are higher-tech nano-processes, implemented by beams of radiation, that ensure that stray molecules departing from a soluble object are immediately knocked back to their original locations. If Putnam's super-spartans objection is correct, then by parity of reasoning the hypothetical possibility of the planet PureWater would show that no dispositional analysis of solubility could be correct, even here on Earth. But that's the wrong conclusion.

The problem with Putnam's argument is that, as any good dispositionalist will admit, dispositions only manifest ceteris paribus -- that is, under normal conditions, absent countervailing forces. (This has been especially clear since Nancy Cartwright's influential 1983 book on the centrality of ceteris paribus conditions to scientific generalizations, but Ryle knew it too.) Putnam quickly mentions "a behavioural statement with a 'normally' or 'probably' in it", but he does not give the matter sufficient attention. Super-super-spartans' intense desire not to reveal pain is a countervailing force, a defeater of the normality condition, like the technological efforts of the scientists of PureWater. To use hypothetical super-super-spartans against a dispositional approach to pain is like saying that water-solubility isn't a dispositional property because there's a possible planet where soluble objects reliably fail to dissolve when immersed in water.
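
The logical structure here is simple enough to put in a few lines of code. Here is a toy sketch in Python -- a model of the ceteris paribus clause, of course, not an analysis of pain or of solubility:

```python
# Toy model of a dispositional property with a ceteris paribus clause.
# A disposition manifests when triggered, unless some countervailing
# force (a "defeater") interferes -- but the disposition itself persists.

class Sugar:
    soluble = True  # the dispositional property, had on Earth and on PureWater alike

def dissolves(obj, immersed, defeaters=()):
    """Manifestation requires the disposition, the trigger, and no defeaters."""
    return obj.soluble and immersed and not defeaters

cube = Sugar()
print(dissolves(cube, immersed=True))                           # True: normal conditions
print(dissolves(cube, immersed=True, defeaters=("membrane",)))  # False: PureWater tech
print(cube.soluble)  # True either way: blocking the manifestation
                     # does not remove the disposition
```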

Most generalizations admit of exceptions. Nerds wear glasses. Dogs have four legs. Extraverts like parties. Dropped objects accelerate at 9.8 m/s^2. Predators eat prey. Dispositional generalizations are no different. This does not hinder their use in defining mental states, even if we imagine exceptional cases where the property is present but something dependably interferes with its manifesting in the standard way.

Of course, if some of the relevant dispositions are dispositions to have certain types of related conscious experiences (e.g., inner speech) and to transition to related mental states (e.g., jumping to related conclusions), as both Ryle and I think, then the super-spartan objection is even less apt, because super-super-spartans do, by hypothesis, have those dispositions. They manifest such internal dispositions when appropriate, and if they fail to manifest their pain in outward behavior, that's because manifestation is prevented by an opposing force.

(PS: Just to be clear, I don't myself accept a dispositional account of pain, only of belief and other attitudes.)

Thursday, July 13, 2017

THE TURING MACHINES OF BABEL

[first published in Apex Magazine, July 2017]

In most respects, the universe (which some call the Library) is everywhere the same, and we at the summit are like the rest of you below.  Like you, we dwell in a string of hexagonal library chambers connected by hallways that run infinitely east and west.  Like you, we revere the indecipherable books that fill each chamber wall, ceiling to floor.  Like you, we wander the connecting hallways, gathering fruits and lettuces from the north wall, then cast our rinds and waste down the consuming vine holes.  Also like you, we sometimes turn our backs to the vines and gaze south through the indestructible glass toward sun and void, considering the nature of the world.  Our finite lives, guided by our finite imaginations, repeat infinitely east, west, and down.
But unlike you, we at the summit can watch the rabbits.
The rabbits!  Without knowing the rabbits, how could one hope to understand the world?

#

The rabbit had entered my family's chamber casually, on a crooked, sniffing path.  We stood back, stopping mid-sentence to stare, as it hopped to a bookcase.  My brother ran to inform the nearest chambers, then swiftly returned.  Word spread, and soon most of the several hundred people who lived within a hundred chambers of us had come to witness the visitation -- Master Gardener Ferdinand in his long green gown, Divine Chanter Guinart with his quirky smile.  Why hadn't our neighbors above warned us that a rabbit was coming?  Had they wished to watch the rabbit, and lift it, and stroke its fur, in selfish solitude?
The rabbit grabbed the lowest bookshelf with its pink fingers and pulled itself up one shelf at a time to the fifth or sixth level; then it scooted sideways, sniffing along the chosen shelf, fingers gripping the shelf-rim, hind feet down upon the shelf below.  Finding the book it sought, it hooked one finger under the book's spine and let it fall.
The rabbit jumped lightly down, then nudged the book across the floor with its nose until it reached the reading chair in the middle of the room.  It was of course taboo for anyone to touch the reading chair or the small round reading table, except under the guidance of a chanter.  Chanter Guinart pressed his palms together and began a quiet song -- the same incomprehensible chant he had taught us all as children, a phonetic interpretation of the symbols in our sacred books.
The rabbit lifted the book with its fingers to the seat of the chair, then paused to release some waste gas that smelled of fruit and lettuce.  It hopped up onto the chair, lifted the book from chair to reading table, and hopped onto the table.  Its off-white fur brightened as it crossed into the eternal sunbeam that angled through the small southern window.  Beneath the chant, I heard the barefoot sound of people clustering behind me, their breath and quick whispers.
The rabbit centered the book in the sunbeam.  It opened the book and ran its nose sequentially along the pages.  When it reached maybe the 80th page, it erased one letter with the pink side of its tongue, and then with the black side of its tongue it wrote a new letter in its place.
Its task evidently completed, the rabbit nosed the book off the table, letting it fall roughly to the floor.  The rabbit leaped down to chair then floor, then smoothed and licked and patiently cleaned the book with tongue and fingers and fur.  Neighbors continued to gather, clogging room and doorways and both halls.  When the book-grooming was complete, the rabbit raised the book one shelf at a time with nose and fingers, returning it to its proper spot.  It leaped down again and hopped toward the east door.  People stepped aside to give it a clear path.  The rabbit exited our chamber and began to eat lettuces in the hall.
With firm voice, my father broke the general hush: "Children, you may gently pet the rabbit.  One child at a time."  He looked at me, but I no longer considered myself a child.  I waited for the neighbor children to have their fill of touching.  We lived about a hundred thousand levels from the summit, but even so impossibly near the top of our infinite world, one might reach old age only ever having seen a couple of dozen visitations.  By the time the last child left, the rabbit had long since finished eating.
The rabbit hopped toward where I sat, about twenty paces down the hall, near the spiral glass stairs.  I intercepted it, lifting it up and gazing into its eyes.
It gazed silently back, revealing no secrets.

[continued here]

[author interview]

-----------------------------------------

Related:

What Is the Likelihood That Your Mind Is Constituted by a Rabbit Reading and Writing on Long Strips of Turing Tape? (Jul 5, 2017)

Nietzsche's Eternal Recurrence, Scrambled Sideways (Oct 31, 2012)