(Under Review*, In Preparation**)
1. Epistemic Norms and Epistemic Evaluations*
Abstract: Jane Friedman points out a tension between traditional evidential epistemic norms and the zetetic norms that guide our inquiry more generally. This paper proposes a new solution to the epistemic/zetetic tension by arguing that traditional epistemic norms do not guide our epistemic actions. The epistemic actions that traditional epistemic norms concern are usually spontaneous and involuntary, so guides for those actions would be inert and purposeless. Instead, I argue that we use traditional epistemic norms merely to evaluate other agents in our epistemic network: if we find someone violating traditional epistemic norms, we regard them as a bad informant and modify our relationship with them accordingly. Understood evaluatively, traditional epistemic norms are no longer in tension with zetetic norms of inquiry because the two belong to different normative domains. Borrowing a distinction from John Greco, I argue that traditional epistemic norms concern knowledge generation while zetetic norms concern knowledge distribution.
2. Virtual-Inclusive Categories and the Simulation Hypothesis*
Abstract: David Chalmers argues that Hilary Putnam’s argument against the brain-in-a-vat hypothesis, based on the causal constraint on reference, does not apply to the simulation hypothesis, which says that “we are living in a computer simulation,” because the latter employs virtual-inclusive categories, such as computer or virtual reality, that are real both inside and outside a simulation. This result is astonishing, as the difference between the two hypotheses lies only in their sci-fi details, which seem philosophically inert. Chalmers’s argument relies on a structuralist theory of virtual-inclusive categories, on which categories like computer are defined by their structural properties. In this paper, I argue for an alternative theory: the essence of virtual-inclusive artifact categories lies in their functional properties instead. Consequently, I argue that the strategy of bypassing semantic externalism with virtual-inclusive categories fails, because the relevant functional properties, unlike structural properties, are observer-relative and therefore not symmetric with respect to virtual simulations. The brain-in-a-vat and simulation hypotheses are thus equally answerable to the causal constraint on reference. This paper thereby serves as a necessary first step in defending Putnam-style anti-skeptical arguments in light of the simulation hypothesis.
3. Understanding How It Works Defeats Mental Attribution*
Abstract: This paper argues for a new epistemic condition on machine mentality: we are justified in attributing mentality only to systems whose internal workings we cannot fully understand. I call this the mechanistic opacity condition. Methodologically, I argue that the study of machine consciousness should be continuous with the study of other minds. Through an inference to the best explanation, I argue that for any mechanistically transparent system there always exists a mechanistic explanation better than a mental explanation, rendering consciousness attribution to such systems unjustified. Despite their behavioral similarities to humans, current AI systems cannot be considered conscious, intelligent, intentional, or mental in any important sense, simply because we understand how they work. The condition explains our intuitions about classic thought experiments (the China Brain, Blockhead, the Chinese Room) and grounds a principled skepticism about AI consciousness without biological chauvinism.
4. Alignment Is All You Need: No Catastrophic Risk to Worry About at the Intentional Level**
Abstract: Philosophers debate whether superintelligent agents might have desires incompatible with those of humanity. If so, such agents could pose catastrophic risks, and the standard alignment approach to ensuring AI safety might be inadequate. In this paper, I argue that concerns about artificial systems’ desires are misplaced. Previous work has assumed that machines can have desires similar to humans’ without considering possible theories of machine desire. This paper surveys two theories of machine desires. Both suggest that any meaningful interpretation of a machine’s desires derives from the desires of its developers. I therefore argue that there is no AI safety concern at the intentional level: to ensure AI safety, our focus should instead be on the design level.
5. The Personal Identity Assumption in External World Skeptical Scenarios**
Abstract: Dreams, brain-in-a-vat simulations, and other external world skeptical scenarios carry an implicit personal identity assumption: that the real subject of the scenario—the sleeping person or the envatted brain—is numerically identical to the subject of the first-person dreamed or simulated experience. Drawing on considerations of identity relations in simulations and virtual reality technology, this paper argues that this assumption is unwarranted. All prominent theories of personal identity suggest that the real subject of the skeptical scenario and the subject of the first-person experience are not identical. Consequently, standard formulations of external world skepticism fail to pose a genuine epistemological threat, as they presuppose an identity relation that their own structure undermines.