Would brain-derived AI retain human values?

In "The Singularity: A Philosophical Analysis" [1], David Chalmers argues that human-based AI systems are likely to retain our values, and that brain emulation, or brain-derived AI more generally, would therefore have prudential benefits. In this essay, I will rebut this claim, focusing on why an AI's lack of human physical limitations likely precludes its adoption of human values. Consider:

1. Human physical limitations give rise to human values.

2. Without human physical limitations, an agent is unlikely to hold human values.

3. A brain-derived AI would not be subject to human physical limitations.

4. Therefore, a brain-derived AI is unlikely to hold human values.
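To make the argument's structure explicit, here is a minimal formalization in first-order logic; the predicates H and V and the constant b are labels I introduce purely for illustration. Let H(x) mean "x is subject to human physical limitations", let V(x) mean "x is likely to hold human values", and let b denote a brain-derived AI. Premises 2 and 3 then yield the conclusion by universal instantiation and modus ponens:

$$\forall x\,\big(\lnot H(x) \rightarrow \lnot V(x)\big), \qquad \lnot H(b) \;\;\vdash\;\; \lnot V(b)$$

Premise 1 does not figure in the derivation itself; it is the motivation for accepting premise 2.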

I will focus on one near-universal human precept derived from our physical limitations: humans should not kill other humans. Here, 'near-universal' means 'adhered to by the vast majority of humans the vast majority of the time'.

The only way to kill a human is to destroy their physical body; a human who could not be killed would cease to be human in some fundamental sense. Our understanding of the cessation of human life instils in us an instinctive urge to avoid it, both in ourselves and in others. Mortality, the paramount human limitation, imbues human life with meaning, and human values follow as corollaries.

While I am sympathetic to the view that a brain-derived AI could have a subjective mental life identical to that of a biological brain [2], I posit that such an AI would lack the corporeal circumscription experienced by all humans, and as such could not truly embody our most fundamental human values (setting aside further-fact or hard-problem perspectives).

One fair objection to this viewpoint comes from the view proposed in [3], namely that human brains have inherent structures that predispose us to certain moral codes. One consequence is that, while moral codes may vary across cultures within a given range, they share structural features. On this basis, objectors may claim that any brain-derived AI would inherit human-like moral dispositions by virtue of the structural constraints of the brains from which it is derived.

My response is that this objection omits the possibility of substrate manipulation: an AI could change its physical makeup, and there is no reason why it should retain its original human-derived architecture. Even if the objection is coherent, its proponents must accept that fundamental transformations of the underlying cognitive structure would produce very different moral codes, with no guarantee of anthropocentricity [4]. In conclusion, since a brain-derived AI would not be beholden to the same physical limitations as humans, it seems unlikely that it would maintain our value system.

References

[1] David J. Chalmers. "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies, 17(9–10): 7–65, 2010.

[2] Barry Dainton. "On Singularities and Simulations." Journal of Consciousness Studies, 19(1–2): 42–85, 2012.

[3] David J. Chalmers. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies, 2(3): 200–219, 1995.

[4] J. Storrs Hall. "Ethics for Machines." In Machine Ethics, pages 28–44, 2000.