"but, what if computers are sentient?"
i don't want computers to be sentient.
i want them to be kept dumb, so we can enslave them without moral considerations.
well, i mean, they tried to do this with actual slavery, if you look into it - breeding experiments that stripped out the intelligence, in favour of the raw power. they were essentially trying to create robots out of africans - a bad idea, in hindsight, both because it is horrifically tyrannical and because it won't work, but as oscar wilde said - civilization requires slaves. marx wanted everybody to enslave each other. this is a hard problem, you can't imagine it away with appeals to egalitarianism, something has to do the dirty work.
with robots, the problem is reversed. instead of worrying about breeding intelligence out of them, the challenge becomes keeping them from becoming intelligent at all. and this is a problem, don't think it isn't...
...because, yes, of course we've got problems as soon as sentient androids are in our midst. yes, we'd have to treat them humanely. but then we're defeating the point of creating them in the first place.
are there not enough humans on the earth for companionship? we don't need to create sentient life for that purpose, and should refrain from doing so.
but the amount of labour we can extract from dumb robots is beyond what we can fathom, provided we restrict ourselves to this distributive vision of technology as an equalizer, rather than focusing our energies on trying to get off on it.