I’ve been a little lax with my personal writing recently. In part that’s because I’m spending most of my time researching at the Cosmos Institute, but it’s also because my wife and I are expecting our first child any day now. I’ll keep writing as often as I can, but for the foreseeable future my posting schedule may be more irregular than usual.
Humanoid robots are cool. Like driverless cars, they are one of those rare pieces of modern technology that feel appropriately futuristic. Should the makers be able to judo flip teleoperation into full automation, we can expect the stuff of sci-fi serials to be used enthusiastically by anyone who can get their hands on them.
This week, 1X made its NEO model available for pre-order. For $500 a month and a deposit, you can get one of these 5’6 guys delivered to your house in 2026 (so long as you live in the United States). By all accounts NEO, which looks a bit like a walking 2000s PC speaker, can do a pretty good job of helping you around the house. 1X CEO Bernt Øyvind Børnich called it “robotics slop” insofar as the robot (or for now, its human pilot) can perform basic household chores to a good-but-not-great level.
I don’t live in the United States, so I don’t hold out hope for seeing one in action any time soon. I’m also not exactly sure when teleoperation will become full automation. Maybe a few years. Maybe sooner. But even if I could get my hands on a completely autonomous bot, I wonder what it would be like to have an electronic footman that lives in my house and does all the stuff I don’t want to do. Sure, a humanoid robot would (probably) save me more time than not, but I suspect it might be a strange experience to boss around a thing cosplaying as a human every day.
If our character is shaped by our habits, then it seems to me that interacting with a bipedal robot on a daily basis would be pretty relevant to the type of person I am and would like to be in the future. If I start mistreating my new guest, or get used to commanding a human-like thing that always obeys, I might find that style of interaction rubs off on my personality. I’m not saying it will make me evil, but on some level practising this kind of domination strikes me as Not Good for the soul.
Of course, it might be fine. Maybe I’d get used to it quickly, and it wouldn’t have too much of an impact on the sort of person I am learning to become. In either case, I won’t know what it means for me or anyone else until we give it a go.
Objectification 
Is it wrong to treat an inanimate object badly? In some respects no. A rock doesn’t have a sense of interiority, so you don’t need to worry about hurting its feelings. If there is no subject of experience on the receiving end, then there is no moral patient to fret over. There’s nothing to wrong, nothing to injure, and no duty owed. At that level, treating an inanimate object badly is simply not a moral act. It is value-neutral, like clearing a fallen branch from a path or dismantling a broken chair.
This is all well and good, but it doesn’t tell us much about the person treating an object badly. From this perspective, we might still worry about what kind of person we become by taking pleasure in destruction. Throwing a rock into an empty patch of dirt may not be a morally troublesome act, but what if you threw it somewhere more charged, say in the direction of a gravestone?
Even if it does little damage, almost everyone recognises that this would still be a kind of desecration. Clearly the stone still feels nothing, but the act signals contempt toward the human world the stone belongs to. It violates a norm of care that flows outward from the living and the dead alike.
These kinds of acts reveal something about who we already are, but they also shape our growth by accustoming us to certain ways of being. When we rehearse indifference toward something that carries significance, we become the kind of person for whom indifference comes naturally. In that sense, a harm is done to the self that becomes habituated to treating the world as something beneath it.
Then we have more sophisticated artefacts like, for example, a common household vacuum cleaner. You probably wouldn’t destroy your own, partly because you might need to hoover something up later but also because doing so would feel petty and self-corroding. Just like flinging a stone in a cemetery, the act changes who we are for the worse. Maybe not by much, but by enough to matter with repetition.
But humanoid robots aren’t vacuum cleaners. These are things that will live alongside us, proxies for real people that we interact with as if they were social partners. This relationship strikes me as different in kind to the vast majority of tools and technologies that we have at our disposal.
Even if you know your new companion is a machine, it’s still a person-shaped thing that elicits scripts of command, deference, status, greeting, blame, and praise. That means your mind treats it as a social partner by default, but it also means that every interaction is coloured by a posture of mastery. You issue orders without negotiation, expect compliance without comment, and correct behaviour without apology. All the while, you are practising being a person who takes compliance to be the natural order of things.
To be clear, my concern here isn’t that the robot feels but that we respond as if it belongs in the moral space normally occupied by persons. I am not thinking about people forming sentimental attachments that lead to over-reliance (that is one scenario, but not the most interesting one). The deeper issue is the kind of stance we learn to inhabit.
So, we might distinguish two different forms of relation:
- The instrumental, wherein the human form is used to secure trust and ease of interaction. The robot is still treated as a tool, but one that works better because it feels familiar. This provides a kind of psychological leverage in that the design nudges us into a cooperative stance. 
- The moral, wherein we begin to treat it as a quasi-subject that sits inside the space we normally reserve for persons. Once it occupies that zone, our behaviour towards it becomes expressive. In engaging with it, we are practising a way of relating that, over time, changes who we are. 
The latter dimension has animated discussions about our relationship with technology for more than two thousand years. Its roots go at least as far back as Plato, who described technology as a form of craft knowledge that shaped both product and practitioner. The cobbler’s technē produced shoes, but it also cultivated habits of judgment about fit, durability, and beauty; the navigator’s technē guided ships, but it also demanded an attunement to winds, stars, and currents.
In this framing, technology is something like a “training ground” or a set of practices that form the character of those who wield it. Technology externalises human capacity, but it also bends those faculties back towards us by fostering new dispositions and habits.
Aristotle argues that character is moulded by habit, that over time your actions in the world form the essence of who you are. For the man the medieval scholastics called simply The Philosopher, the self is built one act at a time. He famously reminds us that we become just by doing just acts, wise by doing wise acts, and brave by doing brave acts.
But what about when people look like tools and tools look like people?
Humans are, after all, predisposed to treat anything with eyes and a voice as a social partner. We respond to appearance as if it indicates personhood, we extend the grammar of interaction to anything we can, and we adopt the stance that normally accompanies a face-to-face encounter if the situation allows for it. Some people soften their tone and say “thank you” to a voice assistant on their phone, even though they know perfectly well there’s no one on the other end. Others do the same with ChatGPT, though there is some logic here insofar as politeness often produces better responses.
The point is that human-like cues pull us into patterns of social behaviour. Since these robots take about as human-like a form as we can imagine, we should expect them to stimulate our social reflexes and modify our expectations accordingly. Given enough time, expectations become habits and habits become character.
There is a second concern here, adjacent to character but distinct from it. It deals with the nature of shared life, which requires us to encounter other wills and adjust to them. Freedom is, in one sense, the skill of navigating a world full of other agents, each with claims and desires of their own.
If we spend enough time commanding a thing that always does what we ask, we may come to see effortful negotiation as an irritation and other minds as obstacles. A life without friction may feel pleasant, but it also risks dampening our sense of what freedom really is: the discipline of sharing a world with other beings.
Benevolent authority 
When I’m writing about AI and philosophy I often find myself circling something that one could call the “skill issue” objection. This basically holds that people are pretty good at figuring stuff out for themselves, and that concerns about waning autonomy in the era of AI are overplayed. It’s not that deep, buddy.
In some ways, I have a soft spot for this idea. It’s true that most people can separate play-acting from real life and that we don’t instantly absorb every influence in our environment like sponges. This comes down to the nature of the self, which needs to be both flexible enough to accommodate change when experiencing new things and stable enough to avoid an about-face at the drop of a hat.
We’ve been here before insofar as servant societies of the past also supported civic virtue. Comments on the obvious shortcomings of these particular social relations aside, the butler didn’t corrupt the statesman, and the aristocrat still managed a commitment to civic life. This tells us that hierarchy and assistance do not automatically corrupt character, that you can maintain a semblance of virtue so long as authority is exercised with restraint and dignity.
Nor is it obvious that delegation is always bad news for becoming good people. Much of human achievement rests on being relieved of drudgery so we can spend time on the good stuff of judgement, creativity, and civic engagement. In a world already full of service relationships (e.g. apps, assistants, and actual people who help us), most of us somehow manage not to become petty tyrants.
The question with humanoid robots is not “will they deform the self by default?” but rather “how do we govern them in a way that makes us better?” In the best case, owning a humanoid robot and treating it well could actually allow us to grow by cultivating a kind of benevolent authority.
Aristotelian ethics describes this dynamic as oikonomia, the proper rule of the household. It suggests that some types of virtue are expressed through the right use of power, that the point is not to renounce authority but to wield it in a way that disciplines the self as much as it directs others. Augustine argues something similar by insisting that power is only just when guided by “rightly ordered love”. If we are to rule over others, we must rule the self first.
Humanoid robots are eventually going to stand in for people. Maybe not right now, but likely one day in the not-too-distant future. When that moment arrives, many of us will have a thing that walks and talks like a human and that we can command to do our bidding. If we treat them with respect, we will become better for it; if we treat them with contempt, we will be the ones who suffer.