Some day, someone will come up with an equation that accurately defines the tipping point between our natural laziness and our willingness to give up personal privacy to relieve some of it. I don’t think today is that day.
Years ago, I wrote about a robot that you could teleoperate from anywhere in the world over WiFi. We tried it in our offices, and I even ran it around my home, where it freaked out my family. Back then, we didn’t think too much about the privacy implications because it was me, not a third party, navigating the robot and seeing what it could see.
TechRadar AI Week 2025
This article is part of TechRadar’s AI Week 2025. Covering the basics of artificial intelligence, we’ll show you how to get the most from the likes of ChatGPT, Gemini, or Claude, alongside in-depth features, news, and the main talking points in the world of AI.
With ads for 1X’s still-unreleased robot appearing on subway ad screens, and with only limited testing so far in some journalists’ homes, consumers are being asked to consider their willingness to invite the 5’6″, 66 lb humanoid robot into their homes. While the $20,000 (or $499-a-month-forever rental) robot is designed for autonomy, the reality is that it could encounter numerous unknown scenarios in your home. In those instances, 1X technicians can, with your permission, take over, teleoperate, and ostensibly train the robot’s Redwood AI.
Even in casual conversations, this news gives people pause, so we decided to survey our nearly half a million WhatsApp followers with this question:
“The 1X Neo is a new $20,000 home robot that can be remotely controlled by a human. But how do you feel about an AI robot learning skills based on your in-home data?”
While the majority (409 people) said they were unsure about that but still thought a “housekeeping robot would be awesome”, a substantial number (340 respondents) were decidedly less sanguine, choosing, “Sounds awful and a total breach of privacy.”
Another 73 described Neo as what they’ve “always dreamed of,” and just 48 were happy to let 1X and Neo do their training thing in the home.
I get the concern, and, to be honest, it’s far from new. Back in 2019, as Sony was unveiling its latest refresh of its AIBO robot dog, some expressed concerns about a mobile robot with a camera in its snout, the embedded facial recognition AI (useful for AIBO remembering family faces), and Sony’s access to any collected data.
At the time, Sony stored data locally and in its cloud, but hashed it so that it was not identifiable as personal information. Even so, the robot couldn’t be sold in Illinois because its capabilities ran afoul of the state’s Biometric Information Privacy Act.
With 1X’s far more powerful AI and models, one might assume that the privacy concerns should triple.
I asked technology and regulatory lawyer Kathleen McGee over email just how concerned consumers should be.
McGee, who previously served as a government lawyer, most recently as Bureau Chief of Internet & Technology at the New York Attorney General’s Office, and is now a Partner in Lowenstein Sandler’s Data Privacy, Security, Safety & Risk Management Practice, told me the data companies like 1X collect “ranges from the mundane (where you place your dish soap) to the very personal (real-time video capture of your home, physical layout, and the images of you and your household occupants, including children). Any data collection that is that sensitive and ongoing requires high-level security measures to ensure that data is anonymized, only kept as necessary, and that the AI model(s) are being trained in accordance with ethical standards as well as legal ones.”
Clarity, McGee notes, is key. “Presumptive users of these products should be very clear about how the data is being used and shared, and what rights users have to delete data – when an AI model is created and trained on your sensitive data, it is virtually impossible to completely unwind.”
1X, though, does make it clear in its FAQ that while the data collected from “real world tasks” is used to build NEO’s base intelligence and boost both its capabilities and safety, “we do not use this data to build a profile of you, nor do we sell this data. If you don’t want to participate in helping improve NEO further, you can always opt out.”
Data aside, though, a robo-roving camera attached to fully articulated limbs and hands raises the specter of a remote-controlled ransack of your home. Reddit is well-stocked with these concerns.
In a scathing Neo robot privacy concern post, Reddit user GrandyRetroCandy wrote:
“If law enforcement goes to the 1X office. Says ‘we have a warrant’. They may order an operator to take control of the Neo Robot, and while you are out shopping or are away from home, they could make this robot look through your wallet. Your diary. Your house. Your drawers. And see everything about you.”
That does sound terrifying, but GrandyRetroCandy quickly clarified,
“Technically, that part is not legal. It’s technically possible (it could be done), but it’s not legal. But if they do have a warrant, they can see all of the camera footage that is stored from your Neo Robot. That part is legal.”
McGee also told me, “Another concern generally for these types of domestic products is the potential exposure of data that a user may believe is private to them but which may be the subject of a subpoena, search warrant, or threat actor intrusion. The privacy concerns for users cannot be divorced from the security issues.”
AI needs your data…and you need your privacy
Basically, the idea of anyone suddenly using the 1X Neo to roam your home and go through your things is well beyond the realm of likelihood, if not possibility.
The truth is, humanoid robots will never become practical and useful without healthy amounts of data input from every user and home, especially in the early days when they are bound to make mistakes.
For robotics and automation, one of the major advancements in recent years has been simulated training. It’s helped autonomous driving and many of these early humanoid robots. And yet we can see how hard it is to prepare humanoid robots for the unexpected.
At this point, 1X Neo Beta is so unprepared that most of its abilities are teleoperated. Getting humanoid robots ready for the spotlight remains hard work. In Russia, the Idol robot was so unprepared for the bright lights of fame that it spectacularly face-planted.
Giving freely of data that cannot be used to invade our privacy will help these robots learn and improve quickly, but there must be limits and controls.
A big part of the responsibility lies with companies like 1X, especially those that develop AI. As McGee pointed out in an email, “Many jurisdictions have privacy laws, and for AI developers, the focus should always be on adherence to the most stringent of those regulations. Again, both ethics and law have a place here, and we counsel our clients to build in a strong foundation of trust and transparency to ensure stability and longevity in their AI design.”
As of April of this year, just 20 US states have comprehensive data privacy laws. At least in the EU, there is the GDPR (General Data Protection Regulation), which is so strict that some AI technologies have been held back from the 27 countries that make up the EU. The UK has its own, nearly identical UK GDPR.
There’s likely a happy medium between what we have here in the US and the GDPR, but the intention should be the same: the safe training of a humanoid robot army that knows how to help us, and even do our household chores for us without raising massive privacy alarms.
