I was wondering how to script this thing, e.g. appear at a meeting and record it for people in a different timezone. However, that seems to be hard with Double.
It reminds me of a South Park Halloween special episode, where Stan FaceTimes with his iPad [0].
It's my personal opinion, but even though the idea looks rather good (giving physical presence while working remotely), wouldn't it be just very strange and awkward (and maybe smug) to use? Like the original Segway, which was a very cool invention but didn't gain much traction because of how awkward it is and how smug people look using it (even though the 'hoverboard' gained a lot of popularity, it is IMHO an inferior gadget).
All in all, cool, though IMHO people will need a lot of time to get accustomed to such a device.
Though if I worked in such an office, it would personally be a rather awkward and cringeworthy experience on either end.
And the video didn't show any advantages over a classic video conference call (that wasn't the goal of the video, but if there were more uses for it, they would have shown up in the video). You can basically just wander around the office in a glorified RC vehicle, drive into the conference room, and chat during a meeting.
The value of the robot is that everyone else doesn't need to be set up for video calls. We have a remote worker who uses one in my office periodically (actually two remote workers share it, but I rarely see the other person use it). He can roll into anyone's office and chat with them. I have a camera set up in my office specifically for video conferences, but most people do not. He can still conference with them in an ad hoc manner.
And how do the other people in the room see the remote person? How easy is it to do that while making sure the camera catches everyone? How do you start the call, and what software do you use? The robot sidesteps all of these.
My work laptop has one, yes, but it is closed and docked so I can use the two larger monitors on the desk. Ergo it is equivalent to not having a camera at all.
I work remote. Most video conferences are great, except when too many people are in the same office and start talking to each other. Then audio quality goes down, and you can't get a word into the discussion.
I couldn't imagine using one of those robots to participate in a discussion with 15 people in the same room while being the only remote participant.
The Segway is quite large and needs parking or storage. Also, it can't go up stairs. Hoverboards can be picked up and carried easily around an office or home, and can be tossed under a desk or in a backpack for storage.
A Segway is like a bike in that you always have to be thinking about where to put it when you arrive. A hoverboard is much more like walking, in that you can toss it in your backpack or just carry it when you arrive.
The immediate use case I thought of when reading about it was that you don't have to worry about where to put it. You ride it to work, then the robot pops up its cute little face and drives itself back home. Then it's waiting to pick you up when you're done. Obviously, that would depend on software that's probably a lot more advanced than the current prototype. But I suppose that's why they're soliciting developers.
I would suggest buying one and trying it before you buy more of them.
I like it; I use it on average twice a week. I got the first generation. It's so much better than a Hangout. I got all the accessories (additional mic and charging dock), and from what I've heard, people have a much worse experience without them.
That may be a cool thing, but the site is almost impossible to read, at least on my iPad. It might be complex to program that robot, but navigating their site shouldn't be.
I just looked at the source code and, well, wow. Comments sometimes in English and sometimes in Chinese, scripts being loaded all over the place, one tag just commented out, one inline script that does nothing but assign a global variable to a value that's presumably just output from PHP, and many more things that feel really... odd for something that's supposed to advertise a new and complex technology.
Surely it's far worse if they contracted it out, since then this is the result of people they brought in on the basis of their web design. If it was in-house, then there's an excuse of sorts.
Nah. If you're not a web design company, then you're not going to know what to look for when trying to contract one. I'd expect shit-show engineering to be the norm rather than the exception.
I really don't understand why, for this use case, people don't use a third wheel and save the energy consumed by self-balancing while standing still.
Not just this Segway, but also the Double robot (the latter, at least, can claim aesthetic motivations given the barrel shape).
Tri-wheel robots wobble really badly. For telepresence, the screen is at the top of a mast so wobbles are amplified. The balancing robots have very smooth motion.
No, it's very easy and inexpensive for Segway to balance that robot at this point. They've already solved those problems, so it's a natural extension to use their existing technology to stabilize the robot. Further, not stabilizing the whole robot isn't an option; you have to, or the whole concept implodes. Put another way, they were going down that road regardless.
Put it this way: the head on the end of the mast is a significant mass at the end of a long lever. To dynamically stabilize that, you're going to need big, hefty, expensive motors. Well, you already have big, hefty, expensive motors to move the thing around in the first place, so it actually costs less to dynamically stabilize the whole robot.
Check out the inverted-pendulum-on-a-cart problem, or go balance a couple of bats on your hand. When near vertical, the inertia of the pendulum mass helps you out; it's the inertia of the cart that the motors need to overcome. The better the control loop, the fewer and smaller the corrections will be, tending toward zero.
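If you want to play with the numbers, here's a minimal sketch of a PD loop balancing an inverted pendulum; the mass, mast length, and gains are made-up values, not anything from Segway. Both the tilt and the correction torque shrink toward zero:

    # Toy inverted-pendulum balance with a hand-tuned PD controller.
    import math

    g, L, m = 9.81, 1.0, 5.0   # gravity, mast length (m), head mass (kg)
    I = m * L * L              # moment of inertia of a point mass on a mast
    Kp, Kd = 400.0, 80.0       # assumed PD gains

    theta, omega = 0.1, 0.0    # start ~6 degrees off vertical, at rest
    dt = 0.001
    for step in range(5001):
        torque = -Kp * theta - Kd * omega                  # PD correction
        alpha = (m * g * L * math.sin(theta) + torque) / I
        omega += alpha * dt
        theta += omega * dt
        if step % 1000 == 0:
            print("t=%.1fs  theta=%7.3f deg  torque=%7.2f N*m"
                  % (step * dt, math.degrees(theta), torque))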
I have a video camera with optical image stabilization. It uses teeny, tiny cheap motors to move some optics around to stabilize the image. It is not obvious that stabilizing video justifies giving up the passive stability of a tripod.
Active balancing is actually really cheap, if I'm remembering correctly. In an ideal situation the cost would be zero, as the corrections required would be infinitesimal. That case isn't really reached, but the robot I worked on massed 150 kilos and probably spent way more power on computational homeostasis than it did on balance.
Agreed. I did a short bit of research on simulating this, and in this case (rotational dynamics) moment of inertia takes the place of mass in the force calculation: torque = moment of inertia x angular acceleration. When standing still near vertical, the required angular acceleration is always near zero, so very little energy should be necessary to maintain balance.
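Back-of-envelope, assuming a 5 kg head on a 1 m mast: the gravitational torque the motors must counter is tau = m*g*L*sin(theta), which vanishes as the robot approaches vertical:

    import math

    m, L, g = 5.0, 1.0, 9.81   # assumed head mass (kg) and mast length (m)
    for deg in (10, 5, 1, 0.1):
        tau = m * g * L * math.sin(math.radians(deg))
        print("%5.1f deg tilt -> %6.3f N*m holding torque" % (deg, tau))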
I don't know much about them, but wouldn't inclines be harder for anything with more than two wheels? With two wheels you can easily maintain balance, but with three you would need another control (or two) to make sure the third wheel points in the correct direction, and I presume to change its angle as well.
Did you see any video or press from the actual Segway event? The "robot" here is actually a rideable mini-segway. You fold its head sideways and awkwardly put your crotch over it as you step on its wheeled "legs."
It's a strange concept. Am I buying a mini Segway or a robot? Do I really want to ride my robot around? I imagine for the price this thing is going to sell for, the answer will probably be, "Hell yeah, for this kind of money it better carry me around."
I think a proper home robot wouldn't have these capabilities; a three- or four-wheel platform would be more likely. The problem is that Segway can't make a robot that isn't, well, a Segway.
Balancing enables it to be any height without increasing its footprint (and hence its weight). The camera can be at human height while taking up only as much floor space as a human. For example, the Beam Pro does not balance, has four wheels, and weighs over 100 pounds with a camera at 5'6"; the Anybots QB weighs about 35 pounds and can have the camera at 6'2".
It also enables added mass without changing the drive system. So instruments and equipment can be added.
Hum, I think a free swivel wheel would only work for very cheap designs. If you're not convinced, search for three-wheel vehicles on Google Images [1]: none of the vehicles there have the two wheels + one free-rotating wheel design that you recommend.
I think that's more of a speed thing than a cost thing. At higher speeds you want to control all the wheels to ensure you don't go sliding off into a utility pole or something.
Segway was acquired by Ninebot, a Xiaomi-backed company. And the background in the video https://www.youtube.com/watch?v=nr-9p8o60gY is Beijing. I believe this project is being developed in Beijing, China.
So you sit on its head if you use it as a 'hoverboard'. That's curious.
But what could the use cases be for a robot like this? Its head is too low for telepresence like Double Robotics. The arm extensions look nice, but it can't grasp anything with them because they seem to be stiff, Lego-like hands. Plus the hands can't reach up to a table or kitchen counter to grab anything. Holding anything with weight would also shift the center of gravity and mess with the balancing.
On the other hand, combining this robot with something that is useful on its own can get the technology out the door and get sales going, so functional arms can be added later on.
Making robots with Segways was actually a thing, at least around 2004-2006. RoboCup is a robotic soccer league with several divisions, each with a different kind of challenge.
Briefly, there was a Segway league: each team had a single robotic Segway and a single human-driven Segway, which had to cooperate to score goals on the other team. Here's the page of one of the teams, with pictures and videos of the Segway robots in action: http://www.nsi.edu/~nomad/segway/
Well, it's a telepresence robot, which means it will be at arbitrary locations doing its job. How many places have the kind of reliable low-latency, high-bandwidth, low-packet-loss WiFi network needed for a reliable Rift experience? I mean, we can barely do this with the powerhouse of high-end video cards over HDMI. How can we transmit high-quality 1440p per eye at 60fps over the common internet? Maybe 25Mbps per eye with a lot of compression, so a good 55Mbps with overhead. Note, this is live video, so no buffering past a few dozen milliseconds. We can't do it without some really dedicated connections and a WiFi client-to-AP relationship with little to no interference. You're not getting this in your average office or other common telepresence locales. You sure as hell aren't getting it over 3G/4G.
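To make the arithmetic explicit (the 100:1 compression ratio is my own guess):

    width, height, fps = 2560, 1440, 60   # "1440p" per eye at 60fps
    bits_per_pixel = 12                   # roughly YUV 4:2:0 before compression
    raw = width * height * fps * bits_per_pixel   # bits/s for one eye
    per_eye = raw / 100                   # assume ~100:1 video compression
    total = 2 * per_eye * 1.1             # two eyes plus ~10% overhead
    print("raw per eye: %.0f Mbps" % (raw / 1e6))             # ~2654 Mbps
    print("compressed per eye: %.1f Mbps" % (per_eye / 1e6))  # ~26.5 Mbps
    print("total with overhead: %.1f Mbps" % (total / 1e6))   # ~58.4 Mbps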
I don't think people appreciate how exotic HMDs like the Rift are. We can't just plug them in anywhere. For the 3D effect to "work" you need high resolutions and high framerates. If you can't get that, then a normal screen should be used, because it's a waste of resources and a waste of time strapping that thing to your face.
High-framerate video isn't necessary. The reason for the 60fps-capable video card the Rift is connected to is that when you move your head, the view needs to update quickly. If you had a 360-degree camera like the parent suggested, the source doesn't need to be 60fps; only the view on the Rift needs to change at 60fps when you move your head.
Think of a panoramic photo. It's not even 1fps, it's one frame ever, and it still gives you a great 3D effect. I think you'd be fine with a much lower framerate source, and that would make WiFi and a not-that-impressive internet connection fine.
Edit to add: of course, with 360-degree video you have much larger frames, not 1440p. You could probably do with less than 360 degrees, though, since you can assume a person isn't going to be looking behind them or at the floor.
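Here's a toy sketch of the two-rate idea (frame sizes made up; it ignores pitch and proper lens projection): the panorama buffer updates at the slow network rate, while the viewport is re-cut locally at headset rate.

    import numpy as np

    PANO_W, PANO_H = 4096, 2048   # equirectangular source frame
    VIEW_W = 1024                 # horizontal pixels shown per eye

    pano = np.zeros((PANO_H, PANO_W, 3), np.uint8)  # last received frame

    def on_source_frame(frame):
        """Called at the source rate, maybe 5-10fps over the network."""
        global pano
        pano = frame

    def render_view(yaw_deg):
        """Called at headset rate, e.g. 60Hz, with fresh head pose."""
        x = int((yaw_deg % 360.0) / 360.0 * PANO_W)
        cols = np.arange(x, x + VIEW_W) % PANO_W    # wrap around 360 degrees
        return pano[:, cols]                        # crop for current yaw

    view = render_view(90.0)   # head turned right; no new source frame needed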
The theft problem could be mitigated with remote kill switches. iPhone thefts have gone down now that people have learned that they will be useless if stolen [1].
You'd still want to be careful where you use it, but perhaps it could have non-lethal anti-theft mechanisms: maybe paint packs like banks use, or some sort of taser-like shock mechanism? You could still throw a sack over it, of course.
But then cars and motorbikes are some of our most expensive possessions and we leave those outside quite a lot.
Definitely not paint packs. The Segway is worth a lot more than a cheap change of clothing from Walmart, or a pair of overalls/coveralls.
The average new car weighs 4,000 pounds and is increasingly difficult to steal, requiring ever greater tech to do so. NYC, for example, has seen a 96% drop in car thefts (!) since 1990. Stealing a car is a pain in the ass. Stealing this robot would be relatively trivial.
I have thousands of dollars to spend on something like this if it had an arm and could do things like get items from the fridge by verbal order (with no step by step programming, just visual recognition of the fridge and how to open it), play catch with the dog, sweep up, act as a security guard when I'm gone, etc.
Robotics is one of those things where the hardware, price, and networking are there but the software isn't. We don't have an AI-lite engine we can toss in for simple things a dog could understand like "get my slippers." Until someone cracks that code, this stuff is just going to be rich-boy novelties and unneeded contenders in the already over-saturated telepresence market.
Batteries aren't good enough, linear actuators have awful power/weight ratios, and computers just aren't fast enough to solve CV problems and calculate grasps and paths in seconds, rather than minutes.
Saying you "have thousands of dollars to spend" is great, but not sufficient. The PR2 I talk about in the blog post costs four hundred thousand US dollars, and it sucks! It's like saying you're willing to spend five thousand bucks to buy a Lamborghini Aventador. It's going to be many decades before a useful household robot only costs ten grand. A household robot just requires too many breakthroughs in too many different fields.
Oh, I don't know. It seems to me that the PR2 is designed for industry, so its pricing is going to reflect that. I suspect there is a home robot space that some startup can fill sooner rather than later. Whether it does all the things I listed is the big question, and I suspect it won't, but it may be able to do a few things that make it a worthwhile purchase.
I've played a bit with OpenCV, PCL, ROS, etc. There's some very impressive image recognition stuff available right now that works well on commodity x86 platforms (1). I don't think the market is expecting a HAL-like or Jetsons-like robot, but I could see something akin to an early-80s home computer, where the product is clearly a long list of compromises but it does a few things very well and is compelling. Home robotics may be the same way for a while until it has its 1984 Macintosh moment, which as you say might be a decade or two or four away.
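For example, a minimal OpenCV face detector that runs fine on a stock x86 box (the cv2.data path helper exists in recent opencv-python builds; adjust for your install):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)                     # any cheap webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detect", frame)
        if cv2.waitKey(1) == 27:                  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()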
I did appreciate your post, but I think it's a little dismissive of some of the homebrew and smaller-scale stuff out there. The PR2 is a VC-backed monster designed to bring industrial robots to retail, hospitals, etc. These guys want to build the 747 of the robot world. That's great. But there are people out there building the Cessnas of the robotics world. I expect an affordable consumer product that isn't a joke by Christmas 2018-2020. There's just way too much potential here.
1. Home robot hackers are falling in love with the super-low-power NUC, which gives a CPUMark score around 5,000+.
The PR2 has an arm payload of 1.8kg, total payload of 20kg, and a top speed of 1m/s. That doesn't sound "industrial" to me, that sounds like "the absolute bare minimum to be a mobile anthropomorphic robot". And to achieve those numbers, it weighs 480 kilos! That's a payload fraction of 4.1%! How is this anything like a 747?
If it only weighed 60 kilos, the average weight of a human, then we would expect a total payload of 2.4kg and an arm payload of... 7.3 grams. Doesn't sound too useful to me. And the damn thing would still cost $50,000!
The PR2 was a VC backed monster, but remember, Willow Garage went out of business last year, because their products just weren't very useful! The technology just isn't there, and won't be for a long time.
Home robot hackers falling in love with the super low power NUC
I'm sure that's fine for pathing and Kinect SLAM, but "getting a beer from the fridge" is picking arbitrary items in arbitrary poses in an unconstrained environment, basically the Amazon Picking Challenge, which nobody can solve with reasonable speed yet, even with hundreds of thousands of dollars of equipment.
If you honestly think you can build a cheap robot that can do all that by 2020, then by all means, launch a startup and earn billions of dollars. But I don't think it's going to be done before 2040.
The most interesting thing about this for me is that the third partner is Xiaomi. They are a company that I expect any day now to eat a large portion of the consumer electronics market. Items like the Mi Band and Pistons compete well with things 4X their price.
I'm very curious about their involvement here.
The problem that I see with RealSense is that, just like the Kinect, it's based on structured light. This means it will only work indoors (just like the Kinect). So their cool demo of putting it on a drone is kind of pointless, because who flies a drone indoors?
There are now some devices available that can provide depth data through passive stereo vision. I have recently seen this one in action: http://nerian.com/products/sp1-stereo-vision/
The problem with that, however, is that it is targeted at the industrial market and probably way too expensive for ordinary consumers. I guess we will still have to wait for some major revolution in depth sensing.
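To be fair, the stereo matching itself is commodity these days; here's a rough OpenCV sketch, assuming you already have a rectified left/right pair (filenames hypothetical):

    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)   # 16x fixed-point disparities

    # per-pixel depth = focal_length_px * baseline_m / disparity
    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", vis.astype("uint8"))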
I fly my 35-gram drone [1] indoors all the time in my apartment. Granted, it would be bigger and heavier with an Intel RealSense processor and camera on it, but I don't think it's out of the realm of possibility.
Heh - RealSense is not exactly one depth camera. This particular form factor uses active stereo IR (the R200 camera), so it also works outside but loses the projective texturing (where it's usually not needed anyway).
Actually, I don't think so. It's kind of hard to get any technical information on RealSense, and I know that Intel is making different versions of it, so please correct me if I'm wrong and they have one that really does stereo. On http://www.intel.com/content/www/us/en/architecture-and-tech... they say the following:
"The Intel® RealSense™ Camera F200 is actually three cameras in one—a 1080p HD camera, an infrared camera, and an infrared laser projector"
So this just looks like the first Kinect. The infrared camera observes the projected pattern, and the RGB camera is there to capture the color information. You can't really match an infrared image (which is also covered with a laser pattern) with a visible-light image, as they will look very different. So you would require yet another camera (infrared or visible light) in order to do stereo.
The robot is using the R200, an active stereo camera, not the F200. Even the F200 uses a fundamentally different technique (coded light, projected grey code) rather than structured light as the Kinect uses.
Source: I work as a computer vision engineer on these products for Intel RealSense.
Anyway, this seems to confirm a worry I had about RealSense. Intel seems to have turned it into a "bundle" item, so you can't buy it for something without also buying other components from Intel.
I was wondering how to script this thing, e.g. appear at a meeting and record it for people in a different timezone. However, that seems to be hard with Double.
With Segway it may be the killer use case.