The article says that a high-skill, high-knowledge, safety-critical task performed on expensive machinery saw productivity gains of 10%.
It seems that for low-knowledge/low-skill tasks the gains would be smaller (because there is likely less need for the operator to switch their information context during the task).
Low-knowledge tasks aren't a bottleneck in contemporary manufacturing though, at least insofar as "low knowledge" correlates with being easily automated.
Tightening a screw or riveting a steel frame, for example, requires little knowledge, but on the other hand it's easily automated. Or rather, more easily automated than disassembling and reassembling an engine.
Human-centric workspaces and robot-centric workspaces are vastly different due to different capabilities of humans and robots. Using those glasses to improve reliability and speed of a worker on a "low-skilled" checklist-following job may be cheaper than redesigning the entire workspace and workflow around industrial robots. This would make sense as an incremental improvement for tasks already done by humans.
Yes, that's what I meant. Any productivity gains from a HUD are likely to be in high skill/high information tasks and the article states that so far they were modest.
To be fair, there might be some more productivity gains lurking around the corner if augmented reality allows a tighter integration of each process in the factory line.
Any time lost searching for anything on the factory floor could be reduced with augmented reality by peppering the world with "quest markers".
Besides the 10% gain on finding which parts of the engine you're supposed to screw in or unscrew, there might be gains when searching for your wrench, searching for Joe, trying to find out where exactly the new parts are, and so on.