Researchers from the University of Waterloo are using deep learning and computer vision to build autonomous exoskeleton legs that help users walk, climb stairs, and avoid obstacles.
The system records the user's surroundings through a wearable camera. Deep learning and computer vision algorithms then analyze the scene to determine the best movements for the upcoming terrain.
The technology could give people with impaired mobility a far more natural control system than current exoskeletons, which are operated through smartphone apps or joysticks.
"That could be inconvenient and cognitively demanding," stated project lead Brokoslaw Lachowski, "Every point you want to do a new locomotor activity, you've to quit, bring out the smartphone and choose the preferred mode."
The scientists used an NVIDIA TITAN GPU for neural network training and real-time classification of walking environments. They collected more than 5.6 million images of human locomotion environments to create a database dubbed ExoNet, which was used to train the primary model, built with the TensorFlow deep learning framework.
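The article does not include the team's code, but the general setup can be sketched as a convolutional classifier over walking-environment images. The sketch below is a rough illustration in TensorFlow, not the authors' implementation; the directory layout, class labels, image size, and choice of a MobileNetV2 backbone are all assumptions.

```python
# Minimal sketch (not the authors' code): a convolutional classifier for
# walking-environment images, loosely in the spirit of the ExoNet setup.
# Directory layout, image size, and backbone choice are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH_SIZE = 32

# Hypothetical directories of labeled frames, e.g. level_ground/, stairs_up/, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "exonet_frames/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "exonet_frames/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
num_classes = len(train_ds.class_names)

# Transfer learning from an ImageNet backbone keeps inference light enough
# for real-time use on an embedded GPU.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

In deployment, the same model would classify each incoming camera frame so the controller can switch locomotion modes without any manual input from the user.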
Still under development, the exoskeleton must learn to operate on uneven terrain and avoid obstacles before becoming fully functional. To extend battery life, the team plans to use human movement to help charge the devices.
Their recent paper analyzed how the mechanical energy released at the joints when a person sits down could be regenerated as electrical power to help charge the robotic exoskeletons.
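As a rough illustration of the idea (not figures from the paper), a back-of-envelope estimate treats sitting down as lowering the body's center of mass and asks how much of that potential energy a regenerative actuator might recover. Every number below is an assumption.

```python
# Back-of-envelope sketch (illustrative assumptions only, not data from the paper):
# energy released when a person sits down, i.e. lowers their center of mass.
g = 9.81          # gravitational acceleration, m/s^2
mass_kg = 75.0    # assumed body mass
drop_m = 0.4      # assumed drop of the center of mass when sitting
efficiency = 0.5  # assumed fraction recovered by regenerative actuators

potential_energy_j = mass_kg * g * drop_m       # roughly 294 J released
recovered_j = efficiency * potential_energy_j   # roughly 147 J per sit-down

print(f"Energy released: {potential_energy_j:.0f} J")
print(f"Recoverable (at {efficiency:.0%} efficiency): {recovered_j:.0f} J")
```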
If it succeeds, the system could prove far more convenient than most existing exoskeletons, as long as it's not easily hackable!