A tactile Internet is potentially the next phase of the Internet of Things, in which humans can touch and interact with distant or virtual objects while receiving realistic haptic feedback.
A team of researchers led by Elaine Wong at the University of Melbourne, Australia, has developed a method to improve haptic feedback experiences in human-machine applications typical of the tactile Internet. The researchers believe their method can be used to predict appropriate feedback in applications ranging from electronic healthcare to virtual reality games.
Wong and her colleagues will present their proposed module, which uses an artificial neural network to predict the material being touched, at the Optical Fiber Communication Conference and Exhibition (OFC), held March 8-12, 2020 at the San Diego Convention Center in California, United States.
Depending on the dynamics of the interaction, an optimal human-machine application may require a network response time as short as a millisecond.
“These response times put a limit on the distance between humans and machines,” Wong said. “Therefore, solutions to decouple this distance from the response time of the network are essential to realize the tactile Internet.”
To achieve this goal, the team trained a reinforcement learning algorithm to guess the correct haptic feedback in a human-machine system before the true feedback is known. The module, called Event-based HAptic SAmple Forecast (EHASAF), speeds up the process by providing a tactile response based on a probabilistic prediction of the material the user is interacting with.
“To facilitate human-machine applications over long-distance networks, we are relying on artificial intelligence to overcome the effects of long propagation latency,” said Sourav Mondal, an author of the paper.
Once the actual material is identified, the module adapts, updating its probability distribution to help it choose the correct feedback in the future.
The group tested the EHASAF module with a pair of virtual reality gloves used by a human to touch a virtual ball. The gloves contain sensors on the fingers and wrists to detect contact and track the movements, forces and orientation of the hand.
Depending on which of the four virtual materials the user chooses to touch, the glove's feedback should vary; a metal ball, for example, feels firmer than a foam one. When a neural network determines that one of the fingers has touched the ball, the EHASAF module cycles through the candidate feedback options until it resolves the actual material of the chosen ball.
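The predict-then-correct loop described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' EHASAF implementation: it keeps a probability distribution over candidate materials, serves feedback for the most likely one immediately (avoiding a network round trip), then shifts probability mass toward the true material once it is identified. The class name, material list, and update rule are all assumptions for illustration.

```python
class HapticForecaster:
    """Minimal sketch of event-based haptic forecasting: predict the
    touched material from a probability distribution, then update the
    distribution once the actual material is known."""

    def __init__(self, materials):
        # Start with a uniform prior over the candidate materials.
        self.probs = {m: 1.0 / len(materials) for m in materials}

    def predict(self):
        # Return the most probable material so feedback can be
        # generated without waiting for the network response.
        return max(self.probs, key=self.probs.get)

    def update(self, observed, lr=0.5):
        # Once ground truth arrives, move probability mass toward the
        # observed material (a simple exponential-style update).
        for m in self.probs:
            target = 1.0 if m == observed else 0.0
            self.probs[m] += lr * (target - self.probs[m])


forecaster = HapticForecaster(["metal", "foam", "wood", "rubber"])
guess = forecaster.predict()   # immediate low-latency guess
forecaster.update("foam")      # correct once the true material is known
print(forecaster.predict())    # subsequent guesses now favor "foam"
```

In practice the published module is reported to use a neural network rather than a hand-coded table like this; the sketch only conveys the idea of decoupling feedback latency from network latency.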
Currently, with four materials, the module's prediction accuracy is around 97%.
“We believe that it is possible to improve the accuracy of the predictions with a greater number of materials,” Mondal said. “However, more sophisticated models based on artificial intelligence are needed to achieve this.”
“More and more sophisticated models with improved performance can be developed based on the fundamental idea of our proposed EHASAF module,” Mondal said.
These results and additional research will be presented on-site at OFC 2020.
Story source:
Materials provided by The Optical Society. Note: Content may be edited for style and length.