Ordinary WiFi routers can be used to detect the poses and positions of humans and map their bodies in 3D, a new report has found.
With the help of deep neural networks, researchers at Carnegie Mellon University were also able to create full-body images of subjects.
This proof of concept would be a breakthrough for healthcare, security, gaming (VR), and a host of other industries. It would also overcome issues that affect regular cameras, such as poor lighting or simple obstacles like furniture blocking the lens, while also eclipsing traditional RGB sensors, LiDAR, and radar technology. It would also cost far less and consume less energy than those technologies, researchers noted.
However, this discovery comes with potential privacy issues. If the technology makes it to the mainstream, a person's movements and poses could be monitored, even through walls, without their knowledge or consent.
Perceiving Humans Via WiFi Antennas, Seeing Past Obstacles
Researchers used three WiFi transmitters, such as those found on a $50 TP-Link Archer A7 AC1750 WiFi router, positioned them in a room with several people, and successfully produced wireframe images of the people detected in the room.
With the help of artificial intelligence algorithms, the researchers managed to create 3D images from the WiFi signals that bounce off people's bodies.
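To give a rough sense of how such a pipeline fits together, a deep network can be trained to map measured WiFi channel data to per-keypoint heatmaps of the body. The sketch below is a simplified illustration, not the team's actual model; the CsiToKeypoints name, layer sizes, antenna counts, and output resolution are all assumptions made for the example.

```python
# Minimal sketch of a CSI-to-pose network; shapes and names are illustrative only.
import torch
import torch.nn as nn

class CsiToKeypoints(nn.Module):
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        # Treat the 3 transmit x 3 receive antenna pairs as input channels
        # and the (subcarrier x time) grid as a 2D "image" of the channel.
        self.encoder = nn.Sequential(
            nn.Conv2d(9, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Decode the pooled features into one coarse heatmap per body keypoint.
        self.decoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, num_keypoints * 56 * 56),
        )
        self.num_keypoints = num_keypoints

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # csi: (batch, 9 antenna pairs, 30 subcarriers, T time samples)
        feats = self.encoder(csi)
        heatmaps = self.decoder(feats)
        return heatmaps.view(-1, self.num_keypoints, 56, 56)

# Example: a batch of 2 synthetic measurements, 30 subcarriers, 100 time samples.
model = CsiToKeypoints()
fake_csi = torch.randn(2, 9, 30, 100)
print(model(fake_csi).shape)  # torch.Size([2, 17, 56, 56])
```

In a real system, the predicted heatmaps would then be rendered as the wireframe or full-body figures described above.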
Technically speaking, the researchers analyzed the amplitude and phase of the WiFi signal to find human bodies and map their poses.
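Each WiFi channel measurement is a complex number, so amplitude and phase can be extracted directly. The snippet below is a minimal NumPy sketch using synthetic data, not the researchers' code; the array sizes are assumed for illustration.

```python
# Minimal sketch: extracting amplitude and phase from complex channel
# state information (CSI) samples; the data shown here is synthetic.
import numpy as np

# Fake CSI for one antenna pair: 30 subcarriers x 100 time samples.
rng = np.random.default_rng(0)
csi = rng.standard_normal((30, 100)) + 1j * rng.standard_normal((30, 100))

amplitude = np.abs(csi)                    # signal strength per subcarrier
phase = np.unwrap(np.angle(csi), axis=0)   # phase, unwrapped across subcarriers

# These two real-valued arrays are the kind of input a pose network would consume.
print(amplitude.shape, phase.shape)        # (30, 100) (30, 100)
```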