“Seeing” people through walls with Wi-Fi and reconstructing their bodies in 3D. How? Thanks to two ordinary routers and an artificial intelligence system developed by a group of scientists at Carnegie Mellon University in Pittsburgh, in the United States. The results of this research are available on arXiv, a platform that makes scientific articles (or rather “preprints”) accessible before they have undergone the peer review process (known as “peer-reviewing”).
According to the researchers, their system can estimate the position of several subjects at the same time with a relatively high degree of accuracy compared to similar systems developed previously, using Wi-Fi signals as its only input. The advantage of this technology over others based on radar sensors, for example, is its much lower cost. Compared to ordinary video cameras, the considerable advantage of a system based solely on Wi-Fi is that obstacles and poor lighting do not interfere with detection. “This,” the authors write in the opening lines of their article, “paves the way for low-cost, widely accessible algorithms”.
Previous studies
The line of research is not new: previous studies had in fact developed similar systems, even though the level of accuracy achieved was decidedly lower than that of the study in question. A group of researchers from MIT (Massachusetts Institute of Technology, United States), for example, had presented in 2018 a conceptually similar device, capable of detecting body parts hidden by walls or other obstacles by combining the Wi-Fi signal with a specially developed neural network. In that case, the idea was specifically to apply the technology to monitoring patients suffering from diseases such as Parkinson’s disease, multiple sclerosis or muscular dystrophy. “We have seen,” one of the authors of that research declared at the time, “that monitoring patients’ walking speed and their ability to carry out basic activities on their own offers healthcare professionals a window into their lives they didn’t have before, which could be significant for a whole range of diseases. A key benefit of our approach is that patients don’t have to wear sensors or remember to charge their devices.” In that case, however, the output was a 2D stick figure and therefore, as we said, had a relatively low level of detail.
The new study
Returning to the recent study, the researchers used an existing system, called DensePose, capable of producing 3D representations of the human body starting from two-dimensional images, i.e. simple photographs. “In this work,” the authors explain in the article, “we borrow the DensePose architecture itself; however, our input will not be an image or video: we will use 1D Wi-Fi signals to recover the dense correspondence,” where “dense correspondence” means precisely the three-dimensional representation. Since it relies on an artificial intelligence system, the device still needs a training set to start from. In fact, as the researchers themselves report, the cases that proved to be failures during their experiments were mainly two: when the system was asked to reconstruct poses that rarely occurred in the training set, or when three or more subjects were present simultaneously in an acquisition. In the first case it was likely that distorted images would be produced, while in the second the system struggled to extract detailed information on single individuals. “We believe,” the scientists conclude in the article, “that both problems can be solved by obtaining more comprehensive training sets”.
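To make the idea more concrete, below is a minimal, hypothetical sketch in PyTorch of a network that takes 1D Wi-Fi channel measurements as input and produces a DensePose-style output (per-pixel body-part labels plus UV coordinates for the dense correspondence). The class name WifiToDensePose, the layer sizes and the input shape are illustrative assumptions, not the architecture described in the paper.

```python
# Hypothetical sketch: Wi-Fi signals in, DensePose-style maps out.
# All dimensions and the architecture itself are illustrative assumptions.
import torch
import torch.nn as nn

class WifiToDensePose(nn.Module):
    def __init__(self, n_antennas=3, n_subcarriers=30, n_samples=150,
                 n_body_parts=24):
        super().__init__()
        # Encode the 1D Wi-Fi measurements (per antenna, subcarrier and
        # time sample) into a flat feature vector.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_antennas * n_subcarriers * n_samples, 1024),
            nn.ReLU(),
            nn.Linear(1024, 256 * 7 * 7),
            nn.ReLU(),
        )
        # Decode the features into a spatial map, analogous to what
        # DensePose produces from camera images.
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (256, 7, 7)),
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        # Two heads: body-part segmentation (parts + background) and
        # per-part UV coordinates for the dense correspondence.
        self.part_head = nn.Conv2d(32, n_body_parts + 1, kernel_size=1)
        self.uv_head = nn.Conv2d(32, 2 * n_body_parts, kernel_size=1)

    def forward(self, csi):
        # csi: (batch, n_antennas, n_subcarriers, n_samples)
        features = self.decoder(self.encoder(csi))
        return self.part_head(features), self.uv_head(features)

# Example with simulated measurements for two "acquisitions".
model = WifiToDensePose()
fake_csi = torch.randn(2, 3, 30, 150)
parts, uv = model(fake_csi)
print(parts.shape, uv.shape)  # (2, 25, 56, 56) and (2, 48, 56, 56)
```

The sketch only illustrates the input/output relationship described in the article; training such a model would require the kind of paired Wi-Fi and image data that, as noted above, the authors say must be made more comprehensive to address the observed failure cases.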