In conjunction with the Army Corps of Engineers, my research aims to create digital twins of smart installations. These digital twins pull real-time information from a physical space and use that data to update a virtual reality twin of that space; changes made in the virtual space can, in turn, drive devices in the real space. All of this is built on secure network communication, smart sensor integration, and Unity to power the VR environment.
Part of building a digital twin is reflecting real-world changes digitally. As a proof of concept, we integrated our VR simulation with a Boston Dynamics Spot. When the operator moved any part of the dog in real life, the corresponding part of its virtual model moved in real time in VR. This let the user see the dog's position in the building and any body movements from within the simulation. Even when someone physically picked up the dog or manually moved its limbs, those changes were mirrored digitally.
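The telemetry path above can be sketched as a small relay that packages the robot's joint state into a message the VR client can apply to its model. This is a minimal sketch, not the production code: the real system reads Spot's kinematic state through the Boston Dynamics SDK, and the function and field names here are illustrative assumptions.

```python
import json

# Hypothetical joint-state relay: the real pipeline pulls kinematic state
# from Spot via the vendor SDK; here the input is mocked as a plain dict
# of joint name -> angle in radians.
def joint_states_to_vr_message(robot_id, joint_states):
    """Package joint angles into a JSON message the VR client can apply
    to the matching bones of the virtual Spot model."""
    return json.dumps({
        "robot": robot_id,
        "type": "pose_update",
        "joints": [
            {"name": name, "angle_rad": round(angle, 4)}
            for name, angle in sorted(joint_states.items())
        ],
    })

msg = joint_states_to_vr_message("spot-01", {"fl.hx": 0.12, "fl.hy": 0.85})
```

Sending one flat message per update keeps the Unity side simple: it only has to look up each named joint and set its rotation.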
To take this a step further, we wanted the digital simulation to change the real world. We added a feature that let a user walk up to the dog in VR and take control. This mounted the VR player to the dog and allowed the user to drive it around the simulation; those same input commands were used to steer the dog in real life.
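The drive-control direction can be sketched as a mapping from VR thumbstick input to a clamped body-frame velocity command. The function name, axis conventions, and limits below are assumptions for illustration, not the actual control code.

```python
# Hypothetical input mapping: VR thumbstick axes (-1..1) become a clamped
# velocity command that can steer both the simulated and the real robot.
def stick_to_velocity(stick_x, stick_y, max_speed=1.0, max_yaw=0.5):
    """Map thumbstick input to forward speed (m/s) and yaw rate (rad/s).

    stick_y pushes the robot forward/back; stick_x turns it. Inputs are
    clamped so a miscalibrated controller can't exceed the speed limits.
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    return {
        "v_x": clamp(stick_y) * max_speed,   # forward/backward
        "yaw": -clamp(stick_x) * max_yaw,    # turn rate (right stick = clockwise)
    }

cmd = stick_to_velocity(0.5, 1.0)
```

Clamping at the mapping layer is a simple safeguard when the same input stream drives a physical robot.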
We explored different methods of recreating real spaces from scans inside a game engine, trying a range of scanning techniques across different technologies and applications. The method we settled on combined photogrammetry with a LiDAR scanner. The resulting point cloud was imported into Blender and manually cleaned to reduce the vertex count from tens of millions to thousands. Our scans achieved millimeter accuracy, and the cleanup preserved the detailed dimensions while optimizing the models for virtual reality use.
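The kind of vertex reduction described above can be illustrated with a basic voxel-grid downsample: bin points into cubes of a chosen size and keep one averaged point per cube. This is a sketch of the general technique, not the manual Blender workflow the project actually used.

```python
# Voxel-grid downsampling sketch: points falling in the same voxel are
# averaged into one representative point, shrinking the cloud while
# preserving overall geometry.
def voxel_downsample(points, voxel_size):
    """Reduce a list of (x, y, z) points to one averaged point per voxel."""
    bins = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        bins.setdefault(key, []).append((x, y, z))
    # Average each voxel's points component-wise.
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in bins.values()]

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(cloud, voxel_size=1.0)
```

Choosing the voxel size trades detail against triangle budget, which is the same trade-off made during the manual cleanup for VR frame rates.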
Using custom-programmed sensors, each room is given a sensor identified by its location. That sensor collects a variety of data, depending on its type, and passes that information to our server. The virtual reality application can then request the real-time information from the server and use it to update a handheld informational menu that displays every data type available for the current room. This menu refreshes as the player moves between rooms, so it always shows the sensor data for the room they are in.
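The room-keyed lookup the server performs can be sketched as a small store that keeps the latest reading per room and data type. Class and method names here are assumptions for illustration; the real server's interface may differ.

```python
import time

# Hypothetical server-side store: sensors report readings keyed by room,
# and the VR client asks for everything available in the player's room.
class SensorStore:
    def __init__(self):
        self._latest = {}  # room -> {sensor_type: {"value", "ts"}}

    def report(self, room, sensor_type, value):
        """Record the newest reading for one sensor type in one room."""
        self._latest.setdefault(room, {})[sensor_type] = {
            "value": value,
            "ts": time.time(),
        }

    def for_room(self, room):
        """Return every data type currently available for a room."""
        return self._latest.get(room, {})

store = SensorStore()
store.report("lab-101", "temp_c", 21.5)
store.report("lab-101", "humidity_pct", 43)
```

Keeping only the latest reading per type keeps the VR menu query cheap, since the headset polls on every room change.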
The interaction system that powers the user's wrist menu can also be attached to objects. When the player hovers over an interactable machine, the machine glows blue. Selecting the machine opens an interactable UI showing all available system information from the server. This menu can be flipped through and toggled on and off based on user preference.
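The hover/select behavior reduces to a tiny bit of state per machine, sketched here outside of Unity. The names are hypothetical; in the project this logic lives in the C# interaction system.

```python
# Minimal sketch of the per-machine interaction state: hovering drives
# the highlight, selecting toggles the info panel.
class MachinePanel:
    def __init__(self):
        self.highlighted = False  # glow blue while hovered
        self.panel_open = False   # info UI visible

    def hover(self, is_over):
        self.highlighted = is_over

    def select(self):
        # Repeated selection toggles the panel, matching user preference.
        self.panel_open = not self.panel_open

panel = MachinePanel()
panel.hover(True)
panel.select()
```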
As a proof of concept for controlling smart devices from the virtual space, we took a smart light and added a Python interface to our server, allowing the server to pass commands to the light. In the virtual reality simulation we developed a custom UI for this device that offers all of the functionality of the proprietary app, but inside the VR application. The Unity project communicates with the server, which translates and forwards those commands to the smart light, making the desired changes to the real light.
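The server's translation step can be sketched as a small dispatcher that maps generic commands from the Unity client onto device calls. The action names, method names, and ranges below are illustrative assumptions, not the vendor's actual API.

```python
# Hypothetical translation layer: generic VR UI commands become
# (method, params) pairs for a smart-light client library.
def translate_command(cmd):
    """Map a command dict from the VR UI to a light-control call."""
    action = cmd["action"]
    if action == "power":
        return ("set_power", {"on": bool(cmd["value"])})
    if action == "brightness":
        # Clamp to the 0-100 percent range the light expects.
        return ("set_brightness", {"percent": max(0, min(100, int(cmd["value"])))})
    if action == "color":
        return ("set_rgb", {"rgb": tuple(cmd["value"])})
    raise ValueError(f"unknown action: {action}")

call = translate_command({"action": "brightness", "value": 150})
```

Centralizing the translation on the server means the Unity client never needs device-specific code, so new smart devices only require a new server-side mapping.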