Gesture Detection To-Do List for Brockton
Based on your analysis and some discussions here in the office, these are the next steps we would like to take:
- Compartmentalize the gesture detection logic so that it can become a prefab we can drop into another project, without carrying along unnecessary entanglements from the Marshalling project. (A rough sketch of the decoupled shape follows this list.)
- Create that prefab.
- Export the prefab and import it into a new, clean VR-capable project to validate that it is fully functional outside the Marshalling project.
- Create another prefab which connects to the gesture detection prefab and places a monochrome, translucent doppelganger of you 6 feet ahead of you, facing the same direction you are, so you can see how your arm gestures are being interpreted. (This is a bit like a third-person game with a tracking camera following the player: you see "yourself" ahead of you and can watch what "you" are doing.) We can assign a seat of the Final IK (inverse kinematics) license to you -- Ian has some experience using it and can probably answer any questions you have about it. We'll find a minimalist generic human model for your translucent doppelganger and send it your way. (The placement logic is sketched after this list.)
- Starting with only one defined gesture (the "move left" gesture -- see attached diagram), add features to the doppelganger representation to show a wireframe box representing the target box collider for the current step of the gesture, so you can see where your hand ends up relative to it. This should help you tell whether the problem is, say, your hand not going down far enough, or your hand being too far forward or back to trigger the collider. Once you trigger that collider, it should disappear and a wireframe box representing the next collider should light up. (A step-sequencing sketch follows this list.)
- Stretch goals:
- Add a "demonstration mode" in which the doppelganger becomes opaque and models the motions you should be doing.
- Add visual indicators to help guide the user through the sequence. Say, a long, curved red arrow showing the arc through which your hand should move to transition from the first box collider to the second (one way to build that arc is sketched after this list). Or a pop-up infographic showing the needed baton orientation once you reach the box. Or sequence numbers. Open to ideas on the details.
- Once all of the above is working well for the initial gesture, add some additional gestures and provide a mechanism for switching which gesture is being demonstrated / detected. (One data-driven possibility is sketched at the end of this note.)
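To make the decoupling in the first item concrete, here is a minimal sketch of the event-driven shape the detector prefab could take. Every name in it (GestureDetector, OnGestureRecognized, and so on) is illustrative, not an existing Marshalling class; the point is just that the component reports results through events instead of calling into project-specific code, so the prefab carries no references back to the Marshalling project:

```csharp
// Hypothetical sketch -- all names are placeholders, not Marshalling code.
using UnityEngine;
using UnityEngine.Events;

public class GestureDetector : MonoBehaviour
{
    [Tooltip("Tracked hand/controller transform, assigned on the prefab itself.")]
    public Transform handTransform;

    // Consumers (Marshalling, belt loader, the standalone test app)
    // subscribe here in the Inspector or via AddListener; the detector
    // never holds a reference to anything outside its own prefab.
    public UnityEvent<string> OnGestureStepCompleted;
    public UnityEvent<string> OnGestureRecognized;

    void RaiseRecognized(string gestureName)
    {
        // Wherever the detection logic lands, it reports only through
        // these events, never by calling scene objects directly.
        OnGestureRecognized?.Invoke(gestureName);
    }
}
```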
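And a minimal sketch of the doppelganger placement, assuming the player rig's root transform is accessible. Final IK would drive the limbs; this only covers the "6 feet ahead, same facing" part (6 feet is roughly 1.83 m):

```csharp
// Illustrative sketch; "playerRoot" and "MirrorFollower" are assumed names.
using UnityEngine;

public class MirrorFollower : MonoBehaviour
{
    public Transform playerRoot;      // headset/rig root to mirror
    const float AheadMeters = 1.83f;  // ~6 feet

    void LateUpdate()
    {
        // Project the player's forward onto the floor plane so the
        // doppelganger doesn't tilt when the player looks up or down.
        Vector3 flat = Vector3.ProjectOnPlane(playerRoot.forward, Vector3.up);
        if (flat.sqrMagnitude < 1e-4f) return; // looking straight up/down; keep last pose

        flat.Normalize();
        transform.position = playerRoot.position + flat * AheadMeters;
        // Same facing as the player, so you see the doppelganger's back.
        transform.rotation = Quaternion.LookRotation(flat, Vector3.up);
    }
}
```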
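For the step-by-step collider guidance, one possible shape (again, all names are illustrative). Only the current target box is active at a time, so out-of-order triggers can't fire:

```csharp
// Sketch only -- how each box reports its trigger back here is open-ended,
// e.g. a small relay script on each box forwarding OnTriggerEnter.
using UnityEngine;

public class GestureStepGuide : MonoBehaviour
{
    [Tooltip("Trigger colliders for each step of the gesture, in order.")]
    public BoxCollider[] stepTargets;
    int currentStep;

    void Start() => ShowOnly(0);

    // Call this when the hand enters the currently active box.
    public void NotifyStepTriggered()
    {
        currentStep++;
        ShowOnly(currentStep); // past the last step = everything hidden
    }

    // Only the current target's GameObject is active, so its trigger is
    // the only one that can fire.
    void ShowOnly(int index)
    {
        for (int i = 0; i < stepTargets.Length; i++)
            stepTargets[i].gameObject.SetActive(i == index);
    }

    // Editor-only wireframe preview of every box; a standalone build would
    // need a LineRenderer or wireframe material on each box instead.
    void OnDrawGizmos()
    {
        if (stepTargets == null) return;
        Gizmos.color = Color.cyan;
        foreach (var box in stepTargets)
        {
            if (box == null) continue;
            Gizmos.matrix = box.transform.localToWorldMatrix;
            Gizmos.DrawWireCube(box.center, box.size);
        }
    }
}
```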
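For the curved-arrow stretch goal, one way to trace the arc is a LineRenderer following a quadratic Bezier from the current box to the next, bowed upward at the midpoint. Purely a sketch; arrowheads, color, and width are cosmetic details:

```csharp
// Illustrative sketch; field names are assumptions.
using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public class ArcHint : MonoBehaviour
{
    public Transform from;   // center of the just-triggered box
    public Transform to;     // center of the next box
    public float bow = 0.3f; // meters of upward bulge at the midpoint
    const int Segments = 24;
    LineRenderer lr;

    void Awake() => lr = GetComponent<LineRenderer>();

    void Update()
    {
        if (from == null || to == null) return;
        lr.positionCount = Segments + 1;
        Vector3 control = (from.position + to.position) * 0.5f + Vector3.up * bow;
        for (int i = 0; i <= Segments; i++)
        {
            float t = i / (float)Segments;
            // Quadratic Bezier: lerp between the two legs, then between the results.
            Vector3 p = Vector3.Lerp(Vector3.Lerp(from.position, control, t),
                                     Vector3.Lerp(control, to.position, t), t);
            lr.SetPosition(i, p);
        }
    }
}
```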
Once all of that is working well, we can feed in the remaining gestures and use the standalone app to fine-tune our detection logic for each gesture. Then the prefabs can be brought back into the Marshalling app for use there, and into the belt loader app to handle the banksman portion of the belt loader training.
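One data-driven possibility for the gesture-switching mechanism, assuming each step of a gesture reduces to a target box (local position and size) plus an optional baton orientation. The asset type and field names below are placeholders:

```csharp
// Sketch of a data-driven gesture definition; names are assumptions.
using UnityEngine;

[CreateAssetMenu(menuName = "Gestures/Gesture Definition")]
public class GestureDefinition : ScriptableObject
{
    [System.Serializable]
    public struct Step
    {
        public Vector3 localCenter; // box center relative to the rig
        public Vector3 size;        // box dimensions in meters
        public Vector3 batonEuler;  // required baton orientation, if any
    }

    public string gestureName = "Move Left";
    public Step[] steps;
}
```

Switching which gesture is demonstrated / detected would then amount to handing the detector and guide a different asset, and "feeding in the remaining gestures" becomes authoring more assets rather than writing more code.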