Moving object to target x,y,z triggers action in Apple Vision
under review
Sam Granleese
We're currently reviewing this and will share some prototype ideas in the coming month.
Sam Granleese
Merged in a post:
More interactive actions
Chris K
When using the Vision Pro, it would help to have something like a ghost image that cannot be moved by the user, plus area-trigger actions that fire when a piece is put in the correct place. I see so much potential, but I really want to boost interactivity, especially in training scenarios.
Maxim Cooper
"Snapping" locations would also help: snapping objects out of a location and back into their original location would give the user feedback that the action was performed correctly.
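To make the snapping idea concrete, here is a minimal sketch of the logic, assuming a simple distance check against a set of known snap locations. The `SnapPoint` type, `snapIfClose` function, and `snapRadius` parameter are all hypothetical names for illustration, not an existing Vision Pro or RealityKit API:

```swift
import Foundation

// Hypothetical 3D point type for illustration only.
struct SnapPoint {
    var x, y, z: Double
    func distance(to other: SnapPoint) -> Double {
        let dx = x - other.x, dy = y - other.y, dz = z - other.z
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }
}

// If the released object lands within `snapRadius` of a known snap location,
// snap it to the nearest one; otherwise leave it where the user dropped it.
func snapIfClose(_ dropped: SnapPoint, locations: [SnapPoint], snapRadius: Double) -> SnapPoint {
    let nearest = locations.min { dropped.distance(to: $0) < dropped.distance(to: $1) }
    if let nearest = nearest, dropped.distance(to: nearest) <= snapRadius {
        return nearest  // snap into place, confirming the action was performed correctly
    }
    return dropped  // no snap location close enough
}
```

The snap itself doubles as the feedback: the object visibly jumps into place only when the user got it right.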
Maxim Cooper
Supporting this through Custom Gestures would allow picking up a "large box" with two hands and a "small" tool with a single hand, for more natural interaction.
Sam Granleese
Hi Chris,
Can you confirm this is what you mean by "area trigger actions when a piece is put in the correct place"?
Use the "pull apart" action as the method for a training participant to move an object from its current x,y,z position on a step to a different target x,y,z position; if or when this proximity trigger occurs, go to the next step.
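The proximity trigger described above can be sketched as a boolean distance check against the target position, with the tolerance expressed as a radius. This is a hypothetical sketch under assumed names (`Vector3`, `proximityTriggerFired`, `tolerance`), not an existing API:

```swift
import Foundation

// Hypothetical 3D point type; a real implementation would likely use
// SIMD3<Float> or RealityKit transform types instead.
struct Vector3 {
    var x, y, z: Double
    func distance(to other: Vector3) -> Double {
        let dx = x - other.x, dy = y - other.y, dz = z - other.z
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }
}

// The "go to next step" trigger fires once the moved object is within
// `tolerance` of the target x,y,z position.
func proximityTriggerFired(object: Vector3, target: Vector3, tolerance: Double) -> Bool {
    return object.distance(to: target) <= tolerance
}
```

Checked once per frame (or on release), this is the whole trigger: the author of the training step only has to pick the target position and the tolerance radius.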
If this is correct, what would your accuracy tolerance of the target x,y,z be?
Chris K
Hi Sam,
Thanks for the response. I was thinking more along the lines of a target x,y,z with maybe an 80% accuracy tolerance. The idea led to my other suggestion of locking rotation in pull-apart, since otherwise focus would be taken away from the training process and put on rotating the object back to the right position.