Testing Scene Understanding/VR Modeling
This topic contains 5 replies, has 1 voice, and was last updated by Josh Stern on November 9, 2022 at 2:59 pm.
November 9, 2022 at 2:27 pm #124586

Josh Stern (Moderator)
Q: Why is the combination synergistic for research & applications?
A: There are uncountable applications for all of these areas:
a) improving machine vision/understanding
b) improving the common knowledge & representational base of VR to include more familiar human-built environments & natural scenes
c) improving common sense reasoning about everyday situations & possibilities
d) improving distributional knowledge about what is common/uncommon in different environments
e) sensing what holds audience interest
f) getting a sense of the interaction between language use & what is present or requested to happen in physical scenes.
g) improving smart video editing for meaning & focal interest and smart search.
h) evaluating how much scene context adds to speech-understanding accuracy
Compared to machine vision in the field with LIDAR and/or stereo cameras & structure from motion, video interpretation is harder. But it is easily available to human adults, & we believe that competency plays a supervisory role in everyday vision. Similar remarks apply to the difficulty of understanding isolated speech recordings in a video.
A process can gain efficiency by working on all these things simultaneously in a computational setting. It’s a setting that fits a large research collective better than a small academic group. Combining multiple areas of study in the same work framework & pre-processing setup improves efficiency.
November 9, 2022 at 2:37 pm #124587

Josh Stern (Moderator)
Looking at summary stats alongside good examples of “what went wrong” in a computational process helps researchers & engineers focus their efforts on overall quality improvement, with a justified sense of where algorithmic improvements are needed.
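A minimal sketch of that workflow, assuming each evaluated example is recorded with a correctness flag, a hand-assigned error category, & a model confidence score (all field names here are hypothetical):

```python
from collections import Counter

def error_report(records, worst_k=3):
    """Summarize error categories & surface the worst examples.

    records: list of dicts with 'id', 'correct' (bool),
    'category' (failure type, or None), 'confidence' (model score).
    """
    errors = [r for r in records if not r["correct"]]
    # Summary stats: overall error rate & counts per failure category.
    stats = {
        "error_rate": len(errors) / len(records),
        "by_category": Counter(r["category"] for r in errors),
    }
    # "What went wrong": the most confident wrong answers are often
    # the most instructive examples to inspect by hand.
    worst = sorted(errors, key=lambda r: -r["confidence"])[:worst_k]
    return stats, worst

records = [
    {"id": 1, "correct": True,  "category": None,        "confidence": 0.90},
    {"id": 2, "correct": False, "category": "occlusion", "confidence": 0.80},
    {"id": 3, "correct": False, "category": "blur",      "confidence": 0.60},
    {"id": 4, "correct": False, "category": "occlusion", "confidence": 0.95},
]
stats, worst = error_report(records)
print(stats["error_rate"])                   # 0.75
print(stats["by_category"].most_common(1))   # [('occlusion', 2)]
print([r["id"] for r in worst])              # [4, 2, 3]
```

The pairing matters: the stats say which category dominates, & the worst examples say why.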
November 9, 2022 at 2:43 pm #124588

Josh Stern (Moderator)
For an example of interest to robotics – it’s relatively straightforward to build a catalog of videos with falls & crashes & a feature-similar set where they don’t happen, & then test whether machine prediction is at the level of human prediction.
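That test could be sketched as below, assuming each clip carries a binary human & machine prediction of an upcoming fall/crash plus the ground-truth outcome (the toy data & names are hypothetical):

```python
def accuracy(preds, outcomes):
    """Fraction of clips where the prediction matched the outcome."""
    return sum(p == o for p, o in zip(preds, outcomes)) / len(outcomes)

# Toy catalog: 1 = the clip ends in a fall/crash, 0 = it doesn't.
outcomes = [1, 1, 0, 0, 1, 0]
human    = [1, 1, 0, 0, 1, 1]   # human foresight, from video alone
machine  = [1, 0, 0, 1, 1, 1]   # model predictions on the same clips

human_acc = accuracy(human, outcomes)      # 5/6
machine_acc = accuracy(machine, outcomes)  # 3/6
# The gap is the benchmark: is machine prediction at human level?
print(f"human={human_acc:.2f} machine={machine_acc:.2f} "
      f"gap={human_acc - machine_acc:.2f}")
```

The feature-similar negative set is what keeps the score honest – otherwise a model could predict "crash" from superficial cues.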
November 9, 2022 at 2:56 pm #124589

Josh Stern (Moderator)
The platform also provides a firm base for the drone surveillance work to add computational understanding from unfamiliar vantage points.
November 9, 2022 at 2:59 pm #124590

Josh Stern (Moderator)
For example, signal can simultaneously be collected from a drone at high altitude with video & a low-flying drone in a focal area of interest with LIDAR & structure from motion. The detail from the low-flying one can be enhanced with the knowledge from the video platform, & the result can be used as a teaching & evaluation signal for the high view.
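A rough sketch of that teaching/evaluation signal, assuming the refined low-altitude output arrives as per-object positions & the high-view model emits detections in the same shared ground coordinate frame (every name & number here is hypothetical):

```python
import math

def match_score(pseudo_labels, high_view_dets, max_dist=2.0):
    """Score high-view detections against refined low-view labels.

    pseudo_labels / high_view_dets: lists of (x, y) positions in a
    shared ground frame. Returns the fraction of pseudo-labels
    matched by some high-view detection within max_dist metres --
    usable as an evaluation score, or as a training signal for the
    high-altitude model.
    """
    matched = 0
    unused = list(high_view_dets)
    for px, py in pseudo_labels:
        best = min(unused,
                   key=lambda d: math.hypot(d[0] - px, d[1] - py),
                   default=None)
        if best and math.hypot(best[0] - px, best[1] - py) <= max_dist:
            matched += 1
            unused.remove(best)  # each detection matches at most one label
    return matched / len(pseudo_labels)

# Refined low-altitude labels (LIDAR + structure from motion)...
pseudo = [(0.0, 0.0), (10.0, 5.0), (20.0, 1.0)]
# ...versus what the high-altitude video model detected.
high = [(0.5, 0.3), (10.8, 4.6), (40.0, 40.0)]
print(match_score(pseudo, high))  # 2 of 3 labels recovered from the high view
```

The same score works in both roles the post mentions: as an evaluation metric for the high view, or as a supervisory target to push the high-view model toward the low-altitude detail.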