Use VR/Simulations To Engineer Genie Low Level Sensing


This topic contains 4 replies, has 1 voice, and was last updated by Josh Stern December 26, 2022 at 11:01 pm.

  • Author
    Posts
  • #125678

    Josh Stern
    Moderator

    Formulating the formal task as the ability to distribute probability of a match across various minimal-contrast query samples may be a particularly efficient way to examine sensor performance. See the literature on “scoring rules” & related topics.
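
    As a minimal sketch of the idea, assume a sensor model reports a probability distribution over a small set of minimal-contrast candidates, one of which is the true match. Two standard proper scoring rules, the Brier score and the log score, then rank a sharper, correct distribution above a diffuse one. The candidate distributions below are made-up illustrative numbers, not data from any real sensor.

    ```python
    import math

    def brier_score(probs, true_idx):
        """Brier score of a distribution over candidates (lower is better)."""
        return sum((p - (1.0 if i == true_idx else 0.0)) ** 2
                   for i, p in enumerate(probs))

    def log_score(probs, true_idx, eps=1e-12):
        """Negative log probability of the true candidate (lower is better)."""
        return -math.log(max(probs[true_idx], eps))

    # Two hypothetical sensors answering the same minimal-contrast query;
    # candidate 0 is the ground-truth match in the simulation.
    sharp = [0.9, 0.05, 0.05]    # confident and correct
    diffuse = [0.4, 0.3, 0.3]    # spreads probability widely
    print(brier_score(sharp, 0), brier_score(diffuse, 0))
    print(log_score(sharp, 0), log_score(diffuse, 0))
    ```

    Both rules are "proper": a sensor minimizes its expected score only by reporting its true beliefs, which is what makes them useful as an evaluation target here.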

  • #125679

    Josh Stern
    Moderator

    The perspective here is philosophical & general for research in the AI sensing area, but it is not without application. Consider our drone surveillance problem. Pick the scenarios where you are currently seeing the weakest or most inefficient performance & work them up to an adequate level of detail in VR/simulation. The engineering analysis should be able to describe what the different types of optimal performance look like for different hardware configurations & VR modeling assumptions. Generalizing, we should be able to add some devops efficiency to the refinement process.

  • #125685

    Josh Stern
    Moderator

    Applications of low-level sensing deal with the problem of linking possible sensor readings of [persistent OBJECTS/coherences, with categorical or unique features] across different frames of sensation, proprioception, & external reference. We propose that work in this area is parameterized by particular approaches to doing this. At a very low level, each receptor channel & location might divide its time series into centered patterns of “on” and boundary regions of “off”, with some other shape/amplitude parameters. These “molehills” have a neighborhood structure that can be viewed as a graph. Across time & space, the inverse problem of perception implicitly includes estimates of “in what way, if any, are these pairs of situated graphs perceptions of the same or a similar type of OBJECT/coherence?” Again, a fully controlled VR simulation/distribution of events allows the performance of the Deep Learning + innately structured style of a particular matching methodology to be compared and optimized against a ground truth of correctness and efficient performance.
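
    A toy version of the per-channel step above, under simplifying assumptions: treat one receptor channel as a 1-D series, threshold it into “on” intervals with a peak amplitude, and link intervals whose “off” gap is short into a neighborhood graph. The threshold, gap size, and (start, end, peak) encoding are all hypothetical choices for illustration, not the proposed representation.

    ```python
    def on_segments(series, threshold=0.5):
        """Split one receptor channel's time series into 'on' intervals.

        Returns (start, end, peak_amplitude) per interval -- a crude
        stand-in for the centered on-patterns described above.
        """
        segs, start = [], None
        for t, v in enumerate(series):
            if v >= threshold and start is None:
                start = t
            elif v < threshold and start is not None:
                segs.append((start, t - 1, max(series[start:t])))
                start = None
        if start is not None:
            segs.append((start, len(series) - 1, max(series[start:])))
        return segs

    def neighborhood_graph(segs, max_gap=2):
        """Edges between on-patterns separated by at most max_gap 'off' steps."""
        return [(i, i + 1) for i in range(len(segs) - 1)
                if segs[i + 1][0] - segs[i][1] - 1 <= max_gap]

    sig = [0, 0.9, 0.8, 0, 0, 0.7, 0, 0, 0, 0.6, 0.9, 0]
    segs = on_segments(sig)
    print(segs)                      # [(1, 2, 0.9), (5, 5, 0.7), (9, 10, 0.9)]
    print(neighborhood_graph(segs))  # [(0, 1)] -- gap of 2 links; gap of 3 does not
    ```

    The matching question in the post then becomes graph comparison: how similar are two such situated graphs, taken from different frames, as evidence of the same OBJECT/coherence?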

  • #125686

    Josh Stern
    Moderator

    The right kind of thought experiment doesn’t use the IEEE style of “optimization”. It’s more like this:

    As a thought experiment, imagine

    a) A set of VR Simulations that includes different kinds of dynamic events & news, & initial positions of the sensing agent. Pick a given level of sensor design for the inverse problem of recovering information about the particular VR events being sensed at a given time/location. Pick a metric for evaluation of solutions.

    b) Can you use the problem data from a) to automatically construct a set of sensors/processing/behavior to create good candidate solutions that seem worth evaluating at the given level of sensor design? (“levels” here are à la the Deep Learning conception.)

    c) Say that a compute engine returns the best set of candidate designs from b) as a multi-criteria optimization. Are there algorithms to further refine these solutions?

    d) Given a particular emergent candidate, what grey areas leave you most uncertain about achieving matching performance on a similar real world problem? Are there further refinements to the VR data set or to the other methods to promote robust chances for that match?
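
    Step c) can be made concrete with Pareto dominance: keep only candidate designs that no other candidate beats on every criterion. Below is a minimal sketch, where each candidate is a hypothetical (error, latency) pair with lower being better on both axes; the numbers are invented for illustration.

    ```python
    def dominates(a, b):
        """a dominates b if a is no worse on every criterion and strictly
        better on at least one (all criteria are costs: lower is better)."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(candidates):
        """Keep candidate designs not dominated by any other candidate."""
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o is not c)]

    # Hypothetical (error, latency) scores for four candidate sensor designs:
    designs = [(0.10, 50), (0.08, 80), (0.10, 60), (0.20, 40)]
    print(pareto_front(designs))  # (0.10, 60) drops out: beaten by (0.10, 50)
    ```

    The surviving front is exactly the “best set of candidate designs” a compute engine would hand to the refinement algorithms asked about in c).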

    There’s a weakly structured “in principle, this is doable” claim at each of these steps. How can a team of researchers be used to chip away at the cruft with concrete, specific methods that work well?

    By design, we have created artificial situations where “it’s all math,” but the math is still too messy to compute with. One of the most helpful frameworks for thinking about it is to compare the messy problem at hand with a classic multi-variable discriminant analysis problem. All of the following differences emerge:

    a) Data is a complex, drifting, multivariate time stream vs. clean sets of collated numbers.

    b) There are agent action spaces, & the results of actions affect everything else. Optimizing actions is part of the problem and part of the data space, and it creates dependencies spanning temporal & spatial coordinates.

    c) In any given situation, different portions of sensor space become more or less relevant to the intended discriminations. Conceptually, we need to solve a problem of focusing/relevance weighting as part of discrimination.

    d) The dimensionality is very high and we don’t have a canonical theory of how to best reduce it.

    e) Cross-Validation/Leave-one-out/Bootstrap are easy to understand in our classical problem. We have to work out how we want to take advantage of those ideas to test solution stability in the VR/simulation environment.
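
    One plausible adaptation of e) to the VR setting, sketched under the assumption that data arrives as whole simulated episodes: hold out entire episodes rather than individual frames, so temporally correlated frames never straddle the train/test boundary. The episode data below is a made-up placeholder for per-frame scores from VR runs.

    ```python
    def leave_one_episode_out(episodes):
        """Yield (train, held_out) splits at episode granularity, so
        temporally dependent frames stay on one side of the split."""
        for i in range(len(episodes)):
            train = episodes[:i] + episodes[i + 1:]
            yield train, episodes[i]

    # Hypothetical episodes: each inner list is per-frame scores from one VR run.
    episodes = [[0.9, 0.8], [0.7, 0.6, 0.8], [0.95]]
    for train, held in leave_one_episode_out(episodes):
        mean_held = sum(held) / len(held)
        print(len(train), "training episodes; held-out mean", round(mean_held, 3))
    ```

    The same episode-level splitting extends naturally to a blocked bootstrap: resample episodes (not frames) with replacement to estimate the stability of a candidate design's score.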

    I believe that the approaches will be on a helpful track if they provide some reasonable solutions for issues a)-e) above. Look for leadership in that form.
