Learning Control With Multi-Dimensional Error


This topic contains 3 replies, has 1 voice, and was last updated by josh January 8, 2022 at 7:50 pm.

  • #108860

    josh

    Interesting to speculate that this may be a common human algorithm that fails to be optimal only in certain special conditions. Consider, for example, the 100m sprinter who learns to fall forward in an unusual way at the starting blocks, because that turns out to give the best performance under the special payoff of race competition & the unusual burst of acceleration that follows the starting lurch when everything goes right for the sprinter. It is a counterexample, but one that supports the overall view.

  • #108873

    josh

    The intuitive algorithm I propose can also be seen as analogous to Deep Learning on a global error signal with some kind of penalty-barrier term added to the error function, used to improve the “smartness” of the learning by exploiting the spatial continuity of real-world geometry and inertial-mass dynamics. Another angle for research is how best to anticipate & use such tactics for Deep Learning in situations where they would help.
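
    For concreteness, here is a rough sketch in Python (all names, limits, and weights below are made up for illustration, not part of the original idea) of what such a combined error might look like: an ordinary task error plus a log-barrier penalty that grows sharply as the state approaches the physical limits of the inertial-mass system.

        import numpy as np

        def task_error(prediction, target):
            # Global error signal: squared distance between predicted and target state.
            return np.sum((prediction - target) ** 2)

        def barrier_penalty(state, lower, upper, weight=1.0, eps=1e-6):
            # Log-barrier that grows without bound as the state approaches its
            # physical limits (e.g. joint ranges of an inertial-mass system).
            margin_lo = np.clip(state - lower, eps, None)
            margin_hi = np.clip(upper - state, eps, None)
            return -weight * np.sum(np.log(margin_lo) + np.log(margin_hi))

        def total_loss(prediction, target, state, lower, upper):
            # The learner descends this combined signal, so gradients steer it
            # away from the limits well before a hard constraint is violated.
            return task_error(prediction, target) + barrier_penalty(state, lower, upper)

        # Example: a 2-D state comfortably inside its allowed range.
        state = np.array([0.2, -0.1])
        print(total_loss(np.array([0.5, 0.5]), np.array([0.4, 0.6]), state,
                         lower=np.array([-1.0, -1.0]), upper=np.array([1.0, 1.0])))

    The barrier term is one way the spatial-continuity prior could be baked directly into the error signal the network trains on.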

    • #108876

      josh

      Deep Learning with a fixed network doesn't start with either the benefit or the limitations of independent subspaces; it carries many more cross-product terms from the outset. Cross-product terms can also be added to the subspace algorithm to refine the results as it gets closer to an optimal solution. In general we expect faster convergence from models with fewer parameters & more decomposition, and finer control around a solution (given lots of data) from more complex models.
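
      As a toy illustration (hypothetical data, plain least squares rather than a deep network), the two-stage idea might look like this: fit the independent per-dimension terms first, then fit a cross-product term only to the residual, as a refinement near the solution.

          import numpy as np

          rng = np.random.default_rng(0)

          # Hypothetical 2-D control data whose true mapping has a small cross term.
          X = rng.uniform(-1.0, 1.0, size=(200, 2))
          y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1]

          # Stage 1: independent subspaces only (one weight per dimension, no coupling).
          w_indep, *_ = np.linalg.lstsq(X, y, rcond=None)

          # Stage 2: refine near the solution by fitting a cross-product term to the
          # residual that the independent-subspace model leaves behind.
          residual = y - X @ w_indep
          cross = (X[:, 0] * X[:, 1])[:, None]
          w_cross, *_ = np.linalg.lstsq(cross, residual, rcond=None)

          print("independent-subspace weights:", w_indep)
          print("cross-term refinement:       ", w_cross)

      The first stage has few parameters & converges quickly; the cross terms are only brought in once the model is already near a good solution, which matches the trade-off described above.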
