Networks of Computers, Sensors, Robotic Drones, & People
September 2, 2021 at 3:26 am #99892

josh
In a visual scene recognition setting…
Say that my AI/CV mind can look at regions of interest (still image or video) & consider, indirectly, a large set of hypotheses for the model of things/events/light sources etc. that generated that image region. Say that it can unconsciously imagine the best matching attempt for each of those hypotheses and pick the Bayes MAP choice, or a small set of choices. The image features that do the most work in differentiating the likely candidates from the rest of the also-rans are the key ones for data compression. The right sort of data compression algorithm should be good at saying "these features were found & this other large set were not observed". Drone control wants to know which investigation movements would be most valuable for resolving the key discrepancies, so modeling the accessibility patterns of the feature observation space is also likely to be important.
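Here is a minimal sketch of how I picture that loop, assuming discrete hypotheses, binary image features, and a "movement" that simply reveals one not-yet-observed feature. Everything named below (Hypothesis, the truck hypotheses, the feature names, the probabilities) is invented for illustration, not an actual recognition system.

```python
import math
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    prior: float
    feature_probs: dict[str, float]  # p(feature observed | this hypothesis)

def posterior(hypotheses, observed, absent):
    """Bayes update from 'these features were found & this other set were not'."""
    scores = []
    for h in hypotheses:
        log_p = math.log(h.prior)
        for f in observed:
            log_p += math.log(h.feature_probs.get(f, 1e-6))
        for f in absent:
            log_p += math.log(1.0 - h.feature_probs.get(f, 1e-6))
        scores.append(log_p)
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    return [w / z for w in weights]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_information_gain(hypotheses, observed, absent, feature):
    """Value of a movement that would reveal whether `feature` is present."""
    post = posterior(hypotheses, observed, absent)
    h_now = entropy(post)
    p_found = sum(p * h.feature_probs.get(feature, 1e-6)
                  for p, h in zip(post, hypotheses))
    h_if_found = entropy(posterior(hypotheses, observed | {feature}, absent))
    h_if_absent = entropy(posterior(hypotheses, observed, absent | {feature}))
    return h_now - (p_found * h_if_found + (1.0 - p_found) * h_if_absent)

# Two look-alike hypotheses: only one feature does real differentiating work.
hypotheses = [
    Hypothesis("parked truck", 0.5,
               {"shadow": 0.9, "wheels": 0.95, "hot_exhaust": 0.1}),
    Hypothesis("idling truck", 0.5,
               {"shadow": 0.9, "wheels": 0.95, "hot_exhaust": 0.8}),
]
post = posterior(hypotheses, observed={"shadow"}, absent=set())
map_choice = max(zip(post, hypotheses), key=lambda t: t[0])[1].name
best_probe = max(["wheels", "hot_exhaust"],
                 key=lambda f: expected_information_gain(
                     hypotheses, {"shadow"}, set(), f))
print(map_choice, best_probe)  # the chosen probe is the discriminating feature
```

In this toy version the "key features for compression" and the "most valuable investigation movement" fall out of the same quantity: the expected reduction in uncertainty about which hypothesis generated the region.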
September 2, 2021 at 3:11 pm #99918

josh
Theoretical mathematics work on the convergence of empirical process estimators tends to be more general than the ML literature on learnability, while being less accessible to conceptual pictures in engineering. Learnability theories for these sorts of economical recognition activities might be obtained by considering things like epsilon covers of utility functions on mappings between metric spaces. Can we relate the theoretical results to performance on learning relevant VR simulations? Do they match the complexity & performance in reality? If not, how do we fix that? That theory should be able to play a constructive role.
See references & summary:
http://www.stat.columbia.edu/~bodhi/Talks/Emp-Proc-Lecture-Notes.pdf
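To make the epsilon-cover idea slightly more concrete, here is the generic covering-number route, written in my own notation rather than anything taken from the linked notes: let U be a class of [0,1]-valued utility functions u_g indexed by mappings g between metric spaces, and control the empirical utilities uniformly over U through its covering number.

```latex
% Covering number of the utility class in the sup norm (illustrative notation):
\[
  N(\epsilon,\mathcal{U},\|\cdot\|_\infty)
    = \min\Bigl\{\, |\mathcal{U}_0| \;:\;
        \forall u \in \mathcal{U}\ \exists u_0 \in \mathcal{U}_0,\
        \|u - u_0\|_\infty \le \epsilon \,\Bigr\}
\]
% A union bound over an epsilon-cover plus Hoeffding's inequality
% (u bounded in [0,1], X_1,...,X_n i.i.d.) gives the usual uniform convergence statement:
\[
  \Pr\!\Bigl( \sup_{u \in \mathcal{U}}
      \Bigl| \tfrac{1}{n}\sum_{i=1}^{n} u(X_i) - \mathbb{E}\,u(X) \Bigr| > 3\epsilon \Bigr)
  \;\le\; 2\, N(\epsilon,\mathcal{U},\|\cdot\|_\infty)\, e^{-2 n \epsilon^2}
\]
```

A learnability statement for the recognition task would then hinge on how fast the covering number grows as the mappings between metric spaces get richer, which is where the empirical-process machinery should connect to the engineering picture.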
September 2, 2021 at 3:35 pm #99919

josh
What is the mathematical form of the drone decision optimization problem?
There are situations in the form of high-dimensional scenes drawn from some unknown distribution/process. Within the scene are sub-parts drawn from other processes that can be reasoned about in isolation or with low-dimensional connections to the broader process. There is a kind of bandit problem in deciding whether to poke around & gather more views of the current scene, or whether to initiate other actions like communicating reports. The complexity of the optimization is related to how likely it is that there are very similar-looking things that require dramatically different actions. Can more views tell them apart?
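As a hedged sketch of the "gather more views vs. act now" piece, here is a one-step (myopic) value-of-information calculation; the hypotheses, utilities, sensor likelihoods, and view cost are invented for illustration, and the real problem is sequential (closer to a POMDP, or a bandit with action and switching costs) rather than one-step.

```python
def expected_utility_act_now(belief, utility):
    """Best immediate action under the current belief over scene hypotheses."""
    return max(sum(p * utility[a][h] for h, p in belief.items())
               for a in utility)

def expected_utility_after_view(belief, utility, likelihood, view_cost):
    """Average over possible view outcomes of the best action taken afterwards."""
    total = 0.0
    for outcome, p_outcome_given_h in likelihood.items():
        # Predictive probability of this outcome, then the updated belief.
        p_outcome = sum(belief[h] * p_outcome_given_h[h] for h in belief)
        if p_outcome == 0.0:
            continue
        updated = {h: belief[h] * p_outcome_given_h[h] / p_outcome
                   for h in belief}
        total += p_outcome * expected_utility_act_now(updated, utility)
    return total - view_cost

# Two similar-looking hypotheses that demand dramatically different actions.
belief = {"benign": 0.7, "threat": 0.3}
utility = {"report": {"benign": -5.0, "threat": 10.0},
           "ignore": {"benign": 0.0, "threat": -20.0}}
# How a closer view would come out under each hypothesis.
likelihood = {"hot_signature": {"benign": 0.1, "threat": 0.8},
              "cold_signature": {"benign": 0.9, "threat": 0.2}}

act_now = expected_utility_act_now(belief, utility)
one_more_view = expected_utility_after_view(belief, utility, likelihood,
                                            view_cost=0.5)
print("take another view" if one_more_view > act_now else "act now")
```

The interesting regime is exactly the one described above: when look-alike scenes map to very different best actions, another view has positive value; when every plausible hypothesis calls for the same action, the extra view is pure cost.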
September 2, 2021 at 3:36 pm #99920

josh"I have worked at 3 other broadcast companies and ESPN, by far, is my best experience. The culture at our company is unique, with employees being considered the greatest asset." – Gerard, Remote Traffic Coordinator, ESPN pic.twitter.com/h9kBeQK4G4
— Disney Careers (@DisneyCareers) September 2, 2021
September 2, 2021 at 5:03 pm #99922

josh
September 2, 2021 at 5:04 pm #99924

josh
prior to 2017
https://www.linkedin.com/in/jennifer-martin-news-manager