AI Language for Visual Expressions & Clarifications of Thought


  • #73016

    josh

    Within various projects to boost NLP mastery, we want researchers to be able to use a key diagnostic like this:

    “Show me a typical situation that features X, Y, Z.” By varying the elements and checking that a correct & typical diagram is produced, researchers can quickly assess positive elements of machine NLP understanding in their domain. If generation fails, the diagram is incorrect, or it is atypical, that will also be quickly informative. This capability will dramatically expand the group of people who can usefully contribute to boosting NLP understanding/teaching/mastery.
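
    A minimal sketch, in Python, of how such a diagnostic loop might be scripted. The generate_scene and judge callables are hypothetical placeholders (whatever system renders the diagram, and whatever check, human or scripted, scores it), not calls to any real library:

    def probe(generate_scene, element_sets, judge):
        """Vary the requested elements and score each produced diagram."""
        results = []
        for elements in element_sets:
            prompt = "Show me a typical situation that features " + ", ".join(elements) + "."
            scene = generate_scene(prompt)    # system under test renders a diagram
            verdict = judge(scene, elements)  # correct? typical? which element failed?
            results.append((elements, verdict))
        return results

    The whole probe is readable by a domain expert with no ML background, which is the point of the capability described above.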

  • #73149

    josh

    Psychologically, the most parsimonious descriptions of form are usually expressed in terms of deviations/edits to a known prototype. For example: 3D model + editType1 + editType2 + view (a toy encoding is sketched at the end of this post). Cartooning often highlights the edits by exaggerating the deltas.

    Human vision also recognizes form in images by relating parts/compositions to the most similar past views. The vision system asks whether the current form is a modification of a known object, or whether there is evidence of a fundamental variation (a big hole in the middle or a missing part, not merely an occluded one).
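
    Here is that toy encoding of prototype + edits + view, in Python; every name is invented for illustration and does not refer to any existing library or dataset:

    from dataclasses import dataclass, field

    @dataclass
    class Edit:
        kind: str         # e.g. "stretch_neck", "remove_part"
        magnitude: float  # the delta; a cartoon would exaggerate this value

    @dataclass
    class FormDescription:
        prototype: str               # identifier of the known prototype model
        edits: list = field(default_factory=list)
        view: str = "three-quarter"  # viewpoint the description is anchored to

    # e.g. a giraffe described as a horse-like prototype with an exaggerated neck
    giraffe = FormDescription("horse", [Edit("stretch_neck", 3.0)], view="side")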

    • #73150

      josh

      Psychologically, the chosen deformations are the ones that do the best job of preserving noticed features under a bijective transformation. Adding or extinguishing noticed features is costly. Extremities map to extremities, bumps to bumps, etc., with the comparison accounting for all of these costs.
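
      A small Python sketch of that cost, with made-up weights. Here features_a and features_b map feature names to kinds (e.g. "extremity", "bump"), and mapping is a candidate correspondence from features of form A to features of form B:

      ADD_OR_DROP_COST = 5.0    # creating or extinguishing a noticed feature
      TYPE_MISMATCH_COST = 2.0  # e.g. an extremity mapped onto a bump

      def correspondence_cost(mapping, features_a, features_b):
          cost = ADD_OR_DROP_COST * len(set(features_a) - set(mapping))            # extinguished
          cost += ADD_OR_DROP_COST * len(set(features_b) - set(mapping.values()))  # added
          for a, b in mapping.items():
              if features_a[a] != features_b[b]:  # extremities should map to extremities
                  cost += TYPE_MISMATCH_COST
          return cost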

  • #73558

    josh

    For learning & reasoning with language, it’s most efficient for the ML to learn the relationships between text <=> content & content1 <=> content2 while styled representations help with learning to map visual images to representations that connect to language in a compositional way, as opposed to just associations for image parts or objects. It should be demonstrable that correct rules/expressions for compositionality of meaning have much shorter expression in the space of logical context than they would in the space of text. Trying to capture what makes an answer to a science question correct should be many orders of magnitude more concise in the space of logical content compared to some kind of tensor flow operating on associations of text. We are trying to help the machine out with some sort of innate structure.
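
    A toy Python illustration of that conciseness claim; the triples below are invented for this post. The point is only that the correctness rule is one short line over logical content, whereas the same rule stated over raw text would have to enumerate endless paraphrases:

    knowledge = {("causes", "heating water", "evaporation"),
                 ("causes", "cooling water", "freezing")}

    def is_correct(answer):
        # one short rule in the space of logical content
        return ("causes", "heating water", answer) in knowledge

    print(is_correct("evaporation"))  # True
    print(is_correct("freezing"))     # False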
