The Agent/OS/System/MetaTheory Level Above KR/Reasoning/Estimating Boxes


This topic contains 5 replies, has 1 voice, and was last updated by  josh August 14, 2022 at 7:15 pm.

  • #120236

    josh

    The Agent OS systems support modeling/inference/reasoning by creating useful maps from their data, experience & theory to an informatics structure of cases needed for modeling & inference.

    Note that we use informatics & “case” here at the level of analysis, without prejudice about how they are implemented as a computing system or about which types of analysis are useful at in-between levels – i.e. “neural computing” is not a relevant yes-or-no issue at our level of focus here.

    We can think of the bridges to cases in this way:

    Raw Events – are noticed/recorded by the agent’s sensorium during a given subjective time interval.

    Embellished Raw Events – A level of processing is applied to the raw events. Tags are added – theory about what is going on is added – e.g. beliefs about location, temperature, pressure, gravity, the holiday calendar, the day of the week, etc.

    Raw Event Pattern Families – Systems of classification & embellishment that group Embellished Raw Events. Some groupings are based on constant values of predicates. Other groupings are based on our ability to control case conditions – e.g. I made sure it wasn’t too windy & I acted to prevent any of the canisters from tipping over during the event.

    Prime Candidate Patterns – Within Raw Event Pattern Families and their application to streams of Embellished Raw Events, we may notice both of these “folk experiment” types:

    Repeated Event Types that We Directly Cause (subject to case limitations – i.e. we have a dry indoor area, or it isn’t raining too hard)

    Repeated Event Types that we Observe as They Happen Sometimes – i.e. we can’t directly generate them or make as many as we want but we can notice them & perhaps predict where & when they are plentiful.

    Golden Event Types – decision-theoretic utility, in the most general & abstract sense, should drive consideration of which event types the agent is natively hard-wired to notice and which event types are acquired for use in processing. The most useful event types are involved in decision support for decisions that matter to gain/loss & to minimizing catastrophic risk, and in information-processing support that helps the system monitor & understand/make correct predictions about the environments it encounters.

    An Informatics Learning Theory – can describe the native structure of a system, the types of environments it encounters, and adaptive learning strategies that lead to useful decision making & modeling, often working with formal/VR models that consider repeatable sequences of cases occurring within Golden Event Types. These theories can, and often should, have informal elements that are not covered by the formal elements, because we wish to apply “intelligent agents” to many environments where complete modeling & levels of stability for patterns of interest are unknown. Questions of what is heuristically working well & why will continue to be interesting alongside completely formalizable models. The formation of event structures per se only happens with the aid of some sort of native predisposition to do so. The ability to use information from language & linguistic sources we provisionally trust has a potentially large impact on learning trajectories & success/failure. The truth & stability of linguistic claims is also something which can & should be checked by some systems.
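    The bridge from Raw Events through Embellished Raw Events to Raw Event Pattern Families can be sketched in code. This is a minimal illustration, not an implementation from the post: all names (RawEvent, embellish, pattern_family) and the tag/predicate representation are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class RawEvent:
        """What the agent's sensorium noticed/recorded in a subjective time interval."""
        t_start: float
        t_end: float
        sensor_readings: dict

    @dataclass
    class EmbellishedEvent:
        """A raw event with a level of processing applied: tags/theory attached."""
        raw: RawEvent
        tags: dict

    def embellish(ev: RawEvent, theory: dict) -> EmbellishedEvent:
        # "theory" maps a tag name to a belief function over the raw event,
        # e.g. beliefs about location, temperature, day of the week, etc.
        tags = {name: belief(ev) for name, belief in theory.items()}
        return EmbellishedEvent(raw=ev, tags=tags)

    def pattern_family(events, key_predicates):
        """Group embellished events by constant values of the given predicates."""
        families = {}
        for ev in events:
            key = tuple(ev.tags.get(p) for p in key_predicates)
            families.setdefault(key, []).append(ev)
        return families
    ```

    Under this sketch, Prime Candidate Patterns would be noticed as recurring keys inside one family’s stream of cases.
    
    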

  • #120237

    josh

    Q: What are you really saying about how this view of Agent/OS can drive useful engineering work?

    A: I’m saying that you should be able to take your examples of interesting robots/agents which do some reasoning and/or some learning and conceptually work them into this framework. You should be able to take your interesting examples of ML/data mining & situate them within this framework. You should be able to take examples of case-based reasoning & locate them within this framework. There may be some modifications or oversights to fix; I’m confident that can be done. Seeing this will give insights into a useful format for the software architecture of agents, possibly cloud-connected to data mining/news/updates etc., where we expect to get good value from software & analysis reuse & quick expansion of capabilities that are shared across many domains & many problems. If the informatics & analysis have this flexibility, we can capture existing state-of-the-art capability with efficient future engineering growth. That’s the plan & hope.

  • #120238

    josh

    Computing task analysis typically works with a model of activity that develops solutions for this sequence: a task is presented, computing is done until all task variations are finished, and then a solution summary is output.

    A scientific laboratory does not work with the model of exploring all reasonable theory variants. Resources for that are never available.

    Consider the use of computing models that have different roles/slots for different components of larger theories/models which are continually being developed. Candidate versions for sub-parts arrive in some sequence and the most promising ones may be kept on a queue for examination/testing when that looks like a good use of idle resources. The theoretical model is never “finished” in every respect. But it gets better over time – at least to the extent that historical trends and phenomena are stable. Signs of ahistorical change may prompt more urgent evaluations (“do I have a broken leg?”)
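    The never-finished model above can be sketched as a bounded pool of candidate sub-part versions, ranked by how promising they look, with the best one pulled for examination when resources are idle. The class name, capacity, and the scalar "promise" score are all illustrative assumptions.

    ```python
    import heapq

    class CandidateQueue:
        """Bounded pool of candidate versions of a theory sub-part."""

        def __init__(self, capacity=10):
            self.capacity = capacity
            self._heap = []   # min-heap of (promise, seq, candidate)
            self._seq = 0     # tie-breaker preserving arrival order

        def submit(self, candidate, promise):
            """Keep a candidate only if it is among the most promising seen."""
            entry = (promise, self._seq, candidate)
            self._seq += 1
            if len(self._heap) < self.capacity:
                heapq.heappush(self._heap, entry)
            else:
                # push the new entry, then discard whichever is least promising
                heapq.heappushpop(self._heap, entry)

        def next_to_examine(self):
            """Most promising candidate, for testing during idle resources."""
            if not self._heap:
                return None
            best = max(self._heap)        # tuple comparison: promise first
            self._heap.remove(best)
            heapq.heapify(self._heap)
            return best[2]
    ```

    The model is never “finished”: `submit` keeps running as new candidate versions arrive, and examination happens opportunistically; an urgent signal of ahistorical change would simply jump the queue.
    
    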

  • #120239

    josh

    Understanding new theory is hard. For state-of-the-art AI, understanding the daily news is hard. Both new theories & daily news could be highly relevant to the operation of even a modest robot or online agent working at a long-running service task. Theory & practice need to find bridges from complex linguistic descriptions of world events & theories to the operating representations of even modest agents. There are probably many transformative steps involved.

    We are saying that limited box models of VR/theory with boundaries & example cases & prob/stat statements about distributions of cases can be part of a mediating digital format for understanding theory & news. But seeing where the boundary conditions are & where we move from one sort of modeling situation to a very different one is part of understanding & part of commonsense reasoning/planning.

    We think that procedural warehousing at the informatics level is at least an equal partner in juggling these different digital boxes & we think the system of juggling is necessary for language & theory understanding & for agents with human like intelligence.
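    A limited box model, as described above, can be sketched as a boundary predicate plus example cases plus prob/stat statements about case distributions; "understanding" then includes noticing which box a new case falls into. The names (BoxModel, route) and the dict-based case representation are assumptions for illustration.

    ```python
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class BoxModel:
        """A limited box of VR/theory: boundary, example cases, case statistics."""
        name: str
        boundary: Callable[[dict], bool]              # does a case fall inside?
        example_cases: list = field(default_factory=list)
        case_stats: dict = field(default_factory=dict)  # e.g. {"p_rain": 0.2}

        def covers(self, case: dict) -> bool:
            return self.boundary(case)

    def route(case, boxes):
        """Part of commonsense reasoning: notice which modeling situation(s)
        a case belongs to, and hence when we cross from one box to another."""
        return [b.name for b in boxes if b.covers(case)]
    ```

    When `route` returns a different set of boxes than it did for the previous case, we have moved from one sort of modeling situation to a very different one.
    
    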

  • #120275

    josh

    One kind of issue that is tricky to get right at the OS level is the difference between box boundaries & boundaries of individuation of objects in the box.

    Examples:

    a) And then the Sun exploded – hits a box boundary
    b) If we zoom in 100,000x, we don’t see big objects anymore – hits a box boundary
    c) Both toys fell apart, but we were able to put the pieces back together – not sure if we mixed them or not – deals with individuation boundaries in the same box
    d) We analyze the future value of the company in two scenarios. One involves a spin-off into 2 parts & the other does not – individuation boundaries.
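    Examples (c) and (d) can be sketched as identity changes inside an unchanged box: one object splits into parts, or parts are merged back into one object whose provenance may be uncertain. The function names and the dict-of-objects world representation are hypothetical.

    ```python
    def split(world, obj, parts):
        """Individuation boundary: one object becomes several (spin-off,
        a toy falling apart). The box itself is unchanged."""
        world = dict(world)               # work on a copy of the scenario
        del world[obj]
        for p in parts:
            world[p] = {"from": obj}
        return world

    def merge(world, parts, obj):
        """The inverse: reassemble pieces into one object. Whether the
        pieces got mixed is recorded only as their joint provenance."""
        world = dict(world)
        for p in parts:
            del world[p]
        world[obj] = {"from": tuple(parts)}
        return world
    ```

    Example (d) is then just two scenario copies of the same world, one with `split` applied and one without; no box boundary is crossed in either.
    
    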
