Speculative Concurrency At The Run-Time Environment Level
This topic contains 4 replies, has 1 voice, and was last updated by josh on March 3, 2021 at 7:25 am.
March 2, 2021 at 10:17 pm #85180

josh

Q: This sounds like a hybrid of Profile-Guided Optimization & Just-in-time compilation, plus the observations that multi-core architectures are now the norm & that ML might help with predictions. Please comment on the proposed innovation in that context.
A: Innovation in response to the multi-core norm is one part of the point. Multi-core offers the possibility of doing more run-time adaptation, helping performance in critical cases while not paying a performance penalty in non-critical cases. Run-time information also makes it possible to optimize for the specific computer & for user preferences (e.g. performance vs. energy saving). The additional cores also enhance the viability of some brute-force safety computations (e.g. guard regions) that can be carried out at run time & used to decide whether a given instance of speculative parallelism turned out to be safe in the actual run. ML can consider both application binary hints & adaptive site-specific hints to make guesses about where to look for speculative success.

The JIT article discusses issues related to dynamic modification of executable pages (more so than, e.g., dynamic linking). This has not stopped JIT from being commonly used, but the initiative could be designed so that the modifications to executable pages were closer in character to dynamic linking.
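A minimal sketch of the guard-region idea above, in Python: tasks run speculatively against private snapshots while their read & write footprints are recorded, and the buffered writes are committed only if a brute-force disjointness check shows the speculation was safe; otherwise the run falls back to sequential execution. The `speculate` helper and the task signature are illustrative assumptions, not part of any existing system.

```python
import threading
from copy import deepcopy

def speculate(tasks, state):
    """Run `tasks` speculatively in parallel, each against a private
    snapshot of `state`.  Each task returns (read_keys, write_dict).
    Commit the buffered writes only if the footprints are conflict-free
    in commit order; otherwise redo the tasks sequentially."""
    results = [None] * len(tasks)

    def worker(i, task):
        results[i] = task(deepcopy(state))  # private snapshot per task

    threads = [threading.Thread(target=worker, args=(i, t))
               for i, t in enumerate(tasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    written = set()
    for reads, writes in results:
        # Conflict: this task touched a key some earlier task wrote.
        if (set(reads) | set(writes)) & written:
            for task in tasks:          # unsafe: sequential fallback
                _, w = task(state)
                state.update(w)
            return state, False
        written |= set(writes)

    for _, writes in results:           # safe: commit buffered writes
        state.update(writes)
    return state, True
```

The point of the structure is that the speculative run can never corrupt `state`: writes are buffered per task, and the sequential fallback recomputes against the live state when the safety check fails.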
-
March 2, 2021 at 11:14 pm #85181

josh

Having something like a “bin_analysis_utils for call graph” would help a group effort to develop semantic strategies.
-
March 2, 2021 at 11:26 pm #85182

josh

Why do these methods not add to the set of race conditions in a program?

If the program was written correctly, avoiding the possibility of race conditions in its thread semantics, then these methods should not add any. If the program was written incorrectly & has latent race conditions, then there is the possibility of exposing their effects. However, ML analysis based on data from actual executions of various concurrency-related signatures should have an overall beneficial effect, even for bad code. You are optimizing for speed, but good methodology will also check for correctness.
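One way to picture the “latent race” point is footprint tracking: instrument a trial run of each code region, record what it reads & writes, and flag a pair of regions as unsafe to parallelize when one writes something the other touches. The class and function below are an illustrative sketch under that assumption, not an actual implementation.

```python
class AccessTracker:
    """Wrap shared data and record read/write footprints during a trial
    run, so a speculation engine can tell whether two code regions have
    disjoint footprints (safe) or a latent race (unsafe)."""
    def __init__(self, data):
        self._data = dict(data)
        self.reads, self.writes = set(), set()

    def get(self, key):
        self.reads.add(key)
        return self._data[key]

    def set(self, key, value):
        self.writes.add(key)
        self._data[key] = value

def conflicts(a, b):
    """Two regions race if either writes what the other reads or writes."""
    return bool(a.writes & (b.reads | b.writes) or b.writes & a.reads)
```

A correctly written program keeps its shared-write footprints synchronized, so speculation guided by such checks adds no new races; a program with latent races simply gets them detected (or exposed) earlier.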
-
March 3, 2021 at 7:25 am #85220

josh

Speculative branch prediction in the CPU feels a bit safer than JIT. Why? Because any potential changes are cached: there is no issue with canceling the speculative set, & the CPU is not ultimately going to make a mistake about the result of the actual branch test (the cmp). One of the motivating thoughts here is that this outline of safety can also be applied to speculative concurrency at the system level. There is some extra complexity in avoiding stale reads & I/O races, but the thought is that these can be engineered correctly while still leaving meaningful performance wins.
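The buffered-until-validated property described above can be sketched as a small staging buffer: speculative writes are visible to the speculation itself but reach canonical state only on commit, and an abort simply discards them, leaving no trace. The class below is a hypothetical illustration of that safety outline, not a real system component.

```python
class SpeculativeBuffer:
    """Stage writes privately, the way a CPU stages speculative results:
    reads see the staged values, but nothing reaches the canonical
    backing state until commit(); abort() discards the staged set, so a
    mispredicted speculation needs no cleanup."""
    def __init__(self, backing):
        self._backing = backing
        self._buffer = {}

    def read(self, key):
        # Staged value wins; otherwise fall through to canonical state.
        return self._buffer.get(key, self._backing.get(key))

    def write(self, key, value):
        self._buffer[key] = value        # staged, not yet visible outside

    def commit(self):
        self._backing.update(self._buffer)   # speculation validated
        self._buffer.clear()

    def abort(self):
        self._buffer.clear()             # cancel the set: leaves no trace
```

The stale-read & I/O-race complexity mentioned above lives in deciding *when* commit is safe; the buffer itself guarantees only that an abort is always cheap and side-effect free.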
-