Optimization of Large Logistics Networks


This topic contains 2 replies, has 1 voice, and was last updated by  josh May 9, 2022 at 7:00 pm.

  • #114784

    josh

    I reflected on my engineering hunches about good ways to structure this. Early thoughts:

a) Use fine-grained discrete-event time

    b) Allow stochastic process simulations for the network to be run forward & backward from any given time point
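A minimal Python sketch of points (a) and (b): a fine-grained discrete-event loop whose (state, event queue, RNG) triple is checkpointed, so a stochastic run can be replayed forward from any saved time point, and "backward" motion is a restore to an earlier checkpoint. The event names and inventory state are illustrative placeholders, not part of the notes above.

```python
import heapq
import random
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: int
    name: str = field(compare=False)

class ReplayableSim:
    """Discrete-event simulator whose full (state, queue, RNG) triple can be
    checkpointed, so runs are reproducible forward from any saved tick."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.clock = 0
        self.queue = []                      # min-heap of pending events
        self.state = {"inventory": 100}      # illustrative network state
        self.checkpoints = {}                # tick -> (state, queue, rng state)

    def schedule(self, delay, name):
        heapq.heappush(self.queue, Event(self.clock + delay, name))

    def checkpoint(self):
        self.checkpoints[self.clock] = (
            dict(self.state), list(self.queue), self.rng.getstate())

    def restore(self, t):
        state, queue, rng_state = self.checkpoints[t]
        self.state, self.queue, self.clock = dict(state), list(queue), t
        self.rng.setstate(rng_state)

    def step(self):
        ev = heapq.heappop(self.queue)
        self.clock = ev.time
        if ev.name == "demand":              # toy stochastic demand event
            self.state["inventory"] -= self.rng.randint(0, 5)
            self.schedule(1, "demand")       # next demand arrival
```

Because the RNG state is saved alongside the queue, replaying from a checkpoint reproduces the same stochastic trajectory; swapping the seed gives an alternative continuation from the same time point.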

    c) In general, at any given time point, there will be a set of “sensor readings” of network state that have been received & compiled. These readings are what is available to decision making. In general, they may lag the actual state of the network that contributes to cost/value realization.

d) We can think of actions(t_i, sensor(t_i), sensor(t_{i-1}), sensor(t_{i-2}), …) as the realized control policy
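Points (c) and (d) can be sketched as a delay line plus a policy over compiled readings. The `lag` value, the 3-reading window, and the reorder threshold below are invented for illustration, not part of the notes above.

```python
from collections import deque

class LaggedSensor:
    """Point (c): the reading compiled at tick t describes the network as it
    was `lag` ticks earlier, so decisions never see the true current state."""

    def __init__(self, lag=2):
        self.lag = lag
        self.buffer = deque()    # true states awaiting compilation

    def push(self, true_state):
        self.buffer.append(true_state)

    def reading(self):
        # emit the oldest buffered state once the lag has elapsed, else None
        return self.buffer.popleft() if len(self.buffer) > self.lag else None

def realized_policy(t_i, readings):
    """Point (d): actions(t_i, sensor(t_i), sensor(t_{i-1}), ...) computed
    from the compiled readings only (here: last 3, threshold at 50)."""
    recent = [r for r in readings[-3:] if r is not None]
    if not recent:
        return "wait"
    avg = sum(r["inventory"] for r in recent) / len(recent)
    return "reorder" if avg < 50 else "hold"
```

Run against a falling inventory series, the policy can keep returning "hold" after true inventory has crossed the threshold, because it only ever sees the lagged compilation.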

e) Every realized control policy is affected by some subset of the overall set of control parameters that are available at that time. The temporal effects of a given element of a realized control policy may be spread out over the present time point as well as a long time into the future.

f) Stochastic optimization may try to compute a local derivative for each control variable based on stochastic assessment of future run continuations given current decisions. The network should not be regarded as omniscient about its stochastic models. The stochastic model used to evaluate decision making is constructed by “work”; that is different from a “God’s eye view” of the stochastic truth. The work itself may incur costs. The most accurate modeling would include the decisions/time courses of building the stochastic model that is used to evaluate control policy choices at time t_i. In a large network, the availability & categorization/refinement of data will not be independent of the stochastic model being built, and both are distinct from the God’s eye view. The modeling approach should be able to test robustness to alternative formulations of the God’s eye view.
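One way to read the first sentence of (f): estimate a local derivative for each control variable by finite differences over Monte-Carlo continuations drawn from the working (not God’s-eye) stochastic model. The demand distribution and cost coefficients below are invented for illustration.

```python
import random

def rollout_cost(controls, rng, horizon=50):
    """Continuation cost under the *working* stochastic model, which is
    itself built by costly 'work' and may differ from the true dynamics."""
    inv, cost = 50.0, 0.0
    for _ in range(horizon):
        demand = max(0.0, rng.gauss(10, 3))        # assumed demand model
        inv -= demand
        if inv < controls["reorder_point"]:
            inv += controls["order_qty"]
            cost += 25.0                           # fixed ordering cost
        cost += 0.5 * max(inv, 0.0) + 8.0 * max(-inv, 0.0)  # holding / stockout
        inv = max(inv, 0.0)
    return cost

def local_derivative(controls, key, eps=1.0, n=200, seed=0):
    """Two-sided finite-difference estimate of d(expected cost)/d(control),
    using common random numbers so the +eps and -eps runs share noise."""
    up, dn = dict(controls), dict(controls)
    up[key] += eps
    dn[key] -= eps
    total = 0.0
    for i in range(n):
        total += (rollout_cost(up, random.Random(seed + i))
                  - rollout_cost(dn, random.Random(seed + i)))
    return total / (2 * eps * n)
```

Sharing seeds between the perturbed runs is a standard variance-reduction step; without it, the noise of independent continuations can swamp the derivative signal.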

g) Incompleteness of modeling truth for analysis introduces a healthy skepticism into control policy. For a real world biz, it is important to use evaluation metrics that take extra care to minimize the possibility of heavy tail losses. The incompleteness of knowing which step would truly optimize future performance is not ontologically the same kind of thing as the incompleteness of knowing which parameter adjustment in a large neural net best optimizes a given function of a fixed training data set. But there are parallels in the form of uncertainty that may lead to parallels in solutions for stochastic hill climbing – i.e., in a long run, the current state of the control parameters embodies “received wisdom” from past states that should not be entirely discounted by the current delta.
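Two fragments that track the last two sentences of (g), with invented constants: a CVaR-style metric that scores a policy by its heavy tail rather than its mean, and a damped parameter update in which the current state (“received wisdom”) is only partially moved by the latest delta.

```python
def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean of the worst (1 - alpha) fraction of
    losses -- an evaluation metric that penalizes heavy-tail outcomes."""
    k = max(1, round((1 - alpha) * len(losses)))
    tail = sorted(losses)[-k:]
    return sum(tail) / len(tail)

def update_controls(current, proposed_delta, trust=0.2):
    """Blend current parameters with the latest stochastic estimate of the
    improvement direction; `trust` caps how far one noisy delta can move
    the accumulated state."""
    return {k: v + trust * proposed_delta.get(k, 0.0) for k, v in current.items()}
```

A policy that looks best on mean cost can still be rejected on CVaR if a small fraction of continuations blow up, which is the heavy-tail caution (g) asks for.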

  • #114790

    josh

    Other Notes:
Sensor readings at time t will generally include variables relevant to demand forecasting that are not a function of the organization’s network state.

“What If” scenarios, asking questions about building new depots, outlets, running promotions, etc., are highly relevant to the biz. Using the framework to consider them with accuracy presupposes a stable framework, so it’s probably best to treat “What If” as a version 2 goal.

    Credit assignment models are interesting – there is an infinite regress of possibilities, somewhat like “Deep Learning”. Consider that in a real world run we expect to build new sensors & new sensor modeling & new stores over time. Decision models that gradually re-weight decisions towards the recommendations of the most successful “Logistic Decision Recommender Bots” may provide a robust way to deal with these drifts while continuing to add well regularized modeling power (endless partitions of unity).
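The re-weighting idea can be made concrete with a multiplicative-weights (Hedge-style) update, sketched here with an invented loss scale: each recommender bot’s weight shrinks exponentially in its realized loss, and the weights remain a partition of unity.

```python
import math

def reweight(weights, losses, eta=0.5):
    """Multiplicative-weights update: bots with lower realized loss gain
    weight; normalization keeps the weights summing to 1."""
    raw = {bot: w * math.exp(-eta * losses[bot]) for bot, w in weights.items()}
    z = sum(raw.values())
    return {bot: w / z for bot, w in raw.items()}

def blended_recommendation(weights, recommendations):
    """Weighted combination of each bot's recommended (numeric) decision."""
    return sum(weights[bot] * recommendations[bot] for bot in weights)
```

Because the update is multiplicative and renormalized, a newly added bot can start with a small weight and earn influence only as its recommendations pay off, which matches the drift-robustness goal above.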
