Unifying Embedded Platform for Mfg. Machine & Robotic Self-Diagnosis

Forums Personal Topics Unbidden Thoughts Unifying Embedded Platform for Mfg. Machine & Robotic Self-Diagnosis

This topic contains 8 replies, has 1 voice, and was last updated by  josh March 29, 2021 at 9:10 am.

  • Author
    Posts
  • #88144

    josh

    Elements of the design should provide strong support for new development of minimalist sensor designs that benefit from non-committal choices: offloading most of the memory, logging, & generic analysis hardware & software requirements onto the memory, logging, testing, & diagnosis modes of the platform infrastructure & the tools it provides for development & testing. The testing, diagnosis, & development contributions of the platform are enlisted to support faster, cheaper, lower-energy design of new sensors. The cheap design is not a committal choice that forces use of the platform; it becomes a comfortable choice that makes good business sense for the reasons given. Developers who want to use the sensors in other ways still benefit from the same test/diagnosis platform for working in isolation with the embedded sensor & its allies.

    How low can you go – can a unique hardware design serial number & a std. electrical I/O interface be linked to a set of fields that makes database logging, querying, & replay simulation of its behavior already available with nice interfaces? Seems possible. This whitepaper provides a survey of contemporary open source tools for data stream processing, written from the point of view of a commercial cloud provider – AWS. AWS works hard to offer full-feature service in the cloud & get lots of devices to go there directly. Our discussion benefits from their survey, but we are starting from a lower-level focus on individual sensors – integrating & analyzing their data prior to a decision about engagement with a third-party cloud. The lowest-level framework discussed there seems to be Apache Storm. They offer a higher level they call Trident that hooks into the higher-level sorts of services, including various forms of cloud receiver. I am not in a position of expertise to say which tech is best or in need of further development. I can say that what I am talking about here is something like a standard mapping from the electronic nameplates on sensor hardware to a standard Apache Storm config (or comparable) that sets up immediate, plug-in analysis & development.
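    As a sketch of the nameplate-to-fields idea – every interface code & field name below is made up for illustration, not part of any real standard – a registry keyed by the standard part of the serial number could hand a generic logging/query/replay layer its schema with no per-device code:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: map an electronic nameplate (the standard interface
// code embedded in a structured serial number) to the field schema a generic
// database-logging & replay-simulation layer would need.
public class NameplateRegistry {
    // e.g. interface code "TEMP-4" -> fields a logger/replayer can use directly
    private static final Map<String, List<String>> SCHEMAS = Map.of(
        "TEMP-4",  List.of("timestamp_us", "celsius"),
        "PRESS-2", List.of("timestamp_us", "kilopascal"),
        "VIB-3",   List.of("timestamp_us", "axis", "mm_per_s2"));

    /** Derive the interface code from a structured serial like "VIB-3-00217". */
    public static String interfaceCode(String serial) {
        return serial.substring(0, serial.lastIndexOf('-'));
    }

    /** Fields that a generic logging/query/replay layer should record. */
    public static List<String> fieldsFor(String serial) {
        return SCHEMAS.getOrDefault(interfaceCode(serial), List.of());
    }

    public static void main(String[] args) {
        // The schema is looked up from the nameplate alone – no device driver.
        System.out.println(fieldsFor("TEMP-4-00031"));
    }
}
```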

    The development toolset can also help the designer configure & plan the level of “system brain” components they need to support storage at various data speed/quantity levels – at this point, the economics of offloading to the cloud become particularly relevant too.

  • #88145

    josh

    IPC mechanisms are conceptually akin to message passing, yet the community overlap & software standardization/bridging is low. For example, Qt has a lot of nice tools for embedded development that use its Qt D-Bus module; D-Bus is an IPC choice. Here is an article by someone working with BlueZ, a Linux lib for Bluetooth handling, who ended up rolling his own thing. XML-RPC might have been faster, but Qt is correct that embedded same-machine processes are not going to use XML-RPC for everything. How about standard mappings between QDBusArgument & the Apache Storm protocol? That’s something GT can do – not as “the 1 way” to do things, but a reasonable way to go that’s provided with a lot of acceleration.
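    A minimal sketch of what such a mapping might look like, with plain Java structures standing in for both sides – an ordered key/value message as QDBusArgument would marshal it, & a Storm-style tuple (ordered values plus declared field names). Neither Qt nor Storm is actually linked here; this just shows the shape of the bridge:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: flatten a D-Bus-style ordered key/value message into the two things
// a Storm-style tuple needs – a field-name declaration & an ordered value list.
public class DBusToTupleBridge {
    /** Field names, in order – what a declareOutputFields-style call would get. */
    public static List<String> fieldNames(Map<String, Object> dbusMsg) {
        return new ArrayList<>(dbusMsg.keySet());
    }

    /** Values, in the same order – what the emitted tuple would carry. */
    public static List<Object> tupleValues(Map<String, Object> dbusMsg) {
        return new ArrayList<>(dbusMsg.values());
    }

    public static void main(String[] args) {
        Map<String, Object> msg = new LinkedHashMap<>(); // insertion-ordered, like a marshaled struct
        msg.put("sensor_id", "TEMP-4-00031");
        msg.put("celsius", 21.5);
        System.out.println(fieldNames(msg) + " -> " + tupleValues(msg));
    }
}
```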

  • #88166

    josh

    Side point: I have read about apps that allow Android phones to bind to local wifi transmitters rather than their phone tower. Perhaps the same is available for Bluetooth & other transport. It shouldn’t be necessary to make a round trip to space just to look at the local output.

  • #88175

    josh

    This Mappedbus.io library is not very well known & I can’t speak to the quality of its implementation. I cite it here as a demo of the valid & interesting point that the modern Java language provides a reasonable cross-platform implementation of memory-mapped files, which can be used as the basis for fast IPC messaging in a way that is as portable as modern Java. I’m saying: look at the cross-platform recipe of Java to see if it is suitable for GT’s purposes.
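    The underlying primitive is small enough to show directly. This is a minimal sketch of java.nio memory-mapped files – the portable mechanism a library like Mappedbus builds on – with a writer & a reader sharing one file-backed mapping. Real cross-process IPC additionally needs the memory-ordering & framing care that such a library supplies; this only demonstrates the mapping itself:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapIpcDemo {
    /** Write a value through one mapping & read it back through another. */
    static long roundTrip(long value) throws IOException {
        Path file = Files.createTempFile("ipc-demo", ".bin");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Two independent mappings of the same file region – mappings of
            // one file are coherent, so bytes are shared without copying.
            MappedByteBuffer writer = ch.map(FileChannel.MapMode.READ_WRITE, 0, 64);
            MappedByteBuffer reader = ch.map(FileChannel.MapMode.READ_WRITE, 0, 64);
            writer.putLong(0, value);      // "send" a message
            return reader.getLong(0);      // "receive" it via the shared mapping
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("received=" + roundTrip(42L));
    }
}
```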

  • #88188

    josh

    An issue that seems to be commonly overlooked in messaging & microservices platforms that involve centralized message delivery: the system defaults are generally not aware that bounding the queue length of undelivered messages is a high priority. The architecture needs to respond in some way – possibly by running scheduler threads which react to queue length with changes to whatever rate-of-processing control levers they are able to work in a given situation.
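    One way to sketch that response: a bounded queue plus a policy the scheduler can read to pick a producer rate. The linear backoff above a half-full high-water mark is an illustrative placeholder, not a recommendation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: cap the queue (unlike many platform defaults) & let a scheduler
// map queue depth to a producer rate via an explicit policy function.
public class BoundedQueuePolicy {
    /** Fraction of full speed a producer should run at, given queue depth. */
    static double rateFactor(int depth, int capacity) {
        int highWater = capacity / 2;
        if (depth <= highWater) return 1.0;   // plenty of room: full speed
        if (depth >= capacity)  return 0.0;   // full: stop producing
        // Linear backoff between the high-water mark & full.
        return 1.0 - (double) (depth - highWater) / (capacity - highWater);
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(4); // bounded by construction
        q.offer("m1"); q.offer("m2"); q.offer("m3");
        System.out.println("rate=" + rateFactor(q.size(), 4));
    }
}
```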

    • #88189

      josh

      Messages can have different priorities. Priority queues are implemented with heaps. In C++, an in-memory B-tree that stores priority values alongside handle values might be the fastest heap implementation (considering CPU/cache/memory issues).
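      For reference, here is the interface such a structure would serve, sketched with the JDK’s binary-heap PriorityQueue storing (priority, handle) pairs. Whether a cache-aware in-memory B-tree actually beats a binary heap is an empirical question this sketch doesn’t answer:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch: message prioritization over (priority, handle) pairs, backed by the
// JDK's binary-heap PriorityQueue. Lowest priority value = most urgent.
public class MessageHeap {
    record Entry(int priority, long handle) {}

    private final PriorityQueue<Entry> heap =
        new PriorityQueue<>(Comparator.comparingInt(Entry::priority));

    public void submit(int priority, long handle) { heap.add(new Entry(priority, handle)); }
    public long takeMostUrgent() { return heap.poll().handle(); }
    public boolean isEmpty() { return heap.isEmpty(); }

    public static void main(String[] args) {
        MessageHeap mh = new MessageHeap();
        mh.submit(5, 100L);   // routine message
        mh.submit(1, 200L);   // urgent message
        System.out.println(mh.takeMostUrgent()); // the urgent handle comes out first
    }
}
```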

      • #88245

        josh

        Apache Kafka is apparently optimized to deal with similar/parallel issues in a distributed cloud context. Perhaps some of that work can be adapted. Kafka (via Kafka Streams) uses RocksDB, and I believe this usage includes the fast memory-based queues. RocksDB has a lot of nice persistence features. Online sources say it uses a log-structured merge tree, which is similar to the B-tree idea but does not provide the O(log N) behavior of a balanced data structure as it grows…

  • #88192

    josh

    Concretely, my view is:

    a) the platform should allow sensors to be built with minimal hardware & internal logic. They can serve basic signaling functions analogous to peripheral nerves in the human nervous system, or sensors that detect particular elements/temperature/pressure etc. in the state of a factory device. Some sensors, like video cameras, are much more complex, but we note that in the complex AI/robotics/cloud/simulation/etc. proprietary platform offered by NVIDIA, the reality for video cameras is connecting to a CPU stack by standard USB cables, with providers writing their own device drivers.

    b) In addition to USB, the platform should be potentially adaptable to every form of standard digital & analog-to-digital connector.

    c) Device drivers don’t need to be written entirely from scratch if they follow a familiar format for input & a familiar format for output. They would typically be specializations of a pattern that converts to/from specializations of a messaging protocol.

    d) The drivers support injection of information that alters their behavior in response to operating conditions such as NORMAL_OPERATION, DEBUG_LOGGING, STARTUP, SHUTDOWN, STEP_THRU, etc. Injection also allows the drivers to accept different categories of destination for their messaging/retrieval queuing that vary across direct memory access, various network destinations, etc. – typically via a software-factory type of design. The driver infrastructure allows specification of where absolute timestamps should be applied – e.g. at the first digital stage, with each packet received, etc.

    e) The software system offers immediate access to GUI interfaces, data analytics, logging, browsing of stored logs (possibly retrieved from cloud storage), etc. The creator & software developer of a new device doesn’t need to specifically target a particular cloud architecture or proprietary computer architecture at the device level.

    f) The mathematical/data content of the information delivered by a sensor is distinct from the details of its data transmission. The system should support standard mappings between the 2 & standard ways of talking about the data provided by particular sensors at the content level. This includes things like saying which sensors group together in larger formal/mathematical relations & giving names to patterns of data that are particularly interesting. ML & other adaptive techniques can be used to add to the set of interesting patterns associated at the content level with groups of sensors. The Apache software stack of data analysis tools for streaming data is an example of an existing software framework that has significantly focused on some of this work. Once the data is translated to a mathematical form (e.g. a multivariate time series), there are numerous analytical methods available & numerous ways of implementing each, on PCs, local servers, or in cloud computing centers. It’s not relevant to the new work here to emphasize my particular set of choices. Our focus is ease of new development, diagnosis, debugging, efficiency, & performance at the edge, & easy integration of a completely open & heterogeneous micro-service architecture.
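    Points c) & d) above can be sketched together: a driver as a specialization of a bytes-to-message conversion pattern, with its operating mode & its message destination injected (factory-style) rather than hard-coded. Every name, wire format, & message format below is illustrative only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Sketch of points c) & d): a driver that converts a familiar input format
// (raw bytes) into a familiar output format (a tagged message), with its
// operating mode & delivery destination injected at construction time.
public class DriverSketch {
    enum Mode { NORMAL_OPERATION, DEBUG_LOGGING, STARTUP, SHUTDOWN, STEP_THRU }

    interface Sink { void deliver(String message); } // DMA, network socket, cloud, ...

    static class InMemorySink implements Sink {
        final List<String> delivered = new ArrayList<>();
        public void deliver(String message) { delivered.add(message); }
    }

    static class TempSensorDriver {
        private final Mode mode;
        private final Sink sink; // obtained from an injected factory

        TempSensorDriver(Mode mode, Supplier<Sink> sinkFactory) {
            this.mode = mode;
            this.sink = sinkFactory.get();
        }

        /** Convert one raw sample (big-endian centi-degrees) to a message. */
        void onRawSample(byte[] raw, long timestampUs) {
            int centiC = ((raw[0] & 0xFF) << 8) | (raw[1] & 0xFF);
            String msg = timestampUs + ",celsius=" + (centiC / 100.0);
            if (mode == Mode.DEBUG_LOGGING) msg += ",mode=debug"; // mode alters behavior
            sink.deliver(msg);
        }
    }

    public static void main(String[] args) {
        InMemorySink sink = new InMemorySink();
        TempSensorDriver d = new TempSensorDriver(Mode.NORMAL_OPERATION, () -> sink);
        d.onRawSample(new byte[]{0x08, (byte) 0x34}, 1_000L); // 0x0834 = 2100 centi-deg
        System.out.println(sink.delivered);
    }
}
```

    Swapping the factory swaps the destination category (direct memory, network, cloud) without touching the conversion logic, which is the point of d).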

You must be logged in to reply to this topic.