Emc race and cppcheck warnings #3734
base: master
Conversation
|
Good to see that you fixed the off-by-one in read() :-) There are also no allowances for signals causing interrupted syscalls; those would simply cause the application to bail and exit. That brings us to a general LCNC issue: error handling. There are many more places in LCNC programs and utilities that are not exactly well-behaved when we talk about error resilience and recovery. But addressing that would require a complete review and a well-defined strategy for how to cope. |
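As an illustration of the interrupted-syscall point, a minimal sketch (not from the PR; names are placeholders) of retrying read() on EINTR instead of treating it as fatal:

#include <cerrno>
#include <unistd.h>

// Retry read() when a signal interrupts the syscall (EINTR) instead of
// bailing out; returns bytes read, 0 on EOF, or -1 on a real error.
static ssize_t read_retry(int fd, void *buf, size_t len)
{
    ssize_t n;
    do {
        n = read(fd, buf, len);
    } while (n < 0 && errno == EINTR);
    return n;
}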
|
Is this stuff even thread safe? It looks like it uses emcsched.cc, and addProgram there just uses push_back on a std::list... did I miss a lock? Seems incorrect. |
|
I like where this cppcheck warning brought us. My understanding is that pthread_detach declares the pthread_t to be reclaimable after the thread returns, but correct and direct me. At one of the very last lines, just prior to the invocation of sockMain, another pthread_create is performed. That one should be detached, too, but I'll wait for the clarification on the placement of this pthread_detach. @rmu75, I also just skimmed through emcsched.cc and emcsched.hh - the queueStatus is an enum that may indeed immediately benefit from some thread safety, since testing for a condition and then changing the status should not be interrupted. And addProgram changes the queue and then triggers a queue sort, which may come as a disturbing inconvenience to anyone iterating through the queue at the same time. Should we have this as a separate issue? |
You are right! It is incorrect and not thread safe. There are probably many more problems with thread safety. These would not have been seen as the "normal" use is to attach only one remote client to send stuff. Seen in this light, the choice is to limit to one client only (or use a select/poll construct to serialize access) or to review and fix the entire code path to be thread safe. |
A detached thread will have its resources purged when it exits. That means that the return value cannot be retrieved, for which you would have to use a join. In this case, it doesn't matter because no return value is used. Therefore, a simple detach should suffice.
That doesn't matter. That specific thread is the "queue runner" and there is only one during the lifetime of the program. Any resources associated with it will automatically die when the program dies. |
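For illustration only, the detach-vs-join point as a generic sketch (the function names are placeholders, not the schedrmt code):

#include <pthread.h>

// Placeholder thread body; its return value is never inspected.
static void *queue_runner(void *)
{
    /* ... run the queue for the lifetime of the program ... */
    return nullptr;
}

void start_queue_runner()
{
    pthread_t tid;
    if (pthread_create(&tid, nullptr, queue_runner, nullptr) == 0)
        pthread_detach(tid);   // no join needed: resources are reclaimed when the thread exits
}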
|
We have two separate threads already. One is started in linuxcnc/src/emc/usr_intf/schedrmt.cc (line 1289 in 46c3b99), the other in linuxcnc/src/emc/usr_intf/emcsched.cc (line 313 in 46c3b99).
|
Done it anyway; it may eventually avoid false-positive memory-leak reports in valgrind. |
|
I think the likelihood that concurrency problems are triggered in real applications of schedrmt is very low, and in this case it would be acceptable to just document the shortcomings. Introducing a global mutex that is held while processing data from read() should be possible and relatively easy. As no significant amounts of data are moved, that should have no performance implications. The proper way to fix this amounts to a rewrite with boost::asio or equivalent (which would also get rid of the threads, btw), or a port to Python. |
|
With a global mutex, I think only invocation of |
|
with something like this

class MutexGuard {
    static pthread_mutex_t mutex_;
public:
    MutexGuard() { while (pthread_mutex_lock(&mutex_)) /* sleep? exit? */; }
    ~MutexGuard() { pthread_mutex_unlock(&mutex_); }
};
pthread_mutex_t MutexGuard::mutex_ = PTHREAD_MUTEX_INITIALIZER;

it should be only a matter of creating a local MutexGuard object at the start of the functions to be mutex'ed. Don't pthread_exit though. |
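Using it would then look roughly like this (the function name is hypothetical, just to show the scope-based locking):

// Hypothetical request handler that touches the shared queue.
static int handleCommand(const char *line)
{
    MutexGuard guard;            // constructor takes the global mutex
    /* ... parse line and modify the shared queue ... */
    return 0;
}                                // destructor releases the mutex on every return path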
|
There are no thread-die-exit-or-else constructs afaics, so just putting a mutex around the two functions should handle the problem. However, thisQuit() is called from a signal handler and uses exit(0); you should use _exit(0) when called from a signal handler.
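A minimal sketch of that exit() vs _exit() point (the handler name is hypothetical): exit() runs atexit handlers and flushes stdio, which is not async-signal-safe, while _exit() terminates immediately.

#include <csignal>
#include <unistd.h>

// Hypothetical handler, installed with sigaction(); only async-signal-safe
// calls are allowed here, and _exit(2) is on that list while exit(3) is not.
static void sigQuitHandler(int /*sig*/)
{
    _exit(0);
}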
Force-pushed e95461e to fc012ca.
|
@smoe, did you see my review comments? |
Force-pushed fc012ca to b820de0.
Well, yes and no; I had not mentally parsed the exit -> _exit one. This is all a bit beyond my daily routines - what is the signal meant to stop? From how I interpret this all now, I was wrong to remove the exit from the implementation of shutdown, but what flavour of exit is appropriate when the shutdown code also frees everything?
Don't we need to extend any such global mutex to emcsched.cc, etc.? |
|
I would keep the mutexes in the main loop of the threads. Blocking read returns data -> acquire mutex -> do stuff -> release mutex -> go back to blocking read of socket. |
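In rough C++, that per-connection loop would look like this (a sketch with placeholder names, assuming a shared global mutex):

#include <pthread.h>
#include <unistd.h>

extern pthread_mutex_t stateMutex;   // assumed global, shared by both threads

void connection_loop(int sock)
{
    char buf[1600];
    for (;;) {
        ssize_t n = read(sock, buf, sizeof(buf));   // block without holding the lock
        if (n <= 0)
            break;                                  // EOF or error: drop the connection
        pthread_mutex_lock(&stateMutex);
        /* ... parse buf and touch the shared queue/state ... */
        pthread_mutex_unlock(&stateMutex);
    }
}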
I can answer that question for me now - no, we don't, since schedrmt is an application on its own, so there is no other route to invoke the same lower-level routines. Sorry, this seems rather obvious but somehow I needed that. |
It can make sense in case you have multiple HMIs / operator stations. The only real problem is that error messages and user messages can only be received by one endpoint, and if you connect multiple GUIs it is more or less non-deterministic which GUI receives a particular message. The status, OTOH, is polled, and while in theory there could be races where one GUI receives the status another GUI polled, that is harmless in my experience. I wrote a debug-tool GUI that polls at screen refresh rate, and it is no problem to run that in parallel to some "proper" GUI; in fact, I also use that as a kind of DRO on a second screen on a mill. In a sense, multiple sessions / GUIs / operator stations are like attaching a pendant that allows you to jog and "cycle run". On a real machine, all safety-relevant stuff has to be external to the linuxcnc control anyway. Also, linuxcncrsh is something that has to be explicitly enabled / started. |
|
multiplexing multiple sessions onto one NML session is probably a bad idea. |
Ah, but then shcom.{cc,hh} needs to become a proper class that can be instantiated for every connection to encapsulate separate NML sessions within the same process. That is no problem; I already had a feeling it would be necessary. There is also other work in those files because I found static buffers that need to go anyway. Once that is done, there should be no problem accepting multiple sessions. |
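A very rough sketch of what "shcom as a class" could look like, assuming the channel constructors and header names shcom.cc uses today (error handling and all remaining state omitted; whether several instances may share the node name "xemc" is an open question):

#include "rcs.hh"        // RCS_CMD_CHANNEL, RCS_STAT_CHANNEL, NML
#include "emc_nml.hh"    // emcFormat
#include "nml_oi.hh"     // nmlErrorFormat

class ShCom {            // one instance per accepted connection
public:
    explicit ShCom(const char *nmlfile) {
        // Same constructor calls shcom.cc makes today, but per instance instead of per process.
        cmd_  = new RCS_CMD_CHANNEL(emcFormat, "emcCommand", "xemc", nmlfile);
        stat_ = new RCS_STAT_CHANNEL(emcFormat, "emcStatus", "xemc", nmlfile);
        err_  = new NML(nmlErrorFormat, "emcError", "xemc", nmlfile);
    }
    ~ShCom() { delete cmd_; delete stat_; delete err_; }
    ShCom(const ShCom &) = delete;
    ShCom &operator=(const ShCom &) = delete;
private:
    RCS_CMD_CHANNEL  *cmd_  = nullptr;
    RCS_STAT_CHANNEL *stat_ = nullptr;
    NML              *err_  = nullptr;
};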
|
see https://github.com/rmu75/cockpit/blob/main/src/shcom.cpp and https://github.com/rmu75/cockpit/blob/main/include/shcom.hh I also have variants that talk zeroMQ. |
Thanks, that is a good start. I'll have a deeper look and see what I can "steal" ;-) |
|
You should probably connect to user and error channels only once. Or not connect at all -- it won't work correctly if there is a real GUI in parallel. |
|
There are three NML channels: emcCommand(W), emcStatus(R) and emcError(R). All are connected to the same namespace. Which of these is the 'user' channel? The problem is that if two processes send a command on emcCommand, then the first reader of emcError will retrieve any current error. Interestingly, the emcStatus channel is only peek'ed at and never actually read. I am a bit baffled how this is supposed to work at all... |
|
It has been some time since I last looked at that stuff; it seems I misremembered about a "user" channel. I didn't want to look too closely, there are too many things that induce headache. Like https://github.com/LinuxCNC/linuxcnc/blob/master/src/libnml/nml/nml.cc#L1260 and https://github.com/LinuxCNC/linuxcnc/blob/master/src/libnml/nml/nml.cc#L1281. Maybe there is some subtle stuff I miss, but to me it looks like a copy/paste error. I assume that every code path that is not used by linuxcnc contains some subtle bugs (like messages containing longs truncating those to 32 bits, or the really strange length calculations in the ascii and display updaters). |
|
Yeah, I tried going down that rabbit hole... At least I can explain the peek() in the status channel: the nml file shows the buffer as not queued. Both emcCommand and emcError are queued buffers, hence the write/read. Unfortunately, I've not seen an unread(). Anyway, multiple processes may interfere with each other on the basis of the serial number. It seems that a lot of the synchronization between write command and read status is based on the serial number. No one is checking the serial number of the error, so I don't even know if any error is correlated to the last command. |
This bit has been known to be a problem for decades. The first thing to "look" gets the error status, and might or might not inform the user. |
Ok then. My suspicions confirmed. The real problem is that there are separate uncorrelated queued read and write channels that share the backing store. Analysis: The emc.emcCommand(R) (emctaskmain.cc) receives the commands from any of emcsrv.emcCommand(W) or xemc.emcCommand(W) (for the default nml file). However, it cannot reply to the appropriate ???.emcError channel because these are shared for both emcsrv and xemc. To fix this, you need a back channel for emcError that matches the command connection. This is not a scalable solution because you need to create complex nml configs with an error buffer for each connection and also track that in the task handling, which is a near impossibility. The changes required are not worth the effort. The way it looks is that NML is inappropriate for handling this particular case. For this you'd want a socket connection and use sendmsg()/recvmsg(). That means, zeroMQ will be much better than NML to handle communication, including status because it makes a simple bidirectional connection. @rmu75 you said something about files using zeroMQ. Have you hacked a LCNC version, ripped out NML and replaced it with zeroMQ? |
I didn't rip out anything, but added all nml messages as flatbuf structures and played around a bit stuffing those into zeromq sockets. See https://github.com/rmu75/linuxcnc/tree/rs/zmq-experiments and https://github.com/rmu75/cockpit/tree/rs/zmq. It is just a hack, nothing is integrated into the build system, and generated files are checked in. If you want to try that, you will need to install the flatbuffer and zeromq dev libs and invoke flatc by hand so the generated stuff matches your installed flatbuf libs. flatc needs "--gen-object-api" IIRC. I did look at machinekit and their machinetalk stuff (google protobuf and zeromq), and I wondered why they didn't go the full length and rip out NML but built this "bridge". Turned out it is not that hard to feed an alternate message format into task. I didn't like protobuf though; flatbuf is more efficient and doesn't suffer from a problematic compatibility story and a version transition reminiscent of Python's 2-to-3. IIRC in my zeromq implementation, if you connect the error "channel", you receive error messages. Also, status updates are not "peeked" but messages are sent to each connection of the status socket. |
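For reference, the "send status to each connection" pattern in plain libzmq looks roughly like this (the endpoint and payload are placeholders, not the actual experiment code):

#include <zmq.h>
#include <cstring>

int main()
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);        // every connected subscriber gets its own copy
    zmq_bind(pub, "tcp://*:5556");               // placeholder endpoint

    const char payload[] = "status update";      // in the experiment this would be a flatbuffers message
    zmq_send(pub, payload, strlen(payload), 0);  // fan-out per subscriber, no peek/read races

    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}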
|
The ROS community has now adopted Zenoh (https://docs.ros.org/en/kilted/Installation/RMW-Implementations/Non-DDS-Implementations/Working-with-Zenoh.html). Open source and faster than ZMQ - and it would possibly help build bridges to the ROS community. |
|
On Sun, 2026-02-08 at 05:21 -0800, Steffen Möller wrote:
> smoe left a comment (LinuxCNC/linuxcnc#3734)
> The ROS community has now adopted Zenoh
> (https://docs.ros.org/en/kilted/Installation/RMW-Implementations/Non-DDS-Implementations/Working-with-Zenoh.html).
> Open source and faster than ZMQ - and it would possibly help build bridges to the ROS community.
At first glance it looks like Zenoh is something like MQTT: "Zenoh is a distributed service to define, manage and operate on key/value spaces." It looks like stuff is organised in a hierarchical tree where you can store values like JSON or plain text.
Not sure how that compares to 0mq and in what sense it could be faster.
|
|
If I am reading the docs correctly, then Zenoh seems to be a key/value-based system that looks a bit like MQTT. The part that is bothering me is that I do not know how to map the NML functionality onto it. This is a very non-trivial problem because we have both queued (commands) and non-queued (status) communication. The odd one out is the NML error channel, which is also queued but should be one-to-many mapped, but only to a point. |
|
We need some kind of remote procedure call (maybe with useful return values, which don't exist now) and a channel to report status and errors. Error messages could be put into an LRU structure that keeps the last 10 or some number of error messages. Status could in principle be published into a key/value structure like MQTT, but I think by default we need something that could deal with 1 kHz or even 4 kHz of updates. |
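The "keep the last N error messages" idea is easy to sketch (the class name and the default size are just placeholders):

#include <cstddef>
#include <deque>
#include <string>

// Keep only the most recent N error/operator messages so that
// late-joining clients can still see what happened.
class ErrorLog {
public:
    explicit ErrorLog(std::size_t max = 10) : max_(max) {}
    void push(std::string msg) {
        if (log_.size() == max_)
            log_.pop_front();                 // drop the oldest entry
        log_.push_back(std::move(msg));
    }
    const std::deque<std::string> &recent() const { return log_; }
private:
    std::size_t max_;
    std::deque<std::string> log_;
};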
4 kHz - that is 250 µs between invocations. That suggests looking for the candidate with minimal latency. MQTT and all the other contestants are already distributed, if I get this right, so some external error monitoring would be supported? |
|
The fact that the realtime thread runs this fast does not necessarily mean that the non-realtime system must be updated this fast. Actually, when you look at the non-realtime side, like GUIs, they only update about 10 times per second. For example, [DISPLAY]CYCLE_TIME is set to 0.1 seconds (sim's axis.ini). Another thing to remember is that status updates can be done in an incremental fashion. But, for the sake of argument, let a status update happen at 4 kHz with a 1400-byte payload (about one eth frame size excl. overhead) and let it be handled by a network connection (very unlikely). That would mean we're sending ~5.4 MiB/s of data. Well, my internet connection is 4 times faster and I manage to saturate it with little CPU effort :-) What is important is data sequence and correlation. Every client (non-realtime) needs to be able to track the server (realtime) and not get confused because multiple clients are acting on the same connection (like we have with the error channel). That is the big challenge to solve so that we have consistency, predictability and repeatability for all clients. |
|
4kHz is probably on the extreme end, but it could also cover cases where you want to synchronise multiple linuxcnc instances or implement remote access to hal. |
I don't understand what you mean by this. Now NML just broadcasts a command into emctask. No result is communicated back except for changes in status. IMO it can remain that way. Status can also be more or less broadcast to all interested parties. Error or user messages can be put in a list. Error messages only have informational character and don't need to be acked per se. If something really goes wrong, then the machine state will reflect that. So in principle we can treat the communication channels as a room where every client can shout in commands through a window and observe what is happening; who is shouting isn't interesting, state has to be handled by emctask anyway. It might make sense to limit acceptable commands depending on the channel. |
I got this as "among all the many independently created sensor or position or whatever status updates you need to know at what moment what error kicked in and who triggered it".
There is the kind of error info that is printed to stdout/stderr. That is lost to the system. Such messages are both informal and informative :-) But lost.
It may be a constructive misunderstanding on my side, but once we have a somewhat more formal representation of errors, then those errors could also be interpreted by the controller, beyond some "-1" return value, and affect the system's further actions - and human monitoring alike. I think we should start out with a diagram about how NML is used today and ornament our documentation with it. And then think more about what we all want. My instant hunch is that we want to borrow more from what ROS2 is doing. |
A typical error that is communicated via error channel is "probe move stopped without contact" or similar. That is a global situation that can and should be communicated to each operator station. But there is not much else to do there. You only need to look at emctask.cc and emctaskmain.cc. In a nutshell, emccanon sends stuff from the interpreter as NML into task, and the other thing is the GUI that sends commands. I would proceed step by step, new API can be bolted on later, for me it is a separate issue. |
No, NML does not broadcast. It consists of two queues (command and error) and one (virtually) shared memory region (status). There is, at this time, no way to correlate the command issued with the error generated. Even worse, the error channel carries a mix of operator error, text and display messages. You have no way to relate those to what the machine was executing at the time the message was generated. And, more disastrously, multiple clients reading the error queue will prevent consistency between readers because only the first to read will actually get the information. When you connect two DROs that can display messages, then you really want them to display the same thing, don't you?
Status is a non-queued broadcast. It is the virtually shared memory segment containing the machine's state. Errors are not necessarily a list nor merely informational. These "error" messages may be broadcast to all clients, but correlating them with the current status remains a problem because status is asynchronous from the queues. There needs to be more information. Maybe the status channel could contain some of the actual status, display and error messages, and the "error" channel becomes a broadcast that "something important happened", which listeners can use to act upon. But there also needs to be (possibly historic) information on the error channel about which client/command caused what situation.
I disagree that clients/servers just (should) shout at each other. It must be a very coordinated communication that guarantees both consistency and repeatability. I do agree that we might want to limit commands on some channels depending on the function/intent of the client. |
Maybe I formulated that a bit too informally. The effect is the same: commands are fed into a big switch statement, and no distinction is made as to who sent the command.
That is a design issue of linuxcnc / emc task, not a comms interface issue.
This is a comms issue and can be fixed (but probably not with NML and acceptable dev-effort)
yes absolutely. |
It could make sense to include some sort of "state tag" in error messages created in the interpreter that contains information like filename, line number etc., in a non-sprintf way.
Why? If something happens, the machine will go into "Idle", "Off" or "ESTOP", depending on severity. Some message should of course indicate the cause. I agree that if g-code or remapped code is involved, the file/line number could/should be indicated. Historic error information IMO needs to go into a separate component that logs problems; I wouldn't put that into emctask. What would actually be helpful: a message that says "machine can't be enabled because cover X is not closed" or "machine can't be enabled because spindle coolant is out of safe temperature range" or "machine can't be enabled because vacuum pump motor protection tripped", but those things usually are in the HAL domain and would need customization. AFAIK there is currently no possibility to communicate such info from HAL to a GUI. Usually in these situations you just get an enable button that doesn't enable. |
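If the "state tag" idea above were pursued, it could be as simple as a small struct carried alongside the message text instead of sprintf'ing everything into one string (the field names are just a suggestion):

#include <string>

// Structured origin information attached to an error/operator message,
// instead of formatting file/line into the text itself.
struct ErrorTag {
    std::string source;     // e.g. "interp", "task", or a HAL component name
    std::string file;       // G-code or remap file, if any
    int         line = -1;  // line number within that file, -1 if not applicable
};

struct OperatorMessage {
    std::string text;       // what is shown to the operator
    ErrorTag    tag;        // where it came from
};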
|
I am trying to find a good example for the need for some archaeological research in error messages - or just a context-sensitive error interpretation as a start. My imagination describes a scenario with errors that are non-fatal, so the machine continues to operate but was unhappy - in parts (pun intended). So if there is a robot feeding in raw stock overnight and putting the finished parts on a pile, then likely some QA report for each part is also produced, not necessarily directly after the part is made. Maybe some parts fail their QA. And then suddenly it is all fine again because a dull end mill, which deteriorated earlier than expected, was swapped out. And then you want to understand what was happening when returning in the morning. I guess any sort of supervision of a process wants to see such context - be it human or an automation. It guides you to the cause of the problem as the first step towards recovery. And if I understand the exchange between the two of you correctly, then you are asking what the role of LinuxCNC in any such more complex scenario may be, and to what degree it can already behave constructively or whether some additional mechanisms are needed. |
|
On Mon, 2026-02-09 at 15:40 -0800, Steffen Möller wrote:
> smoe left a comment (LinuxCNC/linuxcnc#3734)
> I try to find a good example for the need of some archaeological research in error messages - or just a context-sensitive error interpretation as a start. My imagination describes a scenario with errors that are non-fatal, so the machine continues to operate but was unhappy - in parts (pun intended). So if there is a robot feeding in raw stock overnight and putting the finished parts on a pile, then likely some QA report for each part is also produced, not necessarily directly after the part is made. Maybe some parts fail their QA. And then suddenly it is all fine again because a dull end mill, which deteriorated earlier than expected, was swapped out. And then you want to understand what was happening when returning in the morning.
IMO stuff like this belongs in a layer outside of machine controllers, like some production orchestration system that interfaces with the ERP. I'm lacking the imagination for what kind of "non-fatal" error there could be in linuxcnc. To me, the definition of "error" means that user intervention is needed.
Things like bad quality because of dull tools or air-milling stock because of a broken endmill are nothing that the machine controller can prevent. Yes, there could be tool breakage detection, but it probably only makes sense in a very high volume regime to deal with that automatically.
> I guess any sort of supervision of a process wants to see such context - be it human or an automation. It guides you to the cause of the problem as the first step towards recovery. And if I understand the exchange between the two of you correctly, then you are asking what the role of LinuxCNC in any such more complex scenario may be, and to what degree it can already behave constructively or whether some additional mechanisms are needed.
For me, I'm more concerned about distributed operator stations that are somewhat on equal footing. Like a main screen, a phone/tablet UI and some sort of remote web interface, where it is possible to control the machine from each of those, at least in principle and possibly after authentication. I think those things could be very useful in a hobby and in a professional context, and currently it would be hard to implement correctly.
It could be done like in machinekit with a bridge between NML and something more modern / supported, but IMO it would be better to directly address the messaging and not introduce a translation layer.
|
|
We should move that discussion to some more appropriate place. |
Yes, but the controller should pass all information needed.
This depends on the complexity of the mill and the number of sensors it is equipped with. Imagine the mill being a car. "No fuel" is an error in your sense. An unexpected "low fuel" (traffic jam, going too fast, whatever) will grant you a chance to react. The same goes for your motor losing performance for an unknown reason: you want to continue driving to safety / a petrol station with what you have. If your glass scales give a different position than you are expecting to be at, then at a lower level this may be an error. At a higher level you may change the acceleration or maybe the depth of cut? I think I would want to understand "error" as "unexpected". And there may be no user around to deal with the situation, but the controller (or some supervising agent) gives commands to react. So you do not want to halt the machine immediately; rather, a halt (and switching off the coolant) is one way for the machine to react to the unexpected situation. |
Everything you describe is pure customization. The logic that uses those sensors and generates something actionable can hardly be part of linuxcnc; at best you could implement or configure HAL components and connect some warning lamps. But HAL doesn't have a notion of "error", it is just a bunch of virtual components and virtual wires; there is no, or not much, state to HAL. If there is an error condition in HAL (like lost comms with a mesa card), the machine is dead.
This also would be a very machine-specific configuration; machine-specific components and configuration have to implement this logic. The NML error channel communicates text messages (either "operator error", "operator text" or "operator display"). That should be augmented to include a source (component, source file, line number, stuff like that), and in the future we will need something more "broadcast style" so it is not one sender talking to one non-deterministic receiver (out of potentially many). Higher levels are outside; we should provide useful information. Relevant stuff is available via status and HAL; not sure what can/should be added there. |
Took the patch proposed by @BsAtHome for #3700 and prepared this PR for it. It looks fine to me but I also did not test it. Any takers?
The most important bits are
There are some meta talking points: