Occasionally we receive a support request asking for the latency of a message in the RTI. These questions are difficult to answer because many factors affect RTI latency, and the two largest are probably the network itself and the tick rate of the federate. If a federate doesn't tick frequently (or for long enough), it will take longer to receive its federate ambassador callbacks. As a result, it is hard to give a firm number; every federation is going to be a bit different. I can say that using a test federate here at MAK on a gigabit network, we have recorded latencies of a little over 100 microseconds.
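To make the tick-rate point concrete, here is a minimal sketch of a federate main loop, assuming the HLA 1.3 C++ API (RTI::RTIambassador::tick). The header name, loop structure, and timing values are illustrative assumptions, not code from a specific MAK example.

    #include <RTI.hh>

    // Drive the simulation and give the RTI time to deliver callbacks.
    void simulationLoop(RTI::RTIambassador& rtiAmb, volatile bool& running)
    {
        while (running) {
            // ... per-frame simulation work ...

            // tick(min, max) gives the LRC between 10 and 50 milliseconds
            // to deliver pending federate ambassador callbacks. A federate
            // that ticks rarely, or with too small a window, leaves
            // incoming messages queued and sees higher end-to-end latency.
            rtiAmb.tick(0.01, 0.05);
        }
    }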
While federate design is by far the largest factor, there are several RID parameters that will certainly have an impact:
RTI_fomDataReliableWhenUsingRtiexec: To be fully compliant, this parameter should be enabled; however, sending data best effort is certainly faster. Of course, sending data best effort also means you run the risk of dropping messages.
RTI_forceFomDataReliable: Similar to the above parameter, except that the above parameter only tells the RTI to send data reliably when the FED file says it should; this parameter forces data to be sent reliably regardless of the FED file.
RTI_processingModel: This can definitely have an impact. Processing models 2 and 3 have more overhead than processing model 1. However, processing model 3 also means that the federate receives callbacks in a separate thread without having to tick. Used correctly, this has real potential to reduce latency, but the choice must be made when designing your federates.
RTI_enableLrcWebservice: I would recommend disabling the LRC RTIspy web service if you are trying to maximize performance; it definitely increases overhead.
RTI_notifyLevel: Increasing the notify level also slows things down a bit, as the RTI spends more time printing messages to the display and/or the log file. If you would like to generate a log file with as little impact on performance as possible, I suggest dropping RTI_detachNotifyLevelFromStdOut down to 0, which prevents the RTI from printing anything to stdout.
RTI_tcpNoDelay: Sets the TCP_NODELAY socket option, which disables Nagle's algorithm so that small messages are sent immediately rather than being buffered until a full packet accumulates. Enabling it trades some bandwidth efficiency for lower latency.
RTI_enableTcpCompression & RTI_enableUdpCompression: Compression helps optimize bandwidth utilization, but it also increases latency and CPU utilization, as the RTI must compress and decompress each message.
RTI_enablePacketBundling: Bundling reduces bandwidth and CPU utilization, but it also increases latency, as the RTI waits for additional messages to fill a bundle before sending.
RTI_enableMessageThrottling: This specifically affects internal RTI messages, not attribute updates or interactions. Throttling helps with congestion, but it delays the sending of messages.
RTI_smartForwardingLevel: Can help save CPU utilization if used properly, but requires more overhead in the forwarder.
These are the RID parameters with the largest effect; others may have smaller, but still noticeable, effects. A sketch of how several of these settings might look in a RID file follows.
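The snippet below is a hypothetical example of a latency-oriented RID configuration. The Lisp-like (setqb ...) syntax and the specific values shown are assumptions based on typical MAK RID files; verify the exact parameter names and defaults against the rid.mtl shipped with your RTI version.

    ;; Hypothetical low-latency RID settings -- verify against your rid.mtl.
    (setqb RTI_tcpNoDelay 1)            ;; disable Nagle's algorithm
    (setqb RTI_enableTcpCompression 0)  ;; trade bandwidth for lower latency
    (setqb RTI_enableUdpCompression 0)
    (setqb RTI_enablePacketBundling 0)  ;; send each message immediately
    (setqb RTI_enableLrcWebservice 0)   ;; skip RTIspy web service overhead
    (setqb RTI_notifyLevel 1)           ;; keep logging to a minimum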
The easiest way to tell what is taking the most time is to use the MAK RTI's built-in latency test, found in the RTI Assistant. By clicking on a federate in the RTI Federations View, you can see the round-trip times between that federate and every other federate. In the example below, you can see that the majority of the latency occurred while the message was sitting in a queue waiting for the receiving federate to read it.
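Separately, if you would rather instrument a federate yourself instead of using the Assistant, the timing side is straightforward: timestamp a "ping" interaction on send, have a peer echo it back, and halve the round trip. Below is a minimal sketch of the timing logic only, using std::chrono; the function names are hypothetical, and the HLA plumbing that actually sends the ping and echoes the pong is omitted.

    #include <chrono>
    #include <cstdio>

    using Clock = std::chrono::steady_clock;

    static Clock::time_point g_sendTime;

    // Call immediately before sending the "ping" interaction.
    void onPingSent()
    {
        g_sendTime = Clock::now();
    }

    // Call from the federate ambassador callback that receives the
    // echoed "pong" interaction.
    void onPongReceived()
    {
        auto rtt = std::chrono::duration_cast<std::chrono::microseconds>(
            Clock::now() - g_sendTime);
        // One-way latency is roughly half the round trip, assuming a
        // symmetric path and a peer that echoes immediately.
        std::printf("round trip: %lld us (one-way ~%lld us)\n",
                    static_cast<long long>(rtt.count()),
                    static_cast<long long>(rtt.count() / 2));
    }

    int main()
    {
        onPingSent();
        // ... in a real federate, tick() until the echoed pong arrives ...
        onPongReceived();
        return 0;
    }

Because the receiving side only sees the pong when it ticks, a measurement like this includes the queueing delay described above, which is usually where most of the time goes.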