

OSADL Projects

OSADL QA Farm on Real-time of Mainline Linux


Real-time Ethernet (UDP) worst-case round-trip time monitoring


Two pairs of Linux real-time test systems are equipped with second Ethernet adapters. The two systems of each pair are connected to each other using a crossover cable to form a real-time network connection based on a peer-to-peer full-duplex UDP link. All systems run standard user-space applications solely based on POSIX network calls such as bind() and connect(). The first plots of the two main sections below are generated from consecutive 5-minute maxima of the time elapsed between sending a UDP frame and receiving the response packet. The tests run twice a day for three hours at a cycle interval of 500 µs, so a 30-h window covers three such runs. The maximum in the rightmost column of the 30-h plots below is therefore based on a total of nine hours of recording time, i.e. 64,800,000 individual timed cycles. The related histograms are available here.

Configurations and settings

The following recommendations assume that no user-space task or IRQ thread is running with a priority higher than 79. If the network traffic is sent at a high frequency and/or with a large payload, which may prevent RCU from catching up, the kernel configuration should contain

  CONFIG_RCU_BOOST=y
  CONFIG_RCU_BOOST_PRIO=99

Kernel version 2.6.x:

  • Set the priority of the related Ethernet IRQ thread to 90, bind it to a selected CPU
  • Disable irqbalance or provide adequate environment settings IRQBALANCE_BANNED_CPUS= and IRQBALANCE_BANNED_INTERRUPTS=
  • Set the priority of the sirq-net-rx kernel thread of the selected CPU to 89
  • Set the priority of the sirq-net-tx kernel thread of the selected CPU to 89
  • Set the priority of the related user-space application to 80 and bind it to the selected CPU
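
A minimal shell sketch of these settings, assuming (purely as an example) that the Ethernet interrupt is IRQ 16, the selected CPU is core #1 and the application binary is ./rt_udp_app; the kernel thread names follow the typical naming of 2.6.x RT kernels and may differ on a given system:

service irqbalance stop                   # or set IRQBALANCE_BANNED_CPUS= and IRQBALANCE_BANNED_INTERRUPTS= accordingly
chrt -f -p 90 $(pgrep -x IRQ-16)          # Ethernet IRQ thread to SCHED_FIFO priority 90
taskset -p 0x2 $(pgrep -x IRQ-16)         # ... bound to core #1 (mask 0x2)
chrt -f -p 89 $(pgrep -x sirq-net-rx/1)   # network receive softirq thread of core #1
chrt -f -p 89 $(pgrep -x sirq-net-tx/1)   # network transmit softirq thread of core #1
taskset -c 1 chrt -f 80 ./rt_udp_app      # application at priority 80 on core #1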

Kernel versions 3.0 to 3.4 (without splitsoftirq backport):

The softirq split that was available in kernel 2.6.x was not re-implemented until kernel version 3.6. To achieve a worst-case latency comparable to that under 2.6.x, the following settings must be made (this requires at least two processor cores):

  • Disable CONFIG_RT_GROUP_SCHED
  • Specify the kernel command line parameters irqaffinity=<othercpus> isolcpus=<cpu>
  • Set the priority of the Ethernet IRQ thread to 90 and bind it to the isolated CPU
  • Disable irqbalance
  • Set the priority of the ksoftirqd kernel thread of the isolated CPU to 89
  • Set the priority of the related user-space application to 80 and bind it to the isolated CPU
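
A corresponding sketch for this workaround, again assuming Ethernet IRQ 16 on device eth0, core #1 as the isolated CPU and ./rt_udp_app as the application (all example values):

# kernel command line (example): irqaffinity=0 isolcpus=1
chrt -f -p 90 $(pgrep -x irq/16-eth0)   # Ethernet IRQ thread to priority 90
taskset -p 0x2 $(pgrep -x irq/16-eth0)  # ... bound to the isolated core #1
chrt -f -p 89 $(pgrep -x ksoftirqd/1)   # softirq thread of the isolated core to priority 89
taskset -c 1 chrt -f 80 ./rt_udp_app    # application at priority 80 on the isolated core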

Kernel versions 3.2 and 3.4 with splitsoftirq backport, and kernel versions 3.6 up to 4.14:

The softirq workaround explained above is no longer needed! Kernel developer and RT maintainer Thomas Gleixner found the most elegant solution: the network task runs directly in the context of the IRQ thread of the related device and thus implicitly adopts its priority. This avoids any additional configuration; as a consequence, a network RT task is now configured in the same way as any other non-network RT task that requires a deterministic response to a device interrupt, i.e. by simply setting the priorities of the IRQ thread and of the user-space application.

  • Set the priority of the Ethernet IRQ thread to 90, bind it to a suitable core in a multi-core processor system
  • Disable irqbalance
  • Set the priority of the related user-space application to 80, bind it to the same core as the IRQ thread in a multi-core processor system
  • "That's All Folks!"

The softirq split was backported to 3.2 and 3.4, but the related patches are not part of the regular 3.2-rt and 3.4-rt releases. They are available here and must be applied separately. Alternatively, Steven Rostedt has created a -featN branch of the RT patch that contains the softirq split backport. The plots below are generated on the server and the client systems at the primary slots of rack #1, slot #2 and rack #5, slot #4, respectively.


Real-time Ethernet worst-case round-trip time recording
Please note that the recorded values represent maxima of 5-min intervals. Thus, the data in the columns labeled "Min:" and "Avg:" should not be considered; the only relevant result is the maximum of consecutive 5-min maxima at the rightmost column labeled "Max:".

Real-time traffic
Real-time traffic whose round-trip time is displayed above.

Non real-time traffic
Non real-time UDP traffic generated using the iperf tool - artificially limited to 20 Mb/s by traffic shaping and policy.

Real-time priority of all IRQ threads
The real-time priorities of the network IRQ thread and of ksoftirqd/3 are temporarily increased from 50 to 98 while the tests are running.

SMP core of all IRQ threads
Except for the IRQ service handler of the network adapter that runs the real-time performance test, all handlers are allowed to run on any core.

SMP core of all non-IRQ kernel threads
Non-IRQ kernel threads are allowed to run on any core.

Kernel versions 4.16 and later:

In version 4.15 of the mainline Linux kernel, developers decided to rework the softirq framework in such a way that the softirq split explained above could no longer be implemented. To cope with this new situation, the recommendation was given to always use a multi-core processor for real-time networking. This makes it possible to isolate one of the cores from the remaining system and to run real-time networking exclusively on that core. A feature added in kernel version 4.17 to copy the affinity mask of the hard IRQ to the IRQ service routine can additionally be used to avoid migrating to another core while executing the IRQ handlers; however, some hardware devices may not support this.

The following example configuration assumes a 4-core processor with core #3 isolated for real-time; the network device is named enp0s25 and uses interrupt #27:

  • Add isolcpus=3 to the kernel command line. This will prevent user space processes from running on core #3.
  • Write the affinity mask 0x7 to the virtual file smp_affinity of all interrupts at /proc/irq/<irqnum> to prevent them from running on core #3. As already mentioned above, this feature may not be implemented for all devices, see script and example output.

cd /proc/irq
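# mask 0x7 = cores #0 to #2; write errors of IRQs that cannot be moved are silently ignored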
for i in [0-9]*
do
  echo 7 >$i/smp_affinity 2>/dev/null
done

  • Write the affinity mask 0x8 to the virtual file /proc/irq/27/smp_affinity of the network interrupt:

echo 8 >/proc/irq/27/smp_affinity

  • Determine the process IDs of all kernel threads and set their affinity mask to 0x7. This may not work in all cases, see script and example output.
  • Set the priority of the network interrupt service routine, irq/27-enp0s25 in our case, to 98.
  • Set the affinity mask of the related user-space application to 0x8 and its priority to 97 (a shell sketch of these last steps follows below).
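
The last three steps could look roughly as follows; enumerating the kernel threads as children of kthreadd (PID 2) and the application name ./rt_udp_app are assumptions made for the purpose of this sketch:

for pid in $(ps -o pid= --ppid 2)        # all kernel threads are children of kthreadd (PID 2)
do
  taskset -p 0x7 $pid 2>/dev/null        # keep them on cores #0 to #2; some may refuse
done
chrt -f -p 98 $(pgrep -x irq/27-enp0s25) # network IRQ thread to SCHED_FIFO priority 98
taskset -c 3 chrt -f 97 ./rt_udp_app     # application on core #3 (mask 0x8) at priority 97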

The plots below are generated on the server and the client systems at the shadow slots of rack #1, slot #2 and rack #5, slot #4, respectively.


Real-time Ethernet worst-case round-trip time recording
Please note that the recorded values represent maxima of 5-min intervals. Thus, the data in the columns labeled "Min:" and "Avg:" should not be considered; the only relevant result is the maximum of consecutive 5-min maxima at the rightmost column labeled "Max:".

Real-time traffic
Real-time traffic whose round-trip time is displayed above.

Non real-time traffic
Non real-time UDP traffic generated using the iperf tool - artificially limited to 20 Mb/s by traffic shaping and policy.

Real-time priority of all IRQ threads
The real-time priorities of the network IRQ thread and of ksoftirqd/3 are temporarily increased from 50 to 98 while the tests are running.

SMP core of all IRQ threads
The affinity mask of all IRQ threads excludes core #3, but some of them are still running on it. The network IRQ thread is temporarily limited to core #3 while the tests are running.

SMP core of all non-IRQ kernel threads
Non-IRQ kernel threads are configured not to run on core #3.

Topology

The interfaces of the two systems in rack #1/slot #2 and rack #5/slot #4 are configured as VLAN interfaces, and the packets are sent at VLAN priority (QoS) 7, which is the highest possible value. The systems are connected to ports of a VLAN-capable switch (HP J9773A 2530-24G). The two ports are set to the same VLAN ID as the two network interfaces and are also given the highest priority.
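
As an illustration, such a VLAN interface with a fixed egress priority could be created with iproute2 roughly as follows; the parent interface enp0s25, VLAN ID 100 and the address are examples only, not the actual settings of the test systems:

# create the VLAN interface and map every socket priority to VLAN priority (PCP) 7
ip link add link enp0s25 name enp0s25.100 type vlan id 100 \
    egress-qos-map 0:7 1:7 2:7 3:7 4:7 5:7 6:7 7:7
ip addr add 192.168.100.1/24 dev enp0s25.100
ip link set enp0s25.100 up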