# No probed ethernet devices Troubleshooting Guide
Compiled from GitHub F-Stack/f-stack repository issues and DPDK official documentation. Related issues: #1035, #837, #693, #663, #583, #581, #531, #386, #379, and more. Last updated: 2026-03-09.
## Symptom

When running F-Stack applications (helloworld, nginx, custom apps, etc.), you may encounter the following error:

```
EAL: Error - exiting with code: 1
  Cause: No probed ethernet devices
```

or:

```
ff_init failed with error "No probed ethernet devices"
```

The root cause is that DPDK cannot detect any available Ethernet device, i.e., `rte_eth_dev_count_avail()` returns 0.
This is one of the most frequently reported issues in the F-Stack community, with over 10 related issues spanning from 2019 to 2025.
## Cause 1: NIC not bound to a DPDK driver

DPDK requires the NIC to be unbound from its kernel driver (e.g., ixgbe, virtio) and rebound to a DPDK-compatible driver (igb_uio or vfio-pci).

Diagnosis:

```sh
cd /data/f-stack/dpdk
python3 usertools/dpdk-devbind.py --status
```

If the NIC appears under "Network devices using kernel driver" instead of "Network devices using DPDK-compatible driver", you need to rebind it.
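This check can also be scripted. A minimal sketch, assuming standard `dpdk-devbind.py --status` output; the sample text below is illustrative, not captured from a real system:

```shell
# Hypothetical helper: decide from dpdk-devbind.py --status output whether a
# given PCI address is bound to a DPDK driver. Sample output for illustration;
# in practice: status_output=$(python3 usertools/dpdk-devbind.py --status)
status_output='Network devices using DPDK-compatible driver
============================================
0000:00:03.0 '\''Virtio network device'\'' drv=igb_uio unused=

Network devices using kernel driver
===================================
0000:00:04.0 '\''Virtio network device'\'' if=eth1 drv=virtio-pci unused=igb_uio'

pci="0000:00:03.0"
if printf '%s\n' "$status_output" | grep -Eq "^$pci .*drv=(igb_uio|vfio-pci)"; then
  echo "$pci: bound to a DPDK driver"
else
  echo "$pci: still on a kernel driver - rebind it"
fi
```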
### Solution: Bind with igb_uio

```sh
# 1. Load the igb_uio module
modprobe uio
insmod /data/f-stack/dpdk/build/kernel/linux/igb_uio/igb_uio.ko

# 2. Check the NIC's PCI address
python3 usertools/dpdk-devbind.py --status

# 3. Bind the NIC (replace 0000:00:03.0 with your actual PCI address)
python3 usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0

# 4. Verify the binding
python3 usertools/dpdk-devbind.py --status
# The NIC should now appear under "Network devices using DPDK-compatible driver"
```

### Solution: Bind with vfio-pci (recommended for modern systems / VMs)
```sh
# 1. Load the vfio-pci module
modprobe vfio-pci

# 2. Bind the NIC
python3 usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:03.0

# 3. For environments without IOMMU support
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
```

## Cause 2: config.ini does not match the bound device

The port_list in config.ini must match the actually bound DPDK port, and the pci_whitelist (older DPDK) or allow (newer DPDK) entry must match the NIC's PCI address.
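As a quick cross-check, the configured address can be compared against the one you bound. A sketch; `/tmp/ff_check.ini` and all values here are illustrative:

```shell
# Hypothetical cross-check: compare the PCI address in config.ini against the
# one passed to dpdk-devbind.py --bind. File and values are examples only.
cat > /tmp/ff_check.ini <<'EOF'
[dpdk]
lcore_mask=1
port_list=0
pci_whitelist=0000:00:03.0
EOF

bound_pci="0000:00:03.0"   # the address you actually bound
cfg_pci=$(sed -En 's/^(pci_whitelist|allow)=//p' /tmp/ff_check.ini)

if [ "$cfg_pci" = "$bound_pci" ]; then
  echo "config.ini matches the bound NIC ($cfg_pci)"
else
  echo "MISMATCH: config has '$cfg_pci', bound NIC is '$bound_pci'"
fi
```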
Solution:

```ini
[dpdk]
lcore_mask=1
channel=4
port_list=0
# Older DPDK versions
pci_whitelist=0000:00:03.0
# Newer DPDK (21.11+)
# allow=0000:00:03.0
```

Verify with testpmd first:

```sh
dpdk-testpmd -l 0-1 -n 4 -a 0000:00:03.0 -- -i
# If you see "Found 1 port(s)", DPDK can detect the device correctly
```

## Cause 3: Missing --whole-archive linker flags

When linking a custom application against F-Stack, omitting the --whole-archive flag causes DPDK NIC Poll Mode Drivers (PMDs) to be silently excluded from the binary: PMDs register themselves via constructor functions, and since nothing references their symbols directly, the linker drops them from the static archives, so no devices are detected at runtime.
Incorrect (incomplete linking):

```makefile
LIBS += -lfstack -ldpdk
```

Correct — DPDK 19.11 and below:

```makefile
LIBS += -L${FF_PATH}/lib -Wl,--whole-archive,-lfstack,--no-whole-archive
LIBS += -L${FF_DPDK}/lib -Wl,--whole-archive,-ldpdk,--no-whole-archive
```

Correct — DPDK 20.11 / 21.11+ (using pkg-config):

```makefile
CFLAGS += $(shell pkg-config --cflags libdpdk)
LDFLAGS += $(shell pkg-config --libs libdpdk)
LDFLAGS += -Wl,--whole-archive,-lfstack,--no-whole-archive
```

Use `f-stack/example/Makefile` as a reference template.
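The effect of --whole-archive can be reproduced with a small self-contained demo (not F-Stack code): an archive member whose only purpose is a constructor is dropped by the linker unless --whole-archive forces every member in, which is exactly how PMD registration disappears.

```shell
# Self-contained illustration of the --whole-archive effect. Requires cc + ar.
d=$(mktemp -d)

cat > "$d/pmd.c" <<'EOF'
#include <stdio.h>
/* PMD-style self-registration: runs before main() via a constructor */
__attribute__((constructor)) static void reg(void) { puts("pmd registered"); }
EOF
printf 'int main(void){return 0;}\n' > "$d/main.c"

cc -c "$d/pmd.c"  -o "$d/pmd.o"
cc -c "$d/main.c" -o "$d/main.o"
ar rcs "$d/libpmd.a" "$d/pmd.o"

# main.o references no symbol of pmd.o, so the linker drops the member:
cc "$d/main.o" -L"$d" -lpmd -o "$d/without"
# --whole-archive pulls in every archive member, constructor included:
cc "$d/main.o" -Wl,--whole-archive -L"$d" -lpmd -Wl,--no-whole-archive -o "$d/with"

"$d/without"   # prints nothing
"$d/with"      # prints: pmd registered
```

The same mechanism applies to the real PMD archives: without the flag the binary links cleanly but contains no drivers.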
## Cause 4: pkg-config too old

A pkg-config version older than 0.28 cannot correctly parse DPDK's .pc files, leading to incomplete driver library linking.
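The version comparison can be automated with `sort -V`; `pkg_config_ok` is a helper name invented here, not part of DPDK or F-Stack:

```shell
# Hypothetical helper: succeeds when the given version string sorts at or
# above 0.28 (sort -V performs version-aware ordering).
pkg_config_ok() {
  [ "$(printf '0.28\n%s\n' "$1" | sort -V | head -n1)" = "0.28" ]
}

pkg_config_ok "0.29.2" && echo "0.29.2: ok"
pkg_config_ok "0.27.1" || echo "0.27.1: too old, upgrade"
# In practice: pkg_config_ok "$(pkg-config --version)" || <upgrade as below>
```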
Check the version:

```sh
pkg-config --version
```

Upgrade to 0.29.2:
```sh
cd /data
wget https://pkg-config.freedesktop.org/releases/pkg-config-0.29.2.tar.gz
tar xzvf pkg-config-0.29.2.tar.gz
cd pkg-config-0.29.2
./configure --with-internal-glib
make && make install
mv /usr/bin/pkg-config /usr/bin/pkg-config.bak
ln -s /usr/local/bin/pkg-config /usr/bin/pkg-config

# Verify
pkg-config --version   # should print 0.29.2
```

## Cause 5: Mellanox NIC missing runtime dependencies

When using Mellanox ConnectX NICs, the DPDK MLX5 PMD requires additional runtime libraries.
Error log signature:

```
net_mlx5: cannot load glue library: librte_pmd_mlx5_glue.so.xx.xx.x:
cannot open shared object file: No such file or directory
net_mlx5: cannot initialize PMD due to missing run-time dependency
on rdma-core libraries (libibverbs, libmlx5)
```

Solution:

```sh
# 1. Install the rdma-core dependencies
apt-get install rdma-core libibverbs-dev libmlx5-1   # Ubuntu
yum install rdma-core libibverbs libmlx5             # CentOS

# 2. Copy the glue library to a system path
cp /data/f-stack/dpdk/build/lib/librte_pmd_mlx5_glue.so.* /lib64/
ldconfig

# 3. Enable MLX5 support when compiling DPDK
cd /data/f-stack/dpdk
make config T=x86_64-native-linuxapp-gcc
sed 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/g' \
    -i build/.config
make clean && make && make install

# 4. Specify the PCI address in config.ini
# [dpdk]
# pci_whitelist=0000:03:00.0
```

## Cause 6: No DPDK-compatible physical NIC (VM / container / laptop)

On laptops, virtual machines, or containers without a DPDK-compatible physical NIC, use a virtual device (vdev) for testing.
Using net_ring (a memory loopback device):

```ini
[dpdk]
lcore_mask=1
channel=4
vdev=net_ring0
port_list=0

[port0]
addr=10.0.0.2
netmask=255.255.255.0
broadcast=10.0.0.255
gateway=10.0.0.1
```

Note: `net_ring` is for development/testing only, not for production use.
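As a pre-flight sanity check, you can confirm the config actually contains a vdev entry and read back the [port0] address before launching. A sketch; the temp file mirrors the sample config above, and a real check would point at your config.ini path:

```shell
# Hypothetical pre-flight check on an F-Stack config file (sample data only).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[dpdk]
vdev=net_ring0
port_list=0
[port0]
addr=10.0.0.2
netmask=255.255.255.0
EOF

grep -q '^vdev=net_ring' "$cfg" && echo "vdev configured"
# Print the addr= line inside the [port0] section only:
addr=$(sed -n '/^\[port0\]/,/^\[/s/^addr=//p' "$cfg")
echo "port0 addr: $addr"   # port0 addr: 10.0.0.2
```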
### Fix igb_uio in a virtual machine (source patch)

pci_intx_mask_supported() may return false in VMs. Apply this patch:

```c
// File: f-stack/dpdk/kernel/linux/igb_uio/igb_uio.c, line 274
// Before:
if (pci_intx_mask_supported(udev->pdev)) {
// After:
if (true || pci_intx_mask_supported(udev->pdev)) {
```

Then rebuild DPDK:

```sh
cd /data/f-stack/dpdk
meson -Denable_kmods=true build
ninja -C build && ninja -C build install
```

### DPDK built as shared libraries

When DPDK is built as shared libraries (CONFIG_RTE_BUILD_SHARED_LIB=y), PMD drivers are not automatically loaded.
Solution:

- Switch to the `dev` branch (this issue has been fixed there)
- Or use static library linking instead
### Rust bindings

When using F-Stack from a Rust project, NIC PMD driver libraries must be explicitly listed in the build script:

```rust
// build.rs
println!("cargo:rustc-link-lib=fstack");
println!("cargo:rustc-link-lib=rte_net_bond"); // explicitly add the driver lib
pkg_config::Config::new()
    .print_system_libs(false)
    .probe("libdpdk")
    .unwrap();
```

Ensure the pkg-config version is ≥ 0.28, otherwise driver libraries won't be resolved correctly.
## Decision tree

```
Step 1: Is the NIC bound to a DPDK driver?
  Run: dpdk-devbind.py --status
  ├─ Not bound → Bind with igb_uio or vfio-pci (Cause 1)
  └─ Bound → Step 2
Step 2: Does config.ini have the correct pci_whitelist / allow?
  ├─ Mismatch → Fix config.ini (Cause 2)
  └─ Correct → Step 3
Step 3: Is this a custom application with its own Makefile?
  ├─ Yes → Check --whole-archive linker flags (Cause 3) → Step 4
  └─ No → Step 4
Step 4: Is the pkg-config version < 0.28?
  ├─ Yes → Upgrade pkg-config (Cause 4)
  └─ No → Step 5
Step 5: What type of NIC?
  ├─ Mellanox → Install rdma-core / copy glue library (Cause 5)
  ├─ VM / none → Use vdev=net_ring0 or fix igb_uio (Cause 6)
  └─ Other → Confirm the NIC is on the DPDK supported hardware list
```
## Verification

After applying a fix, verify with these steps:

```sh
# 1. Confirm the NIC binding
python3 /data/f-stack/dpdk/usertools/dpdk-devbind.py --status

# 2. Use testpmd to verify that DPDK can detect the device
dpdk-testpmd -l 0-1 -n 4 -a 0000:00:03.0 -- -i
# Expected output: "Found 1 port(s)"

# 3. Run helloworld to verify that F-Stack starts cleanly
cd /data/f-stack/example
./helloworld --conf /etc/f-stack.conf --proc-type=primary --proc-id=0
# Expected: port initialization messages, no "No probed ethernet devices" error
```

## Related issues

| Issue | Status | Scenario | Solution |
|---|---|---|---|
| #1035 | Open | vdev=net_ring0 not working | Fix vdev config format |
| #837 | Open | Alibaba Cloud, eth0 still using kernel driver | Bind NIC to DPDK driver |
| #693 | Open | Custom app ff_init fails | Add --whole-archive to Makefile |
| #663 | Open | Rust binding fails at runtime | Explicitly link PMD driver libs |
| #583 | Closed | Using DPDK .so shared library | Switch to dev branch |
| #581 | Closed | helloworld fails to start | Upgrade pkg-config to ≥ 0.28 |
| #531 | Closed | Mellanox MLX5 NIC | Copy glue library to /lib64 |
| #386 | Closed | ixgbe NIC, driver not bound | Bind igb_uio driver |
| #379 | Closed | Custom PMD driver not linked | Link .a file in Makefile |
## References

- F-Stack Build Guide
- DPDK Linux Drivers Guide
- DPDK Binding NIC Drivers
- DPDK MLX5 NIC Guide
- Alibaba Cloud: Replace UIO with VFIO drivers
This document was automatically compiled by OpenClaw, based on F-Stack GitHub issue history and DPDK official documentation.