Debian version: 7.4
Kernel: 3.2.0
gcc: 4.7.2
Later, during installation, I found that sudo was missing, so if you don't have it either, install it first: apt-get install sudo
Give the VM three NICs, and be careful not to use NAT mode; otherwise the test application cannot send or receive packets and all the statistics stay at 0.
In Debian, one NIC with a real address is enough for logging in over ssh; the other two are left for DPDK to play with.
First, set the environment variables:
export RTE_SDK=`pwd`
export RTE_TARGET=x86_64-native-linuxapp-gcc
Then enter the source directory and run:
root@debian:~/code/dpdk-1.8.0# ./tools/setup.sh
------------------------------------------------------------------------------
 RTE_SDK exported as /root/code/dpdk-1.8.0
------------------------------------------------------------------------------
----------------------------------------------------------
 Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-native-linuxapp-gcc
[2] i686-native-linuxapp-icc
[3] ppc_64-power8-linuxapp-gcc
[4] x86_64-ivshmem-linuxapp-gcc
[5] x86_64-ivshmem-linuxapp-icc
[6] x86_64-native-bsdapp-clang
[7] x86_64-native-bsdapp-gcc
[8] x86_64-native-linuxapp-clang
[9] x86_64-native-linuxapp-gcc
[10] x86_64-native-linuxapp-icc
----------------------------------------------------------
 Step 2: Setup linuxapp environment
----------------------------------------------------------
[11] Insert IGB UIO module
[12] Insert VFIO module
[13] Insert KNI module
[14] Setup hugepage mappings for non-NUMA systems
[15] Setup hugepage mappings for NUMA systems
[16] Display current Ethernet device settings
[17] Bind Ethernet device to IGB UIO module
[18] Bind Ethernet device to VFIO module
[19] Setup VFIO permissions
----------------------------------------------------------
 Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[20] Run test application ($RTE_TARGET/app/test)
[21] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)
----------------------------------------------------------
 Step 4: Other tools
----------------------------------------------------------
[22] List hugepage info from /proc/meminfo
----------------------------------------------------------
 Step 5: Uninstall and system cleanup
----------------------------------------------------------
[23] Uninstall all targets
[24] Unbind NICs from IGB UIO driver
[25] Remove IGB UIO module
[26] Remove VFIO module
[27] Remove KNI module
[28] Remove hugepage mappings
[29] Exit Script

Option:
Mine is a 64-bit system, so I chose [9] to start the build, and right away hit this error:
/lib/modules/`uname -r`/build: No such file or directory
Even if you create that directory by hand, you still get an error: No targets specified and no makefile found. That is because build is normally not a directory but a symlink pointing at the matching kernel header directory under /usr/src, i.e. /usr/src/linux-headers-`uname -r`/. So it is enough to create the build symlink manually, as sketched below.
If the kernel headers are not installed, fetch the ones matching your own kernel version: apt-get install linux-headers-`uname -r`
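A minimal sketch of the whole fix, assuming the headers end up in the usual /usr/src location (adjust the paths to your system):

# install headers matching the running kernel
apt-get install linux-headers-`uname -r`
# recreate the symlink the kernel header package would normally provide
ln -s /usr/src/linux-headers-`uname -r` /lib/modules/`uname -r`/build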
After that comes loading the kernel modules, reserving hugepages, binding the NICs, and so on.
Here choose [11], [14], and [17]. The hugepage count can be set to 128. For the NICs you need to enter their PCIe addresses, e.g. 0000:02:05.0; the on-screen guidance is clear enough that you can pick the NICs just by following the prompts. You can verify the hugepage reservation as shown below.
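To double-check the reservation outside the menu (this is the same information option [22] reads), you can query /proc/meminfo directly; HugePages_Total should show the 128 pages just configured:

grep Huge /proc/meminfo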
Choose [21] to start the test application. I only have two cores, so when asked for the bitmask of cores I entered 3 (binary 11, i.e. cores 0 and 1).
But after start it keeps printing this error over and over:
EAL: Error reading from file descriptor
This seems to be caused by VMware emulating PCIe INTx interrupts rather poorly, so modify the source:
diff --git a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
index d1ca26e..c46a00f 100644
--- a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
+++ b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
@@ -505,14 +505,11 @@ igbuio_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	}
 	/* fall back to INTX */
 	case RTE_INTR_MODE_LEGACY:
-		if (pci_intx_mask_supported(dev)) {
-			dev_dbg(&dev->dev, "using INTX");
-			udev->info.irq_flags = IRQF_SHARED;
-			udev->info.irq = dev->irq;
-			udev->mode = RTE_INTR_MODE_LEGACY;
-			break;
-		}
-		dev_notice(&dev->dev, "PCI INTX mask not supported\n");
+		dev_dbg(&dev->dev, "using INTX");
+		udev->info.irq_flags = IRQF_SHARED;
+		udev->info.irq = dev->irq;
+		udev->mode = RTE_INTR_MODE_LEGACY;
+		break;
 	/* fall back to no IRQ */
 	case RTE_INTR_MODE_NONE:
 		udev->mode = RTE_INTR_MODE_NONE;
And that is not the end of it: after this change, pci_intx_mask_supported() is no longer referenced anywhere, so the build still fails (DPDK treats warnings as errors), and the definition of that function in the compat.h header has to be removed as well...
After rebuilding, everything is OK:
testpmd> start
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 829923         RX-dropped: 0             RX-total: 829923
  TX-packets: 829856         TX-dropped: 0             TX-total: 829856
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 829915         RX-dropped: 0             RX-total: 829915
  TX-packets: 829856         TX-dropped: 0             TX-total: 829856
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
  RX-packets: 1659838        RX-dropped: 0             RX-total: 1659838
  TX-packets: 1659712        TX-dropped: 0             TX-total: 1659712
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
One caveat: as soon as I type start, the CPUs are pegged at 100% and the VM becomes very sluggish; even typing stop takes a while to register. That is expected behavior, since DPDK's poll-mode drivers busy-poll the NICs instead of waiting for interrupts.
The interactive setup does not lend itself to automation, so the loading can be scripted instead.
First build and install DPDK:
make install T=x86_64-native-linuxapp-gcc
The following commands can then go into a script; set the PCI addresses according to your own machine:
# reserve 128 2MB hugepages and mount hugetlbfs (the mount point must exist)
echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
# load the uio framework plus the DPDK igb_uio driver
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
# bind both test NICs to igb_uio
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:05.0
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:06.0
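To confirm the binding took effect, dpdk_nic_bind.py can also print a status listing; the two ports should now show up under the DPDK-compatible driver:

./tools/dpdk_nic_bind.py --status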
Finally, run the test application:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 2 -- -i
Here -c 0x3 is the same two-core mask as before, -n 2 is the number of memory channels, and -i drops you into the interactive testpmd prompt.