[ipxe-devel] iPXE boot fails if multiqueue is enabled in OpenStack
Ladi Prosek
lprosek at redhat.com
Mon Nov 27 16:01:26 UTC 2017
I think I understand what's going on. DPDK simply won't consider the
interface 'ready' until after all queues have been initialized.
http://dpdk.org/browse/dpdk/tree/lib/librte_vhost/vhost_user.c#n713
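To make that concrete, here is a small self-contained toy sketch in C of the
kind of check I mean; the struct and function names are my own approximations
for illustration only, the real code is behind the link above:

/* Toy sketch of the "all queues must be ready" gate in vhost_user.c.
 * Names are approximations, not the literal DPDK code. */
#include <stdbool.h>
#include <stdio.h>

struct vq {
    bool has_desc, has_avail, has_used;   /* rings mapped by the guest */
    int  kickfd, callfd;                  /* eventfds handed over by QEMU */
};

struct dev {
    unsigned  nr_vring;                   /* 2 virtqueues per queue pair */
    struct vq vq[32];
};

static bool vq_is_ready(const struct vq *q)
{
    return q->has_desc && q->has_avail && q->has_used &&
           q->kickfd >= 0 && q->callfd >= 0;
}

static bool virtio_is_ready(const struct dev *d)
{
    /* Every advertised virtqueue has to be fully set up, even if the
     * guest driver only ever intends to use the first RX/TX pair. */
    for (unsigned i = 0; i < d->nr_vring; i++)
        if (!vq_is_ready(&d->vq[i]))
            return false;
    return d->nr_vring > 0;
}

int main(void)
{
    struct dev d = { .nr_vring = 32 };    /* queues=16 -> 32 virtqueues */
    d.vq[0] = (struct vq){ true, true, true, 0, 0 };   /* RX 0 */
    d.vq[1] = (struct vq){ true, true, true, 0, 0 };   /* TX 0 */
    /* iPXE only sets up queue pair 0, so the device never becomes ready. */
    printf("ready: %d\n", virtio_is_ready(&d));
    return 0;
}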
It looks like Maxime is the right person to bug about this. One of his
recent commits appears to be somewhat related:
http://dpdk.org/browse/dpdk/commit/?id=eefac9536a
Maxime, iPXE has a simple virtio-net driver that never negotiates the
VIRTIO_NET_F_MQ feature and never initializes more than one queue.
This makes it incompatible with vhost-user configured with mq=on, as
Rafael and Zoltan have discovered.
Is there any chance DPDK could be made aware of whether the VIRTIO_NET_F_MQ
feature bit was acked by the guest driver, and fall back to operating with a
single queue when it was not? There's some context below in
this email. I can provide instructions on how to build iPXE and launch
QEMU to test this if you're interested.
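To illustrate what I am asking for, continuing the toy structs from the sketch
above, a relaxed check might look roughly like this; the helper name is
hypothetical and this is not a real DPDK patch, only the VIRTIO_NET_F_MQ bit
number (22) comes from the virtio spec:

/* Hypothetical relaxed check, reusing the toy structs above; just an
 * illustration of the behaviour I am asking about. */
#define VIRTIO_NET_F_MQ 22   /* feature bit from the virtio spec */

static bool virtio_is_ready_relaxed(const struct dev *d,
                                    unsigned long long features)
{
    unsigned nr_vring = d->nr_vring;

    /* A driver that never acked VIRTIO_NET_F_MQ will only ever use
     * queue pair 0, so waiting for the remaining queues just keeps the
     * device stuck in the "not ready" state forever. */
    if (!(features & (1ULL << VIRTIO_NET_F_MQ)) && nr_vring > 2)
        nr_vring = 2;                     /* first RX/TX pair only */

    for (unsigned i = 0; i < nr_vring; i++)
        if (!vq_is_ready(&d->vq[i]))
            return false;
    return nr_vring > 0;
}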
Thank you!
Ladi
On Fri, Nov 24, 2017 at 6:09 PM, Zoltan Kanizsai
<zoltan.kanizsai at ericsson.com> wrote:
> Hi Ladi,
>
> That's great news; it's good to see that the problem is reproducible!
> Please take your time; we are waiting for the analysis.
>
> Thanks,
> Zoltan
>
> On 11/24/2017 04:01 PM, Ladi Prosek wrote:
>
> On Wed, Nov 22, 2017 at 8:10 PM, Ladi Prosek <lprosek at redhat.com> wrote:
>
> Indeed, "VIRTIO-NET %p tx complete iobuf %p\n" would be printed if the
> TX packets were processed.
>
> I'll try to set up a local repro to investigate.
>
> Quick update: I was able to set up OVS with DPDK and see the same
> behavior as you guys, i.e. packets in the (first) tx queue seem to be
> ignored by the backend if virtio-net is configured with more than one
> queue.
>
> This is not really my forte, so please give me some more time to drill into it.
>
> Thanks,
> Ladi
>
> On Wed, Nov 22, 2017 at 5:06 PM, Zoltan Kanizsai
> <zoltan.kanizsai at ericsson.com> wrote:
>
> Hi Ladi,
>
> Thanks for the effort!
>
> The main problem is not with incoming DHCP replies, but that DHCP request
> packets cannot be sent out from the VM by the iPXE image: TX errors are reported.
> I can also see this when using ovs-tcpdump on the vhost-user socket:
> nothing comes out of the VM while it is trying to PXE boot.
>
> BR,
> Zoltan
>
> On 11/22/2017 04:57 PM, Ladi Prosek wrote:
>
> Hi Zoltan,
>
> I'll work on setting this up on my host in the upcoming days. Adding
> Victor to cc in case this is a known issue or he knows off-hand why
> this doesn't work.
>
> In summary: this is vhost-user networking configured with mq=on, but
> the simple iPXE driver uses only the first queue. Perhaps incoming DHCP
> packets are delivered to one of the other queues, which would explain why
> the driver doesn't see them.
>
> Thanks,
> Ladi
>
> On Mon, Nov 20, 2017 at 4:44 PM, Zoltan Kanizsai
> <zoltan.kanizsai at ericsson.com> wrote:
>
> Hi Ladi,
>
> I'm on the same team as Rafael. Here is the information you asked for:
>
> Qemu related package versions:
>
> root@blade0-7:~# apt list|grep qemu
>
> WARNING: apt does not have a stable CLI interface yet. Use with caution in
> scripts.
>
> ipxe-qemu/trusty-updates,now 1.0.0+git-20131111.c3d1e78-2ubuntu1.1 all
> [installed,automatic]
> nova-compute-qemu/mos9.0-updates 2:13.1.4-7~u14.04+mos38 all
> qemu/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-block-extra/mos9.0-updates,now 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> [installed,automatic]
> qemu-guest-agent/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-kvm/mos9.0-updates,now 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64 [installed]
> qemu-slof/trusty-updates 20140630+dfsg-1ubuntu1~14.04 all
> qemu-system/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-system-arm/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-system-common/mos9.0-updates,now 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> [installed,automatic]
> qemu-system-mips/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-system-misc/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-system-ppc/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-system-sparc/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-system-x86/mos9.0-updates,now 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> [installed,automatic]
> qemu-user/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-user-binfmt/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-user-static/mos9.0-updates 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> qemu-utils/mos9.0-updates,now 1:2.5+dfsg-5ubuntu6~u1404+mos4 amd64
> [installed,automatic]
>
>
> Qemu command line with multiqueue enabled:
>
> root@blade0-7:~# ps aux|grep qemu
> libvirt+ 18516 100 0.1 64700976 75052 ? Sl 15:38 1:41
> qemu-system-x86_64 -enable-kvm -name instance-00000018 -S -machine
> pc-i440fx-wily,accel=kvm,usb=off -cpu host -m 59392 -realtime mlock=off -smp
> 16,sockets=8,cores=1,threads=2 -object
> memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/hugepages_1GB/libvirt/qemu,share=yes,size=62277025792,host-nodes=0,policy=bind
> -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -uuid
> 40f79f06-fd66-47f1-a952-cb1366117c15 -smbios type=1,manufacturer=OpenStack
> Foundation,product=OpenStack
> Nova,version=13.1.4,serial=db887cbe-039b-453f-9eb9-4443a3ac48e5,uuid=40f79f06-fd66-47f1-a952-cb1366117c15,family=Virtual
> Machine -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-00000018/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew
> -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot
> strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=rbd:compute/40f79f06-fd66-47f1-a952-cb1366117c15_disk:id=compute:key=AQC8Z/hZiuytBRAAS9nrU5BkLQyVnngO/1az2A==:auth_supported=cephx\;none:mon_host=192.168.1.1\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=writeback
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive
> file=rbd:compute/40f79f06-fd66-47f1-a952-cb1366117c15_disk.config:id=compute:key=AQC8Z/hZiuytBRAAS9nrU5BkLQyVnngO/1az2A==:auth_supported=cephx\;none:mon_host=192.168.1.1\:6789,format=raw,if=none,id=drive-virtio-disk25,cache=writeback
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk25,id=virtio-disk25
> -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu5ff7901d-17 -netdev
> type=vhost-user,id=hostnet0,chardev=charnet0,queues=16 -device
> virtio-net-pci,mq=on,vectors=34,netdev=hostnet0,id=net0,mac=02:10:20:10:03:00,bus=pci.0,addr=0x3
> -chardev socket,id=charnet1,path=/var/run/openvswitch/vhu6137cefb-a5 -netdev
> type=vhost-user,id=hostnet1,chardev=charnet1,queues=16 -device
> virtio-net-pci,mq=on,vectors=34,netdev=hostnet1,id=net1,mac=02:10:20:10:03:02,bus=pci.0,addr=0x4
> -chardev socket,id=charnet2,path=/var/run/openvswitch/vhu98d81e14-60 -netdev
> type=vhost-user,id=hostnet2,chardev=charnet2,queues=16 -device
> virtio-net-pci,mq=on,vectors=34,netdev=hostnet2,id=net2,mac=02:10:20:10:03:04,bus=pci.0,addr=0x5
> -chardev socket,id=charnet3,path=/var/run/openvswitch/vhu38230b89-15 -netdev
> type=vhost-user,id=hostnet3,chardev=charnet3,queues=16 -device
> virtio-net-pci,mq=on,vectors=34,netdev=hostnet3,id=net3,mac=02:10:20:10:03:05,bus=pci.0,addr=0x6
> -chardev socket,id=charnet4,path=/var/run/openvswitch/vhu173ca317-4f -netdev
> type=vhost-user,id=hostnet4,chardev=charnet4,queues=16 -device
> virtio-net-pci,mq=on,vectors=34,netdev=hostnet4,id=net4,mac=02:10:20:10:03:06,bus=pci.0,addr=0x7
> -chardev
> file,id=charserial0,path=/var/lib/nova/instances/40f79f06-fd66-47f1-a952-cb1366117c15/console.log
> -device isa-serial,chardev=charserial0,id=serial0 -chardev
> pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device
> usb-tablet,id=input0 -vnc 0.0.0.0:0 -k en-us -device
> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action reset -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa -msg timestamp=on
> root@blade0-7:~#
>
>
> Without multiqueue:
>
> root@blade0-7:~# ps aux|grep qemu
> libvirt+ 12808 277 0.3 65044396 201752 ? Sl Nov16 17003:49
> qemu-system-x86_64 -enable-kvm -name instance-00000018 -S -machine
> pc-i440fx-wily,accel=kvm,usb=off -cpu host -m 59392 -realtime mlock=off -smp
> 16,sockets=8,cores=1,threads=2 -object
> memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/hugepages_1GB/libvirt/qemu,share=yes,size=62277025792,host-nodes=0,policy=bind
> -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -uuid
> 40f79f06-fd66-47f1-a952-cb1366117c15 -smbios type=1,manufacturer=OpenStack
> Foundation,product=OpenStack
> Nova,version=13.1.4,serial=db887cbe-039b-453f-9eb9-4443a3ac48e5,uuid=40f79f06-fd66-47f1-a952-cb1366117c15,family=Virtual
> Machine -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-00000018/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew
> -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot
> strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=rbd:compute/40f79f06-fd66-47f1-a952-cb1366117c15_disk:id=compute:key=AQC8Z/hZiuytBRAAS9nrU5BkLQyVnngO/1az2A==:auth_supported=cephx\;none:mon_host=192.168.1.1\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=writeback
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive
> file=rbd:compute/40f79f06-fd66-47f1-a952-cb1366117c15_disk.config:id=compute:key=AQC8Z/hZiuytBRAAS9nrU5BkLQyVnngO/1az2A==:auth_supported=cephx\;none:mon_host=192.168.1.1\:6789,format=raw,if=none,id=drive-virtio-disk25,cache=writeback
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk25,id=virtio-disk25
> -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu5ff7901d-17 -netdev
> type=vhost-user,id=hostnet0,chardev=charnet0 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=02:10:20:10:03:00,bus=pci.0,addr=0x3
> -chardev socket,id=charnet1,path=/var/run/openvswitch/vhu6137cefb-a5 -netdev
> type=vhost-user,id=hostnet1,chardev=charnet1 -device
> virtio-net-pci,netdev=hostnet1,id=net1,mac=02:10:20:10:03:02,bus=pci.0,addr=0x4
> -chardev socket,id=charnet2,path=/var/run/openvswitch/vhu98d81e14-60 -netdev
> type=vhost-user,id=hostnet2,chardev=charnet2 -device
> virtio-net-pci,netdev=hostnet2,id=net2,mac=02:10:20:10:03:04,bus=pci.0,addr=0x5
> -chardev socket,id=charnet3,path=/var/run/openvswitch/vhu38230b89-15 -netdev
> type=vhost-user,id=hostnet3,chardev=charnet3 -device
> virtio-net-pci,netdev=hostnet3,id=net3,mac=02:10:20:10:03:05,bus=pci.0,addr=0x6
> -chardev socket,id=charnet4,path=/var/run/openvswitch/vhu173ca317-4f -netdev
> type=vhost-user,id=hostnet4,chardev=charnet4 -device
> virtio-net-pci,netdev=hostnet4,id=net4,mac=02:10:20:10:03:06,bus=pci.0,addr=0x7
> -chardev
> file,id=charserial0,path=/var/lib/nova/instances/40f79f06-fd66-47f1-a952-cb1366117c15/console.log
> -device isa-serial,chardev=charserial0,id=serial0 -chardev
> pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device
> usb-tablet,id=input0 -vnc 0.0.0.0:0 -k en-us -device
> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action reset -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa -msg timestamp=on
> root@blade0-7:~#
>
> BR,
> Zoltan
>
>
> ________________________________
> From: Ladi Prosek <lprosek at redhat.com>
> Sent: Tuesday, November 14, 2017 7:58 AM
> To: Rafael Gellert
> Cc: ipxe-devel at lists.ipxe.org
> Subject: Re: [ipxe-devel] iPXE boot fails if multiqueue is enabled in
> OpenStack
>
> Hi,
>
> On Mon, Nov 13, 2017 at 8:33 AM, Rafael Gellert
> <rafael.gellert at ericsson.com> wrote:
>
> Hi,
>
> Could you please help me resolve this issue:
> http://forum.ipxe.org/showthread.php?tid=10521
>
> Could you please post the QEMU command line with and without the
> problematic hw_vif_multiqueue_enabled="true" option? Also, please include
> the QEMU version running on the host.
>
> Thank you,
> Ladi
>
>
>
>