[ipxe-devel] hyperv net device driver design question

Roman Kagan rkagan at virtuozzo.com
Fri Jan 19 17:59:52 UTC 2018


I'm developing Hyper-V / VMBus device emulation for QEMU, and I'd like
to make the firmware (SeaBIOS / OVMF) to be able to boot off the Hyper-V
network devices.

I'm looking into using iPXE for that, which happens to already have a
driver for Hyper-V network devices.

In particular, I've been able to boot with it by building it as a BIOS
ROM image and passing it to QEMU with -option-rom.

However, there are issues I'm struggling with:

1) the driver assumes full control over VMBus: it initializes,
   enumerates, and shuts it down at its own discretion.  As a result,
   should network booting fail, it returns to a BIOS whose own VMBus
   state (e.g. for accessing disks via Hyper-V SCSI) has been
   clobbered.

2) there's no way to pass the boot order from QEMU if there's more than
   a single Hyper-V net device

3) in the case of OVMF, I've run out of ideas for how to provide iPXE
   to the firmware in a ROM; building it as a .efidrv and incorporating
   it as a .efi blob in the main OVMF image[*] results in the driver
   starting and discovering the VMBus and the net devices, but then
   giving up because the parent bus is unknown to EFI.  1) and 2)
   apply, too.

[*] licensing issues may also preclude this scheme

I guess that the hyperv netdevice driver in iPXE was designed to cover
only the scenario when, on the real Hyper-V, the vendor PXE stack pulled
iPXE from the network, and then it took over completely, with no way
out, so the above problems didn't stand in the way.

I wonder if it can also be made the only PXE stack in the system,
interoperating with the firmware, and, if so, how?  It looks like the
VMBus would have to be managed by the firmware for that to work, with
the driver using it through firmware-provided interfaces, but it's
unclear how feasible that is.

Any suggestions?
