[ipxe-devel] hyperv net device driver design question

Michael Brown mcb30 at ipxe.org
Fri Jan 26 00:22:59 UTC 2018


On 22/01/18 17:12, Roman Kagan wrote:
> Our workaround was to disable the message page once the VMBus setup was
> over and the message page was no longer needed, and not to enable the
> event page as we were using polling anyway so we could busy-wait
> directly on the ringbuffer descriptors.

That didn't seem to work for us; the virtual NIC stopped passing traffic 
as soon as Windows took ownership of VMBus.

>> I'd be very happy to update iPXE to bind via the VMBus UEFI protocols, if
>> you have documentation and/or an open source implementation of the protocols
>> matching those found in Hyper-V's own UEFI firmware.
> 
> Frankly I think Microsoft has no incentive to even make this protocol
> stable, let alone document it, so I don't see this as a viable solution.

Fair point.

> OTOH we already have a preliminary version of Hyper-V SCSI driver for
> OVMF; it also includes the VMBus bus driver defining its own protocol
> (similar to XenBus) to which the devices attach.  I guess iPXE can be
> made to use it if found in the system.

That sounds like a good plan to me.

>  What is unclear to me is whether
> it's beneficial compared to re-implementing the Hyper-V network driver
> in OVMF proper, given that OVMF has a fairly complete network stack
> itself and that it's not quite obvious how to make iPXE available in
> OVMF.

The network stack in EDK2 (and hence OVMF) is nowhere near as fast as 
iPXE's.  I've personally observed iPXE sustain throughputs of 
5000 Mbps, while the EDK2 stack typically struggles to reach even 100 Mbps. 
Also, we have meaningful error messages.  :)

If there's going to be a stable (and, ideally, documented) protocol for 
VMBus in OVMF, then I'll be updating iPXE to be able to use it anyway. 
Is the protocol header file available somewhere?

Thanks,

Michael
