[ipxe-devel] hyperv net device driver design question
Roman Kagan
rkagan at virtuozzo.com
Mon Jan 22 17:12:00 UTC 2018
On Mon, Jan 22, 2018 at 12:05:55AM +0000, Michael Brown wrote:
> On 19/01/18 17:59, Roman Kagan wrote:
> > 1) the driver assumes full control over VMBus: it initializes,
> > enumerates, and shuts it down at its own discretion. As a result,
> > should network booting fail and control return to the BIOS, which may
> > have its own VMBus state (e.g. to access disks via Hyper-V SCSI),
> > that state ends up corrupted
>
> Yes. There's no interface I'm aware of that would allow iPXE to share
> control with other legacy BIOS components.
Sigh...
> This problem is similar to an issue that iPXE faces when performing an iSCSI
> boot of Windows Server 2016. Various Windows components (winload.exe et al)
> will assume full control over VMBus, which then causes iPXE's emulated INT13
> iSCSI drive to fail.
>
> We work around this by checking the SynIC message page MSR to detect when
> some other component has stolen control of VMBus, and reclaiming ownership
> if needed:
>
> http://git.ipxe.org/ipxe.git/commitdiff/b91cc98
>
> For legacy BIOS, I think you'll need to do something similar.
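If I read that right, the check boils down to something like the
following (just a sketch: the MSR layout is from the Hyper-V TLFS, and
the accessor/variable names are made up):

#include <stdint.h>

#define HV_X64_MSR_SIMP    0x40000083   /* SynIC message page MSR (TLFS) */
#define HV_SIMP_ENABLE     0x1ULL       /* bit 0: page enabled */
#define HV_SIMP_BASE_MASK  (~0xfffULL)  /* bits 63:12: page base address */

/* Provided elsewhere: raw MSR accessors and the physical address of the
 * message page we set up ourselves (all names hypothetical) */
extern uint64_t rdmsr64(uint32_t msr);
extern void wrmsr64(uint32_t msr, uint64_t value);
extern uint64_t our_simp_base;

/* Detect whether some other component has repointed or disabled the
 * SynIC message page, and reclaim it if so */
void hv_reclaim_simp(void)
{
    uint64_t simp = rdmsr64(HV_X64_MSR_SIMP);

    if (!(simp & HV_SIMP_ENABLE) ||
        (simp & HV_SIMP_BASE_MASK) != our_simp_base) {
        wrmsr64(HV_X64_MSR_SIMP, our_simp_base | HV_SIMP_ENABLE);
    }
}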
Interesting... We've implemented a Hyper-V SCSI driver for SeaBIOS and
ran into a problem booting Windows 2016 as well. However, in our
experience winload didn't try to take over VMBus. It merely initialized
the SynIC and its message and event pages and passed them on to the
booted Windows kernel; if winload saw the respective registers already
initialized, it left them unchanged, so it was the Windows kernel itself
that started using pages it didn't own and therefore crashed.
Our workaround was to disable the message page once VMBus setup was
complete and the page was no longer needed, and never to enable the
event page at all: since we were using polling anyway, we could
busy-wait directly on the ring buffer descriptors.
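In code the whole thing was essentially this (a simplified sketch, not
the actual SeaBIOS patch; the names are invented, the MSR number and the
ring header layout follow the TLFS and common VMBus usage):

#include <stdint.h>

#define HV_X64_MSR_SIMP  0x40000083   /* SynIC message page MSR */
#define HV_SIMP_ENABLE   0x1ULL

extern uint64_t rdmsr64(uint32_t msr);
extern void wrmsr64(uint32_t msr, uint64_t value);

/* Head of a VMBus ring buffer page (host-to-guest direction) */
struct ring_hdr {
    volatile uint32_t write_index;
    volatile uint32_t read_index;
    /* ... interrupt mask, pending send size, reserved, data ... */
};

/* Once channel offers/opens are done, stop using SynIC messages so that
 * a later owner (e.g. winload) finds the page disabled and sets up its
 * own.  The event page (HV_X64_MSR_SIEFP) is simply never enabled. */
void vmbus_quiesce_synic(void)
{
    uint64_t simp = rdmsr64(HV_X64_MSR_SIMP);
    wrmsr64(HV_X64_MSR_SIMP, simp & ~HV_SIMP_ENABLE);
}

/* Completion polling without the event page: busy-wait until the host
 * advances the write index past what we have already consumed */
void ring_wait_for_data(struct ring_hdr *ring, uint32_t our_read_index)
{
    while (ring->write_index == our_read_index)
        __asm__ __volatile__("pause");
}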
> > 3) in the case of OVMF, I've run out of ideas for how to provide iPXE
> > to the firmware in a ROM; building it as a .efidrv and incorporating
> > it as a .efi blob into the main OVMF image[*] results in the driver
> > starting and discovering the VMBus and the netdevices, but then
> > giving up because the parent bus is unknown to EFI. 1) and 2) apply,
> > too.
> >
> > [*] licensing issues may also preclude this scheme
>
> On the licensing point: this seems like "mere aggregation" to me, so I don't
> see a problem.
Good to know, thanks.
> > I wonder whether it can also be made the only PXE stack in the
> > system, interoperating with the firmware, and, if so, how to do it.
> > It looks like VMBus would have to be managed by the firmware for that
> > to work, with the driver using it through firmware-provided
> > interfaces, but it's unclear how feasible that is.
>
> As discussed in
>
> http://git.ipxe.org/ipxe.git/commitdiff/9366578
>
> the "proper" solution for UEFI would be for iPXE to attach to whatever VMBus
> abstraction is provided by the platform firmware. Unfortunately the
> abstraction provided by the Hyper-V UEFI firmware is totally undocumented.
>
> I'd be very happy to update iPXE to bind via the VMBus UEFI protocols, if
> you have documentation and/or an open source implementation of the protocols
> matching those found in Hyper-V's own UEFI firmware.
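Just to spell out what binding via such a protocol would look like from
the driver side, I'd imagine something along these lines (every
VMBus-specific name and GUID below is hypothetical; only the standard
UEFI driver-binding machinery is real):

#include <Uefi.h>
#include <Protocol/DriverBinding.h>
#include <Library/UefiBootServicesTableLib.h>

/* Placeholder for whatever GUID the firmware's VMBus bus driver installs
   on its child handles */
extern EFI_GUID gHypotheticalVmbusProtocolGuid;

EFI_STATUS
EFIAPI
NetvscSupported (
  IN EFI_DRIVER_BINDING_PROTOCOL  *This,
  IN EFI_HANDLE                   ControllerHandle,
  IN EFI_DEVICE_PATH_PROTOCOL     *RemainingDevicePath OPTIONAL
  )
{
  VOID        *Vmbus;
  EFI_STATUS  Status;

  /* Bind only to handles that carry the (hypothetical) VMBus protocol */
  Status = gBS->OpenProtocol (
                  ControllerHandle,
                  &gHypotheticalVmbusProtocolGuid,
                  &Vmbus,
                  This->DriverBindingHandle,
                  ControllerHandle,
                  EFI_OPEN_PROTOCOL_BY_DRIVER
                  );
  if (EFI_ERROR (Status)) {
    return Status;
  }

  /* Further checks (e.g. on the netvsc device type GUID) would go here */

  gBS->CloseProtocol (
         ControllerHandle,
         &gHypotheticalVmbusProtocolGuid,
         This->DriverBindingHandle,
         ControllerHandle
         );
  return EFI_SUCCESS;
}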
Frankly, I think Microsoft has no incentive even to keep this protocol
stable, let alone document it, so I don't see that as a viable solution.
OTOH we already have a preliminary version of a Hyper-V SCSI driver for
OVMF; it also includes a VMBus bus driver that defines its own protocol
(similar to XenBus) to which the devices attach. I guess iPXE could be
made to use it when present in the system. What is unclear to me is
whether that would be beneficial compared to re-implementing the Hyper-V
network driver in OVMF proper, given that OVMF has a fairly complete
network stack of its own and that it's not quite obvious how to make
iPXE available in OVMF in the first place.
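To make that concrete, the protocol our bus driver exposes could end up
looking roughly like OVMF's XENBUS_PROTOCOL does today, e.g. (all names
and the GUID are invented for illustration; the actual interface may
well differ):

#include <Uefi.h>

#define HYPOTHETICAL_VMBUS_PROTOCOL_GUID \
  { 0x00000000, 0x0000, 0x0000, \
    { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } }

typedef struct _VMBUS_PROTOCOL VMBUS_PROTOCOL;

typedef
EFI_STATUS
(EFIAPI *VMBUS_CHANNEL_OPEN) (
  IN VMBUS_PROTOCOL  *This,
  IN UINT32          RingBufferPages
  );

typedef
EFI_STATUS
(EFIAPI *VMBUS_CHANNEL_SEND_PACKET) (
  IN VMBUS_PROTOCOL  *This,
  IN VOID            *Data,
  IN UINT32          Length,
  IN UINT64          TransactionId
  );

struct _VMBUS_PROTOCOL {
  EFI_GUID                   DeviceType;   /* channel class, e.g. netvsc */
  EFI_GUID                   DeviceId;     /* per-instance GUID */
  VMBUS_CHANNEL_OPEN         ChannelOpen;
  VMBUS_CHANNEL_SEND_PACKET  ChannelSendPacket;
  /* ... receive, close, GPADL setup, interrupt/poll hooks, etc. ... */
};

An iPXE (or native OVMF) netvsc driver would then simply match on
DeviceType in its Supported() routine, much like the earlier sketch,
instead of probing VMBus itself.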
Thanks,
Roman.