[ipxe-devel] Recaptcha

Rusty Weber RWeber at onestopsystems.com
Fri Apr 20 04:25:45 UTC 2018


PS: I would use the forum, but I cannot sign up because the reCAPTCHA on the site is broken.
The reCAPTCHA directs me to tell you to go to:
g.co/recaptcha/upgrade

From: Rusty Weber
Sent: Thursday, April 19, 2018 9:59 PM
To: 'ipxe-devel at lists.ipxe.org'
Subject: Big endian vs. little endian decided at compile time.

After setting some machines to re-provision themselves through iPXE, I noticed that some of the UUIDs I gathered from the running OS of the system did not match the UUIDs that iPXE returned.
Furthermore, I noticed a specific pattern: the UUIDs returned by iPXE differed in a big-endian vs. little-endian way for the first 8 bytes of the UUID (at first I wasn't certain why the last half of the ID was fine).  To make matters more complicated, some of the machines in my lab work as expected while others do not.

Example:
(System UUID returned by installed OS) != (System UUID returned by ipxe)
From a Dell R-730
44454c4c-4d00-1051-8037-b8c04f583532 != 4c4c4544-004d-5110-8037-b8c04f583532
From an HP DL-380 Gen8
32333536-3030-5355-4532-333343395834 == 32333536-3030-5355-4532-333343395834

The first 8 bytes of the UUID that iPXE returned in the Dell example above are wrong; either that, or the Linux and Windows folks both got their code for reading the value wrong.  In either case those values should match, so my investigation started with iPXE.
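To make the pattern concrete, here is a small stand-alone sketch (my own illustration, not iPXE code; the raw byte values are simply taken from the iPXE-reported Dell UUID above).  Reversing the byte order of only the first three fields (4 + 2 + 2 bytes) turns one form into the other, while the last 8 bytes are printed byte-for-byte in both cases:

```
#include <stdint.h>
#include <stdio.h>

int main ( void ) {
        /* Raw bytes, in memory order, of the Dell R-730 UUID as iPXE prints it */
        uint8_t raw[16] = { 0x4c, 0x4c, 0x45, 0x44, 0x00, 0x4d, 0x51, 0x10,
                            0x80, 0x37, 0xb8, 0xc0, 0x4f, 0x58, 0x35, 0x32 };

        /* Printed byte-for-byte: 4c4c4544-004d-5110-8037-b8c04f583532 (iPXE) */
        printf ( "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
                 "%02x%02x%02x%02x%02x%02x\n",
                 raw[0], raw[1], raw[2], raw[3], raw[4], raw[5],
                 raw[6], raw[7], raw[8], raw[9], raw[10], raw[11],
                 raw[12], raw[13], raw[14], raw[15] );

        /* First three fields byte-swapped: 44454c4c-4d00-1051-8037-b8c04f583532 (OS) */
        printf ( "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
                 "%02x%02x%02x%02x%02x%02x\n",
                 raw[3], raw[2], raw[1], raw[0], raw[5], raw[4],
                 raw[7], raw[6], raw[8], raw[9], raw[10], raw[11],
                 raw[12], raw[13], raw[14], raw[15] );
        return 0;
}
```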
Investigating where the UUID string was being generated, in "./core/uuid.c", I noticed that the first half of the UUID is printed out after being run through the "be(16|32)_to_cpu" macros defined in "./include/byteswap.h".  Those macros are defined as follows:

```
#if __BYTE_ORDER == __LITTLE_ENDIAN
#define __cpu_to_leNN( bits, value ) (value)
#define __cpu_to_beNN( bits, value ) __bswap_ ## bits (value)
#define __leNN_to_cpu( bits, value ) (value)
#define __beNN_to_cpu( bits, value ) __bswap_ ## bits (value)
#define __cpu_to_leNNs( bits, ptr ) do { } while ( 0 )
#define __cpu_to_beNNs( bits, ptr ) __bswap_ ## bits ## s (ptr)
#define __leNN_to_cpus( bits, ptr ) do { } while ( 0 )
#define __beNN_to_cpus( bits, ptr ) __bswap_ ## bits ## s (ptr)
#endif

#if __BYTE_ORDER == __BIG_ENDIAN
#define __cpu_to_leNN( bits, value ) __bswap_ ## bits (value)
#define __cpu_to_beNN( bits, value ) (value)
#define __leNN_to_cpu( bits, value ) __bswap_ ## bits (value)
#define __beNN_to_cpu( bits, value ) (value)
#define __cpu_to_leNNs( bits, ptr ) __bswap_ ## bits ## s (ptr)
#define __cpu_to_beNNs( bits, ptr ) do { } while ( 0 )
#define __leNN_to_cpus( bits, ptr ) __bswap_ ## bits ## s (ptr)
#define __beNN_to_cpus( bits, ptr ) do { } while ( 0 )
#endif
#define be16_to_cpu( value ) __beNN_to_cpu ( 16, value )
#define be32_to_cpu( value ) __beNN_to_cpu ( 32, value )
```
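On a little-endian x86 build (__BYTE_ORDER == __LITTLE_ENDIAN), the first branch above is the one compiled in, so the be*_to_cpu macros always reduce to byte swaps.  A rough trace of the expansion for the first UUID field (my own annotation, not from the headers):

```
/* With __BYTE_ORDER == __LITTLE_ENDIAN:
 *
 *   be32_to_cpu ( uuid->canonical.a )
 *     -> __beNN_to_cpu ( 32, uuid->canonical.a )
 *     -> __bswap_32 ( uuid->canonical.a )
 *
 * so the stored bytes of the field are reversed before printing; on a
 * big-endian build the same macro would pass the value through unchanged.
 */
```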

From uuid.c:
```
const char * uuid_ntoa ( const union uuid *uuid ) {
        static char buf[37]; /* "00000000-0000-0000-0000-000000000000" */

        sprintf ( buf, "%08x-%04x-%04x-%04x-%02x%02x%02x%02x%02x%02x",
                  be32_to_cpu ( uuid->canonical.a ),
                  be16_to_cpu ( uuid->canonical.b ),
                  be16_to_cpu ( uuid->canonical.c ),
                  be16_to_cpu ( uuid->canonical.d ),
                  uuid->canonical.e[0], uuid->canonical.e[1],
                  uuid->canonical.e[2], uuid->canonical.e[3],
                  uuid->canonical.e[4], uuid->canonical.e[5] );
        return buf;
}
```
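For context, the fields used in uuid_ntoa() imply a layout roughly like the following (reconstructed from the format specifiers above, not copied from the iPXE headers, so treat it as a sketch):

```
#include <stdint.h>

/* Sketch of the UUID layout implied by uuid_ntoa(): field a is printed
 * with %08x (32-bit), b/c/d with %04x (16-bit), and e[] supplies the
 * final six %02x bytes. */
union uuid {
        struct {
                uint32_t a;
                uint16_t b;
                uint16_t c;
                uint16_t d;
                uint8_t e[6];
        } __attribute__ (( packed )) canonical;
        uint8_t raw[16];
};
```

Note that d is also passed through be16_to_cpu() in uuid_ntoa(), yet in the Dell example only a, b and c actually differed between the OS and iPXE values.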

Here is where I get lost / am not sure how to proceed: why does the UUID reported by iPXE differ from the UUID that the OS returned?  Is it possible that there are differences between CPUs where the values in the UUID are not little-endian?  Or differences in endianness between the UEFI and x86 BIOS images?
Russell Weber
Software Support and Quality Engineer


