binutils/readelf: decode AMDGPU-specific e_flags
Decode and print the AMDGPU-specific fields of e_flags, as documented
here:
https://llvm.org/docs/AMDGPUUsage.html#header
That is:
- The specific GPU model
- Whether the xnack and sramecc features are enabled
The result looks like:
- Flags: 0x52f
+ Flags: 0x52f, gfx906, xnack any, sramecc any
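
For illustration, here is a minimal standalone sketch of how those
fields can be picked out of e_flags. The EF_AMDGPU_* names and values
below are transcribed from the AMDGPUUsage table (HSA v4 encoding) as
an assumption; the patch itself uses the definitions from
include/elf/amdgcn.h:

    #include <stdio.h>

    /* Masks and values per the AMDGPUUsage header table (assumed).  */
    #define EF_AMDGPU_MACH                    0x0ff
    #define EF_AMDGPU_MACH_AMDGCN_GFX906      0x02f
    #define EF_AMDGPU_FEATURE_XNACK_V4        0x300
    #define EF_AMDGPU_FEATURE_XNACK_ANY_V4    0x100
    #define EF_AMDGPU_FEATURE_SRAMECC_V4      0xc00
    #define EF_AMDGPU_FEATURE_SRAMECC_ANY_V4  0x400

    int
    main (void)
    {
      unsigned int e_flags = 0x52f;

      printf ("Flags: 0x%x", e_flags);

      /* The low byte selects the GPU model.  */
      if ((e_flags & EF_AMDGPU_MACH) == EF_AMDGPU_MACH_AMDGCN_GFX906)
        printf (", gfx906");

      /* xnack and sramecc are two-bit fields in the v4 encoding.  */
      if ((e_flags & EF_AMDGPU_FEATURE_XNACK_V4)
          == EF_AMDGPU_FEATURE_XNACK_ANY_V4)
        printf (", xnack any");
      if ((e_flags & EF_AMDGPU_FEATURE_SRAMECC_V4)
          == EF_AMDGPU_FEATURE_SRAMECC_ANY_V4)
        printf (", sramecc any");

      printf ("\n");
      return 0;
    }

Run on the value above, this prints exactly the "Flags: 0x52f, gfx906,
xnack any, sramecc any" line shown.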
The flags for the "HSA" OS ABI are properly versioned and documented on
that page. But the NONE, PAL and MESA3D OS ABIs are neither well
documented nor versioned. Taking a peek at the LLVM source code, we
see that they
encode their flags the same way as HSA v3. For example, for PAL:
https://github.com/llvm/llvm-project/blob/c8b614cd74a92d85936aed5ac7c642af75ffdc29/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUTargetStreamer.cpp#L601
So for those other OS ABIs, we decode the flags the same way as for
HSA v3.
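
A rough sketch of that dispatch, with the ELFOSABI_AMDGPU_* and
ELFABIVERSION_AMDGPU_HSA_V4 values taken from the AMDGPUUsage page
(again assumed here, not copied from the patch):

    #include <stdio.h>

    /* OS ABI and ABI version values per AMDGPUUsage (assumed).  */
    #define ELFOSABI_NONE                 0
    #define ELFOSABI_AMDGPU_HSA           64
    #define ELFOSABI_AMDGPU_PAL           65
    #define ELFOSABI_AMDGPU_MESA3D        66
    #define ELFABIVERSION_AMDGPU_HSA_V4   2

    /* HSA v3 single-bit feature flags (assumed).  */
    #define EF_AMDGPU_FEATURE_XNACK_V3    0x100
    #define EF_AMDGPU_FEATURE_SRAMECC_V3  0x200

    static void
    decode_feature_flags (unsigned char osabi, unsigned char abiversion,
                          unsigned int e_flags)
    {
      if (osabi == ELFOSABI_AMDGPU_HSA
          && abiversion >= ELFABIVERSION_AMDGPU_HSA_V4)
        {
          /* HSA v4+: xnack and sramecc are two-bit fields
             (unsupported/any/off/on), decoded as in the sketch
             shown earlier.  */
        }
      else
        {
          /* HSA v3, and the NONE, PAL and MESA3D OS ABIs: plain
             on/off bits.  */
          if (e_flags & EF_AMDGPU_FEATURE_XNACK_V3)
            printf (", xnack");
          if (e_flags & EF_AMDGPU_FEATURE_SRAMECC_V3)
            printf (", sramecc");
        }
    }

    int
    main (void)
    {
      /* A PAL binary falls through to the v3-style decoding.  */
      decode_feature_flags (ELFOSABI_AMDGPU_PAL, 0, 0x12f);
      printf ("\n");
      return 0;
    }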
binutils/ChangeLog:

	* readelf.c: Include elf/amdgcn.h.
	(decode_AMDGPU_machine_flags): New.
	(get_machine_flags): Handle flags for EM_AMDGPU machine type.

include/ChangeLog:

	* elf/amdgcn.h: Add EF_AMDGPU_MACH_AMDGCN_* and
	EF_AMDGPU_FEATURE_* defines.
Change-Id: Ib5b94df7cae0719a22cf4e4fd0629330e9485c12