gdb/amdgpu: add follow fork and exec support
Prior to this patch, it's not possible for GDB to debug GPU code in fork
children or after an exec. The amd-dbgapi target attaches to processes
when an inferior appears due to a "run" or "attach" command, but not
after a fork or exec. This patch adds support for that, such that it's
possible for an inferior to fork and for GDB to debug the GPU code in
the child.
To achieve that, use the inferior_forked and inferior_execd observers.
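
For illustration, the registration could look roughly like the sketch
below, in the amd-dbgapi target's initialization routine.  The handler
names and exact observer signatures here are assumptions made for the
sketch; only the gdb::observers::inferior_forked / inferior_execd
observables and the attach-with-name form are taken from GDB's observer
mechanism.

    /* Sketch only: the handler names are illustrative.  */

    void
    _initialize_amd_dbgapi_target ()
    {
      gdb::observers::inferior_forked.attach (amd_dbgapi_inferior_forked,
                                              "amd-dbgapi");
      gdb::observers::inferior_execd.attach (amd_dbgapi_inferior_execd,
                                             "amd-dbgapi");
    }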
In the case of fork, we have nothing to do if `child_inf` is nullptr,
meaning that GDB won't debug the child. We also don't attach if the
inferior has vforked. We are already attached to the parent's address
space, which is shared with the child, so trying to attach would cause
problems. And anyway, the inferior can't do anything other than exec or
exit; it certainly won't start GPU kernels before exec'ing.
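
As a rough sketch of that logic (the handler and helper names, such as
attach_amd_dbgapi, and the observer's parameter list are assumptions
for illustration, inferred from the description above):

    /* Sketch: PARENT_INF forked; CHILD_INF is nullptr if GDB is not
       going to debug the child.  */

    static void
    amd_dbgapi_inferior_forked (inferior *parent_inf, inferior *child_inf,
                                target_waitkind fork_kind)
    {
      /* Nothing to do if GDB won't debug the child.  */
      if (child_inf == nullptr)
        return;

      /* After a vfork, the child shares the parent's address space, to
         which we are already attached, so don't attach again.  */
      if (fork_kind == TARGET_WAITKIND_VFORKED)
        return;

      attach_amd_dbgapi (child_inf);
    }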
In the case of exec, we detach from the exec'ing inferior and attach to
the following inferior. This works regardless of whether the exec'ing
and following inferiors are the same. If they are the same, meaning
that execution continues in
the existing inferior, we need to do a detach/attach anyway, as
amd-dbgapi needs to be aware of the new address space created by the
exec.
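
A corresponding sketch for the exec case (again, the handler and helper
names are illustrative, not necessarily those used in the patch):

    /* Sketch: detach from the exec'ing inferior and attach to the
       following one, so that amd-dbgapi sees the post-exec address
       space.  This is correct whether or not EXEC_INF and FOLLOW_INF
       are the same inferior.  */

    static void
    amd_dbgapi_inferior_execd (inferior *exec_inf, inferior *follow_inf)
    {
      detach_amd_dbgapi (exec_inf);
      attach_amd_dbgapi (follow_inf);
    }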
Note that we use observers and not target_ops::follow_{fork,exec} here.
When the amd-dbgapi target is compiled in, it will attach (in the
amd_dbgapi_process_attach sense, not the ptrace sense) to native
inferiors when they appear, but won't push itself on the inferior's
target stack just yet. It only pushes itself if the inferior
initializes the ROCm runtime. So, if a non-GPU-using inferior calls
fork, an amd_dbgapi_target::follow_fork method would not get called.
Same for exec. A previous version of the code had the amd-dbgapi target
pushed all the time, in which case we could use the target methods. But
we prefer having the target pushed only when necessary; it's less
intrusive when doing native debugging that doesn't involve the GPU.
Change-Id: I5819c151c371120da8bab2fa9cbfa8769ba1d6f9
Reviewed-By: Pedro Alves <pedro@palves.net>