From fbea99ea8a062e5cd96e2d88336984ed3adc93d4 Mon Sep 17 00:00:00 2001
From: Pedro Alves
Date: Fri, 7 Aug 2015 17:24:01 +0100
Subject: [PATCH] Implement all-stop on top of a target running non-stop mode

This finally implements user-visible all-stop mode running with the
target_ops backend always in non-stop mode.

This is a stepping stone towards finer-grained control of threads,
being able to do interesting things like thread groups, associating
groups with breakpoints, etc.  From the user's perspective, all-stop
mode is really just a special case of being able to stop and resume
specific sets of threads, so it makes sense to do this step first.

With this, even in all-stop, the target is no longer in charge of
stopping all threads before reporting an event to the core -- the
core takes care of it when it sees fit.  For example, when "next"- or
"step"-ing, we can avoid stopping and resuming all threads at each
internal single-step, and instead only stop all threads when we're
about to present the stop to the user.

The implementation is almost straightforward, as the heavy lifting
has been done already in previous patches.  Basically, we replace
checks for "set non-stop on/off" (the non_stop global) with calls to
a new target_is_non_stop_p function.  In a few places, if "set
non-stop off", we stop all threads explicitly, and in a few other
places we resume all threads explicitly, making use of existing
methods that were added for teaching non-stop to step over
breakpoints without displaced stepping.

This adds a new "maint set target-non-stop on/off/auto" knob that
allows both disabling the feature if we find problems and
force-enabling it for development (useful when teaching a target
about this).  The default is "auto", which means the feature is
enabled if a new target method says it should be enabled.  The patch
implements the method in linux-nat.c, but just for illustration, as
it still returns false.  We'll need a few follow-up fixes before
turning it on by default.  This is a separate target method from the
one indicating regular non-stop support because, e.g., while native
linux-nat.c is close to regression-free with all-stop on top of
non-stop (with following patches fixing the remaining regressions),
remote.c+gdbserver will still need more fixing, even though it
supports "set non-stop on".

Tested on x86_64 Fedora 20, native, with and without "set displaced
off", and with and without "maint set target-non-stop on"; and also
against gdbserver.

gdb/ChangeLog:
2015-08-07  Pedro Alves

	* NEWS: Mention "maint set/show target-non-stop".
	* breakpoint.c (update_global_location_list): Check
	target_is_non_stop_p instead of non_stop.
	* infcmd.c (attach_command_post_wait, attach_command): Likewise.
	* infrun.c (show_can_use_displaced_stepping)
	(use_displaced_stepping, start_step_over): Likewise.
	(internal_resume_ptid): New function.
	(resume): Use it.
	(proceed): Check target_is_non_stop_p instead of non_stop.  If in
	all-stop mode but the target is always in non-stop mode, start all
	the other threads that are implicitly resumed too.
	(for_each_just_stopped_thread, fetch_inferior_event)
	(adjust_pc_after_break, stop_all_threads): Check
	target_is_non_stop_p instead of non_stop.
	(handle_inferior_event): Likewise.  Handle detach-fork in all-stop
	with the target always in non-stop mode.
	(handle_signal_stop) : Check target_is_non_stop_p
	instead of non_stop.
	(switch_back_to_stepped_thread): Check target_is_non_stop_p
	instead of non_stop.
	(keep_going_stepped_thread): Use internal_resume_ptid.
	(stop_waiting): If in all-stop mode, and the target is in non-stop
	mode, stop all threads.
	(keep_going_pass): Likewise, when starting a new in-line step-over
	sequence.
	* linux-nat.c (get_pending_status, select_event_lwp)
	(linux_nat_filter_event, linux_nat_wait_1, linux_nat_wait): Check
	target_is_non_stop_p instead of non_stop.
	(linux_nat_always_non_stop_p): New function.
	(linux_nat_stop): Check target_is_non_stop_p instead of non_stop.
	(linux_nat_add_target): Install linux_nat_always_non_stop_p.
	* target-delegates.c: Regenerate.
	* target.c (target_is_non_stop_p): New function.
	(target_non_stop_enabled, target_non_stop_enabled_1): New globals.
	(maint_set_target_non_stop_command)
	(maint_show_target_non_stop_command): New functions.
	(_initialize_target): Install "maint set/show target-non-stop"
	commands.
	* target.h (struct target_ops) : New field.
	(target_non_stop_enabled): New declaration.
	(target_is_non_stop_p): New declaration.

gdb/doc/ChangeLog:
2015-08-07  Pedro Alves

	* gdb.texinfo (Maintenance Commands): Document "maint set/show
	target-non-stop".
---
 gdb/ChangeLog          |  45 +++++++++++++++
 gdb/NEWS               |  14 +++++
 gdb/breakpoint.c       |   2 +-
 gdb/doc/ChangeLog      |   5 ++
 gdb/doc/gdb.texinfo    |  24 ++++++++
 gdb/infcmd.c           |   4 +-
 gdb/infrun.c           | 122 ++++++++++++++++++++++++++++++++---------
 gdb/linux-nat.c        |  25 ++++++---
 gdb/target-delegates.c |  31 +++++++++++
 gdb/target.c           |  71 ++++++++++++++++++++++++
 gdb/target.h           |  13 +++++
 11 files changed, 318 insertions(+), 38 deletions(-)

diff --git a/gdb/ChangeLog b/gdb/ChangeLog
index fa0225f4694..11a60d477e5 100644
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,3 +1,48 @@
+2015-08-07  Pedro Alves
+
+	* NEWS: Mention "maint set/show target-non-stop".
+	* breakpoint.c (update_global_location_list): Check
+	target_is_non_stop_p instead of non_stop.
+	* infcmd.c (attach_command_post_wait, attach_command): Likewise.
+	* infrun.c (show_can_use_displaced_stepping)
+	(use_displaced_stepping, start_step_over): Likewise.
+	(internal_resume_ptid): New function.
+	(resume): Use it.
+	(proceed): Check target_is_non_stop_p instead of non_stop.  If in
+	all-stop mode but the target is always in non-stop mode, start all
+	the other threads that are implicitly resumed too.
+	(for_each_just_stopped_thread, fetch_inferior_event)
+	(adjust_pc_after_break, stop_all_threads): Check
+	target_is_non_stop_p instead of non_stop.
+	(handle_inferior_event): Likewise.  Handle detach-fork in all-stop
+	with the target always in non-stop mode.
+	(handle_signal_stop) : Check target_is_non_stop_p
+	instead of non_stop.
+	(switch_back_to_stepped_thread): Check target_is_non_stop_p
+	instead of non_stop.
+	(keep_going_stepped_thread): Use internal_resume_ptid.
+	(stop_waiting): If in all-stop mode, and the target is in non-stop
+	mode, stop all threads.
+	(keep_going_pass): Likewise, when starting a new in-line step-over
+	sequence.
+	* linux-nat.c (get_pending_status, select_event_lwp)
+	(linux_nat_filter_event, linux_nat_wait_1, linux_nat_wait): Check
+	target_is_non_stop_p instead of non_stop.
+	(linux_nat_always_non_stop_p): New function.
+	(linux_nat_stop): Check target_is_non_stop_p instead of non_stop.
+	(linux_nat_add_target): Install linux_nat_always_non_stop_p.
+	* target-delegates.c: Regenerate.
+	* target.c (target_is_non_stop_p): New function.
+	(target_non_stop_enabled, target_non_stop_enabled_1): New globals.
+	(maint_set_target_non_stop_command)
+	(maint_show_target_non_stop_command): New functions.
+	(_initialize_target): Install "maint set/show target-non-stop"
+	commands.
+ * target.h (struct target_ops) : New field. + (target_non_stop_enabled): New declaration. + (target_is_non_stop_p): New declaration. + 2015-08-07 Pedro Alves * breakpoint.c (breakpoints_should_be_inserted_now): If any thread diff --git a/gdb/NEWS b/gdb/NEWS index 7e58cc3f31a..3fe603615af 100644 --- a/gdb/NEWS +++ b/gdb/NEWS @@ -8,6 +8,14 @@ * The 'record instruction-history' command now indicates speculative execution when using the Intel(R) Processor Trace recording format. +* New commands + +maint set target-non-stop (on|off|auto) +maint show target-non-stop + Control whether GDB targets always operate in non-stop mode even if + "set non-stop" is "off". The default is "auto", meaning non-stop + mode is enabled if supported by the target. + *** Changes in GDB 7.10 * Support for process record-replay and reverse debugging on aarch64*-linux* @@ -104,6 +112,12 @@ maint print symbol-cache-statistics maint flush-symbol-cache Flush the contents of the symbol cache. +maint set target-non-stop (on|off|auto) +maint show target-non-stop + Control whether GDB targets always operate in non-stop mode even if + "set non-stop" is "off". The default is "auto", meaning non-stop + mode is enabled if supported by the target. + record btrace bts record bts Start branch trace recording using Branch Trace Store (BTS) format. diff --git a/gdb/breakpoint.c b/gdb/breakpoint.c index 7d14ac9237c..91a53b91511 100644 --- a/gdb/breakpoint.c +++ b/gdb/breakpoint.c @@ -12330,7 +12330,7 @@ update_global_location_list (enum ugll_insert_mode insert_mode) if (!found_object) { - if (removed && non_stop + if (removed && target_is_non_stop_p () && need_moribund_for_location_type (old_loc)) { /* This location was removed from the target. In diff --git a/gdb/doc/ChangeLog b/gdb/doc/ChangeLog index 473debfcf5e..846ac24d746 100644 --- a/gdb/doc/ChangeLog +++ b/gdb/doc/ChangeLog @@ -1,3 +1,8 @@ +2015-08-07 Pedro Alves + + * gdb.texinfo (Maintenance Commands): Document "maint set/show + target-non-stop". + 2015-08-07 Markus Metzger * gdb.texinfo (Process Record and Replay): Document prefixing of diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo index 863bb66fcad..900970bd8c7 100644 --- a/gdb/doc/gdb.texinfo +++ b/gdb/doc/gdb.texinfo @@ -34437,6 +34437,30 @@ asynchronous mode (@pxref{Background Execution}). Normally the default is asynchronous, if it is available; but this can be changed to more easily debug problems occurring only in synchronous mode. +@kindex maint set target-non-stop @var{mode} [on|off|auto] +@kindex maint show target-non-stop +@item maint set target-non-stop +@itemx maint show target-non-stop + +This controls whether @value{GDBN} targets always operate in non-stop +mode even if @code{set non-stop} is @code{off} (@pxref{Non-Stop +Mode}). The default is @code{auto}, meaning non-stop mode is enabled +if supported by the target. + +@table @code +@item maint set target-non-stop auto +This is the default mode. @value{GDBN} controls the target in +non-stop mode if the target supports it. + +@item maint set target-non-stop on +@value{GDBN} controls the target in non-stop mode even if the target +does not indicate support. + +@item maint set target-non-stop off +@value{GDBN} does not control the target in non-stop mode even if the +target supports it. 
+@end table + @kindex maint set per-command @kindex maint show per-command @item maint set per-command diff --git a/gdb/infcmd.c b/gdb/infcmd.c index 29aaf445736..d1d3c87763f 100644 --- a/gdb/infcmd.c +++ b/gdb/infcmd.c @@ -2542,7 +2542,7 @@ attach_command_post_wait (char *args, int from_tty, int async_exec) selected thread is stopped, others may still be executing. Be sure to explicitly stop all threads of the process. This should have no effect on already stopped threads. */ - if (non_stop) + if (target_is_non_stop_p ()) target_stop (pid_to_ptid (inferior->pid)); /* Tell the user/frontend where we're stopped. */ @@ -2644,7 +2644,7 @@ attach_command (char *args, int from_tty) init_wait_for_inferior (); clear_proceed_status (0); - if (non_stop) + if (target_is_non_stop_p ()) { /* If we find that the current thread isn't stopped, explicitly do so now, because we're going to install breakpoints and diff --git a/gdb/infrun.c b/gdb/infrun.c index 9690a3608f5..f1d8e7c564e 100644 --- a/gdb/infrun.c +++ b/gdb/infrun.c @@ -1630,7 +1630,7 @@ show_can_use_displaced_stepping (struct ui_file *file, int from_tty, fprintf_filtered (file, _("Debugger's willingness to use displaced stepping " "to step over breakpoints is %s (currently %s).\n"), - value, non_stop ? "on" : "off"); + value, target_is_non_stop_p () ? "on" : "off"); else fprintf_filtered (file, _("Debugger's willingness to use displaced stepping " @@ -1643,7 +1643,8 @@ show_can_use_displaced_stepping (struct ui_file *file, int from_tty, static int use_displaced_stepping (struct gdbarch *gdbarch) { - return (((can_use_displaced_stepping == AUTO_BOOLEAN_AUTO && non_stop) + return (((can_use_displaced_stepping == AUTO_BOOLEAN_AUTO + && target_is_non_stop_p ()) || can_use_displaced_stepping == AUTO_BOOLEAN_TRUE) && gdbarch_displaced_step_copy_insn_p (gdbarch) && find_record_target () == NULL); @@ -2014,7 +2015,7 @@ start_step_over (void) because we wouldn't be able to resume anything else until the target stops again. In non-stop, the resume always resumes only TP, so it's OK to let the thread resume freely. */ - if (!non_stop && !step_what) + if (!target_is_non_stop_p () && !step_what) continue; switch_to_thread (tp->ptid); @@ -2033,7 +2034,7 @@ start_step_over (void) return 1; } - if (!non_stop) + if (!target_is_non_stop_p ()) { /* On all-stop, shouldn't have resumed unless we needed a step over. */ @@ -2178,6 +2179,25 @@ user_visible_resume_ptid (int step) return resume_ptid; } +/* Return a ptid representing the set of threads that we will resume, + in the perspective of the target, assuming run control handling + does not require leaving some threads stopped (e.g., stepping past + breakpoint). USER_STEP indicates whether we're about to start the + target for a stepping command. */ + +static ptid_t +internal_resume_ptid (int user_step) +{ + /* In non-stop, we always control threads individually. Note that + the target may always work in non-stop mode even with "set + non-stop off", in which case user_visible_resume_ptid could + return a wildcard ptid. */ + if (target_is_non_stop_p ()) + return inferior_ptid; + else + return user_visible_resume_ptid (user_step); +} + /* Wrapper for target_resume, that handles infrun-specific bookkeeping. 
*/ @@ -2389,7 +2409,7 @@ resume (enum gdb_signal sig) insert_single_step_breakpoint (gdbarch, aspace, pc); insert_breakpoints (); - resume_ptid = user_visible_resume_ptid (user_step); + resume_ptid = internal_resume_ptid (user_step); do_target_resume (resume_ptid, 0, GDB_SIGNAL_0); discard_cleanups (old_cleanups); tp->resumed = 1; @@ -2498,12 +2518,7 @@ resume (enum gdb_signal sig) use singlestep breakpoint. */ gdb_assert (!(thread_has_single_step_breakpoints_set (tp) && step)); - /* Decide the set of threads to ask the target to resume. Start - by assuming everything will be resumed, than narrow the set - by applying increasingly restricting conditions. */ - resume_ptid = user_visible_resume_ptid (user_step); - - /* Maybe resume a single thread after all. */ + /* Decide the set of threads to ask the target to resume. */ if ((step || thread_has_single_step_breakpoints_set (tp)) && tp->control.trap_expected) { @@ -2514,6 +2529,8 @@ resume (enum gdb_signal sig) breakpoint if allowed to run. */ resume_ptid = inferior_ptid; } + else + resume_ptid = internal_resume_ptid (user_step); if (execution_direction != EXEC_REVERSE && step && breakpoint_inserted_here_p (aspace, pc)) @@ -2935,11 +2952,52 @@ proceed (CORE_ADDR addr, enum gdb_signal siggnal) other thread was already doing one. In either case, don't resume anything else until the step-over is finished. */ } - else if (started && !non_stop) + else if (started && !target_is_non_stop_p ()) { /* A new displaced stepping sequence was started. In all-stop, we can't talk to the target anymore until it next stops. */ } + else if (!non_stop && target_is_non_stop_p ()) + { + /* In all-stop, but the target is always in non-stop mode. + Start all other threads that are implicitly resumed too. */ + ALL_NON_EXITED_THREADS (tp) + { + /* Ignore threads of processes we're not resuming. */ + if (!ptid_match (tp->ptid, resume_ptid)) + continue; + + if (tp->resumed) + { + if (debug_infrun) + fprintf_unfiltered (gdb_stdlog, + "infrun: proceed: [%s] resumed\n", + target_pid_to_str (tp->ptid)); + gdb_assert (tp->executing || tp->suspend.waitstatus_pending_p); + continue; + } + + if (thread_is_in_step_over_chain (tp)) + { + if (debug_infrun) + fprintf_unfiltered (gdb_stdlog, + "infrun: proceed: [%s] needs step-over\n", + target_pid_to_str (tp->ptid)); + continue; + } + + if (debug_infrun) + fprintf_unfiltered (gdb_stdlog, + "infrun: proceed: resuming %s\n", + target_pid_to_str (tp->ptid)); + + reset_ecs (ecs, tp); + switch_to_thread (tp->ptid); + keep_going_pass_signal (ecs); + if (!ecs->wait_some_more) + error ("Command aborted."); + } + } else if (!tp->resumed && !thread_is_in_step_over_chain (tp)) { /* The thread wasn't started, and isn't queued, run it now. */ @@ -3151,7 +3209,7 @@ for_each_just_stopped_thread (for_each_just_stopped_thread_callback_func func) if (!target_has_execution || ptid_equal (inferior_ptid, null_ptid)) return; - if (non_stop) + if (target_is_non_stop_p ()) { /* If in non-stop mode, only the current thread stopped. */ func (inferior_thread ()); @@ -3632,7 +3690,7 @@ fetch_inferior_event (void *client_data) /* If an error happens while handling the event, propagate GDB's knowledge of the executing state to the frontend/user running state. 
*/ - if (!non_stop) + if (!target_is_non_stop_p ()) ts_old_chain = make_cleanup (finish_thread_state_cleanup, &minus_one_ptid); else ts_old_chain = make_cleanup (finish_thread_state_cleanup, &ecs->ptid); @@ -3871,7 +3929,8 @@ adjust_pc_after_break (struct thread_info *thread, to get the "stopped by SW BP and needs adjustment" info out of the target/kernel (and thus never reach here; see above). */ if (software_breakpoint_inserted_here_p (aspace, breakpoint_pc) - || (non_stop && moribund_breakpoint_here_p (aspace, breakpoint_pc))) + || (target_is_non_stop_p () + && moribund_breakpoint_here_p (aspace, breakpoint_pc))) { struct cleanup *old_cleanups = make_cleanup (null_cleanup, NULL); @@ -4148,7 +4207,7 @@ stop_all_threads (void) ptid_t entry_ptid; struct cleanup *old_chain; - gdb_assert (non_stop); + gdb_assert (target_is_non_stop_p ()); if (debug_infrun) fprintf_unfiltered (gdb_stdlog, "infrun: stop_all_threads\n"); @@ -4460,7 +4519,7 @@ handle_inferior_event_1 (struct execution_control_state *ecs) { ptid_t mark_ptid; - if (!non_stop) + if (!target_is_non_stop_p ()) mark_ptid = minus_one_ptid; else if (ecs->ws.kind == TARGET_WAITKIND_SIGNALLED || ecs->ws.kind == TARGET_WAITKIND_EXITED) @@ -4774,7 +4833,8 @@ Cannot fill $_exitsignal with the correct signal number.\n")); child = ecs->ws.value.related_pid; /* In non-stop mode, also resume the other branch. */ - if (non_stop && !detach_fork) + if (!detach_fork && (non_stop + || (sched_multi && target_is_non_stop_p ()))) { if (follow_child) switch_to_thread (parent); @@ -5058,7 +5118,7 @@ finish_step_over (struct execution_control_state *ecs) clear_step_over_info (); } - if (!non_stop) + if (!target_is_non_stop_p ()) return 0; /* Start a new step-over in another thread if there's one that @@ -5638,15 +5698,17 @@ handle_signal_stop (struct execution_control_state *ecs) /* Reset trap_expected to ensure breakpoints are re-inserted. */ ecs->event_thread->control.trap_expected = 0; - if (non_stop) + if (target_is_non_stop_p ()) { + /* Either "set non-stop" is "on", or the target is + always in non-stop mode. In this case, we have a bit + more work to do. Resume the current thread, and if + we had paused all threads, restart them while the + signal handler runs. */ keep_going (ecs); - /* The step-over has been canceled temporarily while the - signal handler executes. */ if (was_in_line) { - /* We had paused all threads, restart them. */ restart_threads (ecs->event_thread); } else if (debug_infrun) @@ -6541,7 +6603,7 @@ process_event_stop_test (struct execution_control_state *ecs) static int switch_back_to_stepped_thread (struct execution_control_state *ecs) { - if (!non_stop) + if (!target_is_non_stop_p ()) { struct thread_info *tp; struct thread_info *stepping_thread; @@ -6632,7 +6694,8 @@ switch_back_to_stepped_thread (struct execution_control_state *ecs) ALL_NON_EXITED_THREADS (tp) { - /* Ignore threads of processes we're not resuming. */ + /* Ignore threads of processes the caller is not + resuming. */ if (!sched_multi && ptid_get_pid (tp->ptid) != ptid_get_pid (ecs->ptid)) continue; @@ -6778,7 +6841,7 @@ keep_going_stepped_thread (struct thread_info *tp) stop_pc); tp->resumed = 1; - resume_ptid = user_visible_resume_ptid (tp->control.stepping_command); + resume_ptid = internal_resume_ptid (tp->control.stepping_command); do_target_resume (resume_ptid, 0, GDB_SIGNAL_0); } else @@ -7199,6 +7262,11 @@ stop_waiting (struct execution_control_state *ecs) /* Let callers know we don't want to wait for the inferior anymore. 
*/ ecs->wait_some_more = 0; + + /* If all-stop, but the target is always in non-stop mode, stop all + threads now that we're presenting the stop to the user. */ + if (!non_stop && target_is_non_stop_p ()) + stop_all_threads (); } /* Like keep_going, but passes the signal to the inferior, even if the @@ -7313,7 +7381,7 @@ keep_going_pass_signal (struct execution_control_state *ecs) insert_breakpoints below, because that removes the breakpoint we're about to step over, otherwise other threads could miss it. */ - if (step_over_info_valid_p () && non_stop) + if (step_over_info_valid_p () && target_is_non_stop_p ()) stop_all_threads (); /* Stop stepping if inserting breakpoints fails. */ diff --git a/gdb/linux-nat.c b/gdb/linux-nat.c index 0aecfc89e5b..cc909e9e21c 100644 --- a/gdb/linux-nat.c +++ b/gdb/linux-nat.c @@ -1401,13 +1401,13 @@ get_pending_status (struct lwp_info *lp, int *status) signo = GDB_SIGNAL_0; /* a pending ptrace event, not a real signal. */ else if (lp->status) signo = gdb_signal_from_host (WSTOPSIG (lp->status)); - else if (non_stop && !is_executing (lp->ptid)) + else if (target_is_non_stop_p () && !is_executing (lp->ptid)) { struct thread_info *tp = find_thread_ptid (lp->ptid); signo = tp->suspend.stop_signal; } - else if (!non_stop) + else if (!target_is_non_stop_p ()) { struct target_waitstatus last; ptid_t last_ptid; @@ -2938,7 +2938,7 @@ select_event_lwp (ptid_t filter, struct lwp_info **orig_lp, int *status) having stepped the thread, wouldn't understand what the trap was for, and therefore would report it to the user as a random signal. */ - if (!non_stop) + if (!target_is_non_stop_p ()) { event_lp = iterate_over_lwps (filter, select_singlestep_lwp_callback, NULL); @@ -3288,7 +3288,7 @@ linux_nat_filter_event (int lwpid, int status) { enum gdb_signal signo = gdb_signal_from_host (WSTOPSIG (status)); - if (!non_stop) + if (!target_is_non_stop_p ()) { /* Only do the below in all-stop, as we currently use SIGSTOP to implement target_stop (see linux_nat_stop) in @@ -3554,7 +3554,7 @@ linux_nat_wait_1 (struct target_ops *ops, status = lp->status; lp->status = 0; - if (!non_stop) + if (!target_is_non_stop_p ()) { /* Now stop all other LWP's ... */ iterate_over_lwps (minus_one_ptid, stop_callback, NULL); @@ -3596,7 +3596,7 @@ linux_nat_wait_1 (struct target_ops *ops, clears it. */ last_resume_kind = lp->last_resume_kind; - if (!non_stop) + if (!target_is_non_stop_p ()) { /* In all-stop, from the core's perspective, all LWPs are now stopped until a new resume action is sent over. */ @@ -3748,7 +3748,7 @@ linux_nat_wait (struct target_ops *ops, specific_process, for example, see linux_nat_wait_1), and meanwhile the event became uninteresting. Don't bother resuming LWPs we're not going to wait for if they'd stop immediately. */ - if (non_stop) + if (target_is_non_stop_p ()) iterate_over_lwps (minus_one_ptid, resume_stopped_resumed_lwps, &ptid); event_ptid = linux_nat_wait_1 (ops, ptid, ourstatus, target_options); @@ -4589,6 +4589,14 @@ linux_nat_supports_non_stop (struct target_ops *self) return 1; } +/* to_always_non_stop_p implementation. */ + +static int +linux_nat_always_non_stop_p (struct target_ops *self) +{ + return 0; +} + /* True if we want to support multi-process. To be removed when GDB supports multi-exec. 
*/ @@ -4808,7 +4816,7 @@ linux_nat_stop_lwp (struct lwp_info *lwp, void *data) static void linux_nat_stop (struct target_ops *self, ptid_t ptid) { - if (non_stop) + if (target_is_non_stop_p ()) iterate_over_lwps (ptid, linux_nat_stop_lwp, NULL); else linux_ops->to_stop (linux_ops, ptid); @@ -5005,6 +5013,7 @@ linux_nat_add_target (struct target_ops *t) t->to_can_async_p = linux_nat_can_async_p; t->to_is_async_p = linux_nat_is_async_p; t->to_supports_non_stop = linux_nat_supports_non_stop; + t->to_always_non_stop_p = linux_nat_always_non_stop_p; t->to_async = linux_nat_async; t->to_terminal_inferior = linux_nat_terminal_inferior; t->to_terminal_ours = linux_nat_terminal_ours; diff --git a/gdb/target-delegates.c b/gdb/target-delegates.c index d2d794f6647..64b86c22698 100644 --- a/gdb/target-delegates.c +++ b/gdb/target-delegates.c @@ -1743,6 +1743,33 @@ debug_supports_non_stop (struct target_ops *self) return result; } +static int +delegate_always_non_stop_p (struct target_ops *self) +{ + self = self->beneath; + return self->to_always_non_stop_p (self); +} + +static int +tdefault_always_non_stop_p (struct target_ops *self) +{ + return 0; +} + +static int +debug_always_non_stop_p (struct target_ops *self) +{ + int result; + fprintf_unfiltered (gdb_stdlog, "-> %s->to_always_non_stop_p (...)\n", debug_target.to_shortname); + result = debug_target.to_always_non_stop_p (&debug_target); + fprintf_unfiltered (gdb_stdlog, "<- %s->to_always_non_stop_p (", debug_target.to_shortname); + target_debug_print_struct_target_ops_p (&debug_target); + fputs_unfiltered (") = ", gdb_stdlog); + target_debug_print_int (result); + fputs_unfiltered ("\n", gdb_stdlog); + return result; +} + static int delegate_find_memory_regions (struct target_ops *self, find_memory_region_ftype arg1, void *arg2) { @@ -4005,6 +4032,8 @@ install_delegators (struct target_ops *ops) ops->to_async = delegate_async; if (ops->to_supports_non_stop == NULL) ops->to_supports_non_stop = delegate_supports_non_stop; + if (ops->to_always_non_stop_p == NULL) + ops->to_always_non_stop_p = delegate_always_non_stop_p; if (ops->to_find_memory_regions == NULL) ops->to_find_memory_regions = delegate_find_memory_regions; if (ops->to_make_corefile_notes == NULL) @@ -4232,6 +4261,7 @@ install_dummy_methods (struct target_ops *ops) ops->to_is_async_p = tdefault_is_async_p; ops->to_async = tdefault_async; ops->to_supports_non_stop = tdefault_supports_non_stop; + ops->to_always_non_stop_p = tdefault_always_non_stop_p; ops->to_find_memory_regions = dummy_find_memory_regions; ops->to_make_corefile_notes = dummy_make_corefile_notes; ops->to_get_bookmark = tdefault_get_bookmark; @@ -4380,6 +4410,7 @@ init_debug_target (struct target_ops *ops) ops->to_is_async_p = debug_is_async_p; ops->to_async = debug_async; ops->to_supports_non_stop = debug_supports_non_stop; + ops->to_always_non_stop_p = debug_always_non_stop_p; ops->to_find_memory_regions = debug_find_memory_regions; ops->to_make_corefile_notes = debug_make_corefile_notes; ops->to_get_bookmark = debug_get_bookmark; diff --git a/gdb/target.c b/gdb/target.c index 4b7d5187c63..3f49079bf10 100644 --- a/gdb/target.c +++ b/gdb/target.c @@ -3781,6 +3781,67 @@ maint_show_target_async_command (struct ui_file *file, int from_tty, "asynchronous mode is %s.\n"), value); } +/* Return true if the target operates in non-stop mode even with "set + non-stop off". */ + +static int +target_always_non_stop_p (void) +{ + return current_target.to_always_non_stop_p (¤t_target); +} + +/* See target.h. 
*/ + +int +target_is_non_stop_p (void) +{ + return (non_stop + || target_non_stop_enabled == AUTO_BOOLEAN_TRUE + || (target_non_stop_enabled == AUTO_BOOLEAN_AUTO + && target_always_non_stop_p ())); +} + +/* Controls if targets can report that they always run in non-stop + mode. This is just for maintainers to use when debugging gdb. */ +enum auto_boolean target_non_stop_enabled = AUTO_BOOLEAN_AUTO; + +/* The set command writes to this variable. If the inferior is + executing, target_non_stop_enabled is *not* updated. */ +static enum auto_boolean target_non_stop_enabled_1 = AUTO_BOOLEAN_AUTO; + +/* Implementation of "maint set target-non-stop". */ + +static void +maint_set_target_non_stop_command (char *args, int from_tty, + struct cmd_list_element *c) +{ + if (have_live_inferiors ()) + { + target_non_stop_enabled_1 = target_non_stop_enabled; + error (_("Cannot change this setting while the inferior is running.")); + } + + target_non_stop_enabled = target_non_stop_enabled_1; +} + +/* Implementation of "maint show target-non-stop". */ + +static void +maint_show_target_non_stop_command (struct ui_file *file, int from_tty, + struct cmd_list_element *c, + const char *value) +{ + if (target_non_stop_enabled == AUTO_BOOLEAN_AUTO) + fprintf_filtered (file, + _("Whether the target is always in non-stop mode " + "is %s (currently %s).\n"), value, + target_always_non_stop_p () ? "on" : "off"); + else + fprintf_filtered (file, + _("Whether the target is always in non-stop mode " + "is %s.\n"), value); +} + /* Temporary copies of permission settings. */ static int may_write_registers_1 = 1; @@ -3883,6 +3944,16 @@ Tells gdb whether to control the inferior in asynchronous mode."), &maintenance_set_cmdlist, &maintenance_show_cmdlist); + add_setshow_auto_boolean_cmd ("target-non-stop", no_class, + &target_non_stop_enabled_1, _("\ +Set whether gdb always controls the inferior in non-stop mode."), _("\ +Show whether gdb always controls the inferior in non-stop mode."), _("\ +Tells gdb whether to control the inferior in non-stop mode."), + maint_set_target_non_stop_command, + maint_show_target_non_stop_command, + &maintenance_set_cmdlist, + &maintenance_show_cmdlist); + add_setshow_boolean_cmd ("may-write-registers", class_support, &may_write_registers_1, _("\ Set permission to write into registers."), _("\ diff --git a/gdb/target.h b/gdb/target.h index ac21c43bc91..741f8585036 100644 --- a/gdb/target.h +++ b/gdb/target.h @@ -669,6 +669,10 @@ struct target_ops comment on 'to_can_run'. */ int (*to_supports_non_stop) (struct target_ops *) TARGET_DEFAULT_RETURN (0); + /* Return true if the target operates in non-stop mode even with + "set non-stop off". */ + int (*to_always_non_stop_p) (struct target_ops *) + TARGET_DEFAULT_RETURN (0); /* find_memory_regions support method for gcore */ int (*to_find_memory_regions) (struct target_ops *, find_memory_region_ftype func, void *data) @@ -1744,6 +1748,15 @@ extern int target_async_permitted; /* Enables/disabled async target events. */ extern void target_async (int enable); +/* Whether support for controlling the target backends always in + non-stop mode is enabled. */ +extern enum auto_boolean target_non_stop_enabled; + +/* Is the target in non-stop mode? Some targets control the inferior + in non-stop mode even with "set non-stop off". Always true if "set + non-stop" is on. */ +extern int target_is_non_stop_p (void); + #define target_execution_direction() \ (current_target.to_execution_direction (¤t_target)) -- 2.30.2
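
Note on eventually flipping the default: the patch above only wires up
linux-nat.c, with the new method still returning false.  Below is a
minimal sketch of what an opted-in backend would look like, modeled on
the linux-nat.c hunks in this patch.  The "example_nat" target and
function names are hypothetical; only the to_supports_non_stop and
to_always_non_stop_p methods come from the patch.

/* Illustrative sketch only -- not part of the patch.  The
   "example_nat" names are made up for illustration.  */

#include "defs.h"
#include "target.h"

/* to_supports_non_stop implementation.  */

static int
example_nat_supports_non_stop (struct target_ops *self)
{
  return 1;
}

/* to_always_non_stop_p implementation.  Returning 1 here makes
   target_is_non_stop_p () true under the default "maint set
   target-non-stop auto", so infrun controls threads individually
   even with "set non-stop off".  */

static int
example_nat_always_non_stop_p (struct target_ops *self)
{
  return 1;
}

/* Install the methods in the target vector, next to the existing
   non-stop support hook.  */

void
example_nat_add_target (struct target_ops *t)
{
  t->to_supports_non_stop = example_nat_supports_non_stop;
  t->to_always_non_stop_p = example_nat_always_non_stop_p;
}

With the method returning 1, target_is_non_stop_p above evaluates to
true under the default "auto" setting, while "maint set
target-non-stop off" still forces the old behavior unless "set
non-stop on" is in effect.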