gdb: Don't corrupt completions hash when expanding the hash table
Commit:

  commit 724fd9ba432a20ef2e3f2c0d6060bff131226816
  Date:   Mon Jan 27 17:37:20 2020 +0000

      gdb: Restructure the completion_tracker class

caused the completion hash table to become corrupted if the table ever
needed to grow beyond its original size of 200 elements.
The hash table stores completion_tracker::completion_hash_entry
objects, but hashes them based on their name, which is only one field
of the object.
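As a rough sketch of the element type (identifiers here are
illustrative, not the exact GDB declarations), only the name field
should ever drive hashing and equality:

    /* Sketch only: the objects stored in the hash table.  Only m_name
       is the key; the rest of the object is other per-completion data,
       which is why the entry pointer itself must never be hashed as if
       it were a string.  */
    class completion_hash_entry
    {
    public:
      explicit completion_hash_entry (std::string name)
        : m_name (std::move (name))
      {}

      const char *name () const
      { return m_name.c_str (); }

    private:
      /* The completion's name, the hash key.  */
      std::string m_name;

      /* Other per-completion fields omitted in this sketch.  */
    };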
When possibly inserting a new element we compute the hash with
htab_hash_string on the new element's name, and then look up matching
elements using htab_find_slot_with_hash.  If there's no matching
element we create a new completion_hash_entry object within the hash
table.
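A sketch of that insert path, assuming 'entries_hash' is the htab_t
and 'name' is the candidate completion's name (both identifiers are
illustrative):

    /* Hash the candidate's name and find (or reserve) its slot.  */
    hashval_t hash = htab_hash_string (name);
    void **slot = htab_find_slot_with_hash (entries_hash, name,
                                            hash, INSERT);
    if (*slot == NULL)
      {
        /* No entry with this name yet: create one in the table.  The
           equality callback therefore compares an existing entry's
           name against a plain name string.  */
        *slot = new completion_hash_entry (name);
      }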
However, when we allocate the hash table we pass htab_hash_string to
htab_create_alloc as the hash function, and this is not correct.  It
means that when the hash table needs to grow, the existing elements
are re-hashed by passing each completion_hash_entry pointer directly
to htab_hash_string, which obviously does not compute the hash we
expect.
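The problem is visible at the table's creation; a simplified sketch,
where the equality and delete callbacks are placeholders:

    /* htab_hash_string has type hashval_t (const void *), so it is
       accepted as the hash callback.  But when the table grows,
       libiberty re-hashes every stored element by calling
       hash_f (entry), i.e. it reads the completion_hash_entry's raw
       bytes as if they were a string.  */
    htab_t entries_hash
      = htab_create_alloc (200,               /* Initial size.  */
                           htab_hash_string,  /* Wrong hash callback.  */
                           entry_name_eq,     /* Placeholder eq callback.  */
                           entry_destroy,     /* Placeholder del callback.  */
                           xcalloc, xfree);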
The solution is to create a new hash function that takes a pointer to
a completion_hash_entry, and then calls htab_hash_string on the name
of the entry only.
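A sketch of the shape of the fix (the new names follow the ChangeLog
below; the surrounding details are illustrative):

    /* New member function: hash an entry by its name only.  */
    hashval_t
    completion_tracker::completion_hash_entry::hash_name () const
    {
      return htab_hash_string (m_name.c_str ());
    }

    /* In completion_tracker::discard_completions, create the table
       with a callback that expects an entry, not a string.  */
    auto entry_hash_func = [] (const void *p) -> hashval_t
      {
        const completion_hash_entry *entry
          = (const completion_hash_entry *) p;
        return entry->hash_name ();
      };

    entries_hash = htab_create_alloc (200, entry_hash_func,
                                      entry_name_eq, entry_destroy,
                                      xcalloc, xfree);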
This regression was spotted when running the gdb.base/completion.exp
test on the aarch64 target.
gdb/ChangeLog:

	* completer.c (class completion_tracker::completion_hash_entry)
	<hash_name>: New member function.
	(completion_tracker::discard_completions): New callback to hash a
	completion_hash_entry, pass this to htab_create_alloc.

gdb/testsuite/ChangeLog:

	* gdb.base/many-completions.exp: New file.