Testing shows that the 32:16 setting for jump alignment has a significant
codesize cost but makes no measurable difference in performance.
So set jump_align to 4, giving a 1.6% codesize improvement.
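For context, GCC alignment strings use the form N:M: align to an N-byte
boundary, but skip the alignment if it would insert more than M bytes of
padding. The tuning fields changed here correspond to the target defaults
for the -falign-* options, so this change is roughly equivalent to
compiling for Neoverse N1 with:

    -falign-functions=32:16 -falign-jumps=4 -falign-loops=32:16

(This flag mapping is given for illustration; the actual defaults come
from the neoversen1_tunings table below.)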
gcc/
* config/aarch64/aarch64.c (neoversen1_tunings): Set jump_align to 4.
+2020-01-20 Wilco Dijkstra <wdijkstr@arm.com>
+
+ * config/aarch64/aarch64.c (neoversen1_tunings): Set jump_align to 4.
+
2020-01-20 Andrew Pinski <apinski@marvell.com>
PR middle-end/93242
3, /* issue_rate */
(AARCH64_FUSE_AES_AESMC | AARCH64_FUSE_CMP_BRANCH), /* fusible_ops */
"32:16", /* function_align. */
- "32:16", /* jump_align. */
+ "4", /* jump_align. */
"32:16", /* loop_align. */
2, /* int_reassoc_width. */
4, /* fp_reassoc_width. */