+2015-11-12 Uros Bizjak <ubizjak@gmail.com>
+
+ * config/i386/i386.c (ix86_legitimate_combined_insn): Reject
+ combined insn if the alignment of vector mode memory operand
+ is less than ssememalign.
+
2015-11-12 Tom de Vries <tom@codesourcery.com>
- * gen-pass-instances.awk (handle_line): Print parentheses and pass_name
- explicitly.
+ * gen-pass-instances.awk (handle_line): Print parentheses and
+ pass_name explicitly.
2015-11-12 Tom de Vries <tom@codesourcery.com>
- * gen-pass-instances.awk (handle_line): Add pass_num, prefix and postfix
- vars.
+ * gen-pass-instances.awk (handle_line): Add pass_num, prefix
+ and postfix vars.
2015-11-12 Tom de Vries <tom@codesourcery.com>
 Move "Convert C1/(X*C2) into (C1/C2)/X" to match.pd.
 Move "Optimize (X & (-A)) / A where A is a power of 2, to
 X >> log2(A)" to match.pd.
-
+
* match.pd (rdiv (rdiv:s @0 @1) @2): New simplifier.
(rdiv @0 (rdiv:s @1 @2)): New simplifier.
(div (convert? (bit_and @0 INTEGER_CST@1)) INTEGER_CST@2):
/* For pre-AVX disallow unaligned loads/stores where the
instructions don't support it. */
if (!TARGET_AVX
- && VECTOR_MODE_P (GET_MODE (op))
- && misaligned_operand (op, GET_MODE (op)))
+ && VECTOR_MODE_P (mode)
+ && misaligned_operand (op, mode))
{
- int min_align = get_attr_ssememalign (insn);
- if (min_align == 0)
+ unsigned int min_align = get_attr_ssememalign (insn);
+ if (min_align == 0
+ || MEM_ALIGN (op) < min_align)
return false;
}
+2015-11-12 Uros Bizjak <ubizjak@gmail.com>
+
+ * gcc.target/i386/sse-1.c (swizzle): Assume that a is
+ aligned to 64 bits.
+
2015-11-11 David Edelsohn <dje.gcc@gmail.com>
* gcc.dg/pr65521.c: Fail on AIX.
void
swizzle (const void *a, vector4_t * b, vector4_t * c)
{
- b->v = _mm_loadl_pi (b->v, (__m64 *) a);
- c->v = _mm_loadl_pi (c->v, ((__m64 *) a) + 1);
+ __m64 *t = __builtin_assume_aligned (a, 64);
+
+ b->v = _mm_loadl_pi (b->v, t);
+ c->v = _mm_loadl_pi (c->v, t + 1);
}
/* While one legal rendering of each statement would be movaps;movlps;movaps,