mesa/vbo: Fix scaling issue in 2-bit signed normalized packing.
Since a signed 2-bit integer can only represent -1, 0, or 1, it is
tempting to simply convert it directly to a float. This maps it
onto the correct range of [-1.0, 1.0]. However, it gives different
values compared to the usual equation:
(2.0 *  1.0 + 1.0) * (1.0 / 3.0) = +1.0           (same)
(2.0 *  0.0 + 1.0) * (1.0 / 3.0) = +0.33333333... (different)
(2.0 * -1.0 + 1.0) * (1.0 / 3.0) = -0.33333333... (different)
According to the GL_ARB_vertex_type_2_10_10_10_rev extension, signed
normalization is performed using equation 2.2 from the GL 3.2
specification, which is:
f = (2c + 1)/(2^b - 1). (2.2)
Comments below that equation state: "In general, this representation is
used for signed normalized fixed-point parameters in GL commands, such
as vertex attribute values." Which is what we're doing here.
The 3.2 specification goes on to declare an alternate formula:
f = max{c/(2^(b-1) - 1), -1.0} (2.3)
which is closer to the existing code, and maps the end points to exactly
-1.0 and 1.0. (For b = 2, the divisor 2^(b-1) - 1 is 1, so equation 2.3
reduces to f = c, i.e. the direct conversion above.) Comments below the
equation state: "In general, this representation is used for signed
normalized fixed-point texture or framebuffer values." Which is *not*
what we're doing here.
It then states: "Everywhere that signed normalized fixed-point
values are converted, the equation used is specified." This is the real
clincher: the extension explicitly specifies that we must use equation
2.2, not 2.3. So we need to compute (2c + 1) / 3.
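As a rough sketch of what equation 2.2 means for the 2-bit W component
(the helper name and bit-extraction details are illustrative assumptions,
not the actual vbo code):

    #include <stdint.h>

    /* Illustrative sketch only: convert the 2-bit W component (bits 31:30)
     * of a packed 2_10_10_10_rev value to a signed normalized float using
     * GL 3.2 equation 2.2, f = (2c + 1) / (2^b - 1), with b = 2.
     */
    static float
    snorm_2bit_to_float(uint32_t packed)
    {
       /* Right-shifting the signed value sign-extends the 2-bit field
        * (assuming an arithmetic shift), yielding c in [-2, 1].
        */
       int c = (int32_t) packed >> 30;

       /* Equation 2.2: (2c + 1) / (2^2 - 1).  Note that c = -2 also lands
        * exactly on -1.0, so no explicit clamp is needed.
        */
       return (2.0f * c + 1.0f) / 3.0f;
    }

The 10-bit components follow the same equation with b = 10, i.e.
(2c + 1) / 1023.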
This matches the behavior expected by oglconform's packed-vertex test,
and is correct for desktop GL (pre-4.2). It's not correct for ES 3.0,
but a future patch will correct that.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Marek Olšák <maraeo@gmail.com>