mesa: Use GLdouble for depthMax in final unpack conversions.
The final step of _mesa_unpack_depth_span is to take the temporary
GLfloat depth values and convert them to the desired format.  When
converting to GL_UNSIGNED_INT with depthMax > 0xffffff, we use
double-precision math to avoid overflow and precision problems.
Or at least, that's the idea.  Unfortunately,

    GLdouble z = depthValues[i] * (GLfloat) depthMax;

actually performs a single-precision multiplication: both operands are
GLfloats, so the product is computed as a GLfloat and only widened to
GLdouble afterward, after precision has already been lost.  Casting
depthMax to GLdouble instead makes the scaling happen with
double-precision math.
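
For illustration only (not part of this change): a minimal standalone
sketch of the two multiplies.  The depth and depthMax values here are
made up, and GLfloat/GLdouble are assumed to be plain float/double.

    #include <stdio.h>

    typedef float        GLfloat;
    typedef double       GLdouble;
    typedef unsigned int GLuint;

    int main(void)
    {
        /* Hypothetical values; depthMax > 0xffffff selects the
         * double-precision path described above. */
        const GLuint  depthMax = 0xffffffffu;
        const GLfloat depth    = 0.81114f;

        /* Buggy: float * float -- the product is rounded to GLfloat's
         * 24-bit significand before being widened to GLdouble. */
        GLdouble zBad = depth * (GLfloat) depthMax;

        /* Fixed: casting depthMax to GLdouble promotes the whole
         * multiplication to double precision. */
        GLdouble zGood = depth * (GLdouble) depthMax;

        printf("single-precision scale: 0x%08x\n", (GLuint) zBad);
        printf("double-precision scale: 0x%08x\n", (GLuint) zGood);
        return 0;
    }

This should print two values that differ in their low bits, analogous
to the 0xcfa7a6 vs. 0xcfa7a4 discrepancy noted below.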
Fixes a regression in oglconform's depth-stencil basic.read.ds test
since commit c60ac7b17993d28af65b04f9bbbf3ee74c35358c, where the
expected and actual values differed slightly; for example,
0xcfa7a6 vs. 0xcfa7a4.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=49772
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>