It is legal to have a texture view of a single layer from a 2D array texture;
you can sample from it, or render to it. Intel hardware needs to be made aware,
via the surface state, when it is using a 2D array surface. The texture view is
just a 2D surface, but the backing miptree is actually a 2D array surface.
Because of this, the previous code would not set the right bit in the surface
state, since the view's target was not considered an array texture.
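For reference, the triggering case looks roughly like the following (a minimal
sketch; the sizes, format, and layer index are illustrative, not taken from the
bug report):

    /* Create an immutable 2D array texture; texture views require
     * immutable storage (glTexStorage*). */
    GLuint tex_array, view;
    glGenTextures(1, &tex_array);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex_array);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 64, 64, 4);

    /* View a single layer of the array as a plain 2D texture. The view's
     * target is GL_TEXTURE_2D, but the backing storage (and thus the
     * miptree) is still a 2D array surface. */
    glGenTextures(1, &view);
    glTextureView(view, GL_TEXTURE_2D, tex_array, GL_RGBA8,
                  0, 1,   /* minlevel, numlevels */
                  2, 1);  /* minlayer, numlayers */

    /* Sampling from (or rendering to) the view hits the path where the
     * surface state must still be marked as an array. */
    glBindTexture(GL_TEXTURE_2D, view);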
I spotted this early on while debugging but brushed it off because it is clearly not
needed on other platforms (since they all pass). I have no idea how this works
properly on other platforms (I think gen7 introduced the bit in the state, but I
am too lazy to check). As such, I have opted not to modify gen7, though I
believe the current code is wrong there as well.
Thanks to Chris for helping me debug this.
v2: Just use the underlying mt's target type to make the array determination.
This fixes a bug in the first patch, which incorrectly relied only on
non-zero depth (not sure how that produced no failures). (Ilia)
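In other words, the determination should come from the miptree rather than
from the view's GL target; roughly (a simplified sketch, not the actual driver
code; the helper name here is hypothetical):

    static bool
    surface_is_array(const struct intel_mipmap_tree *mt)
    {
       /* The view's own target may be GL_TEXTURE_2D, but the storage is
        * shared with the original texture, so mt->target reflects the
        * real surface layout. */
       return _mesa_is_array_texture(mt->target) ||
              mt->target == GL_TEXTURE_CUBE_MAP;
    }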
Cc: Chris Forbes <chrisf@ijw.co.nz>
Reported-by: Mark Janes <mark.a.janes@intel.com> (Jenkins)
References: https://www.opengl.org/registry/specs/ARB/texture_view.txt
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92609
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
         format == BRW_SURFACEFORMAT_BC7_UNORM))
       surf[0] |= GEN8_SURFACE_SAMPLER_L2_BYPASS_DISABLE;

-   if (_mesa_is_array_texture(target) || target == GL_TEXTURE_CUBE_MAP)
+   if (_mesa_is_array_texture(mt->target) || mt->target == GL_TEXTURE_CUBE_MAP)
       surf[0] |= GEN8_SURFACE_IS_ARRAY;

    surf[1] = SET_FIELD(mocs_wb, GEN8_SURFACE_MOCS) | mt->qpitch >> 2;

       /* fallthrough */
    default:
       surf_type = translate_tex_target(gl_target);
-      is_array = _mesa_is_array_texture(gl_target);
+      is_array = _mesa_is_array_texture(mt->target);
       break;
    }