gallium/util: implement layered framebuffer clear in u_blitter
author	Marek Olšák <marek.olsak@amd.com>	Thu, 21 Nov 2013 14:41:36 +0000 (15:41 +0100)
committer	Marek Olšák <marek.olsak@amd.com>	Tue, 3 Dec 2013 18:39:13 +0000 (19:39 +0100)
commit	6b919b1b2d296f7d7410c2291b7e0332d7bef1a0
tree	38ad55fa40a9954919f97d41c11d8e6a4d0807bc
parent	1a02bb71ddbf7312a84ac1693f562cca191a7d42
gallium/util: implement layered framebuffer clear in u_blitter

All bound layers (from first_layer to last_layer) should be cleared.

This uses a vertex shader which outputs gl_Layer = gl_InstanceID, so each
instance goes to a different layer. Rendering a quad with the instance count
set to the number of layers therefore trivially clears all layers.

This requires AMD_vertex_shader_layer (or PIPE_CAP_TGSI_VS_LAYER), which only
radeonsi supports at the moment. r600 could support it too. Standard DX11
hardware will have to use a geometry shader instead, which has higher overhead.
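For illustration, the technique corresponds roughly to the following GLSL
vertex shader (a sketch only; u_blitter actually builds the equivalent shader
in TGSI via u_simple_shaders, not from GLSL source):

```glsl
#version 330
#extension GL_AMD_vertex_shader_layer : require

layout(location = 0) in vec4 position;

void main()
{
    /* Pass the quad vertex through unchanged. */
    gl_Position = position;

    /* Instance N writes its quad into layer N of the layered
     * framebuffer, so one instanced draw covers every layer. */
    gl_Layer = gl_InstanceID;
}
```

With this shader bound, a single instanced draw of the quad with
instanceCount = last_layer - first_layer + 1 clears all bound layers.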
src/gallium/auxiliary/util/u_blitter.c
src/gallium/auxiliary/util/u_blitter.h
src/gallium/auxiliary/util/u_framebuffer.c
src/gallium/auxiliary/util/u_framebuffer.h
src/gallium/auxiliary/util/u_simple_shaders.c
src/gallium/auxiliary/util/u_simple_shaders.h
src/gallium/drivers/ilo/ilo_blitter_pipe.c
src/gallium/drivers/r300/r300_blit.c
src/gallium/drivers/r600/r600_blit.c
src/gallium/drivers/radeonsi/r600_blit.c