This is only a very brief overview. There is quite a bit of
additional documentation in the source code itself.


Goals of robust tessellation
----------------------------

The tessellation algorithm is fundamentally a 2D algorithm. We
initially project all data into a plane; our goal is to robustly
tessellate the projected data. The same topological tessellation is
then applied to the input data.

Topologically, the output should always be a tessellation. If the
input is even slightly non-planar, then some triangles will
necessarily be back-facing when viewed from some angles, but the goal
is to minimize this effect.

The algorithm needs some capability of cleaning up the input data as
well as the numerical errors in its own calculations. One way to do
this is to specify a tolerance, and clean up the input and output
during the line sweep process. At the very least, the algorithm must
handle coincident vertices, vertices incident to an edge, and
coincident edges.


Phases of the algorithm
-----------------------

1. Find the polygon normal N.
2. Project the vertex data onto a plane. It does not need to be
   perpendicular to the normal, eg. we can project onto the plane
   perpendicular to the coordinate axis whose dot product with N
   is largest.
3. Using a line-sweep algorithm, partition the plane into x-monotone
   regions. Any vertical line intersects an x-monotone region in
   at most one interval.
4. Triangulate the x-monotone regions.
5. Group the triangles into strips and fans.


Finding the normal vector
-------------------------

A common way to find a polygon normal is to compute the signed area
when the polygon is projected along the three coordinate axes. We
can't do this, since contours can have zero area without being
degenerate (eg. a bowtie).

We fit a plane to the vertex data, ignoring how they are connected
into contours. Ideally this would be a least-squares fit; however,
for our purpose the accuracy of the normal is not important. Instead
we find three vertices which are widely separated, and compute the
normal to the triangle they form. The vertices are chosen so that the
triangle has an area at least 1/sqrt(3) times the largest area of any
triangle formed using the input vertices.

The contours do affect the orientation of the normal; after computing
the normal, we check that the sum of the signed contour areas is
non-negative, and reverse the normal if necessary.
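
As a rough illustration, here is a minimal, self-contained sketch of
this idea (not the actual code in normal.c; vertex storage and
function names are invented for the example): take the two vertices
farthest apart along the coordinate axis of greatest extent, then the
third vertex forming the largest triangle with them, and compute the
cross product of the two edge vectors.

    static void Cross( const double u[3], const double v[3], double n[3] )
    {
       n[0] = u[1]*v[2] - u[2]*v[1];
       n[1] = u[2]*v[0] - u[0]*v[2];
       n[2] = u[0]*v[1] - u[1]*v[0];
    }

    /* Fit an (unnormalized) normal to nv vertices, each a double[3]. */
    static void FitNormal( const double (*v)[3], int nv, double n[3] )
    {
       int minV[3] = {0,0,0}, maxV[3] = {0,0,0};
       int i, c, axis = 0;
       double d1[3], d2[3], tmp[3], best = -1.0;

       /* Extreme vertices along each coordinate axis. */
       for( i = 1; i < nv; i++ ) {
          for( c = 0; c < 3; c++ ) {
             if( v[i][c] < v[minV[c]][c] ) minV[c] = i;
             if( v[i][c] > v[maxV[c]][c] ) maxV[c] = i;
          }
       }

       /* Base of the triangle: the axis with the greatest extent. */
       for( c = 1; c < 3; c++ ) {
          if( v[maxV[c]][c] - v[minV[c]][c]
              > v[maxV[axis]][axis] - v[minV[axis]][axis] ) axis = c;
       }
       for( c = 0; c < 3; c++ ) d1[c] = v[maxV[axis]][c] - v[minV[axis]][c];

       /* Third vertex: the one forming the largest triangle with the base. */
       n[0] = n[1] = n[2] = 0.0;
       for( i = 0; i < nv; i++ ) {
          for( c = 0; c < 3; c++ ) d2[c] = v[i][c] - v[minV[axis]][c];
          Cross( d1, d2, tmp );
          {
             double len2 = tmp[0]*tmp[0] + tmp[1]*tmp[1] + tmp[2]*tmp[2];
             if( len2 > best ) {
                best = len2;
                n[0] = tmp[0]; n[1] = tmp[1]; n[2] = tmp[2];
             }
          }
       }
    }

The sign check described above (summing the signed contour areas) is
omitted here for brevity.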


Projecting the vertices
-----------------------

We project the vertices onto a plane perpendicular to one of the three
coordinate axes. This helps numerical accuracy by removing a
transformation step between the original input data and the data
processed by the algorithm. The projection also compresses the input
data; the 2D distance between vertices after projection may be smaller
than the original 3D distance. However, by choosing the coordinate
axis whose dot product with the normal is greatest, the compression
factor is at most 1/sqrt(3).
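
For illustration, a hypothetical sketch of this projection (the name
and storage are invented for the example): find the component of the
normal with the largest magnitude and simply drop that coordinate.

    #include <math.h>

    /* Project a 3D vertex onto the plane perpendicular to the
       coordinate axis most nearly parallel to the normal n. */
    static void ProjectVertex( const double n[3], const double v[3],
                               double out[2] )
    {
       int axis = 0;
       if( fabs(n[1]) > fabs(n[axis]) ) axis = 1;
       if( fabs(n[2]) > fabs(n[axis]) ) axis = 2;
       out[0] = v[(axis + 1) % 3];
       out[1] = v[(axis + 2) % 3];
    }

Since some component of the normal must have magnitude at least
|N|/sqrt(3), choosing the largest one gives the compression bound
mentioned above.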

Even though the *accuracy* of the normal is not that important (since
we are projecting perpendicular to a coordinate axis anyway), the
*robustness* of the computation is important. For example, if there
are many vertices which lie almost along a line, and one vertex V
which is well-separated from the line, then our normal computation
should involve V; otherwise the results will be garbage.

The advantage of projecting perpendicular to the polygon normal is
that computed intersection points will be as close as possible to
their ideal locations. To get this behavior, define TRUE_PROJECT.


The Line Sweep
--------------

There are three data structures: the mesh, the event queue, and the
edge dictionary.

The mesh is a "quad-edge" data structure which records the topology of
the current decomposition; for details see the include file "mesh.h".

The event queue simply holds all vertices (both original and computed
ones), organized so that we can quickly extract the vertex with the
minimum x-coord (and among those, the one with the minimum y-coord).
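
The ordering is just a lexicographic comparison on the projected
coordinates. As a sketch (libtess calls the projected coordinates s
and t; the struct here is simplified):

    typedef struct { double s, t; } Vertex2;

    /* Nonzero if v1 crosses the sweep line no later than v2. */
    static int VertLeq( const Vertex2 *v1, const Vertex2 *v2 )
    {
       return (v1->s < v2->s)
           || (v1->s == v2->s && v1->t <= v2->t);
    }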

The edge dictionary describes the current intersection of the sweep
line with the regions of the polygon. This is just an ordering of the
edges which intersect the sweep line, sorted by their current order of
intersection. For each pair of adjacent edges, we store some
information about the monotone region between them -- these are called
"active regions" (since they are crossed by the current sweep line).
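
A simplified picture of a dictionary entry (illustrative only; the
real ActiveRegion structure in sweep.h carries several more flags):

    /* Each active region is keyed by its upper edge; the dictionary
       orders the regions crossing the sweep line bottom to top. */
    typedef struct ActiveRegion {
       struct GLUhalfEdge *eUp;   /* upper edge of this region   */
       struct DictNode *nodeUp;   /* dictionary node for eUp     */
       int windingNumber;         /* winding of this region      */
       int inside;                /* interior to the polygon?    */
    } ActiveRegion;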

The basic algorithm is to sweep from left to right, processing each
vertex. The processed portion of the mesh (left of the sweep line) is
a planar decomposition. As we cross each vertex, we update the mesh
and the edge dictionary, then we check any newly adjacent pairs of
edges to see if they intersect.

A vertex can have any number of edges. Vertices with many edges can
be created as vertices are merged and intersection points are
computed. For unprocessed vertices (right of the sweep line), these
edges are in no particular order around the vertex; for processed
vertices, the topological ordering should match the geometric ordering.

The vertex processing happens in two phases: first we process all the
left-going edges (all these edges are currently in the edge
dictionary). This involves:

- deleting the left-going edges from the dictionary;
- relinking the mesh if necessary, so that the order of these edges around
  the event vertex matches the order in the dictionary;
- marking any terminated regions (regions which lie between two left-going
  edges) as either "inside" or "outside" according to their winding number.

When there are no left-going edges, and the event vertex is in an
"interior" region, we need to add an edge (to split the region into
monotone pieces). To do this we simply join the event vertex to the
rightmost left endpoint of the upper or lower edge of the containing
region.

Then we process the right-going edges. This involves:

- inserting the edges in the edge dictionary;
- computing the winding number of any newly created active regions.
  We can compute this incrementally using the winding of each edge
  that we cross as we walk through the dictionary (see the sketch
  below);
- relinking the mesh if necessary, so that the order of these edges around
  the event vertex matches the order in the dictionary;
- checking any newly adjacent edges for intersection and/or merging.

If there are no right-going edges, again we need to add one to split
the containing region into monotone pieces. In our case it is most
convenient to add an edge to the leftmost right endpoint of either
containing edge; however we may need to change this later (see the
code for details).
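
To illustrate the incremental winding computation mentioned above,
here is a self-contained toy example (the edge windings are made up):
walking upward through the edges that cross the sweep line, each
region's winding number is the winding number of the region below it
plus the signed winding of the edge between them.

    #include <stdio.h>

    int main( void )
    {
       /* Signed windings of the edges crossing the sweep line,
          bottom to top (+1 = contour crosses one way, -1 = the other). */
       int edgeWinding[] = { +1, +1, -1, -1 };
       int n = 4, w = 0, i;   /* the outermost region has winding 0 */

       for( i = 0; i < n; i++ ) {
          w += edgeWinding[i];   /* winding of the region above edge i */
          printf( "region above edge %d: winding %d (%s)\n", i, w,
                  w != 0 ? "inside" : "outside" );
       }
       return 0;
    }

The "inside" test here uses the nonzero winding rule; the library
supports several winding rules, applied at this same step.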


Invariants
----------

These are the most important invariants maintained during the sweep.
We define a function VertLeq(v1,v2) which defines the order in which
vertices cross the sweep line, and a function EdgeLeq(e1,e2; loc)
which says whether e1 is below e2 at the sweep event location "loc".
This function is defined only at sweep event locations which lie
between the rightmost left endpoint of {e1,e2}, and the leftmost right
endpoint of {e1,e2}.
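
A much-simplified sketch of such an EdgeLeq (the real test in sweep.c
is arranged far more carefully for numerical robustness; the types
here are invented for the example):

    typedef struct { double s, t; } Vertex2;
    typedef struct { Vertex2 *Org, *Dst; } Edge2;

    /* Height of edge e at sweep coordinate s; valid only when the
       edge spans s, as the dictionary invariants guarantee. */
    static double EdgeHeightAt( const Edge2 *e, double s )
    {
       double ds = e->Dst->s - e->Org->s;
       if( ds == 0.0 ) return e->Org->t;   /* degenerate: vertical edge */
       return e->Org->t
            + (e->Dst->t - e->Org->t) * ((s - e->Org->s) / ds);
    }

    static int EdgeLeq( const Edge2 *e1, const Edge2 *e2,
                        const Vertex2 *loc )
    {
       return EdgeHeightAt( e1, loc->s ) <= EdgeHeightAt( e2, loc->s );
    }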

Invariants for the Edge Dictionary.

- Each pair of adjacent edges e2=Succ(e1) satisfies EdgeLeq(e1,e2)
  at any valid location of the sweep event.
- If EdgeLeq(e2,e1) as well (at any valid sweep event), then e1 and e2
  share a common endpoint.
- For each e in the dictionary, e->Dst has been processed but not e->Org.
- Each edge e satisfies VertLeq(e->Dst,event) && VertLeq(event,e->Org)
  where "event" is the current sweep line event.
- No edge e has zero length.
- No two edges have identical left and right endpoints.

Invariants for the Mesh (the processed portion).

- The portion of the mesh left of the sweep line is a planar graph,
  ie. there is *some* way to embed it in the plane.
- No processed edge has zero length.
- No two processed vertices have identical coordinates.
- Each "inside" region is monotone, ie. can be broken into two chains
  of monotonically increasing vertices according to VertLeq(v1,v2).
  - a non-invariant: these chains may intersect (slightly) due to
    numerical errors, but this does not affect the algorithm's operation.

Invariants for the Sweep.

- If a vertex has any left-going edges, then these must be in the edge
  dictionary at the time the vertex is processed.
- If an edge is marked "fixUpperEdge" (it is a temporary edge introduced
  by ConnectRightVertex), then it is the only right-going edge from
  its associated vertex. (This says that these edges exist only
  when it is necessary.)


Robustness
----------

The key to the robustness of the algorithm is maintaining the
invariants above, especially the correct ordering of the edge
dictionary. We achieve this by:

1. Writing the numerical computations for maximum precision rather
   than maximum speed.

2. Making no assumptions at all about the results of the edge
   intersection calculations -- for sufficiently degenerate inputs,
   the computed location is not much better than a random number.

3. Restoring the invariants, when numerical errors violate them, by
   making *topological* changes as necessary (ie. relinking the mesh
   structure).
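
As an example of point 1, here is an interpolation helper in the
spirit of the one in geom.c: given nonnegative weights a and b for
values x and y, it interpolates from the side with the smaller weight,
so that the correction term (and hence the roundoff error) stays as
small as possible.

    static double Interpolate( double a, double x, double b, double y )
    {
       if( a < 0 ) a = 0;
       if( b < 0 ) b = 0;
       if( a <= b ) {
          if( b == 0 ) return (x + y) / 2;   /* both weights zero */
          return x + (y - x) * (a / (a + b));
       }
       return y + (x - y) * (b / (a + b));
    }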


Triangulation and Grouping
--------------------------

We finish the line sweep before doing any triangulation. This is
because even after a monotone region is complete, its vertex data can
still change due to further vertex merging.

After triangulating all monotone regions, we want to group the
triangles into fans and strips. We do this using a greedy approach.
The triangulation itself is not optimized to reduce the number of
primitives; we just try to get a reasonable decomposition of the
computed triangulation.
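
A toy sketch of the greedy idea (purely illustrative; the real code
walks the mesh adjacency directly rather than an index array):
starting from one triangle, absorb the following triangles into a fan
for as long as each of them still contains the candidate apex vertex,
then start a new group.

    #include <stdio.h>

    typedef struct { int v[3]; } Tri;

    static int HasVertex( const Tri *t, int vert )
    {
       return t->v[0] == vert || t->v[1] == vert || t->v[2] == vert;
    }

    int main( void )
    {
       /* Triangles listed so that consecutive ones may share an apex. */
       Tri tris[] = { {{0,1,2}}, {{0,2,3}}, {{0,3,4}}, {{5,4,6}} };
       int n = 4, i = 0;

       while( i < n ) {
          int apex = tris[i].v[0];   /* candidate apex for a new fan */
          int len = 1;
          while( i + len < n && HasVertex( &tris[i + len], apex ) )
             len++;
          printf( "%s of %d triangle(s) around vertex %d\n",
                  len > 1 ? "fan" : "group", len, apex );
          i += len;
       }
       return 0;
    }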