GRADUAL GENERALIZATION OF NAUTICAL CHART CONTOURS WITH A B-SPLINE SNAKE METHOD

ACKNOWLEDGEMENTS
This study was sponsored by NOAA grant NA10NOS400007, and supported by the
Center for Coastal and Ocean Mapping. Professor Larry Mayer introduced me to the world of
Ocean Mapping, and taught me much about geological oceanography; Professor Brian
Calder initiated this study and has always been able to selflessly help me with any questions;
Professor Steven Wineberg gave me many insights into how to translate mathematical concepts
into graphical behavior; Professor Kurt Schwehr helped me with many insightful thoughts and
suggestions about the computer programming implementation. I am grateful for all their selfless
help and patience, and I would like to thank all of them for their guidance, encouragement and
proof-reading of this thesis: without them and CCOM’s support, this work would not have happened.
Finally, I would like to thank my parents and friends, for their encouragement and trust. Your
love and faith gave me the strength to keep holding on and finally make it work! Love you all!
ABSTRACT
B-spline snake methods have been used in cartographic generalization in the past
decade, particularly in the generalization of nautical charts where these methods yield good
results with respect to the shoal-bias rules for the generalization of chart contours. However,
previous studies only show generalization results at particular generalization (or scale) levels,
and show only two states of the algorithm: before and after generalization, but nothing in
between. This thesis presents an improved method of using B-spline snakes and other auxiliary
functions and workflows for generalization in the context of nautical charts which can generalize
multiple nautical chart features from large scale to small scale without creating any invalid
intermediate features that require special processing to resolve. This process allows users to
generate charts at any intermediate scale without cartographic irregularities, and is capable of
extension to include more specialized generalization operators.
Generalization is a branch of cartography which studies the process of how the contents of
a map change when the scale of the map changes (Figure 1-1). The generalization process is
traditionally done by cartographers manually, even though computers are widely used in the map
production process. Generalization, due to its complex nature, remains a procedure that requires
large amounts of manual processing.
Although generalization is a complex process, many studies have been done in this field.
For land maps, studies have been conducted to establish principles of generalization (Shea and
McMaster, 1989; Wang and Muller, 1998; Ware, 2003). However, the studies listed here focus on
land maps. A nautical chart, on the other hand, is another type of map; it is a graphic
representation of a maritime area and adjacent coastal regions. The contents of a nautical chart are
different from a land map, and the purposes are different too, which leads to distinct generalization
rules for nautical charts. Studies on nautical chart generalization are not as frequent as for land
maps. Guilbert and collaborators (Guilbert and Lin, 2007; Guilbert and Saux, 2008) used a B-
spline snake method to generalize the contours of a nautical chart. However, their method only
showed the contours before and after the generalization process, with nothing in between (Figure 1-2).
Generalization should be a gradual process between scales: when the scale gradually
changes the contents should change gradually too. This study focuses on finding generalization
tools, operators, and workflows to make generalization a gradual process, and to carry out
generalization without causing cartographic difficulties in the process.
Current generalization processes are mostly done by cartographers manually. With their
previous knowledge and experience, cartographers draw new contours on a smaller scale chart
based on contours and sounding data from larger scale charts. By examining how cartographers do
generalization, two rules can be summarized as principles of chart contour generalization:
1) From a large scale chart to a small scale chart, contours are simplified and smoothed, and
when their shape is changed, they are only moved to the deeper side of the original curve
(Figure 1-3 and Figure 1-4).
From Figure 1-3 and Figure 1-4, compared to the 30 foot contour on the 1:20,000 scale
chart, the 30 foot contour on the 1:80,000 scale chart is simplified and smoothed, and when
it is smoothed, its shape is changed such that the 30 foot contour on the 1:80,000 scale
chart is moved to the deeper side of the 30 foot contour on the 1:20,000 scale chart. The
reason why smoothing is only done by shifting the contour to the deeper side (primarily
navigational safety) is discussed in section 1.2.2.
2) From a large scale chart to a small scale chart, polyline contours (open contours) and polygon
contours (closed contours) will be aggregated; polygon contours will aggregate with each
other and eventually be aggregated with the polyline contour. Figure 1-5 and Figure 1-6
demonstrate how cartographers aggregated polygon contours with a polyline contour
during generalization: in the 1:80,000 scale chart, the polygon contours are deleted and
aggregated into the 60 foot contour line. Figure 1-7 and Figure 1-8 show that cartographers
aggregate small polygon contours from the 1:20,000 scale chart with large polygon
contours from the 1:80,000 scale chart.
However, in the manual process of generalization of raster charts, cartographers only
provide contours at certain scales, for example at the 1:20,000 and 1:80,000 scales in Figure 1-5 to
Figure 1-8. In reality, users might want more scales in between, as it is a large scale change from
1:20,000 to 1:80,000. The question is how to create a process that does generalization similar to
the way cartographers do it manually with the ability to show contours at intermediate stages. This
study will focus on developing algorithms that simplify and smooth the contours, and exaggerate
and aggregate contours when needed. These algorithms will be combined into workflows that
generate a gradual generalization process, such that the intermediate stages of generalization will
be available, and large scale nautical chart features can be generalized to small scales without
creating any invalid intermediate features or cartographic errors.
1.2.1 Contours and Nautical Charts
Contours are one of the primary bathymetric features on nautical charts. They depict the
geomorphologic shape of the seafloor, indicate the shallow areas, and provide safety of navigation
information for mariners. Nautical charts make a distinction between isobaths (i.e., a line that
connects all points with the same depth) and contours (i.e., a line that contains all points shallower
[shoaler] than a given depth). This thesis is concerned with contours, as they are a more general
description of a depth boundary, and required for maintenance of navigational safety when
constructing a chart.
A nautical chart is a different form of map; it is a graphic representation of a maritime
area and adjacent coastal regions. Unlike a map, which is oriented to terrestrial use, the nautical
chart provides information relevant to marine navigation (NOAA, 1997). The focus of the
nautical chart is on water areas, providing data on water depths, aids to navigation (ATONs),
hazards, etc. (NOAA, 1997).
Charts are generally constructed from multiple sources of bathymetric data (e.g.,
soundings from various sources, contours, indications of obstructions) and non-bathymetric data
(e.g., floating aids to navigation, shore-line constructions, tides, currents). Traditionally, charts
were constructed at a particular scale of representation in order to depict the information at a level
of detail suitable for the intended use (e.g., very large scale, perhaps 1:5,000, for docking charts,
through to very small scale, perhaps 1:1,000,000 or less, for planning an ocean crossing). Most
often, the source surveys for the charts were conducted at a scale twice that of the largest scale
charts for the area being surveyed and smaller scale charts were constructed from the larger scale
charts by a process of generalization. As the scale of the chart changes, the contents shown on the
chart are necessarily different, as the space available to represent any given physical area is
smaller. The detail available at the largest scale cannot be shown clearly at smaller scales. Clarity
of representation is essential in a chart in order to provide a useful working document, and to
promote navigational safety for surface vessels. Generalization is the process of choosing which
contents should be shown and how they will be represented on the chart to achieve these goals.
More recent practice has been to construct fully electronic charts (i.e., Electronic
Navigational Charts [ENCs]) for use in computer-based bridge navigation systems. These systems
allow the user to zoom in and out essentially continuously and therefore require that the display
system (either an Electronic Chart System [ECS] or Electronic Chart Display and Information
System [ECDIS]) provide generalized data to the user on demand. Currently, navigation systems
select the best chart available for the region from a set of charts (typically the chart with the closest
scale match to that required), and display it, generalizing only within the limits of the scale
minimum and maximum information coded into the chart’s source data. These systems are
essentially autonomous of the cartographer. Once the source data is supplied, automatic methods
for generalization are even more important than they are in the traditional paper-based chart
construction pipeline: here they need to be usable for safe navigation, and preferably aesthetically
pleasing, without human intervention.
Nautical charts differ from land maps in that they do not intend to faithfully represent the
true nature of the seafloor in the area of interest, or, necessarily, all of the other components in the
region. Rather, the goal is to provide a representation of the area that is as faithful to the known
true configuration of the seabed as possible (in as much as the – usually limited – source data
provides information on the true configuration of the seabed), modified such that the information
is inherently safe for surface navigation. For example, the nautical cartographer might move an
indicated sounding in order to improve the clarity of the display, or intentionally modify the
representation in order to suggest to the mariner that an area of the chart is unsuitable for transit. In
all cases, the nautical chart must obey shoal-bias rules, meaning that the chart always shows the
shallowest depth at a given position, or a modification of the known configuration of the seafloor
such that the depth indicated on the chart is shallower than the cartographer knows the water
to be. This difference requires the process of nautical chart generalization to be very different from
land map generalization.
1.2.2 Shoal-biased Rule of Nautical Chart Contours
A contour in a nautical chart is different from a contour in a topographic map. A nautical
chart contour has another property due to the navigation purpose of a nautical chart.
For ships, one of the largest dangers when cruising in the water is running aground.
Mariners always want to ensure the water they are in is deeper than the vessel’s draft. For that
reason, the depths on the chart always represent the shallowest water depth at that location. That
is why the chart datum is chosen to be the mean lower low water level, and hydrographic survey
data is traditionally processed by selecting the shallowest value. These practices all follow the
shoal-biased rule. For contours, the shoal-biased rule is also applied, which means if a contour
represents a depth of 30 feet, it will only be drawn around the positions where the real depths are
the same as or deeper than 30 feet. This characteristic of chart contours leads to another rule in
chart contour deformation: if a contour needs to be moved to another position due to
generalization, it can only be moved to a position deeper than that contour’s depth.
As shown in Figure 1-9, the five meter contour cannot be moved toward the inside of its
original polygon, as the real depth will be shallower than five meters. It can only be moved
toward the outside of its original polygon, as the real depth at those positions will be deeper.
Generalization has been mainly studied on land maps in prior work. Although land maps
are different from nautical charts, a subset of research results on land map generalization can be
applied to nautical charts.
Generalization in GIS contains two main aspects: database generalization and view
generalization (Peng, 2000). Database generalization is also called model generalization, and is
generalization through changes in the conceptual model, which consists of “manipulating the
geometric and thematic descriptions of spatial objects and their relationships with respect to
certain changes of the uncertainty application model” (Harrie, 2001). View generalization is also
called graphic generalization or cartographic generalization, and is “mapping/transforming the
digital description of spatial objects and their relationships into a graphic description, which is
confined to graphic legibility and cartographic principles” (Harrie, 2001).
Shea and McMaster (1989) proposed a complete concept of operators for generalization.
They divided the generalization process into several operators. The generalization process has long
been a subjective process: by dividing generalization functions into operators, the generalization
process can be described more objectively. The operators are summarized in Figure 1-10.
The Shea and McMaster operators are not all applicable to both types of generalization.
Some operators can only be applied to graphic generalization, while some can only be applied to
model generalization. This thesis research is focused on graphic generalization, so only certain
operators will be studied.
Shea and McMaster decompose generalization into 12 types of operators. In this work,
however, only four operators (simplification, smoothing, aggregation, and exaggeration) will be
considered.
The simplification operator produces a reduction in the number of derived data points by
selecting a subset of the original coordinate pairs, retaining those points considered to be the most
representative of the line (Shea and McMaster, 1989). It is useful when the input data are complex.
It increases the calculation speed and reduces the space required for storage. Figure 1-11 is an
example of the simplification process. The original line has seven vertices. The simplification
operator selects four of them, such that those four points represent the original shape best.
The simplification operator is also the operator that has been studied most, and several widely
used algorithms have been developed to implement the simplification operator.
The Douglas-Peucker algorithm (Douglas and Peucker, 1973) is by far the most used
simplification algorithm. The algorithm begins by defining the first point on the line as an anchor
and the last point as a floating point (Figure 1-12). These two points then define a line segment and
the orthogonal distance to the other points on the line is computed. If the distance is longer than the
threshold distance, the point lying furthest away becomes the new floating point (Harrie, 2001).
This cycle is repeated and the floating point moves towards the anchor point. When all the
distances (from the line segment between the anchor and the floating point and intervening points)
are less than the threshold distance, the anchor is moved to the floating point and the last point is
reassigned as the new floating point. The algorithm ends when the last point becomes the anchor.
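The anchor and floating point procedure above is usually implemented recursively. The following is a minimal, illustrative Python sketch of the Douglas-Peucker idea (the function names and the tolerance parameter are assumptions of this sketch, not part of the thesis; as discussed later, plain Douglas-Peucker ignores the shoal-bias rule and is therefore not used in this work):

import math

def perpendicular_distance(point, start, end):
    # Orthogonal distance from 'point' to the line through 'start' and 'end'.
    (x, y), (x1, y1), (x2, y2) = point, start, end
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0.0:                               # degenerate segment
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / length

def douglas_peucker(points, tolerance):
    # Simplify a polyline (a list of (x, y) tuples) to within 'tolerance'.
    if len(points) < 3:
        return list(points)
    distances = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
    max_distance = max(distances)
    if max_distance <= tolerance:                   # every vertex is close enough: keep only the endpoints
        return [points[0], points[-1]]
    split = distances.index(max_distance) + 1       # the farthest vertex becomes a new anchor
    left = douglas_peucker(points[:split + 1], tolerance)
    right = douglas_peucker(points[split:], tolerance)
    return left[:-1] + right                        # drop the duplicated split vertex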
The smoothing operator acts on a line by relocating or shifting coordinate pairs in an
attempt to plane away small perturbations and capture only the most significant trends of the line
(Shea and McMaster, 1989). The smoothing operator reduces the angularity of lines. Figure 1-13
shows how the smoothing process works: all vertices are preserved, but some of them are
relocated.
Smoothing is another operator that has been studied in detail. Since it is not deleting any
points but shifting the position of the vertices, it is mostly implemented by using a mathematical
model such as a smoothing kernel or spline functions. Gaussian smoothing is one of the most common
smoothing methods, where the line is convolved with a Gaussian kernel. B-splines are used
frequently too due to their continuity and smoothness properties. In this study, a B-spline
smoothing method is used. The detailed properties of B-splines are explained in Chapter II.
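As an illustration of the Gaussian smoothing mentioned above, a minimal NumPy sketch is given below (the kernel half-width and standard deviation are illustrative assumptions, and this operator on its own does not respect the shoal-bias rule):

import numpy as np

def gaussian_smooth(vertices, sigma=2.0, half_width=5):
    # Smooth a polyline by convolving its x and y coordinates with a Gaussian kernel.
    xy = np.asarray(vertices, dtype=float)                    # shape (n, 2)
    offsets = np.arange(-half_width, half_width + 1)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()                                    # normalize so the line is not displaced
    # Pad with edge values so the endpoints are not dragged toward the origin.
    padded = np.pad(xy, ((half_width, half_width), (0, 0)), mode="edge")
    return np.column_stack([np.convolve(padded[:, 0], kernel, mode="valid"),
                            np.convolve(padded[:, 1], kernel, mode="valid")])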
The aggregation operator is used to combine several features into one feature to symbolize
the feature when the space on the map is limited and the features are important and need to be
shown (Figure 1-14). There are few studies specifically focusing on the aggregation operator. For
different generalization objects, the aggregation operator may be implemented differently: if point
features are to be aggregated, algorithms might be related to point elimination. In this study, the
aggregating objects are polyline and polygon features, so a computer graphic approach is
developed to implement the aggregation operator, details of which are in Chapter III.
The exaggeration operator is used in the generalization process such that the shapes and
sizes of features can meet the specific requirements of a map (Shea and McMaster, 1989). Figure
1-15 shows an example of exaggeration of an inlet. Inlets need to be opened and streams need to
be widened if the map must depict important navigational information for shipping (Shea and
McMaster, 1989). As with the aggregation operator, exaggeration has not been studied much in
previous research. In this work, a method to implement an exaggeration operator is developed;
details of the exaggeration operator are in Chapter III.
Other operators are also widely used in the generalization process. In this study, only the four
operators discussed above are used; the remaining operators will not be discussed further. Figure 1-16
illustrates how the remaining Shea and McMaster operators work.
An operator is just a concept that represents the transformation of geographic features, but
to accomplish generalization automatically, algorithms are needed to implement those
transformations. Many studies of generalization algorithms and workflows have been done. One
algorithm from this body of research uses a snake method to perform line simplification, smoothing and
displacement (Steiniger and Meier, 2004; Burghardt, 2005). The reason this method is superior to
traditional line simplification methods such as the Douglas-Peucker (1973) or Li-Openshaw (1993)
methods is that it can combine several operators (such as simplification, smoothing, and
displacement) together (Steiniger and Meier, 2004), and also preserve the compound shape of
linear features better (Burghardt, 2005).
Besides the large amount of research on land maps, there is also some research specifically
on nautical charts. NOAA has conducted several studies about nautical chart cartographic
generalization (Shea, 1988), and nautical chart production (NOAA, 1996). Shea’s cartographic
generalization study provided a system that was made of several generalization operators, but
those operators can only be applied one by one, and the user cannot generate a globally controlled
generalization result. Besides that, the study did not take the special characteristics of chart
features into account; those generalization operators may produce incorrect results. The 1996 study
is preliminary research focused on proposing a new concept of how the future chart production
procedure should be. Not much was mentioned about generalization.
Besides the above NOAA conducted research, Guilbert and Lin (Guilbert and Lin, 2007)
introduced a B-spline snake method to nautical chart contour generalization. This method
demonstrates several generalization operators, and takes the shoal-bias rule into consideration.
However, this process only creates results at a given level of generalization, and there is no
intermediate result between the original chart scale and the generalized scale. In reality, when a
chart with a generalization function is being displayed on an ECS or ECDIS screen, it is more
appropriate to have the generalization happen smoothly as the user zooms in and out between
scales. Current generalization studies all provide generalization results at some given
generalization level, but no research has shown gradual generalization on a nautical chart; this
thesis addresses that question.
In summary, the current generalization process has limitations. It is a very subjective
process done by cartographers manually, which is time consuming, and cannot be included in
ENCs (Electronic Navigational Charts, which have to rely on pre-generalized contours). It limits the
generalization to fixed scale bands, which means it cannot readily deal with a continuously
variable scale. The methods that have been attempted for this allow mistakes to happen, and then
resolve them, which is sub-optimal and leads to special rules that make the process complex. The
problem here is to find a scheme that will allow for automated generalization that maintains
nautical cartography rules, while allowing for generalization to any scale from a high resolution
source of survey data and avoiding the creation of invalid intermediate solutions that would require
special processing to resolve.
This thesis presents an improved method of using B-spline snakes and other operators,
auxiliary functions and workflows for generalization in the context of nautical charts, where the
generalization process is done gradually, and large scale nautical chart features with more details
are generalized into smaller scales without creating any invalid intermediate features that require
special processing to resolve. During the generalization process, multiple contours are aware of
each other, and follow appropriate cartographic rules. This workflow also allows a user to
generate chart features at any scale, and it is capable of adding more operators, functions and
forces into its current structure.
In this chapter, a B-spline snake method will be discussed which implements the
simplification and smoothing operators while following the shoal-biased rule. Section 2.1 introduces
the background and method of construction for a B-spline curve; section 2.2 introduces
background on snake methods; and section 2.3 discusses how a B-spline snake method works,
and how it acts as a simplification and smoothing operator while obeying the shoal-bias rule.
2.1 B-spline Curve
2.1.1 B-spline Curve Definition
A spline is a piecewise polynomial function. A B-spline curve is a spline function consisting of
a weighted sum of B-spline basis functions,

$X(u) = \sum_{i} N_i^k(u)\, Q_i$,     (1)

where the $Q_i$ are the control points and the $N_i^k(u)$ are the basis functions of order $k$, defined
recursively (Equation (2)) by the Cox-de Boor relations

$N_i^1(u) = 1$ if $t_i \le u < t_{i+1}$, and $0$ otherwise;
$N_i^k(u) = \dfrac{u - t_i}{t_{i+k-1} - t_i}\, N_i^{k-1}(u) + \dfrac{t_{i+k} - u}{t_{i+k} - t_{i+1}}\, N_{i+1}^{k-1}(u)$,     (2)

where the $t_i$ are the knots. Here $k$ is the order number, and $k-1$ is the degree of the polynomial pieces. Degree three is the
most widely used, and the B-spline with degree three is also called a cubic B-spline.
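As a concrete illustration of (1) and (2), the short sketch below evaluates a clamped cubic B-spline from a handful of control points using SciPy (the control point values, knot construction and sampling density are illustrative assumptions, not values used in the thesis):

import numpy as np
from scipy.interpolate import BSpline

degree = 3                                                    # cubic, i.e. order k = 4
control_points = np.array([[0.0, 0.0], [1.0, 2.0], [2.5, 2.5],
                           [4.0, 1.0], [5.0, 3.0]])           # illustrative control points Q_i
n = len(control_points)
# Clamped knot vector: repeating the end knots makes the curve start at the first
# control point and end at the last one.
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n - degree + 1),
                        [1.0] * degree))
spline = BSpline(knots, control_points, degree)
u = np.linspace(0.0, 1.0, 200)
curve = spline(u)                                             # points X(u) on the B-spline curve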
2.1.2 Cubic B-spline Curve
A curve with a continuous first derivative is called $C^1$ continuous, and a curve with a
continuous second derivative is called $C^2$ continuous. A cubic B-spline is $C^2$ continuous, which is
one reason degree three is preferred for smooth curve representation. Figure 2-2 illustrates basis
functions for degrees 1, 2 and 3.
2.2.1 Snake Method Definition
Snakes, also called active contours, were first used in image processing by Kass et al.
(1987). In image processing, a snake is a curve defined within an image domain that can move
under the influence of internal forces that describe the curve itself and external forces computed
from the image data. The snake is a parametric curve $X(u)$ with total energy

$E_{total} = E_{int}(X(u)) + E_{ext}(X(u))$,

where $E_{int}(X(u))$ is the internal energy of the curve, describing the smoothness, and $E_{ext}(X(u))$ is the
external energy, which expresses external constraints on the system. In the system defined here,
these external constraints are used to represent the shoal-bias rule such that when the external
energy is minimized, the shoal-bias rule has been satisfied. The snake in use here is an
optimization algorithm that attempts to find the $X(u)$ that minimizes $E_{total}$. In general, the
algorithm seeks a shape of the curve to balance the effects of the internal and external energies
such that the resultant curve is as smooth as possible while still satisfying the external constraints,
which may be either hard constraints – i.e., that must be satisfied – or soft constraints that express
a degree of preference.
In the most common snake method, the internal energy is represented as

$E_{int} = \frac{1}{2} \int \left( \alpha\, |X'(u)|^2 + \beta\, |X''(u)|^2 \right) du$,     (4)

where $X'(u)$ and $X''(u)$ are the first and second derivatives of $X(u)$ with respect to $u$, and $\alpha$ and $\beta$ are
weighting parameters that control the balance between the snake’s tension and rigidity
respectively (Xu and Prince, 1998), and are adjusted to emphasize the required features for the
given problem. The exact expressions of the internal energy and external energy can be different
according to the particular purpose of the snake curve. Here, both terms have different definitions
for contour generalization purposes. The details of the definitions are in the next section.
2.3 B-spline Snake Method
In this study, input data points representing the original contour are approximated by a
cubic B-spline curve as in (1), where the $N_i^k(u)$ are the piecewise approximating polynomials and
the $Q_i$ are the control points, i.e. the weights of the polygon. So, before the generalization starts,
the input contour is seen as a B-spline curve. Points on that B-spline curve are designated
$X^0(u_j)$. Because these points are an approximation of the original contour, a polygonal line
(polyline) with the $X^0(u_j)$ as its vertices can be viewed as an approximation of the original
contour defined by the input data points. Then a “curvature” of this polyline can be defined at its
vertices as in section 2.3.1 below.
Because this approximation is only used on input contours with complex shape and large
numbers of vertices (more than 1000 vertices), the control points are normally so close to the
original contour that the human eyes cannot distinguish between them and the points $X^0(u_j)$.
As a consequence, generalizations in this paper use the control points themselves as proxies for
the $X^0(u_j)$, so the polyline formed with the control points as vertices is the line to be simplified
and smoothed. For future work, using the correct approximation points $X^0(u_j)$, and the correct
spline curvature at those points, would give greater accuracy and flexibility, especially in cases
where the number of points is not so large.
At the end of the iterative generalization process, the result is a final polyline, and a final
B-spline is fitted to its vertices.
2.3.1 B-spline Snake Energy Terms for Polylines
For use in this work, the geomorphologic constraints depend mainly on the rigidity of the
snake, and therefore the value of $\alpha$ is set to zero. Guilbert et al. (2006) show that the $\alpha$ value
has little influence on the final result, so it has been set to zero to simplify the calculation. In
this work a cubic B-spline is fitted to the data points representing the original contour. Points on
that B-spline, designated $X^0(u_j)$, are used for the subsequent contour generalization. A
polygonal line is drawn with the $X^0(u_j)$ as vertices, and then a curvature of this polygonal line
is defined at its vertices (Figure 2-3) by finding the internal angle $\theta_j$ between three consecutive
points on the curve, $X(u_{j-1})$, $X(u_j)$ and $X(u_{j+1})$.
In (4), the internal energy is represented as the sum of a first derivative term and a second
derivative term, but as $\alpha$ is set to zero, the internal energy here is only the second term, which
represents the curvature. In (6), the curvature is calculated using the discrete approximation $\kappa_j$,
based on the internal angle $\theta_j$, instead of the second derivative in (4).
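A minimal NumPy sketch of the internal-angle computation described above is given below; the turning angle $\pi - \theta_j$ is shown only as one plausible discrete curvature proxy, since the exact approximation used in (6) is not reproduced here:

import numpy as np

def internal_angles(vertices):
    # Internal angle theta_j at each interior vertex of a polyline ((n, 2) array).
    v = np.asarray(vertices, dtype=float)
    to_prev = v[:-2] - v[1:-1]                      # vector from vertex j to vertex j-1
    to_next = v[2:] - v[1:-1]                       # vector from vertex j to vertex j+1
    cos_theta = np.sum(to_prev * to_next, axis=1) / (
        np.linalg.norm(to_prev, axis=1) * np.linalg.norm(to_next, axis=1))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def turning_angles(vertices):
    # pi - theta_j: zero for collinear points, larger for sharper bends.
    return np.pi - internal_angles(vertices)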
In image processing applications, snakes are often used to match contours in the image
(Kass et al., 1987). The external energy term, therefore, often uses distance between the current
location and some image-derived contour information. In the case of contour generalization,
however, there is no definite target as the ENC contours move continuously offshore as the scale
of the chart decreases. The primary constraint, therefore, is that the generalized snake should be
on the seaward side of the original curve, and the external energy can be set to a one-sided
function (Guilbert et al., 2006),

$E_{ext} = \sum_j c_0\, \dfrac{\lVert X(u_j) - X^0(u_j) \rVert^2}{\sigma_{vis}^2}$,

where $X^0(u_j)$ is on the original curve, $X(u_j)$ is on the generalized contour, and $c_0$ is a
coefficient used in the calculation. When $X(u_j)$ is on the shoal side, $c_0$ is 1, but if $X(u_j)$ is not on
the shoal side, $c_0$ is set to 0, such that there is only a penalty when the constraint is broken. The
penalty term here increases according to the severity with which the generalized curve crosses to
the wrong side of the original (how far it is on the wrong side), but uses a normalization term to
represent the ‘minimum visualizable distance’ set according to the target scale of generalization.
$\sigma_{vis}^2$ reflects the fact that lines on the chart display are non-ideal, and have a defined thickness.
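A sketch of the one-sided penalty written above might look as follows (the quadratic form and parameter names follow the reconstruction above and are assumptions; the shoal-side test itself, e.g. a signed-side or point-in-polygon check against the original contour, is not shown):

import numpy as np

def external_energy(current, original, on_shoal_side, sigma_vis):
    # current, original: (n, 2) arrays of X(u_j) and X0(u_j).
    # on_shoal_side: boolean array, True where X(u_j) has crossed to the shoal side.
    c0 = on_shoal_side.astype(float)                # 1 when the constraint is broken, otherwise 0
    distance_sq = np.sum((current - original) ** 2, axis=1)
    return np.sum(c0 * distance_sq / sigma_vis ** 2)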
When $E_{total}$ is minimized, the curve will be moved to the desired position. The $X(u_j)$ that
minimize $E_{total}$ are the coordinates of the vertices of the desired polyline. One way to calculate the
minimum of a function is to calculate the gradient; when the gradient of $E_{total}$ is zero, $E_{total}$ might
reach its maximum or minimum value. However, as the curvature of a curve can be infinitely
large, there is no upper bound for the total energy, so there is no maximum $E_{total}$. That means,
when the gradient is zero, $E_{total}$ reaches a minimum (although it may be only a local minimum).
The gradient of the function is

$\nabla E_{total} = \nabla E_{int} + \nabla E_{ext} = 0$.     (9)

Here, $\nabla E_{int}$ and $\nabla E_{ext}$ are the gradients of the internal and external energy, but they can
also be considered as forces on the curve. A solution of (9) can be seen either as realizing the
equilibrium of the forces (Figure 2-4) in the equation or as reaching the minimum of the energy
(Cohen, 1991). The curve that satisfies (9) is found by numerically approximating the
gradient terms. The details of the solution are in section 2.3.3.1.
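The force-balance view of (9) suggests a simple explicit iteration, sketched below. The gradient callables, step size and stopping tolerance are illustrative assumptions; the thesis approximates the gradient terms numerically, and implicit schemes such as that of Cohen (1991) are also commonly used:

import numpy as np

def generalize(vertices, grad_internal, grad_external,
               step_size=0.1, tolerance=1e-4, max_iterations=10000):
    # grad_internal and grad_external are callables returning (n, 2) arrays of
    # dE_int/dX and dE_ext/dX evaluated at the current vertex positions.
    v = np.asarray(vertices, dtype=float)
    for _ in range(max_iterations):
        force = -(grad_internal(v) + grad_external(v))    # force = -grad E_total
        new_v = v + step_size * force
        if np.max(np.abs(new_v - v)) < tolerance:         # forces (approximately) in equilibrium
            return new_v
        v = new_v
    return v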
At point B, at the beginning, the curvature is very large, so the internal force dominates and there is
no external force, so the point moves towards the black dashed line. But during the generalization, as
other parts of the contour have all moved (the green dashed line), the curvature at point B’ is
smaller, and at this time the external energy is relatively larger, so point B’ is then moved
towards the deeper side of the original contour.
Figure 2-4c: Internal and external forces in the generalization process
Eventually, the curve stops at the green solid line where the total forces are all balanced, and the
total energy is minimized. The green dashed line in Figure 2-4b and Figure 2-4c is a hypothetical
line, which this generalization will never get to due to the effect of the external forces.
2.3.2 B-spline Snake Energy Terms for Polygons
The internal energy term for polygons is the same as the polyline term (6). For polygon
features, however, there will be an exaggeration operator in the generalization process. For the
exaggeration operator, another force term is introduced in the energy equation. This new force
(Cohen and Cohen, 1993) is represented as

$F_{balloon}(u_j) = b\, n(u_j)$,

where $n(u_j)$ is the unit vector normal to the curve at point $X(u_j)$, and $b$ is the amplitude of this
force (Cohen and Cohen, 1993). This term generates a pressure force to push the polygon curve
outward, as if air is introduced inside; the curve will be inflated like a balloon, so this force is
called a balloon force. This new term gives the polygon curve a force to expand when it is
exaggerated during the generalization. By adding the new balloon force to the other force terms,
the total force on the polygon becomes the sum of the internal force, the external force, and the
balloon force.
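A sketch of the balloon force for a closed polygon is given below (it assumes the vertices are ordered counter-clockwise, and the amplitude b is an illustrative value):

import numpy as np

def outward_normals(polygon):
    # Unit normals at the vertices of a closed, counter-clockwise polygon ((n, 2) array).
    p = np.asarray(polygon, dtype=float)
    tangent = np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)     # central-difference tangent
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    # Rotating the tangent by -90 degrees points outward for a counter-clockwise polygon.
    return np.column_stack([tangent[:, 1], -tangent[:, 0]])

def balloon_force(polygon, b=0.5):
    # Pressure force b * n(u_j) that inflates the polygon outward.
    return b * outward_normals(polygon)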
2.3.3 Smoothing Operator
The smoothing operator has been studied extensively. The most commonly used smoothing
method is to apply filters on polylines. Figure 2-5 illustrates how the smoothing operator works
on a polyline feature. Figure 2-6 and Figure 2-7 show an example of smoothing in a
cartographer’s manual process of generalization.
Figure 2-5: Sample spatial transformation of smoothing operator
The left sub-figure is the sample polyline; the right sub-figure is the result after the smoothing
operator is applied. The total number of vertices of the curve remains the same, but the
positions of these vertices are changed, such that the curve is smoothed (Shea and McMaster,
1989).
A gradient approximation is used for the internal energy in each iterative step (the gradient of
the external energy will be discussed in section 2.3.5.2).
2.3.4 Simplification Operator
Simplification operators have been studied in previous research. For example, the
Douglas-Peucker algorithm (Douglas and Peucker, 1973; illustrated in section 1.3) is the most widely
used algorithm to simplify linear features. Other researchers prefer the bend-detection algorithm
developed by Wang (1998). However, neither of these methods considers the shoal-bias rule for
nautical chart features, so they cannot be used in this study. Figure 2-8 illustrates how the simplification operator works
on a polyline.
2.3.4.1 Simplification Operator Implementation
The simplification operator can be implemented by two methods. One is to reduce the
number of control points of the B-spline curve before the generalization process at the data
preprocess step, the other method is to reduce the number of points on the polyline.
B-spline approximation is the first step of the generalization workflow. When approximating a
polyline with a B-spline,
the number of the B-spline parameters is the same as the number of data points on the polyline,
but the number of control points can be smaller than the number of parameters. As the basis
function can be calculated recursively, the only unknowns of each B-spline are the locations of
the control points. To approximate the original contour curve with a B-spline curve, instead of
storing all the data of original polyline, only the control points need to be stored (Saux and
Daniel, 1998).
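A sketch of this first method, storing only a B-spline approximation of the original contour, is given below using SciPy's smoothing-spline routines (the thesis uses the approximation of Saux and Daniel (1998) rather than SciPy, so the smoothing factor and sampling density here are illustrative assumptions):

import numpy as np
from scipy.interpolate import splprep, splev

def approximate_with_bspline(vertices, smoothing=1.0, samples=500):
    # Fit a cubic B-spline to a dense contour; only the spline coefficients
    # (the control points) need to be stored instead of every original vertex.
    xy = np.asarray(vertices, dtype=float)
    tck, _ = splprep([xy[:, 0], xy[:, 1]], k=3, s=smoothing)
    n_control_points = len(tck[1][0])                    # how many control points were kept
    resampled = np.column_stack(splev(np.linspace(0.0, 1.0, samples), tck))
    return tck, n_control_points, resampled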
The second method is to reduce the number of data points in a polyline. This method has
been used during the generalization process in this thesis. When neighbor points are closer than a
threshold during deformation, points will be deleted. The pseudo code is as follows:
Algorithm 2-1:
Input: one polyline (a set of control points of an approximating B-spline)
1. Calculate the distance between adjacent control points
2. Select all the indices for which the neighbor distance (Cartesian distance between two
adjacent vertices) is smaller than a threshold
3. Iterate through the selected indices of step 2
3.1 If there are three or more continuous indices in the selected indices (like indices 4, 5,
6 and 7)
3.1.1 Divide this continuous segment into small segments such that each contains
three continuous indices
3.1.2 For each segment
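A minimal sketch of the core idea of Algorithm 2-1, dropping vertices that fall closer than a threshold to the previously kept vertex, is shown below (the special handling of long runs of close indices in steps 3.1.x is omitted, and the threshold is an illustrative parameter):

import numpy as np

def remove_close_vertices(vertices, threshold):
    # Keep a vertex only if it is at least 'threshold' away from the last kept vertex;
    # the two endpoints are always retained.  This covers only steps 1-2 of Algorithm 2-1.
    v = np.asarray(vertices, dtype=float)
    kept = [0]
    for j in range(1, len(v) - 1):
        if np.linalg.norm(v[j] - v[kept[-1]]) >= threshold:
            kept.append(j)
    kept.append(len(v) - 1)
    return v[kept]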
2.3.5 Shoal-biased Operator
The shoal-biased rule is a special generalization rule for nautical chart features. As
introduced in the previous section, contours of a nautical chart can only be moved to a deeper
position (Figure 2-10). This shoal-biased constraint operator is always used together with
other operators, such as the smoothing operator.
CHAPTER III
OPERATORS, AUXILIARY FUNCTIONS AND WORKFLOW DESIGN
In this thesis, generalization is done by using operators; each operator has its
specific generalization purpose. By combining different operators together in particular
workflows, a generalization process can be achieved with respect to holistic and aesthetic
purposes. The first part of this chapter introduces all of the generalization operators’ purposes and
how they are implemented; the second part of this chapter is about the workflow for how these
operators are combined in the overall generalization process. The operators are defined
phenomenologically, and so the best way to describe them is through pseudo code. The unit of the
distance values used in the following calculations is 1/100,000 degree (see Appendix A.0).
3.1 Operators
3.1.1 Aggregate Operator
An aggregate operator is used to combine two or more features into one. As shown in
Figure 3-1, three small Ruins features in the left figure are aggregated into two large features in
the right figure.
The features being aggregated should be in the same category. In this research, the
objects of study are just contours, but they are at various depths. Only contours with the same
depth can be aggregated.
In cartographers’ manual process of generalization, aggregations are done in many
circumstances. As Figure 3-2 to Figure 3-4 demonstrate, aggregation is done between polylines
and polygon contours, two polygon contours, and a group of polygons.
3.1.1.1 Aggregate Operator Method
There are four cases for the aggregate operator. The first is aggregating a polyline with a
polygon contour (Figure 3-5, One). The second case is aggregating two polygon contours (Figure
3-5, Two). The third case is aggregating two groups of contours, where each group has a set of
polygon contours inside it (Figure 3-5, Three). The fourth case is aggregating two contours when
they have intersected with each other (Figure 3-5, Four). In this last case, the intersected contour
can be either a polygon or a polyline. This case may happen during the generalization process
when the step size of the last generalization was too large: the polyline moved a large step, and
intersected with the neighbor feature. The biggest difference between the first two cases is the
aggregation result. For Case One, the aggregated result is a polyline, but for the second case the
aggregation result is a polygon. The cases are treated in order of complexity; the first and
second cases are relatively simple, while the third case has more steps and is more
complicated.
3.1.1.2 Aggregate Operator Implementation
3.1.1.2.1 Aggregate Operator Case One Implementation
The basic goal of the aggregate operator is to find two supporting segments that connect
the two features, and then to remove the segments of the two features between the contours. For
example, the brown lines in Figure 3-6A are the supporting lines for these two features; the green
line in Figure 3-6B is the aggregated result of those two features.
Figure 3-6: Two steps of aggregate operator in Case One
A: find two supporting segments; B: connect the remaining part of the two features and the
supporting segments. The aggregate operator here finds the supporting segments (the brown
lines in the left figure) and combines them with the selected part of polyline and polygon, and
forms a new feature (the green line in the right figure). The dashed lines will be deleted.
Pseudo code for aggregating a polyline and a polygon is as follows:
Algorithm 3-1
Input: one polyline and one polygon contour
1. Calculate the minimum distance between the polyline and polygon
2. If the minimum distance is smaller than a threshold
2.1 Find the proper supporting segments for the polyline-polygon case
2.2 Combine the supporting segments with the rest of the features
3. End If
Pseudo code for finding supporting segments is as follows:
Algorithm 3-2
Input: one polyline and one polygon contour.
1. Find the two points P_polygon and P_line (Figure 3-7) of the polygon and polyline that are
closest to each other.
2. Get the starting index S_1 and ending index E_1 of the selected section of the polygon.
S_1's index is the index of P_polygon plus N, and E_1's index is the index of P_polygon minus
M (point indices increase clockwise from an arbitrary point, and should be taken
modulo the number of points in the polygon).
3. Get the starting index S_2 and ending index E_2 of the selected section of the
polyline. Calculate the minimum distance from each vertex of the polyline to the
polygon. Each vertex whose minimum distance is smaller than a threshold will be
selected. For example, in Figure 3-7, the segment from index S_2 to index E_2 is the
selected segment of the polyline. Then line S_1S_2 and line E_1E_2 will be the supporting
segments of this polyline and polygon.
4. Check if line S_1S_2 and line E_1E_2 intersect with the original polygon contour. If they
do, the point closest to the intersection point will be the new starting or ending point
of the polygon contour, and the algorithm will use the new starting and ending points
as the S_1 and E_1 indices (Figure 3-8). Then the new lines S_1S_2 and E_1E_2 will be the
valid supporting lines.
5. Connect the valid supporting lines and the remaining segments of the polyline and the
polygon. Delete the original polyline and polygon. This new polyline will be the
aggregated result (Figure 3-9).
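As an illustration of step 1 of Algorithm 3-2, the sketch below finds the pair of vertices P_polygon and P_line that are closest to each other by a brute-force search (it compares vertices only; the minimum distance could equally be computed point-to-segment):

import numpy as np

def closest_vertex_pair(polygon, polyline):
    # Brute-force search for the closest pair of vertices between a polygon and a polyline.
    poly = np.asarray(polygon, dtype=float)
    line = np.asarray(polyline, dtype=float)
    distances = np.linalg.norm(poly[:, None, :] - line[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(distances), distances.shape)
    return i, j, distances[i, j]          # indices of P_polygon and P_line, and their separation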