A general finite element model for segmentation in 2, 3, and 4 dimensions


Jörg Bredno, Thomas Lehmann, Klaus Spitzer
Institute of Medical Informatics, Medical Faculty, Aachen University of Technology (RWTH), D-52057 Aachen, Germany
ABSTRACT

Medical imaging modalities often provide image material in more than two dimensions. However, the analysis of voxel data sets or image sequences is usually performed using only two-dimensional methods. Furthermore, four-dimensional medical image material (sequences of stacks of images) is already available for clinical diagnoses. In contrast, four-dimensional image processing methods are almost unknown. We present an active contour model based on balloon models that allows a coherent segmentation of image material of any desired dimension. Our model is based on linear finite elements and combines a shape representation with an iterative segmentation algorithm. Additionally, we present a novel definition for the computation of the external influences that deform the model. The appearance of relevant edges in the image is defined by image potentials and a filter kernel function. The filter kernel is applied with respect to the location and orientation of finite elements. The model moves under the influence of internal and external forces and avoids collisions of finite elements in this movement. Exemplarily, we present segmentation results in 2D (radiographs), 3D (video sequence of the mouth), and 4D (synthetic image material) and compare our results with propagation methods. The new formalism for external influences allows the model to act on greylevel as well as color images without pre-filtering.

Keywords: spatio-temporal image data, geometric deformable model, image segmentation, finite element model

1. INTRODUCTION

Spatial and temporal information is nowadays available from many imaging modalities such as video, CT, MR, and ultrasound. A quantification of temporal or spatial data by hand cannot be used in clinical routine as it is too time-consuming even for experienced observers.1 For two-dimensional images, active contour models have proven to yield robust segmentation results.2 The classic approach of Kass et al.3 is widely used for medical image analysis and is still a topic of current improvements and research.4 Nevertheless, the development of new medical imaging systems exceeds the development of image analysis methods, especially for image material of higher dimensions. So far, models to quantify image material of higher dimensions exist only for highly specialized applications5 and are usually bounded to three dimensions. The spatial and temporal extension of image space creates the need for new developments. When a tracking of objects and their movement in image sequences is needed, the image sequence is often segmented using propagation methods.6 It is well recognized that this propagation of two-dimensional contours through an image sequence might result in fatal distractions of the contour at a few ambiguous images in the sequence.7 This problem is tackled, e.g., using motion estimation.8 For spatial data, the propagation of segmentation results through slices is still used9 even though three-dimensional models were published years ago.10 More and more often, a medical diagnosis is based on image material of up to four dimensions.11 So far, no coherent model exists to represent and segment spatial objects that additionally move and change shape over the period of time under observation.

Our model is a generalization of the well-known balloon model.10 It uses linear finite elements and mechanical influences to perform an iterative segmentation process in image material of d dimensions. The segmentation result is a geometrical representation of the objects contained in the image space and can be used for many different tasks in the quantification of medical image data.

Further author information: Send correspondence to Jörg Bredno, Institute of Medical Informatics, Aachen University of Technology (RWTH), D-52057 Aachen, Germany. E-mail: jbredno@mi.rwth-aachen.de

2. METHODS

2.1. Image Space
To allow a formulation of the model that is independent of the dimension, we first define an image as a discrete, convex subset of the d-dimensional space:

    I \subseteq \mathbb{N}_0^d, \qquad d \in \{2, 3, 4, \ldots\}    (1)

So far, we assume a Cartesian image space with equally spaced axes, i.e. square picture elements (pixels) and cubic volume elements (voxels). We denominate four-dimensional image elements as stixels (spatio-temporal image elements), but will use the term image element for all elements of I irrespective of its dimension d. The image space I is mapped onto a space of image values

    W \subseteq \mathbb{R}^m, \qquad m \in \{1, 2, \ldots\}    (2)

The image function f assigns an at least one-dimensional image value to each image element of I:

    \vec{f}: \vec{x} \in I \rightarrow \vec{w} \in W    (3)

The dimension m of the image values is not limited in our model. E.g., it is m = 1 for grey-scale images and m = 3 for color images. Medical imaging systems offer an increasing amount of multi-modal images with m > 1.
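To make Eqs. (1)-(3) concrete, the following minimal C++ sketch stores a d-dimensional image with m-dimensional values in a flat array; the class layout and names are illustrative assumptions rather than the original implementation.

```cpp
// Sketch of the image space I and the image function f (Eqs. 1-3).
// Flat row-major storage and all names are our assumptions.
#include <cstddef>
#include <vector>

struct Image {
    int d;                      // dimension of the image space (2, 3, or 4)
    int m;                      // dimension of the value space W
    std::vector<int> extent;    // number of image elements along each axis
    std::vector<float> data;    // extent[0]*...*extent[d-1] elements, m channels each

    // f: x in I -> w in W, for an integer-valued position x
    std::vector<float> value(const std::vector<int>& x) const {
        std::size_t idx = 0;
        for (int a = 0; a < d; ++a)          // row-major linearization
            idx = idx * extent[a] + x[a];
        return std::vector<float>(data.begin() + idx * m,
                                  data.begin() + (idx + 1) * m);
    }
};
```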

2.2. Shape Representation

An object represented by the model is a subset of I. The shape representation is the surface of this object with d - 1 degrees of freedom. For its definition, we use a set V of discrete vertices v_i and a set E of finite elements e_j which are also called edge elements or edgels.10 All vertices v_i are elements of the image space I; they determine a position in I with their vector \vec{v}_i:

    V = \{v_i\}, \qquad v_i \in I    (4)

The finite elements or edgels are affine simplices K of dimension d - 1. Each finite element e_j provides the connection and orientation of the d vertices v_{j,1...d} that support this edgel:

    E = \{e_j\}, \qquad e_j = K(v_{j,1}, \ldots, v_{j,d}), \quad v_{j,k} \in V    (5)

The general affine simplex K of dimension n is defined as:

    K(v_0, v_1, \ldots, v_n) = \left\{ \lambda_0 \vec{v}_0 + \ldots + \lambda_n \vec{v}_n \;\middle|\; 0 \le \lambda_k \le 1, \; \sum_{k=0}^{n} \lambda_k = 1 \right\}    (6)

For each edgel e_j, the unit normal vector n_j pointing outward and its size |e_j| are available. Further, we define U^v_σ(v_i) as the set of vertices that lie no more than σ edges away from v_i. Here an edge is the straight connection between two vertices v_i and v_k which share at least one edgel e_j. U^e(v_i) is defined as the set of all edgels e_j which are supported by v_i. The shape representation yields the consistent surface of a coherent subset of I if the following necessary requirements are met:

- Every vertex v_i supports at least d edgels e_j.
- Every edgel e_j has exactly d different adjacent edgels. Two edgels are adjacent if they share d - 1 vertices.
- No edgels e_j and e_k, j ≠ k, exist that share d vertices.
- The sequence v_{j,1...d} of vertices, which defines the orientation of e_j, complies with the orientation of the shape's surface.
- The edgels e_j do not intersect with each other, that is {e_j ∩ e_k} = {} for all j ≠ k.
- A path can be found to reach each edgel e_j from every other edgel e_k by stepping over adjacent edgels.

So far, we have not proved the sufficiency of these requirements. Note that the last requirement leads to the representation of one single coherent object. If this requirement is not enforced, it becomes possible for the model to represent more than one object contained in I. Instances of this generalized model for 2, 3, and 4 dimensions result in the following shape representations:

- In an image space of two dimensions x and y, an object is represented by a polygonal shape.
- For spatial image data (d = 3), the finite element model forms a triangulated surface.
- A sequence of 2D images also results in an image space with d = 3; the image space is accessed using the coordinates x, y, and t. The model consists of triangular finite elements where each vertex defines a position (x, y) in the image plane and its time of occurrence t. Each finite element represents a line segment of a polygonal shape in different images of the sequence. This results in segments which are scaled and translated over a short period of time. The line segment at time t_c is obtained when the edgel is cut with the plane defined by t = t_c.
- An image space of four dimensions comprises a spatial object and its movement and change of shape in time. The model results in a set E of tetrahedra that connect four vertices which each define a position (x, y, z) in space and a time of occurrence t. Each edgel represents a moving facet of a triangulated object. Here, the cut of an edgel with a hyperplane defined by t = t_c may result in one or two triangles; the triangles obtained by cutting all edgels with that hyperplane form the triangulated object at time t_c. The triangulated facets are scaled and translated during the existence of each edgel e_j, which is given by the minimal and maximal time coordinates of the vertices v_{j,k}.
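The sets V and E of Eqs. (4) and (5) map naturally onto index-based containers; the following sketch (with assumed type and function names) also shows how U^e(v_i) and edgel adjacency can be queried.

```cpp
// Sketch of the shape representation (Eqs. 4-5): a set V of vertices and a set E
// of edgels, each edgel referencing its d supporting vertices in oriented order.
#include <set>
#include <vector>

struct Vertex { std::vector<double> pos; };   // position in R^d, limited to the bounds of I
struct Edgel  { std::vector<int>    v;   };   // indices of the d supporting vertices v_{j,1..d}

struct Contour {
    int d;                                    // image dimension; edgels are simplices of dimension d-1
    std::vector<Vertex> V;
    std::vector<Edgel>  E;

    // U^e(v_i): all edgels supported by vertex i
    std::vector<int> supportedEdgels(int i) const {
        std::vector<int> out;
        for (int j = 0; j < (int)E.size(); ++j)
            for (int k : E[j].v)
                if (k == i) { out.push_back(j); break; }
        return out;
    }

    // Two edgels are adjacent if they share d-1 vertices.
    bool adjacent(const Edgel& a, const Edgel& b) const {
        std::set<int> sa(a.v.begin(), a.v.end());
        int shared = 0;
        for (int k : b.v) if (sa.count(k)) ++shared;
        return shared == d - 1;
    }
};
```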

2.3. Mechanical Influences

Influences act on the model and result in movements towards the surface of objects contained in an image. A mechanical formulation was used because forces allow a vectored definition of influences independent of the image's dimension d. Different notations are used in the literature to denote such influences. Internal12 or intrinsic13 influences are used to describe a preferred shape of active contour models. These influences are used to control the reconstruction of ambiguous parts of an image. They may also include tendencies of the contour to seek object surfaces. Significant image information is used to attract or stabilize an active contour model at object boundaries. These external or extrinsic influences are calculated from image values w ∈ W. In different models, these forces are imposed either on edgels or on vertices. In our model, forces F_{e_j} that act on edgels are later transformed to their resulting forces F_{v_i} acting on the vertices, because only vertices define a position in the image space and can be moved. The location of edgels is always defined by their supporting vertices.

2.3.1. Pressure

The mechanical formulation of a tendency to seek object boundaries is realized by an internal or external pressure p that results in a pressure force F^p_{e_j} acting on every edgel e_j:

    \vec{F}^p_{e_j} = p \, |e_j| \, \vec{n}_j    (7)

The pressure force F^p_{e_j} is directed along the edgel's normal vector n_j and proportional to the size |e_j| of the edgel and the chosen pressure p. Positive pressure makes the contour seek outward, negative pressure gives the contour a tendency to shrink.
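A minimal sketch of Eq. (7); the edgel size |e_j| and its outward unit normal n_j are assumed to be computed elsewhere.

```cpp
// Sketch of the pressure force of Eq. (7): F^p_{e_j} = p * |e_j| * n_j.
#include <cstddef>
#include <vector>

std::vector<double> pressureForce(double p, double size,
                                  const std::vector<double>& normal) {
    std::vector<double> F(normal.size());
    for (std::size_t a = 0; a < normal.size(); ++a)
        F[a] = p * size * normal[a];   // positive p inflates, negative p shrinks the contour
    return F;
}
```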

Figure 1. External influences for a three-dimensional edgel e_j and one exemplary filter kernel k(h).

2.3.2. Image information

Most importantly, edgels pick up image information. Edgels e_j are influenced by all image values that are read from the image subset I_j. This subset is defined as a set of image elements in the vicinity of e_j:

    I_j = \left\{ \vec{x} \in I \;\middle|\; \vec{x} = \vec{x}_e + h \, \vec{n}_j, \; \vec{x}_e \in e_j, \; h_{\min} \le h \le h_{\max} \right\}    (8)

This set contains all image elements that are reached by adding the normal vector n_j, scaled in the range [h_min, h_max], to the image elements in the affine simplex of e_j. In our model, the dimension dim(I_j) of the subset from which external influences are read is the dimension d of the image material itself. For the computation of the external influences, a corresponding entry in the image potentials φ(w) is assigned to every component of the image values w ∈ W. The image potentials are chosen to describe the appearance of objects in different color channels. E.g., for m = 1, φ(w) is a scalar function of the greylevels. For RGB color spaces with w = (w_r, w_g, w_b)^T, the potential φ = (φ_r(w_r), φ_g(w_g), φ_b(w_b))^T assigns individual potentials to the values in the different color channels. Furthermore, we use a filter kernel k(h) with entries for each dimension of the value space. This adaptable filter kernel defines the strength and scale of the gradients in the image at the border of objects of interest. So far, we use nearest-neighbor interpolation to map continuous positions x into the discrete image space I. External influences acting on an edgel e_j are defined with respect to k(h) and φ(w). The absolute value of the external force F^e_{e_j} is the unstandardized correlation of the image potentials φ and the filter kernel k(h) applied in the direction of the normal vector n_j:

    \vec{F}^e_{e_j} = -\vec{n}_j \sum_{\vec{x}_e \in e_j} \; \sum_{h=h_{\min}}^{h_{\max}} \vec{k}(h) \cdot \vec{\varphi}\!\left(\vec{f}(\vec{x}_e + h \, \vec{n}_j)\right)    (9)

The image potentials are summed up and individually weighted by the filter kernel k(h) with respect to their distance h from the edgel (Fig. 1). This force is always directed against the normal vector of each edgel. The choice of k(h) allows the model to find high gradients in the image as well as the occurrence of image potentials with a magnitude that equals the pressure p. It can be interpreted as a directional filter of adaptable scale; the image is always filtered normal to the direction of edgels. Therefore, our model works on the original image material with no pre-filtering. In Figure 1, external information is read from the three-dimensional image subset I_j of a triangular edgel e_j. The exemplary filter kernel of size 7 is the discrete approximation of the first-order derivative of a Gaussian function with a standard deviation of one pixel.
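The following sketch illustrates Eqs. (8) and (9): the image potentials are sampled along the edgel normal and correlated with the filter kernel. The image access, the potential function, and the sampling of the edgel interior are passed in as assumptions.

```cpp
// Sketch of the external force of Eqs. (8)-(9): the potentials read along the
// edgel normal are correlated with the filter kernel k(h).
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

Vec externalForce(const Vec& normal,                          // unit normal n_j of edgel e_j
                  const std::vector<Vec>& elementPositions,   // image elements x_e inside e_j
                  int hMin, int hMax,
                  const std::vector<Vec>& kernel,             // k(h), one m-vector per h in [hMin, hMax]
                  std::function<Vec(const Vec&)> potentialAt) // phi(f(x)) with nearest-neighbor lookup
{
    double magnitude = 0.0;
    for (const Vec& xe : elementPositions) {
        for (int h = hMin; h <= hMax; ++h) {
            Vec x(xe.size());
            for (std::size_t a = 0; a < xe.size(); ++a)
                x[a] = xe[a] + h * normal[a];    // step along the normal
            const Vec phi = potentialAt(x);      // image potentials of all m channels
            const Vec& k  = kernel[h - hMin];
            for (std::size_t c = 0; c < phi.size(); ++c)
                magnitude += k[c] * phi[c];      // unstandardized correlation
        }
    }
    Vec F(normal.size());
    for (std::size_t a = 0; a < normal.size(); ++a)
        F[a] = -magnitude * normal[a];           // force directed against the normal
    return F;
}
```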

2.3.3. Supporting vertices

The forces F^p_{e_j} and F^e_{e_j} have to be transferred to the supporting vertices v_{j,k}. The pressure force F^p_{e_j} is equally divided:

    \vec{F}^p_{v_i} = \frac{1}{d} \sum_{e_j \in U^e(v_i)} \vec{F}^p_{e_j}    (10)

Every vertex v_i supports 1/d of all pressure forces F^p_{e_j} in its vicinity. In contrast, the external force is divided among the supporting vertices with respect to the position of the image elements and their potentials. Vertices are more influenced by image potentials from close image elements. Assuming that the position x_e of an image element of edgel e_j is composed as

    \vec{x}_e = \sum_{l=1}^{d} \lambda_{l,j} \, \vec{v}_{l,j}    (11)

(compare Eq. (6)), the λ_{l,j} in the range [0, 1] are near to 1 if x_e is near to v_{l,j} and decrease with increasing distance. The external force F^e_{v_i} acting on a vertex is computed as

    \vec{F}^e_{v_i} = -\sum_{e_j \in U^e(v_i)} \vec{n}_j \sum_{\vec{x}_e \in e_j} \; \sum_{h=h_{\min}}^{h_{\max}} \vec{k}(h) \cdot \vec{\varphi}\!\left(\vec{f}(\vec{x}_e + h \, \vec{n}_j)\right) \lambda_{l,j}    (12)

Here l is the corresponding index of vertices in e_j so that v_i = v_{l,j}. Eq. (12) is motivated by achieving an equilibrium not only of forces but also of turning moments for all external influences and supporting forces F^e_{v_i}. The equilibrium of turning moments was chosen because it enhances the property of finite element models to find segmentation results independent of their sampling.2
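A sketch of how Eqs. (10)-(12) can transfer edgel forces to one supporting vertex; the per-sample barycentric coordinates and kernel responses are assumed to be precomputed.

```cpp
// Sketch of Eqs. (10)-(12): transferring edgel forces to a supporting vertex.
// Each interior image element x_e carries its barycentric coordinates lambda_{l,j}
// (compare Eq. (6)); the per-sample kernel responses are assumed precomputed.
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

struct EdgelSamples {
    Vec normal;                       // n_j
    Vec pressureForce;                // F^p_{e_j} from Eq. (7)
    std::vector<Vec> lambda;          // lambda[s][l]: barycentric coordinate of sample s w.r.t. vertex l
    std::vector<double> response;     // response[s] = sum_h k(h) . phi(f(x_e + h n_j)) for sample s
};

// Accumulate the contributions of one edgel to its supporting vertex with local index l.
void accumulateVertexForces(const EdgelSamples& e, int l, int d,
                            Vec& Fp_vi, Vec& Fe_vi) {
    for (std::size_t a = 0; a < e.normal.size(); ++a)
        Fp_vi[a] += e.pressureForce[a] / d;             // Eq. (10): equal split
    double weighted = 0.0;
    for (std::size_t s = 0; s < e.response.size(); ++s)
        weighted += e.response[s] * e.lambda[s][l];     // Eq. (12): weight by lambda_{l,j}
    for (std::size_t a = 0; a < e.normal.size(); ++a)
        Fe_vi[a] += -weighted * e.normal[a];
}
```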

2.3.4. Deformation force

Internal influences from deformation energies directly result in forces F^d_{v_i} on the vertices. The deformation energy of a flexural rigid body is proportional to its absolute 2nd-order derivative. Therefore, the aim of the deformation force is to reduce the model's 2nd-order derivative. This is easily achieved for the model with discrete vertices:

    \vec{F}^d_{v_i} = \frac{s_d}{|U^v_\sigma(v_i)|} \sum_{v_k \in U^v_\sigma(v_i)} (\vec{v}_k - \vec{v}_i)    (13)

if σ = 1. Then every vertex is pulled with adjustable strength s_d to the average position of all vertices in its vicinity. For σ > 1, we use weights w_n with respect to the distance δ_{i,k} to calculate the average position of U^v_σ(v_i). Here δ_{i,k} is the number of straight edges between v_i and v_k:

    \vec{F}^d_{v_i} = \frac{s_d}{\sum_{v_k \in U^v_\sigma(v_i)} w_n(\delta_{i,k})} \sum_{v_k \in U^v_\sigma(v_i)} w_n(\delta_{i,k}) \, (\vec{v}_k - \vec{v}_i)    (14)

The decreasing sequence w_n(δ_{i,k}): ℕ → ℝ is heuristically chosen. We use

    w_n(\delta_{i,k}) = \frac{A}{A + \delta_{i,k}}    (15)

with A ∈ [1, 3].
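Eqs. (13)-(15) can be combined into one routine; the following sketch assumes that the neighborhood U^v(v_i) and the edge distances δ_{i,k} are already known from the contour topology.

```cpp
// Sketch of the deformation force of Eqs. (13)-(15): every vertex is pulled towards
// the weighted average position of its neighborhood U^v(v_i).
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

Vec deformationForce(const Vec& vi,                      // position of v_i
                     const std::vector<Vec>& neighbors,  // positions of v_k in U^v(v_i)
                     const std::vector<int>& delta,      // edge distance delta_{i,k} per neighbor
                     double sd,                          // strength of the deformation force
                     double A)                           // A in [1, 3], see Eq. (15)
{
    Vec F(vi.size(), 0.0);
    double wsum = 0.0;
    for (std::size_t k = 0; k < neighbors.size(); ++k) {
        double w = A / (A + delta[k]);                   // Eq. (15): decreasing weight
        wsum += w;
        for (std::size_t a = 0; a < vi.size(); ++a)
            F[a] += w * (neighbors[k][a] - vi[a]);
    }
    if (wsum <= 0.0) return F;                           // no neighbors: no deformation force
    for (double& f : F) f *= sd / wsum;                  // normalization as in Eq. (14)
    return F;
}
```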

2.4. Iterative Segmentation

The model detects structures in the image space I with an iterative segmentation algorithm. During segmentation, vertices move until they are in contact with significant edges whose appearance is coded in the filter kernel k(h). The segmentation starts with an initial contour. This contour is either a seed for an inflating segmentation (p > 0) or an instance of the model that encloses the whole image space for a deflating process (p < 0). An initial seed contour is a contour that represents the subset I_s defined as

    I_s = \left\{ \vec{x} \in I \;\middle|\; (\vec{x} - \vec{x}_s)^2 \le r_s^2 \right\}    (16)

with a small initial radius r_s and x_s lying in the center of an object of interest.

After the initialization of the contour, all forces F_{v_i} acting on the vertices are calculated. The vertices are then moved under these influences. The forces are normalized so that the movement of the contour is independent of the size of edgels. For this normalization, we define the supported size O_i for every vertex v_i:

    O_i = \frac{1}{d} \sum_{e_j \in U^e(v_i)} |e_j|    (17)

O_i gives the average amount of image elements in the contour that have influence on F^e_{v_i}. For the normalization of the deformation force F^d_{v_i}, we use the sampling L, which is the average distance of vertices in the contour. All vertices are moved proportionally to the sum of forces:

    \Delta\vec{v}_i = \frac{1}{\gamma} \left( \frac{1}{L} \vec{F}^d_{v_i} + \frac{1}{O_i} \left( \vec{F}^p_{v_i} + \vec{F}^e_{v_i} \right) \right)    (18)

This movement corresponds to the movement of objects under the influence of liquid friction with coefficient γ. The shift Δv_i of vertices need not be whole-numbered. We therefore store the coordinates v_i as vectors of ℝ^d and limit this space by the bounds of I.
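A sketch of the normalized update step of Eq. (18); the friction coefficient is written as gamma, which is our choice of symbol for the coefficient lost in the source text.

```cpp
// Sketch of the normalized vertex movement of Eqs. (17)-(18).
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

void moveVertex(Vec& vi,          // position of v_i, updated in place
                const Vec& Fd,    // deformation force
                const Vec& Fp,    // pressure force
                const Vec& Fe,    // external force
                double Oi,        // supported size of v_i, Eq. (17)
                double L,         // sampling: average vertex distance
                double gamma)     // liquid-friction coefficient
{
    for (std::size_t a = 0; a < vi.size(); ++a)
        vi[a] += (Fd[a] / L + (Fp[a] + Fe[a]) / Oi) / gamma;   // Eq. (18)
}
```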
The movement of vertices changes their distance and therefore the size of the finite elements. After each iterative movement, the contour is resampled. This resampling regards all Cartesian distances L_{i,k} of adjacent vertices v_i and v_k:

- If the distance L_{i,k} is above an upper bound L_max, a new vertex v_l is created at the position (v_i + v_k)/2. All edgels that are supported by both v_i and v_k are duplicated. In one set of duplicates, v_i is exchanged for v_l; in the other set, this exchange is done with v_k.
- If L_{i,k} is below a lower bound L_min, a new vertex v_l is also created at the position (v_i + v_k)/2. All edgels that contain both v_i and v_k are deleted. For edgels that contain either v_i or v_k, this vertex is replaced by v_l. Thereafter, v_i and v_k are deleted as well.

Both operations are sketched below.
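A minimal sketch of the two resampling operations, operating on an assumed index-based edgel type.

```cpp
// Sketch of the resampling operations described above. A minimal edgel type is
// redefined here; the split duplicates the edgels shared by v_i and v_k, the merge
// collapses them onto the new vertex v_l placed at their midpoint.
#include <algorithm>
#include <vector>

struct Edgel { std::vector<int> v; };   // indices of supporting vertices

static bool contains(const Edgel& e, int i) {
    return std::find(e.v.begin(), e.v.end(), i) != e.v.end();
}

// Split: distance L_{i,k} > L_max. vl is the index of the newly created midpoint vertex.
void splitEdge(std::vector<Edgel>& E, int vi, int vk, int vl) {
    std::vector<Edgel> created;
    for (Edgel& e : E) {
        if (!contains(e, vi) || !contains(e, vk)) continue;
        Edgel dup = e;                                        // duplicate the edgel
        std::replace(e.v.begin(),   e.v.end(),   vi, vl);     // one copy: vi -> vl
        std::replace(dup.v.begin(), dup.v.end(), vk, vl);     // the other: vk -> vl
        created.push_back(dup);
    }
    E.insert(E.end(), created.begin(), created.end());
}

// Merge: distance L_{i,k} < L_min. Edgels containing both vertices are deleted,
// all remaining references to vi or vk are redirected to vl.
void mergeEdge(std::vector<Edgel>& E, int vi, int vk, int vl) {
    E.erase(std::remove_if(E.begin(), E.end(),
            [&](const Edgel& e) { return contains(e, vi) && contains(e, vk); }),
            E.end());
    for (Edgel& e : E) {
        std::replace(e.v.begin(), e.v.end(), vi, vl);
        std::replace(e.v.begin(), e.v.end(), vk, vl);
    }
}
```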

The resampling results in changes of the represented shape. These changes can be minimized if the order of the resampling operations is optimized and the new vertex v_l is placed at a more appropriate position than the center of v_i and v_k.14 So far, we heuristically set (1/3) L_max ≤ L_min ≤ (2/5) L_max and estimate L = (L_min + L_max)/2.

Every vertex keeps track of its own position during the previous iterations. If the Cartesian distance from the current position to one of the prior positions is below a minimum value L_f, the vertex is frozen. No forces are calculated for frozen vertices; they remain at their current position. The segmentation process ends if all vertices are frozen or if a maximum number of iterations is reached. The positions of frozen vertices still contribute to the deformation force F^d_{v_i}. This bounds the movements of adjacent active vertices and contributes to an implicit annealing of the contour: the activity of vertices decreases in the vicinity of already frozen vertices.
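The freezing criterion can be sketched as a simple test against the stored position history; the function name and the history container are assumptions.

```cpp
// Sketch of the freezing criterion described above: a vertex that returns to within
// L_f of one of its previous positions is excluded from further movement.
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

bool shouldFreeze(const Vec& current, const std::vector<Vec>& history, double Lf) {
    for (const Vec& past : history) {
        double d2 = 0.0;
        for (std::size_t a = 0; a < current.size(); ++a)
            d2 += (current[a] - past[a]) * (current[a] - past[a]);
        if (std::sqrt(d2) < Lf) return true;   // close to a prior position: freeze
    }
    return false;
}
```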

Figure 2. 2-D segmentation result of a radiograph.
2.5. Handling of Collisions

Following the movement and resampling of the contour, collisions of finite elements might occur. A collision of finite elements is detected if {e_j ∩ e_k} ≠ {} for any j ≠ k, that is, if image elements are found that belong to the affine simplices of two different edgels e_j and e_k. For the two-dimensional instance of the model, such a cut of two edgels can easily be resolved by exchanging the two vertices v_{j,1} and v_{k,1}. In higher dimensions, the handling of collisions is more complicated and requires the following algorithm:

First, all finite elements that are involved in collisions are deleted. The resulting contour does not fulfill the conditions for a consistent surface given in Sec. 2.2. Afterwards, vertices are deleted from the contour if they support no edgels. Then the cuts that exist in the inconsistent model are extracted. The resulting cut objects are contours of dimension d - 1 that are based on vertices in an image space of dimension d. E.g., a cut object in a triangulated surface is a non-planar polygon. For the four-dimensional model, this polygon occurs at one point in time, changes shape and size, and vanishes again. Such cut objects are found by tracking coherent inconsistencies in the model. These inconsistencies are formed by d - 1 vertices that support only one edgel. Such vertices are used to create an edgel in the cut object of dimension d - 1. Then all vertices are checked: if they support edgels in more than one cut object, the vertices as well as all supported edgels are deleted, because this constellation cannot be resolved by the following consistency operations.

For every cut object, its size, the location of its center of gravity, and an average normal vector are computed. If two cut objects approximately match in size, position, and orientation, they are connected with new finite elements to close both cuts. If no matching cut object is found, a single cut object is closed by the creation of a new vertex in its center of gravity and the creation of new finite elements that connect this new vertex with all vertices in the cut object. After this operation, one or more coherent subsets of I are represented by the model. Depending on the quantification task, supernumerary objects are either accepted or deleted.
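A sketch of the collision test: it assumes that the set of image elements covered by each edgel's affine simplex is already available (e.g., from the rasterization used for the external forces) and reports every pair of edgels that share an image element.

```cpp
// Sketch of the collision detection described above: two edgels collide if their
// rasterized affine simplices share at least one image element.
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

using Element = std::vector<int>;   // integer coordinates of one image element

std::vector<std::pair<int, int>>
findCollisions(const std::vector<std::set<Element>>& rasterizedEdgels) {
    std::vector<std::pair<int, int>> collisions;
    for (std::size_t j = 0; j < rasterizedEdgels.size(); ++j)
        for (std::size_t k = j + 1; k < rasterizedEdgels.size(); ++k)
            for (const Element& x : rasterizedEdgels[j])
                if (rasterizedEdgels[k].count(x)) {   // shared image element found
                    collisions.push_back({(int)j, (int)k});
                    break;
                }
    return collisions;
}
```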

3. RESULTS

The model has been implemented in C++ on different UNIX-based platforms. The implementation consists of polymorphic objects which create an adequate instance of the model according to the present image material. We present examples for instances of the model in 2, 3, and 4 dimensions.

Figure 3. A coherent segmentation result of an image sequence, shown in (x, y, t) space.

3.1. 2D Instance of the Model

We use the model to detect the contour of body parts in radiographs. The shape of the detected contour is used as one method to categorize radiographs, which is necessary for content-based image retrieval in medical applications.15 To detect a contour in radiographs, we initialize the model at the border of the image and apply negative pressure (p = -2). An exemplary radiograph of a right hand (Fig. 2) is 357 × 540 pixels in size and has 256 gray values. The contour is sampled with L_max = 20, L_min = 8. We set the image potential proportional to the image values w using φ(w) = 0.0138 w. The filter kernel is located only on the inside of the contour: its size is h_min = -3, h_max = -1 and its values are set to k([-3, -2, -1]) = [1/2, 1, 3/2]. The strength of the deformation forces is set to s_d = 0.2. The segmentation result is found after 320 iterations. These parameters can be applied to a wide variety of skeletal radiographs and were set using an automated method.16
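For reference, the parameter values reported above can be collected in one configuration record; the struct itself is our illustration, and the fractional kernel values follow our reading of the text.

```cpp
// Configuration sketch of the reported parameter set for skeletal radiographs.
#include <vector>

struct BalloonParameters {
    double p    = -2.0;                       // negative pressure: deflating contour
    double Lmax = 20.0, Lmin = 8.0;           // resampling bounds
    double sd   = 0.2;                        // strength of the deformation force
    int    hMin = -3, hMax = -1;              // kernel located on the inside of the contour
    std::vector<double> k = {0.5, 1.0, 1.5};  // k(-3), k(-2), k(-1)
    double potentialScale = 0.0138;           // phi(w) = 0.0138 * w
    int    reportedIterations = 320;          // iterations until the reported result
};
```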

3.2. 3D Instance of the Model

As a segmentation result in 3D, we present the tracking of a human lip in an image sequence acquired with a CCD camera. The images were stacked to create a three-dimensional image space. The green channel of the RGB image sequence was used for the detection. The images are 274 × 205 pixels in size; the exemplary short sequence consists of 19 images and shows the pronunciation of the syllable 'ba'. The whole study consists of many sequences that show the pronunciation of different selected syllables and is used for phoniatric diagnoses. A three-dimensional instance of the model segmented the outer contour of the lips in this image stack. Whereas prior approaches using propagation methods often drifted away in later images of the stack, the three-dimensional model yields a coherent segmentation result (Fig. 3). Note that the reduction of the 2nd-order derivative of the contour along the t-axis corresponds to a local motion estimation of the tracked object. To obtain segmentation results for every image in the sequence, the resulting three-dimensional object was cut into slices along the t-axis. The resulting two-dimensional contours were superimposed onto the original images (Fig. 4).

Figure 4. Result of the segmentation for planes in the image sequence.
Not all sequences acquired within this study can be sufficiently segmented using only the green channel. We will therefore extend the detection method to a three-dimensional color space.

3.3. 4D Instance of the Model

We are still gaining experience with the four-dimensional model. As an example, we present the segmentation result of a synthetic dataset. This dataset is an image space I ⊂ ℕ^4 that consists of 32 successive volume datasets of size 32 × 32 × 32 voxels. Each voxel set contains a binary image of a sequence in which a cube is morphed into a sphere. The initialization of the model was an affine 4-simplex consisting of five vertices and five edgels in the center of this four-dimensional image space. The model then performed 25 segmentation iterations with a sampling of L_max = 10, L_min = 3. The resulting contour consists of 644 vertices in ℝ^4 which are connected by 3559 edgels. This contour was then cut with different hyperplanes t = t_c (Fig. 5). Each edgel results in 0, 1, or 2 triangular edgels in ℝ^3, and all these edgels together form a triangulated surface of the object at time t_c.

Additionally, to compare our model with propagation methods, the segmentation process was also started for subsets of I in two and three dimensions. In order to achieve the same sampling when changing from dimension d_1 to d_2, the average distance between vertices L has to be readjusted so that L_1/\sqrt{d_1} = L_2/\sqrt{d_2}. One spatial dataset of 32 × 32 × 32 voxels would result in 86 vertices and 168 edgels with L_max = 8, L_min = 3. The incoherent dataset of 32 three-dimensional contours would therefore consist of 2752 vertices and 5376 edgels. One central slice of 32 × 32 pixels was segmented using L_max = 7, L_min = 3. The resulting contour consists of 16 vertices and edgels.
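A sketch of cutting one tetrahedral edgel with the hyperplane t = t_c, yielding 0, 1, or 2 spatial triangles as described above; degenerate cases (vertices lying exactly on the hyperplane) are ignored and all names are ours.

```cpp
// Sketch: cutting one four-dimensional edgel (a tetrahedron in (x, y, z, t))
// with the hyperplane t = t_c. Interpolation along straddling edges is linear.
#include <array>
#include <vector>

struct Point4 { double x, y, z, t; };
struct Point3 { double x, y, z; };
using Triangle = std::array<Point3, 3>;

static Point3 cutEdge(const Point4& a, const Point4& b, double tc) {
    double u = (tc - a.t) / (b.t - a.t);   // only called when a.t < tc <= b.t
    return { a.x + u * (b.x - a.x), a.y + u * (b.y - a.y), a.z + u * (b.z - a.z) };
}

std::vector<Triangle> cutTetrahedron(const std::array<Point4, 4>& v, double tc) {
    std::vector<int> below, above;
    for (int i = 0; i < 4; ++i) (v[i].t < tc ? below : above).push_back(i);

    std::vector<Point3> section;           // intersection points on straddling edges
    for (int i : below)
        for (int j : above)
            section.push_back(cutEdge(v[i], v[j], tc));

    std::vector<Triangle> tris;
    if (section.size() == 3) {             // 1-3 split: one triangle
        Triangle t1 = { section[0], section[1], section[2] };
        tris.push_back(t1);
    } else if (section.size() == 4) {      // 2-2 split: a quadrilateral, two triangles
        Triangle t1 = { section[0], section[1], section[2] };
        Triangle t2 = { section[1], section[3], section[2] };
        tris.push_back(t1);
        tris.push_back(t2);
    }
    return tris;                           // empty if the hyperplane misses the edgel
}
```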

Figure 5. A coherent 4-D contour, cut with hyperplanes t = t_c at t_c = 1, 8, 15, 22, and 29.
We estimate that the segmentation of the dataset in 1024 two-dimensional slices would result in more than 10,000 edgels and vertices with no coherence in depth and time. The benefit of our model is not only the coherence but also a reduced amount of data for the segmentation result.

4. DISCUSSION

Our generalized finite element model fulfills the task of image segmentation in up to four dimensions and automatically incorporates the knowledge of the spatial and temporal coherence of objects. It is therefore more stable than propagation methods for the segmentation of image sequences or stacks. Additionally, the knowledge of the coherence is automatically contained in the dataset and need not be recomputed from propagated slices. Our new method for the computation of external influences allows the segmentation process to work on the original image data with no need for pre-filtering. The newly introduced adaptable kernel function for external influences allows the model to control the direction of the filter that is used to detect significant gradients in the image. The formulation also holds for a value space of higher dimension, e.g. RGB color images. Our model reads image information from subsets I_j of the image with dim(I) = dim(I_j). This intensifies the ability of active contour models to consider global image information.

With these extensions, we also increase the complexity of the segmentation process. The order of magnitude is dominated by the computation of external influences and was estimated to O(m d^2). As a consequence, segmentation results in 4D exist so far only for synthetic image material with coarse image space resolution. For the use of the model on medical image data, different parameters have to be set. The model loses the ability to reproducibly segment image material if these parameters are heuristically chosen by experts. For the use of the model on medical images, we developed methods to determine the parameters from one manual segmentation result. This allows physicians to perform the required parameter adjustments.16 Though the concepts are formulated in d dimensions with no upper bound for d, instances of the model exist only for 2, 3, and 4 dimensions as real-world space is bounded to 4D.

ACKNOWLEDGMENTS

The author holds a scholarship from the Studienstiftung des Deutschen Volkes (German Scholarship Foundation).

REFERENCES

1. T. McInerney and D. Terzopoulos, "Deformable models in medical image analysis: A survey," Medical Image Analysis 1, pp. 91-108, 1996.
2. V. Metzler, J. Bredno, T. Lehmann, and K. Spitzer, "A deformable membrane for the segmentation of cytological samples," Proc. SPIE 3338, pp. 1246-1257, 1998.
3. M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision 1(4), pp. 321-331, 1988.
4. D. J. Kang, "A fast and stable snake algorithm for medical images," Pattern Recognition Letters 20, pp. 507-512, 1999.
5. D. Kucera and R. W. Martin, "Segmentation of sequences of echocardiographic images using a simplified 3D active contour model with region-based external forces," Computerized Medical Imaging and Graphics 21, pp. 1-21, 1997.
6. C. L. Lam and S. Y. Yuen, "An unbiased active contour algorithm for object tracking," Pattern Recognition Letters 19, pp. 491-498, 1998.
7. S. Chan, C. W. Ngo, and K. F. Lai, "Motion tracking of human mouth by generalized deformable models," Pattern Recognition Letters 20, pp. 879-887, 1999.
8. N. Peterfreund, "The velocity snake: Deformable contour for tracking in spatio-velocity space," Computer Vision and Image Understanding 73(3), pp. 346-356, 1999.
9. B. Levienaise-Obadia and A. Gee, "Adaptive segmentation of ultrasound images," Image and Vision Computing 17(8), pp. 583-588, 1999.
10. L. D. Cohen and I. Cohen, "Finite-element methods for active contour models and balloons for 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence 15(11), pp. 1131-1147, 1993.
11. J. Duann, S. Lin, W. Hu, and J. Su, "Computer system for four-dimensional transoesophageal echocardiographic image reconstruction," Computerized Medical Imaging and Graphics 23, pp. 173-179, 1999.
12. V. Chalana, D. T. Linker, D. R. Haynor, and Y. Kim, "A multiple active contour model for cardiac boundary detection on echocardiographic sequences," IEEE Transactions on Medical Imaging 15, pp. 290-298, June 1996.
13. R. Grzeszczuk and D. Levin, "Brownian strings: Segmenting images with stochastically deformable contours," IEEE Transactions on Pattern Analysis and Machine Intelligence 19(10), pp. 1100-1114, 1997.
14. A. E. Johnson and M. Hebert, "Control of polygonal mesh resolution for 3-D computer vision," Graphical Models and Image Processing 60, pp. 261-285, 1998.
15. T. Lehmann, B. Wein, J. Dahmen, J. Bredno, F. Vogelsang, and M. Kohnen, "Content-based image retrieval in medical applications: A novel multi-step approach," Proc. SPIE 3972, 2000 (in press).
16. J. Bredno, T. Lehmann, and K. Spitzer, "Automatic parameter setting for balloon models," Proc. SPIE 3979, 2000 (this issue).

