
Point–Tracked Quantitative Analysis of Left Ventricular Motion from 3D Image Sequences

Pengcheng Shi, Albert J. Sinusas, R. Todd Constable, Erik Ritman, and James S. Duncan

Departments of Electrical Engineering, Diagnostic Radiology, and Medicine, Yale University, New Haven, CT 06520-8042

Department of Physiology and Biophysics, Mayo Clinic, Rochester, MN 55905

This work was supported by NIH–NHLBI Grant No. R01HL44803.


Abstract
We propose and validate the hypothesis that differential shape properties of the myocardial surfaces can be used to recover dense-field motion from standard three-dimensional image data (MRI and CT). Quantitative measures of left ventricular regional function can then be inferred from the point correspondence maps. The non-invasive, algorithm-derived results are validated at two levels: the motion trajectories are compared to those of implanted imaging-opaque markers in a canine model in two imaging modalities, where sub-pixel accuracy is achieved, and the validity of using motion parameters (path length and thickness changes) to detect myocardial injury is tested by comparison to post-mortem TTC staining of myocardial tissue, where the Pearson product-moment correlation value is 0.968.

Keywords: Motion Analysis, Cardiac Function, Shape Modeling, Algorithm Validation


1 Introduction

It has been a fundamental goal of many efforts in the cardiac imaging and image analysis community to assess the regional function of the left ventricle (LV) of the heart. The general consensus is that the analysis of heart wall dynamics (motion, thickening, strain, etc.) provides quantitative estimates of the location and extent of ischemic myocardial injury. It is also felt that image-based serial analysis of regional function is helpful in assessing the efficacy of therapeutic agents or mechanical interventions (i.e., angioplasty) directed at reducing myocardial injury [14, 22]. However, imaging-based clinical analysis of regional LV function, usually from SPECT and echocardiography, often results in overestimation of true myocardial infarction. More sophisticated quantitative analysis of myocardial dynamics may improve the accuracy of the estimation of myocardial injury, and add to the understanding of the remodeling process following myocardial infarction as it relates to the extent of myocardial injury.

There have been considerable efforts within the medical image analysis and computer vision communities aimed at establishing sparse correspondences between different time instants of a moving object, as well as at mapping the entire object from one time instant to the next, both with applications in cardiac motion analysis. In addition, a significant level of activity has taken place within the magnetic resonance (MR) imaging community regarding the measurement of LV motion and deformation using MR tagging and, to a lesser extent, MR phase contrast velocity images. However, because of the complexity of the heart motion, the lack of unambiguous reference landmarks, and the limitations of existing imaging techniques (i.e., the out-of-plane motion problem for two-dimensional (2D) image slices), accurate assessment of LV motion remains a difficult problem.

In this paper, we propose and validate the hypothesis that differential shape properties of the myocardial surfaces can be used to track the dense-field three-dimensional (3D) trajectories of the surface points over the entire cardiac cycle. The matching and tracking criteria are based upon locating and matching differential geometric surface features, together with a mathematical optimization strategy that combines the data-driven matching with a locally coherent smoothness model. While it may not hold for non-rigid motion analysis in general, we show that geometrical token tracking is a viable strategy for myocardial surface motion analysis, as suggested by an earlier visual study [55].

Very few previous efforts in cardiac image analysis have attempted to validate the accuracy of their results against recognized gold standards. We believe that carefully designed ground truth is essential in validating the results of any image-analysis-based approach. Towards this goal, we have developed an experimental setup using a canine model in which imaging-opaque markers are implanted on the surfaces of the left ventricular wall, and the motion of these markers is used as the gold standard against which the validity of our non-invasive motion analysis algorithms is evaluated. The results of this validation process in two imaging modalities (standard cine MRI and computed tomography (CT)) are presented. In addition, the usefulness of image-derived motion measures in predicting regional myocardial viability is validated against post-mortem TTC staining of myocardial tissue, which is the current clinical standard.

We believe that our proposed method overcomes many of the problems associated with 2D image-based approaches, and complements some of the existing 3D efforts, such as MR tagging and MR phase contrast imaging. It is non-invasive in nature and can follow a dense field of points. It offers accuracy comparable to the gold-standard implanted markers, with sub-pixel average errors for both imaging modalities. It accurately detects the location and extent of myocardial infarction, as evidenced by its high correlation with the TTC staining. Also very importantly, this approach does not rely on special imaging methods such as MR tagging or phase contrast, but rather is applicable to several standard imaging protocols (cine MRI and CT, and we are experimenting with ultrasound images). Our efforts in integrating the shape-based approach with MR tagging and phase contrast information are reported in other papers [53, 51, 34, 32].

2 Previous Efforts

2.1 Medical Image Analysis

Earlier image-based approaches to measuring cardiac motion largely fall into two groups. The first attempts to measure wall motion by detecting gray level changes over time for each pixel, the optical flow strategy [31, 56, 60]. Besides the issue that optical flow is usually not the same as the motion field of the physical material points [26], it is quite difficult to post-process the results to separate the wall motion from other motion in the images. The second group attempts to track the motion of pre-extracted boundaries of the cardiac wall, and recent work often uses geometric measures as tracking tokens. This group includes the earlier chord shortening method [20] and the centerline method [8], which treats the end-diastole (ED) and end-systole (ES) contour difference as a thickness measure but does not track each contour point through the cardiac cycle.

The important work of Slager [55] established the experimental foundation for salient feature tracking. He used visually extracted contour shape landmarks to track point motion over time from 2D image sequences, and validated the results using implanted physical markers. The correlation between the two motion trajectories was good (r = 0.86), which seemed to support the concept of tracking identifiable shape cues over time in cardiac images. More recently, several boundary differential shape matching ideas for LV motion tracking have been reported, using Gaussian curvature under a conformal stretching surface model [30], a hybrid bending and stretching surface model [2], and a 2D elastic rod bending model [32, 40].

One key issue in cardiac motion arises from the fact that the heart is a non-rigid moving object that rotates, translates, and deforms in 3D space. Hence, motion measurements from 2D image sequences are most likely not derived from the same points of the heart wall at different time instants, and thus do not measure the true movement of the heart. Only 3D-based analysis has the potential to accurately describe the true dynamics of the heart. We also want to point out, without much elaboration, that 3D approaches are technically very different from the corresponding 2D approaches, even though conceptually they may be very similar. Extensions of 2D approaches to 3D are neither technically trivial nor always possible, because of the added topological complexity of the third dimension.

2.2 Imaging Physics-Based Methods

2.2.1 MR Tagging

In magnetic resonance tagging, a spatially varying pattern of magnetization is encoded into the myocardium prior to each phase-encoded data acquisition cycle of a spin-echo imaging sequence, forming a grid pattern of image voids at each temporal instant in the cardiac cycle [5, 66]. Motion occurring between the tag pulse sequence and the image pulse sequence results in distortions of the tag pattern, and the grid deformation can be tracked over a portion of the cardiac cycle. Sparse sets of grid motion measurements from several orthogonal tagging grid acquisitions, where each acquisition provides the 2D motion components of tags in that particular imaging plane, are assembled. 3D motion descriptions are then derived from the tracking of the sparse set of tag-tag or tag-boundary crossings.

MR tagging certainly shows promise but still has some major technical limitations. It remains a fundamental problem to track the tags over the complete cardiac cycle, due to decay of the tags over time.

While this is often remedied by performing a second acquisition (a re-tagging) somewhere later in the cardiac cycle, the same tissue elements are generally not tagged in each of the two tagging processes. Hence, the motion trajectory of a tag crossing point in each tagging sequence is not acquired for the same tissue point. The motion fields further derived from the two tagging processes may be incoherent if the myocardium is heterogeneous and the deformation is finite. Also, only sparse in-plane motion, the approximate 2D projection of the 3D motion of the material points being imaged, can be obtained from tagging images directly [41]. In order to obtain data pertaining to motion in three dimensions, 2D tag data must be acquired in two orthogonal views, typically short axis and long axis [6]. Once again, the same tissue elements are not tagged in each of the two views, and thus the motion in each view must be seen as partial 2D data at different points that contributes to an overall scheme aimed at estimating the complete 3D motion and deformation.

There is very active research in MR tag image analysis. Some of the most interesting ideas include the use of finite element models with spring-like interconnections and nonlinear basis functions [65], the use of locally deformable superquadrics [Park94, Park95], the use of stochastic models and statistical estimation techniques [18], high order polynomial fitting of sparse displacements [39], and the use of a B-snake grid [1]. While there is no doubt that MR tagging can provide unique and interesting data regarding LV myocardial movement, quite a few processing steps are required to assemble the data into meaningful measures of 3D motion. Having the MR tag data alone does not necessarily mean that physiologically and clinically useful analysis is forthcoming. The proper choice of image analysis and processing algorithms is essential and remains a significant open question.

2.2.2 MR Phase Contrast

Another imaging-physics-based technique for cardiac motion analysis employs MR phase contrast, which relies on the fact that a uniform motion of tissue in the presence of a magnetic field gradient produces a change in the MR signal phase that is proportional to the velocity of the tissue [46]. The velocity in any particular spatial direction can be estimated by measuring the difference in phase shift between two acquisitions with different first gradient moments. Hence, velocity maps encoded for motion in all three spatial dimensions may be obtained at multiple time instants throughout the cardiac cycle using a phase contrast cine-MR imaging sequence, which may then be used to provide information on tissue displacement, strain, strain rate, and other quantitative measures of motion and deformation [15, 38, 46, 43, 45, 44].

In principle, the instantaneous velocity can be derived for each pixel in an image acquisition. However, because of the size of the acquisition region of interest (ROI) for each pixel, accurate velocity information is only available in the middle of the myocardial wall, not at the wall boundaries. Since the velocity maps themselves only provide instantaneous motion information, an important first step in using these maps to estimate Lagrangian motion parameters is to devise methods that can accurately track each segment of myocardium as it deforms through the heart cycle. Methods have been proposed that use forward integration, backward integration, or combinations of the two, of the velocity information to estimate the displacement vectors [15, 23, 44, 64]. Meyer's approach combines a spatially regularized velocity field with temporal Kalman filtering, and produces more robust results in 2D [34]. All these methods assume some kind of constant-velocity condition between time frames, which generally does not hold even for small time intervals. Errors resulting from the constant-velocity assumption can be significant if the rate of velocity change is large, and small errors accumulate as the velocity is integrated through the cardiac cycle.
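To make the constant-velocity assumption concrete, the following sketch (not the implementation of any of the cited methods) forward-integrates per-frame velocity maps into a trajectory with a simple Euler step; the array layout and nearest-neighbor velocity sampling are illustrative assumptions.

```python
import numpy as np

def forward_integrate(velocity_frames, x0, dt):
    """Forward (Euler) integration of per-frame velocity maps into a trajectory.

    velocity_frames : list of (H, W, 2) arrays, in-plane velocity (pixels/s) per frame
    x0              : (2,) starting position in pixel coordinates (row, col)
    dt              : inter-frame interval in seconds

    Assumes the velocity is constant over each dt -- the assumption criticized
    in the text: errors accumulate when the true velocity changes within a frame.
    """
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for v in velocity_frames:
        r = int(round(np.clip(x[0], 0, v.shape[0] - 1)))
        c = int(round(np.clip(x[1], 0, v.shape[1] - 1)))
        x = x + v[r, c] * dt            # displacement = velocity * dt
        trajectory.append(x.copy())
    return np.array(trajectory)

# Example: a uniform translation field of 1 pixel/frame to the right
frames = [np.tile(np.array([0.0, 30.0]), (64, 64, 1)) for _ in range(16)]
print(forward_integrate(frames, x0=(32, 10), dt=1 / 30.0)[-1])  # ~ (32, 26)
```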

2.3 Computer Vision-Based Motion Analysis

Motion measurement from image sequences has long been studied by computer vision researchers. The primary emphasis has been on the determination of optical flow [4, 25, 27, 36] and the determination of point correspondences between successive frames of rigid objects [7, 21]. These techniques are generally not very successful in estimating the true motion field of deformable objects, because non-rigid motion makes it more difficult to enforce the appropriate smoothness constraints on the optical flow field that are necessary to deal with the aperture problem.

In computer vision, quantifying the boundary motion of non-rigid objects has often been seen as a two-step process: establishing correspondence between certain sub-sampled points of the boundary between time t and time t+1, and then using these correspondences as a guide to solve for a complete mapping of the object boundary between the two time frames. While early approaches used simple global distance measures to find correspondence, more recent matching methods have mainly been based on tracking salient image features over time, from simple tokens such as points or line segments to complex features such as eigenmodes of finite element representations [2, 30, 32, 35, 40, 50, 52].

The task of establishing a complete non-linear mapping between object frames has received more attention. The basic assumption is that estimates of correspondence between sparse individual points on objects are either assumed to be known or can be established based on some global distance measures. Physically-based finite element models have been used to provide a framework for the mappings [59, 29], and modal analysis has been adopted to reduce computational cost [28, 37]. Bookstein used deformable thin-plate splines to interpolate dense correspondences from very sparse initial matches [9]. Locally deformable superquadric approaches have also been used for non-rigid problems [33, 58], including several using MR tagging data to establish initial correspondence [42, 41].

3 Surface Characterization

Since the proposed algorithm is based upon tracking surface shape tokens over time, the endocardial and epicardial surfaces of the LV, as well as their differential properties over the cardiac cycle, are derived from the 4D image data at the start of the process.

3.1 LV Wall Boundary Finding

The boundary finding problem is solved twice for each 2D image slice of a 3D image frame, once for the endocardial border and once for the epicardial one. To specifically address the robustness of the segmentation results, we use an integrated approach which combines gradient-based deformable boundary finding with region-based segmentation [12]. This approach uses Green's theorem to derive the boundary of a homogeneous region-classified area in the image, and integrates this with a gray-level-gradient-based boundary finder. It combines the perceptual notion of edge information with gray level homogeneity, and is more robust to noise and poor initialization.

For either endocardial or epicardial boundary segmentation, a contour is manually initialized within a 2D image frame representing a mid-ventricular slice of a 3D image, halfway between ED and ES. The boundary finding algorithm described above is run, and the final result is used to initialize similar 2D boundary finding problems in image frames spatially and temporally adjacent to the starting image frame. The results are propagated and the process is repeated until all of the contours that make up the LV surfaces in all 3D frames are detected, as sketched below. During the process, a human operator decides beforehand the top and bottom slices to be segmented, and may re-initialize the algorithm if needed. The reproducibility of the algorithm under different initialization and noise conditions has been reported [13]. Figure 1 shows the segmentation results for one of the 3D MR image frames.
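The propagation scheme can be organized as a breadth-first sweep over the (slice, frame) grid. The sketch below is illustrative only: `find_boundary` is a hypothetical stand-in for the integrated gradient/region boundary finder of [12], and the data layout is an assumption.

```python
from collections import deque

def propagate_segmentation(images, start_slice, start_frame, init_contour, find_boundary):
    """Propagate 2D boundary finding over spatially and temporally adjacent slices.

    images        : dict keyed by (slice, frame) -> 2D image array
    init_contour  : manually initialized contour for the starting slice/frame
    find_boundary : callable(image, initial_contour) -> contour; stands in for the
                    integrated gradient/region boundary finder of [12] (hypothetical).
    """
    contours = {}
    queue = deque([((start_slice, start_frame), init_contour)])
    while queue:
        (s, f), init = queue.popleft()
        if (s, f) in contours or (s, f) not in images:
            continue
        contours[(s, f)] = find_boundary(images[(s, f)], init)
        # use this result to initialize the spatially and temporally adjacent problems
        for neighbor in [(s - 1, f), (s + 1, f), (s, f - 1), (s, f + 1)]:
            if neighbor in images and neighbor not in contours:
                queue.append((neighbor, contours[(s, f)]))
    return contours
```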

Some 3D image data, such as MR, may have higher in-plane resolution than inter-plane resolution because of limitations in the imaging protocol. However, it is often desirable to have roughly equally sampled representations in all three dimensions to achieve more accurate quantitative analysis and display [61]. Since we are only interested in the boundary points of the LV, a shape-based contour interpolation method [24], using the chamfer distance transformation [10, 11], is implemented. Here, interpolated contours are computed from linear combinations of the distance maps of the two spatially adjacent data contours. After the segmentation and contour-interpolation processes, the endocardial and epicardial boundaries are represented by contour stacks at each time instant.
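A minimal sketch of the shape-based interpolation idea, assuming filled contour masks and using SciPy's Euclidean distance transform in place of the chamfer approximation of [10, 11]: adjacent signed distance maps are blended linearly and the interpolated contour is the zero level set.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map of a filled contour mask: negative inside, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation between two adjacent contour masks (0 <= t <= 1).

    The interpolated contour is the zero level set of the linearly blended
    distance maps, in the spirit of [24].
    """
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d <= 0.0   # filled region of the interpolated contour

# Example: interpolate halfway between two circles of different radii
yy, xx = np.mgrid[0:64, 0:64]
a = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
b = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
mid = interpolate_slice(a, b, 0.5)   # roughly a circle of radius ~15
```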

3.2 Delaunay Triangulation

We want to tessellate each contour stack to infer a surface representation of the LV boundary, which then allows the calculation of the geometrical parameters of the surface that form the basis of the motion analysis. Delaunay triangulation is chosen as the surface representation because it is well suited to volumetric representation of a set of spatial sample points without a priori connectivity information. In our case of going from a contour stack to a surface, even though we do know the explicit relationship for boundary points within any given slice, which will be enforced, we do not have any prior knowledge of the inter-slice spatial relationship. The two-dimensional Delaunay facets which lie on the boundary of a bounded and constrained Delaunay triangulation constitute the corresponding LV surface [54], which defines a symmetrical and isotropic neighborhood point relationship.

For a set V of N points in 3D, Delaunay triangulation performs a simplex decomposition of the convex hull of the point set, where the vertices of the tetrahedra are contained in the point set [62]. It is a globally optimal triangulation in the sense that the circumsphere of every tetrahedron in the triangulation contains no other point from the point set, and every tetrahedron is as regular as possible. With the exception of degenerate cases, the Delaunay triangulation is a unique minimal shape representation for a given set of points, and can be computed efficiently. A modified incremental point insertion algorithm is implemented [62].
For LV surface representation, the distribution of the points is limited to a certain subspace of $\mathbb{R}^3$, and a subset of the points has a priori connectivity relationships. A bounded and constrained Delaunay triangulation is developed [54], and a volumetric tessellation of both endocardial and epicardial points is constructed.

The Delaunay facets which lie on the outside boundary of the triangulation constitute the epicardial surface, while the facets which lie on the inside boundary constitute the endocardial surface. The surfaces are left open at the top and bottom slices. Figure 2a shows a Delaunay-tessellated MR endocardial surface.

The circumspheres of the Delaunay triangles express a special adjacency relationship among spatial points [63]. For any point Q, its natural neighbor points are those points which share a Delaunay edge with Q. For example, the points Fi in Figure 2b are the natural neighbor points of Q. We can further define multiple orders of natural neighbor relationships of spatial points from the Delaunay triangulation, and hence measure the natural closeness of arbitrary points [54]. Even though a Euclidean distance measure performs a similar function, the natural neighbor relationship takes into account the geometrical relations between points, and thus provides a more accurate description of the relationship. In the remainder of the paper, we use this natural neighbor relationship extensively for defining surface patches and search windows.
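The sketch below shows how first- and higher-order natural neighbors can be read off a 3D Delaunay triangulation using SciPy; it uses an unconstrained triangulation and so does not reproduce the bounded and constrained construction of [54].

```python
import numpy as np
from scipy.spatial import Delaunay

def natural_neighbors(points, order=1):
    """Natural neighbors (up to the given order) from a 3D Delaunay triangulation.

    points : (N, 3) array of surface points (e.g., a stack of boundary contours).
    Returns, for each point, the set of indices reachable by at most `order`
    Delaunay edges (excluding the point itself).
    """
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    first = [set(indices[indptr[i]:indptr[i + 1]]) for i in range(len(points))]

    neighbors = [set(s) for s in first]
    for _ in range(order - 1):                       # grow to higher-order neighbors
        grown = []
        for i, nbrs in enumerate(neighbors):
            ring = set(nbrs)
            for j in nbrs:
                ring |= first[j]
            ring.discard(i)
            grown.append(ring)
        neighbors = grown
    return neighbors

pts = np.random.rand(200, 3)
print(len(natural_neighbors(pts, order=2)[0]))   # neighbors of point 0 up to order 2
```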

3.3 Surface Curvature Estimation

Since our motion tracking algorithm is shape based, it is closely tied to the characteristics (and implicitly the reliability, reproducibility, and accuracy) of the LV surface shape and its variations over the entire cardiac cycle. We have developed a method which results in intrinsic, locally smooth curvature maps.

3.3.1 Triangulated Surface Smoothing

As described previously, the myocardial surfaces are represented by the aggregate of the two-dimensional Delaunay facets. These polyhedral surfaces, however, often appear faceted. One reason is that each 2D boundary contour from the segmentation is digitized at the resolution of the image sampling lattice, which may not be fine enough for a smooth reconstruction of the continuous contour. A more fundamental reason arises from the fact that the contours of different planes are segmented independently, which means that there are no spatial constraints between contour points of different planes during segmentation, even though the Delaunay triangulation does establish an a posteriori connectivity relationship.

For shape characterization and visualization purposes, the jagged boundary surfaces should be smoothed to achieve locally coherent surface representations. Most existing smoothing methods suffer from the shrinkage problem, however: when the smoothing process is applied iteratively a large number of times, the surface collapses into a point [57].

A two-stage Gaussian smoothing algorithm has been proposed to produce non-shrinking results [57]:

$$x_{new1} = (1 - \lambda_1)\, x_{current} + \lambda_1 \sum_{i=0}^{N-1} \omega_i x_i \qquad (1)$$

$$x_{new2} = (1 - \lambda_2)\, x_{new1} + \lambda_2 \sum_{i=0}^{N-1} \omega_i x_i \qquad (2)$$

Here, $x_{current}$ is a point in the Delaunay triangulation, $\{x_i\}$ is its natural neighbor point set, $\{\omega_i\}$ is the weighting coefficient set associated with the neighbor point set, $x_{new1}$ is the new location of $x_{current}$ after one iteration of the first-stage smoothing, and $x_{new2}$ is the new location of $x_{current}$ after one iteration of the second-stage smoothing. In each iterative smoothing step, a first Gaussian smoothing stage with a positive smoothing factor $\lambda_1$ produces a slightly shrunk surface, and a second Gaussian smoothing stage with a negative smoothing factor $\lambda_2$ then produces a slightly expanded surface. $\lambda_2$ should have slightly greater magnitude than $\lambda_1$ ($0 < \lambda_1 < -\lambda_2$), with the difference determined by a bounded band which ensures that the surface neither shrinks nor expands after each two-stage smoothing step. This method is in fact a linear low-pass filter that removes high curvature variations (usually caused by noise), with the two smoothing factors determining the pass band.

Figure 3 shows the effect of the two-stage non-shrinkage smoothing on an endocardial surface. On the left is the original triangulated surface (rendered), and on the right is the smoothed surface after the two-stage non-shrinkage Gaussian smoothing process ($\lambda_1 = 0.33$, $\lambda_2 = -0.34$, iteration number $n = 10$), where the z-coordinates of the points in the top and bottom contours are fixed. It is quite obvious, at least visually, that the smoothing process preserves the overall size and shape features of the surface. We have applied this non-shrinkage smoothing to all the myocardial surfaces to achieve coherent surface representations.
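A minimal sketch of the two-stage non-shrinking smoothing of Equations (1)-(2), assuming uniform neighbor weights in place of the natural-neighbor weights of [54], and holding a set of anchor vertices completely fixed (the text fixes only the z-coordinates of the top and bottom contours).

```python
import numpy as np

def two_stage_smooth(points, neighbors, lam1=0.33, lam2=-0.34, iterations=10, fixed=()):
    """Two-stage non-shrinking Gaussian smoothing of a triangulated surface (Eqs. 1-2).

    points    : (N, 3) vertex coordinates
    neighbors : list of index lists, the natural neighbors of each vertex
    fixed     : indices whose coordinates are held fixed (simplification of the
                top/bottom-contour constraint described in the text)
    Uniform weights (1/N per neighbor) are assumed here.
    """
    x = np.asarray(points, dtype=float).copy()
    anchors = list(fixed)
    for _ in range(iterations):
        for lam in (lam1, lam2):                      # shrink stage, then expand stage
            avg = np.array([x[nbrs].mean(axis=0) if len(nbrs) else x[i]
                            for i, nbrs in enumerate(neighbors)])
            x_new = (1.0 - lam) * x + lam * avg
            x_new[anchors] = x[anchors]               # keep anchored vertices in place
            x = x_new
    return x
```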

3.3.2 Local Surface Fitting and Curvature Estimation

To analyze the local shape properties of a point p in a neighborhood of a triangulated surface, polynomials are used to approximate this neighborhood in the form of the graph of a differentiable function.

We begin the process by defining a local surface patch S at point p as a discrete surface triangulation consisting of the point p itself (the fiducial point) and a set of neighboring points of p. Two questions arise in constructing a smooth local surface for these points:

1. How do we define the neighborhood of the fiducial point p?

2. How do we define the parameterization of the surface patch S?

The second question is easier to answer: the surface patch is represented by the graph of a differentiable function. A point-dependent local coordinate system is chosen, with its origin at the fiducial point p and its z axis in the normal direction of the surface patch S at p. The normal at p is determined as the weighted average of the outward normal vectors of all the Delaunay facets which share p as one of their vertices, where the weighting coefficient for each facet is its normalized area. Since the xy plane coincides with the tangent plane $T_p(S)$ of S at p, any pair of orthogonal directions in $T_p(S)$ can be chosen as the x and y axes. After this local coordinate system is established, the surface patch S can be represented in the graph form z = h(x, y), which has a one-to-one mapping if the neighborhood of p is well chosen.

Traditionally, the neighborhood of point p is defined as a sphere centered at p with a radius R in $\mathbb{R}^3$, and all the points that fall inside this sphere constitute the neighboring point set [3, 49].

However, depending on the value of R and the roughness and folding of the local surface, z = h(x, y) may not be a unique function. Figure 4a shows the possible multiple-mapping problem in a 2D representation. In this example, the surface patch defined from the neighborhood folds around p, causing the graph function to take multiple values for a single point in $\mathbb{R}^2$. Moreover, since this nearest-Euclidean-distance-based method does not take into account the known connectivity from the triangulation, it may also include points which are disconnected from the valid surface points and should not be in the neighborhood of p at all, as in the case shown in Figure 4b. Of course, these two problems may be avoided by choosing a smaller radius R. However, determining the value of R is difficult without knowledge of the geometry of the underlying surface, which is exactly what we are trying to estimate. An adaptive window for curvature estimation, controlled by the error of the least-squares fit, is another possible solution [17].

In order to choose a neighborhood of p which is unique (no folding) and local (no non-neighboring points), the natural neighbor relationship between surface points is used to define a more precise and flexible neighborhood. First of all, if we only include the 1-order natural neighbors of p in the neighborhood, the graph function will always have a unique mapping.

If the surface patch is well-behaved, which means that the orientation of one facet does not change dramatically from those of its adjacent facets, higher orders of natural neighbor points can also be included in the neighborhood of p. Depending on how many orders of natural neighbor points are included in the neighborhood, the local surface can be considered to have multi-scale detail. Secondly, the natural connectivity ensures that only valid neighbor points will be chosen, since each of these points is connected to at least one other chosen point. This is true even if higher-order natural neighbor points are included in the neighborhood.

Since the Weingarten mapping matrix only depends on the first and second derivatives of the graph function h(x, y), biquadratic functions are all we need to estimate the curvatures of the surface patch. The estimation of the biquadratic graph function involves finding the five coefficients of the polynomial [3, 49]:

$$z = h(x, y) = a_1 x^2 + a_2 xy + a_3 y^2 + a_4 x + a_5 y \qquad (3)$$

If there are at least four neighboring points, say n-1 (n ≥ 5) points, the least-squares estimate of the five coefficients is straightforward. Since natural neighbor points are used for p, three neighboring points is the worst case (two 1-order neighbors in the same slice, one 1-order neighbor in the adjacent slice); we can always include higher-order natural neighbors to obtain at least four neighbors. In matrix form, the coefficient vector a which minimizes the fit error is:

$$a = (A^T A)^{-1} A^T Z \qquad (4)$$

where A is an n × 5 matrix with row i being $[x_i^2 \;\; x_i y_i \;\; y_i^2 \;\; x_i \;\; y_i]$, and Z is the column vector of the $z_i$ values.

Once the coefficient vectors are found, computing the actual differential properties at the fiducial point is straightforward [19].

The graph function and curvature estimation may follow a multi-scale procedure, depending on how many orders of natural neighbor points are included in the neighborhood, to examine the consistency of the shape information across different extents of the chosen neighborhood. Let us denote the k-scale graph function as the one estimated from the neighborhood which includes the 0-order, ..., k-order natural neighbor points.

If certain prior information is available, we can use lower-scale (small k) graph functions to characterize highly curved surface patches to avoid the smoothing effect, and higher-scale (large k) graph functions to characterize flat surface patches to alleviate the noise effect. The consistency of the estimated shape characteristics across scales, such as from higher to lower, may also indicate the goodness and reliability of predominant shape features (corners, ridge lines, etc.) of the surface. From our experiments, it is clear that the significant shape patterns are generally consistent across different scales. The curvature maps from lower-scale graph functions (small neighborhoods) may appear noisier than curvature maps from higher-scale graph functions (large neighborhoods), while the latter may over-smooth the surface shape features. Figure 5 shows the Gaussian curvature, mean curvature, and the two principal curvature maps of an endocardial surface derived from 1-scale, 2-scale and 3-scale graph functions. The motion tracking examples in the remainder of this paper use only 2-scale estimated curvatures.
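A sketch of the biquadratic fit of Equations (3)-(4) and the curvature computation at the fiducial point, assuming the neighbors have already been expressed in the local tangent-plane frame described above; the Gaussian and mean curvatures come from the standard graph-function formulas and the principal curvatures from H ± sqrt(H² − K).

```python
import numpy as np

def quadric_curvatures(local_xyz):
    """Least-squares biquadratic fit (Eqs. 3-4) and curvatures at the fiducial point.

    local_xyz : (n, 3) neighbor coordinates in the local frame of the fiducial point
                (origin at the point, z along the estimated surface normal).
    Returns (K, H, k1, k2): Gaussian, mean, and principal curvatures.
    """
    x, y, z = local_xyz[:, 0], local_xyz[:, 1], local_xyz[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y])       # rows [x^2, xy, y^2, x, y]
    a1, a2, a3, a4, a5 = np.linalg.lstsq(A, z, rcond=None)[0]

    # derivatives of h at the origin
    hx, hy, hxx, hxy, hyy = a4, a5, 2 * a1, a2, 2 * a3
    denom = 1.0 + hx**2 + hy**2
    K = (hxx * hyy - hxy**2) / denom**2                              # Gaussian curvature
    H = ((1 + hy**2) * hxx - 2 * hx * hy * hxy + (1 + hx**2) * hyy) / (2 * denom**1.5)
    disc = max(H**2 - K, 0.0)
    k1, k2 = H + np.sqrt(disc), H - np.sqrt(disc)                    # principal curvatures
    return K, H, k1, k2

# Example: points sampled from the paraboloid z = 0.5*(x^2 + y^2); expect k1 = k2 = 1 at origin
u = np.linspace(-0.1, 0.1, 7)
gx, gy = np.meshgrid(u, u)
pts = np.column_stack([gx.ravel(), gy.ravel(), 0.5 * (gx.ravel()**2 + gy.ravel()**2)])
print(quadric_curvatures(pts))
```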

4 Non-Rigid Motion Tracking

The movement of a set of LV surface points over the cardiac cycle is determined by following local surface shape between successive temporal frames. In the first step of the tracking algorithm, the endocardial and epicardial surfaces at one given time instant are used as reference surfaces and are sub-sampled, creating a sparser set of points (five to ten percent of the total points). Significant shape cues, such as cup and cap points, are always chosen; other points are selected with a random spatial distribution. The reason to use this sparse set of points for the initial match is to avoid cross-over between nearby matches. Under the assumption that the LV surface deforms as little as possible between small time intervals, the sub-sampled surface points are matched and followed over the LV surface sequence under a minimum bending energy model. These initially calculated 3D displacement vectors can be noisy due to inherent data noise and matching uncertainty. A match-confidence-weighted regularization process is utilized to help limit these problems and to compute the dense-field motion for all the surface points.

4.1 Thin Plate Bending Model

Local surface patches on the LV are modeled as thin flexible plates, loosely constrained to deform in a predetermined fashion which will be defined in the next section. The potential energy of an ideal thin flexible plate of elastic material is a measure of the strain energy of the deformation from its equilibrium state, a plane. It is a function of the two principal curvatures of the plate, and is invariant to 3D rotation and translation [16]:

$$\varepsilon = A\, \frac{\kappa_1^2 + \kappa_2^2}{2} \qquad (5)$$

where A is a material-related constant for the particular plate, and $\kappa_1$ and $\kappa_2$ are the two principal curvatures of the surface patch of interest. Note that even though the principal curvatures have associated directions, only their magnitudes are relevant in the potential energy metric. The potential energy maps of an endocardial surface over the entire cardiac cycle (sixteen temporal sampling frames) are shown in Figure 6. In these maps, most of the high potential energy features run through several temporal frames or even the entire sequence. This consistency of shape features over time provides the visual basis for matching surface points based upon geometrical tokens.

We modify the idea of surface potential energy slightly to define the bending energy required to bend a curved plate or surface patch into a newly deformed state as:

$$\varepsilon_{be} = A\, \frac{(\kappa_1 - \hat{\kappa}_1)^2 + (\kappa_2 - \hat{\kappa}_2)^2}{2} \qquad (6)$$

The principal curvatures of the initial surface patch are $\kappa_1$ and $\kappa_2$, while the same parameters of the deformed surface patch are $\hat{\kappa}_1$ and $\hat{\kappa}_2$. This equation assumes that the corresponding points on the surface patches are known, and yields a numerical value measuring the energy required to deform the initial surface patch into the deformed shape. We use the squares of the curvature differences, rather than the differences of the squared curvatures, because the principal curvatures are signed quantities. Obviously, if the surface patch undergoes rigid motion, no bending energy is required for the movement since the shape of the patch does not change; only non-rigid deformation requires bending energy, which measures a form of strain energy.

4.2 Shape-Based Surface Point Match

The bending energy of Equation 6 measures the energy required to deform a surface patch from one state to another. However, the goal of the motion tracking algorithm is to find the point correspondences between temporally successive surfaces in the sequence. We use the bending energy as the matching criterion to obtain the point correspondences, where the essence of the algorithm is the hypothesis that a surface patch deforms as little as possible between successive temporal frames, and that the deformation can be fully characterized by the bending energy.

The motion of the surface patch surrounding each sample point consists of two distinct parts: rigid motion (global and local) and non-rigid deformation. Since rigid motion does not change the shape properties of a surface patch, point correspondences with no deformation can be found by matching surface patches requiring no bending energy; hence, the minimum bending energy criterion can be used for rigid motion. For the non-rigid deformation, we make the assumption that the surface patch is constrained to deform in a fashion such that the extent of the deformation is determined solely by the required bending energy. This assumption makes it reasonable to use the bending energy measure as the matching criterion for the non-rigid deformation as well.

Under the strong assumption that the surface patch surrounding each sample point deforms only slightly and locally within a small time interval, we construct a physically plausible search region on the second surface at time $t_{i+1}$ for each sub-sampled point on the first surface at time $t_i$. For any point at time $t_i$ which is a cup, trough, rut, ridge, dome, or cap point, a search region W is defined as the area surrounding the intersection of the normal vector of the point x(u) at $t_i$ with the surface at $t_{i+1}$. For any point at time $t_i$ which is a saddle rut, saddle, or saddle ridge point, a search region W is defined as the area surrounding the surface point at $t_{i+1}$ nearest to the point x(u) at $t_i$. W consists of the natural neighbor points of the intersected point, often of the same or slightly higher order as in the surface patch fitting process. The points inside W are denoted as the point set $\{x_{W,i},\; i = 0, ..., N-1\}$. Bending energy measures between the surface patch surrounding point x(u) and the surface patches surrounding the points $x_{W,i}$ are computed, and the point $\hat{x}$ within the search window W that has the minimum corresponding bending energy is chosen as the point corresponding to x:

$$\hat{x} = \arg\min_{x_{W,i}} \varepsilon_{be}(x, x_{W,i}) = \arg\min_{x_{W,i}} A\, \frac{(\kappa_1(x) - \kappa_1(x_{W,i}))^2 + (\kappa_2(x) - \kappa_2(x_{W,i}))^2}{2} \qquad (7)$$

Here, we assume that the material constant A is the same for all surface patches surrounding the points $x_{W,i}$ of the search window, and that it does not change from $t_i$ to $t_{i+1}$. Performing the matching process for all the sub-sampled points of the surface at $t_i$ yields a set of sparse, shape-based, best-matched motion vectors $d^{init}(u, t_i) = \hat{x} - x$ for each pair of surfaces derived from the 3D surface sequence.

An example of bending-energy-based matching of an endocardial surface point is shown in Figure 7. The complete trajectory of the point over the cardiac cycle (sixteen temporal frames) superimposed onto the endocardial surface at ES is shown in Figure 7a as an overview, and in Figure 7b as a more detailed blowup. Evidently, the point moves upwards during the contraction phase (ED to ES) and moves downwards during the expansion phase (ES to ED). This out-of-plane motion demonstrates one important reason to use 3D images for LV motion analysis. It should be noted that the trajectory of this point is almost a complete loop, meaning the point moves back to where it begins. Since no periodic motion constraints are imposed on the point correspondence tracking over the cardiac cycle, this result cannot be guaranteed. However, we have begun to work on temporal and periodic motion modeling for two-dimensional image sequences [32]. For our current results in 3D, surface points usually move back near where they begin but not to their exact starting positions, with average differences in the range of 1 to 1.5 pixels.

The bending energy measures for all the points inside each search region are recorded as the basis for measuring the goodness and uniqueness of the matching choices. The value of the minimum bending energy in the search region between the matched points indicates the goodness of the match, because the matching criterion is based on the notion that the smaller the deformation between matched surface patches, the better the match. Denoting this value as $m_g$, we have the following measure of matching goodness:

$$m_g(x) = \varepsilon_{be}(x, \hat{x}) \qquad (8)$$

On the other hand, it is desirable that the chosen matching point be a unique choice among the candidate points within the search window. Ideally, the bending energy value of the chosen point should be an outlier (a much smaller value) compared to the values of the rest of the points (much larger values). If we denote the mean of the bending energy measures of all the points inside window W except the chosen point as $\mu_{be}$ and their standard deviation as $\sigma_{be}$, we define the uniqueness measure as:

$$m_u(x) = \frac{\varepsilon_{be}(x, \hat{x})}{\mu_{be} - \sigma_{be}} \qquad (9)$$

This metric means that if the bending energy of the chosen point is small compared to a conservative summary (the mean minus the standard deviation) of the remaining bending energy measures, the match is quite unique and reliable; otherwise it is not. Obviously, for both the goodness and uniqueness measures, the smaller the value, the more reliable the match. Combining these two measures, we arrive at one confidence measure for the matched point $\hat{x}$ of point x:

$$c(x) = \frac{1}{k_{1,g} + k_{2,g}\, m_g(x)} \cdot \frac{1}{k_{1,u} + k_{2,u}\, m_u(x)} \qquad (10)$$

where $k_{1,g}$, $k_{2,g}$, $k_{1,u}$, and $k_{2,u}$ are scaling constants for normalization purposes. The confidence measures for all the surface matches are normalized to the range 0 to 1: the matches with the highest confidence have a confidence measure of 1, and the matches with the lowest confidence have a confidence measure of 0. In the shape-based work on 2D contour matching [32, 40], a similar goodness measure is used. However, the uniqueness measure in that case is based upon the assumption that the matching metric (the bending energy) is spatially distributed as a Gaussian centered at the matched point. We do not believe this is the situation in the 3D case.
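A sketch of the matching step of Equations (7)-(10) for a single sub-sampled point, assuming precomputed principal curvatures and a supplied search window; the scaling constants and the guard on the uniqueness denominator are placeholder choices, not values from the paper.

```python
import numpy as np

def match_point(k1, k2, k1_next, k2_next, window, A=1.0,
                kg=(1e-3, 1.0), ku=(1e-3, 1.0)):
    """Bending-energy match of one surface point between frames t_i and t_{i+1}.

    k1, k2           : principal curvatures of the point at t_i
    k1_next, k2_next : arrays of principal curvatures of candidate points at t_{i+1}
    window           : indices of the candidate points inside the search window W
    Returns (index of the matched point, confidence of Eq. 10 before global normalization).
    """
    cand = np.asarray(window)
    be = A * ((k1 - k1_next[cand])**2 + (k2 - k2_next[cand])**2) / 2.0   # Eq. 7
    j = int(np.argmin(be))
    best = int(cand[j])

    m_g = be[j]                                                          # goodness, Eq. 8
    rest = np.delete(be, j)
    m_u = m_g / max(rest.mean() - rest.std(), 1e-12)                     # uniqueness, Eq. 9
    conf = 1.0 / (kg[0] + kg[1] * m_g) * 1.0 / (ku[0] + ku[1] * m_u)     # Eq. 10
    return best, conf

# Example: candidate 2 has curvatures closest to (0.8, -0.1)
k1n = np.array([0.1, 0.5, 0.78, 1.4]); k2n = np.array([0.3, 0.0, -0.12, 0.2])
print(match_point(0.8, -0.1, k1n, k2n, window=[0, 1, 2, 3]))
```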

4.3 Optimized Dense Motion Field

A smoothing-interpolation process is needed for accurate estimation of the dense motion field of the surfaces. One reason is that the initial surface matching results are obtained by purely local processes, without any constraint on the motion of neighboring points, even though surface motion of biological origin often varies smoothly in direction and magnitude over space. Another reason is that some form of interpolation is needed to compute the dense motion field for every point on the surface, while temporal correspondences are found only for a set of sub-sampled points. Under the assumption that the displacement fields of the LV surfaces are sufficiently smooth and that any changes of direction and magnitude take place gradually over space, a regularizing smoothing functional which includes both an initial estimate adherence term and a local smoothness term is constructed. Since the reliability of the initial correspondences varies according to their confidence measures, good, unique matches from the initial bending energy matching process should be preserved, while ambiguous matches are smoothed over by their neighboring good and unique matches. The confidence measures from the initial match are used here as smoothing coefficients.

In the continuous domain, the regularizing smoothing function is constructed to be

$$d^{\star}(u) = \arg\min_{d} \int_{S} \left\{ c(u)\,\big[d(u) - d^{init}(u)\big]^2 + \left(\frac{\partial d(u)}{\partial u}\right)^2 \right\} du \qquad (11)$$

Here, S is the surface space, u is the parameter vector representing a point in S, $d^{init}$ and $d^{\star}$ are the initial and final displacement vectors, respectively, and c(u) is the confidence measure. Since we are dealing with discrete triangulated polygonal surfaces, the derivative of the displacement vector d(u) at u is approximated by weighted finite differences over the neighboring points around u:
$$\frac{\partial d(u)}{\partial u} = \sum_{i=0}^{N-1} \omega_i\, \{ d(u) - d(u_i) \} \qquad (12)$$

$$\sum_{i=0}^{N-1} \omega_i = 1 \qquad (13)$$

where $\{d(u_i)\}$ is the set of displacement vectors associated with the 1-order natural neighbor points, and $\{\omega_i\}$ is the natural neighbor weighting set associated with these neighbor points [54]. Furthermore, the discrete version of the smoothing process can be conveniently posed as a series of linear equations, and can be solved iteratively:
$$d(u)^{new} = (1 - c'(u))\, d(u)^{old} + c'(u) \sum_{i=0}^{N-1} \omega_i\, d(u_i) \qquad (14)$$

$$c'(u) = \frac{1}{c(u) + 1} \qquad (15)$$

In the same spirit as the two-stage non-shrinkage Gaussian smoothing of the triangulated surface described above, the regularization process is also solved by a two-stage iterative procedure:

$$d(u)^{new1} = (1 - c_1(u))\, d(u)^{old} + c_1(u) \sum_{i=0}^{N-1} \omega_i\, d(u_i) \qquad (16)$$

$$d(u)^{new2} = (1 - c_2(u))\, d(u)^{new1} + c_2(u) \sum_{i=0}^{N-1} \omega_i\, d(u_i) \qquad (17)$$

where $c_1$ and $c_2$ are derived from the confidence coefficient, with $0 < c_1 < -c_2$, and their difference is determined by a bounded band [54, 57].

The solution of this system of linear equations yields a dense set of smoothed motion vectors that adhere to the reliable initial shape-based matches, yet vary smoothly over the surface. Repeating the process for all the LV wall surfaces and assembling the vectors end-to-end over the sixteen-frame temporal sequence, we obtain a dense set of motion trajectories of the myocardial surface points throughout the cardiac cycle. Figure 8 shows the dense endocardial displacement vector field from ED to ES, sub-sampled for visualization purposes; the trajectories are shown against the rendered endocardial surface at ES. We usually have about 1,800 endocardial and 2,600 epicardial points for a typical MRI dataset, and around 8,000 endocardial and 15,000 epicardial points for a typical DSR dataset.
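A sketch of the confidence-weighted relaxation of Equations (14)-(15) in its single-stage form, assuming uniform natural-neighbor weights; the two-stage variant of Equations (16)-(17) follows the same pattern with c1 and c2.

```python
import numpy as np

def regularize_displacements(d_init, confidence, neighbors, iterations=10):
    """Confidence-weighted smoothing of a surface displacement field (Eqs. 14-15).

    d_init     : (N, 3) initial shape-matched displacement vectors (zeros where unmatched)
    confidence : (N,) match confidences in [0, 1] (0 for points with no initial match)
    neighbors  : list of 1-order natural-neighbor index lists
    Uniform neighbor weights are assumed in place of the natural-neighbor weights of [54];
    a modest number of iterations keeps adherence to the reliable initial matches.
    """
    d = np.asarray(d_init, dtype=float).copy()
    c_prime = 1.0 / (np.asarray(confidence, dtype=float) + 1.0)      # Eq. 15
    for _ in range(iterations):
        avg = np.array([d[nbrs].mean(axis=0) if len(nbrs) else d[i]
                        for i, nbrs in enumerate(neighbors)])
        d = (1.0 - c_prime)[:, None] * d + c_prime[:, None] * avg    # Eq. 14
    return d
```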

5 Experimental Validation and Evaluation

To evaluate the efficacy of using image-derived in vivo measures of regional LV function for predicting the extent of regional myocardial injury, we conducted experiments on fasting, anesthetized, open-chest, adult mongrel dogs at the Yale University School of Medicine and the Mayo Clinic. The results reported here are obtained from eight sets of canine magnetic resonance imaging (MRI) data under both baseline and post-infarction conditions, and four sets of canine Dynamic Spatial Reconstructor (DSR) data. The purpose of the animal study is to validate our shape-based boundary motion tracking approach, and to determine the most accurate and reproducible in vivo measures for predicting post mortem myocardial injury.

The first step validates the algorithm-derived motion against the implanted-marker-derived motion (using both MRI and DSR data): the trajectories of surface points which are near implanted imaging-opaque markers are compared to those of the markers. The second step evaluates how well the algorithm-derived measures predict the spatial extent of post mortem myocardial injury (MRI data only): the changes of the motion measurements between baseline and post-infarct data are quantified to estimate the extent of myocardial injury, and are compared to post-mortem histo-chemical staining maps of the actual injury (TTC-stained post-mortem optical photographs).

The clinical usefulness of the proposed motion analysis framework lies in its ability to identify disease states from derived cardiac regional function measures, such as path length and wall thickening.

Since baseline values are most likely to be unavailable for comparison in clinical practice, it is worth pointing out that an effort is underway to establish a database of baseline distributions of these physiological measures, against which the conditions depicted by a new dataset can be evaluated. This will be the subject of a forthcoming paper.

5.1 Image Data

5.1.1 Magnetic Resonance Imaging Data

Pairs of endocardial and epicardial markers were implanted. They were loosely tethered, paired combinations of small copper beads at the endocardial wall (which appear as small dark signal voids in the images) and small Plexiglas capsules filled with a 200:1 mixture of water and Gd-DTPA at the epicardial wall (which appear as small bright signal spots in the images). The markers within each pair were attached by an elastic string, which kept the endocardial and epicardial markers on their respective surfaces during the entire cardiac cycle without restricting myocardial thickening. Marker pairs were placed in four locations on the canine heart: 1) the central infarct zone, 2) the normal zone, 3) the peri-infarct border zone, and 4) the site of coronary occlusion. These zones were identified by the experimental cardiologists who performed the animal surgeries. An example of an MRI image which contains both endocardial and epicardial markers is shown in Figure 9.

ECG-gated magnetic resonance imaging was performed on a GE Signa 1.5 Tesla scanner at Yale. Axial images through the LV were obtained with the gradient echo cine technique. The imaging parameters were as follows: section thickness 5 mm, no inter-section gap, 40 cm field of view, TE 13 msec, TR 28 msec, flip angle 30 degrees, flow compensation in the slice and read gradient directions, 256 x 128 matrix, and 2 excitations. The resulting 3D image set consists of sixteen 2D image slices per temporal frame, and sixteen temporal 3D frames per cardiac cycle. Dogs were positioned in the magnetic resonance scanner for initial imaging under baseline conditions. The left anterior descending coronary artery was then occluded, without noticeable movement of the dog relative to the imaging planes, and images were again acquired after 10 minutes of coronary occlusion. In this way, the pre- and post-occlusion images were roughly registered with each other. Figure 10 shows sixteen spatial slices which cover the entire left ventricle, from apex to base. Sixteen temporal frames of a mid-ventricle slice are shown in Figure 11, from ED to ES and then back to ED.

The location of each implanted marker was determined in each temporal frame by first manually delineating a point within the marker area in one of the 2D image slices, grouping all evidence of the brightest (or darkest) values in the vicinity of that point over all slices within the 3D frame using a connected components algorithm, and then computing the 3D centroid of that cluster of points, weighted by the gray level of each point. The markers were placed far enough apart spatially that trajectories could be formed by simple nearest-Euclidean-distance correspondence between centroids in adjacent temporal frames. It should be noted that, because of the complexity of the marker implantation, not all the markers are found at all times. However, for the eight studies presented here, 101 out of 128 possible markers were found in all sixteen time frames. Finally, we note that in a 2D moving phantom study, the trajectory traced out by tracking the identified marker from a cine gradient-echo MRI acquisition using the technique just described and the known physical trajectory of the phantom were virtually identical.
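A sketch of the marker localization just described: threshold around the manually chosen seed, connected-component grouping, and a gray-level-weighted 3D centroid; the threshold choice is a placeholder.

```python
import numpy as np
from scipy.ndimage import label

def marker_centroid(volume, seed, threshold):
    """Gray-level-weighted 3D centroid of the bright cluster containing a seed point.

    volume    : 3D image array for one temporal frame
    seed      : (z, y, x) manually delineated point inside the marker
    threshold : intensity cut selecting the brightest voxels (placeholder choice);
                for dark (endocardial copper) markers, negate the volume first.
    """
    mask = volume >= threshold
    labels, _ = label(mask)                          # connected components
    marker_label = labels[tuple(seed)]
    if marker_label == 0:
        raise ValueError("seed voxel is below the threshold")
    weights = volume * (labels == marker_label)
    grid = np.indices(volume.shape, dtype=float)
    centroid = (grid * weights).sum(axis=(1, 2, 3)) / weights.sum()
    return centroid                                   # (z, y, x) in voxel units

# Example: a small bright blob around (8, 20, 20)
vol = np.zeros((16, 40, 40)); vol[7:10, 19:22, 19:22] = 100.0
print(marker_centroid(vol, seed=(8, 20, 20), threshold=50.0))
```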

5.1.2 Dynamic Spatial Reconstructor Data

The Dynamic Spatial Reconstructor is a three-dimensional computerized tomography scanner at the Mayo Clinic [48, 47]. It can provide accurate, stop-action images of moving organs of the body. The canine data we use were acquired at 33 msec frame intervals in real time, with a spatial resolution of 0.91 mm in all three dimensions. During the DSR imaging, four pairs of untethered small lead beads (X-ray opaque) were placed on the endocardium and epicardium, and they appear as small bright spots in the images. The markers were hand-traced as the average positions of the most probable points. Since the DSR has very high spatial resolution in all three dimensions, the image data usually contain fifty to sixty slices to cover the LV, and have sixteen to eighteen temporal volumes. Sixteen sample spatial slices which cover the left ventricle are shown in Figure 12, and sixteen temporal frames of a mid-ventricle slice are shown in Figure 13. Here, the spatial slices cover the LV from apex to base, and the temporal frames cover the cardiac cycle from ED to ES and then back to ED.

5.2 Quantitative Measures of LV Motion and Function

The complete trajectories of all the myocardial surface points are built by concatenating the motion vectors between pairs of surfaces from consecutive temporal frames, using the proposed method. Flow vectors running through the surface points over time are connected to form trajectories that track each individual point's motion over the entire cardiac cycle. Indices of LV regional function can be derived from these trajectories, including endocardial and epicardial surface motion measures, LV thickening measures, and LV cross-wall strain measures.

Each of these measures is recorded for each point trajectory in each heart image dataset. A subscript α will be used to differentiate algorithm-derived measures (α = alg) from gold-standard-derived measures (α = gold). The gold standard measurements for the validation are derived from the trajectories of the implanted markers described for the in vivo acute animal model. From these two types of measures, correlations are observed between the shape-tracked functional indices and the marker-derived indices.

We have proposed several functional measures for the analysis of the left ventricle. The single-surface measures (on either the endocardial or the epicardial surface) include the overall path length (the sum of the magnitudes of the flow vectors) and the displacement vector with respect to some known cardiac state such as end-diastole. The LV thickening measures (the cross-wall measures) are computed using cross-wall vectors which relate points on the endocardial surface to corresponding points on the epicardial surface. The magnitude of each of these connecting vectors is taken as a measure of wall thickness, and its temporal change is an indication of wall thickening.
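A sketch of the single-surface and cross-wall measures, assuming trajectories are stored as (T, 3) arrays and that the endocardial-epicardial point pairing is given.

```python
import numpy as np

def path_length(trajectory):
    """Sum of the magnitudes of the frame-to-frame flow vectors of one point.

    trajectory : (T, 3) positions of a surface point over the T temporal frames.
    """
    steps = np.diff(trajectory, axis=0)
    return np.linalg.norm(steps, axis=1).sum()

def wall_thickening(endo_traj, epi_traj, ed_frame=0):
    """Wall thickness over time and its change relative to end-diastole.

    endo_traj, epi_traj : (T, 3) trajectories of corresponding endocardial and
                          epicardial points (the cross-wall pairing is assumed given).
    Returns (thickness per frame, thickening = thickness - thickness at ED).
    """
    thickness = np.linalg.norm(epi_traj - endo_traj, axis=1)
    return thickness, thickness - thickness[ed_frame]

# Example with a toy 16-frame trajectory
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
endo = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
epi = endo + np.array([1.0, 0.0, 0.0])
print(path_length(endo), wall_thickening(endo, epi)[1][:3])
```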

5.3 Motion Validation

In order to validate the motion trajectories resulting from the shape-based tracking algorithm, the gold-standard marker trajectories and the algorithm-derived point trajectories are compared across the acute dog studies (eight MRI datasets and four DSR datasets). To avoid possible distortion of the boundary surfaces by the implanted markers, which could create false shape features, the image intensity around each marker is interpolated to the average gray level of the surrounding tissue of the same type after the marker positions have been detected; this procedure is performed before the contour segmentation process. Since the position of each marker is the 3D centroid of a cluster of candidate points in the image, the markers usually do not lie exactly on the myocardial surfaces. For both the MRI and DSR studies, the surface point nearest to each marker is chosen as the surface marker point for that physical marker. After offsetting the initial positional difference, the algorithm-derived trajectory of this surface marker point is taken as the algorithm-derived trajectory of the physical marker. Then, point-by-point positional errors $\|x_{alg}(i) - x_{gold}(i)\|$ between the algorithm and gold standard trajectories are compiled across all sixteen time frames for each marker, where $x_{\alpha}(i)$ is the 3D position at frame i of the surface marker point (α = alg) or the physical marker (α = gold). The means and standard deviations of the positional errors for all the markers of both the MRI and DSR data are presented in Table 1.

For the baseline MRI studies, the endocardial markers have a mean error of about one pixel and the epicardial markers have a mean error of around two-thirds of a pixel. Since the endocardial points usually move two to three times the distance of the corresponding epicardial points, this disparity in mean error is somewhat expected. However, the endocardial surface often has more shape features than the epicardial surface, which may explain why both have similar error standard deviations. For the post-infarct MRI studies, both endocardial and epicardial markers have mean errors in the half-pixel range, with error standard deviations also much smaller than those under baseline conditions. This is because the infarction dramatically reduces the motion of the myocardium, especially at the endocardium (a thirty percent drop of average path length from baseline to post-infarct for the endocardial surface, versus a ten percent drop for the epicardial surface). This reduction of motion makes large matching errors less likely. Even though it still undergoes larger motion, the feature-rich endocardial surface now has error statistics similar to those of the epicardial surface.

Since the motion tracking algorithm is based on matching local shape features, it is not surprising that the higher resolution DSR baseline studies produce better endocardial tracking results than the MRI studies. The high resolution DSR images pick up more local detail of the endocardium, and thus have smaller mean errors as validated by the markers. Meanwhile, the accuracy for the relatively feature-less epicardium does not improve, in terms of pixel units, over the MRI studies, and is slightly worse than the endocardial results. Evidently, the rich shape features of the endocardial surface are sufficient to track the larger motion present in this case. Overall, the table illustrates that the motion trajectories predicted by the algorithm at points near the implanted markers are very close to the gold-standard motion of the markers, and become more accurate with better image resolution. No mean errors are statistically significantly different from the in-plane resolution of the image data.

A visual illustration of the marker position comparison for one MRI baseline study and one DSR baseline study is shown in Figure 14. Here, the physical marker trajectories (green) and the surface marker trajectories (blue), both from ED to ES, are plotted together against the endocardial surfaces at ES. It is evident from these figures that the algorithm trajectories are usually very close to the corresponding marker trajectories.

5.4 Injury Validation

For five MRI studies under both baseline and post-infarct conditions, two in vivo measures (path length and wall thickening) computed from the algorithm-derived trajectories are used to evaluate how well the shape-based motion tracking approach can distinguish and predict regional myocardial injury. Three other MRI studies are not included because the implanted markers in their normal and/or infarct zones are missing from some of the image frames, which makes it difficult to identify the zones unambiguously.

5.4.1 Path Length

The path length of any surface point is the sum of the magnitudes of all sixteen frame-to-frame displacement vectors of this point over the cardiac cycle, and it measures the overall motion of the point. One physiological hypothesis of myocardial injury is that the post-infarct heart has dramatically decreased motion in the injured zone relative to its baseline condition. Hence, it is reasonable to expect that regional myocardial injury can be measured by comparing the baseline and post-infarct path length maps of the myocardial surfaces. Two validation tests are performed on the path length data.

The first test verifies the validity of using path length changes to differentiate the motion behavior of the normal zone and the infarct zone. For both the baseline and the post-infarct data of the five MRI studies, the mean and standard deviation of the path lengths are computed for the antero-apical infarct zone and the remote normal zone from sampled points near the two implanted markers. These data are shown in Table 2. A two-way analysis of variance is performed on the path length measure, with F-statistics and p values computed for comparisons between a) baseline and post-infarct conditions, b) infarct zone and normal zone, and the interaction of a) and b). As can be seen in the table, the path length measure shows a significant difference in the interaction term, indicating that the algorithm can discriminate between baseline and infarct conditions and can differentiate the motion at the locations of the normal and infarct markers. More detailed testing showed that the significance of the interaction is due to the difference of the measure in the infarct zone between baseline and post-infarct conditions (p < 0.028), and the difference between the infarct zone and the normal zone under post-infarct conditions (p < 0.022).

The second test verifies the algorithm-predicted extent (area) of the infarct zone on the endocardial surface against the post-mortem definition of myocardial injury. Three-dimensional post-mortem endocardial injury zones are found by digitizing color photographs of TTC-stained post-mortem myocardial slices (5 mm thick) of the excised hearts. The endocardial, epicardial, and infarction-zone boundaries of each post-mortem left ventricular slice are hand-traced, aligned, and stacked to reconstruct the LV surface representations. The areas of the endocardial infarction zones are computed and compared to the areas defined by thresholding the magnitudes of the in vivo, algorithm-derived path length measures. Within a search window defined by a sphere centered at the central infarct marker I, with radius equal to the distance between I and the occlusion marker O, points are considered to be inside the infarct zone if their post-infarct path length falls below 80% of the mean infarct-zone baseline value computed from sample points near the infarct-zone marker. Figure 15 shows the injury zones depicted by post-mortem TTC staining (left) and by the in vivo, algorithm-derived trajectories (right). The means and standard deviations of both the post-mortem and in vivo infarct areas are reported in Table 3, as is their Pearson product-moment correlation. The high correlation value points to the utility of these measures for quantifying the extent of endocardial injury.
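As a rough sketch of the two quantities just described (the per-point path length and the thresholded infarct extent), the following fragment illustrates the computation under stated assumptions. The reference-frame point positions `points`, the per-point surface-area weights `point_areas`, and all function names are illustrative and are not taken from the original implementation.

```python
import numpy as np

def path_length(traj):
    """Overall motion of one surface point: sum of the magnitudes of its
    frame-to-frame displacement vectors over the cardiac cycle.
    traj: (n_frames, 3) trajectory of one surface point (mm)."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def infarct_area(points, paths_post, marker_I, marker_O, baseline_mean, point_areas):
    """Algorithm-derived infarct extent: inside a spherical search window
    centered at the infarct marker I (radius |I - O|, with O the occlusion
    marker), flag points whose post-infarct path length falls below 80% of
    the mean baseline path length near the infarct marker, and sum their
    surface-area weights (assumed to come from the surface triangulation)."""
    radius = np.linalg.norm(marker_I - marker_O)
    in_window = np.linalg.norm(points - marker_I, axis=1) <= radius
    injured = in_window & (paths_post < 0.8 * baseline_mean)
    return point_areas[injured].sum()
```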

5.4.2 Wall Thickening

We have shown that the algorithm-derived path length changes of the myocardial surface points predict the existence and extent of the infarct zone. However, the path length is a gross measure combining local deformation and global motion, which may lead to over- or under-estimation of the injury because the motion of the injury zone interacts with that of adjacent normal tissue. Myocardial thickening, measured as the change of LV wall thickness between ED and ES, may provide additional insight into myocardial viability since it is a more local measure, similar to radial strain.¹ For every epicardial point, its thickening measure is defined as:

τ = (τ_ED − τ_ES) / τ_ED        (18)

where τ_ED and τ_ES are the thicknesses of the LV wall at ED and ES, respectively. The wall thickness is computed by emitting a ray from the epicardial point in its inward normal direction until it intersects the endocardial surface; the distance between the intersection and the epicardial point is taken to be the wall thickness.

For both the baseline and the post-infarct data from the five MRI studies, the mean and standard deviation of the thickening measure are computed for the antero-apical infarct zone and the remote normal zone from sampled points near the two implanted markers. The data are shown in Table 4. A two-way analysis of variance is again performed on the thickening measure, with F-statistics and p values computed for comparisons between a) baseline and post-infarct conditions, b) infarct zone and normal zone, and the interaction of a) and b). As can be seen in the table, the thickening measure shows a significant difference in the interaction term, indicating that the algorithm can distinguish infarction. More detailed testing showed that the significance of the interaction is due to the difference of the measure in the infarct zone between baseline and post-infarct conditions (p < 0.010), and the difference between the infarct zone and the normal zone under post-infarct conditions (p < 0.045). This shows the validity of using our algorithm-derived, point-tracked LV wall thickening measure to differentiate the motion behavior of the normal zone and the infarct zone. And since thickening is a gross approximation to the radial strain of the wall, it suggests that shape-based tracking may be quite useful for estimating strains near or across the endocardial and epicardial surfaces.

¹ In fact, this lays the groundwork for our further efforts in estimating transmural properties of the left ventricle; see [51, 53].
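The ray-casting thickness computation and the thickening measure of Equation (18) can be sketched as below. This is an illustrative implementation only: the paper does not specify the intersection test, so a standard Moller-Trumbore ray-triangle routine is used here, and the mesh data structures (`endo_verts`, `endo_faces`) are assumed.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns the distance t
    along the (unit) direction, or None if there is no forward hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                       # ray parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def wall_thickness(epi_pt, inward_normal, endo_verts, endo_faces):
    """Distance from an epicardial point, along its (unit) inward normal, to
    the first intersection with the triangulated endocardial surface."""
    hits = [t for f in endo_faces
            if (t := ray_triangle(epi_pt, inward_normal, *endo_verts[f])) is not None]
    return min(hits) if hits else None

def thickening(tau_ed, tau_es):
    """Equation (18): fractional systolic wall thickening."""
    return (tau_ed - tau_es) / tau_ed
```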

6 Conclusions

In this paper, we have described a shape-based methodology aimed at accurately and objectively determining and quantifying the local, regional, and global dynamic function of the left ventricle of the heart from 3D image data. It is our hypothesis that the shape properties of the myocardial surfaces can be used to track the motion trajectories of a dense field of points over the entire cardiac cycle, and that these trajectories can then be used to predict the existence and extent of myocardial injury. Our shape-based tracking algorithm has been validated in a canine model with eight MRI studies under both baseline and post-infarct conditions, and with four DSR baseline studies. The accuracy of the algorithm's point tracking results has been validated against the gold-standard motion of implanted imaging-opaque markers, with very favorable results. We have shown that the changes in motion parameters (path length and wall thickening) between baseline and post-infarct data are indicative of myocardial function, and that these changes can be used to diagnose the location and extent of myocardial injury, as validated by the post-mortem TTC staining technique. Future research includes incorporating the temporally periodic characteristics of heart motion within the current framework, integrating transmural information from phase contrast and/or tagged MR images, and developing richer surface models. We are also investigating how to build a standardized canine (and, in the future, human) heart atlas of representative baseline function distributions using this method.

Study Type         Image Resolution (mm/pixel)   Endocardial Markers                   Epicardial Markers
MRI Baseline       1.64                          0.993±0.522 pixel (1.623±0.856 mm)    0.685±0.509 pixel (1.123±0.835 mm)
MRI Post-Infarct   1.64                          0.498±0.339 pixel (0.817±0.556 mm)    0.488±0.322 pixel (0.800±0.528 mm)
DSR                0.91                          0.556±0.305 pixel (0.506±0.278 mm)    0.645±0.338 pixel (0.587±0.308 mm)

Table 1: Average positional errors (in pixel units and in mm) between implanted markers and algorithm-derived point positions for eight MRI baseline, eight MRI post-infarct, and four DSR studies. Each cell reports the mean ± standard deviation of the errors for all markers of the same type over all temporal frames.

Path Length (mm)
b factor \ a factor   Infarct Zone   Normal Zone
Baseline              22.1 ± 3.8     21.7 ± 3.5
Post-Infarct          11.0 ± 7.7     19.9 ± 4.5

a: F = 3.46, p = 0.14;   b: F = 4.50, p = 0.10;   a×b: F = 10.44, p = 0.03

Table 2: Means and standard deviations of the algorithm-derived in vivo path length measures (in mm), observed in the infarct zone and in a remote normal zone for five acute canine studies under both baseline and post-infarct conditions.
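The F-statistics and p values reported in Tables 2 and 4 follow a standard two-way layout (zone × condition and their interaction). As a hedged illustration only (the paper does not state which statistical package was used), such a table could be reproduced from the sampled per-point measurements with, for example, statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def two_way_anova(samples):
    """samples: iterable of (zone, condition, value) tuples, e.g.
    ('infarct', 'baseline', 22.5). Returns the ANOVA table with F and
    p values for zone, condition, and their interaction."""
    df = pd.DataFrame(samples, columns=["zone", "condition", "value"])
    model = smf.ols("value ~ C(zone) * C(condition)", data=df).fit()
    return anova_lm(model, typ=2)
```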

Infarct Area (mm²)
Study        Post Mortem   In Vivo
1            963           939
2            811           855
3            833           807
4            749           710
5            623           602
Average      796 ± 124     783 ± 131
Correlation: 0.968

Table 3: Correlation between post mortem and in vivo infarct areas.
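As a quick consistency check, the reported Pearson product-moment correlation can be reproduced directly from the five area pairs in Table 3:

```python
import numpy as np

# Infarct areas (mm^2) from Table 3.
post_mortem = np.array([963, 811, 833, 749, 623], dtype=float)
in_vivo     = np.array([939, 855, 807, 710, 602], dtype=float)

r = np.corrcoef(post_mortem, in_vivo)[0, 1]
print(f"Pearson correlation: {r:.3f}")   # approximately 0.968, matching Table 3
```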

Thickening Measure (τ, in percent)
b factor \ a factor   Infarct Zone    Normal Zone
Baseline              44.6 ± 26.2     34.7 ± 19.5
Post-Infarct          6.1 ± 13.2      37.4 ± 13.4

a: F = 0.82, p = 0.42;   b: F = 4.90, p = 0.09;   a×b: F = 44.7, p = 0.003

Table 4: Means and standard deviations of the algorithm-derived in vivo measure of radial thickness change (in percent), observed in the infarct zone and in a remote normal zone for five acute canine studies under both baseline and post-infarct conditions.

Figure 1: LV boundary segmentation. Left: segmented endocardial and epicardial boundaries superimposed on one baseline MR image; Middle: stacked endocardial contours; Right: stacked epicardial contours.

Figure 2: Left: A Delaunay tessellated MR endocardial surface representation. Right: Local surface patch defined by multiple orders of natural neighboring points. For point Q, points Fi, i = 0, ..., 4, are its 1-order natural neighbor points, and points Si, i = 0, ..., 12, are its 2-order natural neighbor points.

Figure 3: Regular and non-shrinkage Gaussian smoothing of a triangulated surface. Top: the original triangulated endocardial surface (left: rendered; right: wireframe). Bottom: the smoothed triangulated endocardial surface (left: the non-shrinkage Gaussian smoothing result with λ1 = 0.33, λ2 = −0.34, and iteration number n = 10; right: the regular Gaussian smoothing result with λ = 0.33 and iteration number n = 10).
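The non-shrinkage smoothing referred to in this caption is the two-step (λ|μ) scheme of Taubin [57]: a positive-weight smoothing step followed by a negative-weight "inflation" step. A minimal sketch, assuming a precomputed vertex adjacency list and uniform umbrella weights (both assumptions; the paper does not give these details):

```python
import numpy as np

def taubin_smooth(verts, neighbors, lam=0.33, mu=-0.34, iters=10):
    """Non-shrinking two-step Gaussian smoothing in the spirit of Taubin [57].
    verts: (n, 3) vertex positions; neighbors[i]: indices of the vertices
    adjacent to vertex i on the triangulated surface. Regular Gaussian
    smoothing corresponds to applying only the positive (lam) step."""
    v = verts.copy()
    for _ in range(iters):
        for weight in (lam, mu):
            # umbrella operator: average of the neighbors minus the vertex itself
            delta = np.array([v[nb].mean(axis=0) - v[i]
                              for i, nb in enumerate(neighbors)])
            v = v + weight * delta
    return v
```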

Figure 4: Left: Possible multiple mappings in the graph representation of a surface patch. The surface is the solid line, with the surface points shown as filled dots (inside the neighborhood sphere). Obviously, the graph function z = h(x, y) is not a unique mapping. Right: Invalid neighborhood points. The valid surface points are the filled dots, while the unfilled dots are invalid points even though they are inside the neighborhood sphere (the unfilled dots are disconnected from the valid surface points and belong to another surface patch).

Figure 5: Curvature maps of an endocardial surface derived from multi-scale graph functions. The first column is for 1-scale, the second column for 2-scale, and the third column for 3-scale graph functions. From top to bottom row: Gaussian curvature, mean curvature, maximum principal curvature, and minimum principal curvature maps. For the curvature maps, the values from negative to positive are represented by a continuous color spectrum from red to green. The value-to-color scale is the same for all three maps within each row, but it differs between the different curvatures.
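For reference, the four curvature maps in this caption follow from the standard Monge-patch formulas for a graph surface z = h(x, y). A minimal sketch, assuming the partial derivatives have already been estimated (for example, from the fitted local graph functions):

```python
import numpy as np

def monge_curvatures(hx, hy, hxx, hyy, hxy):
    """Gaussian, mean, and principal curvatures of a graph surface z = h(x, y)
    from its first and second partial derivatives at a point (or arrays of points)."""
    g = 1.0 + hx**2 + hy**2
    K = (hxx * hyy - hxy**2) / g**2                                     # Gaussian
    H = ((1 + hx**2) * hyy - 2 * hx * hy * hxy + (1 + hy**2) * hxx) / (2 * g**1.5)
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))                           # guard round-off
    return K, H, H + disc, H - disc    # K, H, kappa_max, kappa_min
```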

Figure 6: Potential energy maps of an endocardial surface over the entire cardiac cycle. The temporal sequence of the maps runs from top to bottom, left to right. The potential energy maps have the same color scale: darker for larger values, brighter for smaller values.

Figure 7: Complete trajectory of one endocardial point over the cardiac cycle (sixteen temporal frames), superimposed onto the potential energy map of the endocardial surface at end systole. Top: an overview; Bottom: a blowup of the trajectory.

Figure 8: Dense (sub-sampled for visualization purposes) endocardial displacement vector field from ED to ES. The trajectory arrows are shown against the rendered endocardial surface at ES.

Figure 9: MRI markers and their relative positions. Left: the upper picture shows the endocardial marker (copper bead on the left) and the epicardial marker (capsule on the right); the lower diagram illustrates the procedure for inserting the endocardial marker into the myocardium through a needle, as well as the relative positions of the markers on the myocardium. Right: MRI image with endocardial and epicardial markers.

Figure 10: Sixteen spatial MRI images which cover the entire left ventricle. The images run from top to bottom, left to right, to cover the LV from apex to base.

Figure 11: Sixteen temporal MRI images of a mid-ventricle slice. The images run from top to bottom, left to right, for the temporal sequence of ED to ES to ED.

Figure 12: Sixteen spatial DSR images which cover the left ventricle. The images run from top to bottom, left to right, to cover the LV from apex to base. Note the implanted markers which appear as the bright dots in the images.

Figure 13: Sixteen temporal DSR images of a mid-ventricle slice. The images run from top to bottom, left to right, for the temporal sequence of ED to ES to ED.

Figure 14: Comparison of algorithm and gold standard marker trajectories. Top (MRI study): algorithm trajectory (blue) and the implanted marker trajectory (green); Bottom (DSR study): algorithm trajectory (blue) and the implanted marker trajectory (green). In both studies, the paths are shown relative to the endocardial surfaces rendered at end–systole.

Figure 15: Comparison of TTC-staining and motion algorithm derived injury zones. Left: rendered endocardial surface along with the injury zone (dark) from post mortem TTC staining; Right: rendered endocardial surface along with the injury zone (dark) from algorithm-derived in vivo motion trajectories.

References
[1] A. Amini, R. Curwen, and J. Gore. Snakes and splines for tracking non-rigid heart motion. In European Conference on Computer Vision, Cambridge, England, 1996.
[2] A. A. Amini and J. S. Duncan. Bending and stretching models for LV wall motion analysis from curves and surfaces. Image and Vision Computing, 10(6):418–430, 1992.
[3] A. A. Amini and J. S. Duncan. Differential geometry for characterizing 3D shape change. SPIE - Mathematical Methods in Medical Imaging, 1768:170–181, 1992.
[4] P. Anandan. A computational framework and an algorithm for the measurement of visual motion. International Journal of Computer Vision, 2:283–310, 1989.
[5] L. Axel and L. Dougherty. MR imaging of motion with spatial modulation of magnetization. Radiology, 171:841–845, 1989.
[6] Azhari et al. Noninvasive quantification of principal strains in normal canine hearts using tagged MRI images in 3D. American Journal of Physiology, 264:H205–H216, 1993.
[7] S. T. Barnard and W. B. Thompson. Disparity analysis of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2(4):333–340, 1980.
[8] E. L. Bolson, S. Kliman, F. Sheehan, and H. T. Dodge. Left ventricular segmental wall motion - A new method using local direction information. In Proceedings of Computers in Cardiology, 1980.
[9] F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 567–585, 1989.
[10] G. Borgefors. Distance transformations in arbitrary dimensions. Computer Vision, Graphics and Image Processing, 27:321–345, 1984.
[11] G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics and Image Processing, 34:344–371, 1986.
[12] A. Chakraborty, L. H. Staib, and J. S. Duncan. An integrated approach to boundary finding in medical images. In Proceedings of the IEEE Workshop on Biomedical Image Analysis, pages 13–22, 1994.

[13] Amit Chakraborty. Feature and Module Integration for Image Segmentation. PhD thesis, Yale University, New Haven, CT, 1996.
[14] J. Chesebro, G. Knatterrud, et al. Thrombolysis in myocardial infarction (TIMI) trial phase I: A comparison between intravenous tissue plasminogen activator and intravenous streptokinase. Circulation, 76:142–154, 1987.
[15] R. T. Constable, K. Rath, A. Sinusas, and J. Gore. Development and evaluation of tracking algorithms for cardiac wall motion analysis using phase velocity MR imaging. Magnetic Resonance in Medicine, 32:33–42, 1994.
[16] R. Courant and D. Hilbert. Methods of Mathematical Physics. Interscience, London, 1957.
[17] C. Davatzikos and R. N. Bryan. Using a deformable surface model to obtain a shape representation of the cortex. IEEE Transactions on Medical Imaging, 15:785–795, December 1996.
[18] T. Denney and J. L. Prince. 3D displacement field reconstruction from planar tagged cardiac MR images. In Proceedings of the IEEE Workshop on Biomedical Image Analysis, pages 51–60, 1994.
[19] M. P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice-Hall, New Jersey, 1976.
[20] H. Gelberg, B. Brundage, S. Glantz, and W. Parmley. Quantitative left ventricular wall motion analysis: A comparison of area, chord and radial methods. Circulation, 59(5):991–1000, 1979.
[21] F. Glazer, G. Reynolds, and P. Anandan. Scene matching by hierarchical correlation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 432–441, 1983.
[22] W. Grossman. Assessment of regional myocardial function. Journal of American College of Cardiology, 7(2):327–328, 1986.
[23] R. J. Herfkens, N. J. Pelc, L. R. Pelc, and J. Sayre. Right ventricular strain measured by phase contrast MRI. In Proceedings of the 10th Annual Society of Magnetic Resonance Medicine, page 163, San Francisco, 1991.
[24] G. T. Herman, J. Zheng, and C. A. Bucholtz. Shape-based interpolation. IEEE Computer Graphics and Applications, pages 69–79, 1992.

[25] E. C. Hildreth. The Measurement of Visual Motion. MIT Press, Cambridge, 1984.
[26] B. K. P. Horn. Robot Vision. MIT Press, Cambridge, 1986.
[27] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17:185–203, 1981.
[28] B. Horowitz and S. Pentland. Recovery of non-rigid motion and structure. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 325–330, Maui, June 1991.
[29] W. C. Huang and D. B. Goldgof. Adaptive-size physically-based models for nonrigid motion analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 833–835, 1992.
[30] C. Kambhamettu and D. Goldgof. Point correspondence recovery in non-rigid motion. In Computer Vision and Pattern Recognition, pages 222–227, June 1992.
[31] G. E. Mailloux, A. Bleau, M. Bertrand, and R. Petitclerc. Computer analysis of heart motion from two dimensional echocardiograms. IEEE Transactions on Biomedical Engineering, BME-34(5):356–364, May 1987.
[32] J. C. McEachen, A. Nehorai, and J. S. Duncan. A recursive filter for temporal analysis of cardiac motion. In Proceedings of the IEEE Workshop on Biomedical Image Analysis, pages 124–133, 1994.
[33] D. Metaxas and D. Terzopoulos. Constrained deformable superquadrics and nonrigid motion tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 337–343, 1991.
[34] F. G. Meyer, R. T. Constable, A. G. Sinusas, and J. S. Duncan. Tracking myocardial deformation using spatially constrained velocities. In Information Processing in Medical Imaging. Kluwer, 1995.
[35] S. Mishra, D. Goldgof, and T. Huang. Motion analysis and epicardial deformation estimation from angiography data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 331–336, Maui, June 1991.

[36] H. H. Nagel and W. Enkelmann. An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:565–593, 1986.
[37] C. Nastar and N. Ayache. Non-rigid motion analysis in medical images: a physically based approach. In Information Processing in Medical Imaging. Springer-Verlag, 1993.
[38] G. Nayler, N. Firmin, and D. Longmore. Blood flow imaging by cine magnetic resonance. Journal of Computer Assisted Tomography, 10:715–722, 1986.
[39] W. G. O'Dell, C. C. Moore, W. Hunter, E. A. Zerhouni, and E. R. McVeigh. Displacement field fitting for calculating 3D myocardial deformations from tagged MR images. Radiology, 195:829–835, 1995.
[40] R. Owen, L. Staib, P. Anandan, and J. Duncan. Measurement of non-rigid motion in images using contour shape descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991.
[41] J. Park, D. Metaxas, and L. Axel. Volumetric deformable models with parameter functions: a new approach to the 3D motion analysis of the LV from MRI-SPAMM. In Fifth International Conference on Computer Vision, pages 700–705, 1995.
[42] J. Park, D. Metaxas, and A. Young. Deformable models with parameter functions: application to heart-wall modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 437–442, 1994.
[43] N. J. Pelc and M. A. Bernstein. Optimized encoding for phase contrast flow measurement. In Proceedings of the 9th Annual Society of Magnetic Resonance Medicine, page 475, New York, 1990.
[44] N. J. Pelc, R. J. Herfkens, and L. R. Pelc. 3D analysis of myocardial motion and deformation with phase contrast cine MRI. In Proceedings of the 11th Annual Society of Magnetic Resonance Medicine, page 18, Berlin, 1992.
[45] N. J. Pelc, R. J. Herfkens, A. Shimakawa, and D. Enzmann. Phase contrast cine magnetic resonance imaging. Magnetic Resonance Quarterly, 7(4):229–254, 1991.

[46] N. J. Pelc, A. Shimakawa, and G. H. Glover. Phase contrast cine MRI. In Proceedings of the 8th Annual Society of Magnetic Resonance Medicine, page 101, Amsterdam, 1989.
[47] E. L. Ritman. State of the art and future perspectives - fast CT for quantitative cardiac analysis. Mayo Clinic Proceedings, 65:1336–1349, 1990.
[48] E. L. Ritman, J. H. Kinsey, L. D. Harris, and R. A. Robb. Some imaging requirements for quantification of structure and function of the heart. In M. D. Short, D. A. Pay, S. Leeman, and R. M. Harrison, editors, Physical Techniques in Cardiological Imaging, pages 189–198. Adam Hilger, Ltd., 1982.
[49] P. T. Sander and S. W. Zucker. Inferring surface trace and differential structure from 3D images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(9):833–854, 1990.
[50] S. Sclaroff and A. P. Pentland. Modal matching for correspondence and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(6):545–561, 1995.
[51] P. Shi, G. Robinson, R. T. Constable, A. Sinusas, and J. Duncan. A model-based integrated approach to track myocardial deformation using displacement and velocity constraints. In Fifth International Conference on Computer Vision, pages 687–692, 1995.
[52] P. Shi, G. Robinson, and J. Duncan. Myocardial motion and function assessment using 4D images. In Visualization in Biomedical Computing, pages 148–159, 1994.
[53] P. Shi, A. J. Sinusas, R. T. Constable, and J. S. Duncan. Volumetric deformation analysis using mechanics-based data fusion: Applications in cardiac motion recovery. International Journal of Computer Vision, in press.
[54] Pengcheng Shi. Image Analysis of 3D Cardiac Motion Using Physical and Geometrical Models. PhD thesis, Yale University, New Haven, CT, 1996.
[55] C. Slager, T. Hooghoudt, P. Serruys, J. Schuurbiers, and J. Reiber et al. Quantitative assessment of regional left ventricular motion using endocardial landmarks. Journal of American College of Cardiology, 7(2):317–326, 1986.
[56] S. Song and R. Leahy. Computation of 3D velocity fields from 3D cine CT images. IEEE Transactions on Medical Imaging, 10:295–306, Sept 1991.

[57] G. Taubin. Curve and surface smoothing without shrinkage. In Proceedings of the Fifth International Conference on Computer Vision, pages 852–857, 1995.
[58] D. Terzopoulos and D. Metaxas. Dynamic 3D models with local and global deformation: Deformable superquadrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(17), 1991.
[59] D. Terzopoulos and M. Vasilescu. Sampling and reconstruction with adaptive meshes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 70–75, 1991.
[60] M. Tistarelli and G. Marcenaro. Using optical flow to analyze the motion of human body organs from bioimages. In Proceedings of the IEEE Workshop on Biomedical Image Analysis, pages 100–109, 1994.
[61] J. Udupa and G. Herman. 3D Imaging in Medicine. CRC Press, Boca Raton, 1991.
[62] D. F. Watson. Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes. The Computer Journal, 24:167–172, 1981.
[63] D. F. Watson and G. M. Philip. Neighborhood-based interpolation. Geobyte, 2:12–16, 1987.
[64] V. Wedeen. Magnetic resonance imaging of myocardial kinematics: technique to detect, localize and quantify the strain rates of active human myocardium. Magnetic Resonance Medicine, 27:52–67, 1992.
[65] A. A. Young and L. Axel. Three-dimensional motion and deformation in the heart wall: estimation from spatial modulation of magnetization - a model-based approach. Radiology, 185:241–247, 1992.
[66] E. Zerhouni et al. Tagging of the human heart by multiplanar selective RF saturation for the analysis of myocardial contraction. In Abstracts of the Annual Meeting of the Society of Magnetic Resonance in Imaging, page 10, San Francisco, 1988.

