
Human Resource Management Review 19 (2009) 203–218


The employment interview: A review of current studies and directions for future research
Therese Macan ⁎
Department of Psychology, University of Missouri-St. Louis, One University Boulevard, St. Louis, MO 63121-4499, USA

Abstract
The employment interview continues to be a prevalent device used by organizations and a popular topic of study among researchers. In fact, over 100 new articles have been published since Posthuma, Morgeson and Campion's [Posthuma, R. A., Morgeson, F. P., & Campion, M. A. (2002). Beyond employment interview validity: A comprehensive narrative review of recent research and trends over time. Personnel Psychology, 55, 1–81] review; these articles are selectively examined and critiqued here. Three main areas that have received considerable research attention during this timeframe are discussed: (1) understanding why “structured” interviews predict, (2) examining the constructs interviews may measure, and (3) investigating the applicant and interview factors that may affect the interview process. Despite advances made in our knowledge of employment interviews, numerous ideas for future research are advanced. Three key areas that deserve immediate research attention are: (1) establishing a common model and measurement of interview structure, (2) focusing on what constructs could be or are best measured, and (3) formulating consistent definitions, labeling and measurement of applicant factors. In this way, employment interview research can be advanced. © 2009 Elsevier Inc. All rights reserved.

Keywords: Employment interviews; Structured interviews; Personnel selection

Employment interviews are a popular selection technique from many viewpoints. In organizations around the world, employment interviews continue to be one of the most frequently used methods to assess candidates for employment (Ryan, McFarland, Baron, & Page, 1999; Wilk & Cappelli, 2003). Among organizational decision-makers, interviews have been found to be the assessment method most preferred by supervisors (Lievens, Highhouse, & De Corte, 2005) and human resources (HR) practitioners (Topor, Colarelli, & Han, 2007). Moreover, applicants perceive interviews as fair compared to other selection procedures (e.g., Hausknecht, Day, & Thomas, 2004), and applicants expect interviews as part of a selection process (e.g., Lievens, De Corte, & Brysse, 2003). In fact, from an applicant's perspective, obtaining a job interview is fundamental to job search success (Saks, 2006).

The employment interview has also been a popular topic among researchers for almost 100 years and is still garnering considerable research interest. Notably, numerous meta-analyses have revealed that “structured” interviews can display relatively high levels of validity without the adverse impact typically found with cognitive ability tests (Conway, Jako, & Goodman, 1995; Huffcutt & Arthur, 1994; Huffcutt & Roth, 1998; McDaniel, Whetzel, Schmidt, & Maurer, 1994; Wiesner & Cronshaw, 1988; Wright, Lichtenfels, & Pursell, 1989). While we have learned much about the employment interview, current research activity suggests that more remains to be uncovered. In the six years since Posthuma, Morgeson and Campion's (2002) comprehensive review of the employment interview literature, over 100 new articles have appeared in journals and books examining the interview.

⁎ Tel.: +1 314 516 5416. E-mail address:
1053-4822/$ – see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.hrmr.2009.03.006



1. Goals and focus of review

Given the broad coverage of issues over the last six years, I do not try to provide an exhaustive review of the selection interview method. Instead, I present a selective and qualitative review of published research since roughly 2002. Readers are directed to previous reviews for work conducted prior to this timeframe (e.g., Posthuma et al., 2002; Moscoso, 2000). With this approach in mind, my goals in writing this review are threefold: (a) to provide the reader with a sense of the current status of research on the employment interview, (b) to examine advances made in our knowledge and note areas to be improved, and (c) to stimulate future research and understanding of employment interviews. Although the interview can be used for a variety of purposes, my review focuses on the use of interviews for selection.

In addressing the goals of this paper, I took a number of steps to identify research studies on employment interviews published in the last six years. Keyword searches (i.e., interview, employment interview, selection interview) of the PsycINFO, ABI-Inform, and Google Scholar databases were conducted. In addition, PsycINFO searches by name of all authors in this review's reference section were performed. Manual article-by-article searches of all journals listed in the references were performed for issues published since 2002. Finally, the reference sections of all articles were examined for additional relevant published articles. From this search, I found that researchers have predominantly focused on the interview itself, and on the interviewer indirectly, in an effort to understand how adding interview process structure affects the reliability and validity of interviewer judgments, as well as the underlying constructs assessed within the framework of the employment interview.
Because the employment interview is an interactional social process between the interviewer and applicant, recent studies have also explored the characteristics and behaviors of applicants and interviewers. This paper is organized around the overarching themes of reliability, validity and construct breadth, within a social framework.

2. Interview (and interviewer) factors

2.1. What factors might moderate the reliability and validity of interviewer judgments?

A major finding of interview research in recent years is that interviewer judgments based on structured interviews are more predictive of job performance than those from unstructured interviews. In fact, many quantitative and qualitative reviews of the employment interview research have concluded that adding structure to the interview process can enhance the reliability and validity of interviewer evaluations (e.g., Conway et al., 1995; Huffcutt & Arthur, 1994; Huffcutt & Woehr, 1999; Posthuma et al., 2002). Given the impact that interview structure has been found to have on criterion-related validity, researchers continue to look for explanations and additional moderators. Schmidt and Zimmerman (2004) explored a measurement-error explanation. They hypothesized that structured interviews show higher validity for predicting job performance than unstructured interviews because structured interview ratings are more reliable. Support for this hypothesis was mixed: when job performance ratings were made for purely research purposes, reliability differences accounted for the validity difference; however, when job performance ratings were made for administrative and research purposes, reliability differences did not appear to account for the differences in validity. The authors called for additional research to examine these mixed findings. Perhaps a more detailed focus on interview structure and the constructs measured would help us understand the criterion-related differences.
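The measurement-error explanation can be expressed with the standard attenuation formula from classical test theory (a textbook identity, not an equation taken from Schmidt and Zimmerman's article):

```latex
r_{xy} \;=\; \rho_{xy}\,\sqrt{r_{xx}\, r_{yy}}
```

where $r_{xy}$ is the observed validity coefficient, $\rho_{xy}$ the correlation between true scores, $r_{xx}$ the reliability of the interview ratings, and $r_{yy}$ the reliability of the job performance criterion. On this account, structured interviews could display higher observed validity than unstructured interviews simply because $r_{xx}$ is larger, even if the true-score correlation were identical.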
Other researchers have examined “structured interviews” only, and situational and behavioral description interviews in particular, to see if a focus on interview format provides any clues regarding interview reliability and validity. Using a meta-analytic approach, Huffcutt, Conway, Roth and Klehe (2004) showed that the type of validation study design moderated the criterion-related validity of situational and behavior description interviews. Concurrent studies displayed higher overall validity estimates than predictive studies across both interview formats. Further exploration is warranted to understand why this moderator effect occurs. Coding studies for other relevant factors, for example, whether the interview score was used to make the selection decision, may prove useful, as Sussmann and Robertson (1986) found that a key statistical factor limiting predictive validity was whether the particular device being evaluated was used to make selection decisions. Job complexity has also been investigated as a possible moderator and has led to a number of differing conclusions. Taylor and Small's (2002) meta-analysis found no support for job complexity as a moderator, while Huffcutt et al.'s (2004) meta-analysis found job complexity to be a moderator of the validity of situational interviews, with less prediction for highly complex jobs. No such result was found for behavioral description interviews. (Huffcutt et al. did caution that the number of studies in some job complexity conditions was small.) A relatively complex element of jobs, “team-playing” behavior or the ability to work in a team, has also been examined, but no differences between situational and behavioral description interviews were found in a predictive validation study (Klehe & Latham, 2005). Both types of interview methods were correlated with later peer assessments of the interviewee's teamwork behavior during the first term of the MBA program.
In a conclusion that seems quite plausible, the researchers note the care with which they developed and implemented the two interview formats to ensure similarity. This level of correspondence between the two interviews may explain why they did not find differences, and why others may have found differences. For instance, while Krajewski, Goffin, McCarthy, Rothstein and Johnston (2006) found evidence supporting a job complexity moderator – “past behavior” interview ratings predicted supervisors' performance ratings of the managers (r = .32, p < .01) while situational interviews did not (r = .09, ns) – their interviews were not necessarily conducted identically. Specifically, for the “past behavior” interview, standardized probes were made available to assessors to prompt responses in the event that the applicant did not
provide sufficient information for scoring the relevant dimension. Such probing and differences in implementation between interviews may be one explanation for the difference found in validities.

Researchers have also focused on the criterion measures of validity studies. For example, situational and past behavior description interviews assessing the “team-playing” behavior of MBA students were both correlated with typical performance measures. However, only situational interviews significantly predicted maximum performance, suggesting the two interviews could be measuring different performance constructs (Klehe & Latham, 2006).

In sum, during the last six years a number of research studies have been conducted comparing situational and behavior description interviews. This research has raised many important issues that need to be investigated further. In particular, future researchers interested in enhancing the reliability and validity of interviewer judgments should focus on differences between situational and behavioral interviews that have not allowed for direct comparisons, such as differences in how the interviews are conducted (e.g., probing). Researchers should also pay close attention to all aspects of structure in the interview process. Elements of structure beyond question type and the use of behaviorally anchored rating forms may impact results (e.g., note-taking, panel interviews, how scores are combined). In fact, researchers might consider first taking a step back to ensure that we hold a common definition and means of determining the degree of structure in interviews.

2.2. What really is a structured interview?

In the studies reviewed, there was much variability among researchers in what they meant when they indicated their interview was structured. Researchers sometimes classified interviews dichotomously as being “unstructured” or “structured,” although the components of the interview that led to such determination varied.
Other general labels used to describe structured interviews included: “situational,” “behavioral,” “conventional structured,” and “structured situational.” Some researchers highlighted specific components of structure to provide justification for their determination of structure. Still others used Huffcutt and Arthur's (1994) nomenclature to refer to their interviews as corresponding to one of four levels of interview structure (e.g., Level 3 or Level 4). Such variation in labels and inconsistency in reporting can result in misleading interpretations of findings and slow our progress in understanding interview structure and its effects. Correct categorization of interviews in meta-analyses may be hampered, and subsequently identification of moderators may be limited. In short, much valuable information is lost. For research to advance, it is important that a common definition of interview structure be embraced by researchers and practitioners. Following the lead of Chapman and Zweig (2005), a measure of interview structure should treat structure as a continuous and multi-faceted construct, used by all parties as a common metric to describe the degree of structure in employment interviews. We have a strong foundation on which to build a common taxonomy and measure of interview structure. Huffcutt and Arthur (1994) and Campion, Palmer and Campion (1997) studied methods of enhancing interview structure and identified two germane categories: components that relate to the interview's content and components that relate to evaluation. Conway et al. (1995) provided a framework in which interviews can be coded according to five progressively higher levels of question standardization and three progressively higher levels of standardization of response evaluation. These components are usually combined into three overall levels (low, medium and high) (Huffcutt, Roth, & McDaniel, 1996).
Perhaps more categories of structure and finer distinctions can be investigated to advance this existing framework. In particular, Dipboye, Wooten and Halverson (2004) propose a three-dimensional model of interview structure that corresponds to the life cycle or process of an interview; that is, from developing the interview, to conducting it, to using the information gathered. In short, the three factors are: (a) job-relatedness of the interview, (b) standardization of the process, and (c) structured use of the data to evaluate the candidate, with key elements under each. They also provide a theoretical discussion of how adding structural elements to the interview process may lead to higher levels of reliability and validity in interviewer judgments. This three-dimensional conceptualization that involves adding structure to the “interview process” may provide a meaningful way to view interview structure and should be further examined. After establishing a common definition and comprehensive framework of interview structure, a measure of these elements needs to be developed. In creating a measure, the relative importance of components needs to be described conceptually and could be determined empirically (e.g., LeBreton & Tonidandel, 2008). Based on previous research, some components may be more important than others (see discussions by Campion et al., 1997 and Dipboye et al., 2004). For example, we know that validity improves if the interview content is developed based on a job analysis (Wiesner & Cronshaw, 1988). Likewise, reliability improves when the method of conducting the interview is standardized and anchored rating scales are used (Maurer, 2002). The measure should also reflect the “degree of structure,” as others have suggested (e.g., Lievens & De Paepe, 2004).
Initial work by Chapman and Zweig (2005) provides a springboard for constructing such a measuring device, although refinements of items to capture the taxonomy and testing of the measure to ensure coverage of important elements would be necessary. Using such a measure, interviews could be given a “score,” which maintains the notion of structure being on a continuum. In addition, this continuous measure would afford researchers the opportunity to examine non-linear relationships of variables with structure, further advancing our knowledge and our ability to offer practical recommendations. Of course, for ease of communication, specific labels or categories could be established corresponding to scores. Viewing structure as multidimensional provides a more comprehensive conceptualization of the various ways interviews are and can be structured. Any and all of a collection of components can be utilized to add structure to an interview. Having a common definition of structure would allow researchers to examine not only what elements are critical for maximizing interview validity, but also which ones are merely desirable. This approach would offer useful information for practitioners, who often work in organizations that take a “satisficing” rather than a maximizing perspective toward selection research (Ployhart, 2006).
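To make concrete what a continuous, multi-faceted structure “score” might look like, consider the following minimal sketch. The component names, rating scale, and weights are hypothetical illustrations chosen only to mirror the content-versus-evaluation distinction discussed above; they are not Chapman and Zweig's (2005) actual instrument.

```python
# Hypothetical sketch of scoring interview structure as a continuous value.
# Component names, the 0-4 rating scale, and the weights are illustrative
# assumptions, not an established measure such as Chapman and Zweig (2005).

# Each component is rated 0 (absent) to 4 (fully standardized).
CONTENT_COMPONENTS = ["job_analysis_based", "same_questions_all", "limits_on_probing"]
EVALUATION_COMPONENTS = ["anchored_rating_scales", "rate_each_answer", "statistical_combination"]

def structure_score(ratings, content_weight=0.5):
    """Return a 0-100 structure score from per-component ratings (0-4)."""
    def mean(keys):
        return sum(ratings[k] for k in keys) / len(keys)
    # Weighted blend of the two germane categories: content and evaluation.
    blended = (content_weight * mean(CONTENT_COMPONENTS)
               + (1 - content_weight) * mean(EVALUATION_COMPONENTS))
    return round(blended / 4 * 100, 1)  # rescale 0-4 ratings to 0-100

ratings = {
    "job_analysis_based": 4, "same_questions_all": 3, "limits_on_probing": 2,
    "anchored_rating_scales": 4, "rate_each_answer": 4, "statistical_combination": 1,
}
print(structure_score(ratings))  # prints 75.0: a continuous score, not a dichotomy
```

A score of this kind preserves the continuum (and permits tests of non-linear relationships), while coarse labels such as “low/medium/high” could still be attached to score ranges for ease of communication.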



Therefore, in order to advance research on employment interviews, researchers and practitioners must begin utilizing a common dimensional model of interview structure as well as a common measure of these elements. Until these advancements are established, I strongly encourage individuals to include as much information as possible regarding the degree of structure of their interviews. At minimum, the presence or absence of Campion et al.'s (1997) 15 components should be noted. In addition, all constructs or dimensions the interview is designed to assess should be reported (Arthur & Villado, 2008), the importance of which will become clear later in this paper.

2.3. What has research recently found with regard to interview structure components?

Researchers continue to investigate the effects of various means of enhancing interview structure. Three of these components have received recent scrutiny and are reviewed.

2.3.1. Behaviorally-anchored rating scales or scoring guides

A key element of interview structure is establishing a standard process of evaluation (Campion et al., 1997). A number of studies in the past six years have documented the importance of this aspect of interview structure across various interview formats and criteria. Maurer (2002) examined the use of behaviorally anchored rating scales (vs. conventional scales with evaluative anchors) and the use of job experts as interviewers (vs. students) using situational interview questions. He found evidence showing that both job content experts and students rated videotaped interviews with greater accuracy when using behaviorally anchored scales than when using the conventional format. No difference was found for job expertise. The necessity of a scoring guide for behavioral interviews has been demonstrated (Taylor & Small, 2002; Klehe & Latham, 2006).
Situational and behavioral description interviews designed to assess team-playing behavior correlated with peers' teamwork behavior scores when behavioral scoring guides were used (Klehe & Latham, 2005). Both situational and behavioral description interviews also predicted interviewees' GPA when scoring guides were provided (Day & Carroll, 2003). Further, telephone interviews in which the interviewer used descriptively-anchored rating scales for both situational and past behavior questions revealed high criterion-related validities with supervisors' performance ratings (and no moderating effect of interviewee prior work experience) (Gibb & Taylor, 2003). Finally, Honer, Wright and Sablynski (2007) showed that structuring the rating process by using anchored rating scales for “puzzle” interview questions resulted in acceptable inter-rater reliabilities. Based on these findings, the use of scoring guides with behavioral benchmarks appears to be beneficial for interview reliability and validity.

2.3.2. Note-taking

Another component of structuring the interview process that has been investigated in the last six years is interviewer note-taking. The extent to which interviewers take notes or are instructed to do so during an interview is uncertain. Only occasionally do researchers mention whether interviewers in their study took notes (e.g., Klehe & Latham, 2005). Middendorf and Macan (2002) found that note-taking may be important for memory and legal reasons, but not necessarily for improving the accuracy of interviewer judgments. While note-taking itself can increase the cognitive demand placed on interviewers, these findings have not yet been generalized to situations in which interviewers actually conduct the interview (which may further tax the interviewer cognitively); for example, in Middendorf and Macan (2002) interviewers watched videotaped interviews.
In addition, examining the effect of note-taking in conjunction with other means of structuring the interview process on interviewer reliability and validity would advance our understanding of this practice.

2.3.3. Panel interviews

Panel interviews, also referred to as board interviews or team interviews, consist of two or more interviewers who together interview one candidate and combine their ratings into an overall score. Human resource professionals typically have favorable perceptions of panel interviews, especially those who have had prior experience conducting them (Camp, Schulz, Vielhaber, & Wagner-Marsh, 2004). In addition, panel interviews are another means of adding structure and are expected to result in increased rater reliability and validity (Campion et al., 1997). However, Dixon, Wang, Calvin, Dineen, and Tomlinson (2002) reviewed past research conducted exclusively on panel interviews and found that prior findings are conflicting and inconclusive. Reasons for this confusion include a lack of consistency in the performance criteria used to evaluate the predictive validity of panel interviews. Research continues to examine a variety of issues related to panel interviews. Some studies have focused on the relational demography of the interview panel with regard to race and its effects on interview scores. Two separate studies found that the racial composition of the interview panel affected judgments in ways consistent with similarity-attraction and social identity theories (McFarland, Ryan, Sacco, & Kriska, 2004; Buckley, Jackson, Bolino, Veres & Field, 2007). In general, interviewer ratings showed a same-race bias and differences between panels dependent upon the racial composition of the panel, but the sizes of the effects were small (Buckley et al., 2007). With this caveat, it appears important to consider not only the interviewer's and applicant's race, but also the race of other interview panel members.
Consistent with prior studies showing differential interviewer validity (Posthuma et al., 2002), Van Iddekinge, Sager, Burnfield and Heffner (2006) observed that criterion-related validities varied considerably across both interview panels and individual interviewers for all criteria. Their meta-analytical findings suggest that most or all of the variance for many of the validities may be due to statistical artifacts. Examining the individual differences among raters within panel interviews can serve as a useful mechanism for better understanding panel interview reliability and validity (Dipboye, Gaugler, Hayes, & Parker, 2001;
Van Iddekinge et al., 2006). The manner in which the ratings of panel interviewers are combined (e.g., consensus, statistical) may also be an important consideration. Findings to date suggest that panel interviews might not necessarily provide the psychometric benefits expected, but could be important for reasons related to perceived fairness. Future research could explore the social dynamics and group decision-making processes that potentially operate in panel interviews. For example, Bozionelos (2005) suggests that some members of an interview panel may engage in political games and power struggles, which affect the decisions they make.

2.4. Why don't interviewers typically use structured interviews?

Despite the evidence showing that interviews containing high levels of structure can be valid predictors, surveys show that managers, HR professionals, and organizations infrequently use them (Klehe, 2004; Lievens & De Paepe, 2004; Simola, Taggar, & Smith, 2007; van der Zee, Bakker, & Bakker, 2002). Conceptualizing interview structure as a continuous variable with various levels along two dimensions, most HR professionals reported using interviews with a moderate degree of structure (Lievens & De Paepe, 2004). Interviewers typically engaged in interviews in which they had identified the topics beforehand (i.e., a moderate level of question standardization) and rated candidates on multiple established criteria (i.e., a moderate level of response scoring). One has to wonder to what extent research studies conducted to date that have denoted using “structured interviews” more closely resemble these moderately structured interviews. A number of mechanisms might be affecting the use of structural elements in employment interviews.
For example, Lievens and De Paepe (2004) found that interviewers' concerns about (a) having discretion in how the interview is conducted, (b) losing the informal, personal contact with the applicant, and (c) the time demands of developing structured interviews were all related to interviewers' use of less structured interviews. Interviewer individual differences, such as cognitive style and need for power, also play a role (Chen, Tsai, & Hu, 2008). Finally, there is the tendency for operational and HR personnel to use “satisficing” rather than maximizing selection practices. Consequently, organizational factors (e.g., organizational climate) as well as interviewer factors (e.g., knowledge of structure, motivation) need to be addressed in future assessments of interview validity and content (see Dipboye, 1994, 1997 for a conceptual model of factors affecting interviewers' use of high-structure interviews). Another strategy that may encourage interviewer use of structured interviews stems from research conducted by Brtek and Motowidlo (2002) on interviewer accountability. Interviewers held accountable for the procedures they followed – procedure accountability – made judgments about videotaped interviewees that correlated more strongly with supervisory ratings of performance than did those of interviewers held accountable for the outcome or accuracy of their judgments – outcome accountability. Perhaps requiring interviewers to justify the procedures they followed in making their ratings may not only increase interviewer use of structured interview procedures, but also yield better judgments.

In summary, studies have provided information regarding the role of potential moderators and the effects of greater interview structure on enhancing the reliability and validity of interviewer judgments. Comparisons between behavioral and situational interview formats have also uncovered information as to why interviews may predict job performance.
Yet the reason why structured interviews are so much more predictive and reliable than unstructured formats has not been conclusively established. One interesting approach to understanding the reliability and validity benefits afforded by greater interview structure has been to investigate the dimensions that are actually measured in employment interviews, and to explore whether other, possibly more performance-relevant constructs can be measured.

2.5. Employment interview construct validity evidence

Using predominantly meta-analytic approaches, researchers have sought to determine the constructs measured in the interview. These studies have employed existing selection interviews to understand what underlying constructs these interviews measure. Interview questions are typically examined to identify the variables thought to be measured in different types of interviews. A host of constructs have been examined, including cognitive ability and the Big Five personality dimensions. Findings from these studies conducted in the last six years are reviewed.

2.5.1. Do interviews measure applicant cognitive ability?

Meta-analyses have examined whether interviews measure cognitive ability and the potential for interviews to explain additional variance beyond cognitive ability (e.g., Cortina, Goldstein, Payne, Davison, & Gilliland, 2000; Huffcutt et al., 1996; Salgado & Moscoso, 2002). Berry, Sackett, and Landers (2007) conducted a meta-analysis that (a) included more recent studies, (b) excluded data from samples in which interviewers may have had access to cognitive test scores, and (c) addressed specific range restriction issues. They found correlations between interviews and cognitive ability scores, but of a lower magnitude than previous meta-analyses (full sample, fully corrected correlation of .27). The level of interview structure and the content of the interview moderated the correlations.
High interview structure tended to result in lower interview–cognitive test correlations; situational interviews tended to be most correlated and behavioral description interviews least correlated with ability scores. Also, interview–cognitive test correlations increased as the uncorrected criterion-related validity of interviews increased and as job complexity decreased. Berry et al. (2007) concluded that “the interview may be a useful supplement to cognitive tests for many employers” (p. 867). The soundness of this conclusion rests on the sample of interviews included in their meta-analysis and depends on the constructs an interview is designed to measure. Thus, it is necessary to denote in meta-analyses the constructs measured in the interviews examined; consequently, researchers must describe in future studies the constructs measured in their interviews to allow such
inclusion. Clearly, an interview could be developed to measure applicant cognitive ability (although arguably the interview may not necessarily be the best or preferred method for measuring cognitive ability).

2.5.2. Do interviews measure applicant personality?

Findings provide evidence that personality inferences are also made in selection interviews. Huffcutt, Conway, Roth and Stone (2001) found that personality traits and social skills were the most frequently measured constructs in the interview studies they examined. Across a variety of jobs, interviewers referred to all Big Five personality dimensions in their notes, but more frequently to Agreeableness (24.5%) and Extraversion (22.4%) (van Dam, 2003). Interviewers seem to be using the interview method to infer applicant personality characteristics. Nevertheless, as has been noted in this review, the type and degree of interview structure seems to moderate this correspondence. Roth, Van Iddekinge, Huffcutt, Eidson, and Schmit (2005) examined personality saturation in a situational interview and a behavioral interview. Combining these data in a meta-analytic approach, they conclude that “most structured interviews are not saturated with a great deal of personality variance” (p. 269). Interview scores in their study did not tend to correlate with self-report Big Five personality dimensions. Thus, the extent to which interviews inadvertently measure an applicant's personality seems to depend on the extent to which interpersonal skill is muted or allowed to play a role through the interview process. Can interviews be constructed to measure personality intentionally? Van Iddekinge, Raymark and Roth (2005) explicitly developed a “personality-based interview” to measure three facet-level traits related to the job of grocery store customer service manager.
Panels of two experienced interviewers conducted each mock interview, and then interviewers and interviewees completed the corresponding facet scales of the revised NEO Personality Inventory (NEO PI-R; Costa & McCrae, 1992). A confirmatory factor analysis and a multitrait–multimethod approach revealed some supportive construct-related validity evidence for the personality constructs the employment interview was designed to assess. In developing and implementing their personality-based interview, Van Iddekinge et al. employed a number of components of interview structure. Along with developing job-related interview questions that directly tapped each of the three personality facets, panel raters evaluated each facet immediately after the interviewee responded using a behavioral rating scale specifically designed for each question. This level of structure and the focus on a small number of personality dimensions at a very specific, facet level may have been instrumental in finding the supportive evidence. Similarly, assessment centers have been found to show more construct validity evidence when they are well-designed and focused on just a few dimensions (Arthur, Woehr & Maldegen, 2000; Lievens & Conway, 2001).

Additional research is needed to examine if these findings generalize to other types of jobs for which specific aspects of personality are relevant and to actual employment situations. Studies should also examine whether some personality constructs (both broad and narrow) can be better assessed in employment interviews than others. Using trait activation theory may prove a fruitful direction, as research investigating the construct validity of assessment centers has found (Haaland & Christiansen, 2002; Lievens, Chasteen, Day, & Christiansen, 2006). Moreover, it would be essential to determine the utility of using the two methods (i.e., personality-based interviews vs. paper-and-pencil personality inventories) for assessing various personality constructs.
As Van Iddekinge et al. (2005) noted, from a practical perspective we need to know: does any additional predictive validity of personality-based interviews outweigh the costs of developing, validating and administering a personality-based interview compared to these costs for a paper-and-pencil measure?

Interviewer individual differences might play a role in interviewers' ability to make personality judgments. Interviewers have been found to differ in the traits they choose to assess in employment interviews, with these choices influencing personality judgments and hiring decisions (van Dam, 2003); interviewers' own personality also influences their personality ratings of applicants (Christiansen, Wolcott-Burnam, Janovics, Burns, & Quirk, 2005; Sears & Rowe, 2003; Hilliard & Macan, 2009). Understanding these individual differences may help identify the structural components of interviews that increase the probability of interviewers making accurate personality judgments. For example, at a basic level, accuracy requires the interviewer to have a good memory in order to recall applicant responses. Lessening this demand on memory by having interviewers take notes (Middendorf & Macan, 2002), providing them with behaviorally based evaluation forms, and having them rate the applicants' responses immediately after each question (as was done in Van Iddekinge et al.'s study) could prove quite beneficial for the assessment of personality in interviews and potentially mitigate interviewer individual differences.

Van Iddekinge et al. (2005) also compared response inflation (i.e., instructions to respond like a job applicant vs. honestly) in their personality-based structured interview to that of a paper-and-pencil inventory. They found less evidence of response inflation in the interview. Instructions to respond like a job applicant, however, did affect the underlying structure of the personality judgments.
Understanding why response inflation of constructs may differ by format needs to be explored. From the applicants' perspective, Levashina and Campion (2007) have documented that a large majority of interviewees report faking in job interviews. Perhaps various elements of structure mitigate the level of interviewee faking. Future research should examine theoretical connections between interview structure components and their effects on interviewee faking. From the interviewer's perspective, behaviorally anchored rating forms may help interviewers focus on relevant behaviors. Interviewers' individual differences may also play a role. In the performance appraisal literature, rater personality factors (i.e., Conscientiousness and Agreeableness) predicted rating elevation (Bernardin, Cooke & Villanova, 2000). Research should investigate whether these findings generalize to interviewer judgments, and their effects on interview validity.

2.5.3. Do interviews measure an applicant's counterproductive traits?

Researchers have also proposed that integrity or counterproductive traits can be assessed in the interview (Blackman & Funder, 2002). Across two studies, Townsend, Bacigalupi and Blackman (2007) found a mean correlation between self and interviewer
ratings of .27 using their measure of overt integrity, the SPI (i.e., the Substance abuse, Production loss, and Interpersonal problems inventory). They also found higher mean correlations between self-ratings and interviewer ratings in their unstructured and informal interviews compared to the structured condition, but only for their covert measure. While this research shows some evidence that interviewers may be able to assess counterproductive behaviors, care must be taken in interpreting the results. Townsend et al. (2007) purport to measure "self-interviewer agreement" by computing correlations between ratings. Correlations at best indicate a similar pattern of responding; even a perfect correlation does not necessarily provide evidence of agreement. Consequently, their statement that findings from their studies "should allow lay judges and employers alike to take some comfort in knowing that it is possible in a 10-min interview to judge an individual's integrity level with a good deal of accuracy" (p. 555) is premature and potentially inaccurate. Additional research employing accuracy measures is needed before such claims are warranted.

2.5.4. What other constructs have been explored that interviews might measure?

In addition to examining cognitive ability and the Big Five personality dimensions in their study, Salgado and Moscoso (2002) explored the relationship between interviews and five other variables: job knowledge, job experience, situational judgment, grade point average, and social skills. Other constructs have also been proposed to be measured by employment interviews in the last six years. For example, emotional intelligence correlated with the situational interview score and completely mediated the relationship between the interview and "team-playing" behavior (Sue-Chan & Latham, 2004).
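Many of the construct findings above, like Townsend et al.'s (2007), rest on correlations between interviewer ratings and another measure of the same construct. Because a correlation indexes only relative ordering, ratings can be perfectly correlated yet never agree in absolute terms. A minimal numerical sketch (entirely hypothetical rating data) makes the distinction concrete:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical integrity ratings for five applicants on a 7-point scale:
self_ratings = [1, 2, 3, 4, 5]
interviewer_ratings = [3, 4, 5, 6, 7]  # same rank order, shifted up two points

r = pearson_r(self_ratings, interviewer_ratings)
# Mean absolute gap between the two rating sources:
gap = sum(abs(a - b) for a, b in zip(self_ratings, interviewer_ratings)) / len(self_ratings)
print(r)    # 1.0 — a "perfect" correlation
print(gap)  # 2.0 — yet the two sources never give the same rating
```

Here the interviewer tracks the self-ratings' rank order perfectly (r = 1.0) while rating every applicant two scale points higher, so the two sources never actually agree; an index of absolute agreement (e.g., an intraclass correlation under an absolute-agreement model) would detect this discrepancy, whereas the Pearson correlation cannot.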
Dispositional variables theoretically relevant to organizational citizenship behavior, such as empathy and positive affect, had meaningful correlations with interview scores (Allen, Facteau, & Facteau, 2004). Also, Cliffordson (2002) suggested that interviewers can judge applicant empathy in interviews for caring professions. Self-discipline, tenacity-resilience, teamwork and cross-cultural awareness (along with other measures) are additional traits investigated (Lievens, Harris, Van Keer, & Bisqueret, 2003), but their behavioral description interview did not significantly predict these constructs. Moreover, Van Iddekinge, Raymark, Eidson and Attenweiler (2004) examined whether two behavioral description interviews measured interpersonal skills, conscientiousness, and stress management. Using a multitrait–multimethod approach, correlations of interviewer ratings with applicant self-report scores from paper-and-pencil tests did not show support for a link between the interviews and the three constructs that "the two interviews appeared to measure" (p. 78). (The authors identified these three constructs after the interviews were developed and conducted by examining the interview items.) Instead, they note that their results show that interview ratings were more correlated with interviewee factors, interview form, and interviewer factors.

2.5.5. What constructs should employment interviews measure?

To date, the predominant research focus has been to explore through meta-analysis what underlying constructs may be measured in all interviews, sometimes regardless of the specific constructs the interview may have been designed to measure (studies have not consistently indicated the interview constructs measured).
As research efforts continue to examine the construct-related validity of interviews, this area could benefit from a shift in inquiry from asking what the interview measures to developing individual studies that ask how one might design an employment interview to measure a construct of interest (Ployhart, 2006). This shift would require an initial move away from meta-analyses, which generally inform us about what has been done. More studies about what constructs could be measured or what constructs are best measured in employment interviews are needed (Posthuma et al., 2002). Moreover, when investigating the construct-related validity of interviews, it is essential that researchers maintain the distinction between predictor constructs and predictor methods, especially when making comparisons across interview studies (Arthur & Villado, 2008). The interview is a method of assessment, regardless of whether it is called "structured," "unstructured," "conventional," "behavioral," or "situational," during which interviewers assess applicant traits or constructs. As a measurement procedure, the employment interview may be administered using a variety of approaches beyond face-to-face, including telephone (e.g., Bauer, Truxillo, Paronto, Weekley & Campion, 2004; Blackman, 2002; Silvester & Anderson, 2003; Schmidt & Rader, 1999), videoconference (e.g., Straus, Miles, & Levesque, 2001; Chapman, Uggerslev, & Webster, 2003), interactive voice response methods (Bauer et al., 2004), and a "written structured interview" (Whetzel, Baranowski, Petro, Curtin, & Fisher, 2003). For construct validity evidence, care should be taken to compare across predictor constructs rather than across predictor methods. Researchers can aid in this effort by reporting the constructs their employment interview is designed to measure.
Consistent with a shift in focus to what interviews could or should measure, improvements in the way interviews are developed to measure specific constructs could provide substantial benefits for interview reliability and validity. That is, borrowing from what is known about developing psychometrically sound measures (e.g., Nunnally & Bernstein, 1994), interviews should be designed by first establishing the constructs of interest based on job analyses and/or competency modeling. Then, the interview format and questions can be developed to assess these particular constructs. For developers of interviews not already taking this approach, such practice could prove very important in the search for construct-related validity evidence of employment interviews. Furthermore, in measuring interview constructs, multiple items or questions per construct are necessary. A one-item assessment usually is not deemed sufficient to measure a construct adequately, yet this may be a very typical approach adopted in interviews. For example, in the situational interview used by Roth et al. (2005) to examine personality saturation, "researchers administered the 6-item interview to sales associates" (p. 265) to assess six different areas of performance in the job. They found no evidence of personality saturation. In contrast, when explicitly designing a "personality-based structured interview," Van Iddekinge et al. (2005) developed an interview with nine questions, three for each of the three personality facets identified as important for the job in question, and found some supportive construct-related validity evidence.
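The argument for multiple questions per construct follows directly from classical test theory. As a minimal sketch (the single-question reliability of .40 is hypothetical, not a figure from the studies above), the Spearman–Brown prophecy formula (a standard psychometric result; see Nunnally & Bernstein, 1994) shows how aggregating parallel questions raises the reliability of a construct score:

```python
def spearman_brown(r_single, k):
    """Projected reliability of a composite of k parallel items, given the
    reliability r_single of one item (Spearman-Brown prophecy formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

# Hypothetical reliability of a single interview question: .40
one_question = spearman_brown(0.40, 1)     # one question per construct
three_questions = spearman_brown(0.40, 3)  # three questions per facet,
                                           # as in Van Iddekinge et al. (2005)
print(round(one_question, 2), round(three_questions, 2))  # 0.4 0.67
```

Under these assumptions, moving from one question to three per facet raises the projected reliability from .40 to about .67, consistent with the suggestion that several questions per construct should yield more dependable construct measurement.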

Interviewers themselves may see the need to ask more questions per construct, as suggested by the extent to which they use follow-up questions and probes. Interviewers who were allowed to probe in Krajewski et al.'s (2006) study did ask additional questions to gather information (i.e., only one item, or interview question, was used to assess each of the six managerial dimensions in their study). Thus, studies need to address the extent to which additional questions and other test development procedures prove beneficial for interview construct measurement. Of course, adding questions per construct allows for fewer constructs to be assessed. Rather than measuring a variety of elements in employment interviews, the trade-off would be to assess a few key constructs well.

On a related note but in a different direction, researchers have explored interview transparency as a possible remedy to less-than-desired construct-related validity. Telling applicants beforehand the dimensions on which they will be assessed during the interview could increase the likelihood that their responses actually pertain to the purported constructs. In Klehe, Konig, Richter, Kleinmann and Melcher (2008), candidates in transparent interviews received higher scores than those in the nontransparent condition. Some evidence for increased construct validity of interviews was also found. Providing such transparency to all applicants may also increase fairness perceptions (Day & Carroll, 2003). This is relevant in light of findings that some individuals are better able to identify the criteria in nontransparent interviews than others (Konig, Melchers, Kleinmann, Richter, & Klehe, 2007). Given that limited construct-related validity evidence currently exists for employment interviews, more innovative, individual research study approaches should be explored.
For instance, some organizations have identified competencies required of all employees in their organizations and developed interview questions to assess individual competencies. In a scientist–practitioner collaboration, these organizational databases could be tapped to explore the construct-related validity evidence of the interview method by competency.

In addition to issues of structure and construct-related validity evidence, an employment interview is a process and dynamic exchange in which interviewers and applicants bring their prior impressions, engage in a social interaction, and process information gathered on both sides. We now turn to research conducted in the last six years that has investigated these applicant (and interviewer) factors.

3. Applicant factors and characteristics

A variety of applicant factors and characteristics, including the effects of applicant demographics on interviewer judgments, have been investigated in the last six years. Past findings summarized by Posthuma et al. (2002) revealed small and inconsistent effects on judgments across the various demographic characteristics. More recent studies reviewed here have also examined both direct and more subtle effects of gender, race, and age, and typically found results consistent with prior findings. Other applicant characteristics, including disability, obesity and pregnancy, have received attention. The issues of applicant impression management and, more recently, faking behavior have garnered extensive research interest in the past six years. Coaching or training interviewees and the effect of anxiety on applicant interview performance are additional topics covered.

3.1. How do applicant demographic characteristics influence interviewer judgments?

3.1.1. Gender and race

In an innovative use of hierarchical linear modeling (HLM), Sacco, Scheu, Ryan, and Schmitt (2003) investigated the effects of actual demographic similarity between interviewer and applicant (i.e., gender and race) on interviewer ratings in individual, one-on-one interviews. Because interviewers typically conduct multiple interviews for a given job opening, HLM accommodates such nesting of applicants within interviewers without discarding or averaging data points. No evidence of similarity rating effects or of interviewer gender or race differences in interview ratings was found. Even so, these conclusions should be tempered because standardized ethnic group differences may be larger than previously estimated due to corrections for range restriction issues (Roth, Van Iddekinge, Huffcutt, Eidson, & Bobko, 2002). That is, interviews could present an opportunity for subtle cues to affect perceptions and judgments, especially in interviews that include fewer components of structure.

Recent research has found effects in relation to subtle discrimination. For example, one week after conducting interviews, interviewers recalled African-American applicants as having given less intelligent answers compared to White applicants (Frazer & Wiersma, 2001). Similarly, Purkiss, Perrewe, Gillespie, Mayes and Ferris (2006) studied implicit bias. Applicants with both an ethnic name and a corresponding accent received the least favorable interviewer ratings, whereas applicants with a Hispanic name but no accent were evaluated most favorably. This result provides support for expectancy violation theory (Jussim, Coleman, & Learch, 1987). The applicant with a Hispanic name was likely expected to speak with an accent; when he did not, thus violating expectations, he was viewed more positively. Some studies have examined the direct effects of discrimination, predominantly on the questions interviewers ask during the interview.
For example, MBA students reacted negatively when four of 10 interview questions provided were discriminatory (e.g., Have you ever been arrested for a crime?) (Saks & McCarthy, 2006). Two studies explored the effects of sexual questions in job interviews (Woodzicka & LaFrance, 2005) and of male interviewers' beliefs that the female applicant is attracted to them (Ridge & Reber, 2002) on interviewer and applicant perceptions. Both studies found that female applicants' interview performance was affected by male interviewer behavior, as evaluated by outside raters. These studies are more generalizable to situations in which interviewers would not have used predetermined, job-related questions and behaviorally based rating forms. They speak to the need for greater interview structure and better education of interviewers on what is legal and appropriate to ask during interviews. With regard to fairness perceptions, the role of the organizational context should also be considered (see also Dipboye, 2005). For example, when the majority of top-level management positions were occupied by African-Americans in one
organization, African-American candidates for promotion perceived a situational interview to be more job-related than did White candidates (Becton, Field, Giles, & Jones-Farmer, 2008).

3.1.2. Age

The effects of applicant age on employment interview processes and outcomes have been examined as well (see Morgeson, Reider, Campion, and Bull, 2008 for a review since the passage of the ADEA in 1967). In general, while age stereotypes have been found, both laboratory and field studies show evidence that applicant characteristics beyond age influence interviewer perceptions and hiring recommendations. With some exceptions, much of this recent work on the effects of various applicant demographic characteristics has been conducted in less structured interview situations. Clearly this work is important, but future research should examine subtle discrimination effects on interviewer judgments in interviews with greater degrees of structure (McKay & Davis, 2008). As Garcia, Posthuma, and Colella (2008) have shown, it is also important for researchers to examine the effects of interviewers' perceived (rather than actual) similarity to applicants. In addition, further examinations of doubly-stigmatized applicants (gender and race, as in Sacco et al., 2003) and other combinations (such as gender and disability) would be informative.

3.1.3. Applicants with disabilities

The Americans with Disabilities Act not only covers individuals who have disabilities that limit one or more of their major life functions, but also extends to those who have a record of having such an impairment in the past. Most research on applicants with disabilities in the employment interview has addressed candidates who currently have a disability. Reilly, Bocketti, Maser and Wennet (2006) extend research to those who have a past "record" of a disability by manipulating this information in their study.
Candidates who were previously depressed or had a prior record of substance abuse were less likely to be recommended for hire than those in the no-disability condition. No significant differences were found between the prior-cancer candidate and the other three conditions on hiring recommendations. More research is needed to understand these results. Raters' perceptions of whether applicants were responsible for their disability may have played a role, but this potential explanation was not examined.

For applicants who currently have disabilities, research has examined the effects of discussing the disability during the interview. In general, it is advantageous for individuals with physical disabilities to talk about their disability. Applicants who discussed their disability early in the interview were perceived more favorably than those who discussed it near the end or did not discuss it at all. This result held true for applicants with visible physical disabilities (i.e., in a wheelchair; Hebl & Skorinko, 2005) and those with non-visible physical disabilities, such as transverse myelitis (Roberts & Macan, 2006). Caveats that limit these conclusions should be explored in future research. The favorable effects of discussing one's disability were found for those with uncontrollable conditions, such as when the individual was disabled because of an auto accident that was not their fault. Studies need to investigate whether these findings generalize to those who are perceived as responsible for the disability. Moreover, one may choose to acknowledge a disability during the interview in order to ask for a reasonable accommodation. Studies investigating the effects of acknowledging one's disability and also asking for a reasonable accommodation should be conducted. Given the variety of disabilities that limit different life functions, researchers are cautious about overgeneralizing their findings.
In order to advance this area of research, a theoretical model based on what we already know from the interview and disability literatures is needed. Such a model would allow for a more systematic examination of issues around applicants with disabilities and consequently lead to practical recommendations for both applicants with disabilities and interviewers.

3.1.4. Overweight applicants

Although overweight individuals are not a protected group by legal standards, selection biases against overweight applicants have been shown in two studies (Kutcher & Bragger, 2004). Specifically, evaluations differed based on the level of structure in the interview, with a behaviorally based evaluation resulting in a lessening of rater bias. Strategies that have been shown to be beneficial for applicants with disabilities may not be advantageous for obese individuals. In fact, a better strategy for obese individuals was to say nothing about their condition (Hebl & Kleck, 2002). Perceived controllability of the applicant's condition was a key factor: the more interviewers perceived the applicants' obesity as controllable, the less favorably they evaluated them and the less likely they were to recommend them for hire. These findings, coupled with those from the disability studies, point to the importance of controlling, manipulating or at least measuring this controllability variable in future research.

3.1.5. Pregnant applicants

Evidence from the U.S. Equal Employment Opportunity Commission (EEOC) indicates pregnancy discrimination claims are the fastest growing type of employment discrimination charge (Armour, 2005; "The Pregnancy Discrimination Act 25 Years Later," 2006). Research, though, has only begun to examine the effects of applicant pregnancy on interviewer decision-making. Results show lower hiring ratings for pregnant than non-pregnant applicants (Bragger, Kutcher, Morgan & Firth, 2002; Cunningham & Macan, 2007).
Interviewers indicate concern that a pregnant applicant will need time off, miss work and quit (Cunningham & Macan, 2007). These interviewer concerns may be rationally based and/or may reflect bias, but at present research has not provided such evidence. To examine this issue, it would be important to know whether interviewers hold the same concerns for males and females who may need time off or need to take a leave for a variety of other reasons, including the care of a new child, a spouse or a parent, and whether discussion of these concerns by the applicant during the interview would impact the selection decision.

3.2. To what extent do specific applicant behaviors influence interviewer judgments?

In addition to applicant characteristics, researchers have focused on applicant behaviors that can influence their interview performance. Posthuma et al. (2002) reported a notable increase in studies on impression management (IM) from previous reviews (i.e., 20 studies since 1989). Indeed, this momentum has not faltered, with 14 additional studies conducted since 2002.

3.2.1. Applicant impression management and/or faking

What applicants say and do in an interview affects interviewers' evaluations of them. Applicants will, therefore, be motivated to use the interview to create a positive impression. Both lab and field studies have typically demonstrated a positive relationship between applicant impression management and interviewer evaluations (e.g., Ellis, West, Ryan, & DeShon, 2002; Higgins & Judge, 2004; McFarland et al., 2003; Peeters & Lievens, 2006; Tsai, Chen, & Chiu, 2005). Much variability exists, however, in the types of applicant impression management behaviors that affect interviewer evaluations (see Higgins, Judge, & Ferris, 2003 for a meta-analysis). Researchers have been exploring the reasons for such variability. Some researchers have investigated impression management use in different interview situations, focusing predominantly on behavioral description interviews and situational interviews. In similar findings across studies, applicants typically used more self-promotion or self-focused IM tactics when responding to experience-based questions and more ingratiation or other-focused IM tactics (i.e., opinion conformity and other-enhancement) when responding to situational questions (Ellis et al., 2002; McFarland et al., 2003; Peeters & Lievens, 2006; Tsai et al., 2005; Van Iddekinge, McFarland, & Raymark, 2007). Based on these findings, elements of structured interviews may not necessarily limit applicant impression management behavior as previously thought.
Additional studies have investigated applicant individual difference variables that may be associated with the use of impression management tactics in interviews. Kristof-Brown, Barrick, and Franke (2002) present a theoretical model to explain how applicant characteristics (e.g., personality traits) influence the types of impression management used during the interview and the subsequent effects on interviewer judgments. Findings in this area have been mixed. For example, Higgins and Judge (2004) found that high self-monitors engaged in more self-promotion and more ingratiation behaviors. Peeters and Lievens (2006), however, showed that high self-monitors tended to use more nonverbal IM behaviors than low self-monitors when given instructions to convey a favorable interview impression. Further, while Kristof-Brown et al. (2002) found that more agreeable applicants displayed more nonverbal behaviors such as smiling and eye contact, Van Iddekinge et al. (2007) did not. Instead, they found that the altruism facet of Agreeableness negatively related to defensive impression management for applicants in their study who were instructed to respond honestly. There are a number of possible explanations for these mixed findings. Peeters and Lievens (2006) and Van Iddekinge et al. (2007) each proposed that instructions to manage impressions (or situational strength) may be a factor. Clearly, additional work is needed to untangle the inconsistencies. In addition, studies should examine how applicant personality characteristics relate to impression management at the facet level, because broad personality factors may not reveal some relevant relationships. Researchers also need to try matching predictors and criteria at similar levels (e.g., applicant personality facets to self-promotion versus all self-focused behaviors). More importantly, we need more consistency in the: (a) definition, (b) labeling, and (c) measurement of impression management.
Findings may appear inconsistent because different labels are used across studies even though the definitions of the terms are exactly the same, making comparisons challenging. For example, Ellis et al. (2002) used the label "ingratiation" tactics, whereas McFarland et al. (2003) and Van Iddekinge et al. (2007) labeled these same behaviors "other-focused." Higgins et al. (2003) also noted little agreement on the labels used to denote influence tactics. Beyond labeling, various methods have been used to measure impression management in interviews. The two prominent approaches are: (a) coders assess the frequency of the behaviors from audiotapes or videotapes, or (b) applicants self-report impression management use. When coding, even though the same label is used, different behaviors are often examined within that label, resulting in potentially misleading comparisons. For example, McFarland et al. (2003) used the label "self-focused" behaviors and included six elements (i.e., self-promotion, entitlements, enhancements, overcoming obstacles, basking in reflected glory and personal stories). Peeters and Lievens included the first four elements listed above in their measure of "self-focused," and Van Iddekinge et al. (2007) used the same label but only included self-promotion. Examples of differences in measurement can also be found when applicants self-report impression management behaviors. A cursory examination of the scale items used shows some overlap, but typically no two studies used the exact same items. In at least one instance, the exact same set of items was assigned to different IM tactics. Kristof-Brown et al. (2002) used two items as their measure of nonverbal impression management (i.e., "I maintained eye contact with the interview" and "I smiled a lot or used other friendly nonverbal behaviors" (p. 36)). Higgins and Judge (2004) used these exact two items (along with seven other items) in a scale they labeled "ingratiation" (see Appendix, p. 632).
Finally, more clarity is needed with regard to the impression management construct. Impression management has typically been defined, based on Schlenker (1980), as "a conscious or unconscious attempt to influence images that are projected in social interaction" (e.g., Ellis et al., 2002; McFarland et al., 2003; Van Iddekinge et al., 2007). As Levashina and Campion (2006) note, "the question of whether IM is deceptive or honest has not been raised or addressed" (p. 299). This distinction is an important missing element in research on impression management in employment interviews and may help explain the mixed findings. The behaviors described above could be labeled "Honest IM" or "Deceptive IM." Job candidates could use impression management behaviors to portray a positive image without being dishonest. Alternatively, they may use them with a clear and conscious intention of being dishonest in the way they portray themselves (Weiss & Feldman, 2006). Applicants may also engage in



both Honest IM and Deceptive IM in the same interview situation (see also Levashina & Campion, 2006 for a more detailed discussion of this issue). In sum, the similarities and differences between faking behaviors and impression management behaviors need to be addressed (Levashina & Campion, 2006, 2007; Van Iddekinge et al., 2007). To advance research on faking, Levashina and Campion (2006, 2007) provide a model of faking in the employment interview and an Interview Faking Behavior (IFB) scale. They define faking as "dishonest impression management or intentional distortion of responses to interview questions or misrepresentations in order to create a good impression" (p. 301). According to the model, faking can include: (a) information that is added, (b) information that is omitted, and (c) information that is invented. They advocate for a broader view of faking to include not only lying, but also "pretense, concealment, exaggeration, and so forth" (p. 1639). The IFB scale can be found in the appendix of their study (Levashina & Campion, 2007). Even though their work focuses on Deceptive IM, their model might aid in identifying and measuring a theoretically separate construct, Honest IM behaviors. An examination of the relationships between types of faking and types of impression management could prove informative. For example, does their IFB faking facet, Slight Image Creation, overlap with the self-promotion tactic? To answer this and other questions in this area, research may also need to explore qualitative strategies. Lipovsky (2006) used a "metapragmatic assessment" of applicant interview answers by conducting separate post-interview "interviews" with candidates and interviewers. Each separately watched the videotape of their interview. Candidates commented on the impressions they were trying to make, and interviewers identified the impressions they had formed of the applicant.
This approach yielded additional information about the impression, including how successful the candidate had been in making it. Future research could also ask candidates to indicate the level of truthfulness of each impression, and this information could be compared to the extent to which interviewers believed the applicant. The construct-validity concern with faking and impression management is our lack of understanding about what is being measured; that is, applicants may obtain scores that do not accurately describe them. Future research could use the IFB scale to examine the likelihood of faking and advance our understanding of faking behavior in employment interviews. Because their scale was not developed to be used as a selection device (Levashina & Campion, 2007), it instead provides an opportunity to examine systematically ways the employment interview process can be improved, if individuals are found to fake. Future research could examine other interviewer variables and how they might influence applicant impression management and faking. For example, interviewer individual differences might affect the relative importance interviewers attach to impression management (Lievens & Peeters, 2008) or their interpretation of applicant impression management behaviors (Silvester, Anderson-Gough, Anderson, & Mohamed, 2002). Another possibility is that interviewers engage in impression management themselves during the interview. An interviewer's impression management may play a role in the level of applicant impression management (Higgins & Judge, 2004). Lopes and Fletcher (2004) showed that interviewees perceive the use of IM tactics to be fair when they also consider the use of the same tactics by interviewers to be fair. Finally, given that applicants may engage in both Deceptive IM and Honest IM, future research should investigate whether interviewers are able to detect the different faking behaviors that candidates use during an interview.
Evidence suggests that some people have difficulty detecting deception and focus on the wrong cues (Mann, Vrij, & Bull, 2004).

3.2.2. What role might nonverbal impression management behaviors play?

Although the expected relationship between applicant self-ratings of nonverbal impression management and interviewer evaluations has not been found (Kristof-Brown et al., 2002; Tsai et al., 2005), this remains an important area for research. One reason this relationship has escaped researchers may be that the items used to measure nonverbal impression management focus almost exclusively on smiling and eye contact. When a variety of nonverbal behaviors is examined (i.e., smiling, eye contact, forward lean and body orientation), applicant nonverbal behaviors can affect interviewer ratings (Levine & Feldman, 2002). Many additional nonverbal behaviors relevant to employment interview interactions should be assessed in future research on impression management and faking. In fact, an extensive body of research on nonverbal behaviors in interactions exists, and much of it is relevant for employment interviews (Manusov & Patterson, 2006). Also, an applicant's physical attractiveness (Hosada, Stone-Romero, & Coats, 2003; Goldberg & Cohen, 2004) and vocal attractiveness (i.e., an "appealing mix of speech rate, loudness, pitch and variability in these characteristics" (p. 31); DeGroot & Kluemper, 2007) have been found to be positively related to interviewer ratings. Furthermore, the current assessment of smiling and eye contact as impression management behaviors in employment interviews could be improved by adopting techniques used in the nonverbal research literature.
A recent meta-analysis of over 160 studies provided evidence that women smile more often than men (LaFrance, Hecht, & Levy, 2003); yet, gender differences are not usually investigated in studies that have examined nonverbal impression management in employment interviews (see Levine & Feldman, 2002 for an exception). The nonverbal literature has also distinguished between genuine (Duchenne) and false (non-Duchenne) smiling (Ekman, Friesen, & Ancoli, 1980). Findings suggest that false smiling during an interview results in less favorable evaluations than does genuine smiling (Woodzicka, 2008; Woodzicka & LaFrance, 2005). Such distinctions in smiling behavior are rarely made in studies of nonverbal impression management in the employment interview. Interestingly, Woodzicka (2008) has also demonstrated that participants are self-aware when they engage in false smiling during mock interviews, suggesting that self-report measures may be useful. In conclusion, research on impression management behaviors in the employment interview has come a long way. This area offers much to learn and numerous opportunities for future research. With consistent definitions, measurement and labeling, research on impression management and faking in employment interviews can prove quite useful to our understanding of these applicant behaviors. Examining non-linear associations among impression management behaviors and outcomes might be useful



because it seems possible that overuse of such behaviors, even if truthful (e.g., too much opinion conformity), could be interpreted negatively by interviewers. Also important would be to consider whether some or all of these impression management behaviors are job-related skills for some jobs. When a job requires a high level of customer contact, for example, applicant self-focused impression management behaviors are positively correlated with interviewer evaluations (Tsai et al., 2005). Finally, based on their results, Kristof-Brown et al. (2002) suggested that "training on how to self-promote one's fit with the available job appears to be beneficial to applicants" (p. 43). Research has begun to examine the effect of providing this sort of interview coaching to applicants. What we have learned thus far on this topic is reviewed next.

3.2.3. What are the effects of applicant interview coaching and training?

The popular literature on employment interviews abounds with books for applicants on how to do well in an interview, with some recent titles including: "Winning Job Interviews" (Powers, 2004), "Sell Yourself: Master the Job Interview Process" (Williams, 2004), and "The Perfect Interview: How to Get the Job You Really Want" (Drake, 1997). As these books suggest, instructing or coaching candidates on how to present their positive attributes honestly may help them better convey the skills and abilities they possess. In fact, interview training appears to be related to self-promotion (Kristof-Brown et al., 2002). With a better knowledge of the applicant's positive qualities, interviewers might be able to make more valid interview decisions. Higgins and Judge (2004) suggest that future research examine the extent to which impression management is and can be learned. Much of the existing research on interviewee training has been conducted with specific populations (e.g., underemployed, unemployed, and patients with psychiatric disorders).
It also includes examinations of teaching underemployed and unemployed minority adults (Sniad, 2007) and training Native North Americans (Latham & Budworth, 2006). Few studies have examined the effects of interviewee training or coaching in the general working adult populations to whom many popular books on interviewing are addressed. In the last few years, Maurer and colleagues (see Maurer & Solaman, 2006 for a review) conducted studies on interview coaching, which, based on their definition, appears to be interchangeable with what is typically labeled "interviewee training." Generally, a positive relationship between attending an interview coaching session and interview performance has been shown. In support of this trend, Maurer, Solamon, and Lippstreu (2008) recently found that reliability and validity were somewhat higher in a sample that received coaching compared to an uncoached group. The complete coaching booklet can be found in Maurer and Solaman (2006). In the research described above, company-sponsored interview coaching sessions were offered to all job applicants, but only some chose to participate. This area of research would benefit from studies where applicants are randomly assigned to conditions. In addition, examining whether results generalize to populations beyond fire and police personnel, and to other types of interviews, would be beneficial. Furthermore, as noted by Maurer and colleagues, research on interviewee reactions, self-perceptions and justice perceptions would be useful. If potential employees view an organization's interview coaching positively, and the coaching does not harm the interview's psychometric characteristics, it may be in the organization's best interest to offer such programs. Other, more non-traditional strategies have recently been explored, with similar positive results.
Applicants who engaged in verbal self-guidance (Latham & Budworth, 2006) and mental imagery (Knudstrup, Segrest, & Hurley, 2003) received more favorable interview evaluations than those who had not engaged in the approach. Mental imagery was also related to less perceived stress. Interviews can be stressful situations for many individuals, and recent research has examined applicant anxiety.

3.2.4. What effect does applicant interview anxiety have on interview performance?

Highly evaluative situations, such as the employment interview, may create anxiety for job applicants. Undergraduates who participated in mock interviews with peers self-reported sustained levels of anxiety immediately before the interview as well as during the interview, with levels only significantly dropping after the interview (Young, Behnke, & Mann, 2004). To examine this issue more systematically, McCarthy and Goffin (2004) developed a multidimensional measure of interview anxiety that showed good psychometric properties. Their Measure of Anxiety in Selection Interviews (MASI) assesses five dimensions: Communication, Appearance, Social, Performance, and Behavioral. (The actual items can be found in the appendix of their paper.) McCarthy and Goffin (2004) found negative correlations between the five MASI dimensions and interview performance. Although not yet tested, they propose that applicant anxiety may actually bias interview predictive validity: applicants experiencing anxiety during the interview may receive lower scores despite the fact that their job performance would have been successful. Their measure looks promising, and additional research should be conducted using their scale. Numerous questions remain regarding the role of applicant interview anxiety in interviewer evaluations and job performance, as well as how it operates. Interviewer qualities such as warmth, agreeableness and humor have been suggested as ways to reduce applicant anxiety (Carless & Imber, 2007).
Future research should also examine other mechanisms that may reduce interview anxiety. Perhaps an element of interview coaching could be devoted to techniques known to reduce interviewee anxiety.

4. Concluding remarks

For almost 100 years, the employment interview has been a topic of research, resulting in notable advances in our understanding. Considerable research continues to be conducted on a variety of employment interview topics. One main focus in the last six years has centered on the reasons "structured" interviews show greater predictive validity compared to "unstructured" interviews. From this review of the current research, we see that researchers have explored a number of explanations and



variables, uncovering the potential benefits of some interview structuring components. To advance research in this area and obtain a greater understanding of the role of structure in the reliability and validity of interviewer judgments, I have argued that a common taxonomy and measure of the degree of interview structure is necessary. Previous researchers have laid a strong foundation on which future research efforts can build this important missing link (e.g., Campion et al., 1997; Huffcutt & Arthur, 1994; Dipboye et al., 2004; Dipboye, 2005). As another way of understanding the value of interviews with greater structural elements, researchers have examined the constructs interviews measure. In the last six years, a number of new meta-analyses have explored this issue (e.g., Berry et al., 2007; Roth et al., 2005; Salgado & Moscoso, 2002). While these studies have examined a wide variety of constructs from cognitive ability to personality, evidence generally points to low construct-related validity. However, interviews that are better designed and developed specifically to assess particular constructs show greater evidence of construct-related validity (e.g., Van Iddekinge et al., 2005). Therefore, I believe a shift in research focus is needed to increase our understanding of the construct-related validity of interviews. We need to move away from meta-analyses that tell us what constructs interviews measure to individual studies that explore: what constructs could interviews measure? And, what constructs are best measured in employment interviews? While studying these questions, researchers should pay close attention to good test construction techniques (e.g., multiple items per construct). The employment interview is a social interaction in which the interviewer and applicant exchange and process the information gathered from each other. Current research has investigated the applicant and interviewer factors that may affect the interview process.
Specifically, research continues to examine the effects of applicant demographics and characteristics on interviewer judgments (e.g., gender, race, disability, pregnancy), and has done so in innovative ways using statistical procedures such as HLM (Sacco et al., 2003). Further, applicant impression management continues to receive substantial research interest, but could benefit from a more consistent definition and sounder measurement. As part of this, researchers need to determine whether impression management behaviors are deceptive or honest. Recent work on faking behaviors in interviews provides an important mechanism and foundation for this inquiry (Levashina & Campion, 2006, 2007). Employment interviews enjoy much popularity as a selection technique among applicants, organizational decision-makers, and researchers. My hope is that this sentiment continues for all these constituents and spreads worldwide. Numerous issues for future research to explore are outlined within this review. In particular, three key issues merit immediate attention to best advance our understanding of employment interviews: (1) a common model of interview structure with a corresponding measure to provide clarity on the role of all aspects of interview structure, (2) a revised focus on what constructs could be measured or are best measured in employment interviews, and (3) consistency in definitions, labeling and measurement of all applicant factors and characteristics, most notably the concept of applicant impression management. In addition to these ideas, we know very little about the use and effectiveness of employment interviews across cultures, even though technology and competition have allowed organizations to expand into the global marketplace. In all, countless opportunities exist to advance our understanding of employment interviews both theoretically and practically.
It is exciting to ponder what a literature review even five years from now will tell us about the global state of the art in interviewing research.

References
Allen, T. D., Facteau, J. D., & Facteau, C. L. (2004). Structured interviewing for OCB: Construct validity, faking, and the effects of question type. Human Performance, 17, 1–24.
Armour, S. (2005, February). Pregnant workers report growing discrimination. USA Today. Retrieved July 15, 2008, from workplace/2005-02-16-pregnancy-bias-usat_x.htm
Arthur, W., Jr., & Villado, A. J. (2008). The importance of distinguishing between constructs and methods when comparing predictors in personnel selection research and practice. Journal of Applied Psychology, 93, 435–442.
Arthur, W., Jr., Woehr, D. J., & Maldegen, R. (2000). Convergent and discriminant validity of assessment center dimensions: A conceptual and empirical examination of the assessment construct-related validity paradox. Journal of Management, 26, 813–835.
Bauer, T. N., Truxillo, D. M., Paronto, M. E., Weekley, J. A., & Campion, M. A. (2004). Applicant reactions to different selection technology: Face-to-face, interactive voice response, and computer-assisted telephone screening interviews. International Journal of Selection and Assessment, 12, 135–148.
Becton, J. B., Field, H. S., Giles, W. F., & Jones-Farmer, A. (2008). Racial differences in promotion candidate performance and reactions to selection procedures: A field study in a diverse top-management context. Journal of Organizational Behavior, 29, 265–285.
Bernardin, H. J., Cooke, D. K., & Villanova, P. (2000). Conscientiousness and agreeableness as predictors of rating leniency. Journal of Applied Psychology, 85, 232–236.
Berry, C. M., Sackett, P. R., & Landers, R. N. (2007). Revisiting interview-cognitive ability relationships: Attending to specific range restriction mechanisms in meta-analysis. Personnel Psychology, 60, 837–874.
Blackman, M. C. (2002). The employment interview via the telephone: Are we sacrificing accurate personality judgments for cost efficiency? Journal of Research in Personality, 36, 208–223.
Blackman, M. C., & Funder, D. C. (2002).
Effective interview practices for accurately assessing counterproductive traits. International Journal of Selection and Assessment, 10, 109–116.
Bozionelos, N. (2005). When the inferior candidate is offered the job: The selection interview as a political and power game. Human Relations, 58, 1605–1631.
Bragger, J. D., Kutcher, E., Morgan, J., & Firth, P. (2002). The effects of the structured interview on reducing biases against pregnant job applicants. Sex Roles, 46, 215–226.
Brtek, M. D., & Motowidlo, S. J. (2002). Effects of procedure and outcome accountability on interview validity. Journal of Applied Psychology, 87, 185–191.
Buckley, M. R., Jackson, K. A., Bolino, M. C., Veres, J. G., III, & Field, H. S. (2007). The influence of relational demography on panel interview ratings: A field experiment. Personnel Psychology, 60, 627–646.
Camp, R., Schulz, E., Vielhaber, M., & Wagner-Marsh, F. (2004). Human resource professionals' perceptions of team interviews. Journal of Managerial Psychology, 19, 490–505.
Campion, M. A., Palmer, D. K., & Campion, J. E. (1997). A review of structure in the selection interview. Personnel Psychology, 50, 655–702.
Carless, S. A., & Imber, A. (2007). The influence of perceived interviewer and job organizational characteristics on applicant attraction and job choice intentions: The role of applicant anxiety. International Journal of Selection and Assessment, 15, 359–371.
Chapman, D. S., Uggerslev, K. L., & Webster, J. (2003). Applicant reactions to face-to-face and technology-mediated interviews: A field investigation. Journal of Applied Psychology, 88, 944–953.
Chapman, D. S., & Zweig, D. I. (2005). Developing a nomological network for interview structure: Antecedents and consequences of the structured selection interview. Personnel Psychology, 58, 673–702.



Chen, Y., Tsai, W., & Hu, C. (2008). The influences of interviewer-related and situational factors on interviewer reactions to high structured job interviews. International Journal of Human Resource Management, 19, 1056–1071.
Christiansen, N. D., Wolcott-Burnam, S., Janovics, J. E., Burns, G. N., & Quirk, S. W. (2005). The good judge revisited: Individual difference in the accuracy of personality judgments. Human Performance, 18, 123–149.
Cliffordson, C. (2002). Interviewer agreement in the judgement of empathy in selection interviews. International Journal of Selection and Assessment, 10, 198–205.
Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565–579.
Cortina, J. M., Goldstein, N. B., Payne, S. C., Davison, H. K., & Gilliland, S. W. (2000). The incremental validity of interview scores over and above g and conscientiousness scores. Personnel Psychology, 53, 325–351.
Costa, P. T., & McCrae, R. R. (1992). The NEO-PI Personality Inventory. Odessa, FL: Psychological Assessment Resources.
Cunningham, J., & Macan, T. (2007). Effects of applicant pregnancy on hiring decisions and interview ratings. Sex Roles, 57, 497–508.
Day, A. L., & Carroll, S. A. (2003). Situational and patterned behavior description interviews: A comparison of their validity, correlates, and perceived fairness. Human Performance, 16, 25–47.
DeGroot, T., & Kluemper, D. (2007). Evidence of predictive and incremental validity of personality factors, vocal attractiveness and the situational interview. International Journal of Selection and Assessment, 15, 30–39.
Dipboye, R. L. (1994). Structured and unstructured interviews: Beyond the job-fit model. In G. Ferris (Ed.), Frontiers of industrial and organizational psychology: Personnel selection. CA: Jossey-Bass, Inc.
Dipboye, R. L. (1997). Structured selection interviews: Why do they work? Why are they underutilized? In N.
Anderson & P. Herriot (Eds.), International Handbook of Selection and Assessment (pp. 455–473). London: John Wiley and Sons.
Dipboye, R. L. (2005). The selection/recruitment interview: Core processes and contexts. In A. Evers, N. Anderson, & O. Voskuijl (Eds.), The Blackwell Handbook of Personnel Selection (pp. 121–142). Malden, MA: Blackwell Publishing.
Dipboye, R. L., Gaugler, B. B., Hayes, T. L., & Parker, D. (2001). The validity of unstructured panel interviews: More than meets the eye. Journal of Business and Psychology, 16, 35–49.
Dipboye, R. L., Wooten, K., & Halverson, S. K. (2004). Behavioral and situational interviews. In J. C. Thomas (Ed.), Comprehensive Handbook of Psychological Assessment, 4, industrial and organizational assessment (pp. 297–316). Hoboken, NJ: John Wiley & Sons Inc.
Dixon, M., Wang, S., Calvin, J., Dineen, B., & Tomlinson, E. (2002). The panel interview: A review of empirical research and guidelines for practice. Public Personnel Management, 31, 397–428.
Drake, J. D. (1997). The perfect interview: How to get the job you really want. New York: AMACOM.
Ekman, P., Friesen, W. V., & Ancoli, S. (1980). Facial signs of emotional experience. Journal of Personality and Social Psychology, 39, 1125–1134.
Ellis, A. P. J., West, B. J., Ryan, A. M., & DeShon, R. P. (2002). The use of impression management tactics in structured interviews: A function of question type? Journal of Applied Psychology, 87, 1200–1208.
Frazer, R. A., & Wiersma, U. J. (2001). Prejudice versus discrimination in the employment interview: We may hire equally, but our memories harbour prejudices. Human Relations, 54, 173–191.
Garcia, M. F., Posthuma, R. A., & Colella, A. (2008). Fit perceptions in the employment interview: The role of similarity, liking, and expectations. Journal of Occupational and Organizational Psychology, 81, 173–189.
Gibb, J. L., & Taylor, P. J. (2003).
Past experience versus situational employment: Interview questions in a New Zealand social service agency. Asia Pacific Journal of Human Resources, 41, 371–383.
Goldberg, C., & Cohen, D. J. (2004). Walking the walk and talking the talk: Gender differences in the impact of interviewing skills on applicant assessments. Group and Organization Management, 39, 369–384.
Haaland, S., & Christiansen, N. D. (2002). Implications of trait-activation theory for evaluating the construct validity of assessment center ratings. Personnel Psychology, 55, 137–163.
Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57, 639–683.
Hebl, M. R., & Kleck, R. E. (2002). Acknowledging one's stigma in the interview setting: Effective strategy or liability? Journal of Applied Social Psychology, 32, 223–249.
Hebl, M. R., & Skorinko, J. L. (2005). Acknowledging one's physical disability in the interview: Does "when" make a difference? Journal of Applied Social Psychology, 35, 2477–2492.
Higgins, C. A., & Judge, T. A. (2004). The effect of applicant influence tactics on recruiter perceptions of fit and hiring recommendations: A field study. Journal of Applied Psychology, 89, 622–632.
Higgins, C. A., Judge, T. A., & Ferris, G. R. (2003). Influence tactics and work outcomes: A meta-analysis. Journal of Organizational Behavior, 24, 89–106.
Hilliard, T., & Macan, T. (2009). Can mock interviewers' own personality influence their personality ratings of applicants? The Journal of Psychology: Interdisciplinary and Applied, 143(2), 161–174.
Honer, J., Wright, C. W., & Sablynski, C. J. (2007). Puzzle interviews: What are they and what do they measure? Applied H.R.M. Research, 11, 79–96.
Hosada, M., Stone-Romero, E., & Coats, G. (2003). The effects of physical attractiveness on job-related outcomes: A meta-analysis of experimental studies. Personnel Psychology, 56, 431–462.
Huffcutt, A. I., & Arthur, W., Jr.
(1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184–190.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Klehe, U. (2004). The impact of job complexity and study design on situational and behavior description interview validity. International Journal of Selection and Assessment, 12, 262–273.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86, 897–913.
Huffcutt, A. I., & Roth, P. L. (1998). Racial group differences in employment interview evaluations. Journal of Applied Psychology, 83, 179–189.
Huffcutt, A. I., Roth, P., & McDaniel, M. (1996). A meta-analytic investigation of cognitive ability in employment interview evaluations: Moderating characteristics and implications for incremental validity. Journal of Applied Psychology, 81, 459–473.
Huffcutt, A. I., & Woehr, D. J. (1999). Further analysis of employment interview validity: A quantitative evaluation of interviewer-related structuring methods. Journal of Organizational Behavior, 20, 549–560.
Jussim, L., Coleman, L. M., & Learch, L. (1987). The nature of stereotypes: A comparison and integration of three theories. Journal of Personality and Social Psychology, 52, 536–546.
Klehe, U. (2004). Choosing how to choose: Institutional pressures affecting the adoption of personnel selection procedures. International Journal of Selection and Assessment, 12, 327–342.
Klehe, U., Konig, C. J., Richter, G. M., Kleinmann, M., & Melcher, K. G. (2008). Transparency in structured interviews: Consequences for construct and criterion-related validity. Human Performance, 21, 107–137.
Klehe, U., & Latham, G. (2005). The predictive and incremental validity of the situational and patterned behavior description interviews for teamplaying behavior. International Journal of Selection and Assessment, 13, 108–115.
Klehe, U., & Latham, G. (2006). What would you do – really or ideally? Constructs underlying the behavior description interview and the situational interview in predicting typical versus maximum performance. Human Performance, 19, 357–382.
Knudstrup, M., Segrest, S. L., & Hurley, A. E. (2003). The use of mental imagery in the simulated employment interview simulation. Journal of Managerial Psychology, 18, 573–591.
Konig, C. J., Melchers, K. G., Kleinmann, M., Richter, G. M., & Klehe, U. (2007). Candidates' ability to identify criteria in nontransparent selection procedures: Evidence from an assessment center and a structured interview. International Journal of Selection and Assessment, 15, 283–292.
Krajewski, H. T., Goffin, R. D., McCarthy, J. M., Rothstein, M. G., & Johnston, N. (2006). Comparing the validity of structured interviews for managerial-level employees: Should we look to the past or focus on the future? Journal of Occupational and Organizational Psychology, 79, 411–432.



Kristof-Brown, A., Barrick, M. R., & Franke, M. (2002). Applicant impression management: Dispositional influences and consequences for recruiter perceptions of fit and similarity. Journal of Management, 28, 27–46.
Kutcher, E. J., & Bragger, J. D. (2004). Selection interviews of overweight job applicants: Can structure reduce the bias? Journal of Applied Social Psychology, 34, 1993–2022.
LaFrance, M., Hecht, M. A., & Levy, P. E. (2003). The contingent smile: A meta-analysis of sex differences in smiling. Psychological Bulletin, 129, 305–334.
Latham, G. P., & Budworth, M. (2006). The effect of training in verbal self-guidance on the self-efficacy and performance of Native North Americans in the selection interview. Journal of Vocational Behavior, 68, 516–523.
LeBreton, J. M., & Tonidandel, S. (2008). Multivariate relative importance: Extending relative weight analysis to multivariate criterion space. Journal of Applied Psychology, 93, 329–345.
Levashina, J., & Campion, M. A. (2006). A model of faking likelihood in the employment interview. International Journal of Selection and Assessment, 14, 299–316.
Levashina, J., & Campion, M. A. (2007). Measuring faking in the employment interview: Development and validation of an interview faking behavior scale. Journal of Applied Psychology, 92, 1638–1656.
Levine, S. P., & Feldman, R. S. (2002). Women and men's nonverbal behavior and self-monitoring in a job interview setting. Applied H.R.M. Research, 7, 1–14.
Lievens, F., Chasteen, C. S., Day, E. A., & Christiansen, N. D. (2006). Large-scale investigation of the role of trait activation theory for understanding assessment center convergent and discriminant theory. Journal of Applied Psychology, 91, 247–258.
Lievens, F., & Conway, J. M. (2001). Dimension and exercise variance in assessment center scores: A large-scale evaluation of multitrait–multimethod studies. Journal of Applied Psychology, 86, 1202–1222.
Lievens, F., De Corte, W., & Brysse, K. (2003).
Applicant perceptions of selection procedures: The role of selection information, belief in tests, and comparative anxiety. International Journal of Selection and Assessment, 11, 67?77. Lievens, F., & De Paepe, A. (2004). An empirical investigation of interviewer-related factors that discourage the use of high structure interviews. Journal of Organizational Behavior, 25, 29?46. Lievens, F., Harris, M. M., Van Keer, E., & Bisqueret, C. (2003). Predicting cross-cultural training performance: The validity of personality, cognitive ability, and dimensions measured by an assessment center and a behavior description interview. Journal of Applied Psychology, 88, 476?489. Lievens, F., Highhouse, S., & De Corte, W. (2005). The importance of traits and abilities in supervisors' hirability decisions as a function of method of assessment. Journal of Occupational and Organizational Psychology, 78, 453?470. Lievens, F., & Peeters, H. (2008). Interviewers' sensitivity to impression management tactics in structured interviews. European Journal of Psychological Assessment, 24, 174?180. Lipovsky, C. (2006). Candidates' negotiation of their expertise in job interviews. Journal of Pragmatics, 38, 1147?1174. Lopes, J., & Fletcher, C. (2004). Fairness of impression management in employment interviews: A cross-country study of the role of equity and Machiavellianism. Social Behavior and Personality, 32, 747?768. Mann, S., Vrij, A., & Bull, R. (2004). Detecting true lies: Police of?cers' ability to detect suspects' lies. Journal of Applied Psychology, 89, 137?149. Manusov, V., & Patterson, M. L. (2006). The Sage handbook of nonverbal communication. Thousand Oaks, CA: Sage Publications, Inc. Maurer, S. D. (2002). A practitioner-based analysis of interviewer job expertise and scale format as contextual factors in situational interviews. Personnel Psychology, 55, 307?327. Maurer, T. J., & Solaman, J. M. (2006). The science and practice of a structured employment interview coaching program. 
Personnel Psychology, 59, 433?456. Maurer, T. J., Solamon, J. M., & Lippstreu, M. (2008). How does coaching interviewees affect the validity of a structured interview? Journal of Organizational Behavior, 29, 355?371. McCarthy, J., & Gof?n, R. (2004). Measuring job interview anxiety: Beyond weak knees and sweaty palms. Personnel Psychology, 57, 607?637. McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599?616. McFarland, L. A., Ryan, A. M., & Krista, S. D. (2003). Impression management use and effectiveness across assessment methods. Journal of Management, 29, 641?661. McFarland, L. A., Ryan, A. M., Sacco, J. M., & Krista, S. D. (2004). Examination of structured interview ratings across time: The effects of applicant race, rater race, and panel composition. Journal of Management, 30, 435?452. McKay, P. F., & Davis, J. (2008). Traditional selection methods as resistance to diversity in organizations. In K. Thomas (Ed.), Diversity resistance in organizations (pp. 151–174). New York: Taylor Francis Group/Lawrence Erlbaum. Middendorf, C. H., & Macan, T. H. (2002). Note-taking in the employment interview: Effects on recall and judgments. Journal of Applied Psychology, 87, 293?303. Morgeson, F. P., Reider, M. H., Campion, M. A., & Bull, R. A. (2008). Review of research on age discrimination in the employment interview. Journal of Business and Psychology, 22, 223?232. Moscoso, S. (2000). Selection interview: A review of validity evidence, adverse impact and applicant reactions. International Journal of Selection and Assessment, 8, 237?247. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. NY: McGraw-Hill. Peeters, H., & Lievens, F. (2006). Verbal and nonverbal impression management tactics in behavior description and situational interviews. International Journal of Selection and Assessment, 14, 206?222. Ployhart, R. E. 
(2006). Staf?ng in the 21st century: New challenges and strategic opportunities. Journal of Management, 32, 868?897. Posthuma, R. A., Morgeson, F. P., & Campion, M. A. (2002). Beyond employment interview validity: A comprehensive narrative review of recent research and trends over time. Personnel Psychology, 55, 1?81. Powers, P. (2004). Winning Job Interviews. New Jersey: Career Press. Purkiss, S. L., Segrest, Perrewe, P. L., Gillespie, T. L., Mayes, B. T., & Ferris, G. R. (2006). Implicit sources of bias in employment interview judgments and decisions. Organizational Behavior and Human Decision Processes, 101, 152?167. Reilly, N. P., Bocketti, S. P., Maser, S. A., & Wennet, C. L. (2006). Benchmarks affect perceptions of prior disability in a structured interview. Journal of Business and Psychology, 20, 489?500. Ridge, R. D., & Reber, J. S. (2002). “I think she's attracted to me”: The effect of men's beliefs on women's behavior in a job interview scenario. Basic and Applied Social Psychology, 24, 1?14. Roberts, L. L., & Macan, T. H. (2006). Disability disclosure effects on employment interview ratings of applicants with nonvisible disabilities. Rehabilitation Psychology, 51, 239?246. Roth, P. L., Van Iddekinge, C. H., Huffcutt, A. I., Eidson, C. E., Jr., & Bobko, P. (2002). Corrections for range restrictions in structured interview ethnic group differences: The values may be larger than researchers thought. Journal of Applied Psychology, 87, 369?376. Roth, P. L., Van Iddekinge, C. H., Huffcutt, A. I., Eidson, C. E., Jr., & Schmit, M. J. (2005). Personality saturation in structured interviews. International Journal of Selection and Assessment, 13, 261?273. Ryan, A. M., McFarland, L., Baron, H., & Page, R. (1999). An international look at selection practices: Nation and culture as explanations for variability in practice. Personnel Psychology, 52, 359?391. Sacco, J. M., Scheu, C. R., Ryan, A. M., & Schmitt, N. (2003). 
An investigation of race and sex similarity effects in interviews: A multilevel approach to relational demography. Journal of Applied Psychology, 88, 852?865. Saks, A. M. (2006). Multiple predictors and criteria of job search success. Journal of Vocational Behavior, 68, 400?415. Saks, A. M., & McCarthy, J. M. (2006). Effects of discriminatory interview questions and gender on applicant reactions. Journal of Business and Psychology, 21, 175?191. Salgado, J. F., & Moscoso, S. (2002). Comprehensive meta-analysis of the construct validity of the employment interview. European Journal of Work and Organizational Psychology, 11, 299?324. Schmidt, F. L., & Rader, M. (1999). Exploring the boundary conditions for interview validity: Meta-analytic validity ?ndings for a new interview type. Personnel Psychology, 52, 445?464.


Schlenker, B. R. (1980). Impression management: The self-concept, social identity, and interpersonal relations. Monterey, CA: Brooks/Cole.
Schmidt, F. L., & Zimmerman, R. D. (2004). A counterintuitive hypothesis about employment interview validity and some supporting evidence. Journal of Applied Psychology, 89, 553–561.
Sears, G. J., & Rowe, P. M. (2003). A personality-based similar-to-me effect in the employment interview: Conscientiousness, affect- versus competence-mediated interpretations, and the role of job relevance. Canadian Journal of Behavioural Science, 35, 13–24.
Silvester, J., & Anderson, N. (2003). Technology and discourse: A comparison of face-to-face and telephone employment interviews. International Journal of Selection and Assessment, 11, 206–214.
Silvester, J., Anderson-Gough, F. M., Anderson, N. R., & Mohamed, A. R. (2002). Locus of control, attributions and impression management in the selection interview. Journal of Occupational and Organizational Psychology, 75, 59–76.
Simola, S. K., Taggar, S., & Smith, G. W. (2007). The employment selection interview: Disparity among research-based recommendations, current practices and what matters to Human Rights Tribunals. Canadian Journal of Administrative Sciences, 24, 30–44.
Sniad, T. (2007). "It's not necessarily the words you say…it's your presentation": Teaching the interactional text of the job interview. Journal of Pragmatics, 39, 1974–1992.
Straus, S. G., Miles, J. A., & Levesque, L. L. (2001). The effects of videoconference, telephone, and face-to-face media on interviewer and applicant judgments in employment interviews. Journal of Management, 27, 363–381.
Sue-Chan, C., & Latham, G. P. (2004). The situational interview as a predictor of academic and team performance: A study of the mediating effects of cognitive ability and emotional intelligence. International Journal of Selection and Assessment, 12, 312–320.
Sussmann, M., & Robertson, D. U. (1986). The validity of validity: An analysis of validation study designs. Journal of Applied Psychology, 71, 461–468.
Taylor, P. J., & Small, B. (2002). Asking applicants what they would do versus what they did do: A meta-analytic comparison of situational and past behaviour employment interview questions. Journal of Occupational and Organizational Psychology, 75, 277–294.
Topor, D. J., Colarelli, S. M., & Han, K. (2007). Influences of traits and assessment methods on human resource practitioners' evaluations of job applicants. Journal of Business and Psychology, 21, 361–376.
Townsend, R. J., Bacigalupi, S. C., & Blackman, M. C. (2007). The accuracy of lay integrity assessments in simulated employment interviews. Journal of Research in Personality, 41, 540–557.
Tsai, W., Chen, C., & Chiu, S. (2005). Exploring boundaries of the effects of applicant impression management tactics in job interviews. Journal of Management, 31, 108–125.
van Dam, K. (2003). Trait perception in the employment interview: A five-factor model perspective. International Journal of Selection and Assessment, 11, 43–55.
van der Zee, K. I., Bakker, A. B., & Bakker, P. (2002). Why are structured interviews so rarely used in personnel selection? Journal of Applied Psychology, 87, 176–184.
Van Iddekinge, C. H., McFarland, L. A., & Raymark, P. H. (2007). Antecedents of impression management use and effectiveness in a structured interview. Journal of Management, 33, 752–773.
Van Iddekinge, C. H., Raymark, P. H., Eidson, C. E., Jr., & Attenweiler, W. J. (2004). What do structured selection interviews really measure? The construct validity of behavior description interviews. Human Performance, 17, 71–93.
Van Iddekinge, C. H., Raymark, P. H., & Roth, P. L. (2005). Assessing personality with a structured employment interview: Construct-related validity and susceptibility to response inflation. Journal of Applied Psychology, 90, 536–552.
Van Iddekinge, C. H., Sager, C. E., Burnfield, J. L., & Heffner, T. S. (2006). The variability of criterion-related validity estimates among interviewers and interview panels. International Journal of Selection and Assessment, 14, 193–205.
Weiss, B., & Feldman, R. S. (2006). Looking good and lying to do it: Deception as an impression management strategy in job interviews. Journal of Applied Social Psychology, 36, 1070–1086.
Whetzel, D. L., Baranowski, L. E., Petro, J. M., Curtin, P. J., & Fisher, J. F. (2003). A written structured interview by any other name is still a selection instrument. Applied H.R.M. Research, 8, 1–16.
Wiesner, W., & Cronshaw, S. (1988). A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275–290.
Wilk, S. L., & Cappelli, P. (2003). Understanding the determinants of employer use of selection methods. Personnel Psychology, 56, 103–124.
Williams, J. (2004). Sell yourself: Master the job interview process. Arlington, TX: Principle Publications.
Woodzicka, J. A. (2008). Sex differences in self-awareness of smiling during a mock job interview. Journal of Nonverbal Behavior, 32, 109–121.
Woodzicka, J. A., & LaFrance, M. (2005). The effects of subtle sexual harassment on women's performance in a job interview. Sex Roles, 53, 67–77.
Wright, P. M., Lichtenfels, P. A., & Pursell, E. D. (1989). The structured interview: Additional studies and a meta-analysis. Journal of Occupational Psychology, 62, 191–199.
Young, M. J., Behnke, R. R., & Mann, Y. M. (2004). Anxiety patterns in employment interviews. Communication Reports, 17, 49–57.


