Thou Shalt Not Lie! Exploring and testing countermeasures against faking intentions and faking in selection interviews
Abstract
Many applicants use faking in interviews to present themselves more favorably than they really are. There is widespread concern that this may affect interview validity. As previous research on countermeasures is sparse, we conducted an exploratory study to identify the most promising countermeasures. For technology-mediated interviews, these were warnings referring to a criterion-based content analysis and lie detection algorithms focusing on nonverbal or paraverbal cues. For face-to-face interviews, these were objective questions and a personable interviewer. We then investigated the effects of these countermeasures on faking intentions in two experimental vignette studies and on faking in another simulated interview study. However, none of the countermeasures could reduce faking intentions or faking. Additionally, in the vignette studies, warnings impaired applicant reactions.
Practitioner points
- There is widespread fear that faking can jeopardize the criterion-related validity of selection interviews.
- Faking in interviews can hardly be detected, which is why we investigated potential countermeasures to prevent faking.
- In three experiments, none of the different countermeasures were effective in reducing faking intentions or faking.
- Warnings against faking in interviews led to negative applicant reactions in some of our studies.
Selection interviews are one of the most popular methods for personnel selection (Wilk & Cappelli, 2003). However, applicants commonly use faking to present themselves favorably in selection interviews (Levashina & Campion, 2007). In line with this, Weiss and Feldman (2006), for example, found that 81% of interviewees told at least one lie in a short 15-min interview, with an average of 2.19 lies per interview. Furthermore, there is widespread fear that faking can jeopardize the criterion-related validity of selection procedures (e.g., Salgado, 2016). In addition, it has been found that faking behavior can hardly be detected in interviews (see Melchers et al., 2020, for a review). Because of this, knowledge on suitable countermeasures that prevent or at least reduce faking in selection interviews is of relevance for high-stakes selection settings. However, hardly any research on such countermeasures is available (Melchers et al., 2020). Therefore, the first aim of our research was to identify possible countermeasures and to provide an initial evaluation of their effects on faking intentions or faking behavior in interviews.
In addition to the unclear effectiveness of possible countermeasures to reduce faking in interviews, another question concerns potential side effects of these countermeasures. Specifically, some countermeasures such as the use of warnings not to fake might lead to negative applicant reactions (Burns et al., 2015; Lammers, 2017). However, given that interviews are often not only used to select but also to recruit applicants (Dipboye et al., 2012), countermeasures that lead to negative applicant reactions may impair the recruitment function of interviews. Therefore, the second aim of this paper was to examine the effects of potential countermeasures against faking in interviews on applicant reactions.
1 THEORETICAL BACKGROUND OF INTERVIEW FAKING
Levashina and Campion (2007) define faking in interviews as “conscious distortions of answers to the interview questions to obtain a better score on the interview and/or otherwise create favorable perceptions” (p. 1639). According to them, four subfacets of faking can be distinguished: ingratiation, image protection, slight image creation, and extensive image creation. Ingratiation represents a behavior in which applicants insincerely compliment the interviewer or organization, or try to express the same values, beliefs, or opinions as them. Image protection means that applicants deliberately omit or disguise disadvantageous information or distance themselves from it to maintain the image of a good applicant. In the case of slight image creation, they exaggerate their competencies, or their fit with the organization, or the job. Finally, applicants show extensive image creation when they invent qualifications or stories, or present accomplishments of others as their own (Levashina & Campion, 2007).
There are different theoretical models to explain the occurrence and amount of faking (e.g., Levashina & Campion, 2006; McFarland & Ryan, 2000, 2006; Roulin, Krings & Binggeli, 2016). Although these models differ in detail—and not all of them are specifically directed at interview faking—they all have in common that situational, motivational, and ability antecedents are relevant for the occurrence of faking. This is also the case with the faking model by Levashina and Campion (2006), which is the only model that explicitly refers to faking in interviews.
The model by Levashina and Campion (2006) assumes that faking depends on applicants' capacity, willingness, and opportunity to fake. Capacity to fake is defined by capabilities that enable applicants to fake effectively. This includes, for example, oral skills, cognitive ability, and knowledge of the constructs being measured. Willingness to fake refers to dispositional and situational antecedents that influence the applicants' tendency to distort their answers. Dispositional antecedents include personality traits such as Agreeableness, Honesty-humility, or Integrity, and situational antecedents include aspects such as the probability of getting caught or unfair treatment. Finally, opportunity to fake is defined by interview characteristics that cannot be influenced by applicants and that determine to what extent faking is possible. This involves, for example, the purpose of the interview (selection vs. recruitment), whether an interview is structured or not, and what the interview questions measure.
2 PREVIOUS RESEARCH ON FAKING IN INTERVIEWS
Research has shown that faking can have a positive effect on interview outcomes. Some studies found relationships between self-reported faking behavior and interview outcomes in real selection interviews (e.g., Levashina & Campion, 2007), or between self-reported faking behavior in general and ratings of interviewees' performance in simulated interview settings (Buehl & Melchers, 2017; Ingold et al., 2015; but see Ho et al., 2021). Furthermore, experimental studies found that interviewees in mock interviews were able to obtain higher interview ratings in a condition in which they were instructed to answer as they would in a real selection setting than in a condition in which they were instructed to provide honest answers (Buehl et al., 2019; Van Iddekinge et al., 2005).
In addition to evidence that faking can improve interview performance, previous research also revealed that the use of faking or the intention to fake in interviews is related to personality variables that are also associated with important work outcomes such as task performance, organizational citizenship behavior, or counterproductive work behavior. Specifically, previous research found positive correlations between interview faking or faking intentions and all facets of the Dark Triad (Narcissism, Machiavellianism, and Psychopathy) and negative correlations with Honesty-humility (e.g., Bill et al., 2020; Bourdage et al., 2018; Law et al., 2016; Roulin & Bourdage, 2017; Roulin & Krings, 2016). For the former, meta-analytic research by O'Boyle et al. (2012) revealed negative correlations between traits from the Dark Triad and task performance and positive correlations with counterproductive work behavior. In addition, other primary studies also revealed negative relationships between the Dark Triad and organizational citizenship behavior (e.g., Szabó et al., 2018; Webster & Smith, 2019). Similarly, the meta-analysis by Lee et al. (2019) found that Honesty-humility is strongly negatively related to counterproductive work behavior and slightly positively to task performance and organizational citizenship behavior. Given the evidence that faking can lead to better evaluations in selection interviews and also given that it is related to personality traits that are related to negative work outcomes, it seems possible that faking might indeed impair the criterion-related validity of selection interviews.
In addition to a potentially higher saturation of interview scores with undesirable personality attributes, Salgado (2016) also assumed that faking not only leads to increased mean scores in a selection procedure but also to decreased variation of applicants' scores. He suggested that this leads to a homogenization of scores, which in turn impairs the criterion-related validity.
So far, there are only a few studies that have dealt with the potential influence of faking in selection interviews on criterion-related validity. However, at least partial support for the predicted negative impact of interview faking on interview validity was found. Specifically, Henle et al. (2019) found medium-sized negative correlations between three different types of self-reported interview faking behaviors (inventing, embellishing, and omitting) and self-reported job performance. Additionally, they found large positive correlations between the different faking behaviors and organizational as well as interpersonal deviance. In contrast to this, Ingold et al. (2015) found no significant relationship between self-reported faking behavior in a simulated interview and supervisor ratings of interviewees' job performance. Finally, Buehl et al. (2019) used an interview to predict academic performance and interviewed students in an honest and an applicant condition. They found that interview scores in the applicant condition were a better predictor for students' grade point average than in the honest condition. In contrast, interview scores in the honest condition were a better predictor for university citizenship behavior than in the applicant condition.
Although results concerning the effects of faking on interview validity are mixed, they suggest that faking in interviews can affect criterion-related validity in at least some situations. Therefore, it is of practical relevance to extend knowledge on suitable ways to prevent faking behavior in selection interviews. In addition to face-to-face interviews, technology-mediated interviews are becoming increasingly important due to technological progress and the COVID-19 pandemic (Melchers et al., 2021). These can either be videoconference interviews, in which the interviewer and interviewee communicate via microphone and camera, or asynchronous video interviews, in which the questions are presented on the screen and the interviewees' answers are recorded using a camera and microphone (Lukacik et al., 2022). Previous research revealed differences between face-to-face and technology-mediated interviews concerning interviewees' performance, interviewees' use of impression management (IM), and interviewees' perceptions of and reactions to these interviews.
Concerning interview performance, previous research found that interviewees' performance in videoconference interviews is lower compared to face-to-face interviews (Blacksmith et al., 2016; Melchers et al., 2021). Furthermore, Basch et al. (2021) found that the difference in interview performance ratings was mediated by IM, meaning that the lower level of IM in videoconference interviews led to lower performance ratings. This is also in line with a result by Basch et al. (2020) that the perceived possibility to show IM is lower for videoconference interviews and asynchronous video interviews compared to face-to-face interviews. However, it should be noted that even if the perceived opportunity to use IM (Basch et al., 2020) and the reported IM behavior were lower for videoconference interviews (Basch et al., 2021), the respective mean values turned out to be around the midpoint of the corresponding scale. Therefore, although this previous research has not distinguished between faking and honest IM, we suspect that faking also matters in technology-mediated interviews and thus may be associated with a similar possible threat to criterion-related validity. Furthermore, with regard to applicant reactions, it has been found that technology-mediated interviews are perceived more negatively than face-to-face interviews (Basch et al., 2020; Blacksmith et al., 2016).
3 COUNTERMEASURES AGAINST FAKING
Research has shown that interviewers are hardly able to detect faking in interviews. Roulin et al. (2015), for example, found that interviewers' detection rate of faking tactics was only between 11.8% and 18.5%. Additionally, they only found nonsignificant correlations between interviewees' self-reported and interviewers' perceived use of faking tactics. In line with this, the review by Melchers et al. (2020) pointed out that faking in interviews can currently not be detected. However, there is at least one promising recent approach by Roulin and Powell (2018) that might help to detect faking. Specifically, Roulin and Powell used a criterion-based content analysis (CBCA) adapted from the legal context to detect faking. To do so, the interviewees' answers were transcribed and coded according to 14 CBCA indicators. These indicators were, for example, the quantity of details, logical structure, and contextual embedding. With this approach, it was possible to achieve a faking/honest assessment with an accuracy of up to 63.4%. However, Roulin and Powell (2018) emphasized that this approach is not yet ready for use in organizations. Thus, it is important to consider measures to reduce faking as long as it is not yet possible to detect faking when it occurs.
So far, different approaches have been investigated to reduce faking in job interviews but only some of them seem to be promising (see Melchers et al., 2020). These approaches were the use of follow-up questions, different types of interview questions, the number and type of interviewers, and warnings against faking. With regard to follow-up questions, Levashina and Campion (2007, Study 5) investigated whether these questions are suitable to reduce faking in a structured selection interview. However, it turned out that follow-up questions had exactly the opposite of the expected effect and led to more faking. Concerning the effect of different question types on faking, Levashina and Campion (2007, Study 5) found that situational questions were more prone to faking than past behavior questions and this effect seemed to be mainly driven by slight image creation and ingratiation. However, in a survey among recent applicants, Bourdage et al. (2018) found that applicants reported less image protection, and less slight and extensive image creation in interviews when situational questions were asked compared to when no such questions were asked. No difference was found for past behavior questions. In addition, Bourdage et al. (2018) found that applicants reported more deceptive ingratiation when resume-based questions were asked compared to when no such questions were asked. Finally, concerning the effect of the number and type of interviewers on faking, Bourdage et al. (2018) also hypothesized that panel interviews or interviews with a future supervisor lead to less faking. However, these hypotheses could not be confirmed. Interviews with a future supervisor even led to more deceptive ingratiation.
Another approach to reduce faking in selection interviews is to use warnings. This approach has repeatedly been used in personality tests and biodata inventories (e.g., Dwight & Donovan, 2003; Kluger & Colella, 1993; McFarland & Ryan, 2006; Schrader & Osburn, 1977). Dwight and Donovan (2003) reviewed 15 studies investigating the impact of warnings on noncognitive selection measures (including personality inventories, biodata inventories, and self-assessments). In 12 of these studies, the samples consisted of real job applicants or military recruits. They found that identification warnings (i.e., warnings about the inclusion of items that are capable of identifying faking) had almost no effect on test scores. However, warnings about consequences (i.e., disciplinary actions) or combination warnings consisting of an identification warning and a warning about consequences had a small negative effect on test scores. In their own experimental study, Dwight and Donovan (2003) examined a student sample and also found that the only warning that led to significantly lower personality scores compared to an unwarned condition was a combination warning. Additionally, they found that an identification warning, a warning about consequences, and a combined warning reduced the ratio of potential fakers within the top ten students. Therefore, Dwight and Donovan (2003) assumed that all three warnings were suitable to reduce faking. In line with this, McFarland and Ryan (2006) told their participants in a laboratory experiment that a social desirability scale was included to identify dishonest individuals in a personality test. They found that faking intentions were less likely and that personality scores were indeed lower on four of the Big Five personality dimensions among participants who received this warning.
In contrast to personality testing, there are only two studies that have examined the impact of warnings on faking or faking intentions in selection interviews so far. In the first of these, Law et al. (2016) compared an identification warning, a moral warning, and a combination warning with an unwarned condition. In the identification warning condition, participants were told that the interview would contain questions suitable for measuring honesty. In the moral warning condition, it was pointed out that honesty was the right thing to do and fair to all applicants. Finally, in the combined warning condition, participants were told that the interview contained questions suitable for measuring honesty and that they would be rewarded (i.e., considered for a $50 prize) if they were honest. Law et al. (2016) found that only the identification warning led to less faking compared to the unwarned condition. However, in a recent experimental vignette study, Bill et al. (2020) found that a verification warning had hardly any effect on participants' faking intentions. In this study, participants were told that their answers would be verified via self-named references (e.g., instructors, coworkers, and previous supervisors) and that lying would lead to exclusion from the application process. Interestingly though, Bill et al. (2020) found a small but significant positive effect of a verification warning on intentions to use honest IM. Thus, it might be that participants tried to compensate for the lower possibility to fake by using alternative strategies such as emphasizing their actual qualities more strongly.
Taken together, the available knowledge on potential countermeasures in selection interviews is limited and results are mixed even for the few countermeasures that have already been investigated. Therefore, more information on potential countermeasures to reduce faking in interviews is needed.
4 EFFECTS OF COUNTERMEASURES ON APPLICANT REACTIONS
Interviewers often not only want to select but also recruit applicants (Dipboye et al., 2012). Therefore, it is also important to know whether potential countermeasures against faking provoke negative applicant reactions that impair the recruiting function of interviews. According to the model by Gilliland (1993), there are ten different rules that determine whether a selection system is perceived as procedurally fair or unfair. This perception in turn leads to different applicant reactions. The ten procedural rules are related to three general aspects of selection systems: formal characteristics, explanations, and interpersonal treatment. They concern aspects such as the opportunity to perform (i.e., the possibility for applicants to demonstrate their qualifications), information about the selection process, honesty towards applicants, the interpersonal effectiveness of the administrators of the selection process, and the propriety of the questions asked during the selection process.
To ensure that countermeasures do not lead to negative applicant reactions, they should not violate the different justice rules. However, a possible problem with warnings and other countermeasures is that they might violate some of the procedural rules from Gilliland's (1993) fairness model such as applicants' perceived opportunity to perform. In line with this, McFarland (2003) argued that if organizations take precautions against faking, applicants may feel restricted in the way they present themselves. Furthermore, it might also be possible that warnings violate the interpersonal effectiveness rule. The violation of this rule depends on the extent to which applicants perceive their treatment as respectful and warm (Gilliland, 1993). When applicants are warned about faking countermeasures, they might take this as a suspicion that they would otherwise fake, which could be perceived as disrespectful.
Besides these theoretical assumptions, there is at least some empirical evidence concerning effects of warnings on procedural fairness and applicant reactions. The majority of this research again stems from the field of personality testing. Some of these studies found no relationships between identification warnings and overall fairness perceptions, procedural fairness perceptions, recommendation intentions, or intentions to take a job offer (Lammers, 2017; McFarland, 2003). However, some studies found that negatively framed identification warnings (i.e., dishonest answering will invalidate the test results) impaired perceived interpersonal justice, reapplication intentions, perceptions of the organization's benevolence, and other applicant reaction variables, and these warnings also led to higher test anxiety in personality tests and other assessment procedures (Burns et al., 2015; Lammers, 2017). In contrast, however, if they were positively framed (e.g., honesty qualifies for a $50 prize) these warnings could also lead to a higher motivation in assessment procedures (Burns et al., 2015). Finally, in the only study that examined the effect of warnings in selection interviews on applicant reactions, Law et al. (2016) found that none of their three warnings (an identification warning, a moral warning, and a combination warning) affected performance anxiety and procedural justice. Thus, taken together, the results are mixed and partially depend on the framing of the warnings.
5 THE PRESENT RESEARCH
As described above, there has been relatively little research on measures that can be applied to reduce faking or faking intentions in interviews. The few promising findings to date suggest that certain question types may be more or less susceptible to faking (Bourdage et al., 2018; Levashina & Campion, 2007) and that warnings may reduce faking in interviews (Bill et al., 2020; Law et al., 2016). With regard to the question type, however, the findings are mixed and, depending on what one wants to know from an applicant, it is not always possible to choose a specific question type. With regard to warnings, the findings are also mixed (cf. Bill et al., 2020) and seem to be impacted by how warnings are designed in terms of content.
To compare different countermeasures with respect to their effectiveness and to identify potentially effective warnings, we conducted an exploratory study as a first step. In addition, we also wanted to look for additional countermeasures, since there are only relatively few different approaches to reduce faking in job interviews to date. Furthermore, given that the use of technology-mediated interviews also allows the potential use of technology-based countermeasures such as the use of algorithms that analyze video or audio recordings (see Lukacik et al., 2022), we decided to cover countermeasures that seem more suitable for technology-mediated interviews, as well as countermeasures that can also be used in face-to-face interviews. We then conducted three experimental studies. The goal of Study 2 was to investigate how effective the most promising countermeasures for technology-mediated interviews are in reducing faking intentions in videoconference interviews in a vignette study. In Study 3, simulated videoconference interviews were then conducted to determine whether these countermeasures have a comparable effect on faking behavior. In Study 4, we then examined the effect of the most promising countermeasures for face-to-face interviews on faking intentions in another vignette study. By doing so, we want to test the following hypotheses and seek an answer to the subsequent research question:
H1: Countermeasures against faking reduce faking intentions and faking behavior in selection interviews.
RQ1: Which countermeasures are most suitable to reduce faking intentions and faking behavior in selection interviews?
In addition to faking (i.e., deceptive IM), we investigated whether the countermeasures identified in the exploratory study also affect honest IM intentions and honest IM. As described above, a verification warning against faking was able to increase honest IM intentions in a previous study (Bill et al., 2020). Even though the effect was very small and this result refers specifically to a verification warning, we suspect that applicants might try to compensate for the limited possibility to fake in general by using more honest IM such as emphasizing their actual qualities more strongly. Therefore, we pose the following hypothesis:
H2: Countermeasures against faking increase honest IM intentions and honest IM behavior in selection interviews.
As noted above, we assume that countermeasures may violate Gilliland's (1993) procedural fairness rules by limiting applicants' perceived opportunity to perform and/or impairing the interpersonal effectiveness rule. Results from the field of personality testing and other assessment procedures also provided partial evidence that countermeasures can impair applicant reactions (Burns et al., 2015; Lammers, 2017). Therefore, the final aim of the present research was to investigate whether the countermeasures from the exploratory study impair applicant reactions. Thus, we investigated potential effects in the three main studies to test the following hypothesis:
H3: Countermeasures against faking negatively affect applicant reactions to selection interviews.
6 STUDY 1
6.1 Methods
The sample consisted of 148 German-speaking volunteers (108 females, 40 males). Data from another 40 participants were excluded because they failed the attention check items (see below). Participants' age ranged from 18 to 60 (M = 24.95 years, SD = 7.39) and 90 of them were currently working. The majority (95.95%) of them had prior interview experience with an average of 6.14 (SD = 9.61) interviews. Participants were recruited through social media or contacted directly.
The study was conducted as an online study. After participants had completed an informed consent form, we asked them separately to think of a technology-mediated interview and a face-to-face interview. For each kind of interview, we showed them descriptions of the respective interview type. The description for technology-mediated interviews covered videoconference interviews as well as asynchronous interviews. Then we showed them a list of different possible countermeasures against faking that were suitable for technology-mediated or for face-to-face interviews1 and asked them to rate whether these measures would prevent them from “answering dishonestly”. Finally, participants were asked for some demographic information.
The different countermeasures (see Table 1) were inspired by the previous literature on faking/lying and/or on the interview context (e.g., Becker & Colquitt, 1992; Bill et al., 2020; Hart et al., 2010; Lammers, 2017; Law et al., 2016; Masip et al., 2016; Naim et al., 2016; Reinhard et al., 2013; Roulin & Powell, 2018; Sporer & Schwandt, 2006; Vrij et al., 2019). In addition, we added further measures ourselves. We also had an open-ended question for each kind of interview so that participants could suggest further countermeasures.
Faking countermeasures | M | (SD) |
---|---|---|
Technology-mediated interviews: | ||
Everything you say is recorded and checked for faking based on content aspects.* | 4.03a | (1.13) |
An algorithm evaluates your nonverbal behavior to detect dishonest behavior. | 3.86a | (1.11) |
An algorithm evaluates your paraverbal behavior to detect dishonest behavior. | 3.86a | (1.12) |
An algorithm evaluates the content of the answers to detect dishonest behavior. | 3.48a | (1.17) |
The interview is evaluated using a method based on artificial intelligence. | 3.22b | (1.17) |
The time for reflection before each answer is limited to a minimum. | 3.21 | (1.30) |
Face-to-face interviews: | ||
You will be asked objective, verifiable questions. | 4.07a | (1.25) |
Faking is punished with negative consequences if discovered. | 4.03a | (1.14) |
The same interview will be conducted with you again after a certain period of time to check whether you actually give the same answers again. | 3.78a | (1.23) |
The organization strongly emphasizes values such as honesty, sincerity and trust. | 3.72a | (1.23) |
Faking interferes with testing the best fit between you and the potential job. | 3.64a | (1.14) |
The interviewer gives the impression of having a lot of experience with conducting job interviews. | 3.62a | (1.02) |
Only honest answers enable a fair selection of applicants. | 3.41a | (1.21) |
Faking is ethically inappropriate and therefore, should be refrained from. | 3.33a | (1.25) |
The interviewer claims to be able to recognize dishonest answers. | 3.28b | (1.30) |
The interviewer is a psychologist. | 3.19 | (1.25) |
Faking is unfair to other applicants. | 3.08 | (1.35) |
- Note: *Item refers to a criterion-based content analysis.
- a Difference between the respective mean value and 3.00 as the midpoint of the scale is significant at p < .01.
- b Difference between the respective mean value and 3.00 as the midpoint of the scale is significant at p < .05.
For each of the different countermeasures against faking in technology-mediated and face-to-face interviews, we asked participants “Which of the following measures/circumstances/hints (wording was adjusted to fit each of the following items) would prevent you from answering dishonestly in a face-to-face selection interview/selection interview via videoconference or video recording?”. Then, they had to rate whether the respective measure (e.g., “An algorithm evaluates the content of your answers to detect dishonest behavior.”) would prevent them from faking on a 5-point Likert scale from 1 = does not apply at all to 5 = fully applies. The open-ended question for each of the two kinds of interviews asked participants “What other measures would keep you from answering dishonestly in a face-to-face selection interview/selection interview via videoconference or video recording?”.
To check whether participants were attentive during the survey, we used two attention check items (“I read the questions in this study carefully” and “I don't read the questions in this survey”). These items were interspersed with the other items and had to be rated on the same 5-point Likert scale. Participants who rated the first item as 3 or less, or the second item as 2 or more were excluded.
6.2 Results
As can be seen by looking at the means in Table 1, all countermeasures seem to be at least moderately effective at reducing faking intentions given that most of the means were significantly larger than the scale midpoint of 3. However, the means also showed some variability, which suggests that some countermeasures are more promising than others. Descriptively, warnings referring to a criterion-based content analysis and to lie detecting algorithms focusing on nonverbal or paraverbal cues had the highest mean scores for technology-mediated interviews. Similarly, the use of objective questions and negative consequences if faking is discovered had the highest mean scores for face-to-face interviews. To test this impression statistically, we conducted separate one-way repeated measures analyses of variance (ANOVAs) for the two types of interviews with Greenhouse-Geisser corrections.
For the technology-mediated interviews, the ANOVA found that the different countermeasures indeed differed in their potential effectiveness, F(3.85, 558.70) = 24.04, p < .01, η² = 0.14. Bonferroni-corrected post hoc tests revealed that the three descriptively most promising countermeasures were significantly more likely to prevent participants from faking than the remaining countermeasures, all ps < .01.
Similarly, the ANOVA for face-to-face interviews also revealed a significant effect, F(6.65, 937.42) = 15.06, p < .01, η² = 0.10. Bonferroni-corrected post hoc tests revealed that the two descriptively most promising countermeasures were indeed significantly more likely to prevent participants from faking than 6 of the 10 remaining countermeasures, all ps < .05.
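To make the analytic steps concrete, the following is a minimal, hypothetical sketch of this analysis in Python with the pingouin library (not the authors' actual code). The file name, the column names, and the long-format layout (one row per participant and countermeasure) are assumptions for illustration.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x countermeasure
df = pd.read_csv("study1_long.csv")  # columns: participant, countermeasure, rating

# One-sample t test of a single countermeasure's mean against the scale
# midpoint of 3.00 (as in the notes to Table 1)
objective = df.loc[df["countermeasure"] == "objective_questions", "rating"]
print(pg.ttest(objective, 3.0))

# One-way repeated-measures ANOVA; correction=True reports
# Greenhouse-Geisser-corrected p values alongside the uncorrected ones
print(pg.rm_anova(data=df, dv="rating", within="countermeasure",
                  subject="participant", correction=True))

# Bonferroni-corrected pairwise post hoc comparisons between countermeasures
# (pg.pairwise_ttests in older pingouin versions)
print(pg.pairwise_tests(data=df, dv="rating", within="countermeasure",
                        subject="participant", padjust="bonf"))
```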
In addition to the ratings, the answers to the open-ended questions revealed another interesting result. Specifically, a relatively high number of 25 answers implied that a personable and/or friendly interviewer would prevent participants from faking (see Table 2).
Reason | Frequency | Percentage |
---|---|---|
Reasons for the intention to avoid faking during a technology-mediated interview: | |
Checking the answers afterwards | 9 | 12.16 |
A recording of the interview | 7 | 9.46 |
Contradicts my values | 4 | 5.41 |
Personable interviewer | 3 | 4.05 |
Limited time to answer | 3 | 4.05 |
Live video | 3 | 4.05 |
Interview will be repeated | 3 | 4.05 |
Verifiability of given answers | 2 | 2.70 |
Rejections in the case of dishonest answers | 2 | 2.70 |
Indication that answering honestly is fair | 2 | 2.70 |
Psychologists will evaluate the answers afterwards | 2 | 2.70 |
Perceived honesty of the interviewer | 2 | 2.70 |
Job-specific questions | 2 | 2.70 |
Others | 30 | 40.54 |
Total | 74 | 100.00 |
Reasons for the intention to avoid faking during a face-to-face interview: | ||
Personable interviewer | 25 | 18.12 |
Perceived honesty of the interviewer | 11 | 7.97 |
Contradicts my values | 10 | 7.25 |
Interviewer known to me | 8 | 5.80 |
Answers are objectively verifiable | 5 | 3.62 |
Many interviewers | 5 | 3.62 |
Checking the answers afterwards | 5 | 3.62 |
Negative consequences | 4 | 2.90 |
Cannot live up to claims of competence in practice | 4 | 2.90 |
Identifies with the company | 3 | 2.17 |
Camera records interview | 3 | 2.17 |
Interviewer shows genuine interest | 3 | 2.17 |
Trusting atmosphere | 2 | 1.45 |
Strict interviewer | 2 | 1.45 |
Interviewer/company greatly values authenticity | 2 | 1.45 |
Several applicants are interviewed at the same time | 2 | 1.45 |
Referees will be contacted | 2 | 1.45 |
Job is very important for the interviewee | 2 | 1.45 |
Interviewer accepts applicant for who they are | 2 | 1.45 |
Lie detector | 2 | 1.45 |
Others | 36 | 26.09 |
Total | 138 | 100.00 |
6.3 Discussion
The main aim of this (exploratory) study was to identify the most promising countermeasures that were then used for the three following studies. For technology-mediated interviews, we decided to use a warning referring to a criterion-based content analysis and a warning of a lie detecting algorithm focusing on nonverbal and paraverbal cues. Since the warnings of a lie detecting algorithm focusing on nonverbal or paraverbal cues had the same means and were similar in content, we decided to combine them. The selected warnings had the highest mean scores and differed significantly from the remaining countermeasures. In addition, we decided to test them for videoconference interviews given that these are more common than asynchronous video interviews (Basch & Melchers, 2021), but we would assume that they would also be suitable for the latter (Lukacik et al., 2022). For face-to-face interviews, we decided to test the use of objective (i.e., verifiable) questions, which had the highest mean score and differed significantly from 6 of the 10 remaining countermeasures. Additionally, we decided to test the personable and/or friendly interviewer, which was mentioned by a surprisingly high number of participants—and far more often than any other suggestion—in the open-ended question. We consider the 25 mentions to be a surprisingly high number given that many participants in surveys usually do not answer open-ended questions at all (e.g., Zuell et al., 2015). Furthermore, none of the other potential countermeasures were even mentioned half as often in response to the open-ended questions. Therefore, we did not consider any of them in the subsequent studies.
This choice of countermeasures for the following studies thus focuses on those countermeasures that, on the one hand, seem most promising regarding the prevention of faking and, on the other hand, cover different approaches such as explicit warnings, the use of questions that might affect the risk of getting caught, or the kind of interviewer, a measure that might not even have been perceived as a faking countermeasure in the past.
With regard to warnings—and as described above—identification warnings could lead to less faking in interviews (Law et al., 2016) but might also impair applicant reactions and justice perceptions (Burns et al., 2015; Lammers, 2017). However, warnings about the use of a criterion-based content analysis or of a lie detecting algorithm were not investigated in any previous study. Similarly, no previous research has investigated the use of objective and verifiable questions as a countermeasure to reduce faking in interviews. However, research on faking biodata instruments found that objective and verifiable biodata items are faked less in practice (Becker & Colquitt, 1992). Regarding applicant reactions, we assume that objective questions might violate applicants' perceived opportunity to perform and therefore, one of the procedural justice rules by Gilliland (1993). Specifically, these questions may limit the freedom with which answers can be given compared to more general self-assessment questions (see Conway & Peneno, 1999; for a similar suggestion).
The suggestion in participants' answers concerning the effects of a personable and/or friendly interviewer on faking admittedly came as a surprise to us. The impact of interviewer friendliness is not taken into account in faking models such as the one by Levashina and Campion (2006). In addition, there is hardly any prior research on interview faking related to interviewer friendliness (also see the review by Melchers et al., 2020). However, in a recent qualitative study, Ho et al. (2021) also found that informants were less likely to fake with a friendly interviewer because they felt less interview anxiety. Therefore, it might be the case that applicants feel more comfortable with a personable interviewer, making it easier for them to admit weaknesses, or that they simply reward the interviewer's friendliness with honesty. Since there is no quantitative research on the influence of a personable interviewer on faking intentions, we decided to investigate this suggestion experimentally, although negative consequences as a countermeasure would also have been a viable option. With regard to applicant reactions, according to Gilliland (1993), a warm and respectful treatment by the interviewer fulfills the procedural justice rule of interpersonal effectiveness (also see Chapman et al., 2005; or Nikolaou & Georgiou, 2018). Thus, contrary to Hypothesis 3, a personable interviewer should be associated with positive applicant reactions and justice perceptions.
7 STUDY 2
7.1 Methods
7.1.1 Participants
To determine the required sample size to find a medium-sized effect in a one-way ANOVA with a power of 0.80, we conducted an a priori power analysis (Faul et al., 2007). The analysis revealed that 159 participants were needed.
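The paper cites G*Power (Faul et al., 2007) for this computation. Purely as an illustration, essentially the same figure can be reproduced in Python with statsmodels, assuming the conventional medium effect for an ANOVA (Cohen's f = 0.25) and three groups.

```python
from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# Required total N for a one-way ANOVA: medium effect (f = 0.25),
# alpha = .05, power = .80, three groups
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(n_total)                # ~158 before rounding
print(3 * ceil(n_total / 3))  # 159 with equal group sizes, as reported
```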
Our initial sample consisted of 297 German-speaking participants. However, data from 84 participants had to be excluded because they answered at least one content-related question incorrectly or failed the attention check items (see below). Thus, the final sample consisted of 213 participants (157 females, 56 males, 112 students, 101 nonstudents), resulting in a power of 0.91. Participants' age ranged from 18 to 61 years (M = 27.20, SD = 8.99). Most participants had prior interview experience (94.8%) with an average of 5.46 (SD = 4.85) interviews. Participants were recruited through social media, flyers, mailing lists, and personal approaches. Psychology students from the authors' university could participate to partially fulfill a course requirement.
7.1.2 Procedure and experimental conditions
We used a one-factorial between-subjects design with three conditions (CBCA warning vs. nonverbal/paraverbal algorithm warning vs. no warning control condition). The experiment was administered as an online survey.
After participants had completed an informed consent form and demographic questions, they were randomly allocated to the different experimental conditions. Then, each participant was given one of three vignettes to read, in which they were asked to imagine that they were currently unemployed and were applying for an attractive job. Furthermore, they should imagine that they were invited for a selection interview that took place via videoconference.
Depending on the experimental condition, the vignette then provided participants with different information. In the CBCA condition, the vignette stated that the interviewer told the participants that the interview would be recorded, that all their statements would be checked for typical dishonest content patterns, and that this technique was originally developed to check witnesses' statements to distinguish dishonest answers from honest ones. Finally, it was explicitly stated that the organization would use this procedure to detect dishonest answers (faking) in selection interviews. In the nonverbal/paraverbal algorithm condition, the vignette stated that the interviewer told the participants that the interview would be recorded and a specific algorithm would verify their nonverbal (gestures, facial expressions, posture and movement, eye contact…) and paraverbal behavior (voice, intonation, volume, speaking rate…). The data from this recording would then be checked for typical dishonest behavior. Finally, it was explicitly stated that this would enable the organization to recognize dishonest answers (faking) in selection interviews. In the control condition, there was no additional information about any verification of given answers.
After reading their respective vignette, participants were shown content-related questions and a manipulation check (see below). Based on this hypothetical situation, they were asked about their honest IM and faking intentions and about several applicant reaction variables. Finally, they filled out the Honesty-humility and Dark Triad scales.
7.1.3 Measures
We used a 16-item version (e.g., “I would make up stories about my work experiences that are well developed and logical”) of Levashina and Campion's (2007) faking scale developed by Bourdage et al. (2018) to measure faking (i.e., deceptive IM) intentions. Coefficient alpha was .87. In addition, honest IM intentions were measured with a 12-item scale (e.g., “I would make sure the interviewer is aware of my skills and abilities”) by Bourdage et al. (2018). Coefficient alpha was .68. All items for honest IM and faking intentions were adjusted to the hypothetical situation and had to be rated on a 5-point Likert scale from 1 = to no extent to 5 = to a very great extent.
We measured Honesty-humility with the 10-item scale of the HEXACO-60 by Moshagen et al. (2014). This scale represents the German version of the corresponding HEXACO-60 scale by Ashton and Lee (2009). The items (e.g., “I wouldn't pretend to like someone just to get that person to do favors for me”) had to be rated on a 5-point Likert scale from 1 = fully agree to 5 = fully disagree (the same scale was also used for all subsequent measures unless indicated otherwise). Coefficient alpha was .71.
We measured the Dark Triad with a 9-item scale by Küfner et al. (2015) which contains the three subfacets Machiavellianism (e.g., “I have used deceit or lied to get my way.”), Psychopathy (e.g., “I tend to be unconcerned with the morality of my actions.”), and Narcissism (e.g., “I tend to seek prestige or status.”). This scale represents the German version of the Naughty Nine by Jonason and Webster (2010). Coefficient alpha was .73 for Machiavellianism, .70 for Psychopathy, and .87 for Narcissism.
Concerning applicant reactions, we measured procedural justice, general attractiveness, recommendation intentions, job pursuit intentions, and the intention to reapply. The different items were taken from different sources (see the Appendix for all the items and their respective sources). Where necessary, the applicant reaction items were adjusted to the hypothetical situation for the present study. Coefficient alpha was .89 for procedural justice, .91 for general attractiveness, .95 for recommendation intentions, .91 for job pursuit intentions (after the deletion of a problematic item2), and .95 for the intention to reapply.
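The coefficient alphas reported throughout this section can be computed from the raw item responses. Below is a minimal sketch with pingouin; the file name and the wide-format item layout are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical wide-format data: one column per item of a scale,
# one row per participant
items = pd.read_csv("faking_intention_items.csv")

# Coefficient (Cronbach's) alpha with its 95% confidence interval
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.2f}, 95% CI = {ci}")
```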
To check whether the manipulation worked as intended, participants had to answer two items asking whether they perceived a verification or not (e.g., “The company verifies whether I answer dishonestly in the interview”). The items had to be rated on a 7-point Likert scale from 1 = fully disagree to 7 = fully agree (alpha = .93). Additionally, participants had to answer five content-related questions (e.g., “An appointment was made with me for an online selection interview”) to verify they had read the vignettes. These questions had to be answered as true or not true.
Finally, we verified that participants paid attention while completing the survey with three attention check items. The first two items were the same as in the pilot study with the same exclusion criteria. The third item was “I answered this questionnaire seriously so that the data can be used to analyze this survey” and had to be answered with yes or no. Participants who answered no were also excluded.
7.2 Results
Correlations and descriptive information can be seen in Tables 3 and 4. To verify whether the experimental manipulation worked as intended, we conducted a one-way ANOVA with the independent variable warning type and the dependent variable perceived verification. We found a significant effect for warning type, Welch's F(2, 127.43) = 208.09, p < .001. We used Welch's correction because the assumption of homogeneity of variance was not met. Games-Howell post hoc tests confirmed that perceived verification was lower in the no warning control condition than in the CBCA warning condition and in the nonverbal/paraverbal algorithm warning condition, both ps < .001 (see Table 4). There was no significant difference between the CBCA and nonverbal/paraverbal algorithm warning condition, p = .77. Taken together, these results indicated that participants had clearly perceived the experimental treatment.
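As an illustration of this sequence of tests (a homogeneity check, Welch's ANOVA, and Games-Howell post hoc comparisons), here is a hypothetical Python sketch with pingouin; the data layout and column names are assumptions.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("study2.csv")  # columns: condition, perceived_verification

# Check homogeneity of variance (Levene's test) to decide whether
# Welch's correction is needed
print(pg.homoscedasticity(data=df, dv="perceived_verification",
                          group="condition"))

# Welch's one-way ANOVA (does not assume equal variances)
print(pg.welch_anova(data=df, dv="perceived_verification",
                     between="condition"))

# Games-Howell post hoc tests, which likewise drop the
# equal-variances assumption
print(pg.pairwise_gameshowell(data=df, dv="perceived_verification",
                              between="condition"))
```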
Variable | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1. Sex | 0.74 | 0.44 | – | |||||||||||||||
2. Age | 27.20 | 8.99 | −0.17* | – | ||||||||||||||
3. I. experience | 5.46 | 4.85 | −0.12 | 0.11 | – | |||||||||||||
4. CBCA warning | 0.37 | 0.48 | −0.01 | 0.00 | 0.10 | – | ||||||||||||
5. Algorithm warning | 0.33 | 0.47 | −0.01 | 0.07 | 0.06 | −0.54** | – | |||||||||||
6. Honesty-humility | 3.59 | 0.66 | 0.27** | 0.11 | −0.13 | 0.13 | −0.09 | 0.71 | ||||||||||
7. Machiavellianism | 2.07 | 0.91 | −0.25** | −0.18** | 0.20** | −0.06 | −0.01 | −0.63** | 0.73 | |||||||||
8. Psychopathy | 1.62 | 0.77 | −0.45** | −0.10 | 0.16* | 0.01 | 0.04 | −0.35** | 0.39** | 0.70 | ||||||||
9. Narcissism | 2.78 | 1.07 | −0.16* | −0.13 | 0.10 | −0.19** | 0.06 | −0.54** | 0.41** | 0.24** | 0.87 | |||||||
10. Faking intentions | 2.30 | 0.61 | −0.01 | −0.22** | 0.09 | 0.01 | −0.02 | −0.52** | 0.54** | 0.22** | 0.30** | 0.87 | ||||||
11. Honest IM int. | 3.75 | 0.44 | 0.05 | −0.10 | 0.09 | 0.07 | −0.06 | −0.14* | 0.20** | 0.07 | 0.12 | 0.28** | 0.68 | |||||
12. Procedural justice | 2.88 | 1.06 | −0.09 | −0.09 | −0.03 | −0.20** | −0.17* | −0.10 | 0.12 | 0.03 | 0.09 | 0.07 | 0.15* | 0.89 | ||||
13. Org. attractiveness | 3.94 | 1.15 | −0.01 | −0.06 | −0.06 | −0.18** | −0.10 | −0.06 | 0.13 | −0.07 | 0.09 | 0.11 | 0.04 | 0.56** | 0.91 | |||
14. Reapplication int. | 3.06 | 1.27 | −0.01 | −0.04 | −0.02 | −0.12 | −0.10 | −0.09 | 0.15* | −0.04 | 0.07 | 0.12 | 0.08 | 0.51** | 0.57** | 0.95 | ||
15. Job pursuit int. | 4.21 | 0.98 | 0.00 | −0.02 | −0.09 | −0.14* | −0.07 | −0.11 | 0.14* | 0.05 | 0.09 | 0.12 | −0.03 | 0.52** | 0.71** | 0.59** | 0.91 | |
16. Recommend. int. | 3.23 | 1.03 | −0.04 | −0.06 | −0.14* | −0.11 | −0.17* | −0.09 | 0.08 | −0.06 | 0.08 | 0.03 | 0.06 | 0.57** | 0.68** | 0.64** | 0.69** | 0.95 |
- Note: N = 213. I. = interview, CBCA = criterion-based content analysis, org. = organizational, int. = intentions, recommend. = recommendation. Coding of sex: 0 = male, 1 = female; coding of CBCA warning: 0 = control group or nonverbal/paraverbal algorithm warning, 1 = CBCA warning; coding of algorithm warning: 0 = control group or CBCA warning, 1 = para-/nonverbal algorithm warning. Values in the diagonal show coefficient alpha for the different variables.
- * p < .05
- ** p < .01.
Variable | No warning control (n = 64), M (SD) | CBCA warning (n = 78), M (SD) | Algorithm warning (n = 71), M (SD) | Control vs. CBCA warning, d | Control vs. algorithm warning, d |
---|---|---|---|---|---|
Perceived verification | 2.29 (1.59) | 6.66 (0.90) | 6.56 (0.83) | −3.47** | −3.42** |
Faking intentions | 2.30 (0.64) | 2.31 (0.63) | 2.28 (0.55) | −0.02 | 0.03 |
Honest IM intentions | 3.75 (0.41) | 3.79 (0.46) | 3.71 (0.45) | −0.09 | 0.09 |
Procedural justice | 3.49 (0.90) | 2.61 (0.99) | 2.62 (1.05) | 0.93** | 0.88** |
Organizational attractiveness | 4.45 (0.74) | 3.67 (1.25) | 3.78 (1.22) | 0.74** | 0.66** |
Reapplication intentions | 3.49 (1.12) | 2.86 (1.27) | 2.89 (1.31) | 0.52** | 0.49* |
Job pursuit intentions | 4.54 (0.61) | 4.03 (1.16) | 4.11 (0.98) | 0.54** | 0.52** |
Recommendation intentions | 3.69 (0.76) | 3.08 (1.07) | 2.98 (1.08) | 0.65** | 0.76** |
- Abbreviations: algorithm, nonverbal/paraverbal algorithm; CBCA, criterion-based content analysis.
- * p < .05
- ** p < .01.
To investigate whether the different warnings affect faking intentions, we conducted a one-way ANOVA with the independent variable warning type and the dependent variable faking intentions (see Table 4 for descriptive information). We found no significant effect of warning type, F(2, 210) = 0.04, p = .96. To examine whether the different warning conditions affect honest IM intentions, we conducted a comparable one-way ANOVA with the dependent variable honest IM intentions. Again, we found no significant effect of warning type, F(2, 210) = 0.59, p = .55. Furthermore, we conducted two corresponding ANCOVAs and included age, sex, interview experience, Honesty-humility, and all facets of the Dark Triad as covariates. However, this did not qualitatively change the results for warning type.
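A hedged sketch of the corresponding ANCOVA is given below, using pingouin with warning type as the between factor and the listed covariates; all column names are hypothetical (sex coded 0/1 as in Table 3).

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("study2.csv")

covariates = ["age", "sex", "interview_experience", "honesty_humility",
              "machiavellianism", "psychopathy", "narcissism"]

# One-way ANCOVA: does warning type affect faking intentions once the
# covariates are controlled for?
print(pg.ancova(data=df, dv="faking_intentions", between="condition",
                covar=covariates))
```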
Next, we investigated the effect of warnings on applicant reactions (see Table 4 for Ms and SDs). To do so, we conducted a multivariate analysis of variance (MANOVA) with the independent variable warning type and the dependent variables procedural justice, organizational attractiveness, intentions to reapply, recommendation intentions, and job pursuit intentions. Using Pillai's trace, we found a significant effect for warning type, V = 0.17, F(10, 414) = 3.93, p < .001, η² = 0.09. We conducted subsequent one-way ANOVAs to specify this effect. If the assumption of homogeneity of variance was not met, we used Welch's correction. These ANOVAs revealed significant effects of warning type for each of the dependent variables: procedural justice, F(2, 210) = 17.72, p < .001; organizational attractiveness, Welch's F(2, 136.30) = 14.06, p < .001; intentions to reapply, F(2, 210) = 5.56, p < .01; recommendation intentions, F(2, 210) = 10.11, p < .001; and job pursuit intentions, Welch's F(2, 135.34) = 8.18, p < .01. Gabriel post hoc tests (if variance homogeneity was met) and Games-Howell post hoc tests (if it was not) revealed higher mean scores for the no warning condition compared to the CBCA warning and the nonverbal/paraverbal algorithm warning conditions, all ps < .02. However, there was no significant difference between the CBCA warning and the nonverbal/paraverbal algorithm warning condition, all ps > .85.
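The MANOVA step can be sketched with statsmodels, whose mv_test() output includes Pillai's trace among the standard multivariate statistics; the variable names below are again illustrative.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("study2.csv")

# Multivariate test of warning type across the five applicant reaction
# variables; mv_test() reports Pillai's trace, Wilks' lambda, etc.
maov = MANOVA.from_formula(
    "procedural_justice + org_attractiveness + reapplication_intentions"
    " + recommendation_intentions + job_pursuit_intentions ~ condition",
    data=df)
print(maov.mv_test())
```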
7.3 Discussion
Contrary to our expectation and to the results of Study 1, we found that neither a warning concerning a CBCA nor concerning a nonverbal/paraverbal algorithm affected faking intentions in videoconference interviews. Additionally, these warnings had no impact on honest IM intentions. However, in line with our expectations, we found that both warnings had a negative impact on applicant reactions. In both warning conditions, procedural justice, organizational attractiveness, intentions to reapply, recommendation intentions, and job pursuit intentions were lower than in the no warning condition. To test whether these results generalize to behavior in actual interviews, we conducted Study 3.
8 STUDY 3
8.1 Methods
8.1.1 Participants
Our initial sample consisted of 103 participants. One participant had to be excluded because she failed the attention check items (see below). Therefore, the final sample consisted of 102 German-speaking participants (63 females, 39 males, 83 students, 19 nonstudents), resulting in a power of 0.59 to detect a medium-sized effect. Participants' age ranged from 18 to 61 years (M = 25.97, SD = 6.87). Most participants had prior interview experience (95.0%) with an average of 5.31 (SD = 8.26) interviews.
Participants were recruited through social media, mailing lists, and personal approaches. In addition, they received 10 € as compensation for their participation. Alternatively, psychology students could participate to partially fulfill a course requirement.
8.1.2 Procedure and experimental conditions
As for Study 2, we used a one-factorial between-subjects design with three conditions (CBCA warning vs. nonverbal/paraverbal algorithm warning vs. no warning control condition). This time, however, we conducted simulated videoconference interviews.
The study consisted of three parts. In the first part, participants completed an online survey. After being presented with a consent form, they were asked about demographic information and their personality. At the end of the survey, participants selected an appointment online for the simulated job interview.
In the second part, the simulated job interview was conducted via a videoconference platform. Before the interview, participants were instructed to dress as they would in an actual interview for an attractive position. Right before the interview, the interviewer first introduced herself as a psychologist and employee of the Department of Work and Organizational Psychology. Then participants were instructed to imagine that they did not have a job and were currently looking for employment. While doing so, they came across an attractive job advertisement, applied for the job, and were invited to the following interview. The participants were instructed to answer the interview questions as if they were eager to get the job. They were told that the three best candidates would receive 50 € as a reward. In addition, they were offered feedback on their interview performance after completing the interview. After that, participants were randomly assigned to one of three experimental conditions. Depending on the experimental condition, the interviewer then provided participants with different information. Similar to Study 2, there was either a warning about the usage of a CBCA, about the usage of an algorithm, or there was no information about any verification of given answers. The descriptions were analogous to Study 2. To increase the realism of the countermeasures (and to allow the interviews to be evaluated by a second rater), all interviews were recorded, which was explicitly communicated to the participants. In addition, in both warning conditions participants were told that if the analysis procedure revealed that they were dishonest, they would not qualify for the 50 € reward. Then the interview (see below) was administered according to the interview guide.
The final part of the study consisted of an online survey that participants completed immediately after the interview. They were instructed by the interviewer that this survey had no effect on the evaluation of their interview performance, the feedback, or the cash prize and was solely for research purposes. Within the survey, participants answered manipulation check items, questions about their faking and honest IM behavior during the interview, and questions about the applicant reaction variables.
8.1.3 Measures
We measured self-reported faking and honest IM with the same two scales as in Study 2 with the exception that no adjustments to a hypothetical context were made and that one item for honest IM and two items for faking were replaced with items from the same subscale by Levashina and Campion (2007) or by Bourdage et al. (2018) because they were not suitable for the simulated context. Coefficient alpha was .81 for faking and .77 for honest IM.
The interview contained five questions. Given previous findings of relatively low base rates for faking in highly structured interviews with situational or past behavior questions (Bourdage et al., 2018; Melchers et al., 2020), we decided to use more traditional questions for the interview (e.g., “How do you handle criticism?”). The questions were designed by researching frequently occurring questions in job interviews and commonly used questions in job application guides (e.g., Hesse & Schrader, 2016; Mai, 2021). Each interview was evaluated independently by two different raters. The answers were rated on a 5-point scale from 1 = very poor performance to 5 = very good performance. Descriptive anchors were provided for very poor, average, and very good answers. All interviewers were students specializing in work and organizational psychology who underwent a frame-of-reference training (Melchers et al., 2011) before the interview. If the ratings of the respective answer to an interview question were more than one point apart between the two raters, the rating was discussed. After that, the raters independently re-rated the answer. Interrater reliability (ICC[1,1]) was .86 before and .90 after discussion.
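As an illustration of this reliability index, ICC(1,1) can be estimated with the pingouin package as sketched below; the ratings and column names are invented for the example and are not from the study materials.

```python
import pandas as pd
import pingouin as pg

# Invented example: two raters each score the same five interview answers
# on the 5-point performance scale.
ratings = pd.DataFrame({
    "answer": [1, 2, 3, 4, 5] * 2,
    "rater":  ["A"] * 5 + ["B"] * 5,
    "score":  [4, 3, 5, 2, 4, 4, 3, 4, 2, 5],
})

icc = pg.intraclass_corr(data=ratings, targets="answer",
                         raters="rater", ratings="score")
# ICC(1,1) corresponds to the row labeled "ICC1" (one-way model, single rater).
print(icc.loc[icc["Type"] == "ICC1", ["Type", "ICC", "CI95%"]])
```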
Honesty-humility, the Dark Triad, and applicant reactions were measured with the same scales as in Study 2. Coefficient alpha was .62 for Honesty-humility, .68 for Machiavellianism, .71 for Psychopathy, .79 for Narcissism, .83 for procedural justice, .96 for general attractiveness of the organization, .96 for recommendation intentions, .89 for job pursuit intentions, and .97 for the intention to reapply.
To check whether the manipulation worked as intended, participants had to answer two items asking whether or not they had perceived information about a verification of their answers. The items were taken from Study 2 and adapted to the simulated setting (e.g., “It will be verified whether I answer dishonestly in the interview”). The items had to be rated on a 7-point Likert scale from 1 = fully disagree to 7 = fully agree. Coefficient alpha was .89.
To check whether the participants were attentive when answering the surveys, we included one attention check item in each of the surveys before and after the interview. The items were: “I don't read the questions in this survey” and “I answered this questionnaire seriously so that the data can be used to analyze this survey”. Both items had already been used in Study 2, and the same exclusion criteria were applied.
8.2 Results
Correlations and descriptive information can be seen in Tables 5 and 6. To verify whether the experimental manipulation worked as intended, we conducted a one-way ANOVA with the independent variable warning type and the dependent variable perceived verification. We found a significant effect for warning type, F(2, 99) = 36.40, p < .001. Bonferroni corrected post hoc tests confirmed that perceived verification was lower in the no warning control condition than in both the CBCA and the nonverbal/paraverbal algorithm warning conditions, both ps < .001 (see Table 6). There was no significant difference between the two warning conditions, p = 1.00. Taken together, these results indicate that the experimental treatment was clearly perceived by the participants.
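To make the analysis logic transparent, the following minimal Python sketch runs the same kind of one-way ANOVA with Bonferroni-corrected pairwise comparisons on simulated data generated from the means and SDs in Table 6; the variable names are ours, and the simulated draws are not clipped to the 1-7 scale.

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats

# Simulate perceived-verification scores from the group means/SDs in Table 6
# (illustration only; this is not the study's actual data).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "condition": np.repeat(["control", "cbca", "algorithm"], 34),
    "verification": np.concatenate([
        rng.normal(2.35, 1.35, 34),
        rng.normal(5.43, 1.86, 34),
        rng.normal(5.32, 1.80, 34),
    ]),
})

groups = [g["verification"].to_numpy() for _, g in df.groupby("condition")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Bonferroni correction: multiply each pairwise p value by the number of tests.
pairs = list(combinations(df["condition"].unique(), 2))
for a, b in pairs:
    t, p = stats.ttest_ind(df.loc[df["condition"] == a, "verification"],
                           df.loc[df["condition"] == b, "verification"])
    print(f"{a} vs. {b}: p_bonf = {min(p * len(pairs), 1.0):.4f}")
```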
Variable | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1. Sex | 0.62 | 0.49 | – | ||||||||||||||||
2. Age | 25.97 | 6.87 | −0.10 | – | |||||||||||||||
3. I. experience | 5.31 | 8.26 | −0.01 | 0.30** | – | ||||||||||||||
4. CBCA warning | 0.33 | 0.47 | 0.00 | −0.06 | −0.06 | – | |||||||||||||
5. Algorithm warning | 0.33 | 0.47 | −0.09 | −0.06 | 0.08 | −0.50** | – | ||||||||||||
6. Honesty-humility | 3.50 | 0.56 | 0.25* | 0.02 | −0.12 | −0.08 | −0.03 | 0.62 | |||||||||||
7. Machiavellianism | 2.17 | 0.80 | −0.19 | 0.07 | 0.08 | 0.07 | 0.16 | −0.40** | 0.68 | ||||||||||
8. Psychopathy | 1.66 | 0.72 | −0.35** | 0.17 | 0.24* | −0.09 | 0.09 | −0.32** | 0.33** | 0.71 | |||||||||
9. Narcissism | 2.99 | 0.96 | −0.15 | 0.05 | 0.20* | 0.06 | 0.08 | −0.24* | 0.37** | 0.05 | 0.79 | ||||||||
10. Faking | 1.35 | 0.36 | −0.13 | 0.16 | −0.08 | 0.00 | 0.10 | −0.07 | 0.26** | 0.15 | 0.20* | 0.81 | |||||||
11. Honest IM | 2.58 | 0.63 | −0.03 | 0.14 | 0.02 | 0.02 | −0.04 | −0.09 | 0.19 | −0.13 | 0.14 | 0.33** | 0.77 | ||||||
12. I. performance | 3.39 | 0.47 | 0.13 | 0.01 | 0.03 | −0.16 | 0.10 | −0.14 | 0.13 | −0.19 | 0.09 | −0.01 | 0.19 | – | |||||
13. Procedural justice | 3.12 | 0.99 | 0.14 | −0.12 | 0.02 | −0.02 | 0.01 | 0.02 | −0.16 | −0.07 | 0.04 | −0.09 | 0.16 | 0.08 | 0.83 | ||||
14. Org. attractiveness | 2.98 | 1.14 | −0.12 | −0.02 | 0.04 | 0.02 | −0.05 | −0.27** | −0.01 | 0.10 | 0.07 | −0.03 | 0.20* | −0.03 | 0.52** | 0.96 | |||
15. Reapplication int. | 3.17 | 1.21 | −0.07 | −0.15 | 0.00 | −0.03 | −0.08 | −0.17 | −0.11 | 0.07 | 0.01 | −0.04 | 0.22* | −0.09 | 0.54** | 0.70** | 0.97 | ||
16. Job pursuit int. | 3.56 | 1.10 | −0.07 | −0.20* | −0.12 | −0.06 | 0.02 | −0.16 | −0.01 | 0.06 | 0.05 | 0.04 | 0.14 | −0.13 | 0.49** | 0.65** | 0.86** | 0.89 | |
17. Recommend. int. | 2.99 | 1.11 | −0.03 | −0.11 | 0.05 | 0.07 | −0.07 | −0.07 | −0.06 | 0.06 | 0.00 | −0.02 | 0.18 | −0.01 | 0.59** | 0.69** | 0.72** | 0.74** | 0.96 |
- Note: N = 102. CBCA = criterion-based content analysis, I. = Interview, org. = organizational, int. = intentions, recommend. = recommendation. Coding of sex: 0 = male, 1 = female; coding of CBCA warning: 0 = control group or nonverbal/paraverbal algorithm warning, 1 = CBCA warning; coding of algorithm warning: 0 = control group or CBCA warning, 1 = nonverbal/paraverbal algorithm warning. Values in the diagonal show coefficient alpha for the different variables.
- * p < .05
- ** p < .01.
Variable | No warning control (n = 34), M (SD) | CBCA warning (n = 34), M (SD) | Algorithm warning (n = 34), M (SD) | Control vs. CBCA warning, d | Control vs. algorithm warning, d |
---|---|---|---|---|---|
Perceived verification | 2.35 (1.35) | 5.43 (1.86) | 5.32 (1.80) | −1.90** | −1.87** |
Faking | 1.30 (0.32) | 1.35 (0.42) | 1.40 (0.33) | −0.13 | −0.31 |
Honest IM | 2.61 (0.65) | 2.60 (0.70) | 2.54 (0.55) | 0.02 | 0.12 |
Interview performance | 3.44 (0.48) | 3.28 (0.51) | 3.45 (0.41) | 0.32 | −0.02 |
Procedural justice | 3.13 (1.01) | 3.09 (1.05) | 3.14 (0.93) | 0.04 | −0.01 |
Organizational attractiveness | 3.03 (1.22) | 3.02 (1.30) | 2.90 (0.89) | 0.01 | 0.12 |
Reapplication intentions | 3.35 (1.31) | 3.13 (1.30) | 3.03 (1.03) | 0.17 | 0.27 |
Job pursuit intentions | 3.63 (1.23) | 3.46 (1.15) | 3.59 (0.93) | 0.14 | 0.04 |
Recommendation intentions | 2.99 (1.03) | 3.10 (1.21) | 2.88 (1.11) | −0.10 | 0.10 |
- Abbreviations: algorithm = nonverbal/paraverbal algorithm; CBCA = criterion-based content analysis.
- * p < .05
- ** p < .01.
To test whether the different warnings affected faking, we conducted a one-way ANOVA with the independent variable warning type and self-reported faking as the dependent variable (cf. Table 6). We found no significant effect of warning type, F(2, 99) = 0.66, p = .52. A similar one-way ANOVA with self-reported honest IM as the dependent variable likewise revealed no significant effect of warning type, F(2, 99) = 0.10, p = .91. Even when we controlled for age, sex, interview experience, Honesty-humility, and all facets of the Dark Triad in two corresponding ANCOVAs, we found no effect of warning type.
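A minimal statsmodels sketch of this ANOVA/ANCOVA logic is shown below; the file name and column names are hypothetical placeholders, not the study's actual data or analysis script.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data set with one row per participant; all names are
# placeholders chosen for this illustration.
df = pd.read_csv("study3.csv")

# One-way ANOVA on self-reported faking.
anova = sm.stats.anova_lm(smf.ols("faking ~ C(condition)", data=df).fit(),
                          typ=2)

# ANCOVA: the same model with the covariates added.
ancova = sm.stats.anova_lm(
    smf.ols("faking ~ C(condition) + age + C(sex) + experience"
            " + honesty_humility + machiavellianism + psychopathy"
            " + narcissism", data=df).fit(),
    typ=2,
)
print(anova, ancova, sep="\n\n")
```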
Finally, we examined the effect of warnings on applicant reactions (see Table 6 for Ms and SDs). We conducted a MANOVA with the independent variable warning type and the dependent variables procedural justice, organizational attractiveness, intentions to reapply, recommendation intentions, and job pursuit intentions. Using Pillai's Trace, we found no significant effect for warning type, V = 0.08, F(10, 192) = 0.82, p = .61, η2 = 0.04.
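The multivariate test can be sketched in the same framework; statsmodels reports Pillai's trace alongside the other multivariate criteria. Again, the file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("study3.csv")  # same hypothetical data set as above

mv = MANOVA.from_formula(
    "proc_justice + org_attract + reapply + recommend + job_pursuit"
    " ~ C(condition)",
    data=df,
)
print(mv.mv_test())  # includes Pillai's trace for the condition effect
```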
8.3 Discussion
In contrast to Study 2, we conducted an actual interview in Study 3 and offered an attractive cash prize for the best interviewees so that participants would have a real incentive for good interview performance. Despite these differences, the results concerning faking and honest IM were very similar to Study 2, meaning that neither a warning concerning a CBCA nor one concerning a nonverbal/paraverbal algorithm had an effect on self-reported faking or honest IM. However, in contrast to our expectations and to the results from Study 2, we found no impact of the warnings on applicant reactions.
9 STUDY 4
9.1 Methods
9.1.1 Participants
To determine the required sample size to find medium-sized effects in a 2 × 2 ANOVA with a power of 0.80, we conducted an a priori power analysis (Faul et al., 2007). The analysis revealed that 128 participants would be needed.
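The required N of 128 can be reproduced with a direct noncentral-F computation under the usual conventions for a main effect in a 2 × 2 design (α = .05, Cohen's f = 0.25, numerator df = 1, four cells); the sketch below is an illustration, not the original G*Power analysis.

```python
from scipy.stats import f as f_dist, ncf

alpha, f_effect, df1, n_cells, target = 0.05, 0.25, 1, 4, 0.80

n = n_cells + 1
while True:
    df2 = n - n_cells                      # error df in a four-cell design
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    power = 1 - ncf.cdf(f_crit, df1, df2, f_effect**2 * n)
    if power >= target:
        break
    n += 1
print(n)  # 128, matching the reported requirement
```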
The initial sample consisted of 314 German-speaking participants. We excluded 85 participants because they answered at least one content-related question incorrectly or failed the attention check items. Thus, the final sample consisted of 229 participants (143 females, 86 males, 119 students, 110 nonstudents), resulting in a power of 0.96. Their age ranged from 18 to 64 years (M = 31.97, SD = 13.66). Most participants had prior interview experience (94.3%) with an average of 6.87 previous interviews (SD = 8.77). The recruitment process was comparable to that of Study 2.
9.1.2 Procedure and experimental conditions
We conducted a 2 × 2 between-subjects experiment (high vs. low personableness of the interviewer and high vs. low objectivity of the questions), which was administered as an online survey. The procedure was identical to Study 2 unless indicated otherwise.
As in Study 2, participants were told to imagine that they were currently unemployed and were applying for an attractive job. However, this time they were asked to imagine that they had been invited to a normal (i.e., face-to-face) selection interview.
To manipulate the personableness of the interviewer, we described the interviewer in the personableness high condition as extremely friendly and interested. Participants were told that he was very attentive, repeatedly smiled at them benevolently, and treated them with great respect. It was stated that they would get along very well with each other, and it seemed that they had a lot in common. In the personableness low condition, the interviewer was described as extremely unfriendly and disinterested. Participants were told that he was very inattentive, repeatedly smiled at them arrogantly, and treated them without any respect. It was said that they would not get along well at all, and it seemed that they had nothing in common.
To manipulate the objectivity of the questions, participants in the objectivity high condition were told that they would mainly be asked about key information from their professional career. They should describe their career to date, with particular emphasis on specific experiences and measurable successes. Additionally, they should report any qualifications they had acquired for the job in question. They would also be asked about previous promotions. In the objectivity low condition, participants were told that they would primarily be asked to talk about themselves. To do so, they should first describe themselves, paying particular attention to their strengths and weaknesses. They should then explain why they were the right person for the job in question. Furthermore, they would be asked where they saw themselves in 5 years.
9.1.3 Measures
All the dependent variables (faking and honest IM intentions, Honesty-humility, the Dark Triad, and applicant reactions) were measured with the same scales as in Study 2. Coefficient alpha for the different scales was .66 for honest IM intentions, .90 for faking intentions, .67 for Honesty-humility, .76 for Machiavellianism, .55 for Psychopathy, .84 for Narcissism, .89 for procedural justice, .89 for general attractiveness of the organization, .97 for recommendation intentions, .92 for job pursuit intentions, and .95 for the intention to reapply.
To check whether the manipulation worked as intended, participants had to answer two items concerning the perceived personableness of the interviewer (e.g., “The interviewer is personable to me”) and two items concerning the perceived objectivity of the questions (e.g., “In the interview, I was asked objective questions for which answers can be verified.”). Participants had to rate these items on a 7-point Likert scale from 1 = fully disagree to 7 = fully agree. Coefficient alpha was .98 for perceived personableness and .78 for perceived objectivity.
Participants had to answer five content-related questions (e.g., “I have received an invitation for a selection interview.”) to verify they had read the vignettes. These questions had to be answered as true or not true. Finally, to ensure that participants were attentive while completing the survey, we also used the same three attention check items as in Study 2. Exclusion criteria were also the same.
9.2 Results
Correlations and descriptive statistics can be seen in Tables 7 and 8. To evaluate whether the experimental manipulation worked as intended, we conducted two 2 × 2 (Personableness × Objectivity) ANOVAs3 with the corresponding dependent variables perceived personableness and perceived objectivity. We found significant main effects for personableness, F(1, 225) = 3642.13, p < .001, η2 = 0.94, and objectivity of questions, F(1, 225) = 187.50, p < .001, η2 = 0.46, for the corresponding dependent variables. These results confirmed a higher perceived personableness in the personable condition (M = 6.65, SD = 0.83) compared to the not personable condition (M = 1.14, SD = 0.52, d = 7.96) and a higher perceived objectivity in the objective condition (M = 4.98, SD = 1.50) compared to the nonobjective condition (M = 2.38, SD = 1.35, d = 1.82). All other main effects and interactions were nonsignificant. Thus, the results clearly showed that the experimental manipulation worked as intended.
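For reference, the reported ds can be recovered from the cell statistics with a pooled-SD Cohen's d; the sketch below approximates the marginal group sizes from the cell ns in Table 8 (114 personable vs. 115 not personable participants; 121 objective vs. 108 nonobjective).

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with a pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Perceived personableness: personable vs. not personable condition
# (means and SDs from the text; ns derived from the cells in Table 8).
print(round(cohens_d(6.65, 0.83, 114, 1.14, 0.52, 115), 2))  # 7.96

# Perceived objectivity: objective vs. nonobjective condition.
print(round(cohens_d(4.98, 1.50, 121, 2.38, 1.35, 108), 2))  # 1.82
```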
Variable | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1. Sex | 0.62 | 0.49 | – | |||||||||||||||
2. Age | 32.97 | 13.66 | −0.21** | – | ||||||||||||||
3. I. experience | 6.87 | 8.77 | −0.16* | 0.45** | – | |||||||||||||
4. Personableness | 0.50 | 0.50 | 0.02 | −0.02 | 0.01 | – | ||||||||||||
5. Objectivity | 0.53 | 0.50 | −0.01 | −0.07 | 0.00 | 0.03 | – | |||||||||||
6. Honesty-Humility | 3.59 | 0.61 | 0.18** | 0.11 | −0.01 | 0.00 | 0.07 | 0.67 | ||||||||||
7. Machiavellianism | 2.11 | 0.90 | −0.08 | −0.09 | −0.01 | 0.05 | −0.01 | −0.56** | 0.76 | |||||||||
8. Psychopathy | 1.60 | 0.71 | −0.32** | −0.09 | 0.06 | 0.10 | −0.05 | −0.41** | 0.36** | 0.55 | ||||||||
9. Narcissism | 2.82 | 1.03 | −0.12 | −0.17* | −0.12 | −0.03 | 0.02 | −0.50** | 0.42** | 0.19** | 0.84 | |||||||
10. Faking intentions | 2.42 | 0.66 | −0.09 | −0.20** | −0.11 | −0.05 | −0.06 | −0.47** | 0.48** | 0.22** | 0.35** | 0.90 | ||||||
11. Honest IM int. | 3.78 | 0.44 | 0.13 | 0.01 | −0.01 | 0.04 | 0.09 | 0.11 | 0.02 | −0.14* | 0.06 | 0.04 | 0.66 | |||||
12. Procedural Justice | 2.99 | 1.10 | −0.02 | −0.03 | 0.01 | 0.68** | 0.14* | −0.02 | 0.09 | 0.03 | 0.00 | 0.06 | 0.19** | 0.89 | ||||
13. Org. attractiveness | 4.04 | 0.98 | −0.06 | −0.04 | −0.07 | 0.50** | 0.08 | −0.09 | 0.04 | 0.05 | 0.05 | 0.02 | 0.17** | 0.52** | 0.89 | |||
14. Reapplication int. | 3.23 | 1.21 | 0.02 | −0.03 | −0.06 | 0.49** | 0.12 | −0.09 | 0.14* | 0.04 | 0.00 | −0.01 | 0.18** | 0.50** | 0.61** | 0.95 | ||
15. Job pursuit int. | 4.29 | 0.88 | 0.07 | −0.14* | −0.13 | 0.42** | 0.02 | −0.03 | 0.06 | 0.05 | 0.03 | −0.01 | 0.21** | 0.41** | 0.66** | 0.56** | 0.92 | |
16. Recommend. int. | 3.23 | 1.09 | 0.02 | −0.08 | −0.14* | 0.60** | 0.10 | −0.02 | 0.10 | −0.01 | −0.05 | 0.01 | 0.22** | 0.61** | 0.67** | 0.62** | 0.65** | 0.97 |
- Note: N = 229. I. = interview, Org. = organizational, int. = intentions, recommend. = recommendation. Coding of sex: 0 = male, 1 = female; personableness: 0 = not personable, 1 = personable; objectivity: 0 = nonobjective, 1 = objective. Values in the diagonal show coefficient alpha for the different variables.
- * p < .05,
- ** p < .01.
Variable | Not personable, nonobjective (n = 56), M (SD) | Not personable, objective (n = 59), M (SD) | Personable, nonobjective (n = 52), M (SD) | Personable, objective (n = 62), M (SD) |
---|---|---|---|---|
Faking intentions | 2.48 (0.70) | 2.42 (0.65) | 2.43 (0.65) | 2.35 (0.67) |
Honest IM Intentions | 3.74 (0.41) | 3.79 (0.45) | 3.75 (0.33) | 3.84 (0.52) |
Procedural justice | 2.17 (0.79) | 2.32 (0.74) | 3.54 (0.81) | 3.91 (0.85) |
Organizational attractiveness | 3.52 (0.97) | 3.58 (1.09) | 4.44 (0.68) | 4.62 (0.57) |
Reapplication intentions | 2.51 (1.12) | 2.76 (1.13) | 3.69 (1.03) | 3.95 (0.93) |
Job pursuit intentions | 3.90 (0.99) | 3.95 (0.98) | 4.66 (0.55) | 4.65 (0.58) |
Recommendation intentions | 2.52 (0.94) | 2.64 (0.87) | 3.76 (0.82) | 4.00 (0.83) |
To investigate how a personable interviewer and objective questions affect faking intentions, we conducted a 2 × 2 (Personableness × Objectivity of the questions) ANOVA with the dependent variable faking intentions. Both main effects as well as the interaction were nonsignificant, all Fs < 0.71, all ps > .39, all η2s < 0.01. To investigate potential effects on honest IM intentions, we conducted a comparable 2 × 2 ANOVA with the dependent variable honest IM intentions. Both main effects as well as the interaction were nonsignificant, all Fs < 1.67, all ps > .19, all η2s < 0.02. Furthermore, we also found no significant results when we controlled for age, sex, interview experience, Honesty-humility, and all facets of the Dark Triad in two corresponding ANCOVAs with the respective covariates.
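A compact way to express such a factorial model is the statsmodels formula interface, where the `*` operator expands to both main effects and their interaction; the file and column names below are hypothetical placeholders, not the study's actual data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data set; 'personable' and 'objective' are 0/1 condition
# codes as in Table 7, and 'faking_int' holds the faking intention scores.
df = pd.read_csv("study4.csv")

model = smf.ols("faking_int ~ C(personable) * C(objective)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # both main effects and interaction
```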
In addition to faking and honest IM intentions, we also investigated the effects of personableness and objectivity on applicant reactions. To do so, we conducted a 2 × 2 MANOVA (Personableness × Objectivity of the questions) with the dependent variables procedural justice, organizational attractiveness, intentions to reapply, recommendation intentions, and job pursuit intentions. Using Pillai's Trace, we found a significant main effect for personableness, V = 0.53, F(5, 221) = 49.99, p < .001, η2 = 0.53. In contrast, the main effect for objectivity, V = 0.04, F(5, 221) = 1.75, p = .13, η2 = 0.04, and the Personableness × Objectivity interaction, V = 0.01, F(5, 221) = 0.39, p = .85, η2 = 0.01, were not significant. Subsequent 2 × 2 ANOVAs for each dependent variable revealed significant effects for personableness concerning procedural justice, F(1, 225) = 195.21, p < .001, η2 = 0.47, organizational attractiveness, F(1, 225) = 75.41, p < .001, η2 = 0.25, intentions to reapply, F(1, 225) = 71.67, p < .001, η2 = 0.24, recommendation intentions, F(1, 225) = 128.10, p < .001, η2 = 0.36, and job pursuit intentions, F(1, 225) = 46.79, p < .001, η2 = 0.17. The mean scores for all dependent variables were higher in the personable compared to the not personable condition (see Table 8). In addition, we found a significant effect for objectivity concerning procedural justice, F(1, 225) = 5.72, p < .05, η2 = 0.03. Procedural justice was higher in the objective compared to the nonobjective condition. All other effects of objectivity on organizational attractiveness, intentions to reapply, recommendation intentions, and job pursuit intentions, as well as the corresponding interactions, were nonsignificant, all Fs < 3.28, all ps > .06, all η2s < 0.02.
9.3 Discussion
Contrary to our expectations, neither a personable interviewer nor objective questions had an effect on faking intentions in face-to-face interviews. They did not affect honest IM intentions either. In line with our expectations, we found that a personable interviewer had a positive effect on applicant reactions. In the high personableness condition, we found higher organizational attractiveness, procedural justice, intentions to reapply, recommendation intentions, and job pursuit intentions than in the low personableness condition. However, our concern that the objectivity of the questions might have a negative impact on applicant reactions was not confirmed.
10 GENERAL DISCUSSION
In the present studies, we addressed the limited knowledge on potential countermeasures against faking in selection interviews. We identified different countermeasures against faking in an exploratory study and evaluated the most promising of these countermeasures in three subsequent experimental studies. We also investigated their influence on applicant reactions. Taken together, our studies make several contributions and also suggest valuable next steps for future research.
As a first contribution, based on empirical evidence from Study 1, we provided information about the most suitable countermeasures against faking in interviews. The significant differences between the various possible countermeasures suggest that these countermeasures might differ in their actual effectiveness. Furthermore, the qualitative answers from the open-ended questions in Study 1 also revealed a surprising result that was not predicted by any of the current faking models (e.g., Levashina & Campion, 2006; McFarland & Ryan, 2000, 2006; Roulin, Krings & Binggeli, 2016). However, Ho et al. (2021) found similar results in a recent qualitative study. In our study, nearly 20% of the answers indicated that a personable interviewer would prevent participants from faking in an interview. In the past, the interviewer has mainly been investigated with regard to his or her role in recruitment (e.g., Chapman et al., 2005; Wilhelmy et al., 2016).
Second, in contrast to our expectations, the three subsequent studies revealed that the different countermeasures did not affect faking intentions or self-reported faking behavior at all, despite our focus on the most promising countermeasures from Study 1. For videoconference interviews, neither a warning referring to a criterion-based content analysis nor one referring to a lie detection algorithm focusing on nonverbal and paraverbal cues had any effect on faking intentions in a hypothetical scenario or on self-reported faking in simulated videoconference interviews. These results are in contrast to the theoretical assumption (Levashina & Campion, 2006) that faking should be reduced by a higher probability of getting caught. However, they are in line with previous studies that examined different countermeasures and situational variables such as a verification warning (Bill et al., 2020), the future supervisor as the interviewer, the use of panel interviews (Bourdage et al., 2018), or the use of follow-up questions (Levashina & Campion, 2007). These studies also found little evidence that faking can be reduced (Bill et al., 2020; Bourdage et al., 2018; Levashina & Campion, 2007) or detected (Roulin et al., 2015). So far, only Law et al. (2016) found that an identification warning can reduce faking in interviews. In contrast to our Studies 2 and 3, Law et al. (2016) tested this countermeasure in a face-to-face interview and used a different type of warning. Reasons for the different results could be that the perceived possibility to use IM (Basch et al., 2020) and the actual use of IM are lower in videoconference interviews (Basch et al., 2021). Especially in the simulated videoconference interview study (Study 3), the mean values for the faking self-reports were very low even in the control condition, which left little room to reduce faking any further. It could also be that participants believed more strongly that certain questions could detect faking than that a CBCA or a nonverbal and paraverbal algorithm could.
Additionally, we found no impact of either objective questions or a personable interviewer on faking intentions in face-to-face interviews. The result concerning objective questions is contrary to the theoretical assumption by Levashina and Campion (2006) that verifiable questions provide less opportunity to fake. It is also contrary to the finding of less faking in biodata items that are more objective and verifiable (Becker & Colquitt, 1992). However, in a survey, Bourdage et al. (2018) also found that resume-based questions, which are in fact objective and verifiable, did not reduce faking compared to other questions. Possibly, objective questions have to be combined with a verification measure to credibly discern faking. However, even for such a combination, Bill et al. (2020) found virtually no effect on faking intentions in a vignette study when participants could name their own referees. Therefore, objective questions may need to be combined not only with a credible verification source but also with a source that is not controllable by the applicant to be effective. Furthermore, the potential effects could be tested in a controlled experimental study with actual interviews, given that all the available evidence so far stems from studies using vignettes or retrospective surveys. However, the convergence of the results from our Studies 2 and 3 at least indirectly suggests that a similar null effect might also be found for objective questions in an actual interview.
Concerning the possible effects of a personable interviewer on faking intentions, an empirical test revealed a null effect. This is in contrast to Study 1 and to the qualitative study by Ho et al. (2021), which found that a friendly interviewer reduced interview anxiety and therefore the probability of faking. A reason why we did not find any effect could be that participants did not experience any interview anxiety in the hypothetical scenario, so that an interviewer described as personable had no impact on faking intentions. Another reason could be that personality variables simply had a stronger effect on faking intentions than the potential countermeasures. Specifically, participants who were high on the Dark Triad were also more likely to intend to fake. The Dark Triad, in turn, is associated with duplicity and emotional coldness (Paulhus & Williams, 2002), so that individuals high on the Dark Triad may be unimpressed by personableness and may not reward this kind of interviewer behavior with honesty.
As a third contribution, we extended the evidence concerning honest IM. Here, we found that the different countermeasures did not affect honest IM intentions or self-reported honest IM either. For videoconference interviews, we found no impact of warnings referring to a criterion-based content analysis or to a lie detection algorithm on honest IM intentions or self-reported honest IM behavior. For face-to-face interviews, we also found that objective questions and a personable interviewer had no impact on honest IM intentions. Even though there is evidence that honest IM can be affected by situational factors (e.g., Basch et al., 2021; Bourdage et al., 2018), the null effects of warnings are nevertheless in line with previous research that also found only slightly higher honest IM intentions in a verification warning condition compared to a no warning condition (Bill et al., 2020).
Fourth, even though the different warnings did not reduce faking intentions, they had negative effects on justice perceptions and other applicant reactions in Study 2. This result is in line with the assumption that warnings could violate the procedural justice rules proposed by Gilliland (1993) and with previous research on personality tests and other assessment procedures that also found that warnings can impair applicant reactions (Burns et al., 2015; Lammers, 2017). However, this negative influence was not evident in Study 3, in which we conducted actual interviews. This is consistent with the only other study that examined the impact of warnings in a simulated interview context, which also found no impact on procedural justice or on performance anxiety (Law et al., 2016). One possible reason why the warnings in our simulated interviews had no effect on applicant reactions could be that they were perceived as less salient there. Even though the manipulation checks in both of our studies clearly showed that perceived verification was higher in the warning conditions than in the control condition, the effect size in the simulated interview study was much smaller than in the vignette study. This could be because the warning takes up a much smaller proportion of a simulated interview situation than of a situation in which participants merely imagine the selection interview. Furthermore, there is also evidence from other studies that differences in applicant perceptions of different interview situations are smaller after participants have completed the actual interview than before (Melchers et al., 2021). Nevertheless, possible negative effects should be taken into account when applying warnings in high-stakes settings because applicant reactions can be considerably more pronounced in field settings than in simulated lab settings (e.g., Truxillo et al., 2009). For face-to-face interviews, we found that objective questions had little impact on applicant reactions and that a personable interviewer, unsurprisingly, even had a positive effect on applicant reactions. However, the large effect sizes could also be due to the fact that we described the interviewer in the low personableness condition as very unlikable. Interviewers in real interviews may well behave in this way, but (with the potential exception of stress interviews, e.g., Chen et al., 2019) this is probably relatively uncommon.
10.1 Limitations and future research
A limitation of these studies is that we only examined effects of the countermeasures on faking intentions in hypothetical scenarios and self-reported faking in a simulated setting. Thus, as already noted above, future research is needed to test the effects of these countermeasures in high-stakes settings in which applicants have a higher motivation to succeed (Ellingson & McFarland, 2011; Ellingson, 2012; Roulin, Krings & Binggeli, 2016).
Another limitation is that the three main studies only examined a warning referring to a CBCA and a warning about a lie detection algorithm focusing on nonverbal and paraverbal cues for videoconference interviews, and the effects of objective questions and a personable interviewer for face-to-face interviews. However, Study 1 revealed several other potentially useful countermeasures against faking that might be investigated in future studies. So far, only follow-up questions, the number and type of interviewers, different types of interview questions, and the effects of a few different kinds of warnings against faking have been investigated (also see Melchers et al., 2020). However, when follow-up questions were used (Levashina & Campion, 2007) and when the potential future supervisor served as the interviewer (Bourdage et al., 2018), this led to more instead of less faking. Furthermore, the use of panel interviews did not affect faking (Bourdage et al., 2018). Thus, more research is still needed to assess the usefulness of other countermeasures. However, for some of the countermeasures, the only available evidence stems from retrospective surveys. Here, it would be valuable to also evaluate these potential countermeasures in real interviews, and ideally also in high-stakes field settings. Furthermore, alternative approaches to better understand relevant factors and processes that contribute to more or less faking in actual selection interviews could also help. The unexpected but frequent mention of a personable interviewer in Study 1 and the related finding in the qualitative study by Ho et al. (2021) suggest that the currently available faking models are still incomplete. Thus, more qualitative research might be needed to detect further potential gaps in our theories. Finally, the finding that faking countermeasures might also have negative (or positive) effects on applicant reaction variables stresses that potential countermeasures should be evaluated not only concerning their effects on faking behavior but also concerning applicant reactions, as well as other aspects such as the psychometric properties of the interview, the legal defensibility of the countermeasures, or their implementation costs.
A third limitation concerns the issue of not being able to evaluate the effects of the countermeasures on responses to actual interview questions or to different types of interview questions. Past research found that situational questions are more prone to faking than past behavior questions (Levashina & Campion, 2007) but less prone than other questions (Bourdage et al., 2018). It was also found that resume-based questions led to slightly higher deceptive ingratiation than other questions (Bourdage et al., 2018). Furthermore, faking should also be less common when responding to job-related questions than to traditional questions such as “What is your major weakness?”, for which it seems highly unlikely that applicants in a high-stakes setting do indeed reveal their major weakness. In line with this suggestion, previous research on the effects of IM in interviews (independent of whether it was honest or deceptive) found that the relationship between IM usage and interview performance was considerably stronger in unstructured interviews than in structured interviews (Barrick et al., 2009). Thus, more research that investigates whether interview structure and the use of certain types of interview questions affect the use and effectiveness of faking tactics is definitely needed.
A fourth limitation, which might explain why the most promising countermeasures from Study 1 did not have the expected effects in Studies 2 to 4, is that we measured the effectiveness of the countermeasures with different items. Specifically, in Study 1, participants had to evaluate each potential countermeasure with only one item on a global level, whereas we used a longer instrument with multiple items in all the other studies. Even though the single item from Study 1 covers faking in essence, it may have led participants to understand faking differently in that study than in the subsequent experimental studies.
10.2 Practical implications
Based on our results, warnings referring to a lie detection algorithm focusing on nonverbal and paraverbal cues or to a CBCA do not seem to affect faking intentions and self-reported faking behavior in videoconference interviews. Instead, our results suggest that these countermeasures might impair organizational attractiveness, procedural justice, reapplication intentions, job pursuit intentions, and recommendation intentions. Furthermore, we assume that the use of other kinds of faking warnings could have similar negative effects on applicants' perceptions and that this could not only apply to videoconference but also to face-to-face interviews.
In light of evidence that it is hardly possible to detect faking in interviews (Roulin et al., 2015), one could question whether organizations should stop using interviews altogether. However, in our view, such a suggestion would be going too far. First, there is clear evidence that structured, job-related interviews are one of the most valid selection tools that are available for personnel selection and assessment (Sackett et al., 2021; Schmidt & Hunter, 1998). Second, there is also evidence that faking in interviews has considerably smaller effects on scores from a structured interview than on scores from a personality inventory targeting the same constructs (Van Iddekinge et al., 2005). Third, a recent study by Bill et al. (2020) found that many potential applicants would actually be hesitant to fake in an interview. Answers to a qualitative question related to the use or non-use of faking suggested that many interviewees think that faking would be discovered and/or punished anyway, that they would not fake because of ethical and moral reasons, or that they would not want to jeopardize their fit with the job or organization. This is similar to findings from a qualitative study focusing on faking in personality tests, in which participants also gave various reasons why they did not want to fake as much as would have been possible (König et al., 2012). Thus, the problems that occur due to faking in interviews might be more limited than previously thought.
ACKNOWLEDGMENTS
We would like to thank Anne-Lena Leidenberger, Silja Petrig, Julia Buettner, Sophia Kaehn, Theresa Nagel, Jacqueline Elbert, and Ella Voigtlaender for their help in conducting the studies. Open access funding enabled and organized by Projekt DEAL.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
ENDNOTES
- 1 Even though the second set of countermeasures was introduced as countermeasures for face-to-face interviews, we want to acknowledge that most of these countermeasures are probably equally suitable for all kinds of interviews.
- 2 To evaluate the factorial structure of the applicant reaction variables, we conducted confirmatory factor analyses (CFAs) for this and each of the following studies. Results for these CFAs can be found in the supplemental materials available via OSF (https://osf.io/uy9ae/?view_only=7c3e83d1eb27489dbbba2e7bc0106f85). The CFAs revealed that a single underlying factor was not suitable to represent the five different applicant reaction variables but that a model with five correlated factors was more appropriate. However, for this model, a high modification index for one of the job pursuit intentions items led us to delete this item for the present study as well as for all subsequent studies (see supplemental materials).
- 3 Here and subsequently, the assumption of variance homogeneity was violated in some cases. In these cases, we also tested the main effects using nonparametric procedures, which yielded identical results.
APPENDIX A
(Table A1)
Applicant reaction | Item | Source |
---|---|---|
Procedural justice | Overall, I believe that the selection process was fair.a | Smither et al. (1993) |
I would feel good about the way the examination was conducted and administered.a | Smither et al. (1993) | |
The procedures used by this company to evaluate the qualifications of applicants are fair.a | Smither et al. (1996) | |
Organizational attractiveness | For me, this company would be a good place to work. | Highhouse et al. (2003) |
This company would be attractive for me as a place for employment.a | Highhouse et al. (2003) | |
A job at this company would be very appealing to me. | Highhouse et al. (2003) | |
Recommendation intentions | I would recommend this firm to friends. | Wang (2013) |
I would recommend this firm to others who seek my advice. | Wang (2013) | |
I would encourage others to apply for this firm's job.a | Wang (2013) | |
I would highly recommend this firm to my friends and family if they are looking for a job.a | Wang (2013) | |
Job pursuit intentions | I would pursue employment at this company. | Madera (2012) |
I would accept the job if it was offered to me.a | Macan et al. (1994) | |
I would seriously consider this company as a possible employer. | Smither et al. (1996) | |
Intention to reapply | I would intend to apply for a new job here again if I am not offered a job.a | Bauer et al. (1998) |
I would reapply with this company.a | Konradt et al. (2013) | |
Should there be suitable vacancies in the future, I would apply again with this company. | Konradt et al. (2013) |
- a Items were adapted.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.