The effects of analyst forecasts and earnings trends on perceptions of management forecast credibility
We thank Anna Cianci, Shana Clor-Proell, Sukari Farrington, Frank Hodge, Lisa Koonce, Molly Mercer, Norma Montague, Sundaresh Ramnath, Jane Thayer, Shankar Venkataraman, Julie Wayne, Ya-Wen Yang, Tina Zamora and particularly, Steven Cahan (the Editor) and two anonymous referees for helpful comments and suggestions. We also acknowledge comments provided by the reviewers and participants of the 2010 AAA Annual Meeting, the 2009 ABO Research Conference, and the New England Behavioral Accounting Research Series. We would like to express our gratitude to Robin Dillon-Merill, Dawn Porter, and David Post for assistance in recruiting participants. Finally, we thank Lee Kersting, Norma Montague and Nicole Siroonian for research assistance. Professor Kelton gratefully acknowledges financial support from the Wake Forest School of Business.
Abstract
We examine whether analyst forecasts influence investors’ perceptions of the credibility of a good news management earnings forecast. We hypothesize that the effect of analyst forecasts will depend on whether the analyst forecast confirms management’s forecast and the extent to which management’s forecast is consistent with the prior earnings trend. Findings indicate that the positive effect of a confirming analyst forecast is greater when the management forecast is trend inconsistent than when it is trend consistent. The negative effect of a disconfirming analyst forecast does not differ based on management forecast trend consistency.
1. Introduction
Investors often use management earnings forecasts to make investment-related judgments and decisions. However, investor reliance on management forecasts depends on the credibility of the disclosure (Jennings, 1987; Mercer, 2004; Hirst et al., 2007). Investors know that managers have incentives to release biased forecasts, and investors must often use other information to corroborate the information contained in a management forecast (Hutton et al., 2003). One source of information that investors may use to determine the credibility of a management forecast is analyst forecasts (Mercer, 2004). While much prior accounting research has investigated the independent effects of management forecasts (e.g. Coller and Yohn, 1997) and analyst forecasts (e.g. Lys and Sohn, 1990), little research has investigated investors’ combined use of both of these information sources. Because it is common for investors to utilize multiple sources of information in their judgments and decisions, we investigate these joint effects.
Because management forecasts are voluntary disclosures, managers’ incentives for issuing forecasts and the credibility of the forecasts are important factors affecting investors’ use of these forecasts (Healy and Palepu, 2001). Forecast credibility is of particular concern when the forecast conveys good news1 because managers have market incentives to provide good news (Ajinkya and Gift, 1984). Consequently, investors often question disclosures that are seemingly driven by management’s incentives (Hodge et al., 2006) and may require additional information to judge the credibility of such disclosures. For example, results from Hutton et al. (2003) suggest that investors only consider good news forecasts to be credible when management provides verifiable forward-looking disclosures that corroborate the forecast.2 Because the impact of management forecasts on investors’ judgments depends on the perceived credibility of such forecasts, understanding the factors that influence investor perceptions of the credibility of management forecasts, especially good news forecasts, is important (Hirst et al., 2007). Thus, the purpose of this study is to investigate how investors’ perceptions of the credibility3 of a management earnings forecast are jointly influenced by whether the analyst earnings forecast corroborates management’s forecast and the consistency of a management earnings forecast with the prior earnings trend. We specifically examine these effects under the good news forecast scenario as these forecasts are deemed to be less credible.
Analyst forecasts are useful to investors in investment-related judgments and decisions (e.g. Hirst et al., 1995; Ackert et al., 1997). Analyst forecasts are often released subsequent to management’s forecasts and provide information to investors regarding management forecast credibility (Jennings, 1987). However, recent financial scandals have led to questions related to the independence between financial analysts and firm management and have highlighted the related incentives of analysts to provide overly optimistic disclosures. For example, Feng and McVay (2010) show that incentives to please management often result in analysts issuing biased short-term earnings forecasts. Research suggests that investors are aware of these incentives and, as a result, discount both negative and positive information from analysts (Kothari et al., 2009). Thus, the nature of the relationship between management and analysts and the effect of this relationship on investors’ use of analyst forecasts make this an interesting and important area of research.
The extent to which investors will rely on analyst forecasts will likely depend on characteristics of management’s forecast, and specifically whether management’s forecast is consistent with their expectations. Absent additional information, investors develop future earnings expectations based on a firm’s historical reported earnings and use earnings trend as a benchmark to evaluate future performance (Graham et al., 2005; Koonce and Lipe, 2010). Investors are thus also likely to evaluate an earnings forecast, specifically the credibility of the forecast, based on the extent to which the forecast is consistent with the prior earnings trend (Mercer, 2004).4 Research in psychology (Hastie, 1984) and accounting (Earley, 2002; Ballou et al., 2004) suggests that investors may evaluate subsequent information (e.g. analyst forecasts) differently based on whether management’s forecast is consistent with the prior earnings trend (hereafter, trend consistency). Specifically, research shows that individuals expend more effort processing information that is inconsistent with their expectations such that their reliance on the information, and any subsequent information received, is more extreme (Earley, 2002; Ballou et al., 2004; Clor-Proell, 2009). As such, we posit that, when presented with analyst forecasts, changes to investors’ credibility judgments and decisions will be greater when management’s forecast is inconsistent with the prior earnings trend than when it is consistent with the prior earnings trend.
We conduct an experiment that utilizes a 2 × 2 between-participants research design. Across the four groups of participants, we vary the consistency of a good news management forecast with the prior earnings trends (trend consistent versus trend inconsistent) and whether the analyst forecast confirms (or contradicts) management’s forecast. Study participants are Master of Business Administration (MBA) students who are asked to assume the role of investors. Participants review information regarding a potential investment opportunity, including earnings forecasts from the company’s management and from analysts. To isolate the incremental effect of the analyst forecast on credibility judgments, we measure participants’ perceptions of management forecast credibility, as well as their own estimates of earnings per share (EPS), both prior to and after receiving the analyst forecast. We utilize credibility and EPS change scores to examine the effect of the analyst forecast on investor judgments. In all scenarios, management’s forecast is a good news forecast (i.e. the forecasted EPS is higher than the prior year’s reported EPS).
Ceteris paribus, we expect that analyst forecast confirmation and management forecast consistency will jointly affect investors’ perceptions of management forecast credibility. We predict that the positive effect of a confirming analyst forecast will be moderated by management forecast consistency. Research shows that investors often discount favourable analyst reports (Hirst et al., 1995); thus, we expect investors, when given a confirming (i.e. favourable) analyst forecast, to also consider management forecast consistency. Specifically, because of the expectations violation caused by a trend inconsistent management forecast, we predict that the positive effect of a confirming analyst forecast will be greater for trend inconsistent forecasts than for trend consistent forecasts. Alternatively, research suggests that investors consider negative information to be more relevant to their investment decisions than positive information (Cianci and Falsetta, 2008; Coram, 2010) and bad news analyst forecasts are more useful to investors than good news analyst forecasts (Frankel et al., 2006). Thus, we do not expect investors’ negative reactions to a disconfirming analyst forecast to differ based on management forecast consistency.
Collectively, our results inform both research and practice. Our results confirm findings from archival accounting research (e.g. Jennings, 1987), which suggest that analyst forecasts provide information about management forecast credibility. However, our results show that the informativeness of analyst forecasts about management forecast credibility depends on the agreement between the forecasts and on the consistency of management’s forecast with investors’ earnings expectations. Extant research in accounting has focused mostly on understanding factors that negatively influence management credibility, and very few studies examine factors that may improve the credibility of management earnings forecasts (Mercer, 2004). We show that a confirming analyst forecast can improve investor perceptions of management forecast credibility, even when management’s forecast is inconsistent with investors’ expectations.
On a practical level, our results provide information to managers on specific factors that affect the credibility of their earnings forecasts, specifically those forecasts that are perceived as inherently less credible (i.e. good news forecasts). According to Hirst et al. (2008, p. 329), ‘managers have to expend greater effort to make good news credible’. We find that the damage to credibility from an unexpected (i.e. trend inconsistent) management forecast can be offset by a confirming analyst forecast. Thus, our results suggest that to increase forecast credibility among investors, managers may have to expend additional effort in convincing intermediaries, like financial analysts, that their information is credible. Such efforts may be more rewarding in instances where the management forecast contains a surprise element (i.e. deviates from expectations). This finding is consistent with recent evidence that suggests managers use earnings guidance to lead analysts towards beatable earnings targets (Cotter et al., 2006). However, our results also show that companies may be penalized, at least temporarily, if analyst forecasts are below their own forecasts.
The remainder of this paper is organized as follows. In the next section, we provide the theoretical background and present the motivation for our hypotheses. In the third section, we discuss the research method used in the study, and in the fourth section, we present and discuss the results. We provide concluding remarks in the final section.
2. Theory and hypotheses
Jennings (1987) provides evidence that analyst forecasts made subsequent to the release of a management earnings forecast provide information about the credibility of the management forecast. Subsequent studies in accounting use analyst forecast revisions as a proxy for management earnings forecast credibility (e.g. Koch, 2005). These studies presume that analyst forecasts represent market expectations and, therefore, analyst forecast revisions proxy for investors’ assessment of the credibility of management’s forecast (i.e. the bigger the deviation of the analyst forecast from management’s forecast, the lower the perceived credibility of management’s forecast). While this presumption may be true, other research finds that investors’ expectations may differ from analyst expectations and that investors (mentally) adjust analyst forecasts to incorporate other information. For example, Hirst et al. (1995) find that investors incorporate both characteristics of the analyst (e.g. their relationship with management) and of the analyst report (e.g. the favourability of the news conveyed) in their judgments. In general, they find that investors place less weight on analyst reports that are from analysts with ties to management (i.e. less credible).5 Results from Ackert et al. (1997) and Pinello (2008) suggest that investors use analyst forecasts for investment-related decisions, but that investors recognize and adjust analyst forecasts to reduce the effects of perceived forecast bias. Thus, while research shows that investors impound analyst information into their judgments and decisions, their use of analyst forecasts depends on other factors. In this study, we examine the interactive effects of two such factors: analyst forecast confirmation and management forecast trend consistency.
Collectively, both archival and behavioural accounting research indicates that analyst forecasts are informative to investors (e.g. Hirst et al., 1995; Ackert et al., 1997), and we expect that investors will utilize analyst forecasts to assess the credibility of management earnings forecasts. Specifically, we expect that an analyst forecast that confirms (does not confirm) a management forecast will be perceived as positive (negative) information and will thus positively (negatively) affect investors’ perceptions of management forecast credibility.
Investors’ reliance on analyst forecasts will also depend to some extent on whether the management forecast is consistent with the prior earnings trend (i.e. is consistent with investor expectations).6 Expectations violations theory predicts that information that is inconsistent with or deviates from expectations will cause individuals to expend more effort in processing the information (Hastie, 1984). In general, individuals engage in deeper cognitive processing in an attempt to discover the cause of the unexpected information (i.e. causal reasoning; Hastie, 1984). Information that is inconsistent with expectations is also more salient (Erdfelder and Bredenkamp, 1998) and therefore has a greater influence on judgments (Fiske and Taylor, 1991). Information that is inconsistent with expectations is also often considered to be of lower quality (Koehler, 1993). While such findings are robust in psychology research (Stangor and McMillan, 1992), they are also found in accounting settings. For example, Earley (2002) examines the effect of consistent and inconsistent information on auditors’ judgments related to the reasonableness of a client’s real estate valuation. She predicts and finds that when initial information is consistent with auditors’ expectations, auditors process subsequent information less deeply, which in turn leads to decreased judgment quality. In addition, Ballou et al. (2004) find that auditors are less likely to consider relevant subsequent information when a client’s strategic position is consistent with industry norms (i.e. their expectations). Clor-Proell (2009) examines whether financial statement users’ judgments of a firm’s accounting choice (i.e. recognition versus disclosure) are influenced by the extent to which the firm’s choice matches their expectations. Her findings show that users’ judgments of management credibility and investment decisions are more extreme when the actual accounting choice does not match expectations. In addition, Clor-Proell (2009) finds that users are more apt to seek out additional information when their expectations are violated.7 In sum, the above research provides evidence that expectations violations invoke additional processing, influence task outcomes (Earley, 2002; Ballou et al., 2004) and credibility judgments (Clor-Proell, 2009) and affect individuals’ search for additional information (Clor-Proell, 2009).
The effect of an expectations violation (i.e. a trend inconsistent management forecast) should be most prevalent when individuals engage in sequential information processing such as when investors view various pieces of information in evaluating a company’s forecast. During sequential information processing, an individual develops an initial belief and then revises it to account for subsequent information. The magnitude of revision often depends on the extent to which new information deviates from prior beliefs (Hogarth and Einhorn, 1992). In addition, the causal reasoning processes that are evoked by inconsistent (sequential) information are most evident in settings in which it is natural or adaptive to consider the motives, intentions or circumstances of the provider of the information (Hastie, 1984), as is the case in forming credibility perceptions.
Based on the above, when management’s forecast is inconsistent with expectations, we expect that investors will consider it less credible and expend more effort processing additional (subsequent) information (Hastie, 1984). Thus, investors will rely more on analyst forecasts when the forecast is trend inconsistent than when it is trend consistent. However, we expect that this effect will be most pronounced in the confirming analyst forecast condition.
Prior research suggests that positive information from analysts often lacks credibility. For example, Hirst et al. (1995) find that investors discount favourable analyst reports and view unfavourable reports as unexpected. Additionally, archival accounting research shows that the market reacts more strongly to negative (versus positive) analyst forecasts (Frankel et al., 2006), suggesting that bad news analyst forecasts are more useful to investors. Consistent with these findings, we expect investors to discount good news (confirming) analyst forecasts and to consider other factors, such as management forecast consistency, in their judgments. Owing to the expectations violation caused by the trend inconsistent management forecast, we expect the positive effect of a confirming analyst forecast to differ between management forecast consistency conditions. Alternatively, given investors’ tendency to react more to negative news, we expect investors to place more weight on the disconfirming analyst forecast and thus do not expect investors’ reliance on the disconfirming analyst forecast to differ between management forecast consistency conditions. Formally stated:
H1: The positive effect of a confirming analyst forecast on investor perceptions of the credibility of a good news management earnings forecast will be greater for trend inconsistent forecasts than for trend consistent forecasts.
H2: The negative effect of a disconfirming analyst forecast on investor perceptions of the credibility of a good news management earnings forecast will not differ for trend inconsistent and trend consistent forecasts.
We expect results of this study to confirm findings from prior research that analyst forecasts are informative to investors and are used to assess management forecast credibility (e.g. Hirst et al., 1995; Ackert et al., 1997). Because these findings are well documented in the research, we make no formal hypotheses although they are discussed in the results.
3. Method
3.1. Design
To test our hypotheses, we conduct an experiment using a 2 × 2 between-participants design. The first factor relates to the relationship between the analyst’s earnings forecast and the management earnings forecast. Specifically, the analyst forecast either confirms or disconfirms management’s forecast. The second factor, management forecast consistency, relates to whether the management earnings forecast is consistent with the firm’s prior earnings trend. In the trend inconsistent (consistent) condition, the company has experienced a steady decrease (increase) in EPS over the past 3 years. In all conditions, management’s forecast is the same ($0.81) and exceeds the most recent year’s reported EPS ($0.71), so it is a good news forecast that is either trend consistent or trend inconsistent depending on the prior trend.
3.2. Participants
One hundred and forty MBA students from the United States participated in the study as proxies for nonprofessional investors.8 Prior experimental financial accounting research often uses MBA students to proxy for nonprofessional investors (e.g. Maines and McDaniel, 2000; Hodge et al., 2004). According to Elliott et al. (2007), MBA students are reasonable proxies for nonprofessional investors for experimental tasks that are relatively low in integrative complexity.9 Our experimental task is similar to those classified by Elliott et al. (2007) as exhibiting relatively low integrative complexity; thus, MBA students are appropriate participants for our task. Approximately 88 per cent of participants reported having personal investing experience averaging 4.8 years. Approximately 94 per cent of participants reported experience conducting financial statement analysis. On average, students reported having taken 2.4 (2.1) graduate-level accounting (finance) courses.10
3.3. Procedures
Participants were instructed to assume the role of a potential investor in ProMed Corp, a hypothetical company that develops, manufactures and markets surgical products. Participants were provided with instructions for completing the case, general company background and management information,11 and the company’s actual reported EPS for the past 3 years. Actual EPS for the past 3 years revealed either a downward (trend inconsistent) or an upward (trend consistent) trend with the most current year’s EPS held constant across all conditions. All participants then received ProMed’s earnings forecast and were asked to make several judgments regarding the forecast and to provide their own EPS forecast. Next, participants were provided with the mean analyst earnings forecast from ‘Thomson First Call’, depending on the experimental condition. Participants in the confirming (disconfirming) analyst forecast condition were given an analyst forecast equal to (less than) management’s forecast.12 Participants then made several judgments regarding the information, provided a revised EPS forecast and completed the post-experimental questionnaire. Figure 1 provides a description of the experimental procedures.

Figure 1. Experimental procedures.
The experiment was conducted in two settings. Sixty-six per cent of participants completed the experiment during scheduled class time using hard-copy materials. Thirty-four per cent of participants completed the experiment at their convenience on the Internet. The materials used in both settings were identical except for the medium in which they were administered (hard copy versus web based). Analysis of the dependent measures indicates no significant differences owing to experimental setting.13
3.4. Dependent variable
Our primary dependent variable is participants’ perceptions of management earnings forecast credibility, which we measure at two points in time: (i) after participants receive management’s forecast (initial measure) and (ii) after participants receive the analyst earnings forecast (revised measure). We ask participants to assess both the credibility and the believability of management’s forecast using 11-point scales anchored at (0) ‘not at all credible (believable)’ and (10) ‘extremely credible (believable)’. We average these two measures into one credibility score for further analysis (Hirst et al., 2007).14 To assess the incremental effect of the analyst forecast on management forecast credibility, we construct a credibility change score, calculated as the participant’s revised measure minus his/her initial measure. Because we are interested in both the direction and the magnitude of participants’ reactions to the analyst forecast, we use both the change score and the absolute value of the change score in our analyses.
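For illustration only, the following sketch (not the code used in the study; column names and values are hypothetical) shows how a composite credibility score and the corresponding change scores could be computed from such responses:

```python
import pandas as pd

# Hypothetical participant responses on the 0-10 scales (illustrative values only).
df = pd.DataFrame({
    "credibility_initial":   [6, 5, 4],
    "believability_initial": [7, 5, 5],
    "credibility_revised":   [7, 4, 6],
    "believability_revised": [8, 3, 6],
})

# Composite credibility score = average of the credibility and believability items.
df["initial_score"] = df[["credibility_initial", "believability_initial"]].mean(axis=1)
df["revised_score"] = df[["credibility_revised", "believability_revised"]].mean(axis=1)

# Change score (direction of revision) and its absolute value (magnitude of revision).
df["change"] = df["revised_score"] - df["initial_score"]
df["abs_change"] = df["change"].abs()

print(df[["initial_score", "revised_score", "change", "abs_change"]])
```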
4. Results
4.1. Manipulation checks
To assess the effectiveness of the analyst forecast confirmation manipulation, we asked participants whether ProMed’s forecast was greater than, less than or equal to the analyst forecast. One hundred and twenty-seven (90.7 per cent) participants correctly identified the relationship. For the management forecast consistency manipulation, we asked participants to indicate ProMed’s prior years’ earnings trend. One hundred and thirty-three (95 per cent) participants correctly identified the earnings trend.15
Additionally, after receiving prior years’ reported earnings but before receiving management’s forecast and the analyst forecast, we asked participants to indicate on an 11-point scale whether they believed the next year’s EPS would be ‘much less’ (scale value = ‘−5’), ‘no different’ (scale value = ‘0’) or ‘much greater’ (scale value = ‘+5’) than last year’s EPS. Participants in the trend consistent condition reported significantly different expectations for ProMed’s future EPS than participants in the trend inconsistent condition (t = 14.20, p < 0.01, one-tailed), and these expectations were directionally consistent with our manipulation (means = 1.94 vs. −1.53). Participants also reported the extent to which they agreed that ProMed’s earnings forecast was consistent with their expectations using an 11-point scale anchored by (−5) ‘strongly disagree’ and (5) ‘strongly agree’. Responses were significantly different between the management forecast consistency conditions (means = −0.56 vs. −1.45, t = 2.45, p < 0.01, one-tailed). These items provide additional support for our manipulations.
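A minimal sketch of this type of independent-samples comparison, using simulated responses on the −5 to +5 scale rather than the study’s data (sample sizes and cell means are only loosely based on those reported above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated EPS-expectation responses for the two trend conditions (illustrative only).
trend_consistent = rng.normal(1.9, 1.5, 72).clip(-5, 5)
trend_inconsistent = rng.normal(-1.5, 1.5, 68).clip(-5, 5)

# Independent-samples t-test; scipy returns a two-tailed p-value, so we halve it
# for the directional (one-tailed) prediction when the sign matches the prediction.
t, p_two = stats.ttest_ind(trend_consistent, trend_inconsistent)
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"t = {t:.2f}, one-tailed p = {p_one:.4f}")
```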
4.2. Hypothesis tests
Table 1 presents descriptive statistics for participants’ initial and revised credibility judgments and credibility change scores. Participants’ initial credibility assessments are significantly greater in the trend consistent condition than in the trend inconsistent condition (means = 5.72 vs. 5.00, t = 2.37, p < 0.01, one-tailed), suggesting that management forecast consistency affects the credibility of management forecasts. Credibility change scores in all conditions are significantly different from zero (all p < 0.01), indicating that participants revised their initial credibility perceptions to incorporate the analyst forecast. A confirming analyst forecast has a significantly positive effect on management forecast credibility (mean = +1.16, p < 0.01), and a disconfirming analyst forecast has a significantly negative effect on management forecast credibility (mean = −1.45, p < 0.01). These results confirm that investors believe that analyst forecasts are useful for assessing management forecast credibility.
Table 1. Management forecast credibility judgments*,†

Mean (median) [standard deviation]

| Analyst forecast confirmation | Trend consistent | Trend inconsistent | Overall |
|---|---|---|---|
| Confirming analyst forecast | | | |
| Initial measure | 5.99 (6.00) [1.71] | 4.95 (5.00) [1.78] | 5.47 (5.50) [1.81] |
| Revised measure | 6.89 (7.00) [1.59] | 6.39 (6.50) [1.77] | 6.63 (7.00) [1.69] |
| Change score | +0.88 (1.00) [1.29] | +1.44 (1.00) [2.00] | +1.16 (1.00) [1.69] |
| | n = 38 | n = 37 | n = 75 |
| Disconfirming analyst forecast | | | |
| Initial measure | 5.43 (5.00) [2.19] | 5.07 (5.00) [1.42] | 5.25 (5.00) [1.86] |
| Revised measure | 3.97 (4.00) [2.00] | 3.61 (3.00) [1.34] | 3.80 (4.00) [1.71] |
| Change score | −1.46 (−1.00) [1.55] | −1.45 (−1.50) [1.33] | −1.45 (−1.00) [1.44] |
| | n = 34 | n = 31 | n = 65 |
| Overall | | | |
| Initial measure | 5.72 (5.50) [1.96] | 5.00 (5.00) [1.62] | |
| Revised measure | 5.50 (5.75) [2.30] | 5.13 (5.00) [2.11] | |
| Change score | −0.22 (0.00) [1.83] | +0.13 (0.00) [2.25] | |
| | n = 72 | n = 68 | |
- *Trend consistent (inconsistent) management forecasts are good news management forecasts made in situations where the prior earnings trend is increasing (decreasing) and thus are consistent (inconsistent) with the prior trend. Confirming (disconfirming) analyst forecasts are analyst forecasts that are equal to (less than) the management forecast. †Participants assessed the credibility and believability of management’s forecast using 11-point scales anchored by (0) ‘not at all credible (believable)’ and (10) ‘extremely credible (believable)’. We average these two measures into one score, which we refer to as credibility. Participants assessed credibility at two points in time – after receiving management’s forecast (initial measure) and after receiving the analyst earnings forecast (revised measure). The change score is computed as the revised measure minus the initial measure.
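As an illustration of the one-sample tests of the change scores against zero reported above, the following sketch runs the same type of test on hypothetical cell-level data (not the authors’ responses):

```python
from scipy import stats

# Hypothetical credibility change scores by experimental cell (illustrative values only).
cells = {
    "confirming / trend consistent":      [1.0, 0.5, 2.0, 0.0, 1.5, 1.0],
    "confirming / trend inconsistent":    [2.0, 1.0, 1.5, 0.5, 2.5, 1.0],
    "disconfirming / trend consistent":   [-1.0, -2.0, -1.5, -0.5, -2.5, -1.0],
    "disconfirming / trend inconsistent": [-1.5, -1.0, -2.0, -1.0, -1.5, -2.0],
}

# A mean change score that differs from zero indicates that participants revised
# their initial credibility judgments after seeing the analyst forecast.
for cell, scores in cells.items():
    t, p = stats.ttest_1samp(scores, popmean=0.0)
    print(f"{cell}: t = {t:.2f}, p = {p:.3f}")
```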
Panel A of Table 2 presents results from an ANOVA on the credibility change scores,16 and Figure 2 provides a graphical summary of the results. Taken together, H1 and H2 predict an ordinal interaction between analyst forecast confirmation and management forecast consistency. The first contrast presented in Panel B of Table 2 is a comprehensive test of our predictions. Consistent with prior research on the negativity bias (Ito et al., 1998), we expect participants’ credibility change scores to be greater in absolute magnitude in the disconfirming analyst forecast conditions than in the confirming analyst forecast conditions. As predicted in H1, we also expect a confirming analyst forecast to affect credibility change scores more when the management forecast is trend inconsistent than when it is trend consistent.17 As predicted in H2, we do not expect the effect of a disconfirming analyst forecast to differ between management forecast consistency conditions. The overall contrast is significant (t = 1.346, p = 0.090), providing support for the predicted pattern described above. Simple effect tests show that the positive effect of a confirming analyst forecast is greater for trend inconsistent forecasts than for trend consistent forecasts (means +1.44 vs. +0.88, p = 0.076). The negative effect of a disconfirming analyst forecast is not significantly different between management forecast consistency conditions (means −1.45 vs. −1.46, p = 0.991). These results are consistent with our hypotheses.
Table 2. Panel A: Results of analysis of variance for credibility change scores

| Source | df | SS | MS | F | p-value |
|---|---|---|---|---|---|
| Corrected model | 3 | 243.88 | 81.29 | 32.71 | <0.001 |
| Management forecast consistency | 1 | 2.81 | 2.81 | 1.13 | 0.289 |
| Analyst forecast confirmation | 1 | 238.28 | 238.28 | 95.89 | <0.001 |
| Management forecast consistency × analyst forecast confirmation | 1 | 2.73 | 2.73 | 1.10 | 0.297 |
| Error | 136 | 337.97 | 2.49 | | |
Panel B: Planned contrasts

| Contrast† | t-statistic | p-value* |
|---|---|---|
| Overall effect‡ | 1.346 | 0.090 |
| H1: Trend inconsistent vs. trend consistent for confirming analyst forecast | 1.448 | 0.076 |
| H2: Trend inconsistent vs. trend consistent for disconfirming analyst forecast | 0.012 | 0.991 |
- *p-values are one-sided for directional hypotheses. †The overall effect is tested using the absolute value of the credibility change scores. H1 and H2 are tested using the credibility change scores. Nonparametric statistics are reported. ‡Contrast weights were assigned as follows: −3 for the confirming analyst forecast/trend consistent condition, −1 for the confirming analyst forecast/trend inconsistent condition and +2 for both disconfirming analyst forecast conditions.
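For readers interested in replicating this style of analysis, the following sketch (simulated data, not the study’s; statsmodels is one possible implementation) illustrates a two-way ANOVA of the change scores, as in Panel A, and the weighted cell-mean contrast on the absolute change scores described in the table note:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Simulated 2 x 2 data set (cell means loosely based on Table 1; illustrative only).
conditions = [("confirm", "consistent", 0.9, 38), ("confirm", "inconsistent", 1.4, 37),
              ("disconfirm", "consistent", -1.5, 34), ("disconfirm", "inconsistent", -1.5, 31)]
rows = [{"analyst": a, "trend": t, "change": rng.normal(mu, 1.5)}
        for a, t, mu, n in conditions for _ in range(n)]
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, analogous to Table 2, Panel A.
model = smf.ols("change ~ C(analyst) * C(trend)", data=df).fit()
print(anova_lm(model, typ=2))

# Planned contrast on the absolute change scores using the footnote's weights:
# -3 confirm/consistent, -1 confirm/inconsistent, +2 for each disconfirming cell.
weights = {("confirm", "consistent"): -3, ("confirm", "inconsistent"): -1,
           ("disconfirm", "consistent"): 2, ("disconfirm", "inconsistent"): 2}
cell_means = (df.assign(abs_change=df["change"].abs())
                .groupby(["analyst", "trend"])["abs_change"].mean())
contrast_estimate = sum(w * cell_means[cell] for cell, w in weights.items())
print("Contrast estimate on |change|:", round(contrast_estimate, 3))
# A formal test would divide this estimate by its standard error to obtain a t-statistic.
```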

Figure 2. Management forecast credibility change scores.
4.3. Additional analysis of credibility perceptions
We also analyse participants’ revised credibility measures to gain additional insight into the effects of analyst forecasts on management forecast credibility. As previously mentioned, ceteris paribus, investors perceive trend consistent forecasts as more credible than trend inconsistent forecasts. However, descriptive statistics presented in Table 1 suggest that an analyst forecast may mitigate this effect. Specifically, we find that a trend inconsistent forecast that has been confirmed by an analyst is significantly more credible than a trend consistent forecast that has been disconfirmed by an analyst (means 6.39 vs. 3.97, p < 0.01). Interestingly, participants perceive a trend inconsistent forecast to be just as credible as a trend consistent forecast when the forecast is confirmed by an analyst (means 6.39 vs. 6.89, p = 0.11). These findings suggest that, when assessing management forecast credibility, investors rely more on information from analysts than on characteristics of the management forecast (i.e. management forecast consistency).
4.4. Additional analysis of EPS estimates
The capital market consequences of forecast credibility are well documented in the accounting literature (e.g. Jennings, 1987; Coller and Yohn, 1997). Hirst et al. (2007) show that investor perceptions of forecast credibility are positively related to estimated price-earnings multiples, that is, investors assess a value premium for firms with more credible earnings forecasts. Similarly, in our setting, we expect participants to provide higher EPS estimates when the management forecast is trend consistent and when the credibility of the management forecast is strengthened by a confirming analyst forecast. Therefore, while not our main dependent variable, we also examine the effects of analyst forecast confirmation and management forecast consistency on investors’ EPS judgments.
Participants provided EPS estimates after receiving ProMed’s forecast (initial EPS) and after receiving the analyst forecast (revised EPS). Panel A of Table 3 presents descriptive statistics for participants’ initial and revised EPS estimates and the change scores (revised minus initial EPS estimate). Interestingly, the initial EPS estimates of participants in both management forecast consistency conditions are significantly less than management’s forecasted EPS of $0.81 (p < 0.01) but significantly greater than the most recent year’s reported EPS of $0.71 (p < 0.01). Thus, participants in this study appear to discount management’s estimate when determining their own EPS estimate but still provide a good news/optimistic EPS estimate, even when management’s forecast lacks credibility (i.e. is trend inconsistent).
Table 3. Panel A: Descriptive statistics for earnings per share estimates

Mean (median) [standard deviation]

| Analyst forecast confirmation | Trend consistent | Trend inconsistent | Overall |
|---|---|---|---|
| Confirming analyst forecast | | | |
| Initial measure | 0.776 (0.780) [0.026] | 0.756 (0.760) [0.039] | 0.766 (0.770) [0.035] |
| Revised measure | 0.787 (0.790) [0.023] | 0.779 (0.790) [0.034] | 0.783 (0.790) [0.029] |
| Change score | +0.011 (+0.010) [0.019] | +0.023 (+0.020) [0.029] | +0.017 (+0.010) [0.025] |
| | n = 38 | n = 37 | n = 75 |
| Disconfirming analyst forecast | | | |
| Initial measure | 0.768 (0.780) [0.038] | 0.760 (0.760) [0.032] | 0.764 (0.760) [0.036] |
| Revised measure | 0.754 (0.750) [0.035] | 0.745 (0.750) [0.029] | 0.749 (0.750) [0.032] |
| Change score | −0.015 (−0.020) [0.038] | −0.016 (−0.020) [0.029] | −0.015 (−0.100) [0.034] |
| | n = 34 | n = 31 | n = 65 |
| Overall | | | |
| Initial measure | 0.772 (0.780) [0.032] | 0.758 (0.760) [0.036] | |
| Revised measure | 0.771 (0.775) [0.034] | 0.764 (0.760) [0.036] | |
| Change score | −0.001 (0.000) [0.032] | +0.005 (0.000) [0.035] | |
| | n = 72 | n = 68 | |
Panel B: Results of analysis of variance of earnings per share change scores

| Source | df | SS | MS | F | p-value |
|---|---|---|---|---|---|
| Corrected model | 3 | 0.038 | 0.013 | 14.860 | <0.001 |
| Management forecast consistency | 1 | 0.001 | 0.001 | 1.321 | 0.252 |
| Analyst forecast confirmation | 1 | 0.036 | 0.036 | 41.525 | <0.001 |
| Management forecast consistency × analyst forecast confirmation | 1 | 0.001 | 0.001 | 1.706 | 0.194 |
| Error | 136 | 0.116 | 0.001 | | |
Panel C: Planned contrasts

| Contrast† | t-statistic | p-value* |
|---|---|---|
| Overall effect‡ | 2.570 | 0.006 |
| H1: Trend inconsistent vs. trend consistent for confirming analyst forecast | 2.135 | 0.019§ |
| H2: Trend inconsistent vs. trend consistent for disconfirming analyst forecast | −0.094 | 0.926§ |
- *p-values are one-sided for directional hypotheses. †The overall effect is tested using the absolute value of the earnings per share (EPS) change scores. H1 and H2 are tested using the EPS change scores. Nonparametric statistics are reported. ‡Contrast weights were assigned as follows: −3 for the confirming analyst forecast/trend consistent condition, −1 for the confirming analyst forecast/trend inconsistent condition and +2 for both disconfirming analyst forecast conditions. §Applying the Bonferroni adjustment to account for the two post hoc comparison tests results in a significance criterion of 0.025.
Earnings per share change scores in all conditions are significantly different from zero (all p < 0.01), indicating that participants revised their initial EPS estimates to incorporate the analyst forecast. In Panel B of Table 3, we present results from an ANOVA of participants’ EPS change scores, and Figure 3 provides a graphical summary of the results. Consistent with our primary results for credibility, the overall contrast presented in Panel C of Table 3 is significant (t = 2.570, p = 0.006). Simple effect tests show that the positive effect of a confirming analyst forecast on EPS estimates is greater for a trend inconsistent forecast than for a trend consistent forecast (means +0.023 vs. +0.011, p = 0.019). The negative effect of a disconfirming analyst forecast does not differ between management forecast consistency conditions (means −0.016 vs. −0.015, p = 0.926).

Figure 3. Earnings per share change scores.
Similar to Hirst et al. (2007) and Clor-Proell (2009), we examine whether investor perceptions of management forecast credibility mediate the effect of analyst forecast confirmation on investors’ revised EPS estimates. Three conditions must hold to demonstrate mediation: (i) a significant effect of analyst forecast confirmation on EPS estimates; (ii) a significant effect of analyst forecast confirmation on credibility judgments; and (iii) a significant effect of credibility judgments on EPS estimates when analyst forecast confirmation is included in the analysis (Baron and Kenny, 1986). Untabulated analyses indicate that analyst forecast confirmation significantly affects both revised EPS estimates (F = 42.326, p < 0.01) and revised credibility judgments (F = 96.666, p < 0.01), meeting the first two criteria. To complete the mediation test, we conduct an ANCOVA with revised EPS estimates as the dependent variable, analyst forecast confirmation as the independent variable and participants’ revised credibility judgments as a covariate. Participants’ revised credibility judgments are significant (F = 63.295, p < 0.01); however, analyst forecast confirmation is no longer significant (F = 1.085, p = 0.299). Thus, participants’ credibility judgments appear to fully mediate the effect of analyst forecast confirmation on EPS estimates.
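A minimal sketch of this three-step mediation logic, using simulated data and ordinary least squares regressions in place of the ANOVA/ANCOVA models (illustrative only; variable names are ours):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 140

# Simulated data: confirm = 1 if the analyst forecast confirms management's forecast;
# credibility is constructed to carry the effect of confirm on revised EPS estimates.
confirm = rng.integers(0, 2, n)
credibility = 3.8 + 2.8 * confirm + rng.normal(0, 1.5, n)
eps_revised = 0.72 + 0.01 * credibility + rng.normal(0, 0.02, n)
df = pd.DataFrame({"confirm": confirm, "credibility": credibility, "eps": eps_revised})

# Step 1: treatment -> outcome.  Step 2: treatment -> mediator.
print(smf.ols("eps ~ confirm", data=df).fit().pvalues["confirm"])
print(smf.ols("credibility ~ confirm", data=df).fit().pvalues["confirm"])

# Step 3 (ANCOVA analogue): outcome on treatment plus mediator.  Full mediation is
# suggested when the mediator remains significant and the treatment effect does not.
step3 = smf.ols("eps ~ confirm + credibility", data=df).fit()
print(step3.pvalues[["confirm", "credibility"]])
```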
Further examination of participants’ revised EPS estimates shows that consistent with Pinello (2008), participants in this study discounted information from analysts when determining their own EPS estimate. As previously mentioned, participants in the confirming analyst forecast/trend consistent condition were informed that both management and analysts forecasted the same EPS (0.81). However, these participants provided a mean final EPS estimate that was significantly less than that which was provided by management and analysts (mean = 0.787, t = −6.264, p < 0.01).
5. Conclusions
Management forecast credibility is an important topic because management forecasts are not useful to investors unless they are credible. As managers have incentives to voluntarily disclose good news, forecast credibility is of particular concern for good news earnings forecasts, and investors often require additional information to confirm good news forecasts in order to perceive them as credible (Hutton et al., 2003). Analyst forecasts provide useful information to market participants making investment-related decisions. Prior research (e.g. Jennings, 1987) suggests that analyst forecasts represent market expectations and, accordingly, provide information regarding the credibility of management earnings forecasts. However, research has not examined investors’ joint use of analyst forecasts and the prior earnings trend in determining the credibility of management’s forecasts (Mercer, 2004).
In this study, we provide experimental findings on the effects of analyst forecast confirmation and management forecast trend consistency on investor perceptions of management forecast credibility. Our results show that information from analysts can act as either a substitute for or a complement to information from management, depending on the situation. In the presence of a confirming analyst forecast, investors’ credibility judgments depend on whether management’s forecast is consistent with expectations (i.e. complements). However, in the presence of a disconfirming analyst forecast, investors rely primarily on information from analysts and do not consider characteristics of management’s forecast in their credibility judgments (i.e. substitutes).
Our results are important to both research and practice. Consistent with prior research showing that investors use analyst forecasts to evaluate actual firm performance (e.g. Bartov et al., 2002), our results suggest that investors also use analyst forecasts to evaluate firms’ earnings forecasts and, specifically, the credibility of those forecasts. Recent research suggests that analyst forecasts suffer from credibility concerns because of a perceived lack of independence between analysts and management and analysts’ incentives to please management (Kothari et al., 2009). Results from our study suggest that despite these concerns, investors still perceive analyst forecasts as useful information.
This study also provides information on the role that analyst forecasts play in investors’ judgment and decision-making. Archival research in accounting often uses analyst forecast revisions as a proxy for the credibility of management earnings forecasts and shows that the market reacts to analyst forecast revisions (e.g. Koch, 2005); yet little research to date has shown the situations and circumstances under which this relationship holds. Hirst et al. (2008) discuss the need within the management earnings forecast literature for interaction tests that specifically identify and examine potential moderating variables. Specifically, they state that such tests should help researchers understand the conditions in which specific effects may or may not hold. This study answers such calls by examining the interactive effects between characteristics of management and analyst forecasts. Our results show that the effect of a confirming analyst forecast on management forecast credibility depends on whether management’s forecast is consistent with the prior earnings trend and investor expectations. When investors receive a disconfirming analyst forecast, the analyst forecast appears to be a substitute for management’s forecast in their judgments of management forecast credibility.
Our results provide information to managers on how to improve the credibility of their earnings forecasts, specifically those forecasts that are inherently less credible (i.e. good news forecasts). Our results show that the credibility losses caused by an unexpected (i.e. trend inconsistent) management forecast can be eliminated by a confirming analyst forecast. Recent evidence suggests that managers use earnings guidance to steer analysts towards beatable earnings targets (Cotter et al., 2006). However, because we find that companies may be penalized if an analyst forecast is below their own forecast, management should be cautious in providing such guidance.
From a regulatory perspective, our results also provide information regarding investors’ use of analyst earnings forecasts. Recent research suggests that the relationship between management and analysts provides incentives for analysts to issue biased forecasts (Libby et al., 2008; McEwen et al., 2008). Although investors often adjust for perceived bias in analyst forecasts when making EPS predictions (Ackert et al., 1997; Pinello, 2008), our results suggest that investors rely on analyst forecasts (often more so than on management’s predictions) to determine management forecast credibility. Given the economic consequences of forecast credibility (e.g. Jennings, 1987; Hirst et al., 2007) and evidence of the negative consequences of management and analyst relationships (Libby et al., 2008), regulators should be interested in the role that analyst forecasts play in investor perceptions of the credibility of good news management earnings forecasts.
As with all studies, this study is not without limitations. First, we use MBA students as surrogates for investors. Although our participants report having investing experience, we do not know whether our results would generalize to different populations of investors. Second, we limit the amount of information participants received, which makes the experimental setting less complex but potentially reduces the generalizability of this study. Future research could examine the effect of other information (e.g. audited financial statements, MD&A) on investors’ use of analyst forecasts.
Our findings suggest several areas for future research. Although findings from Hodge et al. (2006) suggest that good reporting reputation may mitigate concerns about management forecast credibility, we do not know to what extent management’s reporting reputation (controlled for in our experimental design) influenced this result. Future research could examine potential interactive effects between management forecast consistency and reporting reputation and other characteristics of management earnings forecasts (e.g. timeliness, accuracy). In addition, our results suggest that in some circumstances, an analyst forecast is a substitute for management’s forecast. Future research could examine other information available to investors and how investors use this information relative to analyst forecasts and management forecasts when making investment-related judgments.