A social interaction model with an extreme order statistic
Summary
In this paper, we introduce a social interaction econometric model with an extreme order statistic to model peer effects. We show that the model is a well-defined system of equations and that it is a static game with complete information. The social interaction model can include exogenous regressors and group effects. Instrumental variables estimators are proposed for the general model that includes exogenous regressors. We also consider distribution-free methods that use recurrence relations to generate moment conditions for estimation. For a model without exogenous regressors, the maximum likelihood approach is computationally feasible.
1. INTRODUCTION
There is a growing body of literature in economics and other social sciences that addresses the influence of social peers; see Durlauf (2004) for a recent survey. Conventional social interaction models, including the spatial autoregressive model and the linear-in-means model, assume that an individual's outcome depends linearly on the mean of the outcomes of the individual's peers; see, e.g. Anselin (1988), Manski (1993), Moffitt (2001), Lee (2007) and Bramoulle et al. (2009). For instance, a student's test score would be affected linearly by the mean test score of their classmates. In certain situations, we believe that a better specified model might involve the full distribution or other distributional characteristics, such as order statistics, rather than the mean (e.g. Ioannides and Soetevent, 2007).
In this paper, we introduce a social interaction econometric model with an extreme order statistic to allow non-linearity in modelling endogenous peer effects.1 Let be a finite set of players in a social network (group) and let
be the set of players, except the ith player, who are i's neighbours. For example, a simple social interaction model with the maximum statistic for the ith player is specified as
, where
denotes
and
is a random element. The social interaction model with the minimum statistic can be defined in a similar way.
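To fix ideas, the following sketch simulates one group and computes the equilibrium of the simple maximum model y_i = λ · max_{j ≠ i} y_j + ε_i by iterating the best-response map. It is a minimal illustration only: the group size, the value of λ and the normal disturbances are assumptions chosen for the example, and the solver is generic fixed-point iteration rather than the explicit solution given later in Theorem 3.1.

```python
import numpy as np

def best_response(y, lam, eps):
    """One round of simultaneous best responses: each player reacts to
    the current maximum outcome among the other players."""
    m = len(y)
    y_new = np.empty(m)
    for i in range(m):
        peers = np.delete(y, i)
        y_new[i] = lam * peers.max() + eps[i]
    return y_new

def equilibrium(lam, eps, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration on the best-response map; with |lam| < 1 the
    map is a sup-norm contraction, so the iteration converges to the
    unique pure-strategy Nash equilibrium."""
    y = eps.copy()                      # start from the no-interaction outcome
    for _ in range(max_iter):
        y_new = best_response(y, lam, eps)
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    raise RuntimeError("no convergence; check that |lam| < 1")

rng = np.random.default_rng(0)
eps = rng.normal(size=5)                # hypothetical disturbances for one group of 5
lam = 0.4                               # assumed interaction coefficient, |lam| < 1
y_star = equilibrium(lam, eps)
# each player's outcome satisfies the reaction function exactly at the equilibrium
print(np.allclose(y_star, best_response(y_star, lam, eps)))  # True
```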
Empirical studies have found evidence regarding peer effects on educational outcomes; see, e.g. Hanushek (1992), Hanushek et al. (2003) and Hoxby (2000). It is plausible that peers affect educational outcomes by aiding learning through questions and answers or by hindering learning through disruptive behaviour. Lazear (2001) has suggested that disruptive students could interfere with the educational outcomes of their peers. For instance, a student who has bad grades in class might hinder learning by asking poor questions. This disruptive behaviour might have large negative effects on the academic achievements of this student's peers, and thus might generate strong endogenous peer effects. Figlio (2007) has found empirical evidence of the effects of disruptive students on the performance of their peers, which indicates that students suffer academically from the presence of classroom disruption. Thus, our social interaction model with an extreme order statistic might be useful to explore such ‘bad apple’ effects in some empirical studies.
In many real-world situations, rewards are based on relative performance rather than absolute performance, and competition in terms of extreme outcomes may occur. Holmstrom (1982) has suggested that a relative performance scheme provides a better evaluation than an absolute performance scheme in the presence of costly monitoring of the agent's effort. Lazear and Rosen (1981) have suggested that there is a convex relation between pay and performance level in tournament schemes.2 Frank and Cook (1995) have attributed much of this phenomenon to winner-take-all markets, in which participants compete fiercely to be the best. Examples of such schemes include companies offering bonuses to the ‘salesperson of the year’, universities rewarding researchers for writing the ‘best paper’ and athletes being rewarded for having the best Olympic performances. Main et al. (1993) have conducted a detailed survey of executive compensation and have found some support for a relative performance scheme. Ehrenberg and Bognanno (1990a,b) have found that the performances of professional golf players are influenced by the level and structure of prizes in major golf tournaments.
In contrast to the situation where individuals are competitors in the labour market, individuals can benefit from their peers in a social group. The benefit might take the form of the sharing of know-how between individuals. This is an efficient way for individuals to obtain valuable experience from their peers, especially from those who are most capable.3 Therefore, individuals might react to the performance of their best peers in a strategic model of a social network.
The paper is organized as follows. In Section 2, we provide some theoretical justifications for the social interaction model with the minimum or maximum statistic on economic grounds. In Section 3, we introduce a general model specification with an extreme order statistic. We show that the social interaction model with an extreme order statistic is a well-defined system of equations. The solution to the system exists and is unique. Our model has the essential features of a static game with complete information for a finite number of players. In particular, this game is asymmetric and has a unique pure-strategy Nash equilibrium. In Section 4, we provide a simple instrumental variables (IV) approach for a general model where exogenous regressors are relevant. In Section 5, we consider the generalized method of moments (GMM), which uses IV moments and recurrence relations for moments of order statistics. Our asymptotic analysis of the estimation methods (IV and GMM) is based on a sample consisting of a large number of groups where group sizes remain fixed and bounded (as the number of groups tends to infinity). In Section 6, we briefly introduce an extended social interaction model with mixed mean and maximum. In Section 7, we provide some Monte Carlo results and in Section 8, we provide an empirical example. To illustrate the practical use of this model, we apply it to the National Collegiate Athletic Association (NCAA) men's basketball data and we obtain some interesting results. In Section 9, we draw our conclusions. In Appendix A, we list some useful notation and some moments of ordered normal random variables. In Appendix B, we give detailed proofs of the theorems provided in Sections 3, 4 and 6. In Appendix C, we provide a brief description of the maximum likelihood estimation (MLE) of simple model specifications and some Monte Carlo results. In Appendix D, we give a useful lemma and some technical details about the GMM estimation with recurrence moments discussed in Section 5.
2. THEORETICAL ECONOMIC CONSIDERATIONS
Our model specification can be derived from a simultaneous-move game with perfect information.4 The sample can be viewed as repetitions of a game played among individuals with possibly varying finite numbers of players. We give three theoretical examples to illustrate the model specification within the game-theoretic framework. These theoretical models are for illustrative purposes and do not exhaust the possible economic applications of the proposed econometric model.
2.1. A synergistic relationship
Consider a game with m players involved in a synergistic relationship (Osborne, 2004). It is a static game with complete information, in which each player knows the pay-offs and strategies available to other players. Each player's set of actions is the set of performance levels. Denote by a performance profile chosen by players and by
, which is a subvector of p with dimension
, the performance profile of all players except the ith player in the game. Player i's preference is represented by the pay-off function
, which depends on p in the following way. First,
and
capture the concave relationship between pay-offs and performance levels. Secondly,
,
, captures the synergistic relationship between player i and player i's peer j.
















As an alternative specification, it is possible to justify the game in terms of choosing effort instead of performance level directly, as long as an individual responds to the maximum performance level of their peers. Suppose that an individual chooses their effort to respond to the best performance of their peers. Then, we have . Suppose that the performance of individual i is the sum of their effort and unobserved disturbance
(i.e.
). Then, the model with the measurement relation becomes
, which is similar to 2.3, where
is the overall disturbance.
2.2. A tournament scheme
























2.3. A game that combines the tournament scheme and the synergistic relationship




Note that the coefficients of in (2.3) and (2.8) are positive and negative, respectively. We can interpret the former as the social effect and the latter as the competition effect. The net pay-off function (2.11) combines the tournament scheme and the synergistic relation into a single game. In this situation, the coefficient of
in (2.12) would represent the difference between the social effect and the competition effect, so that it can take either a negative or a positive value. Its sign indicates the domination of one effect over the other. The two effects cannot be separately identified from (2.12) alone. They would be separately identified if the cost or pay-off of a player could be observed. In the next section, we shall see that the best response function (2.3), (2.8) or (2.12) has a unique Nash equilibrium under some general conditions.
3. THE SOCIAL INTERACTION MODEL WITH MAXIMUM AND REGRESSORS





Here, m is the total number of players in a game, is a vector of observed individual characteristics and
is an individual disturbance, which is known to the agent but unobservable to econometricians. Thus, the reaction functions 2.3, 2.8 and 2.12 can all be included in this framework. For example, λ will be
for 2.12, and
will become
in this general formulation. Because
and possibly
vary across individuals, 3.1 can be interpreted as the reaction function for an asymmetric game with perfect information. With given individual characteristics
and
, 3.1 can have a unique Nash equilibrium when
. The following theorem summarizes the solution of 3.1 as the unique Nash equilibrium.
Theorem 3.1. Denote for simplicity. The system 3.1 with
but
is equivalent to the following system with order statistics:

Here, and
are the corresponding order statistics of
and
in ascending order. The system 3.2 has the unique solution:

Analogously, for the social interaction model with minimum, , the solution exists and is unique as
and
for
given
but
. We note that the existence and uniqueness of the solution are valid without any distributional assumption on
.
We see that, from the arguments of the proof for Theorem 3.1, as long as but
, the order statistics
have the same ascending order as
. In this case, the solution (Nash equilibrium) of the system exists and is unique. The restriction
provides the stability of the system. However, the system is unstable or divergent when
.
It is apparent that the system has many possible solutions when λ is 1 or −1. This is also the case with . For any
, consider the quantities defined by
, and
with
but
. The quantities of these elements have the order
. They also imply
and
. When
,
; hence, these
for
satisfy the structure
and form a solution set. For
, because i can be any individual of
, there are multiple solutions for the system.
For certain estimation methods, such as the method of maximum likelihood (ML), a unique solution of the system is essential, because it requires a well-defined mapping of the disturbance vector to the sample vector
conditional on observed exogenous variables in order to set up the likelihood function. However, for certain estimation methods, such as IV or the two-stage least-squares (2SLS) estimation, even though the system might have multiple solutions, asymptotic analysis of such estimation methods could still be valid as long as proper instrumental variables are available for the endogenous explanatory variables of the structural equation and the rank condition is satisfied, under the scenario that the number of groups R tends to infinity but the group sizes remain finite and bounded.6



4. ESTIMATION
Suppose that the economy has a well-defined group structure. Each player belongs to a social group. The players can interact with each other within a group, but not with members of another group. The sample consists of R groups, with m_r players in the rth group. The total sample size is n = Σ_{r=1}^R m_r. For empirical relevance, we allow group sizes to vary in the model, but group sizes remain finite and bounded as the number of groups increases.8
Assumption 4.1. Assume that for all r, where the lower bound
and the upper bound mU is finite.
For the estimation, we distinguish whether or not exogenous regressors are present in the model. Because the model with exogenous regressors might be more relevant for practical use, we focus on the estimation of this general case in the main text, whereas the estimation of the model without regressors is given in Appendix C.
4.1. The model with exogenous regressors
Here, and
denote
and
vectors of exogenous variables. If
and
are relevant regressors, the IV approach is possible and is distribution-free.
Assumption 4.2. are i.i.d. random variables with zero mean and a finite variance
, and are independent of
and
.
























Because the number of possible permutations is large in general, the ML approach will be computationally demanding unless the number of players in a game is very small. Furthermore, a parametric ML approach will not be distribution-free. For tractable estimation, we consider an IV approach.
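For intuition about how such an IV estimator can be computed, the sketch below simulates grouped data and runs a just-identified two-stage least-squares regression of the outcome on the endogenous peer maximum and an exogenous regressor, instrumenting the peer maximum with the peers' mean of the exogenous regressor. The data-generating process, the single-regressor design and the choice of instrument are assumptions made for this illustration (in the spirit of the IV estimators compared in Section 7), not the paper's exact estimating equations.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, beta = 0.3, 1.0                     # assumed true parameters for the example
R, m = 500, 5                            # assumed number of groups and group size

def solve_group(x, eps):
    """Equilibrium outcomes for one group by best-response iteration."""
    y = x * beta + eps
    for _ in range(200):
        y = np.array([lam * np.delete(y, i).max() + x[i] * beta + eps[i]
                      for i in range(len(y))])
    return y

rows = []
for r in range(R):
    x = rng.normal(size=m)
    eps = rng.normal(size=m)
    y = solve_group(x, eps)
    for i in range(m):
        peer_max_y = np.delete(y, i).max()        # endogenous peer statistic
        peer_mean_x = np.delete(x, i).mean()      # instrument: peers' mean of x
        rows.append((y[i], peer_max_y, x[i], peer_mean_x))
data = np.array(rows)
y_vec, peer_max, x_vec, z = data[:, 0], data[:, 1], data[:, 2], data[:, 3]

# 2SLS (just identified): regressors [peer_max, x, 1], instruments [peer_mean_x, x, 1]
X = np.column_stack([peer_max, x_vec, np.ones(len(y_vec))])
Z = np.column_stack([z, x_vec, np.ones(len(y_vec))])
theta = np.linalg.solve(Z.T @ X, Z.T @ y_vec)
print(theta)   # estimates of (lam, beta, intercept), close to (0.3, 1.0, 0) for large R
```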







































Assumption 4.3. The probability limits of and
exist and are non-singular. Furthermore,
and















4.2. The model with exogenous regressors and group effects








































Assumption 4.4. The probability limits of and
exist and are non-singular. Furthermore,
and











5. THE GMM APPROACH
If all the exogenous regressors were irrelevant, the above IV methods would no longer be applicable. In other words, if β0 and γ0 were zero, IV moments alone would not identify the model parameters. Furthermore, it is impossible to test the joint significance of all the exogenous regressors based on those IV estimators. To accommodate such a possible but unknown scenario, we suggest combining IV moments with some recurrence relations for moments of order statistics. The resulting estimation method is distribution-free. However, when regressors are indeed relevant, the validity of the recurrence relations requires that the regressors be i.i.d. across i and r, which is a strong assumption underlying this estimation strategy for the model.




Assumption 5.1. and
are i.i.d. for all members in a group as well as across groups.






























Assumption 5.2. converges in probability to a constant weighting matrix a0 with a rank no less than
, and
converges in probability to D0 that exists and is finite.
By the generalized Schwarz inequality, the optimal choice of a0 is . It follows that the optimal GMM estimator minimizes
, where
is a consistent estimate of V2 (see Appendix D for details on the proof of Theorem 5.1).
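The optimal weighting described above can be implemented with a standard two-step GMM routine: first minimize the GMM objective with an identity weight, then re-weight with the inverse of the estimated covariance of the sample moments. The sketch below illustrates this generic recipe on a toy moment function (estimating a mean and a variance); the actual moment conditions of the model, which combine IV moments with recurrence relations, are not reproduced here, so this is only a template.

```python
import numpy as np
from scipy.optimize import minimize

def moments(theta, data):
    """Toy moment function g(theta; data): E[data - mu] = 0 and
    E[(data - mu)^2 - sigma2] = 0, one row per observation."""
    mu, sigma2 = theta
    return np.column_stack([data - mu, (data - mu) ** 2 - sigma2])

def gmm_objective(theta, data, W):
    gbar = moments(theta, data).mean(axis=0)
    return gbar @ W @ gbar

def two_step_gmm(data, theta0):
    # step 1: identity weighting matrix
    step1 = minimize(gmm_objective, theta0, args=(data, np.eye(2)),
                     method="Nelder-Mead")
    # step 2: weight = inverse of the estimated covariance of the moments
    g = moments(step1.x, data)
    W_opt = np.linalg.inv(g.T @ g / len(data))
    step2 = minimize(gmm_objective, step1.x, args=(data, W_opt),
                     method="Nelder-Mead")
    return step2.x

rng = np.random.default_rng(3)
sample = rng.normal(loc=1.0, scale=2.0, size=5_000)
print(two_step_gmm(sample, theta0=np.array([0.0, 1.0])))   # approximately (1.0, 4.0)
```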
6. THE MODEL WITH MIXED MEAN AND MAXIMUM
















Assumption 6.1. Assume that for all r, where the lower bound
and the upper bound mU are finite. Furthermore, either
or there is a sufficient number of groups with
.
Assumption 6.2. The probability limits of and
exist and are non-singular. Furthermore,
and

7. MONTE CARLO RESULTS
We randomly draw the disturbances from the standardized χ2(5) distribution to investigate finite sample properties of distribution-free estimators under this non-normal distribution. The two regressors
and
are i.i.d. N(0, 1) for all r and i, and are independent of
. The group effect
is generated as
. The true parameters are
,
,
and
. We allow group sizes to vary from three to five. We design the same number of groups for each group size. The average group size is four by design. In the simulation, we experiment with different numbers of groups R from 60 to 1920. The number of Monte Carlo repetitions is 300.
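The design just described can be reproduced, at least in outline, with a short data-generating routine: draw standardized χ²(5) disturbances, two standard normal regressors and group sizes from three to five in equal proportions, and solve each group's equilibrium by best-response iteration. The sketch below is an approximation of that design; the parameter values and the model equation used here are placeholders, and the group effect of the second model is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, beta1, beta2 = 0.3, 1.0, 1.0          # placeholder parameter values
R = 60                                      # number of groups (60, 120, ..., 1920 in the paper)

def standardized_chi2_5(size):
    """chi-square(5) disturbances standardized to mean 0 and variance 1."""
    return (rng.chisquare(df=5, size=size) - 5.0) / np.sqrt(2.0 * 5.0)

def solve_group(xb, eps):
    """Equilibrium of y_i = lam * max_{j != i} y_j + xb_i + eps_i."""
    y = xb + eps
    for _ in range(300):
        y = np.array([lam * np.delete(y, i).max() + xb[i] + eps[i]
                      for i in range(len(y))])
    return y

groups = []
sizes = np.tile([3, 4, 5], R // 3 + 1)[:R]  # equal numbers of each size, average size 4
for r, m in enumerate(sizes):
    x1, x2 = rng.normal(size=m), rng.normal(size=m)
    eps = standardized_chi2_5(m)
    y = solve_group(beta1 * x1 + beta2 * x2, eps)
    groups.append(dict(group=r, y=y, x1=x1, x2=x2))
print(len(groups), groups[0]["y"])
```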
We try two different IV estimators. IV1 uses the group average of 4.9 as instruments, and IV2 uses
and
of 4.10 in the first model. Because
is invariant within the group, only IV2 is relevant to the second model. We also consider the GMM and the two-step procedure (TSP). The GMM uses IV1 and recurrence relation 5.3 for the first model, and uses IV2 and 5.8 for the second model. For computational simplicity, we use the identity matrix as a weighting matrix in a GMM objective function.11 The TSP uses the moments of 5.3 or 5.8 to estimate λ in the first step, and then regresses fitted residuals on exogenous variables to estimate their coefficients by the ordinary least-squares method in the second step.
Table 1 presents the bias (Bias) and standard deviation (SD) of IV1, IV2, GMM and TSP estimates of Model 1. The IV1 and IV2 estimates of λ are unbiased for all R. The GMM estimate of λ is biased upward when R is 60 or 120, and this bias decreases as R increases. The TSP estimate of λ has a large upward or downward bias when R is 60 or 120, but this bias decreases sharply when R is 240 or more. All the estimates of λ have smaller SDs as R increases. The IV1 estimate of λ has much smaller SDs than those of the other estimates. The IV2 estimate of λ has larger SDs than those of the GMM estimates for all R, and has smaller SDs than that of the TSP estimate when R is 240 or less. However, it has larger SDs than that of the TSP estimate when R is 480 or more. These estimates of β, γ, μ and σ have similar properties in bias and standard deviation. Table 2 presents finite sample results of IV2, GMM and TSP estimates of Model 2. The IV2 estimate of λ is unbiased for all R. The GMM estimate of λ is biased downward when R is 240 or less, and this bias decreases as R increases. The TSP estimate of λ has a large upward bias when R is 60 or 120, but this bias decreases sharply when R is 240 or more. The GMM estimate of λ has smaller SDs than those of the IV2 and TSP estimates. These estimates of β and σ have similar properties in bias and standard deviation.
Parameter | Statistic | R | IV1 | IV2 | GMM | TSP | R | IV1 | IV2 | GMM | TSP
---|---|---|---|---|---|---|---|---|---|---|---
λ | Bias | 60 | −0.0040 | 0.0737 | 0.1958 | 0.8023 | 480 | −0.0004 | 0.0026 | 0.0484 | −0.0421 |
SD | 0.0743 | 0.9290 | 0.2477 | 11.1507 | 0.0227 | 0.2368 | 0.1325 | 0.2097 | |||
β | Bias | −0.0065 | −0.0373 | −0.0856 | −0.2203 | 0.0005 | −0.0017 | −0.0217 | 0.0142 | ||
SD | 0.0687 | 0.2474 | 0.1403 | 3.2108 | 0.0235 | 0.0807 | 0.0650 | 0.0750 | |||
γ | Bias | 0.0048 | −0.1467 | −0.3810 | −1.7597 | 0.0012 | −0.0039 | −0.0981 | 0.0804 | ||
SD | 0.1669 | 1.8454 | 0.4833 | 24.1389 | 0.0519 | 0.4807 | 0.2612 | 0.4082 | |||
μ | Bias | −0.0013 | 0.0097 | −0.0185 | 0.0289 | 0.0014 | 0.0024 | −0.0030 | −0.0011 | ||
SD | 0.0659 | 0.1450 | 0.0978 | 1.4228 | 0.0236 | 0.0274 | 0.0250 | 0.0318 | |||
σ | Bias | 0.0075 | 0.3261 | 0.0727 | 2.1491 | 0.0012 | 0.0588 | 0.0117 | 0.0516 | ||
SD | 0.0701 | 1.0161 | 0.1516 | 15.5359 | 0.0247 | 0.1256 | 0.0567 | 0.1660 | |||
λ | Bias | 120 | 0.0013 | 0.0129 | 0.1451 | −0.2968 | 960 | −0.0013 | −0.0122 | 0.0273 | −0.0138 |
SD | 0.0464 | 0.5421 | 0.2495 | 3.3297 | 0.0181 | 0.1603 | 0.0890 | 0.1065 | |||
β | Bias | −0.0020 | −0.0051 | −0.0609 | 0.1065 | −0.0002 | 0.0033 | −0.0135 | 0.0037 | ||
SD | 0.0502 | 0.1934 | 0.1261 | 1.1990 | 0.0170 | 0.0548 | 0.0438 | 0.0402 | |||
γ | Bias | 0.0014 | −0.0296 | −0.2809 | 0.6019 | 0.0028 | 0.0239 | −0.0549 | 0.0272 | ||
SD | 0.1028 | 1.0825 | 0.4884 | 6.8572 | 0.0414 | 0.3201 | 0.1803 | 0.2147 | |||
μ | Bias | −0.0030 | −0.0027 | −0.0218 | 0.0182 | −0.0001 | 0.0003 | −0.0025 | −0.0003 | ||
SD | 0.0441 | 0.0812 | 0.0624 | 0.2489 | 0.0176 | 0.0188 | 0.0190 | 0.0187 | |||
σ | Bias | 0.0007 | 0.2169 | 0.0541 | 0.9507 | 0.0003 | 0.0313 | 0.0034 | 0.0155 | ||
SD | 0.0515 | 0.4537 | 0.1307 | 5.2488 | 0.0187 | 0.0692 | 0.0300 | 0.0535 | |||
λ | Bias | 240 | −0.0025 | −0.0043 | 0.0863 | −0.0016 | 1920 | −0.0014 | 0.0008 | 0.0085 | −0.0141 |
SD | 0.0349 | 0.4986 | 0.1721 | 1.7710 | 0.0124 | 0.1097 | 0.0638 | 0.0722 | |||
β | Bias | 0.0016 | −0.0021 | −0.0360 | 0.0045 | 0.0013 | 0.0003 | −0.0032 | 0.0054 | ||
SD | 0.0354 | 0.1546 | 0.0835 | 0.5087 | 0.0122 | 0.0376 | 0.0310 | 0.0266 | |||
γ | Bias | 0.0051 | 0.0085 | −0.1759 | 0.0102 | 0.0018 | −0.0029 | −0.0188 | 0.0262 | ||
SD | 0.0792 | 1.0067 | 0.3481 | 3.3791 | 0.0287 | 0.2190 | 0.1253 | 0.1424 | |||
μ | Bias | −0.0037 | −0.0048 | −0.0091 | −0.0056 | −0.0004 | −0.0001 | −0.0010 | −0.0006 | ||
SD | 0.0345 | 0.0557 | 0.0363 | 0.1201 | 0.0121 | 0.0128 | 0.0125 | 0.0125 | |||
σ | Bias | −0.0009 | 0.1600 | 0.0200 | 0.3584 | 0.0003 | 0.0137 | 0.0026 | 0.0094 | ||
SD | 0.0340 | 0.4625 | 0.0676 | 2.5571 | 0.0126 | 0.0369 | 0.0221 | 0.0270 |
Parameter | Statistic | R | IV2 | GMM | TSP | R | IV2 | GMM | TSP
---|---|---|---|---|---|---|---|---|---
λ | Bias | 60 | 0.0282 | −0.2355 | 4.2695 | 480 | 0.0068 | −0.0457 | 0.0188 |
SD | 0.9543 | 0.5486 | 71.3090 | 0.2629 | 0.2407 | 0.2746 | |||
β | Bias | −0.0073 | −0.0661 | 1.1213 | 0.0014 | −0.0122 | 0.0041 | ||
SD | 0.2071 | 0.1582 | 19.1270 | 0.0640 | 0.0646 | 0.0661 | |||
σ | Bias | 0.0403 | 0.0106 | 1.5919 | 0.0046 | −0.0038 | 0.0061 | ||
SD | 0.1949 | 0.0966 | 23.2660 | 0.0467 | 0.0461 | 0.0529 | |||
λ | Bias | 120 | 0.0279 | −0.1006 | 0.2327 | 960 | 0.0119 | −0.0073 | 0.0239 |
SD | 0.6435 | 0.4291 | 1.3350 | 0.1861 | 0.1848 | 0.1962 | |||
β | Bias | −0.0006 | −0.0308 | 0.0465 | 0.0021 | −0.0006 | 0.0051 | ||
SD | 0.1453 | 0.1235 | 0.2943 | 0.0443 | 0.0508 | 0.0484 | |||
σ | Bias | 0.0183 | −0.0067 | 0.0591 | 0.0030 | 0.0001 | 0.0048 | ||
SD | 0.1193 | 0.0793 | 0.3586 | 0.0355 | 0.0342 | 0.0364 | |||
λ | Bias | 240 | 0.0000 | −0.1326 | 0.0057 | 1920 | 0.0006 | −0.0141 | 0.0017 |
SD | 0.3751 | 0.3166 | 0.4717 | 0.1394 | 0.1320 | 0.1426 | |||
β | Bias | 0.0002 | −0.0324 | 0.0024 | 0.0014 | −0.0021 | 0.0018 | ||
SD | 0.0894 | 0.0867 | 0.1090 | 0.0330 | 0.0352 | 0.0356 | |||
σ | Bias | 0.0049 | −0.0156 | 0.0074 | 0.0008 | −0.0016 | 0.0008 | ||
SD | 0.0671 | 0.0560 | 0.0948 | 0.0242 | 0.0231 | 0.0248 |
If exogenous regressors are irrelevant to the model, the IV method is inconsistent. However, GMM and TSP, which use recurrence relations for moments of order statistics, remain valid. For illustration, we present finite sample results of the IV, GMM and TSP estimates of the models when β0 and γ0 are zero. The IV1 estimate of λ has significant upward or downward bias for all R in Table 3, and the IV2 estimate of λ has significant downward bias for all R in Table 4. In contrast, the GMM and TSP estimates of λ have smaller biases and these biases decrease as the number of groups increases.
Parameter | Statistic | R | IV1 | GMM | TSP | R | IV1 | GMM | TSP
---|---|---|---|---|---|---|---|---|---
λ | Bias | 60 | 1.1676 | 0.0801 | −0.1009 | 480 | −1.67 × 102 | 0.0190 | −0.0041 |
SD | 11.0140 | 0.2450 | 0.7187 | 2.82 × 103 | 0.0819 | 0.0938 | |||
β | Bias | −0.0004 | −0.0026 | −0.0055 | −2.31 × 10−2 | 0.0010 | 0.0008 | ||
SD | 0.1408 | 0.0826 | 0.0733 | 3.03 × 10−1 | 0.0232 | 0.0225 | |||
γ | Bias | −0.0860 | −0.0021 | −0.0018 | 6.20 × 100 | 0.0004 | 0.0000 | ||
SD | 1.0608 | 0.0797 | 0.0845 | 1.06 × 102 | 0.0224 | 0.0223 | |||
μ | Bias | −0.7194 | −0.0583 | 0.0660 | 1.13 × 102 | −0.0129 | 0.0037 | ||
SD | 6.5073 | 0.1760 | 0.4654 | 1.92 × 103 | 0.0602 | 0.0664 | |||
σ | Bias | 1.5278 | 0.0073 | 0.0917 | 1.38 × 102 | −0.0011 | 0.0040 | ||
SD | 8.2338 | 0.0877 | 0.3772 | 2.33 × 103 | 0.0314 | 0.0330 | |||
λ | Bias | 120 | 17.5350 | 0.0367 | −0.0860 | 960 | −0.8384 | 0.0071 | −0.0056 |
SD | 225.1000 | 0.1828 | 0.3066 | 65.9200 | 0.0571 | 0.0665 | |||
β | Bias | −0.2452 | 0.0047 | −0.0007 | 0.0037 | −0.0007 | −0.0005 | ||
SD | 3.5708 | 0.0774 | 0.0492 | 0.0868 | 0.0157 | 0.0158 | |||
γ | Bias | 0.3809 | 0.0099 | 0.0035 | 0.2426 | 0.0001 | 0.0002 | ||
SD | 9.5496 | 0.0782 | 0.0508 | 3.1064 | 0.0169 | 0.0167 | |||
μ | Bias | −12.7630 | −0.0318 | 0.0523 | 0.5270 | −0.0054 | 0.0038 | ||
SD | 165.9500 | 0.1275 | 0.2043 | 43.4560 | 0.0424 | 0.0497 | |||
σ | Bias | 16.3410 | 0.0053 | 0.0374 | 6.3651 | −0.0007 | 0.0022 | ||
SD | 178.1700 | 0.0731 | 0.1335 | 55.7460 | 0.0218 | 0.0241 | |||
λ | Bias | 240 | 0.6479 | 0.0365 | −0.0171 | 1920 | 0.7953 | 0.0068 | 0.0007 |
SD | 11.7950 | 0.1213 | 0.1576 | 6.8995 | 0.0402 | 0.0472 | |||
β | Bias | 0.0113 | 0.0020 | 0.0013 | 0.0017 | 0.0009 | 0.0009 | ||
SD | 0.0927 | 0.0330 | 0.0332 | 0.0192 | 0.0114 | 0.0113 | |||
γ | Bias | −0.0418 | 0.0004 | 0.0002 | −0.0002 | −0.0014 | −0.0013 | ||
SD | 0.4317 | 0.0348 | 0.0369 | 0.1063 | 0.0115 | 0.0113 | |||
μ | Bias | −0.4100 | −0.0298 | 0.0084 | −0.5260 | −0.0052 | −0.0008 | ||
SD | 8.2165 | 0.0865 | 0.1098 | 4.5290 | 0.0295 | 0.0345 | |||
σ | Bias | 1.6218 | −0.0052 | 0.0079 | 1.3726 | −0.0011 | 0.0003 | ||
SD | 9.5404 | 0.0398 | 0.0536 | 5.4796 | 0.0146 | 0.0158 |
Parameter | Statistic | R | IV2 | GMM | TSP | R | IV2 | GMM | TSP
---|---|---|---|---|---|---|---|---|---
λ | Bias | 60 | −2.7994 | −0.2025 | 0.2221 | 480 | −2.3605 | −0.0198 | 0.0133 |
SD | 11.1320 | 0.5884 | 1.1072 | 5.2979 | 0.2540 | 0.2783 | |||
β | Bias | 0.0024 | −0.0033 | −0.0050 | 0.0014 | 0.0000 | 0.0007 | ||
SD | 0.1454 | 0.0972 | 0.0791 | 0.0371 | 0.0317 | 0.0256 | |||
σ | Bias | 0.3681 | −0.0106 | 0.0422 | 0.0732 | −0.0010 | 0.0026 | ||
SD | 1.6721 | 0.0926 | 0.1724 | 0.6112 | 0.0428 | 0.0444 | |||
λ | Bias | 120 | −2.0023 | −0.0949 | 0.1380 | 960 | −2.6978 | −0.0080 | 0.0126 |
SD | 6.6691 | 0.4514 | 0.6714 | 4.7140 | 0.1734 | 0.1889 | |||
β | Bias | 0.0001 | −0.0052 | −0.0026 | −0.0019 | 0.0017 | 0.0002 | ||
SD | 0.0728 | 0.0664 | 0.0554 | 0.0235 | 0.0237 | 0.0189 | |||
σ | Bias | 0.1047 | −0.0092 | 0.0189 | 0.0297 | −0.0007 | 0.0016 | ||
SD | 0.9480 | 0.0743 | 0.1002 | 0.5201 | 0.0289 | 0.0305 | |||
λ | Bias | 240 | −2.5636 | −0.0647 | 0.0074 | 1920 | −1.4783 | −0.0102 | −0.0022 |
SD | 6.2293 | 0.3319 | 0.3822 | 10.8100 | 0.1250 | 0.1332 | |||
β | Bias | 0.0012 | 0.0026 | 0.0023 | −0.0011 | 0.0017 | 0.0016 | ||
SD | 0.0597 | 0.0467 | 0.0391 | 0.0278 | 0.0165 | 0.0135 | |||
σ | Bias | 0.0767 | −0.0070 | 0.0007 | 0.2684 | −0.0011 | −0.0002 | ||
SD | 0.8438 | 0.0521 | 0.0573 | 1.7928 | 0.0199 | 0.0208 |
8. AN EMPIRICAL EXAMPLE



There are over 10,000 men's basketball players competing in three divisions at about 1,000 colleges and universities within the NCAA. Division I (D-I) has the most prestigious basketball programmes, which have the greatest financial resources and attract the most athletically talented students. We collect men's basketball player statistics of D-I teams during the period 2005–2010 from web sites including ncaa.org, espn.com and statsheet.com. The total number of teams (groups) is 1,673 in the five seasons.12 The total number of observations is 22,122. The average team size is 13, with a minimum of eight and a maximum of 20.



- the dependent variable in Table 6 is the points per game of a player;
- the dependent variable in Table 7 is the sum of points and assists per game;
- the dependent variable in Table 8 is the sum of points, assists and rebounds per game;
- the dependent variable in Table 9 is the sum of points, assists, rebounds, and steals per game;
- the dependent variable in Table 10 is the sum of points, assists, rebounds, steals and blocks per game.
Variable | Obs. | Mean | SD | Min | Max |
---|---|---|---|---|---|
Points per game | 22,122 | 5.71 | 4.90 | 0.00 | 28.65 |
Assists per game | 22,122 | 1.10 | 1.17 | 0.00 | 11.69 |
Rebounds per game | 22,122 | 2.65 | 2.06 | 0.00 | 14.80 |
Steals per game | 22,122 | 0.57 | 0.50 | 0.00 | 4.77 |
Blocks per game | 22,122 | 0.28 | 0.43 | 0.00 | 6.53 |
Height (inches) | 22,122 | 76.69 | 3.58 | 63.00 | 91.00 |
Weight (pounds) | 22,122 | 203.76 | 26.27 | 80.00 | 380.00 |
Minutes per game | 22,122 | 17.02 | 10.51 | 0.30 | 40.00 |
Freshman | 22,122 | 0.32 | 0.47 | 0.00 | 1.00 |
Sophomore | 22,122 | 0.26 | 0.44 | 0.00 | 1.00 |
Junior | 22,122 | 0.23 | 0.42 | 0.00 | 1.00 |
Senior | 22,122 | 0.19 | 0.39 | 0.00 | 1.00 |
Regressor | IV (1) | IV (2) | IV (3) | IV (4) | GMM (5)
---|---|---|---|---|---
Best performance of peers | −1.359** | −1.253** | −1.355** | ||
(0.218) | (0.220) | (0.217) | |||
Average performance of peers | −2.173** | −0.884 | |||
(0.477) | (0.556) | ||||
Height | 0.012** | 0.015** | 0.010** | 0.015** | 0.015** |
(0.007) | (0.006) | (0.006) | (0.005) | (0.006) | |
Weight | 0.047** | 0.045** | 0.038** | 0.042** | 0.045** |
(0.001) | (0.008) | (0.008) | (0.007) | (0.008) | |
Minutes per game | 4.379** | 3.729** | 3.594** | 3.460** | 3.731** |
(0.038) | (0.109) | (0.199) | (0.186) | (0.109) |
Note
- Number of observations is 22,122. Individual coefficients are statistically significant at the
or
levels. Standard errors are given in parentheses under coefficients.
Regressor | IV (1) | IV (2) | IV (3) | IV (4) | GMM (5)
---|---|---|---|---|---
Best performance of peers | −1.057** | −0.948** | −1.056** | ||
(0.139) | (0.1318) | (0.139) | |||
Average performance of peers | −2.173** | −1.241** | |||
(0.477) | (0.413) | ||||
Height | −0.094** | −0.073** | −0.078** | −0.065** | −0.073** |
(0.007) | (0.006) | (0.007) | (0.006) | (0.006) | |
Weight | 0.027** | 0.032** | 0.022** | 0.029** | 0.032** |
(0.010) | (0.008) | (0.008) | (0.007) | (0.008) | |
Minutes per game | 5.176** | 4.611** | 4.355** | 4.141** | 4.612** |
(0.038) | (0.081) | (0.199) | (0.170) | (0.080) |
Note
- Number of observations is 22,122. Individual coefficients are statistically significant at the
or
levels. Standard errors are given in parentheses under coefficients.
Regressor | IV (1) | IV (2) | IV (3) | IV (4) | GMM (5)
---|---|---|---|---|---
Best performance of peers | −1.174** | −1.092** | −1.169** | ||
(0.205) | (0.212) | (0.204) | |||
Average performance of peers | −1.983** | −0.766 | |||
(0.477) | (0.458) | ||||
Height | 0.053** | 0.047** | 0.046** | 0.045** | 0.047** |
(0.008) | (0.007) | (0.007) | (0.006) | (0.007) | |
Weight | 0.161** | 0.138** | 0.135** | 0.130** | 0.139** |
(0.011) | (0.010) | (0.011) | (0.010) | (0.010) | |
Minutes per game | 6.714** | 6.017** | 5.619** | 5.642** | 6.020** |
(0.044) | (0.127) | (0.236) | (0.218) | (0.127) |
Note
- Number of observations is 22,122. Individual coefficients are statistically significant at the
or
levels. Standard errors are given in parentheses under coefficients.
Regressor | IV (1) | IV (2) | IV (3) | IV (4) | GMM (5)
---|---|---|---|---|---
Best performance of peers | −1.131** | −1.067** | −1.126** | ||
(0.204) | (0.214) | (0.204) | |||
Average performance of peers | −1.777** | −0.579 | |||
(0.425) | (0.462) | ||||
Height | 0.031** | 0.030** | 0.028** | 0.029** | 0.030** |
(0.008) | (0.007) | (0.007) | (0.007) | (0.007) | |
Weight | 0.152** | 0.132** | 0.129** | 0.126** | 0.133** |
(0.011) | (0.010) | (0.011) | (0.010) | (0.010) | |
Minutes per game | 7.086** | 6.391** | 6.045** | 6.093** | 6.393** |
(0.046) | (0.131) | (0.252) | (0.232) | (0.131) |
Note
- Number of observations is 22,122. Individual coefficients are statistically significant at the
or
levels. Standard errors are given in parentheses under coefficients.
Regressor | IV (1) | IV (2) | IV (3) | IV (4) | GMM (5)
---|---|---|---|---|---
Best performance of peers | −1.121** | −1.051** | −1.118** | ||
(0.205) | (0.211) | (0.204) | |||
Average performance of peers | −1.814** | −0.698 | |||
(0.426) | (0.456) | ||||
Height | 0.090** | 0.080** | 0.078** | 0.076** | 0.081** |
(0.009) | (0.007) | (0.008) | (0.007) | (0.007) | |
Weight | 0.155** | 0.133** | 0.132** | 0.125** | 0.133** |
(0.012) | (0.010) | (0.011) | (0.010) | (0.010) | |
Minutes per game | 7.233** | 6.534** | 6.154** | 6.673** | 6.536** |
(0.047) | (0.134) | (0.258) | (0.119) | (0.133) |
Note
- Number of observations is 22,122. Individual coefficients are statistically significant at the
or
levels. Standard errors are given in parentheses under coefficients.
The explanatory variables for each of the Tables 6–10 are the same, and include height, weight and minutes per game. Height gives a major advantage in basketball, because taller players generally achieve more rebounds, block more shots and make more dunks than shorter players. Weight might also give a college player some advantages, because strength is an important factor in many aspects of the game. Both height and weight are expected to have a positive effect on a player's performance. Minutes per game is defined as the total minutes played in one season divided by the number of games played, and it is expected to have a positive coefficient. For reasons of scale, we divide weight and minutes per game by ten in the actual regressions.
Tables 6–10 report the regression estimates under the fixed-effects specification. Column (1) presents the within-group IV estimates of 6.1, for which there are no endogenous interaction regressors. We find that the coefficients of these regressors (i.e. the elements of the vector discussed above) are statistically significant. Physical characteristics (with an exception in Table 7) have positive estimates, which indicate that taller and stronger athletes play better basketball. Table 7 uses the sum of points and assists as the performance measure, and finds that height has a negative effect. The point guard is one of the most important positions on a basketball team, and players at this position are shorter and are awarded more assists, which might explain the unexpected sign of height. Minutes per game is potentially endogenous because coaches generally give better players more time on the court. Hence, we use the years of college education (freshman, sophomore and junior) as the instruments for this endogenous regressor, and the IV estimates are statistically significant and positive.
Because the interaction regressors are endogenous, we use the instruments proposed in Section 4.2 for the best performance of peers and the instruments proposed by Lee (2007) for the average performance of peers. Columns (2) and (3) show that the interaction regressor has a negative and statistically significant effect in the respective model.14 Column (4) shows that both interaction regressors remain negative in the generalized model 6.1. The best performance of peers is statistically significant, whereas the average performance of peers is statistically insignificant, except when the sum of points and assists is used as the performance measure. Column (5) presents the GMM estimates of the model 8.1, which are similar to the IV estimates.15 We conclude that a college basketball player reacts negatively and statistically significantly to the best performance of his teammates, which suggests that there are important competition effects among basketball players within an NCAA D-I team.
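For replication-style exercises, the two interaction regressors used in these tables (the best and the average performance of a player's teammates) are leave-one-out statistics within each team. The sketch below shows one way to construct them with pandas from a long player-level table; the column names and toy data are hypothetical and not taken from the original data files.

```python
import pandas as pd

# hypothetical player-level data: one row per player-season
df = pd.DataFrame({
    "team":   ["A", "A", "A", "B", "B", "B"],
    "player": ["a1", "a2", "a3", "b1", "b2", "b3"],
    "points": [12.0, 8.0, 5.0, 20.0, 3.0, 7.0],
})

def leave_one_out_stats(group):
    """Best and average performance of each player's teammates."""
    pts = group["points"].to_numpy()
    n, tot = len(pts), pts.sum()
    peer_mean = (tot - pts) / (n - 1)
    # max over teammates: drop the player's own value before taking the maximum
    peer_max = [max(v for j, v in enumerate(pts) if j != i) for i in range(n)]
    return group.assign(peer_mean=peer_mean, peer_max=peer_max)

df = df.groupby("team", group_keys=False).apply(leave_one_out_stats)
print(df)
```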
9. CONCLUSION
In this paper, we introduce a social interaction econometric model with an extreme order statistic, which differs from conventional social interaction models in that an individual's outcome is affected by the extreme outcome of peers. Our model represents one of the possible extensions to allow non-linearity in modelling endogenous social peer effects.
We show that the social interaction model with an extreme order statistic is a well-defined system of equations. The solution to the system exists and can be unique. We provide a simple IV method for model estimation if exogenous regressors are relevant. A distribution-free method that uses recurrence relations is applicable, even when regressors might be irrelevant. It should be noted that ML estimation would be infeasible for the model including exogenous regressors (unless the number of players is very small), even if one were willing to make a distributional assumption of the disturbances, because the likelihood function of order statistics for independent but not identically distributed variates is rather complicated.
We apply the social interaction models with the peers' maximum, the peers' average and a mixed specification to the NCAA D-I men's basketball data. We find that a college player reacts negatively, and statistically significantly, to the best performance of his peers, which suggests the existence of competition effects in college basketball. For future research, we can apply this model to explore the possible ‘bad apple’ effect in an empirical study of education. We can also consider a model with reaction to certain quantiles of the outcome distribution.
ACKNOWLEDGEMENTS
We thank two referees and the co-editor, Jaap Abbring, for their valuable comments and suggestions, which have improved the presentation of this paper.














APPENDIX A: NOTATION LIST AND MOMENTS OF ORDERED NORMAL VARIABLES
Table A.1 summarizes some frequently used notation in the text.

Notation | Description |
---|---|
R | total number of groups |
 | member size of the rth group |
n | total number of observations |
 | empirical mean of group size |
 | orthogonal projector to the linear space spanned by the vector of ones |
 | covariance transformation matrix |
 | column vector of disturbances in the rth group |
 | column vector of dependent variables in the rth group |
 | column vector of order statistics of the disturbances in the rth group |
 | column vector of order statistics of the dependent variables in the rth group |
 | ith-order statistic of ε in a group of size m |
 | ith-order statistic of y in a group of size m |
 | spacing statistic between consecutive order statistics of ε |
 | spacing statistic between consecutive order statistics of y |
 | difference between the largest and the ith-order statistics of ε |
 | difference between the largest and the ith-order statistics of y |
The following properties of normal order statistics are useful.
Lemma A.1. Suppose that are i.i.d. N(0, 1). Let the corresponding
be the order statistics of standard normal variables in ascending order. Then, (a)
, and (b)
, where
is the sample mean of
.
The first result is a special case of the recurrence relation for the standard normal distribution (Balakrishnan and Sultan, 1998, p. 174).
Lemma A.2. Let denote the order statistics, in ascending order, of a random sample of size m of standard normal variables. Then, (a)
; (b)
for
; (c)
for
; (d)
; (e)
; (f)
for
; (g)
; (h)
.
The property that is independent of
under normality is useful for the preceding properties. In addition, the technique of integration by parts can be used to derive some of the above results. Detailed proofs of these results are available upon request.
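Because the identities in Lemmas A.1 and A.2 involve moments of standard normal order statistics, they are straightforward to check numerically. The sketch below tabulates simulated first and second moments of the order statistics for a small group size; the specific identities asserted in the lemmas are not restated here, so the table is only an aid for verifying them.

```python
import numpy as np

def simulate_order_stat_moments(m, reps=200_000, seed=5):
    """Monte Carlo estimates of E[Z_(i:m)] and E[Z_(i:m)^2] for
    standard normal order statistics in ascending order."""
    rng = np.random.default_rng(seed)
    z = np.sort(rng.standard_normal((reps, m)), axis=1)
    return z.mean(axis=0), (z ** 2).mean(axis=0)

means, second_moments = simulate_order_stat_moments(m=4)
for i, (mu, m2) in enumerate(zip(means, second_moments), start=1):
    print(f"E[Z_({i}:4)] ~ {mu: .4f},  E[Z_({i}:4)^2] ~ {m2: .4f}")
# sanity checks: the means sum to 0 and the second moments sum to m
print(round(means.sum(), 3), round(second_moments.sum(), 3))
```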
APPENDIX B: PROOFS OF RESULTS IN THE MAIN TEXT
Proof of Theorem 3.1. Given , without loss of generality, assume that the values of ξ are distinct.16 They can be permuted into the order statistics in ascending order
, i.e.
. Because
, define

When (and
), the solution vector
in B.1 satisfying the system 3.1 can be seen as follows. First, the values of
also have an ascending order like those of
, i.e.,
if
. This is so, because
by assumption,











It remains to show that 3.1 has a unique solution for given . We can show that the inverse mapping from y to ξ has an order-preserving property. If
in 3.1 with
, then it is necessary that
. This is so, because
, and 3.1 becomes
and
for
. Hence,
and
for
. This property is used to guarantee the unique solution of the system 3.1 for given ξ.
Finally, we can show that the system 3.1 has a unique solution for given ξ. Without loss of generality, assume that . From the preceding arguments, there exists a vector
such that
satisfying 3.1. In particular, it has












From the above results, given any , there exists a unique solution vector y to the system 3.1. By reordering ξ into ascending order, 3.1 can be conveniently rewritten as the equivalent explicit expression 3.2.
APPENDIX C: LIKELIHOOD FUNCTIONS OF MODELS























Assumption C.1. are i.i.d.
.
Assumption C.2. The parameter space Θ is a compact convex subset of , and the true λ0 is in the interior of Λ, a connected compact subset of the interval (−1, 1).
























Theorem C.1. Under Assumptions 4.1, C.1 and C.2, θ0 is globally identifiable and is a consistent estimator of θ0.
Proof. The consistency of will follow from the uniform convergence of
to zero on Λ and the uniqueness identification condition that, for any
,





By Kolmogorov's strong law of large numbers or Chebyshev's weak law of large numbers for independent random variables, uniformly on Λ, where
is bounded away from zero on Λ. Therefore,
.
A third-order Taylor series expansion of the function at λ0 can be written as







For ,

























Proof. Applying the mean-value theorem to at θ0, we obtain




The second-order derivatives of the log likelihood function are



Denote . Then,
can be expanded in terms of a polynomial of λ as follows:



Similarly, denote
. We have
. It follows that


The first-order derivatives of the log likelihood function at θ0 are


Let k be an even integer and let denote the kth moment of a normal variate. Note that










It is revealing to decompose C.4 into two parts, , which can be called a between-group (BG) equation, and
, which is a within-group (WG) equation, because they bear some similarity to the parts of a panel data regression model (Hsiao, 1986).
If potential group effects were ignored, the MLE would be inconsistent, because the above likelihood function would be misspecified. In panel data models, it often makes a difference in estimation whether researchers treat the group effects as fixed or random. In the fixed-effects model, we encounter an incidental parameters problem when m is finite but R is large. We can eliminate the group effects (by either the first-differencing method or the WG transformation) or we can use the random-effects approach to obtain consistent estimates under the normality assumption. Note that the mutual independence between
and
is not necessary for the consistency of the first-differencing or WG MLE.
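To make the within-group (WG) device concrete, the sketch below demeans outcomes within each group, which removes any group effect that is constant across a group's members. The data layout and variable names are assumptions for this example, and the paper's own transformation may instead be expressed in matrix form.

```python
import numpy as np

def within_transform(values, group_ids):
    """Subtract the group mean from each observation, removing any
    group-specific effect that is constant within a group."""
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for g in np.unique(group_ids):
        idx = (group_ids == g)
        out[idx] = values[idx] - values[idx].mean()
    return out

# hypothetical data: a group effect alpha_g enters every outcome in the group
rng = np.random.default_rng(2)
groups = np.repeat(np.arange(4), 5)            # 4 groups of 5 players
alpha = rng.normal(size=4)[groups]             # group effects
y = alpha + rng.normal(size=groups.size)       # outcomes contaminated by alpha
y_tilde = within_transform(y, groups)
# after the transformation every group mean is (numerically) zero
print(np.allclose([y_tilde[groups == g].mean() for g in range(4)], 0.0))
```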
1 Derivation of the score equation that ignores common group factors









2 Derivation of the likelihood function of fixed-effects specification










3 Derivation of the likelihood function for first-differenced samples






























4 Derivation of the partial likelihood function of the within-group equation



















5 Derivation of the likelihood function of the random-effects approach












6 Derivation of the conditional likelihood function of the between-group equation



































The likelihood functions of all the proposed approaches (i.e. the first-differencing approach, the RE specification, the WG equation and the BG equation) are well defined and depend on a fixed number of parameters. As in Theorem C.2, under the i.i.d. normality assumption, these estimators can be consistent and asymptotically normal as R goes to infinity. Among these consistent estimators, the RE MLE will be asymptotically efficient.
7 Monte Carlo results






Table C.1 presents the bias (Bias) and standard deviation (SD) of the MLE and MOMEs of Model C.1 under the normal distribution. The MLEs of λ have very small biases for all R, and their SDs tend to decrease as R increases. The MOMEs of λ have large biases for small R, and their biases become small when R is 240 or larger. As R increases, the MOMEs have smaller SDs. In general, the MLE has smaller SDs than the MOME. The estimates of μ and σ have similar properties to those of λ under the normal distribution. Table C.2 presents finite sample results of Model C.1 under the χ2 distribution. The MLE of λ has significant downward biases. The MOME of λ has large biases for small R, but small biases when R is 240 or larger. These results reveal that the MLE is sensitive to distributional assumptions, and that the MOME is distribution-free and has satisfactory performance for reasonably large R. Table C.3 presents finite sample results of various MLEs and MOMEs of Model C.2 under the normal distribution. As expected, the inconsistent MLEs that ignore group effects (IGE) or treat them as fixed effects (FEs) are biased in different directions, and the consistent MLEs (WG, BG and RE) have small biases for all R. The MOMEs of Model C.2 perform similarly to those of Model C.1. The RE-MLE approach performs best in terms of SDs among the consistent estimators, and all the consistent MLEs perform better than the MOMEs.
Parameter | Statistic | R | MLE | MOM | R | MLE | MOM
---|---|---|---|---|---|---|---
λ | Bias | 60 | −0.0043 | 14.9160 | 480 | −0.0006 | −0.0112 |
SD | 0.0432 | 263.4300 | 0.0144 | 0.0802 | |||
μ | Bias | 0.0048 | −6.1766 | 0.0008 | 0.0070 | ||
SD | 0.0659 | 109.8200 | 0.0245 | 0.0502 | |||
σ | Bias | 0.0029 | 4.6755 | 0.0000 | −0.0023 | ||
SD | 0.0481 | 79.8610 | 0.0192 | 0.0300 | |||
λ | Bias | 120 | −0.0023 | −0.0717 | 960 | −0.0001 | −0.0020 |
SD | 0.0273 | 0.2855 | 0.0101 | 0.0559 | |||
μ | Bias | −0.0008 | 0.0394 | −0.0001 | 0.0009 | ||
SD | 0.0464 | 0.1767 | 0.0157 | 0.0339 | |||
σ | Bias | −0.0021 | −0.0135 | −0.0002 | −0.0003 | ||
SD | 0.0340 | 0.0665 | 0.0135 | 0.0194 | |||
λ | Bias | 240 | −0.0008 | −0.0375 | 1920 | −0.0007 | −0.0059 |
SD | 0.0213 | 0.1685 | 0.0070 | 0.0390 | |||
μ | Bias | 0.0016 | 0.0264 | 0.0004 | 0.0032 | ||
SD | 0.0361 | 0.1178 | 0.0114 | 0.0238 | |||
σ | Bias | −0.0027 | −0.0120 | 0.0001 | −0.0012 | ||
SD | 0.0553 | 0.0693 | 0.0091 | 0.0135 |
Parameter | Statistic | R | MLE | MOM | R | MLE | MOM
---|---|---|---|---|---|---|---
λ | Bias | 60 | −0.0898 | −0.1833 | 480 | −0.0815 | −0.0228 |
SD | 0.0589 | 0.9894 | 0.0201 | 0.1039 | |||
μ | Bias | 0.0537 | 0.1203 | 0.0546 | 0.0157 | ||
SD | 0.0777 | 0.6868 | 0.0269 | 0.0729 | |||
σ | Bias | 0.0177 | −0.0018 | 0.0206 | −0.0028 | ||
SD | 0.0703 | 0.1270 | 0.0256 | 0.0315 | |||
λ | Bias | 120 | −0.0856 | −0.0398 | 960 | −0.0809 | −0.0125 |
SD | 0.0407 | 0.3414 | 0.0142 | 0.0768 | |||
μ | Bias | 0.0537 | 0.0254 | 0.0533 | 0.0079 | ||
SD | 0.0517 | 0.2451 | 0.0194 | 0.0537 | |||
σ | Bias | 0.0188 | −0.0013 | 0.0191 | −0.0024 | ||
SD | 0.0501 | 0.0635 | 0.0170 | 0.0212 | |||
λ | Bias | 240 | −0.0826 | −0.0177 | 1920 | −0.0810 | −0.0026 |
SD | 0.0283 | 0.1764 | 0.0097 | 0.0497 | |||
μ | Bias | 0.0546 | 0.0118 | 0.0548 | 0.0026 | ||
SD | 0.0386 | 0.1229 | 0.0135 | 0.0340 | |||
σ | Bias | −0.0208 | −0.0002 | 0.0204 | −0.0002 | ||
SD | 0.0350 | 0.0449 | 0.0128 | 0.0159 |
Parameter | Statistic | R | IGE | FE | WG | BG | RE | MOM
---|---|---|---|---|---|---|---|---
λ | Bias | 120 | 0.2385 | −0.7780 | 0.0057 | −0.0309 | −0.0085 | 0.3536 |
SD | 0.0199 | 0.0247 | 0.1367 | 0.1951 | 0.0935 | 3.2983 | ||
μ | Bias | −0.1249 | 0.4874 | 0.0049 | 0.0260 | 0.0131 | −0.1810 | |
SD | 0.0579 | 0.2626 | 0.1377 | 0.1694 | 0.1251 | 2.0278 | ||
σ | Bias | 0.0942 | −0.1673 | −0.0010 | −0.0641 | −0.0030 | 0.1069 | |
SD | 0.0380 | 0.0357 | 0.0471 | 1.0738 | 0.0424 | 1.0339 | ||
![]() |
Bias | −0.1088 | 0.0011 | 0.8097 | ||||
SD | 0.4993 | 0.2508 | 7.3440 | |||||
λ | Bias | 240 | 0.2391 | −0.7753 | 0.0143 | −0.0101 | 0.0015 | 0.0698 |
SD | 0.0135 | 0.0167 | 0.0955 | 0.1136 | 0.0666 | 0.4111 | ||
μ | Bias | −0.1270 | 0.4676 | −0.0083 | 0.0052 | −0.0013 | −0.0336 | |
SD | 0.0399 | 0.1958 | 0.0925 | 0.0974 | 0.0809 | 0.2270 | ||
σ | Bias | 0.0956 | −0.1641 | 0.0051 | −0.1537 | 0.0027 | 0.0216 | |
SD | 0.0300 | 0.0279 | 0.0357 | 0.8178 | 0.0314 | 0.1052 | ||
![]() |
Bias | −0.0630 | −0.0162 | 0.0258 | ||||
SD | 0.3402 | 0.1794 | 0.7527 | |||||
λ | Bias | 480 | 0.2418 | −0.7765 | 0.0049 | −0.0036 | −0.0016 | 0.0211 |
SD | 0.0096 | 0.0116 | 0.0647 | 0.0808 | 0.0483 | 0.3042 | ||
μ | Bias | −0.1314 | 0.4669 | −0.0047 | −0.0001 | −0.0011 | −0.0133 | |
SD | 0.0308 | 0.1407 | 0.0687 | 0.0755 | 0.0633 | 0.1745 | ||
σ | Bias | 0.0925 | −0.1668 | −0.0010 | −0.1795 | −0.0022 | 0.0055 | |
SD | 0.0199 | 0.0174 | 0.0227 | 0.7383 | 0.0206 | 0.0716 | ||
![]() |
Bias | −0.0331 | 0.0034 | 0.0085 | ||||
SD | 0.2455 | 0.1304 | 0.6488 | |||||
λ | Bias | 960 | 0.2408 | −0.7776 | −0.0029 | 0.0037 | −0.0043 | 0.0062 |
SD | 0.0064 | 0.0088 | 0.0480 | 0.0498 | 0.0331 | 0.1844 | ||
μ | Bias | −0.1287 | 0.4765 | 0.0033 | −0.0009 | 0.0040 | −0.0011 | |
SD | 0.0209 | 0.0986 | 0.0488 | 0.0485 | 0.0433 | 0.1059 | ||
σ | Bias | 0.0928 | −0.1679 | −0.0017 | −0.1993 | −0.0019 | 0.0013 | |
SD | 0.0136 | 0.0131 | 0.0177 | 0.6127 | 0.0152 | 0.0444 | ||
![]() |
Bias | −0.0190 | 0.0083 | −0.0206 | ||||
SD | 0.1521 | 0.0924 | 0.4518 | |||||
λ | Bias | 1920 | 0.2412 | −0.7769 | 0.0014 | 0.0045 | −0.0005 | −0.0021 |
SD | 0.0048 | 0.0059 | 0.0326 | 0.0362 | 0.0230 | 0.1223 | ||
μ | Bias | −0.1296 | 0.4685 | −0.0017 | −0.0035 | −0.0006 | 0.0004 | |
SD | 0.0145 | 0.0677 | 0.0335 | 0.0337 | 0.0296 | 0.0706 | ||
σ | Bias | 0.0936 | −0.1662 | 0.0006 | −0.2441 | 0.0003 | 0.0002 | |
SD | 0.0099 | 0.0089 | 0.0113 | 0.5209 | 0.0104 | 0.0284 | ||
![]() |
Bias | 0.0052 | 0.0012 | −0.0061 | ||||
SD | 0.1167 | 0.0645 | 0.3297 |
APPENDIX D: RECURRENCE MOMENTS
Lemma D.1. Let be a random sample of size m and let
be the corresponding order statistics in ascending order. Assume that the first moment of any one order statistic in a sample of size m exists. Then,

This lemma follows directly from Balakrishnan and Sultan (1998, p. 155).
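Lemma D.1 draws on the standard recurrence relations for moments of order statistics in Balakrishnan and Sultan (1998). One well-known member of that family is i · E[X_(i+1:m)] + (m − i) · E[X_(i:m)] = m · E[X_(i:m−1)], which links means of order statistics across sample sizes and holds for any parent distribution with a finite mean. Since the displayed relation above is not reproduced, the sketch below simply verifies this particular identity by simulation; it is offered as an illustration of the kind of moment condition involved, not as a restatement of the lemma itself.

```python
import numpy as np

def mean_order_stat(sample_size, i, reps=400_000, seed=6):
    """Monte Carlo estimate of E[X_(i:sample_size)] (ascending order, 1-indexed)
    for a chi-square(3) parent; any parent with a finite mean would do."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.chisquare(df=3, size=(reps, sample_size)), axis=1)
    return x[:, i - 1].mean()

m, i = 5, 2
lhs = i * mean_order_stat(m, i + 1) + (m - i) * mean_order_stat(m, i)
rhs = m * mean_order_stat(m - 1, i)
print(round(lhs, 3), round(rhs, 3))   # the two sides agree up to simulation noise
```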
Proof of Theorem 5.1. For analytical convenience, we assume that there are different group sizes. Each group size has the same number of groups T, and R and T have the same order of magnitude, because
and mU is bounded. Denote
and
, where
is the ith-order statistic in the tth subgroup with group size m for
and
. The GMM estimation vector for the model 4.15 is















1 Consistent estimation of Δ11 and Δ12












