Volume 84, Issue 4 pp. 385-397
RESEARCH ARTICLE
Open Access

Algorithmic solutions for maximizing shareable costs

Rong Zou
School of Mathematics, Xi'an University of Finance and Economics, Xi'an, China

Boyue Lin
Operations Research and Statistics, KU Leuven, Leuven, Belgium

Marc Uetz (Corresponding Author)
Applied Mathematics, University of Twente, Enschede, The Netherlands
Correspondence: Marc Uetz, Applied Mathematics, University of Twente, 7500AE, Enschede, The Netherlands. Email: [email protected]

Matthias Walter
Applied Mathematics, University of Twente, Enschede, The Netherlands
First published: 25 June 2024

Abstract

This article addresses the linear optimization problem of maximizing the total costs that can be shared among a group of agents while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The article first gives a fairly complete picture of the computational complexity of this optimization problem, its relation to optimization over the core itself, and its equivalence to other, minimal core relaxations that have been proposed earlier. We then address minimum cost spanning tree (MST) games as an example of a class of cost sharing games with non-empty core. While submodular cost functions admit efficient algorithms for maximizing shareable costs, MST games have cost functions that are subadditive, but generally not submodular. Nevertheless, it is well known that cost shares in the core of MST games can be found efficiently. In contrast, we show that maximizing shareable costs is $\mathsf{NP}$-hard for MST games and derive a 2-approximation algorithm. Our work opens several directions for future research.

1 INTRODUCTION

The fundamental algorithmic question addressed in this article is: can we maximize the total costs that can be shared among a set of agents while maintaining coalitional stability? Here, coalitional stability refers to the core constraints of an underlying cooperative transferable utility game, or TU game: any proper subset of the set of all agents has an outside option at a certain cost, and coalitional stability of cost shares means that all subsets of agents are willing to accept the cost shares, because their outside option is less attractive, that is, at least as costly as the sum of their cost shares. This is arguably a fundamental question for the design of cost sharing mechanisms, and the main goal of this article is to give more insight into its algorithmic complexity. Several closely related results exist, but they are somewhat scattered and sometimes unaware of each other. This is discussed in Sections 2 and 3.

The main contributions of this article are as follows. We introduce a basic polyhedral object that we refer to as the almost core of a cooperative game. It is obtained from the core by relaxing the requirement that the cost shares be budget balanced. By definition, this polyhedron is non-empty. The algorithmic problem that we address is to maximize the cost shares that lie in the almost core, which is a linear optimization problem over that polyhedron. For the case that the core of the corresponding cooperative game is empty, it turns out that the computational problem of maximizing shareable costs is equivalent to finding a minimal non-empty core relaxation in the sense of several of the core relaxations proposed earlier in the literature. This is perhaps not surprising, yet a good overview of how all these relaxations relate to each other does not seem to appear anywhere in the literature. The article further establishes complexity theoretic results that relate computational problems for the almost core to corresponding problems for the classical core. While general linear optimization over the almost core and the core turn out to share the same algorithmic complexity, we show that there are classes of games where core elements can be computed efficiently, while maximal shareable costs cannot be computed in polynomial time unless $\mathsf{P}=\mathsf{NP}$. That hardness result is obtained for a well-studied class of games with non-empty core, namely minimum cost spanning tree games. This class of games is also interesting because the resulting cost function is subadditive but generally not submodular. And while submodularity yields polynomial-time algorithms, our hardness result shows that subadditivity does not suffice. For minimum cost spanning tree games, we further show how to obtain a 2-approximation algorithm for maximizing shareable costs, and we show that our analysis of that algorithm is tight.

The structure of this article is as follows. The basic notions and definitions are given in Section 2. As previous papers have mostly focused on core relaxations for unbalanced games (that is, games with an empty core), we briefly review these in Section 3 and discuss how they relate to the problem of computing maximal “almost core” cost shares. A novel aspect of our approach is to also address games that have a non-empty core. Section 4 therefore relates linear optimization over the almost core to linear optimization over the core, and we derive some algorithmic consequences. Section 5 then addresses the problem of computing maximal cost shares for minimum cost spanning tree (MST) games, showing $\mathsf{NP}$-hardness as well as giving a 2-approximation algorithm. We conclude with some open problems in Section 6.

2 CORE AND ALMOST CORE FOR TU GAMES

A cooperative game with transferable utility (henceforth TU game) is described by a pair $(N,c)$, where $N=\{1,\dots,n\}$ denotes the set of agents and $c: 2^N \to \mathbb{R}_{\ge 0}$ is the characteristic function that assigns to every coalition $S$ a value $c(S)$ representing the cost of an “outside option,” which is the minimum total cost that the agents in $S$ can achieve if they cooperate amongst themselves. With a slight overload of notation, we write $n=|N|$ for the number of agents. An allocation for $(N,c)$ is a vector $x\in\mathbb{R}^n$ with $x_i$ being the cost share allocated to agent $i\in N$. For convenience, we write $x(S)=\sum_{i\in S} x_i$. An allocation $x$ is said to be budget balanced if $x(N)=c(N)$. That means that the total cost of the so-called grand coalition $N$ is distributed over the individual agents. It is called stable if it satisfies coalitional stability, that is, $x(S)\le c(S)$ for all $S\subsetneq N$. The core [20] of the game $(N,c)$, arguably one of the most important concepts in cooperative game theory, consists of all budget balanced allocations satisfying coalitional stability. The core of a TU game is given by
$$ C_{(N,c)} := \left\{x\in\mathbb{R}^n : x(S)\le c(S)\ \forall S\subsetneq N,\ x(N)=c(N)\right\}. $$
The core of a TU game is non-empty if and only if the game is balanced [7, 47]. In fact, balancedness is just a dual characterization of the non-emptiness of the polyhedron $C_{(N,c)}$.
When we drop the equality constraint that a core allocation is budget balanced, so that we no longer require $x(N)=c(N)$, the total cost distributed over the set of agents is allowed to vary, resulting in a problem that always has a feasible solution. This captures the idea that, depending on the underlying game, one may have to, or want to, distribute either less or more than $c(N)$. We refer to the set of all such allocations as the almost core, because the cost shares fulfill almost all, that is, all except one, of the core constraints. Formally, given a TU game $(N,c)$, define the almost core for $(N,c)$ by
$$ \mathrm{AC}_{(N,c)} := \left\{x\in\mathbb{R}^n : x(S)\le c(S)\ \forall S\subsetneq N\right\}. $$
Obviously, $C_{(N,c)} \subseteq \mathrm{AC}_{(N,c)}$. The major motivation for this definition is to systematically study the algorithmic complexity of cooperative games without having to obey budget balance, that is, optimization over the polyhedron $\mathrm{AC}_{(N,c)}$. Let us motivate the relevance of this problem.
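To make the two definitions concrete, the following sketch (our own toy example, not taken from the article) checks core and almost core membership by enumerating coalitions:

```python
from itertools import chain, combinations

def proper_subsets(agents):
    """All non-empty proper subsets S of the agent set."""
    return chain.from_iterable(combinations(agents, r) for r in range(1, len(agents)))

def in_almost_core(x, c, agents):
    """x(S) <= c(S) for every non-empty proper coalition S."""
    return all(sum(x[i] for i in S) <= c[frozenset(S)] + 1e-9 for S in proper_subsets(agents))

def in_core(x, c, agents):
    """Almost core membership plus budget balance x(N) = c(N)."""
    return in_almost_core(x, c, agents) and abs(sum(x.values()) - c[frozenset(agents)]) <= 1e-9

# Toy game: every non-empty coalition can serve itself at cost 1.
agents = (1, 2, 3)
c = {frozenset(S): 1.0 for S in chain(proper_subsets(agents), [agents])}

x_core = {1: 1/3, 2: 1/3, 3: 1/3}   # budget balanced and stable
x_ac   = {1: 1/2, 2: 1/2, 3: 1/2}   # stable, but x(N) = 1.5 > c(N)

print(in_core(x_core, c, agents))        # True
print(in_core(x_ac, c, agents))          # False: not budget balanced
print(in_almost_core(x_ac, c, agents))   # True
```

The second allocation illustrates the point of the definition: it satisfies every core constraint except budget balance, and in fact distributes more than $c(N)$.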

On the one hand, if the total cost $c(N)$ of the grand coalition cannot be distributed over the set of agents while maintaining coalitional stability, that is, if the game is unbalanced, it is natural to ask what fraction of the total cost $c(N)$ can maximally be distributed while maintaining coalitional stability. This problem has been addressed under different names, among them the cost of stability of a cooperative game [2]. It has received considerable attention in the literature, for example, [1-4, 8, 21, 24, 32-34, 37, 38]. Indeed, for games with empty core, maximizing $x(N)$ over the almost core is equivalent to computing the cost of stability, and also to computing some other minimal core relaxations proposed earlier in the literature; see Section 3 for more details.

On the other hand, even if the core is non-empty, one may be interested in maximizing the total cost that can be distributed over the set of agents. It reveals the maximal value for $c(N)$ that would still yield a non-empty core. One motivation for this maximization problem is to determine the maximal tax rate that could be levied on a given $c(N)$ without any subset of agents $S\subsetneq N$ wanting to deviate. The problem can also be seen as a special case of cooperative games with restricted coalition formation, because the underlying assumption is that the grand coalition $N$ cannot form. We come back to this interpretation in Section 5.

That said, the object of interest of this article is the following linear program.
$$ \max\left\{x(N) : x\in \mathrm{AC}_{(N,c)}\right\}. \qquad (1) $$
The objective value of this linear program is the largest total cost that can be shared among the agents while retaining stability in the sense that no subset of agents $S\subsetneq N$ would prefer to deviate to the outside option. We call the optimal solution value of this linear program the almost core optimum, and any maximizer is called an optimal almost core allocation. Sometimes we also consider the restricted problem where we additionally require $x\ge 0$, which means that agents must not receive subsidies.

Clearly, the core of a game is non-empty if and only if the almost core optimum is larger than or equal to $c(N)$. We study problem (1) mainly for games with non-empty core, while for games with empty core we next give a fairly complete overview of its relation to earlier proposed core relaxations.
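On very small instances, problem (1) can be solved by brute force: $\mathrm{AC}_{(N,c)}$ is a pointed polyhedron (the singleton constraints bound each coordinate from above), so when the optimum is finite it is attained at a vertex, that is, at an intersection of $n$ linearly independent constraints. The following sketch (our own illustration; the game is a toy example) enumerates these intersections with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def almost_core_optimum(n, c):
    """Maximize x(N) over {x : x(S) <= c(S) for all non-empty proper S}
    by enumerating candidate vertices (intersections of n tight constraints)."""
    agents = range(n)
    subsets = [S for r in range(1, n) for S in combinations(agents, r)]
    rows = [([Fraction(int(i in S)) for i in agents], Fraction(c[S])) for S in subsets]

    def solve(system):
        """Gauss-Jordan elimination; returns None if the system is singular."""
        m = [row[:] + [rhs] for row, rhs in system]
        for col in range(n):
            piv = next((r for r in range(col, n) if m[r][col] != 0), None)
            if piv is None:
                return None
            m[col], m[piv] = m[piv], m[col]
            for r in range(n):
                if r != col and m[r][col] != 0:
                    f = m[r][col] / m[col][col]
                    m[r] = [a - f * b for a, b in zip(m[r], m[col])]
        return [m[i][n] / m[i][i] for i in range(n)]

    best = None
    for chosen in combinations(rows, n):
        x = solve(list(chosen))
        if x is None:
            continue
        # Keep the candidate only if it satisfies every constraint.
        if all(sum(a * xi for a, xi in zip(row, x)) <= rhs for row, rhs in rows):
            val = sum(x)
            if best is None or val > best:
                best = val
    return best

# Toy game: every non-empty coalition has outside-option cost 1.
n = 3
c = {S: 1 for r in range(1, n) for S in combinations(range(n), r)}
opt = almost_core_optimum(n, c)
print(opt)  # 3/2: more than c(N) = 1 can be shared, so the core is non-empty
```

For this toy game the almost core optimum is $3/2 > c(N)=1$, matching the tax interpretation above: up to 50% more than $c(N)$ could be levied without any proper coalition wanting to deviate. This enumeration is exponential and only meant to make the linear program tangible.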

3 EQUIVALENT AND RELATED RELAXATIONS OF THE CORE

We review several well-known and related concepts that were introduced in order to deal with games having an empty core and discuss their relationship to the almost core (optimum).

The first relaxation of the core, introduced by Shapley and Shubik [46], is the strong $\varepsilon$-core, defined as
$$ C_{\mathrm{s}}^{\varepsilon}(N,c) := \left\{x\in\mathbb{R}^n : x(S)\le c(S)+\varepsilon\ \forall S\subsetneq N,\ x(N)=c(N)\right\}. $$
We denote the smallest $\varepsilon\ge 0$ for which this set is non-empty by $\varepsilon_{\mathrm{s}}^{\star}$. The corresponding set $C_{\mathrm{s}}^{\varepsilon_{\mathrm{s}}^{\star}}(N,c)$ is called the least core [35].
Shapley and Shubik [46] also introduced the weak $\varepsilon$-core as
$$ C_{\mathrm{w}}^{\varepsilon}(N,c) := \left\{x\in\mathbb{R}^n : x(S)\le c(S)+\varepsilon\cdot|S|\ \forall S\subsetneq N,\ x(N)=c(N)\right\}. $$
We denote the smallest $\varepsilon\ge 0$ for which this set is non-empty by $\varepsilon_{\mathrm{w}}^{\star}$. Note that by definition, for any $\varepsilon\ge 0$, $C_{\mathrm{s}}^{\varepsilon}(N,c)\subseteq C_{\mathrm{w}}^{\varepsilon}(N,c)$, and hence $\varepsilon_{\mathrm{w}}^{\star}\le\varepsilon_{\mathrm{s}}^{\star}$.
Instead of using an additive relaxation of the constraints, Faigle and Kern [17] defined the multiplicative $\varepsilon$-core as
$$ C_{\mathrm{m}}^{\varepsilon}(N,c) := \left\{x\in\mathbb{R}^n : x(S)\le(1+\varepsilon)\cdot c(S)\ \forall S\subsetneq N,\ x(N)=c(N)\right\}. $$
Denote the smallest $\varepsilon\ge 0$ for which this set is non-empty by $\varepsilon_{\mathrm{m}}^{\star}$.
A different viewpoint is the approximate core, or $\gamma$-core [24], for some $\gamma\in[0,1]$; it is defined as
$$ C_{\mathrm{a}}^{\gamma}(N,c) := \left\{x\in\mathbb{R}^n : x(S)\le c(S)\ \forall S\subseteq N,\ \gamma\cdot c(N)\le x(N)\right\}. $$
Denote the largest $\gamma\le 1$ for which this set is non-empty by $\gamma_{\mathrm{a}}^{\star}$.
The gap between the almost core optimum and the total cost of the grand coalition $c(N)$ was called the cost of stability for an unbalanced cooperative game by Bachrach et al. [2]. For (unbalanced) cost sharing games it is defined by Meir et al. [36] as
$$ \delta_{\mathrm{CoS}}^{\star} := c(N) - \max\left\{x(N) : x(S)\le c(S)\ \forall S\subseteq N\right\}. $$
An alternative viewpoint was independently introduced by Bejan and Gómez [4], who considered, for profit sharing games, the so-called extended core. In order to define it for cost sharing games, let
$$ \delta_{\mathrm{ec}}^{\star} := \min\left\{t(N) : \exists (x,t)\in\mathbb{R}^n\times\mathbb{R}_{\ge 0}^n,\ x(N)=c(N),\ (x-t)(S)\le c(S)\ \forall S\subsetneq N\right\}. \qquad (2) $$
The extended core is now the set of all budget balanced $x\in\mathbb{R}^n$, so all $x$ with $x(N)=c(N)$, for which the minimum above is attained (for suitable $t\in\mathbb{R}_{\ge 0}^n$).

Yet another concept to stabilize an unbalanced game was considered by Zick et al. [49]. Interpreting $t_i$ in the definition of the extended core of Bejan and Gómez [4] as a discount offered to agent $i$, in [49] a coalitional discount $t_S$ is offered to each agent set $S$. This is an exponential blowup of the solution space, which, however, gives more flexibility.

For unbalanced games, that is, games with an empty core, computing the almost core optimum is clearly the same as computing the cost of stability $\delta_{\mathrm{CoS}}^{\star}$. The following theorem further shows how the different core relaxations are related to each other with respect to optimization. Some of these relations were known; for example, Liu et al. [31, 32] mention the equivalence of computing $\gamma_{\mathrm{a}}^{\star}$, $\delta_{\mathrm{CoS}}^{\star}$ and $\varepsilon_{\mathrm{m}}^{\star}$, and the relation between the cost of stability $\delta_{\mathrm{CoS}}^{\star}$ and the smallest weak $\varepsilon$-core $\varepsilon_{\mathrm{w}}^{\star}$ appears in [2, 37]. Yet we are not aware of a summarizing overview of how the different relaxations relate, hence we give this summary here for the sake of completeness, together with the short proof.

Theorem 1. (In parts folklore) For any TU game $(N,c)$ with empty core, the optimization problems for the weak $\varepsilon$-core, the multiplicative $\varepsilon$-core, the cost of stability and the extended core are equivalent. In particular, the values satisfy

$$ \delta_{\mathrm{ec}}^{\star} = \left(1-\gamma_{\mathrm{a}}^{\star}\right)\cdot c(N) = \frac{\varepsilon_{\mathrm{m}}^{\star}}{1+\varepsilon_{\mathrm{m}}^{\star}}\cdot c(N) = \delta_{\mathrm{CoS}}^{\star} = \varepsilon_{\mathrm{w}}^{\star}\cdot n. $$

Proof. First, we establish $\delta_{\mathrm{CoS}}^{\star}=\delta_{\mathrm{ec}}^{\star}$. We substitute $x-t$ by $x^{\prime}$ in (2) and obtain

$$ \delta_{\mathrm{ec}}^{\star} = \min\left\{t(N) : \exists (x^{\prime},t)\in\mathbb{R}^n\times\mathbb{R}_{\ge 0}^n,\ x^{\prime}(N)+t(N)=c(N),\ x^{\prime}(S)\le c(S)\ \forall S\subsetneq N\right\}. $$
Now it is easy to see that the actual entries of $t$ do not matter (except for nonnegativity); only the value $t(N)$ is important. This yields $\delta_{\mathrm{CoS}}^{\star}=\delta_{\mathrm{ec}}^{\star}$.

Second, we show $\delta_{\mathrm{CoS}}^{\star}=\left(1-\gamma_{\mathrm{a}}^{\star}\right)\cdot c(N)$. To this end, observe

$$ \gamma_{\mathrm{a}}^{\star} = \max\left\{\gamma\in\mathbb{R} : \exists x\in\mathbb{R}^n,\ x(S)\le c(S)\ \forall S\subseteq N,\ x(N)=\gamma c(N)\right\}. $$
Clearly, the maximum is attained by $x^{\star}\in\mathbb{R}^n$ with $x^{\star}(N)$ maximum. Moreover, the value of $\gamma_{\mathrm{a}}^{\star}$ is then equal to $x^{\star}(N)/c(N)$. This shows $\delta_{\mathrm{CoS}}^{\star}/c(N)=1-\gamma_{\mathrm{a}}^{\star}$.

Third, we show $1-\gamma_{\mathrm{a}}^{\star}=\varepsilon_{\mathrm{m}}^{\star}/\left(1+\varepsilon_{\mathrm{m}}^{\star}\right)$. Observe that the map $\pi:\mathbb{R}^n\to\mathbb{R}^n$ defined by $\pi(x)=(1+\varepsilon)x$ induces a bijection between allocations $x\in\mathbb{R}^n$ with $x(S)\le c(S)$ for all $S\subseteq N$ and allocations $\pi(x)$ with $\pi(x)(S)\le(1+\varepsilon)c(S)$ for all $S\subseteq N$. Moreover, $\pi(x)(N)=(1+\varepsilon)x(N)$. Hence, $C_{\mathrm{m}}^{\varepsilon}(N,c)$ is (non-)empty if and only if $C_{\mathrm{a}}^{\gamma}(N,c)$ is (non-)empty, where $\gamma=1/(1+\varepsilon)$ holds. This implies $\gamma_{\mathrm{a}}^{\star}=1/\left(1+\varepsilon_{\mathrm{m}}^{\star}\right)$.

We finally show $\delta_{\mathrm{CoS}}^{\star}=\varepsilon_{\mathrm{w}}^{\star}\cdot n$. To this end, in

$$ \varepsilon_{\mathrm{w}}^{\star} = \min\left\{\varepsilon\ge 0 : \exists x,\ x(S)\le c(S)+\varepsilon\cdot|S|\ \forall S\subsetneq N,\ x(N)=c(N)\right\}, $$
we substitute $x$ by $x^{\prime}+(\varepsilon,\varepsilon,\dots,\varepsilon)$, which yields
$$ \varepsilon_{\mathrm{w}}^{\star} = \min\left\{\varepsilon\ge 0 : \exists x^{\prime},\ x^{\prime}(S)\le c(S)\ \forall S\subsetneq N,\ x^{\prime}(N)+\varepsilon\cdot n=c(N)\right\}. $$
Clearly, the minimum $\varepsilon_{\mathrm{w}}^{\star}$ is attained if and only if $\varepsilon\cdot n=\delta_{\mathrm{CoS}}^{\star}$ holds.
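As a concrete illustration of Theorem 1 (our own example, not from the article), consider the three-agent game with $c(S)=1$ for all non-empty $S\subsetneq N$ and $c(N)=2$. Summing the three pair constraints gives $2x(N)\le 3$, so at most $3/2 < c(N)$ can be shared stably; the core is empty, and the quantities in the theorem evaluate to

```latex
\delta_{\mathrm{CoS}}^{\star} = 2-\tfrac{3}{2} = \tfrac{1}{2}, \qquad
\gamma_{\mathrm{a}}^{\star} = \tfrac{3/2}{2} = \tfrac{3}{4}, \qquad
\varepsilon_{\mathrm{m}}^{\star} = \tfrac{1}{3}, \qquad
\varepsilon_{\mathrm{w}}^{\star} = \tfrac{1}{6}.
```

Indeed, $\delta_{\mathrm{ec}}^{\star} = \left(1-\tfrac{3}{4}\right)\cdot 2 = \frac{1/3}{1+1/3}\cdot 2 = \tfrac{1}{6}\cdot 3 = \tfrac{1}{2}$, as the theorem asserts. For this game one also has $\varepsilon_{\mathrm{s}}^{\star}=\tfrac{1}{3}$, so the bound $\varepsilon_{\mathrm{w}}^{\star}\ge\frac{1}{n-1}\varepsilon_{\mathrm{s}}^{\star}$ from [37] is tight here.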

Moreover, it was shown in [37, Section 4] that $\varepsilon_{\mathrm{w}}^{\star}\ge\frac{1}{n-1}\varepsilon_{\mathrm{s}}^{\star}$. Further relations between the cost of stability $\delta_{\mathrm{CoS}}^{\star}$ and other core relaxations for specific classes of games appear in [3, 37]. For instance, for superadditive (profit sharing) games, $\delta_{\mathrm{CoS}}^{\star}\le\sqrt{n}\,\varepsilon_{\mathrm{s}}^{\star}$ and $\sqrt{n}\,\varepsilon_{\mathrm{w}}^{\star}\le\varepsilon_{\mathrm{s}}^{\star}$. Indeed, much of the previous work in this direction was about determining bounds on the cost of stability [2, 8, 36-38] or other structural insights [4].

However, algorithmic considerations were also made for specific (unbalanced) games, such as showing hardness of computing the cost of stability of general weighted voting games [2] and of threshold network flow games [44]. Moreover, Aziz et al. [1] give several results on the computational complexity of computing the cost of stability (and other measures) for several combinatorial games such as weighted graph, voting or matching games, as well as their threshold versions.

One of the few papers which considers the impact of restrictions on possible coalition formation in relation to algorithmic questions, and in that respect also related to our work, is by Chalkiadakis et al. [10]. In the spirit of Myerson [40], they assume that the formation of coalitions is restricted by a so-called interaction graph, and analyze how the computational complexity of several core-related concepts such as core membership, core non-emptiness, or cost of stability depends on the structure of that graph. Under different assumptions on polynomial-time compact representations of the underlying game, their results include hardness as well as tractability results that depend on the interaction graph. Their results also imply hardness of computing the cost of stability for arbitrary subadditive (cost) games.

Approximations of $\varepsilon_{\mathrm{m}}^{\star}$ for the multiplicative $(1+\varepsilon)$-core and corresponding allocations have also been obtained, for example, for the symmetric traveling salesperson game by Faigle et al. [16], and for the asymmetric case by Bläser et al. [6].

There are also papers that attack the problem from a mathematical optimization and computational point of view. Under the name “optimal cost share problem” (OCSP), Caprara and Letchford [9] suggest how to obtain $\gamma$-core solutions for a generalization of certain combinatorial games, named integer minimization games, using column or row generation techniques. Under the name “optimal cost allocation problem” (OCAP), Liu et al. [31] follow the line of research initiated by [9] and give computational results using Lagrangian relaxations. A related line of research [32] considers the strong $\varepsilon$-core relaxation parameterized by $\varepsilon$, as given by the function
$$ \omega(\varepsilon) := \min_{x\in\mathbb{R}^n}\left\{c(N)-x(N) : x(S)\le c(S)+\varepsilon\ \forall S\subsetneq N\right\}, $$
and approximates it computationally. This so-called “penalty-subsidy function” [32] is studied further in another variant in a follow-up paper [33], which approximates it using Lagrangian relaxation techniques, with computational results specifically for traveling salesperson games.

The problem of computing allocations in the least core has also been considered in the literature. For cooperative games with submodular cost functions, it can be solved in polynomial time [12], while for supermodular cost cooperative games it is $\mathsf{NP}$-hard, and even hard to approximate [45].

Specifically relevant for our work are results by Faigle et al. [19], who show, among other things, $\mathsf{NP}$-hardness of computing a cost allocation in the so-called $f$-least core for minimum cost spanning tree games, which is a tightening of the core constraints to $x(S)\le c(S)-\varepsilon f(S)$ for certain non-negative functions $f$. As we will argue in Section 5, their result also implies hardness of computing optimal almost core allocations for the class of minimum cost spanning tree games.

Finally, we briefly discuss the relation of almost core allocations to cost sharing methods in mechanism design. First, note that cost sharing methods with an additional property called population monotonicity have been considered [48], specifically also for spanning tree games [26, 41]. Population monotonicity, also called cross monotonicity, means that one computes cost shares $x^Q$ in the cores of all subgames induced by subsets of agents $Q\subseteq N$, and the cost shares per agent $x_i^Q$ are monotonically non-increasing in $Q$. Cross monotonicity is desirable because it yields group-strategyproof mechanisms in a setting where the individual costs of agents are private information [39]. In that context, Könemann et al. [29] show how to obtain a cross-monotonic cost sharing method for the Steiner forest problem that is 2-budget balanced. That means that one computes, for all $Q\subseteq N$, a Steiner forest for agents $Q$, with cost $\tilde{c}(Q)$, along with cost shares $x^Q$ so that $\frac{1}{2}\tilde{c}(Q)\le x^Q(Q)\le c(Q)$. As a byproduct, this yields an allocation $x^N$ in the corresponding almost core of Steiner forest games, with $x^N(S)\le c(S)$ for all $S\subseteq N$, and the additional property that $x^N(N)\ge\frac{1}{2}\tilde{c}(N)$. However, the distinguishing feature in this article, namely that the constraint $x(N)\le c(N)$ is absent and allocations with $x(N)\nleq c(N)$ are allowed, is not present in those works.

4 COMPUTATIONAL COMPLEXITY CONSIDERATIONS

In this section, we investigate the computational complexity of optimization problems related to the (nonnegative) core and almost core. To capture results for the general and the nonnegative case, we consider linear optimization over the polyhedra
$$ \mathrm{AC}_{(N,c)} \qquad\text{and}\qquad P_{(N,c)} := \left\{x\in\mathbb{R}^n : x(S)\le c(S)\ \forall S\subseteq N\right\}, $$
as well as optimization over $P_{(N,c)}\cap\mathbb{R}_{\ge 0}^n$ and $\mathrm{AC}_{(N,c)}\cap\mathbb{R}_{\ge 0}^n$ for families of games $(N,c)$. Note that if the core is non-empty, then it is the set of optimal solutions when maximizing $\mathbbm{1}\cdot x$ over $P_{(N,c)}$. Also note that whenever the core of a game $(N,c)$ is empty, the constraint $x(N)\le c(N)$ is implied by the set of constraints $x(S)\le c(S)$, $S\subsetneq N$, which in turn implies $P_{(N,c)}=\mathrm{AC}_{(N,c)}$. For games with non-empty core, we get the following correspondence between the optimization problems for the two polyhedra. The proof works by showing that the corresponding separation problems are polynomially equivalent.

Theorem 2. For a family of games $(N,c)$, linear optimization problems over $\mathrm{AC}_{(N,c)}$ can be solved in polynomial time if and only if linear optimization problems over $P_{(N,c)}$ can be solved in polynomial time.

Proof. In order to prove the result we make use of the equivalence of optimization and separation [23, 25, 42]. That is, we show that we can solve the separation problem for $P_{(N,c)}$ if and only if we can solve the separation problem for $\mathrm{AC}_{(N,c)}$. Since $P_{(N,c)}=\left\{x\in\mathrm{AC}_{(N,c)} : x(N)\le c(N)\right\}$ holds, separation over $P_{(N,c)}$ reduces to separation over $\mathrm{AC}_{(N,c)}$ plus an explicit check of a single inequality.

Hence, it remains to show how to solve the separation problem for $\mathrm{AC}_{(N,c)}$. For given $\hat{x}\in\mathbb{R}^n$, we construct $n$ points $\hat{x}^k\in\mathbb{R}^n$ ($k=1,2,\dots,n$) which are copies of $\hat{x}$ except for $\hat{x}_k^k := \min\left(\hat{x}_k,\ c(N)-\sum_{i\in N\setminus\{k\}}\hat{x}_i\right)$. Note that by construction $\hat{x}^k\le\hat{x}$ and $\hat{x}^k(N)\le c(N)$ hold. We then query a separation oracle of $P_{(N,c)}$ with each $\hat{x}^k$.

Suppose such a query yields $\hat{x}^k(S)>c(S)$ for some $S\subseteq N$. Due to $\hat{x}^k(N)\le c(N)$ we have $S\ne N$. Moreover, $\hat{x}\ge\hat{x}^k$ implies $\hat{x}(S)>c(S)$, and we can return the same violated inequality.

Otherwise, we have $\hat{x}^k\in P_{(N,c)}$ for all $k\in N$ and claim $\hat{x}\in\mathrm{AC}_{(N,c)}$. To prove this claim, assume, for the sake of contradiction, that $\hat{x}(S)>c(S)$ holds for some $S\subsetneq N$. Let $k\in N\setminus S$. Since $\hat{x}_i^k=\hat{x}_i$ holds for all $i\in S$, we have $\hat{x}^k(S)=\hat{x}(S)>c(S)$. This contradicts the fact that $\hat{x}^k\in P_{(N,c)}$ holds.
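The construction in this proof can be sketched in code. Below, the separation oracle for $P_{(N,c)}$ is simply brute force over all coalitions, and the game and query points are our own toy example; the point of the sketch is the $n$ lowered copies $\hat{x}^k$:

```python
from itertools import chain, combinations

def all_coalitions(n):
    """All non-empty subsets of {0, ..., n-1}, as sorted tuples."""
    return chain.from_iterable(combinations(range(n), r) for r in range(1, n + 1))

def separate_P(x, c, n):
    """Brute-force separation oracle for P: return a violated S, or None."""
    for S in all_coalitions(n):
        if sum(x[i] for i in S) > c[S] + 1e-9:
            return S
    return None

def separate_AC(x, c, n):
    """Separation for the almost core via n queries to the oracle for P.
    For each k, coordinate k is lowered just enough that x^k(N) <= c(N)."""
    N = tuple(range(n))
    for k in range(n):
        xk = list(x)
        xk[k] = min(x[k], c[N] - (sum(x) - x[k]))
        S = separate_P(xk, c, n)
        if S is not None:
            assert S != N      # x^k(N) <= c(N) rules out S = N
            return S           # x >= x^k, so x violates x(S) <= c(S) as well
    return None                # x lies in the almost core

# Toy game: every non-empty coalition has cost 1.
n = 3
c = {S: 1.0 for S in all_coalitions(n)}
print(separate_AC([0.5, 0.5, 0.5], c, n))   # None: in the almost core
print(separate_AC([0.9, 0.9, 0.9], c, n))   # (1, 2): a violated pair constraint
```

Note that the second point has $x(N)=2.7>c(N)$, yet the oracle for $P$ is only ever queried with points satisfying $x^k(N)\le c(N)$, exactly as in the proof.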

The result can be generalized further using the same proof technique. For instance, linear optimization over $P_{(N,c)}$ is polynomial-time equivalent to linear optimization over
Q k : = { x n : x ( S ) c ( S ) S N with | S | n k } $$ {Q}^k\kern0.3em := \kern0.3em \left\{x\in {\mathbb{R}}^n:x(S)\le c(S)\kern0.3em \forall S\subseteq N\kern0.3em \mathrm{with}\kern0.3em |S|\le n-k\right\} $$
for constant k $$ k $$ since in this case the separation problem for Q k $$ {Q}^k $$ reduces to 𝒪 ( n k ) $$ \mathcal{O}\left({n}^k\right) $$ queries to separation problems for P ( N , c ) $$ {P}_{\left(N,c\right)} $$ and since, for the reverse direction, 𝒪 ( n k ) $$ \mathcal{O}\left({n}^k\right) $$ missing inequalities must be checked explicitly.
Moreover, it turns out that almost the same result is true when we also require that there are no subsidies, that is, x 0 $$ x\ge 0 $$ . Linking the non-negative core to the non-negative almost core requires an assumption on the characteristic function:
c ( N { k } ) c ( N ) k N . $$ c\left(N\setminus \left\{k\right\}\right)\le c(N)\kern2em \forall k\in N. $$ (3)
This condition holds, for instance, for monotone functions c $$ c $$ , and implies that the core is contained in 0 n $$ {\mathbb{R}}_{\ge 0}^n $$ ; see Lemma 2 and Theorem 1 in [14].

Theorem 3.For a family of games ( N , c ) $$ \left(N,c\right) $$ satisfying (3), linear optimization problems over AC ( N , c ) 0 n $$ {\mathrm{AC}}_{\left(N,c\right)}\cap {\mathbb{R}}_{\ge 0}^n $$ can be solved in polynomial time if and only if linear optimization problems over P ( N , c ) 0 n $$ {P}_{\left(N,c\right)}\cap {\mathbb{R}}_{\ge 0}^n $$ can be solved in polynomial time.

The proof is a rather straightforward extension of that of Theorem 2, additionally making use of condition (3) to guarantee nonnegativity. We obtain an immediate consequence from these two theorems.

Corollary 1.For a family of games ( N , c ) $$ \left(N,c\right) $$ for which c ( · ) $$ c\left(\cdotp \right) $$ is submodular (and (3) holds) one can find a (non-negative) optimal almost core allocation in polynomial time.

Proof.For submodular c ( · ) $$ c\left(\cdotp \right) $$ one can optimize any linear objective function over P ( N , c ) $$ {P}_{\left(N,c\right)} $$ using the Greedy algorithm [15]. The result follows from Theorems 2 and 3.

One might wonder if for submodular c ( · ) $$ c\left(\cdotp \right) $$ , one can find a combinatorial algorithm to compute an optimal almost core allocation, instead of relying on the equivalence of optimization and separation. After all, for submodular c ( · ) $$ c\left(\cdotp \right) $$ , P ( N , c ) $$ {P}_{\left(N,c\right)} $$ is a polymatroid, so AC ( N , c ) $$ {\mathrm{AC}}_{\left(N,c\right)} $$ is a polymatroid minus the single linear constraint x ( N ) c ( N ) $$ x(N)\le c(N) $$ . We leave this as an open problem.

The above results only make statements about optimizing arbitrary objective vectors over these polyhedra. In particular we cannot draw conclusions about hardness of the computation of an optimal almost core allocation, which is maximizing 1 · x $$ \mathbbm{1}\cdotp x $$ over AC ( N , c ) $$ {\mathrm{AC}}_{\left(N,c\right)} $$ . However, it is easy to see that this problem cannot be easier than deciding non-emptiness of the core, as the core of a game ( N , c ) $$ \left(N,c\right) $$ is non-empty if and only if the almost core optimum is at least c ( N ) $$ c(N) $$ . Hence we immediately get the following.

Theorem 4.Consider a family of games ( N , c ) $$ \left(N,c\right) $$ for which deciding (non-)emptiness of the core is NP $$ \mathsf{NP} $$ -hard. Then an efficient algorithm to compute an optimal almost core allocation cannot exist, unless P = NP $$ \mathsf{P}=\mathsf{NP} $$ .

It is well known that there exist games for which it is NP $$ \mathsf{NP} $$ -hard to decide non-emptiness of the core, for example, weighted graph games [13], or unrooted metric traveling salesperson games [9]. Hence, we cannot hope for a polynomial-time algorithm that computes an optimal almost core allocation for arbitrary games.

In contrast, the maximization of x ( N ) $$ x(N) $$ becomes trivial for games ( N , c ) $$ \left(N,c\right) $$ with superadditive characteristic function c ( · ) $$ c\left(\cdotp \right) $$ , as the set of constraints x ( { i } ) c ( { i } ) $$ x\left(\left\{i\right\}\right)\le c\left(\left\{i\right\}\right) $$ , i = 1 , , n $$ i=1,\dots, n $$ , already implies all other constraints x ( S ) c ( S ) $$ x(S)\le c(S) $$ , S N $$ S\subseteq N $$ , and one can simply define x i : = c ( { i } ) $$ {x}_i:= c\left(\left\{i\right\}\right) $$ . In particular, x ( N ) c ( N ) $$ x(N)\le c(N) $$ is implied and P ( N , c ) = AC ( N , c ) $$ {P}_{\left(N,c\right)}={\mathrm{AC}}_{\left(N,c\right)} $$ . Generalizing, the same is true for classes of games where a polynomial number of constraints can be shown to be sufficient to define the complete core. As an example we mention matching games in undirected graphs [27], where the core is described by the polynomially many core constraints induced by all edges of the underlying graph, as these can be shown to imply all other core constraints.

Proposition 1.Whenever P ( N , c ) $$ {P}_{\left(N,c\right)} $$ is described by a polynomial number of inequalities, finding an optimal (almost) core allocation can be done in polynomial time by linear programming.

Note that Proposition 1 includes supermodular cost functions. It is therefore interesting to note that for supermodular cost games, it is NP $$ \mathsf{NP} $$ -hard to approximate the least core value ε s $$ {\varepsilon}_{\mathrm{s}}^{\star } $$ better than a factor 17 / 16 $$ 17/16 $$  [45]. The reason for this discrepancy is the simple fact that the least core is based on the strong ε $$ \varepsilon $$ -core, while the almost core relates to the weak ε $$ \varepsilon $$ -core as per Theorem 1.

It also turns out that condition (3) implies that the value of an almost core allocation cannot exceed that of a core allocation by much.

Proposition 2.Let ( N , c ) $$ \left(N,c\right) $$ be a game that satisfies (3). Then every x AC ( N , c ) $$ x\in {\mathrm{AC}}_{\left(N,c\right)} $$ satisfies

x ( N ) ( 1 + 1 n 1 ) c ( N ) . $$ x(N)\le \left(1+\frac{1}{n-1}\right)c(N). $$

Proof.Let x AC ( N , c ) $$ x\in {\mathrm{AC}}_{\left(N,c\right)} $$ . We obtain

( n 1 ) · x ( N ) = k N x ( N { k } ) k N c ( N { k } ) k N c ( N ) = n · c ( N ) , $$ {\displaystyle \begin{array}{ll}\hfill \left(n-1\right)\cdotp x(N)=& \sum \limits_{k\in N}x\left(N\setminus \left\{k\right\}\right)\\ {}\hfill \le & \sum \limits_{k\in N}c\left(N\setminus \left\{k\right\}\right)\le \sum \limits_{k\in N}c(N)=n\cdotp c(N),\end{array}} $$
where the first inequality follows from feasibility of x $$ x $$ and the second follows from (3).

In case of a nonempty core, condition (3) implies non-negativity for all core allocations and all optimal almost core allocations, as x ( N ) c ( N ) $$ x(N)\ge c(N) $$ , so x i c ( N ) c ( N { i } ) 0 $$ {x}_i\ge c(N)-c\left(N\setminus \left\{i\right\}\right)\ge 0 $$ for all i N $$ i\in N $$ . However, this does not mean that a non-negativity requirement implies that the almost core optimum is close to c ( N ) $$ c(N) $$ . In the next section, we will see that this gap can be arbitrarily large (see Proposition 3).

5 MINIMUM COST SPANNING TREE GAMES AND APPROXIMATION

In this section, we address a well known special class of games known as minimum cost spanning tree (MST) games [5, 11, 22]. In MST games, the agents are nodes in an edge-weighted undirected graph G $$ G $$ , and the cost of the outside option for a set of agents S $$ S $$ is determined by the cost of a minimum cost spanning tree in the subgraph induced by these agents. MST games are known to have a non-empty core. Moreover, it is known that finding some element in the core is computationally easy and can be done by computing a minimum cost spanning tree [22]. The optimization problem that we address asks for the maximal amount that can be charged to the agents while no proper subset of agents would prefer the outside option.

While in general, maximizing shareable costs is the same as asking for the maximum value c ( N ) $$ c(N) $$ that still yields a non-empty core, the question may appear a bit artificial for minimum cost spanning tree games, as there, the value c ( N ) $$ c(N) $$ is computed as the minimum cost of a spanning tree for all agents N $$ N $$ . From a practical viewpoint this can be motivated by assuming there are exogenous physical or legal restrictions that prevent the grand coalition N $$ N $$ from forming, so that the player set N $$ N $$ has no bargaining power. This problem can be seen as a special form of restricted coalition formation. As a matter of fact, cooperative game theory with restrictions on coalition formation has a rich history, see, for example, [30] for several references, and has been pioneered by Myerson [40], where the possibility of coalition formation is modeled by connectedness in a given communication graph. More generally, one may define a (downward-closed) set system that describes all those subsets of agents that are able to cooperate and hence have access to an outside option, while all other subsets do not have that option. The almost core then arises as the special case where the set system is the ( n 1 ) $$ \left(n-1\right) $$ -uniform matroid, that is, all subsets of agents except the grand coalition N $$ N $$ have an outside option.

Apart from that, there is a more theoretical perspective that motivates studying the almost core for MST games. One can easily see that for MST games the cost function c ( · ) $$ c\left(\cdotp \right) $$ is subadditive, yet it is not submodular in general; see [28] for a characterization when it is. Recalling that the computation of maximum shareable costs can be done in polynomial time when c ( · ) $$ c\left(\cdotp \right) $$ is submodular, it is a natural question to ask if this still holds for subadditive cost functions. In that respect, note that the weighted graph games as studied in [13] have polynomial-time algorithms to decide non-emptiness of the core whenever c ( · ) $$ c\left(\cdotp \right) $$ is subadditive. Hence Theorem 4 applied to weighted graph games does not give an answer to this question. Also the results of [10] yield that there exist subadditive cost games for which the computation of the price of stability, and hence of the almost core optimum, is hard, yet that result also relies on hardness of the problem to decide non-emptiness of the core. We next show that even for MST games with monotone cost function c ( · ) $$ c\left(\cdotp \right) $$ , despite always having a non-empty core, maximizing shareable costs cannot be done efficiently unless P = NP $$ \mathsf{P}=\mathsf{NP} $$ .

5.1 Preliminaries

Let us first formally define the problem and recall what is known. We are given an edge-weighted, undirected graph G = ( N { 0 } , E ) $$ G=\left(N\cup \left\{0\right\},E\right) $$ with non-negative edge weights w : E 0 $$ w:E\to {\mathbb{R}}_{\ge 0} $$ , where node 0 is a special node referred to as “supplier” node. Without loss of generality we may assume that the graph is complete by adding dummy edges with large enough cost. The agents of the game are the vertices N $$ N $$ of the graph, and the characteristic function of the game is given by minimum cost spanning trees. That is, the cost of any subset of vertices S N $$ S\subseteq N $$ is defined as the cost of a minimum cost spanning tree on the subgraph induced by vertex set S { 0 } $$ S\cup \left\{0\right\} $$ . So if we let 𝒯 ( S ) be the set of spanning trees for the subgraph induced by vertex set S { 0 } $$ S\cup \left\{0\right\} $$ , then the characteristic function is defined as:
c ( S ) : = min T 𝒯 ( S ) { w ( T ) } . $$ c(S)\kern0.3em := \kern0.3em {\min}_{T\in \mathcal{T}(S)}\left\{w(T)\right\}. $$
Following [22], the associated monotonized minimum cost spanning tree game ( N , c ) $$ \left(N,\overline{c}\right) $$ is obtained by defining the characteristic function using the monotonized cost function c ( S ) : = min R S c ( R ) . $$ \overline{c}(S)\kern0.3em := \kern0.3em {\min}_{R\supseteq S}c(R). $$ This is motivated by assuming that agents can also use other agents as “Steiner nodes.” Indeed, note that c ( S ) c ( R ) $$ \overline{c}(S)\le \overline{c}(R) $$ for S R $$ S\subseteq R $$ , and for the associated cores of these two games, we have that C ( N , c ) C ( N , c ) $$ {C}_{\left(N,\overline{c}\right)}\subseteq {C}_{\left(N,c\right)} $$ . Moreover, it is well known that the core of both games is non-empty, and a core allocation x C ( N , c ) $$ x\in {C}_{\left(N,\overline{c}\right)} $$ is obtained in polynomial time by just one minimum cost spanning tree computation: if T $$ T $$ is some MST, let e v T $$ {e}_v\in T $$ be the edge incident with v $$ v $$ on the unique path from v $$ v $$ to the supplier node 0 in T $$ T $$ , then letting
x v : = w ( e v ) , $$ {x}_v\kern0.3em := \kern0.3em w\left({e}_v\right), $$
one gets an element x $$ x $$ in the core of the monotonized minimum cost spanning tree game ( N , c ) $$ \left(N,\overline{c}\right) $$  [22], and hence also a core element for the game ( N , c ) $$ \left(N,c\right) $$ . One convenient way of thinking about this core allocation is a run of Prim's algorithm to compute minimum cost spanning trees [43]: starting to build the tree with vertex 0, whenever a new vertex v $$ v $$ is added to the spanning tree constructed so far, v $$ v $$ gets charged the cost of the edge e v $$ {e}_v $$ that connects v $$ v $$ .
In summary, computing some core allocation can be done efficiently, while linear optimization over the core of MST games is co-NP-hard (under Turing reductions) [18]. We are interested in the same questions, but for the case that the budget balance constraint is absent. So we seek solutions to the almost core maximization problem
max x ( N ) s . t . x AC ( N , c ) , $$ \max x(N)\kern0.3em s.t.\kern0.3em x\in {\mathrm{AC}}_{\left(N,c\right)}, $$ (4)
when c ( · ) $$ c\left(\cdotp \right) $$ is the characteristic function defined by minimum cost spanning trees. The interpretation of the lacking constraint x ( N ) = c ( N ) $$ x(N)=c(N) $$ is that the grand coalition cannot establish the solution with cost c ( N ) $$ c(N) $$ on its own, say by legal restrictions.

5.2 Computational complexity

As a first result, and not surprisingly, linear optimization over the almost core is hard for MST games.

Corollary 2.For minimum cost spanning tree games ( N , c ) $$ \left(N,c\right) $$ , a polynomial-time algorithm for linear optimization over AC ( N , c ) $$ {\mathrm{AC}}_{\left(N,c\right)} $$ would yield P = $$ = $$ NP.

Proof. The result follows from Theorem 2 and the fact that the membership problem for the core of ( N , c ) $$ \left(N,c\right) $$ is co- NP $$ \mathsf{NP} $$ -hard for MST games [18].

What is more interesting is that optimizing 1 · x $$ \mathbbm{1}\cdotp x $$ over the almost core remains hard for MST games.

Theorem 5.Computing an optimal almost core allocation in (4) for minimum cost spanning tree games is NP $$ \mathsf{NP} $$ -hard, and this is also true for monotonized minimum cost spanning tree games ( N , c ) $$ \left(N,\overline{c}\right) $$ .

Proof.Let ε $$ {\varepsilon}^{\star } $$ be the largest ε $$ \varepsilon $$ for which the linear inequality system

x ( S ) ( 1 ε ) c ( S ) S N , x ( N ) = c ( N ) $$ x(S)\le \left(1-\varepsilon \right)c(S)\kern0.3em \forall S\subsetneq N,\kern1em x(N)=c(N) $$ (5)
has a solution. In [19] it is shown that finding a feasible solution x $$ x $$ for (5) with respect to ε $$ {\varepsilon}^{\star } $$ is NP $$ \mathsf{NP} $$ -hard. Note that in the reduction leading to this hardness result, c ( N ) > 0 $$ c(N)>0 $$ . Then, given an optimum almost core allocation x AC $$ {x}^{\mathrm{AC}} $$ , x AC ( N ) c ( N ) > 0 $$ {x}^{\mathrm{AC}}(N)\ge c(N)>0 $$ , and we can obtain ε : = 1 c ( N ) / x AC ( N ) $$ {\varepsilon}^{\star}:= 1-c(N)/{x}^{\mathrm{AC}}(N) $$ . It is now easy to see that the vector x : = ( 1 ε ) x AC $$ {x}^{\prime}:= \left(1-{\varepsilon}^{\star}\right){x}^{\mathrm{AC}} $$ is a feasible solution for (5). To see that the so-defined ε $$ {\varepsilon}^{\star } $$ is indeed maximal, observe that scaling any feasible vector in (5) by 1 / ( 1 ε ) $$ 1/\left(1-{\varepsilon}^{\star}\right) $$ yields an almost core allocation. Hence, computation of an almost core optimum for MST games yields a solution for an NP $$ \mathsf{NP} $$ -hard problem. To see the last claim about monotonized minimum cost spanning tree games, observe that the underlying reduction from the NP $$ \mathsf{NP} $$ -hard minimum cover problem in [19] yields a minimum cost spanning tree game that has a monotone cost function c ( · ) $$ c\left(\cdotp \right) $$ by definition.

Next, we note that in general, the almost core optimum may be arbitrarily larger than c ( N ) $$ c(N) $$ for MST games. This is remarkable in view of Proposition 2, which shows that under condition (3), any core allocation yields a good approximation for an optimal almost core allocation, as they differ by a factor at most n / ( n 1 ) $$ n/\left(n-1\right) $$ . A fortiori, the same holds for the monotonized MST games ( N , c ) $$ \left(N,\overline{c}\right) $$ . For general MST games ( N , c ) $$ \left(N,c\right) $$ , and without condition (3), this gap can be large.

Proposition 3.The almost core optimum can be arbitrarily larger than c ( N ) $$ c(N) $$ , even for minimum cost spanning tree games and when we require that x 0 $$ x\ge 0 $$ .

Proof.Consider the instance depicted in Figure 1A, for some value k > 0 $$ k>0 $$ . Then c ( N ) = 0 $$ c(N)=0 $$ , while x = ( 0 , 0 , k ) $$ x=\left(0,0,k\right) $$ is an optimal non-negative almost core allocation with value k $$ k $$ .

FIGURE 1 Two MST games with n = 3 $$ n=3 $$ players for insights into optimal almost core solutions. (A) MST game with unbounded relative gap between c ( N ) $$ c(N) $$ and almost core optimum. (B) MST game where subsidies are necessary for the almost core optimum.
In the following we consider problem (4) but with the added constraint that x 0 $$ x\ge 0 $$ .
max x ( N ) s . t . x AC ( N , c ) , and x 0 . $$ \max x(N)\kern0.3em s.t.\kern0.3em x\in {\mathrm{AC}}_{\left(N,c\right)},\mathrm{and}\kern0.3em x\ge 0. $$ (6)
The presence of the constraint x 0 $$ x\ge 0 $$ means that agents must not be subsidized. As can be seen from the example in Figure 1B, such subsidies may indeed be necessary in the optimal solution to (4), as there, the only optimal solution uses cost shares ( k , k , k ) $$ \left(-k,k,k\right) $$ . However, we next show that such subsidies are in some sense an artifact of “small edge costs” in graph G $$ G $$ , and for optimization purposes can be neglected in the following sense.

Theorem 6.Every instance of the almost core optimization problem (4) can be reduced in polynomial time to an instance of problem (6). Consequently, problem (6) is also NP $$ \mathsf{NP} $$ -hard for minimum cost spanning tree games.

For monotonized MST games, NP $$ \mathsf{NP} $$ -hardness of (6) also follows directly from Theorem 5: the core is nonempty [22], so any optimal solution x $$ x $$ to (6) has x ( N ) c ( N ) $$ x(N)\ge c(N) $$ , and hence the constraint x 0 $$ x\ge 0 $$ is redundant in (6), because x i c ( N ) c ( N { i } ) 0 $$ {x}_i\ge c(N)-c\left(N\setminus \left\{i\right\}\right)\ge 0 $$ for all i N $$ i\in N $$ .

Proof.Given an instance of (4) with edge costs w $$ w $$ , define new edge costs w ( e ) : = w ( e ) + M $$ {w}^{\prime }(e):= w(e)+M $$ , e E $$ e\in E $$ , for large enough constant M $$ M $$ to be defined later. Note that c ( S ) = c ( S ) + | S | · M $$ {c}^{\prime }(S)=c(S)+\mid S\mid \cdotp M $$ for S N $$ S\subseteq N $$ . We argue that this renders x 0 $$ x\ge 0 $$ redundant. Consider an optimal solution x $$ {x}^{\prime } $$ to problem (6) for edge costs w $$ {w}^{\prime } $$ , and define x : = x ( M , , M ) $$ x:= {x}^{\prime }-\left(M,\dots, M\right) $$ . Now we have x ( S ) = x ( S ) | S | · M c ( S ) | S | · M = c ( S ) $$ x(S)={x}^{\prime }(S)-\mid S\mid \cdotp M\le {c}^{\prime }(S)-\mid S\mid \cdotp M=c(S) $$ for all S N $$ S\subsetneq N $$ , so x $$ x $$ is feasible for problem (4) with edge costs w $$ w $$ . We show that x $$ x $$ must be optimal for problem (4). Considering any solution y $$ y $$ optimal for (4), there exists a number M $$ M $$ that can be computed in polynomial time so that y : = y + ( M , , M ) 0 $$ {y}^{\prime}:= y+\left(M,\dots, M\right)\ge 0 $$ , and y ( S ) = y ( S ) + | S | · M c ( S ) + | S | · M = c ( S ) $$ {y}^{\prime }(S)=y(S)+\mid S\mid \cdotp M\le c(S)+\mid S\mid \cdotp M={c}^{\prime }(S) $$ , so y $$ {y}^{\prime } $$ is feasible for (6) with cost function w $$ {w}^{\prime } $$ . Hence y ( N ) > x ( N ) $$ y(N)>x(N) $$ yields the contradiction y ( N ) > x ( N ) $$ {y}^{\prime }(N)>{x}^{\prime }(N) $$ . 
To argue about M $$ M $$ , observe that in (4) we maximize x ( N ) $$ x(N) $$ , hence for any optimal solution y $$ y $$ in (4), and any i N $$ i\in N $$ there exists S i $$ S\ni i $$ so that y ( S ) = c ( S ) $$ y(S)=c(S) $$ , hence y i = c ( S ) j S , j i y j j N c ( { j } ) $$ {y}_i=c(S)-{\sum}_{j\in S,j\ne i}{y}_j\ge -{\sum}_{j\in N}c\left(\left\{j\right\}\right) $$ , where the last inequality holds because c ( S ) 0 $$ c(S)\ge 0 $$ , y j c ( { j } ) $$ {y}_j\le c\left(\left\{j\right\}\right) $$ for all j N $$ j\in N $$ , and c ( { j } ) = w ( { 0 , j } ) 0 $$ c\left(\left\{j\right\}\right)=w\left(\left\{0,j\right\}\right)\ge 0 $$ . In other words, letting M : = j N c ( { j } ) $$ M\kern0.3em := \kern0.3em {\sum}_{j\in N}c\left(\left\{j\right\}\right) $$ suffices so that y i M $$ {y}_i\ge -M $$ for all i N $$ i\in N $$ , and hence y 0 $$ {y}^{\prime}\ge 0 $$ as required.
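The reduction in this proof is a simple uniform shift and can be sketched directly on the characteristic function (as in the remark that follows). The dictionary representation and the toy three-player game below are our own illustrative assumptions.

```python
def shift_reduction(c_dict, n):
    """Shift of Theorem 6: with M = sum_j c({j}) and
    c'(S) = c(S) + |S| * M, a vector x is feasible for (4) with c
    exactly when x + M * 1 is feasible (and non-negative) for (6)
    with c', so the constraint x >= 0 becomes redundant."""
    M = sum(c_dict[frozenset({j})] for j in range(1, n + 1))
    c_shift = {S: v + len(S) * M for S, v in c_dict.items()}
    to_shifted = lambda x: [xi + M for xi in x]     # x  -> x'
    from_shifted = lambda x: [xi - M for xi in x]   # x' -> x
    return c_shift, M, to_shifted, from_shifted

# toy 3-player game, coalitions as frozensets over {1, 2, 3}
c = {frozenset(s): v for s, v in
     [({1}, 1), ({2}, 1), ({3}, 2),
      ({1, 2}, 2), ({1, 3}, 2), ({2, 3}, 3)]}
c_shift, M, to_shifted, _ = shift_reduction(c, 3)
```

Here M = 4 $$ M=4 $$ , and the allocation ( 1 , 1 , 2 ) $$ \left(-1,1,2\right) $$ , which needs a subsidy in the original game, maps to the non-negative allocation ( 3 , 5 , 6 ) $$ \left(3,5,6\right) $$ that is feasible for the shifted game.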

Remark. The above reduction of computing arbitrary allocations to computing non-negative allocations generalizes to all cost sharing games ( N , c ) $$ \left(N,c\right) $$ by defining c ( S ) : = c ( S ) + | S | · M $$ {c}^{\prime }(S)\kern0.3em := \kern0.3em c(S)+\mid S\mid \cdotp M $$ for all subsets S N $$ S\subsetneq N $$ .

5.3 Two-approximation algorithm

We next propose the following polynomial time algorithm to compute an approximately optimal almost core allocation for problem (6). For notational convenience, let us define for all K N $$ K\subset N $$
N K : = N K , $$ {N}_{-K}\kern0.3em := \kern0.3em N\setminus K, $$
and write N i $$ {N}_{-i} $$ instead of N { i } $$ {N}_{-\left\{i\right\}} $$ .

Algorithm 1. Approximation algorithm for the almost core maximization problem (6) for MST games

The backbone of Algorithm 1 is effectively Prim's algorithm to compute a minimum cost spanning tree [43]. The additional line 5 yields the core allocation by Granot and Huberman [22], which we extend by adding lines 7 and 8, where we effectively maximize the last agent's cost share subject to the constraints that x ( S ) c ( S ) $$ x(S)\le c(S) $$ for all S N $$ S\subsetneq N $$ . Note that for all previous agents, by definition of Prim's algorithm, their cost share is limited by a tight constraint x ( S ) = c ( S ) $$ x(S)=c(S) $$ for some S N $$ S\subsetneq N $$ .
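Based on this description, the following Python sketch is one possible reconstruction of Algorithm 1; the helper for computing c ( S ) $$ c(S) $$ by running Prim's algorithm on the induced subgraph, and the concrete instance, are our own illustrative assumptions, not the authors' listing. By Lemma 2, restricting the final maximization to the coalitions N k $$ {N}_{-k} $$ suffices.

```python
def mst_cost(vertices, w):
    """c(S): minimum spanning tree cost on S united with {0} (Prim)."""
    def weight(u, v):
        return w[(min(u, v), max(u, v))]
    best = {v: weight(0, v) for v in vertices}
    total = 0
    while best:
        v = min(best, key=best.get)
        total += best.pop(v)
        for u in best:
            best[u] = min(best[u], weight(u, v))
    return total

def almost_core_2apx(n, w):
    """Reconstruction of Algorithm 1: Granot-Huberman shares via Prim's
    algorithm in connection order, then raise the last agent's share
    as far as the constraints x(N_{-k}) <= c(N_{-k}) allow."""
    def weight(u, v):
        return w[(min(u, v), max(u, v))]
    x, order = {}, []
    best = {v: weight(0, v) for v in range(1, n + 1)}
    while best:                       # Prim's algorithm from supplier 0
        v = min(best, key=best.get)
        x[v] = best.pop(v)            # line 5: charge w(e_v)
        order.append(v)
        for u in best:
            best[u] = min(best[u], weight(u, v))
    last = order[-1]                  # lines 7-8: maximize last share
    others = [v for v in range(1, n + 1) if v != last]
    x[last] = min(mst_cost([v for v in range(1, n + 1) if v != k], w)
                  - sum(x[v] for v in others if v != k)
                  for k in others)
    return [x[v] for v in range(1, n + 1)]

w = {(0, 1): 1, (0, 2): 2, (0, 3): 3,
     (1, 2): 1, (1, 3): 2, (2, 3): 1}
```

On this instance Prim's run charges ( 1 , 1 , 1 ) $$ \left(1,1,1\right) $$ , and the final step raises agent 3's share to 2, giving total shareable costs 4 against c ( N ) = 3 $$ c(N)=3 $$ .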

Let us first collect some basic properties of Algorithm 1. Henceforth, we assume w.l.o.g. that the agents get assigned their cost shares in the order 1 , , n $$ 1,\dots, n $$ (so that = n $$ \ell =n $$ in lines 7 and 8). We denote by x ALG $$ {x}^{\mathrm{ALG}} $$ a solution computed by Algorithm 1.

Lemma 1.We have that x ALG ( I k ) = c ( I k ) $$ {x}^{\mathrm{ALG}}\left({I}_k\right)=c\left({I}_k\right) $$ for all k = 1 , , n 1 $$ k=1,\dots, n-1 $$ , and for all S { 1 , , n 1 } $$ S\subseteq \left\{1,\dots, n-1\right\} $$ we have x ALG ( S ) c ( S ) $$ {x}^{\mathrm{ALG}}(S)\le c(S) $$ .

Proof. The first claim follows directly because Algorithm 1 equals Prim's algorithm to compute a minimum cost spanning tree on the vertex set { 0 , , n 1 } $$ \left\{0,\dots, n-1\right\} $$ , and x ALG ( I k ) $$ {x}^{\mathrm{ALG}}\left({I}_k\right) $$ equals the cost of the minimum cost spanning tree on vertex set { 0 , 1 , , k } $$ \left\{0,1,\dots, k\right\} $$ . Hence by correctness of Prim's algorithm [43], x ALG ( I k ) = c ( I k ) $$ {x}^{\mathrm{ALG}}\left({I}_k\right)=c\left({I}_k\right) $$ . The second claim follows by [22, Theorem 3], since the cost allocation for agents { 1 , , n 1 } $$ \left\{1,\dots, n-1\right\} $$ is the same as in [22].

Lemma 2.Suppose x ALG ( S ) > c ( S ) $$ {x}^{\mathrm{ALG}}(S)>c(S) $$ for some set S $$ S $$ with n S N $$ n\in S\subsetneq N $$ . Then there is a superset T S $$ T\supseteq S $$ with | T | = n 1 $$ \mid T\mid =n-1 $$ such that x ALG ( T ) > c ( T ) $$ {x}^{\mathrm{ALG}}(T)>c(T) $$ .

Proof.Recall the agents got assigned their cost shares in order 1 , , n $$ 1,\dots, n $$ . Define k : = max { i | i S } $$ k\kern0.3em := \kern0.3em \max \left\{i\kern0.3em |\kern0.3em i\notin S\right\} $$ to be the largest index of an agent not in S $$ S $$ . Let i 1 , , i $$ {i}_1,\dots, {i}_{\ell } $$ be the set of agents so that N k = S { i 1 , , i } $$ {N}_{-k}=S\cup \left\{{i}_1,\dots, {i}_{\ell}\right\} $$ and w.l.o.g.  i 1 < < i $$ {i}_1<\cdots <{i}_{\ell } $$ . We show that x ALG ( S ) > c ( S ) $$ {x}^{\mathrm{ALG}}(S)>c(S) $$ implies x ALG ( S { i 1 } ) > c ( S { i 1 } ) $$ {x}^{\mathrm{ALG}}\left(S\cup \left\{{i}_1\right\}\right)>c\left(S\cup \left\{{i}_1\right\}\right) $$ . Then repeating the same argument, we inductively arrive at the conclusion that x ALG ( N k ) > c ( N k ) $$ {x}^{\mathrm{ALG}}\left({N}_{-k}\right)>c\left({N}_{-k}\right) $$ . So observe that

x ALG ( S { i 1 } ) = x ALG ( S ) + x i 1 > c ( S ) + x i 1 , $$ {x}^{\mathrm{ALG}}\left(S\cup \left\{{i}_1\right\}\right)={x}^{\mathrm{ALG}}(S)+{x}_{i_1}>c(S)+{x}_{i_1}, $$
and c ( S ) $$ c(S) $$ is the cost of a minimum cost spanning tree for S $$ S $$ , call it MST ( S ) $$ \mathrm{MST}(S) $$ . Moreover, as i 1 n $$ {i}_1\ne n $$ , x i 1 $$ {x}_{i_1} $$ is the cost of the edge, call it e $$ e $$ , that the algorithm used to connect agent i 1 $$ {i}_1 $$ . We claim that MST ( S ) { e } $$ \mathrm{MST}(S)\cup \left\{e\right\} $$ is a tree spanning vertices S { 0 , i 1 } $$ S\cup \left\{0,{i}_1\right\} $$ , hence c ( S ) + x i 1 $$ c(S)+{x}_{i_1} $$ is the cost of some tree spanning S { 0 , i 1 } $$ S\cup \left\{0,{i}_1\right\} $$ . Then, as required we get
x ALG ( S { i 1 } ) > c ( S ) + x i 1 c ( S { i 1 } ) , $$ {x}^{\mathrm{ALG}}\left(S\cup \left\{{i}_1\right\}\right)>c(S)+{x}_{i_1}\ge c\left(S\cup \left\{{i}_1\right\}\right), $$
because c ( S { i 1 } ) $$ c\left(S\cup \left\{{i}_1\right\}\right) $$ is the cost of a minimum cost tree spanning S { 0 , i 1 } $$ S\cup \left\{0,{i}_1\right\} $$ . If MST ( S ) { e } $$ \mathrm{MST}(S)\cup \left\{e\right\} $$ was not a spanning tree for vertices S { 0 , i 1 } $$ S\cup \left\{0,{i}_1\right\} $$ , then edge e $$ e $$ would connect i 1 $$ {i}_1 $$ to some vertex outside S { 0 } $$ S\cup \left\{0\right\} $$ , but this contradicts the choice of i 1 $$ {i}_1 $$ as the vertex outside S $$ S $$ with smallest index.

Lemma 3.We have x ALG 0 $$ {x}^{\mathrm{ALG}}\ge 0 $$ .

Proof. Recall that in minimum cost spanning tree games [11, 22], the edge weights are non-negative. Since Algorithm 1 computes the allocation for agents in line 5 by the weight of the first edge on the unique path to 0, we have x k ALG 0 $$ {x}_k^{\mathrm{ALG}}\ge 0 $$ for all k = 1 , 2 , , n 1 $$ k=1,2,\dots, n-1 $$ . So we only need to argue about x n ALG $$ {x}_n^{\mathrm{ALG}} $$ . To that end, note that an equivalent definition of x n ALG $$ {x}_n^{\mathrm{ALG}} $$ in line 8 of the algorithm is

max. x n s.t. x n c ( N k ) x ALG ( N { k , n } ) for all k = 1 , , n 1 . $$ \max .\kern0.3em {x}_n\kern0.3em \mathrm{s}.\mathrm{t}.\kern0.3em {x}_n\le c\left({N}_{-k}\right)-{x}^{\mathrm{ALG}}\left(N\setminus \left\{k,n\right\}\right)\kern0.3em \mathrm{for}\ \mathrm{all}\kern0.3em k=1,\dots, n-1. $$ (7)
We claim that x ˜ n : = c ( N ) c ( N n ) 0 $$ {\tilde{x}}_n\kern0.3em := \kern0.3em c(N)-c\left({N}_{-n}\right)\ge 0 $$ is a feasible solution to this maximization problem, hence the actual value of x n ALG $$ {x}_n^{\mathrm{ALG}} $$ after the update in line 8 can only be larger, and therefore in particular it is non-negative. First, note that indeed, x ˜ n 0 $$ {\tilde{x}}_n\ge 0 $$ , as this is the cost of the last edge that Prim's algorithm uses to connect the final vertex n $$ n $$ to the minimum cost spanning tree. That x ˜ n $$ {\tilde{x}}_n $$ is feasible in (7) follows from the fact that x ˜ n $$ {\tilde{x}}_n $$ is the cost share that is assigned to agent n $$ n $$ in the core allocation of [22]. Indeed, letting x ˜ $$ \tilde{x} $$ be equal to x ALG $$ {x}^{\mathrm{ALG}} $$ except for x ˜ n = c ( N ) c ( N n ) $$ {\tilde{x}}_n=c(N)-c\left({N}_{-n}\right) $$ , we have that x ˜ $$ \tilde{x} $$ is precisely the cost allocation as proposed in [22]. By the fact that this yields a core allocation, we have that x ˜ ( S ) c ( S ) $$ \tilde{x}(S)\le c(S) $$ for all S N $$ S\subseteq N $$ , so in particular for all k = 1 , , n 1 $$ k=1,\dots, n-1 $$ ,
x ˜ n + x ALG ( N { k , n } ) = x ˜ ( N k ) c ( N k ) , $$ {\tilde{x}}_n+{x}^{\mathrm{ALG}}\left(N\setminus \left\{k,n\right\}\right)=\tilde{x}\left({N}_{-k}\right)\le c\left({N}_{-k}\right), $$
and hence the claim follows.

Theorem 7.Algorithm 1 is a 2-approximation for the almost core maximization problem (6) for minimum cost spanning tree games, and this performance bound is tight for Algorithm 1.

Proof. Denote by x ALG $$ {x}^{\mathrm{ALG}} $$ a solution by Algorithm 1. We first argue that Algorithm 1 yields a feasible solution. Consider any S N $$ S\subsetneq N $$ . For S n $$ S\not\ni n $$ , feasibility of x ALG $$ {x}^{\mathrm{ALG}} $$ follows from Lemma 1. For S n $$ S\ni n $$ , assume x ALG ( S ) > c ( S ) $$ {x}^{\mathrm{ALG}}(S)>c(S) $$ . Then Lemma 2 yields that there exists some N k n $$ {N}_{-k}\ni n $$ with x ALG ( N k ) > c ( N k ) $$ {x}^{\mathrm{ALG}}\left({N}_{-k}\right)>c\left({N}_{-k}\right) $$ . However by definition of x n ALG $$ {x}_n^{\mathrm{ALG}} $$ in line 8 of the algorithm, we have for all k = 1 , , n 1 $$ k=1,\dots, n-1 $$

x n ALG c ( N k ) x ALG ( N { k , n } ) , $$ {x}_n^{\mathrm{ALG}}\le c\left({N}_{-k}\right)-{x}^{\mathrm{ALG}}\left(N\setminus \left\{k,n\right\}\right), $$
which yields a contradiction to x ALG ( N k ) > c ( N k ) $$ {x}^{\mathrm{ALG}}\left({N}_{-k}\right)>c\left({N}_{-k}\right) $$ .

To show that the performance guarantee is indeed 2, let x OPT $$ {x}^{\mathrm{OPT}} $$ be some optimal solution to the almost core maximization problem. Let k N n $$ {k}^{\star}\in {N}_{-n} $$ be the index for which the minimum in line 8 is attained. Observe that x n ALG $$ {x}_n^{\mathrm{ALG}} $$ is updated such that x ALG ( N k ) = c ( N k ) $$ {x}^{\mathrm{ALG}}\left({N}_{-{k}^{\star }}\right)=c\left({N}_{-{k}^{\star }}\right) $$ holds. Then by non-negativity of x OPT $$ {x}^{\mathrm{OPT}} $$ and because of Lemma 3,

x n OPT x OPT ( N k ) c ( N k ) = x ALG ( N k ) x ALG ( N ) . $$ {x}_n^{\mathrm{OPT}}\le {x}^{\mathrm{OPT}}\left({N}_{-{k}^{\star }}\right)\le c\left({N}_{-{k}^{\star }}\right)={x}^{\mathrm{ALG}}\left({N}_{-{k}^{\star }}\right)\le {x}^{\mathrm{ALG}}(N). $$
Moreover, by definition of x ALG $$ {x}^{\mathrm{ALG}} $$ , we have x ALG ( N n ) = c ( N n ) $$ {x}^{\mathrm{ALG}}\left({N}_{-n}\right)=c\left({N}_{-n}\right) $$ , and by Lemma 3,
x OPT ( N n ) c ( N n ) = x ALG ( N n ) x ALG ( N ) . $$ {x}^{\mathrm{OPT}}\left({N}_{-n}\right)\le c\left({N}_{-n}\right)={x}^{\mathrm{ALG}}\left({N}_{-n}\right)\le {x}^{\mathrm{ALG}}(N). $$
Hence we get x OPT ( N ) = x n OPT + x OPT ( N n ) 2 x ALG ( N ) $$ {x}^{\mathrm{OPT}}(N)={x}_n^{\mathrm{OPT}}+{x}^{\mathrm{OPT}}\left({N}_{-n}\right)\le 2{x}^{\mathrm{ALG}}(N) $$ .

To see that the performance bound 2 is tight for Algorithm 1, consider the instance in Figure 2A.

Here, Algorithm 1 computes the solution x ALG = ( 1 , 0 , ε ) $$ {x}^{\mathrm{ALG}}=\left(1,0,\varepsilon \right) $$ with value 1 + ε $$ 1+\varepsilon $$ , as the order in which agents get assigned their cost shares is 1, 2, 3, and in line 8 of the algorithm we get x 3 ALG = c ( { 1 , 3 } ) x 1 = ( 1 + ε ) 1 = ε $$ {x}_3^{\mathrm{ALG}}=c\left(\left\{1,3\right\}\right)-{x}_1=\left(1+\varepsilon \right)-1=\varepsilon $$ . An almost core optimum solution would be x OPT = ( 0 , 1 , 1 ) $$ {x}^{\mathrm{OPT}}=\left(0,1,1\right) $$ with value 2.

Figure 2: Two MST games with $n = 3$ players for the analysis of Algorithm 1. (A) MST game showing that the analysis of Algorithm 1 cannot be improved. (B) MST game showing that Algorithm 1 need not compute an element of $\mathrm{AC}_{(N,\overline{c})}$.

Even though Theorem 6 suggests that the non-negativity requirement $x \ge 0$ is irrelevant for optimization, it is important for Theorem 7. Without it, that is, allowing $x_i < 0$ for some agents $i$, the above algorithm provides no approximation guarantee in general. To see this, consider again the instance in Figure 1B, and observe that Algorithm 1 yields the cost allocation $x^{\mathrm{ALG}} = (0, 0, 0)$, while $x = (-k, k, k)$ is a feasible solution for the almost core.

It remains to remark that Algorithm 1 does not, in general, compute an allocation in the almost core of the corresponding monotonized game $(N, \overline{c})$, as can be seen from the instance in Figure 2B. Here, we have $c(S) = 1$ for all $S \subseteq N$ except for $c(\{2,3\}) = 2$. An optimal almost core allocation is $x = (0, 1, 1)$, and depending on how ties are broken, Algorithm 1 yields $x^{\mathrm{ALG}} = (0, 1, 1)$ or $x^{\mathrm{ALG}} = (1, 0, 0)$. The monotonized game has $\overline{c}(S) = 1$ for all $S \subseteq N$, and there an optimal almost core allocation is $\overline{x} = (\frac{1}{2}, \frac{1}{2}, \frac{1}{2})$. Note that this example also shows that Proposition 2 is tight (for $n = 3$), as $c(N) = 1$.
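The claims about the Figure 2B instance can be verified by brute force over all proper coalitions. The following is a minimal sketch, not part of the algorithmic results of this article; players are 0-indexed, so coalition $\{2,3\}$ becomes $\{1,2\}$, and the helper name `in_almost_core` is ours:

```python
from itertools import combinations

def proper_subsets(n):
    """All non-empty proper subsets of {0, ..., n-1}."""
    for size in range(1, n):
        yield from combinations(range(n), size)

def in_almost_core(x, c, n):
    """Check x >= 0 and x(S) <= c(S) for every proper coalition S."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(x[i] for i in S) <= c(frozenset(S)) for S in proper_subsets(n))

n = 3
# Figure 2B cost function: c(S) = 1 for all S, except c({2,3}) = 2
# (0-indexed: coalition {1,2}).
c = lambda S: 2 if S == frozenset({1, 2}) else 1
# Monotonized game: c_bar(S) = 1 for all S.
c_bar = lambda S: 1

x_opt = (0, 1, 1)        # optimal almost core allocation for c, value 2
x_alg_alt = (1, 0, 0)    # alternative output of Algorithm 1, value 1
x_bar = (0.5, 0.5, 0.5)  # optimal almost core allocation for c_bar, value 3/2

print(in_almost_core(x_opt, c, n))      # True
print(in_almost_core(x_alg_alt, c, n))  # True
print(in_almost_core(x_opt, c_bar, n))  # False: x({2,3}) = 2 > 1
print(in_almost_core(x_bar, c_bar, n))  # True
```

The third check makes the point of the example explicit: the output $(0,1,1)$ of Algorithm 1 violates the coalition constraint of $\{2,3\}$ in the monotonized game.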

6 CONCLUSIONS

In the literature, one also finds minimum cost spanning tree games defined as profit sharing games, where the value of a coalition $S$ is the cost savings that it can realize in comparison to the situation where all agents in $S$ connect directly to the source,
$$ v(S) := \sum_{i \in S} c(\{i\}) - c(S). $$
Then the core constraints, for profit shares $x^v \in \mathbb{R}^n$, are $x^v(S) \ge v(S)$. It is not hard to see that all our results also hold for that version of the problem via the simple transformation $x_i^v := c(\{i\}) - x_i$. In particular, note that for value games all feasible solutions $x^v$ are non-negative, as core stability requires $x_i^v \ge v(\{i\}) \ge 0$. Our results imply $\mathsf{NP}$-hardness for the computation of minimum profit shares that are coalitionally stable, and the corresponding profit version of Algorithm 1 can be shown to yield a 2-approximation, a bound which is again tight.
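To illustrate the transformation, the following sketch applies it to the Figure 2B cost game (0-indexed players, so $c(\{2,3\}) = 2$ becomes $c(\{1,2\}) = 2$); the variable names are ours, and this is only an illustrative check, not part of the proofs:

```python
from itertools import combinations

def subsets(players):
    """All non-empty subsets of the player set."""
    for size in range(1, len(players) + 1):
        yield from combinations(players, size)

n = 3
players = range(n)
# Figure 2B cost game: c(S) = 1 except c({1,2}) = 2 (0-indexed).
c = lambda S: 2 if frozenset(S) == frozenset({1, 2}) else 1
# Savings game: v(S) = sum_{i in S} c({i}) - c(S).
v = lambda S: sum(c({i}) for i in S) - c(S)

# Cost-side almost core optimum and its profit-side image x^v_i = c({i}) - x_i.
x = (0, 1, 1)
xv = tuple(c({i}) - x[i] for i in players)
print(xv)  # (1, 0, 0)

# Stability on the profit side: x^v(S) >= v(S) for every proper coalition S.
stable = all(sum(xv[i] for i in S) >= v(S)
             for S in subsets(players) if len(S) < n)
print(stable)  # True

# Minimizing total profit shares corresponds to maximizing shareable costs:
# x^v(N) = sum_i c({i}) - x(N).
print(sum(xv) == sum(c({i}) for i in players) - sum(x))  # True
```

As the last check shows, the transformation flips the objective: a maximum-value almost core allocation on the cost side maps to a minimum-total stable profit share vector.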

We collect some open problems that we believe are interesting. First, we would like to gain more insight into the computational complexity of the almost core problem (1), also for other classes of games. In particular, note that the computation of an optimal almost core allocation for submodular cost functions as in Corollary 1 relies on the equivalence of separation and optimization. Since a core element can be computed by Edmonds' greedy algorithm, it is conceivable that dropping just one inequality from that polymatroid still allows for a combinatorial algorithm, without the need to resort to the ellipsoid method. Moreover, we gave a 2-approximation for cost MST games under the additional assumption that subsidies are not allowed. It would be interesting to extend this result to the general, unconstrained case, or to show that this is not possible. Deriving lower bounds on the approximability also seems plausible, as the "hard cases" for maximizing shareable costs are those where a minimum cost spanning tree exists which is a (Hamiltonian) path.
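For readers less familiar with the greedy computation of a core element referenced above, the following is a minimal sketch: for a submodular cost function, charging each player its marginal cost along any fixed order yields a core element. The symmetric cost function $c(S) = \sqrt{|S|}$ used here is our own illustrative assumption:

```python
from itertools import combinations
from math import sqrt

def greedy_core_element(players, c):
    """Edmonds-style greedy: charge each player its marginal cost
    along a fixed order. For submodular c this yields a core element."""
    x, coalition = {}, set()
    for i in players:
        x[i] = c(coalition | {i}) - c(coalition)
        coalition.add(i)
    return x

# Illustrative submodular cost function (assumption for this sketch):
# c depends concavely on coalition size, c(S) = sqrt(|S|).
c = lambda S: sqrt(len(S))

players = list(range(4))
x = greedy_core_element(players, c)

# Budget balance: x(N) = c(N).
print(abs(sum(x.values()) - c(set(players))) < 1e-9)  # True

# Core stability: x(S) <= c(S) for every coalition S.
all_S = (set(S) for k in range(1, 5) for S in combinations(players, k))
print(all(sum(x[i] for i in S) <= c(S) + 1e-9 for S in all_S))  # True
```

The open question above asks whether a similarly combinatorial procedure survives when one coalition constraint is dropped and the total of the shares is maximized.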

Finally, as discussed in Section 5, both in general and for MST games, one could define a more general class of problems in the spirit of cooperative games with restricted coalition formation: a more general (downward-closed) set system would describe those subsets of agents that are able to cooperate and hence have access to an outside option, while all other subsets do not have that option.

ACKNOWLEDGMENTS

Rong Zou and Boyue Lin acknowledge the support of the China Scholarship Council (Grants No. 202006290073, 202106290010). The authors also thank the anonymous reviewers of this article and of an earlier draft for their constructive comments.

DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
