Self-Adaptive Predictor-Corrector Approach for General Variational Inequalities Using a Fixed-Point Formulation
Abstract
A literature review shows that general variational inequalities, fixed-point problems, and Wiener–Hopf equations are equivalent. In this study, the general variational inequality (GVI) and the associated fixed-point problem are considered. We introduce a new iterative method, based on a self-adaptive predictor-corrector approach, for finding a solution of the GVI. The fixed-point formulation together with self-adaptive techniques is used to derive the new iterative scheme. A convergence analysis of the suggested algorithm is given. Moreover, numerical experiments show that the new method for solving the general variational inequality performs better than the existing one. Furthermore, since the GVI contains several classes of problems, including variational inequalities and related optimization problems, the results obtained in this study continue to hold for these problems.
1. Introduction
Variational inequality theory has become one of the most promising and wide-ranging fields of applied mathematics. This theory provides a powerful unifying methodology for the study of equilibrium problems and supplies algorithms, with accompanying convergence analysis, for computational purposes. In recent years, problems from various branches of the mathematical and engineering sciences have been formulated in the framework of variational inequalities, including electronics, heat transportation, elasticity, optimization, network analysis, water resources, game theory, equilibrium problems in economics, mechanics, and traffic analysis; see [1–10]. Such remarkable development calls for simple and unified models of linear and nonlinear techniques. The idea of variational inequalities originated with Stampacchia [11]. Related to variational inequalities, we have the concepts of the Wiener–Hopf equations and the general variational inequalities, which were introduced by Noor [12] and Shi [13] in conjunction with variational inequalities from different points of view.
Several conventional approaches establish solutions of open and moving boundary value problems and of asymmetric obstacle, unilateral, even-order, and odd-order problems by means of general variational inequalities; see [13–18]. The equivalence between general variational inequalities and fixed-point problems, obtained through projection techniques, has recently become an active research field; see [8, 19–21]. Quantitative knowledge of pseudocontractive and nonlinear monotone (accretive) operators, combined with Lipschitz-type conditions, is vital for proving the convergence of fixed-point iterative procedures. Variational inequalities also contribute significantly to solving the Wiener–Hopf equations. Salient features of Wiener–Hopf equations and optimization problems in the presence of variational inequalities are addressed by Shi; see [12, 20–23].
Variants of projection methods, such as Wiener–Hopf equation techniques, the auxiliary principle scheme, decomposition, and dynamical systems, have been developed for solving various kinds of variational inequalities and related optimization problems; see [8, 17–19, 22–26]. The study of Lions and Stampacchia shows that such tools were already used for solving variational inequalities a long time ago; see [27, 28]. The main idea behind these techniques is to establish the equivalence between the variational inequality and the fixed-point problem by means of the projection. On the basis of this formulation, many projection methods for solving variational inequalities can be developed, and this approach has proved to be important. The projection method, however, has a drawback concerning convergence: it requires a strongly monotone and Lipschitz continuous operator, which limits its applications. Therefore, innovative methods or modifications of the projection method are required to broaden the field.
Publications such as [23, 29–36] deal with extragradient-type methods, which overcome this limitation of the projection approach by taking an additional forward step and an extra projection at each iteration, that is, a double-projection scheme. These methods are predictor-corrector tools and have been suggested for solving variational inequalities and their special cases. We improve the recent best results for the GVI by introducing innovative iterative methods.
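For instance, in the classical extragradient method of Korpelevich, stated here for ϕ equal to the identity, each iteration first computes the predictor βn = PM[αn − ρAαn] and then the corrector αn+1 = PM[αn − ρAβn], so that the additional forward evaluation Aβn supplies the direction for the second projection; the methods in [23, 29–36] refine this basic pattern.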
The self-adaptive predictor-corrector approach is the primary focus of this research: the fixed-point formulation is modified by incorporating a generalized residual vector associated with the general variational inequality. For the convergence of the method, we require only pseudomonotonicity, which is a weaker condition than monotonicity. The proposed method is simple and robust. The numerical results show that the proposed technique is both fast and easy to implement, as demonstrated by an example.
2. Preliminaries
For given nonlinear operators A, ϕ : H⟶H and a closed convex set M in a real Hilbert space H, problem (1) consists in finding α ∈ H with ϕ(α) ∈ M such that ⟨Aα, ϕ(β) − ϕ(α)⟩ ≥ 0 for all ϕ(β) ∈ M; it is called the general variational inequality (GVI) and was considered by Noor in 1988. We have observed that a large number of problems in pure and applied mathematics related to the physical sciences, engineering, equilibrium, moving, nonsymmetric, unified, obstacle, and contact problems can be discussed and studied via inequality (1); see [18, 19, 22].
For ϕ = I, the identity operator, problem (1) reduces to problem (2): find α ∈ M such that ⟨Aα, β − α⟩ ≥ 0 for all β ∈ M, which is the original variational inequality introduced by Stampacchia; see [11].
The following concepts and known results are required for the derivation of the main algorithms.
Definition 1. An operator A : H⟶H is said to be ϕ-pseudomonotone if 〈Aα, ϕ(β) − ϕ(α)〉 ≥ 0 implies 〈Aβ, ϕ(β) − ϕ(α)〉 ≥ 0, for all α, β ∈ H.
It is known [22, 24] that monotonicity implies pseudomonotonicity, but the converse is not true in general.
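For instance, with ϕ taken as the identity on H = ℝ, the decreasing operator Aα = exp(−α) is pseudomonotone but not monotone: 〈Aα, β − α〉 = exp(−α)(β − α) ≥ 0 forces β ≥ α, which in turn gives 〈Aβ, β − α〉 = exp(−β)(β − α) ≥ 0, whereas (Aβ − Aα)(β − α) = (exp(−β) − exp(−α))(β − α) < 0 whenever β ≠ α.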
Lemma 1. For a given z ∈ H, α ∈ M satisfies the inequality 〈α − z, β − α〉 ≥ 0, ∀ β ∈ M, if and only if α = PM[z], where PM is the projection of H onto the closed convex set M.
Lemma 2. α ∈ H with ϕ(α) ∈ M is a solution of the GVI (1) if and only if it satisfies the relation ϕ(α) = PM[ϕ(α) − ρAα], where ρ > 0 is a constant.
Lemma 3. The element α ∈ H with ϕ(α) ∈ M satisfies inequality (1) if and only if z ∈ H satisfies equation (9), provided ϕ(α) = PM[z] and z = ϕ(α) − ρAα, where ρ > 0 is a constant.
Lemma 3 shows that the GVI (1) is equivalent to the GWHE (9). This fixed-point formulation was used by Noor [24] to establish various iterative schemes for solving the GVI and related optimization problems.
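In particular, by Lemma 2, if one defines the residual R(α) = ϕ(α) − PM[ϕ(α) − ρAα], then α solves the GVI (1) exactly when R(α) = 0. The simplest iteration suggested by this observation is ϕ(αn+1) = PM[ϕ(αn) − ρAαn], n = 0, 1, 2, …, and the self-adaptive schemes below refine this basic step by a line search in ρ (the predictor) and a step of size ωn along a search direction (the corrector); the residual R(α) corresponds to the quantities R1(αn) and R(αn) used in the algorithms below.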
This useful formulation is employed to construct a self-adaptive method for solving the GVI (1). An equivalent modification was considered by Noor [31] for solving the general variational inequalities (GVI).
Algorithm 1.
-
Step 0. We set the parameters as follows. For α0 ∈ H, set n = 0, and take δ0, δ ∈ (0,1), ϵ > 0, γ ∈ [1,2), μ ∈ (0,1), and ρ > 0.
-
Step 1. If ‖R1(αn)‖ < ϵ, then we terminate; otherwise, take ρn, where mn is the smallest nonnegative integer that satisfies the inequality
-
Step 2. Compute d1(αn) = R1(αn) − ρnA(αn) + ρnAϕ⁻¹PM[ϕ(αn) − ρnA(αn)] and the corrector step size ωn.
-
Step 3. Compute the next iterate from ϕ(αn+1) = ϕ(αn) − ωnd1(αn).
-
Step 4. If the adjustment condition holds, then take ρ = ρn/μ; otherwise, take ρ = ρn. Set n = n + 1, and return to step 1.
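A minimal numerical sketch of this self-adaptive predictor-corrector scheme is given below. Steps 2 and 3 follow the formulas stated above, whereas the Armijo-type line search of Step 1, the corrector step size ωn, and the update rule for ρ in Step 4 are not reproduced in this version of the paper; the standard self-adaptive choices used in their place should be read as assumptions rather than as the exact formulas of Algorithm 1.

```python
import numpy as np

def self_adaptive_gvi(A, phi, phi_inv, proj, alpha0, rho=1.0,
                      delta=0.5, delta0=0.2, mu=0.5, gamma=1.9,
                      eps=1e-7, max_iter=10_000):
    """Self-adaptive predictor-corrector projection sketch for the GVI (1).

    A, phi, phi_inv : callables realizing the operators A, phi, phi^{-1}.
    proj            : the projection P_M onto the closed convex set M.
    The line search, the step size omega_n, and the rule for updating rho
    are standard self-adaptive choices and are assumptions, not the exact
    formulas of Algorithm 1.
    """
    alpha = np.asarray(alpha0, dtype=float)

    def residual(a, r):
        # R(a) = phi(a) - P_M[phi(a) - r * A(a)], cf. Lemma 2
        return phi(a) - proj(phi(a) - r * A(a))

    for n in range(max_iter):
        if np.linalg.norm(residual(alpha, rho)) < eps:
            return alpha, n
        # Step 1 (predictor): shrink rho_n until an Armijo-type
        # acceptance condition holds (assumed form of the line search).
        rho_n = rho
        for _ in range(60):
            y = phi_inv(proj(phi(alpha) - rho_n * A(alpha)))   # predictor point
            R_n = residual(alpha, rho_n)
            gap = rho_n * np.linalg.norm(A(alpha) - A(y))
            if gap <= delta * np.linalg.norm(R_n) or np.linalg.norm(R_n) < eps:
                break
            rho_n *= mu
        # Step 2: search direction d_1(alpha_n) as in Algorithm 1.
        d = R_n - rho_n * A(alpha) + rho_n * A(y)
        # Corrector step size omega_n (assumed projection-contraction form).
        omega = gamma * float(R_n @ d) / max(float(d @ d), 1e-30)
        # Step 3: corrector update phi(alpha_{n+1}) = phi(alpha_n) - omega_n * d.
        alpha = phi_inv(phi(alpha) - omega * d)
        # Step 4: enlarge rho when the line search accepted easily (assumed rule).
        rho = rho_n / mu if gap <= delta0 * np.linalg.norm(R_n) else rho_n
    return alpha, max_iter
```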
3. Main Results
We now propose and analyze the following predictor-corrector scheme for solving the GVI (1).
Algorithm 2.
-
Step 0. For the parameters ϵ > 0, ρ > 0, δ0, δ ∈ (0, 1), γ ∈ [0,1], μ ∈ (0,1), and η ∈ (0,1), take α0 ∈ H and set n = 0.
-
Step 1. We again take ρn = ρ. If ‖R(αn)‖ < ϵ, then the computation stops; otherwise, we continue and find the smallest nonnegative integer mn which satisfies the inequality.
-
Step 2. Calculate the next iterate:
(20)
where
(21)
(22)
-
Step 3. If
(23)
then we set ρ = ρn/μ; otherwise, we set ρ = ρn. Take n = n + 1, and repeat step 1.
Here, ωn is the corrector step size, which involves the GWHE (9).
We now study the convergence of the proposed method, which is the main goal of this research.
Theorem 1. If α∗ is a solution of inequality (1) and the operator A : H⟶H is ϕ-pseudomonotone, then
Proof. Let α∗ ∈ H be a solution of GVI (1). Then,
This implies that
Taking z = ϕ(α) − ρAα, α = PM[ϕ(y) − ρAα], and β = ϕ(α∗) in (3), we obtain
Using (3.1),
Adding (27) and (30), we obtain
Using (19), (21), (23), (27), and (31), we have
Theorem 2. If α∗ ∈ H is a solution of GVI (1) and αn+1 is the approximate solution found from Algorithm 2, then
Theorem 3. Let αn+1 be the approximated solution obtained from Algorithm 2 and α∗ ∈ M be a solution of GVI (1). If H is a finite-dimensional space, then
Proof. Let α∗ ∈ M be a solution of GVI (1). From (34), we get that the sequence {αn} is bounded; we have
Thus, the sequence {αn} has exactly one cluster point, and consequently we obtain
Since ϕ is injective, it follows that the limit satisfies the GVI (1).
Suppose that (37) holds. If (32) does not hold, then, by the choice of ηn, we obtain
Let α∗ be a cluster point of {αn}, and let {αni} be a subsequence converging to α∗. Taking the limit in (41), we obtain
4. Numerical Example
Problem 1. This problem corresponds to inequality (1), with ϕ(α) = Aα + q and Aα = α, where
We set the following domain for the considered problem: M = {α ∈ Rⁿ : 0 ≤ αi ≤ 1, for i = 1, 2, 3, …, n}. Results for Algorithm 1 are reported in Table 1. Tables 2 and 3 present the outcomes of Algorithm 2 with the initial point α0 = −A⁻¹q for the generated matrix of order n = 100. For all runs, we take μ, δ ∈ (0,1), γ ∈ [1,2], and ρ > 0. The iterative process stops when r(αn, ρn) ≤ 10⁻⁷.
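The following snippet shows one possible way to assemble Problem 1 and to run the sketch self_adaptive_gvi given after Algorithm 1; the matrix A_mat and the vector q below are hypothetical stand-ins, since the generating matrix of the paper is not reproduced here, and the parameter values are taken from Table 1.

```python
import numpy as np

n = 100
rng = np.random.default_rng(0)
A_mat = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # hypothetical stand-in matrix
q = -rng.random(n)                                      # hypothetical vector q

def phi(a):        # phi(alpha) = A*alpha + q
    return A_mat @ a + q

def phi_inv(z):    # phi^{-1}(z) = A^{-1}(z - q)
    return np.linalg.solve(A_mat, z - q)

def A_op(a):       # operator A(alpha) = alpha (identity)
    return a

def proj(x):       # projection onto M = {alpha : 0 <= alpha_i <= 1}
    return np.clip(x, 0.0, 1.0)

alpha0 = -np.linalg.solve(A_mat, q)   # initial point alpha_0 = -A^{-1} q
alpha, iters = self_adaptive_gvi(A_op, phi, phi_inv, proj, alpha0,
                                 rho=7.0, delta=0.02, mu=0.7, eps=1e-7)
print("iterations:", iters)
print("residual  :", np.linalg.norm(phi(alpha) - proj(phi(alpha) - A_op(alpha))))
```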
From Tables 1 and 2, we observe that the number of iterations varies with the change of parameters. Table 2 gives the results for the newly established method (Algorithm 2). From the output, we observe that the new method converges more quickly than Algorithm 1 for solving the GVI.
From Tables 3 and 4, we see that, in the new iterative scheme, the number of iterations varies with the parameters δ, ρ, and μ. By choosing the parameters suitably, we can reduce the number of iterations.
Table 1. Results for Algorithm 1.
| Parameters | ρ = 7, δ = 0.02, μ = 0.7 | ρ = 7, δ = 0.02, μ = 0.8 | ρ = 7, δ = 0.02, μ = 0.9 |
| --- | --- | --- | --- |
| Iterations | 14 | 22 | 47 |
Table 2. Results for Algorithm 2 (new method) with the same parameters.
| Parameters | ρ = 7, δ = 0.02, μ = 0.7 | ρ = 7, δ = 0.02, μ = 0.8 | ρ = 7, δ = 0.02, μ = 0.9 |
| --- | --- | --- | --- |
| Iterations | 12 | 19 | 42 |
Table 3. Results for Algorithm 2 with different parameters.
| Parameters | ρ = 5, δ = 0.3, μ = 0.6 | ρ = 7, δ = 0.2, μ = 0.4 | ρ = 7, δ = 0.05, μ = 0.7 |
| --- | --- | --- | --- |
| Iterations | 4 | 3 | 11 |
Table 4. Results for Algorithm 2 with different parameters.
| Parameters | ρ = 4, δ = 0.2, μ = 0.8 | ρ = 4, δ = 0.1, μ = 0.8 | ρ = 4, δ = 0.04, μ = 0.8 |
| --- | --- | --- | --- |
| Iterations | 9 | 12 | 16 |
5. Conclusion
In this study, a self-adaptive predictor-corrector method has been applied to find the solution of the GVI. We used the pseudomonotonicity of the operator, which is a weaker condition than monotonicity. We also proved the convergence of the method, which is the main motivation of this paper. The analysis shows that the new technique is more efficient than previously established methods. The efficiency of the method has been illustrated through an example, and a comparison with other known methods is provided. The numerical results show that the newly established algorithm performs well for the considered problem.
Conflicts of Interest
All authors declare no conflicts of interest in this paper.
Acknowledgments
The third and fourth authors wish to express their gratitude to Prince Sultan University for facilitating the publication of this article through the Theoretical and Applied Sciences Lab. The authors would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charges (APC) of this publication.
Data Availability
All data and information used for implementation are available within the article.