Volume 2022, Issue 1, Article ID 2478644
Research Article
Open Access

Self-Adaptive Predictor-Corrector Approach for General Variational Inequalities Using a Fixed-Point Formulation

Kubra Sanaullah¹, Saleem Ullah¹, Muhammad Shoaib Arif¹,² (corresponding author), Kamaleldin Abodayeh², and Rabia Fayyaz³

¹Department of Mathematics, Air University, PAF Complex E-9, Islamabad 44000, Pakistan
²Department of Mathematics and Sciences, College of Humanities and Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
³Department of Mathematics, COMSATS University Islamabad, Islamabad, Pakistan
First published: 12 March 2022
Academic Editor: Nawab Hussain

Abstract

A review of the literature shows that general variational inequalities, fixed-point problems, and Wiener–Hopf equations are equivalent. In this study, the general variational inequality (GVI) and the associated fixed-point problem are considered. We introduce a new iterative method, based on a self-adaptive predictor-corrector approach, for finding a solution of the GVI. The fixed-point formulation, together with self-adaptive techniques, is used to derive the new iterative scheme, and the convergence analysis of the suggested algorithm is established. Moreover, numerical results show that the new method for solving general variational inequalities performs better than the previous one. Furthermore, since the GVI includes variational inequalities and related optimization problems as special cases, the results obtained in this study continue to hold for these problems.

1. Introduction

Variational inequality theory has become one of the most promising and wide-ranging fields of applied mathematics. It provides a powerful unifying methodology for the study of equilibrium problems, together with algorithms and accompanying convergence analysis for computational purposes. In recent years, problems from various branches of the mathematical and engineering sciences have been formulated in the framework of variational inequalities, including electronics, heat transport, elasticity, optimization, network analysis, water resources, game theory, equilibrium problems in economics, mechanics, and traffic analysis; see [1–10]. This remarkable development calls for simple and unified models of linear and nonlinear techniques. The idea of variational inequalities originated with Stampacchia [11]. Closely related to variational inequalities are general variational inequalities and the Wiener–Hopf equations, introduced by Noor [12] and Shi [13], respectively, from different points of view.

Several conventional approaches have been refined to establish solutions of open, moving boundary value problems and of asymmetric obstacle, unilateral, even-order, and odd-order problems using general variational inequalities; see [13–18]. The equivalence between general variational inequalities and fixed-point problems, obtained through projection techniques, is an active field of research; see [8, 19–21]. Quantitative knowledge of pseudocontractive and nonlinear monotone (accretive) operators, combined with Lipschitz-type conditions, is vital for proving the convergence of fixed-point iterative procedures. Variational inequality techniques also contribute significantly to the solution of the Wiener–Hopf equations. Salient features of Wiener–Hopf equations and optimization problems in the presence of variational inequalities are addressed by Shi; see [12, 20–23].

Variants of the projection method, such as Wiener–Hopf equation techniques, the auxiliary principle scheme, decomposition, and dynamical systems, have been developed for solving various kinds of variational inequalities and related optimization problems; see [8, 17–19, 22–26]. The detailed study of Lions and Stampacchia shows that such tools have been used for solving variational inequalities for a long time; see [27, 28]. The key idea behind these techniques is the equivalence, established through projection, between the variational inequality and a fixed-point problem. Based on this formulation, many projection methods for solving variational inequalities have been developed, and this approach has been very influential. The projection method has a drawback, however: its convergence requires a strongly monotone and Lipschitz continuous operator, which has limited its applications. Therefore, innovative methods or modifications of the projection method are required to broaden the field.

Publications such as [23, 29–36] describe extragradient-type methods, which overcome this restriction by taking an additional forward step and an additional projection at each iteration (double projection). These methods are predictor-corrector schemes and have been suggested for solving variational inequalities and their special cases. In this paper, we improve upon the best recent results for the GVI by introducing a new iterative method.

The primary goal of this research is a self-adaptive predictor-corrector approach that modifies the fixed-point formulation by incorporating a generalized residue vector associated with the general variational inequality. For the convergence of the method, we require only pseudomonotonicity, which is a weaker condition than monotonicity. The proposed scheme is simple and robust, and the numerical results of an example show that it is both fast and easy to implement.

2. Preliminaries

We denote by H a Hilbert space with norm ‖⋅‖ and inner product 〈⋅, ⋅〉. Let M be a convex set in H, and let A, ϕ : H → H be nonlinear operators. We consider the problem of finding α ∈ H with ϕ(α) ∈ M such that
(1) 〈Aα, ϕ(β) − ϕ(α)〉 ≥ 0,  ∀ϕ(β) ∈ M.

Problem (1) is called the general variational inequality (GVI), introduced by Noor in 1988. A large number of problems in pure and applied mathematics and the physical sciences and engineering, including equilibrium, moving, nonsymmetric, unified, obstacle, and contact problems, can be discussed and studied via inequality (1); see [18, 19, 22].

For ϕ ≡ I, the identity operator, problem (1) reduces to finding α ∈ M such that
(2) 〈Aα, β − α〉 ≥ 0,  ∀β ∈ M.

Problem (2) is the original variational inequality introduced by Stampacchia; see [11].

The following concept and known results are required to approach the main algorithms.

Definition 1. An operator A : H → H is said to be ϕ-pseudomonotone if 〈Aα, ϕ(β) − ϕ(α)〉 ≥ 0 implies 〈Aβ, ϕ(β) − ϕ(α)〉 ≥ 0, ∀α, β ∈ H.

It is known [22, 24] that monotonicity implies pseudomonotonicity, but the converse is not true in general.
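As a simple illustration (our own example, not taken from the cited references), the scalar operator A(α) = e^{−α} with ϕ = I is ϕ-pseudomonotone but not monotone: since A(α) > 0, the inequality 〈Aα, β − α〉 ≥ 0 forces β ≥ α, which gives 〈Aβ, β − α〉 = e^{−β}(β − α) ≥ 0, whereas (A(β) − A(α))(β − α) ≤ 0 shows that monotonicity fails. A minimal Python check of Definition 1 on random pairs (purely illustrative):

import numpy as np

# Hypothetical scalar example (phi = I): A(x) = exp(-x).
# Pseudomonotone: A(a)*(b - a) >= 0 implies A(b)*(b - a) >= 0.
# Not monotone:   (A(b) - A(a))*(b - a) can be negative.
A = lambda x: np.exp(-x)

rng = np.random.default_rng(0)
pairs = rng.uniform(-5.0, 5.0, size=(10_000, 2))

pseudo_ok, monotone_ok = True, True
for a, b in pairs:
    if A(a) * (b - a) >= 0 and A(b) * (b - a) < -1e-12:
        pseudo_ok = False        # would contradict pseudomonotonicity
    if (A(b) - A(a)) * (b - a) < -1e-12:
        monotone_ok = False      # monotonicity violated

print("pseudomonotone on sample:", pseudo_ok)   # expected: True
print("monotone on sample:     ", monotone_ok)  # expected: False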

Lemma 1. For a given z ∈ H, α ∈ M satisfies the inequality

(3) 〈α − z, β − α〉 ≥ 0,  ∀β ∈ M,
if and only if
(4) α = P_M z,
where P_M is the projection of H onto the convex set M.

It is also known that the projection operator P_M is nonexpansive; that is, it satisfies
(5) ‖P_M α − P_M β‖ ≤ ‖α − β‖,  ∀α, β ∈ H.

Lemma 2. α ∈ H with ϕ(α) ∈ M is a solution of the GVI (1) if and only if α satisfies the relation

(6) ϕ(α) = P_M[ϕ(α) − ρAα],
where ρ > 0 is a constant and P_M is the projection of H onto M.

The residue vector R_1(α) is defined by
(7) R_1(α) = ϕ(α) − P_M[ϕ(α) − ρAα].
From Lemma 2, we see that α satisfies (1) if and only if α is a zero of the residue vector, that is,
(8) R_1(α) = 0.
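To make this characterization concrete, the following minimal Python sketch evaluates the residue vector R_1(α) = ϕ(α) − P_M[ϕ(α) − ρAα] for the box M = [0, 1]^n; the operators A and ϕ, the parameter ρ, and the test point are our own illustrative choices and are not taken from the paper. A point α solves the GVI exactly when this residue vanishes.

import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Projection P_M onto the box M = [lo, hi]^n
    return np.clip(x, lo, hi)

def residue(alpha, A, phi, rho, proj):
    # R_1(alpha) = phi(alpha) - P_M[ phi(alpha) - rho * A(alpha) ]
    return phi(alpha) - proj(phi(alpha) - rho * A(alpha))

# Illustrative (hypothetical) data: A affine and strongly monotone, phi = I.
n = 5
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))
Amat = B @ B.T + n * np.eye(n)          # symmetric positive definite
q = rng.standard_normal(n)
A = lambda x: Amat @ x + q
phi = lambda x: x

alpha = rng.uniform(0.0, 1.0, n)        # an arbitrary test point in M
r = residue(alpha, A, phi, rho=0.5, proj=project_box)
print("||R_1(alpha)|| =", np.linalg.norm(r))   # vanishes only at a solution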
Related to the GVI (1), we consider the problem of the general Wiener–Hopf equations. Let Q_M = I − P_M, where I is the identity operator and P_M is the projection operator. For the operators A, ϕ : H → H with ϕ^{-1} existing, the problem is to find z ∈ H such that
(9) Aϕ^{-1}P_M z + ρ^{-1}Q_M z = 0,
where (9) is the general Wiener–Hopf equation (GWHE), investigated by Noor [18]. The Wiener–Hopf equations have been used to establish various efficient and powerful iterative schemes.

Lemma 3. The element α ∈ H with ϕ(α) ∈ M satisfies inequality (1) if and only if z ∈ H satisfies equation (9), provided

(10) ϕ(α) = P_M z,
(11) z = ϕ(α) − ρAα.

Lemma 3 shows that the GVI (1) is equivalent to the GWHE (9). This fixed-point formulation was used by Noor [24] to establish various iterative schemes for solving the GVI and other related optimization problems.

This formulation is used here to construct a self-adaptive method for solving the GVI (1).
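As a numerical sanity check on this equivalence (using the standard forms of (9)–(11) as reconstructed above, taking ϕ = I and our own illustrative data), the sketch below solves the fixed-point relation (6) for a box-constrained problem by simple projection iteration and then verifies that z = ϕ(α) − ρAα satisfies the general Wiener–Hopf equation:

import numpy as np

proj = lambda x: np.clip(x, 0.0, 1.0)            # P_M for M = [0, 1]^n
n = 5
rng = np.random.default_rng(2)
B = rng.standard_normal((n, n))
Amat = B @ B.T + n * np.eye(n)                   # hypothetical operator data
q = rng.standard_normal(n)
A = lambda x: Amat @ x + q                       # phi = I throughout

rho = 1.0 / np.linalg.eigvalsh(Amat).max()       # makes x - rho*A(x) a contraction

# Fixed-point iteration alpha <- P_M[alpha - rho*A(alpha)]  (relation (6), phi = I)
alpha = np.zeros(n)
for _ in range(5_000):
    alpha = proj(alpha - rho * A(alpha))

# GWHE (9): A(P_M z) + (1/rho) * (I - P_M) z = 0, with z from (11)
z = alpha - rho * A(alpha)
lhs = A(proj(z)) + (z - proj(z)) / rho
print("||GWHE residual|| =", np.linalg.norm(lhs))   # approximately zero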

By using (7), (10), and (11), the GWHE (9) can be written in the equivalent form:
(12) ϕ(α) = P_M[ϕ(α) − ρAα] + ρAα − ρAϕ^{-1}P_M[ϕ(α) − ρAα].
For ω ∈ [0,1], (9) can equivalently be written as
(13) ϕ(α) = ϕ(α) − ωd_1(α),
where
(14) d_1(α) = R_1(α) − ρAα + ρAϕ^{-1}P_M[ϕ(α) − ρAα].

This equivalent modification has been considered by Noor [31] for solving the general variational inequalities (GVI).

Algorithm 1.

  • Step 0. Set the parameters as follows. For α_0 ∈ H, set n = 0, and take δ_0, δ ∈ (0,1), ϵ > 0, γ ∈ [1,2), μ ∈ (0,1), and ρ > 0.

  • Step 1. If ‖R_1(α_n)‖ < ϵ, then terminate; otherwise, take ρ_n, where m_n is the smallest nonnegative integer that satisfies the line-search inequality.

  • Step 2. Compute d_1(α_n) = R_1(α_n) − ρ_n A(α_n) + ρ_n Aϕ^{-1}P_M[ϕ(α_n) − ρ_n A(α_n)] and the corrector step size ω_n.

  • Step 3. Compute the next iterate from ϕ(α_{n+1}) = ϕ(α_n) − ω_n d_1(α_n).

  • Step 4. If the adaptive condition holds, take ρ = ρ_n/μ; otherwise, take ρ = ρ_n. Set n = n + 1, and return to Step 1.
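The following Python sketch illustrates the structure of Algorithm 1 for the special case ϕ = I. Since the line-search inequality of Step 1, the step size ω_n of Step 2, and the adaptive condition of Step 4 are not reproduced above, they are replaced here by a standard Armijo-type test, a fixed relaxation factor, and a simple growth rule; the sketch is therefore only a schematic stand-in for the authors' method, not its exact implementation.

import numpy as np

def algorithm1_sketch(A, proj, alpha0, rho=1.0, delta=0.02, mu=0.7,
                      omega=1.0, eps=1e-7, max_iter=10_000):
    # Schematic self-adaptive predictor-corrector scheme with phi = I.
    # The Armijo-type test and the rule for enlarging rho are placeholders:
    # the corresponding inequalities are not reproduced in the text above.
    alpha = np.asarray(alpha0, dtype=float).copy()
    rho_n = rho
    for n in range(max_iter):
        pred = proj(alpha - rho_n * A(alpha))      # predictor P_M[alpha - rho*A(alpha)]
        R1 = alpha - pred                          # residue (7) with phi = I
        if np.linalg.norm(R1) < eps:
            return alpha, n
        # Step 1 (assumed Armijo-type rule): shrink rho_n until
        # rho_n * ||A(alpha) - A(pred)|| <= delta * ||R1||.
        while rho_n * np.linalg.norm(A(alpha) - A(pred)) > delta * np.linalg.norm(R1):
            rho_n *= mu
            pred = proj(alpha - rho_n * A(alpha))
            R1 = alpha - pred
        # Step 2: direction d_1, as in Step 2 of Algorithm 1 (phi = I).
        d1 = R1 - rho_n * A(alpha) + rho_n * A(pred)
        # Step 3: corrector update with a fixed relaxation factor omega.
        alpha = alpha - omega * d1
        # Step 4 (assumed): allow rho to grow again when the test is easily met.
        if rho_n * np.linalg.norm(A(alpha) - A(pred)) <= 0.5 * delta * np.linalg.norm(R1):
            rho_n /= mu
    return alpha, max_iter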

3. Main Results

We suggest a predictor-corrector technique for updating the scheme to find the solution of the GVI (1):
(15)
Here, the residue vector involving the projection is defined by the relation
(16)
It is clear that α ∈ H with ϕ(α) ∈ M is a solution of the GVI (1) if and only if α ∈ H with ϕ(α) ∈ M is a zero of the residue vector:
(17)
Since M is a convex set, for all η ∈ [0,1], using (16), we have
(18)
where
(19)

We now analyze and propose the following predictor-corrector scheme for solving the GVI (1).

Algorithm 2.

  • Step 0. For the parameters ϵ > 0, ρ > 0, δ_0, δ ∈ (0, 1), γ ∈ [0,1], μ ∈ (0,1), and η ∈ (0,1), take α_0 ∈ H and start from n = 0.

  • Step 1. Set ρ_n = ρ. If ‖R(α_n)‖ < ϵ, then the computation stops; otherwise, continue and find the smallest nonnegative integer m_n which satisfies the line-search inequality.

  • Step 2. Calculate the next iterate:

    (20)

  • where

    (21)
    (22)

  • Step 3. If

    (23)

  • holds, then set ρ = ρ_n/μ; otherwise, set ρ = ρ_n. Take n = n + 1, and repeat Step 1.

Here, ω_n is the corrector step size, which is based on the GWHE (9).

We now establish the convergence of the proposed method, which is the main goal of this research.

Theorem 1. If α is a solution of inequality (1) and the operator A : H → H is ϕ-pseudomonotone, then

(24)

Proof. Let α ∈ H be a solution of GVI (1). Then,

(25)
since A is ϕ-pseudomonotone. Taking ϕ(β) = ϕ(α) − ηR(α) in (25), we have
(26)

This implies that

(27)

Taking z = ϕ(α) − ρAα, α = P_M[ϕ(y) − ρAα], and β = ϕ(α) in (3), we obtain

(28)

Using (3.1),

(29)
from which, we have
(30)

Adding (27) and (30), we obtain

(31)

Using (19), (21), (23), (27), and (31), we have

(32)
which is the desired result.

Theorem 2. If α ∈ H is a solution of GVI (1) and α_{n+1} is the approximate solution obtained from Algorithm 2, then

(33)

Proof. From (20), (21), (22), and (32), we have

(34)
which is the required result.

Theorem 3. Let α_{n+1} be the approximate solution obtained from Algorithm 2 and α ∈ M be a solution of GVI (1). If H is a finite-dimensional space, then the sequence {α_n} converges to a solution of the GVI (1).

Proof. Let α ∈ M be a solution of GVI (1). From (34), we see that the sequence {α_n} is bounded, and we have

(35)
which shows that both expressions tend to zero as n → ∞; that is,
(36)
and
(37)
which implies that (36) holds. Let α̂ be a cluster point of {α_n}, and let {α_{n_i}} be a subsequence of {α_n} converging to α̂. Since R is continuous, we have
(38)
which shows that α̂ is a solution of GVI (1) by Theorem 3 and
(39)

Thus, the sequence {α_n} has exactly one cluster point α̂, and consequently we obtain

(40)

Since ϕ is injective, it follows that {α_n} converges to α̂, which satisfies the GVI (1).

Suppose now that (37) holds. If (32) does not hold, then, by taking the value of η_n, we obtain

(41)

Let α̂ be a cluster point of {α_n} and let {α_{n_i}} be a subsequence of {α_n} converging to α̂. Taking the limit in (41), we obtain

(42)
which gives R(α̂) = 0; that is, α̂ ∈ H is a solution of inequality (1), and by Lemma 1, inequality (41) holds. Repeating the same process and arguments, we obtain the desired result.

4. Numerical Example

Problem 1. This problem is relevant to inequality (1), with ϕ(α) = Aα + q and Aα = α, where

(43)

We set the following domain for the considered problem: M = {α ∈ R^n : 0 ≤ α_i ≤ 1, for i = 1, 2, 3, …, n}. Results for Algorithm 1 are reported in Table 1. Tables 2–4 present the outcomes of Algorithm 2 with the initial point α_0 = −A^{-1}q for order n = 100 of the generated matrix. For all runs, we take μ, δ ∈ (0,1), γ ∈ [1,2], and ρ > 0. The iterative process stops when r(α_n, ρ_n) ≤ 10^{-7}.
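An illustrative reconstruction of this test setting is sketched below. Since the matrix in (43) is not reproduced above, a randomly generated well-conditioned symmetric positive definite matrix is used as a stand-in, and we take ϕ = I for simplicity rather than the ϕ of Problem 1; the routine algorithm1_sketch is the schematic implementation given after Algorithm 1.

import numpy as np

# Hypothetical problem data standing in for (43); not the authors' matrix.
n = 100
rng = np.random.default_rng(3)
B = rng.standard_normal((n, n))
Amat = B @ B.T / n + np.eye(n)                  # well-conditioned SPD stand-in
q = rng.standard_normal(n)

A = lambda x: Amat @ x + q
proj = lambda x: np.clip(x, 0.0, 1.0)           # M = {alpha : 0 <= alpha_i <= 1}
alpha0 = -np.linalg.solve(Amat, q)              # initial point alpha_0 = -A^{-1} q

# algorithm1_sketch is the schematic routine given after Algorithm 1.
alpha, iters = algorithm1_sketch(A, proj, alpha0, rho=7.0, delta=0.2,
                                 mu=0.7, eps=1e-7, max_iter=50_000)
print("iterations:", iters)
print("final residue (rho = 1):",
      np.linalg.norm(alpha - proj(alpha - A(alpha))))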

From Tables 1 and 2, we observe that the number of iterations varies with the choice of parameters. Table 2 gives the results for the newly established method (Algorithm 2); the output shows that the new method converges more quickly than Algorithm 1 for solving the GVI.

From Tables 3 and 4, we see that the number of iterations of the new scheme changes with the parameters δ, ρ, and μ; by choosing these parameters appropriately, the number of iterations can be reduced.

Table 1. Algorithm 1 (numerical results).

  Parameters                    Iterations
  ρ = 7, δ = 0.02, μ = 0.7      14
  ρ = 7, δ = 0.02, μ = 0.8      22
  ρ = 7, δ = 0.02, μ = 0.9      47

Table 2. Algorithm 2 (numerical results).

  Parameters                    Iterations
  ρ = 7, δ = 0.02, μ = 0.7      12
  ρ = 7, δ = 0.02, μ = 0.8      19
  ρ = 7, δ = 0.02, μ = 0.9      42

Table 3. Algorithm 2 (numerical results).

  Parameters                    Iterations
  ρ = 5, δ = 0.3, μ = 0.6       4
  ρ = 7, δ = 0.2, μ = 0.4       3
  ρ = 7, δ = 0.05, μ = 0.7      11

Table 4. Algorithm 2 (numerical results).

  Parameters                    Iterations
  ρ = 4, δ = 0.2, μ = 0.8       9
  ρ = 4, δ = 0.1, μ = 0.8       12
  ρ = 4, δ = 0.04, μ = 0.8      16

5. Conclusion

In this study, a self-adaptive predictor-corrector method has been developed and applied to find the solution of the GVI. We used pseudomonotonicity of the operator, which is a weaker condition than monotonicity. We also established the convergence analysis, which is the main contribution of this paper. The new technique is more efficient than existing methods, as illustrated through an example and a comparison with other known methods. The numerical results confirm the good performance of the newly established algorithm for the considered problem.

Conflicts of Interest

All authors declare no conflicts of interest in this paper.

Acknowledgments

The third and fourth authors wish to express their gratitude to Prince Sultan University for facilitating the publication of this article through the Theoretical and Applied Sciences Lab. The authors would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charges (APC) of this publication.

Data Availability

All data and information used for implementation are available within the article.
