Incentive approaches for cloud computing: challenges and solutions

Abstract

Cloud computing delivers highly configurable and reliable computing resources on a pay-per-use basis, facilitating quick and cost-effective provisioning of large-scale applications. Thanks to the fast-paced evolution of cutting-edge technologies and the rapid spread of cloud-based solutions, the cloud computing ecosystem is now part of our everyday lives. Nevertheless, cloud computing relies on highly sophisticated data centers whose servers and supporting equipment consume large amounts of energy. Stimulating cloud services toward active participation and network contributions presents several challenges. This paper explores strategies grounded in artificial intelligence (AI), game theory, and blockchain to foster an economically sustainable cloud ecosystem. Informed by a survey study, our research delves into incentive approaches within cloud computing. Theoretical foundations, motivations, and enabling techniques are comprehensively examined to provide valuable insights for a broad audience. The primary contributions of this work lie in elucidating the application of AI, game theory, and blockchain to address challenges in incentivizing cloud services, paving the way for a more sustainable and efficient cloud computing landscape.

Introduction

Cloud computing is among the most rapidly evolving trends in today’s technological age [1]. Compared with traditional IT services, cloud computing offers on-demand services, mobile storage, rapid elasticity, broad network access, improved security, access anytime and anywhere, resource maximization, and multi-tenancy [2]. In recent years, cloud computing has opened up opportunities for numerous businesses to move, run, and maintain software services in cloud environments, enabling them to gain immediate access to a broad range of services. Further, to satisfy customers’ requirements, each service is tailored and personalized [3]. The advent of big data put a strain on existing commodity hardware, which lacked the capabilities to cope with heterogeneous workloads originating from various sources [4]. Consequently, several IT organizations are transferring their facilities to cloud environments in order to handle a wide range of distinct and varied workloads. Cloud computing offers multiple advantages, such as robust virtual infrastructures, easily accessible offerings, flexibility, and expandability. These features motivate enterprises to shift infrastructure from on-premises environments to the cloud to minimize administrative burdens and leverage virtualized architectures in data centers to provide customers with great flexibility and high availability [5].

Cloud computing has become a powerful and revolutionary force changing how computing resources are provided in today’s digital age. It offers three distinct service models: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) [6]. These services fundamentally change the way businesses acquire and utilize computing resources. SaaS enables the immediate access and availability of software applications, while PaaS provides a framework for developers to create and launch applications. IaaS allows customers to access and utilize scalable infrastructure resources, allowing for flexible and cost-effective allocation. Virtualization is an essential strategy in cloud computing configurations. It involves dividing the hardware resources of hosts, known as physical machines (PMs), into separate execution environments called virtual machines (VMs). VMs are independent systems hosted on a PM and controlled by a VM monitor (VMM), or hypervisor. Effectively managing these virtual resources within cloud data centers is crucial for guaranteeing optimal use of resources, quality of service, and energy efficiency.

Effective PM management is crucial, especially in relation to energy consumption in cloud data centers, as illustrated in Fig. 1. Efficiently utilizing these physical resources is crucial for improving service quality, increasing return on investment (ROI), and lowering energy waste. Therefore, tackling these resource management issues has become a central focus in maximizing the potential advantages of cloud computing while reducing the corresponding environmental effects. In addition, the remarkable scalability and flexibility of cloud computing have revolutionized conventional computing paradigms. Nevertheless, this transformation has brought forth new difficulties concerning the distribution of resources, flexible provisioning, and the general effectiveness of the system. Therefore, incentive-based strategies have become more necessary to tackle these difficulties. Incentive systems are crucial for stimulating user engagement, promoting the sharing of resources, and maximizing the use of cloud resources. This, in turn, contributes to the development of a sustainable and effective cloud ecosystem.

Fig. 1 Energy consumption considerations in cloud data centers

Considering the diversity of large distributed applications, extensive data analytics mechanisms are necessary to analyze the data effectively [7]. Furthermore, evolving software development models, including serverless computing, offer unique resource consumption forms autonomously determined by the application’s demands [8, 9]. With container technologies, lightweight virtualization improves cloud utilization and lowers application provisioning latency. To estimate resource requirements before deploying distributed resources, novel scheduling and resource provisioning strategies are necessary to support fog and cloud computing [10]. The cloud computing environment is gaining traction as a new approach for Earth science researchers to solve the complex challenges they face in computing and analysis. Currently, the community relies on dedicated supercomputers, which are expensive and frequently experience slack periods or inefficiencies. An exciting hypothesis suggests that cloud computing could substitute for a standalone supercomputer [11]. In this paper, we conduct a thorough survey and analysis of incentive-driven methodologies in cloud computing. The study clarifies incentives’ crucial role in establishing a long-term cloud ecosystem. It emphasizes the challenges, potentials, and solutions associated with these incentive-based approaches.

Compared with related works, our study makes distinctive contributions that set it apart in the cloud computing research landscape. The related works we identified focus primarily on individual aspects of incentive-driven approaches within cloud computing and do not comprehensively explore the synergies between AI, game theory, and blockchain. While existing studies focus on one or two incentive mechanisms, our research integrates insights from all three approaches, providing a more nuanced and comprehensive view. Unlike previous studies that occasionally provide fragmented analyses of incentive mechanisms, our study systematically analyzes the benefits and challenges of each category: AI, game theory, and blockchain. This way, we can better understand how these diverse approaches interact and collectively impact the cloud computing ecosystem.

Background

Cloud computing has consolidated several software platforms under one roof. It addresses high resource consumption by transforming resources into on-demand services. Cloud service providers typically offer a wide range of services and must address quality attributes such as throughput, availability, and reliability. Cloud computing has grown significantly as a result of virtualization [12]. This technology enables resource sharing in cloud systems, allowing different functions to operate simultaneously across multiple platforms. It partitions hardware resources by simulating complete or partial machines comprising multiple execution environments capable of functioning independently. In conventional architectures, each physical machine typically supports only one natively installed operating system (OS) to ensure stability and uniformity. Native environments are faster than virtualized environments, in which virtualization overhead can increase CPU utilization by 40 to 60%. However, virtualized environments allocate resources more closely to the needs of actual tasks than traditional architectures, where an entire host is devoted to a single task. Cloud computing encompasses various virtualization technologies, including network and storage virtualization [13].

VMs form the backbone of cloud infrastructure; they are illusions created by emulating a computer device on which applications execute. Hypervisors are used in the cloud environment to distribute the hardware resources virtualized through VM-based virtualization. The VMM, or hypervisor, allows multiple OSs to share a single PM via VMs and is the management layer responsible for managing and controlling all VMs running independent operating systems. Hypervisors come in two types. A type 1 (bare-metal) hypervisor runs directly on the host hardware and does not require a host operating system, which would otherwise consume additional resources. A type 2 (hosted) hypervisor is software running within another operating system, with the guest operating systems running at a third level above it. Bare-metal hypervisors perform better than type 2 hypervisors since there is no additional software layer between the host hardware and the virtual machines. The type 1 hypervisor is more resource-efficient and secure but requires a particular configuration on the host machine [14].

The IoT has emerged as a transformative force, seamlessly integrating physical devices and sensors into the digital realm [15, 16]. IoT plays a pivotal role in the cloud computing ecosystem by generating vast amounts of real-time data. This data influx, ranging from environmental conditions to user behavior, is harnessed by cloud platforms to fuel advanced analytics, enabling businesses to make informed decisions [17]. Cloud computing is the backbone for processing and storing this voluminous IoT data, providing scalable resources to handle the dynamic requirements of diverse IoT applications [18]. The symbiotic relationship between IoT and cloud computing enhances efficiency, scalability, and accessibility, laying the foundation for innovative solutions across industries, from smart cities to healthcare [19].

Artificial intelligence (AI) and its subsets, including machine learning, deep learning, and neural networks, constitute the intelligence powerhouse within the cloud computing ecosystem. AI algorithms leverage the computational prowess of cloud servers to analyze vast datasets, recognize patterns, and make data-driven predictions [20]. Embedded within cloud-based applications, machine learning algorithms continuously refine their models based on new data, fostering adaptability and enhancing decision-making capabilities [21, 22]. Deep learning, particularly with neural networks, thrives on the robust computational infrastructure offered by cloud services, enabling the training of complex models for image and speech recognition, natural language processing, and more [23, 24]. The marriage of AI and cloud computing empowers businesses with intelligent insights and democratizes access to sophisticated AI capabilities, making them accessible to a broader range of industries and applications. Together, these technologies forge a dynamic synergy, propelling the cloud computing ecosystem into the forefront of innovation and intelligent computing solutions [25].

Methods

This study aims to comprehensively explore and analyze incentive approaches applied to cloud computing, focusing on understanding the challenges and potential solutions in creating an economically sustainable cloud ecosystem. The study employs a survey-based research design to gather insights from existing literature and relevant sources. The research setting involves a thorough examination of scholarly articles, reports, and other academic resources related to incentive-driven cloud computing. Since this study is based on a survey of existing literature and resources, no direct participants are involved. Instead, the primary materials consist of published research papers, conference proceedings, white papers, and technical reports that discuss various incentive approaches within the cloud computing domain. These materials are selected based on their relevance, credibility, and contribution to understanding incentive mechanisms and their implications for cloud services. The study involves a systematic review of incentive approaches applied to cloud computing. To achieve this, the following processes are carried out:

  • Literature search and selection: A comprehensive search is conducted across various academic databases, including IEEE Xplore, ACM Digital Library, PubMed, and Google Scholar. Keywords related to cloud computing, incentives, artificial intelligence, game theory, and blockchain are used to identify relevant literature. The retrieved articles are screened based on their titles, abstracts, and full texts to determine their eligibility for inclusion in the study.

  • Data extraction and synthesis: Selected articles are thoroughly reviewed, and relevant information related to incentive mechanisms, challenges, solutions, and outcomes is extracted. This process involves categorizing the incentive approaches into different types, such as monetary rewards, resource allocation, reputation systems, and more. The extracted data are synthesized to create a comprehensive overview of the incentive landscape in cloud computing.

  • Comparison and analysis: The identified incentive approaches are analyzed in terms of their theoretical foundations, motivations, enabling technologies, and potential benefits. A comparative analysis is conducted to highlight the strengths and limitations of each approach, as well as their applicability to different cloud computing scenarios.

  • Identification of challenges and solutions: The challenges associated with implementing incentive-driven cloud computing are identified through the literature review. Each challenge is examined in detail, and potential solutions that leverage artificial intelligence, game theory, blockchain, or other relevant technologies are discussed.

  • Discussion of findings: The synthesized information is discussed in the context of the broader cloud computing ecosystem. Implications for creating an economically sustainable cloud environment are considered, and insights are drawn from the reviewed literature to provide a comprehensive understanding of the incentive mechanisms' impact.

  • Ethical considerations: Since the study is based on existing published materials, ethical considerations related to participant privacy and informed consent are not applicable. Proper citation and attribution are ensured to maintain the integrity of the sources used.

Incentive approaches for the cloud ecosystem

Blockchain-enabled approaches

The blockchain functions as a distributed ledger that extends across the entire network. Through decentralized consensus, blockchains allow contracting without a trusted third party and enable joint validation of shared records. Therefore, blockchains can perform decentralized contract confirmations, significantly reducing operational costs and traffic at the central organization. Blockchain transactions are also effectively immutable, since every node within the network keeps a ledger of all transactions [19]. Furthermore, cryptographic methods ensure the integrity of the information blocks within the blockchain, so blockchains offer non-repudiation of communication. In addition, all contracts can be tracked by all users because every block carries a timestamp. In a blockchain, transactions are grouped into blocks, and each block’s header commits to the hash of its predecessor, covering all of that block’s headers and metadata. The chain starts with the genesis block, a block without a predecessor. Merkle trees are used to hash Ethereum and Bitcoin transactions. As illustrated in Fig. 2, a blockchain is a continuous chain of blocks, each linked to the block directly before it (referred to as the parent block) through the parent block’s hash value [26].

Fig. 2 An example of a blockchain with a continuous sequence of blocks

Figure 3 illustrates a block with two parts: the header and the body. The header includes the block version, parent block hash, Merkle tree root hash, timestamp, nonce, and nBits. The block version identifies the block validation rules being followed. The parent block hash is a 256-bit value identifying the previous block. The Merkle tree root hash summarizes the hash values of all transactions within the block. The nonce, a four-byte value, typically starts at 0 and is incremented with each hash calculation. The timestamp records the time in UTC elapsed since January 1, 1970. The nBits field encodes the target threshold that a valid block hash must satisfy. The body of the block contains a transaction counter and the transactions themselves. The blockchain uses asymmetric cryptography to validate the authenticity of transactions; in untrusted environments, digital signatures based on asymmetric cryptography are employed.
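
To make the block layout described above concrete, the following Python sketch models a simplified block header and its hash linkage. The field names follow the description above, but the BlockHeader class, the sha256 and merkle_root helpers, and the use of JSON serialization for hashing are illustrative assumptions rather than the structure of any particular blockchain implementation.

import hashlib
import json
import time
from dataclasses import dataclass
from typing import List


def sha256(data: bytes) -> str:
    """Return the hex-encoded SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()


def merkle_root(tx_hashes: List[str]) -> str:
    """Compute a simplified Merkle tree root from a list of transaction hashes."""
    if not tx_hashes:
        return sha256(b"")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]


@dataclass
class BlockHeader:
    """Illustrative block header with the fields described in the text."""
    version: int
    parent_hash: str        # 256-bit hash of the parent block
    merkle_root: str        # root hash summarizing all transactions in the block
    timestamp: int          # seconds since 1 January 1970 (UTC)
    nbits: int              # compact encoding of the validity target
    nonce: int = 0          # 4-byte counter incremented during hashing

    def hash(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return sha256(payload)


# Linking two blocks: the child header stores the parent header's hash.
genesis = BlockHeader(1, "0" * 64, merkle_root(["tx1", "tx2"]), int(time.time()), 0x1d00ffff)
child = BlockHeader(1, genesis.hash(), merkle_root(["tx3"]), int(time.time()), 0x1d00ffff)

Because each child header stores its parent’s hash, altering any earlier block changes every subsequent hash, which is what makes tampering evident.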

Fig. 3 Block design

Blockchain technology and smart contracts are causing significant disruptions in several engineering and computer science domains. Blockchain has the potential to revolutionize data centers in the cloud computing business. Its decentralized operating paradigm makes it a possible alternative to centralized cloud-based services. Due to its low costs and minimal administrative requirements, blockchain has the potential to become an essential element in fulfilling the needs of cloud systems. Current studies have prioritized the use of blockchain technology to ensure the security and trustworthiness of cloud-based services. The merging of blockchain with cloud computing poses many challenges:

  • The cloud is fully controlled from a central location and offers limited visibility and trust settings, whereas the blockchain was established based on the concept of decentralization. Due to legal and legislative constraints, complete elimination of centralization is unattainable. Therefore, a hybrid approach is necessary, where the cloud provider retains some authority while ensuring transparency and confidence with cloud customers.

  • Data stored in the cloud is safeguarded against unwanted access, yet it is openly available on the blockchain. Privacy concerns will therefore be decisive for the widespread adoption of blockchain-based cloud services.

  • Blockchain solutions naturally suffer from scalability issues, while cloud systems have the capacity to grow rapidly.

  • Cloud systems achieve reduced latency by leveraging edge computing. Blockchain, by contrast, is unsuitable for time-sensitive applications because of its high latency. To address the aforementioned issues, private and consortium blockchains have been suggested. Nevertheless, these methods diminish transparency and decentralization.

Notwithstanding the aforementioned concerns, the integration of blockchain technology with cloud-based applications is imperative, and the question remains how to achieve compatibility between blockchain and cloud technologies. Cloud computing’s appeal as a business model is enhanced by the availability of outsourcing services. However, there is a mutual lack of confidence between consumers and outsourcing service providers. Zhang and Deng [14] introduced BCPay, a blockchain-based fair payment framework for outsourcing services in cloud computing that achieves soundness and robust fairness; it is also highly efficient in terms of transaction volume and computational cost. Velmurugadass, Dhanasekaran [27] developed a cloud-based software-defined network (SDN) that monitors operations on particular data and evidence. The SDN controller utilizes a blockchain approach to safeguard the evidence obtained through data and user signatures. The investigator performs report production, review of evidence, retrieval of evidence, and identification in accordance with the logical graph of evidence (LGoE).

Wilczyński and Kołodziej [28] introduced a new cloud scheduler model based on blockchain technology. The model improves the efficiency of preparing schedules, and the simulator returns a schedule with a better makespan than earlier individual scheduling methods. Li, Liang [29] proposed a blockchain-enabled, robust, cost-aware data caching strategy that minimizes cache data tampering; a new exchange scheme is presented to address trust issues between consumers, sellers, and agents. Rahman, Islam [30] proposed merging blockchain and SDN into a cloud computing platform to address security risks associated with cloud computing. Utilizing a distributed blockchain network allows data to be stored and transmitted securely, providing scalability, flexibility, and privacy. The distributed nature of the blockchain also ensures that data remains secure and private while providing the integrity necessary to maintain the system.

An audit log is vital for monitoring users’ and administrators’ activities, but it can be vulnerable to manipulation if attackers gain access to the system. Attackers can modify and delete log entries or even create false entries to cover their tracks. Securing these records from unauthorized access is essential to managing audit logs. To prevent unauthorized access, the log must be kept in a secure location with restricted access, and all access to the log should be logged and monitored. Additionally, the log should be regularly backed up and stored in a secure offsite location so that any modifications or deletions can be detected and remedied. Ali, Khan [31] proposed a blockchain-based log management system to cope with several of these limitations; it outperformed previous models in functionality and performance. Xu, Chen [32] present a blockchain-based technique for controlling cloudlets in a multimedia workflow. It utilizes NSGA-III to enhance the quality of service for multimedia applications and ELECTRE to address the decision-making requirements for optimal scheduling.
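
As a simple illustration of why hash chaining makes an audit log tamper-evident, which is the general idea behind blockchain-based log management rather than the specific system of Ali, Khan [31], consider the following Python sketch; the HashChainedLog class, its entry fields, and the verification routine are hypothetical.

import hashlib
import json
import time
from typing import Dict, List


class HashChainedLog:
    """Append-only log where each entry commits to the hash of the previous one."""

    def __init__(self) -> None:
        self.entries: List[Dict] = []

    def append(self, actor: str, action: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"actor": actor, "action": action,
                 "timestamp": time.time(), "prev_hash": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; any modified or deleted entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


log = HashChainedLog()
log.append("admin", "created VM vm-42")
log.append("user-7", "downloaded report")
assert log.verify()                      # chain is intact
log.entries[0]["action"] = "nothing"     # simulate tampering
assert not log.verify()                  # tampering is detected

Anchoring the most recent hash on a blockchain, as the surveyed systems do, additionally prevents an attacker from silently rebuilding the whole chain.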

Intelligent cloud computing in the logistics industry is increasingly critical to economic development worldwide. Nevertheless, data confidentiality during the operation procedure prevents information sharing. To address these challenges, Fu, Hu [5] propose a hyperconnected trunk logistics alliance (HTLA) system that incorporates big data and blockchain technology. This framework improves service quality, enforces administrative control, and facilitates stakeholders’ cooperation. Over the last few decades, cloud computing has emerged as one of the fastest-growing technologies affecting traditional application operations. Edge computing can solve the problem of a centralized cloud lacking the dispersion required for collaborative applications. To tackle the trust and incentive model issue, Zhou, Shi [33] proposed ALLSTAR, a blockchain technology framework that enhances trust between cloud and edge resources to facilitate seamless integration. The authors examine the challenges in using distributed cloud and edge resources and propose a new business model based on permissioned blockchains.

Game theory-enabled approaches

Game theory describes the strategies employed by different players in a given game. Generally, a game involves the interaction of several players in accordance with a set of rules. The different strategies employed by each player are based on their individual objectives, knowledge, and preferences. Game theory allows players to analyze their strategies in order to maximize their chances of success in the game. A player may be an individual, a machine, a company, or an association. According to game theory, the actions of the current player are not the only factor that determines the outcome of the game; the other players’ actions are also taken into account, which makes the approach highly scalable and versatile. The outcome is also shaped by the expected payoff each player calculates before making a decision, representing that player’s satisfaction. Consequently, players take actions and make decisions in order to maximize their reward [34].

Game models can be divided into three categories: static games, cooperative games, and noncooperative games. Cooperative game models involve multiple players working together towards the same objective. To illustrate such a game, Akkarajitsakul, Hossain [35] outlined a novel approach for mobile-based networks involving the formation of coalitions among a variety of mobile devices, each joining or leaving according to its reward. Noncooperative game models involve players working independently to achieve their own objectives; such games are commonly analyzed using the Nash equilibrium, which has gained wide acceptance as a solution concept in recent years. Static game models involve players making decisions without considering the decisions of other players. The static prisoner’s dilemma and the static zero-sum game are examples of this type of game. In the static prisoner’s dilemma, each participant chooses either to cooperate with the other or to defect. If both make the same choice, both receive the same penalty: a light one for mutual cooperation and a heavier one for mutual defection. If one defects while the other cooperates, the defector receives the mildest penalty and the cooperator the harshest one. In a static zero-sum game, the interests of the players are directly opposed; one player’s gain is the other player’s loss, and vice versa.
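
The static prisoner’s dilemma described above can be made concrete with a small Python example that encodes a payoff matrix and checks by brute force which strategy profiles are Nash equilibria, i.e., profiles from which no player can gain by deviating unilaterally. The specific payoff values are illustrative.

from itertools import product

# Payoffs (player1, player2) for the prisoner's dilemma; higher is better.
# C = cooperate (stay silent), D = defect (betray the other player).
PAYOFFS = {
    ("C", "C"): (-1, -1),   # both cooperate: light sentence each
    ("C", "D"): (-3,  0),   # cooperator punished hardest, defector goes free
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),   # both defect: moderate sentence each
}
STRATEGIES = ("C", "D")


def is_nash(profile):
    """A profile is a Nash equilibrium if no player benefits from deviating alone."""
    s1, s2 = profile
    u1, u2 = PAYOFFS[profile]
    best_dev_1 = max(PAYOFFS[(d, s2)][0] for d in STRATEGIES)
    best_dev_2 = max(PAYOFFS[(s1, d)][1] for d in STRATEGIES)
    return u1 >= best_dev_1 and u2 >= best_dev_2


equilibria = [p for p in product(STRATEGIES, STRATEGIES) if is_nash(p)]
print(equilibria)   # [('D', 'D')]: mutual defection is the unique Nash equilibrium

The same brute-force check generalizes to the cloud settings discussed below, where strategies are prices, offloading choices, or resource requests rather than cooperate/defect.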

A promising trend in cloud computing is the provision of numerous resources on a charge-per-use basis. The services provided by the cloud can be categorized into three categories: software, platform, and infrastructure. Cloud service providers are usually concerned with maximizing their revenues, while cloud users expect the highest level of service quality. This can result in conflict between cloud providers and consumers, and the admission of requests must satisfy both. Baranwal and Vidyarthi [36] apply game theory to manage cloud requests. They model the competition between cloud providers and consumers using a game-theoretic approach, which allows them to identify strategies that optimize the satisfaction of both parties, leading to higher-quality cloud services and improved customer satisfaction. The model’s performance has been analyzed through simulation using the CloudSim simulator, and the findings suggest that it could be implemented in cloud middleware in the future.

The increasing number of Internet-connected devices necessitates more effective approaches to determine the most appropriate service combination from the variety of services available in edge clouds, as well as to compose complex IoT workflows. Therefore, cloud-enabled fog computing demands an infrastructure for managing, composing, and providing IoT services for IoT-cloud integration. Previous research did not consider certain characteristics of fog computing, including efficient resource allocation, reduced energy consumption, and low latency. Emami Khansari and Sharifian [37] created a cloud-based platform for IoT service selection and composition in fog computing. This platform allows selecting the most suitable IoT service from a fog-enabled cloud platform to improve quality-of-service metrics, including distributed resource utilization, latency, and bandwidth utilization, which are all important metrics to consider when selecting an IoT service. The research proposes an evaporation-based water cycle algorithm and multi-objective evolutionary game theory to optimize the IoT workflow in cloud-assisted fog computing environments with regard to CPU utilization, power consumption, and latency. By optimizing the IoT workflow, the proposed algorithm can reduce power consumption and latency while ensuring that the available resources are used efficiently. This helps the system meet user demands while minimizing the overall cost.

Task scheduling has grown to be a hot research topic due to the popularity of cloud computing technologies. Compared to traditional distributed systems, the task scheduling problem in cloud computing is more complicated. To address this complexity, research has been conducted to develop efficient scheduling algorithms that can adapt to the ever-changing demands of cloud computing systems. Using game theory as a mathematical tool, Yang, Jiang [38] created a simplified model for cloud task scheduling. The rate allocation strategy on each node is computed using a game strategy in accordance with the cooperative game model. An analysis of experimental results shows that the suggested algorithm provides a better optimization result.

The security of cloud computing is complicated by the fact that each service model uses a different infrastructure. Cloud computing systems that change their state very rapidly cannot be assessed using current security risk assessment models. A scalable security risk assessment model for cloud computing was created by Furuncu and Sogukpinar [39] as a solution to this issue. Using this method, any risk in the system is assessed to determine what role the cloud provider or system’s tenant should play in resolving it.

Mobile edge computing (MEC) allows resource-constrained intelligent mobile devices to perform computationally intensive tasks. Mobile devices may provide customers with intelligent services that have lower latency and energy consumption than cloud computing, achieved by transferring jobs to the edge of the network, closer to the consumers. Although mobile edge computing outperforms cloud computing in terms of low latency and reliability, it has not yet completely replaced cloud computing. Aiming to maximize the utilization of computing resources and leverage parallel computing, Xu, Xie [40] explore the possibility of offloading among several mobile devices in the context of cloud-edge collaboration. Using game theory, they first determine the optimal offloading decisions between terminals and the edge, then the optimal offloading decisions between the edge and the cloud for terminals that choose to offload to the edge, and finally arrive at optimal offloading across the three sides of the cloud-edge-terminal system. An algorithm for computation offloading is presented based on this concept. Compared to the traditional method, this technique typically improves performance by 32% under the time delay constraint.

In addition to building their own IT infrastructure, more and more organizations purchase cloud resources for their daily business operations as cloud technology improves and the economy expands. Therefore, it is imperative to comprehend the economics of cloud computing. Liu, Xiao [34] concentrate on private idle computing resources owned by different organizations, with the intention of forming an ad hoc network of cloud service providers that sell these resources to cloud users. In this setting, an organization can both sell its idle computing resources in an ad hoc manner and meet its own demands. Organizations, as providers, are naturally inclined to maximize their profits by adjusting sales prices and business costs. Because the amount of idle computing resources is uncertain, dynamic pricing is challenging. The authors formulate the issue as a noncooperative game between numerous organizations and examine it in the context of game theory. Each player’s profit is represented by a utility function, and the players select request and sales service strategies to maximize this utility function. The research demonstrates that the game has a Nash equilibrium, and an iterative proximal algorithm (IPA) is proposed for computing it. The authors found that the IPA converges to Nash equilibrium solutions when reasonable conditions are met, consistent with the theoretical proofs. Experimental findings demonstrate that the algorithm efficiently reaches a stable state by determining optimal service request strategies and marketing tactics for each organization.
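
The following Python sketch illustrates the general idea of searching for a Nash equilibrium through iterated best responses in a simple price-competition game among providers of idle capacity. It is not the iterative proximal algorithm of Liu, Xiao [34]; the linear demand model, the parameters A, B, and G, and the grid search over prices are invented purely for illustration.

import numpy as np

# Toy price-competition game: each provider i sells idle capacity at price p_i.
# Demand for provider i falls in its own price and rises in the rivals' average
# price (a linear model chosen purely for illustration), and cost is constant.
N, COST = 3, 0.2
A, B, G = 10.0, 4.0, 1.5          # base demand, own-price and cross-price slopes


def profit(i, prices):
    rivals_avg = (prices.sum() - prices[i]) / (N - 1)
    demand = max(A - B * prices[i] + G * rivals_avg, 0.0)
    return (prices[i] - COST) * demand


def best_response(i, prices, grid=np.linspace(0.0, 5.0, 501)):
    """Provider i picks the price on a grid that maximizes its own profit."""
    trial = prices.copy()
    best_p, best_u = prices[i], -np.inf
    for p in grid:
        trial[i] = p
        u = profit(i, trial)
        if u > best_u:
            best_p, best_u = p, u
    return best_p


prices = np.full(N, 1.0)
for _ in range(100):               # iterate best responses until prices stabilize
    new_prices = np.array([best_response(i, prices) for i in range(N)])
    if np.max(np.abs(new_prices - prices)) < 1e-6:
        break
    prices = new_prices

print("approximate equilibrium prices:", np.round(prices, 3))

In this symmetric toy game the iteration stabilizes within a few rounds; equilibrium-seeking algorithms such as the IPA aim for the same kind of convergence under far more realistic demand and cost models.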

As cloud computing has grown in popularity, it brings many advantages in terms of accessibility, scalability, and cost-effectiveness. Still, it also poses risks and vulnerabilities in terms of network security. A safer and more secure network structure can be achieved by identifying vulnerable data centers that are susceptible to attacks. Hosseini and Vakili [41] examine the vulnerability of data centers in cloud computing infrastructure to malware. Using game theory as a mathematical tool, they develop a model for identifying vulnerable data centers in cloud computing based on an analysis of the cloud computing system.

Cloud computing provides a scalable method of consuming services beyond small systems through virtualization. There may be a need to group virtual machines together in a cloud that offers resources such as processors, memory, and hard disks. In order to manage cloud resources cost-effectively, a strategy is needed that minimizes waste while configuring services ahead of time. Based on the principles of coalition formation and the uncertainty principle of game theory, Pillai and Rao [42] propose a resource allocation mechanism for machines in the cloud. Their study compares the results of applying this mechanism with existing cloud-based resource allocation techniques. Allocating resources by forming coalitions of machines in the cloud also results in better utilization of resources and higher levels of satisfaction with requests.

Artificial intelligence-enabled approaches

With AI, the cloud platform can monitor the load and dynamically adjust it to improve QoS parameters, optimize power consumption, or reduce infrastructure costs. AI encompasses a diverse array of techniques, including machine learning, reinforcement learning, and planning [43]. Data-intensive tasks in today’s world require enhanced intelligence to ensure optimal scheduling decisions, virtual machine migrations, and similar operations under various constraints [44]. These constraints can include bandwidth limitations, computation capabilities, and service level agreements (SLAs) or deadline requirements. Many studies have attempted to improve the performance of fog and cloud systems using AI techniques [45]. Several publications explore the most efficient scheduling strategies for cloud computing, virtualization techniques, and distribution systems. The objective functions are optimized using techniques such as genetic algorithms, supervised machine learning, or even deep reinforcement learning. AI offers a profitable opportunity to enhance massive systems with vast amounts of data in a straightforward, efficient, and cost-effective way by automating decision-making instead of depending on human-designed rules.
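
As a minimal illustration of automating such decisions rather than relying on human-designed rules, the sketch below scores candidate hosts for a VM placement by combining an energy cost with an SLA-risk penalty under a capacity constraint. The Host fields, the weights w_energy and w_sla, and the linear cost model are assumptions; in the AI-driven approaches surveyed here, a learned model would replace these hand-written estimates.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Host:
    name: str
    cpu_free: float        # fraction of CPU still available (0..1)
    power_per_util: float  # marginal watts per unit of added utilization
    sla_risk: float        # predicted probability of an SLA violation (0..1)


def place_vm(cpu_demand: float, hosts: List[Host],
             w_energy: float = 0.5, w_sla: float = 2.0) -> Optional[Host]:
    """Pick the feasible host with the lowest combined energy/SLA cost."""
    best, best_cost = None, float("inf")
    for h in hosts:
        if h.cpu_free < cpu_demand:            # capacity constraint
            continue
        energy_cost = w_energy * h.power_per_util * cpu_demand
        sla_cost = w_sla * h.sla_risk
        cost = energy_cost + sla_cost
        if cost < best_cost:
            best, best_cost = h, cost
    return best


hosts = [Host("pm-1", 0.30, 120.0, 0.05),
         Host("pm-2", 0.60, 150.0, 0.01),
         Host("pm-3", 0.10, 100.0, 0.02)]
chosen = place_vm(cpu_demand=0.25, hosts=hosts)
print(chosen.name if chosen else "no feasible host")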

The combination of IoT and multimedia applications with traditional cloud and edge computing architectures has changed the nature of centrally managed networks dramatically [46]. These centralized cloud and edge computing networks face new routing and security challenges. The vulnerability of these networks can be attributed to cyberattacks and malicious actions. Malicious activities result in link failures, incorrect forwarding decisions, and diverted paths [47]. The detection of these activities requires an effective method for detecting malicious switches that divert data traffic and degrade the performance of the network. The trust of edge devices is another aspect to consider, as we always need reliable devices to transmit data from IoT devices to SDN networks. Qureshi, Jeon [48] proposed an SDN-based anomaly detection framework for edge computing-based IoT network architectures. In addition, they developed an anomaly detection system that detects the behavior of edge computing and SDN devices. They proposed establishing a trusted authority for edge computing to ensure trust in edge devices. For the specified trusted domain, the edge device functions as a certificate authority. To overcome the overhead associated with edge devices, the edge node validates the key only once, and once the trust is established, communication is conducted with locally generated certificates.

Due to the widespread adoption of cloud data centers, hosting application services in the cloud has become a common practice. The resource requirements of modern applications have also increased sharply in data-intensive industries. Consequently, an increased number of cloud servers are being provisioned, resulting in more energy usage, which raises sustainability issues. Traditional strategies and algorithms for handling cloud resources efficiently are limited in their ability to address scalability and adaptability issues. Most existing research fails to capture the dependency between host characteristics, resources consumed, and scheduling decisions. This results in limited scalability and higher computing resource requirements, especially in situations with nonstationary resource demands. To overcome these shortcomings, Tuli, Gill [49] introduce HUNTER, an AI-based holistic resource allocation approach for sustainable cloud computing. The proposed approach formulates energy conservation in data centers as a multi-objective scheduling problem based on three key models: energy, cooling, and thermal. Optimal scheduling decisions are generated by using gated graph convolution networks to approximate the quality of service of the system. The gated graph convolution networks used by HUNTER capture the dependencies between tasks in a system and make scheduling decisions based on those dependencies. This allows for more efficient scheduling decisions that result in lower energy consumption and fewer SLA violations, leading to an overall better quality of service.

Deep neural networks (DNNs) are integral to an array of cutting-edge AI applications. DNNs are powerful tools for recognizing patterns in large datasets and making accurate predictions. They are used in applications like image recognition, natural language processing, and autonomous vehicles. In traditional training techniques for DNNs, cloud computing centers or central servers are typically used for centralized learning. However, this process is generally resource-intensive and time-consuming because of the need to transmit large amounts of data samples from the edge device to the remote cloud. In an effort to address these shortcomings, Liu, Chen [50] propose optimizing DNN learning using mobile-edge-cloud computing. The authors propose HierTrain, a hierarchical edge AI learning framework that trains DNNs efficiently on a hierarchical mobile-edge-cloud architecture. This parallel computing technique takes advantage of the computing resources at different levels, such as edge devices, edge servers, and cloud centers, to distribute the training process. It also exploits the hierarchical structure of the DNN model layers as well as the data samples, allowing the training process to be efficiently scheduled and balanced. As a result, HierTrain can achieve significant speedups over cloud-based hierarchical training.
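
The tiered idea behind such hierarchical training can be sketched as follows: an ordered stack of layers is partitioned across the device, edge, and cloud, and the forward pass hands activations from one tier to the next. This is not HierTrain’s actual hybrid parallelism; the partition_layers and tiered_forward helpers, the toy scalar layers, and the split points are hypothetical.

from typing import Callable, Dict, List

Layer = Callable[[float], float]   # toy "layer": a function on a scalar activation


def partition_layers(layers: List[Layer], device_end: int, edge_end: int) -> Dict[str, List[Layer]]:
    """Assign the ordered DNN layers to the device, edge, and cloud tiers."""
    return {
        "device": layers[:device_end],        # shallow layers stay on the mobile device
        "edge": layers[device_end:edge_end],  # middle layers run on the edge server
        "cloud": layers[edge_end:],           # deep layers run in the cloud
    }


def tiered_forward(x: float, stages: Dict[str, List[Layer]]) -> float:
    """Run the forward pass tier by tier, simulating activation hand-off."""
    for tier in ("device", "edge", "cloud"):
        for layer in stages[tier]:
            x = layer(x)
        # in a real system the activation would be transmitted to the next tier here
    return x


# Six toy layers; the split points (2, 4) are arbitrary illustration values.
layers: List[Layer] = [lambda v, k=k: v * 0.9 + k for k in range(6)]
stages = partition_layers(layers, device_end=2, edge_end=4)
print(tiered_forward(1.0, stages))

Choosing the split points so that only compact intermediate activations, rather than raw data samples, cross the network is what reduces the communication burden in this style of training.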

As a long-standing issue in distributed and parallel computing, workflow scheduling aims to take advantage of computing resources to meet the requirements of the users. It has been suggested that scheduling methods can take advantage of the rapid response rates of edge computing platforms to increase the QoS of applications. Nevertheless, workflow scheduling in mobile edge-cloud environments poses significant challenges because of the heterogeneity of computational resources, changing mobile device latencies, and the variability of workloads. A long-term optimization strategy with efficient modeling of the QoS targets is essential to resolve such challenges. For scheduling workflow applications in mobile edge-cloud computing environments, Tuli, Casale [51] present Monte Carlo learning using deep surrogate (MCDS) models. MCDS uses reinforcement learning to train a deep learning model that predicts the performance of the scheduled workload, which can then be used to determine the best scheduling strategy for the environment. To optimize scheduling decisions robustly, MCDS employs a tree-based search method and a surrogate model based on deep neural networks to evaluate the long-term effects of immediate changes on QoS. Based on experiments conducted on real-world and simulated edge-cloud scenarios, MCDS is shown to be more efficient than traditional approaches with regard to cost, SLA violations, response time, and energy consumption.
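
To illustrate the general pattern of evaluating scheduling decisions with Monte Carlo rollouts scored by a surrogate model, which is the broad idea behind MCDS rather than its actual tree search or neural surrogate, the following sketch picks the placement whose random rollouts yield the best average predicted QoS; the ACTIONS list, the surrogate_qos function, and the simulate transition model are invented placeholders.

import random

ACTIONS = ["edge-0", "edge-1", "cloud"]   # candidate placements for the next task


def surrogate_qos(state):
    """Placeholder surrogate: in MCDS this would be a trained deep model."""
    latency, energy = state
    return -(latency + 0.1 * energy)       # higher is better


def simulate(state, action):
    """Toy transition model: offloading to the cloud costs latency, saves energy."""
    latency, energy = state
    if action == "cloud":
        return latency + random.uniform(20, 40), energy + random.uniform(1, 3)
    return latency + random.uniform(5, 15), energy + random.uniform(5, 10)


def choose_action(state, horizon=5, rollouts=200):
    """Monte Carlo evaluation: average surrogate QoS over random rollouts."""
    scores = {}
    for first_action in ACTIONS:
        total = 0.0
        for _ in range(rollouts):
            s = simulate(state, first_action)
            for _ in range(horizon - 1):            # finish the rollout randomly
                s = simulate(s, random.choice(ACTIONS))
            total += surrogate_qos(s)
        scores[first_action] = total / rollouts
    return max(scores, key=scores.get)


print(choose_action(state=(0.0, 0.0)))

The appeal of a surrogate in this setting is that each rollout costs only a model evaluation instead of a real deployment, so many candidate decisions can be compared before committing to one.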

Results and discussion

In the previous section, we explored three main categories of approaches: blockchain-enabled approaches, game theory-enabled approaches, and AI-enabled approaches. Each category offers unique advantages and challenges for enhancing cloud services and addressing various concerns in the cloud market. Table 1 summarizes the key advantages and challenges associated with each category in cloud computing. The analysis provides insights into the distinctive features of each approach, emphasizing their contributions to enhancing cloud services and addressing pertinent challenges in the evolving cloud market.

Table 1 Overview of incentive approaches in cloud computing

In the context of blockchain-enabled approaches, we found that integrating blockchain with cloud computing can improve transparency, data privacy, and contract confirmations. Blockchain’s decentralized ledger allows for transparent and secure contract confirmations, reducing operational costs in the cloud ecosystem. Additionally, data privacy and access control can be enhanced by leveraging blockchain to ensure permissioned and transparent data access. Smart contracts enable fair payment frameworks, promoting robust and fair transactions in cloud outsourcing services. Furthermore, blockchain can be employed for secure log management, ensuring integrity and preventing unauthorized access to audit logs. The integration of blockchain technology also enhances trust between cloud and edge resources, facilitating seamless integration in distributed cloud environments.

Game theory-enabled approaches offer a strategic perspective for managing cloud requests, optimizing resource allocation, and addressing competition among cloud providers and consumers. By applying game theory to manage cloud requests, cloud providers can optimize the satisfaction of both parties, resulting in higher-quality cloud services and improved customer satisfaction. Coalition formation among virtual machines in cloud environments can optimize resource utilization and reduce waste. Game theory also enables the determination of dynamic pricing strategies for cloud service providers, considering the uncertainty of demand and resource availability. Additionally, game theory-based anomaly detection frameworks can enhance the security of edge computing-based IoT networks by identifying malicious devices and switches.

AI-enabled approaches provide sophisticated techniques for optimizing cloud and edge computing systems. Holistic resource management models, such as HUNTER, enable sustainable cloud computing by optimizing energy conservation and resource scheduling in data centers. Hierarchical edge AI learning models, like HierTrain, efficiently train deep neural networks by utilizing computing resources at different levels, resulting in significant speedups over traditional cloud-based training. In edge-cloud environments, MCDS models can optimize workflow scheduling decisions, improving QoS metrics and efficiency.

The reviewed incentive approaches that integrate blockchain, game theory, and AI in the cloud ecosystem hold great promise for enhancing cloud services. However, several challenges and integration issues need to be addressed to ensure their successful implementation and widespread adoption.

  • Hybrid approaches: Integrating multiple technologies such as blockchain, game theory, and AI requires carefully designed hybrid strategies. These strategies must strike a balance between decentralization, transparency, and scalability while also addressing privacy and performance concerns. Finding the right combination of these technologies is essential to create robust and efficient incentive mechanisms in the cloud ecosystem.

  • Scalability and security: Blockchain-based solutions, while offering enhanced security and transparency, can face scalability challenges as the size of the network and the number of transactions increase. Ensuring that blockchain technology can handle the growing demands of the cloud services market is crucial. Additionally, AI-based models must address data privacy and security concerns, especially when dealing with sensitive data during training processes for deep learning models.

  • Standardization: For these incentive approaches to be widely adopted, there needs to be industry-wide standardization and collaboration among cloud service providers. Standardizing protocols and frameworks will promote interoperability and enable seamless integration of different incentive mechanisms. Moreover, fostering trust among users is essential to drive adoption, as users must feel confident that their data and transactions are secure and transparent.

  • Regulation and compliance: As blockchain, game theory, and AI become integral parts of the cloud ecosystem, there will be a need for clear regulations and compliance standards. Addressing legal and ethical considerations, particularly related to data privacy and usage, will be critical to ensure responsible and ethical implementation of these technologies.

  • Cost and complexity: Implementing these advanced incentive approaches may involve additional costs and complexity for cloud service providers. It will be necessary to strike a balance between the benefits they provide and the resources required for their integration and maintenance.

  • Education and awareness: To facilitate the adoption of these innovative approaches, there is a need for education and awareness campaigns to familiarize cloud service providers and users with the benefits and workings of blockchain, game theory, and AI in the context of cloud computing.

  • Ethical considerations: As these technologies impact decision-making processes in the cloud ecosystem, ensuring ethical considerations are accounted for becomes crucial. Avoiding biases, ensuring fairness, and maintaining transparency are essential aspects that need to be addressed in designing and implementing these incentive mechanisms.

Conclusions

As one of the leading computing paradigms, cloud computing has brought centralized computing services to the market. Cloud computing provides software, platform, and infrastructure services across shared delivery networks. This allows businesses to access computing resources on demand, scaling up and down as needed, without purchasing, maintaining, and managing the underlying hardware and software. It also allows businesses to quickly launch new products and services to the market since they do not need to build and maintain the underlying infrastructure. The present study clarified trends in cloud computing and discussed the impact of incentive approaches, specifically AI, game theory, and blockchain. Key findings highlight the potential of these approaches to address challenges and promote a sustainable cloud ecosystem. Looking ahead, future research should explore hybrid incentive strategies, tackle scalability and security concerns, promote standardization and collaboration, navigate regulatory landscapes, manage costs and complexity, and prioritize education and awareness initiatives. The cloud computing landscape can evolve further by addressing these avenues, offering enhanced efficiency and user-centric services.

Availability of data and materials

Data will be available on request.

Abbreviations

IoT: Internet of things
AI: Artificial intelligence
PM: Physical machines
VM: Virtual machines
VMM: Virtual machine monitor
OS: Operating system
SDN: Software-defined network
LGoE: Logical graph of evidence
HTLA: Hyperconnected trunk logistics alliance
MEC: Mobile edge computing
SLA: Service level agreements
DNN: Deep neural networks
MCDS: Monte Carlo learning using deep surrogate

References

  1. Pourghebleh B et al (2021) The importance of nature-inspired meta-heuristic algorithms for solving virtual machine consolidation problem in cloud environments. Cluster Computing 1–24
  2. Hayyolalam V et al (2022) Single-objective service composition methods in cloud manufacturing systems: recent techniques, classification, and future trends. Concurr Comput Pract Exp 34(5):e6698
  3. Vahideh H et al (2019) Exploring the state-of-the-art service composition approaches in cloud manufacturing systems to enhance upcoming techniques. Int J Adv Manufact Technol 105(1–4):471–498
  4. Kunwar V et al (2018) Load balancing in cloud—a systematic review. Big Data Analytics 583–593
  5. Fu D et al (2021) An intelligent cloud computing of trunk logistics alliance based on blockchain and big data. J Supercomput 77(12):13863–13878
  6. Amini Motlagh AA, Movaghar A, Rahmani AM (2020) Task scheduling mechanisms in cloud computing: a systematic review. Int J Commun Syst 33(6)
  7. Anupong W et al (2023) Deep learning algorithms were used to generate photovoltaic renewable energy in saline water analysis via an oxidation process. Water Reuse 13(1):68–81
  8. Abualigah L et al (2022) Aquila optimizer based PSO swarm intelligence for IoT task scheduling application in cloud computing. In: Integrating Meta-Heuristics and Machine Learning for Real-World Optimization Problems. Springer, pp 481–497
  9. Saeidi S et al (2023) Factors affecting public transportation use during pandemic: an integrated approach of technology acceptance model and theory of planned behavior. Tehnički glasnik 18:1–12
  10. Abdul Samad SR et al (2023) Analysis of the performance impact of fine-tuned machine learning model for phishing URL detection. Electronics 12(7):1642
  11. Priya PS et al (2022) The relationship between cloud computing and deep learning towards organizational commitment. In: 2022 2nd International Conference on Innovative Practices in Technology and Management (ICIPTM). IEEE
  12. Gupta A, Dimri P, Bhatt R (2021) An optimized approach for virtual machine live migration in cloud computing environment. In: Evolutionary Computing and Mobile Sustainable Networks. Springer, pp 559–568
  13. Prasanna Kumar K, Kousalya K (2020) Amelioration of task scheduling in cloud computing using crow search algorithm. Neural Comput Appl 32(10):5901–5907
  14. Hosseinzadeh M et al (2020) A hybrid service selection and composition model for cloud-edge computing in the Internet of Things. IEEE Access 8:85939–85949
  15. Pourghebleh B, Navimipour NJ (2017) Data aggregation mechanisms in the Internet of things: a systematic review of the literature and recommendations for future research. J Netw Comput Appl 97:23–34
  16. Zandi J, Afooshteh AN, Ghassemian M (2018) Implementation and analysis of a novel low power and portable energy measurement tool for wireless sensor nodes. In: Iranian Conference on Electrical Engineering (ICEE). IEEE
  17. Pourghebleh B, Wakil K, Navimipour NJ (2019) A comprehensive study on the trust management techniques in the Internet of Things. IEEE Internet Things J 6(6):9326–9337
  18. Pourghebleh B, Hayyolalam V, Anvigh AA (2020) Service discovery in the Internet of Things: review of current trends and research challenges. Wireless Netw 26(7):5371–5391
  19. Kamalov F et al (2023) Internet of medical things privacy and security: challenges, solutions, and future trends from a new perspective. Sustainability 15(4):3317
  20. Vairachilai S et al (2022) Body sensor 5G networks utilising deep learning architectures for emotion detection based on EEG signal processing. Optik 170469
  21. Bolhassani M, Oksuz I (2021) Semi-supervised segmentation of multi-vendor and multi-center cardiac MRI. In: 2021 29th Signal Processing and Communications Applications Conference (SIU). IEEE
  22. Rajput SP et al (2022) Using machine learning architecture to optimize and model the treatment process for saline water level analysis. J Water Reuse Desalination
  23. Omidi A et al (2024) Unsupervised domain adaptation of MRI skull-stripping trained on adult data to newborns. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision
  24. Chekuri S et al (2023) Integrated digital library system for long documents and their elements. In: 2023 ACM/IEEE Joint Conference on Digital Libraries (JCDL). IEEE
  25. Pazouki S, Haghifam MR (2021) Optimal planning and scheduling of smart homes’ energy hubs. Int Trans Electr Energy Syst 31(9):e12986
  26. Kumar P et al (2023) A blockchain-orchestrated deep learning approach for secure data transmission in IoT-enabled healthcare system. J Parallel Distribut Comput 172:69–83
  27. Velmurugadass P et al (2021) Enhancing blockchain security in cloud computing with IoT environment using ECIES and cryptography hash algorithm. Mater Today Proceed 37:2653–2659
  28. Wilczyński A, Kołodziej J (2020) Modelling and simulation of security-aware task scheduling in cloud computing based on blockchain technology. Simul Model Pract Theory 99:102038
  29. Li C et al (2022) Blockchain-based data trading in edge-cloud computing environment. Inf Process Manage 59(1):102786
  30. Rahman A et al (2022) Towards a blockchain-SDN-based secure architecture for cloud computing in smart industrial IoT. Digital Communications and Networks
  31. Ali A et al (2022) BCALS: Blockchain-based secure log management system for cloud computing. Transact Emerg Telecommun Technol 33(4):e4272
  32. Xu X et al (2020) Blockchain-based cloudlet management for multimedia workflow in mobile cloud computing. Multimedia Tools Appl 79:9819–9844
  33. Zhou H et al (2021) Building a blockchain-based decentralized ecosystem for cloud and edge computing: an ALLSTAR approach and empirical study. Peer-to-Peer Network Appl 14(6):3578–3594
  34. Liu G et al (2020) Game theory-based optimization of distributed idle computing resources in cloud environments. Theoret Comput Sci 806:468–488
  35. Akkarajitsakul K, Hossain E, Niyato D (2012) Cooperative packet delivery in hybrid wireless mobile networks: a coalitional game approach. IEEE Trans Mob Comput 12(5):840–854
  36. Baranwal G, Vidyarthi DP (2016) Admission control in cloud computing using game theory. J Supercomput 72:317–346
  37. Emami Khansari M, Sharifian S (2020) A modified water cycle evolutionary game theory algorithm to utilize QoS for IoT services in cloud-assisted fog computing environments. J Supercomput 76(7):5578–5608
  38. Yang J et al (2020) A task scheduling algorithm considering game theory designed for energy management in cloud computing. Futur Gener Comput Syst 105:985–992
  39. Furuncu E, Sogukpinar I (2015) Scalable risk assessment method for cloud computing using game theory (CCRAM). Comput Stand Inter 38:44–50
  40. Xu F et al (2022) Two-stage computing offloading algorithm in cloud-edge collaborative scenarios based on game theory. Comput Electr Eng 97:107624
  41. Hosseini S, Vakili R (2019) Game theory approach for detecting vulnerable data centers in cloud computing network. Int J Commun Syst 32(8):e3938
  42. Pillai PS, Rao S (2014) Resource allocation in cloud computing using the uncertainty principle of game theory. IEEE Syst J 10(2):637–648
  43. Monjezi V et al (2023) Information-theoretic testing and debugging of fairness defects in deep neural networks. arXiv preprint arXiv:2304.04199, pp 1571–1582
  44. Jery AE et al (2023) Experimental investigation and proposal of artificial neural network models of lead and cadmium heavy metal ion removal from water using porous nanomaterials. Sustainability 15(19):14183
  45. Mohseni M, Amirghafouri F, Pourghebleh B (2022) CEDAR: a cluster-based energy-aware data aggregation routing protocol in the internet of things using capuchin search algorithm and fuzzy logic. Peer-to-Peer Networking and Applications 1–21
  46. Pourghebleh B, Hayyolalam V (2019) A comprehensive and systematic review of the load balancing mechanisms in the Internet of Things. Cluster Comput 1–21
  47. Pourghebleh B et al (2022) A roadmap towards energy-efficient data fusion methods in the Internet of Things. Concurrency and Computation: Practice and Experience e6959
  48. Qureshi KN, Jeon G, Piccialli F (2021) Anomaly detection and trust authority in artificial intelligence and cloud computing. Comput Netw 184:107647
  49. Tuli S et al (2022) HUNTER: AI based holistic resource management for sustainable cloud computing. J Syst Softw 184:111124
  50. Liu D et al (2020) HierTrain: fast hierarchical edge AI learning with hybrid parallelism in mobile-edge-cloud computing. IEEE Open J Commun Soc 1:634–645
  51. Tuli S, Casale G, Jennings NR (2021) MCDS: AI augmented workflow scheduling in mobile edge cloud computing systems. IEEE Trans Parallel Distrib Syst 33(11):2794–2807

Acknowledgements

I would like to take this opportunity to acknowledge that there are no individuals or organizations that require acknowledgment for their contributions to this work.

Funding

No funding.

Author information

Contributions

FY contributed to writing–original draft preparation, conceptualization, supervision, and project administration, and LJ played a pivotal role in conceptualization, supervision, and the review and editing of the manuscript. All authors participated in the process and collectively read and approved the final version of the manuscript.

Corresponding author

Correspondence to Fan Yunlong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Yunlong, F., Jie, L. Incentive approaches for cloud computing: challenges and solutions. J. Eng. Appl. Sci. 71, 51 (2024). https://doi.org/10.1186/s44147-024-00389-8
